The $354 Million Question Nobody Saw Coming

How ghosting candidates just became the most expensive mistake in HR history

 

A 40-something professional, impeccably dressed, resume polished to a gleam, walks into what they think is a job interview. Except there's no one there. Just a screen. A very judgmental screen that's about to ghost them harder than your ex after seeing your Instagram story.

Welcome to 2025, where getting hired has become Black Mirror with a dash of The Circle thrown in for good measure. The recruitment process now resembles a high-stakes reality show where contestants are eliminated not by public vote but by algorithms fed a steady diet of historical hiring data, unconscious bias, and whatever passes for "cultural fit" in Silicon Valley boardrooms.

The robots are about to get sued. Hard. Like, class-action-lawsuit-that-could-reshape-the-entire-tech-industry hard.

On May 16, 2025, Judge Rita Lin certified Mobley v. Workday as a collective action, potentially affecting hundreds of millions of job seekers. The message is clear: AI hiring tools need to explain themselves, and companies can't hide behind "the algorithm did it" anymore.


MEET THE PROTAGONIST: DEREK MOBLEY

Derek Mobley isn't your average plaintiff filing a lawsuit from his kitchen table. He's the kind of person who probably has a LinkedIn profile that makes you question your life choices, complete with professional headshots that cost more than most people's monthly grocery budget and a list of accomplishments that reads like a masterclass in corporate climbing.

Derek's story reads like a modern-day David and Goliath tale, if David had a law degree and Goliath was a collection of servers humming quietly in a data centre somewhere in Northern California. He approached the job market with the systematic precision of someone who understood that finding employment in today's economy requires the strategic thinking of a chess master, the emotional resilience of a marathon runner, and the technical savvy of someone who grew up with smartphones.

His methodology was flawless: targeted applications, customised cover letters, perfectly formatted resumes that slipped through applicant tracking systems like thread through a needle.

His stats? 100+ job applications. 100+ rejections. 0% success rate.

That's not bad luck, darling. That's not a competitive market. That's not even a recession-induced hiring freeze. That's algorithmic assassination, delivered with the cold efficiency of a machine that doesn't understand the difference between correlation and causation, between pattern recognition and human prejudice.


Modern AI hiring platforms are genuinely impressive pieces of technology. They use natural language processing to analyse resumes, computer vision to evaluate video interviews, and predictive analytics to score candidates across multiple dimensions.

The problem isn't the sophistication; it's that these systems learned from decades of biased hiring data. When you train AI on historical patterns that excluded certain groups, you get bias amplification, not objectivity.
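To see how a system can inherit bias without anyone programming it in, here's a toy sketch (all data made up): a naive scoring "model" fit to skewed historical outcomes via a proxy feature like ZIP code simply reproduces the skew.

```python
from collections import Counter

# Hypothetical training set: (zip_code, hired). Historically, candidates
# from ZIP "10001" were hired far more often, for reasons unrelated to merit.
history = [("10001", True)] * 80 + [("10001", False)] * 20 \
        + [("20002", True)] * 20 + [("20002", False)] * 80

# A naive "model": score each candidate by the historical hire rate of their ZIP.
totals, hires = Counter(), Counter()
for zip_code, hired in history:
    totals[zip_code] += 1
    hires[zip_code] += hired
score = {z: hires[z] / totals[z] for z in totals}

# The model "predicts" 0.8 for one ZIP and 0.2 for the other: it has
# encoded the historical pattern, not candidate quality.
print(score)  # {'10001': 0.8, '20002': 0.2}
```

A real hiring model does the same thing in higher dimensions: any feature correlated with a historically disadvantaged group becomes a proxy for that group.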

But here's the encouraging part: the technology to fix this already exists. Fairness-aware machine learning, explainable AI, and algorithmic auditing tools are rapidly evolving. The industry just needs the right incentives to implement them.
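The "algorithmic auditing" mentioned above can start very simply: compare selection rates across groups. A minimal sketch of the EEOC's "four-fifths" (80%) guideline, using invented numbers:

```python
from collections import Counter

def adverse_impact_ratio(outcomes):
    """Compute per-group selection rates and the adverse-impact ratio.

    outcomes: iterable of (group, selected) pairs, selected being True/False.
    Returns (rates, ratio), where ratio = lowest rate / highest rate.
    Under the EEOC four-fifths guideline, a ratio below 0.8 flags
    potential disparate impact worth investigating.
    """
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            hits[group] += 1
    rates = {g: hits[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit sample: group A selected 40/100, group B 20/100.
sample = [("A", True)] * 40 + [("A", False)] * 60 \
       + [("B", True)] * 20 + [("B", False)] * 80
rates, ratio = adverse_impact_ratio(sample)
print(rates)  # {'A': 0.4, 'B': 0.2}
print(ratio)  # 0.5 -> below 0.8, flags potential disparate impact
```

Production auditing tools go far beyond this one metric, but the point stands: the maths for catching disparate outcomes is neither new nor exotic.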

While U.S. courts are playing catch-up, other regions are already building the guardrails:

Europe's AI Act (2024) puts hiring AI in the "high-risk" category, requiring bias testing, transparency, and human oversight. It's like having quality control for algorithms.

The UK is taking a lighter touch, updating existing discrimination laws rather than creating new ones. Very British approach: "Let's see how this goes, shall we?"

Canada, Australia, Japan, and Singapore are each developing their own frameworks, creating a patchwork of regulations that multinational companies will need to navigate.

The good news? There's growing consensus that AI hiring tools need better accountability mechanisms.

THE BOTTOM LINE

AI hiring tools aren't going anywhere, but they're about to get a lot smarter about fairness and transparency. The Mobley v. Workday case isn't a speed bump; it's exactly the kind of course correction the industry needs.

The future of hiring will likely involve more sophisticated human-AI collaboration, better algorithmic accountability, and systems that are both efficient and equitable. Companies that embrace these changes early will have a competitive advantage in attracting top talent and avoiding legal headaches.

Derek Mobley's 100 rejections might have just opened the door to fairer hiring for millions of people. The case proceeds through discovery in 2025, where we'll likely see more details about how these AI systems actually work.

Meanwhile, expect to see:

  • More companies proactively auditing their hiring AI

  • New startups focused on fairness-aware recruiting tools

  • Increased investment in explainable AI research

  • Better industry standards for algorithmic accountability

The algorithm is about to learn some better manners.
