
AI Bias: Is This the Mirror Big Tech Keeps Avoiding?
It may not come as a surprise to you that during Book Me In — my future-focused virtual book club — we don’t just read books. We decode signals. We make connections. And we ask the uncomfortable questions. In recent discussions sparked by titles like Supremacy by Parmy Olson, one theme keeps surfacing: technology is holding up a mirror. The question is — are we paying attention to what it’s showing us?
One conversation we keep returning to relates to AI and bias. I wrote a Forbes opinion article about this issue last year titled “The gender gap in AI: Unmasking the challenges in advancing gender equality”.
In a recent book club read, we learned about Google Photos’ long-standing decision to disable its “gorilla” tag after its image recognition AI mislabelled photos of Black people as gorillas back in 2015 — and to this day, the underlying problem hasn’t been fixed. Instead of improving the tech, Google quietly removed the label altogether, and as of 2023 it was still switched off.
This isn’t a one-off. It’s part of a larger pattern — where the complexity of fixing bias seems to outweigh the will to do so.
A 2018 MIT Media Lab study (“Gender Shades”) found that commercial facial analysis software from major tech companies misclassified darker-skinned women at error rates of up to 34.7%, compared with less than 1% for lighter-skinned men. These are not small margins. These are massive, measurable failures. And yet, most of these systems were deployed publicly before being fully audited.
Even inside Big Tech, red flags have been raised — and met with resistance. In late 2020 and early 2021, Google famously forced out two prominent AI ethics researchers, Dr. Timnit Gebru and Dr. Margaret Mitchell, after they co-authored a paper warning that large language models — the kind powering today’s AI boom — risk becoming “stochastic parrots”: systems that can mimic human language fluently, but without understanding, accountability, or ethical grounding. The paper also flagged bias, environmental costs, and the dangers of scale without reflection.
It was a mic drop moment — and a mirror moment — that many chose to look away from.
AI doesn’t invent bias; it amplifies the bias it’s trained on. When datasets reflect historical discrimination or lack diversity, algorithms learn to replicate those patterns. What we see in these flawed outputs is not just a tech problem — it’s a societal one. As one Book Me In member reflected: AI is holding up a mirror. The question is, are we bold enough to look?
In 2019, the U.S. Department of Housing and Urban Development (HUD) charged Facebook (now Meta) with allowing advertisers to use its advertising tools to exclude users from seeing housing ads based on race, gender, religion, and other protected characteristics. Facebook’s ad delivery system was also found to be disproportionately showing housing ads to certain demographics even when the advertiser didn’t explicitly target by race or gender — the algorithm was doing the discrimination on its own. In 2022, Meta agreed to change its ad delivery system and adopt a new “variance reduction system” designed to produce more equitable outcomes. The company was also fined and required to undergo regular oversight.
We can’t keep treating these moments as tech glitches. Bias in AI isn’t just a bug; it’s a reflection of who builds the system, how, and with whose values in mind. Coding is not — and never was — value-neutral. Yet, within many engineering cultures there remains a lingering belief that tech is “objective.” It’s not. And the sooner we accept that, the better equipped we’ll be to design systems that reflect more inclusive thinking.
Yes, eliminating bias is incredibly complex for us as humans — we’re all shaped by culture, experience, and unconscious assumptions. But here’s where AI holds promise: when designed with care, code can be a tool for consistency. While we can’t expect to filter out all bias with a line of code, we can program systems to flag it, to learn from it, and even to counteract it — if we’re intentional. The idea that technology is neutral is outdated. The real power lies in building technology that’s bias-aware by design.
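To make “flag it” concrete, here is a deliberately simple sketch in Python. The data, the flag_bias function and the threshold are all hypothetical and don’t come from any vendor’s real system; the point is only to show the kind of check a bias-aware system could run: compare outcome rates across groups and raise a flag when the gap grows too wide.

from collections import defaultdict

def approval_rates(decisions, groups):
    # Share of positive decisions (e.g. "shown the housing ad") per group.
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def flag_bias(decisions, groups, max_gap=0.1):
    # Flag when the highest and lowest group rates differ by more than max_gap
    # (a basic "demographic parity" check; real audits use richer metrics).
    rates = approval_rates(decisions, groups)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > max_gap}

# Hypothetical data: 1 = shown the ad, 0 = not shown, for members of groups A and B.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(flag_bias(decisions, groups))
# {'rates': {'A': 0.8, 'B': 0.2}, 'gap': 0.6, 'flagged': True}

Notice that even in this tiny example, values sneak into the code: someone has to decide what counts as “too wide” a gap. That choice is exactly the kind of non-neutral decision this piece is talking about.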
And just as we’ve learned to call out bias in boardrooms, dinner parties, and group chats, we must bring that same courageous curiosity to our interactions with AI. When we notice skewed responses, unfair assumptions, or problematic patterns, it’s on us — as users — to name them. This is not just about being digitally literate. It’s about being more humanly conscious in an AI age.
We often hear (valid) existential fears about AI — that it could someday end humanity. But maybe, just maybe, we should redirect some of that energy into a more urgent, grounded goal: what if we used AI to end bias instead?
Disabling an AI feature might prevent PR fallout, but it doesn’t solve the core issue. Innovation without inclusion is just inertia with a glossy interface. So the better question is: what if this is our chance to do better? To code for consciousness, not just convenience?
The mirror is here. Whether we look into it — and act — is entirely up to us.
Oh, and if you’d like to have these kinds of future-focused conversations, I invite you to join Book Me In this May, June and July as we explore the future through bold books and even bolder conversations. Your seat at the (virtual) table is waiting. Book “in” here.
Be Bold, Be Curious, Be Disruptive.
X Anna
………………………………………………………………………………………………………………………………………
I’m on a mission to help law and business adapt to the digital age. I invite you to build your Innovation Intelligence (what I call IQ2.0) with me:
@legallyinnovative | annalozynski.com | Anna Lozynski – LinkedIn | inCite Legal Tech