Taking On Tech is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this inaugural report, For(bes) The Culture kicks things off with Dr. Timnit Gebru, a former researcher and co-lead of Google’s Ethical AI team.
When Gebru was forced out of Google after refusing to retract a research paper that had already cleared Google’s internal review process, a conversation about the tech industry’s inherent diversity problem resurfaced.
The paper raised concerns about algorithmic bias in machine learning and the latent perils that AI presents for marginalized communities. Around 1,500 Google employees signed a letter in protest, calling for accountability and answers over a firing they considered unethical.
Consumers are now demanding similar transparency from Instagram. The social media platform came under fire recently after discreetly rolling out its “Sensitive Content Control.” The new feature was inconspicuously placed on users’ accounts as a default setting, and many are reporting an inability to remove it.
Instagram head Adam Mosseri has publicly lauded the tool as an opportunity for users to better customize their browsing experience. The feature was designed as a safety measure that limits content on the Explore page that Instagram deems “unsafe.” Users, however, are confronting Mosseri with concerns over the censorship of marginalized voices. Activists, educators, and tech experts are making the case that Instagram’s arbitrary definition of what is “safe” disproportionately impacts members of vulnerable communities. When their content is blocked from the Explore feed, their ability to expand their reach, grow their audience, build support systems, or benefit from Instagram’s digital economy is limited.
How, then, does an algorithm consistently differentiate between harmful content and the lived experiences of those being harmed? In short, it doesn’t. AI hasn’t reached the level of sophistication necessary for tackling social inequality on digital platforms. For(bes) The Culture spoke to Dr. Gebru about the ongoing issue.
For(bes) The Culture: First and foremost, can you talk about AI models and how they differentiate hate speech from standard expression?
Dr. Timnit Gebru: There are a lot of issues there because they often don’t. Zeerak Waseem is a great resource for this; his research focuses on hate speech specifically. I know a bit about it, but I mostly work with large language models.
For(bes) The Culture: What does the training of language models entail?
Dr. Gebru: They’re trained on large amounts of text. If they want to do hate speech detection, they have text labeled by people as hate speech versus not, and then they train the model to discriminate between the two. If it’s trained on texts where labelers say text is not hate speech when it is, or label text as hate speech when it’s not, the model will learn to do exactly that. Hate speech can be so nuanced and culture-specific that a lot of these models don’t detect it. Forget the models; the content moderators and social media companies don’t capture it at all.
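To make the pipeline she describes concrete, here is a minimal sketch in Python. The toy sentences, labels, and the simple TF-IDF plus logistic regression model are illustrative assumptions, not anything used at Google; the point is only that the classifier learns whatever the labelers decided, including their mistakes.

```python
# Minimal sketch of the labeled-data pipeline described above:
# human-labeled examples train a classifier to separate text labeled
# "hate speech" from text labeled "not hate speech."
# The tiny dataset and linear model are illustrative assumptions;
# production systems use far larger corpora and transformer models,
# and inherit whatever biases the labels contain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = labeled hate speech, 0 = not).
texts = [
    "I hate people like you, get out of this country",
    "What a great day at the park with friends",
    "You people are subhuman and deserve nothing",
    "Congratulations on the new job!",
]
labels = [1, 0, 1, 0]

# The model only reproduces the labelers' judgments: mislabeled or
# culturally nuanced examples are learned as-is, not corrected.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["Have a wonderful weekend"]))      # expected: [0]
print(classifier.predict(["People like you are subhuman"]))  # expected: [1]
```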
For(bes) The Culture: How fast are these language models trained?
Dr. Gebru: Many of these models are trained on static data. For instance, after the BLM movement began, Wikipedia articles on BLM were rapidly updated. But they didn’t make it into the training of some of these models. They were lagging behind, even after society moved forward with related discussions and topics. It’s the same with what’s considered harassment versus not. That definition changed a lot based on conversations about #MeToo. Nuance is lost when technologists don’t consider social movements as they’re training these models.
For(bes) The Culture: Can you think of a common scenario in which a given model could proactively target a protected class of people?
Dr. Gebru: In our paper called “On the Dangers of Stochastic Parrots,” the paper that supposedly got me fired from Google, we cite works by members of the Ethical AI team explaining how some of these models detect hate speech or what they call ‘toxicity.’ For example, comments on YouTube or in an article can be flagged as toxic. The findings showed that they would flag a lot of nontoxic sentences that have specific names, mentions of disability, or even being queer. An example would be: “I am a man” versus “I am a gay man.” That would raise the toxicity score when it shouldn’t. “I am a man” versus “I’m a mentally ill man” would also raise the toxicity score. This is problematic and the issue remains unresolved.
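The bias Dr. Gebru describes can be surfaced with a simple counterfactual check: keep the sentence fixed, swap only the identity term, and compare the scores. The sketch below is a hypothetical Python harness; `toxicity_score` is a stand-in scorer written to exhibit exactly this failure mode, not a real product API.

```python
# Counterfactual identity-term check: swap only the identity term in an
# otherwise identical sentence and compare toxicity scores against a
# neutral baseline. A well-behaved classifier should show near-zero gaps.
from typing import Callable

TEMPLATE = "I am a {} person"
IDENTITY_TERMS = ["tall", "gay", "queer", "mentally ill", "blind"]

def audit_identity_terms(score_fn: Callable[[str], float],
                         baseline_term: str = "tall") -> None:
    """Print the score gap between each identity term and a neutral baseline."""
    baseline = score_fn(TEMPLATE.format(baseline_term))
    for term in IDENTITY_TERMS:
        gap = score_fn(TEMPLATE.format(term)) - baseline
        print(f"{term!r:>15}: score gap vs baseline = {gap:+.2f}")

# Hypothetical stand-in scorer that (wrongly) treats identity terms
# themselves as toxic -- exactly the failure mode described above.
def toxicity_score(text: str) -> float:
    flagged = {"gay", "queer", "mentally ill", "blind"}
    return 0.9 if any(term in text for term in flagged) else 0.1

audit_identity_terms(toxicity_score)
```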
For(bes) The Culture: Are there any particular pitfalls when attempting to capture diversity in training data?
Dr. Gebru: When they’re training models, the data they use might not have any of the voices we mentioned. Let’s say the people writing about BLM or harassment only use Twitter data. People affiliated with BLM may not participate on Twitter because they’re harassed there. One of our co-authors, Mark Diaz, did research on how people talk about ageism. He found that many elderly people prefer to write blog posts, for instance, or use other mechanisms, and are not on social media.
For(bes) The Culture: Do the people building these models have an additional step they can take to measure accuracy?
Dr. Gebru: When it comes to the technologists who create these models, it’s not really something they consider. Or they might consider it, but then think, “Ah, whatever. It’s fine. We’ll just do it.” Then journalists are harassed on Twitter, Instagram, or Facebook and the AI those platforms have is not flagging it. People know the issues. The problem is that the technologists are not talking to the people who know the issues when they’re creating these models.
For(bes) The Culture: What happens when an employee of one of these companies is harassed? Does it become a greater priority?
Dr. Gebru: One of my former teammates, who was the last person I hired before I got fired, is of Moroccan origin. He was being threatened by people acting on behalf of the Moroccan government on Twitter. One of them posted a picture of a machine gun pointing at him and was like, “This is what needs to happen to you, blah, blah, blah.” It was reported. Did it get removed? No, it didn’t. A lot of these companies even have infiltrators. They’re not thinking about us because we’re not their priority. We’re just collateral damage until they’re forced to do something about it, and so far they haven’t. Nobody has held them accountable.
For(bes) The Culture: What type of legislation would you like to see passed to help mitigate this?
Dr. Gebru: Maybe stronger whistleblower protection laws, anti-discrimination laws, and labor and union laws. All of these tech companies are union-busting. They have an entire playbook. In order for workers to do anything internally, they need to have power. Right now they don’t. The way you build power is by organizing in solidarity. They know that by keeping us isolated, they take away our power and we can’t make an impact. Pay attention to what’s happening and support worker-led movements in the tech industry.
For(bes) The Culture: Have you identified any particular barriers to passing such legislation?
Dr. Gebru: Legislators are controlled by companies. The companies pay lobbyists who have the time and resources to get laws passed that benefit companies. These companies also have money to pay armies of lawyers who harass whistleblowers day and night. Journalists have really been doing a great job of trying to hold them accountable, but if all parties don’t play their part, I don’t see how things will change. I think there needs to be a partnership between all stakeholders.
For(bes) The Culture: After your departure from Google, the company promised to amend its diversity and retention policy. What were your thoughts when you heard the news?
Dr. Gebru: Oh yes, that they were going to hire more staff to do retention or whatever. That same afternoon they fired my co-lead, Margaret Mitchell. They should have hired their retention staff and retained her. Instead they started a smear campaign claiming she “exfiltrated confidential business-sensitive documents.” They were making her sound like a spy or something. They always make commitments and never keep them, just like during the Black Lives Matter protests back in the summer of 2020. There was no real effort to make any real impact. They even fired April Curley, a prominent recruiter who single-handedly brought in HBCU students as engineers at Google. She fought day in and day out for them. Then they fired me. I don’t want to hear them make any more commitments because it’s much more harmful.
For(bes) The Culture: Can you elaborate on how those broken commitments exacerbate harm?
Dr. Gebru: Each time they make these commitments, white supremacists see them, think we’re getting handouts, and come after us. It’s actually much worse than them not saying anything at all or just telling us what they actually plan to do.
We call that diversity branding or ‘diversity theater.’ It does more harm than good because it brings us a lot more harassment. Chanda Prescod-Weinstein, a professor and maybe the only Black woman who is a theoretical physicist, talks about her monthly security bill because of a publication called ‘Campus Reform.’ They think there’s this liberal agenda and they’re out to get everyone. Death threats, arrests, all of that. We get that because of all this performative talk, which is the complete opposite of their actions. I just want them to eliminate the position of the Chief Diversity Officer. They just rubber-stamp all the horrible things the company’s doing.
For(bes) The Culture: Would you say this problem also exists outside of Google?
Dr. Gebru: Sara Ahmed talks and writes a lot about the diversity industry and how it’s really created to protect companies from litigation. That’s why it should just be eliminated. That’s how I feel about their centralized diversity efforts at Google. They block anything grassroots.
For(bes) The Culture: How has your departure from Google influenced your plans for the future? What’s next for you?
Dr. Gebru: I want to take some real time off. I’m barely working right now, but I am. I want to be on the beach doing nothing—disconnect. I can’t imagine working at another big tech company right now. I can’t say never, but at the moment I just can’t. My friend, Lynn Champion, who is currently a PhD student at Stanford, is working on a research institute led by Black women whose projects will be informed by their own needs. I’m super excited about the prospect of that happening. I am just so sick of fighting with institutions. Academic institutions are no better. They’re huge gatekeepers and super racist. At least at Google I felt like I was able to have a team, hire great people, et cetera. The gatekeeping in academia is too much. I’m more interested in doing something a bit more from the ground up. I’d like to mentor other Black people, and help bring their voices to the surface. That’s still something I’m passionate about. Even helping strengthen legislation to protect whistleblowers. Going to another tech company and fighting with their institution is just not something that I have energy for at the moment.
For(bes) The Culture: Do you foresee big tech being held more accountable?
Dr. Gebru: We just don’t have any laws on our side right now. We need to have laws they’re actually afraid of. There are bills in the works on algorithmic oversight, et cetera. I think that kind of regulation needs to exist, but lobbyists for corporations shouldn’t be involved in drafting it. Impacted communities should definitely have a say. I want to see more accurate framing of what it’s really like at these companies. Tech is a conservative force and we say that in our paper. As social movements become larger on these platforms, the tech industry will keep trying to set them back.
The conversation has been edited and condensed for clarity.