
Google employee group urges Congress to strengthen whistleblower protections for AI researchers

Image: Google AI logo. Credit: Khari Johnson / VentureBeat



Google’s decision to fire its AI ethics leaders is a matter of “urgent public concern” that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That’s according to a letter Google employees published today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively.

Firing Gebru, one of the best-known Black female AI researchers in the world and one of the few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have “shattered” Google’s Black talent pipeline and signaled the collapse of AI ethics research in corporate environments.

“We must stand up together now, or the precedent we set for the field — for the integrity of our own research and for our ability to check the power of big tech — bodes a grim future for us all,” reads the letter published by the group Google Walkout for Change. “Researchers and other tech workers need protections which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.”

Google Walkout for Change was created in 2018, and organizers said the group’s global walkout that year involved 20,000 Googlers in 50 cities around the world.


In a tweet days before Google fired her, Gebru asked whether anyone was working on regulation to protect AI ethics whistleblowers. In the days after being fired, she voiced support for unionization as a means of protecting AI researchers. The Alphabet Workers Union cites Gebru’s dismissal among the reasons it was formed in January.

The letter also urges academic conferences to refuse to review papers subjected to editing by corporate lawyers and to begin declining sponsorship from businesses that retaliate against ethics researchers. “Too many institutions of higher learning are inextricably tied to Google funding (along with other Big Tech companies), with many faculty having joint appointments with Google,” the letter reads.

The letter addressed to state and national lawmakers cites an article VentureBeat published two weeks after Google fired Gebru. That piece looks at potential policy outcomes of the firing, including unionization and changes to whistleblower protection laws. The analysis — which drew on conversations with ethics, legal, and policy experts — cites UC Berkeley Center for Law and Technology co-director Sonia Katyal, who analyzed whistleblower protection laws in 2019 in the context of AI. In an interview with VentureBeat late last year, Katyal called these protections “totally insufficient.”

“What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential,” Katyal told VentureBeat.

This is the second time in as many weeks groups have urged Congress to extend protections to AI workers wanting to alert the world to harmful applications. Last week, the National Security Commission on Artificial Intelligence, chaired by former Google CEO Eric Schmidt, sent recommendations to President Biden and Congress that include a call for government protections for workers who feel compelled to raise concerns about “irresponsible AI development.”

VentureBeat spoke with two sources familiar with Google AI ethics and policy matters who said they want to see stronger whistleblower protection for AI researchers. One person familiar with the matter said that at Google and other tech companies, people sometimes know something is broken but won’t fix it because they either don’t want to or don’t know how to.

“They’re stuck in this weird place between making money and making the world more equitable, and sometimes that inherent tension is very difficult to resolve,” the person, who spoke on condition of anonymity, told VentureBeat. “But I believe that they should resolve it because if you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people.”

After Gebru was fired, that source described a sense among Google employees from underrepresented groups that if they pushed the envelope too far they might be perceived as hostile and people would start filing complaints to push them out. She said this creates a feeling of “genuine unsafety” in the workplace and a “deep sense of fear.”

She also told VentureBeat that when it comes to technology with the power to shape human lives, we need to have people throughout the design process with the authority to overturn potentially harmful decisions and ensure models learn from mistakes.

“Without that, we run the risk of … allowing algorithms that we don’t understand to literally shape our ability to be human, and that inherently isn’t fair,” she said.

The letter also criticizes Google leadership for “harassing and intimidating” not only Gebru and Mitchell, but other Ethical AI team members as well. Ethical AI team members were reportedly told to remove their names from a paper under review at the time Gebru was fired. The final copy of that paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” was published this week at the Fairness, Accountability, and Transparency (FAccT) conference and does not list any authors from Google. But a copy of the paper VentureBeat obtained lists Mitchell as a coauthor, as well as three other members of the Ethical AI team, each with extensive expertise in biased language models and human speech. Google AI chief Jeff Dean questioned the veracity of the research represented in that paper in an email to Google Research. Last week, FAccT organizers told VentureBeat the organization has suspended sponsorship from Google.

The letter published today calls on academics and policymakers to take action and follows changes to company diversity policy and the reorganization of 10 teams within Google Research. These include Ethical AI, now led by Google VP Marian Croak, who will report directly to AI chief Jeff Dean. As part of the change, Google will double staff devoted to employee retention and enact policy to engage HR specialists when certain employee exits are deemed sensitive. Google CEO Sundar Pichai mentioned better de-escalation strategies as part of the solution in a companywide memo. But in an interview with VentureBeat, Gebru called his memo “dehumanizing” and an attempt to characterize her as an angry Black woman.

A Google spokesperson told VentureBeat in an email following the company’s reorganization last month that diversity policy changes were undertaken based on the needs of the organization, not in response to any particular team at Google Research.

In the past year or so, Google’s Ethical AI team has explored a range of topics, including the need for a culture change in machine learning and an internal algorithm auditing framework, algorithmic fairness issues specific to India, the application of critical race theory and sociology, and the perils of scale.

The past weeks and months have seen a rash of reporting about the poor experiences of Black people and women at Google, as well as concerns about corporate influence over AI ethics research. Reuters reported in December 2020 that AI researchers at Google were told to strike a positive tone when referring to “sensitive” topics. Last week, Reuters reported that Google plans to reform its approach to research review, and detailed additional instances of interference in AI research. According to an email obtained by Reuters, the coauthor of another paper about large language models referred to edits made by Google’s legal department as “deeply insidious.”

In recent days, the Washington Post detailed how Google treats candidates from historically Black colleges and universities in a separate and unequal fashion, and NBC News reported that Google HR told employees who experienced racism or sexism to “assume good intent” and encouraged them to take mental health leave instead of addressing the underlying issues.

Instances of gender discrimination and toxic work environments for women and people of color have been reported at other major tech companies, including Amazon, Dropbox, Facebook, Microsoft, and Pinterest. Last month, VentureBeat reported that dozens of current and former Dropbox employees, particularly women of color, reported witnessing or experiencing gender discrimination at their company. Former Pinterest employee Ifeoma Ozoma, who previously spoke with VentureBeat about whistleblower protections, helped draft the proposed Silenced No More Act in California last month. If passed, that law would allow employees to report discrimination even if they have signed a nondisclosure agreement.

After Gebru was fired in December 2020, thousands of Google employees signed a Google Walkout letter protesting the way she was treated and what they termed “unprecedented research censorship.” That letter also called for a public inquiry into Gebru’s termination for the sake of Google users and employees. Members of Congress who have proposed regulations like the Algorithmic Accountability Act, including Rep. Yvette Clarke (D-NY) and Sen. Cory Booker (D-NJ), also sent Google CEO Sundar Pichai an email raising concerns over the way Gebru was fired, Google’s research integrity, and steps the company is taking to mitigate bias in large language models.

About a week after Gebru was fired, members of the Ethical AI team sent their own letter to company leadership. According to a copy VentureBeat obtained, Ethical AI team members demanded Gebru be reinstated and Samy Bengio remain the direct report manager for the Ethical AI team. They also stated that reorganization is sometimes used to “[shunt] workers who’ve engaged in advocacy and organizing into new roles and managerial relationships.” The letter described Gebru’s termination as having a demoralizing effect on the Ethical AI team and outlined a number of steps needed to reestablish trust. That document cosigns letters of support for Gebru from Google’s Black Researchers group and the DEI Working Group. A Google spokesperson told VentureBeat outside counsel conducted an investigation but declined to share details. The Ethical AI letter also demands Google maintain and strengthen the department, guarantee the integrity of independent research, and clarify its sensitive review process by the end of Q1 2021. And the letter calls for a public statement that guarantees research integrity at Google, including in areas tied to the company’s business interests, such as large language models and datasets like JFT-300M, which has over a billion labeled images.

A Google spokesperson said Croak will oversee the work of about 100 AI researchers going forward. A source familiar with the matter told VentureBeat a reorganization that brings Google’s numerous AI fairness efforts under a single leader makes sense and had been discussed before Gebru was fired. The question, this person said, is whether Google will fund fairness testing and analysis.

“Knowing what these communities need consistently becomes hard when these populations aren’t necessarily going to make the company a bunch of money,” a person familiar with the matter told VentureBeat. “So yeah, you can put us all under the same team, but where’s the money at? Are you going to give a bunch of headcount and jobs so that people can actually go do this work inside of products? Because these teams are already overtaxed — like these teams are really, really small in comparison to the products.”

Google walkout organizers Meredith Whittaker and Claire Stapleton claimed they also experienced retaliation before leaving the company, as did employees who attempted to unionize, many of whom identify as queer. Shortly before Gebru was fired, the National Labor Relations Board filed a complaint against Google that accuses the company of retaliating against employees and illegally spying on them.

The AI Index, an annual accounting of performance advances and AI’s impact on startups, business, and government policy, was released last week. The report found that the United States differs from other countries in its large quantity of industry-backed research and called for more fairness benchmarks. The report also cited research finding only 3% of AI Ph.D. graduates in the U.S. are Black and 18% are women. The index noted that Congress is talking about AI more than ever and that AI ethics incidents — including Google firing Gebru — were among the most popular AI news stories of 2020.

VentureBeat requested an interview with Google VP Marian Croak, but a Google spokesperson declined on her behalf.

In a related matter, VentureBeat analysis about the “fight for the soul of machine learning” was cited in a paper published this week at FAccT about power, exclusion, and AI ethics education.

Updated 11:40 a.m. Pacific, March 9 to mention the National Security Commission on Artificial Intelligence’s recommendation to Congress and to note that Timnit Gebru spoke about whistleblower protections before Google fired her and about unionization after she was fired as ways to protect AI researchers from retaliation.
