Ex-Googler's Ethical AI Startup Models More Inclusive Approach

Backed by big foundations, ethical AI startup DAIR promises a focus on AI directed by and in service of the many, rather than controlled by just a few giant tech companies. How do its goals align with your enterprise's own AI ethics program?

Jessica Davis, Senior Editor

December 6, 2021

Issues around ethical AI have garnered more attention over the past several years. Tech giants from Facebook to Google to Microsoft have already established and published principles to demonstrate to stakeholders -- customers, employees, and investors -- that they understand the importance of ethical or responsible AI.

So it was a bit of a black eye last year when Timnit Gebru, co-lead of Google’s Ethical AI team, was fired following a dispute with management over a scholarly paper she had coauthored and was scheduled to present at a conference.

Now Gebru has established her own startup focused on ethical AI. The Distributed Artificial Intelligence Research Institute (DAIR) produces interdisciplinary AI research, according to the organization. It is supported by grants from the MacArthur Foundation, the Ford Foundation, Open Society Foundations, and a gift from the Kapor Center.

Ethical AI will be top of mind for progressive CIOs in 2022. Forrester Research predicts that the market for responsible AI solutions will double in 2022.

Enterprise organizations that accelerated their digital transformations, including investments in AI, during the pandemic may now be looking to refine their practices.

Organizations that have already invested in ethical or responsible AI, however, will be the most likely to pursue continuing improvement of their practices, according to Gartner distinguished research VP Whit Andrews. These organizations are likely to have stakeholders who are paying attention to ethical AI issues, whether that means pursuing unbiased data sets or avoiding problematic facial recognition software.
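What might the pursuit of unbiased data sets look like in practice? Here is a minimal, hypothetical sketch of one early check, auditing how records are distributed across demographic groups. The "group" field name and the 2:1 threshold are illustrative assumptions, not industry standards.

```python
# Hypothetical sketch of a data-set balance check. The "group" field and
# the 2:1 imbalance threshold are illustrative assumptions, not standards.
from collections import Counter

def group_imbalance(records, field="group"):
    """Ratio of the largest group's count to the smallest group's count."""
    counts = Counter(r[field] for r in records)
    return max(counts.values()) / min(counts.values())

# Toy data: 900 records from group "a", 100 from group "b".
data = [{"group": "a"}] * 900 + [{"group": "b"}] * 100
ratio = group_imbalance(data)
if ratio > 2.0:
    print(f"Warning: largest group outnumbers smallest {ratio:.0f} to 1")
```

Real audits go much further than counting, but even a check this simple can surface a training set that underrepresents the people a model will affect.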

Should these enterprises look to the tech giants for guidance or should they look to smaller institutes like Gebru’s DAIR?

DAIR’s Mission

Gebru’s new institute was created “to counter Big Tech’s pervasive influence on the research, development, and deployment of AI,” according to the organization’s announcement of its formation.

The foundations that funded DAIR point out the importance of independent voices representing the interests of people and communities, not just the interests of corporations.

“To shape a more just and equitable future where AI benefits all people, we must accelerate independent, public interest research that is free from corporate constraints, and that centers the expertise of people who have been historically excluded from the AI field,” said John Palfrey, president of the MacArthur Foundation, in a prepared statement. “MacArthur is proud to support Dr. Gebru’s bold vision for the DAIR Institute to examine and mitigate AI harms, while expanding the possibilities for AI technologies to create a more inclusive technological future.”

DAIR has identified specific research directions of interest, including developing AI for low-resource settings, language technology serving marginalized communities, coordinated social media activity, data-related work, and robustness testing and documentation.
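To make one of those directions concrete, here is a hypothetical sketch of a bare-bones robustness test: perturb each input slightly and measure how often a model's prediction survives the change. The toy keyword classifier is a stand-in of our own invention, not DAIR's work.

```python
# Hypothetical sketch of a robustness test: perturb inputs with small
# typos and measure how often a classifier's prediction is unchanged.
# The toy keyword "model" below is a stand-in, not DAIR's work.
import random

def drop_one_char(text, rng):
    """Return the text with one randomly chosen character removed."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]

def stability_rate(predict, texts, trials=50, seed=0):
    """Fraction of perturbed inputs whose predicted label is unchanged."""
    rng = random.Random(seed)
    unchanged = total = 0
    for text in texts:
        baseline = predict(text)
        for _ in range(trials):
            unchanged += int(predict(drop_one_char(text, rng)) == baseline)
            total += 1
    return unchanged / total

toy_predict = lambda t: "flagged" if "hate" in t.lower() else "ok"
print(stability_rate(toy_predict, ["I hate spam", "Lovely day"]))
```

A stability rate well below 1.0 on such trivial perturbations is a sign that a model is leaning on brittle surface features.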

“We strongly believe in a bottom-up approach to research, supporting ideas initiated by members of the DAIR community, rather than a purely top-down direction dictated by a few,” according to the institute’s statement of research philosophy.

An Enterprise Approach to Ethical AI

For enterprises looking to build out their own ethical/responsible AI practices, Gartner’s Andrews offers a few recommendations for getting started. First, create an in-house practice that defines what “ethics” or “responsibility” means in your organization.

“I guarantee that the folks here in Western Massachusetts have a different idea of what ethics is than the folks in Japan, or China, or Bali, or India,” he says. “This is a sensitive topic.” Indeed, UNESCO released its own recommendations on the ethical use of AI just last month. That sensitivity is why ethics needs to be carefully defined before it can be implemented.

For instance, Facebook could encourage people to register to vote in an election. Some people would consider that ethical behavior; others would consider it unethical.

To avoid this kind of conflict, organizations should spell out exactly what they consider ethical or unethical.
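One lightweight way to make those definitions actionable is to record them as data that project reviews can consult consistently. A minimal sketch follows; every bucket name and policy entry in it is an invented example, not a recommended rule.

```python
# Hypothetical sketch: an organization's ethics definitions encoded as
# data so that project reviews can reference them consistently. Every
# entry below is an invented example, not a recommended policy.
AI_ETHICS_POLICY = {
    "permitted": ["aggregate demand forecasting", "internal spam filtering"],
    "prohibited": ["facial recognition of customers"],
    "requires_review": ["any model trained on employee data"],
}

def classify_use_case(description):
    """Return the first policy bucket whose entry appears in the description."""
    for bucket, entries in AI_ETHICS_POLICY.items():
        if any(entry in description.lower() for entry in entries):
            return bucket
    return "requires_review"  # default unlisted uses to human review

print(classify_use_case("Pilot: facial recognition of customers in stores"))
# -> "prohibited"
```

Defaulting unlisted uses to human review keeps the written policy, rather than ad hoc judgment calls, as the arbiter.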

Next, Andrews recommends that organizations introduce their chief ethics officer to their chief data officer and their CIO.

“Have you established a shared creed for them to follow?” asks Andrews. “If not, the organization’s executives need to sit down and create an ethics creed.”

About the Author

Jessica Davis

Senior Editor

Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent her career covering the intersection of business and technology. Follow her on Twitter: @jessicadavis.
