Today, the U.S. Department of Housing and Urban Development announced it is suing Facebook for violating the Fair Housing Act. It accuses the tech giant of “encouraging, enabling, and causing housing discrimination” by allowing advertisers to restrict who could see their housing ads based on race, religion, national origin, and other protected characteristics. Advertisers could effectively redline by excluding residents of certain zip codes from seeing their ads. They could also filter out people who were born outside the US, were non-Christian, were interested in Hispanic culture, or were even interested in “deaf culture.”

With all of the ethical scandals that have plagued Facebook over the last 12 months, “deaf culture” seems like an apt description of the company itself.

How did this happen? Well, in the era of AI and machine learning, companies have an unprecedented ability to target specific groups of people with their products and services. Often, this makes perfect sense, as today’s customers expect a higher level of personalization. However, targeting specific people inherently means excluding others. And when it comes to housing, excluding people based on protected characteristics is discriminatory and illegal.
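To make the mechanics concrete, here is a minimal, hypothetical sketch of attribute-based audience targeting. It is not Facebook’s ads API; the profile fields and filter logic are invented for illustration. The point is structural: every rule that includes one group simultaneously excludes everyone who doesn’t match.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    zip_code: str
    interests: set  # inferred or self-reported interest categories

def build_audience(users, include_zips, exclude_interests):
    """Hypothetical targeting filter: choosing who sees an ad
    necessarily deselects everyone else."""
    audience = []
    for user in users:
        if user.zip_code not in include_zips:
            continue  # excluded by geography (digital redlining risk)
        if user.interests & exclude_interests:
            continue  # excluded by an inferred interest, which can act
                      # as a proxy for a protected class
        audience.append(user)
    return audience

users = [
    UserProfile("85001", {"home improvement"}),
    UserProfile("85003", {"Hispanic culture", "real estate"}),
]
audience = build_audience(users,
                          include_zips={"85001", "85003"},
                          exclude_interests={"Hispanic culture"})
print(len(audience))  # 1 -- the second user never sees the housing ad
```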

I’ve written on the ethics of AI and how to prevent harmful discrimination in models. My report talks about the unintentional bias that models can learn from incomplete or inaccurate data. The difference in Facebook’s case is that it derived these attributes so advertisers could explicitly use them to exclude people! Facebook needs to follow the lead of Salesforce, which hired a chief ethical and humane use officer last year, or Google, which has a board focused on the ethics of AI.
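For teams that want a concrete starting point, one common way to surface this kind of disparate impact is to compare selection rates across groups against the “four-fifths” rule of thumb. The sketch below is illustrative only; the group labels, data, and threshold are assumptions, not part of HUD’s charge or any vendor’s actual tooling.

```python
def selection_rates(records):
    """Share of each group that was shown (or approved for) the housing ad."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 (the four-fifths rule of thumb) are a common red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (group label, whether the person saw the ad)
records = ([("group_a", True)] * 80 + [("group_a", False)] * 20
           + [("group_b", True)] * 30 + [("group_b", False)] * 70)
rates = selection_rates(records)
print(rates)                      # {'group_a': 0.8, 'group_b': 0.3}
print(disparate_impact_ratio(rates))  # 0.375 -- well below 0.8
```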

Later this year, I plan to write a report about accountability in AI: who is to blame when AI goes wrong. As we saw today, Facebook, the creator of the algorithm, is taking the biggest hit, rather than the advertisers who used it. It’s similar to what’s been happening with Boeing, which is bearing the brunt of the backlash over the recent crashes instead of the airlines that flew the planes.

Morally and legally, this is still uncharted territory, and it is best illustrated by self-driving cars. Last year, a self-driving car struck and killed Elaine Herzberg in Tempe, Arizona. Who was at fault? Uber, which developed the self-driving technology? Volvo, which manufactured the car? Or the backup “driver” sitting in the vehicle? In this case, Arizona prosecutors found that Uber was not criminally liable. But companies and policymakers have a long way to go before the ethical questions around accountability are settled.

Keep an eye out for my report later this year.