
What Kamala Harris’ record says about major AI policy issues

Senator Kamala Harris (D-CA) speaks to her supporters during her presidential campaign launch rally in Frank H. Ogawa Plaza on January 27, 2019, in Oakland, California.
Image Credit: Mason Trinca/Getty Images



Democratic presidential candidate Joe Biden chose Sen. Kamala Harris (D-CA) to be his vice presidential running mate today. Born in Oakland, California, and raised in Berkeley, Harris is the first African American woman and first Asian American woman to be chosen as a U.S. vice presidential candidate on a major party ticket. Though a vice presidential pick is sometimes considered largely ceremonial, this one carries particular weight: if elected, Biden would become the oldest president in U.S. history, at 78 years old.

Pundits will meticulously analyze Harris’ stance on a range of issues in the days leading up to the Democratic National Convention and the weeks leading up to Election Day. A former prosecutor, Harris is best known as a member of the Senate Judiciary Committee, but she has also led cybersecurity proposals in Congress and raised tech policy issues in the San Francisco Bay Area. Here we look back at how Kamala Harris, as a U.S. senator and presidential candidate, has responded to issues at the intersection of AI and policy.

Facial recognition

In fall 2018, Harris joined other members of the U.S. Senate in sending a series of letters to federal agencies urging them to address algorithmic bias based on race, gender, or other characteristics.

One letter asked the Equal Employment Opportunity Commission (EEOC) how it investigates claims of algorithmic bias in hiring practices and whether the agency considers use of facial recognition a violation of existing civil rights and workplace antidiscrimination law.


The FTC letter imagines a scenario in which a Black woman is falsely arrested due to the use of facial recognition software. In June, we saw some of the first known cases of Black men being falsely arrested after facial recognition misidentified them.

The FBI letter demanded to know whether the agency had acted on a 2016 Government Accountability Office (GAO) report recommending it test its facial recognition tech and take other steps to ensure its accuracy. The facial recognition system in question is used by the FBI as well as state and local officials. Months later, the FBI’s failure to act on the GAO’s recommendations became a main subject of discussion in a House Oversight and Reform Committee hearing.

Each of the letters cites an ACLU study in which facial recognition software misidentified several members of Congress as criminals, as well as the work of Joy Buolamwini and the Gender Shades project, which found that facial recognition systems from companies like IBM and Microsoft performed better on white men than on women with dark skin.

As a presidential candidate, Harris introduced a criminal justice plan that involved working with civil rights groups and law enforcement to ensure facial recognition and other technology does not advance racial bias. The plan included using federal funding to encourage state and local officials to do the same. Candidates like Bernie Sanders supported an outright ban on facial recognition use by federal law enforcement, including the FBI.

Harris’ stance on issues like predictive policing and facial recognition may be especially important amid fervent calls for police reform and the end of institutional racism following the death of George Floyd. Her position on these issues may also be viewed in the context of her record as California Attorney General, San Francisco District Attorney, and a public prosecutor in Alameda County.

Federal AI policy

Harris was one of four U.S. senators behind the AI in Government Act, a bill first introduced in 2018 and reintroduced last year that would help develop a more cohesive federal AI policy through the General Services Administration (GSA).

The AI in Government Act of 2019 would direct the heads of each federal agency to recommend ways to remove barriers to AI adoption, as well as “best practices for identifying, assessing, and mitigating any discriminatory impact or bias on the basis of any classification protected under federal nondiscrimination laws, or any unintended consequence of the use of artificial intelligence by the federal government.”

It would also create a Center of Excellence within the General Services Administration to assist federal agencies in acquiring AI services and supply technical expertise. The Center of Excellence would advise the White House Office of Science and Technology Policy, consult with the Pentagon and the National Science Foundation, and help develop policy related to the use of AI by federal agencies.

A unified AI policy that accounts for the nation’s research and development needs matters for economic competitiveness, but it is also increasingly tied to the strategies of national governments and militaries. The U.S. and China, for instance, have each set goals of maintaining a kind of AI supremacy over other nations.

In the end, the AI in Government Act, like a range of bills introduced in recent years to regulate AI and inform a robust U.S. strategy, has yet to come up for a vote, though it notably attracted bipartisan support. Harris wasn’t the first lawmaker to introduce AI regulation, nor is she the lawmaker most closely associated with facial recognition regulation today. But her public comments and support for legislative safeguards indicate her concern with AI’s potential to perpetuate bias.

“When we look also at these emerging fields and look at issues like AI — artificial intelligence and machine learning — there is a real need to be very concerned about how [racial bias is] being built into it,” Harris said in a video posted by her Senate office in April 2019. “It’s a real issue and it’s happening in real time. And the thing about racial bias in technology is that unlike the racial bias that you can [identify] pretty easily — all of us can detect when you get stopped in a department store or while you’re driving — the bias that is built into technology will not be very easy to detect. And so machine learning is literally that the machine is learning, and it’s learning what it’s being fed, which is going to be a function of who’s feeding it and what it’s being fed.”

The Biden-Harris 2020 platform may differ from Kamala Harris’ stance as a Senator or as a presidential candidate. Nonetheless, Harris has introduced early legislation aimed at creating a cohesive AI strategy for the federal government and has openly criticized tech with the potential, and proven likelihood, to perpetuate bias.

Kamala Harris can be called a first on a major party ticket in a number of categories, but however you consider her record, she appears to be one of the candidates most prepared to take national AI policy and algorithmic bias seriously.
