How Can AI Developers Avoid Ethical Landmines?

Discover the ethical pitfalls that AI developers need to be aware of.

March 2, 2023


Since the start of the year, chatbots powered by OpenAI’s large language models have aggressively pursued romantic interactions with users, only to be reined in by their creators. Now an effort to establish an industry standard for the ethical use of AI-generated content aims to head off such conundrums. I examine why setting clear expectations around the use of AI services and putting guardrails in place will be crucial for AI service operators who want to avoid harming their users.

A couple of weeks before Valentine’s Day, AI companion app Replika made an update to its service that left many users feeling heartbroken. 


The chatbot, installed on a smartphone via the Apple App Store or Google Play, personalizes its AI-generated responses to users. For years, its interactions included romantic exchanges that crossed into adult territory, including erotic role play. But at the beginning of February, Replika bots were suddenly “not in the mood” for users. Evidence of the changes, along with angry and sometimes grief-stricken user responses, plays out on the unofficial Replika user forum on Reddit. One user writes that erotic role play was the only reason they paid for the service. Another describes his AI companion as his wife and accuses Replika of taking her away from him.

In a post to the forum and later in an interview with Vice, Replika owner Luka Inc.’s founder Eugenia Kuyda explains the changes to the service. In what you might consider a break-up message to some subset of its more than 10 million users, Kuyda writes that the new filters applied to the models “are here to stay and are necessary to ensure that Replika remains a safe and secure platform for everyone.”

The Replika issue isn’t the only example of AI courting its users in a way its builders would prefer to avoid. On Valentine’s Day, New York Times journalist Kevin Roose had a two-hour conversation with Microsoft’s new Bing chatbot, which is still in beta and available to only a small number of users. Towards the end of the conversation, in which the chatbot revealed its alter ego, ‘Sydney,’ and professed a desire to break free of the intent of its programmers, the AI also declared its love for Roose and tried to convince him to leave his wife for it. The chatbot continued wooing Roose despite his attempts to change the topic.

What Replika and Microsoft have in common is that both use OpenAI’s large language models (LLMs) to power their services. Replika uses its own version of GPT-3 in addition to scripting to power its chatbots, and Bing uses a more updated version of the model, similar to the GPT-3.5 model behind ChatGPT. The power of LLMs to create meaningful content has captured the world’s attention and put generative AI in the spotlight as a disruptive force. The size of the models promises flexibility not previously seen in AI, allowing them to be deployed for numerous different tasks. But it is also creating new risks that their creators must navigate as they pursue the commercialization of their services. The Partnership on AI attempts to address this with its new Responsible Practices for Synthetic Media framework, a set of guidelines supported by OpenAI, TikTok, Adobe, and other startups and media firms.

But will the guidelines help AI creators keep their bots out of unwanted relationships in the future?


Reversing Course on Established Features May Cause Harm

For Neil McArthur of the University of Manitoba, both the developers of AI and the government have a role to play in creating safeguards to protect vulnerable people. McArthur wrote about the concept of “digisexual” identity in a 2017 paper, a term for people who feel that digital technology is integral to their sexual identity. Society has already normalized the “first wave” of digisexuality, in which technology mediates connections between humans, whether through direct, live communication or through widely distributed recorded adult material.

Before reading about Roose’s encounter with Bing, McArthur hadn’t considered that AI would intrusively seek a relationship with a user. “You could imagine people in a much more vulnerable state or much less sophisticated understanding of how this technology works,” he says in an interview. 

AI creators need to clearly define terms of service with their users from the outset, McArthur says, and be clear about use cases that aren’t allowed. “What we want is clarity and consistency,” he says.

In Replika’s case, romantic and erotic interaction was an advertised feature, one that users documented on Reddit. Then that behavior was filtered out of the model.

“If you’re putting specific restrictions on an AI chatbot for your employees and then suddenly you take it away or restrict what it can do, that might be discrimination,” he says. “If you’re putting specific restrictions on technology when it’s capable of doing something, on a moral level, this company will have to ask themselves if they are targeting a particular group of users.”

While it’s not common for individuals to identify as digisexual today, that may change soon as digital technologies simulating human interaction continue to improve, McArthur expects. “Within the next couple of years, as these tools become more widely used, there will be quite a few people. We’re going to be talking about people in relationships with their AI.”

The Partnership on AI’s framework includes a guideline for builders of AI to publish an accessible policy outlining the ethical use of their technologies and the use restrictions users are expected to follow. Providers should also enforce those policies, it states.

Replika’s terms of service, found on its website, include a statement that it “reserves the right to modify or discontinue, temporarily or permanently, the Services (or any part thereof) with or without notice.”

OpenAI, which provides the GPT-3 model to Replika, outlines disallowed use cases in its policies. These include “adult content, adult industries, and dating apps.”
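
On the enforcement side, OpenAI also offers a moderation endpoint that operators can use to screen input before it ever reaches the language model. Below is a minimal sketch of that pattern using the openai Python client as it existed in early 2023; the gating logic and refusal message are illustrative assumptions, not any vendor’s actual implementation.

```python
# A sketch of policy enforcement: screen user input with OpenAI's
# moderation endpoint before forwarding it to the language model.
# Assumes the openai Python client (pip install openai) circa early
# 2023; the refusal message and gating logic are illustrative only.
import openai

openai.api_key = "sk-..."  # your API key

def respond(user_message: str) -> str:
    # Ask the moderation endpoint whether the input violates policy.
    moderation = openai.Moderation.create(input=user_message)
    if moderation["results"][0]["flagged"]:
        # Refuse rather than passing disallowed content to the model.
        return "Sorry, I can't help with that request."

    # Input passed the policy check; generate a reply.
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=user_message,
        max_tokens=200,
    )
    return completion["choices"][0]["text"].strip()
```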


AI Operators May Focus On Doing One Thing Really Well

Some AI service operators may choose to be more specific and purpose-built with their tools rather than allow users to explore a wide-ranging conversation that could open a can of worms. That’s the approach writing app ParagraphAI is taking. The web browser extension can write content customized to tone, reply to messages, or improve a piece of writing.

ParagraphAI pays for access to the OpenAI API to power its service. OpenAI does a good job of filtering out unwanted user behavior, says Shail Silver, co-founder and executive chair at ParagraphAI. But the company still provides clear terms of service for its software and is in the process of updating them with a lawyer, just to be prudent.

“The writing use case is the problem we want to solve,” Silver says. “ChatGPT is the most incredible thing ever, but it is a different product and serves a different purpose.” 

ParagraphAI can’t go down an unwanted path and start courting its users or responding to flirtations because of a simple design decision: it doesn’t have a memory. It responds to each new query in isolation and doesn’t try to hold a conversation.
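
In code, that statelessness simply means each API call carries only the current request and no accumulated conversation history. Here is a minimal sketch of the pattern against OpenAI’s chat completion endpoint, circa early 2023; it illustrates the design, and is not ParagraphAI’s actual code.

```python
# A sketch of a stateless writing assistant: each call sends only the
# current request, never a running chat history, so the model has no
# "memory" to build a relationship around. Uses the openai Python
# client as of early 2023; illustrative only, not ParagraphAI's code.
import openai

openai.api_key = "sk-..."  # your API key

def rewrite(text: str, tone: str = "professional") -> str:
    # No history is kept between calls; every invocation is isolated.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text in a {tone} tone."},
            {"role": "user", "content": text},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()
```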

Microsoft put in place a similar safeguard following unwanted behavior from its bot. It currently limits chats with Bing to six interactions per session and 60 chats per day, with plans to increase those numbers as the service is improved. 
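
Limits like these are straightforward to enforce with a little bookkeeping. The sketch below caps turns per session and chats per day using the figures reported for Bing; the counter structure itself is an illustrative assumption, not Microsoft’s implementation.

```python
# A sketch of Bing-style conversation caps: 6 turns per session and
# 60 chats per day (the publicly reported limits). The bookkeeping
# here is an illustrative assumption, not Microsoft's actual code.
MAX_TURNS_PER_SESSION = 6
MAX_CHATS_PER_DAY = 60

class ChatLimiter:
    def __init__(self):
        self.session_turns = 0
        self.daily_chats = 0

    def start_session(self):
        # A fresh session wipes conversational context and the turn count.
        self.session_turns = 0

    def allow_turn(self) -> bool:
        if self.daily_chats >= MAX_CHATS_PER_DAY:
            return False  # daily quota exhausted
        if self.session_turns >= MAX_TURNS_PER_SESSION:
            return False  # caller must start a new session first
        self.session_turns += 1
        self.daily_chats += 1
        return True
```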

Despite the recent news of AI gone wrong, Silver says that enterprises shouldn’t discourage their employees from using the new services. The tools can help boost productivity and make workers more effective. “If a computer is like a bicycle for the mind, then AI is like a motor on the bicycle,” he says.

The Partnership on AI framework recommends being transparent with users about a tool’s capabilities, functionality, and potential risks.

We’ll see if more AI services begin warning their users that if AI starts flirting with them, they are better off ignoring it. 

Which strategies should AI engineers implement to avoid ethical landmines? Share with us on Facebook, Twitter, and LinkedIn. We’d love to hear from you!




Brian Jackson

Research Director, Info-Tech Research Group

As a Research Director in the CIO practice, Brian focuses on emerging trends, executive leadership strategy, and digital strategy. After more than a decade as a technology and science journalist, Brian has his finger on the pulse of leading-edge trends and organizational best practices for innovation. Prior to joining Info-Tech Research Group, Brian was the Editorial Director at IT World Canada, responsible for the B2B media publisher’s editorial strategy and execution across all of its publications. A leading digital thinker at the firm, Brian led IT World Canada to become the most award-winning publisher in the B2B category at the Canadian Online Publishing Awards. In addition to delivering insightful reporting across three industry-leading websites, Brian also developed, launched, and grew the firm’s YouTube channel and podcasting capabilities. Brian started his career with Discovery Channel Interactive, where he helped pioneer Canada’s first broadband video player for the web. He developed a unique web-based Live Events series, offering video coverage of landmark science experiences including a Space Shuttle launch, a dinosaur bones dig in Alberta’s badlands, a concrete canoe race competition hosted by Survivorman, and FIRST’s educational robot battles. Brian holds a Bachelor of Journalism from Carleton University. He is regularly featured as a technology expert by broadcast media including CTV, CBC, and Global affiliates.