The Ethical Conundrum: Combatting the Risks of Generative AI

How can the risks of generative AI be handled effectively and ethically?

March 22, 2023

The competitive race surrounding generative AI is heating up – quickly. It’s imperative to develop and evaluate these powerful tools with a clear ethical framework that lays out rules and regulations, educates consumers and prevents malpractice and unethical AI applications, says CF Su, VP of machine learning at Hyperscience.

In its first public demo, Bard, an AI-based conversational chatbot launched by Google to rival Microsoft’s recent investments in OpenAI’s ChatGPT, made an incredibly costly mistake. Bard responded to a user prompt with an inaccurate claim, causing the market value of Alphabet, Google’s parent company, to drop by roughly $100 billion. Since Bard’s hiccup, Meta has announced its competing model, LLaMA, and brands like Instacart, Coke and Patron have all shared plans to incorporate ChatGPT into their platforms.

Generative AI solutions are not only here to stay but will increasingly shape the consumer experience. As software providers rush their generative AI offerings to market and companies move just as quickly to integrate them, ethical considerations must be at the forefront.

The Changing Role of Search Engines

Search engines have long been classified as information aggregators, as they gather intelligence that another party previously generated. That role is changing, however, as search engines adopt large language models (LLMs) like Google Bard to deliver results: because LLMs generate content in response to user prompts, search engines are evolving from aggregators into generators.

Becoming an information generator is an important distinction that makes companies like Microsoft and Google more responsible for ethical concerns. Historically, as information aggregators, search engines were largely protected from libel and other related lawsuits under Section 230 of the Communications Decency Act, while the actual content creator could be held liable. But when Bard generates a response containing potentially libelous content, Google could ultimately be held accountable.

While many of the companies that have announced partnerships with ChatGPT and the like are not search engines, understanding the implications of ethically compromised content will be an important area to watch. For example, when one peer-to-peer mental health service used AI to generate mental health support for its users, the social media backlash was swift. Using a generative tool that can hallucinate content is incredibly risky in healthcare, and companies in the space must consider the public response before taking this step.

AI’s unprecedented growth and development have sparked debate about who should be held responsible – and those that regulate themselves internally will be better off than those that don’t.

How to Implement an Ethical Framework

Strong ethical frameworks require internal education and buy-in. Every employee must be aware of AI’s extensive capabilities and pitfalls. To do this, organizations should prioritize creating an AI ethics committee focused on education and engagement. These committees provide a system of checks and balances on technological development and help organizations align on how regulators can protect the public against potentially harmful applications.

Here are several areas to consider to create a successful AI ethics team this year: 

  1. Put transparency first: To start, lay out clear goals and objectives for your committee. Stakeholders should align on the end goals, and these conversations should not be limited to the committee itself. Employees and other technical leaders across the organization may have something to say about the committee’s direction, and listening to all voices is important. With every decision and milestone, transparent communication will accelerate trust and buy-in from the organization, and its importance cannot be overstated.
  2. Avoid over-committing: Artificial intelligence is an incredibly complex field with many intricacies yet to be explored. That’s why narrowing your committee’s scope and remaining focused is critical. If you try to tackle everything under the sun, things will inevitably fall flat. Understand how your company plans to deploy, build or leverage technology, and use this knowledge to be intentional in your committee’s plans to drive the most impact.
  3. Embrace diverse perspectives: Those experienced with AI and deep tech offer the most technical expertise, but a well-rounded committee embraces perspectives and stakeholders from across the entire business. Team members from legal, creative, marketing and engineering, to name a few, should all be present, giving your committee representation in all areas where concerns may arise. Once the committee is underway, engage in company-wide conversations to bring everyone into the fold.

Aside from employee buy-in and support, the most impactful ethics committees will engage with people and teams outside the organization to keep a pulse on industry conversations, challenges and solutions. For example, teams could work closely with regulators to define rules that would protect individuals from the potential negative impacts of AI. This extends to welcoming customer feedback to understand the ethical questions facing their teams as well. 


Launching Ethical Principles into Practice

This past fall, the White House released a proposed blueprint to help developers and organizations better navigate ethical AI. Though following the AI Bill of Rights guidelines is voluntary, the plan offers insight into potential future federal-level regulation, the looming requirements of public-sector customers, and, perhaps most importantly, a common language to spark internal discussion and external communications.

The AI Bill of Rights is certainly a step in the right direction for establishing ethical uses of AI, though in its current state it primarily serves as guidance. For everyday individuals, it will raise awareness of the potential negative impacts of AI technology and automated systems, much as public awareness of and concern about online data privacy have grown.

While we navigate the early stages of regulation, companies should strongly consider taking preventive steps to avoid unethical applications of AI as it is implemented and used. As an extension of the AI ethics committee’s work, consider assessing the primary areas outlined by the Bill of Rights as a ‘sniff test’ to determine whether emerging generative AI use cases are ethical.

As the White House outlines, the key areas to evaluate include:

  1. Safe and effective systems: You should be protected from unsafe or ineffective systems. 
  2. Algorithmic discrimination protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  3. Data privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
  4. Notice and explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. 
  5. Human alternatives, consideration, and fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

AI regulation requires a delicate balance: continuously fostering advancement while carefully considering the implications of widespread usage. If the technology is designed, implemented, and applied correctly, its possibilities could be infinite.

As generative AI technology continues to rise in popularity, so does the risk level for companies leveraging these tools. Those that lead with an ethical framework are better suited to manage potential ethical concerns, while companies that rush their offerings to market risk a scenario like Bard’s initial demo, which can erode consumer confidence for years.

Will the ethical debate around generative AI be resolved in the next two years? Share with us on Facebook, Twitter, and LinkedIn. We’d love to hear from you!


CF Su

VP of Machine Learning, Hyperscience

CF brings over 15 years of R&D experience in the tech industry. He has led engineering teams at fast-paced start-ups as well as large Internet companies. His expertise includes search ranking, content classification, online advertising, and data analytics. Most recently, CF was the Head of Machine Learning at Quora, where his teams developed ML applications for recommendation systems, content understanding, and text classification models. Before that, he held technical leadership positions at Polyvore (acquired by Yahoo), Shanda Innovations America, and Yahoo Search, and was a senior researcher at the Fujitsu Lab of America. To date, CF’s industry contributions include 14 U.S. patents and more than 20 technical papers.