Cloud AI is like nuclear power

With incredible potential for both good and harm, AI needs worldwide regulation to ensure it isn't misused


In a recent speech, Google and Alphabet CEO Sundar Pichai called for new regulations in the world of AI, with an obvious focus on how cloud computing has commoditized the technology. This is no surprise, now that we’re debating the ethical questions that surround the use of AI: most especially, how easily AI can weaponize computing, for legitimate businesses as well as bad actors.

Pichai highlighted the dangers posed by technologies such as facial recognition and “deepfakes,” in which a person in an existing image or video is replaced with someone else’s likeness using artificial neural networks. He also stressed that any legislation must balance “potential harms ... with social opportunities.”

AI is much more powerful today than it was just a few years ago. AI once resided in the realm of supercomputers that cost budget-busting sums to use. Cloud computing made AI an on-demand service, affordable for even small businesses. Moreover, there is a huge boom in R&D spending on AI services, with providers racing to out-innovate one another in the sheer number of features and functions they offer, including knowledge models that are easy to build and train and that integrate readily with new and existing applications.
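To make the “on-demand service” point concrete: a commodity cloud API puts a pre-trained model a few lines of code away. Here is a minimal sketch using AWS Rekognition through boto3; the bucket and image names are hypothetical placeholders.

```python
import boto3

# Rekognition exposes pre-trained vision models as a pay-per-call API;
# no GPUs, training pipeline, or data science team required.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect faces and estimated attributes in an image stored in S3.
# The bucket and object key are placeholders for illustration.
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-store-cams", "Name": "storefront.jpg"}},
    Attributes=["ALL"],  # request age range, gender, emotions, etc.
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    print(f"Face found: estimated age {age['Low']}-{age['High']}, "
          f"confidence {face['Confidence']:.1f}%")
```

That is the entire integration; a small business pays per request rather than for supercomputer time.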

I would make the analogy that AI is much like nuclear power. Both have potential that needs to be captured. Both need limits to ensure they are not misused. Nuclear power provides cheap, carbon-light electricity, and AI has the potential to give us driverless cars and save hundreds of thousands of lives in the healthcare vertical. Don’t both need regulation?

Most technology has the potential to be used for good and bad. AI and nuclear power certainly fall into that category. The risk with AI is that some organizations may leverage it for perfectly sound reasons but end up doing ethically questionable things with it.

For example, facial recognition in a retail store can build a database of images and personal information that can be sold to marketing firms. It’s one thing to have security cameras always present, but quite another when those cameras can determine who you are and link that identity to your marital status, sexuality, demographics, and other information culled through AI-driven big data analytics.
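How low is the barrier to building such a database? Here is a minimal sketch along those lines using the open source face_recognition library; the enrollment photo, camera frame, and CRM record are all hypothetical stand-ins.

```python
import face_recognition

# Hypothetical enrollment photo tied to a customer profile,
# e.g. collected during a loyalty-program signup.
known_ids = ["customer_1042"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file("enrollment/customer_1042.jpg")
    )[0]
]

# Hypothetical CRM records keyed by customer ID.
crm = {"customer_1042": {"marital_status": "married", "segment": "high-spend"}}

# A frame captured by an in-store camera (placeholder file name).
frame = face_recognition.load_image_file("camera/frame_0001.jpg")

# Match every face in the frame against the enrolled customers.
for encoding in face_recognition.face_encodings(frame):
    for customer_id, matched in zip(
        known_ids, face_recognition.compare_faces(known_encodings, encoding)
    ):
        if matched:
            # At this point a security camera has become a marketing sensor.
            print(customer_id, crm[customer_id])
```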

The law of unintended consequences is really what’s at stake here. If regulations are created and adopted but not implemented worldwide, they will have little effect in limiting the misuse of AI. Public clouds are international: if some pattern of AI usage is illegal in one country, it’s simple to move the workload to another region. We already see this with data security and privacy rules, where processing migrates to wherever the regulations are most permissive. AI processing won’t be any different.
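That jurisdictional arbitrage is not hypothetical friction; in most cloud SDKs, moving a workload to another legal regime is a single parameter. A sketch, again with boto3 and illustrative region names:

```python
import boto3

def face_client(region: str):
    """Same code, same models; only the jurisdiction changes."""
    return boto3.client("rekognition", region_name=region)

# If an AI workload runs afoul of one country's rules, redeploying it
# under another legal regime is a one-line change.
client = face_client("eu-west-1")       # subject to EU rules
client = face_client("ap-southeast-1")  # same API, different regime
```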

Copyright © 2020 IDG Communications, Inc.