by Jamila Qadir

How the UAE is finding an ally in AI

Feature
Jun 15, 2022 | 9 mins
Artificial Intelligence, Regulation

As IT departments steadily rise to the C-suite and Industry 4.0 gathers pace, AI in all its applications is becoming a central priority. At the recent World AI Show in Dubai, industry leaders pored over IT growth, collaboration, and the ethics of machine learning, which remains a work in progress, especially in efforts to eliminate bias.

Credit: Dem10 / Getty Images

It’s imperative today for both public and private sector organisations to invest in, implement, and build company culture around technologies of the future. Industry 4.0, otherwise known as the fourth industrial revolution (4IR), is at the core of this effort: a synergised approach to production based on a groundswell of cutting-edge IT, large-scale automation of business processes, and a massive spread of AI.

While 4IR was always an inevitability, the pandemic accelerated the need for the infrastructure and knowledge to cope with huge amounts of data supplied by thousands of sensors and smart devices. That, along with the need to regulate the use of AI and address its inherent ethical concerns, was a recurring theme at the World AI Show held in Dubai last month.

Laying the groundwork

In 2017, the UAE announced 4IR strategies based on three pillars: making the UAE a technology hub for Industry 4.0, digital transformation of the economy, and increasing the efficiency of government services, said Saeed Alhebsi, AI advisor at the Ministry of Human Resources and Emiratisation, who spoke on the first day of the event.

4IR also focuses on a number of specific fields, including remote and in-person education, intelligent genomic medicine, and robotic healthcare, encouraging ministries to adopt emerging technologies and implement them to provide intelligent and interactive services, he explained.

Alhebsi said his ministry has already completed training for all of its 1,000 employees, who are now up to speed.

“Now, we’ll look at what kind of advanced technologies we can use and where to implement them,” he said. “As a labour ministry, we’re looking into using a smart contract [programme].”

Fighting bias in AI

AI is becoming an increasingly integral part of face and voice recognition systems, which have substantial business implications that directly affect people. According to some estimates, the use of AI in recruitment will replace about 16% of recruitment sector jobs by 2029.

But these systems are vulnerable to biases and mistakes introduced at the human level, and the data used to train them may contain biases related to nationality and gender. In recruitment specifically, some AI-powered algorithms have been found to favour male applicants, said Shaily Verma, director of data and insights at Damac Properties, referring to an infamous case in which Amazon’s recruiting algorithm showed bias against female applicants.

Unconscious biases are also detrimental to the recruitment and retention of staff, so top technology companies need to pioneer innovations to fight bias in AI, she suggested, adding, however, that the situation is changing with better uses of AI.

“Often while making some business decisions, we include some biases we didn’t mean to include,” said Daniel Shearly, VP of products at GfK, UK.

Developers can unconsciously program those biases into AI, but by focusing on the outcomes an AI system produces, one can analyse the data and remove them, he said.

Ethical AI on the rise

AI ethics is generally understood as the principles and human input that underlie the decisions AI makes, as well as the behaviour of AI in situations that directly affect people. The latter fundamentally distinguishes the ethics of AI from that of other digital technologies.

The introduction of AI systems into everyday life raises many ethical issues, which will become more challenging in the coming years. Early examples include fatal accidents involving self-driving cars, Google AI developers protesting participation in US Department of Defense projects, sexism and racism in facial recognition algorithms, and targeted advertising.

These and other large-scale issues arise when public and private services use AI to gather information about the population. Industry experts agree that strategies and regulatory frameworks are needed to ensure the latest technologies are used for the benefit of humanity. UNESCO, in fact, is proposing to develop a comprehensive global normative act to give AI a solid ethical foundation, one that will not only protect but also promote respect for human rights and human dignity.

After its adoption, this document would become an ethical guideline and a global regulatory framework for ensuring compliance with the rule of law in the digital world. But with little collaboration among governments, all these plans remain merely good intentions on paper, experts say.

Need for legislation

On day two of the show, Ashraf Elnagar, professor of AI in the Department of Computer Science at the University of Sharjah, said that apart from the human side of AI, there is a legal aspect where enforcement comes in and where rules and regulations must be established. The problem, however, is that legislation lags behind the fast pace of technological innovation, which has been an issue in IT for decades. But governments have a vital role to play in the responsible use of new technologies, said Elnagar, and the UAE government is acting on it with a ministry for AI, a dedicated university for AI, and most other universities offering degrees in AI.

“Therefore, I’m sure that legislation will follow to regulate AI solutions when it comes to serving humans,” he said in the context of ethical AI and biases. “In IT, we’ve seen problems like this for so many years. When we had viruses, we would invent antiviruses. The same applies for AI solutions. We try to mitigate and reduce the bad effects of it but we can’t eliminate it entirely. Let’s be optimistic though.”

Legislation will help mitigate these problems: while it’s difficult to predict the future use and misuse of AI, strict laws and strong regulations will reduce harmful effects and wrongdoing.

According to Moayad Majed, UAE government advisor on digital transformation, the UAE government is also working on elevating IT departments to the C-suite level, so that within a few years, IT departments will report directly to CEOs because IT will manage everything.

“We need to take action because digital transformation is not just copy-paste,” he said, suggesting that global government leaders must form a coalition to set guidelines for digital transformation and the use of modern technologies like AI. In the UAE, for instance, that could be a federal organisation to shape rules and regulations for the safe use of AI, blockchain, and even cryptocurrencies, he said, adding that new technologies should be governed in both the public and private sectors. Once they are regulated, governments and private organisations will be able to reap the benefits of these modern technologies.

Also on the second day of the show, Thuraiya Al-Harthi, senior specialist of innovation and emerging technology development at the Ministry of Transport, Communication & Information Technology of Oman, echoed that it was imperative, especially for governments, to set up rules and regulations governing the use of AI.

Any new technologies should be reflected in policy, regulations, and frameworks, she said, adding that AI should be utilised and adopted in an ethical way within organisations.

“In government, we created a regulative framework to organise things,” Al-Harthi said. “Many governments want to adopt AI but we have to come up with a clear plan to define which sector will adopt AI, and it should all align with a big umbrella organisation that will govern and regulate all these issues.”

She added that while there is a digital transformation plan, there is still a need to set up guidelines and policies, which should be clear to users before they use these services.

“For example in the case of AI, we need to have data classification and a data privacy policy, and a lot of other policies should be in place because AI is using the data—our data,” she said. “How will I be protected by the organisation that is using my data?”

According to Sana Farid, co-chair of the healthcare committee at the VR/AR Association, the UAE is working heavily on ways to govern the use of ethics-based AI, especially in the healthcare sector, with robotic-assisted operations and insurance coverage.

The UAE, after all, is the first and only country to have an AI Ministry, which was set up to help organisations embrace emerging technologies. So it’s the responsibility of other ministries, companies, and public and private organisations to contact this Ministry to understand how it works, she said, adding that the future of government work is collaboration.

“We’re starting from scratch and it will take some time to come up with new rules,” she said. “We’re working on building standards and they’ll come in, combining efforts of professors, innovators, and visionary people from everywhere.”

Sheeba Hasnain, former head of IT operations for the Roads & Transport Authority, Sharjah, added: “We are moving towards the data economy. We must be ready in our organisations on how to manage data in the future. Data must be the priority.” Humanity is moving towards a connected world, one that will bring a greater level of accuracy and ethics in the future, she said. It’s not a single person’s work, of course, but a collaboration of everyone contributing towards this goal.