Amid growing concerns about the adverse effects of AI, the British government has announced a $125 million (£100 million) investment to support regulators and advance research and innovation in AI.

Close to $113 million (£90 million) is being allocated to establish nine new research hubs throughout the UK, alongside a collaboration with the US on responsible AI. These hubs will bolster UK expertise in AI, applying the technology in fields including healthcare, chemistry, and mathematics.

“AI is moving fast, but we have shown that humans can move just as fast,” Secretary of State for Science, Innovation, and Technology Michelle Donelan said in a statement. “By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”

The announcement comes as $12.5 million (£10 million) has been earmarked to equip regulators with the training and skills needed to navigate the challenges and seize the opportunities presented by this critical technology. Many countries worldwide are grappling with how to regulate AI without hindering growth and development.

Some measures already in place

The UK government’s statement noted that some regulators have already initiated measures. For example, the Information Commissioner’s Office has revised its guidance to clarify how stringent data protection laws apply to AI systems that handle personal data, emphasizing the importance of fairness. It has also maintained its oversight role by enforcing compliance, including issuing enforcement notices to organizations.
“However, the UK government wants to build on this by further equipping them for the age of AI as use of the technology ramps up,” the statement read. “The UK’s agile regulatory system will simultaneously allow regulators to respond rapidly to emerging risks while giving developers room to innovate and grow in the UK.”

To enhance transparency and build trust among British businesses and the public, leading regulators, including Ofcom and the Competition and Markets Authority, have been instructed to outline their approaches to managing AI by April 30.

The advantage of leveraging existing laws

The UK is relying on its existing legal framework to regulate AI in areas that affect large numbers of people, such as employment, pointed out Adam Leon Smith, an AI expert from BCS, The Chartered Institute for IT. Even with “old-fashioned” AI, however, the country needs to balance the risks against the opportunities.

“It is, therefore, right that the government moves to fund and empower those existing regulators with the tools they need to do their job,” Smith said. “We also need to remember that this future will be shaped by AI professionals. Managing the risk of AI and building public trust will be most effective when the people creating it are professionally registered and accountable to clear standards.”

The government’s statement also included comments from several major tech companies, including Microsoft and Google, all of which welcomed the latest steps.