The agreement, which excludes China, includes recommendations for monitoring AI systems for abuse, protecting data, and vetting software suppliers.

Twenty-two law enforcement and intelligence agencies from 18 countries signed an international agreement on AI safety over the weekend, designed to make new versions of the technology "secure by design."

The agreement comes months after the European Parliament approved its draft of the EU AI Act in June, which would ban certain AI technologies, including biometric surveillance and predictive policing, and classify AI systems that could significantly affect health, safety, rights, or elections as high risk.

"AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realized, it must be developed, deployed, and operated in a secure and responsible way," the agreement stated.

The agreement emphasized that, given the rapid pace of AI development, security must not be an afterthought but a core requirement integrated throughout the life cycle of AI systems.

"AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats," the report said. "When the pace of development is high — as is the case with AI — security can often be a secondary consideration."

One way AI-specific security differs from standard cybersecurity is the phenomenon of "adversarial machine learning," which the report calls a critical concern in the developing field of AI security. Adversarial machine learning is the strategic exploitation of fundamental vulnerabilities in machine learning components: by manipulating these elements, adversaries can disrupt or deceive AI systems, leading to erroneous outcomes or compromised functionality.
Aside from the EU's AI bill, in the US, President Joe Biden signed an executive order in October to regulate AI development, requiring developers of powerful AI models to share safety test results and other critical information with the government.

China is not a signatory

The agreement was signed by government agencies from Australia, Canada, Chile, the Czech Republic, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea, and Singapore, in addition to the UK and the US.

Absent from the agreement was China, a powerhouse of AI development and the target of several US trade sanctions intended to limit its access to the high-powered silicon required for AI development. In a speech at a chamber of commerce event in Taiwan on Sunday, TSMC chairman Mark Liu argued that the US move to exclude China will lead to a global slowdown in innovation and a fragmentation of globalization.

AI remains a legal minefield

The agreement is nonbinding, primarily offers general recommendations, and does not address complex questions about the proper applications of AI or the methods of data collection for AI models. It also does not touch on the ongoing civil litigation in the US over how AI companies ingest data to train their large language models, and whether that practice complies with copyright law.

In the US, several authors are suing OpenAI and Microsoft, alleging copyright infringement and intellectual property violations over the use of their creative works to train OpenAI's ChatGPT, highlighting growing concerns about AI's impact on traditional creative and journalistic industries.
According to K&L Gates, OpenAI and other defendants in these cases are leveraging defenses such as lack of standing and the fair use doctrine, with courts approaching the early cases skeptically, making the future of AI litigation "uncertain."