AI Weekly: UN proposes moratorium on ‘risky’ AI while ICLR solicits blog posts



UN High Commissioner for Human Rights Michelle Bachelet this week called for a moratorium on the sale and use of AI systems that pose “a serious risk to human rights.” Bachelet said adequate safeguards must be put in place before development resumes on such systems and that any systems that can’t be used in compliance with international human rights law should be banned.

“AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.

Of course, defining which systems pose a risk to human rights isn’t necessarily a straightforward task. The Human Rights Council outlines a few examples in a new report, including systems that “deepen privacy intrusions” through the increased use of personal data and “lead to discriminatory decisions.” But as recent comments submitted to the European Parliament and European Council suggest, definitions of “risky” can vary widely, depending on the stakeholder.

As Wired’s Khari Johnson recently wrote, some businesses responding to the European Union’s AI Act — which proposes oversight of “high-risk” AI — believe the legislation goes too far, with innovation-stifling and potentially costly rules. Meanwhile, human rights groups and ethicists maintain it doesn’t go far enough, leaving people vulnerable to those with the resources to deploy powerful algorithms.


While an agreed-upon definition of “risk” remains elusive, particularly as companies like Alphabet’s DeepMind and OpenAI work toward general-purpose, multitasking systems that defy conventional labels, the Human Rights Council’s report identifies ways to help prevent and limit the harms introduced by AI. For example, the report argues that AI development must be equitable and non-discriminatory, with participation and accountability embedded as core parts of the processes. In addition, it asserts that requirements of legality, legitimacy, necessity, and proportionality must be “consistently applied” to AI technologies, which should be deployed in a way “that facilitates the realization of economic, social, and cultural rights.”

“AI now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” Bachelet said. “We cannot afford to continue playing catch-up regarding AI — allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact … Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

ICLR introduces a blog post track

In other news this week, the International Conference on Learning Representations (ICLR), one of the largest machine learning conferences in the world, announced a call for contributions to the very first Blog Post Track. The goal is to solicit submissions in blog format, allowing researchers to discuss previously published research papers that have been accepted to ICLR.

“[Blog Post Track] recognizes and values summarization work as opposed to novel work,” Sebastien Bubeck, ICLR blog post chair and senior principal research manager at Microsoft Research, told VentureBeat via email. “For example, certain published papers might have difficult and technical mathematical proofs for quite abstract settings. Blog posts in that case might work out a specific subcase of that general theory, distilling the insights into some practical examples. Alternatively, a post might propose a new simpler proof of the same result, or perhaps connect the proof with ideas in other areas of computer science.”

Bubeck believes encouraging researchers to review older, peer-reviewed scientific work might allow them to highlight studies’ shortcomings and help synthesize knowledge in the AI community. He traces the initiative to postwar France, when a collective of mathematicians writing under the pseudonym Nicolas Bourbaki decided to produce a series of textbooks about the foundations of mathematics.

“For more applied papers, blog posts might be a good way to revisit experiments, with the overall goal [of helping] with the reproducibility crisis in machine learning. In fact, … contrary to main conference papers, blog posts might focus on smaller-scale experiments, investigating whether certain [phenomena] are due to scale​ or whether they are intrinsic to the architecture or problem at hand,” Bubeck said.

AI, like many scientific fields, has a reproducibility problem. Studies often report benchmark results in lieu of releasing source code, which becomes problematic when the thoroughness of the benchmarks themselves is in question. One recent report found that 60% to 70% of answers given by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. Another study, a meta-analysis of over 3,000 AI papers, found that the metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.

For the first edition of the Blog Post Track at ICLR, Bubeck says the conference chairs will select the reviewers for the submitted posts. Going forward, he hopes to bring blog post tracks to more computer science conferences — not only those focused primarily on AI and machine learning.

“Blog posts provide an opportunity to informally discuss scientific ideas. They offer substantial value to the scientific community by providing a flexible platform to foster open, human, and transparent discussions about new insights or limitations of a scientific publication,” Bubeck said.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
