
Researchers propose framework to measure AI’s social and environmental impact




In a newly published paper on the preprint server Arxiv.org, researchers at the Montreal AI Ethics Institute, McGill University, Carnegie Mellon, and Microsoft propose a four-pillar framework called SECure designed to quantify the environmental and social impact of AI. Through techniques like compute-efficient machine learning, federated learning, and data sovereignty, the coauthors assert, scientists and practitioners have the power to cut AI’s contribution to the carbon footprint while restoring trust in historically opaque systems.

Sustainability, privacy, and transparency remain underaddressed and unsolved challenges in AI. In June 2019, researchers at the University of Massachusetts at Amherst released a study estimating that training a single large model, including a neural architecture search, can emit roughly 626,000 pounds of carbon dioxide — equivalent to nearly 5 times the lifetime emissions of the average U.S. car. Partnerships like those pursued by DeepMind and the U.K.’s National Health Service conceal the true nature of the AI systems being developed and piloted. And sensitive AI training data often leaks onto the public web, usually without stakeholders’ knowledge.

SECure’s first pillar — compute-efficient machine learning — aims to lower the computational burden that typically makes access inequitable for researchers who aren’t affiliated with organizations that have heavy compute and data processing infrastructure. It proposes creating a standardized metric that could be used to make quantified comparisons across hardware and software configurations, allowing people to make informed decisions in choosing one system over another.
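
To make the idea concrete, here is a minimal sketch of what such a standardized comparison might look like, assuming the metric is something like validation accuracy per kilowatt-hour of training energy. The paper proposes the metric concept but does not fix a formula; the configurations and numbers below are hypothetical.

```python
# Hypothetical sketch of a standardized compute-efficiency comparison. The
# metric (accuracy earned per kilowatt-hour of training energy) and the sample
# numbers are illustrative assumptions, not figures from the SECure paper.

def efficiency_score(validation_accuracy: float, energy_kwh: float) -> float:
    """Return accuracy points earned per kWh of training energy."""
    return validation_accuracy / energy_kwh

# Two hypothetical hardware/software configurations trained on the same task.
configs = {
    "gpu_cluster_fp32": {"validation_accuracy": 0.91, "energy_kwh": 120.0},
    "single_gpu_mixed_precision": {"validation_accuracy": 0.89, "energy_kwh": 35.0},
}

for name, c in configs.items():
    print(f"{name}: {efficiency_score(**c):.4f} accuracy per kWh")
```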

The second pillar of SECure proposes the use of federated learning approaches as a mechanism to perform on-device training and inferencing of machine learning models. (In this context, federated learning refers to training an AI algorithm across decentralized devices or servers that hold local data samples, so that multiple parties can build a shared model without exchanging raw data.) As the coauthors note, federated learning can decrease carbon impact if computations are performed where electricity is produced from clean sources. As a second-order benefit, it mitigates the risks and harms that arise from data centralization, including data breaches and privacy intrusions.
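
The sketch below illustrates the federated pattern this pillar relies on, assuming a simple federated-averaging setup: each simulated client trains a small linear model on its own data, and the server aggregates only the resulting weights. The clients, data, and model are toy assumptions, not the paper’s implementation.

```python
# Minimal federated-averaging sketch: each client fits a linear model on its
# own data and shares only the resulting weights; raw samples never leave the
# client. Data and model are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_update(X, y, lr=0.1, steps=100):
    """Plain gradient descent on a least-squares objective, run on-device."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three simulated clients, each holding private data drawn from the same task.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# The server aggregates only locally trained weights, weighted by sample count.
local_weights = [local_update(X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients])
global_w = np.average(local_weights, axis=0, weights=sizes)
print("Federated estimate:", global_w)
```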


SECure’s third pillar — data sovereignty — refers to the idea of strong data ownership: affording individuals control over how their data is used, for what purposes, and for how long. It would also allow users to withdraw consent when they see fit, while respecting differing norms of ownership that are typically ignored in discussions of diversity and inclusion as they relate to AI. The coauthors point out, for example, that some indigenous perspectives on data require that it be maintained on indigenous land, or used and processed in ways consistent with certain values.

“In the domain of machine learning, especially where large data sets are pooled from numerous users, the withdrawal of consent presents a major challenge,” the researchers wrote. “Specifically, there are no clear mechanisms today that allow for the removal of data traces or of the impacts of data related to a user … without requiring a retraining of the system.”
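
A hypothetical example of why withdrawal is costly under current practice: absent a machine-unlearning mechanism, removing one user’s influence typically means filtering out their records and retraining the whole model from scratch. The dataset layout, user IDs, and model below are assumptions made for illustration.

```python
# Illustration of the consent-withdrawal problem the authors describe: with no
# way to surgically remove one user's influence from a trained model, the
# fallback is to drop their records and retrain from scratch. The dataset
# layout and model here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Pooled training set tagged with the contributing user's ID (toy data).
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
user_ids = rng.integers(0, 30, size=300)

model = LogisticRegression().fit(X, y)

# User 7 withdraws consent: filter out their rows, then retrain entirely,
# paying the full training cost every time someone opts out.
keep = user_ids != 7
model_after_withdrawal = LogisticRegression().fit(X[keep], y[keep])
```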

The last pillar of SECure — LEED-esque certification — draws inspiration from the Leadership in Energy and Environmental Design green building program. The researchers propose a certification process that would provide metrics allowing users to assess the state of an AI system in comparison with others, including measures of the cost of data tasks and custom workflows (in terms of storage and compute power). It would be semi-automated to reduce administrative costs, with the tools organizations need to become compliant developed and made available as open source. And it would be intelligible to a wide group of people, informed by a survey designed to determine what information users seek from certifications and how it can best be conveyed.
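
As a rough illustration of how such a certification might aggregate measurements into a rating, the sketch below maps a few hypothetical quantities (energy per batch of inferences, storage footprint, renewable-energy share) to LEED-style tiers. The inputs, weights, and cutoffs are invented; the paper proposes the certification process, not this formula.

```python
# Hypothetical sketch of a LEED-style tiering from measured system properties.
# The inputs, weights, and tier cutoffs are invented for illustration; the
# SECure paper proposes the certification concept, not a fixed formula.

def certification_tier(energy_kwh_per_1k_inferences: float,
                       storage_gb: float,
                       renewable_fraction: float) -> str:
    """Map raw measurements to a coarse rating; a lower footprint scores higher."""
    score = (
        40 * renewable_fraction
        - 0.5 * energy_kwh_per_1k_inferences
        - 0.01 * storage_gb
    )
    if score >= 30:
        return "platinum"
    if score >= 15:
        return "gold"
    return "certified"

print(certification_tier(energy_kwh_per_1k_inferences=4.0,
                         storage_gb=200.0,
                         renewable_fraction=0.8))
```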

The researchers believe that if SECure were deployed at scale, it would create the impetus for consumers, academics, and investors to demand more transparency on the social and environmental impacts of AI. People could then use their purchasing power to steer the direction of technological progress, ideally accounting for those two impacts. “Responsible AI investment, akin to impact investing, will be easier with a mechanism that allows for standardized comparisons across various solutions, which SECure is perfectly geared toward,” the coauthors wrote. “From a broad perspective, this project lends itself well to future recommendations in terms of public policy.”

The trick is adoption, of course. SECure competes with Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology. IBM has separately proposed voluntary factsheets that would be completed and published by companies that develop and provide AI, with the goal of increasing the transparency of their services.
