
AI Governance Proposal: Control AI by Controlling Compute

To control compute – to squeeze or open the spigot of processing power – is to control AI. Doing so, the argument goes, can steer AI toward beneficial results while deterring, or punishing, bad ones. That’s the case made in a 78-page white paper from 15 research centers and universities in the U.S., Canada, and the UK – along with one company: OpenAI, whose launch of ChatGPT in November 2022 ignited the generative AI craze and stoked, among other impacts, growing concern about uncontrolled AI.

Adding to the irony of OpenAI’s involvement is the recent news that Sam Altman, OpenAI’s CEO, is on a mission to raise up to $7 trillion to build dozens of foundries around the world to manufacture, yes, advanced AI chips.

“The central thesis of this paper is that governing AI compute can play an important role in the governance of AI,” state the authors of the paper. “Other inputs and outputs of AI development (data, algorithms, and trained models) are easily shareable, non-rivalrous intangible goods, making them inherently difficult to control; in contrast, AI computing hardware is tangible and produced using an extremely concentrated supply chain.”

Control of AI is needed, argue the authors of “Computing Power and the Governance of Artificial Intelligence” (reported earlier by Data Centre Dynamics), because “Increasingly powerful AI systems could profoundly shape society over the coming years; indeed, they are already affecting many areas of our lives, such as productivity, mobility, health, and education…. The risks and benefits of AI raise questions about the governance of AI: what are the norms, institutions, and policies that can influence the trajectory of AI for the better…?”

In its own way, the paper’s proposal echoes the export-control and sanctions decisions taken by Western governments restricting the sale of advanced chips and chip manufacturing equipment (such as ASML’s lithography systems) to countries deemed hostile to Western interests, such as China and Russia.

“Policy makers are already making significant decisions about compute,” the paper stated. “Governments have invested heavily in the domestic production of compute, imposed export controls on sales of computing hardware to competing countries, and subsidized compute access to those outside of big technology companies….”

Controlling AI compute can support effective AI governance in three key areas, the authors stated. It can increase regulatory visibility into AI capabilities and use; it can alter the direction of AI development by shaping the “allocation of resources toward safe and beneficial uses of AI”; and it can “enhance enforcement of prohibitions against reckless or malicious development or use.”

The authors allow that their proposal wouldn’t address all AI-related risks – including those posed by non-state actors, such as terrorists and fraudsters: “…approaches beyond compute governance are likely needed to address small-scale uses of compute that could pose major risks, like specialized AI applied to military use.”

They also say that compute governance would need to be implemented carefully to avoid unintended harms, such as privacy risks.

“Since compute governance is still in its infancy, policymakers have limited experience in managing its unintended consequences,” stated the authors. “To mitigate these risks, we recommend implementing key safeguards, such as focusing on governance of industrial-scale compute and incorporating privacy-preserving practices and technology.”

The authors of the paper are from the following institutions: Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the University of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto’s Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School, AI Governance Institute at the University of Oxford, Centre for the Study of Existential Risk at the University of Cambridge, the University of Cambridge, University of Montreal / Mila, Bennett Institute at the University of Cambridge – and Girish Sastry, a researcher at OpenAI.
