
AI lobbying spikes nearly 200% as calls for regulation surge

The hundreds of organizations that lobbied on AI last year ran the gamut from Big Tech and AI startups to pharmaceuticals, insurance, finance, academia, telecommunications and more. Until 2017, the number of organizations that reported AI lobbying stayed in the single digits, per the analysis. The practice grew steadily in the years since, then exploded in 2023.

More than 330 organizations that lobbied on AI last year had not done the same in 2022. The data showed a range of industries as new entrants to AI lobbying: chip companies like AMD and TSMC, venture firms like Andreessen Horowitz, biopharmaceutical companies like AstraZeneca, conglomerates like Disney and AI training data companies like Appen.

Organizations that reported lobbying on AI issues last year also typically lobby the government on a range of other issues. In total, they reported spending more than $957 million lobbying the federal government in 2023 on issues including, but not limited to, AI, according to OpenSecrets.

In October, President Biden issued an executive order on AI, the U.S. government’s first action of its kind, requiring new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market. The order tasked the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) with developing guidelines for evaluating certain AI models, including testing environments for them, and with taking partial charge of developing “consensus-based standards” for AI.

After the executive order’s unveiling, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document and making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action.

One core debate has centered on the question of AI fairness. Many civil society leaders told CNBC in November that the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they said it’s a meaningful step along the path.

Since December, NIST has been collecting public comments from businesses and individuals about how best to shape these rules, with plans to end the public comment period after Friday, February 2. In its Request for Information, the Institute specifically asked respondents to weigh in on developing responsible AI standards, AI red-teaming, managing the risks of generative AI and helping to reduce the risk of “synthetic content” (i.e., misinformation and deepfakes).

CNBC’s Mary Catherine Wellons and Megan Cassella contributed reporting.
