AI lobbying spikes 185% as calls for regulation surge – CNBC


Artificial intelligence (AI) lobbying surged in 2023: more than 450 organizations lobbied on AI, a 185% increase over the previous year. The surge came as calls for AI regulation grew and the Biden administration worked to establish rules for the technology. Companies such as TikTok owner ByteDance, Tesla, Spotify, and Samsung joined the lobbying efforts to shape how regulation might affect their businesses.

The organizations lobbying on AI span many industries, including Big Tech, startups, pharmaceuticals, insurance, finance, academia, and telecommunications. Their numbers have grown steadily since 2017 but exploded in 2023: more than 330 organizations that lobbied on AI in 2023 had not done so in 2022. These new entrants include chip companies, venture firms, biopharmaceutical companies, conglomerates, and AI training-data companies. Organizations lobbying on AI in 2023 also lobbied on other issues, collectively spending more than $957 million on federal lobbying.

President Joe Biden's executive order on AI, issued in October, has drawn intense scrutiny and analysis from stakeholders. One key area of debate is AI fairness and the technology's impact on marginalized communities: the order has been seen as a meaningful step, though some civil society leaders believe it falls short of addressing the real-world harms caused by AI models. The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has been tasked with developing guidelines and standards for AI evaluation, and has been seeking public input on responsible AI standards, vulnerability testing, risk management, and reducing the risk of synthetic content.
