
AI Regulation in 2024: A Global Roundup

Nygina Mills

If 2022 was the year that the general public woke up to the promise and peril of artificial intelligence, 2023 kicked off what’s likely to be a multiyear, international effort to regulate AI.


Proposed rules and laws governing artificial intelligence vary considerably from one jurisdiction to the next. What follows is a general, high-level overview of what’s been proposed or enacted globally.


AI Regulation & Legal Disputes in North America


The United States is the most active front for AI-related intellectual property disputes. For companies and IP owners based in the United States, it’s also the most consequential.


U.S.-based content creators have filed suit against multiple AI firms, with notable cases pitting Universal Music Group artists against Anthropic, various writers (including John Grisham and George R. R. Martin) against OpenAI, and numerous programmers against GitHub, Microsoft, and OpenAI.


These cases could go all the way to the U.S. Supreme Court. Fairly or not, their outcomes could shape external perceptions of the United States’ regulatory posture toward AI. Rulings that favor creators could encourage AI companies to base their operations in other jurisdictions.


AI Regulation in Europe


The European Union is likely to pass the world’s first comprehensive AI regulation at the national or supranational level. Assuming it becomes law as expected this year, the EU AI Act will assign AI applications to one of four risk categories:


  • Unacceptable risk: The EU is likely to ban AI systems that fall into this category. Unacceptable uses include social scoring, real-time remote biometric identification, and cognitive behavioral manipulation.

  • High risk: These systems won’t be banned outright, but EU regulators will likely scrutinize them closely both before and after they reach the market. Examples include AI systems integrated into products already covered by EU product safety legislation, such as medical devices and cars, and systems used in sensitive domains such as employment and critical infrastructure.

  • Generative and general-purpose AI: These systems, including consumer-grade applications like ChatGPT, must comply with transparency requirements that include disclosure of AI-generated content, public summaries of training data, and guardrails to prevent creation of illegal content.

  • Limited risk: These systems will be expected to abide by transparency requirements, such that users know when they’re interacting with content generated or manipulated by AI, but won’t be regulated as strictly as those in the other three risk categories.


AI Regulation Elsewhere 


The most impactful AI regulatory activity outside North America and Europe is occurring in Asia, particularly in China.


China’s Provisions on the Administration of Deep Synthesis Internet Information Services explicitly forbid the use of AI for misinformation, disinformation, and anti-state activity. These provisions also assign “primary responsibility” for AI systems’ activities, security, and related matters to “deep synthesis service providers” — that is, those managing or licensing the AI models. Their responsibilities include drafting and disclosing management agreements and verifying the identities of system users to prevent abuse and illegal activity.


These AI regulations and legal disputes are only the beginning of what’s sure to be a long, uncertain, and globally fragmented process. Whether this process stays comfortably ahead of rapidly advancing AI capabilities is an open question.



Nygina Mills writes these articles in her personal capacity; the views expressed are her own and do not represent any organization.

