OpenAI CEO Sam Altman has recently toured capitals worldwide advocating for global AI regulation. Behind the scenes, however, OpenAI has lobbied to dilute significant aspects of the European Union's comprehensive AI legislation, known as the AI Act. Documents obtained through freedom of information requests reveal the company's engagement with EU officials on this front.
OpenAI proposed amendments to the AI Act that were later incorporated into the final text of the legislation, which the European Parliament approved on June 14. The proposed changes sought to prevent OpenAI's general-purpose AI systems, including GPT-3 (a predecessor of ChatGPT) and the image generator Dall-E 2, from being classified as "high risk," a designation that would subject them to stringent legal requirements such as transparency, traceability, and human oversight. OpenAI's stance aligned with that of Microsoft and Google, both of which have also lobbied EU officials to ease the regulatory burden on major AI providers. These companies argued that the strictest requirements should apply to those explicitly deploying AI in high-risk applications, rather than to the larger companies developing general-purpose AI systems.
OpenAI’s lobbying efforts in Europe were not previously reported, although Altman has recently expressed more public criticism of the legislation. While Altman initially suggested that OpenAI might consider ceasing operations in Europe if compliance with the regulation was not feasible, he later clarified that the company had no plans to leave and intended to cooperate with the EU.
OpenAI's lobbying efforts appear to have been successful: the final version of the AI Act approved by EU lawmakers does not treat general-purpose AI systems as inherently high risk. Instead, the law sets out requirements for "foundation models," powerful AI systems trained on extensive data sets, a category whose introduction OpenAI supported. Earlier in the process, however, OpenAI objected to an amendment that would have categorized generative AI systems like ChatGPT and Dall-E as high risk if they produced text or images that could be mistaken for human-generated, authentic content. OpenAI argued that sufficient labeling and disclosure measures could address concerns about AI-generated content.
The OpenAI White Paper, shared with European officials in September 2022, described the company's efforts to mitigate risks associated with its general-purpose AI systems, emphasizing its safety mechanisms, policies, and tools to prevent misuse. The document implied that these measures should exempt its systems from being classified as high risk. Experts and critics, however, expressed skepticism about OpenAI's approach, perceiving it as a request for self-regulation.
OpenAI also advocated for amendments that would allow AI providers to swiftly update their systems for safety reasons without undergoing lengthy assessments by EU officials. Additionally, the company sought carve-outs for certain uses of generative AI in education and employment, arguing against the blanket categorization of those sectors as high risk. Some of these concerns were addressed in the final version of the Act, which incorporates exemptions in line with the company's wishes.
OpenAI has maintained engagement with EU officials involved in the AI Act. In a meeting held on March 31, 2023, the company demonstrated the safety features of ChatGPT and emphasized the importance of learning from operational use. Its representatives also noted that the model's instructions could be adjusted to prevent it from sharing potentially dangerous information, although researchers have demonstrated that ChatGPT's safety filters can be bypassed by certain exploits under specific circumstances.