OpenAI's Future in the EU at Risk? CEO Discusses Concerns with AI Regulations
OpenAI CEO Sam Altman has expressed concerns about the EU AI Act and said the company may withdraw its services from the European market if it cannot comply with the regulations. Altman made the remarks after a talk in London, where he stressed that the details of the Act will matter. As reported by the Financial Times, Altman said, "We will try to comply, but if we can't comply we will cease operating."
Altman highlighted the potential classification of systems like ChatGPT as "high risk" under the EU legislation, which would require OpenAI to meet additional safety and transparency requirements. He acknowledged that there are technical limits to what is achievable in meeting these requirements, stating, "Either we'll be able to solve those requirements or not."
Compliance with the EU AI Act poses technical challenges and business threats to OpenAI. The Act includes provisions that necessitate disclosure of information about the design and training of foundation models. OpenAI, which previously shared such details, now considers keeping training methods and data sources secret as necessary to protect its work from being copied by competitors.
Furthermore, requiring OpenAI to identify its use of copyrighted data could expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E rely on large amounts of data scraped from the web, some of which is copyright protected. Disclosing these data sources leaves companies vulnerable to legal challenges, as evidenced by the lawsuit Getty Images filed against OpenAI rival Stability AI for using copyrighted data to train its AI image generator.
Altman's recent comments shed light on OpenAI's stance on regulation. While he has advocated, in remarks to US politicians, for regulation aimed mainly at future, more advanced AI systems, the EU AI Act focuses on the current capabilities of AI software.