Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
It has been a tough week for OpenAI, as calls for generative AI regulation grow louder: Today, Italy’s data protection agency said it was blocking access to OpenAI’s popular ChatGPT chatbot and had opened a probe due to concerns about a suspected data privacy breach.
The agency said the restriction was temporary, until OpenAI complies with the EU’s General Data Protection Regulation (GDPR). A translation of the announcement said that “a data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service had been reported on 20 March.” It added that “no information is provided to users and data subjects whose data are collected by Open AI; more importantly, there appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
A week of calls for large-scale AI regulation
The announcement comes just a day after the Federal Trade Commission (FTC) received a complaint from the Center for AI and Digital Policy (CAIDP), which called for an investigation of OpenAI and its product GPT-4. The complaint argued that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” but claims that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.”
And on Wednesday, an open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4 highlighted the complex discourse and fast-growing, fierce debate around AI’s various risks, both short-term and long-term.
Critics of the letter — which was signed by Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and other AI experts, researchers and industry leaders — say it fosters unhelpful alarm around hypothetical dangers, leading to misinformation and disinformation about actual, real-world concerns. Others pointed out the unrealistic nature of a “pause” and said the letter did not address current efforts towards global AI regulation and legislation.
Questions about how the GDPR applies to ChatGPT
The EU is currently working on developing a proposed Artificial Intelligence Act. According to Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s Cybersecurity, Privacy and Artificial Intelligence Practice Group, the EU Act would be a “risk-based regime to address the highest-risk outcomes of artificial intelligence.”
However, the EU AI Act won’t be fully baked or take effect for some time, so some are turning to the GDPR, which was enacted in 2018, for regulatory authority on issues related to ChatGPT. In fact, according to Infosecurity, some experts are questioning “the very existence of OpenAI’s chatbot for privacy reasons.”
Infosecurity quoted Alexander Hanff, member of the European Data Protection Board’s (EDPB) support pool of experts, who said that “If OpenAI obtained its training data through trawling the internet, it’s unlawful.”
“Just because something is online doesn’t mean it’s legal to take it,” he added. “Scraping billions or trillions of data points from sites with terms and conditions which, in themselves, said that the data couldn’t be scraped by a third party, is a breach of the contract. Then, you also need to consider the rights of individuals to have their data protected under the EU’s GDPR, ePrivacy directive and Charter of Fundamental Rights.”