ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI


OpenAI has been told it’s suspected of violating European Union privacy law, following a multi-month investigation of its AI chatbot, ChatGPT, by Italy’s data protection authority.

Details of the Italian authority’s draft findings haven’t been disclosed. But the Garante said today OpenAI has been notified and given 30 days to respond with a defence against the allegations.

Confirmed breaches of the pan-EU regime can attract fines of up to €20 million or up to 4% of global annual turnover, whichever is higher. More uncomfortably for an AI giant like OpenAI, data protection authorities (DPAs) can issue orders requiring changes to how data is processed in order to bring an end to confirmed violations. So it could be forced to change how it operates, or pull its service out of EU Member States where privacy authorities seek to impose changes it doesn’t like.
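The penalty cap works as the higher of two figures: a flat €20 million, or 4% of worldwide annual turnover. As a quick illustration (a sketch of the arithmetic, not legal advice), the upper bound can be computed like this:

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine for the most serious
    infringements: the greater of a flat 20 million euro cap and 4% of
    worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a company turning over 2 billion euros, the 4% prong (80M euros)
# exceeds the flat cap, so it sets the maximum.
print(max_gdpr_fine(2_000_000_000))   # 80000000.0
# For a smaller company (100M euros turnover), the flat cap dominates.
print(max_gdpr_fine(100_000_000))     # 20000000.0
```

This is why the percentage prong, not the headline €20 million figure, is what matters for a company of OpenAI’s scale.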

OpenAI was contacted for a response to the Garante’s notification of violation. We’ll update this report if they send a statement.

Update: OpenAI said:

We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people. We plan to continue to work constructively with the Garante.

AI model training lawfulness in the frame

The Italian authority raised concerns about OpenAI’s compliance with the bloc’s General Data Protection Regulation (GDPR) last year — when it ordered a temporary ban on ChatGPT’s local data processing which led to the AI chatbot being temporarily suspended in the market.

The Garante’s March 30 provision to OpenAI, aka a “register of measures”, highlighted both the lack of a suitable legal basis for the collection and processing of personal data to train the algorithms underlying ChatGPT, and the AI tool’s tendency to ‘hallucinate’ (i.e., its potential to produce inaccurate information about individuals) as among its concerns at that point. It also flagged child safety as a problem.

In all, the authority said that it suspected ChatGPT to be breaching Articles 5, 6, 8, 13 and 25 of the GDPR.

Despite identifying this laundry list of suspected violations, OpenAI was able to resume ChatGPT service in Italy relatively quickly last year, after taking steps to address some of the issues raised by the DPA. However, the Italian authority said it would continue to investigate the suspected violations. It has now arrived at the preliminary conclusion that the tool is breaking EU law.

While the Italian authority hasn’t yet said which of the previously suspected ChatGPT breaches it’s confirmed at this stage, the legal basis OpenAI claims for processing personal data to train its AI models looks like a particular crux.

This is because ChatGPT was developed using masses of data scraped off the public Internet — information which includes the personal data of individuals. And the problem OpenAI faces in the European Union is that processing EU people’s data requires it to have a valid legal basis.

The GDPR lists six possible legal bases — most of which are just not relevant in its context. Last April, OpenAI was told by the Garante to remove references to “performance of a contract” for ChatGPT model training — leaving it with just two possibilities: Consent or legitimate interests.

Given the AI giant has never sought to obtain the consent of the countless millions (or even billions) of web users whose information it has ingested and processed for AI model building, any attempt to claim it had Europeans’ permission for the processing would seem doomed to fail. And when OpenAI revised its documentation after the Garante’s intervention last year, it appeared to be seeking to rely on a claim of legitimate interest. However, this legal basis still requires a data processor to allow data subjects to raise an objection — and have processing of their info stop.

How OpenAI could do this in the context of its AI chatbot is an open question. (It might, in theory, require it to withdraw and destroy illegally trained models and retrain new models without the objecting individual’s data in the training pool — but, assuming it could even identify all the unlawfully processed data on a per individual basis, it would need to do that for the data of each and every objecting EU person who told it to stop… Which, er, sounds expensive.)
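The per-individual removal described above can be sketched in code. Everything here is hypothetical (the record shape, the `subject_id` field, the function names are all invented for illustration): the sketch only shows the easy part, filtering a corpus once objecting individuals are known; the hard part, as noted, is attributing scraped web text to specific people in the first place.

```python
# Hypothetical sketch: excluding objecting data subjects' records from a
# training corpus before a retrain. Assumes each record already carries a
# resolvable subject identifier -- an assumption that rarely holds for
# text scraped off the public web.
def filter_corpus(records, objecting_ids):
    """Return only the records not attributed to an objecting individual."""
    objecting = set(objecting_ids)  # set lookup keeps this O(n) overall
    return [r for r in records if r.get("subject_id") not in objecting]

corpus = [
    {"subject_id": "a", "text": "..."},
    {"subject_id": "b", "text": "..."},
]
clean = filter_corpus(corpus, ["a"])  # only the record for "b" survives
```

Even granting those assumptions, the model itself would still have been trained on the removed data, which is why the scenario in the article ends in withdrawing and retraining models rather than just filtering a dataset.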

Beyond that thorny issue, there is the wider question of whether the Garante will finally conclude legitimate interests is even a valid legal basis in this context.

Frankly, that looks unlikely, because legitimate interest is not a free-for-all. It requires data processors to balance their own interests against the rights and freedoms of the individuals whose data is being processed, and to consider things like whether individuals would have expected this use of their data and the potential for it to cause them unjustified harm. (If they would not have expected it, and there are risks of such harm, legitimate interest will not be found to be a valid legal basis.)

The processing must also be necessary, with no other, less intrusive way for the data processor to achieve their end.

Notably, the EU’s top court has previously found legitimate interests to be an inappropriate basis for Meta to carry out tracking and profiling of individuals to run its behavioral advertising business on its social networks. So there is a big question mark over the notion of another type of AI giant seeking to justify processing people’s data at vast scale to build a commercial generative AI business — especially when the tools in question generate all sorts of novel risks for named individuals (from disinformation and defamation to identity theft and fraud, to name a few).

A spokesperson for the Garante confirmed that the legal basis for processing people’s data for model training remains in the mix of what it’s suspected ChatGPT of violating. But they did not confirm exactly which one (or more) article(s) it suspects OpenAI of breaching at this point.

The authority’s announcement today is not the final word, either, as it will wait to receive OpenAI’s response before taking a final decision.

Here’s the Garante’s statement (which we’ve translated from Italian using AI):

[Italian Data Protection Authority] has notified OpenAI, the company that runs the ChatGPT artificial intelligence platform, of its notice of objection for violating data protection regulations.

Following the provisional restriction of processing order, adopted by the Garante against the company on March 30, and at the outcome of the preliminary investigation carried out, the Authority considered that the elements acquired may constitute one or more unlawful acts with respect to the provisions of the EU Regulation.

OpenAI will have 30 days to communicate its defence briefs on the alleged violations.

In defining the proceedings, the Garante will take into account the ongoing work of the special task force set up by the European Data Protection Board (EDPB), which brings together the EU’s data protection authorities.

OpenAI is also facing scrutiny over ChatGPT’s GDPR compliance in Poland, following a complaint last summer which focuses on an instance of the tool producing inaccurate information about a person and OpenAI’s response to that complainant. That separate GDPR probe remains ongoing.

OpenAI, meanwhile, has responded to rising regulatory risk across the EU by seeking to establish a physical base in Ireland; and announcing, in January, that this Irish entity would be the service provider for EU users’ data going forward.

Its hope with these moves is to gain so-called “main establishment” status in Ireland and switch to having assessment of its GDPR compliance led by Ireland’s Data Protection Commission, via the regulation’s one-stop-shop mechanism — rather than (as now) its business being potentially subject to DPA oversight from anywhere in the Union that its tools have local users.

However, OpenAI has yet to obtain this status, so ChatGPT could still face probes by DPAs elsewhere in the EU. And even if it gets the status, the Italian probe and enforcement will continue, as the data processing in question predates the change to its processing structure.

The bloc’s data protection authorities have sought to coordinate on their oversight of ChatGPT by setting up a taskforce to consider how the GDPR applies to the chatbot, via the European Data Protection Board, as the Garante’s statement notes. That (ongoing) effort may, ultimately, produce more harmonized outcomes across discrete ChatGPT GDPR investigations — such as those in Italy and Poland.

However authorities remain independent and competent to issue decisions in their own markets. So, equally, there are no guarantees any of the current ChatGPT probes will arrive at the same conclusions.

