On 31 March 2023, the Italian Data Protection Authority (“DPA”) imposed a temporary ban on OpenAI’s ChatGPT platform and launched an investigation into its compliance with the General Data Protection Regulation (“GDPR”) following a data breach. The breach took place on 20 March, when over a critical nine-hour period the chat history and payment-related information of ChatGPT Plus users (the subscription version of the chatbot) were exposed and visible to other users. The Italian DPA’s subsequent investigation highlighted several issues with ChatGPT’s system, finding that:
- There is no legal basis for the “massive collection and processing of personal data in order to train the algorithms on which the platform relies”;
- ChatGPT occasionally processes and generates inaccurate personal information about its data subjects; and
- ChatGPT has inadequate age verification mechanisms to determine whether users meet minimum age requirements.
The first issue concerns the legal basis on which ChatGPT relies for the collection and processing of such large quantities of data. Under the GDPR, a legal basis must be established either by (i) obtaining the data subject’s consent to process their data, or (ii) relying on one of the other lawful bases set out in Article 6. The Italian DPA determined that OpenAI could not establish a legal basis through either of these avenues, having merely claimed that such collection and processing is necessary for the purpose of continuously training its algorithms.
The investigation further raised concerns that ChatGPT is using and generating inaccurate information. The Italian DPA acknowledged concerns surrounding the ability of AI to generate factually inaccurate information about real people, to the clear detriment of its users. This constitutes another example of non-compliance with the GDPR, as Article 5(1)(d) stipulates that personal data shall be “accurate and, where necessary, kept up to date”.
Next steps for ChatGPT and AI systems
In the absence of a legal entity in the EU, OpenAI’s European representative now has 20 days to report back to the Italian Supervisory Authority with the measures implemented to rectify these issues. On 5 April, OpenAI began discussions with the Italian DPA, expressing optimism that the ban would be lifted, although an official outcome has yet to be announced. If the Italian DPA determines that OpenAI has failed to implement adequate corrective measures, the platform may face fines of up to €20 million or 4% of its total worldwide annual turnover, whichever is higher.
The increasing threats that AI continues to pose are likely to push the European Parliament further towards getting its highly anticipated EU Artificial Intelligence Act over the line, in an attempt to regulate such a data-sensitive industry. However, whilst developers should pre-empt landmark new AI laws and regulations, the ChatGPT ban should serve as a fundamental reminder that the existing GDPR must not be an afterthought, given that AI development will inevitably involve the processing of large pools of data.
Meanwhile in the UK, the Information Commissioner’s Office released a statement stressing that “there really can be no excuse for getting the privacy implications of generative AI wrong”. The statement is supplemented with additional guidance for AI platform development, to ensure that developers maintain ongoing adherence to, and an adequate legal basis for, their data processing activities within the scope of the UK GDPR.
Find the Italian DPA’s temporary ban here.
The material in this article is only for general review of the topics covered and does not constitute legal advice. No legal or business decision should be based on its content.
This article is written in English language. Preiskel & Co LLP is not responsible for any translation of all or part of its content into any language.