The General Data Protection Regulation (GDPR) came into effect in 2018 and was the first comprehensive data protection law of its kind, inspiring similar laws around the world. It has played a vital role in safeguarding the personal data of website users and regulating businesses.
However, with the release of ChatGPT in late 2022 and other AI technologies advancing rapidly, questions arise about the effectiveness of the GDPR. Is it still relevant, or is it already becoming outdated? Can organizations use AI-based technologies without violating data protection laws? What do data privacy trends suggest?
This article examines data protection and privacy challenges faced by organizations wishing to employ AI in their everyday business and how they can remain compliant with the GDPR.
How Do AI-Based Systems Work?
Unlike traditional software that follows predefined instructions, Artificial Intelligence (AI) tools learn from training data and can produce outputs that were never explicitly programmed. AI tools operate by continuously learning from data, adapting their responses, and offering new solutions. AI systems process vast amounts of data, identifying patterns and making decisions.
AI-based systems are used for customer support, content generation, and virtual assistants. Businesses can use AI-powered systems to automatically answer customer queries, provide information about their products, and generate specific content such as emails or blog posts.
In principle, the use of AI processing tools is not new: it was already used by large tech companies like Google, Amazon, and Meta, as well as by insurance companies and recruitment agencies. What is new is that, with the introduction of AI models by OpenAI, AI tools have become hugely popular and accessible to businesses of all sizes and even individuals. These tools can be fed input from social media accounts, including text, photos, and videos, and can collect a vast amount of personal data about individuals.
What’s the Problem with AI and Compliance with Privacy Laws?
The problem of using AI algorithms while complying with privacy laws can be illustrated through compliance issues with the GDPR; similar problems arise under other privacy laws as well.
The GDPR sets out the following principles for data collection and processing:
- Purpose limitation. Organizations must only gather personal data for a specified purpose. They must define that purpose and only keep the data for as long as they need it to fulfill that purpose.
The problem: AI algorithms require large amounts of data for training, which is often collected through a process called data scraping (or web scraping). This involves the unsolicited collection of data from various sources, such as social media accounts, business profiles, dating sites, and other personal data found on the internet.
- Transparency. Organizations must process personal data in a transparent manner and be able to explain to data subjects how their data is used.
The problem: AI algorithms lack transparency. It can be difficult or even impossible for organizations to understand how AI algorithms make decisions, let alone explain those decisions to customers.
- Data minimization. According to the GDPR, organizations should only collect the minimal amount of data needed for their business practices.
The problem: AI algorithms usually require as much data as possible for training, so the data minimization principle conflicts with how AI systems work from the very beginning.
- Storage limitation. To be compliant with the GDPR, personal data must not be kept longer than is necessary.
The problem: Many AI models require just the opposite: the more data is collected and the longer it is kept, the better the results the AI algorithms produce.
- Accuracy. Personal data collected should be accurate and up to date. Organizations must regularly review the information they hold about individuals and delete or amend inaccurate information.
The problem: Businesses can’t fully control what personal data an AI system collects, or whether it is accurate and up to date.
- Confidentiality and integrity. Businesses must safeguard personal data collected from internal threats, like accidental damage or loss, unauthorized use, and from external threats, like data leaks or cyber attacks.
The problem: Businesses can’t fully control what data an AI system collects or what data it delivers as output, which makes it difficult to guarantee confidentiality and integrity.
- Accountability. Organizations should take responsibility for the data they collect. This includes establishing a personal data inventory, ensuring proper user consent is obtained, performing Data Protection Impact Assessments, and other measures.
The problem: This principle can also be challenged; for example, it may be difficult to establish a personal data inventory when an AI model’s training data is opaque.
- Legal basis. Organizations should ensure that there is a valid legal basis for the data processing.
The problem: With AI-based technologies, legitimate interest may not be a viable basis: the risks associated with AI are often high, so the controller’s interests are unlikely to outweigh those of the data subjects.
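The data minimization and purpose limitation principles above can be illustrated with a minimal code sketch: keep a whitelist of the fields each stated purpose actually requires, and drop everything else before a record ever reaches an AI pipeline. The purposes and field names below are hypothetical examples, not a reference implementation.

```python
# Hypothetical sketch of GDPR data minimization: whitelist only the
# fields strictly necessary for a declared processing purpose.
ALLOWED_FIELDS = {
    "customer_support": {"ticket_id", "message", "language"},
    "order_shipping": {"order_id", "address", "postcode"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record containing only the fields
    permitted for the given processing purpose."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "ticket_id": "T-1001",
    "message": "My order has not arrived.",
    "language": "en",
    "email": "jane@example.com",   # not needed for support triage
    "birth_date": "1990-01-01",    # not needed at all
}

# Fields outside the whitelist (email, birth_date) are dropped
# before the record is handed to any AI tool.
print(minimize(record, "customer_support"))
```

An unknown purpose yields an empty record, which forces processing purposes to be declared explicitly rather than defaulting to "collect everything".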
AI under Regulators’ Focus
Because AI tools, especially ChatGPT, became popular so quickly, regulatory mechanisms are lagging behind. At the moment, no AI-specific law is in force in Europe, and there is no federal one in the USA; proposed laws are going through various legislative processes.
In the meantime, each EU country has its own approach to the use of AI for personal data management.
Italy became the first EU country to ban ChatGPT: on March 31, 2023, Italy’s data protection authority, the Garante, blocked the service over data security concerns. The regulator cited the lack of information provided to users, the absence of a legal basis for the extensive collection and processing of personal data to train the platform’s algorithms, and the lack of age restrictions on the use of ChatGPT.
ChatGPT is now available again in Italy, but OpenAI had to fulfill various requirements of the Garante:
- Legal basis. OpenAI can no longer rely on Art. 6(1)(b) GDPR, i.e., the performance of a contract, as a legal basis; it must instead collect the data subject’s consent or rely on a legitimate interest.
- Data correction or deletion. OpenAI must implement mechanisms by which data subjects can request correction or deletion of their data, or object to the use of their data for training.
- Age verification system. OpenAI must implement an age verification system to prevent use of the tool by minors under the age of 13, and by users between the ages of 13 and 18 who do not have parental consent.
- Information campaign. OpenAI also has to conduct an information campaign through different channels (radio, television, Internet) to inform Italian users about the use of their personal data for training algorithms.
In April 2023, the German Data Protection Conference (DSK) established the AI Taskforce to undertake a coordinated data protection review of ChatGPT.
The German authorities contacted OpenAI about the data sources and algorithms behind its automated data processing. The deadline for answering has already been extended once, so the authorities now expect responses in 2024. Based on those answers, the AI Taskforce will conduct a data protection review of ChatGPT.
In Switzerland, the Federal Administration is currently evaluating approaches to AI regulation. This work is expected to be completed by the end of 2024, after which the Federal Council will decide on possible AI regulation. Switzerland currently takes a sectoral approach to technology-specific regulation.
The Federal Data Protection and Information Commissioner (FDPIC) has stated that, regardless of future regulation, companies operating AI-based tools must comply with the data protection laws already in force. The Federal Act on Data Protection (FADP) covers all types of technologies and is therefore directly applicable to AI systems.
The European Artificial Intelligence Act (EU AI Act) is a proposed European law that is currently going through the EU legislative process. If passed, it will be the first law on AI by a major regulator in the world.
The EU AI Act takes a strict approach to AI regulation and aims to ensure that AI systems do not endanger the personal data of individuals in the EU.
Unlike Europe, the United States is not likely to pass a national law regulating AI in the coming few years. However, the absence of a common national law does not mean there is no regulation.
First, sector-specific regulations are expected, especially in health care, financial services, child safety, and workforce management.
Second, new state-level privacy laws, also regulating the usage of AI, continue to emerge.
In January 2023, the National Institute of Standards and Technology issued the AI Risk Management Framework (AI RMF) to provide guidance for the usage of AI systems. The framework is voluntary and has no penalties for non-compliance.
The US AI Executive Order also establishes obligations to ensure the safe and ethical use of AI.
At the federal level, the proposed American Data Privacy and Protection Act (ADPPA) contains provisions on the accountability and fairness of AI algorithms. The bill would impose limits on personal information collection and use, requiring that personal data processing be only what is necessary and proportionate for providing products or services.
On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order calls on Congress to pass bipartisan data privacy legislation to protect all Americans, including prioritizing the use of privacy-preserving techniques and strengthening privacy guidance for federal agencies.
Checklist for Complying with Privacy Laws While Using AI
While privacy laws regulating the use of AI are still emerging, you may want to know how to use AI without violating current and upcoming data protection rules. Here are a few tips:
- Avoid processing personal data with AI. If possible, avoid processing personal data with AI and implement privacy-by-design practices.
- Minimize data processing. Use only the minimum amount of data needed to achieve your purpose. Do not process large amounts of personal data just because you can: it could lead to problems in the future, including penalties and the need to retrain AI systems without the personal data.
- Collect the data subject’s consent or establish a legitimate interest. You must know why you need to process personal data with AI tools, explain this to your customers, and get consent.
- Limit the sharing of personal data. If data processors process data with AI on your behalf, make sure that they process it lawfully and do not disclose it to other parties. Limit processing by third parties, and do not share personal data with advertising agencies to target your customers.
- Limit the data retention period. Limit the data retention period to the minimum and control how long the AI tools store the data.
- Do not transfer data to unsafe countries. The GDPR is strict about transferring personal data to countries that do not adequately protect it, so always store or process data in Europe or other safe countries. If you process data in the United States, make sure your data processors are certified under the EU-U.S. Data Privacy Framework.
- Conduct a data protection risk assessment. Risk assessments are required by both EU and US laws, and using AI to process personal data will often fall within their scope.
- Appoint a data protection officer. Some EU countries require organizations to appoint a data protection officer.
- Train your employees and contractors. Your employees, responsible for the management of individuals’ information, as well as your contractors must be familiar with the best data protection practices and regulations.
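As a practical illustration of the "avoid processing personal data with AI" tip above, one common pattern is to redact obvious identifiers from free text before it is sent to an external AI tool. The sketch below uses two deliberately simplified regular expressions for email addresses and phone numbers; these patterns are illustrative assumptions, not a complete PII detector, and a real deployment would need a proper detection solution.

```python
import re

# Hypothetical sketch: strip obvious personal identifiers from free
# text before it leaves your systems for a third-party AI service.
# The patterns are simplified examples, not a complete PII detector.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +41 79 123 45 67."
print(redact(prompt))  # identifiers are replaced with placeholders
```

Redacting before sending means the AI provider never receives the identifiers at all, which is easier to defend than trying to control what a third-party model stores or outputs afterwards.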
Frequently Asked Questions
Are there laws in Europe regulating the usage of AI?
Not yet. The European Artificial Intelligence Act (EU AI Act) is a proposed law currently going through the EU legislative process. In the meantime, the GDPR and national data protection laws apply to AI systems that process personal data. Get a CookieScript CMP to ensure compliance with privacy laws.
Does the USA regulate the usage of AI regarding personal data protection?
The United States does not have a national law regulating AI and is not likely to pass one in the coming few years. However, the absence of a common national law does not mean there is no regulation. First, sector-specific regulations are expected, especially in health care, financial services, child safety, and workforce management. Second, new state-level privacy laws that also regulate the use of AI continue to emerge. Get a CookieScript CMP to ensure compliance with privacy laws.
What’s the Problem with AI and Compliance with Privacy Laws?
AI algorithms conflict with several GDPR principles: they require vast amounts of data for training, which challenges purpose limitation, data minimization, and storage limitation; they lack transparency; and businesses can’t fully control what data is collected or delivered as output.
Is there a legal basis for AI systems to use personal data?
Organizations must ensure a valid legal basis for the data processing, such as the data subject’s consent. Legitimate interest can be hard to rely on, since the risks associated with AI are often high and unlikely to be outweighed by the controller’s interests.
How to comply with privacy laws while using AI?
To comply with privacy laws, collect the data subject’s consent or establish a legitimate interest, limit the sharing of personal data, be transparent with your users, limit the data retention period, do not transfer data to unsafe countries, conduct a data protection risk assessment, and train your employees and contractors. Get a CookieScript CMP to ensure compliance.