With the release of the generative AI chatbot ChatGPT, AI dominated technological discussions during 2023. As a result, data privacy has become extremely important for everyone: from social media users and website visitors to businesses, app developers, privacy professionals, and regulators, everyone has a stake in it.
With all this attention, the field of data privacy is set to evolve enormously, not just in terms of legislation, but also in awareness, breach management, and best practices.
Here are the top data privacy trends you need to understand in 2024.
Generative AI Governance
AI sits at the top of the data privacy trends list for 2024. With GPT-4, Bard, and other generative AI models now widely available, AI technology is going to advance exponentially over the next couple of years. It will affect everyone: not only social media and website users, but also developers, researchers, and regulators.
AI is already used by developers, and its usage will increase enormously. However, if developers don’t use the technology safely and responsibly, it could lead to data breaches.
Forrester's 2024 Predictions for Cybersecurity report forecasts that in 2024 at least three data breaches will be publicly blamed on security failings in AI-generated code.
AI offers many advantages, but it can also be misused, including in ethically dubious ways. For example, AI applications can be used to scrape biometric information from photos and videos on the web.
Another likely problem is the accidental exposure of private information. People share personal details in chats with ChatGPT, and if an AI model is trained on personal information, that information could surface in its outputs to other users. Privacy regulators may then require the personal information to be deleted from the model. Because there is currently no way to remove just part of a model's learned knowledge, the model itself may have to be destroyed, at great expense to the organization that built it.
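One practical safeguard is to scrub obvious personal identifiers from prompts before they ever reach a model. The sketch below is a minimal, rule-based illustration of that idea; the `redact` helper and its patterns are hypothetical, and production systems rely on dedicated PII-detection tools rather than two regular expressions:

```python
import re

# Hypothetical patterns for illustration only; real deployments
# use dedicated PII-detection tooling with far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace e-mail addresses and phone numbers with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact me at jane.doe@example.com or +1 555-123-4567."
print(redact(prompt))  # Contact me at [EMAIL] or [PHONE].
```

Scrubbing on the client side means the sensitive values never enter the model's training or logging pipeline in the first place, which is far cheaper than deleting them afterwards.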
New Laws Regulating AI
In 2024, there will be a surge in global AI legislation as countries try to adapt to AI's evolving landscape.
On December 9, 2023, the European Parliament and the European Council reached a provisional agreement on the EU AI Act, which is expected to become EU law in 2024. Digital regulation more broadly will also continue to evolve: the Digital Markets Act and the Digital Services Act became applicable in 2023, regulating a variety of tech services, including search engines, cloud services, social networks, video-sharing platforms, online advertising networks, and other services, products, and platforms owned by large digital companies. In addition, Article 22 of the GDPR grants individuals the right not to be subject to purely automated decisions.
In the USA, the AI Executive Order establishes obligations to ensure safe and ethical AI use. In addition, the U.S. Department of Homeland Security and the UK's National Cyber Security Centre, along with 21 other international agencies, released the Guidelines for Secure AI System Development.
Privacy-Enhancing Computation Techniques
Privacy-enhancing technologies (PETs) have been known for some time but have not been widely used. With the recent surge in AI, they will become extremely important in 2024. The PET market is expected to reach $25.8 billion by 2033.
In 2024, the following PET technologies are expected to develop and grow:
- Federated learning: an AI technique where individual nodes host a machine-learning model to produce separate outcomes. These outcomes are shared with a centralized AI cluster, but the input training data is not.
- Differential privacy: a mathematical framework that limits how much any individual's data can influence a model's output, typically by adding calibrated statistical noise, so that personal information cannot leak from the model.
- Homomorphic encryption: a method that enables complex mathematical operations to be performed on encrypted data without compromising privacy.
- Secure multiparty computation: a method that enables multiple parties to compute a function without sharing individual data.
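To make the last item concrete, here is a minimal sketch of additive secret sharing, one common building block of secure multiparty computation. Each party splits its private value into random shares; summing everyone's shares reconstructs the total without any single party revealing its own value. The helper names and the three-party salary example are illustrative assumptions, not part of any specific PET product:

```python
import random

PRIME = 2**61 - 1  # field modulus for share arithmetic

def share(value, n_parties):
    """Split a private value into n random additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)  # shares sum back to value
    return shares

def secure_sum(private_values):
    """Each party shares its value; parties only ever see random shares."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Party i receives the i-th share from every party and sums them locally.
    partial_sums = [sum(all_shares[p][i] for p in range(n)) % PRIME
                    for i in range(n)]
    # Combining the partial sums reveals only the total.
    return sum(partial_sums) % PRIME

salaries = [52000, 61000, 48000]
print(secure_sum(salaries))  # 161000: the total is revealed, individual salaries are not
```

Each individual share is a uniformly random number, so no party learns anything about another party's input; only the combined result is disclosed.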
The “Pay or Okay” Approach
In 2023, Meta offered EU users a choice: use Facebook and Instagram for free in exchange for their data, which can be used for targeted advertising or sold to third parties, or pay a monthly subscription fee and see no ads. This became known as the "Pay or Okay" approach.
However, the approach is controversial: in one survey, only 3% of respondents said they would consent to sharing personal data when given a genuinely free choice, yet 99.9% consent when the alternative is paying a fee. Such consent is arguably invalid, since the GDPR requires consent to be "freely given," which is not the case here. The "pay or okay" model is currently being evaluated in European courts, and we will learn whether it is legal over the course of 2024.
If the "pay or okay" model is found to violate the GDPR or other privacy laws, Meta could face heavy fines, since the approach affects millions of people. If not, other social media platforms could follow suit; TikTok is already experimenting with a similar model.
In practice, if every social media platform adopts this model, it could become very expensive, or simply impossible, to keep personal information to yourself without sharing it with social media businesses.
Automotive Industry Privacy Restart
A recent Mozilla Foundation study revealed significant privacy concerns in the automotive industry, including the excessive collection, sharing, and selling of personal information by car manufacturers; the study called it the worst privacy situation it has detected so far. Car manufacturers extensively track consumers using various technologies, such as direct trackers, cameras, microphones, and sensors. The study calls for urgent investigation of and stronger controls over privacy rights for drivers, and the outcome will be closely monitored by data protection authorities and car owners alike.
In 2024, the outcome of this investigation will have a huge impact on privacy in the automotive industry.
Applying AI to Mitigate Privacy Risk
A 2023 report by IBM revealed that the average time to detect and contain a data breach is 277 days, far too long, given how much harm attackers can do with breached data in that window.
In 2024, privacy risks will continue to grow, and AI-related risks will become more complex. But generative AI also offers an opportunity: it can help remedy potential data privacy risks by analyzing large datasets and detecting patterns and trends across them. AI can also help organizations adapt to emerging regulations, making it a valuable tool for effectively identifying and mitigating privacy risks.
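As a toy illustration of pattern detection over large datasets, the sketch below flags days whose record-access volume deviates sharply from the norm, using a simple z-score. Real breach-detection systems use far richer ML models; the function name, threshold, and sample data here are assumptions for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(daily_accesses, threshold=2.0):
    """Flag days whose access volume deviates sharply from the mean --
    a crude, rule-based stand-in for ML-driven pattern detection."""
    mu, sigma = mean(daily_accesses), stdev(daily_accesses)
    return [day for day, count in enumerate(daily_accesses)
            if sigma and abs(count - mu) / sigma > threshold]

# Day 4 shows a spike that could indicate bulk data exfiltration.
accesses = [120, 115, 130, 118, 9500, 125, 122]
print(flag_anomalies(accesses))  # [4]
```

Catching a spike like this on the day it happens, rather than 277 days later, is exactly the gap automated analysis is meant to close.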
Greater Focus on Children’s Safety
The California Age-Appropriate Design Code Act (CAADCA) will take effect on July 1, 2024. It compels online platforms to proactively assess the privacy and protection of children under 18 years of age.
In 2024, Maryland and Minnesota are set to reintroduce legislation similar to California's CAADCA. Florida and Utah are working on the enhancement of child safety on social media platforms.
Frequently Asked Questions
What are the top data privacy trends in 2024?
The most important data privacy trends expected in 2024 are generative AI governance; new laws regulating AI, such as the EU AI Act and the Guidelines for Secure AI System Development; privacy-enhancing computation techniques; the legal evaluation of the "Pay or Okay" approach; an automotive industry privacy restart; applying AI to mitigate privacy risk; and a greater focus on children's safety.
Will automotive industry privacy change in 2024?
Most likely, yes. Following the Mozilla Foundation study that exposed excessive collection, sharing, and selling of personal information by car manufacturers, the resulting investigation is expected to have a major impact on privacy practices in the automotive industry.
What new laws regulating AI are expected in 2024?
The EU AI Act, provisionally agreed in December 2023, is expected to become EU law in 2024. In the USA, the AI Executive Order establishes obligations for safe and ethical AI use, and the Guidelines for Secure AI System Development, released by the U.S. Department of Homeland Security, the UK's National Cyber Security Centre, and 21 other agencies, provide security guidance for AI developers.