The UK’s Online Safety Act 2023 (OSA) is a comprehensive piece of legislation designed to regulate social media companies and search services and to increase protections for individuals online. It draws comparisons to the EU’s Digital Services Act, with both laws including provisions relating to safety and transparency—seeking to balance the need to protect people online with fundamental rights such as the right to freedom of expression and privacy. Importantly, it applies not just to digital service providers in the UK but to any service with links to the UK.
California’s Significant AI Laws Go into Effect
January 1, 2025, marked the start of a series of significant AI laws going into effect in California. California’s 18 new AI laws represent a significant step toward regulating this space, establishing requirements regarding deepfake technology, AI transparency, data privacy and use of AI in the health care arena. These laws reinforce the state’s desire to be a pioneer in this space.
In California’s AI Laws Are Here—Is Your Business Ready?, Jeewon K. Serrato, Christine Mastromonaco, Shruti Bhutani Arora, Andrew Caplan, Erin Choo, Mia Rendar, Leighton Watson, Anne M. Voigts, Shani Rivaux, Johnna Purcell and Dayo Feyisayo Ajanaku provide a detailed look at the enacted legislation, address compliance timelines and offer a guide for businesses as they navigate California’s evolving AI landscape.
EU AI Act: First Set of Requirements Go into Effect February 2, 2025
The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the Act), came into effect on February 2, 2025. Essentially, from this date, AI practices which present an unacceptable level of risk are prohibited and organizations are required to ensure an appropriate level of AI literacy among staff. For a comprehensive overview of the Act, see our earlier client alert here.
GDPR Enforcement: Lessons from Recent Data Privacy Penalties
Recent decisions by the French data protection authority (CNIL) have highlighted the importance of GDPR compliance, particularly in the areas of data retention, consent for processing sensitive personal data, and marketing practices. On October 10, 2024, CNIL fined two companies offering remote clairvoyance services a total of €400,000—€250,000 for Cosmospace and €150,000 for Telemaque—for breaches including excessive data retention, failure to obtain explicit consent for sensitive data processing, and non-compliance with marketing consent rules. These decisions serve as a reminder for businesses to evaluate their data protection policies to avoid costly penalties and maintain consumer trust.
CPPA Continues Rulemaking on AI, the New Delete Request and Opt-Out Platform (DROP), Cybersecurity Audits and Privacy Risk Assessments
The California Privacy Protection Agency (CPPA) has released the agenda for its upcoming public board meeting on October 4, 2024. This meeting is set to cover important regulatory and enforcement matters related to the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA).
California Legislature Passes Generative AI Training Data Transparency Bill (UPDATED)
On August 27, 2024, the California legislature passed Assembly Bill 2013 (AB 2013), a measure aimed at enhancing transparency in AI training and development. If signed into law by Governor Gavin Newsom, the bill would require developers of generative AI systems or services made available to Californians to disclose significant information about the data used to train such systems or services. This, in turn, may raise novel compliance burdens for AI providers as well as unique challenges for customers in interpreting the information.
Regulation Evolves to Address Deepfakes, Robocalls and More
Major legal cases involving AI have largely focused on copyright issues; few cases thus far have directly addressed truthful advertising of AI products and AI-generated content. Yet the ease with which consumers and the public can be deceived by AI, as well as the fear of mal-intentioned interference in political elections, has underscored the urgency of considering legislation and regulations capable of addressing these issues directly.
In Truth-in-AI and Robo-Deception: How Regulation Is Evolving to Address Deepfakes, Robocalls and More to Avoid the Erosion of Consumer Trust, colleagues Marcus Leonard, Shani Rivaux and Sam Eichner discuss the evolving legislation and regulations that will address these issues directly.
The UK Introduces Tougher Penalties for Consumer Protection Breaches
In May 2024, the UK passed the new Digital Markets, Competition and Consumers Act (DMCC). Amongst other changes, the DMCC grants the UK Competition and Markets Authority (CMA) new powers to directly impose fines of up to 10% of a business’s global turnover for consumer protection breaches and to issue notices requiring changes to online interfaces, significantly enhancing the CMA’s enforcement capabilities.
New Report Latest to Cast Uncertainty over EU-U.S. Data Privacy Framework
A new report issued in May 2024 by the Centre for European Policy Studies (CEPS), an independent think tank, is the latest development to raise concerns over the EU-U.S. Data Privacy Framework (DPF), predicting that it will likely fail if challenged before the Court of Justice of the European Union (CJEU).
From Encryption to Employment, U.S. Federal Agencies Brace for the Effects of Quantum Computing, AI and More
In this week’s edition of Consumer Protection Dispatch, we look at the latest regulatory developments from the U.S. Department of Commerce, Consumer Financial Protection Bureau, and the Securities and Exchange Commission regarding data and AI.