The California Privacy Rights Act (“CPRA”) places significant power in the hands of the California Privacy Protection Agency (“CPPA” or “Agency”) to influence the future of privacy regulation in the United States, including—perhaps most importantly—the authority to issue regulations in twenty-two specific, enumerated areas to achieve the broad objective of “further[ing] the purposes of” the CPRA.

As to automated decision-making and profiling, the CPRA has granted the Agency the equivalent of a regulatory blank check. Indeed, the CPRA references profiling or automated decision-making only twice in the voluminous text of the statute: first, in defining the term “profiling,” and second, in the law’s broad rulemaking mandate:

Issuing regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer.

For this reason, the CPPA has focused a significant amount of its preliminary rulemaking activities on automated decision-making and profiling. This focus began in the fall of 2021, when profiling and automated decision-making were included among nine topics on which the Agency sought public comment. In late March 2022, the CPPA hosted informational sessions, during which the Agency spent the majority of an entire day on automated decision-making, including cross-jurisdictional approaches to automated decision-making and profiling under the EU’s General Data Protection Regulation (“GDPR”).

Just last week, the CPPA held stakeholder sessions (Agenda here) over the course of three days, setting aside three hours in the first half of the first day for stakeholders to comment on automated decision-making. Importantly, these comments, provided by a range of stakeholders, offer key insights into some of the more complex and challenging issues businesses will face when adapting their privacy programs to comply with the new rules and restrictions that will apply to automated decision-making under the CPRA beginning in 2023.

The comments and positions of the individuals who spoke on the topic of automated decision-making varied widely. However, several common themes reiterated throughout the session shine a light on concerns shared by various stakeholders, as well as the tug of war between their (and others’) competing interests. The stakeholder comments also highlighted the difficulty of striking a balance between regulating automated decision-making technology and profiling in a privacy-protective manner and avoiding overly restrictive regulations that would hamper innovation. Many of the comments fell under the following themes:

  • The Type of Automated Decision-Making Activities That Should Be Regulated: Many speakers highlighted the potentially significant, unintended ramifications of an overly broad scope for the term “automated decision-making technology,” which would produce little benefit to consumers while greatly hampering the operations of businesses across all sectors. For that reason, many speakers emphasized the need to limit the reach of automated decision-making regulation to: (1) fully automated decision-making technology; (2) technology that produces legal or similarly significant effects, such as those bearing on a consumer’s employment or credit; and/or (3) high-risk activities, sensitive data, and/or automated decision-making that constitutes profiling. In addition, several other speakers noted that the term should encompass only those activities that involve the processing of personal information (a limitation that would seem to be inherent in the CPRA regardless).
  • Consumer Rights Relating to the Use of Automated Decision-Making Technology: Speakers also frequently highlighted the need for balance in consumers’ access rights regarding automated decision-making technology. On the one hand, many speakers suggested, the regulations should not require businesses to disclose information to consumers about low-risk automated decision-making technology, such as spell check or spreadsheets. On the other, the CPPA was cautioned against crafting access rights that would require businesses to provide detailed descriptions of the complex algorithms involved in automated decision-making, as doing so would fail to give average consumers “meaningful” information about the logic underlying automated processing. At the same time, requiring the disclosure of algorithms and similar sensitive business information would likely conflict with businesses’ right to protect their trade secrets and similar proprietary information.
  • Consumer Opt-Out Rights Relating to Automated Decision-Making: Many speakers shared the concern that the significant benefits automated decision-making technology offers to consumers and businesses alike could be severely hampered by granting consumers overbroad opt-out rights with respect to activities that fall under the definition of automated decision-making. At a minimum, several speakers suggested, regulations relating to automated decision-making should be tethered to the CPRA’s statutory access and opt-out rights.
  • Alignment with the GDPR and Other Regulatory Schemes: Many stakeholders, including a representative of the Future of Privacy Forum, urged that the regulations align with GDPR Article 22. Others pointed to the EU’s pending Digital Services Act and Artificial Intelligence Act as additional schemes with which the CPRA’s regulations should be consistent.

Conclusion

Following the CPPA’s May stakeholder sessions, the Agency will begin the formal rulemaking process, but final regulations are not anticipated to be issued until sometime in early 2023. Companies should monitor developments in CPPA rulemaking to ensure they are aware of any anticipated changes in the law, which will go into effect at the start of 2023. In addition, companies should immediately begin adapting their privacy programs for compliance not only with the CPRA but also with the Colorado, Connecticut, Virginia, and Utah laws that will come online over the course of 2023.

For more information on the stakeholder sessions, including other topics discussed, you can visit the CPPA’s events page here.

Check back often for more of SPB’s and CPW’s thought leadership on the CPRA and the other 2023 state privacy laws, as well as on AI and automated decision-making. For a further discussion of the CPPA’s approach to rulemaking on automated decision-making and profiling, you can view a recording of our recent webinar 2022 Developments and Trends Concerning Biometric Privacy and Artificial Intelligence. In addition, SPB Partners Kyle Fath and Kristin Bryan will take a deeper dive into this and related topics in our June 2 webinar hosted by the International Association of Privacy Professionals (IAPP). Registration for the IAPP webinar is available here (free for IAPP members).