In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

CPW’s Stephanie Faber Speaks at French Association of Personal Data Protection Correspondents Annual Meeting

Future Uncertain for the American Data Privacy and Protection Act

Online Webinar Now Available: Kristin Bryan and Kyle Fath Discuss AI and Biometrics Privacy Trends and Developments

Ninth Circuit District Court Finds No Standing for Alleged Lost Commercial and Proprietary Data in Privacy Litigation

FTC Signals Intention to Begin Rulemaking on Privacy and AI, Hints at Areas of AI Focus in Congressional Report

New Webinar Recording: “Employee and Other HR Data Under the California Privacy Rights Act”

NOW AVAILABLE: CPW’s Kristin Bryan and Glenn Brown on The Fraudian Slip Podcast ITRC – What You Should Know About State Privacy Laws

Heated Debate Surrounds Proposed Federal Privacy Legislation

Motion for Preliminary Approval of Accellion Data Breach Settlement Filed in California Federal Court

Following Court Win on BIPA Claims, FTC Ramps Up Investigation into Company’s Sharing of Biometric Data

Third Circuit Affirms Law Student’s Cyberstalking Plea, Holding Federal Criminal Cyberstalking Statute Does Not Violate Constitution

Fourth Circuit Grants Summary Judgment to Defendant in Driver Privacy Litigation

CPW’s Kristin Bryan and Shea Leitch Quoted in GDR re National Privacy Legislation

STILL TIME TO REGISTER FOR JUNE 14 CLE WEBINAR: Employee and Other HR Data Under the California Privacy Rights Act

CPW’s Ericka Johnson and Gicel Tomimbang to Speak at ISSA LA on Administration’s Response to Increased Cyber Threats

CPPA Holds First Public Meeting Following Publication of First Draft of Proposed Regulations and Initial Statement of Reasons

Congress Proposes Federal Privacy Legislation to Preempt Certain State Privacy Laws, Hearing Scheduled for Next Week

SEC Cyber Regulation Efforts: A Mid-Year Review

Federal Court Stays BIPA Litigation While Applicable Statute of Limitations is Still in Question

Updates to Automatic Renewal Laws with New Consent, Notice, and Cancellation Requirements in the United States and Germany

ABA Webinar featuring CPW’s Kristin Bryan

At the recent annual meeting of the French Association of Personal Data Protection Correspondents (AFCDP), CPW’s Stephanie Faber presented the latest developments in data privacy in the US (providing a global overview with details on consumer data protection in five states, requirements for opt-outs and OOPS, the federal bill, FTC initiatives, and the possible timeline for the new US-EU framework for the exchange of personal data) and in China (covering the Data Security Law, the Cybersecurity Law, and the PIPL, with details on localization requirements and international transfers).

The AFCDP is the largest French association for privacy professionals and a founding member of CEDPO (the Confederation of European Data Protection Organisations).

It is clearly a challenge for DPOs based in the EEA to keep up with all the new laws around the world, and the audience was keen to better understand the trends and the differences, especially for countries such as the US and China, with which French companies have significant business relations and data flows. Several questions were asked about how the scope of such laws compares with that of the GDPR, and whether the right to privacy could become a constitutional right in the US.

Thanks to the collective working knowledge and know-how of the firm’s global Data Privacy, Cybersecurity & Digital Assets team, Stephanie was able to share valuable insights on these laws and the implementation of regulations and standards.

A recording of the presentation will be made available by the AFCDP for those who could not attend in person.

Earlier this week, the leadership of the House Energy and Commerce Committee formally introduced the American Data Privacy and Protection Act (H.R. 8152). The legislation is being marked up today in the Subcommittee on Consumer Protection and Commerce of the House Committee on Energy and Commerce. It will likely be slated for full committee consideration the week of July 11 and could reach the House floor before Congress breaks for its August recess. As the legislation gains traction in the House, Senate Commerce Chair Maria Cantwell has expressed doubt regarding its prospects in the Senate, signaling that Senate Democrats will not support the bill in its current form. According to a PoliticoPro report, Cantwell says “there’s no way” Senate Majority Leader Chuck Schumer will bring the bill to the Senate floor for consideration, arguing the legislation would set “a weak federal standard” that would preempt state law. We will continue to monitor developments in the effort for federal privacy legislation on the Consumer Privacy World blog.

Earlier this month, CPW’s Kristin Bryan and Kyle Fath presented a webinar on “AI and Biometrics Privacy: Trends and Developments” with the International Association of Privacy Professionals (“IAPP”), the largest global community of privacy professionals. A recording of the webinar is available to all IAPP members and can be accessed (for CPE credit) here.

As summarized in the program description on the IAPP website:

Artificial intelligence and biometrics privacy are top-of-mind issues for companies and their privacy professionals, regardless of the industry sector in which they operate. AI will soon be regulated in the U.S. in an unprecedented manner: the patchwork of 2023 state privacy laws imposes restrictions and obligations on organizations engaged in AI, profiling and automated decision-making, and the Federal Trade Commission is poised to promulgate regulations on automated decision-making and related topics. Organizations employing facial recognition and other biometric technologies are under constant threat of putative privacy class actions under Illinois’ Biometric Information Privacy Act and a handful of other state laws. With BIPA copycats and similar legislation introduced across the country, and a lack of clarity in the current case law, the risk associated with biometrics will certainly continue, and likely increase. Needless to say, global developments in these areas add further complexity for organizations with international operations.

The program addresses, among other topics:

  • AI, biometrics and privacy compliance — Restrictions on and obligations under forthcoming privacy laws in California, Colorado, Utah and Virginia, including with respect to profiling, automated decision-making, and sensitive data.
  • AI and biometrics litigation overview — The current litigation landscape concerning AI and biometrics, including facial recognition.
  • Legislative and regulatory priorities — Pending and anticipated legislative and regulatory developments, both federal and state, as well as globally.

Kristin and Kyle also cover key developments regarding AI and biometric privacy in the realm of regulation, compliance and litigation on CPW. You can check out their analyses of these issues here, here and here, with contributions from David Oberly and other team members.

For more on this, stay tuned.  CPW will be there to keep you in the loop.

Recently, a federal court in California held that the loss of stored data, without more, is insufficient to establish Article III standing to withstand a motion to dismiss. In so doing, the court joined a number of other courts in holding that allegations of speculative harm, devoid of allegations that personal information was stolen or hacked, are likewise insufficient to establish an injury in fact.

In Riordan v. Western Digit. Corp., Plaintiffs brought a slew of claims against Western Digital arising out of an attack by third-party hackers on Western Digital’s legacy Internet-connected hard drives, My Book Live and My Book Live Duo (“Products”).  2022 U.S. Dist. LEXIS 101685, at *2 (N.D. Cal. June 7, 2022).  The third-party hackers performed a factory reset of Western Digital’s Products, remotely erasing all data stored on the Products.  Id.

Plaintiffs alleged two theories of injury resulting from the breach. Plaintiffs generally alleged that, due to the attack, they lost years’ worth of sensitive, intimate, and valuable personal, commercial, and/or proprietary information, including important financial information and priceless personal items, such as personal photographs. Id. at *2-3. Plaintiffs did not otherwise specify the types of information that were lost. Plaintiffs further alleged that they faced a risk of future data misuse “if [their personal data] has made its way into the hands of cyber-criminals.” Id. at *7.

In granting Western Digital’s motion to dismiss, the court held that Plaintiffs’ blanket allegation that their data was deleted and could not be recovered from the Products did not allege an injury in fact. Id. at *8. The court reasoned that “[p]laintiffs failed to describe whether their data was permanently lost, and/or whether another copy of the data was stored elsewhere[,]” and “fail[ed] to describe the type of data lost, or explain why it was valuable and why its loss would cause harm.” Id.

The court likewise held that Plaintiffs’ allegation that their data may have “made its way into the hands of cyber-criminals” was insufficient.  “Plaintiffs’ speculative allegations of harm do not establish an injury in fact.”  Id. at *9.  “Plaintiffs do not allege that through the breach, their specific personal information was stolen or that any harm resulted from the breach (i.e., through hackers).”  Id.

This case is yet another example of courts dismissing complaints that allege only generalized, speculative injury for lack of Article III standing. With Riordan, federal courts continue to demonstrate their willingness to dismiss inadequately pleaded data privacy complaints for lack of standing.

Stay tuned for more developments.  CPW will keep you in the loop.

The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiating pre-rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” This follows a similar indication in Fall 2021, when the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will submit an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the associated public comment period to end in August, whereas it was silent on a specific timeline when it made its initial indication in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.

Also, on June 16, 2022 the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online problems such as scams, deepfakes, and fake reviews, as well as other more serious harms, such as child sexual exploitation and incitement of violence. While the Report is specific in its purview—addressing the use of AI to combat online harms, as we discuss further below—the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.

Background on Congress’s Request & the FTC’s Report

The Report was issued by the FTC at the request of Congress, which—through the 2021 Appropriations Act—directed the FTC to study and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” While the Report spends considerable time addressing those online harms, offering recommendations on the use of AI tools to combat them, and cautioning against over-reliance on such tools, it also devotes significant attention to the FTC’s views on AI more broadly. In particular, citing concerns raised by the FTC and other policymakers, thought leaders, consumer advocates, and others, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in forthcoming rulemaking.

While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. Thus, it is reasonable to posit that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under the FTC’s jurisdiction. Companies employing AI technologies more broadly should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of the issues.

The FTC’s Recommendations Regarding the Use of AI

Another major takeaway from the Report is the series of “related considerations” that the FTC cautions will require great care and focused attention when operating AI tools. Those considerations include, among others, the following:

  • Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.
  • Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.
  • Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable for both their data and practices and their results.
  • Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools—as well as the firms procuring and deploying them—must be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and work undertaken to create these tools is important in this regard. Consideration should also be given to the potential impact and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority focus, such as in the treatment of training data.

Of note, the Report identifies transparency and accountability as the most valuable direction in this area, at least as an initial step: allowing researchers and others to see behind platforms’ opaque screens (in a manner that takes user privacy into account) may prove vital for determining the best courses for further public and private action, especially given the difficulty of crafting appropriate solutions when key aspects of the problem are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”

Congress also instructed the FTC to recommend laws that could advance the use of AI to address online harms. The Report, however, finds that—given that major tech platforms and others are already using AI tools to address online harms—lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.

Taken together, these signals suggest that companies should expect the FTC to pay particularly close attention to these issues as it takes a more active approach to policing the use of AI.

FTC: Our Work on AI “Will Likely Deepen”

In addition to signaling its likely areas of focus in addressing Congress’ mandate, the FTC veered outside its purview to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and comment on its intentions as to AI moving forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” Moreover, the FTC specifically calls out its recent AI-related staffing enhancements, highlighting the hiring of technologists and additional staff with expertise in, and specifically devoted to, the subject matter area.

The Report also highlights the FTC’s major AI-related initiatives to date.

Conclusion

The recent Report to Congress strongly indicates the FTC’s overall apprehension and distrust as it relates to the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, it is also likely that the FTC will increase its efforts in investigating and pursuing enforcement actions against improper AI practices more generally, especially as it relates to the Agency’s concerns regarding inaccuracy, bias, and discrimination.

In light of these developments, companies should consult with experienced AI counsel on proactive measures that can be implemented now to get ahead of the compliance curve and put themselves in the best position to mitigate legal risk, as it is likely only a matter of time before regulation governing the use of AI is enacted.

In our widely attended webinar last week, SPB’s Carmen Cole, Annette Demmel, Kyle Fath, Alan Friel and Shea Leitch joined forces to discuss how the California Privacy Rights Act (CPRA) will present significant new challenges, even for companies that are currently California Consumer Privacy Act (CCPA) compliant. This is due, in part, to the CPRA’s regulation of the collection, use and disclosure of employee, applicant, independent contractor and other “HR Data” that is currently largely exempt from the CCPA. Starting on Jan. 1, 2023, organizations will have to apply various obligations to otherwise common and routine HR data processing. Importantly, businesses should be aware that the scope of application will likely extend beyond California-based employers to those outside the state, such as in the case of remote workers and California-based job applicants.

In case you missed the program, you can access the recording and hear what our team had to say on these topics:

  • The “consumer” rights and business obligations that apply to HR data under the CPRA
  • Completing a data inventory for HR Data and otherwise preparing for compliance in view of delayed regulations
  • Balancing the obligations under the CPRA with a tangled web of California employment laws and regulations
  • Preparing for the notoriously litigious employment plaintiffs’ bar to use CPRA rights as an alternative, pre-litigation discovery mechanism
  • Lessons learned from GDPR employee data subject access requests, including regarding emails and unstructured data
  • The scope of privilege, trade secrets and protection of another person’s privacy rights
  • Status of pending legislation that would extend HR and B-to-B Data exemptions
  • The potential distinction between business data and personal information
  • How new purpose and retention limitations will help minimize access
  • The use of self-serve access and focused requests to limit search parameters, and of Section .145(h)(3) (formerly (g)(3)) to limit access
  • Application of deletion exception retention purposes to HR data

Our slide presentation is also available here. And, coming soon: our deep-dive blog series on many of these issues. BOLO!

CPW’s Kristin Bryan and Glenn Brown recently joined James Lee, Chief Operating Officer of the Identity Theft Resource Center (“ITRC”), and Eva Velasquez, Chief Executive Officer of the ITRC, to discuss recent developments in privacy laws and privacy litigation. Their podcast, which addresses recently enacted privacy laws, litigation trends, and what may be on the horizon in this space, is available here. Be sure to check it out. And for more on data privacy, security and innovation, stay tuned. CPW will be there to keep you in the loop.

As we previously reported on the CPW blog, the leadership of the House Energy and Commerce Committee and the Ranking Member of the Senate Commerce Committee released a discussion draft of proposed federal privacy legislation, the American Data Privacy and Protection Act (“ADPPA”), on June 3, 2022. Signaling potential differences amongst key members of the Senate Committee on Commerce, Science, and Transportation, Chair Maria Cantwell (D-WA) withheld her support. Staking out her own position, Cantwell is reportedly floating an updated version of the Consumer Online Privacy Rights Act (“COPRA”), originally proposed in 2019.

Early Stakeholder Disagreement

As soon as a discussion draft of the ADPPA was published, privacy rights organizations, civil liberties groups, and businesses entered the fray, choosing sides for and against the bill. The ACLU came out as an early critic of the legislation. In an open letter to Congress sent June 10, the group urged caution, arguing that both the ADPPA and COPRA contain “very problematic provisions.” According to the group, more time is required to develop truly meaningful privacy legislation, as evidenced by “ACLU state affiliates who have been unable to stop harmful or effectively useless state privacy bills from being pushed quickly to enactment with enormous lobbying and advertising support of sectors of the technology industry that resist changing a business model that depends on consumers not having protections against privacy invasions and discrimination.” To avoid this fate, the ACLU urges Congress to “bolster enforcement provisions, including providing a strong private right of action, and allow the states to continue to respond to new technologies and new privacy challenges with state privacy laws.”

On June 13, a trio of trade groups representing some of the largest tech companies sent their own open letter to Congress, supporting passage of a federal privacy law but ultimately opposing the ADPPA. Contrary to the position taken by the ACLU, the industry groups worry that the bill’s inclusion of a private right of action with the potential to recover attorneys’ fees will lead to litigation abuse. The groups also took issue with other provisions, such as the legislation’s restrictions on the use of data derived from publicly available sources and the “duty of loyalty” to individuals whose covered data is processed.

Industry groups and consumer protection organizations had the opportunity to voice their opinions regarding the ADPPA at a public hearing on June 14. Video of the proceedings and the witnesses’ prepared testimony are available here. Two common themes arose in the testimony: (1) general support for federal privacy legislation; and (2) opposition to discrete aspects of the bill. As has been the case for the better part of the decade in which Congress has sought to draft a federal privacy bill, two fundamental issues continue to drive the debate and must be resolved for the legislation to become law: the private right of action to enforce the law, and preemption of state laws or portions of them. While civil rights and privacy advocacy groups maintain that the private right of action does not go far enough and that federal privacy legislation should not preempt state law, industry groups argue that a private right of action should not be permitted and that state privacy laws should be broadly preempted.

The Path Forward

The Subcommittee on Consumer Protection and Commerce of the House Energy and Commerce Committee is expected to mark up the draft bill the week of June 20. We expect the subcommittee to approve the draft bill with few or no changes. The full Energy and Commerce Committee should complete work on the bill before the August recess. Given the broad bipartisan support for the legislation in the House, we anticipate that the legislation, with minor tweaks, is likely to be approved by the House, setting up a showdown with the Senate after a decade of debate.

With the legislative session rapidly drawing to a close, the prospects for the ADPPA’s passage remain unclear. Intense disagreement remains amongst key constituency groups regarding important aspects of the proposed legislation. Yet, in spite of these differences, a review of the public comments to date regarding the ADPPA reveals one nearly unanimous opinion: the United States needs federal privacy legislation. Because most interested parties agree that the U.S. would benefit from federal privacy legislation, Congress has more incentive than ever to reach a compromise on one of the proposed privacy bills.

In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

CPW’s Kristin Bryan and Shea Leitch Quoted in GDR re National Privacy Legislation

Fourth Circuit Grants Summary Judgment to Defendant in Driver Privacy Litigation

Third Circuit Affirms Law Student’s Cyberstalking Plea, Holding Federal Criminal Cyberstalking Statute Does Not Violate Constitution

Following Court Win on BIPA Claims, FTC Ramps Up Investigation into Company’s Sharing of Biometric Data

Motion for Preliminary Approval of Accellion Data Breach Settlement Filed in California Federal Court

Updates to Automatic Renewal Laws with New Consent, Notice, and Cancellation Requirements in the United States and Germany

Federal Court Stays BIPA Litigation While Applicable Statute of Limitations is Still in Question

SEC Cyber Regulation Efforts: A Mid-Year Review

Congress Proposes Federal Privacy Legislation to Preempt Certain State Privacy Laws, Hearing Scheduled for Next Week

CPPA Holds First Public Meeting Following Publication of First Draft of Proposed Regulations and Initial Statement of Reasons

OOPS! And Other Takeaways from the First Draft of CPRA Regulations

Start Vetting Your Data Processors! Key Takeaways From the Forum Case

Ninth Circuit Revives Session Replay Software Litigation, Finding Plaintiff Sufficiently Alleged His Online Communications Were Tracked Without His Express Prior Consent

The ASA’s Top Tips on Advertising “Free Trials”

FCC Announces Nine More State Robocall Investigation Partnerships

FTC Targets Children’s Privacy and Stealth Advertising Directed at Children

In Case You Missed Our Webinar “Navigating Opportunities and Challenges: Cross-border Data, the Cookiepocalypse and Standard Contractual Clauses”

Two More Nails in the Coffin for Opportunistic Data Breach Claims

Agency to Reveal Timing on First Draft of CPRA Regs at May 26 Meeting