The Federal Trade Commission (FTC) has released a staff report, Bringing Dark Patterns to Light, which discusses misleading and manipulative design practices—dark patterns—in web and mobile apps. These design choices exploit users’ cognitive biases to influence their behavior and prevent them from making fully informed decisions about their data and purchases. Dark patterns are employed to get users to surrender their personal information, unwittingly sign up for services, and purchase products they did not intend to buy. The consequences of dark patterns have been increasingly noticed in the regulatory and legislative sphere, both in the United States and Europe.

Continue Reading Dark Patterns under the Regulatory Spotlight Again

Dark patterns are top of mind for regulators on both sides of the Atlantic. In the United States, federal and state regulators are targeting dark patterns as part of both their privacy and traditional consumer protection remits. Meanwhile, the European Data Protection Board (EDPB) is conducting a consultation on proposed Guidelines (Guidelines) for assessing and avoiding dark pattern practices that violate the EU General Data Protection Regulation (GDPR) in the context of social media platforms. In practice, the Guidelines are likely to have broader application to other types of digital platforms as well. Continue Reading “Dark Patterns” Are Focus of Regulatory Scrutiny in the United States and Europe

This month, CPW’s Kyle Fath, Kristin Bryan, Christina Lamoureux & Elizabeth Helpling explained how data privacy and cybersecurity were Federal Trade Commission (“FTC”) priorities.  As they wrote, there were “three key areas of interest to consumer privacy that are now in the FTC’s spotlight, as well as their relation to state privacy legislation and their anticipated impact to civil litigation.”  One area of interest they identified was deceptive and manipulative conduct on the Internet (including so-called “dark patterns”).  Today, the FTC announced that it was going to ramp up enforcement against illegal dark patterns that trick consumers into subscriptions.  Read on to learn more and what it means going forward.

First, some background.  The term “dark patterns” applies collectively to manipulative techniques that can impair consumer autonomy and create traps for online shoppers (for instance, think of multi-click unsubscription options).  As CPW previously explained, “[e]arlier this year, the FTC hosted a workshop called ‘Bringing Dark Patterns to Light,’ and sought comments from experts and the public to evaluate how dark patterns impact customers.”  The genesis for this workshop was the FTC’s concern with harms caused by dark patterns, and how dark patterns may take advantage of certain groups of vulnerable consumers.

Notably, the FTC is not alone in its attention to this issue, as California’s Attorney General previously announced regulations that banned dark patterns and required disclosure to consumers of the right to opt out of the sale of personal information collected through online cookies.  Dark patterns have also been targeted in civil litigation.  This year, the weight-loss app Noom faced a class action alleging deceptive acts through Noom’s cancellation policy, automatic renewal schemes, and marketing to consumers.

Building off these prior developments, today, the FTC announced a new enforcement policy statement “warning companies against deploying illegal dark patterns that trick or trap consumers into subscription services.”  As the FTC cautioned, “[t]he agency is ramping up its enforcement in response to a rising number of complaints about the financial harms caused by deceptive sign up tactics, including unauthorized charges or ongoing billing that is impossible to cancel.”

As summarized in the FTC’s press release announcing this development, businesses going forward must follow three key requirements in this area or run the risk of an enforcement action (including potential civil penalties):

  • (1) Disclose clearly and conspicuously all material terms of the product or service:  This includes disclosing how much a product and/or service costs, “deadlines by which the consumer must act to stop further charges, the amount and frequency of such charges, how to cancel, and information about the product or service itself that is needed to stop consumers from being deceived about the characteristics of the product or service.”
  • (2) Obtain the consumer’s express informed consent before charging them for a product or service: This means “obtaining the consumer’s acceptance of the negative option feature separately from other portions of the entire transaction, not including information that interferes with, detracts from, contradicts, or otherwise undermines the consumer’s ability to provide their express informed consent.”
  • (3) Provide easy and simple cancellation to the consumer: Marketers are also to “provide cancellation mechanisms that are at least as easy to use as the method the consumer used to buy the product or service in the first place.”
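The three requirements above lend themselves to a simple internal audit. The sketch below is purely illustrative (the class, field, and function names are invented, and counting steps is only a crude proxy for the FTC's "at least as easy to cancel" standard), but it shows how a product team might encode the checklist before launching a subscription flow:

```python
from dataclasses import dataclass

@dataclass
class SubscriptionFlow:
    """Hypothetical model of a subscription's signup and cancellation paths."""
    signup_steps: list      # user actions required to subscribe
    cancel_steps: list      # user actions required to cancel
    terms_disclosed: bool   # cost, billing frequency, and deadlines shown up front
    consent_separate: bool  # negative-option consent collected apart from other terms

def compliance_gaps(flow: SubscriptionFlow) -> list:
    """Return a list of potential gaps against the three FTC requirements."""
    gaps = []
    # (1) Clear and conspicuous disclosure of all material terms.
    if not flow.terms_disclosed:
        gaps.append("material terms not disclosed")
    # (2) Express informed consent obtained separately from the rest of the transaction.
    if not flow.consent_separate:
        gaps.append("consent not obtained separately")
    # (3) Cancellation at least as easy as signup (step count as a rough proxy).
    if len(flow.cancel_steps) > len(flow.signup_steps):
        gaps.append("cancellation harder than signup")
    return gaps
```

Under this sketch, a flow with one-click signup but a phone-only, multi-step cancellation would be flagged under requirement (3).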

This development is likely only one of many anticipated to be rolled out in light of the FTC’s continued focus on data privacy and cybersecurity.  For more on this, stay tuned—CPW will be there to keep you in the loop.

As Rosa Barcelo, Matus Huba, Lucia Hartnett and Bethany Simmonds discuss in greater detail here, “[t]he European Data Protection Board (“EDPB”), a body with members from all EEA supervisory authorities (and the European Data Protection Supervisor), has recently established a taskforce to coordinate the response to complaints concerning compliance of cookie banners filed with several European Economic Area (“EEA”) Supervisory Authorities (“SAs”) by the non-profit organization NOYB. NOYB believes that many cookie banners, including those of ‘major’ companies, engage in ‘deceptive designs’ and ‘dark patterns’.  The EDPB taskforce is established in accordance with Art. 70(1)(u) of the GDPR, which states that the EDPB must promote the cooperation and effective bilateral and multilateral exchange of information and best practices between SAs. The aim of this taskforce is to harmonize and coordinate the approach to investigating and responding to cookie banner complaints from NOYB. It remains to be seen how this will actually be done in practice and whether the EDPB will limit the harmonization to a procedural approach to the complaints, or whether it will also attempt to ensure consistent application of the underlying substantive rules.”

They provide a detailed analysis at the Security Privacy Bytes blog and comment that “the development of the taskforce could have a significant impact in streamlining the handling of the complaints it is set to investigate and could help companies better understand what is an acceptable pan-EU approach to cookie banners.”

The European Data Protection Board (“EDPB”), a body with members from all EEA supervisory authorities (and the European Data Protection Supervisor), has recently established a taskforce to coordinate the response to complaints concerning compliance of cookie banners filed with several European Economic Area (“EEA”) Supervisory Authorities (“SAs”) by a non-profit organisation NOYB. NOYB believes that many cookie banners, including those of ‘major’ companies, engage in “deceptive designs” and “dark patterns”. Continue Reading EDPB Establishes Cookie Banner Taskforce, Which Will Also Look Into Dark Patterns and Deceptive Designs

On October 17, 2022, the California Privacy Protection Agency (“CPPA” or “Agency”) published Modified Text of Proposed Regulations (“Modified Regs”) and Explanation of Modified Text of Proposed Regulations (“Explanation of Modified Regs”). The CPPA review of the Modified Regs has been postponed and is now scheduled to be considered during the October 28-29, 2022 public meeting.

Recall that earlier this year, on May 27, 2022, the CPPA published the first draft of the proposed CPRA Regs and initial statement of reasons. The Agency commenced the formal rulemaking process to adopt the Regs on July 8, 2022, and the 45-day public comment period closed on August 23, 2022. The comments submitted in response to the first draft of the Regs are available here. Continue Reading Revised Proposed CPRA Regs To Be Considered At October 28, 2022 Meeting

In case you missed it, below are recent posts from Consumer Privacy World covering the latest developments on data privacy, security and innovation. Please reach out to the authors if you are interested in additional information.

Passage of Federal Privacy Bill Remains Possible This Year, Remains a Continued Priority | Consumer Privacy World

Webinar Registration Open: Mitigating Cybersecurity Class Action Litigation Risks: Policies, Procedures, Service Providers, Notification, Damages | Consumer Privacy World

Kyle Fath appointed to Connecticut Privacy Legislation Working Group | Consumer Privacy World

FCC Adopts Rulemaking Proposal to Protect Consumer Privacy From Invasion by Unwanted Text Messages | Consumer Privacy World

Update on the California Privacy Protection Agency: Still No Date Certain for the CPRA Regulations | Consumer Privacy World

“Delaware Ruling Highlights Challenges Of Data Breach Biz Disputes” Article, Co-Authored by CPW’s Kristin Bryan, Jesse Taylor and Caroline Dzeba, is Published on Law360 | Consumer Privacy World

Third Circuit Announces Standard for Determining Accuracy of Credit Reports Under FCRA | Consumer Privacy World

2023 State Privacy Laws: How to Assess and Ensure Readiness by Year-end

Malcolm Dowden and Niloufar Massachi Discuss Vendor Contracting Requirements Under New US Privacy Laws and the GDPR

New topic for EDPB’s coordinated enforcement action: the DPO

Dark Patterns under the Regulatory Spotlight Again

CPW’s Shea Leitch and Kyle Dull to Speak at ACC South Florida’s 12th Annual CLE Conference

CPW’s David Oberly Examines Recent Major Changes to Consumer Privacy Legal Landscape in Latest Issue of the Cincinnati Bar Association’s CBA Report Magazine

CPW’s Kristin Bryan Discusses Session Replay Software Litigation Trends With The Seattle Times

Office of Management and Budget Takes Action to Enhance the Security of Software Supply Chain

CPW’s Kristin Bryan, Jesse Taylor and Shing Tse Co-Author Chapter for Lexis Practical Guidance on Privacy, Cybersecurity and Data Breach Litigation: Key Laws and Considerations

Data Protection and Digital Information Bill Delayed – Aspects to Consider While We Wait

CPW’s David Oberly Analyzes the FTC’s Largest FTC Contact Lens Rule Settlement to Date in Law360

Congratulations to CPW’s Kristin Bryan on Being Named a 2022 Cybersecurity & Privacy MVP by Law360!

FCC Reportedly Issues Letters of Inquiry Seeking Further Information on Wireless Providers Data Privacy Practices

Webinar Registration Open: Navigating Cross-border Challenges Relating to HR Data Protection and Employee Right-to-Work Compliance

HR and B-to-B Data Compliance Deadline Looming – Legislative Efforts to Extend California Consumer Privacy Act Exemptions Fail

For years now, California has led the way by setting the standard for privacy and data protection regulation in the United States. Recently— and as calls for greater controls over the addictive nature of social media grow louder—legislators in the Golden State have moved closer toward enacting a new, first-of-its-kind privacy law that would prohibit the development and utilization of “addictive” features by social media platforms. At the same time, state legislators also advanced a second bill that would put in place stringent online privacy protections for minors.

Businesses should monitor the progress of these bills closely, as their enactment—combined with an increased focus on children’s privacy by both federal lawmakers and the Federal Trade Commission (“FTC”)—may have a ripple effect in other states and municipalities, with legislators following close behind to enact similar children’s online privacy laws.

Continue Reading California Moves Closer to Enacting More Stringent Online Privacy Protections for Children

The Federal Trade Commission (“FTC” or “Agency”) recently indicated that it is considering initiating pre-rulemaking “under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”  This follows a similar indication from Fall 2021, when the FTC signaled its intention to begin pre-rulemaking activities on the same security, privacy, and AI topics in February 2022. This time, the FTC has expressly indicated that it will submit an Advance Notice of Proposed Rulemaking (“ANPRM”) in June, with the associated public comment period to end in August, whereas it was silent on a specific timeline when it made its initial indication back in the Fall. We will continue to keep you updated on these FTC rulemaking developments on security, privacy, and AI.

Also, on June 16, 2022, the Agency issued a report to Congress (the “Report”), as directed by Congress in the 2021 Appropriations Act, regarding the use of artificial intelligence (“AI”) to combat online problems such as scams, deepfakes, and fake reviews, as well as other more serious harms, such as child sexual exploitation and incitement of violence. While the Report is specific in its purview—addressing the use of AI to combat online harms, as we discuss further below—the FTC also uses the Report as an opportunity to signal its positions on, and intentions as to, AI more broadly.

Background on Congress’s Request & the FTC’s Report

The Report was issued by the FTC at the request of Congress, which—through the 2021 Appropriations Act—had directed the FTC to study and report on whether and how AI may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified “online harms.” While the Report spends a significant amount of time addressing the prescribed online harms, offering recommendations regarding the use of AI to combat them, and cautioning against over-reliance on AI tools, it also devotes significant attention to signaling the FTC’s thinking on AI more broadly. In particular, due to specific concerns raised by the FTC and other policymakers, thought leaders, consumer advocates, and others, the Report cautions that the use of AI should not necessarily be treated as a solution to the spread of harmful online content. Rather, recognizing that “misuse or over-reliance on [AI] tools can lead to poor results that can serve to cause more harm than they mitigate,” the Agency offers a number of safeguards. In so doing, the Agency raises concerns that, among other things, AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance, perhaps signaling areas of focus in forthcoming rulemaking.

While the FTC’s discussion of these issues and other shortcomings focuses predominantly on the use of AI to combat online harms through policy initiatives developed by lawmakers, these areas of concern apply with equal force to the use of AI in the private sector. Thus, it is reasonable to posit that the FTC will focus its investigative and enforcement efforts on these same concerns in connection with the use of AI by companies that fall under the FTC’s jurisdiction. Companies employing AI technologies more broadly should pay attention to the Agency’s forthcoming rulemaking process to stay ahead of the issues.

The FTC’s Recommendations Regarding the Use of AI

Another major takeaway of the Report pertains to the series of “related considerations” that the FTC has cautioned will require the exercise of great care and focused attention when operating AI tools. Those considerations entail (among others) the following:

  • Human Intervention: Human intervention is still needed, and perhaps always will be, in connection with monitoring the use and decisions of AI tools intended to address harmful conduct.
  • Transparency: AI use must be meaningfully transparent, which includes the need for these tools to be explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.
  • Accountability: Intertwined with transparency, platforms and other organizations that rely on AI tools to clean up harmful content that their services have amplified must be accountable for both their data and practices and their results.
  • Data Scientist and Employer Responsibility for Inputs and Outputs: Data scientists and their employers who build AI tools—as well as the firms procuring and deploying them—must be responsible for both inputs and outputs. Appropriate documentation of datasets, models, and work undertaken to create these tools is important in this regard. Concern should also be given to the potential impact and actual outcomes, even though those designing the tools will not always know how they will ultimately be used. And privacy and security should always remain a priority focus, such as in their treatment of training data.
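The documentation consideration above can be made concrete with a lightweight record kept for each AI tool. The sketch below is a hypothetical format (the class and field names are invented, not prescribed by the FTC or the Report), showing how the transparency, accountability, and human-intervention considerations might be captured in an audit trail:

```python
from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    """Hypothetical documentation record for an AI tool (illustrative fields only)."""
    tool_name: str
    intended_use: str
    training_data_sources: list   # documents the datasets behind the model
    known_limitations: list       # e.g., accuracy or bias risks
    human_review_required: bool   # reflects the human-intervention consideration
    decisions_contestable: bool   # reflects the transparency/contestability consideration

def audit_entry(record: AIToolRecord) -> dict:
    """Serialize the record for an accountability audit trail and flag missing documentation."""
    entry = asdict(record)
    entry["fully_documented"] = bool(
        record.intended_use
        and record.training_data_sources
        and record.known_limitations
    )
    return entry
```

Keeping such a record per tool gives a firm something concrete to produce when a regulator asks how a model was built, what data it was trained on, and whether its decisions can be reviewed or challenged.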

Of note, the Report identifies transparency and accountability as the most valuable direction in this area—at least as an initial step—because the ability to view, and to conduct research behind, platforms’ opaque screens (in a manner that takes user privacy into account) may prove vital for determining the best courses for further public and private action, especially given the difficulty of crafting appropriate solutions when key aspects of the problems are obscured from view. The Report also highlights a 2020 public statement on this issue by Commissioners Rebecca Kelly Slaughter and Christine Wilson, who remarked that “[i]t is alarming that we still know so little about companies that know so much about us” and that “[t]oo much about the industry remains opaque.”

In addition, Congress also instructed the FTC to recommend laws that could advance the use of AI to address online harms. The Report, however, finds that—given that major tech platforms and others are already using AI tools to address online harms—lawmakers should instead consider focusing on developing legal frameworks to ensure that AI tools do not cause additional harm.

Taken together, companies should expect the FTC to pay particularly close attention to these issues as they begin to take a more active approach in policing the use of AI.

FTC: Our Work on AI “Will Likely Deepen”

In addition to signaling its likely areas of focus moving forward in addressing Congress’s mandate, the FTC veered outside its purview to highlight its recent AI-specific enforcement cases and initiatives, describe the enhancement of its AI-focused staffing, and comment on its intentions as to AI going forward. In one notable sound bite, the FTC notes in the Report that its “work has addressed AI repeatedly, and this work will likely deepen as AI’s presence continues to rise in commerce.” Moreover, the FTC specifically calls out its recent staffing enhancements as they relate to AI, highlighting the hiring of technologists and additional staff with expertise in and specifically devoted to the subject matter area.

The Report also highlights the FTC’s major AI-related initiatives to date.

Conclusion

The recent Report to Congress strongly indicates the FTC’s overall apprehension and distrust as it relates to the use of AI, which should serve as a warning to the private sector of the potential for greater federal regulation over the utilization of AI tools. That regulation may come sooner rather than later, especially in light of the Agency’s recent ANPRM signaling the FTC’s consideration of initiating rulemaking to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

At the same time, although the FTC’s Report calls on lawmakers to consider developing legal frameworks to help ensure that the use of AI tools does not cause additional online harms, it is also likely that the FTC will increase its efforts in investigating and pursuing enforcement actions against improper AI practices more generally, especially as it relates to the Agency’s concerns regarding inaccuracy, bias, and discrimination.

Given these developments, companies should consult with experienced AI counsel on proactive measures that can be implemented now to get ahead of the compliance curve and put themselves in the best position to mitigate legal risks moving forward—it is likely only a matter of time before regulation governing the use of AI is enacted.