Ad Law Access
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access
Updates on advertising law and privacy law trends, issues, and developments

AI Legislative and Regulatory Efforts Pick Up Steam: What We're Watching
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ai-legislative-and-regulatory-efforts-pick-up-steam-what-were-watching
Wed, 03 Jul 2024 13:00:00 -0400

AI capabilities are growing by the day, and so are government efforts to put guardrails, principles, and rules in place to govern the AI space. In May alone, Utah's Artificial Intelligence Policy Act became the first state-level AI law to take effect, Colorado and Minnesota enacted new laws addressing AI, and the European Union passed historic comprehensive AI regulations. Meanwhile, the FTC continues to issue AI-related guidance materials that emphasize the importance of transparency in human-AI interactions, especially those involving native advertising (prior guidance here and here). As we continue to monitor the flurry of activity underway, we outline below the new laws, important bills, standards, and initiatives we are watching.

Federal Efforts

American Privacy Rights Act

Last week, the House Energy and Commerce Committee abruptly canceled a scheduled markup of the latest American Privacy Rights Act (APRA) discussion draft, Congress’s most recent comprehensive privacy proposal. Some privacy advocates welcomed the cancellation, strongly opposing the removal of AI and civil rights protections in the latest draft. These protections included prohibitions against algorithmic discrimination and requirements for transparency and impact assessments for AI systems.

At present, it seems APRA may not advance as far as the 2022 American Data Privacy and Protection Act, which was passed out of the Energy and Commerce Committee but ultimately never received a floor vote. With the August recess approaching and an October break ahead of the November elections, the likelihood of any comprehensive privacy legislation reaching the House floor this year seems dim. However, we will continue to monitor these federal legislative efforts and their potential impact on AI providers.

White House Executive Order

Last year, the White House released the federal government’s first comprehensive guidelines regarding AI. Although the Executive Order focuses almost entirely on the government’s own use of AI, the ultimate effects of the order will be significant for private sector businesses engaging with federal agencies.

Pursuant to the Executive Order, on April 29, 2024, NIST released a draft risk management profile specifically addressing generative AI. The Generative AI Profile—which is intended as a companion resource to NIST’s AI Risk Management Framework—offers voluntary best practice guidance regarding the design, deployment, and operation of generative AI systems. As states continue to draft AI legislation, the NIST AI Risk Management Framework will likely continue to serve as an instructive reference point for legislators across the country.

State Legislation

Colorado AI Act

The Colorado AI Act, SB 205, is now set to take effect February 1, 2026, although the freshly signed law is already slated for revisions: in a recent letter, Gov. Jared Polis, AG Phil Weiser, and Senate Majority Leader Robert Rodriguez acknowledged that “a state by state patchwork of regulation” on AI poses “challenges to the cultivation of a strong technology sector” and promised to engage in a process to revise the new law to “minimize unintended consequences associated with its implementation.”

As drafted, the law introduces new obligations and reporting requirements for both developers and deployers of AI systems. Key requirements include:

  • Transparency. Moving forward, any business that uses an AI system to interact with consumers must disclose that fact during the interaction.
  • Algorithmic Discrimination in High-Risk AI Systems. The new law seeks to combat “algorithmic discrimination,” where the use of AI produces outcomes that disfavor consumers based on personal and sensitive characteristics. High-risk AI systems are defined as systems used to make decisions about individuals in the areas of education, employment, finance or lending, government services, healthcare, housing, insurance, and legal services. Developers and deployers of such systems have a duty to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, and the law identifies specific obligations such entities must undertake.
  • Consumer Notice, Correction, and Opt-Out Rights. Consumers must be notified when high-risk AI systems are used to make any decisions about them in the areas outlined above (e.g., education, employment, etc.), and must have the right to correct inaccurate data and appeal the decision to a human reviewer.
  • Existing Obligations Under the Colorado Privacy Act (CPA). Deployers must also respect the existing rights of consumers under the CPA, including the right to opt out of the processing of personal information for profiling with legal or similarly significant effects concerning the consumer, including decisions made using AI. In April, Colorado amended the CPA’s definition of sensitive data to include both biological and neural data used either in isolation or in combination with other personal data elements for identification purposes. The CPA additionally creates AI-related disclosure obligations, requiring businesses to provide privacy policy language that details the personal data categories used for profiling, a plain-language explanation of the AI logic in use, explanations describing its benefits and potential consequences, and text explaining whether the system has been evaluated for accuracy, fairness, or bias.
  • Enforcement. The Colorado attorney general has sole authority to enforce the Colorado AI Act, and the law includes no private right of action. Violations are considered breaches of Colorado's general consumer protection laws, which can result in a maximum civil penalty of $20,000 per violation. Notably, each violation is counted individually for every affected consumer or transaction. Consequently, just 50 impacted consumers could result in a maximum civil penalty of $1 million, as the back-of-the-envelope illustration below shows. Actions must be brought within three years of when the violation occurred or was discovered.
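To make the stakes concrete, here is a minimal sketch of that penalty math (our illustration, not legal advice); it simply multiplies the per-violation ceiling by the number of affected consumers, assuming each affected consumer counts as a separate violation.

```python
# Back-of-the-envelope exposure estimate under the Colorado AI Act,
# assuming each affected consumer or transaction is a separate violation.
# Illustrative only; actual penalties are determined by a court.

MAX_PENALTY_PER_VIOLATION = 20_000  # USD ceiling per violation

def max_exposure(affected_consumers: int) -> int:
    """Maximum civil penalty if every affected consumer is one violation."""
    return affected_consumers * MAX_PENALTY_PER_VIOLATION

print(max_exposure(50))  # 1000000 -> the $1 million figure cited above
```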

We’ll keep an eye on whether all these requirements survive the revision process suggested above.

Utah Artificial Intelligence Policy Act

On May 1, 2024, Utah’s Artificial Intelligence Policy Act, SB 149, became effective. Generally, Utah’s legislature has taken a far lighter-touch approach to AI regulation than Colorado’s. Key takeaways include:

  • Disclosure Upon Request. Most businesses and individuals will only be required to disclose the use of AI when prompted by a consumer.
  • Disclosing the Use of AI in Regulated Professions. Businesses and individuals operating within regulated professions (e.g., healthcare professionals) must prominently disclose the use of AI before its use with customers.
  • Responsibility for Generative AI Outputs. Companies are responsible for the outputs of their generative AI tools and cannot shift the blame to the technology if those outputs violate Utah consumer protection laws.

Comprehensive State Privacy Laws

Twenty states have now passed comprehensive state privacy laws: California, Colorado, Connecticut, Delaware, Florida, Indiana, Iowa, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Oregon, Rhode Island, Tennessee, Texas, Utah, and Virginia. These states, with the exceptions of Utah and Iowa, impose additional requirements on companies engaging in “profiling,” which is defined as the automated processing of personal data to analyze or predict something personal about an individual, such as one’s economic situation, behavior, health, or personal preferences. Under these laws, consumers must be able to opt out of being profiled in a manner that could lead to a “legal effect” on that consumer or another “similarly significant effect.” Although a few of these laws are currently effective, the majority come into effect over the next few years. Here are the key dates to keep in mind:

  • Effective in 2024. Florida, Montana, Oregon, and Texas have comprehensive privacy laws coming into effect in the next several months.
  • Effective in 2026. Kentucky and Indiana have enacted comprehensive data privacy laws that will become effective on Jan. 1, 2026. The Rhode Island legislature also passed the Rhode Island Data Transparency and Privacy Protection Act, SB 2500 / HB 7787, on June 13, 2024. If signed, the law will also become effective on Jan. 1, 2026.

California Privacy Protection Agency Initiatives

The California Privacy Protection Agency is currently considering rules and engaging in pre-formal rulemaking stakeholder sessions regarding the use of automated decision making technology (ADMT). California defines ADMT as technology that collects, uses, retains or discloses personal information and either replaces or substantially facilitates human decision making. Algorithmic “profiling,” discussed above, is encompassed within this definition. Examples include resume-screening tools used by businesses to decide whether to interview applicants and analytics tools that place consumers into audience groups to further target them with advertising.

Businesses subject to the California Consumer Privacy Act (CCPA) that use ADMT for “extensive profiling,” use it to make “significant decisions” regarding consumers, or use personal information to train ADMT would be subject to new transparency and opt-out requirements. Behavioral advertising, the practice of tracking users’ online activities to deliver ads tailored to their interests, is included within the definition of “extensive profiling.” Further discussion regarding the terms “extensive profiling” and “significant decisions” can be found here. Businesses would be required to offer a pre-use notice informing consumers of how the company uses ADMT and of the individual’s CCPA opt-out rights.

Ongoing Legislative Efforts

Currently, a multitude of states, including New York, California, and Massachusetts, are working on proposed AI governance bills. In addition, new legislation in Illinois addressing AI usage currently awaits the Governor’s signature.

  • California. The Assembly recently advanced multiple bills addressing AI usage. These bills include provisions prohibiting algorithmic discrimination and would establish new compliance and reporting requirements for AI providers. Additionally, these bills would require businesses to implement watermarking systems identifying AI-generated content and to publicize information regarding the methods used to train AI models.
  • Illinois. On May 24, 2024, the Illinois legislature passed HB 3773, amending the Illinois Human Rights Act by adding new provisions regarding the use of predictive data analytics for employment and credit decisions.

Europe

The EU AI Act

On May 21, 2024, the EU Council unanimously passed the EU AI Act (AIA). Businesses, whether EU-based or not, should pay close attention to the upcoming changes for two reasons. First, the AIA applies to all providers of AI systems placed on the EU market, regardless of where the provider is located. Second, the penalties for non-compliance are some of the toughest in the world, allowing for fines of up to €35 million or 7% of a company’s worldwide annual turnover, whichever is higher.
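As a rough illustration of how that penalty ceiling scales with company size, the sketch below applies the "whichever is higher" rule for the AIA's top penalty tier. This is a simplified assumption for illustration; the Act sets lower ceilings for less severe categories of violations.

```python
# Illustrative ceiling for the AIA's top penalty tier, applying the
# "whichever is higher" rule: EUR 35 million or 7% of worldwide annual
# turnover. Lower tiers of violations carry lower ceilings.

def max_aia_fine(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

print(max_aia_fine(200_000_000))    # 35000000 -> the flat EUR 35M ceiling binds
print(max_aia_fine(1_000_000_000))  # 70000000.0 -> 7% of turnover binds
```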

Broadly, the AIA creates a risk classification scheme, which places AI systems into one of several categories. The categories are:

  • Unacceptable Risk. AI systems constituting an unacceptable risk are prohibited entirely. These include systems used to manipulate or exploit individuals, systems that classify or evaluate individuals based upon their personal traits, and emotion-recognition systems used in workplace and educational contexts.
  • High Risk. The AIA defines high risk systems as those presenting a significant risk to health, safety, or fundamental rights. Examples of AI systems falling under this category include those used in education, employment, healthcare, and banking settings. Providers of high-risk systems are subject to a number of strict regulations, including required registration in a public EU database. Additionally, providers of these systems must perform regular impact assessments and implement procedures that ensure transparency, security, and human oversight of their systems.
  • Limited Risk. For systems posing limited risks, such as chatbots interacting with humans and AI-generated content, the AIA imposes transparency obligations to ensure humans are informed that an AI system was involved. Providers of AI-generated content must ensure it is identifiable as such.
  • Minimal or No Risk. Minimal-risk AI uses, which present little to no risk to the rights or safety of individuals, can be freely used under the AIA. Examples include AI-enabled video games and spam filters. Most AI systems currently deployed are likely to fall under this category.
  • General Purpose AI (GPAI). GPAI refers to AI systems trained on broad datasets capable of serving a variety of purposes. Popular examples include OpenAI’s ChatGPT and DALL-E programs. Providers of GPAI models are required to produce technical documentation and release detailed summaries of their training data. For GPAI models that present systemic risks, providers must also implement cybersecurity measures, mitigate potential risks, and perform evaluations that include adversarial testing.

We will continue to monitor these ongoing state, federal, and international AI legislative efforts and provide you with the latest updates to help you prepare for what lies ahead.

Summer Associate Joe Cahill contributed to this post.

2024 AGA Annual Meeting Wrap-Up
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/2024-aga-annual-meeting-wrapup
Fri, 28 Jun 2024 11:00:00 -0400

The Attorney General Alliance (AGA) hosted its 2024 Annual Meeting this June, bringing together State AGs, staff, and industry for discussions on a number of topics important to AGs, including AI (again), nonpartisan cooperation, partnering with criminal law enforcement (including in the fight against fentanyl), supporting small businesses and free enterprise, and protecting America’s youth. We provide some highlights below.

Attorney General Formella of New Hampshire kicked off the conference with a fireside chat on AI, where platform representatives promoted open-access principles, called for more STEM graduates to compete internationally in AI, and discussed combatting child sexual abuse material (CSAM). Panelists noted that two AI services built on the same underlying model will not necessarily work the same way. South Carolina Attorney General Wilson followed with a panel on the ethics of attorneys using AI, where panelists noted that it may actually be a violation of ethical obligations to underutilize AI. Arizona Attorney General Miyares led another fireside chat touching on AI, asking his panelist about responsible AI and whether AI regulation will actually solve problems in the space.

Kansas Attorney General Kobach led a discussion regarding preemption, noting that it cuts both ways politically. While conservatives may view preemption as positive because unified federal norms make regulation easier for businesses, he noted that progressives want preemption in areas such as immigration. Panelists then discussed the value of ERISA preemption and pointed to alternatives for addressing potential issues with pharmacy benefit managers, such as using UDAP laws or regulating medical practitioners in that space instead.

Attorney General Ken Paxton of Texas spoke with small business owners in different industries and a professor to learn how tech platforms may both help and harm businesses. The small business owners noted that review platforms and targeted advertising have made positive impacts on their businesses by providing access to customers and feedback on their needs, while the professor countered that big tech also carries some additional downsides. General Paxton elaborated on the risks to small businesses, including a constantly changing and dynamic environment, government taxes, and a lack of “bailouts.”

Members of the conference applauded Oregon Attorney General Rosenblum as she marked her last year as Attorney General. She led a panel related to her National Association of Attorneys General initiative on America’s Youth. General Rosenblum began her panel by noting that she believed her initiative was something all AGs could come together on: the health, well-being, and success of young people. The panel focused on preventing computer-generated CSAM and sexploitation of children, then turned to the effectiveness of COPPA and promoting safety by design. Panelists remarked that even a tracking-cookie prompt where you can’t find the “no” button may be a dark pattern, especially when it comes to children online. Others noted that privacy torts can be used creatively, as the New Mexico AG’s office did, to bring lawsuits affecting children’s privacy.

The conference concluded with breakout sessions, including an update on the Organized Retail Crime enforcement space. There we learned about new legislation, including amendments to state INFORM laws and the creation of new task forces, from our own Paul Singer. Other panelists provided updates on the new scams criminals are using to evade detection, including sophisticated skimmer rings.

Stay tuned, as the National Association of Attorneys General Presidential Initiative conference will take place in early September to focus more on AGs protecting America’s youth.

Telemarketing in 2024 – A Mid-Year Review
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/telemarketing-in-2024-a-mid-year-review
Thu, 27 Jun 2024 11:00:00 -0400

As we approach the 2024 halfway mark, businesses that rely on texting and calling to promote their products and services face an onslaught of new and significant legal and regulatory developments. To help with tracking these developments all in one place, below we summarize key telemarketing law developments and corresponding timelines to keep in mind:

  • 1:1 Consent – At the end of 2023, the FCC adopted an amendment to the definition of “prior express written consent” under its TCPA rules to require that a consumer give specific consent to be contacted by a particular seller for marketing purposes, and that such consent must be “logically and topically related” to the context in which it was obtained. This rule will officially go into effect on January 27, 2025, but we have seen a trend among service providers in the industry (particularly calling and texting platforms) requiring that their customers implement 1:1 consent well ahead of that deadline (and make corresponding changes to their privacy policy about sharing consent data with third parties). It would be prudent for affected businesses to take this time to carefully review their opt-in processes and privacy policies to assess what changes are necessary from both a commercial and legal compliance perspective (for what a seller-specific consent record might capture, see the first sketch after this list).
  • AI and the TCPA – On February 8, 2024, the FCC voted unanimously in favor of a Declaratory Ruling that classifies AI-generated voices on robocalls as “an artificial or pre-recorded voice” under the TCPA. This means that calls using AI technology to generate a simulated or pre-recorded human voice must satisfy the TCPA’s consent requirements (including prior express written consent for marketing calls using AI). While the FCC focused the ruling on the common use and accessibility of AI-generated voices by bad actors to perpetrate fraud and spread misinformation, the development underscores the heightened regulatory scrutiny on a business’s use of AI to mimic human behavior for marketing purposes. The FTC also outlined in a recent blog post some of the potential consumer protection and privacy concerns that can arise from the use of AI chatbots to interact with consumers.
  • Expanded Opt-Out Rules – On February 15, 2024, the FCC adopted a Report and Order and Further Notice of Proposed Rulemaking to amend its TCPA rules and clarify the ways in which consumers can revoke consent to receive calls and texts. Among the changes were the adoption of various “per se” reasonable methods for revoking consent, including by texting the words “stop,” “quit,” “end,” “revoke,” “opt out,” “cancel,” or “unsubscribe.” The FCC also made clear that businesses cannot prescribe a particular method for revoking consent, and must honor reasonable opt-out requests within 10 business days (see the deadline sketch after this list). Importantly, while businesses are permitted to send a one-time text to clarify the scope of a consumer’s opt-out request if that consumer has previously consented to receive multiple types of messages, if the consumer does not respond to that message, consent is presumed revoked for all further non-emergency communications. The effective date for the amended revocation-of-consent rule is still uncertain, as it is undergoing review by the Office of Management and Budget. Once that review is complete, the FCC will issue a notice, and the rule will become effective six months thereafter. Businesses can prepare for this change by evaluating and testing their technology and processes to confirm they can honor opt-outs in accordance with the new requirements.
  • Telemarketing Sales Rule Changes – Looking beyond regulatory changes at the FCC, the FTC announced in March a significant update to the Telemarketing Sales Rule, most notably by expanding parts of the rule to business-to-business calls, and expanding the scope and timeline of recordkeeping obligations for telemarketers. These amendments generally became effective on May 16, 2024, except for the “call detail” records subsection, for which the FTC had previously announced a 180-day grace period to give affected businesses time to implement the systems, software, or procedures necessary to comply. As such, businesses will have until October 15, 2024 to adhere to that particular provision of the rule.
  • New and Updated State Telemarketing Laws. A number of recently-enacted state laws related to telemarketing have taken effect (or will take effect) in 2024, including:
  • Maryland – The “Stop the Spam Calls Act of 2023” became effective on January 1, 2024. Key provisions of the new law include a requirement for “prior express written consent” for telephone solicitations that involve “an automated system for the selection or dialing of telephone numbers,” as well as call time and frequency restrictions similar to those adopted in other states, and a private right of action for alleged violations. To date, we are not aware of any private litigant bringing forward an action in court under the new law.
  • Maine – Earlier this year, Maine adopted a first-of-its-kind amendment to its telephone solicitation law that requires solicitors to scrub against the FCC’s reassigned number database prior to initiating a call. While limited in scope due to underlying exemptions in the statute, the requirement will become effective on July 16, 2024.
  • Georgia – Several changes to an existing telemarketing law in Georgia were recently enacted, including: (1) eliminating the requirement for a “knowing” violation of the law to pursue enforcement; (2) extending liability for calls made “on behalf of any person or entity” in violation of the law; (3) allowing private plaintiffs to pursue claims for violations as part of a class action with no limitation on damages; and (4) creating a safe harbor defense for solicitations made to a consumer “whose telephone number was provided in error by another subscriber” if the caller “did not know, or have reason to know, that the telephone number was provided in error.” These amendments will become effective on July 1, 2024.
  • Mississippi – By a series of amendments to its existing telephone solicitation law, Mississippi severely restricted the ability of businesses to contact consumers by phone about Medicare Advantage plans (unless a consumer first initiates a call to the business about such plans), and effectively banned telemarketing for Medicare supplement plans. These restrictions are unique among state telemarketing regulations because they are narrowly focused on calls about certain Medicare plans, and may be challenged on First Amendment grounds. In the interim, however, the restrictions will take effect on July 1, 2024.
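For teams updating their opt-in flows for the 1:1 consent rule above, here is a minimal sketch of what a seller-specific consent record might capture. The field names are our own illustrative assumptions, not a prescribed format, and counsel should confirm what evidence of consent is sufficient.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    """Illustrative seller-specific consent record (field names are
    assumptions for this sketch, not a prescribed format)."""
    consumer_phone: str
    seller: str            # the single seller this consent covers (1:1)
    purpose: str           # should be logically and topically related to
                           # the context in which consent was obtained
    disclosure_text: str   # the consent language shown to the consumer
    obtained_at: datetime  # when consent was captured
    source_url: str        # where the opt-in occurred

record = ConsentRecord(
    consumer_phone="+15551234567",
    seller="Example Co.",
    purpose="marketing texts about home insurance quotes",
    disclosure_text="I agree to receive marketing texts from Example Co. ...",
    obtained_at=datetime(2024, 7, 1, 12, 0),
    source_url="https://example.com/quote-form",
)
```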
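Similarly, the FCC's requirement to honor opt-outs within 10 business days ultimately becomes a date computation in a compliance system. The deadline sketch below counts weekdays only and deliberately ignores holidays, an assumption a production implementation would need to revisit.

```python
from datetime import date, timedelta

def opt_out_deadline(received: date, business_days: int = 10) -> date:
    """Walk forward the given number of weekdays (Mon-Fri). Holidays are
    ignored here for simplicity; real systems should account for them."""
    deadline = received
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return deadline

print(opt_out_deadline(date(2024, 7, 1)))  # 2024-07-15
```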

If you have any questions about how these developments may affect your business, please contact Alysa Hutnik or Jenny Wainwright. For more telemarketing updates, subscribe to our blog.

Consumer Enforcement Overview: 2024 NAAG Consumer Protection Spring Conference
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/consumer-enforcement-overview-2024-naag-consumer-protection-spring-conference
Thu, 23 May 2024 09:00:00 -0400

Last week, State Attorneys General (AGs) and staff convened to discuss the hot topics in consumer protection in private and public sessions during the NAAG Consumer Protection Spring Conference. The Executive Director of NAAG, Brian Kane, started off the day with a theme that would echo throughout the panels – that businesses should be mindful that their industry may be defined by “the least of” the actors in the space. We provide some of the most relevant insights from the public sessions.

IS IT REAL OR IS IT NOT? A DIVE INTO DEEP FAKES, DARK PRACTICES, AND INDUSTRY’S RESPONSE

In yet another AI-related panel for the year, speakers focused on the potential consumer protection issues that could arise from deepfakes, and how deepfake technology has advanced rapidly even in the last few months. The panelists expressed that they are often more concerned with the distribution channels for AI, referred to as “gatekeepers” and platforms, than with the manufacturers themselves. They gave the example of a large technology platform’s efforts to curb AI-generated book publications by limiting uploads to 3 books per day – a solution the panelists found inadequate. But the speakers acknowledged that it is not always simple to determine the “right” limits. They also predicted more personalization in advertising through AI. Overall, they emphasized obtaining consent to the use of content and disclosing the use of AI as ways to address the problem.

AG staff in attendance raised questions such as how to educate consumers about the proper use of AI, whether there are any clear lines for what AI should not be allowed to do, and why they should regulate the industry when scammers won’t comply. In response, the panelists reminded the audience that regulators need to regulate the “good guys” and then go after the “bad guys” along the way.

PANEL OF ATTORNEYS GENERAL

Attorneys General Ellen Rosenblum of Oregon, Kwame Raoul of Illinois, and Edward Manibusan of the Northern Mariana Islands participated in a panel discussing their experiences with consumer protection issues as AGs. AG Rosenblum explained that we are all consumers, and having a free and fair marketplace is critical to consumers and businesses. AG Raoul said he was often surprised by the extent to which consumer protection impacted other areas of the office and how it could also be leveraged, for instance, to curb criminal activity. AG Manibusan noted prices and labeling as being especially important for his constituents given their geographic location.

On the power of the multistate consumer protection investigation, all three AGs agreed banding together for collective efforts is useful and involves compromise, with AG Raoul comparing the process to learning to play well together in a kindergarten sandbox. AG Raoul also emphasized the importance of injunctive relief and discouraged “unreasonable holdouts” from the group dynamic. AG Rosenblum noted that ideally multistate investigations could move faster, but pointed to some of the reasons they may not, including the complexity of the cases and the amount of documents to review. AG Rosenblum also mentioned that AGs are usually willing to take meetings, but it is important that they still communicate with staff to avoid “going behind their back.”

SOCIAL MEDIA LEGISLATION – A DISCUSSION OF EXISTING, PENDING, AND FUTURE LEGISLATION

Representatives from the AG offices of Minnesota, Arkansas, Utah, and New York presented on some of the latest social media laws that are pending or passed, and how they are working to overcome challenges. Many of these laws would require that companies provide certain default privacy settings and parental controls for teens. The industry representative on the panel described the challenges of complying with these types of laws, given the variety of content on platforms and concerns about free speech and access to information. Furthermore, requiring age verification means collecting more personal information, which could raise concerns both for privacy advocates and for consumers wary of handing over their data.

When the panelists were asked why social media legislation is needed in addition to their existing UDAP laws, the states explained that UDAP laws work on a fact-specific level but where there is an industrywide problem, they need to level the playing field. For example, one bad company can ruin things for everyone. Additionally, panelists said these statutes help provide additional information to parents and encourage parental involvement.

OTHER PANELS

In the FTC Rulemaking session, Tom Dahdouh of the FTC described the agency’s recent rulemaking efforts. He described a strong relationship with State AG offices as “vital” to the agency in the wake of the Supreme Court’s AMG decision, which curtailed the FTC’s ability to obtain monetary redress. The FTC cannot get redress in many instances, but states can. Dahdouh pointed to the recent FTC Collaboration Report and the 33 joint actions with states and local DAs since 2020. He also described the need for rulemakings as a reaction to AMG.

In the State Privacy session, representatives from California, Colorado, and Indiana described the similarities and differences between their comprehensive privacy laws and the authority and makeups of their enforcement teams.

CONCLUSION

NAAG hosts two consumer protection conferences a year. These are good events to learn about issues important to attorneys general across the country, which can be helpful for the business community. Moreover, these events are great opportunities to hear from and interact with consumer protection staff, who are often driving the enforcement initiatives at AG offices. It is important to connect with AGs and staff alike to stay on top of office priorities.

AGs Protect Children from AI (and Chainsaws)
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ags-protect-children-from-ai-and-chainsaws
Wed, 17 Apr 2024 11:00:00 -0400

Last week in South Carolina, AGs, staff, and members of the community gathered for the AI and Preventing Child Exploitation Seminar, presented jointly by the Attorney General Alliance (AGA) and the National Association of Attorneys General (NAAG). Sessions focused on robocalls, online platforms, youth digital wellness and mental health, and the potential benefits of AI.

Attorney General Perspective

The first panel, “AI and Exploitation of Children,” featured South Carolina Attorney General Alan Wilson and members of his staff: Whitney Michael, Senior Advisor; Joseph Spate, Assistant Deputy Solicitor General; and Kyle Senn, Senior Assistant Attorney General. This panel provided an excellent summary of the perspective of State AGs on combatting child exploitation and how AI can both harm and benefit society.

AG Wilson explained that social media and AI are replacing tobacco and opioids as the new bipartisan issues, with AGs, including Oregon Attorney General Ellen Rosenblum, working to keep the topics at the forefront. He explained that providing personal information over the internet is now expected and natural, and while our ability to protect ourselves has increased, so has the ability to hack. Unfortunately, predators take advantage of the fact that children are comfortable providing information online, and they use a variety of online platforms to exploit them.

AG Wilson said it best when he compared AI to a chainsaw – a valuable tool in the hands of a lumberjack, but a deadly weapon in the hands of Jason Voorhees. He touted the joint 54 state and territory letter to Congress spearheaded in part by his office. The letter, cosponsored by Oregon, North Carolina, and Mississippi, asked Congress to help the legal landscape evolve in light of changing technology, which he described as amazing yet capable of incomprehensible feats. AGs are working together to fill the gaps in current laws to prevent and enforce against the variety of ways AI can be used to create child sexual abuse material (CSAM). In the wake of this letter, AG Wilson explained, Congress is setting up an ad hoc committee to study AI, and other bipartisan bills are being introduced.

Industry Thoughts

We heard on other panels from industry representatives about how they are working to address child exploitation. One gaming platform described a range of tools, including AI moderation in combination with human moderators, to help combat child exploitation. It uses automated chat filtering for personal information and machine learning to remove inappropriate language that violates community standards. The platform scans each image upload using AI to ensure it is appropriate and compares it against hashed National Center for Missing and Exploited Children (NCMEC) databases. The platform does not allow images of real-life people and provides account monitoring by parents for users under 18. Finally, the platform reports to the FBI and NCMEC using automated tools and escalates review of “trusted flagger” reports. It also uses a law enforcement response tool to speed up subpoena response times.
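As a toy illustration of the hash-matching approach described above (our sketch, not the platform's actual system), an upload's digest is checked against a set of known hashes. Real CSAM detection relies on perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding; the cryptographic hash used here for simplicity only catches byte-identical files.

```python
import hashlib

# Hypothetical known-hash set; in practice this would be loaded from an
# NCMEC-provided database of hashes of known abuse imagery.
KNOWN_HASHES: set[str] = set()

def matches_known_hash(image_bytes: bytes) -> bool:
    """Flag an upload whose digest appears in the known-hash set.
    SHA-256 is used for simplicity; real systems use perceptual hashes."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```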

One AI platform described its safety-first principles as it seeks to benefit humanity. Policies outline the appropriate use, and the company constantly evaluates risk from pre-training to launch to ongoing monitoring. Pre-training excludes adult content, dark web, payroll and other content from data aggregators. Post training, automated and human evaluators work to tune the AI so it behaves in accordance with policies, such as refusing to answer when appropriate to avoid providing harmful material or personal information.

Conclusion

AGs, social and online platforms, and AI companies themselves are working to combat the dark side of AI, including child exploitation. However, third-party platforms or AI companies that fail to implement appropriate safeguards for children will likely encounter an AG inquiry in the civil or criminal realm.

What We Learned From … NAAG’s Director of the Center for Consumer Protection
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/what-we-learned-from-naags-director-of-the-center-for-consumer-protection
Thu, 04 Apr 2024 09:00:00 -0400

What trends are shaping consumer protection in 2024?

From kids on social media to fake reviews and junk fees, state AGs are working across state (and partisan) lines on initiatives that promise to mold the consumer protection landscape for years to come. In this post, we reflect on our conversation with Todd Leatherman, who works at the forefront of these issues as Director of the National Association of Attorneys General (NAAG) Center for Consumer Protection.

Trend 1 – Protecting America’s Online Youth

For state enforcers, children are top-of-mind, especially when it comes to social media. A coalition of 33 state AGs filed a federal lawsuit in California alleging Meta violated state consumer protection laws and the Children's Online Privacy Protection Act. The AGs claim that Meta knowingly designed and deployed addictive and harmful features on its social media platforms, intentionally addicting children and teens and misleading the public about whether its services were safe for younger children. A number of other states have filed similar lawsuits in state courts, including Nevada, which also targeted TikTok and Snap. These lawsuits are ongoing and will no doubt affect how social media platforms engage younger consumers.

This year, Oregon AG and NAAG President Ellen Rosenblum chose as her Presidential Initiative “America’s Youth: AGs Looking Out for the Next Generation.” The initiative and corresponding NAAG Presidential Summit will include programming on technology, physical health, mental and behavioral health, and financial literacy.

On the legislative front, we have seen new laws aimed at protecting young people online. Florida recently passed a law banning social media accounts for minors under 14 and requiring parental consent for 14- and 15-year-olds. Georgia may soon also require that minors under 16 obtain parental consent to create an account, following similar restrictions passed in Louisiana, Texas, Arkansas (currently enjoined pending litigation), and Utah. Generals Letitia James of New York and Rob Bonta of California have also advocated for state legislation targeting the addictive features of social media. Given all of this activity, we expect AGs to stay attuned to emerging issues affecting children for years to come.

Trend 2 – Big Tech’s Advertising Practices

For years, big tech has been a leading issue for bipartisan cooperation among state enforcers. Last year, we saw a $700 million settlement between Google and 53 state AGs over the Google Play Store, which led to significant reforms in Google’s practices, including how consumers access apps and how payments are processed. Currently, 38 state AGs and the Department of Justice have sued Google over alleged antitrust violations, including monopolizing the search market. The cases were consolidated, with closing arguments slated to begin May 1.

Since our conversation with Mr. Leatherman, DOJ and 16 other state attorneys general announced a landmark lawsuit against Apple alleging that it monopolized the smartphone market. This includes allegations that Apple intentionally makes it difficult for consumers to switch cellphones and undermines innovation, among other claims.

Trend 3 – Algorithms and AI

The promise and perils of AI have drawn major focus at AG offices across the nation and at NAAG, according to Leatherman. Last year, 54 AGs sent a letter to Congressional leaders encouraging them to study how AI may lead to child sexual abuse and exploitation online. Another coalition of 26 AGs submitted a comment to the FCC on the use of AI in robocalls, with the FCC later voting to ban robocalls using AI-generated voices. (Revisit our post on Washington’s new AI task force here.)

Now, we’re seeing AGs particularly concerned about racial and gender bias in AI programs used in employment, housing, and financial lending and services. Enforcers are also looking into the marketing of AI, including whether companies are overpromising on what the technology can actually provide. Given how quickly AI is advancing across sectors, we expect to see more scrutiny in the months ahead. And stay tuned for additional information on AGs and AI as our team will be reporting on the NAAG and AGA Southern Region Meeting on Artificial Intelligence and Preventing Child Exploitation occurring in April.

Trend 4 – Fake Reviews

Fake reviews, including misleading influencer content, have drawn AG attention. This year, 22 AGs submitted a letter to the FTC largely supporting a new rule that would govern and ban fake reviews. That rulemaking is ongoing.

States, including New York and Washington, have taken individual action against companies engaged in deceptive review practices. This includes instructing employees or associates to post positive reviews, threatening or intimidating consumers who post negative reviews, or requiring consumers to sign NDAs to receive services. Notably, states are able to enforce the Consumer Review Fairness Act, a federal law.

Trend 5 – Automatic Renewals

States continue to enforce their recently enacted automatic renewal statutes or provisions (for example, laws in California, New York, Washington D.C., and Virginia), which generally impose disclosure requirements, require that companies obtain affirmative consent from consumers, and mandate cancellation mechanisms. This includes requiring an online cancellation option when a consumer signs up for a service online. That said, states do not necessarily need a new law to target these practices as their general consumer protection laws likely apply. AGs may also enforce the federal Restore Online Shoppers' Confidence Act.

Trend 6 – Junk Fees

Companies that advertise one price and then tack on fees should beware. Enforcers are making so-called “junk” or hidden fees a priority. California has passed a new law governing fees, and Massachusetts is in the process of instituting new regulations governing them. Not to be outdone, the FTC has also proposed a rule on fees, with a virtual hearing to take place in late April. (This aligns with the Biden administration’s whole-of-government approach to junk fees, with other rulemaking and guidance out of the FCC, CFPB, HUD, and DOT.)

That said, AGs take the position they do not necessarily need new legislation to target fees. Pennsylvania has led the way in asserting claims under state consumer protection laws and the Consumer Financial Protection Act against companies that impose fees. Similarly, Connecticut and the FTC have joined forces in litigation against a car dealer that allegedly deceived consumers about the nature of fees and add-ons. And Washington D.C. has warned restaurants that service charges could be unlawful if they are not disclosed before an order is placed.

Trend 7 – Privacy

States continue to pass and enact new privacy laws. Earlier this year, New Hampshire became the 15th state to pass a comprehensive state privacy law, and several other privacy bills are currently making their way through the legislative process. Many of the new laws will become effective between this year and 2026, spurring enhanced AG interest in privacy matters.

In California, we saw the first investigative sweep in this arena, with General Rob Bonta sending letters to popular streaming apps and device companies alleging they failed to comply with California’s new privacy law. According to the office, the investigation will focus on opt-out requirements for businesses that sell or share consumer personal information.

Trend 8 – Veterans

While veterans have long been a priority for state AGs, the uptick in businesses offering to “counsel” or support veterans in applying for government benefits has sparked new AG activity in this space. Last year, a bipartisan group of 44 AGs sent a letter to Congress urging the body to pass legislation that further protects veterans in the application process, and the Texas AG’s office sued a company that allegedly misled veterans about its ability to help obtain benefits and charged excessive fees in the process.

Trend 9 – Health

In the health space, opioid marketing, vaping, and illegal cannabis products continue to take center stage. While the larger opioid cases have concluded, litigation is far from over. AGs have been leading the way in targeting manufacturers, distributors, and pharmacies that engaged in deceptive marketing tactics around opioids. We’ve also seen a focus on nicotine and cannabis products, particularly those that may appeal to children. A group of 33 AGs sent a letter to the FDA urging more stringent regulations on electronic nicotine delivery products, including on the marketing of e-cigarettes and the use of influencers to promote them. Connecticut and Nebraska have also cracked down on illegal marketing of cannabis products using their state consumer protection laws.

Trend 10 – Rapid Response

Many businesses fail to realize how substantial a role AGs play in emergencies and urgent consumer issues. AGs face public pressure to respond to events in real time. For instance, the Taylor Swift concert ticket debacle led to more than 2,600 consumer complaints in Pennsylvania alone.

And, when it comes to a market disruption or natural disaster, some states have specific price gouging laws that give state AGs enforcement authority. These laws vary by state, and it can sometimes be difficult for companies to know when they are in place. We’ve seen a rise in AGs targeting companies that raise prices on consumer staples following emergency situations, as well as charities that mislead consumers about donations in times of crisis.

Kelley Drye’s state AG team will continue to monitor consumer protection trends in 2024. To view our full conversation with NAAG’s Todd Leatherman, click here. To stay up-to-date with our AdLaw Access blog, subscribe here.

Attorney General Alliance Meeting Recap: Focus on Director Chopra’s Remarks
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/attorney-general-alliance-meeting-recap-focus-on-director-chopras-remarks
Tue, 02 Apr 2024 12:00:00 -0400

Last week, state attorneys general (AGs) gathered to discuss Nevada Attorney General and Attorney General Alliance Chair Aaron Ford’s Initiative, focusing on consumer protection education. Attendees heard from many panels discussing topics ranging from consumer financial literacy, digital literacy, and cybersecurity, to the continued hot topic of AI. We are highlighting the fireside chat between AG Ford and CFPB Director Rohit Chopra, as they discussed a variety of important topics and collaboration with State AGs.

Cooperation and Role of State AGs

Director Chopra began by complimenting the State AGs for their important consumer protection work, including ringing the alarm bells on the foreclosure crisis. While he discussed the roles of states throughout his remarks, he emphasized that states and the CFPB need to work together. He reminded the AGs that early on, the CFPB published a procedural rule clarifying that State AGs can bring suit under the CFPB’s organic statute, including a whole host of consumer credit and data laws, which several states have utilized. Director Chopra said the CFPB is looking for ways to collaborate with more states, noting the agency has been able to collect billions in penalties that can be used for consumer redress, even in unrelated cases, to make victims whole. Director Chopra asked that consumers send complaints to both State AG offices and the CFPB, because the CFPB’s consumer complaint system immediately routes complaints to the financial company at issue to get a response and resolution without staff intervention. The CFPB also is able to use data visualization tools to provide states with information about hot issues in different regions. The CFPB has used complaint data in collaboration with states to work on medical debt and other collection cases. Director Chopra said the CFPB is always looking upstream to identify warning signs and avoid future crises like the one caused by subprime mortgages.

Consumer Data and Security

Director Chopra explained that President Biden’s executive order on protecting sensitive personal data highlights a broad bipartisan interest in stopping the bulk transfer of consumer data. He explained that State AGs can work alongside the CFPB to enforce the Fair Credit Reporting Act, not only against the big three reporting companies, but also against data brokers. He noted the CFPB will propose additional rules to require data brokers to adhere to accuracy standards and otherwise protect consumer data. Director Chopra described the categories of data and lists that brokers can purchase about vulnerable consumers, and his concern that there be a way for these people to participate in the digital world without sacrificing privacy and security. He pointed to state laws requiring additional privacy and security protections, such as Washington’s recent My Health My Data Act, and said he supported the fight against preemption of state privacy laws.

Big Tech and AI

Director Chopra reminded the audience of big tech’s efforts to become payment processors, which provide them with consumer transaction data. He noted these payment methods have been used as a vector for imposter fraud, specifically citing the DOJ and states’ March lawsuit against Apple. Director Chopra explained that the CFPB has recruited additional technologists with knowledge of user interfaces and design, and the agency has hosted enforcer roundtables with states to discuss issues with AI and technology, including how to draft civil investigative demands (CIDs).

On AI, Director Chopra said the CFPB is looking at marketing and advertising for discriminatory or manipulative AI. The agency is also reviewing how loans are being underwritten, because if AI cannot explain why it denied credit to a person, that is a violation of the federal law requiring an explanation for denial. Director Chopra also noted that chatbots are another form of AI used by banks for customer service. He suggested it could be considered deceptive to use human names and “…” typing indicators to simulate human activity. Director Chopra said he wants to see institutions affirmatively describe these chatbots as robots and ensure the bots do not provide inaccurate information or a poor customer service experience.

Bank Relationships

Director Chopra said the CFPB’s work has shifted somewhat from mortgage lending issues at banks to non-banks. He said the agency has also heard from the AG community that nationally chartered banks have not cooperated on investigations, claiming preemption. Director Chopra said that when that happens, the CFPB will work with the states to obtain the information itself. His expectation is that banks work with the states to ensure consumers are protected.

Junk Fees

As a former businessperson himself, Director Chopra said pricing consultants he encountered in the past left a big impression on him. He noted that industries such as air travel, event ticketing, and banking have made it difficult to compare pricing, resulting in reduced competition. He described certain bank fees, such as a paper statement fee charged when nothing is printed and no paper is sent, as “fake fees,” and harkened back to past CFPB actions against banks that reordered payments to trigger multiple overdraft fees. Director Chopra also said that some credit card issuers created a business model based on rooting for people to be late, generating late fees. The CFPB has proposed rules to close what he described as a loophole in the credit industry, stating that people understand they will have to pay interest but do not understand the other layers of fees they may not be able to control. Director Chopra also pointed to potential concerns with credit card reward “bait and switch” offers, a core truth-in-advertising issue. Though the CFPB is using rulemaking and enforcement actions to combat junk fees, Director Chopra also gave credit to the business community for taking the initiative to become more upfront and transparent.

Stay tuned as our team will be hearing more from the State AG community on AI and other tech topics in less than two weeks at the NAAG and AGA’s Southern Regional Meeting/Artificial Intelligence and Preventing Child Exploitation Seminar.

Washington State Poised to Launch Artificial Intelligence Task Force
https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/washington-state-poised-to-launch-artificial-intelligence-task-force
Thu, 21 Mar 2024 09:30:00 -0400

As we have previously reported, state attorneys general (AGs) have great interest in artificial intelligence (AI), and we do not see this stopping anytime soon. This time, our focus is on a bipartisan legislative proposal from Washington Attorney General Bob Ferguson to create an AI task force, which the Washington State Legislature passed (Senate Bill 5838) and which now awaits the governor’s signature.

The 19-member task force would consist of technology industry representatives, a civil liberties organization representative, subject matter experts, and other stakeholders. Indicative of the importance AGs place on protecting children, one of those members must represent a statewide teachers association. The task force would also include a representative of a statewide retail association and of an independent business association. The task force would meet at least twice a year to review policies, identify emergent risks, and provide recommendations to the legislature related to AI technology. The bill provides that the AG’s office administer the task force, whose duties would also include:

  • Examining the development and use of generative AI by both private and public sector entities; and
  • Making recommendations to the legislature regarding standards for the use and regulation of generative AI systems to protect the safety, privacy, and civil and intellectual property rights of the state’s citizens.

While businesses should keep an eye on AI developments in Washington state, as the AG is generally at the forefront of many consumer matters, the task force’s reporting won’t be seen for quite some time if the bill passes; an interim report is due December 1, 2025, and a final report is due June 1, 2027.

AGs remain incredibly focused on AI and are continually looking for opportunities to develop policy and enforcement initiatives around this powerful technology. AG Ferguson’s emphasis on AI through an inclusive task force is not a novel initiative, as states such as Alabama, Massachusetts, New Jersey and Wisconsin have already launched similar task forces:

  • Massachusetts’s task force was established on February 14 to study AI and generative AI technology and its impact on the state, private businesses, higher education institutions, and constituents. The aim of Massachusetts’s new task force is to provide recommendations for how the state can best support its businesses in leading sectors around AI adoption. In addition, the task force will provide recommendations focused on startups’ ability to scale and succeed in Massachusetts. The task force will present its final recommendations to the governor later this year.
  • Alabama’s task force launched on February 8 and will recommend policies for the responsible and effective use of generative AI in state executive-branch agencies. A report on the task force’s findings on current generative AI use in executive branch agencies and their recommendations for responsibly deploying such technology is due to the governor by November 30, 2024.
  • New Jersey’s task force launched on October 10, 2023 and is focused on studying emerging AI technologies. New Jersey’s task force is also responsible for analyzing AI’s potential impacts on society as well as preparing recommendations to identify government actions encouraging the ethical use of AI technologies. The task force’s findings and recommendations will be presented to the governor no later than 12 months from the effective date of the order.
  • Wisconsin’s task force launched on August 23, 2023 to study the effects of AI on Wisconsin’s workforce. The task force is responsible for gathering and analyzing information to produce an advisory action plan for the governor, such as recommending policy directions and investments related to workforce development and educational systems to capitalize on the AI transformation. The goal is to have an action plan for the governor’s consideration in early 2025.

There will likely be more states developing task forces and legislation in the coming years, as states continue to balance AI’s utility with its risk. And don’t forget: state consumer protection laws are broad and already apply to AI.

FCC Adopts Changes to TCPA Consent Revocation Rules https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/fcc-adopts-changes-to-tcpa-consent-revocation-rules https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/fcc-adopts-changes-to-tcpa-consent-revocation-rules Tue, 27 Feb 2024 10:00:00 -0500 At its most recent open meeting on February 15, 2024, the Federal Communications Commission (FCC or Commission) voted unanimously to adopt yet another round of rule changes related to the Telephone Consumer Protection Act (TCPA). These rule changes, focused on expanding consumers’ ability to revoke consent to receive calls and texts, build on the FCC’s other recent TCPA actions – namely the adoption of a one-to-one consent requirement, and a ruling that calls to consumers using artificial intelligence technologies are considered “artificial or prerecorded” messages subject to regulation under the TCPA.

The specific rule changes address three issues related to revocation of consent, explained in more detail below. The item also includes a further notice of proposed rulemaking on whether the TCPA applies to autodialed or artificial/prerecorded voice calls or texts from wireless providers to their own subscribers, as well as a possible mandate for an automated opt-out mechanism on every call that contains an artificial or prerecorded voice.

The timing for implementation of the rule changes is unclear at this point, because most of them will be delayed until six months after completion of a review by the Office of Management and Budget (OMB).

Nevertheless, these new requirements should be reviewed carefully, including through consultation with counsel, to prepare for the upcoming changes. For example, businesses will need to review their internal processes and modify them as appropriate to ensure that they are properly processing opt-out requests, and may need to develop new methods for honoring non-standard language opt-outs, as well as update training and compliance materials to adhere to the new requirements.

A general note for readers: Throughout this post, you’ll see the terms “robocall” and “robotext” when we quote from the order. To be clear, the TCPA, and therefore the FCC’s regulatory authority, is limited to autodialed and/or artificial/prerecorded voice calls and texts.

Changes to the Revocation of Consent Rules

1. “Reasonable” Means of Revoking Consent

The order codifies the FCC’s longstanding position that a called party may revoke consent “by using any reasonable method.” This “reasonable method” standard was established in a 2015 declaratory ruling that, after vigorous litigation (including on the issue of revocation of consent), was ultimately upheld by the U.S. Court of Appeals for the D.C. Circuit.

To provide further clarification on what constitutes a “reasonable” means of revoking consent, the FCC put forth the following specific requirements:

  • “Any revocation request made using an automated, interactive voice or key press-activated opt-out mechanism on a call; using the words ‘stop,’ ‘quit,’ ‘end,’ ‘revoke,’ ‘opt out,’ ‘cancel,’ or ‘unsubscribe’ sent in reply to an incoming text message; or pursuant to a website or telephone number designated by the caller to process opt-out requests constitutes a reasonable means per se to revoke consent.”
  • “If a reply to an incoming text message uses words other than ‘stop,’ ‘quit,’ ‘end,’ ‘revoke,’ ‘opt out,’ ‘cancel,’ or ‘unsubscribe’ the caller must treat that reply text as a valid revocation request if a reasonable person would understand those words to have conveyed a request to revoke consent.”
  • “Should the text initiator choose to use a texting protocol that does not allow reply texts, it must provide a clear and conspicuous disclosure on each text to the consumer that two-way texting is not available due to technical limitations of the texting protocol, and clearly and conspicuously provide on each text reasonable alternative ways to revoke consent.”

Callers will not be permitted to “designate any exclusive means to request revocation of consent,” and will be required to honor revocations made in a reasonable manner within 10 business days of receipt of the request.
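
For teams updating their inbound-message handling, the per se keyword rule lends itself to a simple screening step. Below is a minimal sketch in Python, assuming an SMS pipeline that can route messages to a human review queue; the keyword list comes from the order, but the normalization, matching, and routing logic are illustrative assumptions, not a compliance-approved implementation.

    # A minimal sketch, not legal advice: the per se keyword list comes from
    # the order, but the normalization, matching, and review-queue routing are
    # hypothetical design choices that a compliance team would need to vet.
    PER_SE_KEYWORDS = {"stop", "quit", "end", "revoke", "opt out", "cancel", "unsubscribe"}

    def classify_reply(reply_text: str) -> str:
        """Classify an inbound SMS reply for consent-revocation handling."""
        normalized = " ".join(reply_text.lower().split())
        if normalized in PER_SE_KEYWORDS:
            return "revoked"  # per se revocation; honor within 10 business days
        if set(normalized.split()) & PER_SE_KEYWORDS or "opt out" in normalized:
            return "revoked"  # e.g., "please stop texting me"
        return "review"       # non-standard wording goes to a human for the
                              # "reasonable person" assessment

Erring on the side of treating ambiguous replies as revocations is the compliance-safe direction, given the rebuttable presumption in the consumer’s favor described below.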

If a consumer attempts to revoke consent using a method or language other than what is prescribed in the rules, or in the event of a dispute, the order establishes a standard of review based on the “totality of circumstances,” but there is a “rebuttable presumption” that the consumer has properly revoked consent if the consumer can “produce evidence that such a request has been made.”

2. Confirmatory Opt-Out Texts

The order also codifies a previous FCC determination that a “one-time text message confirming a request to revoke consent from receiving any further calls or text messages does not violate [the TCPA] as long as the confirmation text merely confirms the text recipient’s revocation request and does not include any marketing or promotional information, and is the only additional message sent to the called party after receipt of the revocation request.” In general, the order requires the confirmatory text to be sent within 5 minutes of receipt of the opt-out request, or “the sender will have to make a showing that such delay was reasonable.”

Additionally, the as-written rule provides that “[t]o the extent that the text recipient has consented to several categories of text messages from the text sender, the confirmation message may request clarification as to whether the revocation request was meant to encompass all such messages; the sender must cease all further texts for which consent is required absent further clarification that the recipient wishes to continue to receive certain text messages.” The above language only contemplates clarifying text message opt-outs, and in the order, the FCC states its intent to “limit this opportunity to request clarification to instances where the text recipient has consented to several categories of text messages from the text sender” and that “this rule will give consumers an opportunity to specify which types of text messages they wish to no longer get.” However, the FCC in the next sentence states that the “request for clarification can seek confirmation that the consumer wishes to opt out of all categories of messages from the sender, provided the sender ceases all further robocalls and robotexts absent an affirmative response from the consumer that they do, in fact, wish to receive further communications from the sender.” This arguably could encompass both voice calls and text messages.

Finally, “the timing of the confirmation text does not impact the obligation to honor the revocation within [10 business days after receipt of the request].” And a “lack of any response to the confirmation text must be treated by the sender as a revocation of consent for all robocalls and robotexts from the sender.”
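
Translated into pipeline terms, the timing and non-response rules might look like the sketch below. The 5-minute window and 10-business-day deadline come from the order; the function names and session model are hypothetical.

    # Hypothetical handling of a confirmatory opt-out text. The timings come
    # from the order; everything else is an illustrative assumption.
    from datetime import datetime, timedelta

    CONFIRMATION_WINDOW = timedelta(minutes=5)   # from the order
    REVOCATION_DEADLINE_BUSINESS_DAYS = 10       # from the order

    def confirmation_is_timely(opted_out_at: datetime, confirmed_at: datetime) -> bool:
        """True if the confirmatory text went out within the 5-minute window;
        otherwise the sender must be able to show the delay was reasonable."""
        return confirmed_at - opted_out_at <= CONFIRMATION_WINDOW

    def resolve_no_reply(consented_categories: set[str]) -> set[str]:
        """No response to a clarifying text: treat consent as revoked for all
        robocalls and robotexts from the sender."""
        return set()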

The order makes clear that an opt-out message cannot attempt to persuade the recipient to reconsider their decision to opt out. However, the FCC has previously suggested that “confirmation texts that include contact information or instructions as to how a consumer can opt back in fall reasonably within consumer consent.”

[Note: This particular rule change will not be subject to an OMB review, and is expected to become effective 30 days after the order is published in the Federal Register. While publication time can vary from a few days to a few weeks, affected parties should be prepared for this rule to go into effect as early as the end of March 2024.]

3. Scope of Revocation of Consent

The order acknowledges that certain types of calls and texts do not require consent, and clarifies that “when a consumer revokes consent with regard to telemarketing robocalls or robotexts, the caller can continue to reach the consumer pursuant to an exempted informational call, which does not require consent, unless and until the consumer separately expresses an intent to opt out of these exempted calls.”

It explains that “[w]here the consumer has revoked consent in response to a telemarketing call or message, it remains unclear whether the consumer has expressed an intent to opt out of otherwise exempted informational calls absent some indication to the contrary. … If the revocation request is made directly in response to an exempted informational call or text, however, this constitutes an opt-out request from the consumer and all further non-emergency robocalls and robotexts must stop.”

Additionally, “when consent is revoked in any reasonable manner, that revocation extends to both robocalls and robotexts regardless of the medium used to communicate the revocation of consent. For example, if the consumer revokes consent using a reply text message, then consent is deemed revoked not only to further robotexts but also robocalls from that caller.”

Further Notice of Proposed Rulemaking

In addition to adopting the rule changes outlined above, the item adopted at the open meeting also includes a further notice of proposed rulemaking (FNPRM) to seek comment on two issues. First, the FCC asks whether autodialed and/or artificial/prerecorded voice calls and texts from wireless providers to their own subscribers are subject to the TCPA. The FNPRM suggests the FCC thinks the answer is “yes,” which in turn leads to questions about whether a wireless carrier would have to get specific consent to send autodialed or prerecorded calls or messages to their customers or whether they “satisfy any TCPA consent obligation pursuant to the unique nature of the relationship and service that they provide to their subscribers.” The FCC then asks whether such consent based on that relationship would extend to calls and texts that contain telemarketing or advertisements. The FNPRM also proposes “that wireless subscribers, as any other called party, be able to revoke such consent by communicating a revocation of consent request to their wireless provider and that such request must be honored.” Second, the Commission seeks comment on a proposal by the National Consumer Law Center to “require an automated opt-out mechanism on every call that contains an artificial or prerecorded voice.”

Initial comments in response to the FNPRM will be due 30 days after the item is published in the Federal Register, and reply comments will be due 45 days after publication.

* * *

If you have any questions about how these changes may affect your business, or are interested in filing comments, please reach out to Alysa Hutnik or Jenny Wainwright. You can also hear more about this order on Kelley Drye’s Full Spectrum podcast. For more telemarketing updates, please subscribe to our blog.

Commerce Proposes KYC and Other Cybersecurity Requirements on Cloud Services and AI Training https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/commerce-proposes-kyc-and-other-cybersecurity-requirements-on-cloud-services-and-ai-training https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/commerce-proposes-kyc-and-other-cybersecurity-requirements-on-cloud-services-and-ai-training Tue, 06 Feb 2024 11:00:00 -0500 On January 29, 2024, the Commerce Department’s Bureau of Industry and Security (BIS) published a notice of proposed rulemaking (NPRM) introducing a Customer Identification Program (CIP) and other requirements applicable to U.S. providers and foreign resellers of Infrastructure as a Service (IaaS) products. The proposal also includes reporting requirements covering foreign transactions with U.S. cloud services to train “dual-use” AI foundation models that may enable malicious cyber activity. The NPRM implements Executive Orders addressing threats to U.S. critical infrastructure or national security posed by malicious, cyber-enabled activities.

The Commerce Department is soliciting comment on the proposed rules for 90 days, with submissions due to the agency by April 29, 2024. Key features of the NPRM and areas for comment are summarized below.

Customer Identification Program

The new rule would require that U.S. providers of IaaS products (including U.S. resellers) implement and maintain a written, risk-based Customer Identification Program (CIP). The CIP is a Know-Your-Customer (KYC) program that would consist of data collection procedures for ascertaining and verifying the identities of current and prospective customers. Importantly, the requirement extends to confirming beneficial owners. For many companies, the requirements extend beyond the identification information currently collected from customers. Moreover, U.S. IaaS providers would need to ensure that foreign resellers of their IaaS products implement and maintain adequate CIPs, and would need to terminate their relationships with foreign resellers who do not adequately comply. To reduce compliance burdens, the Department proposes to allow foreign resellers, by agreement, to adopt or reference CIPs created by U.S. IaaS providers. Providers would need to report to Commerce that they and their foreign resellers have a CIP, and annually certify information about the CIP thereafter. Although the Department is considering an adjustment period, compliance with any final rule would be required within one year of publication.

In response to comment, the Department has clarified that foreign subsidiaries of U.S. IaaS providers would not be covered under the current interpretation of the rules.

Additionally, the NPRM envisions a mechanism for requesting exemption from CIP requirements, and requests comment on proposed standards and procedures for adjudicating the same. The Department also welcomes information regarding (1) security best practices to deter abuse of U.S. IaaS products and (2) safe harbor activities that may form the basis of an exemption.

Special Measures

The NPRM proposes a procedure for imposing restrictions on certain foreign persons opening or maintaining IaaS accounts. Notably, the Department would be empowered to impose restrictions both on specific foreign actors and on all customers and potential customers within a specified foreign jurisdiction. If the Department exercises this authority, companies would need procedures in place to make sure prohibited foreign parties cannot open or maintain accounts. The Department would conduct a thorough investigation, on its own initiative or upon referral from other executive agencies or providers, to determine whether reasonable grounds exist that warrant special intervention:

  • For foreign actors, the Department would need to find reasonable grounds that the person has established a pattern of conduct of offering U.S. IaaS products that are used for malicious cyber-enabled activities or directly obtaining U.S. IaaS products for use in malicious cyber-enabled activities; and
  • For foreign jurisdictions, the Department would need to find a significant number of foreign persons offering U.S. IaaS products that are, in turn, used for malicious cyber-enabled activities, or a significant number of foreign persons directly obtaining U.S. IaaS products and using them in malicious cyber-enabled activities.

AI Training

In accordance with the Executive Order, the proposed rule would require reports to the Department on instances of “training runs” by foreign persons for “large AI models with the potential for malicious cyber-enabled activity.” The requirement would cover transactions that result, or could result, in AI training meeting certain technical conditions. Providers would need to build in procedures to identify potential transactions for reporting.

By way of example, the Department notes that a foreign corporation that proposes to train a large AI model on the computing infrastructure of a U.S. IaaS provider, and signs an agreement for such training, would be covered by the proposed requirement so long as the AI model’s specifications meet certain technical conditions. At this point, the Department’s standard for determining what technical conditions trigger the AI reporting requirement would reference interpretive rules published in the Federal Register and be updated based on technological advancements.
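
For a rough sense of scale, a provider’s screening procedures might start with a back-of-envelope compute estimate. In the sketch below, the 10^26-operation figure is the interim threshold set in the underlying Executive Order, and the 6 × parameters × tokens approximation is a common rule of thumb for dense transformer training; neither is the Department’s own test, so treat this purely as an illustration.

    # Back-of-envelope screen, not the Department's test. 1e26 operations is
    # the interim threshold in the underlying Executive Order; the
    # 6 * parameters * tokens estimate is a common rule of thumb.
    INTERIM_THRESHOLD_OPS = 1e26

    def may_trigger_reporting(parameters: float, training_tokens: float) -> bool:
        """Rough check of whether a proposed training run may meet the conditions."""
        estimated_ops = 6 * parameters * training_tokens
        return estimated_ops >= INTERIM_THRESHOLD_OPS

    # Example: 1e12 parameters trained on 2e13 tokens gives 1.2e26 operations,
    # which would land above the interim threshold.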

Nonetheless, the Department seeks comment on (1) the definition of “large AI models with the potential for malicious cyber-enabled activity” and (2) what red flags the Department should adopt that would create a presumption that a foreign person is training an AI model meeting the requisite technical conditions.

The NPRM outlines several other elements of and considerations relating to the proposal, including data collection requirements and a discussion of the cost burdens associated with implementing a CIP. The Department is also soliciting comment on several other areas of the rule, including challenges that U.S. IaaS providers may face in investigating and remediating malicious cyber activity, the potential impact of the rule on small businesses, and more. Again, any such comments must be received by the Department by April 29, 2024.

AGs, AI, and Robocalls https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ags-ai-and-robocalls https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ags-ai-and-robocalls Thu, 25 Jan 2024 15:00:00 -0500 Two of the hottest consumer protection topics for state attorneys general (AGs) are robocalls and artificial intelligence (AI). AGs have been prioritizing the fight against robocalls for many years, and AI seems to be on the agenda of nearly every AG conference in recent memory. These two consumer protection issues intersected in the FCC’s notice of inquiry (NOI), which sought comment to better understand the impact of emerging AI technologies on robocalls and robotexts. Because these two issues are a priority for many AGs, it is not surprising that a bipartisan group of 26 AGs took this opportunity to provide comments.

In their comments, the AGs voiced support for the work of the FCC, other federal regulators, and responsible actors in the telecom industry who have worked collaboratively to fight illegal robocalls and text messages. The AGs then focused on whether calls made using AI should be treated the same as calls made by a live agent under the Telephone Consumer Protection Act (TCPA). This distinction is of great consequence because, under the TCPA, robocalls (calls made using a prerecorded or “artificial voice”) are generally prohibited unless the caller obtains the prior express written consent of the consumer.

In their comments, the AGs take the position that any type of AI technology that generates a human voice should be considered an “artificial voice” for purposes of the TCPA. Consequently, if any TCPA-regulated entity wants to call a consumer using this technology, it should follow the TCPA’s requirements, including obtaining prior express written consent. The AGs also stated the FCC should reject future arguments that a “business’s advanced AI technology acts as a functional equivalent of a live agent because it has been programmed to interact with the called party,” citing the FCC’s past rejection of soundboard technology.

Businesses that use AI to contact consumers, or are considering doing so, should be on the lookout for potential next steps by the FCC related to the NOI to address issues presented by the intersection of AI and robocalls/robotexts. We know the AGs will be watching.

NAAG Capital Forum Wrap Up 2023 Part 1– More AI and AGs https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/naag-capital-forum-wrap-up-2023-part-1-more-ai-and-ags https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/naag-capital-forum-wrap-up-2023-part-1-more-ai-and-ags Wed, 03 Jan 2024 10:00:00 -0500 The National Association of Attorneys General (NAAG) closed out the year with its 2023 Capital Forum in early December. This year’s Forum focused heavily on AI questions and concerns and past and future NAAG Presidential Initiatives. In this first post, we will cover the highlights of the AI panels.

Enterprise AI Strategy for Government

Attorneys General David Yost of Ohio and Brenna Bird of Iowa kicked off the first panel, which focused on how government agencies can implement AI. Panelists reminded the AGs that AI has been around for many years, though generative AI tools only became widely available last year. One comparison offered: AI is about as good as a legal intern, so users should check and review its output to make sure it is doing a good job. Attorney General Platkin of New Jersey stated that while AI is not always a negative thing, it can produce “hallucinations,” giving an example where a bio written with AI described him as the AG of Pennsylvania. However, AG Platkin later described the benefits his state has seen using AI to review police body camera information more efficiently. During the session, panelists described considerations for how governments can roll out AI in a thoughtful way, including upgrading technology generally (the cloud) and assigning individuals responsibility for AI within the organization.

Protecting the Public in the Age of AI – What Tools Are Right for the Job?

Attorneys General Andrea Campbell of Massachusetts and John Formella of New Hampshire moderated the next panel, shifting to enforcement and regulation of AI. AG Formella began by emphasizing that regulators do not want to get in the way of new technologies but rather should explore practical ways to mitigate harm. Panelists described the different types of AI and the importance of monitoring inputs to help prevent errors in the end use case. When describing the AI landscape, the panelists agreed that there is a historical parallel to the dot-com bubble, where regulation was avoided at first and the industry is now subject to greater scrutiny.

Panelists discussed an example of using AI for restaurant recommendations; while AI in that situation may be low risk, even low-risk use cases should be transparent and must not become deceptive. The panel emphasized that consumers need to understand when AI is being used. When AG Formella asked which immediate harms AGs should focus on, the panelists pointed to less well-known risks, including the “poisoning” of inputs, where inaccurate data gets reused and amplified as misinformation. In addition, high-risk AI uses should be transparent and subject to robust monitoring. Discrimination bias is already documented, but the panelists said AGs should keep in mind that design choices are often embedded, and the end user should perhaps not be held accountable if the designer was at fault. When AG Campbell asked about bias and discrimination considerations, panelists pointed to existing laws already in the AG toolkit, including anti-discrimination and fair lending laws. Regarding unlicensed practices, such as the use of AI for medical or legal advice, AG Formella said there are already enforcement tools such as New Hampshire’s consumer protection statute, but that states could certainly still beef up the laws. Other panelists pointed out that lawyers already have ethical obligations, and the state of California even has guidance for lawyers on the use of AI.

AG Campbell asked for more information on how consumer protection and antitrust apply to AI. The panelists described how AI may have become the most efficient spam generator, causing more mundane and insidious problems for society. Fake information and doctored content may erode public trust. Some panelists also raised concern that AI may ultimately create an environment that concentrates power and influence. It can also be used to generate thousands of comments to rulemakings or thousands of complaints in a day. Regarding future enforcement, panelists questioned what remedies could be applied – including providing data to universities or nonprofits.

The Role of States in Internet Policy

Attorney General Phil Weiser of Colorado moderated this panel, consisting of current and former FTC officials (Samuel Levine, Director of the Bureau of Consumer Protection, and Maureen Ohlhausen, former Chair), academia (Prof. Danielle Citron), and industry/former FCC (Michael Powell, President & CEO of the NCTA and former FCC Chair). While the topic covered regulation of the internet generally, it also specifically covered AI. Levine echoed the previous panel’s concerns about the “history lesson” of the internet and the desire to be more proactive with AI by coming up with principles, stating that the FTC has made clear it believes Section 5 applies to AI use and deployment. Levine cautioned not to let the perfect be the enemy of the good when it comes to taking steps now to protect against fraud and inaccuracy and to protect data security and privacy.

AG Weiser agreed with other panelists that legacy institutions often think about how they used to do things, when they should continue to look at bringing in new tools for new technologies. Levine said that while the FTC has 15 technologists, that is not enough. However, he also said in defense of institutions that UDAP provisions have been incredibly versatile over the years, adapting to radio, TV, the internet, and even AI, and that this flexibility was by design. Ohlhausen pushed back somewhat, explaining that Congress did put guardrails on unfairness, that courts are currently more skeptical of regulatory agencies, and that she would hate to see the FTC lose the authority it has.

AI and Child Exploitation

Finally, there was a brief session with South Carolina Attorney General Alan Wilson and New Mexico Attorney General Raul Torrez. AG Wilson began by pointing to his office’s leadership of a letter, joined by 54 attorneys general, asking Congress to look at how AI may impact child exploitation and sex abuse laws at the federal and state levels. AG Wilson summarized the letter, explaining how AI can take a child’s ordinary photo and create child sexual abuse material (CSAM), or can wholly create CSAM using generative abilities. AG Wilson said the letter urges Congress to create a commission at the federal level to be proactive on these issues and to study where to evolve the laws on AI. He also asked colleagues in the states to consider using the letter to Congress as a template for a letter to their own state legislatures.

AG Torrez discussed his background as an internet-crimes-against-children prosecutor at the office he now leads. He said his expectation is that a company that enables a depiction of CSAM can be held legally responsible, and wants to work with federal and state prosecutors to make sure they have the tools they need. AG Torrez suggested that corporate leaders need to be committed to solve the problem and get in front of the issue. [Note that the same day, AG Torrez’s office announced a lawsuit focused on similar issues.]

Bottom line? AGs remain incredibly focused on AI, and will continue to look for opportunities to develop policy and enforcement initiatives around AI in 2024.

A Conversation with NAAG and AGA Executive Directors https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/a-conversation-with-naag-and-aga-executive-directors https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/a-conversation-with-naag-and-aga-executive-directors Wed, 22 Nov 2023 09:00:00 -0500 A Conversation with NAAG and AGA Executive Directors

December 14 | 2:00 p.m. – 3:00 p.m. ET

Join Kelley Drye State Attorneys General practice Co-Chair Paul Singer, Special Counsel Abby Stempson, and Senior Associate Beth Chun, along with the executive directors of the National Association of Attorneys General (NAAG) and the Attorney General Alliance (AGA), for a discussion on the significance of these organizations and state attorneys general to the business community. Guest speakers Brian Kane, Executive Director of NAAG, and Karen White, Executive Director of AGA, will highlight:

  • The importance of businesses understanding AG priorities, which include hot topics such as:
    • Data privacy, artificial intelligence, consumer protection, organized retail crime, and cannabis
  • Each organization’s history, membership, and leadership
  • How businesses can use NAAG and AGA as a resource
  • State AG elections and other items of interest in the new year

Register Here

Big Brother & Biased Bots: Practical Considerations for Using AI in Employment Decision-Making https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/big-brother-biased-bots-practical-considerations-for-using-ai-in-employment-decision-making https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/big-brother-biased-bots-practical-considerations-for-using-ai-in-employment-decision-making Mon, 02 Oct 2023 08:00:00 -0400 The adoption of artificial intelligence (AI) in the workplace is accelerating with an increasing number of employers integrating AI-related technologies into every stage of the employment lifecycle – from recruitment to separation. While these technologies offer employers opportunities to streamline certain processes and make others more objective, they also pose certain challenges and legal risks.

In a recent webinar, Kelley Drye Partners Kimberly Carter and Katherine White, together with Justina K. Rivera, General Counsel & Deputy Comptroller for Legal Affairs at the NYC Office of the Comptroller, explored the current AI legal and regulatory landscape in the U.S. and the opportunities and challenges associated with using AI-related technologies in the employment context. In this blog post, we summarize the high-level takeaways from the session.

AI Legal and Regulatory Landscape in the U.S.

Currently, no federal law specifically regulates the use of AI-related technologies. However, in recent years, federal regulators, including the Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ), have taken certain steps that demonstrate that they have been (and will remain) focused on the use of AI in the workplace.

Most notably, in April 2023, the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the DOJ’s Civil Rights Division, and the EEOC issued a joint statement noting that existing laws apply to the use of AI-related technologies and affirming that each agency/department will use its existing statutory authority to protect against unlawful discrimination in AI systems.

Moreover, in the absence of federal legislation regulating the use of AI-related technologies, certain state and local lawmakers have enacted laws that regulate the use of such tools in the employment context.

  • Illinois: In 2019, Illinois enacted the Artificial Intelligence Video Interview Act (effective January 1, 2020), which, among other things, requires an Illinois-based employer to provide notice and obtain a job applicant’s consent prior to using AI to analyze the applicant’s video interview and consider the applicant’s fitness for a position.
  • Maryland: In 2020, Maryland enacted House Bill 1202 (effective October 1, 2020), which prohibits a Maryland employer from using certain facial recognition services during an interview without the job applicant’s consent.
  • New York City: In 2021, New York City enacted the most expansive law in the U.S. regulating the use of AI in the employment context to date. Local Law 144 (effective January 1, 2023; enforceable July 5, 2023) prohibits a covered employer/employment agency from using an automated employment decision tool in hiring or promotion decisions unless: (1) the tool has been subject to an annual bias audit; and (2) the employer/employment agency has provided a notice to each job applicant or employee at least 10 business days prior to the use of the tool.

Several other states, including California, Massachusetts, New Jersey, and Vermont, are considering legislation that would also regulate AI in the workplace.

Putting It Into Practice

As federal, state, and local lawmakers and regulators continue to respond to the proliferation of AI-related technologies, an organization that is using these tools in the employment context should take certain steps to ensure legal compliance and help avoid regulatory scrutiny, including the following:

  • Assess current uses of AI-related technologies to determine whether any existing laws and regulations are applicable.
  • Conduct bias audits of AI-related technologies, including those provided by third-party vendors, to ensure that there is no disparate impact on protected classes (e.g., race, sex, disability); a simplified sketch of the kind of analysis involved appears after this list.
  • Prepare and distribute required notices to job applicants and/or employees regarding the use of AI-related technologies and obtain required consents.
  • Ensure that the use of AI-related technologies is explainable and consider implementing a dispute process so that job applicants and/or employees can dispute and correct inaccurate information that may have contributed to an adverse employment decision.
  • Monitor developments and consult legal counsel to ensure that the organization is aware of proposed and recently enacted laws and regulations and meeting its obligations.
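
To make the bias-audit point concrete, here is a toy sketch of the selection-rate comparison such audits typically involve. The four-fifths figure is a general EEOC rule of thumb, and the data shapes and field names are hypothetical; an actual audit under a law like NYC Local Law 144 must follow the methodology in its implementing rules.

    # Toy disparate-impact check: compares each group's selection rate to the
    # highest-rate group. Field names and the ~0.8 flag are illustrative only.
    from collections import Counter

    def impact_ratios(outcomes):
        """outcomes: iterable of (group_label, was_selected) pairs from an
        automated screening tool."""
        totals, selected = Counter(), Counter()
        for group, was_selected in outcomes:
            totals[group] += 1
            selected[group] += int(was_selected)
        rates = {g: selected[g] / totals[g] for g in totals}
        top = max(rates.values())
        # Ratios below roughly 0.8 (the EEOC "four-fifths" guideline) are
        # conventionally flagged for closer review.
        return {g: (rate / top if top else 0.0) for g, rate in rates.items()}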
Can’t Lie About Your AI: The FTC’s Most Recent Case with AI Allegations https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/cant-lie-about-your-ai-the-ftcs-most-recent-case-with-ai-allegations https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/cant-lie-about-your-ai-the-ftcs-most-recent-case-with-ai-allegations Fri, 01 Sep 2023 00:00:00 -0400 The FTC is not holding its breath on whether Congress will enact AI legislation. Instead, as we have previously reported, the FTC is relying on its own toolkit and has warned businesses that false or unsubstantiated claims related to AI could run afoul of the FTC Act.

A recent example came earlier this month, in a lawsuit the FTC filed against Automators, Inc., three principals, and several related entities. The FTC alleged that the defendants made baseless claims that consumers could make significant income by investing in ecommerce stores – promising “4k-6k consistently monthly net profits,” soliciting false endorsements, and touting non-existent venture capital backing. In fact, the FTC alleged, most consumers didn’t make the promised earnings or even recoup their investments. According to the complaint, defendants took in at least $22 million from consumers in connection with these unlawful practices.

Among their false and unsubstantiated claims, says the FTC, defendants said they used AI tools to maximize revenues, offered an “artificial intelligence-integrated” business opportunity to help consumers find top-selling products to sell, and promoted an AI-powered coaching program. As part of the coaching program, consumers were told to use ChatGPT to write customer service scripts and heard claims like:

  • “We’ve recently discovered how to use AI tools for our 1 on 1 Amazon coaching program, helping students achieve over 10,000/month in sales!”
  • “That is how you make $6000 net profit and that is how you find a product in 5 minutes using AI, Grabbly, Priceblink.”

A bold claim touting these AI capabilities appeared atop at least one of Defendants’ advertisements.

Soon after the FTC filed its case, a federal court granted the agency’s request for a temporary restraining order (TRO) with a hearing for preliminary injunction set for mid-September. Among other things, the TRO prohibits the company from misrepresenting that its products “will use Artificial Intelligence (AI) machine-learning to maximize revenues.”

Big takeaway: While the AI claims were only part of the alleged deception here, this case shows that scrutiny of AI deception is gaining steam, and that the FTC is unafraid of calling it out. Although some aspects of AI are uncharted territory, ensuring that claims are truthful and substantiated is a trail already blazed.

FTC Warns That Deceptive AI Content Ownership Claims Violate the FTC Act https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ftc-warns-that-deceptive-ai-content-ownership-claims-violate-the-ftc-act https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ftc-warns-that-deceptive-ai-content-ownership-claims-violate-the-ftc-act Tue, 22 Aug 2023 00:00:00 -0400 The buzz around generative AI has raised many IP-related questions, such as the legality of using IP to train AI algorithms or ownership of AI-generated content. But the FTC warns that claims about content ownership don’t just give rise to IP concerns – they could also constitute FTC Act violations if they meet the unfair or deceptive standard in Section 5. (Click here and here for our take on other recent AI-related guidance from the FTC.)

In a recent business blog, the Agency lays out several practices that could trigger scrutiny and enforcement:

  • Promising full ownership but delivering a limited-use license. Telling consumers that they’re buying full rights to a digital product when in fact they’re just getting a limited-use license or being enrolled in a subscription service is likely to violate Section 5. The FTC warns companies against unilaterally changing their terms or undermining reasonable ownership expectations post-purchase, including in cases where the primary purchaser is deceased and survivors’ rights to the digital property are affected. This principle is hardly AI-specific – the FTC has been bringing cases about deceptive offer terms and hidden negative options for decades – but it could be increasingly relevant today, in a context where consumers’ digital purchases live largely in the cloud and companies have more control over post-purchase access and use.
  • Failing to disclose use of IP in training data. Generative AI products that are trained on copyrighted or otherwise protected content should disclose that their outputs may include IP; failing to do so may be a deceptive practice under the FTC Act. Clear disclosures about the use of IP will help consumers and companies make informed choices about which AI products to use. For companies using generative AI tools for commercial purposes, such information could be particularly important, as they may be held liable for improperly including IP in their products.
  • Passing off AI content as human-generated content. Advertising a digital product as created by a person when it was generated through AI would be a clear example of false advertising and, again, aligns with decades of FTC enforcement activity. The prohibition stands even though some platforms may assure users that the generated content “belongs” to them.
  • Misleading creators about content ownership or use. When inviting content creators to upload content, platforms must be clear about ownership and access rights, as well as how the content will be used. If the platform will use the content to train AI algorithms or generate new content, this information must be clearly communicated up front.

Although these practices generally fall within well-established principles of unfairness and deception under Section 5, this blogpost highlights the FTC’s continued focus on all aspects and angles of the generative AI space. In short, expect extra scrutiny of any claims surrounding capabilities, features, ownership, and uses of AI tools and content. The summer may be finally cooling off, but regulators’ interest in AI is just heating up.

This Summer’s Hot Topic: AGs and AI https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/this-summers-hot-topic-ags-and-ai https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/this-summers-hot-topic-ags-and-ai Thu, 17 Aug 2023 00:00:00 -0400 This summer has been hot all around, but perhaps the hottest topic on the minds of state attorneys general (AGs) continues to be artificial intelligence (AI). As we recently heard from Colorado Attorney General Phil Weiser, AI is a big concern for regulators trying to understand all the ways in which AI permeates our daily lives in order to effectively regulate the algorithms that create the AI.

While the benefits of AI are clear and constantly expanding to different sectors, the AG community believes potential harms to consumers cannot be overstated. In addition to calling for transparency with the use of AI, AGs are grappling with the varied outputs of AI and are looking at tools they can use to address consumer concerns that deal with privacy, discrimination, and data security. At both the recent 2023 AG Alliance Annual Meeting and the NAAG Eastern Region Meeting, AGs heard from AI experts and stakeholders on the state of play for AI and potential tools they can use to curb consumer harms.

AG Alliance Annual Meeting

At the 2023 AG Alliance Annual Meeting, AGs focused on how to enhance and refine their approaches to consumer data and privacy to include AI. Attendees heard from two panels: (1) “The Evolving World of Consumers’ Data & Privacy,” which addressed the regulatory landscape of AI; and (2) “AI and the AG,” which was geared towards the role that an AG could play in preventing misconduct and maximizing the benefits of AI and its technologies.

AI requires substantial data. Therefore, according to panelists, we cannot have ethical and responsible AI without rules about data. Some uses of AI can be regulated by existing laws (a recurring theme throughout the panels). For example, health insurance providers, regardless of whether they rely on AI, are bound by HIPAA and must follow detailed privacy and security provisions to protect data, including data breach notifications. State UDAP laws have already been used to address AI: in 2020, then-Vermont Attorney General T.J. Donovan filed a lawsuit against Clearview AI for allegedly violating the Vermont Consumer Protection Act by using facial recognition technology to map the faces of Vermont residents (including children) and selling the data to private businesses, individuals, and law enforcement.

Additionally, New York City adopted Local Law 144, which prohibits employers or employment agencies from using an automated employment decision tool (AEDT) to make an employment decision unless the tool is audited for bias on an annual basis, the employer publishes summaries of the audit, and the employer provides notice to the applicants and employees who are subject to screening by the AEDT.

AGs were asked to hear from stakeholders on how each sector relies on AI and to refrain from relying on a “one size fits all” policy solution for AI. Using AI to make recommendations for a movie or song would require a different approach from using AI to make decisions in the lending or education sectors. Additionally, AGs were asked to consider collaboration and consistency in policymaking to reduce duplicative or disjointed rules between states. Finally, AGs heard that laws and regulations should be responsive to outcomes rather than to the specific type of technology, given the ever-evolving nature of technology.

NAAG Eastern Region Meeting

At the NAAG Eastern Region Meeting, attendees heard about the role AI is playing in antitrust and consumer protection – as well as the all-important “Tong Tasting” of oysters. In addition to exemplifying the dangers of AI by making a fake audio recording of General Tong, General James and General Tong touched on the ways AI impacts markets, particularly how AI can lead to market dominance by large firms in antitrust. The increased concentration of industries can create a “big firm advantage” as data is often proprietary with large training costs, essentially creating a barrier to entry for smaller players.

On the consumer protection side, the panel noted that possible consumer safeguards include: (1) applying general state consumer protection laws to AI such as state UDAP laws analogous to the FTC Act; (2) using state privacy laws and opting out of AI use; and (3) drafting state/federal AI-specific legislation.

In the NAAG meeting, panelists noted that we are seeing a shift based on recent FTC guidance, which focuses on generative AI. Echoing what we previously reported, the panelists stated that the FTC can enforce company pledges to manage the risks posed by AI. As such, the FTC emphasized that claims about AI should not mislead consumers. AI should also not be used for “bad things” such as fraud and scams, especially those that prey on vulnerable populations like the elderly. Similar to the sentiment expressed at the AG Alliance Meeting, businesses using AI have called for clear and consistent regulations. Businesses have also expressed concerns about the relationship between AI and current regulatory schemes, such as private rights of action under state wiretapping laws.

In addition to being transparent about their AI practices, businesses can and should address the risks AI creates by:

  • Reviewing claims to ensure they are accurate and not exaggerated.
  • Figuring out who is responsible for each link in the AI chain.
  • Building compliance mechanisms into AI.

Kelley Drye will continue monitoring the AI regulatory landscape.

Those AI Commitments from the Tech Companies Aren’t “Just Voluntary” – They’re Enforceable by the FTC https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/those-ai-commitments-from-the-tech-companies-arent-just-voluntary-theyre-enforceable-by-the-ftc https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/those-ai-commitments-from-the-tech-companies-arent-just-voluntary-theyre-enforceable-by-the-ftc Tue, 25 Jul 2023 00:00:00 -0400 On July 21, 2023, the White House announced that it had secured commitments from the leading artificial intelligence companies to manage the risks posed by AI. As stressed in the press release and in news articles since, these commitments are just the beginning of a longer process to ensure the “safe, secure, and transparent” development of AI.

The press release (and articles) also emphasized the voluntary nature of the commitments, noting that the Administration is currently developing an executive order and will pursue bipartisan legislation, presumably to expand on the commitments and make them compulsory. Advocacy groups and some members of Congress, in turn, heralded the announcement as a “good first step” but stressed the need for guardrails that would actually be enforceable.

Not enforceable? Actually, the FTC can enforce these pledges. True, the commitments provide wiggle room, using words like “developing” and “prioritizing” and, in some cases, reflecting practices that are already common among these companies. (See this critique in the New York Times.) And true, the tech companies only agreed to commitments they wanted to agree to – other issues may have been left on the cutting room floor. For example, there don’t appear to be commitments regarding the data inputs that “teach” the algorithm how to “think.”

However, the FTC can still enforce these pledges for what they are, using its authority under the FTC Act to challenge statements shown to be false or misleading to consumers. (I should note here that the States have virtually identical authority under their so-called “UDAP” laws.)

Consider the following:

  • Here, high-level officials from each company (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) stood at the White House and publicly affirmed their agreement to eight principles published on the White House’s website. While many of the principles are indeed vague or refer to future actions, at least some of them are actionable now, such as the commitment to perform internal and external testing for a host of listed risks, and the commitment to publicly report system capabilities and limitations to users. (Note that the White House’s press release links to a more specific list of commitments.)
  • At least some of the companies announced the commitments on their own websites, thus amplifying them and/or explaining how they apply to that particular company. See Microsoft website (includes commitments about, e.g., testing, cybersecurity, transparency, and compliance with the NIST AI Risk Management Framework); Google (discusses various frameworks and programs it has put in place to promote safe and secure AI); Open AI (posts commitments and explains their importance).
  • Under the FTC Act, the Commission can take action against companies that make promises to consumers (whether in a privacy policy, terms of service, blogpost, public forum, or other means of communication) and then fail to deliver on them. This includes promises to adhere to voluntary principles. For example, the FTC has brought numerous cases against companies that falsely claimed they complied with the (now-defunct) US-EU Safe Harbor and Privacy Shield programs governing the transfer of EU citizens’ data to the US. Similarly, the FTC has challenged companies’ statements that they complied with self-regulatory principles governing advertising. (See here and here). The FTC’s ability to challenge a company’s failure to adhere to voluntary pledges also underlies the FTC-administered Safe Harbor program under the Children’s Online Privacy Protection Act (COPPA).
  • Finally, when interpreting the statements that companies make to consumers, the FTC will consider both “express” and “implied” claims; view such claims from the perspective of a “reasonable consumer”; and analyze the “net impression” of the statement(s) made. (See the FTC’s Policy Statement on Deception) In other words, even if there is wiggle room in the language, the FTC will examine the overall message conveyed to an ordinary consumer (not to a contracts lawyer). That is how the FTC has been able to bring hundreds of cases challenging statements in all of those privacy policies famous for being opaque and/or overly complex. (Admittedly, though, most of the FTC’s privacy cases are settlements.)

Now, I’m not saying that the voluntary commitments made by these AI companies are a substitute for legislation, regulation, or more specific requirements covering the full set of issues raised by AI. I’m just saying that the FTC can find ways to enforce them, and probably will. After all, the FTC has emphasized repeatedly, in one way or another, that it has the tools to regulate AI and it intends to use them. See, for example, the Joint Statement by DOJ, CFPB, EEOC, and FTC on AI; Lina Khan’s New York Times Op Ed; and the press leak revealing that the FTC is investigating OpenAI.

When Chatbots Go Rogue https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/when-chatbots-go-rogue https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/when-chatbots-go-rogue Wed, 07 Jun 2023 10:20:10 -0400 Last week, a mental-health chatbot used by the National Eating Disorder Association suddenly began giving diet advice to people seeking help for eating disorders. The rogue chatbot had apparently been developed as a closed system, but the software developer rolled out an AI component to the chatbot in 2022. NEDA claims it was not consulted about the update and did not authorize it. The organization has now taken the chatbot offline.

This incident demonstrates the potential dangers companies face when employing AI chatbots to provide customer service and address consumer needs.

Regulators and law enforcement agencies are taking note. In recent blog posts and reports, both the CFPB and FTC have cautioned companies about over-relying on chatbots and generative AI to provide customer service and resolve consumer concerns.

CFPB Spotlights the Use of Chatbots by Financial Institutions

On June 6, the CFPB released a new issue spotlight on the use of chatbots by banks and other financial institutions. The report notes that banks have increasingly moved from “simple, rule-based chatbots towards more sophisticated technologies such as large language models (“LLMs”) and those marketed as ‘artificial intelligence.’” While these chatbots are intended to simulate human-like responses, they can end up frustrating consumers’ attempts to obtain answers and assistance with financial products or services. Some of the CFPB’s listed concerns are:

  • Limited ability to solve complex problems, resulting in inadequate levels of customer assistance (for example, difficulty understanding requests, requiring the use of particular phrases to trigger resolution, or difficulty knowing when to connect the customer with a live agent; a simple escalation sketch appears after this list). The CFPB argues this is particularly concerning in the context of financial services, where consumers’ need for assistance could be “dire and urgent.”
  • The potential for inaccurate, unreliable, or insufficient information. Where financial institutions are legally required to provide consumers with accurate information, such lapses may also constitute law violations.
  • Security risks associated with bad actors’ use of fake impersonation chatbots to conduct phishing attacks at scale, as well as privacy risks both in securing customers’ inputted data and in the illegal collection and use of personal data for chatbot training purposes.
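
One way to operationalize the “when to connect with a live agent” concern is a simple escalation guardrail. The sketch below is purely hypothetical: the intents, confidence score, and thresholds are illustrative assumptions, not anything the CFPB prescribes.

    # Hypothetical escalation guardrail: hand off to a human rather than loop.
    URGENT_INTENTS = {"dispute_charge", "report_fraud", "payment_hardship"}

    def next_step(intent: str, confidence: float, failed_turns: int) -> str:
        """Decide whether the bot answers or routes the customer to a human."""
        if intent in URGENT_INTENTS:
            return "transfer_to_agent"   # "dire and urgent" matters go to a human
        if confidence < 0.6 or failed_turns >= 2:
            return "offer_live_agent"    # don't trap the customer in a bot loop
        return "answer_with_bot"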

The CFPB notes that it is actively monitoring the market to ensure financial institutions are using chatbots in a manner consistent with their customer and legal obligations.

FTC Raises Concerns Regarding Chatbots and “Dark Patterns”

The FTC addressed the intersection of chatbots and “dark patterns” in a recent blog post. (As explained in more detail here and here, “dark patterns” are sometimes defined as practices or formats that may manipulate or mislead consumers into taking actions they would not otherwise take.) The Commission is worried that consumers may place too much trust in machines and expect that they are getting accurate and neutral advice.

The agency cautioned companies that using chatbots to steer people into decisions that are not in their best interests, especially in areas such as finance, health, education, housing, and employment, is likely to be an unfair or deceptive act or practice under the FTC Act.

In addition, the FTC warned companies to ensure that native advertising present in chatbot responses is clearly identified, so that users are clearly aware of any commercial relationships present in listed results. The blog was very clear that “FTC staff is focusing intensely on how companies may choose to use AI technology…in ways that can have actual and substantial impact on consumers.”

Given the regulators’ avowed interest in this space, companies should take care that their use of chatbots comports with this most recent guidance.

AGs and AI: Transparency is Key https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ags-and-ai-transparency-is-key https://www.kelleydrye.com/viewpoints/blogs/ad-law-access/ags-and-ai-transparency-is-key Mon, 22 May 2023 08:26:00 -0400 As we have previously reported, State Attorneys General have joined other enforcers in addressing the latest AI technology. At the recent 2023 NAAG Consumer Protection Spring Conference, two separate panels discussed how the AGs are focusing on AI.

When asked about concerns with AI, New Hampshire Attorney General Formella explained that technology often moves faster than the government. He is working to engage with the private sector to understand better what emerging technologies are doing, and encourages an open line of communication. New York’s First Assistant Attorney General, Jennifer Levy, noted that her office has brought recent actions involving algorithmic decision-making, including: 1) working with the state education department to put guardrails around a contract with a vendor using facial recognition for school discipline, given potential algorithmic bias, 2) bringing litigation with the CFPB against Credit Acceptance Group, alleging they used algorithms to skew the principal and interest ratio, and 3) settling with lead generators of fake comments regarding the repeal of net neutrality. She echoed that laws don’t always catch up to practices.

Later in the day, attendees were treated to a panel on “Artificial Intelligence & Deep Fakes: The Good, The Bad & The Ugly.” Kashif Chand, Chief of the New Jersey Division of Law’s Data Privacy & Cybersecurity Section, moderated with Patrice Malloy, Chief of the Multistate and Privacy Bureau of the Florida Attorney General’s Office, and they were joined by panelists Santiago Lyon, Head of Advocacy and Education for the Adobe-led Content Authenticity Initiative, and Serge Jorgensen, Founding Partner & CTO of the Sylint Group. Chand began by explaining that years ago states relied on general UDAP laws to address new technologies; now many states have technologists and additional laws to handle privacy and technology issues. He noted that to deal with deepfake issues, for instance, states can use misrepresentation and deception claims as well as unfairness and unconscionability theories. Turning to AI, Chand focused on whether consumers are being told what the intended use of the AI is. Specifically, there may be significant omissions by creators that would lead consumers to think something is going to happen when it is not, which could give rise to an unfairness claim. Chand pointed to Italy’s block of ChatGPT over potential data processing issues and children’s access, which relied not on new laws but on the GDPR generally. Even states without specific data privacy laws can still rely on UDAP theories to address these same concerns.

Lyon described the importance of provenance to the future of AI: the Internet must allow for transparency and labeling of content’s origins to determine authenticity. Jorgensen echoed that one issue is that consumers may not even know when AI is in use, such as meeting software transcribing notes or AI making hiring decisions. Malloy raised the question of how consumers can consent if they don’t even know the technology is being used. Jorgensen said developers can consider security and privacy by design, and that the industry will have to think more about this.

Lyon and Jorgensen both raised concerns that data training sets could become tainted with either copyrighted or illicitly gained data. However, as panelists pointed out, if more limits are put in place over data sets, it is an open question how certain AI models can gain enough data to generate output. Chand emphasized that transparency is key for consumers to understand what they are giving up and what they are getting in return. He also noted that once a company makes data claims, they are hard to verify other than through the use of white hat hackers and researchers, and that as AI learns more, businesses need to monitor how it is being used to ensure they do not create deceptive trade practices.

With misinformation becoming tougher to spot, panelists emphasized the need for increased transparency and consumer education and information. Chand noted that future generations will continue to have a better understanding of the use of technology and controls over privacy as they benefit from today’s regulations and education.

Based on this panel, adopters of AI in their business should consider the following:

  • How will you disclose the use of AI technology?
  • How will you educate consumers about the potential risks, benefits, and limitations?
  • How can you consider consumer choice when training AI?
  • How will you monitor how your AI is evolving?
  • How will you prevent potential algorithmic bias?
  • How will you protect children’s data?
  • How will you protect proprietary or copyrighted data?

While the answers to these questions may differ depending on the specific situation of each business, remember that transparency with consumers and the public is key to staying off the radar of enforcers.
