We have frequently written on AGs’ interest in AI. In what Texas calls the “first-of-its-kind healthcare generative AI” settlement, the state resolved its investigation into Pieces Technologies’ alleged misleading statements about the accuracy of products deployed in major hospitals. Pieces claimed the product “summarizes, charts, and drafts clinical notes for your doctors and nurses…so they don’t have to.” The company further claimed an accuracy of a “<1 per 100,000 severe hallucination rate,” a hallucination being a phenomenon of generative AI products “creating an output that is incorrect or misleading.” Texas found this to be “likely inaccurate” and alleged these representations “may have violated the DTPA,” Texas’s Deceptive Trade Practices Act.

The Texas AG’s settlement, which takes the form of an Assurance of Voluntary Compliance, requires Pieces to make clear and conspicuous disclosures regarding the meaning of its metrics or, in the alternative, to obtain third-party substantiation from an auditor. Pieces may not misrepresent or make unsubstantiated claims regarding its accuracy, testing, monitoring, metrics, or training data set, or otherwise mislead customers regarding the product’s functionality or purpose, and it must disclose any financial arrangements with endorsers. Pieces also must provide customers with documentation that clearly and conspicuously discloses:

  • “any known or reasonably knowable harmful or potentially harmful uses or misuse of its products or services,” including the data or models used for training,
  • a detailed explanation of the intended purpose of its products or services and any training or documentation needed to facilitate proper use,
  • known or reasonably knowable risks or limitations of its products or services, such as physical or financial injury as a result of inaccurate outputs,
  • any known or reasonably knowable misuses of its products or services that can increase the risk of inaccurate outputs or harm to individuals, and
  • “all other documentation reasonably necessary for a user to understand the nature and purpose of an output generated by a product or service, monitor for patterns of inaccuracy, and reasonably avoid misuse...”

The settlement automatically terminates after five years. This list of settlement prohibitions provides helpful insight into what state enforcers may believe is required under their UDAP (unfair or deceptive acts and practices) laws for any generative AI company. As a practical note, these settlement prohibitions may also serve as helpful diligence or risk assessment questions for businesses as they evaluate third-party generative AI tools.

As an overall takeaway, it’s worth underscoring Attorney General Paxton’s press statement, which emphasizes the importance of transparency, “particularly when used in high-risk settings,” and asks “[h]ospitals and healthcare entities” to consider whether AI products are appropriate and to “train their employees accordingly.” We anticipate this will not be the last enforcement action on this topic to make headlines over the near term. State AGs (and other government enforcers) are taking to heart a risk-based approach to AI enforcement, even without an AI-specific enforcement regime. On a closing note, we also point out Colorado AG Weiser’s recent remarks on how the state enforcement community might find a balanced approach to regulating and enforcing in the AI space. More to come. Stay tuned.