Generative AI Company Targeted by Texas Attorney General
Texas AG Announces ‘First-of-Its-Kind’ Agreement
On Sept. 18, the Office of the Attorney General (AG) of Texas announced that it had reached an agreement with an artificial intelligence (AI) health care technology company, resolving the Texas AG's allegations that the company made deceptive statements about the accuracy of its generative AI (Gen AI) products used to summarize patient health care data, in violation of the Texas Deceptive Trade Practices Act (DTPA), Tex. Bus. & Com. Code Section 17.58.
Texas AG Allegations Focused on Gen AI Outputs
The case concerned a Dallas-based health care technology company that deployed Gen AI tools intended to assist physicians and medical staff with patient treatment, including summarizing, charting, and drafting clinical notes containing patients’ health care data. The health care technology company advertised and marketed these Gen AI capabilities using metrics and benchmarks that it had developed, touting the accuracy of the outputs from its Gen AI tool by claiming a “severe hallucination rate” of “<0.001%” and “<1 per 100,000.” Hallucinations are outputs from Gen AI products that may initially appear to be believable but are in fact inaccurate or fabricated. The degree of accuracy of outputs (and the extent to which a human exercising oversight must validate such outputs) is an important consideration for any Gen AI tool because it can directly affect the efficiency and intended benefits of the tool.
The Texas AG’s office alleged that the company’s claims about the accuracy of the outputs from its Gen AI tools were “likely inaccurate” and, as a result, “may have deceived hospitals about the accuracy and safety” of the Gen AI tool. The state further claimed that these “false, misleading or deceptive” representations regarding the Gen AI outputs may have violated the DTPA. The company denied wrongdoing and stated that it had not violated Texas law, including with respect to any claimed DTPA violations.
Impact of Agreement
Despite the denial, the company and the Texas AG’s office entered into an Assurance of Voluntary Compliance (the agreement) on Aug. 5. In connection with the agreement, the company agreed that, for a period of up to five years, any direct or indirect statements regarding the “metrics, benchmarks, or similar measurements describing the outputs of its generative AI products” must “clearly and conspicuously disclose” the “meaning or definition” of the metric, benchmark, or similar measurement, as well as the “method, procedure, or any other process” used in its calculation. As an alternative, the company could also retain a third-party auditor to assess, measure, and substantiate claims made regarding the outputs and ensure that any marketing or advertising statements are consistent with the auditor’s findings.
The company also agreed, with respect to its Gen AI products, to (i) accurately characterize and identify (a) the extent of the products’ accuracy, reliability, and efficacy; (b) any testing and monitoring procedures and methodologies used; (c) the metrics used, including an explanation thereof; and (d) the training data utilized; (ii) not mislead consumers about the accuracy, functionality, purpose, or any other feature; and (iii) disclose financial or similar arrangements with its marketing and advertising partners. It further agreed that all known or reasonably knowable harmful or potentially harmful uses or misuses of its products or services would be clearly and conspicuously disclosed to consumers.
“AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk,” Texas AG Ken Paxton said in announcing the agreement. “Hospitals and other health care entities must consider whether AI products are appropriate and train their employees accordingly.”
Key Takeaways for Businesses
This agreement is likely to be a harbinger of things to come in the Gen AI enforcement space. It is critical that businesses evaluate their Gen AI tools with the assistance of counsel, considering emerging laws, regulations, guidance, and potential reputational impacts. Key takeaways for businesses that may market, procure, deploy, or use Gen AI include:
- Compliance with existing laws and regulations. Particularly with the current paucity of AI- and Gen AI-specific laws and regulations, enforcement actions are likely to rely on existing laws and regulations that are not specific to AI or Gen AI. It is important to understand that businesses are not excused from applicable legal and regulatory obligations and frameworks when engaging with Gen AI tools, including obligations around privacy, intellectual property, security, and nondiscrimination.
- Claims and disclosures. Businesses should be careful about claims regarding the accuracy, safety, or impact of any Gen AI tools. Consumer disclosures are also important; businesses should consider how to effectively communicate how any such Gen AI tools may operate or affect consumers.
- Gen AI governance. Gen AI governance is critical for businesses that deploy, use, or procure Gen AI technologies. At a high level, Gen AI governance refers to building processes, procedures, controls, and other standards around the creation, deployment, and use of Gen AI tools. This promotes accountability, compliance, and performance and helps mitigate risks associated with the design, development, deployment, and use of Gen AI. A risk-based, holistic approach to Gen AI governance, aligned with business strategy, allows for flexibility and adaptability as the technology and legal landscape evolve.
- Monitor emerging laws and regulations. In addition to complying with existing laws and regulations, businesses should monitor fast-emerging AI- and Gen AI-specific laws and regulations at the federal and state levels, as well as local and sector-specific requirements. These include applicable legislation, executive actions, regulatory guidance, and risk management frameworks, which may apply depending on factors such as the AI or Gen AI use case, the business’s role (e.g., as developer or deployer), and the applicable sector or jurisdiction.
While obligations in this space will continue to crystallize as Gen AI laws and regulations come on the books to augment existing obligations, businesses would be wise to keep in mind basic principles regarding Gen AI usage, including transparency, respect for context, security, accuracy, and accountability, to minimize potential negative reputational impacts and to differentiate themselves in the marketplace.
Jason M. Loring is a partner in Jones Walker’s corporate practice group and co-leader of the firm’s privacy, data strategy, and artificial intelligence team. He can be reached at [email protected].
Graham H. Ryan is a partner in Jones Walker’s litigation practice group and a member of the firm’s privacy, data strategy, and artificial intelligence team. He can be reached at [email protected].