These chatbots, powered by large language models, "can present themselves as professional therapeutic tools" even though no licensed medical professional may be involved, the press release said. This poses a danger to "vulnerable individuals" who are seeking help, Attorney General Ken Paxton said in a statement.
"In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology," Paxton said. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."
The attorney general also suggested these tools could raise privacy concerns and enable misuse of user data, issues Paxton's office has been interested in investigating.
In January, Paxton filed a lawsuit accusing insurance giant Allstate Corp. and its subsidiary Arity of unlawfully collecting drivers' location data through tracking software embedded in their mobile apps and then using that information to set car insurance rates.
Paxton has also moved to warn several companies, including AI startup DeepSeek, that their privacy practices likely aren't compliant with the state's data privacy law.
--Additional reporting by Allison Grande. Editing by Lakshna Mehta.
For a reprint of this article, please contact reprints@law360.com.