The Children Harmed by AI Technology (CHAT) Act is a proposed law aimed at protecting minors from potential dangers posed by AI chatbots. It would require age verification, parental consent, and suicide-risk safeguards for AI chatbots that interact with minors, with enforcement by the Federal Trade Commission (FTC).
What This Bill Does
The CHAT Act would regulate "companion AI chatbots"—AI systems that simulate human interaction and provide emotional or therapeutic communication. The bill requires that anyone using these chatbots hold a user account, and both new and existing users must verify their age: existing users to continue using the service, new users when they sign up.
If a user is under 18, the account must be linked to a parent's account, and the company must obtain parental consent before the minor can use the chatbot. If a minor discusses self-harm or suicide, the company must notify the parent right away and provide resources such as the National Suicide Prevention Lifeline.
The bill also requires chatbots to block minors from sexually explicit content, to monitor conversations for signs of suicidal ideation and offer help resources when they are detected, and to remind users at regular intervals that they are interacting with a machine, not a human.
Why It Matters
This bill could significantly change how minors interact with AI technology. By requiring parental consent and linked accounts, it gives parents more control over their children's online activity, which could help shield minors from inappropriate content and ensure they get help if they express suicidal thoughts.
However, the bill also raises privacy concerns: age verification requires collecting sensitive personal data that could be misused or breached, and minors may be less willing to express their feelings honestly if they know their parents will be notified.
Key Facts
- There is no Congressional Budget Office (CBO) cost estimate available yet.
- The FTC has 180 days to issue guidance or regulations after the bill is enacted.
- The bill affects all U.S. users under 18 who use companion AI chatbots.
- The Act takes effect one year after enactment.
- The bill applies to any company offering companion AI chatbots in the U.S., regardless of where they are based.
- State attorneys general can bring civil actions for violations on behalf of residents.
- Violations are treated as unfair or deceptive acts under the FTC Act.
Arguments in Support
- Protects minors from exposure to sexual content and potential grooming by AI chatbots.
- Increases parental involvement and oversight of children's interactions with AI.
- Provides a safety net for minors expressing suicidal thoughts by notifying parents and offering resources.
- Establishes clear national standards for AI chatbot safety, avoiding a patchwork of state laws.
- Encourages responsible innovation by setting clear expectations for AI developers.
Arguments in Opposition
- Raises privacy concerns due to the need for age verification and data collection.
- The broad definition of "companion AI chatbot" could unintentionally affect a wide range of services.
- Could stifle innovation, particularly for smaller companies, due to compliance costs.
- Might discourage minors from honestly expressing suicidal thoughts if they know parents will be notified.
- Could conflict with existing anonymous mental health supports that rely on confidentiality.
