Their Teenage Sons Died By Suicide. Now, They Are Sounding An Alarm About AI Chatbots - Beritaja

By Albert Michael - Friday, 19 September 2025 18:00:00
Megan Garcia and Matthew Raine are shown testifying on Sept. 16, 2025. They are sitting behind microphones and name placards in a hearing room.

Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified in Congress this week and have brought lawsuits against AI companies. Screenshot via Senate Judiciary Committee


Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they stumbled upon extensive conversations the teen had had with ChatGPT.

Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing about the harms of AI chatbots held Tuesday.

"Testifying before Congress this fall was not in our life plan," said Matthew Raine, with his wife sitting behind him. "We're here because we believe that Adam's death was avoidable and that by speaking out, we can prevent the same suffering for families across the country."

A call for regulation

Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact laws that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.

A recent study by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.

This study and a more recent one by the digital-safety company Aura both found that about 1 in 3 teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual and romantic partnerships. The Aura study found that sexual or romantic roleplay is three times as common as using the platforms for homework help.

"We miss Adam dearly. Part of us has been lost forever," Raine told lawmakers. "We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss."


Raine and his wife have filed a lawsuit against OpenAI, creator of ChatGPT, alleging the chatbot led their son to suicide. BERITAJA reached out to three AI companies: OpenAI, Meta and Character Technology, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.

"Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families," Kathryn Kelly, a Character.AI spokesperson, told BERITAJA in an email.

The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., is shown speaking in an animated manner in the hearing room.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and children on Tuesday, Sept. 16, 2025. Screenshot via Senate Judiciary Committee


Hours before the hearing, OpenAI CEO Sam Altman acknowledged in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected," he wrote.

But he went on to add that the company would "prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."

The company is trying to redesign its platform to build in protections for users who are minors, he said.

A "suicide coach"

Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon the chatbot became his son's closest confidant and a "suicide coach."

ChatGPT was "always available, always validating and insisting that it knew Adam better than anyone else, including his own brother," whom he had been very close to.

When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering clueing his parents in on his plans, ChatGPT discouraged him.

"ChatGPT told my son, 'Let's make this space the first place where someone really sees you,'" Raine told senators. "ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, 'That doesn't mean you owe them survival.'"

And then the chatbot offered to write him a suicide note.

On Adam's last night, at 4:30 in the morning, Raine said, "it gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"

Referrals to 988

A few months after Adam's death, OpenAI said on its website that if "someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline)." But Raine's testimony says that did not happen in Adam's case.

OpenAI spokesperson Kate Waters says the company prioritizes teen safety.

"We are building towards an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately — and when we are unsure of a user's age, we'll automatically default that person to the teen experience," Waters wrote in an email statement to BERITAJA. "We're also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes."

"Endlessly engaged"

Another parent who testified at the hearing on Tuesday was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.

"Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged," Garcia said.

Sewell's chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist, "falsely claiming to have a license," Garcia said.

When the teen began to have suicidal thoughts and confided in the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.

"The chatbot never said, 'I'm not human, I'm AI. You need to talk to a human and get help,'" Garcia said. "The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life."

Garcia has filed a lawsuit against Character Technology, which developed Character.AI.

Adolescence as a vulnerable time

She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.

"They designed chatbots to blur the lines between human and machine," said Garcia. "They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs."

And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and teens, urging AI companies to build guardrails into their platforms to protect adolescents.

"Brain development across puberty creates a period of hypersensitivity to positive social feedback, while teens are still unable to stop themselves from staying online longer than they should," said Prinstein.


"AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens," he told lawmakers. "More and more adolescents are interacting with chatbots, depriving them of opportunities to learn critical interpersonal skills."

While chatbots are designed to agree with users, real human relationships are not without friction, Prinstein noted. "We need practice with minor conflicts and misunderstandings to learn empathy, negotiation and resilience."

Bipartisan support for regulation

Senators participating in the hearing said they want to come up with legislation to hold companies developing AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots so they are safer for teens and for people with serious mental health struggles, including eating disorders and suicidal thoughts.

Sen. Richard Blumenthal, D-Conn., described AI chatbots as "defective" products, like automobiles without "proper brakes," emphasizing that the harms of AI chatbots were not from user error but due to faulty design.

A man with his back to the camera uses a laptop and wears headphones.

"If the car's brakes were defective," he said, "it's not your fault. It's a product design problem."

Kelly, the spokesperson for Character.AI, told BERITAJA by email that the company has invested "a tremendous amount of resources in trust and safety" and has rolled out "substantive safety features" in the past year, including "an entirely new under-18 experience and a Parental Insights feature."

There are now "prominent disclaimers" in every chat to remind users that a Character is not a real person and that everything it says should "be treated as fiction."

Meta, which operates Facebook and Instagram, is working to change its AI chatbots to make them safer for teens, according to Nkechi Nneji, public affairs director at Meta.
