mynewssourceonline

AI is making death threats way more realistic

How advanced AI tools are raising the stakes on digital harassment and online threats

by admin
November 3, 2025
in News, Tech

Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatized by the online threats she received this year.

There was a picture of her hanging from a noose, dead. Another showed her ablaze, screaming.

The posts were part of a surge of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms.

Some of it, including images of the women flayed, decapitated, or fed into a wood chipper, was seemingly enabled — and given a visceral realism — by generative artificial intelligence. In some of the images, Ms. Roper was wearing a blue floral dress that she, in fact, owns.

“It’s these weird little details that make it feel more real and, somehow, a different kind of violation,” she said. “These things can go from fantasy to more than fantasy.”

Artificial intelligence is already raising concerns for its ability to mimic authentic voices in the service of scams or to produce deepfake pornography without a subject’s permission.

Now, the technology is also being used for violent threats — priming them to maximize fear by making them far more personalized, more convincing, and more easily delivered.

“Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it,” said Hany Farid, a professor of computer science at the University of California, Berkeley. “What’s frustrating is that this is not a surprise.”

Digitally generated threats have been possible for several years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death.

However, threatening images are becoming increasingly easy to create and more persuasive. One YouTube page had more than 40 realistic videos — most likely made using AI, according to experts who reviewed the channel — each showing a woman being shot.

(YouTube, after The New York Times contacted it, said it had terminated the channel for “multiple violations” of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring.

In July, a lawyer in Minneapolis said xAI’s Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him, and disposing of his body.

Until recently, artificial intelligence could replicate real people only if they had a substantial online presence, such as film stars with throngs of publicly accessible photos.

Now, a single profile image will suffice, said Dr. Farid, who co-founded Get Real Security, a service that identifies malicious digital content. (Ms. Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.)

The same is true of voices — what once took hours of sample audio to clone now requires less than a minute.

“The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage,” said Jane Bambauer, a professor who teaches about AI and the law at the University of Florida.

Worries about AI-assisted threats and extortion intensified with the introduction this month of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyper-realistic scenes, can quickly depict real people in frightening situations.

The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person.

“From the perspective of identity, everyone’s vulnerable,” Dr. Farid said.

An OpenAI spokeswoman said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses, and automated content moderation systems.

(The Times sued OpenAI in 2023, claiming copyright infringement of news content related to AI systems, an assertion that OpenAI has denied.)

Experts in AI safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as “more like a lazy traffic cop than a firm barrier — you can get a model to ignore them and work around them.”

Ms. Roper said the torrent of online abuse starting this summer — including hundreds of harassing posts explicitly sent to her — was linked to her work on a campaign to shut down violent video games glorifying rape, incest, and sexual torture.

On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform’s terms of service.

In fact, X once included one of her harassers on a list of recommended accounts for her to follow. Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.

Fed up, Ms. Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account.

Neither X nor xAI, the company that owns Grok, responded to requests for comment.

 


© 2025 Mynewssourceonline - All rights reserved
