College athlete from San Marcos was targeted by AI-generated explicit content, victim says


A college athlete from San Marcos is speaking out after being targeted by fake, AI-generated explicit content.

This comes as the National Center for Missing and Exploited Children warns cases like this are rising.

The backstory:

Alexus Fillmore is a college basketball player in Colorado. She has a pretty big following on social media, thousands on Instagram and TikTok. On Facebook, there is a page that has about 27,000 followers and looks like it is hers, but she said it isn’t.

"I heard about it about two weeks ago from a fan actually," Fillmore said.

On that Facebook page is a link to Telegram, a messaging app. A group was created that includes images and videos of Alexus.

"They've taken pictures of me, like some of them are recent, but most of the pictures are from when I was under 18, like 15, 16. So they're posting those pictures there. Then they also have, you know, very dirty, nasty captions with blurred out stuff where they're using, you know, AI, like my face on AI bodies," Alexus Fillmore said.

She said she broke down.

"I felt and still feel very violated and even though it's not me, that’s my face," Alexus Fillmore said.

She said she immediately called her mom.

"As soon as you click on that page, you know exactly what that page is trying to do," Summer Fillmore, Alexus’ mother, said. "The first thing I felt was rage."

Summer Fillmore said she contacted as many agencies as she could think of and reported the pages. San Marcos police confirmed they received a call about this on April 13 and said the case is still under investigation.

Summer Fillmore said she felt nothing was being done.

"I'm desperate for help," Summer Fillmore said.

Last week, she reached out to FOX 7.

"I need to get the word out there because she's not the only one this is happening to," Summer Fillmore said.

Dig deeper:

Last year, the National Center for Missing and Exploited Children received more than 182,000 reports involving the creation, distribution, or attempted creation of AI-generated child sexual abuse material.

"One of the most challenging things about this misuse of AI is that in some cases, completely innocent images can be manipulated and used to create something explicit, so it becomes difficult to rely just on sort of our traditional message of don't share an explicit image and so instead, we have to focus on making sure people know where to turn to get help," said Stacy Garrett, vice president of content and community engagement at the National Center for Missing & Exploited Children.

She said one resource is the ‘Take It Down’ service.

"If they have an image of themselves that was taken before they were 18 that's explicit in nature, whether it's AI or otherwise, that's being shared online, they can submit through that service just a digital fingerprint, which is a hash value for every image and video and then the National Center makes that available to participating online platforms so that they can help find and stop the spread of those images," Garrett said.
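The "digital fingerprint" Garrett describes is a hash value: a short string computed from a file's contents that identifies the file without revealing the image itself. Take It Down's own tools compute these hashes on the user's device, and the service's actual hashing scheme may differ from this illustration. As a simplified sketch only, the snippet below uses a standard SHA-256 hash to show why identical files produce matching fingerprints while different files do not:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that identifies the file contents
    without exposing the image itself."""
    return hashlib.sha256(image_bytes).hexdigest()

# Illustrative placeholder bytes, not real image data.
image_a = b"example image contents"
image_b = b"example image contents"   # identical copy
image_c = b"a different image"

print(fingerprint(image_a) == fingerprint(image_b))  # True: same file, same hash
print(fingerprint(image_a) == fingerprint(image_c))  # False: different contents
```

A caveat worth noting: an exact cryptographic hash like this one only matches byte-identical copies. Platforms that try to catch re-encoded or lightly edited versions of an image typically also use perceptual hashes, which tolerate small changes to the file.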

What's next:

Summer Fillmore said she now wants to know who is behind it.

"I want to know who did this. I am not going to let them get away with this," Summer Fillmore said.

FOX 7 contacted Facebook and hasn’t received a response yet.

Telegram responds

What they're saying:

Just a couple of days after FOX 7 reached out to Telegram, the group was removed for breaching the platform's terms of service.

In a statement, Telegram said:

"While identifying non-consensual pornography is a problem for all public platforms, Telegram's moderators routinely remove non-consensual pornography whenever identified.

Moderators empowered with custom AI tools proactively monitor public parts of the platform and accept reports in order to remove millions of pieces of harmful content each day, including deepfake pornography.

In the case of CSAM, Telegram checks all media uploaded to its public platform against a comprehensive hash database of all CSAM ever removed, bolstered with datasets from organizations like the Internet Watch Foundation."

The Source: Information from interviews conducted by FOX 7 Austin's Meredith Aldis
