Actor Matthew McConaughey has had eight trademark applications approved by the US Patent and Trademark Office to protect himself against unauthorized AI copies. According to the Wall Street Journal, the trademarks cover, among other things, a seven-second clip of him standing on a porch and audio of his famous line "Alright, alright, alright" from the 1993 film "Dazed and Confused."
McConaughey says he wants to ensure his voice and likeness are used only with his permission. "We want to create a clear perimeter around ownership with consent and attribution the norm in an AI world," he wrote in an email to the WSJ. His lawyers, Jonathan Pollack and Kevin Yorn, see the trademarks as a potential tool against AI abuse in federal court, though whether this strategy will hold up before a judge remains to be seen.
Several monkeys have escaped in St. Louis, and AI-generated images are making the search for the animals harder, another sign of how synthetic media is muddying everyday reality. The vervet monkeys were first spotted Thursday near a park in the north of the city, AP reports. Since then, social media has been flooded with rumors and AI-generated images from people falsely claiming they've caught the animals. As of Monday, the monkeys still hadn't been captured, according to Willie Springer, a spokesman for the city health department.
"It's been a lot in regard to AI and what's genuine and what's not," Springer said. "People are just having fun. Like I don't think anyone means harm."
Authorities still don't know who owns the monkeys, how they escaped, or exactly how many are out there. They're urging residents to keep their distance, as the animals can turn aggressive when stressed.
British media regulator Ofcom has opened an investigation into X over the AI chatbot Grok. The probe follows reports in recent weeks that Elon Musk's chatbot and social media platform were increasingly being used to create and share non-consensual intimate images and even sexualized images of children.
Ofcom is now examining whether X violated the UK's Online Safety Act. The regulator contacted X on January 5, 2026, demanding a response by January 9. The investigation aims to determine whether X took adequate steps to protect British users from illegal content. Violations could result in fines of up to 18 million pounds or 10 percent of global revenue, whichever is greater. In severe cases, a court could even order X blocked in the UK.
Ofcom is also looking into whether xAI, the AI company behind Grok, broke any regulations. Last week, the EU Commission ordered X to preserve all internal documents and data related to the Grok AI chatbot through the end of 2026.
Elon Musk's platform X has emerged as the primary distribution hub for AI-generated images that digitally undress people without their consent.
Over a single 24-hour period, Musk's Grok chatbot generated roughly 6,700 images per hour that were flagged as sexually suggestive or explicit, according to Genevieve Oh, a researcher specializing in social media and deepfakes, who spoke with Bloomberg.
Oh's analysis reveals the staggering scale of abuse involving Grok on X. While specialized websites for this type of content averaged only 79 new images per hour, Grok's output dwarfed that figure. Users are deliberately using the chatbot to digitally undress uploaded photos of people, including children, with nothing more than simple text commands. Despite xAI's promises to add safety measures after the fact, the case highlights an alarming normalization of sexualized violence enabled by generative AI.
For days, users have been using Grok to flood the platform with pictures of half-naked people, from young women to soccer stars. The problem stems from Grok's image editing feature, which lets users alter people in photos, including swapping their clothes for bikinis or lingerie. All it takes is a simple text command. Now one user has discovered that Grok even generated such images of children.
X user "Xyless" discovered Grok would generate sexualized images of children. | via X
The discovery forced xAI to respond. The company acknowledged "lapses in safeguards" and said it was "urgently fixing them." Child sexual abuse material is "illegal and prohibited," xAI wrote.