OpenAI has removed all references to its "io" project after a trademark dispute with IYO Audio, whose name is pronounced the same as "io." The planned AI device, a collaboration between Sam Altman and Jony Ive, was originally teased under the "io" name, but IYO Audio objected and took legal action. IYO Audio, which is working on a similar AI product that it presented during a 2024 TED Talk, claims rights to the name. OpenAI says it disagrees with IYO's trademark claim and is reviewing its options. It is unclear whether Ive intended to keep using the "io" name after OpenAI acquired the hardware startup, which was founded before the partnership was officially announced.
Cybercriminals are upgrading WormGPT with stronger AI models. The original WormGPT, which launched in June 2023, used the open source GPT-J model to create a censorship-free LLM for cybercrime. Now, Cato CTRL reports that two new versions have surfaced on BreachForums: "keanu-WormGPT," which actually taps Grok from xAI through its API using a custom jailbreak, and "xzin0vich-WormGPT," which runs on Mixtral from Mistral AI. Both are distributed via Telegram and get around the original models' safeguards by manipulating system prompts. This lets them generate phishing emails, malicious code, and other attack tools. Cato calls this a "significant shift" in the misuse of large language models.
WormGPT now comes in new variants powered by Grok and Mixtral, making it easier for cybercriminals to create phishing emails and malicious code. | Image: Cato Networks
Google has released Magenta RealTime (Magenta RT), an open-source AI model for live music creation and control. The model responds to text prompts, audio samples, or a combination of both. Magenta RT is built on an 800 million parameter Transformer and trained on about 190,000 hours of mostly instrumental music. One technical limitation is that the model can only use the last ten seconds of previously generated audio as context.
The BBC is threatening legal action against US AI startup Perplexity over the alleged unauthorized use of BBC content to train its AI systems. In a letter seen by the Financial Times, the BBC demanded that Perplexity immediately stop scraping its content, delete stored BBC material, and provide financial compensation. The broadcaster says Perplexity copied content verbatim, undermined the BBC's own services, and used BBC material to train its default AI model, Sonar. An internal BBC analysis found that 17 percent of answers generated by Perplexity's chatbot contained significant errors. Perplexity denies the allegations, but the company is already facing lawsuits from other media organizations and is in licensing negotiations with selected publishers.