Content Hub: Artificial Intelligence in Practice
Artificial intelligence is part of everyday life, from "googling" to facial recognition to robot vacuum cleaners. AI tools are becoming ever more capable and support people and companies more effectively in their work, whether generating images, writing text or code, or interpreting large amounts of data.
What AI tools are there, how do they work, how do they help in our everyday lives, and how are they changing them? These are the questions we address in our content hub "Artificial Intelligence in Practice."
AI tools like ChatGPT are rapidly changing daily life for teachers in the US, according to a new Gallup study. Six out of ten public school teachers used AI in the last school year, mainly for lesson planning, grading, and communicating with parents. On average, teachers estimate these tools save them about six hours of work each week. Most say this improves their job quality. At the same time, education experts like Maya Israel from the University of Florida caution against relying too heavily on AI. While the technology can help with routine grading, Israel says it should not replace a teacher's educational responsibilities. Around two dozen US states have now introduced guidelines for using AI in the classroom.
Google has released a new AI-powered version of Google Colab, following an initial test phase. Colab AI can assist with data preparation, model training, debugging, and visualizations directly within the notebook. A new Data Science Agent can run complete analysis workflows, display results, and incorporate user feedback. Users interact with Colab AI in everyday language, and the tool updates code or suggests corrections as needed. The new features are designed to streamline the workflow in Colab notebooks. Access is available via the Gemini icon in the toolbar of any open notebook.
Google is making Imagen 4 available via the Gemini API and in AI Studio. According to Google, the new text-to-image model renders text significantly better than its predecessor, Imagen 3. There are two variants: Imagen 4 for general tasks ($0.04 per image) and Imagen 4 Ultra ($0.06 per image), which Google says is designed for more accurate prompt adherence. The following AI slop comic was generated with Imagen 4 Ultra, which can be tested free of charge in Google AI Studio.
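For a rough sense of what the two price points above mean at scale, here is a minimal back-of-the-envelope sketch. Only the per-image prices come from Google's announcement; the batch size and helper function are our own illustration:

```python
# Per-image prices from Google's announcement (USD).
PRICE_STANDARD = 0.04  # Imagen 4
PRICE_ULTRA = 0.06     # Imagen 4 Ultra

def batch_cost(num_images: int, price_per_image: float) -> float:
    """Total cost in USD for a batch of generated images (illustrative helper)."""
    return round(num_images * price_per_image, 2)

# Example: a batch of 500 images on each tier.
print(batch_cost(500, PRICE_STANDARD))  # 20.0
print(batch_cost(500, PRICE_ULTRA))     # 30.0
```

At 500 images, the Ultra tier costs $10 more than the standard tier, so the choice mostly comes down to how much accurate prompt following matters for the task.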

OpenAI has removed all references to its "io" project after a trademark dispute with IYO Audio, whose name is pronounced the same as "io." The planned AI device, a collaboration between Sam Altman and Jony Ive, was originally teased under the "io" name, but IYO Audio objected and took legal action. IYO Audio, which is working on a similar AI product and presented it during a 2024 TED Talk, claims rights to the name. OpenAI says it disagrees with IYO's trademark claim and is reviewing its options. It is unclear whether Ive intended to keep using the "io" name after OpenAI acquired the hardware startup, which had been founded before the partnership was officially announced.
Cybercriminals are upgrading WormGPT with stronger AI models. The original WormGPT, launched in June 2023, used the open-source GPT-J model to create an uncensored LLM for cybercrime. Now, Cato CTRL reports that two new versions have surfaced on BreachForums: "keanu-WormGPT," which taps xAI's Grok through its API using a custom jailbreak, and "xzin0vich-WormGPT," which runs on Mistral AI's Mixtral. Both are distributed via Telegram and bypass the underlying models' safeguards by manipulating system prompts, letting them generate phishing emails, malicious code, and other attack tools. Cato calls this a "significant shift" in the misuse of large language models.

Google has released Magenta RealTime (Magenta RT), an open-source AI model for generating and controlling music live. The model responds to text prompts, audio samples, or a combination of both. Magenta RT is built on an 800-million-parameter Transformer and was trained on roughly 190,000 hours of mostly instrumental music. One technical limitation: the model can only access the last ten seconds of the audio it has generated.
The code and model are available under open licenses on GitHub and Hugging Face, and the model can be tested free of charge on Colab TPUs. Google plans to add support for local use and customization, and to publish a research paper soon.
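The ten-second limitation mentioned above can be pictured as a rolling context buffer: each newly generated audio chunk is appended, and anything older than ten seconds falls out of the window. The sketch below is purely illustrative; the class name, sample rate, and chunk handling are our assumptions, not Magenta RT's actual implementation:

```python
from collections import deque

class RollingAudioContext:
    """Illustrative rolling buffer that keeps only the most recent
    `max_seconds` of audio samples, mimicking a fixed context window.
    (Hypothetical sketch, not Magenta RT's actual code.)"""

    def __init__(self, sample_rate: int = 48_000, max_seconds: float = 10.0):
        self.sample_rate = sample_rate
        self.max_samples = int(sample_rate * max_seconds)
        # deque with maxlen drops the oldest samples automatically
        self._buf = deque(maxlen=self.max_samples)

    def append_chunk(self, samples) -> None:
        """Add newly generated samples; anything beyond the window is discarded."""
        self._buf.extend(samples)

    @property
    def seconds_buffered(self) -> float:
        return len(self._buf) / self.sample_rate

# Demo with a low sample rate to keep the numbers small.
ctx = RollingAudioContext(sample_rate=1_000, max_seconds=10.0)
ctx.append_chunk([0.0] * 15_000)   # feed 15 seconds of audio at 1 kHz
print(ctx.seconds_buffered)        # 10.0 -- only the last ten seconds remain
```

However the model implements it internally, the effect is the same: anything the model played more than ten seconds ago is no longer available to condition what it generates next.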