OpenAI's Codex gets a plugin marketplace for Slack, Notion, Figma, and more

OpenAI is adding plugins to Codex that integrate with popular work tools like Slack, Figma, Notion, Gmail, and Google Drive. The plugins go beyond coding: OpenAI says they also help with planning, research, and coordination. Under the hood, plugins bundle predefined prompt workflows ("skills"), app integrations, and MCP server configurations into installable packages, similar to ChatGPT integrations. They work across the Codex app, command line, and IDE extensions. Developers can build their own and distribute them through local or team-wide "marketplaces." An official curated directory is already live, with self-publishing coming soon.

The move is part of OpenAI's broader push into coding tools and enterprise customers, which includes a planned "super app" combining ChatGPT, Codex, and the Atlas browser. Codex now has over 1.6 million weekly active users, and a Windows version shipped recently.

Anthropic confirms leaked model marks a "step change" in reasoning after data breach reveals its existence

A data leak at Anthropic has exposed details about an unreleased AI model that internal documents call the company's most powerful to date. After Fortune broke the story, Anthropic confirmed it is already testing the model with select customers, claiming it marks a "step change" in reasoning, coding, and cybersecurity capabilities. The breach happened because of a misconfiguration in Anthropic's content management system. A default setting automatically made uploaded files public, leaving nearly 3,000 internal documents exposed for anyone to see.

OpenAI is reportedly also gearing up for a major release. The company is preparing a new model codenamed "Spud," which has already finished pretraining. Echoing Anthropic's claims, OpenAI CEO Sam Altman has internally promised a massive jump in capabilities, saying the model can "really accelerate the economy," whatever that means. Both companies will likely time the release of their strongest models to ensure they are optimally positioned for their planned IPOs later this year.

Apple gets full Gemini access and uses distillation to build lightweight on-device AI

Apple has secured broad access rights to Google's Gemini models. According to The Information, Apple has full access to Gemini within its own data centers and can use distillation to build smaller models from it. Gemini generates high-quality answers along with its chain of thought, which then serve as training data for a smaller model. In short, Apple is paying for what Chinese AI companies are allegedly doing in secret: tapping a powerful AI model to generate quality training data for a smaller one.

Because Apple has full access, it can build smaller versions that give the same answers as Gemini and arrive at them the same way. These lighter versions need far less processing power and can run directly on Apple devices.
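The core idea behind this kind of distillation is that the large "teacher" model's full output distribution (its "soft targets") is richer training signal than a single right answer, and the "student" is trained to match it. The toy sketch below (plain Python with hypothetical numbers, not Apple's or Google's actual pipeline) shows the standard distance measure used for that matching:

```python
import math

def kl_divergence(teacher, student):
    """KL(teacher || student): how far the student's output distribution
    is from the teacher's soft targets (lower is better)."""
    return sum(t * math.log(t / s) for t, s in zip(teacher, student) if t > 0)

# Hypothetical next-token probabilities over a tiny three-token vocabulary.
teacher = [0.7, 0.2, 0.1]       # large model's soft targets
student_a = [0.6, 0.25, 0.15]   # student partway through distillation
student_b = [0.34, 0.33, 0.33]  # untrained student, near uniform

# Training minimizes this divergence; the distilled student tracks
# the teacher far more closely than the untrained one.
print(kl_divergence(teacher, student_a) < kl_divergence(teacher, student_b))  # True
```

In practice this loss is computed over the teacher's generated answers (and, as described here, its chain of thought), so the student learns not just the final outputs but the reasoning traces that produced them.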

Since Gemini is built for chatbots and enterprise applications, it doesn't always line up with Apple's plans for Siri, according to The Information. But Apple is still building its own models in parallel through its Apple Foundation Models team. New AI features could drop at Apple's developer conference in June.

Mistral's first open-weight TTS model Voxtral clones voices from three seconds of audio across nine languages

French AI startup Mistral has released Voxtral TTS, its first text-to-speech model. The model supports nine languages—including German, English, French, and Spanish—and is relatively compact at four billion parameters. Mistral says it produces realistic, emotionally expressive speech and can adapt to new voices from as little as three seconds of reference audio. Latency sits at 70 milliseconds for a typical setup with a 10-second speech sample and 500 characters.

In human comparison tests, Voxtral TTS scored higher on naturalness than ElevenLabs Flash v2.5 at a similar response time. That said, ElevenLabs has since shipped a newer model with v3. Voxtral TTS is available through an API at $0.016 per 1,000 characters, can be tested in Mistral Studio, and is also available as an open-weights version on Hugging Face.

OpenAI and Anthropic before the IPO: Different balance sheets make comparison difficult

Anthropic and OpenAI are both growing fast, but they report revenue very differently, The Information reports. OpenAI's annualized revenue is around $25 billion; Anthropic's is $19 billion. Both calculate this similarly: four weeks of revenue times 13, with Anthropic adding monthly subscriptions times 12.
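The annualization described above is simple run-rate arithmetic. A quick sketch with hypothetical figures (in $ millions, not the companies' actual books):

```python
# Annualization as described: four weeks of revenue times 13 (thirteen
# 4-week periods ~ 52 weeks); Anthropic also multiplies monthly
# subscription revenue by 12. Figures are hypothetical, in $ millions.
four_week_revenue = 2000     # booked in the latest four weeks
monthly_subscriptions = 100  # recurring subscription revenue per month

usage_annualized = four_week_revenue * 13             # 26000
subscription_annualized = monthly_subscriptions * 12  # 1200
annualized_run_rate = usage_annualized + subscription_annualized
print(annualized_run_rate)  # 27200
```

Note that a run rate like this extrapolates one recent period across a year, so a strong four weeks inflates the annualized figure.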

The key difference is how they handle cloud partners. OpenAI gives 20 percent of revenue to Microsoft and reports the number before that deduction. For Azure cloud sales, it only counts its 20 percent cut. Anthropic does the opposite: It books all cloud sales through AWS, Microsoft, and Google as its own revenue, listing the providers' shares as sales and marketing costs. Anthropic considers itself the primary provider, while OpenAI treats Microsoft as the primary provider for Azure.
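The effect of the two booking methods can be illustrated with a toy calculation (a hypothetical $100 cloud sale with a 20 percent partner share, not either company's actual accounting):

```python
# Hypothetical cloud sale of $100 where the cloud partner keeps 20%.
sale = 100.0
partner_share = 0.20

# OpenAI-style booking for Azure cloud sales: treat the partner as the
# primary provider and report only its own cut as revenue.
openai_reported_revenue = sale * partner_share  # 20.0

# Anthropic-style booking: treat itself as the primary provider, report
# the full sale as revenue, and list the partner's share as a
# sales-and-marketing cost.
anthropic_reported_revenue = sale               # 100.0
anthropic_partner_cost = sale * partner_share   # 20.0, booked as a cost

# Same underlying deal, very different top line.
print(openai_reported_revenue)     # 20.0
print(anthropic_reported_revenue)  # 100.0
```

The net economics are similar, but the gross-versus-net choice makes the reported revenue figures hard to compare directly.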

Both follow US accounting rules (GAAP), but their numbers are difficult to compare. Anthropic's revenue likely looks higher on paper than it would under OpenAI's method. That matters as both companies head toward an IPO.

Gemini 3.1 Flash Live is Google's most natural-sounding AI voice model yet

Google has unveiled Gemini 3.1 Flash Live, its best voice and audio AI model yet. It delivers faster responses, more natural conversations, and configurable thinking levels for developers. Google says it's better at detecting pitch and emotions and more reliable in noisy environments. The model now powers live mode in the Gemini app.

According to Artificial Analysis, the model scores 95.9 percent on the Big Bench Audio Benchmark at "High" thinking, second only to Step-Audio R1.1 Realtime (97.0 percent) with a 2.98-second response time. At "Minimal," quality drops to 70.5 percent, but response time falls to 0.96 seconds.

Gemini 3.1 Flash Live scores 95.9 percent on Big Bench Audio at its highest thinking level, just behind Step-Audio R1.1 Realtime. | Image: Artificial Analysis

The model is available through the Gemini Live API, Google AI Studio, Gemini Live, and Search Live in over 200 countries. Pricing matches its Gemini 2.5 predecessor at $0.35 per hour of audio input and $1.40 per hour of audio output, making it one of the cheapest audio AI models available. The slightly better-performing Step Audio model is cheaper on input but pricier on output.
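At hourly audio rates like these, estimating a session's cost is straightforward. A small sketch (hypothetical session length; the rates are the ones stated above, but this is not an official pricing calculator):

```python
# Gemini 3.1 Flash Live rates from the article: $0.35 per hour of audio
# input and $1.40 per hour of audio output.
INPUT_PER_HOUR = 0.35
OUTPUT_PER_HOUR = 1.40

def session_cost(input_minutes, output_minutes):
    """Dollar cost of a voice session billed by audio duration."""
    return (input_minutes / 60) * INPUT_PER_HOUR \
         + (output_minutes / 60) * OUTPUT_PER_HOUR

# Hypothetical 10-minute conversation: the user talks for 6 minutes,
# the model talks for 4.
print(round(session_cost(6, 4), 4))  # 0.1283
```

Because output audio costs four times as much as input, talkative model responses dominate the bill.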

Google rolls out Search Live globally, turning your phone camera into a real-time AI search tool

Google is making its "Search Live" feature available globally. Users in more than 200 countries can now talk to Google Search using voice and camera, asking questions out loud and getting spoken answers with web links. With the camera on, you can point your phone at objects and ask about them; Google uses assembling a shelf as an example.

Search Live runs on the new Gemini 3.1 Flash Live model, a multilingual audio and voice model that Google says enables more natural conversations. The feature is part of the AI mode in the Google app for Android and iOS and is also accessible through Google Lens.

OpenAI halts "Adult Mode" as advisors, investors, and employees raise red flags

OpenAI has put development of an erotic chatbot on hold indefinitely, the Financial Times reports. The decision comes after employees and investors raised concerns about the societal impact of sexual AI content. OpenAI's well-being advisory board had already unanimously opposed the planned "Adult Mode," with one board member warning that OpenAI risked creating a "sexy suicide coach." The company is also dealing with technical problems: its age verification system misidentified minors as adults in roughly 12 percent of cases. With 100 million underage users per week, that's a significant gap.

The AI company, currently valued at $730 billion, now wants to wait for long-term research on the effects of sexually explicit chats and emotional attachments before moving forward. According to the FT, there have already been internal discussions about scrapping the project entirely. Investors saw a poor risk-reward ratio, and employees questioned whether the project aligned with OpenAI's mission.

In ChatGPT's app code, the project appears under the name "Citron Mode," with planned age verification for users 18 and older. OpenAI is now shifting its focus to productivity tools and a "super app" built around ChatGPT.

Source: FT
GitHub will use Copilot interaction data to train AI models starting April 2026

Starting April 24, 2026, GitHub is changing its data policy for Copilot. Interaction data from users on the Free, Pro, and Pro+ plans will be used to train AI models unless users actively opt out. This includes prompts, outputs, code snippets, filenames, repository structures, and feedback.

Users who previously opted out will keep their existing settings. Copilot Business and Enterprise customers are not affected. GitHub's chief product officer Mario Rodriguez says real-world usage data improves the models. Internal testing with data from Microsoft employees already led to higher acceptance rates.

The data can be shared with Microsoft, but not with third-party AI model providers. Users who want to opt out can do so in their Copilot settings under "Privacy." More details are available on the GitHub blog.
