Microsoft expands Azure AI Foundry with new OpenAI models

Microsoft unveiled several new multimodal AI models for Azure AI Foundry at OpenAI DevDay in October 2025. The update includes GPT-image-1-mini, GPT-realtime-mini, and GPT-audio-mini, along with security improvements for GPT-5-chat-latest and the reasoning model GPT-5-pro. The new models are designed to help developers build AI applications for text, image, audio, and video faster and at lower cost.

The Microsoft Agent Framework, an open-source SDK for coordinating multiple AI agents, is now available, as is OpenAI's new Agent SDK.

OpenAI brings more control to Sora

OpenAI is adding new controls to its Sora video app. According to Sora head Bill Peebles, users can now decide where AI-generated versions of themselves can appear - for example, blocking political content or banning certain words. Users can also set style guidelines for their digital likeness. These updates come in response to criticism over abusive deepfakes on the platform. Peebles also announced that Sora will soon officially support cameos featuring copyrighted characters. Recently, CEO Sam Altman said rights holders should have "more control" and will soon receive a share of Sora's revenue.

Reasoning models like Claude Sonnet 4.5 are getting better at spotting security flaws

Anthropic sees growing potential for language models in cybersecurity. The company cites results from the CyberGym leaderboard: Claude Sonnet 4 uncovers new software vulnerabilities about 2 percent of the time, while Sonnet 4.5 increases that rate to 5 percent. In repeated tests, Sonnet 4.5 finds new vulnerabilities in more than a third of projects.

Image: Anthropic

In a recent DARPA AI Cyber Challenge, Anthropic notes that teams used large language models like Claude "to build 'cyber reasoning systems' that examined millions of lines of code for vulnerabilities to patch." Anthropic calls this a possible "inflection point for AI’s impact on cybersecurity."

Meta's Yann LeCun reportedly clashed with the company over new publication rules

Meta's top AI researcher, Yann LeCun, is reportedly at odds with the company over new publication guidelines for its FAIR research division. According to six people familiar with the matter, FAIR projects now need stricter internal review before release - a shift some employees say limits their scientific freedom. LeCun even considered stepping down in September, The Information reports, partly in response to Shengjia Zhao being named chief scientist for Meta's superintelligence labs.

The dispute comes as Meta reshapes its AI organization. LeCun, who has openly rejected the current large language model (LLM) paradigm, is pushing for new directions in AI. He has also positioned himself against Donald Trump, while CEO Mark Zuckerberg has been more willing to align with the Trump administration.

OpenAI's Sora 2 answers science questions directly in its generated videos

OpenAI's Sora 2 can handle knowledge questions, too. In a test by Epoch AI, Sora was given ten randomly selected questions from the GPQA Diamond multiple-choice benchmark, which covers the natural sciences. Sora scored 55 percent, while GPT-5 managed 72 percent. To run the test, Epoch AI asked Sora to generate a video of a professor holding up a sheet of paper showing the answer letter.

Video: via EpochAI

Epoch AI points out that an upstream language model could rewrite the prompt before video generation and insert the answer along the way. Other systems, like HunyuanVideo, use similar re-prompting techniques, but it's not confirmed whether Sora does the same. Either way, the lines between text and video models are starting to blur.

Meta will start using chatbot conversations to target ads across all major platforms

Meta is set to start mining users' conversations with its chatbot to target ads and content across all Meta platforms, including Facebook and Instagram. Beginning December 16, 2025, anything users say to Meta AI—by text or voice—will feed into the company's ad and content algorithms. If someone discusses hiking with the AI, for instance, they can expect to see more hiking-related ads, posts, and groups. Meta says it will exclude sensitive subjects like religion, health, and political views from this data collection.

Image: Screenshot via Meta

Users can try to limit what shows up in their feeds using settings like "Ads Preferences," but these changes only apply if accounts are linked in the Accounts Center. Meta plans to alert affected users ahead of time by notification and email. The policy will take effect in most regions.

Source: Meta