
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI could face a billion-dollar fine over claims it used pirated books in AI training

OpenAI could soon face a billion-dollar penalty. Authors and publishers suing the company for copyright infringement have uncovered internal messages and emails about the deletion of a dataset containing pirated books. The plaintiffs now want access to communications between OpenAI and its lawyers, arguing these could show the company acted intentionally. Under US copyright law, statutory damages can reach up to $150,000 per work. The New York court is also weighing whether OpenAI waived attorney-client privilege through its own statements, as well as an allegation that evidence was intentionally destroyed.

A similar lawsuit against Anthropic ended in August with a $1.5 billion settlement over the use of pirated books for AI training. This may be one reason both companies have reportedly had trouble securing insurance.

Google launches Gemini Enterprise as a response to Microsoft Copilot and ChatGPT Enterprise

Google has introduced Gemini Enterprise, its answer to Microsoft Copilot and ChatGPT Enterprise. The platform gives companies a central hub to create, manage, and deploy AI agents across existing workflows, with no coding required. Employees can chat with Gemini to look up information, analyze data, or automate routine tasks. Out of the box, Google offers its own agents such as Deep Research and Code Assist, but companies can also bring in their own or third-party agents.

Gemini Enterprise connects with data from Google Workspace, Microsoft 365, Salesforce, SAP, and BigQuery. There are two plans: "Gemini Business," starting at $21 per user per month for smaller teams, and "Gemini Enterprise Standard/Plus," starting at $30 with extra features for larger organizations.

Reasoning models like Claude Sonnet 4.5 are getting better at spotting security flaws

Anthropic sees growing potential for language models in cybersecurity. The company cites results from the CyberGym leaderboard: Claude Sonnet 4 uncovers new software vulnerabilities about 2 percent of the time, while Sonnet 4.5 increases that rate to 5 percent. In repeated tests, Sonnet 4.5 finds new vulnerabilities in more than a third of projects.


In a recent DARPA AI Cyber Challenge, Anthropic notes that teams used large language models like Claude "to build 'cyber reasoning systems' that examined millions of lines of code for vulnerabilities to patch." Anthropic calls this a possible "inflection point for AI’s impact on cybersecurity."

Meta's Yann LeCun reportedly clashed with the company over new publication rules

Meta's top AI researcher, Yann LeCun, is reportedly at odds with the company over new publication guidelines for its FAIR research division. According to six people familiar with the matter, FAIR projects now need stricter internal review before release, a shift some employees say limits their scientific freedom. LeCun even considered stepping down in September, The Information reports, partly in response to Shengjia Zhao being named chief scientist for Meta's superintelligence labs.

The dispute comes as Meta reshapes its AI organization. LeCun, who has openly rejected the current large language model (LLM) paradigm, is pushing for new directions in AI. He has also positioned himself against Donald Trump, while CEO Mark Zuckerberg has been more willing to align with the Trump administration.