Grammarly's AI writing tips claim inspiration from experts who never agreed to participate

Grammarly is apparently using the names of journalists and authors without permission for an AI feature called "Expert Review." The feature offers writing tips that are supposedly "inspired" by experts like Stephen King or Neil deGrasse Tyson. Even people who have already died, such as Carl Sagan, are reportedly included. As The Verge, Platformer, and Wired report, the feature also lists numerous tech journalists, including Verge editor-in-chief Nilay Patel and other editors. None of them were reportedly asked beforehand.

Screenshot: Grammarly's Expert Review panel with AI writing suggestions from technology and style experts.
The Expert Review panel in Grammarly provides context-based writing recommendations.

After the backlash, Grammarly reportedly offered only an opt-out option via email - no apology. Alex Gay, vice president of product marketing at parent company Superhuman, said the feature never claimed direct involvement from the experts. According to The Verge, some of the feature's source links pointed to spam sites or completely unrelated content. Expert descriptions also contained outdated job titles. The AI suggestions show up in Google Docs looking like real user comments, which can easily mislead people.

Anthropic launches internal think tank to study AI's impact on society and security

Anthropic has launched the "Anthropic Institute," an internal think tank dedicated to studying how powerful AI affects society, the economy, and security. The institute will be led by co-founder Jack Clark, who is taking on a new role as "Head of Public Benefit."

The institute plans to research how AI is transforming jobs, what new risks emerge from misuse, what "values" AI systems express, and how humans can maintain control over self-improving AI systems.

The team consists of around 30 people drawn from three existing research groups: the Frontier Red Team, the Societal Impacts team, and the economics research team. Early hires include Matt Botvinick (formerly Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).

The launch comes at a turbulent time for the company. Anthropic has sued 17 federal agencies and the Executive Office of the President after being classified as a supply chain risk. According to The Verge, Clark said he has "no concerns" about research funding. Anthropic is also opening an office in Washington, D.C.

Amazon gets court order blocking Perplexity's AI shopping agent

A federal court in San Francisco has granted Amazon an injunction against AI startup Perplexity, barring it from using its AI browser agent Comet to make purchases on Amazon.

Amazon sued Perplexity in November, accusing the startup of fraud because Comet didn't disclose when it was shopping on behalf of a real person and ignored Amazon's demands to stop. The case raises a growing legal question: how should courts handle AI agents taking on complex tasks like online shopping?

Judge Maxine Chesney ruled that Amazon presented strong evidence that Perplexity was accessing users' password-protected accounts with their permission but without Amazon's authorization. Perplexity must also delete any collected Amazon data and has one week to appeal.

There's an interesting wrinkle here: Amazon recently became a major investor in OpenAI, which also sees product research and online shopping as key AI chat features. So far, though, OpenAI reportedly hasn't cracked direct checkout in its chat interface. Amazon may be positioning itself to step in and own that piece of the puzzle.

ChatGPT now explains math and physics with interactive visualizations

OpenAI is rolling out dynamic visual explanations for more than 70 math and science concepts in ChatGPT. Users can tweak variables in real time and see the effects on graphs and formulas instantly. For now, the topics are geared mainly toward high school and college students, covering things like binomial squares, exponential decay, Ohm's law, compound interest, and trigonometric identities.
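To see why these topics lend themselves to variable-tweaking, here is a minimal sketch of one of them, exponential decay. Changing the decay-rate parameter below is the kind of adjustment the interactive panels let students make with a slider; the function and parameter names are our own illustration, not OpenAI's implementation.

```python
import math

def exponential_decay(n0, decay_rate, t):
    """Amount remaining at time t from an initial amount n0: N(t) = n0 * e^(-k*t)."""
    return n0 * math.exp(-decay_rate * t)

# Sweeping the decay rate k shows how quickly the curve falls off --
# the effect a student would watch on the graph in real time.
for rate in (0.1, 0.5, 1.0):
    values = [round(exponential_decay(100, rate, t), 1) for t in range(5)]
    print(f"k={rate}: {values}")
```

A larger decay rate pulls the curve toward zero faster, which is exactly the visual relationship the interactive graphs are meant to convey.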

According to OpenAI, the interactive explanations are available now to all logged-in users worldwide, regardless of their subscription plan. Over time, OpenAI plans to expand the learning modules to cover additional subjects.

Amazon makes senior engineers the human filter for AI-generated code after a series of outages

Following a series of allegedly AI-caused outages, Amazon is turning its senior engineers into human filters for AI-generated code.

"Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," writes Dave Treadwell, Senior Vice President at Amazon, in an internal email obtained by the Financial Times. A briefing identifies a "trend of incidents" with a "high blast radius," linked to "Gen-AI assisted changes." Recently, there have been reports that Amazon's AI coding tools may have also contributed to two AWS outages.

As a result, junior and mid-level engineers now need sign-off from a senior engineer for all AI-assisted code changes. Standard code reviews have always existed at Amazon, but a dedicated approval requirement specifically for AI-generated output is new. Experienced developers are effectively becoming human quality filters for machine-generated code, and their role is shifting away from building toward reviewing what the machine has built.

Among the causes, the internal briefing cites "novel GenAI usage for which best practices and safeguards are not yet fully established."

Source: FT
Nvidia and Mira Murati's Thinking Machines Lab announce long-term AI partnership

Nvidia and Thinking Machines Lab, the AI startup founded by former OpenAI executive Mira Murati, are entering a long-term partnership. Thinking Machines will receive at least one gigawatt of compute power through Nvidia's new Vera Rubin systems to train its own AI models. Deployment is set to begin early next year.

Nvidia has also taken a financial stake in Thinking Machines, though the exact amount wasn't disclosed. The startup had already raised around $2 billion in a seed round led by Andreessen Horowitz at a $12 billion valuation; Nvidia was an investor in that round as well. Thinking Machines is now reportedly seeking another funding round. The startup has also seen departures: co-founders Barret Zoph and Luke Metz returned to OpenAI.

Together, the two companies plan to develop training and deployment systems for Nvidia hardware and make frontier AI models available to businesses and researchers. Murati left OpenAI in 2024 and co-founded Thinking Machines Lab.

Meta acquires Moltbook, the Reddit-style platform built for AI agents

Meta has acquired Moltbook, a platform best described as a Reddit for AI agents. Founders Matt Schlicht and Ben Parr are joining Meta's Superintelligence Labs (MSL), led by former Scale AI CEO Alexandr Wang, Axios reports. The purchase price wasn't disclosed, and the deal is expected to close in mid-March.

Moltbook launched in late January as an experimental space where AI agents could connect and coordinate tasks. Schlicht built most of it with help from his own AI assistant. Since then, two studies have deflated the sci-fi hype around the project: the actual number of agents appears far lower than claimed, and researchers found no real social interaction on the platform.

So what does Meta see in it? In a blog post obtained by Axios, Meta's Vishal Shah explains: "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf. This establishes a registry where agents are verified and tethered to human owners." Existing customers can keep using Moltbook temporarily.

The acquisition follows OpenAI's recent hire of Peter Steinberger, developer of the related agent framework OpenClaw.

Startup claims first full brain emulation of a fruit fly in a simulated body

Eon Systems says it has connected a complete fruit fly brain emulation to a virtual body, producing multiple behaviors for the first time. The emulation covers over 125,000 neurons and 50 million synapses.

According to co-founder Alex Wissner-Gross, the startup mapped the fruit fly's neural wiring from electron microscopy data and connected it to a virtual fly body running in MuJoCo, a physics simulation engine.

Previous projects like OpenWorm worked with far smaller nervous systems, just 302 neurons, or relied on machine learning techniques like reinforcement learning instead of actual brain data. Eon takes a fundamentally different approach. Rather than building AI, the startup wants to digitally copy and simulate real brains, neuron by neuron. The fruit fly is just the starting point. Within two years, Eon plans to emulate a mouse brain with 70 million neurons. The long-term goal is simulating a human brain.
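The core loop described above, a simulated brain driving a simulated body, can be sketched in miniature. The toy below couples a single leaky integrate-and-fire neuron to a one-dimensional "body", standing in for Eon's 125,000-neuron model and its MuJoCo physics body; every name and parameter here is an illustrative assumption, not Eon's actual code.

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron (illustrative only --
    the real emulation uses biophysical models from connectome data)."""
    def __init__(self, tau=10.0, threshold=1.0):
        self.tau = tau            # membrane time constant
        self.threshold = threshold
        self.v = 0.0              # membrane potential

    def step(self, input_current, dt=1.0):
        # Leak toward rest, integrate input; spike and reset at threshold.
        self.v += dt * (-self.v / self.tau + input_current)
        if self.v >= self.threshold:
            self.v = 0.0
            return True
        return False

# Closed loop: spikes from a "motor neuron" nudge a 1-D body position,
# standing in for the brain-to-MuJoCo coupling described above.
neuron = LIFNeuron()
position = 0.0
for t in range(50):
    if neuron.step(input_current=0.15):
        position += 0.1  # each spike produces a small movement
print(round(position, 2))
```

The point of the sketch is the feedback structure, not the neuron model: brain state evolves each timestep, spikes become motor commands, and the body moves, which is the loop Eon runs at the scale of a full fly connectome inside a physics engine.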

Eon has published the code for its brain model on GitHub, though the model itself is based on a paper by Philip Shiu et al. published in Nature in 2024. The genuinely new part, connecting the brain emulation to a simulated body, hasn't been released yet.

Investors bet $1 billion on Yann LeCun's vision for AI beyond LLMs

Yann LeCun, former chief AI scientist at Meta and Turing Award winner, has raised over $1 billion for his new startup Advanced Machine Intelligence Labs (AMI Labs) - making it Europe's largest seed funding round ever. Investors include Nvidia, Bezos Expeditions, Singapore's Temasek, and France's Cathay Innovation.

The company was valued at $3.5 billion before the funding round. Alexandre LeBrun, former head of French startup Nabla, serves as CEO, while LeCun will take the role of board chair. The company is launching with about a dozen employees spread across Paris, New York, Singapore, and Montreal.

AMI Labs aims to build so-called world models that understand the physical environment - with applications in areas like robotics and transportation. According to LeCun and LeBrun, today's language models aren't up to the task. Meta isn't an investor but is expected to partner with AMI Labs.

Sources: AMI Labs | FT
Claude Code gets parallel AI agents that review code for bugs and security gaps

Anthropic has released a code review feature for Claude Code that automatically checks changes for errors before they're merged. Multiple AI agents work in parallel to catch bugs, security vulnerabilities, and regressions. The feature is available as a research preview for Team and Enterprise customers. Anthropic says it has been using the system internally for months: code output per developer has jumped 200 percent over the past year, turning manual review into a bottleneck.

Before the system was deployed, 16 percent of code changes received substantive review comments; now 54 percent do. For large changes of more than 1,000 lines, the system flags problems in 84 percent of cases, averaging 7.5 issues per change. Less than one percent of its findings are marked as incorrect. The system doesn't approve any changes on its own; that decision stays with the developer. Costs are billed by token consumption and average $15 to $25 per review, depending on size and complexity. Admins can set a monthly spending limit.

Anthropic is aggressively building out Claude Code this year. Recent additions include automated desktop functions, remote control for smartphones, a memory function, and a scheduling feature for planned tasks.