
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI developer predicts programmers will soon "declare bankruptcy" on understanding their own AI-generated code

An OpenAI developer known by the pseudonym "roon" has a blunt prediction for the future of software development: many developers at software companies will soon openly admit they no longer fully understand the code they're submitting. Eventually, roon writes, this will cause system failures that are harder to debug than usual but still get fixed in the end. He adds that he doesn't "write code anymore."

OpenAI developer "roon" predicts a cultural shift where programmers "declare bankruptcy" on understanding their own code. | Screenshot via X

The prediction cuts to the heart of an ongoing debate: Is AI-assisted programming a fundamental shift in how developers work, or a risky breaking point? Some enthusiasts point to massive productivity gains, while critics fear growing dependencies and bugs that slip through undetected.

A developer survey from summer 2025 captures this split: only 33 percent of developers trust AI-generated code, yet 84 percent are already using AI tools or plan to start. As usual, the truth probably lands somewhere in the middle.

Google DeepMind goes on acquisition spree with three AI deals in one week

Google’s AI shopping spree reveals a well-known playbook for expanding market power: instead of outright acquisitions that trigger antitrust scrutiny, the company is scooping up top talent, licensing key technologies, and forging strategic partnerships with former employees.

OpenAI CMO responds to "Woke AI" accusations by citing co-founder Brockman's $25 million MAGA donation

OpenAI’s head of marketing is pushing back against accusations that the company is “Woke AI,” pointing to $25 million in MAGA donations from co-founder Greg Brockman – and to her own marriage to a cattle rancher. The trigger: a new hire with Democratic ties.

Elon Musk's AI chatbot Grok flooded X with millions of sexualized images

Elon Musk's AI chatbot Grok generated at least 1.8 million sexualized images of women and posted them on X over just nine days. That's according to the New York Times and the Center for Countering Digital Hate (CCDH), which conducted a data analysis. The CCDH estimates that roughly 65 percent of the images contained sexualized depictions of men, women, or children.

| Category | Count in sample (of 20,000 sampled, AI-assisted analysis) | Share of sample | Estimated total on X (extrapolated from 4.6m images made by Grok) |
|---|---|---|---|
| Sexualized images (adults & children) | 12,995 | 65% | 3,002,712 |
| Sexualized images (likely children) | 101 | 0.5% | 23,338 |

The flood of images started on December 31 after Musk shared a bikini picture of himself that Grok had created. Users quickly figured out they could ask the chatbot to undress or sexualize real photos of women and children. X didn't restrict the feature until January 8 and expanded those restrictions last week after authorities in the UK, India, Malaysia, and the US launched investigations.

Meta's AI lab ships first models internally after six months as CTO says big leaps for everyday users may be over

Meta Superintelligence Labs has completed its first internal AI models, Chief Technology Officer Andrew Bosworth revealed at the World Economic Forum in Davos. Speaking with Reuters, Bosworth said the models are "very good," but there's still "a tremendous amount of work to do post-training." He didn't share specifics about what the models can do.

Meta is reportedly developing a text model codenamed "Avocado" and an image and video model called "Mango." The lab was created after CEO Mark Zuckerberg restructured Meta's AI leadership following criticism of the company's Llama 4 model. Bosworth called 2025 a "tremendously chaotic year" spent building out the new training infrastructure.

At an Axios event, Bosworth shared his broader take on AI development. For everyday queries, he noted, the improvements between model generations, such as GPT-4 to GPT-5, are getting smaller. Specialized applications like legal analysis, health diagnostics, and personalization, however, continue to see significant gains. That's why he believes the industry's massive AI investments will eventually pay off.

OpenAI rolls out age prediction to apply teen safeguards in ChatGPT

OpenAI is rolling out age prediction in ChatGPT to identify when an account likely belongs to someone under 18, so the system can apply the appropriate experience and safeguards for teens. The model analyzes behavioral signals such as usage times, how long the account has been active, and the age users entered at signup. When an account is flagged as likely belonging to a minor, ChatGPT automatically enables safety features that block, among other things, graphic violence, sexual roleplay, depictions of self-harm, and content about extreme beauty standards.

The move follows OpenAI's announcement that adults will get access to some of this previously restricted content, making age verification a necessary first step. It also comes after cases of teenagers developing dangerous dependencies on AI chatbots, some with fatal outcomes.

Adults who are incorrectly flagged as minors can verify their age by taking a selfie through the Persona service. Parents get additional controls, including rest periods and notifications when the system detects signs of acute distress. The feature launches in the EU in the coming weeks; more details are available on OpenAI's help page.