
Bandcamp bans AI-generated music

Music platform Bandcamp now prohibits music created entirely or substantially by generative AI. The company says the new policy protects human creativity and the direct connection between artists and fans. The updated rules also strictly ban using AI tools to imitate specific artists or styles.

Unlike most streaming services, Bandcamp focuses on direct purchases of music and merchandise, letting fans support creators financially without intermediaries.

Users can now report content that sounds heavily AI-generated. Bandcamp reserves the right to remove music from the platform based on suspected AI origins alone.

Microsoft pledges to cover data center power costs as community pushback grows

Microsoft is rolling out a new initiative for AI data centers after facing mounting opposition from communities across the US. The company says it will fully cover the power costs of its data centers, ensuring residents won't see higher electricity bills as a result. The announcement comes as data center regions like Virginia, Illinois, and Ohio have seen electricity prices climb 12-16 percent faster than the national average.

Beyond power costs, Microsoft is making several other commitments: the company will stop requesting local tax breaks, cut water consumption by 40 percent by 2030, and replenish more water than it uses. Microsoft President Brad Smith told GeekWire that the industry used to operate differently and now needs to change its approach. Trump previewed the announcement on Truth Social before Microsoft made it official.

As part of the initiative, Microsoft also plans to train local workers and invest in AI education programs in affected communities.

AI models don't have a unified "self" - and that's not a bug

Expecting internal coherence from language models means asking the wrong question, according to an Anthropic researcher.

"Why does page five of a book say that the best food is pizza and page 17 says the best food is pasta? What does the book really think? And you're like: 'It's a book!'", explains Josh Batson, research scientist at Anthropic, in MIT Technology Review.

The analogy comes from experiments on how AI models process facts internally. Anthropic discovered that Claude uses different mechanisms to know that bananas are yellow versus confirming that the statement "Bananas are yellow" is true. These mechanisms aren't connected to each other. When a model gives contradictory answers, it's drawing on different parts of itself - without any central authority coordinating them. "It might be like, you're talking to Claude and then it wanders off," says Batson. "And now you're not talking to Claude but something else."

The takeaway: assuming language models have the kind of mental coherence humans do may be a fundamental category error.

Source: MIT Technology Review

UK startup turns planetary biodiversity into AI-generated drug candidates

UK company Basecamp Research, working with researchers from Nvidia and Microsoft, has developed AI models that generate potential new therapies against cancer and multidrug-resistant bacteria from a database covering more than one million species.

Apple turns to Google's Gemini as Siri's technical debt becomes too much to handle

Apple will use Google's Gemini models for its AI features, including a revamped version of Siri. The multi-year partnership means Apple will rely on Google's Gemini and cloud technology for its upcoming products, according to CNBC. The new features are expected to roll out later this year.

In a statement, Apple said that after careful evaluation, Google's technology offers the most capable foundation for its applications. Rumors about talks between the two tech giants first surfaced in March of last year. Later reports suggested the switch would cost Apple more than one billion dollars annually.

The move comes as Apple continues to struggle with Siri's underlying architecture. Internal reports describe Siri as a technically fragmented system built from old rule-based components and newer generative models - a combination that makes updates difficult and leads to frequent errors. Apple is also working on an entirely new in-house LLM architecture and a model with roughly one trillion parameters, aiming to eventually break free from external providers. Google initially faced similar challenges keeping pace with OpenAI's rapid progress but managed to catch up.

Source: CNBC
UK regulator investigates X over Grok AI's role in generating sexualized deepfakes

British media regulator Ofcom has opened an investigation into X over the AI chatbot Grok. The probe follows reports in recent weeks that Elon Musk's chatbot and social media platform were increasingly being used to create and share non-consensual intimate images and even sexualized images of children.

Ofcom is now examining whether X violated the UK's Online Safety Act. The regulator contacted X on January 5, 2025, demanding a response by January 9. The investigation aims to determine whether X took adequate steps to protect British users from illegal content. Violations could result in fines of up to 18 million pounds or 10 percent of global annual revenue, whichever is greater. In severe cases, a court could even order X blocked in the UK.

Ofcom is also looking into whether xAI, the AI company behind Grok, broke any regulations. Last week, the EU Commission ordered X to preserve all internal documents and data related to the Grok AI chatbot through the end of 2026.