A study making the rounds on social media claims that more than half of all web content is "created by AI instead of humans." According to the study, a piece of writing is considered "AI-generated" if at least 51 percent of its words are flagged as machine-written by a detector.

But this framing misses two key questions: why was the text written, and who is actually responsible for it? When a product doesn't work, we don't blame the machine that made it; we hold the people who designed and published it accountable. Most readers don't care which tool produced a text or who built that tool.

If anything, the study shows we need a real conversation about what counts as "AI-generated." I'm not linking to the study because it looks like SEO bait. If you're interested, you can find it with a quick search.

Meta AI researcher Yann LeCun is distancing himself from the Llama models. In a recent post on X, LeCun said he "has not been involved in any Llama," apart from a "very indirect" role in Llama 1 and his push for the open-source release of Llama 2. He explained that since early 2023, Llama 2, 3, and 4 have been developed by Meta's GenAI team, which has since been replaced by the TBD Lab.

Yann LeCun clarifies his limited involvement with recent Llama models in a statement on X. | Image: via X

Although the Llama models were briefly popular in the open-source community, they were quickly overtaken by other models, and Llama 4 failed to meet expectations. LeCun leads FAIR, Meta's fundamental AI research group focused on long-term projects outside of large language models. FAIR has recently faced layoffs, while TBD Lab, led by Alexandr Wang, is gaining influence within the company.

Anthropic is rolling out a new memory feature for Claude on the Pro and Max plans. With Claude Memory, the system remembers project content, user preferences, and workflows to keep context consistent across conversations. Each project gets its own separate memory, so confidential topics stay isolated. Users can review and edit what Claude remembers at any time, and there's also an "incognito chat" option that doesn't save data or appear in the chat history.

The feature is optional and can be turned on in the settings. In a blog post, Anthropic said that before launch it tested the memory function in sensitive areas, such as making sure it wouldn't reinforce harmful conversation patterns or bypass safety features, and made adjustments as needed. Memory has been available for Team and Enterprise users since September.
