
Jonathan Kemper

Jonathan writes for THE DECODER about how AI tools can improve both work and creative projects.
Perplexity's "Personal Computer" promises a tireless AI agent for $200 a month

Perplexity AI's "Personal Computer" is an AI assistant that works around the clock - handling emails, presentations, and app control. It runs on a dedicated Mac Mini connected to the user's local apps and Perplexity's servers, and can be controlled from any device. On X, CEO Aravind Srinivas called it a "digital proxy" that never sleeps. The service builds on Perplexity Computer, which launched in February and bundles multiple AI models.

Security features include a kill switch and an activity log. Access requires the Max subscription at 200 dollars per month, with only a waiting list available for now. Perplexity is also launching an enterprise version that connects to over 400 tools like Salesforce and Snowflake - the company claims it completed 3.25 years' worth of work internally in four weeks. The concept draws comparisons to the controversial OpenClaw, whose developer now works at OpenAI. Agent-based AI systems dominate the current landscape but face sharp criticism around resource demands and security vulnerabilities.

Grammarly's AI writing tips claim inspiration from experts who never agreed to participate

Grammarly is apparently using the names of journalists and authors without permission for an AI feature called "Expert Review." The feature offers writing tips supposedly "inspired" by experts like Stephen King or Neil deGrasse Tyson. Even deceased figures, such as Carl Sagan, are reportedly included. As The Verge, Platformer, and Wired report, the feature also lists numerous tech journalists, including Verge editor-in-chief Nilay Patel and other editors. None of them were reportedly asked beforehand.

Screenshot: Grammarly's Expert Review panel with AI writing suggestions from technology and style experts.
The Expert Review panel in Grammarly provides context-based writing recommendations.

After the backlash, Grammarly reportedly offered only an opt-out option via email - no apology. Alex Gay, vice president of product marketing at parent company Superhuman, said the feature never claimed direct involvement from the experts. According to The Verge, some of the feature's source links pointed to spam sites or completely unrelated content. Expert descriptions also contained outdated job titles. The AI suggestions show up in Google Docs looking like real user comments, which can easily mislead people.

Claude Code gets parallel AI agents that review code for bugs and security gaps

Anthropic has released a code review feature for Claude Code that automatically checks changes for errors before they're merged. Multiple AI agents work in parallel to catch bugs, security vulnerabilities, and regressions. The feature is available as a research preview for Team and Enterprise customers. According to the company, Anthropic has been using the system internally for months. Code output per developer has jumped 200 percent over the past year, turning manual review into a bottleneck.

Before deployment, 16 percent of changes received substantive comments - now it's 54 percent. For large changes over 1,000 lines, the system flags problems in 84 percent of cases, averaging 7.5 issues per change. Less than one percent of findings are marked as incorrect. The system doesn't approve any changes on its own - that decision stays with the developer. Costs are billed by token consumption and average between 15 and 25 dollars per review, depending on size and complexity. Admins can set a monthly spending limit.

Anthropic is aggressively building out Claude Code this year. Recent additions include automated desktop functions, remote control for smartphones, a memory function, and a scheduling feature for planned tasks.

Hallucinated references are passing peer review at top AI conferences and a new open tool wants to fix that

Fake citations are slipping past peer review at top AI conferences, and commercial LLMs can’t spot the fakes they generate. A new open-source tool called CiteAudit allegedly catches what GPT, Gemini, and Claude miss.