Short

Anthropic backs California's SB 53, a state bill that would force large developers of advanced AI to be more transparent and secure, apparently because it sees Washington as too slow to act. Anthropic says SB 53 could serve as a solid starting point for national rules.

"While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington."

Anthropic

The bill would require affected companies to publish safety policies, disclose risk analyses, report safety incidents within 15 days, share internal assessments under confidentiality, and follow whistleblower protection rules. Violations could result in fines. The rules target only companies running highly capable models, keeping the burden off smaller providers. Anthropic says its decision draws on lessons from SB 1047, California's earlier frontier AI bill, which was vetoed in 2024.

Short

OpenAI CEO Sam Altman has picked up on a trend: "it seems like there are really a lot of LLM-run twitter accounts now." No punchline here.

Image: Altman via X

The Dead Internet Theory is a conspiracy theory that claims the internet is no longer driven by real people, but mostly by bots and AI-generated content. According to this idea, most online activity—comments, posts, and articles—is fake, created to manipulate public opinion and control users.

Short

Google has published the pricing and usage limits for its Gemini AI app. The service is available in over 150 countries and comes in three tiers: free Gemini, Google AI Pro, and Google AI Ultra. Users must be at least 18 years old.

The free version limits access to the advanced Gemini 2.5 Pro model to five requests per day. Google AI Pro raises that to 100 daily requests, while Ultra subscribers can make up to 500. Both paid tiers also expand the context window from 32,000 to 1 million tokens.

Pro users can generate up to 20 Deep Research reports (using Gemini 2.5 Pro) and three videos per day. Ultra subscribers get 200 Deep Research reports and up to five videos daily. Image generation jumps from 100 images per day on the free plan to 1,000 on paid subscriptions.

Google notes that these limits may change and can vary based on factors like prompt complexity.
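For a quick side-by-side view, the reported limits can be collected into a small Python lookup table. The figures simply mirror the numbers above; entries the announcement does not specify are marked None, and everything is per day except the context window.

```python
# Reported daily limits for the Gemini app tiers (illustrative only; Google says
# these can change and may vary with prompt complexity).
GEMINI_APP_LIMITS = {
    "Free": {
        "gemini_2_5_pro_requests": 5,
        "context_window_tokens": 32_000,
        "deep_research_reports": None,  # not specified in the announcement
        "videos": None,                 # not specified in the announcement
        "images": 100,
    },
    "Google AI Pro": {
        "gemini_2_5_pro_requests": 100,
        "context_window_tokens": 1_000_000,
        "deep_research_reports": 20,
        "videos": 3,
        "images": 1_000,
    },
    "Google AI Ultra": {
        "gemini_2_5_pro_requests": 500,
        "context_window_tokens": 1_000_000,
        "deep_research_reports": 200,
        "videos": 5,
        "images": 1_000,
    },
}
```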

Short

Salesforce CEO Marc Benioff touts AI productivity gains: "4,000 less heads" needed in support.

Salesforce CEO Marc Benioff is celebrating the impact of AI on the company, describing recent advances as "the most exciting thing that's happened in the last nine months for Salesforce." In a podcast interview with investor Logan Bartlett, Benioff said that thanks to AI-driven productivity gains, he now needs "4,000 less heads" in customer support.

Salesforce stands out in Silicon Valley for Benioff's open enthusiasm about what he calls "radical augmentation" of the workforce through automation. While other tech CEOs still express regret over job cuts, Benioff is vocal about the shift. Since 2023, Salesforce has cut around 9,000 jobs (about 8,000 in 2023 and another 1,000 in 2024), and just this week notified 262 employees in San Francisco of layoffs, according to a state filing. In June, Benioff told Bloomberg that AI already handles "50 percent" of the work at Salesforce.

Short

Some robotics experts question the value of humanoid designs: "There is a great invention called wheels."

"Humanoid designs only make sense if it is so important to justify the trade-off and sacrifice of other things," says Leo Ma, CEO of RoboForce, in an interview with the Washington Post. His company's Titan robot uses four wheels instead of legs and can lift more weight than humanoid models. Ma adds, "Other than that, there is a great invention called wheels."

Scott LaValley, founder of Cartwheel Robotics, is also skeptical. "The dexterity of these robots isn't fantastic. There are hardware limitations, software limitations. There are definitely safety concerns," he says.

One major challenge: most humanoid robots, like humans, have to constantly expend energy to stay balanced on two legs. If the power cuts out, a bipedal robot generally crumples to the ground, which can pose risks to nearby people and objects. Of course, Ma and others who favor wheeled robots have a clear interest in promoting their own designs.

Short

Alibaba unveils Qwen3-Max-Preview, its largest language model yet, featuring more than one trillion parameters. The model is available through Qwen Chat and the Alibaba Cloud API. According to Alibaba, Qwen3-Max-Preview outperforms the previous top model, Qwen3-235B-A22B-2507, in internal benchmarks and with early users. Improvements show up in knowledge, conversation, task handling, and instruction following, with reduced "model knowledge hallucinations."

Image: Qwen

Qwen3-Max-Preview accepts up to 258,048 input tokens and generates up to 32,768 output tokens. Pricing starts at $2.151 per million input tokens and $8.602 per million output tokens. The model does not support image processing.
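For readers who want to try the model, here is a minimal sketch of an API call, assuming Alibaba Cloud's OpenAI-compatible Model Studio endpoint; the base URL, the model id "qwen3-max-preview", and the DASHSCOPE_API_KEY environment variable are assumptions to verify against Alibaba's documentation.

```python
import os

from openai import OpenAI  # standard OpenAI SDK pointed at an OpenAI-compatible endpoint

# Base URL and model id are assumptions based on Alibaba Cloud Model Studio conventions;
# check the official docs before relying on them.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen3-max-preview",
    messages=[{"role": "user", "content": "Summarize the trade-offs of trillion-parameter models."}],
    max_tokens=1024,  # well under the reported 32,768-token output cap
)
print(response.choices[0].message.content)

# Rough cost estimate at the rates reported above ($2.151 / $8.602 per million tokens).
usage = response.usage
estimated_cost = usage.prompt_tokens / 1e6 * 2.151 + usage.completion_tokens / 1e6 * 8.602
print(f"Approximate cost: ${estimated_cost:.4f}")
```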

Short

OpenAI will start mass-producing its own AI chips next year in partnership with US semiconductor company Broadcom, according to the Financial Times. The move is designed to reduce OpenAI's reliance on Nvidia and meet growing demand for computing power. On Thursday, Broadcom CEO Hock Tan mentioned a new customer that has committed to $10 billion in chip orders. Several sources confirmed that the customer is OpenAI. The company plans to use the chips exclusively for its own operations and does not intend to sell them to external clients.

OpenAI is following a path already taken by Google, Amazon, and Meta, which have all developed specialized chips for AI workloads. CEO Sam Altman recently stressed that OpenAI needs more computing resources for its GPT-5 model and plans to double its computing capacity over the next five months.
