
China proposes rules to combat AI companion addiction

China wants to crack down on emotionally manipulative AI chatbots. Under proposed rules, providers would have to detect addictive behavior and step in when users show psychological warning signs. California is taking similar steps after tragedies linked to AI companions.

Australia's financial regulator warns banks against flooding it with AI-generated suspicious activity reports

Australia's financial regulator, Austrac, is pushing back against banks that rely too heavily on AI to generate suspicious activity reports (SARs). According to industry sources, Austrac officials have met with several banks recently to urge more careful use of AI. One major bank was reportedly reprimanded in a private meeting.

Banks have used machine learning to flag suspicious transactions for years. But the shift toward modern large language models only picked up over the past two years, as banks saw the technology as a way to cut costs.

Austrac deputy chief executive Katie Miller said the agency doesn't want a flood of "low-quality" computer-generated reports packed with data but lacking real intelligence value. She warned that banks might be submitting large volumes of reports simply to avoid penalties.

"The banks are leaning towards the end of higher quality but smaller amounts. The more data you've got, there's a problem of noise. If banks were looking to use artificial intelligence just to increase the volume (of reports), that's something we need to assess."

Katie Miller

Authors sue six AI giants for book piracy

Pulitzer Prize winner John Carreyrou and other authors are suing OpenAI, Anthropic, Google, Meta, xAI, and Perplexity for book piracy. The AI companies allegedly stole their works from illegal online libraries. The plaintiffs appear to have a strong case, and this time they are going after the big bucks instead of the "pennies" of a class action settlement.

OpenAI reportedly seeking up to $100 billion in new funding round

OpenAI is in early talks with investors about a massive funding round that could push the company's valuation to around $750 billion, according to The Information. The company could raise tens of billions of dollars, potentially as much as $100 billion.

The discussions are still in their early stages, and nothing is set in stone. At this valuation, the deal would mark a 50 percent jump from OpenAI's last share sale in October.

Amazon is also in talks to invest $10 billion or more. It's the kind of circular AI deal that's become common: Amazon hands OpenAI cash, and OpenAI turns around and spends it on Amazon's chips and cloud services.

According to The Information, OpenAI has reached an annualized revenue run rate of $19 billion, keeping the company on pace to hit its $20 billion target by year's end. The company is projecting $30 billion in revenue for 2026, rising to around $200 billion by 2030. But these ambitious growth targets come with an enormous cash burn of roughly $26 billion for this year and next.
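As a back-of-the-envelope check on those projections, here is a short Python sketch (assuming, for illustration, that the $30 billion and $200 billion figures are full-year revenues for 2026 and 2030) of the implied annual growth rate, plus the prior valuation implied by the reported 50 percent jump:

```python
# Rough arithmetic implied by the reported figures; all inputs are
# The Information's projections, not audited results.
revenue_2026 = 30e9   # projected 2026 revenue in dollars
revenue_2030 = 200e9  # projected 2030 revenue in dollars
years = 2030 - 2026

# Implied compound annual growth rate to get from $30B to $200B in 4 years
cagr = (revenue_2030 / revenue_2026) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.0%}")  # ~61% per year

# A $750B valuation described as a 50 percent jump implies the October
# share sale valued OpenAI at roughly 750 / 1.5 = $500B.
prior_valuation = 750e9 / 1.5
print(f"implied October valuation: ${prior_valuation / 1e9:.0f}B")  # $500B
```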

Terence Tao proposes "artificial general cleverness" as a more honest label for what AI actually does

Renowned mathematician Terence Tao has proposed a new way to think about AI capabilities. On Mastodon, Tao questions whether true "artificial general intelligence" (AGI) is actually achievable with current AI tools. His alternative: "artificial general cleverness" (AGC).

According to Tao, "general cleverness" means the ability to solve complex problems using partly improvised methods. These solutions might be random, rely on raw computing power, or draw from training data. That makes them something other than true "intelligence," but they can still succeed at many tasks, especially when strict testing procedures filter out incorrect results, he says.

"This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing."

Terence Tao

In humans, cleverness and intelligence are linked, but in AI they're decoupled, Tao argues. The mathematician has recently spoken positively about how AI has sped up his own work.
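Tao's point about strict testing filtering out wrong answers maps onto a familiar pattern: sample many cheap candidate answers and keep only those a verifier accepts. The toy sketch below (every function in it is a hypothetical stand-in, not anything from Tao) shows how raw sampling plus verification can succeed without anything resembling understanding:

```python
import random

def generate_candidate(x: int) -> int:
    """Hypothetical 'clever' generator: often wrong, occasionally right."""
    return x * x + random.choice([-1, 0, 1])

def verifier(x: int, y: int) -> bool:
    """Strict test that accepts y only if it really is x squared."""
    return y == x * x

def solve_with_filtering(x: int, attempts: int = 50) -> int | None:
    # Rejection sampling: brute-force generation plus a strict filter
    # stands in for the "cleverness without intelligence" Tao describes.
    for _ in range(attempts):
        y = generate_candidate(x)
        if verifier(x, y):
            return y
    return None

print(solve_with_filtering(7))  # 49, almost surely, given 50 attempts
```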

Trump's AI plan could affect his own voters

Trump is attempting to block state AI laws by withholding billions in broadband funding, but the move stands on shaky legal ground.

"I think the administration has a 30 to 35% chance of this working legally," says Dean Ball, a former White House official who contributed to the administration's AI Action Plan.

The executive order directs the Commerce Department to block states with onerous AI regulations from the $42 billion Broadband Equity, Access, and Deployment program (BEAD), reports Reuters in an analysis of the new order. However, experts doubt whether Congress intended to give the administration authority over state AI regulation when it authorized broadband funding. Furthermore, the move risks political blowback from within the party: Republican governors like Ron DeSantis have previously spoken against federal interference, and withholding funds would impact rural voters—a key demographic that supported Trump by wide margins.

CHT blasts Trump's executive order for creating an AI accountability vacuum

The Center for Humane Technology (CHT), a nonprofit organization advocating for ethical technology, has criticized a new executive order from the Trump administration that aims to undermine state AI laws.

According to the CHT, the order puts public safety at risk by preventing states from meaningfully regulating AI. At the same time, it offers no national replacement framework, creating what the organization calls an accountability vacuum.

"Americans understand the potential benefits and dangers of this technology. They believe government should help regulate AI, not provide a regulatory shield to an industry that prioritizes growth at any cost."

Center for Humane Technology

The CHT points to documented AI harms, including deepfakes, fraud, and chatbot-related suicides among young people. Social media already showed what happens when technology goes unregulated, the organization argues. The government should protect the public instead of caving to the tech industry.

Trump argues that varying state regulations are slowing down the industry. AI companies like Anthropic, OpenAI, and Google support national regulation.

DeepMind co-founder Shane Legg sees 50 percent chance of "minimal AGI" by 2028

DeepMind co-founder Shane Legg puts the odds of achieving "minimal AGI" at 50 percent by 2028. In an interview with Hannah Fry, Legg lays out his framework for thinking about artificial general intelligence. He describes a scale running from minimal AGI through full AGI to artificial superintelligence (ASI). Minimal AGI means an artificial agent that can handle the cognitive tasks most humans typically perform. Full AGI covers the entire range of human cognition, including exceptional achievements like developing new scientific theories or composing symphonies.

Legg believes minimal AGI could arrive in roughly two years. Full AGI would follow three to six years later. To measure progress, he proposes a comprehensive test suite: if an AI system passes all typical human cognitive tasks, and human teams can't find any weak points even after months of searching with full access to every detail of the system, the goal has been reached.
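Read literally, Legg's criterion is a conjunctive gate: every typical cognitive task must pass, and an extended white-box search by human teams must come up empty. The sketch below renders that rule in Python; the data structure, the six-month threshold, and all field names are hypothetical illustrations, not DeepMind's actual evaluation setup:

```python
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    task_passed: dict[str, bool]  # cognitive task name -> passed?
    weak_points_found: int        # flaws uncovered by human red teams
    search_months: float          # duration of the white-box search

def meets_minimal_agi_bar(result: EvaluationResult,
                          required_search_months: float = 6.0) -> bool:
    """Legg's rule as a conjunction: all typical tasks pass AND no weak
    points survive an extended search with full access to the system."""
    all_tasks_pass = all(result.task_passed.values())
    search_exhausted = result.search_months >= required_search_months
    return all_tasks_pass and search_exhausted and result.weak_points_found == 0

result = EvaluationResult(
    task_passed={"reading": True, "planning": True, "arithmetic": True},
    weak_points_found=0,
    search_months=7.0,
)
print(meets_minimal_agi_bar(result))  # True
```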