
Maximilian Schreiner

Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.
Salesforce CEO celebrates AI automation

Salesforce CEO Marc Benioff touts AI productivity gains - "4,000 less heads" needed in support.

Salesforce CEO Marc Benioff is celebrating the impact of AI on the company, describing recent advances as "the most exciting thing that's happened in the last nine months for Salesforce." In a podcast interview with investor Logan Bartlett, Benioff said that thanks to AI-driven productivity gains, he now needs "4,000 less heads" in customer support.

Salesforce stands out in Silicon Valley for Benioff's open enthusiasm about what he calls "radical augmentation" of the workforce through automation. While other tech CEOs still express regret over job cuts, Benioff is vocal about the shift. Since 2023, Salesforce has cut around 9,000 jobs (about 8,000 in 2023 and another 1,000 in 2024), and just this week notified 262 employees in San Francisco of layoffs, according to a state filing. In June, Benioff told Bloomberg that AI already handles "50 percent" of the work at Salesforce.

'There is a great invention called wheels,' says the RoboForce CEO, pushing back against humanoids

Some robotics experts question the value of humanoid designs: "There is a great invention called wheels."

"Humanoid designs only make sense if it is so important to justify the trade-off and sacrifice of other things," says Leo Ma, CEO of RoboForce, in an interview with the Washington Post. His company's Titan robot uses four wheels instead of legs and can lift more weight than humanoid models. Ma adds, "Other than that, there is a great invention called wheels."

Scott LaValley, founder of Cartwheel Robotics, is also skeptical. "The dexterity of these robots isn't fantastic. There are hardware limitations, software limitations. There are definitely safety concerns," he says.

One major challenge: most humanoid robots, like humans, have to constantly expend energy to stay balanced on two legs. If the power cuts out, a bipedal robot generally crumples to the ground, which can pose risks to nearby people and objects. Of course, Ma and others who favor wheeled robots have a clear interest in promoting their own designs.

Warner Bros. Discovery is suing AI company Midjourney for copyright infringement

Warner Bros. Discovery is suing AI image generator company Midjourney for copyright infringement in federal court in California. The studio accuses Midjourney of building its business on the mass theft of content and using copyrighted characters like Superman, Wonder Woman, Batman, Bugs Bunny, and Rick and Morty. Warner Bros. Discovery is joining Disney and Universal, which filed similar lawsuits earlier this year. The complaint includes side-by-side comparisons of Midjourney outputs and original film images, such as Christian Bale's Batman from "The Dark Knight." According to Warner Bros. Discovery, the AI tool generates Warner-owned characters even for prompts as generic as "classic comic superhero battle." Midjourney offers four paid subscription tiers ranging from $10 to $120 per month. The company has not commented on the allegations. Warner Bros. Discovery is seeking damages of up to $150,000 per infringed work.

OpenAI plans an AI-powered job platform to certify and connect workers

OpenAI plans to launch a new AI-powered job platform and introduce a certification program for AI skills next year. The goal is to connect businesses and government agencies with qualified professionals. According to Bloomberg, the project is backed by major partners, including US retail giant Walmart. The initiative aims to certify ten million people in the US by 2030. The plans were announced at an AI education meeting at the White House, attended by OpenAI CEO Sam Altman and other tech leaders. The platform will let job seekers prove their skills and be matched with relevant opportunities. By helping workers adapt to the growing impact of AI across many professions, the project is intended to address concerns about AI's disruptive potential - and probably to bolster OpenAI's public image.

OpenAI to mass-produce custom AI chips with Broadcom

OpenAI will start mass-producing its own AI chips next year in partnership with US semiconductor company Broadcom, according to the Financial Times. The move is designed to reduce OpenAI's reliance on Nvidia and meet growing demand for computing power. On Thursday, Broadcom CEO Hock Tan mentioned a new customer that has committed to $10 billion in chip orders. Several sources confirmed that the customer is OpenAI. The company plans to use the chips exclusively for its own operations and does not intend to sell them to external clients. OpenAI is following a path already taken by Google, Amazon, and Meta, which have all developed specialized chips for AI workloads. CEO Sam Altman recently stressed that OpenAI needs more computing resources for its GPT-5 model and plans to double its computing capacity over the next five months.

Source: Financial Times
AI wargame simulations show language models struggle to understand or model de-escalation

Large language models have a tendency to escalate military scenarios - sometimes all the way to nuclear war.

"The AI is always playing Curtis LeMay," Jacquelyn Schneider of Stanford University told Politico, describing her team's experiments using language models in military wargames. LeMay, a US general during the Cold War, was famous for his aggressive stance on nuclear weapons. "It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is."

In these simulations, the models consistently pushed situations toward escalation - often ending in nuclear strikes. Schneider and her team think the root of the problem is the training data: large language models learn from existing literature, which usually highlights conflicts and escalation. Peaceful resolutions, like the Cuban Missile Crisis, are rarely covered in detail. With so few examples of "non-events" in the data, de-escalation is hard for the AI to model. These tests used older language models, including GPT-4, Claude 2, and Llama-2.