Apple is facing a lawsuit in California from authors Grady Hendrix and Jennifer Roberson, who claim the company violated their copyrights by using their books to train AI models such as OpenELM, which underpins Apple Intelligence. The lawsuit alleges Apple used the Books3 dataset, a collection of more than 196,000 pirated books that includes works by both authors. The complaint also accuses Apple of using its Applebot web crawler to copy website content and pull material from so-called shadow libraries.

The plaintiffs are seeking damages and a court order barring Apple from using their works. This case follows a recent lawsuit against Anthropic, which ended in a settlement after similar copyright claims.

Warner Bros. Discovery is suing AI image generator company Midjourney for copyright infringement in federal court in California. The studio accuses Midjourney of building its business on the mass theft of content and using copyrighted characters like Superman, Wonder Woman, Batman, Bugs Bunny, and Rick and Morty. Warner Bros. Discovery is joining Disney and Universal, which filed similar lawsuits earlier this year. The complaint includes side-by-side comparisons of Midjourney outputs and original film images, such as Christian Bale's Batman from "The Dark Knight." According to Warner Bros. Discovery, the AI tool generates Warner-owned characters even for prompts as generic as "classic comic superhero battle." Midjourney offers four paid subscription tiers ranging from $10 to $120 per month. The company has not commented on the allegations. Warner Bros. Discovery is seeking damages of up to $150,000 per infringed work.

OpenAI plans to launch a new AI-powered job platform and introduce a certification program for AI skills next year. The goal is to connect businesses and government agencies with qualified professionals. According to Bloomberg, the project is backed by major partners, including US retail giant Walmart, and aims to certify ten million people in the US by 2030. The plans were announced at an AI education meeting at the White House attended by OpenAI CEO Sam Altman and other tech leaders. The platform will let job seekers demonstrate their skills and be matched with relevant opportunities. By helping workers adapt to the growing impact of AI across many professions, the project is intended to address concerns about AI's disruptive potential - and probably to bolster OpenAI's public image.

OpenAI will start mass-producing its own AI chips next year in partnership with US semiconductor company Broadcom, according to the Financial Times. The move is designed to reduce OpenAI's reliance on Nvidia and meet growing demand for computing power. On Thursday, Broadcom CEO Hock Tan mentioned a new customer that has committed to $10 billion in chip orders. Several sources confirmed that the customer is OpenAI. The company plans to use the chips exclusively for its own operations and does not intend to sell them to external clients. OpenAI is following a path already taken by Google, Amazon, and Meta, which have all developed specialized chips for AI workloads. CEO Sam Altman recently stressed that OpenAI needs more computing resources for its GPT-5 model and plans to double its computing capacity over the next five months.

Large language models have a tendency to escalate military scenarios - sometimes all the way to nuclear war.

"The AI is always playing Curtis LeMay," Jacquelyn Schneider of Stanford University told Politico, describing her team's experiments using language models in military wargames. LeMay, a US general during the Cold War, was famous for his aggressive stance on nuclear weapons. "It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is."

In these simulations, the models consistently pushed situations toward escalation - often ending in nuclear strikes. Schneider and her team suspect the root of the problem is the training data: large language models learn from existing literature, which tends to dwell on conflict and escalation, while peaceful resolutions, like the de-escalation of the Cuban Missile Crisis, are rarely covered in detail. With so few examples of such "non-events" in the data, de-escalation is hard for the models to learn. The tests used older language models, including GPT-4, Claude 2, and Llama-2.

Universities are recognizing that AI literacy is now critical for every graduate, not just those in computer science.

This new priority is leading top US schools, including UCLA and the University of Maryland, to appoint their first Chief AI Officers. The aim is to equip all students with foundational AI skills, reflecting the sense that understanding AI is becoming as essential as traditional core subjects.

Higher education leaders see a particular responsibility in this area, given their historic role in launching companies like Facebook, Google, and Dell. As AI expertise quickly becomes a basic requirement in the tough job market for new graduates, universities are under growing pressure to ensure students are prepared.
