Apple turns to Google's Gemini as Siri's technical debt becomes too much to handle

Apple will use Google's Gemini models for its AI features, including a revamped version of Siri. The multi-year partnership means Apple will rely on Google's Gemini and cloud technology for its upcoming products, according to CNBC. The new features are expected to roll out later this year.

In a statement, Apple said that after careful evaluation, Google's technology offers the most capable foundation for its applications. Rumors about talks between the two tech giants first surfaced in March of last year. Later reports suggested the switch would cost Apple more than one billion dollars annually.

The move comes as Apple continues to struggle with Siri's underlying architecture. Internal reports describe Siri as a technically fragmented system built from old rule-based components and newer generative models - a combination that makes updates difficult and leads to frequent errors. Apple is also working on an entirely new in-house LLM architecture and a model with roughly one trillion parameters, aiming to eventually break free from external providers. Google itself struggled early on to keep pace with OpenAI's rapid progress but managed to catch up.

Source: CNBC
UK regulator investigates X over Grok AI's role in generating sexualized deepfakes

British media regulator Ofcom has opened an investigation into X over the AI chatbot Grok. The probe follows reports in recent weeks that Elon Musk's chatbot and social media platform were increasingly being used to create and share non-consensual intimate images and even sexualized images of children.

Ofcom is now examining whether X violated the UK's Online Safety Act. The regulator contacted X on January 5, 2025, demanding a response by January 9. The investigation aims to determine whether X took adequate steps to protect British users from illegal content. Violations could result in fines of up to 18 million pounds or 10 percent of global revenue. In severe cases, a court could even order X blocked in the UK.

Ofcom is also looking into whether xAI, the AI company behind Grok, broke any regulations. Last week, the EU Commission ordered X to preserve all internal documents and data related to the Grok AI chatbot through the end of 2026.

Chinese AI industry admits US remains ahead for now

Leading figures in China's AI industry are tempering expectations: China won't overtake the US in the AI race anytime soon. Justin Lin, head of Alibaba's Qwen model series, puts the odds of a Chinese company surpassing OpenAI or Anthropic in the next three to five years at less than 20 percent. Tang Jie from Zhipu AI warned at the AGI Next Summit in Beijing that the gap with the US may actually be widening, though recent open-source releases suggest otherwise.

At the conference, executives cited limited computing capacity and US export controls on advanced chips as key hurdles. US computing infrastructure is one to two orders of magnitude larger than China's, forcing Chinese companies to concentrate their resources on current projects.

Yao Shunyu, a former OpenAI researcher and now Tencent's AI chief scientist, was more optimistic. He cited three to five years as a realistic timeframe for China to catch up but said the lack of advanced chipmaking machines was the main technical hurdle.

The cautious outlook follows a strong week on the stock market. Startups Zhipu AI and MiniMax Group together raised over one billion dollars in Hong Kong, with MiniMax shares doubling on their first day of trading.

Web world models could give AI agents consistent environments to explore

Researchers at Princeton University, UCLA, and the University of Pennsylvania have developed an approach that gives AI agents persistent worlds to explore. Standard web code defines the rules, while a language model fills these worlds with stories and descriptions.

Convogo's founders join OpenAI to close the gap between AI potential and actual use

OpenAI is bringing in the team behind Convogo, an AI startup that built software for evaluating executives, as part of its broader cloud strategy. Founder Matt Cooper announced the news on LinkedIn. Convogo's software used AI to automatically analyze interviews, surveys, and psychometric tests.

According to OpenAI (via TechCrunch), the acquisition is about the people, not the product. The three founders, Matt Cooper, Evan Cater, and Mike Gillett, will help drive OpenAI's AI cloud efforts. The deal was settled entirely in shares, though the amount remains undisclosed. Convogo's software is being shut down.

The founding team's strong product focus likely made them attractive. Cooper writes that the key to closing the gap between AI's potential and its actual use lies in well-designed, purpose-driven applications, a "usage gap" narrative that Microsoft and OpenAI have both pushed before.

The acquisition also fits OpenAI's strategy of controlling the entire value chain, from infrastructure to models to the end product. This push likely reflects how differentiating on model capabilities alone is getting harder as performance converges and cheaper open-source alternatives catch up.

OpenAI reportedly sets aside $50 billion for employee stock program

Last fall, OpenAI reportedly set aside a stock pool for employees worth about ten percent of the company. Based on the $500 billion valuation from October 2024, that comes to around $50 billion, according to The Information, citing two people familiar with the plans.

OpenAI has already allocated shares worth about $80 billion. Combined with the new stock pool, employees now own about 26 percent of the company. Meanwhile, OpenAI is in early talks with investors about a new funding round that would reportedly value the company at roughly $750 billion.
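For context, a quick back-of-the-envelope check, assuming the figures reported above (the $500 billion valuation, the roughly ten percent pool, and the $80 billion in already-issued shares), shows how the 26 percent employee ownership figure follows:

```python
# Rough check of the reported ownership figures. All numbers are as reported
# by The Information and rounded; this is an illustrative sketch, not an
# official breakdown of OpenAI's cap table.
valuation = 500e9                # reported company valuation, USD
new_pool = 0.10 * valuation      # ~10% stock pool set aside -> $50B
already_issued = 80e9            # shares reportedly already issued, USD

employee_stake = (new_pool + already_issued) / valuation
print(f"New pool: ${new_pool / 1e9:.0f}B")       # New pool: $50B
print(f"Employee stake: {employee_stake:.0%}")   # Employee stake: 26%
```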

A previous analysis found that OpenAI pays its employees more than any tech startup in history, with stock-based compensation averaging about $1.5 million per employee. That level of spending complicates the path to profitability: the company is targeting around $20 billion in ARR. But on top of hefty payroll, development costs, and day-to-day operations, OpenAI faces about $1.4 trillion in data center commitments over the next eight years.