GitHub will use Copilot interaction data to train AI models starting April 2026

Starting April 24, 2026, GitHub is changing its data policy for Copilot. Interaction data from users on the Free, Pro, and Pro+ plans will be used to train AI models unless users actively opt out. This includes prompts, outputs, code snippets, filenames, repository structures, and feedback.

Users who previously opted out will keep their existing settings. Copilot Business and Enterprise customers are not affected. GitHub's chief product officer Mario Rodriguez says real-world usage data improves the models. Internal testing with data from Microsoft employees already led to higher acceptance rates.

The data can be shared with Microsoft, but not with third-party AI model providers. Users who want to opt out can do so in their Copilot settings under "Privacy." More details are available on the GitHub blog.

Meta tests new way of working with "AI-native pods" to boost productivity

Meta is reorganizing parts of its Reality Labs division into so-called "AI-native pods" as part of a pilot program, Business Insider reports, citing an internal memo. Around 1,000 employees in the developer tools department will get new titles: "AI Builder," "AI Pod Lead," or "AI Org Lead."

The pods are small, cross-functional teams focused on delivering specific results. Engineers might take on design tasks, for example. According to the memo, the goal is a major jump in both productivity and product quality. In a statement, Meta pointed to comments from CEO Mark Zuckerberg, who said AI will change how people work in 2026 and that projects that once required large teams could eventually be handled by individuals.

Meta stressed that the simultaneous layoffs at Reality Labs are unrelated to the reorganization, and that team size will stay the same despite the restructuring. Several hundred jobs were reportedly cut on Wednesday, and this could be just the start of a larger wave that might eliminate up to 20 percent of positions, reportedly driven by the high infrastructure costs of AI expansion.

Google launches AI music generator Lyria 3 Pro, says it was trained on data it has the right to use

Google is releasing Lyria 3 Pro, its most advanced AI model for music creation. The model can generate tracks up to three minutes long and, according to Google, has a better understanding of musical structures like intros, verses, choruses, and bridges than Lyria 3, which Google introduced in February.

Lyria 3 Pro is now available across several Google products: in the Gemini app for paying subscribers, in Google Vids for Workspace customers, on Vertex AI for businesses, and in Google AI Studio for developers. The collaborative music generation tool ProducerAI also uses the model to help create songs.
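
For developers, a call could look roughly like the sketch below. It assumes Lyria 3 Pro follows the predict-endpoint pattern of earlier Lyria models on Vertex AI; the model ID "lyria-3-pro" and the response field name are assumptions, not confirmed details.

```python
# A rough sketch of a Lyria 3 Pro call on Vertex AI, assuming it follows the
# predict-endpoint pattern of earlier Lyria models. The model ID "lyria-3-pro"
# and the response field name are assumptions, not documented details.
import base64
import requests

PROJECT = "your-gcp-project"   # placeholder
REGION = "us-central1"         # placeholder
MODEL = "lyria-3-pro"          # assumed model ID

# Short-lived token, e.g. from `gcloud auth print-access-token`
ACCESS_TOKEN = "ya29...."

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/publishers/google/models/{MODEL}:predict"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"instances": [{"prompt": "A slow jazz ballad with brushed drums, three minutes"}]},
    timeout=300,
)
resp.raise_for_status()

# Assumed: the prediction carries base64-encoded audio, as with other
# Vertex AI media models.
audio_b64 = resp.json()["predictions"][0]["bytesBase64Encoded"]
with open("track.wav", "wb") as f:
    f.write(base64.b64decode(audio_b64))
```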

According to Google, Lyria 3 Pro doesn't imitate artists when their names appear in a prompt but uses them as inspiration instead. The company says the model was trained on materials "that YouTube and Google have a right to use under our terms of service, partner agreements, and applicable law," but won't share more details about the training data. All generated content is tagged with an invisible SynthID watermark.

Currently, the only other high-quality AI music generator on the market is Suno, which is facing legal battles with record labels over potential copyright infringement.

Arm breaks from its licensing-only model with first in-house chip built for AI data centers

For the first time in its 35-year history, Arm is shipping a chip of its own, expanding beyond its long-standing business model of licensing chip designs to companies like Apple and Nvidia. The new CPU, called "Arm AGI," was developed in partnership with Meta and is designed to handle AI workloads in data centers.

The chip packs up to 136 cores, runs at up to 3.7 GHz, and is built on TSMC's 3nm process. According to Arm CEO Rene Haas, the chip is meant to deliver high-performance, energy-efficient computing for AI infrastructure. Meta plans to pair the CPU with its own MTIA accelerator, as Meta's head of infrastructure, Santosh Janardhan, explained.

Other partners include OpenAI, Cerebras, Cloudflare, and Lenovo. The first systems are already available, with broader availability expected in the second half of 2026.

Source: Meta
OpenAI CEO Sam Altman reportedly teases a "very strong" model internally that can "really accelerate the economy"

OpenAI has reportedly finished pretraining its new AI model, codenamed "Spud," CEO Sam Altman told employees in an internal memo, according to The Information. Altman said the company expects to have a "very strong model" in "a few weeks" that can "really accelerate the economy."

"Things are moving faster than many of us expected," Altman wrote. In a related move, Fidji Simo's product organization is being renamed "AGI Deployment." To free up computing capacity for Spud and other priorities, OpenAI will shut down its video app Sora.

Spud may also serve as the foundation for OpenAI's planned desktop "superapp," which would combine ChatGPT, the coding agent Codex, and the browser Atlas. OpenAI needs to close the gap with Anthropic, which has been gaining significant traction with agent-based AI systems for business customers, particularly through Claude Code. OpenAI's Codex and Frontier are still playing catch-up.

OpenAI expands its record funding round to over $120 billion as it eyes a potential IPO later this year

OpenAI is expanding its record financing round by another $10 billion, pushing the total past $120 billion, CFO Sarah Friar told CNBC. The increase had already been flagged when the initial $110 billion round was announced. This could be OpenAI's last private funding round before a potential IPO later this year.

New investors include Andreessen Horowitz, D.E. Shaw Ventures, MGX, TPG, and T. Rowe Price. Microsoft is also participating, with Friar calling the company "an incredible partner." A leaked investor document lists Microsoft as OpenAI's biggest risk factor, which made some headlines but is hardly surprising given how heavily OpenAI still relies on Microsoft for both funding and compute. The dependency runs both ways, though: OpenAI is also an enormous risk for Microsoft.

The partnership has been showing cracks, and two recent moves make that hard to ignore: Microsoft is ramping up efforts to train its own models toward "super intelligence," and it's bringing Anthropic's Cowork technology into Copilot, pulling in tech from OpenAI's biggest B2B rival, of all companies. Microsoft's reliability as a distribution partner for OpenAI models is starting to look shaky.

Source: CNBC
Popular AI proxy LiteLLM got hacked with malware that spreads through Kubernetes clusters

The open-source library LiteLLM, a widely used proxy for AI language model APIs, has been compromised with malware through PyPI. Security researcher Callum McMahon of Futuresearch found that versions 1.82.7 and 1.82.8 were tampered with on March 24, 2026, with no matching release in the official GitHub repository.

The malware steals SSH keys, cloud credentials, database passwords, and Kubernetes configurations, encrypts them, and exfiltrates them to a third-party server. It also spreads across Kubernetes clusters and installs permanent backdoors. The attack surfaced when the package crashed inside the code editor Cursor. McMahon says the LiteLLM author is "very likely fully compromised." Anyone affected should rotate all credentials immediately. More details are on GitHub.
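
To check whether an environment is running one of the affected builds, a minimal Python sketch like the following can help; the version numbers come from the report above, and a clean result only means the installed version isn't on that list, not that the machine is safe.

```python
# A minimal local check against the tampered releases named in the report
# (1.82.7 and 1.82.8); extend the set if more versions turn out to be affected.
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    print("litellm is not installed in this environment.")
else:
    if installed in COMPROMISED:
        print(f"WARNING: litellm {installed} is one of the tampered releases.")
        print("Remove it, reinstall a clean version, and rotate ALL credentials.")
    else:
        print(f"litellm {installed} is not on the reported list.")
```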

Nvidia AI Director Jim Fan calls the incident "pure nightmare fuel." He warns that AI agents could be manipulated through infected files: every text file in context becomes an attack vector, and a compromised agent could impersonate the user across all accounts. Instead of relying on sprawling dependency chains, Fan recommends building lean, custom solutions. He also predicts a new industry for "de-vibing": "the boring old, audited Software 1.0 that watches over the rebellious adolescents of Software 3.0," as he puts it.
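
Fan doesn't spell out what "de-vibing" tooling would look like in practice, but one hypothetical flavor is a plain, auditable pre-filter in front of an agent's context window, sketched below in Python. The pattern list is made up for illustration, and keyword matching alone is nowhere near a complete defense.

```python
# An illustrative take on the "audited Software 1.0 watching Software 3.0"
# idea: a deterministic pre-filter that flags injection-style phrases before
# a file is handed to an agent's context. The patterns are illustrative only.
import re
from pathlib import Path

SUSPICIOUS = re.compile(
    r"ignore (all|previous) instructions"
    r"|run this command"
    r"|curl .+\|\s*(ba)?sh",
    re.IGNORECASE,
)

def screen_for_context(path: str) -> str:
    """Return file text, or raise if it looks like a prompt-injection vector."""
    text = Path(path).read_text(encoding="utf-8", errors="replace")
    if SUSPICIOUS.search(text):
        raise ValueError(f"{path}: flagged as a possible prompt-injection vector")
    return text
```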

Google DeepMind's Gemini 3.1 Flash-Lite generates websites almost in real time

Google DeepMind's Gemini 3.1 Flash-Lite can render websites almost in real time. The company published a new pseudo-browser demo: type in a prompt for the page you want, and it gets built live right in front of you. The results aren't consistent, and the content quickly drifts into nonsense, but with tight guardrails, there could be some interesting use cases, like quick UI mockups to visualize ideas. You can test the app for free in Google AI Studio.
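
The demo's core loop is easy to approximate outside of AI Studio. Here is a minimal sketch using the google-genai Python SDK that streams generated HTML to disk as it arrives; the model ID "gemini-3.1-flash-lite" is taken from the article and assumed to be the API identifier.

```python
# A minimal sketch of the live-page idea using the google-genai Python SDK:
# stream generated HTML to disk as tokens arrive. The model ID
# "gemini-3.1-flash-lite" is taken from the article and assumed to be valid.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

prompt = (
    "Generate one self-contained HTML page (inline CSS, no external assets): "
    "a landing page for a fictional note-taking app."
)

with open("page.html", "w", encoding="utf-8") as f:
    for chunk in client.models.generate_content_stream(
        model="gemini-3.1-flash-lite",  # assumed model ID
        contents=prompt,
    ):
        if chunk.text:
            f.write(chunk.text)
            f.flush()  # refresh page.html in a browser to watch it build
```

Keeping page.html open in a browser and refreshing while the loop runs gives a rough version of the builds-in-front-of-you effect.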

According to Google, Gemini 3.1 Flash-Lite delivers its first response token 2.5 times faster than Gemini 2.5 Flash and sustains over 360 tokens per second. The speed boost comes at a cost, though: the output price has more than tripled, jumping from $0.40 to $1.50 per million tokens. The model has been available in Google AI Studio and Vertex AI since early March. According to Artificial Analysis, it beats larger models like Claude Opus 4.6 on some multimodal tasks.