Google has added a File Search Tool to the Gemini API that lets developers query their own documents through a managed vector database. The tool handles storing files, splitting them into chunks, searching for relevant passages, and inserting that material into Gemini's responses. Supported file types include PDF, DOCX, TXT, and JSON. Using the tool is free apart from a small indexing fee of $0.15 per million tokens, and source references are included automatically in the responses.
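Under the hood this is a standard retrieval-augmented generation setup. The sketch below is a minimal, self-contained illustration of the pattern the tool automates, not the Gemini SDK: the chunking function, the toy word-overlap scorer (standing in for Google's server-side vector search), and the sample documents are all made up for illustration.

```python
# Illustrative sketch of what the File Search Tool automates (not the Gemini SDK):
# split documents into chunks, find the chunks most relevant to a query, and
# build a grounded prompt that cites its sources.

def chunk(text: str, size: int = 120) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def relevance(query: str, passage: str) -> float:
    """Toy word-overlap score standing in for embedding-based vector similarity."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def grounded_prompt(query: str, docs: dict[str, str], top_k: int = 3) -> str:
    """Pick the most relevant chunks across all documents and tag their source files."""
    scored = [
        (relevance(query, c), name, c)
        for name, text in docs.items()
        for c in chunk(text)
    ]
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:top_k]
    context = "\n\n".join(f"[source: {name}]\n{c}" for _, name, c in top)
    return (
        "Answer using only the context below and cite the sources you used.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# Hypothetical documents; in practice these would be parsed PDF, DOCX, TXT, or JSON files.
docs = {
    "handbook.txt": "Employees receive 28 vacation days per year. Requests go to HR.",
    "remote_policy.txt": "Remote work is allowed up to three days per week.",
}
print(grounded_prompt("How many vacation days do employees get?", docs))
```

With the hosted tool, all of this happens on Google's side: developers upload their files, attach file search to a request, and get back grounded answers with source references.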

Google says the main use cases are internal search systems and chatbots that need to answer questions based on specific company documents. Developers can find more information in the documentation or test out the demo in Google AI Studio.

Google's new TPU v7 Ironwood chips are now generally available for training and running large AI models. According to Google, Ironwood delivers ten times the peak performance of the previous TPU v5p and is four times more efficient than the TPU v6e. Early adopters like Anthropic, Lightricks, and Essential AI are already using Ironwood for demanding AI workloads. Amin Vahdat, VP and GM for AI Infrastructure at Google, recently said every TPU Google has built so far is currently in use.

Alongside Ironwood, Google is introducing Axion, a new line of Arm-based virtual machines aimed at everyday computing tasks. Vimeo and ZoomInfo report that Axion N4A instances deliver up to 60 percent better price-performance than comparable x86 systems. With both Ironwood and Axion, Google is challenging competitors like Nvidia and working to offer more flexible AI infrastructure with better cost efficiency, especially for serving its own AI products like Gemini.

Deutsche Telekom and Nvidia are joining forces to build the "Industrial AI Cloud" in Munich, set to become one of the largest AI computing hubs in Europe. The center will feature more than 1,000 NVIDIA DGX B200 systems and RTX PRO servers, with up to 10,000 NVIDIA Blackwell GPUs. According to Telekom, this will increase Germany's AI computing power by 50 percent. For comparison's sake, Sam Altman recently said that OpenAI will have "well over one million GPUs" online by the end of 2025. That's just OpenAI.

"Germany's strength in engineering and industry is legendary and will now be further enhanced by AI."

Jensen Huang, Nvidia CEO

The new initiative aims to give European companies the ability to build AI solutions using local data. Early partners are SAP, Polarise, and Agile Robots. The platform is intended to support applications such as factory simulation, robot training, and running large language models on site. The project, valued at over one billion euros, is privately funded and separate from the EU’s AI gigafactory funding program.

Rumors on LinkedIn claim that ChatGPT is no longer allowed to give medical or legal advice, but OpenAI says that’s false. The company says the model’s behavior has not changed. Karan Singhal, OpenAI’s Head of Medical AI, says ChatGPT was never meant to replace expert advice, but can still help users understand complex medical or legal topics.

OpenAI’s usage policy change logs show no recent changes to how sensitive topics are handled. The most recent update, on October 29, 2025, was made to "reflect a universal set of policies across OpenAI products and services." A line warning against providing advice that "requires a license" was already present in earlier versions; older policies included similar notes, just without the licensing reference.

OpenAI has signed a $38 billion multi-year deal with Amazon Web Services (AWS) to run and expand its AI models using AWS infrastructure. The partnership includes access to AWS UltraServers powered by hundreds of thousands of NVIDIA GPUs and scalable CPUs. The agreement runs through at least 2026, with extension options. OpenAI's flagship models, such as GPT-5, will remain exclusive to Microsoft Azure and OpenAI's own platform, except for its open-source models.

The AWS deal adds to a string of recent partnerships by OpenAI: with Nvidia and Broadcom for at least 10 gigawatts of compute each, AMD for up to 6 gigawatts, and Oracle for 4.5 gigawatts.
