ChatGPT can now access company data through a SharePoint connector. The beta feature is available for ChatGPT Plus, Pro, and Team users. With the new integration, ChatGPT can analyze and summarize content from multiple SharePoint sites, complete with source references. OpenAI says use cases include cross-department summaries of strategy documents or building customer profiles by combining internal data with information from the web.

According to OpenAI, ChatGPT only accesses content that users have permission to view, and by default the data is not used for training. Support for Enterprise customers is coming soon. Users can set up the connector in ChatGPT settings under "Connected Apps." Last week, OpenAI also rolled out Deep Research for GitHub.

U.S. President Donald Trump has dismissed Shira Perlmutter, head of the U.S. Copyright Office, shortly after her office released a report opposing broad "fair use" exemptions for AI training purposes. The report's stance conflicts with the interests of Trump ally Elon Musk and much of the AI industry. Perlmutter had served since 2020. Rep. Joseph Morelle, the top Democrat on the House Committee on Administration, called the dismissal an "unprecedented power grab with no legal basis" and said the timing was "certainly no coincidence."

This action once again tramples on Congress’s Article One authority and throws a trillion-dollar industry into chaos.

Rep. Joe Morelle

Reasoning tasks sharply raise AI costs, according to a new analysis by Artificial Analysis. Google's Gemini 2.5 Flash costs about 150 times more to run than Gemini 2.0 Flash, because it uses 17 times more tokens and charges $3.50 per million output tokens with reasoning, compared to $0.40 for the earlier model. That makes Gemini 2.5 Flash the model with the highest token consumption for reasoning in the comparison. OpenAI's o4-mini costs more per token but uses fewer tokens overall, which made it cheaper to run in the benchmark.

Chart: "Cost to Run Artificial Analysis Intelligence Index," showing the total U.S. dollar cost per model to complete all tests in the index, split into input (blue), reasoning (purple), and output (green) tokens. Most expensive: GPT-3 ($1,951), Claude 3 Opus ($1,485), Gemini 2.5 Pro ($844); midfield: Gemini 2.5 Flash with reasoning ($445), o4-mini (high) ($323); cheapest: Gemini 2.0 Flash ($3), Llama 3 8B ($2). The 150x gap between the two Flash models is highlighted. | Source: Artificial Analysis
Gemini 2.5 Flash with reasoning enabled costs around 150 times more to run than Gemini 2.0 Flash, due to higher token use and pricing. | Image: Artificial Analysis
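The 150x figure follows directly from the numbers reported above: roughly 17 times the tokens at roughly nine times the output price. A quick back-of-the-envelope check in Python (prices and the token multiplier are taken from the reported analysis, not measured here):

```python
# Rough check of the ~150x cost gap, using the figures reported above
# (relative token use and output-token pricing).
flash_20_price = 0.40   # USD per million output tokens, Gemini 2.0 Flash
flash_25_price = 3.50   # USD per million output tokens, Gemini 2.5 Flash with reasoning
token_multiplier = 17   # Gemini 2.5 Flash reportedly uses ~17x more tokens

cost_ratio = token_multiplier * (flash_25_price / flash_20_price)
print(f"~{cost_ratio:.0f}x")  # prints ~149x, in line with the 150x cited
```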

Google introduces "implicit caching" in Gemini 2.5, aiming to cut developer costs by as much as 75 percent. The new feature automatically detects and stores recurring content, so repeated prompts are only processed once. According to Google, this can lead to significant savings compared to the old explicit caching method, where users had to set up their own cache. To maximize the benefits, Google recommends putting the stable part of a prompt—like system instructions—at the start, and adding user-specific input, such as questions, afterwards. Implicit caching kicks in for Gemini 2.5 Flash starting at 1,024 tokens, and for Pro versions from 2,048 tokens onwards. More details and best practices are available in the Gemini API documentation.
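
As an illustration of the recommended prompt ordering, here is a minimal sketch assuming the google-genai Python SDK; the model ID, the placeholder prefix, and the usage-metadata field name are assumptions and may differ from the actual setup:

```python
# Minimal sketch: keep the stable prompt prefix identical across requests so
# Gemini's implicit caching can reuse it. Assumes the google-genai SDK
# (`pip install google-genai`) and an API key set in the environment.
from google import genai

client = genai.Client()

# Stable part first: in practice this would be long system instructions or
# shared context above the 1,024-token threshold for Gemini 2.5 Flash
# (2,048 tokens for the Pro version). Placeholder text here.
STABLE_PREFIX = "System instructions and shared reference material ..."

def ask(question: str) -> str:
    response = client.models.generate_content(
        model="gemini-2.5-flash",              # illustrative model ID
        contents=[STABLE_PREFIX, question],    # user-specific input goes last
    )
    # usage_metadata reports how many prompt tokens were served from the cache;
    # treat the exact field name as an assumption, it may vary by SDK version.
    print("cached tokens:", response.usage_metadata.cached_content_token_count)
    return response.text

print(ask("Summarize the refund policy in two sentences."))
```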
