Short

ChatGPT can now access company data through a SharePoint connector. The beta feature is available for ChatGPT Plus, Pro, and Team users. With the new integration, ChatGPT can analyze and summarize content from multiple SharePoint sites, complete with source references. OpenAI says use cases include cross-department summaries of strategy documents or building customer profiles by combining internal data with information from the web.

According to OpenAI, ChatGPT only accesses content that users have permission to view, and by default, the data is not used for training. Support for Enterprise customers is coming soon. Users can configure the connector in ChatGPT settings under "Connected Apps." Last week, OpenAI also rolled out Deep Research for GitHub.

Short

For the first time, insurers in London’s Lloyd’s market are offering dedicated policies that cover damages caused by errors from AI chatbots. The product was developed by Armilla, a Y Combinator-backed startup, and is designed to protect companies if they get sued over faulty AI performance—such as when customers are harmed by incorrect answers or so-called "hallucinations" from a chatbot. The coverage includes legal fees and compensation payments. A recent example comes from Air Canada, where a chatbot promised a nonexistent discount that the airline later had to honor. According to Armilla, the new policy would have applied in such a case if the bot’s performance fell significantly below expectations. Karthik Ramakrishnan, Armilla’s CEO, says the goal is to make it easier for businesses to adopt AI. Traditional tech insurance often offers only limited coverage for AI-related risks, but Armilla’s policy specifically insures against performance drops in AI models.

Short

Reasoning tasks sharply raise AI costs, according to a new analysis by Artificial Analysis. Google's Gemini 2.5 Flash costs 150 times more to run with reasoning enabled than Gemini 2.0 Flash: it uses 17 times more tokens and charges $3.50 per million output tokens in reasoning mode, compared to $0.40 for the earlier model. That makes Gemini 2.5 Flash the most token-hungry model in the index. OpenAI's o4-mini costs more per token but used fewer tokens overall, which made it cheaper to run in the benchmark.
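The reported 150x figure can be checked with back-of-the-envelope arithmetic. The sketch below uses only the multipliers quoted above (the ~17x token increase and the two per-token prices); it is an illustration of how the factors compound, not Artificial Analysis' methodology.

```python
# Rough check of the ~150x cost gap using the figures quoted in the article.
price_flash_20 = 0.40   # USD per 1M output tokens, Gemini 2.0 Flash
price_flash_25 = 3.50   # USD per 1M output tokens, Gemini 2.5 Flash (reasoning)
token_multiplier = 17   # reasoning output is roughly 17x longer

# Total cost scales with both the per-token price and the token count,
# so the two ratios multiply.
cost_multiplier = (price_flash_25 / price_flash_20) * token_multiplier
print(round(cost_multiplier))  # 149, in line with the reported ~150x
```

The price ratio alone (8.75x) understates the gap; it is the combination with the longer reasoning traces that drives the total cost up by two orders of magnitude.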

[Bar chart: "Cost to Run Artificial Analysis Intelligence Index" — total U.S. dollar cost to complete all tests in the index per model, with bars split into input (blue), reasoning (purple), and output (green) tokens. Most expensive: GPT-3 ($1,951), Claude 3 Opus ($1,485), Gemini 2.5 Pro ($844); mid-range: Gemini 2.5 Flash with reasoning ($445), o4-mini high ($323); cheapest: Gemini 2.0 Flash ($3), Llama 3 8B ($2). A purple arrow labeled "150x" marks the gap between Gemini 2.0 Flash and Gemini 2.5 Flash with reasoning.]
Google's Gemini 2.5 Flash costs 150 times more to run with reasoning enabled than Gemini 2.0 Flash, due to higher token use and pricing. | Image: Artificial Analysis