OpenAI's business practices continue to draw criticism that could have legal consequences. The SEC is investigating whether OpenAI CEO Sam Altman misled investors, according to the Wall Street Journal. The investigation follows allegations by former OpenAI board members that Altman was not "consistently candid" in his communications, which led to his brief ouster in November. Federal prosecutors in Manhattan are also investigating the case and are expected to release their report soon. In addition to the New York Times, three more media companies, Raw Story, The Intercept, and AlterNet, are suing OpenAI for alleged copyright infringement. The US and EU are also examining OpenAI's relationship with Microsoft to determine whether Microsoft's recent investment amounts to a takeover.

ServiceNow, Hugging Face, and Nvidia have released StarCoder2, a family of open-access code generation LLMs. Developed in collaboration with the BigCode community as the successor to StarCoder, which was released in May 2023, StarCoder2 is trained on 619 programming languages and comes in three sizes: a 3-billion-parameter model from ServiceNow, a 7-billion-parameter model from Hugging Face, and a 15-billion-parameter model from Nvidia.

StarCoder2 has been trained on The Stack v2, a new code dataset that is also publicly available. New training methods are designed to help the models better understand low-resource programming languages, mathematics, and source code discussions. Companies can fine-tune the models for their own tasks.
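
For orientation, here is a minimal sketch of running a StarCoder2 checkpoint with the Hugging Face transformers library; the model ID bigcode/starcoder2-3b, the prompt, and the generation settings are illustrative assumptions, not official instructions.

```python
# Minimal sketch: load a StarCoder2 checkpoint and complete a code prompt.
# The checkpoint name and prompt are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"  # assumed Hub ID for the 3B model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the larger checkpoints by swapping in the corresponding model ID, and the loaded model can serve as the starting point for fine-tuning on company data.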

Qualcomm has launched its AI Hub Models platform, which provides pre-optimized, ready-to-use AI models for image, audio, and speech applications on Snapdragon devices and across the Android ecosystem. Models such as Whisper, ControlNet, Stable Diffusion, and Baichuan 7B are optimized for local AI performance, lower memory consumption, and better power efficiency, and are available for multiple form factors and runtimes. They can be deployed on-device using TensorFlow Lite or the Qualcomm AI Engine Direct SDK, and run on cloud-hosted devices through Qualcomm AI Hub, where the models are available for download. The company is also promoting collaboration and learning through its AI Hub Slack community.
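
As a rough illustration of the TensorFlow Lite deployment path mentioned above, the sketch below loads an exported .tflite model and runs a single inference; the file name and the dummy input are hypothetical placeholders, not part of Qualcomm's tooling.

```python
# Minimal sketch: run an exported .tflite model with the TensorFlow Lite
# interpreter. The model file name is a placeholder for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="whisper_encoder.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```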
