OpenAI is providing the US AI Safety Institute at the National Institute of Standards and Technology (NIST) early access to its next frontier AI model. The collaboration aims to improve AI evaluation methods, says OpenAI CEO Sam Altman. He also reaffirmed the company's pledge to dedicate at least 20% of its computing resources to safety measures. This comes after former OpenAI safety researcher Jan Leike claimed the company had not kept this commitment. Altman also noted that in May, OpenAI removed clauses that allowed the company to revoke previously granted stock options, as well as provisions requiring employees to remain silent. The change aims to create an environment where employees can "raise concerns and feel comfortable doing so," Altman writes.