Apple is working on a large language model (LLM) version of Siri that will let users control specific app functions with their voice.
The new Siri will be the main focus of Apple's AI push, set to be announced at the Worldwide Developers Conference on June 10, according to Bloomberg sources.
For example, the upgraded Siri will be able to open specific documents, move notes, or summarize articles. At first, these features will only work with Apple's own apps, but the goal is eventually to support hundreds of different commands.
Initially, Siri will only be able to perform one command at a time; support for stringing multiple commands together will come later. The new Siri won't arrive until 2025, however, as an update to the upcoming iOS 18.
Software sells hardware
Apple is developing more generative AI features, like voice note transcription, website summaries, auto-replies to messages, advanced photo editing, and AI emojis.
Apple's software chief Craig Federighi has told his teams to build as many new AI features as possible for this year's iOS 18 updates, Bloomberg reports. But many of the new AI features will only work on newer devices like the iPhone 15 Pro; Macs and iPads will need at least an M1 chip.
It seems Apple is sticking to the age-old "software sells hardware" mantra, hoping that AI will revitalize its recently slowing hardware sales.
Apple has also reportedly struck a deal with OpenAI to use its LLM technology and possibly add ChatGPT as a chatbot on iOS. The partnership is expected to be announced at WWDC. A deal with Google for Gemini remains possible but has not been finalized.
Amazon, perhaps spooked by OpenAI's "Her" demo and the massive AI plans of Apple and Google, is reportedly working on a new Alexa push with a major update. The company already announced an LLM-based Alexa last year, but only as an experiment.