ElevenLabs is expanding its portfolio with 11ai, a voice-activated AI assistant designed to take an active part in digital workflows. The alpha version is meant to showcase what’s possible with voice-first technology and API integrations.
According to ElevenLabs, most voice assistants struggle to move beyond simple conversation and rarely take productive actions. 11ai aims to change that: users give spoken commands, and the system carries out tasks on their behalf.
Video: ElevenLabs
Voice-first productivity with direct tool integrations
Through a web interface at 11.ai/app/eleven, users can issue commands like "Plan my day and add my most important tasks to Linear," or "Use Perplexity to research our prospect meeting today and summarize their recent funding."
A promotional video shows 11ai responding in every room of a home during a morning routine, though ElevenLabs hasn’t specified what hardware powers the responses.
The system can handle sequential actions and maintain context across different tools. When researching a customer, for example, 11ai searches connected systems, pulls the relevant data, and feeds the results into CRM updates or team messages.
Built-in integrations and custom servers
11ai’s functionality is built on the Model Context Protocol (MCP), an open standard for connecting AI assistants to external tools and data sources. ElevenLabs’ conversational AI platform supports MCP natively, connecting to services like Salesforce, HubSpot, Gmail, and Zapier.
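To make the protocol concrete, here is a minimal sketch of an MCP client chaining two tool calls, roughly mirroring the multi-step research flow described above. It uses the official TypeScript SDK (@modelcontextprotocol/sdk); the server command and both tool names are hypothetical stand-ins, not actual 11ai integrations.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn a hypothetical MCP server as a child process and connect over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["research-and-crm-server.js"], // hypothetical server script
});

const client = new Client({ name: "voice-assistant", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// Step 1: research a prospect (hypothetical tool name).
const research = (await client.callTool({
  name: "research_company",
  arguments: { company: "Acme Corp", topic: "recent funding" },
})) as { content: Array<{ type: string; text?: string }> };

// Collect the text parts of the tool result.
const summary = research.content
  .filter((c) => c.type === "text")
  .map((c) => c.text)
  .join("\n");

// Step 2: feed the result into a CRM update (hypothetical tool name).
await client.callTool({
  name: "update_crm_note",
  arguments: { company: "Acme Corp", note: summary },
});

await client.close();
```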
At launch, ElevenLabs offers prebuilt integrations for Perplexity, Linear, Slack, Hacker News, and Google Calendar, with more planned to roll out weekly.
The underlying ElevenLabs conversational AI platform is designed for low latency in real-time conversations and supports multimodal voice and text interaction.
The platform also includes retrieval-augmented generation (RAG) for accessing external knowledge bases and can automatically detect languages for multilingual conversations. Users can choose from more than 5,000 voices or create their own voice clones for a personalized experience.

11ai also supports custom MCP servers: teams can connect internal tools or specialized software through servers they run themselves, extending the assistant’s functionality to fit their workflows.
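What such a custom server involves can be sketched in a few lines with the official TypeScript SDK. The tool name and ticketing logic below are hypothetical examples of an internal tool, not part of 11ai:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "internal-tools", version: "0.1.0" });

// Hypothetical internal tool: file a ticket in a company's own system.
server.tool(
  "file_support_ticket",
  { customer: z.string(), summary: z.string() },
  async ({ customer, summary }) => {
    // A real server would call the internal ticketing API here.
    const ticketId = `TICK-${Date.now()}`;
    return {
      content: [{ type: "text", text: `Filed ${ticketId} for ${customer}: ${summary}` }],
    };
  }
);

// Serve over stdio so an MCP-capable assistant can spawn and call it.
await server.connect(new StdioServerTransport());
```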
According to ElevenLabs, 11ai uses a permissions model, letting users specify what actions the assistant can take for each application.
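ElevenLabs hasn’t published the configuration format, but conceptually, per-app permissions might look something like this hypothetical sketch:

```typescript
// Hypothetical per-integration permissions; illustrative only,
// not 11ai's actual configuration format.
type Action = "read" | "write";

const permissions: Record<string, Action[]> = {
  "google-calendar": ["read", "write"], // view and create events
  perplexity: ["read"],                 // search only
  slack: ["write"],                     // post messages, but don't read channels
};

function isAllowed(app: string, action: Action): boolean {
  return permissions[app]?.includes(action) ?? false;
}

// Checked before the assistant acts, e.g. "post a summary to Slack":
if (!isAllowed("slack", "write")) {
  throw new Error("User has not granted write access to Slack");
}
```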
11ai is currently available as a free, experimental alpha. ElevenLabs is using the alpha phase to collect feedback on integrations, desired MCP servers, voice interaction versus traditional interfaces, and new features for daily routines.
Actionable agents are becoming the norm
By entering the market for actionable voice assistants, ElevenLabs is competing directly with a growing number of similar products. Perplexity recently launched a mobile assistant that can handle tasks like restaurant reservations. Amazon has introduced Alexa+, an upgraded, agent-like Alexa focused on voice-first interactions.
Claude from Anthropic is also compatible with MCP, though it’s more focused on B2B use cases. Claude only recently gained a voice mode, which relies on ElevenLabs’ technology. Google’s Gemini supports voice interaction, but its ability to take action and integrate with tools is still limited, even within Google’s own ecosystem.