Meta launches AI app with voice features, personalisation & smart glasses integration

Meta launched the first version of its standalone Meta AI app, designed to offer users a more personal and voice-driven interaction with its artificial intelligence assistant. Built on its latest Llama 4 model, the app is now available in selected countries and serves as a hub to access the company’s AI capabilities across mobile, desktop and wearables.
The app includes a Discover feed, where users can explore and share how others are using AI, as well as features that allow for image generation and editing through both voice and text. The assistant is designed to become more useful over time by remembering user preferences and drawing on context from prior conversations. When connected to a user’s Facebook and Instagram accounts via Accounts Center, the AI can provide more tailored responses based on shared information such as liked content and profile details. Personalised responses are currently available in the United States and Canada.
Voice interaction is a central feature of the new app. In addition to standard voice commands, the platform is introducing a demo of full-duplex speech technology, which lets the assistant hold real-time, two-way conversations by generating speech directly rather than reading out written responses. The voice demo is available to users in the United States, Canada, Australia and New Zealand. While the feature does not yet access the internet or real-time data, Meta said the demo offers a preview of future experiences and will be refined based on user feedback.
The app also integrates with Ray-Ban Meta smart glasses, replacing the previous Meta View companion app. Users of Meta View will see their settings, media and paired devices automatically transferred to the updated Meta AI interface, where they can continue managing their wearable technology. Conversations that begin on the smart glasses can be continued on the app or web interface, though the reverse is not currently supported.
The platform’s desktop experience is also being updated to align with the app, including support for voice input and the Discover feed. The web platform has been optimised for larger screens and includes expanded options for image generation, such as presets and style controls. Meta is testing a document editor in select countries that allows users to generate multimedia documents, export them as PDFs and import files for analysis by the AI assistant.
Meta said the app was developed with user control in mind. Voice features can be toggled on or off, and content is only shared in the Discover feed if a user opts to post it. A “Ready to talk” setting also enables hands-free use for multitasking.
The new app represents Meta’s latest effort to embed artificial intelligence across its product ecosystem, including Facebook, Instagram, WhatsApp and Messenger. With the addition of the Meta AI app, the company aims to create a consistent, always-available assistant experience across platforms and devices.
The Meta AI app is now available for download in supported regions.