Microsoft Copilot gets a visual avatar


28 July 2025

Microsoft has unveiled an innovative update to its AI assistant, Copilot, by giving it a visual presence that can express emotions and react during conversations. The feature, named Copilot Appearance, was first hinted at during Microsoft’s 50th anniversary celebration earlier this year. It aims to transform interactions with the chatbot by adding a new layer of engagement through non-verbal communication, enhancing the traditional voice-based exchanges users are familiar with.

Until now, Copilot’s conversational mode operated without any visual representation beyond a simple abstract animation. Users could interact with the AI through voice or text, but there were no facial expressions or gestures to complement the dialogue. Copilot Appearance changes that by introducing an avatar that can display real-time emotional responses, nod to affirm, and shift its facial expressions dynamically as the conversation unfolds. This marks a significant step toward making AI assistants feel more personable and interactive.

The design of the new Copilot avatar, however, is far from anthropomorphic. Rather than resembling a full-fledged digital human or an iconic mascot like Microsoft’s old Clippy, the avatar is currently an abstract, morphing shape with a face that visually conveys feelings and reactions. This subtle approach contrasts sharply with some other AI companion products on the market, such as xAI’s Grok AI, which offers more human-like, highly detailed avatars at a premium price. Microsoft’s take is intentionally restrained, focusing on providing expressive cues without overwhelming the user or veering into uncanny valley territory.

On Microsoft’s official Copilot Labs page, the company describes this update as an “experiment” aimed at integrating a richer, more engaging user experience. By bringing visual elements to voice interactions, Microsoft hopes users will find conversations with Copilot more natural and immersive. The addition of non-verbal signals is expected to help convey tone and intent more clearly, something that purely text- or voice-based chatbots have traditionally struggled with.


Currently, Copilot Appearance is being rolled out selectively to a limited group of users located in the United States, United Kingdom, and Canada. This cautious rollout reflects Microsoft’s desire to carefully gather user feedback and refine the feature before making it widely available. Those lucky enough to gain access can enable Copilot Appearance by activating Voice Mode and toggling the new visual feature in the settings menu. This allows users to see Copilot react in real time as they talk, brainstorm ideas, ask for advice, or simply experiment with the AI’s conversational capabilities.

Microsoft emphasizes that this initial release is still experimental. The company is actively working to improve the avatar’s responsiveness and expand the feature’s capabilities based on user input. This iterative development process is part of Microsoft’s broader strategy to humanize AI tools and create assistants that feel less like software and more like collaborative partners.

The introduction of Copilot Appearance underscores a growing trend in AI development: moving beyond purely functional interfaces toward richer, more emotionally intelligent interactions. As artificial intelligence becomes an increasingly integrated part of daily life, tech companies are exploring ways to make these digital helpers feel more relatable and intuitive. Visual avatars that communicate emotions are seen as one promising avenue to bridge the gap between humans and machines.

While Copilot’s visual upgrade is modest compared to some highly stylized digital avatars on the market, its real-time emotional expressions represent an important experiment in blending conversational AI with visual storytelling. This experiment might pave the way for more immersive, expressive digital assistants in the near future—tools that not only understand language but also convey empathy and responsiveness through subtle visual cues.
