Deliver the goodness of AI with Digital People on the Human OS Platform
Soul Machines’ Human OS™ Platform features a patented Digital Brain that drives our Autonomous Animation, making it possible to deliver the goodness of human and machine collaboration. Soul Machines takes the best qualities of human conversation—engaging, warm, emotional connection—and combines them with revolutionary technology to create the most lifelike and dynamically interactive experiences.
Soul Machines Digital People™, powered by Human OS, create safe, engaging, scalable, and powerful face-to-face interactions and brand experiences.
With Human OS 2.0, our Autonomous Animation makes Digital People more human-like and creates a uniquely engaging, dynamic, and lifelike experience for connecting with customers:
Intuitive Behaviors and Enhanced Expression allow Digital People to autonomously add emotionally appropriate eye contact, gestures, and behaviors, and to express greater emotional depth.
If you’re speaking to a Digital Person about an upcoming vacation, and you seem happy and excited about it, they will mirror your happiness in their own emotions and expressions.
A new cognitive user experience enables Digital People to seamlessly interact with the 3D world around them through immersive cinematic cuts and dynamic content interactions.
Ask a Digital Person to compare products, and they can autonomously gesture toward the relevant product images during the presentation.
Autonomous Body Animation allows Digital People to enhance their verbal communication in real time with appropriate embodied gesturing, beginning with the hands and arms and eventually extending to the whole body.
Write your Digital Person’s conversation and you’ll instantly see them bring the conversation to life with natural body animations that build a deeper connection.
In addition, all Human OS 2.0 Digital People are created with the Digital DNA Blender, a tool that generates unique Digital People in real time, with customizable hair, skin, eye color, face shape, and more. We plan to commercialize this product in early 2022.