Building machines that help everyone, everywhere.

Humanizing the Interface Between Man & Machines


Mark Sagar at Cannes Lions Festival 2017

Watch the interview with Mark Sagar as he talks about bringing brands to life!

Capturing the attention of news media

Watch as Newshub talks about Rachel, our Digital Human, who is putting a face on artificial intelligence.

Cate Blanchett, the NDIS and Soul Machines

Today we're making some news about our work with Cate Blanchett and the NDIS...


Emotional Intelligence


Emotional Intelligence (EI) is at the heart of forming engaging interactions with people. By adding EI to our avatars, we give them the ability to connect and engage with users on an emotional level. Our avatars can recognize emotional expression by analyzing facial and vocal expressions in real time.

Our avatars have an unprecedented level of human-like expression and can communicate with the user through both subtle and dramatic emotional responses.

When our avatar recognizes a change in the emotional state of the person it is talking to, it can meet that change with an appropriate emotional response of its own, expressed both verbally and non-verbally. Like AI, EI learns through experience: the more it interacts with you, the more it learns about your personality and emotions in the context of the conversation. The questions being asked, and the responses to the answers given, contribute equally to the direction of the conversation and to the EI corpus.
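As a simplified sketch of how such a recognize-and-respond loop might work in Python (the function names, emotion model, and thresholds below are illustrative placeholders, not our production interface):

from dataclasses import dataclass

@dataclass
class EmotionalState:
    valence: float  # negative (-1.0) to positive (+1.0)
    arousal: float  # calm (0.0) to excited (1.0)

def detect_emotion(video_frame, audio_chunk) -> EmotionalState:
    """Stand-in for real-time analysis of facial and vocal expression."""
    return EmotionalState(valence=0.2, arousal=0.4)  # dummy values

def choose_response(previous: EmotionalState, current: EmotionalState) -> dict:
    """Pick a verbal and non-verbal response to a change in emotional state."""
    shift = current.valence - previous.valence
    if shift < -0.3:   # the user has suddenly become more negative
        return {"speech": "I'm sorry, let me slow down.", "expression": "concern"}
    if shift > 0.3:    # the user has suddenly become more positive
        return {"speech": "Great, let's keep going!", "expression": "smile"}
    return {"speech": None, "expression": "neutral"}

previous = EmotionalState(valence=0.0, arousal=0.3)
current = detect_emotion(video_frame=None, audio_chunk=None)
print(choose_response(previous, current))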

EI is the sum of the interactions between a person and our avatars. We deliver an engaging, human-style user experience that can become the platform for an ongoing relationship and a value-creating experience for our customers.


The 3D Faces


The 3D Faces we create are as close to the real thing as we can make them. The face is the most important instrument of emotional expression and engagement between people. We model the face in detail, from the way the facial muscles create complex expressions all the way through to the eyes that reflect what they see. We are developing full bodies for our avatars with the same physiological control systems. Our avatars are perfect for AR and VR.

Personality. Every one of our avatars comes with its own personality. We create the character behind the face based entirely on the role the avatar will have in the "real" world. If, for example, the avatar will be a virtual customer agent, we will incorporate a range of emotional responses, expressions, and behaviors consistent with the role and the core values of the organization it will be representing.


Neural Network Models


Our Neural Network Models are at the core of our Embodied Cognition Platform. They were originally developed in our BabyX research program to give BabyX the ability to learn in real time, express itself, speak, and recognize words and objects. They process information from sensory inputs and generate behavior through neural system models, enabling our avatars to express themselves in response to the people they interact with.

We have developed biologically inspired models of the brain that are responsible for some of the key capabilities of our avatars and are controllable by virtual neurotransmitters and hormones such as dopamine, serotonin, and oxytocin. Together these influence virtual physiological states which guide learning and behavior and modulate the emotions that our avatars "feel" and express.
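A toy sketch of how virtual neurotransmitter levels could modulate expressed emotion and learning rate (the variables and weightings here are illustrative assumptions, not our published model):

def modulated_behavior(dopamine: float, serotonin: float, oxytocin: float) -> dict:
    """All inputs are normalized virtual levels in the range [0, 1]."""
    reward_drive = dopamine        # appetite for novelty and reward
    mood_stability = serotonin     # damps emotional swings
    social_warmth = oxytocin       # biases toward warm, social expressions

    expressed_positivity = 0.5 * reward_drive + 0.5 * social_warmth
    learning_rate = 0.1 + 0.4 * reward_drive * (1.0 - 0.5 * mood_stability)

    return {
        "expressed_positivity": round(expressed_positivity, 2),
        "learning_rate": round(learning_rate, 3),
    }

# Example: a "happy, socially engaged" virtual physiological state.
print(modulated_behavior(dopamine=0.8, serotonin=0.6, oxytocin=0.9))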

Intelligent sensors, which give our avatars the ability to see via a webcam and hear via the device's microphone, are just the beginning of the digital nervous system that controls many of the physiological inputs that help bring our avatars to life. Another example is the breathing model we have built to ensure that when our avatars speak, they do so in the most human-like way possible.


Visual & Auditory Systems


Visual and Auditory systems provide the data feeds to our identification, emotion detection, and analysis systems. Our Auditory systems are also responsible for providing the captured voice stream to the Natural Language Processing (NLP) engine, which in turn asks questions of the AI platform.
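A schematic sketch of that sensing-to-response pipeline in Python (every function is a stub standing in for a real subsystem; none of these names are our actual API):

def capture_frame():                 return "frame"   # webcam feed
def capture_audio():                 return "audio"   # microphone feed
def detect_identity(frame):          return "known_user"
def detect_emotion(frame, audio):    return "curious"
def speech_to_text(audio):           return "What can you do?"

def query_ai_platform(utterance, emotion, user):
    """Hand the transcribed question and emotional context to the AI platform."""
    return f"Answer for {user}, with tone adjusted for a {emotion} mood."

frame, audio = capture_frame(), capture_audio()
user = detect_identity(frame)
emotion = detect_emotion(frame, audio)
utterance = speech_to_text(audio)
print(query_ai_platform(utterance, emotion, user))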

Voice and speech are created specifically for each avatar, depending on the language and/or accent required. For our very first avatar, Nadia, we captured the voice of Australian Oscar-winning actress Cate Blanchett. To ensure the most life-like facial expressions while talking, we train the facial muscles and lip movement to match the voice. We have even given people who are deaf the ability to lip-read.
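As a simplified illustration of matching mouth shapes to the sounds in a voice track (the phoneme-to-viseme table below is a rough example, not the mapping we actually use):

# Map individual speech sounds (phonemes) to mouth shapes (visemes).
PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_bite",    "v": "lip_bite",
    "a": "open_wide",   "o": "rounded",     "u": "rounded",
    "s": "teeth_together",
}

def lip_track(phonemes):
    """Turn a phoneme sequence into the mouth shapes the face is driven toward."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# A rough rendering of "hello" as phonemes -> mouth-shape sequence.
print(lip_track(["h", "e", "l", "o", "u"]))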