Building machines that help everyone, everywhere.

Humanizing Computing


Who we are

Soul Machines is a ground-breaking, high-tech company of engineers, artists, and innovative thinkers, led by Academy Award winner Dr Mark Sagar.

We bring technology to life by creating highly-detailed Digital Humans with personality and character.

Our vision is to humanize computing to better humanity.


What we do

Digital Humans will transform modern life for the better by revolutionizing the way computers interact with people.

Our engaging Digital Humans are designed around a detailed physiological model of the human face. A personality is added to deliver a character that matches the role they will be employed for. We then bring them to life with our Brain Language. This uses Neural Networks that combine biologically inspired models of the human brain and key sensory networks wrapped up in a virtual central nervous system.


OUR LATEST NEWS AND BLOGS

Mark Sagar at Cannes Lions | Festival of Creativity

Watch the interview with Mark Sagar as he talks about bringing brands to life!


Emotional Intelligence


Emotional Intelligence is at the heart of forming engaging interactions with people. Adding EI to our Digital Humans gives them the ability to connect and engage with users on an emotional level. Our Digital Humans can recognize emotion by analyzing facial and vocal expressions in real time.
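
The sketch below is a minimal illustration of that kind of real-time, multimodal reading of emotion: it fuses a set of facial expression scores with a set of vocal expression scores into a single estimate. Every label, function name, and weight here is an illustrative assumption, not our production pipeline.

```python
# Illustrative sketch only: fuses hypothetical facial and vocal emotion
# scores into one estimate. Not the actual Soul Machines pipeline.
from dataclasses import dataclass

EMOTIONS = ("joy", "sadness", "anger", "surprise", "neutral")

@dataclass
class EmotionEstimate:
    scores: dict  # emotion label -> probability, normalized to sum to 1.0

def fuse_emotions(face_scores: dict, voice_scores: dict,
                  face_weight: float = 0.7) -> EmotionEstimate:
    """Weighted fusion of facial and vocal emotion probabilities.

    face_scores / voice_scores map each emotion label to a probability;
    the 0.7 facial weight is an arbitrary assumption for illustration.
    """
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = (face_weight * face_scores.get(emotion, 0.0)
                          + (1.0 - face_weight) * voice_scores.get(emotion, 0.0))
    total = sum(fused.values()) or 1.0
    return EmotionEstimate({e: s / total for e, s in fused.items()})

# Example: a smiling face with a flat voice still reads mostly as joy.
face = {"joy": 0.7, "neutral": 0.3}
voice = {"neutral": 0.8, "joy": 0.2}
print(max(fuse_emotions(face, voice).scores.items(), key=lambda kv: kv[1]))
```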

Our Digital Humans have an unprecedented level of human-like expression and can communicate with the user through both subtle and dramatic emotional responses.

If a change in the emotional state of the person our Digital Human is talking to is recognized, it can be met with an appropriate emotional response, expressed both verbally and non-verbally. EI can learn through experience: like AI, the more it interacts with you, the more it learns about your personality and emotions in the context of the conversation. Specifically, the questions being asked and the responses to the answers provided make an equal contribution to the direction of the conversation and to the EI corpus.

EI is the sum of the interaction between a person and our Digital Humans. We deliver an engaging, human-style user experience that can become the platform for an ongoing relationship and a value-creating experience for our customers.


The 3D Faces

 

The 3D Faces we create are as close to the real thing as we can make them. The face is the most important instrument of emotional expression and engagement between people. We model it in detail, from the way the facial muscles create complex expressions all the way through to the eyes that reflect what they see. We are developing full bodies for our Digital Humans with the same physiological control systems. Our Digital Humans are perfect for AR and VR.

Personality. Every one of our Digital Humans comes with its own personality. We create the character behind the face based entirely on the role the Digital Human will have in the "real" world. If, for example, the Digital Human will be a virtual customer agent, we will incorporate a range of emotional responses, expressions, and behaviors that are consistent with the role and the core values of the organization it will be representing.


Neural Network Models


Our Neural Network Models are at the core of our Embodied Cognition Platform. They were originally developed in our BabyX research program to give BabyX the ability to learn in real time, express itself, speak, and recognize words and objects. Processing information from sensory inputs to generate behavior, these neural system models enable our Digital Humans to express themselves based on the people they interact with.

We have developed biologically inspired models of the brain that are responsible for some of the key capabilities of our Digital Humans. These are controllable by virtual neurotransmitters and hormones such as dopamine, serotonin, and oxytocin. Together they influence virtual physiological states which guide learning and behavior, modulating the emotions that our Digital Humans "feel" and express.
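
A minimal sketch of that neurotransmitter idea, assuming a simple valence/arousal emotional state and arbitrary constants, is shown below; none of the names or numbers come from the actual Brain Language models.

```python
# Illustrative sketch: virtual neurotransmitter levels nudging a simple
# valence/arousal emotional state. Names and constants are assumptions,
# not the actual Soul Machines brain models.
from dataclasses import dataclass

@dataclass
class NeurochemicalState:
    dopamine: float = 0.5    # reward / motivation
    serotonin: float = 0.5   # mood stability
    oxytocin: float = 0.5    # social bonding / trust

@dataclass
class EmotionalState:
    valence: float = 0.0     # negative (-1) to positive (+1)
    arousal: float = 0.0     # calm (0) to excited (1)

def step(chem: NeurochemicalState, emo: EmotionalState,
         stimulus_reward: float, decay: float = 0.95) -> None:
    """One update: a rewarding stimulus raises dopamine, which in turn
    pushes valence and arousal; all levels decay over time."""
    chem.dopamine = min(1.0, decay * chem.dopamine + 0.3 * stimulus_reward)
    emo.valence = max(-1.0, min(1.0, decay * emo.valence
                                + 0.4 * (chem.dopamine - 0.5)
                                + 0.2 * (chem.serotonin - 0.5)))
    emo.arousal = max(0.0, min(1.0, decay * emo.arousal + 0.3 * chem.dopamine))

chem, emo = NeurochemicalState(), EmotionalState()
for _ in range(5):  # a short run of positive interactions
    step(chem, emo, stimulus_reward=1.0)
print(round(emo.valence, 2), round(emo.arousal, 2))
```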

Intelligent Sensors. Giving our Digital Humans the ability to see via a webcam and hear via the microphone in the device is just the beginning of the digital nervous system that controls many of the physiological inputs that help bring our Digital Humans to life. Another example is the breathing model we have built to ensure that when our Digital Humans speak, they do so in the most human-like way possible.
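
As a rough illustration of the breathing idea only, the toy model below drives a chest-rise signal with a slow, deep cycle while idle and a shorter, shallower cycle while speaking; the periods and depths are illustrative assumptions, not our actual model.

```python
# Illustrative sketch of a breathing signal: a slow, deep cycle while idle
# and a shorter, shallower cycle while speaking so breaths fit between
# phrases. The constants are arbitrary assumptions.
import math

def breathing_amplitude(t: float, speaking: bool,
                        idle_period: float = 4.0,
                        speech_period: float = 1.5) -> float:
    """Chest-rise amplitude in [0, 1] at time t (seconds)."""
    period = speech_period if speaking else idle_period
    depth = 0.4 if speaking else 1.0
    return depth * 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))

# Sample the first few seconds of an idle-then-speaking sequence.
for t in range(8):
    print(t, round(breathing_amplitude(t, speaking=t >= 4), 2))
```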


Visual & Auditory Systems

 

Visual and Auditory systems provide the data feeds for our identification, emotion detection, and analysis systems. Our Auditory systems are also responsible for providing the captured voice stream to the Natural Language Processing (NLP) engine, which in turn asks questions of the AI platform.
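
A stripped-down view of that flow, with every class and method name a placeholder rather than a real API, might look like this:

```python
# Illustrative sketch of the audio -> NLP -> AI-platform flow described
# above. All class and method names are placeholders, not a real API.

class SpeechToText:
    def transcribe(self, audio_chunk: bytes) -> str:
        # Placeholder: a real system would stream audio to an ASR engine.
        return audio_chunk.decode("utf-8", errors="ignore")

class NLPEngine:
    def parse(self, text: str) -> str:
        # Placeholder intent extraction.
        return "ask_question" if text.endswith("?") else "statement"

class AIPlatform:
    def answer(self, intent: str, text: str) -> str:
        # Placeholder: the real platform would consult knowledge / dialogue.
        return f"(response to {intent}: {text!r})"

def handle_audio(audio_chunk: bytes) -> str:
    """End-to-end pass: captured voice stream -> transcription -> NLP -> AI answer."""
    text = SpeechToText().transcribe(audio_chunk)
    intent = NLPEngine().parse(text)
    return AIPlatform().answer(intent, text)

print(handle_audio(b"What can you do?"))
```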

Voice and speech are created specifically for each Digital Human, depending on the language and/or accent required. For our very first Digital Human, Nadia, we captured the voice of Australian Oscar-winning actress Cate Blanchett. To ensure the most life-like facial expressions while talking, we train the muscles and lip movement to match the voice. We have even provided people with deafness the ability to lip-read.