Our Neural Network Models are at the core of our Embodied Cognition Platform. They were originally developed in our BabyX research program to give BabyX the ability to learn in real time, express itself, speak, and recognize words and objects. By processing sensory inputs to generate behavior, these neural system models enable our Digital Humans to respond expressively to the people they interact with.

We have developed biologically inspired models of the brain that are responsible for some of the key capabilities of our Digital Humans. These models are regulated by virtual neurotransmitters and hormones such as dopamine, serotonin, and oxytocin. Together they influence virtual physiological states that guide learning and behavior, modulating the emotions that our Digital Humans "feel" and express.
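
As a simplified illustration, a neuromodulatory system of this kind can be sketched in a few lines of Python. The variable names, weights, and mappings below are purely illustrative assumptions about how neuromodulator levels might shape an affective state and gate learning; they do not reflect our production models.

```python
from dataclasses import dataclass

@dataclass
class Neuromodulators:
    """Hypothetical virtual neurotransmitter/hormone levels, each in [0, 1]."""
    dopamine: float = 0.5   # assumed to act as a reward signal that gates learning
    serotonin: float = 0.5  # assumed to raise baseline mood (valence)
    oxytocin: float = 0.5   # assumed to increase social warmth and affiliation

def affective_state(nm: Neuromodulators) -> dict:
    """Map neuromodulator levels to a simple valence/arousal emotion state.

    The weights here are illustrative placeholders, not a published model.
    """
    valence = 0.6 * nm.serotonin + 0.4 * nm.oxytocin - 0.3
    arousal = 0.8 * nm.dopamine - 0.2
    return {"valence": max(-1.0, min(1.0, valence)),
            "arousal": max(-1.0, min(1.0, arousal))}

def modulated_learning_rate(base_lr: float, nm: Neuromodulators) -> float:
    """Scale a base learning rate by dopamine, mimicking reward-gated learning."""
    return base_lr * (0.5 + nm.dopamine)

if __name__ == "__main__":
    nm = Neuromodulators(dopamine=0.9, serotonin=0.6, oxytocin=0.7)
    print(affective_state(nm))                # a positive, aroused state
    print(modulated_learning_rate(0.01, nm))  # learning rate boosted by dopamine
```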

Intelligent Sensors give our Digital Humans the ability to see via a webcam and hear via a device's microphone. These sensors are just the beginning of a digital nervous system that supplies many of the physiological inputs that bring our Digital Humans to life. Another example is the breathing model we have built to ensure that when our Digital Humans speak, they do so in the most human-like way possible.
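
To give a feel for the kind of dynamics a breathing model captures, the sketch below generates a toy lung-volume signal that switches from slow resting breaths to a quick-inhale, long-exhale pattern during speech. The timing constants and waveform shapes are illustrative assumptions only, not our production breathing model.

```python
import math

def breathing_signal(t: float, rate_hz: float = 0.25, speaking: bool = False) -> float:
    """Toy breathing waveform: lung volume in [0, 1] at time t (seconds).

    Resting breathing is modeled as a slow sine wave; during speech the cycle
    is assumed to shift to a quick inhale followed by a long, controlled
    exhale that supports phonation.
    """
    if not speaking:
        return 0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t)
    phase = (t * rate_hz * 2) % 1.0    # assume a faster cycle while speaking
    if phase < 0.2:                    # quick inhale over 20% of the cycle
        return phase / 0.2
    return 1.0 - (phase - 0.2) / 0.8   # slow exhale while phonating

if __name__ == "__main__":
    for t in [0.0, 1.0, 2.0, 3.0]:
        print(f"t={t:.0f}s rest={breathing_signal(t):.2f} "
              f"speech={breathing_signal(t, speaking=True):.2f}")
```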