Our neural network models are at the core of our embodied cognition platform. They were originally developed in our BabyX research program to give BabyX the ability to learn in real time, express itself, speak, and recognize words and objects. By processing information from sensory inputs to generate behavior, these neural system models enable our artificial humans to respond expressively to the people they interact with.

We have developed biologically inspired models of the brain that are responsible for some of the key capabilities of our artificial humans. These are controllable by virtual neurotransmitters and hormones such as dopamine, serotonin, and oxytocin. Together they influence virtual physiological states that guide learning and behavior, modulating the emotions our artificial humans "feel" and express.
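To make the idea concrete, here is a minimal sketch of how virtual neurotransmitter levels could modulate learning and expressed emotion. All names, values, and mappings below are illustrative assumptions, not Soul Machines' actual model or API:

```python
from dataclasses import dataclass

@dataclass
class NeurotransmitterState:
    # Hypothetical normalized levels in [0, 1]; purely illustrative.
    dopamine: float = 0.5   # reward signal, assumed to scale learning
    serotonin: float = 0.5  # mood baseline, assumed to set valence
    oxytocin: float = 0.5   # social bonding, assumed to set engagement

def modulated_learning_rate(base_lr: float, state: NeurotransmitterState) -> float:
    """Scale a base learning rate by dopamine level (reward-modulated learning)."""
    return base_lr * (0.5 + state.dopamine)

def expressed_emotion(state: NeurotransmitterState) -> dict:
    """Map neurotransmitter levels to a simple valence/engagement readout."""
    return {
        "valence": state.serotonin - 0.5,  # above 0.5 reads as positive mood
        "engagement": state.oxytocin,      # social engagement intensity
    }

state = NeurotransmitterState(dopamine=0.9, serotonin=0.7, oxytocin=0.8)
print(modulated_learning_rate(0.01, state))  # higher dopamine, faster learning
print(expressed_emotion(state))
```

In this toy version, a single scalar per neurotransmitter feeds both the learning update and the emotional display, which captures the gist of one internal state influencing several outward behaviors at once.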

Intelligent sensors, which give our artificial humans the ability to see via a webcam and hear via a device's microphone, are just the beginning of the virtual nervous system that supplies many of the physiological inputs that help bring our artificial humans to life.
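The sensor-to-nervous-system flow above can be sketched as a simple update loop. The function name, the single "arousal" signal, and all constants are hypothetical, chosen only to illustrate sensory readings continuously driving an internal physiological state:

```python
def nervous_system_step(visual_salience: float, audio_level: float, arousal: float) -> float:
    """Update a single physiological 'arousal' signal from sensor readings.

    Inputs are assumed normalized to [0, 1]; arousal decays toward a resting
    baseline and rises in response to salient sights and sounds.
    """
    baseline, decay = 0.3, 0.9
    stimulus = 0.5 * visual_salience + 0.5 * audio_level
    return decay * arousal + (1 - decay) * baseline + 0.1 * stimulus

arousal = 0.3
# Mock per-frame (webcam salience, microphone level) readings.
for frame in [(0.8, 0.6), (0.2, 0.1), (0.0, 0.0)]:
    arousal = nervous_system_step(*frame, arousal)
print(arousal)
```

The design point is that no single reading determines behavior: the internal state integrates inputs over time and relaxes back to baseline, which is what lets downstream emotion and expression vary smoothly rather than jumping with every frame.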