Our Neural Network Models are at the core of our Embodied Cognition Platform. They were originally developed in our BabyX research program to give BabyX the ability to learn in real time, express itself, speak, and recognize words and objects. They process information from sensory inputs and generate behavior through neural system models, enabling our avatars to express themselves in response to the people they interact with.
We have developed biologically inspired models of the brain that are responsible for some of the key capabilities of our avatars and are modulated by virtual neurotransmitters and hormones such as dopamine, serotonin, and oxytocin. Together these influence virtual physiological states that guide learning and behavior and shape the emotions our avatars "feel" and express.
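To make the idea concrete, here is a minimal sketch of how virtual neurotransmitter levels could modulate an avatar's affective state. The class name, the valence/arousal mapping, and all constants are illustrative assumptions, not the actual model behind our platform:

```python
from dataclasses import dataclass

# Hypothetical sketch: names and formulas are illustrative assumptions,
# not the platform's actual neurotransmitter model.
@dataclass
class VirtualPhysiology:
    """Toy model of virtual neurotransmitter levels driving affect."""
    dopamine: float = 0.5   # reward / motivation signal, in [0, 1]
    serotonin: float = 0.5  # mood stability, in [0, 1]
    oxytocin: float = 0.5   # social bonding, in [0, 1]
    decay: float = 0.1      # how quickly levels relax toward baseline

    def stimulate(self, dopamine=0.0, serotonin=0.0, oxytocin=0.0):
        """Apply a sensory/social stimulus, clamping each level to [0, 1]."""
        self.dopamine = min(1.0, max(0.0, self.dopamine + dopamine))
        self.serotonin = min(1.0, max(0.0, self.serotonin + serotonin))
        self.oxytocin = min(1.0, max(0.0, self.oxytocin + oxytocin))

    def step(self, baseline=0.5):
        """Relax each level toward its baseline (simple homeostasis)."""
        for name in ("dopamine", "serotonin", "oxytocin"):
            level = getattr(self, name)
            setattr(self, name, level + self.decay * (baseline - level))

    def affect(self):
        """Map levels to a (valence, arousal) pair that could drive expression."""
        valence = (self.serotonin + self.oxytocin) / 2.0
        arousal = self.dopamine
        return valence, arousal
```

In this toy version, a friendly interaction might call `stimulate(oxytocin=0.4)`, raising valence and nudging the avatar toward warmer expressions, while `step()` gradually returns the system to baseline between interactions.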
Intelligent Sensors give our avatars the ability to see via a webcam and hear via the device's microphone, and they are just the beginning of the digital nervous system that controls many of the physiological inputs that help bring our avatars to life. Another example is the breathing model we have built to ensure that when our avatars speak, they do so in the most human-like way possible.
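As a rough illustration of what a breathing model coordinates with speech, the sketch below generates a smooth breath cycle that is suppressed while the avatar talks. The function name, rate, and waveform are all assumptions for illustration, not our production model:

```python
import math

# Illustrative sketch only: a toy breathing signal that pauses while the
# avatar speaks. Names and constants are assumptions, not the real model.
def breathing_amplitude(t, speaking, rate_hz=0.25):
    """Return a chest-expansion amplitude in [0, 1] at time t (seconds).

    While speaking, breathing is suppressed so exhalation can carry the
    voice; otherwise the chest follows a smooth sinusoidal breath cycle
    at rate_hz breaths per second (0.25 Hz = 15 breaths per minute).
    """
    if speaking:
        return 0.0
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * rate_hz * t))
```

A real model would blend back in smoothly after speech and vary the rate with the physiological state described above; the point here is only that breathing is a time-varying signal coupled to what the avatar is doing.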