Our visual and auditory systems provide the data that feeds our identification, emotion-detection, and analysis systems. The auditory system is also responsible for passing the captured voice stream to the Natural Language Processing (NLP) engine, which in turn queries the AI platform.
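Conceptually, the flow looks something like the following minimal Python sketch. All component names here (transcribe, detect_emotion, query_ai_platform) are illustrative placeholders, not our actual APIs:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str     # transcript from speech recognition
    emotion: str  # label from the emotion-detection system

def transcribe(audio_frames: bytes) -> str:
    # Stub: a real system would run speech-to-text on the voice stream here.
    return "hello, can you help me?"

def detect_emotion(audio_frames: bytes) -> str:
    # Stub: a real system would classify tone and prosody here.
    return "neutral"

def query_ai_platform(utterance: Utterance) -> str:
    # Stub: the NLP engine forwards the parsed utterance to the AI platform.
    return f"[AI response to '{utterance.text}' ({utterance.emotion})]"

def handle_voice_stream(audio_frames: bytes) -> str:
    """Auditory input feeds both analysis and the NLP engine, which queries the AI."""
    utterance = Utterance(
        text=transcribe(audio_frames),
        emotion=detect_emotion(audio_frames),
    )
    return query_ai_platform(utterance)

print(handle_voice_stream(b"\x00\x01"))  # demo with dummy audio bytes
```

The key design point is that the same captured stream feeds two paths: analysis (who is speaking, how they feel) and conversation (what they said).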

Voice and speech are created specifically for each Digital Human, depending on the language and accent required. To ensure the most life-like facial expressions while talking, we train the facial muscles and lip movements to match the voice. This has even enabled people who are deaf to lip-read the Digital Human.
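One common way to drive this kind of lip synchronization is to map the phonemes of the synthesized voice to visemes (mouth shapes) on the face rig. The sketch below uses a deliberately tiny, hypothetical phoneme-to-viseme table; production systems use much larger, language- and accent-specific sets:

```python
# Toy phoneme-to-viseme table; real tables are larger and language-specific.
PHONEME_TO_VISEME = {
    "AA": "open",       # as in "father"
    "IY": "wide",       # as in "see"
    "UW": "rounded",    # as in "blue"
    "M":  "closed",     # lips pressed together
    "B":  "closed",
    "P":  "closed",
    "F":  "lip_teeth",  # lower lip against upper teeth
    "V":  "lip_teeth",
}

def visemes_for(phonemes: list[str]) -> list[str]:
    """Map a phoneme sequence to the mouth shapes the face rig should hit."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# "move" -> M, UW, V  =>  closed, rounded, lip_teeth
print(visemes_for(["M", "UW", "V"]))
```

Because each viseme corresponds to a visually distinct mouth shape, timing these shapes accurately to the audio is what makes the speech readable on the lips as well as audible.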