Five Levels of Autonomous Animation:
Updated Framework to Improve Human-Machine Collaboration

Introduction

Human cooperation is one of the most important forces in history. It has helped people eradicate polio, reach the moon, and sequence the entire human genome. As the world becomes increasingly virtual, human cooperation with machines will unlock even greater innovations and milestones for humanity. In order to reach that potential, we must upskill our machines to have more empathetic, congenial, and natural interactions with us at scale.

Innovators have already used current AI technology to achieve a wide range of impressive specific outcomes. Machine learning algorithms have enabled computers to hear and see at near-human levels—think of the image, facial, and speech recognition capabilities in your favorite devices. Other models allow you to use online AI tools to generate new visual art, literature, or musical compositions based on previous artworks. As impressive as these advancements may be, we are only starting to see the many applications of AI. Rather than harnessing AI to complete specific tasks, we must create a collaborative system in which machines can co-create valuable, relevant content with humans.

As things stand today, humans who want to work with machines must perform unnatural actions. We strap ourselves into cars and work their complex controls. We navigate the intricacies of an operating system so that we can access our applications and documents. The paradigm needs to shift to machines learning to navigate human behaviors in order to foster collaboration. As a company, it is our goal to humanize artificial intelligence. We have a fundamental belief that machines can be more helpful to us if they’re more like us, which has led us to build and develop human-like Digital People designed to collaborate and connect with humans. Digital People are biologically inspired agents that do not pretend to be human but have human-like characteristics uniquely suited to the way we communicate. They create a safe, engaging, scalable, and robust brand experience that enables multi-channel communication, builds relationships, and creates trust.

To refocus the world’s efforts with AI, Soul Machines proposes a standard that describes how humans and machines should collaborate to co-create. Soul Machines presented this standard, Autonomous Animation, in 2019 to define how innovators can combine sophisticated algorithms and embody them in a human-friendly form factor to foster a collaborative environment in which Digital People can behave naturally.

Soul Machines’ Approach to Autonomous Animation

Soul Machines’ approach to Autonomous Animation is rooted in its belief that the best experiences and interactions are only possible when high-quality CGI is combined with a computer architecture inspired by how the human brain operates.

This is exactly the goal of Soul Machines’ HumanOS. Researchers have created a Digital Brain to replicate the way humans handle everyday interactions. By combining models of physiology, cognition, and emotion, Soul Machines has created a new form of biologically inspired AI. The Digital Brain’s neural network models are driven by deep research into neuroscience, psychology, and cognitive science—and are the only explainable way to autonomously animate digital characters. Soul Machines takes the best types of human conversations—engaging, warm, emotional connections—and combines them with revolutionary technology to create the most lifelike and dynamically interactive experiences. It’s a scalable and cost-effective way to animate for customer experience, learning and development (L&D), and education and health, areas where face-to-face interactions should be thoughtful, engaging, and unique in the same way each human interaction is.

The “right” Autonomous Animation solution will ultimately be a combination of algorithms developed worldwide to advance humanity’s approach. Soul Machines is designing a platform that will be able to integrate these various innovations. This platform will house the company’s proprietary, innovative architecture and cognitive models that improve how the system makes decisions.

What This Means for Soul Machines’ Customers

Through Autonomous Animation, Soul Machines aims to deliver a personalized experience at scale. This experience will enable deeper, more personal relationships with customers and more thoughtful, multimodal engagement. It will also come with a lower cost of ownership. The goal, of course, is not to replace real people with Digital People—it’s to let organizations provide highly personalized, face-to-face customer experiences that might be too expensive, or even impossible, to deliver with humans.

One obvious use case is to deploy Digital People to assist a help desk that’s flooded by customer or employee questions. Whereas human help desk agents get tired of answering the same questions repeatedly and may let this affect their mood and tone, Digital People excel at staying cheerful while providing fast, accurate answers. This approach can help your organization contain costs and free up your people for challenging, complex problems that require creativity and human intelligence to solve.

An even more timely use case is to deploy Digital People to solve brand connectedness issues in our COVID and post-COVID world. If lockdown restrictions prevent your business from expanding into a new geography, you can use Digital People to speak to potential new customers in the appropriate languages. Because Digital People are broadcast from the cloud onto devices, you can have 100 or 100,000 Digital People representing your brand at any given moment—and design them in the ethnicities, ages, and genders that will best resonate with your target market.

In any use case, Digital People offer another distinct advantage over people: they are nonjudgmental. Because they are built with empathy from the start, they are relatable without making users feel judged. If customers ever contact your organization to discuss financial difficulties or sensitive medical concerns, they may actually feel more comfortable—at least initially—speaking with a Digital Person.

To measure success, Soul Machines is proposing Five Levels of Autonomous Animation (along with a level 0) that focus on raising the “easy to work with” score of machines. The increasing sophistication of these machines will evolve the user’s role from subordination to a more symbiotic relationship.

Level 0

Humans are responsible for all aspects of the animation, including planning, creating, writing, and recording the sequence. It is the same general performance for all users. The system may provide general user data, but there is no direct learning feedback loop.

Example:

An animated character provides a scripted overview of a company’s products on a continuous loop.

Level 1

A human directly drives all aspects of the animation, with the character mirroring the performer’s behavior in real time. The performance varies with each human driver, but there is no direct learning feedback loop.

Example:
A person uses an animated character to mimic their behaviors digitally.

Level 2

A Human Authored Animation (HAA) system uses algorithms to generate a set of animations pre-planned by the author. The system works in a limited, defined use case and requires every action to be planned explicitly. The system can capture usage patterns to inform the author.

Example:
An HR department creates a virtual trainer who walks new employees through common onboarding tasks such as enrolling in benefits.

Level 3

A Cognitively Trained Animation (CTA) system uses algorithms to generate a set of animations without the need for explicit authoring. Authors evolve into trainers focused solely on defining the scope of content and role. The system informs trainers of areas for improvement.

Example:
A customer service department creates a virtual agent whose behavior and responses adjust based on users’ reactions.

Level 4

A Cognitively Trained Animation (CTA) system at this level generates new animations dynamically without specific authoring. Trainers focus on teaching the system goals so that it can address a broader set of situations and decide the best course of action. The system tries new interactions and learns from each one with the guidance of the trainer.

Example:
A financial services provider creates a virtual assistant who counsels customers on complex financial situations and creates new behaviors on the fly, all in line with branding and marketing goals periodically provided and updated by the company.

Level 5

A Cognitively Trained Animation (CTA) system at this level can dynamically animate and interact in all scenarios. Trainers provide the core values that assist its decision making, and the system learns without a trainer’s guidance.

Example:
A nonprofit organization providing medical services in remote, underserved areas deploys a virtual nurse who can provide customized medical advice and conduct detailed conversations entirely in response to patients’ input.
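To keep the taxonomy above in one place, here is an illustrative Python sketch. The level names and the authoring cutoff are summary labels inferred from the descriptions above, not official Soul Machines terminology:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Summary labels (hypothetical) for the six levels described above."""
    L0_SCRIPTED_LOOP = 0        # fully human-authored animation played on repeat
    L1_HUMAN_DRIVEN = 1         # a human directly drives the character in real time
    L2_HUMAN_AUTHORED = 2       # HAA: algorithms play back pre-planned animations
    L3_COGNITIVELY_TRAINED = 3  # CTA: trainers define scope; no explicit authoring
    L4_GOAL_TRAINED = 4         # trainers teach goals; system creates new behaviors
    L5_FULLY_AUTONOMOUS = 5     # system learns and acts without trainer guidance

def requires_explicit_authoring(level: AutonomyLevel) -> bool:
    # At L0-L2 every action must be planned or performed by a human;
    # from L3 upward the system generates animations itself.
    return level <= AutonomyLevel.L2_HUMAN_AUTHORED
```

The key dividing line the framework draws is between L2 and L3: below it, humans author content; above it, humans shift to training and guiding.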

Autonomous Animation: Lower Total Cost of Ownership

In developing and sustaining relevant content, systems built at L1 and L2 require constant human feedback loops. L1 depends entirely on human production, so its cost is a linear investment over time. L2 is algorithm-based but requires pre-recorded content, which becomes static over time; refreshing that content requires new investment.

Systems at L3 and above can learn to adjust over time and create new behaviors dynamically. In this way, new content can be created without continuous human investment.
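A minimal sketch of these cost curves, using purely hypothetical figures (none of the amounts or intervals below come from Soul Machines), might look like:

```python
def l1_cost(months, monthly_human_cost=10_000):
    # L1: every performance is human-produced, so cost grows linearly with time.
    return monthly_human_cost * months

def l2_cost(months, refresh_interval=6, refresh_cost=30_000):
    # L2: content is pre-recorded and goes static; cost is a step function
    # of the periodic refreshes needed to keep it relevant.
    refreshes = 1 + months // refresh_interval
    return refresh_cost * refreshes

def l3_cost(months, setup_cost=60_000, monthly_training_cost=1_000):
    # L3+: a larger up-front investment, after which the system creates new
    # behaviors itself and ongoing human cost shrinks to light trainer guidance.
    return setup_cost + monthly_training_cost * months
```

Under these assumed numbers, an L3+ system costs more in the first few months but becomes the cheapest option well before the two-year mark, which is the shape of the trade-off this section describes.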

What’s Next?

Soul Machines developed Digital People based on the belief that the more AI-based machines become like us, the more useful they’ll be to us. Innovators are already building AI into fundamental technologies that will impact every industry. As this evolution continues, it will only become more important to make AI technology personal, relatable, and comprehensive.

To that end, in the next iteration of our HumanOS, Soul Machines is delivering enhanced capabilities such as:

  • A brand-new cognitive user experience that enables Digital People to seamlessly interact with the 3D world around them while keeping our customers’ content front and center.
  • Autonomous Body Animation that will allow our Digital People to express themselves through their bodies, starting with arms and hands and eventually extending to full-body animation.
  • Revolutionary Blendable DNA technology that will empower you to create your very own Digital Person from scratch, customizing their hair, skin, eye color, face shape, and more to represent your brand in the best way.

As for the future, the only limitations on AI deployment will be those imposed by our collective imagination. Consider how humans and machines can collaborate to address critical shortages of teachers and doctors. Because access to talent is not evenly distributed, Digital People offer a means to level the playing field, letting underserved parts of the world finally benefit from the basic human services they have lacked for so long.

In the years to come, most of us will spend more time interacting with machines and systems driven by AI. We will talk to vacuum cleaners, self-driving cars, and all kinds of automated machinery. We will need to learn how to relate to these machines. Just as importantly, we will need to build systems that humans can trust. How can we make these machines a part of our world in ways that can benefit us all? This is a question Soul Machines is working to answer.
