Can AI & Humans Be Team Players? Part Two
If software is eating the world, can we trust artificial intelligence (AI) to help run it? That’s the question I asked and began to answer in Part One.
As we enter 2020 and our organizations employ more sophisticated machine learning (ML) capabilities than ever, how can we promote better understanding between computers and humans?
Getting to Teamwork
Improving human-machine environments requires us to zero in on trust. And more specifically, how trust is created prior to and during system interaction. One example model of trust, developed by Hoff and Bashir, illustrates how human trust is created:
Without going into too much detail, this model suggests:
- There are things we learn or know before interacting with the system.
- And things we learn during our real-time work with the system will influence our degree of trust.
- How we design the system—specifically, its interface—can also make a difference.
Knowing this, and keeping in mind that other techniques also exist, how can a system help convey "trustworthiness"? The answer is by informing the human operator about the certainty of a potential conclusion or course of action. Here's a visual description of what that means:
- In the first picture, the system’s heat map shows the human operator how it came to recognize a building (and its degree of certainty).
- The second picture offers another view of explainability, but uses the example of a hurricane tracking map with potential variations. This example vividly depicts the degree of uncertainty (note all the potential outliers).
- The third example shows the system’s (in this case, the autonomous car’s) awareness of its surroundings before performing an action. This picture is designed to give the car’s operator a higher comfort level with the planned route.
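The idea behind all three examples can be sketched in a few lines of code: instead of returning a bare answer, the system pairs its conclusion with a certainty estimate and escalates to the human when that certainty is low. This is a minimal illustration, not a production pattern; the function names, labels, and the 0.80 threshold are assumptions chosen for the sketch.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def report_prediction(labels, scores, confidence_floor=0.80):
    """Pair the top prediction with its certainty so the human
    operator can decide whether to trust it or review it."""
    probs = softmax(scores)
    best = max(range(len(labels)), key=lambda i: probs[i])
    verdict = "accept" if probs[best] >= confidence_floor else "flag for human review"
    return labels[best], round(probs[best], 2), verdict

# Hypothetical raw scores from a building-recognition model
print(report_prediction(["building", "bridge", "road"], [2.5, 0.3, 0.1]))
```

The point is not the math but the interface: surfacing the certainty alongside the answer gives the operator the same kind of signal the heat map and the hurricane track provide visually.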
Reducing Stressful Interaction
A key issue in automation design is determining the level of interaction between the system and its human operator. It also means understanding that every interaction takes place within a socio-technical system. This makes it imperative that automation and AI designers consider what's appropriate when it comes to communication.
We don’t need to maximize autonomy.
We need to maximize overall team performance.
Remember "Clippy"? If you don't, it was Microsoft's animated Office assistant. The intention behind Clippy was certainly good, but the avatar quickly began to annoy many users. Reactions like that translate into non-productive interactions between machines and humans and, as we already touched on, can add cognitive stress.
More recently, though, interactive gaming platforms prove it’s possible to create relatable, non-aggravating and useful characters that stay in the background until users need them.
The lesson here: Neglecting socio-technical communication considerations when designing systems can unnecessarily stress human operators.
Focus on Your Mission
Finally, it’s important to focus on your ultimate goal, which is to deliver on your mission or desired business outcome(s). In contrast to what you might see and hear from industry pundits, the goal is not to automate all the things. Instead, you want to enable both degrees of human and machine activity (i.e., the team) for the greatest overall benefit. This also means rethinking how you measure effectiveness.
The following chart, adapted from a paper by Damacharla et al., highlights potential metrics to consider. You're likely to come up with others to meet your requirements.
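To make this concrete, here is a minimal sketch of what mission-oriented scoring might look like in code. The metric names and the task structure are assumptions for illustration; they are not drawn from the Damacharla et al. paper. The key design choice: automation rate is reported as context, but mission success is judged on whether the team, human and machine together, completed the critical tasks.

```python
def team_performance(tasks):
    """Score a human-machine team on mission outcomes rather than
    on how much of the work was automated."""
    completed = [t for t in tasks if t["done"]]
    automated = [t for t in tasks if t["by"] == "machine"]
    return {
        "task_success_rate": len(completed) / len(tasks),
        "automation_rate": len(automated) / len(tasks),  # context, not the goal
        "mission_success": all(t["done"] for t in tasks if t["critical"]),
    }

# Hypothetical mission: two critical tasks done, one non-critical task missed
tasks = [
    {"done": True,  "by": "machine", "critical": True},
    {"done": True,  "by": "human",   "critical": True},
    {"done": False, "by": "machine", "critical": False},
]
print(team_performance(tasks))
```

A team that automated everything but missed a critical task would score worse here than a mixed team that delivered the mission, which is exactly the reframing the chart is meant to encourage.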
4 Key Takeaways
AI and humans can be team players if we recognize and understand the following:
- How trust forms between humans and machines, and how ongoing interaction can be shaped to increase it.
- The design of the human-to-machine interface is critical. We have existing cues to leverage and continuously improve human understanding and confidence.
- We can’t overlook the socio-technical aspects when designing interactions. We need to avoid excessive, interrupting and/or alarming communications that may lead to disuse and lower trust.
- Developing success metrics must include measuring task and overall mission success rather than relying solely on maximizing the degree of automation.
The Long View: Improving the Human-Machine Environment
I believe doing AI the right way requires an interdisciplinary approach. That’s because beyond algorithms, AI will be part of a joint cognitive system.
It's our collective job to ensure these systems offer the best possible experience to the humans working with them, which includes:
- Knowing how humans develop trust.
- Understanding human work patterns.
- Keeping an eye on language and semantics as these, too, contribute to understanding.
Learn about VMware’s Project Magna, the self-driving data center fueled by AI/ML.