Can AI & Humans Be Team Players? Part One

If software is eating the world, can we trust AI to help run it?

This is something I think about a lot and asked VMworld attendees to consider, too.

by Cameron Haight, VP and CTO, Americas, VMware

AI Pioneers

The rising popularity of artificial intelligence (AI) makes it seem like the concept is new. It isn’t.

In 1955, John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon proposed a Dartmouth Summer Research Project on AI: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Their pioneering work set the stage for others to dive deeper into the discipline, defining the types of AI more specifically:

  • Artificial Narrow Intelligence (ANI): Machine learning is used to solve specific challenges (where we are today).
  • Artificial General Intelligence (AGI): Machines have the intellectual capabilities of humans.
  • Artificial Super Intelligence (ASI): Machines outperform humans across all domains of thinking.

Inventor and futurist Ray Kurzweil believes we'll reach AGI by 2029. He also believes that what he calls the singularity will arrive in 2045. According to him, "We will multiply our effective intelligence a billion-fold by merging with the intelligence we have created." He also suggests, however, that achieving this capability might not necessarily be a good thing for humanity. That's a topic for another discussion.

Entering the Spring of AI

Over the years, we’ve weathered so-called AI Winters — times when AI funding and research slowed down due to unmet expectations. Today, the numbers show us entering a new AI Spring:

  • An 8x increase in academic papers published on AI since 1996
  • $3B in equity funding raised in 2018, a 72 percent increase over 2017
  • 37 percent of organizations have implemented AI in some form
  • $9B in projected spending on AI systems in 2022
  • $7T in projected global economic impact by 2030

Most of us interact with AI, in the form of machine learning, regularly and may not even know it. That's because it's already embedded in many products, from smartphones to IT monitoring technologies (including those from VMware). Interestingly, and somewhat surprisingly, at least one industry consulting firm believes the top enterprise use case for AI is in IT itself: IT automation (see the Deloitte study).

AI Is Everywhere, But Basic Automation Came First

Before diving into the impact of AI, I believe it's important to look at earlier attempts to use automation to offload human effort and to deliver better, more consistent and more economical outcomes.

The Fitts’ list, published in 1951, described what were believed to be the strengths of humans and machines:

  • Humans surpass machines in detection, perception, judgement, induction, improvisation and long-term memory.
  • Machines surpass humans in speed, power, computation, replication, simultaneous operations and short-term memory.

Cameron Haight shares his insights on AI-human interaction at VMworld 2019 US.

The idea was that we could approach automation with a divide-and-conquer perspective, known as the Compensatory Principle. If you need speed, a computer should do the task; if you need judgement, a human should take it. Subsequent efforts to assign roles based upon the relative capabilities of humans and machines evolved into the notion of function allocation (see Human and Computer Control of Undersea Teleoperators).
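To make the divide-and-conquer idea concrete, here is a minimal, purely illustrative sketch of function allocation in the Compensatory Principle spirit: each task is routed to whichever side of the Fitts' list is presumed stronger for the capability the task depends on most. The Task class, the allocate function and the capability sets are hypothetical names invented for this example, not part of any real system.

```python
from dataclasses import dataclass

# Capabilities where, per the 1951 Fitts' list, each side is presumed stronger.
MACHINE_STRENGTHS = {"speed", "power", "computation", "replication",
                     "simultaneous_operations", "short_term_memory"}
HUMAN_STRENGTHS = {"detection", "perception", "judgement", "induction",
                   "improvisation", "long_term_memory"}

@dataclass
class Task:
    name: str
    primary_capability: str  # the capability the task most depends on

def allocate(task: Task) -> str:
    """Assign a task to 'machine' or 'human' based on its primary capability."""
    if task.primary_capability in MACHINE_STRENGTHS:
        return "machine"
    if task.primary_capability in HUMAN_STRENGTHS:
        return "human"
    return "human"  # default to the human when the split is unclear

if __name__ == "__main__":
    for t in [Task("correlate 10,000 metrics", "computation"),
              Task("decide whether to page the on-call engineer", "judgement")]:
        print(f"{t.name} -> {allocate(t)}")
```

The brittleness of that if/else is part of the point: any task whose key capability doesn't fall cleanly on one side of the list drops through to a default, which previews the critique below.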

Yet an alternative school of thought has arisen, led by researchers such as Erik Hollnagel and David Woods (see Joint Cognitive Systems: Foundations of Cognitive Systems Engineering), suggesting that the attempt to cleanly delineate tasks between humans and machines is not only misguided, but also potentially dangerous. With few exceptions, we can rarely specify all of the steps needed to complete a task (and handle any exceptions that may arise). What usually happens is that we automate only the simplest tasks and processes; those requiring greater cognitive flexibility are left to humans (this is referred to as the "leftover" principle).

Our expectation of automation is that we'll get better performance with fewer errors, freeing us as intelligent humans to focus on higher-level concerns. But automation also brings something else: new forms of complexity and new impacts on human cognition.

Hidden Effects of Automation

A real-life example of human cognitive stress was Air France Flight 447, a tragedy that killed all 228 passengers and crew. The official crash report cites operator error, and it is true that numerous contributors to the accident were human-related. In reality, however, multiple issues contributed to the tragedy, not least the design of the onboard automation, which failed to give the pilots critical information about why the autopilot was being switched off (or any advance warning of the impending transfer of control).

Another example comes from the U.S. Navy cruiser USS Vincennes, where tragedy might have been averted if human wisdom had overridden automated system procedures. In this case, the individuals involved may have exhibited excessive trust in the machine, leading to a loss of situation awareness that contributed to poor human responsiveness and, ultimately, to the catastrophe.

Please keep in mind that in both examples, we are not sitting in judgement of the humans involved. As historical observers, we were not there, in what can only be described as very stressful situations. But we owe it to future scenarios in which humans and machines collaborate to keep thinking about the human-to-machine environment.

A tremendous challenge is that when automation works well, it often erodes the situational awareness of its human operators. Thus, when systems break, human performance can suffer. Often, the design of the automated system keeps us somewhat out of the loop. And that is only one of automation's hidden effects.

We're also seeing how the coupling that automation introduces adds to existing complexity and can lead to cascading effects.

“Automation surprises occur when operators of sophisticated automation, such as pilots of aircraft, hold a mental model of the behavior of the automation that does not reflect the actual behavior of the automation,” wrote researchers Lance Sherry, Michael Feary, Peter Polson and Everett Palmer.

And the situation is getting worse. Increasingly sophisticated machine learning models are difficult, if not impossible, for the human operators of automation to understand, and human needs aren't sufficiently factored into automation design requirements.
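To illustrate what such a mismatch can look like, here is a toy sketch, not taken from any real avionics or monitoring system: the automation silently degrades its mode when its inputs become invalid, while the operator's mental model still assumes the old mode. The class names and modes are invented purely for the illustration.

```python
class Autopilot:
    """Toy automation that changes mode on its own when sensor data goes bad."""
    def __init__(self):
        self.mode = "ALTITUDE_HOLD"

    def update(self, airspeed_valid: bool) -> str:
        # The automation protects itself by reverting to manual control,
        # but nothing here is responsible for telling the operator why.
        if not airspeed_valid:
            self.mode = "MANUAL"
        return self.mode

class OperatorModel:
    """What the human believes the automation is doing."""
    def __init__(self):
        self.assumed_mode = "ALTITUDE_HOLD"

ap, pilot = Autopilot(), OperatorModel()
actual = ap.update(airspeed_valid=False)  # sensors drop out
print(f"actual mode: {actual}, assumed mode: {pilot.assumed_mode}")
# actual mode: MANUAL, assumed mode: ALTITUDE_HOLD  -> an automation surprise
```

The gap between the two printed modes is the surprise: the automation behaved "correctly" by its own rules, yet the operator is now controlling a system that no longer matches their mental model.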

From Human-Automation Systems to Human-AI Teams

Today, there is a growing focus on the concept of joint cognitive systems thinking. This abandons "the language of machines compensating for human limits and instead, [focuses] on how people are adaptive agents, learning agents, collaborative agents, responsible agents, tool creating/wielding agents," writes David D. Woods, professor of Integrated Systems Engineering at The Ohio State University.

Recognizing that it's not easy to divide up tasks, we now know that success is not just about building automated systems. It's also about understanding that humans and machines have to operate as "teams." Yet, as we increasingly seek to factor AI into our automation plans, we need to recognize and avoid issues that can lead to dysfunctional human-machine teams. They echo the dysfunctions highlighted in Patrick Lencioni's popular book on human team dysfunction, The Five Dysfunctions of a Team:

  1. Absence of trust: "I don't trust the machine."
  2. Fear of conflict: "Whoever designed this system must be smarter than me, so I don't want to offer any criticism."
  3. Lack of commitment: "I don't need to be invested because AI will take my job anyway."
  4. Avoidance of accountability: "Nothing that goes wrong is my fault; I didn't design it."
  5. Inattention to results: "Systems rank higher than me. I don't care about performance."

Given the AI capability that exists today, machines are not at the point where they can become team players, let alone team "leaders." For now, humans will retain those roles. But as we seek to employ more sophisticated machine learning capabilities in the future, we need to make sure we don't repeat the issues often portrayed in the TV series Star Trek, where constant conflicts arose from a lack of understanding between Spock (who was, for all intents and purposes, a biological computer) and his human crewmates.

Getting to Teamwork: Stay Tuned for Part Two

In the second part of this article, I’ll dive into how we can improve the human-machine environment.