
AI and the Future of Human Decision Making: Winter is coming! Is it, though?

By Dr. Pooya Tabesh. Will we lose our jobs to an army of intelligent machines? Will artificial intelligence (AI) systems advance to the point that human decision making becomes obsolete? Should we prepare to defend ourselves against killer robots? Is “winter” coming?

While these important questions have been around for a few decades, efforts to answer them have not produced a consensus, and drawing a line between fearmongering and legitimate concern has become a complicated task for the general public. Two camps of commentators are weighing in:

The first camp views the technological developments surrounding AI as a revolution with positive consequences. They highlight the many benefits that AI can generate for effective decision making in various domains. AI tools contribute to the translation of documents, the reliability of self-driving cars, and the diagnosis of illnesses, among many other applications. Proponents of this view go beyond these benefits to envision AI as a panacea and paint an exaggeratedly rosy picture of an AI-infused future. As an example, Google’s CEO Sundar Pichai has characterized AI as “the most important thing humanity has ever worked on” and as “something more profound than electricity or fire” [1].

The commentators in the second camp, however, portray a less optimistic, and sometimes scary, picture of AI’s future and warn about the negative consequences that AI can bring for modern societies. For example, they share concerns regarding mass unemployment in a world dominated by AI systems that outthink humans. They also highlight the biases and inequities that AI systems can create in human society and warn about the dangers of manipulative propaganda that AI tools can produce by creating fake but realistic images or videos. In other cases, these legitimate concerns morph into exaggerated predictions about the risks associated with AI tools that could be misused by ill-intentioned terrorists, or about the emergence of intelligent but ruthless killer robots that would eliminate civilization.

In this post, the intention is not to take sides in this heated debate. Instead, I focus on the process of decision making by human decision makers to move beyond the arguments for and against AI and to highlight the possible synergistic partnership between human decision makers and AI systems. By rejecting exaggerated positive and negative predictions about the future of AI, I want to emphasize that AI and human decision processes, while having their own strengths and weaknesses, can be viewed as complements, not substitutes, in the process of decision making.

What is AI?

The confluence of data proliferation (e.g., big data) and algorithmic advancement (e.g., machine learning) has given rise to the popularity of AI and has generated great public interest in the subject. While the term AI was coined around 1956, it has recently become a buzzword that, along with terms such as big data and machine learning, gets tossed around within many professional and academic communities. In broad practical terms, AI refers to a heterogeneous set of non-human intelligent systems or algorithms with the ability to learn from data. From media to healthcare, and from fashion to transportation, a large variety of AI tools and techniques are growing in every industry at an astonishing rate. Digital assistants such as Alexa and Siri use AI to recognize their users by analyzing data related to their facial features or voice patterns and make useful recommendations based on the user’s location and prior preferences. In a more advanced example, AI algorithms based on machine learning have helped Emotech develop a robot assistant, OLLY, which has the capability to evolve its personality and gradually become similar to its owner. It learns to detect the owner’s “facial expressions, voice inflections and verbal patterns” and then initiates conversations or reacts to the user’s feelings [2]. Despite the technical differences among AI systems, the majority of them serve the same purpose: to facilitate the transformation of existing data into useful insights, decisions, and outcomes.

Human decision making

When it comes to the process of decision making, humans tend to rely on two approaches to making a choice: analysis and/or intuition. These two major information processing systems are different in nature and represent different aspects of thinking and problem solving. In the following, I explain each of these approaches in detail and discuss its relevance to AI analytics.

Analytical decision making & AI. Analytical decision making entails a systematic and intentional process of data collection and analysis before making a choice. As a well-established approach to decision making in many modern organizations, analytical decision making requires a methodical approach for collecting and analyzing relevant internal and external information, devising alternative courses of action, and comparing the alternatives based on specific criteria (e.g., decision goals and objectives) before making a choice. The analytical approach works best for solving complex problems and when sufficient data related to a phenomenon or task are available. However, when it comes to large volumes of data, this deliberate process of data collection and analysis becomes arduous for human decision makers, who are generally constrained by cognitive capacity and attention limits.
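To make the "compare alternatives against criteria" step concrete, here is a minimal sketch in Python; the alternatives, criteria, weights, and ratings are all hypothetical assumptions introduced for illustration, not examples from this post.

```python
# A minimal sketch of scoring decision alternatives against weighted criteria.
# Every alternative, criterion, weight, and rating below is a made-up illustration.

criteria_weights = {"cost": 0.4, "speed": 0.3, "risk": 0.3}  # weights sum to 1.0

# Ratings on a 1 (worst) to 5 (best) scale for each hypothetical alternative.
alternatives = {
    "Expand the in-house team": {"cost": 2, "speed": 3, "risk": 4},
    "Outsource the project":    {"cost": 4, "speed": 4, "risk": 2},
    "Adopt an AI-based tool":   {"cost": 3, "speed": 5, "risk": 3},
}

def weighted_score(ratings, weights):
    """Collapse an alternative's ratings into a single weighted score."""
    return sum(weights[criterion] * ratings[criterion] for criterion in weights)

# Rank the alternatives from best to worst by their weighted score.
for name, ratings in sorted(alternatives.items(),
                            key=lambda item: weighted_score(item[1], criteria_weights),
                            reverse=True):
    print(f"{name}: {weighted_score(ratings, criteria_weights):.2f}")
```

The point of the sketch is only to show why the analytical route strains human attention: each additional criterion or alternative multiplies the data that must be gathered, rated, and compared, which is precisely where automated support becomes attractive.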

AI-based decision support tools and systems can contribute to the process of analytical decision making by enhancing the speed and accuracy with which structured (e.g., tabulated) and unstructured (e.g., voice and video) data can be collected and processed. For instance, AI algorithms based on machine learning can facilitate the prediction of future unknown states and events based on current and past business-related data. Pattern discovery and trend analysis are among the other avenues in which AI can provide value for analytical decision making. AI tools can translate text, drive cars, and recognize faces. They can assist physicians in diagnosing illnesses or provide strategic decision makers with the possible outcomes of their decisions. In short, these systems can learn from the past to enhance analytical decision making. At the same time, similar to human decision makers, AI tools are prone to mistakes and biases rooted in the data they learn from.
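As a rough illustration of "learning from the past to predict an unknown future state," the sketch below fits a simple linear-regression model with scikit-learn on synthetic data; the variables, numbers, and the choice of library are assumptions made for the example, not anything described in this post.

```python
# A minimal sketch of prediction from past structured data with a simple model.
# The data set is synthetic and purely illustrative; it stands in for historical
# business records such as advertising spend, unit price, and observed sales.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=42)

# Hypothetical past observations: advertising spend (in $1,000s) and unit price ($).
ad_spend = rng.uniform(10, 100, size=200)
unit_price = rng.uniform(5, 15, size=200)
# Past sales follow an assumed pattern (unknown to the model) plus noise.
past_sales = 3.0 * ad_spend - 8.0 * unit_price + rng.normal(0, 10, size=200)

X = np.column_stack([ad_spend, unit_price])   # features describing past conditions
y = past_sales                                # outcomes observed in the past

model = LinearRegression().fit(X, y)          # learn the pattern from historical data

# Estimate the outcome of a decision that has not yet been tried.
candidate = np.array([[80.0, 9.5]])           # planned spend and price
print("Predicted sales:", round(float(model.predict(candidate)[0]), 1))
```

The same sketch also illustrates the limitation noted above: the model can only extrapolate the patterns present in its training data, so biased or irrelevant historical records produce biased or useless predictions.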

Intuitive decision making & AI. Intuitive decision making entails an effortless and automatic process of decision making based on a holistic understanding of previous events and experiences. As opposed to the analytical approach, intuitive decision making is spontaneous and does not follow a systematic approach. Previously learned knowledge gained through experience enables the intuitive decision maker to sense the opportunities and threats surrounding a decision. Based on such feelings or implicit understandings, she proceeds to make a choice, albeit without articulable reasoning. Intuitive approaches to decision making are suitable for situations characterized by overwhelming ambiguity or when there is no precedent for a decision. In these circumstances, relevant data regarding similar situations in the past are not available; nor is there time to systematically collect appropriate data.

Existing AI algorithms based on machine learning are ill-equipped to offer much insight when it comes to intuitive decision making. In fact, current AI algorithms, by design, are only poised to learn systematically from a data feed; absent relevant data from the past (e.g., in extremely ambiguous and unprecedented circumstances), even the most sophisticated AI tools are nothing but several lines of useless code. These tools, if fed irrelevant data, will best exemplify the notion of garbage in, garbage out. In short, while AI tools can drastically facilitate analytical tasks such as driving, translation, and the diagnosis of diseases, they cannot be creative in unique situations. “A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience [3].” As another example, AI cannot find a solution for an unprecedented organizational crisis for which a seasoned CEO might come up with an intuitive answer. Overall, AI cannot think or understand in a holistic, creative, or abstract manner. At the same time, however, AI research (e.g., machine learning algorithms) has become a kind of alchemy. “Alchemists discovered metallurgy, glass-making, and various medications … but failed to cure diseases or to transmute basic metals into gold” [4]. Similarly, AI tools try to emulate human intuition and repeatedly fail to do so, but they produce powerful tools as byproducts: tools that can “beat human Go players, identify objects from pictures, and recognize human voices.”

A pragmatic paradigm: Human-AI partnership

Considering the shortcomings of AI systems for intuitive decision making, and the importance of intuition for real-life decision making, it is safe to assume that, in the near future, humans will remain the masterminds behind AI algorithms. Under such circumstances, it is reasonable to view AI systems as partners that supplement and augment human intelligence to facilitate better and faster decisions. This way, our envisioned future of AI would look less like a Terminator-style scenario and more like a synergistic collaboration [5] between human decision makers and ‘decision support tools.’ Based on such a partnership conceptualization of the human-AI relationship, human society would benefit from embracing AI tools and making them more accurate, effective, and reliable. Toward this goal, capital investment and appropriate policy considerations could further advance AI technologies and make their benefits accessible to a larger number of beneficiaries across the globe. In this regard, equipping workers with the skills and abilities required to form cognitive partnerships with AI systems can slow down some of the negative consequences, such as job displacement.

Concluding remarks

Emergent technologies such as advanced AI systems come with benefits as well as negative consequences (e.g., errors, inefficiencies, and side effects). Considering the shortcomings of AI in handling intuitive decisions, AI machines will not entirely replace human decision makers, at least in the near future. It is up to human decision makers to leverage the benefits of AI in order to facilitate decision making and solve complex problems. More importantly, however, they need to continue to invest in what they do best: critical thinking, intuitive analysis, and creative problem solving. In the end, while it is difficult to jump to any conclusion about the future of AI, human decision makers seem set to continue to monopolize these capabilities, which will help them keep the upper hand and prepare themselves in case “winter is coming.”

That said, it would be naïve to ignore the fact that increased AI capability can negatively affect the labor market and contribute to increased income inequality. Following the AI-human partnership paradigm outlined earlier, the immediate action item to remedy the issue is policymaking. Appropriate policymaking for improving AI systems and ensuring access to AI-related education for vulnerable populations of workers would not only slow down mass technological unemployment but also help prevent subsequent inequities that might arise from the resultant skewed distribution of wealth.

Pooya Tabesh, Ph.D.

Assistant Professor of Management

References

[1] https://money.cnn.com/2018/01/24/technology/sundar-pichai-google-ai-artificial-intelligence/index.html

[2] https://builtin.com/artificial-intelligence/examples-ai-in-industry

[3] https://www.wired.com/story/greedy-brittle-opaque-and-shallow-the-downsides-to-deep-learning/

[4] https://syncedreview.com/2017/12/12/lecun-vs-rahimi-has-machine-learning-become-alchemy/

[5] Jarrahi, M. H. (2018). Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586.
