Hype vs Reality: 6 Ways Artificial Intelligence Is Falling Short of Expectations


The world’s IT departments and managed service providers are endlessly pursuing ways to improve our digital experiences. Perhaps the most significant development of the last year is artificial intelligence. Although it has made a lot of progress, it still has a long way to go before it can live up to all of our expectations.

Below are six ways AI falls short of current expectations and what we can do to overcome these challenges.

1. AI doesn’t understand nuance well

Human communication is incredibly complex. It involves a broad range of verbal and non-verbal cues that are culturally dependent and constantly in flux. Anyone who has tried to keep up with the slang of younger generations will understand this fact of life all too well.

Although capable of communicating in a human-like way, AI still finds it hard to grasp the nuances of speech, which leads it to misinterpret data and draw incorrect conclusions. Natural language processing (NLP) and other techniques can improve AI’s understanding of what words mean in context, but there’s still a long way to go before it reaches human levels.
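
To see why nuance is hard, consider a deliberately naive sketch: a bag-of-words sentiment scorer, a toy stand-in for a real NLP model (the word lists and sentences below are invented for illustration). Because it only counts words, literal praise and sarcastic criticism look identical to it:

```python
# Toy bag-of-words sentiment scorer -- illustrative only.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}

def naive_sentiment(text: str) -> str:
    """Count positive vs. negative words; ignore all context."""
    words = text.lower().replace(",", "").replace(".", "").replace("!", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Genuine praise and sarcastic criticism get the same label:
print(naive_sentiment("Great job on the release!"))           # positive
print(naive_sentiment("Great job breaking the build again"))  # positive
```

Real models handle far more context than this toy, but sarcasm, irony, and fresh slang still trip them up for essentially the same reason: the meaning isn’t carried by the individual words.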

2. AI doesn’t have common sense

Over the course of a regular life, humans develop common sense—a basic, intuitive understanding of how the world works. AI systems do not have common sense, which leads them to fail in ways that baffle human reasoning. For example, a driverless car may stop at a broken stoplight forever because it doesn’t understand that the light is broken.

Although knowledge graphs can help AI better grasp basic concepts and develop common sense, they’re far from perfect.
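
As a rough sketch of the idea, a knowledge graph can be modeled as directed edges between concepts, with a search that chains rules together. The facts below are hypothetical and hand-written; real knowledge graphs hold millions of machine-extracted facts:

```python
from collections import defaultdict

# Tiny hand-built knowledge graph: each edge means "implies".
EDGES = defaultdict(set)
for subj, obj in [
    ("stoplight_dark", "stoplight_broken"),      # a dark light implies a broken light
    ("stoplight_broken", "treat_as_stop_sign"),  # rule: treat a broken light as a stop sign
]:
    EDGES[subj].add(obj)

def entails(start: str, goal: str) -> bool:
    """Depth-first search: does `start` imply `goal` via chained rules?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(EDGES[node])
    return False

print(entails("stoplight_dark", "treat_as_stop_sign"))  # True
```

The hard part isn’t the search—it’s collecting and curating enough correct facts that the graph covers the open-ended situations common sense handles effortlessly.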

3. AI continues to be biased

Contrary to popular understanding, AI systems are not objective. They are biased because the data they’re trained on can be (and often is) biased. For example, Amazon used historical data to develop an AI recruitment tool meant to find the best employees. However, that data came from a male-dominated tech industry, so the tool downgraded resumes that contained the word “women’s.” Rather than improving its recruitment process with the tool, Amazon had to shelve it.

AI bias can be reduced by training systems on diverse data sets, but that’s often easier said than done.
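
A tiny, entirely hypothetical example shows how the skew creeps in. If a model scores resume keywords by their hire rate in biased historical data, a word that merely correlates with an underrepresented group ends up penalized:

```python
from collections import defaultdict

# Hypothetical miniature hiring history: (resume keywords, was_hired).
# The outcomes are skewed, and any model trained on them inherits the skew.
HISTORY = [
    ({"python", "chess_club"}, True),
    ({"java", "chess_club"}, True),
    ({"python", "womens_chess_club"}, False),
    ({"java", "womens_chess_club"}, False),
]

def word_hire_rates(history):
    """Hire rate of past resumes containing each keyword."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for words, was_hired in history:
        for w in words:
            total[w] += 1
            hired[w] += was_hired
    return {w: hired[w] / total[w] for w in total}

rates = word_hire_rates(HISTORY)
# The "model" has learned to penalize a word that merely correlates
# with the underrepresented group in the biased history:
print(rates["chess_club"])         # 1.0
print(rates["womens_chess_club"])  # 0.0
```

Balancing the training set, or auditing learned scores for words that act as proxies for protected attributes, are common starting points for mitigation—though, as noted, easier said than done.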

4. AI is bad at generalizing

Human intelligence is great at adapting to new circumstances and learning on the fly. Artificial intelligence is not. For example, humans know that a black cat, a white cat, and an orange cat are all cats. An AI, however, might not recognize a white cat or orange cat if it has only been trained on images of black cats.

This inability to transfer learning from one task to another severely hampers AI’s ability to learn quickly, adapt to new circumstances, and adjust to a fast-changing world. Transfer learning techniques may help, but once again, these are still limited in their scope.
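
The cat example can be sketched with a one-feature classifier (the features and numbers are invented for illustration). Trained only on dark-furred cats, it learns a fur-darkness threshold and then confidently misclassifies a white cat:

```python
def train_threshold(cats, dogs):
    """Learn a single fur-darkness cutoff: midpoint of the class means."""
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

# Fur darkness on a 0-1 scale; training data contains only black cats.
cats_seen = [0.9, 0.95, 0.85]
dogs_seen = [0.3, 0.2, 0.25]   # light-coated dogs
t = train_threshold(cats_seen, dogs_seen)  # ~0.575

def classify(darkness, threshold):
    return "cat" if darkness > threshold else "dog"

print(classify(0.9, t))   # "cat" -- matches the training distribution
print(classify(0.1, t))   # "dog" -- a white cat, misclassified
```

The model learned a shortcut (fur darkness) rather than the concept “cat,” so anything outside the training distribution breaks it—exactly the generalization gap the section describes.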

5. AI has trouble explaining itself

AI relies on complex algorithms whose decisions humans find hard to understand. As it turns out, the AI itself often can’t explain them either. This lack of transparency leads to a lack of accountability, which many find troubling. For example, if an AI gives a medical diagnosis but can’t explain its reasoning, doctors may be justifiably skeptical about acting on it.

This lack of trust makes AI less useful. However, some researchers are trying to overcome it with Explainable AI (XAI).
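
One of the simplest XAI techniques is a per-feature contribution breakdown for a linear scorer. The sketch below uses made-up weights and features, not a real diagnostic model:

```python
# Hypothetical linear risk scorer: score = sum of weight * feature value.
WEIGHTS = {"age": 0.02, "blood_pressure": 0.03, "smoker": 0.5}

def score(patient):
    """Overall risk score (opaque on its own)."""
    return sum(WEIGHTS[f] * v for f, v in patient.items())

def explain(patient):
    """Break the score into per-feature contributions, largest first."""
    contrib = {f: WEIGHTS[f] * v for f, v in patient.items()}
    return sorted(contrib.items(), key=lambda kv: -kv[1])

patient = {"age": 60, "blood_pressure": 140, "smoker": 1}
print(score(patient))
print(explain(patient))  # blood_pressure contributes most, then age
```

Linear models are explainable almost for free; for complex models, techniques such as LIME or SHAP approximate this kind of local breakdown, trading some fidelity for interpretability.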

While AI has come a long way, there’s still considerable work to be done. Thankfully, by understanding the problems, we can see a clearer path to our next logical steps.