July 31, 2019 - 6 min read

Artificial Intelligence: updating old myths and legends


There is always someone smarter than you, but until now that someone has never been an artificial intelligence (AI). By challenging our feeling of superiority, AI has started to scare us with its cold efficiency on specific tasks. More than fifty years of love/hate history have indeed carried a lot of misconceptions about it. In 1990, Mark S. Fox, now a distinguished Professor of Urban Systems Engineering at the University of Toronto, wrote an article about AI facts, myths and legends. I intend here to revisit some of his points and share my perspective on the current situation.

AI has a tumultuous history, with periods of intense interest and so-called winters. Between 1987 and 1993, we were in the middle of the second AI winter, with massive budget cuts. This winter followed a long period of investment in expert systems, which were then a significant part of what artificial intelligence was about. As an approximation, expert systems are systems built on logic and rules with specific programming languages. They make it possible to mimic complex decision patterns: engineers and researchers define rules (the knowledge base) and computers use an inference engine to apply them.
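
To make that concrete, here is a minimal, purely illustrative sketch of the idea: a tiny hand-written knowledge base and a naive inference engine that applies the rules until nothing new can be derived (the rules and names are hypothetical).

```python
# Minimal, illustrative sketch of an expert system: a hand-written
# knowledge base of rules plus a naive forward-chaining inference engine.
# The rules and fact names are hypothetical, for illustration only.

RULES = [
    # (conditions that must all hold, fact to add when they do)
    ({"fever", "cough"}, "suspect_flu"),
    ({"suspect_flu", "high_risk_patient"}, "recommend_doctor_visit"),
]

def infer(initial_facts):
    """Apply the rules repeatedly until no new fact can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "high_risk_patient"}))
# the engine derives 'suspect_flu' and then 'recommend_doctor_visit'
```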

After 2011 especially, increases in computational power, the availability of large amounts of data (Big Data) and advances in AI algorithms led to a shift in the type of models and tools being used. Expert systems have been progressively abandoned in favor of technologies such as machine learning (which encompasses deep learning). These technologies rely on data to learn the expected behavior, in opposition to the explicit specification of rules.
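
By contrast, a supervised learning model is handed labeled examples rather than rules. A toy sketch using scikit-learn, with made-up data, just to show the inversion:

```python
# Illustrative contrast: instead of writing rules, we let a model
# learn the decision pattern from labeled examples (toy data).
from sklearn.tree import DecisionTreeClassifier

# Each row: [fever (0/1), cough (0/1)]; label: 1 = suspect flu, 0 = not
X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = [1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[1, 1]]))  # the "rule" was never written explicitly
```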

Incomplete and even dead-wrong ideas are spread about AI. Professor Fox starts from three terms to clarify the situation:

Myth: Perceptions not based on any fact.

Legend: Perceptions, once based on fact, that have been blown out of proportion.

Facts: Perceptions that have a real basis in fact.

Source: Mark S. Fox, IEEE Expert

Reading his article first is not mandatory, but it can shed additional light on how AI myths and legends have evolved over almost 30 years.

Myth - if I have an expert, then I can create an expert system

Building expert systems has constraints, such as the general complexity of the problem and the level of specification you can reach. Nowadays, a significant part of AI systems are based on supervised learning, which requires data as a source of knowledge. Having only data, or only a data scientist, won’t make a working solution. You also need infrastructure, specifications, field-specific knowledge, … The design of digital products based on AI hence requires product managers, scientists, engineers, designers, salespeople, … in order to build a solution that brings value to actual users. Creating true value with digital tools is an increasingly multidisciplinary activity.

Myth - expert systems do not make mistakes

It was not true then, and it is still a myth now. Expert systems, and AI in general, do make mistakes. Whenever you train a new algorithm for a given problem, one of the first things you evaluate is a set of technical measurements such as accuracy, precision, recall, root mean squared error and/or others. The sole goal of these metrics is to assess how wrong the model is, from different angles.
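
With scikit-learn, for instance, these metrics are one-liners; the labels and predictions below are made-up values, only there to show the calls:

```python
# Quantifying how wrong a classifier is, using standard scikit-learn metrics
# on hypothetical predictions (toy values, for illustration only).
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model predictions

print("accuracy :", accuracy_score(y_true, y_pred))   # share of correct predictions
print("precision:", precision_score(y_true, y_pred))  # of predicted positives, how many are real
print("recall   :", recall_score(y_true, y_pred))     # of real positives, how many were found
```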

Next, you consider your level of tolerance for errors. If it is a critical calculation to properly land a robot on Mars, you are very likely to be extremely restrictive about the error you allow in the system. If it is a matter of classifying pictures of cats and dogs, you might be more lenient.

On specific tasks, some of the best models today achieve better accuracy than humans. This might be what led to such shortcuts.

Myth - AI replaces conventional approaches

Still a vivid myth. AI is a tool, and a tool has limits and advantages. Using complex technology for its own sake is unlikely to solve many problems. If a problem can be solved by a conventional approach, then there is no need to swing a disproportionate hammer at a small anvil. AI, like any technological system, has a scope where it can shine; unfortunately, some seem to pretend to build AI-based solutions mainly to drive investments and raise funding.

Legend - Rapid prototyping leads more quickly to final solutions

Rapid prototyping, and also proofs of concept (POCs), are widely used in recent data science projects. Prototypes and POCs are useful to assess the general feasibility of an approach. If the approach is bound not to work, you indeed get the answer faster. In the opposite case, you still need full specifications, which usually raise new issues and require a drastic increase in robustness before the solution is production ready. In my experience, building something with the robustness required in production environments is time consuming and implies major rewriting of the initial POC. In the last years, we have shifted from the POC industry to building final solutions faster, as trust in AI has increased.

Myth - Small prototypes can be scaled up into full scale solutions

For different reasons, this is still not true. A current scheme is to start prototyping Python code in Jupyter notebooks, which are interactive development environments (you can think of them as a writer’s draft journal). If the results are of interest, the next step is to encapsulate the code chunks into proper, independent script files, eventually building packages. In small prototypes, you often work with smaller datasets, and the scalability of the code is often poor when it faces unanticipated elements.
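
As a hedged illustration of that encapsulation step, the exploratory chunk below becomes a small importable function with explicit inputs and outputs (the column and file names are hypothetical):

```python
# Typical notebook-style chunk: global state, hard-coded path, no reuse.
# df = pd.read_csv("data.csv")
# df = df.dropna()
# df["ratio"] = df["clicks"] / df["views"]

# Refactored into a function in its own module (e.g. features.py),
# so it can be imported, tested and reused outside the notebook.
import pandas as pd

def add_click_ratio(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with missing rows dropped and a clicks/views ratio column."""
    cleaned = df.dropna().copy()
    cleaned["ratio"] = cleaned["clicks"] / cleaned["views"]
    return cleaned
```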

A major difference is that, nowadays, many digital solutions are hosted in the cloud. This means that you also have to integrate your code base into a continuous integration and deployment (CI/CD) environment, which shortens the time to production but brings additional technical constraints. Prototypes can thus be scaled up, but it is often a lot of work, and I would therefore move this item into the Legend category, as it has some grounding in truth.
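
One consequence of those constraints is that the code must be checkable automatically. A minimal sketch of the kind of test a CI pipeline could run with pytest (the dataset, model and 0.8 threshold are assumptions for illustration):

```python
# test_model.py -- a minimal check a CI pipeline could run with pytest.
# The dataset, model choice and threshold are assumptions, for illustration only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_model_accuracy_stays_above_threshold():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    assert accuracy_score(y_test, model.predict(X_test)) > 0.8
```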

Legend - AI systems can be easily verified and validated

The notion of verification and validation can be understood at different levels. In a naive vision, verifying that a model has the expected performance (accuracy, precision, …) is somewhat straightforward when it comes to machine learning models. On the other hand, since you do not specify the rules but only the expected general behavior, by training the algorithm with data, it can be close to impossible to verify all possible results. This is quite different from expert systems, where you can verify the implementation of the rules one by one, as you actually specified them.

Many machine learning based algorithms are considered black boxes. The increased weight of machine learning solutions in our daily life has also raised deeper concerns, not only about their verification and validation but also about their transparency. How do I know that a loan risk assessment algorithm is not biased against certain minorities? Does it respect legislation? Is it ethical? These are the kinds of questions that carry increasing weight, and the professionals developing these solutions are bound to answer them at some point. Legislation and technical progress have opened a door, stating that we can and should do it, but it is not easy.
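
A naive but useful first step toward the bias question is simply to compare a metric across groups. A sketch with hypothetical labels and group assignments:

```python
# Naive sketch: compare an accuracy metric across groups to spot a possible bias.
# The labels, predictions and group assignments are hypothetical, for illustration only.
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

for g in sorted(set(group)):
    idx = [i for i, gi in enumerate(group) if gi == g]
    acc = accuracy_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(f"group {g}: accuracy = {acc:.2f}")
# A large gap between groups would call for a closer look.
```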

Legend - AI systems are easy to maintain

In addition to the elements shared by Prof. Fox, I would consider what maintaining implies in a cloud era. Maintaining means keeping a digital product running on a server that clients can access. The easy part is that, once it runs, it should in theory keep running. Unfortunately, bugs are discovered on a daily basis and hacking attempts are common, so you also have to handle technical updates and fend off attacks.

AI algorithms are trained with data, and the relevance of the information present in a dataset can change over time as monitored habits shift. This has serious implications for machine learning algorithms. What happens when the model is supposed to make a prediction from live data with previously unseen patterns? Should it not respond? Should it try to “guess”, no matter how coherent the result? Building a production ready algorithm is almost the easy part. The confrontation with live use is the real trial by fire, and it requires continuous monitoring.
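
One defensive option, sketched below with hypothetical features and ranges, is to flag incoming samples whose values fall outside what was seen during training instead of silently guessing:

```python
# Minimal sketch of a guard against previously unseen patterns: flag inputs
# that fall outside the value ranges observed in the training data.
# Feature names and ranges are hypothetical, for illustration only.
TRAINING_RANGES = {"age": (18, 75), "monthly_income": (0, 20000)}

def check_input(sample: dict) -> list:
    """Return a list of warnings for features outside the training range."""
    warnings = []
    for feature, (low, high) in TRAINING_RANGES.items():
        value = sample.get(feature)
        if value is None or not (low <= value <= high):
            warnings.append(f"{feature}={value} is outside the training range [{low}, {high}]")
    return warnings

print(check_input({"age": 92, "monthly_income": 3500}))
# flags the age value as outside the range seen during training
```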


Beyond the current AI hype, looking at the past can help us understand its concepts from higher ground. Some issues are really old; some might not be relevant anymore. Trying to define AI is difficult: close to 30 years ago it was a lot about expert systems, today it is deep learning, and whatever tomorrow holds is bound to be fundamentally different. Are you regularly facing AI myths? Which ones? How do you explain them? Tell me about them!
