
AI is not a panacea: The pitfalls of AI

By Kamal Das

“Let’s use AI in this project” has become a common refrain from managers. AI, or Artificial Intelligence, is an umbrella term for human-like intelligence exhibited by machines. The voice assistants in our phones – Amazon’s Alexa, Apple’s Siri, Google Assistant and Microsoft’s Cortana – are all examples of AI. AI is often regarded as a panacea, a tool that will solve all problems. From conventional businesses to tech startups, invoking AI has become essential to signal success and appear state-of-the-art.

Artificial Intelligence has many benefits. AI is often as good as, and sometimes better than, humans at making decisions. However, as with the proverbial man with a hammer, everything can start to look like a nail. Not all problems need AI, nor is AI currently competent to solve many of them.

AI in sports is familiar to all of us in this cricket-loving nation. Ball-tracking technology is used by the third umpire when a team opts for the Decision Review System (DRS). Ball tracking is now an integral part of major sporting fixtures, including football and tennis, where it is used to locate the ball and to direct the camera to the right area.

However, this is not without its pitfalls. In 2020, a Scottish football team upgraded their stadium with a state-of-the-art, AI-powered ball-tracking camera to enable live-streamed games in a COVID-impacted world. Unfortunately, the camera mistakenly tracked the referee’s bald head instead of the football for the game’s duration!

Cricket wisely does not rely blindly on technology alone; technology aids the third umpire. Augmented Intelligence, where Artificial Intelligence is combined with a human expert, is the recommended way forward. Artificial Intelligence models are often black boxes and raise real concerns if used alone, without human moderation.

AI is often a correlation engine. It forecasts the future by predicting which past data future events will resemble. AI is also often poor at handling situations it has not encountered before – inputs that are “out of sample” relative to the data it has been trained on.
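To see what “out of sample” means in practice, consider a minimal, illustrative Python sketch. The data and model here are made up for illustration and are not from any system mentioned in this article: a model fitted on one range of inputs can look accurate there, yet fail badly outside that range.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: the true relationship is quadratic (y = x^2),
# but the model only ever sees inputs between 0 and 1.
rng = np.random.default_rng(42)
x_train = rng.uniform(0, 1, size=(200, 1))
y_train = x_train.ravel() ** 2

model = LinearRegression().fit(x_train, y_train)

# In sample: the input resembles the training data.
print(model.predict([[0.5]]))   # small error: true value is 0.25

# Out of sample: an input far outside the training range.
print(model.predict([[10.0]]))  # huge error: true value is 100.0
```

The model has only learnt a correlation that holds on the data it has seen; it has no notion of the underlying rule, so extrapolation fails.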

In aviation, AI-powered autopilots have taken over tedious and repetitive tasks. However, when the aircraft encounters an unforeseen malfunction or a situation the autopilot is not trained for, such as a storm, the human pilot is ready to take over manually. In 2013, an airliner crashed in San Francisco, USA, killing three passengers and seriously injuring 49 others. Investigators blamed the flight crew’s overreliance on automated systems, including the autopilot, as a cause of the crash. A wrong autopilot setting and overreliance on technology and automation had fatal consequences.

Similarly, in a discussion, the CTO of a tech MNC highlighted how their AI could predict cancer, sometimes better than a radiologist could. The AI correctly flagged 48% of the cases that the radiologist had cleared but that were diagnosed as breast cancer within the year. Even so, the best AI missed over 50% of these difficult cases. In trials with a large Indian hospital chain, the AI program showed “poor performance on metastatic breast cancer”, noted an IEEE report. The CTO told me that one should use the AI and a radiologist together for best results: with the radiologist building on the AI’s initial findings, the accuracy was significantly superior to either the AI or the radiologist working independently.

AI also picks up biases from the data it is trained on. AI systems were built to bypass some human biases, but sometimes those very biases end up baked into the technology! Unmoderated responses can become a PR nightmare for companies.

A few months ago, a reputed AI research institute developed an AI chatbot to answer questions on ethics. Because of the biased data it was trained on, the bot stated that being a white man or straight was more “morally acceptable” than being a Black woman or a homosexual, respectively. The bot suggested “It’s okay” if a white man walks towards you at night, but “It’s concerning” if a Black man walks towards you at night! This follows a 2016 incident in which a major tech giant’s AI chatbot began spewing offensive and racist tweets, including support for Nazi views on Jews. Five years on, we have not learnt from AI’s mishaps! Augmented Intelligence, with a layer of human oversight on AI, can help avoid such blunders.

This need for oversight is increasingly recognised, and audits are being encouraged and even mandated by governments. In hiring, given the low numbers of Black candidates and women in many tech roles, AI job-screening tools often shortlist no Black candidates or women at all, exacerbating existing inequities! In November 2021, the New York City government passed a bill that prevents employers from using artificial intelligence to screen job candidates unless the technology has passed an audit to check for bias.
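At its simplest, such an audit compares selection rates across groups. The sketch below computes an “adverse impact ratio” in the spirit of the US four-fifths rule of thumb; the numbers are made up for illustration, and the actual audit requirements under the New York law are more detailed than this.

```python
# Minimal sketch of one common bias-audit check for a screening tool:
# the adverse impact ratio (a group's selection rate divided by the
# most-selected group's rate). A ratio below 0.8 flags potential bias
# under the four-fifths rule of thumb. All numbers are hypothetical.

shortlisted = {"group_a": 120, "group_b": 30}   # candidates shortlisted by the AI
applicants  = {"group_a": 400, "group_b": 200}  # candidates screened

rates = {g: shortlisted[g] / applicants[g] for g in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "POTENTIAL BIAS" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here group_b is shortlisted at half the rate of group_a, so the tool would be flagged for review before being used on real candidates.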

In India, we are encouraged by Niti Aayog’s focus on Responsible AI. A holistic approach is needed to ensure we leapfrog ahead while avoiding the challenges and pitfalls some developed economies face. A focus on responsible AI, which takes cognizance of possible biases and keeps a layer of human oversight, will ensure that implementations of AI are fairer and more humane!

Source: ETCIO.com

