Drawbacks of Artificial Intelligence
AI, or Artificial Intelligence, is the science of teaching computers to behave
like people. Its goal is to mimic how our brains work: tackling tough tasks,
learning new things, and making choices. Numerous fields, including healthcare,
finance, transportation, customer service, and cybersecurity, use AI
applications.
These applications allow machines to analyze data, complete common
tasks, and make predictions much as humans do. AI is also capable of
enhancing decision-making, automating repetitive duties, offering personalized
experiences, streamlining processes, and aiding scientific progress.
As AI technologies continue to evolve, it's vital to acknowledge that they still
face challenges and limitations despite their significant advancements. Privacy
concerns, ethical considerations, biases, and the need for human oversight are
among the factors that must be addressed thoughtfully.
AI presents both advantages and challenges. Identified below are some of the
limitations linked to AI:
- Biases can be passed on to AI systems when the datasets used to train them are inherently biased or reflect societal prejudices. Perpetuating those biases can produce discriminatory or unfair outcomes, especially in fields like criminal justice, loan approvals, and hiring, where biased decisions carry serious social consequences. The large datasets used to train AI systems are known to contain such biases, particularly against Black people and other minorities (a minimal sketch after this list shows how a model trained on biased history reproduces the gap).
- AI advancements bring about concerns, particularly around employment. The automation of tasks once done by humans may mean job displacement and unemployment in some industries. While these advancements will create new jobs, those jobs may not fully offset the potential losses.
- Another challenge lies in the ethical questions AI raises. For instance, autonomous vehicles confront moral dilemmas when an accident is imminent: the AI system must decide between minimizing harm to the vehicle's passengers or to pedestrians. Defining ethical guidelines and guaranteeing that AI systems make morally defensible choices is no easy feat.
- AI systems have made considerable strides, but they still grapple with context and common-sense reasoning. They may have their moments of brilliance, yet humans possess a breadth of understanding that AI systems lack. Because of this, AI systems' shortcomings are often exposed when they face new scenarios or vague inputs, producing incorrect outputs or errors.
- AI systems depend heavily on data, much of it sensitive and personal. Storing and handling this data can lead to privacy and security problems. If AI systems lack adequate protection, malicious attacks become more likely, potentially allowing unauthorized access to this delicate information or even causing data breaches.
- In policing, there are concerns about privacy violations arising from AI-powered surveillance. Data collection methods, facial recognition, and predictive policing algorithms all have the potential to infringe on people's civil liberties and right to privacy.
- AI systems can hinder fairness and equity in policing when the models are trained solely on historical data that is incomplete, biased, or reflective of existing inequalities.
- Overreliance on AI systems without human oversight can result in errors or a shirking of accountability, so AI should augment rather than replace human judgment in decision-making. The critical roles that human discretion, ethics, and judgment play in policing need to be supported by AI solutions, not supplanted by them.
- As AI technology continues to progress, there is a growing risk of overdependence on these systems. Overreliance may cause a decline in human skills such as critical thinking, creativity, and independent decision-making.
- Human feelings and body language, along with social and cultural nuances, are still tricky for AI systems to read. As a result, they often make poor predictions or choices because they struggle to grasp the finer points of human behavior.
- AI algorithms, including deep learning neural networks, are often considered black boxes: they can produce accurate decisions and predictions, yet the reasoning behind those conclusions can be hard to comprehend. Whenever accountability, ethics, or legal compliance is at stake, this lack of transparency becomes a serious burden.
- Because AI algorithms can be complex and opaque, the decisions they reach are tough to understand, which raises accountability concerns. If a biased or erroneous decision stems from an AI system, pinpointing its source becomes problematic, making rectification challenging. These complications flow from the same transparency issues.
- The software systems used here learn from data, which unfortunately can contain bias, and this can further deepen inequality. Statistics show that crime in Black communities is reported more often than in white ones, regardless of whether the reporter is Black or white, and this imbalance paints Black areas as "high risk" more often. Part of the problem is the reliance on old data: past behavior can inform future actions, but it doesn't account for the chance of change and growth. This encourages negative stereotypes and unfairly punishes those who have already faced consequences (a toy simulation after this list shows how this feedback loop takes hold).
- All over the world, police use software to predict crime, and many US tech companies sell these programs. One notable startup, Voyager Labs, gathers social media data such as posts, emojis, and friend lists, then analyzes and cross-references it with private data. The aim is to build a detailed profile that identifies potential "risks". These automated policing methods are frequently wrong.
- In 2018, London's Metropolitan Police tested facial recognition technology. The software flagged 104 people as previously unknown crime suspects; only 2 of those flags were correct (a short calculation after this list puts that hit rate in perspective). Edward Santow, writing for The Australian Quarterly, offers insight into where things can go wrong: a falsely identified person could be arrested and taken to the police station, an experience that can terrify them and violate their human rights.
- There's more: facial recognition problems affect people of color disproportionately. Take Facebook's system, for instance, which labelled Black people as "primates", an error the company admitted to the BBC was downright wrong.
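To make the bias-perpetuation point above concrete, here is a minimal sketch using entirely synthetic data; every number is invented for illustration and is not drawn from any real lending dataset. Two groups have identical underlying credit scores, but one group's historical approvals were suppressed, and a model trained on that history reproduces the gap:

```python
# A minimal sketch of bias perpetuation, using purely synthetic data.
# All numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
score = rng.normal(600, 50, n)      # identical "true" credit scores

# Historical approvals: same score cutoff for everyone, but group B
# was denied 30% of the time for reasons unrelated to creditworthiness.
approved = (score > 600).astype(int)
biased_denials = (group == 1) & (rng.random(n) < 0.3)
approved[biased_denials] = 0

# Train on the biased history (score is centered/scaled for stability).
X = np.column_stack([(score - 600) / 50, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"predicted approval rate, {name}: {pred[group == g].mean():.1%}")
# The model reproduces the historical gap even though both groups'
# underlying scores are identical.
```

Note that simply dropping the group column does not necessarily help, since other features can act as proxies for group membership.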
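The "high risk" feedback loop described in the list can also be shown with a toy simulation. All parameters here are invented: two areas have identical true crime, but the one with more initial reports keeps attracting more patrols, and patrol presence inflates its recorded incidents:

```python
# A toy simulation (all parameters invented) of a predictive-policing
# feedback loop: the area with more past reports is patrolled more,
# and patrol presence itself drives up recorded incidents.
import numpy as np

true_crime = np.array([10.0, 10.0])   # two areas with identical true crime
reports = np.array([12.0, 8.0])       # area 0 starts out over-reported

for year in range(10):
    # The area with more past reports is labeled "high risk"
    # and receives extra patrol units.
    patrols = np.array([5.0, 5.0])
    patrols[np.argmax(reports)] += 5.0
    # Recorded incidents scale with patrol presence, not true crime.
    reports += true_crime * 0.08 * patrols

print("true crime per area:", true_crime)
print("accumulated reports:", reports)   # the initial gap has snowballed
```

The initial reporting gap, not any difference in actual crime, is what drives the diverging "high risk" labels.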
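Finally, the Metropolitan Police trial figures cited above reduce to a simple precision calculation; the two inputs come from the trial, and the rest is arithmetic:

```python
# Precision of the 2018 Metropolitan Police facial recognition trial,
# using only the two figures cited above.
flagged = 104   # people the system flagged as suspects
correct = 2     # flags that turned out to be right

precision = correct / flagged
print(f"precision:            {precision:.1%}")      # ~1.9%
print(f"false discovery rate: {1 - precision:.1%}")  # ~98.1%
# Roughly 98 out of every 100 flags pointed at an innocent person.
```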
A closer look at the policing systems described above reveals that they were
embraced hurriedly to save on costs and bring uniformity, under a
one-size-fits-all tech model. They weren't properly trained or supervised, and
there were no safeguards for those affected. Worryingly, issues relating to
constitutional liability weren't even considered.
Conclusion
Developing and deploying AI responsibly is imperative in order to overcome its
limitations. This entails implementing strict protocols for data collection,
using diverse and unbiased training data, continuously monitoring AI systems
for fairness and impartiality, and guaranteeing human supervision and
accountability in decision-making processes. A minimal example of what such a
fairness check might look like follows below.
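As one concrete illustration of the fairness-monitoring recommendation, here is a minimal sketch of a demographic-parity check. The metric itself is standard, but the sample decisions and the 0.1 alert threshold are invented for illustration:

```python
# A minimal fairness-monitoring sketch: compare a model's positive-
# decision rates across groups (demographic parity difference).
# The sample data and the 0.1 alert threshold are invented.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-decision rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Example: decisions produced by some deployed model (made up here).
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.1:   # invented alert threshold
    print("ALERT: decision rates diverge across groups; review the model.")
```

In practice such a check would run continuously on a model's live decisions, and a triggered alert would prompt human review rather than any automatic action.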
In addition, implementing regulations and legal frameworks can help curtail
potential risks and foster ethical and responsible use of AI in law enforcement.
Many of the drawbacks associated with AI are the outcome of how AI systems are
developed, deployed, and overseen, rather than being innate to the technology
itself. Consequently, resolving these issues requires attention to ethics,
transparency, accountability, and responsible use of AI.
In light of growing concerns, a few law enforcement agencies are stepping up to
the plate. The Toronto Police Services Board made a notable announcement in
September 2021, revealing plans to develop a policy on the use of AI
technology. Meanwhile, disturbing exposés on the Chicago police department
triggered the suspension of its predictive policing program. It's imperative
that all law enforcement agencies prioritize this matter, as it could determine
whether an innocent or a guilty individual ends up incarcerated.
Reference:
- Hope Reese, "What Happens When Police Use AI to Predict and Prevent Crime?", 23 February 2022.