The rise of artificial intelligence
What’s the first thing you think of when someone mentions artificial intelligence (AI)? Is it your Alexa in the corner of the room? Or the facial recognition which unlocks your phone? Or the creepily targeted ads that appear whilst you are browsing the internet?
The modern world is built upon AI, though we are not all necessarily aware of it. But when did it begin? And how did it become an underlying presence in our society?
History of AI
Alan Turing, a notable mathematician who later worked at the University of Manchester, explored the mathematical possibility of AI. In the 1950s, he asked why machines could not, like humans, use available information and reason to solve problems and make decisions. Unfortunately, Turing was hindered by the limited computers of his time – they could not store (remember) commands, only execute them. Computing was also very expensive: leasing a computer in the early 1950s could cost up to $200,000 a month (compare that to your £30-a-month phone contract).
As computers became cheaper, faster and more accessible, AI began to flourish. One of the fathers of AI, Joseph Weizenbaum, created a language-processing computer program called ELIZA. It was designed to imitate a therapist by asking open-ended questions and responding with follow-ups. Developed in 1966, it is considered to be the first chatbot. The success of this program and others led to more funding for AI.
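To give a flavour of how simple that early approach was, here is a minimal, hypothetical sketch of keyword-based pattern matching in the spirit of ELIZA (these are not Weizenbaum’s actual rules or code, just an illustration of the idea):

```python
import random

# A few hand-written rules in the spirit of ELIZA: if the user's message
# contains a keyword, reply with a canned, open-ended follow-up question.
RULES = {
    "mother": "Tell me more about your family.",
    "work": "How does your work make you feel?",
    "sad": "Why do you think you feel sad?",
}

FALLBACKS = ["Please go on.", "How does that make you feel?", "I see. Can you elaborate?"]

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    # No keyword matched: fall back to a generic prompt.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I had an argument with my mother"))  # -> family follow-up
    print(reply("Nothing much happened today"))       # -> generic prompt
```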
In the 1980s, deep learning techniques – which allow computers to learn from experience – became popularised. This was followed by ‘expert systems’: computer programs that apply knowledge and reasoning to mimic a human expert. Such systems have since been applied to many areas, including interpretation (speech recognition) and prediction (preterm birth risk assessment).
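As a toy illustration of the idea (a hypothetical example, not any real expert system), the expert’s knowledge is written down as explicit if-then rules that the program applies to observed facts:

```python
# Toy "expert system" for car troubleshooting: an expert's knowledge is captured
# as explicit if-then rules, and the program applies them to observed facts.
RULES = [
    ({"engine won't start", "no lights"}, "battery is probably flat"),
    ({"engine won't start", "lights work"}, "starter motor may be faulty"),
    ({"engine overheats"}, "check coolant level"),
]

def advise(facts: set[str]) -> list[str]:
    """Return every conclusion whose conditions are all satisfied by the facts."""
    return [conclusion for conditions, conclusion in RULES if conditions <= facts]

print(advise({"engine won't start", "no lights"}))  # ['battery is probably flat']
```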
By 1997, AI had developed far enough that a chess-playing computer program, IBM’s Deep Blue, defeated the reigning world chess champion, Garry Kasparov. The same year also welcomed the development of speech recognition software.
AI in the modern day
Artificial intelligence is abundant in modern-day life. It is used by numerous industries, which I’m sure you’ll be aware of if you have ever called a company and been greeted by a machine, or browsed the internet and had a chatbot pop up offering you assistance. The reliability of these examples may be questionable (I certainly recall cursing down my phone, desperate to speak to a human).
There are better examples of AI in the modern world. Many women will be aware of the period-tracking app Flo. The app’s AI programs use data logged manually by its 43 million active users to predict menstrual cycles and fertility windows, while still treating every woman as the unique individual that she is.
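As a purely illustrative sketch (not Flo’s actual method, which is far more sophisticated and proprietary), a prediction from manually logged data could be as simple as averaging recent cycle lengths:

```python
from datetime import date, timedelta

def predict_next_period(start_dates: list[date]) -> date:
    """Toy prediction: assume the next cycle matches the average of logged cycle lengths."""
    # Cycle lengths are the gaps between consecutive logged start dates.
    lengths = [(b - a).days for a, b in zip(start_dates, start_dates[1:])]
    average_length = sum(lengths) / len(lengths)
    return start_dates[-1] + timedelta(days=round(average_length))

# Example: three logged cycle start dates from a single (hypothetical) user.
logged = [date(2023, 1, 3), date(2023, 1, 31), date(2023, 3, 2)]
print(predict_next_period(logged))  # -> 2023-03-31 (29-day average cycle)
```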
Social media also abounds with AI software. It is used to personalise your feed based on past engagement, suggest friends, target you with advertising and, more recently, moderate content. TikTok is almost entirely governed by AI – it uses smart programs that quickly learn your interests as you use the app in order to deliver optimal content.
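A deliberately over-simplified, hypothetical sketch of this kind of personalisation (real recommender systems are far more complex than this) is ranking candidate videos by how often the user has engaged with their topics before:

```python
from collections import Counter

# Hypothetical engagement history: topics of videos the user watched to the end.
watched_topics = ["cooking", "cats", "cooking", "football", "cooking"]

# Candidate videos waiting to be ranked, each tagged with a topic.
candidates = [
    {"title": "Ten-minute pasta", "topic": "cooking"},
    {"title": "Transfer news roundup", "topic": "football"},
    {"title": "Knitting basics", "topic": "knitting"},
]

# Count how often each topic appears in the user's history, then rank
# candidates so the most-engaged-with topics come first.
interest = Counter(watched_topics)
ranked = sorted(candidates, key=lambda video: interest[video["topic"]], reverse=True)

for video in ranked:
    print(video["title"])  # cooking first, then football, then knitting
```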
The medical world is also experiencing an influx of AI. Medical diagnostics, drug discovery and clinical trials are among the many branches of medical research trying to harness AI to transform healthcare.
How far can AI be pushed?
AI learns and improves through the information we provide it; incomplete or unreliable data can reduce the effectiveness of a program. For example, when Amazon started using an AI program to review job applications, the program was trained largely on applications from men. Consequently, the system began to filter out female applicants.
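Here is a deliberately tiny, hypothetical sketch of the principle (this is not Amazon’s system, and the data is invented): when one group barely appears in the training data, whatever the model ‘learns’ about that group is skewed by a handful of examples.

```python
# Hypothetical training data: (text from a past application, was the applicant hired?)
# The data is skewed: almost every example uses wording more common in men's CVs.
training_data = [
    ("captain of men's chess club", True),
    ("men's football team", True),
    ("men's debating society", False),
    ("captain of men's rowing team", True),
    ("women's chess club", False),  # the only example mentioning "women's"
]

def learned_score(keyword: str) -> float:
    """Fraction of past applications containing the keyword (as a whole word) that were hired."""
    matches = [hired for text, hired in training_data if keyword in text.split()]
    return sum(matches) / len(matches) if matches else 0.0

# The model associates "women's" with rejection purely because the training
# data contained almost no examples of it, and the one it saw was not hired.
print(learned_score("men's"))    # 0.75 (3 hires out of 4 applications)
print(learned_score("women's"))  # 0.0  (one example, not hired)
```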
Biases can also emerge as AI systems act autonomously. Due to the ‘black box’ nature of many AI systems, they cannot explain how they arrived at their conclusions. This often means we cannot fully trust them – particularly as AI advances into sensitive matters such as healthcare or even governance.