Artificial intelligence (AI) resides in two parallel realms. One is the fictional universe in which it becomes omnipresent and transcendent, threatening the very existence of humanity. The other is our daily lives, where we encounter it without thinking twice – whether as the virtual assistant on our phones or the autopilot in our planes.


We need a better understanding of what AI is, where it stands today and, more importantly, what makes the technology tick. These are the five key ways modern AI is evolving. 

Looking at the Bigger Picture

Today, many of us take photo organisation for granted. We can sort our photos by whether they contain our own faces or the faces of our friends, or even by the objects they show. This is all made possible by AI. The technology has applications beyond the realm of social media and is being put to work in medical science, where computers can now scan through images generated by MRI machines, CT scans or X-rays and flag problems faster than a human radiologist can.

Stanford University has established the Center for Artificial Intelligence in Medicine and Imaging (AIMI), which has a number of projects under development. A paper released by AIMI, a retrospective comparison of its CheXNeXt algorithm with practising radiologists, found that machines are getting close.

“Deep learning has an important role to play in AI when it comes to analysing imagery”

“We compared CheXNeXt’s discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC),” the report states. 

“The radiologists included 6 board-certified radiologists (average experience 12 years, range 4-28 years) and 3 senior radiology residents, from 3 academic institutions. We found that CheXNeXt achieved radiologist-level performance on 11 pathologies and did not achieve radiologist-level performance on 3 pathologies.” 

However, the algorithm was significantly faster at evaluating the 420 images in the validation set, taking just 1.5 minutes compared with the 250 minutes the human radiologists needed.

Deep learning has an important role to play in AI when it comes to analysing imagery. Convolutional neural networks (CNNs), which are inspired by biological processes, are the most common way of analysing images. Their applications range from image and video recognition to medical image analysis and even natural language processing.
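
To make the idea concrete, here is a minimal sketch of a CNN classifier in PyTorch. The framework choice, layer sizes and two-class output are illustrative assumptions, not details taken from CheXNeXt:

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal convolutional classifier: convolutions learn local visual
    features, pooling shrinks the image, and a linear layer classifies."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel input, e.g. a grayscale X-ray
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
dummy_batch = torch.randn(8, 1, 64, 64)  # 8 fake 64x64 grayscale images
logits = model(dummy_batch)              # shape: (8, 2)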

Binary District Journal discussed the importance of CNNs with Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA).

“They were very important to demonstrate that deep learning could make a breakthrough in computer vision, and that has triggered the accelerating interest from industry, starting around 2013,” he explained. 

He also went on to say that the field has greatly expanded since the initial success in recognising objects in images, now encompassing many other “computer vision” tasks with generative adversarial networks (GANs), from animating faces to guessing what an object would look like from an angle other than the observed one.
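
The core idea behind a GAN is two networks trained against each other: a generator that fabricates samples and a discriminator that tries to tell them apart from real data. A heavily simplified toy training loop might look like the following – the architectures, learning rates and one-dimensional data are illustrative assumptions:

import torch
import torch.nn as nn

# Toy 1-D GAN: the generator maps noise to fake samples, the
# discriminator scores samples as real (1) or fake (0).
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)
    noise = torch.randn(batch, 16)

    # 1) Train the discriminator to separate real from generated samples.
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# "Real" data: samples drawn from a normal distribution centred on 3.
for _ in range(1000):
    train_step(torch.randn(64, 1) + 3.0)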

Learning to Learn

Meta-learning – literally “learning to learn” – applies learning algorithms to the metadata of other learning experiments so that machines can improve their own learning process. This is a largely experimental field. However, it is also an important one because, currently, AI is not very good at doing multiple things at once. While we see specific AI applications like self-driving cars or chess-playing programs, an all-round AI is still a long way away.
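
As a flavour of the mechanics, here is a heavily simplified sketch in the spirit of MAML (model-agnostic meta-learning): an initialisation is trained so that a single gradient step adapts it to a new task. The toy tasks, step sizes and single-step adaptation are all illustrative assumptions:

import torch

# Learn an initialisation "theta" that adapts to a new task in one step.
theta = torch.randn(2, requires_grad=True)   # shared initialisation
meta_opt = torch.optim.SGD([theta], lr=0.01)

def task_loss(params, a, b):
    # Each "task" is fitting y = a*x + b, evaluated at a probe point x = 1.0.
    x = 1.0
    y = a * x + b
    pred = params[0] * x + params[1]
    return (pred - y) ** 2

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):                        # a small batch of random tasks
        a, b = torch.randn(1).item(), torch.randn(1).item()
        # Inner loop: one gradient step adapts theta to this task...
        grad = torch.autograd.grad(task_loss(theta, a, b), theta, create_graph=True)[0]
        adapted = theta - 0.1 * grad
        # ...outer loop: evaluate the adapted parameters and accumulate
        # the meta-gradient back into theta.
        task_loss(adapted, a, b).backward()
    meta_opt.step()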

Work in the 1980s and 1990s by Yoshua and Samy Bengio and Jürgen Schmidhuber, among others, set the stage for these self-learning algorithms.

“In supercomputers participating in TV game shows like Jeopardy! or mobile phone assistants like Siri, Cortana or Google Assistant, self-learning is becoming omnipresent”

Now, meta-learning is in the news again for a variety of reasons. Platforms have been released that can create self-learning AI systems requiring less training and human involvement than systems of even the recent past.

However, examples of such systems are still relatively few and far between, according to Yoshua. Self-learning matters across many applications, such as driving, gaming, process automation and management systems. Whether it is in supercomputers participating in TV game shows like Jeopardy! or mobile phone assistants like Siri, Cortana or Google Assistant, self-learning is becoming omnipresent in modern life.

Hey Google!

Speech recognition is one of the most prominent use cases of artificial intelligence, and it has permeated our lives. We find it in speech-to-text software, translation apps, voice-based mobile assistants and voice-activated IVRs.

Discussing speech recognition, Yoshua comments, “There has been much progress in this field like in others, in great part thanks to much bigger datasets and models, with impressive results approaching a human level of accuracy.” 

Google’s AI-powered Speech-to-Text and Text-to-Speech are already available, but even now challenges remain, as Google admits in a blog: “When creating intelligent voice applications, speech recognition accuracy is critical. Even at 90% accuracy, it’s hard to have a useful conversation. Unfortunately, many companies build speech applications that need to run on phone lines and that produce noisy results, and that data has historically been hard for AI-based speech technologies to interpret.” 
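
To give a sense of what using such a service looks like, here is a minimal sketch with Google’s Cloud Speech-to-Text Python client. The file name, encoding and sample rate are assumptions, and the client library and credentials must be set up separately:

from google.cloud import speech

# Transcribe a short local audio file with Google Cloud Speech-to-Text.
client = speech.SpeechClient()

with open("appointment.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,   # must match the recording
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    best = result.alternatives[0]
    # The confidence score matters as much as the transcript itself.
    print(f"{best.confidence:.2f}  {best.transcript}")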

This is a problem Yoshua also touched upon, noting that challenges remain with unusual voices or when the recording is noisy (e.g. in a car). However, things are looking up: we now have Google Duplex, based on WaveNet, which comes close to giving users a human-like experience when making phone calls to carry out routine tasks like booking an appointment.

Learn as You Do

Next, we have transfer learning, which allows developers to take a model trained on one particular problem and retrain it on another. This allows for crowdsourcing and significantly reduces the costs involved.

Basically, transfer learning has the potential to dramatically increase the pace of innovation because less time is spent reinventing the wheel. Google and Microsoft have both used transfer learning with their Inception and ResNet models respectively, both of which operate in the image and video sphere.
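
In practice, transfer learning can be as simple as freezing a pre-trained network and swapping its final layer. Here is a minimal sketch in PyTorch using a torchvision ResNet – the three-class target task is an illustrative assumption, and older torchvision versions take a pretrained=True argument instead of weights:

import torch.nn as nn
from torchvision import models

# Start from a ResNet pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for the new task (here, 3 classes).
model.fc = nn.Linear(model.fc.in_features, 3)

# During fine-tuning, only the new layer's weights are updated,
# reusing everything the network learned on its original problem.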

Natural language processing is another field where transfer learning has potential. For example, a model trained on a Twitter feed to analyse sentiment can then be adapted to benefit other applications, such as classifying restaurant reviews.

“Current theory for machine learning focuses on how the learning system generalises to examples from the same distribution as the training data,” Yoshua points out. “But in practice, the systems in the field face different data distributions, not like in the lab. And the tasks being performed evolve, while new domains of application are considered.

“We need a better understanding of how to generalise in this more powerful way, across related distributions. This question is intimately linked to the issue of learning the causal structure explaining the data, and I recently posted a paper on this subject on arXiv (A Meta-transfer Objective for Learning to Disentangle Causal Mechanisms). Learning to adapt quickly to a new distribution (the transfer scenario) is actually a way of discovering the underlying causal structure explaining the data.”

The Science and Politics of AI

There is nothing to fear but fear itself. However, as AI technology becomes more mainstream, certain issues will creep into daily conversations. Self-driving cars, for example, are already controversial. There have already been fatalities involving self-driving cars, though there is debate over whether the machines themselves were to blame.

“What would prevent us from doing that is not science but politics”

In December 2018, there were reports that residents in Arizona were attacking self-driving cars because they did not want them in their neighbourhoods. So AI has made a splash not only in the realm of science but also in that of politics.

Will self-driving cars ever actually fulfil their potential? Will they be able to surmount the issues surrounding them? According to Yoshua, things look good for AI in general and the cars in particular. 

“The science of AI will progress. It is hard to know at what rate, though,” he says. “Ultimately, there are no reasons to believe that one day we wouldn’t be able to reach at least the level of competence of humans on most tasks (like driving). What would prevent us from doing that is not science but politics.”

Don’t Release the Reins 

Artificial intelligence is a powerful tool, capable of carrying out tasks with ever-increasing accuracy and efficiency. The likes of Google have created machines that can book appointments unassisted, handling conversational tangents and curveballs in a way that is genuinely impressive to see in action. When we talk about AI systems, it can be tempting to create completely autonomous software right off the bat, but some of these systems deal with high stakes – health care, facial profiling and the like. There are existing inequalities and biases that machines are not yet equipped to identify and combat; instead, they often entrench them further.

The solution? AI should not be given full control of the reins until vast improvements have been made. An algorithm’s confidence in its own result should be just as important as the result itself, and humans should be on hand to double-check an AI’s work. If an AI registers low confidence in a result, human moderators can be brought in to have the final say on the more challenging cases. AI can still perform the bulk of the decision-making, and improvements in efficiency will still follow. AI might not take our jobs for some time; rather, it may turn us into moderators who ensure it does its job properly.
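
In code, this human-in-the-loop pattern can be as simple as a confidence threshold. The cut-off value and labels below are illustrative assumptions; in a real system the threshold would be tuned per application:

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90   # illustrative cut-off, tuned per application

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def triage(label: str, confidence: float) -> Decision:
    """Route low-confidence AI outputs to a human moderator."""
    return Decision(label, confidence,
                    needs_human_review=confidence < CONFIDENCE_THRESHOLD)

# The model handles the bulk of decisions automatically...
print(triage("no_pathology", 0.97))  # auto-accepted
# ...while uncertain cases are escalated for a final human say.
print(triage("pathology", 0.62))     # flagged for review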

This post was written by Shivdeep Dhaliwal for Binary District.

Illustration by Rik Oostenbroek