Rebooting AI: Building Artificial Intelligence We Can Trust
The book “Rebooting AI” is an attempt to explain why progress in AI has not kept pace with its predicted timeline. It does not give in to exaggerated claims about what AI can achieve; instead, it aims to identify the underlying factors that must be worked on to put AI on the path to genuine success. The book is mindful of the overoptimistic forecasts which claimed that AI would become the dominant technology within the next twenty years but which did not materialize as predicted. A summary of the book’s contents should help the reader grasp what is missing in contemporary efforts.
Before diagnosing why AI’s progress has stalled, the book first reviews the ambitious projects launched over the course of its history and how well they performed. To simplify things, I will list the AI projects discussed.
Watson by IBM in 2016
IBM launched Watson with the aim of revolutionizing healthcare by reading through the medical literature and producing correct recommendations in the fields of pharmacology and radiology. Although Watson seemed promising, it made incorrect predictions when deployed in Germany to diagnose rare diseases.
Chatbot “M” by Facebook in 2015
M was designed to be the perfect virtual assistant, catering to a user’s every need. It was eventually cancelled after its disappointing performance, having stayed operational for only about three years.
As AI kept improving, concern rose over its ethical, technological, and legal challenges. Prominent businessmen like Musk have considered AI to be more dangerous than bombs, and Stephen Hawking judged it a potentially devastating event in the history of civilization. Two factors have given AI a boost: advances in hardware and the availability of big data. The book sheds light on three gaps which must be addressed for further success in AI.
1. Gullibility gap: Humans are gullible when a computer displays technical proficiency; we immediately attribute to it the same kind of intelligence that humans possess. This lapse of social psychology is what the book calls the fundamental overattribution error.
2. Illusory progress gap: Small advances in AI are taken to be huge and assumed to carry over to difficult problems.
3. Robustness gap: The misconception that if AI can solve one problem correctly, the output will be successful every time the algorithm is applied to a similar problem. Autonomous cars illustrate this gap: they are trained in very restricted environments and cannot cope with the ever-changing circumstances of the external world. Hence AI technology, although it performs in narrow circumstances, cannot be trusted in all situations.
The presence of the aforementioned challenges does not mean that they cannot be overcome. A clear understanding of the repercussions, of why current systems fail to hit the target, and of a new strategy going forward can help address them. The book lists nine risks associated with AI while it remains a work in progress, the first being the fundamental overattribution error discussed above.
- Lack of robustness
- Heavy reliance on details in datasets
- Rekindling of obsolete biases
- Dependence on programs whose inputs can be manipulated by the public, which can lead to false outputs
- Training AI on existing datasets makes it prone to regenerating that same data
- The cumulative effect of social biases can be exacerbated when they are amplified through contemporary data
- High likelihood of AI pursuing the wrong goals
- Lastly, given its potential, AI can be used intentionally to cause harm to the public for various reasons
The third chapter, “Deep Learning and Beyond”, discusses the deep learning approach that has propelled AI through supervised learning methods. A key reason for the stark progress in AI was the availability of GPUs, which provided enough processing power to train networks built from many layers. The book explains the difference between AI, machine learning, and deep learning: AI is the all-encompassing field, with deep learning being a subset of machine learning, while other machine learning approaches such as decision trees and genetic algorithms are no longer equally popular.
Two fundamental ideas govern deep learning: hierarchical pattern recognition and learning. The former means that the output is built up hierarchically through interconnected intermediate layers; such systems are known as neural networks. What each layer learns depends on the weights assigned to its connections, and these weighted layers take the place of the hand-built features that once required the expertise of feature engineers. Although neural networks perform fairly well at statistical analysis, how they reach a decision remains unclear, and they lack the intuitive sense that is characteristic of humans. Deep learning is opaque and brittle, meaning its predictions may be inaccurate in ways that humans cannot readily identify. The book establishes that deep learning can be very useful for simple, repetitive tasks, but the completion of a task cannot be guaranteed, and so it lacks flexibility.
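To make the layered picture concrete, here is a minimal sketch, not taken from the book, of a small feedforward network: an input flows through interconnected layers, and the weights on each connection determine what each layer responds to. The layer sizes, the ReLU activation, and the random weights are all illustrative assumptions.

```python
# Minimal sketch (not from the book) of hierarchical layers and weights.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Weights for two hidden layers and an output layer (randomly initialized here;
# in practice they would be adjusted during training).
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 3))   # hidden layer 2 -> output (3 class scores)

def forward(x):
    """Hierarchical pattern recognition: each layer re-represents the
    previous layer's output before the final prediction is made."""
    h1 = relu(x @ W1)          # low-level patterns
    h2 = relu(h1 @ W2)         # higher-level combinations of those patterns
    return h2 @ W3             # raw class scores

x = rng.normal(size=(1, 4))    # one made-up input example
print(forward(x))              # scores are produced, but nothing here "understands" x
```

The last comment echoes the book’s point: the arithmetic runs end to end, yet at no stage does the system grasp what the input means.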
The fourth chapter analyzes the intricacies of applying deep learning to text. When searches are not worded exactly right, the results do not match the query. It is an overestimation to think that machine-reading systems can answer any question. The limited progress of such systems can be seen in the fact that they fetch information by matching text rather than comprehending the information and applying reasoning. An example is Google’s search algorithm, which works by matching the words of a query against the words of indexed text; among the millions of results Google returns, the likelihood that the right answer appears somewhere among them is quite high. Systems like Alexa and Siri are still effectively illiterate beyond reporting factual information such as the weather, and they still require a lot of improvement. It is telling that none of Google’s approaches depend on the algorithm learning the meaning of the documents. Google Translate has amazed researchers by finding patterns in bitexts (parallel texts in two languages), yet a deep understanding of context is still absent. Existing technology misses the big picture and runs on straightforward pattern-matching translation.
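As a way to picture retrieval without comprehension, here is a hedged sketch (not Google’s actual algorithm) in which documents are ranked purely by word overlap with a query; the documents and the query are invented for illustration.

```python
# Keyword matching as a toy model of retrieval without comprehension:
# documents are ranked by how many words they share with the query, so a
# passage can "answer" a question it does not understand at all.
def word_overlap_score(query: str, document: str) -> int:
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words)

documents = [
    "the eiffel tower is located in paris france",
    "paris is mentioned in many novels about france",
    "deep learning systems match patterns in large datasets",
]

query = "where is the eiffel tower located"

ranked = sorted(documents, key=lambda d: word_overlap_score(query, d), reverse=True)
for doc in ranked:
    print(word_overlap_score(query, doc), doc)

# The top result happens to contain the answer, but only because its words
# overlap with the query; no comprehension or reasoning is involved.
```

Real search engines are vastly more sophisticated than this sketch, but the book’s criticism is that the underlying principle is still matching rather than understanding.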
To encapsulate the book’s first four chapters: deep learning has not been integrated into AI to the point where systems grasp the significance, or the crux, of what they are processing. Deep learning still needs to be steered in the right direction to yield valid results, and modest improvements should not be exaggerated or mistaken for all-encompassing progress.