Can machines be intelligent? Can machines learn? What can we do with intelligent machines? These questions have intrigued us for decades. With AI becoming integrated into real applications over the last few years, quite a few answers are emerging. We have seen that machines can learn in a manner quite comparable to humans. They can be labeled intelligent, i.e., exhibiting the cognitive capabilities of learning, decision-making, and problem-solving. AI with ML neural networks is at an early stage of development, addressing relatively narrow and very specific tasks. AI has proven very capable in pattern recognition tasks, where the machine learns to accurately interpret patterns in images, video, sound, etc. Such systems make reliable decisions in areas like speech, object, and face recognition, language translation, medical diagnosis, scene description, and language understanding. The unique value of AI comes from automatically making better, faster, and more accurate decisions in areas that are not clearly defined by precise rules. All aspects of any business benefit from better decisions made faster, automatically, and inexpensively.
We have scratched only the surface of intelligence in this book. Intelligence, as experienced in humans, goes beyond decision-making and problem-solving. Areas like creativity, insight, imagination, judgment, relationships, conscience, and emotional intelligence are in a different domain. The connection between emotional and rational elements in decision-making is quite complex and not yet understood. The intelligence of our brain has many more dimensions and is far more potent than the AI exhibited by machines today. It is interesting to speculate what could happen if we networked thousands or millions of AI neural networks together, operating as one entity. Is that how our brain is structured, with different areas of the brain responsible for specialized tasks like speech, vision, etc.? Another area pursued by researchers, including Elon Musk, CEO of Tesla, is interconnecting our brain with AI, thus extending the capacity of our brain to seamlessly tap into AI for specific expertise in new languages or skills. Imagine being able to tap into the entire Google database directly and seamlessly from your brain, as if all that knowledge resided in your brain.
In the second part of the book, we will use this newly gained insight into artificial intelligence and show how to leverage it for your business.
Can you think of an industry that does not benefit from intelligence? We believe that AI can enhance all business areas and will fundamentally impact most industries and businesses. The reason is very simple: artificial intelligence allows you to make better decisions for both simple and very complex tasks. It does so by understanding and evaluating all the parameters and factors that influence them. It can leverage complete sets of data, better understand the influencing factors, and produce an answer that is more reliable than humans alone can. We are seeing the evolution of new AI-based tools and services that can help run better organizations, optimize core business processes, and create better products and services for customers. Continuous technological advancement in computing power, connectivity, and neural networks will fuel the development of more and better AI that can be leveraged in business.
Businesses are constantly evolving organizational entities, striving to create and deliver the best goods and services to consumers. They do so in an ever-changing world, often in a highly competitive environment. Customer needs change, new competitors enter the market, laws and regulations change, communication channels shift, and technology evolves, providing new capabilities. AI is one new capability with far-reaching implications for many aspects of businesses and the industries they operate in. It acts as a new lubricant for all business operations by enabling better decisions at all levels.
Some examples where artificial intelligence can help businesses in making better decisions are:
Should you hire this person?
Who is your best customer?
What services need to be improved?
What features do your customers love?
What products will go out of demand soon?
Which suppliers have declining quality and reliability?
How can you train your employees better?
What is the risk of a contract?
What is the best business model option?
Thanks to learning effects, each new decision can be used to further train the system by observing the results of that decision. Artificial intelligence can also be used to create more intelligent products. A smartphone that knows where you want to go next and organizes the way. A voice assistant that can book a restaurant table for dinner this evening, informing and coordinating the participants. A refrigerator that replenishes itself and helps with a healthy diet on a daily basis. A self-driving car. A security drone. A CRM system that automatically focuses on helping you develop your best customers. An HR tool that helps you hire the most suitable person. A corporate communication tool that ensures transparency and assesses the motivation level of your employees. These are just a few examples of intelligent products that are likely to evolve from artificial intelligence capabilities.
Technology has had a profound impact on our lives. It has allowed us to create astonishing tools and machines that make our lives easier and more secure, like cranes, the automobile, or X-ray scanners. We have significantly reduced famine, plague, and war, doubled our life expectancy, and live a much more comfortable life compared with just a couple of hundred years ago. Most of this can be attributed to technological advancements. Our intelligence, together with our ability to create powerful tools, has moved us to the top of the food chain. The invention of artificial intelligence, combined with other modern technologies like the Internet of Things, Big Data, and robots, will now take us to new productivity levels, far beyond today’s possibilities.
The impact of artificial intelligence on technology in general is huge. It adds learning capability to machines and improves the decisions that machines need to make. This is done through software and neural networks, which have improved thanks to the huge data transfer capacities of recent computing generations and the highly parallel computing capabilities of modern chips with specialized parallel processing architectures. All computers, machines, and robots are directed by software. The programs that run them define their performance and capabilities. The logic of software is based on input, some operations and calculations performed on that input, and a resulting output.
Since artificial intelligence enables the machine to learn from this flow from input to output, by being trained or by observing the results, the software can adapt itself to perform better in the future. Today this is often done through neural networks, whose multi-level pattern recognition creates self-improving algorithms based on observing enormous data streams. These algorithms can yield better results than algorithms created by human developers in traditional ways, giving artificial intelligence an edge over human-created approaches, especially in cases with many complex parameters and large volumes of data over time. Better algorithms enable better software that improves over time. This brings cognitive capabilities to other technologies running on software, and because most technologies run on software, the impact is huge.
Computer chips, sensors, the Internet, cloud computing, apps, robots, drones, and augmented reality are some examples where artificial intelligence-based software improves performance. For more complex systems as well, like enterprise resource planning systems, mobile phones, and large-scale traffic control systems, artificial intelligence significantly advances the capabilities, performance, and quality of software and machines. It also adds new capabilities, like voice and image recognition, that enable new functions and more efficient and convenient user interfaces. The ability of machines to learn also allows more complex use cases, where several steps are needed to perform a given task.
Initial technological areas where artificial intelligence has the most profound impact are Big Data, the Internet of Things, and robotics. The increasing digitalization of many areas of our lives and businesses has created enormous data pools and constant data streams. The area of Big Data focuses on datasets that are so large or complex that traditional data processing software is inadequate to deal with them. Big Data is often generated by combining application-specific data with external data sources, making the data sets very complex and difficult to handle.
A simple example is using an app to get to a restaurant. Your smartphone knows the time of day, your current location, and your destination. An algorithm can calculate the best route to get from A to B. This information can be combined with weather information, information about road traffic, public transportation options, taxi and Uber-like options, or local bike sharing options, as well as the availability of bike paths on the way. Predictive data can simulate how the situation is likely to change in the near and distant future. The complexity of handling such a digital service becomes enormous. Artificial intelligence can make better sense of the data, identify patterns, and learn from your behavior and the result. How much time did it actually take to reach the destination, and how does that differ from what was projected? What factors caused delays? An intelligent routing system based on artificial intelligence can produce significantly better results than simple routing algorithms. And it can improve over time as it learns what factors matter and what personal behaviors impact the result.
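To make this concrete, here is a minimal sketch of how such a routing system could learn from the gap between predicted and actual travel times. The route names, the per-route correction factor, and the update rule are our own illustrative assumptions, not any real routing product's algorithm:

```python
# Minimal sketch of a self-correcting travel-time estimator.
# Route names and the update rule are invented for illustration.

class EtaEstimator:
    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        # Per-route correction factor; 1.0 means "trust the base estimate".
        self.correction = {}

    def predict(self, route, base_minutes):
        """Base estimate scaled by what we have learned about this route."""
        return base_minutes * self.correction.get(route, 1.0)

    def observe(self, route, base_minutes, actual_minutes):
        """After the trip, nudge the correction toward the observed ratio."""
        ratio = actual_minutes / base_minutes
        old = self.correction.get(route, 1.0)
        self.correction[route] = old + self.learning_rate * (ratio - old)

est = EtaEstimator()
est.observe("home->restaurant", base_minutes=20, actual_minutes=30)  # trip ran late
print(est.predict("home->restaurant", 20))  # estimate now above 20 minutes
```

Each observed trip feeds back into the next prediction, which is the learning loop described above in its simplest possible form.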
Other examples are understanding medical images, optimizing plant fertilization and watering, simulating climate change predictions and financial transactions. In companies, customer behavior, digital marketing and advertising and human resource performance are other examples of Big Data applications where artificial intelligence can play a big role.
More and more data sets are produced by real-world sensors that have emerged through the growth of the Internet of Things (IoT), where smart, connected little machines perform simple tasks in all parts of our lives. Scales, cameras, coffee machines, thermostats, gates, and video surveillance systems are only a few examples. IoT is characterized by the inter-networking of physical devices. A device often contains sensors to generate data, the ability to communicate this data through the Internet, and a way to act on the data. A surveillance camera generates a series of pictures and uploads them to the cloud, where image processing software detects an alarming situation and informs the user via an app, while the camera sounds a built-in alarm. Based on the movement of objects, the camera can follow moving objects, or it can be controlled via an app from anywhere in the world. Both the range of IoT applications and the sheer amount of data these devices produce are creating many use cases that benefit from artificial intelligence. Most recent devices are labeled smart or intelligent, like Nuki’s smart lock that automatically opens your door as you approach with your smartphone, but just like with personal voice assistants, the intelligence can still be questioned. The analytics firm IHS forecasts that the IoT market will grow from an installed base of 15.4 billion devices in 2015 to 30.7 billion devices in 2020 and 75.4 billion in 2025, so IoT is becoming a big driver and beneficiary of artificial intelligence capabilities.
Another area where artificial intelligence is being deployed on a big scale is robots. The field of robotics, an interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, materials science, and others, has embraced artificial intelligence to give robots cognitive capabilities, going from simple pre-programmed behaviors to more complex, context-aware applications of high quality. Robots combine many technological capabilities and also incorporate technologies like Big Data and IoT. Robots come in many shapes and forms, like manufacturing machines, self-driving cars, and drones. These cognitive capabilities allow new use cases, like monitoring crop health by drone, resulting in better fertilization and use of pesticides. The cost savings and quality improvements compared with traditional methods, like human inspection or airplane monitoring, are huge. According to a study by Informa Economics, corn, soybean, and wheat farmers could save an estimated $1.3 billion annually by using drones to increase crop yields and reduce input costs.
Robots are also an area of great dispute and a cause of human anxiety. During the industrial revolution, people feared that automated machines would take away many of their jobs, and science fiction movies have often portrayed robots as stronger than humans once they begin to pursue their own self-created motives. If those motives are bad, the impact on humanity is existential. We should remember that it is humans who create the technology and give robots their tasks.
The new and evolving capability of artificial intelligence, and its impact on other technologies, has added a lot of complexity to an already difficult-to-understand and difficult-to-master technology landscape. The complexity of creating the best solution for a given use case, and the impact on businesses if they get it wrong, are limiting factors for the take-up of these technologies in many areas. Fortunately, an ecosystem of software, tools, and services is being created at a fast pace, fueled by corporate and venture investments in the promising new area of artificial intelligence. One can find ready-to-use and proven solutions in all areas of technology. Whether you are looking for complex industry solutions, want to optimize your enterprise processes, or need capabilities, tools, and components like image recognition, voice understanding, and translation, or core technologies like neural networks, there is a growing ecosystem of artificial intelligence-enhanced solutions and services that you can tap into.
Another ecosystem visualization focuses on core areas of enterprises:
Businesses now need to figure out how to create value for their customers using artificial intelligence and what business model makes it competitive and profitable.
Just like in the case of the Internet, every business in every industry will benefit, and will very likely be transformed with AI. The following list points out just a few examples of where AI will be applied in various industries. Over time, as the AI technology matures, we believe it will become an integral part of all business processes.
Similarly, AI will become an integral part in almost all horizontal business functions. Here are a few examples.
Human intelligence benefits from an interesting duality in arriving at conclusions. On the one hand, we arrive at conclusions based on the perception of patterns; on the other, we reach conclusions based on logical and rational analysis. The two forms are distinct but complementary. Machine-based intelligence also comes in two forms: AI-based decision-making built on deep learning, which interprets patterns in data to arrive at conclusions, mimicking the perception-based intelligence of our brain; and standard rule-based computing (as in a PC), mimicking the rational intelligence of our brain.
If we were to model the brain based on our general observations it would consist of two parts:
Right – perception-based
Left – rational-based
Our senses – taste, sight, touch, smell, and hearing – provide patterns to the right part of our brain to generate perceptions, whereas our logical interpretations feed the left part, generating a structured and rational understanding of a situation or a problem.
When we study physics or mathematics, we are mostly using the rational part of the brain, which is best suited to providing us with a logical structure for the subject. However, when we are dealing with patterns created by our senses, we are using the perception part of the brain. Our five senses are the prime sources of patterns for creating perceptions. Since most situations are a mix of logic and patterns, we use both the rational and perception parts of the brain collaboratively to arrive at conclusions and make decisions. Both parts, perception and rational, are integral sources of human intelligence.
Both parts of the brain are simultaneously active in all situations. The right part may be busy generating perceptions based on patterns, while simultaneously, for the same situation, the rational part of the brain is busy constructing a rational interpretation of the situation based on some logical structure and comes up with a rational conclusion. Who wins? Right or left-brain? It depends on the situation.
Most AI systems today are based on deep learning, where learning happens through exposing the AI system to tens of thousands of illustrative examples. Deep learning involves absorbing intricate details and subtle nuances in pictures, videos, or sounds into the parameters of the neural network of the AI system. After the training, the AI system is able to perceive the input data based on patterns in images, faces, objects, movements or sounds fed into the system. The AI system’s decision-making is based on the perception of the input data patterns, behaving like the right side of the brain – specializing in perceiving patterns.
The left side is about understanding and dealing with the logic of a situation. This works more like the standard computing we know from a personal computer (PC) or a smartphone: coding situations that are clearly structured by rules that can be articulated in “IF-THEN-ELSE” logic. It corresponds to the left side of the brain in our simplistic model. Future PCs and smartphones are very likely to have AI logic integrated. We want to remind our readers that the simplistic models used here are gross approximations of reality. Their purpose is just to illustrate the working of two processes in the brain and to offer a metaphor for how the two forms of computing work in an easy-to-understand fashion.
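As a toy illustration of this rule-based style of computing, here is a small decision written as explicit IF-THEN-ELSE logic. The loan scenario and all thresholds are invented purely for illustration:

```python
# Classic structured "left brain" computing: every branch is an explicit
# rule a programmer wrote, so every decision can be read and audited.
# The scenario and thresholds are invented for illustration.

def loan_decision(income, debt, credit_score):
    if credit_score < 550:
        return "reject"
    elif debt > 0.5 * income:
        return "reject"
    elif credit_score > 700 and debt < 0.2 * income:
        return "approve"
    else:
        return "manual review"

print(loan_decision(income=60000, debt=5000, credit_score=720))  # approve
```

Contrast this with a deep learning system, where no such readable rules exist: the "rules" are implicit in millions of tuned parameters.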
We believe that these two forms of computing for decision-making, one for processing structured data (with standard computing) and one for unstructured data (with deep learning AI), can be used collaboratively for much more balanced decision-making. Deep learning AI systems are essentially recognition algorithms that automatically convert unstructured, pattern-based data into structured information that can then be acted upon using standard logic-based computing. This approach helps address the “black box” issue of AI and increases the transparency of decision-making. Let us illustrate this with a couple of examples.
Surveillance video: Today most public places are monitored with scores of video cameras. Video data from these can be collectively scrutinized for suspicious activities or people using AI. Once the AI concludes, based on its understanding of the video feeds, that something suspicious is happening or about to happen, it creates an alert with its decision on what is happening, together with a recommended action. By doing this, the AI has translated an unstructured situation, based on video sequences, into concrete structured information that something critical has happened or is imminent, e.g., a terrorist attack or a terrorist identified in a public place. With this structured information, the rational intelligence dealing with structured information can kick in and take a specific decision to act (e.g., vacate the public place). This illustrates how both types of computing can work collaboratively.
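The surveillance pipeline can be sketched in a few lines. The perception step is stubbed out here (a real system would run a trained neural network on the video frames), and the event names and thresholds are our own assumptions:

```python
# Perception (deep learning) feeding rules (standard computing).
# perceive() is a stand-in for a trained network; names and thresholds
# are invented for illustration.

def perceive(frame):
    """Unstructured pixels in, structured (event, confidence) out."""
    # Pretend the network flagged an unattended bag with 92% confidence.
    return {"event": "unattended_bag", "confidence": 0.92}

def decide(observation):
    """Rule-based 'rational' layer acting on the structured output."""
    if observation["confidence"] < 0.5:
        return "ignore"
    if observation["event"] == "unattended_bag":
        return "alert security and review feed"
    return "log event"

print(decide(perceive(frame=None)))  # alert security and review feed
```

The deep learning half produces structured facts; the rule-based half acts on them transparently, which is exactly the collaboration the example describes.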
Traffic management: Google Maps obtains traffic information from people travelling with smartphones on the road, indicating which road segments have normal traffic, slower traffic, or traffic jams. This information, together with other sources of traffic alerts, allows an AI system to predict how traffic in each segment is likely to evolve over time. Essentially, the AI translates unstructured traffic-speed data from various roads into structured forecasts of how the traffic will be in 15 minutes, 30 minutes, or an hour. These forecasts can then be used to guide individual drivers via routes that minimize their estimated time of arrival, based on where they are heading.
Both these examples illustrate how AI and standard computing can work hand-in-hand to solve problems in a balanced and collaborative way. AI is essentially used to make sense of the unstructured data patterns and translate it into structured information, which can be transparently and decisively acted upon by standard computing techniques.
This balanced approach can be used to take reliable decisions in complex systems containing mixed sources of unstructured and structured data such as for autonomous vehicles or other robotic solutions. A generic version is illustrated in the following diagram.
AI systems do not have the same limitations as our brain. AI can process millions of inputs without ignoring a single one. And it can do so extremely fast, without rest, without tiring, without distraction, and without becoming emotional. AI is an excellent complement to our brain in decision-making. AI is essentially a prediction machine, providing a fairly accurate prediction of the best outcome for a given input situation. Machine learning has given modern AI systems the ability to learn by themselves from the thousands of cases provided.
Decision-making starts with data inputs. Some systems use analytics to improve the understanding of the input before applying AI. The AI/ML model generates a prediction in the form of a probability value for the best answer corresponding to the input. Based on the prediction, a judgment has to be made to decide the best action. Judgment is a human skill that takes into account the prediction and any other considerations (like emotions, timing, etc.). Action results from the decision, and the consequences of the action are fed back into the system for tuning. For example, in a skin cancer diagnosis system, the input consists of medical data in the form of medical images, reports, and medical history. Based on these, the AI system gives a prediction indicating the probability of cancer and the need for an operation (e.g., 87%). Using this, a surgeon applies their judgment, based on the person’s age and other complications, to decide whether or not to operate. The outcome of the operation is fed back into the system to improve its diagnostic skills and recommendations. In real-time systems, such as an autonomous vehicle, there is no time for human involvement in decision-making, and the machine takes the decision directly, provided it has achieved the desired confidence level.
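The predict-judge-act-feedback loop described above can be sketched as follows, using the skin cancer example. The probability, thresholds, and function names are illustrative assumptions, not a real clinical system:

```python
# Sketch of the predict -> judge -> act -> feed back loop.
# All numbers and thresholds are invented for illustration.

def ai_predict(patient_record):
    """Stand-in for a trained model: returns P(cancer)."""
    return 0.87  # e.g., the 87% prediction mentioned in the text

def human_judgment(probability, age, other_risks):
    """The surgeon weighs the prediction against other considerations."""
    if probability > 0.8 and not other_risks:
        return "operate"
    if probability > 0.8 and age > 85:
        return "discuss alternatives"   # high surgical risk
    return "monitor and re-test"

feedback_log = []

def act_and_record(decision, outcome):
    """Outcomes flow back to recalibrate the model later."""
    feedback_log.append({"decision": decision, "outcome": outcome})

p = ai_predict({"images": "...", "history": "..."})
decision = human_judgment(p, age=62, other_risks=False)
act_and_record(decision, outcome="successful")
print(decision)  # operate
```

The key point the sketch makes: the model only predicts; the judgment step sits between prediction and action, and every outcome is logged for future tuning.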
The Internet of Things (IoT) is becoming very popular for automatically managing complex control systems. Its front end consists of a networked array of sensors providing real-time data from sources needed as inputs for the control system, such as temperature, location, pressure, motion, or images. This data, in digital form, is fed into an AI-based decision-making system, which decides the best action based on the provided input. IoT systems are used in a variety of applications, ranging from optimizing cooling in a data center and enabling self-driving cars to deciding which segment of crops to harvest when.
Companies like IBM, Microsoft, Amazon and Google are setting up AI IoT Cloud platforms for their customers, who can bring in data from their sensor networks. Customers develop control algorithms within the IoT Cloud. The IoT platform provides AI and additional algorithm libraries that are needed for control and decision-making. IoT platform providers are also acquiring vast amounts of sensor data to make their platforms even more attractive. They will be able to provision almost any sensor data needed by their customers on a subscription basis. As a result, creating IoT systems will become significantly easier and faster.
Data, in essence, is a digital representation of reality in the world. As humans, we take our daily decisions by sensing and analyzing the reality around us with our five senses and using our brain to decide and act upon that data. Internet of Things (IoT) systems work very similarly, with a notable difference: they can decide based on all the thousands of cases they were trained on, 24×7, undistracted, without breaks or the need to sleep.
Drones have become an increasingly useful tool for sourcing IoT data. Drones can capture high-quality images and videos of specific areas at a very low cost, and images and video are excellent sources of data patterns for AI. Describing the content or the subtle uniqueness of images and video clips is often difficult for humans, but not for AI. Machines have become better than humans at pattern recognition and can learn to differentiate subtle patterns in images without human assistance. It is possible to do reliable lip-reading by training AI machines with millions of samples of people talking and the corresponding text. Using image data of vineyards captured by drones, AI can now predict the best time for harvesting different sections of the vineyard; the color and texture of the grapes provide the patterns for the AI.
As the cost of better decision-making with AI drops steeply, AI will get integrated into almost all products, services, or processes. This will make them a lot smarter and more competitive.
Should I marry Susan or Karen, should I buy this house or that, should I take up this job or the other one, should I buy Google stock or Amazon, should I take a vacation this summer in Sicily or in Corsica, should I rent a BMW X5 or an Audi Q5, should I take the highway or stay on the country roads to get to my destination faster… Should I hire John or Sandra, should we partner with Microsoft or with Apple, should I outsource assembly to India or to China, should we target a broad customer base or focus on a special high-margin segment? We are perpetually making decisions about a desired future outcome in all walks of our lives.
Making better decisions is the key to success, both in personal and in business life. A better decision means choosing the option that is best from all angles, made as fast and as accurately as possible. As humans, we have always tried to figure out the best way to arrive at a better decision. At its core, faced with uncertainty, decision-making is really about predicting the best option.
So how do we make better decisions? Most decisions were (and often still are) made based on an amalgamation of personal experience, the opinions of trusted friends, and gut feeling. The emergence of digital technologies, especially the Internet, made decision-making more professional and easier, at least at first. We now have unprecedented access to reports, opinions, research, comparisons, social media, and an expanse of additional data, allowing us to make data-based decisions. This is when it started becoming difficult again. The human brain has a fairly limited capacity for processing information from multiple inputs and becomes overwhelmed with too much data, resulting in decision-making paralysis. This is where many of us find ourselves today when we have to make a decision. Either we do not decide at all, we decide based on just 3–5 factors (often ignoring the significant ones), or we simply surrender to our gut feeling.
The benefit of data is that it provides objectivity in decision-making and forces us to look at various aspects and angles. However, too much of it overwhelms our decision-making abilities. There is much talk of being buried in data. For example, with thousands of new papers published on medical research every year, no medical professional can be expected to read and digest every new insight and use it in their daily job. It is not just a matter of available time, but also of our brain’s capacity to take in inputs, actively use them, and make sense of them for a decision. This is true for most professions: lawyers, tax consultants, and engineers. Is there a way to make decisions objectively, always factoring in all available information?
A neural network needs thousands of perceptrons. Each is a processor that executes the same function again and again with changing data. Thousands of simple processors with a very small instruction set are needed. This lies in the domain of High-Performance Computing (HPC), with many processors running in parallel, which is very different from the processor in your personal computer or smartphone, which runs software as a sequence of many instructions and needs a complex architecture. Interestingly, HPC architectures are also used for graphics processing in video games. As a result, Graphics Processing Units (GPUs), e.g., from NVIDIA, typically used in advanced video game consoles and PCs, provided an excellent start for building fast, compact, and low-cost neural computing hardware. Now, dedicated hardware for deep learning neural networks has become the biggest rage in computer hardware, with many AI-focused companies like Google, Facebook, IBM, and Apple building their own components for AI, along with traditional semiconductor companies like Intel, Qualcomm, and NVIDIA.
Deep learning AI systems are “experts within a black box.” They produce a decision for an input situation, but the logic for generating the decision is not revealed; one cannot determine why the system made that decision. It cannot explain why a text output is the best speech recognition for a given audio input, or why a given diagnosis represents the most probable cause for the input symptoms and medical history.
When we go to a human expert – a doctor, a tax consultant, or a financial advisor – we expect them to provide a rationale for the advice or decision, which gives us confidence and trust in the advice offered. A deep learning AI system can provide us with the best advice and decisions, but does not, and cannot, provide the rationale for them. The deep learning technique used in AI systems cannot reveal the rationale behind its decision-making. Just as we cannot fully understand the perceptions and emotions behind human decisions, we have to accept the opaqueness of AI systems, as long as we are happy with their decisions. Perhaps a new technique will emerge to extract the rationale for an AI decision. Alternative methods of machine learning are being researched where the rationale of a decision is transparent, e.g., Bayesian methods, where you start with a hypothesis and every additional data input is used to tune that hypothesis.
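As a tiny illustration of the Bayesian approach mentioned above, where the rationale stays explicit, here is a sketch that starts with a prior belief in a hypothesis and updates it with each new piece of evidence. All numbers are invented:

```python
# Bayesian updating: the reasoning is transparent because every step
# is an explicit application of Bayes' rule. Numbers are illustrative.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Hypothesis: the patient has the condition. Prior: 1% base rate.
belief = 0.01
# A test that comes back positive 90% of the time if the hypothesis is
# true, and 5% of the time if it is false:
belief = bayes_update(belief, 0.9, 0.05)   # after one positive test
print(round(belief, 3))  # 0.154
belief = bayes_update(belief, 0.9, 0.05)   # after a second positive test
print(round(belief, 3))  # 0.766
```

Unlike a deep learning black box, every intermediate belief here can be inspected and justified, which is exactly the transparency the text contrasts against.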
Even if ML/AI decision-making remains a black box, there are significant advantages to leveraging AI for decision-making. AI systems do not get distracted or tired, are generally more available, and will get better and cheaper over time. AI systems can be exactly replicated and massively networked to work collaboratively with other AI systems. In contrast, the knowledge and expertise of human experts have to be developed individually and cannot be automatically networked. We believe that human experts and AI systems will work collaboratively, each contributing their unique skills.
Data is the critical resource for machine learning. The patterns used for learning and for taking decisions are contained in the data. Machine learning systems can produce valuable output only if the data is relevant, clean, up-to-date, and reliable. In the new economy, “data is the new oil,” since AI converts data into business value.
Deep learning is the magic behind the breakthrough success of ML in AI systems in this decade. This emerging area of computer science offers the most promising approach to machine learning and is revolutionizing artificial intelligence.
Deep learning requires a neural network with multiple layers, each layer performing mathematical transformations and feeding its results into the next layer. The output of the last layer is the decision of the network for a given input. The layers between the input and output layers are called hidden layers.
A deep learning neural network is a massive collection of perceptrons interconnected in layers. The weights and biases of each perceptron in the network influence the output decision of the entire network. In a perfectly tuned neural network, the weights and biases of all the perceptrons are such that the output decision is always correct (as expected) for all possible inputs. How are the weights and biases configured? This happens iteratively during the training of the network, a process called deep learning.
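A single perceptron can be sketched in a few lines: it weights its inputs, adds a bias, and passes the sum through an activation function. The weights below are arbitrary, chosen only for illustration:

```python
# One perceptron: weighted sum of inputs plus a bias, squashed by a
# sigmoid activation into a value between 0 and 1. Weights are arbitrary.

import math

def perceptron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

out = perceptron(inputs=[0.5, -1.0, 0.25], weights=[0.8, 0.2, -0.5], bias=0.1)
print(out)  # a value in (0, 1); changing weights or bias changes the output
```

Training, described next, is nothing more than nudging these weights and biases, across thousands of such perceptrons at once, until the network's outputs match the desired ones.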
The diagram shows a deep neural network designed for deep learning, with multiple layers. Data inputs enter the network at the input layer. Each layer has multiple perceptrons, which transform the inputs and generate outputs that feed into the inputs of the next layer. The interconnections between the layers and the mathematical function of each perceptron are determined by the network designers. The inputs get successively transformed layer by layer and eventually generate an output decision: a value between 0 and 1 indicating the confidence level (as a probability) of the decision for the input data. For example, if the input image is that of a cat and the network has to identify it as a cat, the confidence level should be as close as possible to 1. A lower value indicates that the network is not yet well optimized to identify the cat as a cat.
During the training phase of the neural network, the output is compared with the desired output. Deviations (errors) are back-propagated through the network, adjusting and tuning the weights and biases of all the perceptrons using a cost function. Learning happens with every tuning of the network parameters. Training the network requires a vast number of cases where the desired output is known. At the conclusion of the training, all the weights and biases of the perceptrons have been tuned to their final values, and the network is able to deliver the correct decision for all the cases. This is equivalent to having trained a specialist on lots of cases so that they have learned to take the correct decision in all situations. Now the neural network is ready for deployment.
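The training loop just described, a forward pass, comparison with the desired output, and back-propagation of the deviations to tune weights and biases, can be sketched on a toy problem. The task (XOR), network size, and learning rate are our own illustrative choices:

```python
# A tiny two-layer network trained by back-propagation on XOR.
# Network size, learning rate, and iteration count are illustrative.

import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 perceptrons, one output perceptron.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 2.0
losses = []
for _ in range(5000):
    # Forward pass: transform inputs layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Compare with the desired output (mean squared error cost function).
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate deviations and tune weights and biases.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each iteration is one round of the loop in the text: forward pass, error measurement, and back-propagated tuning. The falling loss is the "learning" happening at every adjustment of the parameters.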
The learned parameters are stored in the weights and biases of the individual perceptrons in the network. A large number of perceptrons results in higher resolution in decision-making, making the network more valuable. Most neural networks have thousands of perceptrons. Modern neural networks are usually made up of approximately 10–20 layers and contain around 100 million trainable parameters. Since the algorithmic logic for decision-making is spread across the weights and biases of thousands of perceptrons, it is almost impossible to reconstruct the logic or rationale used for a decision, making an AI system based on deep learning neural networks a black box.
Amazon CEO, Jeff Bezos says: “We are now solving problems with machine learning and artificial intelligence that were … in the realm of science fiction for the last several decades. And natural language understanding, machine vision problems, it really is an amazing renaissance.” Bezos calls AI an “enabling layer” that will “improve every business.”