You’ve read the terms artificial intelligence (AI), neural networks, and machine learning several times in this book, and I’ve told you they’re the future. Now I want to explain why: how the technology works and how it helps us solve hard problems.
Artificial intelligence is our pursuit to make machines think and do things on a level that rivals what humans are capable of. Identifying things in a picture, recognizing faces, learning to stand on two legs, driving a vehicle, creating music and art—these are all tasks that seem easy and natural to us humans, but they are unbelievably difficult for a machine. I don’t have to put any effort into helping you understand why it would be great if computers could do these things, and I’m happy to say that now they can. Neural networks are the technology that makes this possible. Originally conceptualized in the 1960s, they have only now come to prominence due to two factors: 1) an abundance of data to train neural networks on, and 2) widespread access to the computing power (GPUs) needed to make it happen.
Computers are programmed using logical statements that a software developer manually translates into lines of code. For example, “if age is twenty-one, then allow access to beer” is easy enough for a software developer to turn into code. Now what if the same developer had to write code to detect whether a picture contains a cat? All he would have access to is a series of numbers representing the colors of every point (pixel) that makes up the whole picture. The combinations of numbers that could represent a tail, four legs, eyes, the various colors and shapes of cats, the shape of the mouth, the shape of the teeth (if they’re visible)—it’s overwhelming and practically impossible for a human to define in a series of logical statements. This is where neural networks, the technology behind modern AI, come into play. AI doesn’t depend on logical statements. It instead works with probabilities and approximations. An AI model to detect pictures of cats would first need to be trained to do so. Training an AI model means giving it many pictures of cats until it learns the combinations of features that exist together only in cat pictures. From there, it can look at any picture and determine whether it contains a cat.
Training AI is kind of like getting the computer to figure out the code for finding cats on its own. When you give a neural network a large collection of cat pictures, it slowly learns which patterns in the numbers that make up each picture define cats. Once it does this, you have a trained AI model for detecting cats in pictures. There is always the possibility of mistakes, but that’s also the case with humans. A lot of the effort in developing AI is focused on reducing errors, reducing the amount of training data required, and reducing the time required to train a model. Once trained, an AI model can be better than humans in a lot of cases. Say you train an AI model to identify various butterfly species based on the colors and patterns on their wings. It will do its job really well, even better than a highly trained human. Plus it will never tire, and it will be far more cost efficient.
Neural networks are designed to mimic how a brain works, or at least how we think a brain works. Like our brains, neural networks are made up of large numbers of units called neurons. A neuron takes one or more input values, does a simple calculation on them, and outputs a score. That score then acts as an input for the next neuron, and so it goes. The score is just a number that reflects how confident the neuron is about something. For example, a neuron outputting the value 0.8 when given a photo as input means the neuron is highly confident (80 percent) that the picture contains what we’re looking for.
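To make that concrete, here is a minimal sketch of a single artificial neuron in Python. It assumes one common design choice: a weighted sum of the inputs followed by a sigmoid squashing function that turns any number into a 0-to-1 confidence score. The particular inputs, weights, and bias below are made up for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed to a 0-1 score."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: maps any number into (0, 1)

# Three input values (think pixel intensities) and some learned weights.
score = neuron([0.9, 0.1, 0.4], weights=[2.0, -1.0, 0.5], bias=-0.2)
print(round(score, 2))  # 0.85: the neuron is about 85 percent confident
```

In a real network, thousands of these scores feed forward into the next layer of neurons, which is exactly the chaining described above.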
Before it can do any of that, though, the neural network needs to be trained. If we want to create a neural network that finds photos with cats in them, we first need a lot of cat pictures to train the network on. By train, I mean show the neural network a picture of a cat, and when it makes its guess, either give the network a pat on the back if it’s right or a slap on the wrist if it’s wrong. What I really mean is that if the network correctly identifies the picture of a cat, we do nothing, but if it’s wrong, we ask it to change its calculations a little and try again. This training process happens across millions and millions of neurons at the same time, all working together. If you’d like to get a little more technical, this process of training the network is called “backpropagation,” and it’s responsible for a big part of our future.
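That “change its calculations a little and try again” loop can be sketched in a few lines. This is a toy, not production backpropagation (real backpropagation chains these adjustments through many layers of neurons): a single neuron, made-up two-number “pictures,” and labels where 1 means cat and 0 means not cat. After enough nudges, the neuron’s guesses line up with the labels.

```python
import math

def predict(x, w, b):
    # The neuron's guess: a 0-1 confidence that x is a "cat".
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 / (1 + math.exp(-z))

# Toy labeled data: two-number "pictures"; label 1 = cat, 0 = not cat.
data = [([1.0, 0.9], 1), ([0.8, 1.0], 1), ([0.1, 0.2], 0), ([0.2, 0.0], 0)]

w, b, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(200):                      # repeat: guess, measure error, nudge
    for x, label in data:
        error = predict(x, w, b) - label  # how wrong was the guess?
        for i in range(len(w)):           # nudge each weight against the error
            w[i] -= rate * error * x[i]
        b -= rate * error

print(round(predict([0.9, 0.95], w, b), 2))  # close to 1: "probably a cat"
```

The nudge rule here is the gradient-descent update for a single sigmoid neuron; backpropagation is what lets the same idea scale to networks with millions of neurons.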
The big takeaway here is that AI is at the core of much of the technological progress happening right now, and we must all understand and leverage it effectively to build better products. This seems obvious, but it’s hard to do, because making AI work properly requires a lot of expertise, data, and computing power. These are all difficult to acquire even if you have the financial resources to invest in them. Some of the smartest people in the world work in the AI field, and demand for their talent outstrips supply by approximately a hundred to one. In such a relatively new field, finding experienced talent isn’t easy. Smart AI talent wants to work with other smart AI talent, and more likely than not that’s at places such as Google. Since AI talent is the future, Google can teach us a thing or two about attracting this valuable resource.
A lot of the work in AI is fast-moving and cutting-edge, which places it closer to the world of research than to everyday practice. Google successfully combines research and practice, with the Google Brain team creating an environment that fosters the AI talent streaming out of the best universities on the planet. The team promotes the freedom to do research, an open culture, and Google’s scale as the three pillars of its success. A quick look at their website will show you the hundreds of research papers on AI flowing out of this environment. I can only imagine how attractive this will be to the next generation of AI researchers who grow up reading and citing work coming out of Research at Google.
I personally find the second pillar, an open culture, very alluring. Always a big believer in open source, Google has actively contributed to some of the most popular projects out there, and continuing that trend, they released TensorFlow, a powerful open-source software platform for building AI applications. This is the same software Google uses internally to build their own AI. Within months of its release, TensorFlow became one of the most popular and easiest-to-use platforms for AI research, and it’s a massive win for all of humanity.
The third pillar, Google’s scale, is the hardest for any competitor to replicate. Consider the access to large amounts of data that can’t be found anywhere else. Now add to that the access to massive computing resources and the access to other AI researchers in numbers that would be hard to find elsewhere. Geoffrey Hinton of the University of Toronto, recognized as one of the world’s leading AI researchers, works with the Google Brain team as a distinguished researcher. Other Silicon Valley companies such as Twitter, LinkedIn, Facebook, and Uber are quickly replicating this model within their own organizations. Each of these companies has access to unique data that is very attractive to AI researchers focused on problems involving these datasets. Facebook AI Research (FAIR) and the Twitter Cortex team have both hired many of the world’s top AI researchers, including Yann LeCun and Clément Farabet, respectively.
Access to data is very important when working with AI. For a lot of Silicon Valley companies, thinking data first is very natural. They understand the value of their data and guard it well to maintain a competitive advantage. Data is generated from everything. It can be entered by your users, generated by users interacting with your application, purchased from external providers, scraped from the public Internet, or crowdsourced. The source of the data isn’t important. What’s important is that the data is available, well structured, and labeled. If you’re going to train an AI to find cats, you first need a large collection of pictures labeled correctly as a cat or not a cat. This allows the AI to learn. Labeled data is hard to get externally, but when dealing with data generated on your own platform, it can easily be labeled at the source, such as when people fill out their LinkedIn profile. In this case, they choose their job title or position from a list, effectively labeling their own data. Most AI research is based on data labeled using crowdsourcing platforms such as Amazon Mechanical Turk and CrowdFlower.
The Google Cloud has a collection of managed services that allows you to do specific AI tasks such as speech to text, image recognition, language translation, and text analysis (natural language processing). Each of these services includes AI models built and managed by dedicated Google teams, and they aim to be the best in class for each of their problem domains.
For those who want to build their own AI models and run them at scale in the cloud, Google offers a service called Cloud Machine Learning.
The AI-based image-recognition service is called the Vision API, and it can analyze any image to surface various details. It can locate objects in the image such as logos, lamps, people, cars, animals, landmarks, and more. It can even detect faces and determine whether the people in them look happy, angry, or sad. It can extract all the text in the image and can even tell you whether the image contains any adult content.
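As a rough sketch of what using it looks like, the snippet below builds the JSON request body that the Vision API’s images:annotate REST endpoint expects. The endpoint and feature names (such as LABEL_DETECTION and SAFE_SEARCH_DETECTION) come from Google’s public documentation; the API key and the actual HTTP call are omitted here.

```python
import base64
import json

def vision_request(image_bytes):
    """Build the JSON body for the Vision API's images:annotate endpoint."""
    return {
        "requests": [{
            # Image bytes are sent base64-encoded inside the JSON body.
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},  # name objects
                {"type": "SAFE_SEARCH_DETECTION"},             # adult content?
            ],
        }]
    }

body = json.dumps(vision_request(b"\x89PNG..."))  # placeholder image bytes
# POST `body` to https://vision.googleapis.com/v1/images:annotate?key=YOUR_KEY
# The response lists labels (e.g., "cat") with confidence scores.
```

The response comes back as JSON too, so a few lines of parsing give you labels and confidence scores with no models to train or host yourself.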
The Speech API supports more than eighty languages and can convert provided audio into text in real time.
The Natural Language API can provide a detailed analysis of any block of provided text. The technical term for this is “natural language processing.” For example, if I provided it with the sentence “Outside London there live many sheep,” it would be able to extract many useful things. It could determine the tone of the text: Is it angry, happy, or neutral? It would also determine that “London” is a place and “sheep” are a type of animal. It can also break the sentence down into its components and dependencies. This is very useful if you are trying to make sense of text in your own code. For example, if you were trying to build a sophisticated customer-support add-on for your company website, this API could help you understand what the customer needs and what his state of mind is, to help direct him to the correct solution. Because it’s a managed solution, you wouldn’t have to do anything; the service would learn and continually improve its performance on its own. Imagine tying this together with the Translate API, which can detect the language of the text and convert it to any other language instantly. Now your support add-on can handle requests from customers worldwide.
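Here is the same kind of sketch for entity extraction: building the JSON body for the Natural Language API’s documents:analyzeEntities REST endpoint, using the sheep sentence from above. The endpoint and field names are taken from Google’s public documentation; authentication and the HTTP call itself are omitted.

```python
import json

def language_request(text):
    """Build the JSON body for documents:analyzeEntities."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",  # how character offsets in the reply are counted
    }

body = json.dumps(language_request("Outside London there live many sheep."))
# POST `body` to https://language.googleapis.com/v1/documents:analyzeEntities
# The response lists entities such as "London" (a LOCATION), each with a
# salience score indicating how central it is to the text.
```

A sibling endpoint, documents:analyzeSentiment, takes a nearly identical body and returns the angry/happy/neutral tone discussed above.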
Recommendation, or relevance, problems are common problems app developers face. When Amazon suggests books to buy, Netflix suggests movies to watch, or LinkedIn suggests people to connect with, these are all recommendation problems. Imagine Amazon’s recommendation software inspiring people to buy just 1 percent more. That represents millions of dollars in additional revenue, which is what makes these such valuable problems. In all of these cases, software needs to present a prediction to a user and, over time, learn from the interaction to improve the quality of its predictions. Smart applications are the future, and solid predictions improve the user experience by making it more efficient.
The Prediction API is another managed service that will take a lot of technical complexity off your hands. The predictions are of two types. The first is classification, where you get a label or a fixed value that applies to the data (e.g., spam or not spam), and the second is regression, where you get a continuous value (e.g., age, weight, or price). Using this service is also extremely easy. All you need to do is upload your data into it, and you’re good to go. If you don’t have access to the data needed to predict something desirable, Google is working on making a marketplace for prediction models, so if, for example, you need a model to detect spam, you will be able to purchase one.
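The classification-versus-regression distinction is easy to see in code. The two toy functions below are hand-written stand-ins, not the Prediction API itself: one returns a fixed label (classification), the other a continuous number (regression). The spam keywords and pricing formula are made up for illustration.

```python
# Classification: the answer is one of a fixed set of labels.
def classify_spam(msg, spam_words=("winner", "free", "prize")):
    hits = sum(word in msg.lower() for word in spam_words)
    return "spam" if hits >= 2 else "not spam"

# Regression: the answer is a continuous number (here, a price estimate
# from a made-up linear fit: a base price plus a per-square-foot rate).
def predict_price(square_feet, base=50_000, per_sqft=120):
    return base + per_sqft * square_feet

print(classify_spam("You are a WINNER! Claim your FREE prize"))  # spam
print(predict_price(1_000))  # 170000
```

With the managed service, you upload labeled examples instead of writing rules like these, and it learns the mapping for you; the shape of the answer (label versus number) is the part you choose up front.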
Within the next five years, AI will disrupt numerous diverse fields ranging from finance to health care. Even jobs where the human element is very valued, such as with legal services and sales, will begin to feel the impact. Change is happening faster than anyone expected, and it’s clearly not possible for every business to attract the AI talent they need to compete in this new world. Startups have recognized this, and a wide range of cloud-powered AI providers has begun to surface. In addition, companies such as Salesforce and Google have begun to integrate AI into existing applications.
AI will either entirely replace people and do a better job or give people who leverage it almost superhuman abilities with certain tasks. A great example of the impact of AI was recently demoed by Uber when their self-driving truck delivered a cargo of fifty thousand cans of beer over 120 miles of freeway. Although I don’t see the job of driving trucks going away, I do see a massive shift in the nature of that work. Another example is Microsoft’s Skype Translator, a product that uses AI to give you superhuman capabilities. It can translate between multiple languages for voice and text interactions. A business use case for this would be to help a single customer support employee handle customers in more than fifty languages. That, to me, is a superpower.
Learning about AI is more accessible than ever. Students and working professionals have several choices. Udacity and Coursera offer courses in deep learning and machine learning and even go as far as providing training data. A few courses on self-driving cars come with a lot of labeled training data taken directly from cars driven around Silicon Valley for this purpose. Entire lectures from CMU and MIT are available on YouTube for free for those with enough initiative to work through them. Many software engineers I’ve worked with have taken these online courses and upgraded their capabilities. Some even landed new, higher-paying jobs in data science.
An interesting story that has stuck with me is that of a close friend of mine. He would use his commute on the Caltrain between San Francisco and Mountain View to take Andrew Ng’s famous machine-learning course. He told me he gained the motivation to do this from the trends he saw while working in Silicon Valley. He saw data science and machine learning solve problems that he could never have attempted with his software engineering skill set alone. He knew it was time to upgrade his “own software” to better position himself for continued success.
© 2016 Culture Capital Corp.