Artificial Intelligence (AI) is growing into big business. The market for machine learning, a subset of AI, is expected to reach $8.81 billion by 2022. After millennia of automaton dreams and decades of AI hype, the ability to automate tasks is becoming reality for an ever-widening range of applications. Increasingly, researchers are applying machine learning to power their AI, from the onslaught of customer-centric chatbots and product recommendations to self-driving cars and future-scenario forecasting.
In computer science, a computer is programmed with an algorithm: a set of directions to follow in order to accomplish a task. Machine learning is the use of algorithms that enable a machine to learn from the data it is fed. Often, the more algorithms a machine is programmed with and the more data it receives, the more complex its behavior can become, and the more (artificially) intelligent it will appear to humans.
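To make "learning from the data it is fed" concrete, here is a toy sketch (not from the article, and the practice-hours data is invented for illustration): instead of hand-writing a rule, the program fits a straight line to example points with ordinary least squares, so its predictions come from the data itself.

```python
# Toy illustration of "learning from data": fit y = a*x + b to example
# points with ordinary least squares, using only the standard library.

def fit_line(points):
    """Return slope a and intercept b minimizing squared error."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# The "training data": hours of practice vs. test score (made-up numbers).
data = [(1, 52), (2, 55), (3, 61), (4, 64), (5, 70)]
a, b = fit_line(data)
print(round(a * 6 + b, 1))  # predict the score for an unseen input
```

Feed it different data and the same program "learns" a different rule; that shift, from instructions to data, is the core of machine learning.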
The hype surrounding AI in general, and machine learning specifically, is due in large part to the upsurge in big data: very large data sets, especially those generated by the internet. With more data, machines can recognize more intricate patterns, which makes them more useful across industries.
There are multiple ways to program a machine with algorithms for machine learning. The primary methods (although there are a couple of others) are:
- Supervised learning: the algorithms are trained toward a specific answer. For example, if a car crosses the line at an intersection while the light is still yellow, snap a photo. If the license plate is identifiable, search for an address. If the address is found, send a traffic citation.
- Unsupervised learning: the machine is given data to process and identify patterns in, with few or no instructions. It works well with retail payments, where it can identify customer demographics from purchase histories. Usually, this information then feeds an algorithm that predicts future customer behavior, and such predictions tend to be more accurate than they would be without the unsupervised-learning step.
- Deep learning: whole groups of algorithms, loosely modeled on the human brain, are layered together in a machine. These artificial neural networks can process larger data sets at faster rates. More importantly, deep learning machines are capable of recognizing images and sounds, which is why deep learning powers self-driving cars and automated language translation.
- Cognitive computing: some people argue that the term is a marketing buzzword. However, scientists have combined our current understanding of cognitive science with deep learning, computer vision, and other fields. IBM’s Watson, which won on Jeopardy!, is the primary example.
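The unsupervised-learning case above, grouping customers by their purchases with no labels, can be sketched with a bare-bones k-means clustering loop. This is an illustrative toy under invented assumptions (made-up purchase data, two clusters chosen by hand), not production code:

```python
# Minimal k-means: group customers by (annual spend, visits per month)
# with no labels, letting the algorithm discover the segments itself.

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    """Coordinate-wise mean of a list of points."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive init: first k points as centers
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Invented purchase data: two loose groups (budget vs. big spenders).
customers = [(200, 2), (250, 3), (220, 2), (1800, 9), (1700, 8), (1900, 10)]
centers, clusters = kmeans(customers, k=2)
```

No one told the program which customers were "budget" or "big spenders"; the two segments emerge from the data alone, which is exactly the pattern-finding that makes unsupervised learning useful for retail.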
Instead of relying so heavily on AI, many experts are calling for hybrid intelligence: a relationship between humans and machines that improves the thinking of both. In other words, an AI drives the car while we keep our hands on the steering wheel and our foot over the brake pedal. The car’s primary directive is safety, so it drives at a speed appropriate to all the information it is given, while the driver stays alert to any dangers the car may not recognize.
For millennia, humanity has created automatons to show off our godlike abilities of creation. Forecasters and science fiction authors have helped foster the narrative that AI will either save or destroy humanity. And now some people even want to create AI gods. Once society has deified or demonized something, it is much more difficult to think of it as an equal.
Researchers do not fully comprehend how the most advanced algorithms work; many of them are unsupervised learners, after all. This matters because the algorithms can make mistakes. A car driven by deep learning or cognitive computing could plow down a pedestrian, or a smart house could let a murderer walk through the front door. These scenarios are hypothetical, but mistakes of some sort are likely to be made. Who takes responsibility for those mistakes? And how can researchers address them if they don’t even understand the process that led the machine to its decisions?
The root question from all of this research, of course, is whether any of these machine learning techniques create machines that are actually intelligent. As far as Douglas Hofstadter is concerned, they aren’t: they may mimic the human mind, but they do not mimic thinking. He believes thinking depends heavily on the analogies each of us uses to view the world and our place in it. If Hofstadter is correct, perhaps we should hold off on applying these AI techniques to dangerous domains like driving until our machines can understand our cultural perspectives.
The above was written for Excel4theStreet https://www.excel4thestreet.com/learning-machines-really-intelligent/