When I was growing up, we had many examples of Artificial Intelligence (AI) in the movies. Of course we had R2-D2 and C-3PO in Star Wars, and HAL in 2001: A Space Odyssey. It was clear to anyone that these machines were actually intelligent.
These days the media is calling anything and everything “AI” with little evidence of any intelligence whatsoever. Here are some examples from the news:
- Microsoft made its AI work on a $10 Raspberry Pi: “The aim is to make dumb things like fridges and sprinklers smart.”
- Instagram Starts Using Artificial Intelligence to Moderate Comments. Is Facebook Up Next?
- Thanks to AI, you’ll never be ignored at a hospital again: “IBM’s Watson takes on the scut work at a Philadelphia hospital, so nurses can focus on what matters.”
- It’s Too Late to Stop China From Becoming an AI Superpower
An “AI” running on a $10 Raspberry Pi… to make your fridge “smart”? Give me a break! Scientists have been working on modeling what’s going on in the human brain, and according to this article:
It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory as each synapse was modeled individually.
To be fair, that’s what it takes to simulate all the neurons in a human brain, and it’s not clear that this is a good analog for an artificial intelligence. Do the math: 40 minutes of computing for 1 second of brain time means the simulation ran roughly 2,400 times slower than real time. Still, in 2015, the IEEE published an article saying that the human brain is 30 times faster than the world’s best supercomputers. Certainly you’re not doing that on a Raspberry Pi.
We’re only scratching the surface of AI right now. “Deep Learning” is the big new buzzword. It works like this: you feed it a big dataset, like a bunch of X-ray images, and you have an expert in the field, like a radiologist, pick examples from that dataset and categorize them (“cancer”, “not cancer”). You then let the deep learning program loose on the dataset to build a model that sorts all the images into those two groups. The expert then looks over the results and corrects any mistakes. Over many cycles, the software gets better and better at detecting cancer in an X-ray image.
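Here’s a minimal sketch of that label-train-correct loop, sometimes called active learning. It uses scikit-learn, and the feature vectors, labels, and the “expert” step are all hypothetical stand-ins for illustration, not any real radiology pipeline:

```python
# Hypothetical sketch of the expert-in-the-loop training cycle described above.
# The features, labels, and "expert" are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each X-ray has already been reduced to a 32-number feature vector.
features = rng.normal(size=(1000, 32))
true_labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)  # 1 = "cancer"

labeled = list(range(50))           # the radiologist labels 50 images to start
unlabeled = list(range(50, 1000))
model = LogisticRegression(max_iter=1000)

for cycle in range(5):
    # Build a model from everything the expert has labeled so far.
    model.fit(features[labeled], true_labels[labeled])

    # Let the model sort the rest, then find the images it's least sure about.
    probs = model.predict_proba(features[unlabeled])[:, 1]
    least_sure = np.argsort(np.abs(probs - 0.5))[:25]
    ask = [unlabeled[i] for i in least_sure]

    # The expert corrects those guesses (here we just reveal the true label).
    labeled.extend(ask)
    unlabeled = [i for i in unlabeled if i not in ask]

print("accuracy on images the expert never labeled:",
      round(model.score(features[unlabeled], true_labels[unlabeled]), 2))
```

The point isn’t the library; it’s that the whole loop is pattern matching over labeled examples, with the expert supplying the experience.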
Clearly this is pattern matching, and it’s something we humans are particularly good at. However, I’d also note that most animals are good at pattern matching. Your dog can learn to pick up subtle cues about when you’re about to take her for a walk. Even birds can learn patterns and adapt to them.
If your job can be replaced by a pattern-matching algorithm, isn’t it possible that your job doesn’t require that much intelligence? It’s more likely you relied on a lot of experience. When I walk out to a machine and the operator tells me that the motor’s making a weird sound when it powers up, chances are I’ve seen that pattern before, and I might be able to fix it in a few minutes. That’s pattern matching.
We hear a lot in the media about AI coming to take our jobs, but it’s more correct to say that Automated/Artificial Experience (AE) is what’s about to eat our lunch. Highly paid professionals such as medical doctors, lawyers, engineers, programmers, and technicians are in danger of deep learning systems stripping a lot of the “grunt work” out of their professions. That doesn’t mean the entire profession will have nothing left to do; after all, these systems aren’t truly intelligent. But we can’t hide from the fact that in large teams, some of the employees are likely only doing “grunt work.”
So don’t worry about AI just yet. Just make sure you’re using your real intelligence, and you should be safe.