The current state of AI

This is an update, or rather an addendum, to an earlier post of mine, “Why Every Machine Learning Algorithm Sucks”:

https://litrpgreviews.blog/2017/10/23/why-every-machine-learning-algorithm-sucks/

My central thesis is pretty simple: the problem with most of what we call “AI” or “Machine Learning” is not scientific in any sense. It’s a million monkeys banging on a million keyboards until they produce a result built on spurious correlations.

As always, XKCD sums up this entire article in one comic strip:

[XKCD comic: “Machine Learning”]

Not only is the result spurious, but we don’t even have the methods of introspection to figure out why it’s spurious. We can verify neither the inputs nor the outputs. The whole premise of science is reproducibility: we can measurably and demonstrably show that the same input produces the same output, or at least stays within a tolerable threshold of error.
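
To make the “spurious correlations” point concrete, here’s a minimal sketch (the numbers and names are my own invention, not any real system): generate a purely random target, throw a thousand equally random “features” at it, and keep whichever feature happens to fit best. That feature will look impressively predictive even though there is nothing to predict.

```python
import random

random.seed(0)  # deterministic, for the sake of the example

n_samples, n_features = 50, 1000

# A target with no structure at all: pure coin flips.
target = [random.randint(0, 1) for _ in range(n_samples)]

# A thousand equally random "features": the million monkeys.
features = [[random.randint(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

def accuracy(feature):
    # Fraction of samples where the feature happens to match the target.
    return sum(f == t for f, t in zip(feature, target)) / n_samples

# Keep whichever random feature fits the random target best.
best = max(accuracy(f) for f in features)
print(f"best 'training accuracy' on pure noise: {best:.0%}")
```

With enough features and few enough samples, something always correlates. Without a held-out test set, or a theory of why the winning feature should matter, this is exactly the spurious result described above.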

This over-hyping of AI isn’t new. If you’ve been around the rat’s nest that is Silicon Valley, you know that over-hyping and under-delivering is practically a mantra. There was a previous wave of AI hype in the 80s, which ended in what’s now called an “AI winter.”

Even before that, AI has always been over-hyped. Programming, my professional trade, was supposed to be obsolete decades ago. Computers were supposed to program themselves, eventually building ever-better versions of themselves until humans became obsolete.

Since programmers are still here, you can figure out that it hasn’t happened yet. Roger Schank, for example, criticizes IBM’s claims about Watson, pointing to the incredibly superficial readings Watson does of Bob Dylan lyrics:

“I wrote a book called The Cognitive Computer in 1984. I started a company called Cognitive Systems in 1981. The things I was talking about then clearly have not been read by IBM (although they seem to like the words I used.) Watson is not reasoning. You can only reason if you have goals, plans, ways of attaining them, a comprehension of the beliefs that others may have, and a knowledge of past experiences to reason from. A point of view helps too. What is Watson’s view on ISIS for example?

Dumb question? Actual thinking entities have a point of view about ISIS. Dogs don’t but Watson isn’t as smart as a dog either. (The dog knows how to get my attention for example.)”

Schank is here being way too hyperbolic for my tastes.

Part of the problem is a conceptual misconception about what intelligence is. The sort of intelligence that IQ tests measure (and if you’ve ever taken an online IQ test, that isn’t a real IQ test) is your specific capacity to learn new information. This means someone with a high IQ can be vastly less knowledgeable than someone with a lower IQ.

However, on average, people who have the capacity to learn more do learn more.  The result is that:

 intelligence is a powerful predictor of success but, on the whole, not an overwhelmingly better predictor than parental SES or grades.

This is roughly what we’d hope: the best and the brightest do better on average, regardless of their parents’ wealth.

Now, here’s where things get tricky. Intelligence is a domain-specific trait. That is, being good in one domain of intelligence doesn’t transfer to other domains. Knowing everything about physics doesn’t make you a good investor. Being a good author doesn’t make you a political science wonk (see J.K. Rowling). And being able to learn things quickly (having a high IQ) doesn’t mean you actually went and learned anything.

The way the AI boosters have gotten around this problem is the Church-Turing thesis.

https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

The Church-Turing thesis, loosely stated, says that any effectively computable function can be computed by a Turing machine; in other words, given unlimited time and memory, all sufficiently powerful models of computation are functionally equivalent.

This reminds me of studying AI from college textbooks, which assumed infinite power and infinite resources. The first thing you learn when you actually build AI for video games, for example, is that you have a very finite budget of resources and time, which makes most of what you learned in college absolutely useless for designing video game AI.

In the real world, you constantly face limitations on resources and time. Intelligence becomes adapted to the domain where it’s most useful, with the most useful heuristic shortcuts. Or, put simply, we learn things in the laziest way possible.

Game designers use whichever method of simulating AI requires the least resources. An example is the original Max Payne. People praised its AI, but most of it was the designers moving NPCs to pre-programmed locations. To the player, there was no distinction between an NPC genuinely deciding to hide from bullets and a pre-planned route that made the NPC appear to be moving away from them.
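
As a sketch of what that kind of scripting looks like (hypothetical code, not anything from Max Payne’s actual source): an NPC that “takes cover” by simply stepping along a designer-authored waypoint route whenever it’s shot at.

```python
# Hypothetical sketch: the NPC's "dodging" is just walking a route
# that a level designer placed by hand.

COVER_ROUTE = [(0, 0), (2, 1), (4, 1), (5, 3)]  # designer-placed waypoints

class ScriptedNPC:
    def __init__(self, route):
        self.route = route
        self.step = 0

    def on_shot_at(self):
        # No pathfinding, no threat model, no "decision": just advance
        # the script until it runs out, then hold position in "cover".
        if self.step < len(self.route) - 1:
            self.step += 1
        return self.route[self.step]

npc = ScriptedNPC(COVER_ROUTE)
positions = [npc.on_shot_at() for _ in range(5)]
print(positions)
```

From the player’s side of the screen, this is indistinguishable from an NPC that actually understands it is being shot at.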

Computers have a vastly different domain of knowledge than humans.  No human could realistically comb through 8 million lines and memorize them, or scan billions of web pages, but that’s trivial for computers.

Getting people who are not programmers to understand what’s a simple task for a computer and what’s a complex one is very difficult.

[XKCD comic: “Tasks”]

In general, Moravec’s paradox applies: the things humans are very good at (recognizing birds) are very difficult for computers to do, while the things that are very hard for humans (high-level math, combing through hundreds of billions of web pages) are very easy.
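
The “easy for machines” half is trivial to demonstrate; the “easy for humans” half (recognizing a bird) has no comparably short program. For instance, exact arithmetic over a million terms, far beyond anything a person could do by hand, takes a fraction of a second:

```python
import time

# Exact arithmetic over a million terms: trivial for a machine.
start = time.perf_counter()
total = sum(i * i for i in range(1_000_000))  # sum of squares 0^2..999999^2
elapsed = time.perf_counter() - start

# The closed form n(n+1)(2n+1)/6 confirms the brute-force pass was exact.
n = 999_999
assert total == n * (n + 1) * (2 * n + 1) // 6
print(total, f"computed in {elapsed:.3f}s")
```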

So what Schank is missing is that computers are not dumber than dogs; they are simply intelligent in a completely different domain than dogs are. (I suspect Schank knows this and is being hyperbolic.) What the Church-Turing thesis misses is that in the real world, we constantly face the constraints of time and resources. (What Aristotle would have called a confusion of potentiality with actuality, as in his treatment of Zeno’s paradoxes.)

Hell, a large portion of this blog is about economics, and economics boils down to one question: “Given infinite demand and limited supply, how do we allocate resources?” This is why Thomas Carlyle called economics the “dismal science”: it says we will never have enough resources to fulfill every demand.

Even your brain works like this. What you think of as your “conscious” brain isn’t nearly as responsible for your behavior as you’d like to think. Your conscious brain (the much-vaunted prefrontal cortex) has one primary function: it vetoes impulses sent from other parts of your brain.

Your brain receives dozens of signals from all over your body. That’s why you don’t notice things like your heartbeat, your breathing, or how much you actually itch all over. Your brain is run by an army of idiots, each of which sends signals about what it wants to do. If those signals conflict, or if they exceed a certain excitation threshold, your prefrontal cortex gets to veto them.

Why’s that important? It’s why alcohol makes people both honest and bad decision-makers. Alcohol suppresses your prefrontal cortex, so its veto power diminishes in proportion to how much you drink.
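
As a toy caricature of that veto model (my own framing, not an actual neuroscience simulation): subsystems shout impulses, and an impulse is acted on only if it out-shouts the prefrontal veto. Lower the veto strength, as alcohol does, and more impulses get through.

```python
# Toy caricature of the "army of idiots plus a veto" model above.
# Numbers and impulse names are made up for illustration.

impulses = {
    "keep breathing": 1.0,
    "eat the cake": 0.9,
    "say what you really think": 0.8,
    "scratch that itch": 0.4,
}

def acted_on(impulses, veto_strength):
    # An impulse wins only if it out-shouts the current veto strength.
    return [name for name, urge in impulses.items() if urge > veto_strength]

print(acted_on(impulses, veto_strength=0.7))  # sober: only the strongest urges
print(acted_on(impulses, veto_strength=0.2))  # several drinks in: nearly all of them
```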

** If you want more information on this, try The Idiot Brain and Thinking, Fast and Slow.

The point being, we really don’t know much about our own computational trade-offs or how they arose. Nor do we really understand even simpler animals. Some birds rival apes and other mammals in intelligence, for example, yet birds don’t have a neocortex. So we can’t even assume that brain functions reduce cleanly to their cellular substrate.

Anyway, the question comes back to: “Given that humans and computers have completely different domains of intelligence, how do we understand what AIs are doing?” The answer some have come up with is to design a system in which AIs argue with each other.

There are a lot of problems with this, even beyond the ones mentioned in the article.

The first is that it’s a massive assumption that we could get computers to both understand and converse in natural language.

The second is that even if we could, their specific domains of intelligence would be outside our comprehension. It would be like handing the average layman a scientific journal. Here’s a passage from a scientific article on cancer:

Early experiments showed that the in vitro binding of apoptotic rodent thymocytes by isologous peritoneal macrophages could be inhibited by addition of N-acetyl glucosamine or its dimer N,N’-diacetyl chitobiose, and it was suggested that lectinlike receptors on the surface of the macrophages might specifically recognize changes in the carbohydrates exposed on the surface of the apoptotic bodies. More recently, macrophage vitronectin receptors have been implicated in the recognition of neutrophil leukocytes undergoing apoptosis and evidence has been produced that the exposure of phosphatidylserine on the surface of apoptotic thymocytes and lymphocytes may lead to their specific recognition by macrophages.

Without domain-specific knowledge of the subject, it looks like gibberish. The machines would be speaking in a way completely outside any domain of knowledge we have access to, so it would still be meaningless communication to us. And that’s assuming you solve the natural language problem at all, because it’s more likely the AIs would invent their own internal language to communicate.

Natural languages are very messy, which is also part of Moravec’s paradox. Human children invent their own languages easily, complete with syntax and grammatical rules, but machines have a very hard time understanding human language. Every programming language is an attempt to work around that problem, which is why there are so many of them, and why programmers constantly keep inventing new ones.

So the tl;dr summary of the current state of AI is as follows:

  1. The people hyping it are not being honest about AI’s actual limitations.
  2. We are a very long way from reaching the singularity, if it’s attainable at all.
  3. The “if” above is there because no single domain of knowledge encompasses all other domains, except in theoretical discussions where there are no limitations imposed by time or space. Since we exist in time and space, the idea is meaningless outside those theoretical conversations.

So, as in the original article, this is why Facebook, Twitter, YouTube, Amazon, et al., who think they can solve all of their problems with AI, are woefully and fundamentally misguided.
