Why ‘artificial thought’?
First, because sometimes phrases are so overused that they lose their meaning. Such is the case with “personalization” and, lately, with “artificial intelligence”.
But more importantly, we need to understand what it means ‘to understand.’ (This is quite apart from any questions of ‘is the machine conscious,’ which I find an uninteresting question to ask.) The way to do that is via cognitive science and neuroscience, as well as introspection and observation of what exactly is happening when we formulate and exchange ideas.
Because things have multiple meanings, what we need is to find the basic functions whose combinations yield almost limitless results. Understanding in the brain is a series of fewer than a hundred steps – an argument made in On Intelligence: neurons take milliseconds to fire, yet we recognize things in a fraction of a second, so the brain can be running at most about a hundred serial steps.
Knowledge itself is fractal. That means what is happening from level to level in the brain is very likely a single repeated process – and that process is what understanding is. It is recognition, with variations built in.
We live in a world of specifics, which we apprehend through usable generalizations: each instance cannot be understood except as a member of a class. In this sense, generalization is built in.
What we need to do is figure out how to build it into a machine.
Oh. And my name is David Wolpe.