The hardest thing to understand in the world is the income tax.
Albert Einstein
If we can build a machine that can make sense of the income tax, then we have truly achieved intelligence. Or created a psychopath. Or both.
When I first became interested in, and involved with, computers in the 1970s, I was fascinated by machines that could "think." Of course it wasn't long before I found out that machines couldn't really think, but could only do what they were told; and you had to be very precise in telling them what to do.
Then I learned about "artificial intelligence." People were actually writing programs to make a computer think. I started reading more and more about it. Much of what I read I didn't understand, but I kept at it. At that time, in the early 80s, most of the researchers and most of the books and articles were focused on "symbolic processing." I'll have a lot more to say about that later. I didn't know much about how the brain worked, but I was pretty sure that this symbolic processing stuff was different. It confused me greatly. It seemed to me that if we wanted to make an "intelligent" computer we should start by copying how the brain worked. But these experts surely knew better than me, so I assumed I just didn't understand enough.
Then, lo and behold, sometime in the mid 80s I read an article in Dr. Dobb's Journal about neural networks. Ha! Someone else thought we should start by copying how the brain worked after all.
Right now, neural networks are all the rage. They have come (back) into fashion with a vengeance. I, unexpectedly, am using them in my day job. But most of the projects people are doing (including mine!) are a long way from intelligent! Mostly they are being used as fancy pattern-matching machines. They are really good at that, as we shall investigate later. But they are capable of much more. I will have a lot more to say about that later, too.
I've left a lot of things hanging already, and there will be more. But I don't want to get bogged down in a history or philosophy lesson. I want to build an intelligent machine. This isn't a tutorial. It's more like a blog of my experiments. So I will be jumping around a lot. For now, let's just dive in. We'll come back and discuss all this later, when the experiments are slow.
Let's jump in head first. We'll come back to all the why-fors and philosophy and whatnot later. We are going to need a neuron.
The brain and nervous system are made up of neurons. And we need a lot of them. How many? The very simplest of creatures that show anything resembling intelligent behavior have a few thousand. A human has around a hundred billion. We'll shoot for something in between for now.
A neuron is a special kind of cell. There are quite a few different types of neurons, but for our purposes they are much more alike than different. In essence, they are tiny little analog computers. I won't go into all the anatomy and physics and chemistry of them now. There are lots of good explanations on the web. Wikipedia has a good description. Like any computer, they input information, process it, and output results. They have memory, too.
Let's start from the inputs and work our way to the output. The inputs to a neuron are called synapses, and a typical neuron has around a thousand of them.
What happens when the pulse comes into the neuron? It goes into what amounts to an integrator: a little storage element that accumulates the value of the incoming pulses over time.
In addition to the storage element, every synapse has a weight, a programmable value that the integrated input gets multiplied by.
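To keep track of what the model needs so far, here is a minimal sketch of a single synapse in Python. It is only a sketch under my own assumptions: I treat time as discrete steps and give every incoming pulse the fixed value 1.0, and the class and method names are made up for illustration; all the text above really requires is a storage element that accumulates pulses and a weight that multiplies the result.

```python
class Synapse:
    """One input to a neuron: a storage element (integrator) plus a weight."""

    def __init__(self, weight=1.0):
        self.weight = weight        # programmable value; multiplies the integrated input
        self.integrated = 0.0       # the storage element; accumulates incoming pulses

    def receive_pulse(self, pulse_value=1.0):
        # Every pulse has the same fixed shape, so each arrival just adds a constant.
        self.integrated += pulse_value

    def output(self):
        # The weighted value handed on to the cell body.
        return self.integrated * self.weight

    def reset(self):
        # Cleared when the neuron fires.
        self.integrated = 0.0
```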
The cell body itself gets the weighted value from each synapse and continually adds them together. The cell body has another programmable value called the threshold. If the sum of the weighted inputs exceeds the threshold, the neuron fires an output pulse.
It is important that the height and length of the output firing pulse do not vary. It is a constant shaped pulse, no matter what the weighted input values may be; if the input exceeds the threshold, no matter by how much, the output pulse is the same. However, with a stronger input the threshold gets crossed more often, so the neuron fires more frequently; what varies is how often the pulses come, not what they look like.
The pulse comes out of the neuron on a long strand called the axon. The axon branches and connects to synapses on other neurons, so the output of one neuron becomes an input to the next.
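Putting the pieces together, here is an equally rough sketch of the whole neuron, built on the Synapse class above. The particular threshold value, the discrete step() update, and the connect_to() bookkeeping are my own assumptions for illustration, not anything the biology dictates.

```python
class Neuron:
    """Cell body plus synapses: sum the weighted inputs, fire when the threshold is crossed."""

    def __init__(self, num_synapses, threshold=10.0):
        self.synapses = [Synapse() for _ in range(num_synapses)]
        self.threshold = threshold   # programmable firing threshold
        self.targets = []            # synapses on other neurons that our axon reaches

    def connect_to(self, synapse):
        # The axon branches out to synapses on other neurons.
        self.targets.append(synapse)

    def step(self):
        # The cell body continually adds up the weighted values from every synapse.
        total = sum(s.output() for s in self.synapses)
        if total > self.threshold:
            self.fire()
            return True
        return False

    def fire(self):
        # The output pulse is always the same fixed shape, no matter how far
        # the sum exceeded the threshold; here it is just the constant 1.0.
        for target in self.targets:
            target.receive_pulse(1.0)
        # Firing resets the integrators.
        for s in self.synapses:
            s.reset()
```

Stepping everything on a fixed clock is a simplification; real neurons are continuous, asynchronous devices, but a discrete update is the easiest thing to experiment with.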
We said the human brain has about a hundred billion neurons. We also said each neuron has an average of a thousand synapses. That means there are a total of around a hundred trillion synapses, each with an integrator and a weight. Those connections certainly aren't random, but they aren't structured as simply as we might like. We can build a neuron easily enough. And if we have enough compute power we can build a hundred billion of them easily enough. But those hundred trillion connections are going to be difficult. We'll start with a few less.
To summarize, each neuron has around a thousand inputs, or synapses. Each one has an integrator and a weight. The integrator integrates the value of fixed-shape pulses coming from other neurons over time and multiplies the integrated value by a weight. The cell adds all those weighted inputs together and if the sum exceeds a threshold value, fires an output pulse down its axon. When it fires, the integrated values are reset to 0. A biological analog computer!
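As a quick sanity check on the sketch, here is one way to exercise it: wire two of these neurons together and drive the first one's inputs with a steady stream of pulses. The numbers here (three synapses, thresholds of 2.5 and 0.5) are arbitrary, picked only so that both neurons end up firing.

```python
a = Neuron(num_synapses=3, threshold=2.5)
b = Neuron(num_synapses=1, threshold=0.5)
a.connect_to(b.synapses[0])       # a's axon feeds one of b's synapses

for t in range(5):
    for s in a.synapses:
        s.receive_pulse(1.0)      # drive all of a's inputs each time step
    fired_a = a.step()            # a's sum is 3.0 > 2.5, so it fires and resets
    fired_b = b.step()            # b got a pulse from a, so it fires in turn
    print(f"step {t}: a fired={fired_a}, b fired={fired_b}")
```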