Experiments in Intelligence

The hardest thing to understand in the world is the income tax.
Albert Einstein

May 24, 2019

If we can build a machine that can make sense of the income tax, then we have truly achieved intelligence. Or a psychopath. Or both.

When I first became interested in, and involved with, computers in the 1970s, I was fascinated by machines that could "think." Of course it wasn't long before I found out that machines couldn't really think, but could only do what they were told; and you had to be very precise in what you told them.

Then I learned about "artificial intelligence." People were actually writing programs to make a computer think. I started reading more and more about it. Much of what I read I didn't understand, but I kept at it. At that time, in the early 80s, most of the researchers and most of the books and articles were focused on "symbolic processing." I'll have a lot more to say about that later. I didn't know much about how the brain worked, but I was pretty sure that this symbolic processing stuff was different. It confused me greatly. It seemed to me that if we wanted to make an "intelligent" computer we should start by copying how the brain worked. But these experts surely knew better than me, so I assumed I just didn't understand enough.

Then, lo and behold, sometime in the mid 80s I read an article in Dr. Dobb's Journal about neural networks. Ha! Someone IS trying to build a copy of the brain! And I've been focused on that ever since. Mind you, I'm no expert, but I'm learning. These posts will chronicle my experiences.

Right now, neural networks are all the rage. They have come (back) into fashion with a vengeance. I, unexpectedly, am using them in my day job. But most of the projects people are doing (including mine!) are a long way from intelligent! Mostly they are being used as fancy pattern matching machines. They are really good at that, as we shall investigate later. But they are capable of much more. I will have a lot more to say about that later, too.

I've left a lot of things hanging already, and there will be more. But I don't want to get bogged down in a history or philosophy lesson. I want to build an intelligent machine. This isn't a tutorial. It's more like a blog of my experiments. So I will be jumping around a lot. For now, let's just dive in. We'll come back and discuss all this later, when the experiments are slow.

Neurons. Lots of Neurons

Let's jump in head first. We'll come back to all the whys and wherefores and philosophy and whatnot later. We are going to need a LOT of neurons, so maybe we should start by making some.

What is a Neuron?

The brain and nervous system are made up of neurons. And we need a lot of them. How many? The very simplest of creatures that show anything resembling intelligent behavior have a few thousand. A human has around a hundred billion. We'll shoot for something in between for now.

A neuron is a special kind of cell. There are quite a few different types of neurons, but for our purposes they are much more alike than different. In essence, they are tiny little analog computers. I won't go into all the anatomy and physics and chemistry of them now. There are lots of good explanations on the web. Wikipedia has a good description. Like any computer, they input information, process it, and output results. They have memory, too.

A "generic" neuron image, showing how signals propagate. Image from the US National Institutes of Health.

Let's start from the inputs and work our way to the output. The inputs to a neuron are called synapses. On average, a single neuron has around 1000 synapses. The signal that comes into a synapse is essentially a digital pulse. It has a fixed height and width. It is either a 0 (no pulse) or 1 (single pulse). The pulses are somewhere in the neighborhood of 1 millisecond long. If the inputs are digital, why did I call it an analog computer? The pulses can come in at any rate, from 0 pulses per second to the maximum before they overlap, something like 500 pulses per second. That pulse repetition rate is analog. In the communication world it is called Pulse Frequency Modulation.
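
To make that concrete, here is a little C sketch of the idea: the analog quantity lives entirely in the pulse rate, never in the pulse shape. The 500 pulses per second ceiling is the figure from above; the function name and the exact mapping are just mine, for illustration.

    #include <stdio.h>

    #define MAX_RATE 500.0  /* ~1 ms pulses start to overlap beyond this */

    /* Map an analog value in [0.0, 1.0] to a pulse rate in pulses/second. */
    double value_to_rate(double value)
    {
        if (value < 0.0) value = 0.0;
        if (value > 1.0) value = 1.0;
        return value * MAX_RATE;
    }

    int main(void)
    {
        /* Every pulse is identical; only how often they come varies. */
        printf("0.1 -> %3.0f pulses/s\n", value_to_rate(0.1));
        printf("0.5 -> %3.0f pulses/s\n", value_to_rate(0.5));
        printf("1.0 -> %3.0f pulses/s\n", value_to_rate(1.0));
        return 0;
    }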

What happens when the pulse comes into the neuron? It goes into what amounts to an integrator or low-pass filter. The pulse "charges up" the storage element some amount. After the pulse is gone, the stored value will slowly fade away toward 0. It is very similar to a resistor-capacitor (RC) circuit with a time constant. Every synapse has one of these integrator/storage elements. For convenience, we will call the maximum stored value a "1.0". At any given time a value between 0 and 1.0 can be stored there.
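
Here is a rough C sketch of one of those integrator/storage elements, treated as a simple leaky integrator stepped once per millisecond. The decay factor and the charge added per pulse are numbers I made up for illustration, not measured values.

    #include <stdio.h>

    #define DECAY  0.95   /* per-millisecond decay factor (assumed) */
    #define CHARGE 0.25   /* how much one pulse charges the store (assumed) */

    /* Advance the integrator by one 1 ms tick. 'pulse' is 1 if an input
     * pulse arrived during this tick, 0 otherwise. */
    double integrate(double stored, int pulse)
    {
        stored *= DECAY;                 /* stored value fades toward 0 */
        if (pulse) stored += CHARGE;     /* an incoming pulse charges it up */
        if (stored > 1.0) stored = 1.0;  /* cap at the defined maximum */
        return stored;
    }

    int main(void)
    {
        double v = 0.0;
        for (int t = 0; t < 10; t++) {
            v = integrate(v, t < 3);  /* three pulses, then silence */
            printf("t=%d ms  stored=%.3f\n", t, v);
        }
        return 0;
    }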

Each synapse has an integrator on the input connection.

In addition to the storage element, every synapse has a weight associated with it. The weight is a multiplier, between -1.0 and +1.0. These weights are often adjustable. The main method we have to "program" neurons is by adjusting the weights of the synapses. Between the value stored for each synapse and the multiplying weight associated with it, the effective value going into the main part of the neuron can range from -1.0 to +1.0.
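
In code, then, a synapse boils down to two numbers: the integrated value and the weight. A tiny C sketch, with made-up values:

    #include <stdio.h>

    typedef struct {
        double stored;  /* integrated input, 0.0 to 1.0 */
        double weight;  /* multiplier, -1.0 to +1.0; what we "program" */
    } Synapse;

    /* Effective value this synapse contributes to the cell body. */
    double synapse_output(const Synapse *s)
    {
        return s->stored * s->weight;  /* ranges over -1.0 to +1.0 */
    }

    int main(void)
    {
        Synapse s = { .stored = 0.8, .weight = -0.5 };  /* inhibitory */
        printf("effective input = %.2f\n", synapse_output(&s));  /* -0.40 */
        return 0;
    }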

The cell body itself gets the weighted value from each synapse and continually adds them together. The cell body has another programmable value called the threshold. The threshold value will normally range between 0 and 1.0. At any time, if the sum of all the synaptic inputs exceeds the threshold, the neuron will fire, outputting a single pulse. That output clears all the integrated input values, effectively setting them back to 0.0.

It is important that the height and length of the output firing pulse do not vary. It is a constant-shaped pulse, no matter what the weighted input values may be; if the input exceeds the threshold, by no matter how much, the output pulse is the same. However, with stronger inputs the threshold may be reached again sooner and the pulses will happen more often. This creates an analog value from digital output pulses.
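
Putting the last two paragraphs together, here is a C sketch of the cell body's job: sum the weighted synaptic values, compare against the threshold, fire, and reset. It repeats the Synapse type from above so it stands alone; the four synapses, their values, and the threshold are arbitrary choices of mine.

    #include <stdio.h>

    #define NUM_SYNAPSES 4

    typedef struct { double stored, weight; } Synapse;

    typedef struct {
        Synapse synapses[NUM_SYNAPSES];
        double threshold;   /* programmable, normally 0.0 to 1.0 */
    } Neuron;

    /* Returns 1 if the neuron fires a (fixed-shape) pulse this tick. */
    int neuron_step(Neuron *n)
    {
        double sum = 0.0;
        for (int i = 0; i < NUM_SYNAPSES; i++)
            sum += n->synapses[i].stored * n->synapses[i].weight;

        if (sum > n->threshold) {
            /* Firing clears all the integrated inputs back to 0. */
            for (int i = 0; i < NUM_SYNAPSES; i++)
                n->synapses[i].stored = 0.0;
            return 1;  /* one pulse, always the same shape */
        }
        return 0;
    }

    int main(void)
    {
        Neuron n = { .threshold = 0.5 };
        n.synapses[0] = (Synapse){ 0.9, 0.7 };   /* strong excitatory input */
        n.synapses[1] = (Synapse){ 0.4, -0.3 };  /* weak inhibitory input */
        printf("fired: %d\n", neuron_step(&n));  /* 0.51 > 0.5, so it fires */
        return 0;
    }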

The pulse comes out of the neuron on a long strand called the axon. Just as the neuron has an average of around a thousand inputs, each axon will connect to around a thousand other neurons, on average. A single output pulse from the neuron may affect over a thousand other neurons. The speed at which it reaches those neurons varies, based on distance and the diameter of the axon. That time is usually a small number of milliseconds. That time delay, combined with the time integration in the synapses, causes the neuron response to cover a varying period of time. That is very useful, as we will see later.
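
One easy way to model that conduction delay in software is a small ring buffer with one slot per millisecond tick: a pulse goes in now and comes out a few ticks later. The 5 ms delay below is an arbitrary choice, just to show the mechanism.

    #include <stdio.h>
    #include <string.h>

    #define DELAY_MS 5

    typedef struct {
        int line[DELAY_MS];  /* pulses in flight, one slot per millisecond */
        int head;
    } Axon;

    /* Push this tick's output pulse in; get the pulse arriving now out. */
    int axon_step(Axon *a, int pulse_out)
    {
        int arriving = a->line[a->head];
        a->line[a->head] = pulse_out;
        a->head = (a->head + 1) % DELAY_MS;
        return arriving;
    }

    int main(void)
    {
        Axon a;
        memset(&a, 0, sizeof a);
        for (int t = 0; t < 8; t++)  /* fire at t=0; it arrives at t=5 */
            printf("t=%d ms  arriving=%d\n", t, axon_step(&a, t == 0));
        return 0;
    }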

We said the human brain has about a hundred billion neurons. We also said each neuron has an average of a thousand synapses. That means there are a total of around a hundred trillion synapses, each with an integrator and a weight. Those connections certainly aren't random, but they aren't structured as simply as we might like. We can build a neuron easily enough. And if we have enough compute power we can build a hundred billion of them easily enough. But that hundred trillion connections is going to be difficult. We'll start with a few less.

To summarize, each neuron has around a thousand inputs, or synapses. Each one has an integrator and a weight. The integrator integrates the value of fixed-shape pulses coming from other neurons over time and multiplies the integrated value by a weight. The cell adds all those weighted inputs together and if the sum exceeds a threshold value, fires an output pulse down its axon. When it fires, the integrated values are reset to 0. A biological analog computer!
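
And here is the whole thing end to end, stripped down to a single synapse with a weight of +1.0: drive the neuron with input pulses at two different rates and count the output pulses. Rate in, rate out. All the constants are assumptions picked for illustration.

    #include <stdio.h>

    #define DECAY     0.95   /* per-millisecond decay (assumed) */
    #define CHARGE    0.25   /* charge added per input pulse (assumed) */
    #define THRESHOLD 0.5    /* firing threshold (assumed) */

    /* One synapse with a weight of +1.0 driving one cell body. Run for
     * 'ticks' milliseconds with an input pulse every 'interval' ms;
     * return how many output pulses fired. */
    int run(int ticks, int interval)
    {
        double stored = 0.0;
        int fired = 0;
        for (int t = 0; t < ticks; t++) {
            stored *= DECAY;                 /* stored value fades */
            if (t % interval == 0)
                stored += CHARGE;            /* input pulse charges it */
            if (stored > 1.0) stored = 1.0;
            if (stored > THRESHOLD) {        /* weight is +1.0 here */
                fired++;
                stored = 0.0;                /* firing resets the input */
            }
        }
        return fired;
    }

    int main(void)
    {
        printf("pulse every 2 ms -> %d output pulses in 100 ms\n", run(100, 2));
        printf("pulse every 8 ms -> %d output pulses in 100 ms\n", run(100, 8));
        return 0;
    }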


© 2019 William R Cooke