William T Powers
1133 Whitfield Rd
Northbrook IL 60062
A scientific revolution is just around the corner, and anyone with a personal computer can participate in it. The last time this happened, 250 years ago, the equipment was the homebrew telescope and the subject was astronomy. Now, astronomy belongs just as much to amateurs as to professionals. This time the particular subject matter is human nature and, in broader scope, the nature of all living systems. Some ancient and thoroughly accepted principles are going to be overturned, and the whole direction of scientific investigation of life processes will change.
The key concept behind this revolution is control theory. Control theory has been developing for almost 40 years, and has already been proposed (by Norbert Wiener) as a revolutionary concept. It has not been easy, however, to see just how control theory can be made part of existing scientific approaches, although many people have tried. Most of these attempts have tried to wedge control theory into existing patterns of thought. To apply a new idea in such a way, while ignoring the new conceptual scheme it makes possible, is to deny the full potential of the new idea.
Many life scientists who have tried to use control theory have tried to imitate the engineering approach, dealing with human beings as parts of man-machine systems instead of as complete control systems in their own right. Others have used control theory directly to make models of human and animal behavior, but have concentrated on minor subsystems, failing to see that the organism as a whole can be dealt with in terms of the same principles. The result has often been a strange mixture of concepts - a patchwork instead of a system.
Strangely enough, many engineers who do understand control theory haven't done much better. Here the problem is that these engineers tend to accept the basic concepts developed by biologists and psychologists, and to use control theory to explain cause-effect relationships they are told exist - but which in fact do not exist. We will start this development by looking at something called behavior, which biologists and psychologists have assured engineers is very important, thereby leading the engineers astray.
What is all this supposed to mean? A lot is meant, though in different ways. Roboticists, for example, are trying to develop machines which will imitate human organization, and so are the artificial intelligence experimenters. But whence came the description of the system they are trying to model? Basically, it came from the life sciences. If the life sciences are using the wrong model, it would be essential to know that before much more labor is invested in imitating an imaginary creature.
Perhaps the most general reason control theory is interesting is that it concerns people. There aren't many sciences left in which important discoveries can be made by amateurs working at their own tables. Control theory opens up an entirely new field of experimentation, a kind that has never been done before in psychology or any other life science.
All that is needed by amateurs who want to participate in these developments is a basic grasp of control theory, an understanding of the procedures that go with it, some basic equipment, and curiosity about human nature. I shall now provide the first two items on that list. The rest is up to you.
The word behavior is used frequently - we hear about behavioral science, behavior modification, behavior therapy. For example, Science News now has a "Behavior Column"; it was formerly the publication's "Psychology Column". An innocent bystander might conclude that any word this important must have a universally accepted definition, but that is not true. Behavior is a slippery concept.
Here is an example of a person behaving. Chip Chad is seated in front of a teletypewriter pounding keys. What is he doing?
Is he alternately tensing and relaxing muscles in his arms? Yes. Is he moving his fingers up and down? Yes. Is he typing strings of symbols? Yes. Is he adding a return instruction that he forgot at the end of a subroutine? Yes. Is he writing a program for plotting stock market prices? Yes. Is he making a little extra money for a vacation? Yes. Is he justifying his hobby to his family? Yes.
Clearly, each description of what Chip is doing is, in fact, an accurate description of the very same collection of actions. Which one then, is Chip's behavior? Obviously, they all are expressions of behavior.
Suppose Chip decides that he really doesn't need a subroutine, and substitutes a jump instruction for the return. Now, he is writing the program - obviously the same program - by using a different behavior. Or suppose he buys an input device, and continues working on the subroutine by speaking letters into a microphone. Now he is using different muscles and movements, but he is still doing the same behaviors farther down the list. How could he be doing the same thing by means of doing something different?
Or consider Chip driving a car along a straight road. He is consciously steering. This happens to be a gusty March day, and every five minutes the wind changes speed and direction. Chip is an experienced driver, and continues to steer the car down the road in a straight line. If we look at what his arms are doing, however, we find that they are moving the steering wheel in an apparently random pattern, now centered, now far to the right, now far to the left. Somehow he is managing to produce a constant steering-the-car behavior by means of a behavior that is widely varying. The path of the car doesn't correlate with the position of the steering wheel at all.
Scientists have always thought of behavior as the final product of activity inside the organism. The brain sends commands to the muscles, which create forces, which produce movements, which generate the stable and repeatable patterns we recognize as behavior. There is, in principle, a chain of cause and effect, with the events at the end of the chain being caused by the events at the beginning. Such scientists would say that in the example with Chip at the computer keyboard, we were simply attending to various stages in that chain.
How does that picture fit in with Chip's driving the car in a straight line? The direction in which the car is going is affected by his movements of the steering wheel, and is farther out along the chain of causes and effects. But the wind adds its effects on the direction of the car after Chip's effects in the chain. Somehow he is varying his actions so that when their effects are added to the effects of the randomly varied wind, the result is something constant. If we had been thinking of driving the car in a straight line as Chip's behavior, we have to revise that idea: the direction of the car depends just as much on the wind as on Chip.
It may seem that we have simply moved our definition of behavior closer to Chip. But consider how he moves the steering wheel. The wheel moves when the forces reflected from the front wheels do not exactly balance the forces created by his muscles. As the car goes along, the roadbed tilts and various bumps and dips cause changes in the reflected forces. The wheel may be turned far to the right, into the crosswind, on the average, but maintaining the wheel in that position requires that his muscles be constantly changing tension, as the reflected steering wheel forces fluctuate. We have the same problem as before: Chip produces a varying output that affects the steering wheel, but the steering wheel is also being affected by forces that are independent of what Chip is doing with his muscles. Yet the sum of the muscle forces and those extraneous forces is zero, except when the steering wheel is changing position.
Even if we back up another step and call Chip's muscle tensions his behavior, we have trouble. Muscles are made to contract by signals from the nervous system, but muscles don't respond the same amount to a given signal every time they are used. They fatigue; other muscles interfere with them; joint angles change so that a given muscle tension can produce different amounts and directions of force. The only behavior that Chip produces which can be attributed entirely to Chip and not in part to his environment consists of the nerve signals that leave his nervous system and enter his muscles.
If we want to be completely accurate about Chip's behavior, we should consider the output signals from his nervous system, and leave everything else in his environment. That is what we will do, but by doing that we create the biggest problem of all.
A scientist studying a behavior hopes to learn enough about its rules to predict when it will occur. Under the old approach, this means varying factors in the environment and looking for behaviors that correlate with those variations. But if we try to describe behavior in terms of the output signals from the nervous system, all correlations disappear. Oh, maybe we have a knee jerk or a sneeze left over, but we have lost all the regularities that give us some reason to talk about behavior in the first place. We would never guess, from looking at Chip's neural signal outputs, that the result of them would be a straight path of a car that is being forced one way and another by a variable crosswind.
When you pause and reflect upon what has been covered so far, you will realize that we are already deep into control theory, even though we haven't discussed it by name yet. We have dealt with the subject as such because the discussion concerns a fundamental difficulty with the very concept of behavior, especially the concept that behavior is the final product of an organism's inner activities. As we see how this difficulty gets resolved, we will be forced into control theory no matter how we approach the solution. One reason biologists or psychologists have not developed control theory is that they have clung stubbornly to the idea that behavior is part of a causal chain that starts in the nervous system (or in stimuli that cause activity in the nervous system) and propagates outward from there according to physical laws of cause and effect. That is why people design robots in the same way, and why those robots have yet to behave in a way that is convincingly alive. In order to solve this problem instead of just brushing it aside, we have to admit that the causal chain in which people have believed for so long simply does not exist, and never has existed.
Figure 1 sums up the problem we are dealing with. At every stage of events following the outputs from Chip's nervous system, disturbances come into play, adding to the effects that can be traced to the neural signals. As we go farther to the right of the figure, we might expect that any regularities in Chip's output signals would be lost (ie: that each successive variable would show more and more random variations).
Exactly the opposite is true. The farther to the right we go in figure 1, the less random variation occurs. The variable farthest to the right, the relationship of the car to its lane, can remain constant within a few inches for hour after hour. We find that this is the most stable variable in the chain, and that as we go backward up the chain toward Chip's nervous system, the random-looking variations get larger and larger. At the beginning of the chain the variations become totally unpredictable.
Consider figure 2; we added the effects of external events on a nervous system. According to the old picture still fundamental to most life sciences, external events act on the physical structure of the nervous system (along with internal events such as changes in body chemistry), and cause outputs to occur. Those outputs have consequences which show up at the end of the chain as behavioral patterns. To study the organization of behavior, you manipulate the external events, and look for regular behaviors that result (of course, you find them).
About the Author William T Powers has been exploring the meaning of control theory for studies of human nature since 1953, when he was working as a health physicist at the University of Chicago. Since that time he has spent a number of years (to 1960) in medical physics, and then another 13 (to 1975) as Chief Systems Engineer for the Department of Astronomy at Northwestern University. His occupation has been designing electronic, optical, and mechanical systems for science. Powers' book, Behavior: The Control of Perception (Aldine, 1973) was quite well received. At the moment he consults in one-of-a-kind electronics.
But in figure 2 we also see those random disturbances. The only way to get away from them is to make sure that the environment remains absolutely stable (ie: that nothing happens which can interfere with behavior). The standard approach requires eliminating those disturbances, for the simple reason that if they are not eliminated, the experimental results disappear into the background noise. Thus by eliminating disturbances as completely as possible, under the guise of establishing standard (ie: control) experimental conditions, some scientists have swept this basic problem under the rug. They have also done away with the principal tool we have for understanding how these systems really work. If there are no disturbances, then the idea of a cause-effect chain running from external events through the organism to behavior seems to hold up, more or less. As soon as natural disturbances are allowed to occur, we find that the overall connection from external event to final behavior remains as clear as ever; but the model of what happens in between falls to pieces with a loud crash.

Closing the Loop
There seems to be nothing wrong with figure 2; nothing, that is, except that it cannot account for the regularities of behavior. There is something wrong; something has been left out. Let's focus on the final variable in the chain, the position of the car relative to the lane. What variable that could affect Chip's senses, do you suppose, would have the most to do with his manipulations of the steering wheel? The position of the car relative to the lane. This variable is both the consequence of Chip's actions, and the main source of sensory information that could cause him to act (see figure 3).
Psychologists have gone this way before. They have tried to make sense of this situation by supposing that the behavioral variable is somehow different from the stimulus variable. If the position of the car relative to its lane is the behavioral variable, then perhaps the onset of a change in the visual image of the road is the stimulus variable. That leads to the idea of a chain of stimuli and responses. The car drifts in its lane; that stimulates Chip's nervous system to make a response, which affects the physical position of the car in its lane, which causes a new change in the stimulus, and so on around and around.
There are several severe difficulties with this explanation. In the first place, there is no way to separate the visual image from the position of the car; these are just two ways of talking about one whole physical situation in which a certain collection of interdependent variables changes simultaneously. The alternation between stimulus and response is completely imaginary, as anyone who drives knows. If causes and effects really were sequential, and chased themselves around and around the loop, it is unlikely that Chip would keep the car on the road for more than ten seconds. In part 2 we'll do a proper simulation in BASIC, and you will see that when the system is designed to behave sequentially, the result is most likely to be violent oscillations.
There is no reason at all to make an artificial distinction between the position of the car on the road as a behavioral response and as the stimulus which causes the response. There is only one physical situation.
Now we begin to draw a diagram of a proper control system. In figure 4, three physical quantities are shown, an output quantity, an input quantity, and a disturbing quantity.
The output quantity corresponds to an output of Chip's that is entirely due to himself (ie: perhaps due to the neural signals reaching his muscles or to some variable farther down the chain of figure 2, revealed when disturbances are known or can be legitimately eliminated).
The input quantity is the variable that is stabilized by the variations in Chip's output. Thus we call the input quantity, here, the position of the car relative to its lane. Of course, by that we mean whatever it is about that position that can be a sensory input to Chip (ie: probably a visual image of the hood of the car and the road beyond, framed in the windshield).
Between the output quantity and the input quantity is placed a feedback function. This function expresses the physical links that exist between Chip's output quantity and the input quantity. In the case of a moving car, if the output quantity were the angle of the steering wheel, which it might be if the angle is also a controlled quantity, then the effect of the wheel angle would be a continual change of car position, and the feedback function would have to include at least one time integration. The feedback function is simply a description of the physical processes which give each magnitude and direction of the output quantity a contribution to the state of the input quantity.
In figure 4 we also include disturbances as an integral part of the diagram of the system. The disturbing quantity in this case would be wind velocity and direction, and the disturbance function connecting it to the input quantity would express the way in which aerodynamic laws convert wind velocity into effects on the car's position in its lane.
The state of the input quantity, therefore, can be expressed in terms of all effects which contribute to it. We have shown only the output quantity and the disturbance due to wind. Many other disturbances - low tires, or tight wheel bearings, or gradation in the road - could also contribute to the state of the input quantity at the same time. All disturbances, however, can be reduced to a single one, since no matter what the cause of the disturbance, the only effect that matters is the effect on lateral position of the car.
Chip himself can be represented by a function, a function that converts the sensed position of the car into a steering wheel angle. This system function (system being short for behaving system) will surely contain delays, nonlinearities, and even variations of its parameters. At first glance it may seem a terrible oversimplification to reduce a whole human being to a simple input/output box, but the situation isn't that bad. We are centering this diagram around the input quantity, not around Chip as a whole; therefore the "Chip box" does not wholly represent him, but only that part which reacts to changes in the input quantity by altering the output quantity. Furthermore, the Chip box (ie: the system function) is not quite as simple as it seems even after being simplified a great deal.
The functions connecting the variables in this closed loop can be extremely complex, and even to approach this system analytically will obviously require some approximations. This is not the place to justify every simplification; sometimes complex mathematics are required to reach a simple conclusion. I'll drop some hints along the way about how the simplified model is generated and why it works, but if you really want to get into this, study a text on servo-mechanism design.
Let us conclude by building a working simulator of Chip driving the car. This is just a hint of what this four-part series of articles will develop. Building the simulator requires building some special numbers into the program without any explanation at present. The point is to enjoy the simulation, and get used to the idea that everything in a control loop happens at the same time.
We will assume that the steering wheel angle to left or right of center is Chip's output quantity, and that there are no disturbances that can interfere at this point. This output quantity will be called A.
Under the influence of A alone, the car would drift sideways at a rate proportional to A, for small deviations from the center of the lane. Designating the crosswind velocity as W, if W were the only influence acting, the car would drift sideways at a rate proportional to W (in this somewhat oversimplified universe). In the BASIC program we will assume that each iteration corresponds to a fixed amount of elapsed time, so the distance D that the car will drift during any one iteration is simply the sum of the two influences acting on it (line numbers correlate with listing 1):
7 D=K1*W+K2*A

The position, I, of the car relative to its lane will change by an amount D on each iteration:
8 I=I+D

Now I must introduce a detail: if we just had Chip respond proportionally to the deviation of car position, we would have to make his muscles so flabby that hardly any response would occur, unless we wanted to demonstrate self-immolating oscillations. We have to take care of two destabilizing factors. First, the feedback function is essentially an integrator, and so puts a lag into the control process. This alone would not cause a problem, but Chip also contains a transport lag; he cannot actually produce an output at the same instant that the input occurs, nor can our program, since it is evaluating equations one at a time. The integration lag we take care of by adding to the position I (which Chip senses) the variable D, which is approximately the first derivative of the input quantity. He senses the input quantity with some emphasis on its rate of change, which is actually a realistic model of human perception. This part of the stabilizing of the control action is done in step 9:
9 A1=K3*(I+0.8*D)

We have computed a variable A1, the angle which the wheel would assume if Chip reacted instantly. But to handle the transport lag, we must slow his response, letting only a fraction KS (between 0 and 1) of it occur during any one iteration. That is what step 10 does:
10 A=A+KS*(A1-A)

This slowing technique will be used in the larger simulator next time. To see how it works, set A1 to 10.00, KS to 0.25, and A to 0, and then simply keep doing step 10 with pencil and paper. A will gradually approach the value of A1 from any starting point.
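If you would rather let a machine do the pencil work, the same exercise can be run in a few lines of Python (a modern translation of step 10, written for this reprint; it is not part of the original listing):

```python
# Step 10 of listing 1: A = A + KS * (A1 - A)
# Repeated, this makes A approach A1 exponentially: each iteration
# closes a fixed fraction KS of the remaining gap.

def slow_approach(a, a1, ks, iterations):
    """Apply the slowing step repeatedly and return the history of A."""
    history = [a]
    for _ in range(iterations):
        a = a + ks * (a1 - a)
        history.append(a)
    return history

history = slow_approach(a=0.0, a1=10.0, ks=0.25, iterations=10)
# A climbs steadily toward A1 = 10: 0.0, 2.5, 4.375, 5.78..., never overshooting.
```

Each step leaves a fraction (1 - KS) of the gap, so after n iterations the remaining error is (1 - KS) raised to the n, times the starting gap - a transport lag in one line of arithmetic.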
The program in listing 1 asks for a wind velocity, and then proceeds to do ten iterations of the control loop, printing wheel angle A and car position deviation I each time. A positive number means the wind is blowing, the wheel is cocked, or the car has moved to the right. If you want to follow the program for more than ten iterations, give it the same wind again. It always starts where it left off.
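Since the loop is only four statements long, it is easy to restate in modern Python. The values of K1, K2, K3, and KS below are not Powers' unexplained "special numbers" - they are guesses of mine, with K2 made negative so that cocking the wheel opposes the drift (negative feedback), chosen to give stable control:

```python
# Assumed constants, not the article's: wind gain, wheel gain (negative,
# so the wheel opposes the drift), Chip's sensitivity, and his sluggishness.
K1, K2, K3, KS = 1.0, -1.0, 1.0, 0.25

def drive(w, iterations=10, a=0.0, i=0.0):
    """Iterate the four steps of listing 1 against a constant crosswind w."""
    for _ in range(iterations):
        d = K1 * w + K2 * a       # step 7: sideways drift this iteration
        i = i + d                 # step 8: position relative to the lane
        a1 = K3 * (i + 0.8 * d)   # step 9: wheel angle for an instant Chip
        a = a + KS * (a1 - a)     # step 10: transport-lagged actual response
        print(f"wheel angle A = {a:8.3f}   position I = {i:8.3f}")
    return a, i

drive(w=10.0)
```

With these guessed constants the wheel angle homes in on the value that cancels the wind, so the drift D dies out; because the control here is purely proportional, the car settles at an offset from lane center rather than exactly on it. Raising K3 shrinks the offset - until the loop begins to oscillate, which is exactly the hazard the text warns about.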
In part 2, we will begin exploring a model of the kind described in figure 4 and start the somewhat mind-boggling task of retraining the intuition to think in closed loop terms instead of straight through cause and effect. There is a big difference. We'll see that, in general, control systems control what they sense, not what they do. We'll discover something called a reference signal, which functions in a control system exactly the way an inner purpose has always been supposed to function. In part 2, we'll see how perception figures into control. And we'll start working with a more extended BASIC simulator than the tiny one in listing 1. Parts of this simulator will be suitable for building into the computer part of a robot, should anyone want to carry matters that far.
Note on North Star BASIC

The method of accessing strings in North Star BASIC is different from that of Microsoft and other BASICs. Translate as follows:

A$(1,n) becomes LEFT$(A$,n)
A$(n) becomes RIGHT$(A$,n)
A$(m,n) becomes MID$(A$,m,n)
Listing 2: A control system simulator written in North Star BASIC.

1 PRINT "PROGRAM TWO: SIMULATION OF CONTROL SYSTEM BEHAVIOR"
2 PRINT
3 PRINT "AFTER PROMPT (COLON), YOU MAY TYPE"
4 PRINT "'PLOT XXXXXX', WHERE XXXXXX MEANS"
5 PRINT "ANY ONE OR MORE CHARACTERS FROM THE"
6 PRINT "SET P,E,R,I,O,D, IN ANY SEQUENCE."
7 PRINT
8 PRINT "YOU MAY ALSO SET PARAMETERS BY TYPING IN"
9 PRINT "THE PARAMETER SYMBOL IMMEDIATELY FOLLOWED"
10 PRINT "BY AN EQUAL SIGN AND THE VALUE (NO SPACES)."
11 PRINT
12 PRINT "PARAMETERS ARE L, K1, K2, S1, S2, O, P, R, AND D"
13 PRINT "DEFAULT VALUES 16, 1, 2, 1, 1, 0, 0, 0, AND 15"
14 PRINT
15 PRINT "TO RUN, TYPE '.' (INITIALIZE), OR '/' (DON'T INIT)."
16 PRINT
17 K1=1
18 K2=2
19 S1=1
20 S2=1
21 P0=0
22 O0=0
23 R0=0
24 D0=15
25 V(4)=1
26 V(5)=1
27 V(6)=1
28 INPUT "DISPLAY WIDTH: ",W
29 W=W-2
30 C=W/2 \ REM CENTER OF DISPLAY
31 DIM Z$(W),M$(W),A$(20),B$(6),K(6),U(6),E$(72)
32 B$="PERIOD"
33 L1=15
34 FOR J=1 TO W
35 Z$(J,J)=" "
36 NEXT J \ REM CREATE BLANK FILE
37 DEF FNI(X) \ REM INPUT FUNCTION
38 P=P+S1*(K1*X-P)
39 RETURN P
40 FNEND
41 DEF FNO(X) \ REM OUTPUT FUNCTION
42 O=O+S2*(K2*E-O)
43 RETURN O
44 FNEND
45 DEF FNF(X)=0.5*X \ REM FEEDBACK FUNCTION
46 DEF FND(X)=0.8*X \ REM DISTURBANCE FUNCTION
47 REM
48 REM COMMANDS FOR SETTING PARAMETERS
49 GOTO 51
50 A$="" \ IF E1>LEN(E$) THEN 51 ELSE 53
51 INPUT ": ",E$ \ A$="" \ E1=1
52 IF LEN(E$)<>0 THEN 53 \ PRINT \ GOTO 51
53 E1$=E$(E1,E1) \ E1=E1+1
54 IF E1$="," THEN 57 ELSE IF E1>LEN(E$) THEN 56
55 A$=A$+E1$ \ GOTO 53
56 A$=A$+E1$
57 IF A$="." THEN 95
58 IF A$="/" THEN 99
59 IF A$<>"?" THEN 62
60 PRINT \ PRINT %7F3,"K1=",K1," K2=",K2," S1=",S1," S2=",S2
61 GOTO 51
62 IF LEN(A$)<5 THEN 72
63 IF A$(1,5)<>"PLOT " THEN 72
64 A$=A$(6)
65 FOR J=1 TO 6 \ REM TAG VARIABLES TO BE PLOTTED
66 V(J)=0
67 FOR K=1 TO LEN(A$)
68 IF A$(K,K)=B$(J,J) THEN V(J)=1
69 NEXT K
70 NEXT J
71 GOTO 50
72 IF LEN(A$)<3 THEN 91
73 IF A$(1,3)<>"K1=" THEN 75
74 K1=VAL(A$(4)) \ GOTO 50
75 IF A$(1,3)<>"K2=" THEN 77
76 K2=VAL(A$(4)) \ GOTO 50
77 IF A$(1,3)<>"S1=" THEN 79
78 S1=VAL(A$(4)) \ GOTO 50
79 IF A$(1,3)<>"S2=" THEN 81
80 S2=VAL(A$(4)) \ GOTO 50
81 IF A$(1,2)<>"O=" THEN 83
82 O0=VAL(A$(3)) \ GOTO 50
83 IF A$(1,2)<>"P=" THEN 85
84 P0=VAL(A$(3)) \ GOTO 50
85 IF A$(1,2)<>"R=" THEN 87
86 R0=VAL(A$(3)) \ GOTO 50
87 IF A$(1,2)<>"D=" THEN 89
88 D0=VAL(A$(3)) \ GOTO 50
89 IF A$(1,2)<>"L=" THEN 91
90 L1=VAL(A$(3)) \ GOTO 50
91 PRINT "???", \ GOTO 50
92 REM
93 REM SIMULATION AND PLOTTING LOOP
94 REM
95 P=P0 \ REM ENTRY WITH INITIALIZATION
96 O=O0 \ D=D0 \ R=R0
97 I=FNF(O)+FND(D)
98 E=R-P \ GOSUB 109 \ REM PLOT INIT. CONDITIONS
99 D=D0 \ REM ENTRY, NO INITIALIZATION
100 R=R0
101 FOR L=1 TO L1 \ REM CONTROL LOOP SIMULATION
102 I=FNF(O)+FND(D)
103 P=FNI(I)
104 E=R-P
105 O=FNO(E)
106 GOSUB 109 \ REM CALL PLOTTING SUBROUTINE
107 NEXT L
108 GOTO 50
109 REM
110 REM PLOTTING SUBROUTINE
111 REM
112 U(1)=P+C
113 U(2)=E+C
114 U(3)=R+C
115 U(4)=I+C
116 U(5)=O+C
117 U(6)=D+C
118 PRINT
119 M$=Z$ \ REM CLEAR OUTPUT BUFFER
120 M$(C+1,C+1)="." \ REM MARK SCREEN CENTER
121 FOR J=1 TO 6 \ REM LOAD BUFFER
122 U=INT(U(J)+.5)+1
123 IF U<1 THEN U=1
124 IF U>W THEN U=W
125 IF V(J)=1 THEN M$(U,U)=B$(J,J)
126 NEXT J
127 PRINT M$, \ REM PRINT BUFFER
128 RETURN
999 END
In part 1, we went through a chain of reasoning that ended with the conclusion that the behavior of an organism is not what it seems. Behavior appears to be at the end of a cause and effect chain that starts with the inputs to a nervous system, but that chain is subject to disturbances that can occur after the output of the nervous system. Nevertheless, the behavior at the end of this chain is stable and repeatable, while events closer to the organism become less predictable as we get nearer to the neural signals at the output of the nervous system. By analyzing an example in which a car is maintained in the center of its lane, we saw that this measure of behavior belongs at both the cause and effect ends of the chain, and that if this variable is shown only once in the diagram, a closed loop results.
We are going to look in more detail at the behaving system in this closed loop, to see how it might be organized to produce the results seen. We will start using a simulator written in BASIC which allows the user to vary many parameters of the control system to see the effects on its actions. Human behavior will not be mentioned much in this installment; there are many fundamentals to cover before we can get back to the main purpose of this series. The object here is to retrain the intuition so that the closed loop way of seeing behavior becomes as natural as the old straight through cause and effect way.
The simulator (listing 2) is set up to demonstrate the properties of a standard sort of control system organization. We will first look at that organization, then at the simulator itself, and finally at some details of the operation of the control system. You can do much more experimenting than we will discuss here.
Figure 5 is a diagram of a typical control system. Almost every control system can be expressed in this form, although in the real system, functions that are shown here as separate are often combined into one physical entity. The symbols for functions and variables are those which appear in the BASIC simulator.
The behaving system is entirely above the boundary line. All that is not the behaving system (or systems inside the organism at a higher level, not considered here) is called the environment of the system. Variables inside the system will always be called signals, and variables in the environment will always be called quantities.

In the environment we have three quantities mentioned in part 1. The input quantity is a physical variable that the system can sense. The state of this quantity is the result of all influences acting on it, which in our limited universe means the influence from the system's own output and one representative disturbing quantity that can vary independently of what the system does. The system's output is represented by the output quantity. The input quantity is called I, the output quantity O, and the disturbing quantity D.
The output and disturbing quantities are separated in space from the input quantity, and they influence the input quantity through properties of the intervening environment. The connection that translates the state of the output quantity into an influence on the input quantity is called the feedback function, symbolized in BASIC as FNF. The function that translates the state of the disturbing quantity into another influence on the input quantity is the disturbing function, symbolized FND. If the input quantity is associated with some physical object, then FNF and FND may both contain properties of that object (eg: its mass). There are less redundant ways to handle this in special cases.
The meaning of the previous paragraph is summed up in line 102: I = FNF(O) + FND(D). The state of the input quantity is the sum of the influences from the output quantity and the disturbing quantity. In the real world, both the output quantity and the disturbing quantity may have many effects other than those on I, but those effects are irrelevant to the operation of this system (though perhaps not to the designer or user of the system, if it is artificial). We have therefore considered everything about the environment that is of interest here.
Above the line we have the behaving system. We cross the boundary at the input function, FNI. This is the function which turns the state of an external quantity, I, into the magnitude of a perceptual signal, P. Both sensors and computing processes may be involved in a complex input function. The outcome, however, is always the magnitude of a single signal, whatever it represents. This signal can only increase or decrease; we will always work with one-dimensional control systems, treating multidimensional control phenomena by using multiple control systems. The perceptual signal is the system's internal representation of the external world - its only such representation.
Line 103 expresses the definition of the input function and the way it relates the input quantity and perceptual signal: P = FNI(I).
Inside the system is another signal, the reference signal, R. In living systems, this signal is generated elsewhere in the organism; it is not accessible from outside. The reference signal, along with the perceptual signal, enters a function called the comparator, which subtracts one signal from the other and emits an error signal, E, representing the signed difference of magnitudes. It does not matter which signal is subtracted from which, but for uniformity we will always treat the reference signal as the positive input and the perceptual signal as the one subtracted from it. Thus, a positive error signal always means that the reference signal is larger than the perceptual signal. This function does not have to be generalized, as nonlinearities and amplification can always be absorbed into one of the other functions.
Let's run through the simulator quickly before we start using it, to see how this control organization operates.
Lines 1 thru 16 are user instructions. Lines 17 thru 27 initialize the system in a way that will be used to illustrate a point. Lines 28 thru 33 do more initializing, and ask for the width of your display. Lines 34 thru 36 create a blank string in case your BASIC doesn't set dimensioned strings initially to spaces.
Lines 37 thru 46 define the various functions of the control system. If your BASIC can't do multiline functions, you can substitute subroutines here. The idea is to make it easy to try out different kinds of functions in the control system.
Lines 49 thru 91 comprise the interpreter, which accepts character strings and sets initial conditions and parameters before each run. Variables are initialized and constants are set by typing a string of the form A=m or An=m (no spaces; terminated by a carriage return). To set up the plotter, the statement is PLOT XXXXXX, where XXXXXX is one or more characters from the set P, E, R, I, O, and D, in any sequence. The plotter comes up set to plot P, E, and R. If you forget the last values of the parameters K1, K2, S1, and S2, type ? and they will be printed out. We will eventually define them.
The control system itself is simulated from line 95 to line 108. Entering the simulator at line 95 initializes the perceptual and output variables to values given to the interpreter. Entering at line 99 runs the simulation from the conditions left at the end of the last run. This is taken care of by the two run commands in the interpreter: a dot (.) means run with initialization, and a slash (/) means run without initialization. All commands require a carriage return termination.
The plotting subroutine goes from line 112 to line 128. Its operation deserves a note, since it was arrived at after some more normal schemes were rejected for being too slow. When the interpreter is given a string of symbols to set up the plotting, a table is set up (V(j)) in which a 1 means plot and a 0 means don't plot. When the plotter is entered, it transfers all six variables to another table, U(j). The output buffer is then cleared, and a short loop scans the V table, picking up variables from the U table when V(j)=1, and putting the symbol into the output buffer in a position corresponding to the value of the variable. Then the output buffer is printed out. This eliminates sorting the variables by size or printing the line as many times as there are variables. This method nicely cures the fundamental "rheumatism" of BASIC, as it is able to plot about two lines per second on my Polymorphics VTI display.
When two variables fall on the same spot, the variable that actually appears is the latest one in the series PERIOD. Thus far it has always been easy to figure out where a missing variable is hidden.
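The buffer scheme just described carries over readily to other languages. Here is a minimal Python sketch of the same idea; the 64-column width and the symbols passed in are illustrative assumptions, not values from the BASIC listing:

```python
def plot_line(symbols, values, width=64):
    """Place each variable's symbol at a column proportional to its value.

    As in the BASIC plotter, the line is built once in a buffer and
    printed once; when two variables land on the same column, the
    symbol later in the sequence overwrites the earlier one.
    """
    buf = [" "] * width
    center = width // 2
    buf[center] = "."                # center mark when nothing else is there
    for sym, v in zip(symbols, values):
        col = max(0, min(width - 1, center + int(round(v))))
        buf[col] = sym
    return "".join(buf)

# One plotted line with P at 0 (center) and D at +15 units to the right:
print(plot_line("PD", [0, 15]))
```

The point of the design is the single pass: one blank buffer, one symbol placement per plotted variable, one print, instead of sorting variables or reprinting the line once per variable.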
Once we have a set of variables connecting functions together, and an overall arrangement, we can treat the system by assembling it piece by piece. Let's look at the pieces we have, represented by the four statements in listing 2 from line 102 to 105:
102 I = FNF(O) + FND(D)
103 P = FNI(I)
104 E = R - P
105 O = FNO(E)
Looking at figure 5, we can see that these four statements lead us clockwise around the closed loop. I is the result of combining the outputs of the feedback and disturbance functions. It becomes the input to the input function, producing a value of the perceptual signal P. P is one of the inputs to the comparator, which produces the error signal E. E is the input to the output function that produces O, the output quantity. The output quantity is the input to the feedback function, which leads us back to the start.
It might seem that all we have to do now is to supply some specific forms for the functions, and turn the system on to see what it will do. In a sense, this is right. If this were an analogue computation, we might even get a correct idea of how the system works. However, it is unlikely that anyone who hasn't done this before would plug in the right functions to make a digital computer give us anything more than a fairy tale. It is so important to understand this point that I have written the simulator to come up initialized in order to illustrate it.
Therefore line 104 represents the comparator without using a function; it is the comparator function itself: E = R - P.
The error signal drives the output of the system via the output function, FNO. The output of the system, therefore, depends not on the input quantity or the perceptual signal alone, but on the difference between the perceptual signal and the reference signal. The output function translates a signal inside the system into a quantity outside it, according to whatever rule is described by FNO. If the error signal changes sign, the output quantity also changes; in other words, we assume that output functions have no constant term. Any such constant term would have the same effect as a reference signal, creating an offset in the overall system response. Not every system can handle error signals and output quantities that go through zero and thus change sign, but the principles remain the same in the region where the system works.
Line 105 expresses the operation of the output function: O = FNO(E). This closes the loop of cause and effect since the output quantity appears in line 102 where the input to the system is calculated.
If the system functions are properly designed for the properties of the system's environment, this entire closed loop will seek an equilibrium state. Our simulator will let us look at time-varying effects, but for the most part we will be concerned with steady state relationships.
Once we have seen how time variations come into the picture, we will concentrate on variations that occur slowly enough that the system and its environment never get far from a steady state relationship. This is the whole trick in grasping how control systems work. If you allow yourself to become embroiled in the interesting details of stabilization, or interested in the limits of performance in the presence of large and rapidly changing disturbances, you may learn a lot about one control system, but you will miss the organizational features that are obvious only when the system is not being subjected to unusual stresses. We will be concerned mainly with the normal range of operation, the range within which this system can behave very nearly like an ideal control system. Once that mode of operation is understood, there is plenty of time to explore the limits of operation. (See "Anatomy of the Simulator" text box).
Let us start off by assuming that we have a simple linear system. The input function is a multiplier of 1, the comparator is already
simple and linear, the output function is a multiplier of 2, the feedback function is a multiplier of 0.5, and the disturbance function is a multiplier of 0.8. These choices are dictated partly by the need to keep variables from falling on each other when we plot them. The simulator initializes D to 15.
Our four system equations, with these values substituted, now look like this:
I = 0.5 x O + 0.8 x D = 0.5 x O + 12   (1)
P = I                                  (2)
E = R - P                              (3)
O = 2 x E                              (4)

This system of equations is iterated during a simulation of behavior.
The above is a pretty simple system of equations. So why can't we just solve it algebraically and skip the rest? I suggest, in fact, that you do solve it (by successive substitutions). Solve for the value of the perceptual signal in terms of R and D. You'll get P = I = (R + 0.8 x D)/2.
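If you want to check your substitutions, the simultaneous solution can be written out in a couple of lines of Python (a sketch of the algebra, not of the simulator itself):

```python
# Successive substitution: O = 2*(R - P) and I = 0.5*O + 0.8*D,
# so P = I gives P = (R - P) + 0.8*D, i.e. 2*P = R + 0.8*D.
def steady_state_P(R, D):
    """Simultaneous (steady state) solution of equations 1 thru 4."""
    return (R + 0.8 * D) / 2.0

# With R = 0 and D = 15, the perceptual signal (and input quantity)
# should settle at 0.8 * 15 / 2 = 6 units.
print(steady_state_P(0.0, 15.0))
```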
Ready for a shock? Your computer can't come up with that solution! Let's fire up the BASIC simulator, which is initialized according to equations 1 thru 4 above, and plot I, D, and O. Type RUN, and answer the question with a reply that tells the width of your display. After the colon prompt appears, type a period (.) followed by a carriage return.
I trust nobody had trouble with that.
The dot says "do a plotting run after initializing the variables." A slash (/) would say "do the run from where the last run left off." The result can be found in figure 6.
The disturbance is set to a steady +15 units, and the reference signal is initialized to 0. According to the algebraic solution above, the input quantity should be a steady 0.8 x 15/2, or 6 units, to the right of center (dots indicate center when nothing is there). It is clear that something else happened. The whole system is in a state of endless oscillation. (When variables fall on top of each other in a plot, the visible one is the latest in the sequence PERIOD.)
Nature has a way of slapping your wrist when you forget something important. Our wrist has just been slapped. Naturally we do not get the same result that algebra gives: the algebraic solution comes from treating all of those relationships simultaneously. Our computer program is treating them one at a time. The algebra says that if one variable changes, they all change. The computer, being a purely sequential machine, thinks it can change one variable without changing the others. If the physical system being modeled is of that nature - if it, too, is a sequential state machine - then the computer will produce a correct picture of behavior. But, if the system being modeled works in terms of continuous variables, even in part, the computer will turn it into a sequential-state machine and analyze that kind of system instead of the one we actually have. That is what has happened here. We forgot to tell the computer that these variables can't change as fast as the computer can compute.
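The wrist-slap is easy to reproduce in any sequential language. Here is a Python sketch of the four statements iterated one at a time, with no slowing anywhere (the constants are those of equations 1 thru 4):

```python
# Naive one-at-a-time iteration of the loop equations. This is what the
# computer does if we forget that physical variables cannot change as
# fast as the computer can compute.
R, D = 0.0, 15.0
P, O = 0.0, 0.0
history = []
for _ in range(6):
    I = 0.5 * O + 0.8 * D    # line 102: input quantity
    P = I                    # line 103: perceptual signal (K1 = 1)
    E = R - P                # line 104: comparator
    O = 2.0 * E              # line 105: output quantity (K2 = 2)
    history.append(P)
print(history)   # P never settles: 12, 0, 12, 0, ...
```

Instead of settling at 6 units, the perceptual signal bounces endlessly between 12 and 0: the sequential-state machine the computer is actually simulating has no stable equilibrium, even though the algebra does.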
In order to make this simulated system behave the way the algebra says it should, we have to slow down changes in one or more variables to take account of the fact that we are dealing with real, physical variables
and not abstract numbers. The simulator does this in the input and output functions, lines 37 thru 40 (input) and 41 thru 44 (output). We will be basically dealing with a linear system in which both the input and output functions are constants of proportionality. As you can see from listing 2, however, there's a little more to it than that.
Consider line 42: O = O + S2*(K2*E - O). The O on the left side is the new value of that quantity after this program step has been executed. On the right side, O indicates the last value of the output quantity. We recognize K2*E as a calculation of the output quantity as if it were simply proportional to the error signal, E. The expression in parentheses, therefore, is the difference between this calculated new value and the old value of O. This is how much the output quantity would change if it could change instantly.
This calculated amount of change is multiplied by S2, a slowing factor, and the result is added to the old value of O. We calculate the amount of change that an instantly reacting system would produce, but allow only a fraction S2 of it to occur on any one iteration. S2 is a positive number between zero and one. We've put a low-pass filter into the output function, without affecting the steady state proportionality constant.
The same thing is done for the input function. A slowing factor S1, between zero and one, acts to slow P down. We need only one slowing factor to make this simulator behave realistically, but there is provision for two, so that you can explore the effect of having two if you wish. In all the plots to follow, we'll use a modest slowing factor of S1 = 0.5 in the input function, and essentially all of the required slowing in the output function. Once you get the hang of this you can put slowing factors into any of the functions.
The simulator is initialized with S1 and S2 set to 1, which reduces O + S2 x (K2 x E - O) to O + K2 x E - O, or just K2 x E (no slowing at all). The same is done for the input function. Let's set them to other values and see what happens. The values of S1 and S2 can be set by typing S1=n or S2=n and a carriage return:
:S1=0.5
:S2=0.2
:. (run with initialization)
Suddenly we see nice, smooth relationships (figure 7). If you measure, you'll see that the input quantity, I, ends up just six units to the right, the same solution given by the algebraic approach.
Does this mean we can just use algebra to analyze a control system? Not at all. We won't delve into this, but the algebraic solutions are valid only if the differential equations which really describe the system have steady state solutions. Then the algebraic solutions are the steady state solutions. In our simulator, we see all the time variations that lead toward the steady state, and the algebra says nothing about these. By putting the slowing factors into our calculations we have caused this system to seek a steady state. Therefore, it is the stability of the system that tells us we can use algebra, not the other way around. Predicting stability can become a messy process. We fiddle around with slowing factors until we get stability, which is more or less how Nature does it anyway.
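The whole slowed loop condenses into a few lines of Python that mirror the BASIC statements (using the same constants as the run above; this is a sketch, not a replacement for the simulator and its plotting):

```python
def simulate(R, D, K1=1.0, K2=2.0, S1=0.5, S2=0.2, steps=100):
    """Iterate the slowed control loop; return the final (I, P, O)."""
    P = O = 0.0
    for _ in range(steps):
        I = 0.5 * O + 0.8 * D        # feedback and disturbance effects
        P = P + S1 * (K1 * I - P)    # slowed input function
        E = R - P                    # comparator
        O = O + S2 * (K2 * E - O)    # slowed output function
    return I, P, O

I, P, O = simulate(R=0.0, D=15.0)
print(round(I, 3), round(P, 3), round(O, 3))   # settles near 6, 6, -12
```

With the slowing factors in place the iteration converges to the steady state the algebra predicted: I = P = 6, with the output at -12 so that 0.5 x (-12) + 12 = 6.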
We have now established the fact that using natural logic and following causes and effects around the closed loop as a sequence of events will lead to a wrong prediction of control system behavior. This immediately eliminates three-quarters of what biologists, psychologists, neurologists, and even cyberneticians have published about control theory and behavior. We are just beginning to see that one must view all the variables in a control system as changing together, not one at a time. This is what I mean by retraining the intuition. Cartesian concepts of cause and effect, and Newtonian physics, have trained us to think along directed lines. What we need to do to understand control systems is to learn how to think in circles.
The simulator is run from the keyboard, using commands that tell it which variables to plot and what values of variables and parameters to start with. The instructions can be given one at a time, terminated by carriage returns, or they can be given in a continuous string with commands separated by commas. The latter is useful for altering parameters in the middle of a plot in order to see their effects.
The only time a space is permitted in a command or string of commands is when it is separating the word PLOT from the string of variable symbols to be plotted.
In order to tell the simulator what variables to plot, type: PLOT XXXXXX
where XXXXXX means a string of 1 to 6 symbols from the set PERIOD. The order of the symbols makes no difference. When two or more symbols land on the same plot, the one that you see is the latest in the series PERIOD, regardless of the order in which they were given.
To start a plotting run, type a period followed by a carriage return or comma if initialization is to occur first, and type a slash (/) if the run is to start from the conditions at the end of the previous run. Initializing creates one extra line of plot showing the initial conditions.
The parameters and variables that can be set are as follows:
L   Number of lines to be plotted in any plotting run.
K1  Steady state proportionality factor of the input function.
S1  Slowing factor for the input function; positive and between 0 and 1.
K2  Steady state proportionality factor of the output function.
S2  Slowing factor for the output function; positive and between 0 and 1.
O   Initial value of output quantity.
P   Initial value of perceptual signal.
R   Setting of reference signal.
D   Magnitude of disturbing quantity.
Examples (the colon is the prompt from the computer; always terminate a string with a carriage return):
Set L to 16:
:L=16

Set D to 0, run without initializing:
:D=0,/   (or :D=0 then :/)

Plot P, E, and R; set D to 0; plot 2 points after initializing; set D to 10; plot 13 points from the previous conditions:
:PLOT PER,D=0,L=2,.,D=10,L=13,/
The program is written so that after a plot is completely done (a complete string has been interpreted), the prompt character appears to the right without a carriage return. That allows a 16 point plot to be shown on a 16 line video display screen without the final carriage return bumping the first line off the screen. If you want your next string to start at the left, just hit a carriage return.
To find out the values of K1, K2, S1, and S2 when you forget them, type "?" followed by a carriage return and they will be printed.
Figure 8 shows the control system and its environment as we will be dealing with it from now on. Let's start with some definitions:
Loop Gain means the product of all the steady state factors encountered in one trip around the closed loop, counting the comparator as a factor of -1. In the initial setup, K1 was 1, K2 was 2, and the feedback function FNF was a multiplier of +0.5, so the loop gain was -1. The sign of the loop gain is the sign of the feedback; we have (and will continue to have) negative feedback.
Error Sensitivity is the factor K2, the steady state proportionality factor in the output function FNO. This number expresses how much output will be generated by a given amount of error signal.
Input Sensitivity is the factor K1, the steady state proportionality factor in the input function FNI. This number expresses how much perceptual signal will be generated by a given amount of input quantity.
We are going to perform a series of experiments with this control system in order to arrive at some useful rules of thumb for thinking about how control systems work. These rules are approximations, but by doing the experiments and seeing how good the approximations are, you will learn to think precisely about control phenomena, even when using approximate language.
We will set the system parameters to give a loop gain of -10. As a way of summarizing where we are (refer to figure 8), the commands are
:K1=1     Input sensitivity = 1.
:K2=20    Error sensitivity = 20.
:S1=0.5   Input slowing factor = 0.5.
:S2=0.07  Output slowing factor = 0.07.
:R=0      Reference signal = 0.
:O=0      Output initialization = 0.
:P=0      Perception initialization = 0.
:D=0      Disturbance = 0.
Type those commands, and the system is now set up in a "home base" condition. Remembering that the comparator is equivalent to the factor of -1 and the feedback function is permanently set to be a factor of +0.5, this combination of parameters gives a loop gain of 1 x (-1) x 20 x 0.5 = -10.
There are two fundamental rules of thumb: a control system keeps its perceptual signal matching its reference signal, and the output of a control system cancels the effects of disturbances on the input quantity. We will take these up in order.
We're looking at the system with no disturbance acting (D = 0). If you want to be sure that everything stays at zero, type PLOT PERIOD . followed by a carriage return. You will see a row of Ds, D being the last symbol in the sequence PERIOD and hence the only one visible when all variables are at zero.
Now we will plot just the reference signal and the perceptual signal. The first two points will be done with the initial conditions set up above. The reference signal will then be set to +25 units, and the plot will be continued for 13 more points. Since this plot will commence with initialization (the dot command), an extra line showing the initial conditions will be plotted first. This makes a total of 16 lines, which will fit on most video displays. Of course, if you're doing this on paper you don't have to worry about the number of points plotted. Here is the command string:
Before discussing this, let's do another run of 13 points (figure 9), setting the reference signal to -25 units and continuing without initialization (the slash command, /):
It is clear that the perceptual signal comes to a steady state quite close to the magnitude of the reference signal, whatever the reference signal may be. The question is, how critically does this tracking effect depend on the input sensitivity and error sensitivity?
Let's leave the reference signal at -25 and do a run in which the error sensitivity is doubled at the start, and the input sensitivity is doubled halfway through the run. We will start from the previous conditions. The loop gain will now be -40 instead of -10.
:K2=40,L=8,/,K1=2,/
To insure that everything is working correctly, let's flip the reference signal to +25 units (figure 10):
While there is an effect on the way the tracking takes place, the only effect of these rather drastic changes in input and error sensitivity is to make the tracking a little better. What about a decrease in these parameters?
:L=16,K1=0.5,K2=10,/,R=25,/ (loop gain now -2.5)
Figure 11 shows that the approximation P = R isn't very accurate any more. For loop gains smaller in magnitude than about 10 (negative), the approximation begins to lose accuracy.
You will notice that doubling the error sensitivity, which doubles the amount of output generated by a given error, does not double the amount of output that actually occurs. Far from it. When, for any reason, the loop gain goes up, the steady state error simply gets smaller, assuming that the system remains stable. This fact does violence to the popular idea that the brain commands muscles to produce behavior. If that were the case, doubling the sensitivity of a muscle to the nerve signals reaching it ought to produce twice as much muscle tension. Nothing of the sort happens, unless you've lopped off the rest of the nervous system, particularly the feedback paths.
As long as the loop gain is sufficiently large and negative (-10 or more negative will do for a number), a stable control system will match its perceptual signal nearly to its reference signal, regardless of the reference setting. We are ignoring, of course, transient effects.
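The steady state algebra makes this rule of thumb quantitative. With no disturbance, the loop equations reduce to P = G x (R - P), where G is the magnitude of the (negative) loop gain, so P = G x R / (1 + G). A sketch of the same experiments by formula:

```python
def steady_P(R, G):
    """Steady state perceptual signal with D = 0 and loop gain -G.

    From P = G*(R - P): the higher the loop gain magnitude G, the
    closer P tracks the reference signal R.
    """
    return G * R / (1.0 + G)

for G in (40.0, 10.0, 2.5):
    print(G, round(steady_P(25.0, G), 2))
# G = 40 tracks R = 25 closely; G = 2.5 falls noticeably short,
# matching the loss of accuracy seen in figure 11
```

Note that the slowing factors do not appear at all: they shape the transient, but the steady state tracking depends only on the loop gain.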
All of this was done with the disturbance set to zero. Now let us set the reference signal to zero, and check the second fundamental rule of thumb.
This rule requires some interpretation. It says, for the sake of brevity, that (with the reference signal constant) a change in the output quantity is equal and opposite to (the minus sign) a change in the disturbing quantity. Generally, the output and disturbing quantities will affect the input quantity through different physical paths. In our model, the output quantity acts through a multiplier of 0.5, and the disturbance through a multiplier of 0.8. The rule has to be interpreted to mean that the effects of the changes on the input quantity are equal and opposite. We will see this demonstrated.
We will now plot the output quantity, O, the disturbing quantity, D, and the input quantity, I (to make the above clear). The reference signal could be left where it is, but to avoid confusion let's set it to zero for this set of plots. The loop gain is set to -10.
:PLOT OID,R=0,K1=1,K2=20,L=1,D=0,.,L=15,D=15,/

Let this plot run out, then:

:D=-15,/
There is some lurching back and forth in figure 12, but in the steady state the behavior of the input quantity shows that the effect of the disturbance is essentially cancelled by the final effect of the output quantity.
If you did some measuring on the plot, you would find that the final value of the output quantity is very close to 8/5 of the value of the disturbing quantity. This follows from three facts: the input quantity ends up nearly at zero; one unit of output has 0.5 unit of effect on the input quantity; one unit of disturbance has 0.8 unit of effect on the input quantity. This is the kind of reasoning that helps in understanding how a control system works.
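The same reasoning can be written out as a formula. With R = 0, solving the loop equations gives O = -K1 x K2 x 0.8 x D / (1 + G), where G is the loop gain magnitude; as G grows, O approaches -(0.8/0.5) x D, the value at which the output's effect exactly cancels the disturbance's effect. A sketch:

```python
def steady_O(D, G, K1=1.0, F=0.5, H=0.8):
    """Steady state output quantity with R = 0 and loop gain -G.

    F and H are the feedback and disturbance multipliers; K2 is
    recovered from G = K1*K2*F.
    """
    K2 = G / (K1 * F)
    return -K1 * K2 * H * D / (1.0 + G)

for G in (10.0, 100.0, 1000.0):
    print(G, round(steady_O(15.0, G), 2))
# approaches -(0.8/0.5) * 15 = -24 as the loop gain grows
```

At our working gain of -10 the output settles near -21.8, already within about ten percent of the ideal -24, which is why measuring the plot gives "very close to 8/5" of the disturbance.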
The primary observation about a control system is always the existence of an input quantity which is stabilized against disturbances by variations in the output quantity. If the input quantity is held essentially
constant (in the steady state), then one can deduce the relationship between disturbances and the system's output quantity simply from observing the properties of the system's environment. On inspection, an external observer can see both the feedback function and the disturbance function, here multipliers of 0.5 and 0.8 respectively. For any given disturbance, the effect on the input quantity for a constant output quantity can be calculated on purely physical grounds. Since the input quantity remains undisturbed in the steady state, one can then look at the connection between the output quantity and the input quantity, and deduce how the output quantity must change to account for the fact that the input quantity doesn't change.
Thus, in order to predict how this system will react to any external disturbance, it is necessary only to know that the system is a control system and to look closely at its environment. The kind and amount of reaction follow from the nature of the feedback and disturbance functions which are properties of the visible environment.
Most important, as far as the life sciences are concerned, the form and amount of reaction do not depend on any property of the control system; not enough to make any difference. Therefore, when you apply a stimulus and see a response, you are using the organism as a complicated analogue computer in order to study the physics of the local environment. This is not what the life sciences have thought they were doing.
All that remains to wrap up this section is to see the effects of disturbances when the reference signal is set to different values. This will lead to the definition of a useful technical term: the reference level of the input quantity (see figure 13):

:PLOT RIOD,D=0,R=0,L=1,.,R=12,L=15,/,D=15,/
If you have a 16 line video display this will scroll past you, losing the early parts, but no matter. The first event is that the reference signal is set to 12, and the input quantity moves essentially to +12. The output quantity goes to +24 in order to accomplish this. Then the disturbing quantity goes to +15, which has the exact effect on the input quantity that +24 units of output have. As a result, the output quantity drops to zero -- exactly zero, if you look at the numbers.
In effect, the disturbance, by itself, has enough effect to make the perceptual signal match the reference signal. Looking at figure 8, you can see that this would mean a zero error signal and no drive to the output function. So, whenever the output drops to zero, we know that the perceptual signal is matching the reference signal, even if we can't see it.
In our model right now, the input sensitivity is 1, so the perceptual signal is numerically equal to the input quantity. That's a coincidence, since the units are different: physical units outside, impulses per second inside. Even if K1 wasn't 1, the output would still drop to zero when P = R. Thus, we can give a special name to the particular value of input quantity (however created) that brings the error signal, and hence the output quantity, to zero: the reference level of the input quantity. The reference signal clearly determines what this reference level will be, but so does the form of the input function.
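This "output drops to exactly zero" result is easy to check by iteration. Here is a hedged Python sketch of the same two-stage run, using the home base constants from above (K1 = 1, K2 = 20, S1 = 0.5, S2 = 0.07):

```python
def run(R, D, K1=1.0, K2=20.0, S1=0.5, S2=0.07, steps=400, P=0.0, O=0.0):
    """Iterate the slowed loop from given starting conditions."""
    for _ in range(steps):
        I = 0.5 * O + 0.8 * D
        P = P + S1 * (K1 * I - P)
        E = R - P
        O = O + S2 * (K2 * E - O)
    return P, O

P, O = run(R=12.0, D=0.0)             # output works to hold I near +12
P, O = run(R=12.0, D=15.0, P=P, O=O)  # disturbance alone supplies +12
print(round(P, 3), round(O, 3))       # P sits at 12; O drops to zero
```

The disturbance's effect, 0.8 x 15 = 12 units, happens to equal the reference level exactly, so the error signal, and with it the output, goes all the way to zero.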
All of this is supposed to have established two principal ideas. The first is that control systems control what they sense, not what they do. The second is that control systems act on the outside world only in order to protect a controlled perception against disturbance.
As we have demonstrated these principles, we have established some other odd facts. We have found that the main effect
of negative feedback in a control loop is to diminish the effects which disturbances would otherwise have on the system's input quantity. While we have had only one disturbance at our disposal, it should be clear that the number or the causes of disturbances make no difference. If ten different disturbances were acting at once, they could only end up increasing or decreasing the value of the controlled input quantity. Since the system maintains control by acting directly on the input quantity, and not by acting to oppose the cause of the disturbance, the system does not have to take account of the number of causes acting, or the phenomena that are involved. It acts to oppose the net effect of any disturbances on the input quantity.
From the point of view of the behaving system itself, reality consists of the magnitude of one perceptual signal, because that is the only internal representation of the outside world. If the system can be said to have a purpose or intention, it must be to maintain the perceptual signal matching the reference signal. The reference signal specifies to the system what it is to sense, but not what it is to do. The output that matches perceptual and reference signals is determined by the nature of the feedback function and by the strength and direction of any disturbances that may be acting. Whatever sets the reference signal, thus effectively controlling the perceptions of this system, does not have to know anything about how the control system comes up with a matching perception.
What is perhaps most amazing to a person who has not previously worked with negative feedback systems is the capability that this system has to maintain quite precise control over its own perceptual signal, even if its own properties change. If its output apparatus becomes stronger or weaker, or its perceptual apparatus becomes more or less sensitive, there is scarcely any effect on the perceptual signal. As long as some minimum loop gain is maintained and the system does not become unstable and begin oscillating, it does not really matter how much loop gain there is, or whether most of it is in the output or the input function.
A servomechanism engineer might find this approach somewhat odd. Why all this fuss about the system's internal perceptual signal? When you build a control system for a practical use, you worry more about the external variables than internal variables, because the customer is interested in the external variables.
This is exactly the point. Living control systems are not interested in the external variables. They can't be. They don't know about them, except indirectly. All they know is what happens to themselves. The point of behavior is not to accomplish something for a user in the external world, but to affect the system itself. Everything that a living system knows about the outside world has to first exist in the form of perceptual signals, or some other internal effect of external events (not all organisms have nervous systems).
In part 3 we will start looking at living systems more directly, and this will become much clearer. We now know that control systems control, above all, their own internal perceptual signals. Next time we will see why they do that.
In the meantime you might enjoy using this simulator to do further explorations. We have looked into only a few of the questions that might be raised about control systems. The simulator can reveal far more than we have seen. For example, it is instructive to look at the effects of the disturbance strictly from the external point of view (plotting I, O, and D), and then to look at exactly the same effects from inside (plotting P, E, and R). We haven't even raised the question of what a control system looks like when it becomes unstable, how the slowing factors interact with loop gain to determine stability, or what happens when the input function, the output function, or both are nonlinear. Speaking of nonlinearity, you might try rewriting the definition of the feedback function as follows:
45 DEF FNF(X)=X*X*X/2048+X/2

and then performing some of the experiments again. Try to make the input function logarithmic (adding a constant to make sure you don't make the perceptual signal negatively infinite), and see how the input quantity and perceptual signal behave as the reference signal or disturbance is changed.
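If you would rather experiment in a modern language, here is a minimal Python sketch of the part 2 loop with the cubic feedback function substituted. The loop structure, gain, and slowing factor are my reconstruction of the part 2 simulator, not the original listing.

```python
# A minimal sketch of the part 2 simulator loop, assuming a unity input
# function, gain 10, and slowing factor 0.07 (my choices, not the listing's).

def run(fnf, r=50.0, d=0.0, gain=10.0, slow=0.07, steps=400):
    o = 0.0                               # output quantity
    for _ in range(steps):
        i = fnf(o) + d                    # input quantity: feedback effect plus disturbance
        p = i                             # input function taken as unity here
        e = r - p                         # error signal
        o = o + slow * (gain * e - o)     # slowed output function (see part 2)
    return p, o

linear_p, _ = run(lambda x: x)                      # FNF(X)=X
cubic_p, _ = run(lambda x: x**3 / 2048 + x / 2)     # FNF(X)=X*X*X/2048+X/2
```

Both runs leave the perceptual signal close to the reference of 50; a strongly nonlinear feedback path barely matters as long as the loop gain stays high.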
The main objective before the next article in this series appears is to understand how a control system controls its perceptual signal, and why an external observer, who doesn't know about the controlled input quantity, might think the disturbance acts on the system to make it respond, like a doorbell. The simulator is there to help you grasp this closed loop phenomenon. I hope it does help.
In part 1 of this series, I demonstrated that the concept of behavior is not as clear as certain people would indicate. The patterns that we call behavior result from the convergence of many influences, only a part of which can be attributed to the organism that we say is behaving. Yet the behaving organism varies its own actions so that when the influence of these actions is added to all that is unpredictable, the result is recognizable as patterns of behavior.
In part 2 we observed that a control system controls its input, not its output. It acts on its environment to make its own sensory or perceptual signal match a reference signal received from elsewhere, and to automatically counteract the effects of disturbances. It does not have to sense the cause of the disturbance: it senses the quantity it is controlling, and reacts to deviations of that quantity (or the signal representing it) from a reference level that is set by the reference signal.
The reference signal acts just as an intention ought to act. It specifies some state of affairs that is to be achieved, and serves as a target toward which action always urges the perception of the controlled variable. Under normal circumstances the control system can make its perceptual signal track a changing reference signal, and still oppose the effects of disturbances.
There are two main rules of thumb:
The reference signal reaching a good control system controls the perceptual signal in that system.
The actions of the control system vary so as to oppose the effects of disturbances, even if the reference signal remains constant.
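These two rules can be checked with a few lines of code. The following Python sketch is my own illustration, not the article's program; it uses a linear loop with an assumed gain of 100 and slowing factor of 0.01.

```python
# A sketch of the two rules of thumb: the reference signal controls the
# perception, and the output varies so as to oppose disturbances.
# Gain and slowing values are illustrative assumptions.

def settle(r, d, gain=100.0, slow=0.01, steps=2000):
    o = 0.0
    for _ in range(steps):
        p = o + d                          # perception = feedback effect + disturbance
        e = r - p                          # comparator
        o = o + slow * (gain * e - o)      # slowed, amplified output
    return p, o

# Rule 1: the reference signal controls the perceptual signal.
p1, o1 = settle(r=30.0, d=0.0)
p2, o2 = settle(r=60.0, d=0.0)   # double the reference: the perception follows

# Rule 2: with the reference constant, output changes oppose the disturbance.
p3, o3 = settle(r=30.0, d=10.0)  # the output drops by nearly the added 10
```

With a loop gain of 100 the perception tracks the reference to within about 1 percent, and adding a disturbance of 10 changes the output by almost exactly -10 while the perception scarcely moves.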
Let's see how this control system model applies to one small human subsystem: a spinal reflex arc (reflex just means "turned back on itself"). This will lead to some concepts that will be of use to the designers of robots.
In the early 19th century, Sir Charles Bell established the fact that sensory nerves are separate from motor nerves, and described the "circle of nerves" found in a spinal reflex. A sensory nerve that is part of a spinal reflex arc (we will talk about one that is stimulated by the stretching of a tendon) sends its signal to the spinal cord, and the same cell that receives this signal emits a motor signal that reaches a muscle. When the muscle contracts, it has physical effects that stimulate the same sensory nerve. These are closed loops: signals from sensory nerves that are stimulated by muscle action in turn affect that same muscle action.
In all such loops that have been discovered, the sense of the feedback is negative. This is true of the tendon reflex. If signals from cells in the spinal cord cause a muscle to contract, the resulting stretch of the tendon stimulates sensors clustered around the tendon. The signals from these sensors reach the same cells in the spinal cord to inhibit their firing.
Apparently the materials are present for a control system, but before we discuss this, a digression is necessary.
As a result, almost all neurological research has focused on single impulses. The "all-or-none" principle became so firmly entrenched that by the time digital computers arrived on the scene, most people were led off the track. "Aha," they said, "if a nerve cell has a threshold that is just high enough, 2 impulses will have to reach it simultaneously to fire it: behold, an AND gate!" Since inhibition (an impulse tending to reduce the sensitivity of a nerve cell to an impulse arriving by a different path) can occur, we clearly have the NOT operator, and with the addition of OR (a nerve cell that can be fired by an impulse from any of several paths), we have all of the ingredients for a generalized logic circuit.
There is no longer sufficient reason to believe that the nervous system works in this way. Those who tried to analyze nerve nets as logic devices had to make a lot of assumptions, such as synchronism or clocking, that are incompatible with experimental facts. The more modern understanding treats neural signals as continuously variable quantities.
Therefore, when I begin to identify components of a control system, as I will do in a moment, the signals will be thought of as continuously variable frequencies, not as on/off binary quantities. The functions that combine signals will be functions of continuous variables. While any one neuron behaves as a rather nonlinear device, a collection of neurons performing essentially the same function in parallel yields an overall pleasantly linear input/output relationship, especially if we consider the normal, rather than extreme, range of frequencies (zero or saturation rates of firing).
The spinal reflex systems we will now examine involve several hundred - sometimes several thousand - control systems operating in parallel, although they will be drawn as simple control systems. A perceptual signal is really the mean rate of firing in a whole bundle of pathways, all starting from sensors that are measuring the same input (eg: stretch in a tendon). The signal that enters the muscle in this system is a bundle of signals, each exciting 1 or 2 small fibers out of the thousands that make up 1 muscle. Thus, we will be dealing with neural impulses in much the way electronic engineers deal with electrons. In the majority of cases, the number of impulses passing through a cross-section of a bundle of redundant pathways per unit time will be "the signal," just as the number of electrons passing through a cross-section of a conductor per unit time is called "the current."
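The claim that a bundle of nonlinear neurons can act like one nearly linear channel can be illustrated with a toy model. Everything here (the clipped-ramp rate function, the scattered thresholds) is an assumption chosen for illustration, not physiology.

```python
# Each "neuron" has a hard threshold and a saturation, but with thresholds
# scattered across the bundle, the mean firing rate of the whole bundle
# rises almost linearly with the common input.
import random

random.seed(1)

def unit_rate(x, threshold, saturation=1.0):
    # One unit: zero below threshold, then a steep ramp up to saturation.
    return max(0.0, min(saturation, 0.2 * (x - threshold)))

thresholds = [random.uniform(0.0, 100.0) for _ in range(2000)]

def bundle_rate(x):
    # The "signal" is the mean rate over the whole bundle of pathways.
    return sum(unit_rate(x, t) for t in thresholds) / len(thresholds)

samples = [(x, bundle_rate(x)) for x in range(10, 91, 10)]
```

Any single unit is grossly nonlinear (off, then saturated), yet over the mid-range the bundle's mean rate is close to a straight line.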
Figure 13b is a schematic diagram of the tendon reflex. Figure 13a is the diagram of a general control system that I have already shown and discussed earlier. Figure 13a has an input function FNI, a perceptual signal P, a comparator C, a reference signal R, an error signal E, an output quantity O, a feedback function FNF and an input quantity I completing a closed loop. Entering this loop at the same point as the input quantity are the effects of a disturbing quantity D, affected by the disturbance function FND.
Figure 13b contains the same components in the same relationships. The input function is a sensor which emits a signal P, the frequency of which depends continuously on the amount of stretch I of the tendon at the end of the muscle. This signal P travels to the spinal cord, and the local branch enters an inverter which is specialized to produce inhibitory effects on any neuron it reaches (these actually exist in the spinal cord as Renshaw cells). This inverted copy of the perceptual signal reaches the cell body of a motor neuron C, which also receives an excitatory input from a pathway descending from centers that are higher in the nervous system (the reference signal R).
The signal emitted by this motor neuron represents the excess of excitation over inhibition, and thus represents the difference between the reference signal and the (inverted) perceptual signal: it is clearly the error signal E. The error signal enters the muscle, where the output function FNO converts it into an average shortening of the contractile fibers in the muscle. The output quantity O is the net stretch of the connective tissue that links the individual contractile fibers together. The feedback function FNF consists of the mechanical relationships that sum all these individual little forces into one force that will tend to stretch the tendon.
I have shown the disturbance as a string that pulls directly on the tendon. It is rather hard to disturb the tendon control system without dissecting the organism, a procedure that always leaves one wondering whether or not this is still the original system. The reflex that is tested with a hammer just under the kneecap is a different one, a muscle-length control system.
In part 2 I described how control systems work, so we immediately know what this spinal reflex loop does. It keeps the perceptual signal P matched to the reference signal R. Since P is a measure of tension in the tendon, we can say that this control system controls the sensed tension, and not the degree of contraction of the muscle. It also varies the amount of contraction in the muscle fibers to oppose any extraneous effects that tend to alter the tension in the tendon, whether increasing or decreasing it.
We know that muscles are attached to bones, generally across a joint, and that when a muscle changes tension it often changes the angle at the joint that it spans. In this way movements are created and forces are applied to objects, or against gravitational and other forces. However, this little control system knows nothing of that. The only behavior it produces is sensed tension. It controls a neural signal which represents the net force being created by the muscle and any active disturbances. The control system does not know this - it has, after all, only the one kind of sensor. It knows only how much signal it is getting from the outside world, and not even what kind of signal this is. It is just an amount. It would need many other sensors and a very intelligent computer in order to know that this amount is measured in units of tension.
Every muscle that is used in voluntary behavior (as opposed to internal or visceral behavior) is involved in a control system like that in figure 13b. There are no exceptions. Thus, there is no way that any higher process in the brain can directly produce a muscle tension. The brain can produce a muscle tension only by providing a reference signal which specifies how much tension is to be sensed. This does not even determine how tense the muscle will be, for if there is a steady external disturbance working, the muscle will adjust its degree of contraction to compensate for the disturbance. Pull steadily on the tendon, and the muscle will completely relax, even in the presence of a nonzero reference signal. Inject Novocain into the perceptual pathway, and the muscle may go into a violent spasm because it is trying to create a perceptual signal. The brain cannot command the muscles to contract. It can only tell level-1 control systems how much tension to sense. It is up to those control systems to do what is necessary to create the demanded signal.
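The steady-pull and Novocain thought experiments can be tried in simulation. This Python sketch clamps the perception and output at zero (a firing rate and a contractile force cannot go negative); the gain and slowing factor are illustrative assumptions.

```python
# Two thought experiments on a level-1 tension control loop:
# a steady external pull, and a blocked ("numb") perceptual pathway.

def tendon_reflex(r, disturbance, numb=False, gain=10.0, slow=0.07, steps=400):
    o = 0.0                                    # muscle's contribution to tension
    for _ in range(steps):
        p = max(0.0, o + disturbance)          # sensed tension, clamped at zero
        if numb:                               # "Novocain": perceptual path blocked
            p = 0.0
        e = r - p
        o = max(0.0, o + slow * (gain * e - o))
    return p, o

p_normal, o_normal = tendon_reflex(r=50.0, disturbance=0.0)
p_pulled, o_pulled = tendon_reflex(r=50.0, disturbance=100.0)  # steady pull
p_numb, o_numb = tendon_reflex(r=50.0, disturbance=0.0, numb=True)
```

A pull larger than the demanded tension drives the output all the way to zero (the muscle relaxes completely), while blocking the perceptual signal drives the output to its maximum, gain times the reference: the "spasm."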
Gray's Anatomy names about 200 muscles, most of which occur in pairs, and many of which consist of numerous subdivisions capable of having different effects. There are perhaps 500 to 800 muscles which can be distinguished on the basis of different directions of effect. Thus, we own 500 to 800 level-1 control systems. Every human action must be performed by adjusting the reference signals for these control systems. The behavior of these control systems need not be simulated for the simple reason that this has been done to a sufficient degree in part 2 of this series.
There are actually more level-1 control systems than muscles. For example, every muscle also contains length sensors, which are involved in level-1 control systems that govern not force, but something related to the stretching of the muscle itself. Length and force can be controlled quite independently under suitable circumstances; however, we won't be getting into such details here. The main point is that we chew, scratch, talk, walk, run, and swim by using level-1 control systems, and by telling them not what to do, but what to sense.
We have accounted for all outgoing signals from the brain that are concerned with overt actions (in the sense that all will act on level-1 control systems, although there may be, at level 1, control systems we haven't considered here). We have not, however, accounted for all incoming signals. The nervous system has hundreds of millions of sensory endings, most of which are not involved in level-1 control systems.
You'll notice that in figure 13b the perceptual signal branches. This is a real branch; all level-1 perceptual signals involved in these control systems branch, sending one branch into the local reflex loop and the other upward into a higher part of the nervous system.
The signals going downward from this higher part end up in control systems of the general type shown in figure 13b, controlling sensed tension and a few other simple variables. The signals going upward, the level-1 perceptual signals, all reach the next higher level of organization, which happens to be represented in the brain stem, the cerebellum, and one part of the cerebral cortex.
Imagine a second level of control systems. The input functions of this new layer will not be equipped with sensors; instead, they will receive the perceptual signals generated by level-1 input functions (or in the case of signals involved in level-1 control systems, copies of them, courtesy of the bifurcation of the dorsal roots). These signals, in subsets, are the real-time inputs to level-2 input functions, each of which generates one level-2 perceptual signal. We define a level-2 input function in terms of the way a single level-2 perceptual signal depends on some set of level-1 perceptual signals.
It is now clearly possible to construct a level-2 comparator, provide it with a reference signal, and make it generate a level-2 error signal. That error signal can then be wired to the input of a level-2 output function, and copies of the output of that FNO can be fanned out to serve as reference signals for level-1 control systems.
In fact, we can construct as many level-2 control systems as we like, until we run out of neurons that are located where the level-1 perceptual signals terminate and the level-1 reference signals originate. All outgoing signals that are further inward will be accounted for; they will be level-2 reference signals. (If you can figure out why they can't be level-1 reference signals, bypassing level-2, you are beginning to understand control theory. Hint: Level-1 reference signals are adjusted by level-2 systems: what happens if an arbitrary signal is added to the output of a level-2 system?)
Some level-1 perceptual signals may be combined to produce level-2 perceptual signals, without involving the new perceptual signals in any level-2 control system. Perceptual signals that are involved in level-2 control systems branch, just as their counterparts at level-1 do: one of the branches heads further inward and upward in the brain. We can now repeat the process of going from the first to the second level of control. Clearly, a third level of control systems can be constructed, then a fourth, and so on, until we run out of brain and find ourselves looking at the inside surface of the skull.
This is my model of the brain. It will be discussed in greater detail in the next article of this series. At present we will develop a clearer understanding of the relationship between one level of control and the next higher level of control through the use of BASIC. As you will see, the relationship has some rather amazing and challenging properties.
We are going to model a very elementary 2-level control system. I won't attempt to model a real human system because it would get too complicated. The imaginary system will consist of 3 level-1 control systems, each controlling sensed force (just as in the tendon reflex system) and 3 level-2 systems, each controlling a separate aspect of the forces controlled by level-1 systems.
The 3 muscles will be laid out in a plane, one end of each being joined at a common central point, and the other being anchored to a point in the plane. If the angles between the muscles are equal, they will form a Y. We will assume that the common connection does not move; the muscles will apply a force there but, as in the case of flying a stick-controlled airplane, any movement will be negligible. This allows us to ignore some complex interactions between the muscles. Those interactions would not invalidate the principles involved.
There will be 3 level-1 control systems, 1 for each muscle. Each will sense the force being generated by its own muscle. Each will have a loop gain of 10, and a slowing factor of 0.07 (see part 2 for discussion of these properties).
There will also be 3 level-2 control systems. One will use the 3 muscles to control a force in the X direction (left and right), another will control a force in the Y direction (up and down), and the third will control the sum of the 3 forces, this sum corresponding to what physiologists call "muscle tone." We will see why there is such a thing as muscle tone (the steady mutually cancelling tension that is always there in muscles). Each level-2 control system will have a loop gain of 50, and a slowing factor of 0.01.
I hope that this arrangement looks a little amazing. Here we have 3 muscles spaced at roughly 120-degree intervals around a common point. No one muscle pulls in either the X or the Y direction. To pull in the X direction, all 3 muscles must alter their tensions. To pull in the Y direction, all 3 must alter their tensions. To vary the muscle tone, all 3 must once more alter their tensions. We will be able to set reference values for these 3 variables at the same time, throw in a disturbance of arbitrary size and direction to boot, and there will be no interference among the systems that cannot be easily taken care of. Each level-2 force-controlling system will be able to keep its perceptual signal matched to any reference signal, while the others do the same thing at the same time.
It may add interest to know that the outputs from the level-2 systems to the level-1 systems will not be accurately weighted: the only choice will be whether or not a given level-2 output reaches a given level-1 comparator after multiplication by 1, 0, or -1. All 3 level-2 outputs will reach and be added together in all 3 level-1 comparators. The neat separation of X, Y, and tone control is not accomplished by carefully balancing the amount of output sent to each level-1 system. Only the crudest adjustment has to be made on the output side, essentially the choice between positive and negative feedback, with negative always being chosen.
We now come to what is perhaps the most fundamental concept of this theory of brain function. The organization which determines that an X vector, a Y vector, and a tone or scalar force will be controlled is found in the input functions, not in the output functions. The organization of behavior is determined by the perceptual, not the motor organization of the brain. By the time we finish this installment you will see exactly how that happens.
Let us start by looking at a typical control system of unspecified level in a hierarchy of control systems. This system will receive multiple input signals from lower-level systems and multiple reference signals from higher-level systems. It will emit just 1 output signal (we will assume that the only need for an explicit output function is to provide error amplification and smoothing; otherwise the error signal could be used directly as the output signal). Figure 14 shows this typical system.
The input function will now be a little too complicated to be represented as a BASIC function, since we need a set of weighting factors so that each input can be assigned a weight before all of the inputs are summed together. The easiest way to deal with weighting factors for a generalized system is to use a matrix that contains all of the factors for all of the levels. For the input function we designate the matrix as S (for sensory) and write it as S(L,J,K), where L is the level, J is the system at that level, and K is the index of the contributing signal from level L-1; the entry itself is the weight given to that signal.
The perceptual signal for this Jth system at the Lth level will be designated P(L,J). The perceptual signal can thus be written as the weighted sum of contributions from some set of lower-level systems, a weighting of 0 in the S matrix meaning absence of a connection:

P(L,J) = sum, for K = 0 to N(L-1)-1, of S(L,J,K) x P(L-1,K)

where N(L-1) is the number of systems in the next lower level.
A similar operation is performed to calculate the net reference signal R(L,J). A matrix M(L,J,K) is used to select a connection factor (1, 0, or -1) for each output of a higher-level system; the net reference signal is the sum of all the outputs of the higher-level systems, each multiplied by its appropriate factor. A 0, of course, means no connection.
The M matrix is filled by looking at the sign of the corresponding entry in the S matrix for the next higher level.
Suppose that we wanted to fill in the M matrix for 1 level of systems. An entry will be -1 if the corresponding S matrix entry of the next higher level is negative, 0 if the S matrix entry is 0, and 1 if the S matrix entry is positive. But which entry in the S matrix for level L+1 corresponds to M(L,J,K)?

The answer is simple: M(L,J,K) corresponds to S(L+1,K,J). The source and destination indices are simply interchanged. If a higher-level system gives a negative weight (of any amount) to the perceptual signal from a given lower-level system, it sends a copy of its output to the comparator of that same lower-level system with a negative (inhibitory) sign. A negative connection factor means that the output of this higher-level system will subtract from the contributions of other higher-level systems to the lower-level net reference signal.
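In Python, this interchange of indices is a one-line nested comprehension. The sketch below builds the level-2 sensory weights for muscles at 30, 150, and 270 degrees (the geometry used later in the article) and derives the motor matrix from them; tiny floating-point residues such as cos 270 degrees are rounded to zero first so that the sign function yields 0 there.

```python
# Filling the motor matrix from the sensory matrix: M(L,J,K) = SGN(S(L+1,K,J)).
import math

angles = [math.radians(a) for a in (30.0, 150.0, 270.0)]

# S2[m][j]: weight given by level-2 system m to muscle j's perceptual signal.
S2 = [
    [round(math.cos(a), 10) for a in angles],   # system 0 senses X force
    [round(math.sin(a), 10) for a in angles],   # system 1 senses Y force
    [1.0, 1.0, 1.0],                            # system 2 senses total force (tone)
]

def sign(x):
    return (x > 0) - (x < 0)

# M1[j][k]: factor applied to level-2 output k in muscle j's reference signal.
# Note the interchanged indices: row j, column k here reads S2 at row k, column j.
M1 = [[sign(S2[k][j]) for k in range(3)] for j in range(3)]
```

The result contains only 1, 0, and -1: the crude connection factors the text describes, obtained purely from the signs of the sensory weights.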
Thus, once the S matrix for the next higher level has been filled in, we can calculate the entries in the M matrix:

M(L,J,K) = SGN(S(L+1,K,J))
You may choose to skip these procedures and simply spell out each connection one at a time. My thought in using a general solution is not merely to save lines of program, but to point the way toward expanding the simulation both horizontally (adding more systems at each level) and vertically (adding more levels).
The reference signal for level L, system J, is found by summing over the outputs of all systems of level L+1, multiplying the output from each higher-level system by the appropriate connection factor from the M matrix:

R(L,J) = sum, for K = 0 to N(L+1)-1, of M(L,J,K) x O(L+1,K)
To complete this general model we need only calculate the error signal E and the output signal O. The required slowing factor and the error sensitivity are put in the output function.
E(L,J) = R(L,J) - P(L,J)
O(L,J) = O(L,J) + K(L) x (G(L) x E(L,J) - O(L,J))

where K(L) is the slowing factor for all systems of level L (see part 2), and G(L) is the error sensitivity for all systems of level L.
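Transcribed into Python, the four calculations for one level look like this. The nested-list representation of the BASIC arrays, and the small demonstration values at the bottom, are my own; the formulas follow the text.

```python
# One pass over all systems at level L: perceptual signal, net reference
# signal, error signal, then the slowed and amplified output.

def compute_level(L, S, M, P, R, E, O, G, K, n=3, clamp=False):
    for j in range(n):
        # P(L,J) = sum of S(L,J,K) x P(L-1,K)
        p = sum(S[L][j][k] * P[L - 1][k] for k in range(n))
        if clamp and p < 0:                 # level 1: firing rates cannot go negative
            p = 0.0
        P[L][j] = p
        # R(L,J) = sum of M(L,J,K) x O(L+1,K)
        R[L][j] = sum(M[L][j][k] * O[L + 1][k] for k in range(n))
        # E(L,J) = R(L,J) - P(L,J)
        E[L][j] = R[L][j] - P[L][j]
        # O(L,J) = O(L,J) + K(L) x (G(L) x E(L,J) - O(L,J))
        o = O[L][j] + K[L] * (G[L] * E[L][j] - O[L][j])
        O[L][j] = max(0.0, o) if clamp else o

# Demonstration: one level-1 system with input 5, reference 10 from above,
# error sensitivity 2, slowing factor 0.5 (arbitrary illustrative values).
P = [[5.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
R = [[0.0] * 3, [0.0] * 3]
E = [[0.0] * 3, [0.0] * 3]
O = [[0.0] * 3, [0.0] * 3, [10.0, 0.0, 0.0]]
S = [None, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]]
M = [None, [[1, 0, 0], [0, 0, 0], [0, 0, 0]]]
G = [0.0, 2.0]
K = [0.0, 0.5]
compute_level(1, S, M, P, R, E, O, G, K)
```

After one call, system 0 perceives 5, receives a reference of 10, registers an error of 5, and takes one slowed step of output toward correcting it.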
We do not have a complete control system at the top of this hierarchy, where we will be injecting reference signals for the highest complete level. Therefore we designate those signals as (in this case) O(3,I), output signals from 3 imaginary level-3 systems (us), indexed by I = 0 (X force), 1 (Y force), or 2 (tone). The M matrix for level 2 is set up so that M(2,I,I) is 1, I running from 0 to 2; this establishes a connection from each level-3 output to 1 corresponding level-2 reference input. All other entries are left at 0 (my North Star BASIC zeros arrays when they are first dimensioned).
At the bottom, the output signals O(1,I) are supposed to create muscle tensions that affect 3 input quantities: the amount of stretch in the tendon attached to each muscle. To avoid treating a special case, we will designate these input quantities as "level-0 perceptual signals," P(0,I). The value of each input quantity is found by adding the magnitude of the corresponding output to the component of the disturbance that acts along the length of the associated muscle. The value of the input quantity P(0,I) represents the net stretch in a tendon created by the muscle contraction and this component of the disturbance as they act together.
The level-1 S matrix simply connects each input quantity, multiplied by 1, to its respective input function. Thus, we set S(1,I,I) = 1 for I = 0, 1, and 2. All other entries in this matrix are 0.
The geometry of the muscles is adjustable. Since setting up this geometry is the opening phase of the BASIC program, we will take a quick run through this program and discuss the muscle setup. See figure 15 to help visualize the arrangement.
Muscle angles. After the dimension statements and the statements that set slowing factors and error sensitivities for each level, the program calls a subroutine that asks for the angle at which each of the 3 muscles is to be set (in degrees). You can use 30, 150, and 270 degrees (for equal spacing). There is nothing to prevent the choice of any angles you like, although you should draw a diagram to determine the effect on the system. It is hard to create a force in a direction in which there is no component of force from any muscle.
Sensory weightings. Lines 9 to 15 organize the perceptions of this system, and thus organize its behavior. For values of I from 0 to 2, all 3 levels of the sensory matrix are set up. You can now see how X and Y forces are sensed. The weights for level 2, system 0, correspond to the cosine of the angle between the positive X axis and the angle of each muscle. Those for level 2, system 1, correspond to the sine of the same angles. Each input function is weighting the perceptual signals from the muscles according to the component of force that is aligned with the direction being sensed. The tone system, level 2, system 2, adds the signals together to yield a total-force signal.
Motor weightings. Lines 19 to 23 use the already entered values of the S matrices to create the connection matrix M. The sign function selects the sign that will preserve negative feedback.
Highest-level reference signals. In line 24, the program calls a subroutine that asks for 3 reference signals: one designating the amount of X force, another designating the amount of Y force, and a third designating the sum of forces, or muscle tone. Positive or negative numbers are allowed. A real nervous system cannot handle negative frequencies, but the same effect can be created by suitable use of inverters, so that one (positive) frequency means a positive quantity and another (also positive) frequency means a negative quantity. In reality there would be 6 systems at level 2 in this 4-quadrant system.
I have set up level 1 to behave realistically like a muscle control system; neither negative signals nor negative forces can be produced.
Disturbance. At line 25, the program calls a subroutine which asks for the amount and direction of a constant disturbance. A disturbance might be created by seizing the place where the 3 muscles join, moving it, and holding it in the new position. Despite the fact that the control systems are neither detecting nor controlling position, arbitrary movement of this junction in space will stretch or relax the muscles, creating changes of force due to the spring constants of the muscles. Therefore it is reasonable to suppose that a force disturbance can be created, one which projects into the direction of each muscle according to the cosine of the angle between the disturbance vector and the axis of the muscle.
Calculating the behavior. Lines 29 through 37 call a subroutine that actually does the calculation of signals in all 6 control systems. You will notice 3 nested FOR-NEXT loops. The outer 2 loops cause the lower-level system to iterate twice for every iteration of the higher-level system. This proves to be an exceedingly useful, easy way to stabilize the 2-level system. (I have also tried this with a 3-level system, and it worked just as well.) I have no formal rationale for why this works; informally, it seems to be a good idea to let the lower-level system correct most of its error before the higher-level systems take their own errors seriously.

1 DIM P(2,2),R(2,2),E(2,2),O(3,2),S(3,2,2),M(2,2,2),A(3),K(2)
2 DIM G(2)
3 G(1)=10\ K(1)=.07\ G(2)=50\ K(2)=.01
4 P=3.1415927/180
5 GOSUB 99\ REM (SET UP MUSCLE GEOMETRY)
6 REM ************************
7 REM SET UP SENSORY WEIGHTINGS
8 REM ************************
9 FOR I=0 TO 2
10 S(1,I,I)=1
11 S(2,0,I)=COS(A(I))
12 S(2,1,I)=SIN(A(I))
13 S(2,2,I)=1
14 S(3,I,I)=1
15 NEXT I
16 REM
17 REM SET UP MOTOR WEIGHTINGS
18 REM ************************
19 FOR L=1 TO 2
20 FOR I=0 TO 2
21 FOR J=0 TO 2
22 M(L,I,J)=SGN(S(L+1,J,I))
23 NEXT J\ NEXT I\ NEXT L
24 GOSUB 109\ REM (SET UP REFERENCE SIGNALS)
25 GOSUB 116\ REM (SET UP DISTURBANCE)
26 REM ************************
27 REM CALCULATE SYSTEM BEHAVIOR
28 REM ************************
29 !\ FOR Q=1 TO 5
30 FOR J3=0 TO 1
31 L=2\ GOSUB 50\ REM CALCULATE SYSTEMS AT LEVEL L
32 FOR J2=0 TO 1
33 L=1\ GOSUB 50
34 FOR I=0 TO 2
35 P(0,I)=O(1,I)+D*COS(A(I)-A(3))
36 NEXT I\ NEXT J2\ NEXT J3
37 GOSUB 69\ REM (PRINT TABLE OF VALUES)
38 NEXT Q
39 ! "(A)NGLE? (R)EFS? (D)IST? (C)ONT? (P)RINT MATRICES? "
40 INPUT " ",A$
41 IF A$<>"A" THEN 42\ GOSUB 102\ GOTO 29
42 IF A$<>"R" THEN 43\ GOSUB 109\ GOTO 29
43 IF A$<>"D" THEN 44\ GOSUB 116\ GOTO 29
44 IF A$<>"C" THEN 45\ GOTO 29
45 IF A$<>"P" THEN 46\ GOTO 76
46 ! "????"\ !\ GOTO 39
47 REM ********************************
48 REM CALCULATIONS FOR LEVEL L SYSTEMS
49 REM ********************************
50 FOR J=0 TO 2
51 V=0
52 FOR K=0 TO 2
53 V=V+P(L-1,K)*S(L,J,K)
54 NEXT K
55 IF L=1 AND V<0 THEN V=0
56 P(L,J)=V\ V=0
57 FOR K=0 TO 2
58 V=V+O(L+1,K)*M(L,J,K)
59 NEXT K
60 R(L,J)=V\ V=O(L,J)
61 E(L,J)=R(L,J)-P(L,J)
62 V=V+K(L)*(G(L)*E(L,J)-V)
63 IF L=1 AND V<0 THEN O(L,J)=0 ELSE O(L,J)=V
64 NEXT J
65 RETURN
66 REM ***********************
67 REM DATA LISTING SUBROUTINE
68 REM ***********************
69 !\ ! "ITERATION #",%2I,Q,
70 FOR J=2 TO 1 STEP -1
71 !\ ! "LEVEL ",%2I,J,%#7F2
72 FOR I=0 TO 2\ ! "  ",R(J,I),"  ",\ NEXT I
73 !\ FOR I=0 TO 2\ ! "  ",P(J,I),"  ",O(J,I),"  ",\ NEXT I
74 !\ NEXT J
The inner loop, line 35, simply calculates the values of the input quantities for the level-1 systems, using the angles of the muscles and of the disturbance. This is, in effect, the simulation of the environment (the muscles are in the environment of a neural control system).
At line 37 a routine is called which prints out the signals for all systems: the reference signal on 1 line, the perceptual signal to the lower left of it, and the output signal to the lower right for each system. Line 38 closes the iteration loop; 5 iterations are called for.
Lines 39 through 46 ask what action is to be taken after 5 iterations.
Calculation subroutine. Lines 50 to 65 calculate the signals for each system. The V that occurs here and there is simply a way to reduce the number of times a subscript has to be calculated. The perceptual signal is calculated first, then the reference signal, the error signal, and the output signal, for each system of level L. The level is set at lines 31 and 33 by the calling program. Line 62 contains the slowing routine which appeared in part 2. Lines 55 and 63 determine whether or not level 1 is being calculated; if it is, the perceptual and output signals are prevented from going negative.
Data listing subroutine. This subroutine is called after every complete iteration of both levels. It prints only the perceptual signal, reference signal, and output signal from the 3 systems at each level.
After the RUN command is given, the program asks for all adjustable parameters and then does 5 iterations, printing out the values of all signals each time. It then issues a prompting message, the answer to which determines what happens next. The C command means do 5 more iterations. The P command causes the sensory and motor matrices to be printed out. To get an idea of the time scale on which human level-1 and level-2 systems work, imagine that each iteration takes about 1/20 of a second. (If you are looking for mental exercise, you might adapt the plotter from part 2 to show the variables in this simulation.)
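For readers without North Star BASIC, here is a non-interactive Python translation of the core of this program. The gains, slowing factors, iteration nesting, and connection matrices follow the article; the function packaging and the particular reference and disturbance values at the bottom are my own choices.

```python
# Three level-2 systems control X force, Y force, and total force (tone)
# through three level-1 muscle-tension systems, and a disturbance is
# opposed without upsetting any of the controlled perceptions.
import math

def simulate(refs, dist=0.0, dist_angle=0.0, angles=(30.0, 150.0, 270.0),
             outer=300):
    a = [math.radians(x) for x in angles]
    ad = math.radians(dist_angle)
    # Level-2 sensory weights (lines 11 thru 13): cosine, sine, and tone rows.
    S2 = [[round(math.cos(x), 10) for x in a],
          [round(math.sin(x), 10) for x in a],
          [1.0, 1.0, 1.0]]
    def sgn(v):
        return (v > 0) - (v < 0)
    # Motor matrix for level 1 (line 22): M(1,J,K) = SGN(S(2,K,J)).
    M1 = [[sgn(S2[k][j]) for k in range(3)] for j in range(3)]
    G1, K1, G2, K2 = 10.0, 0.07, 50.0, 0.01
    O1 = [0.0] * 3; O2 = [0.0] * 3; P0 = [0.0] * 3
    P1 = [0.0] * 3; P2 = [0.0] * 3
    for _ in range(outer):
        for _ in range(2):                       # level 2 once per pass...
            for m in range(3):
                P2[m] = sum(S2[m][j] * P1[j] for j in range(3))
                e = refs[m] - P2[m]
                O2[m] = O2[m] + K2 * (G2 * e - O2[m])
            for _ in range(2):                   # ...level 1 twice per pass
                for j in range(3):
                    P1[j] = max(0.0, P0[j])      # tension cannot go negative
                    r1 = sum(M1[j][k] * O2[k] for k in range(3))
                    e = r1 - P1[j]
                    O1[j] = max(0.0, O1[j] + K1 * (G1 * e - O1[j]))
                for j in range(3):               # the environment (line 35)
                    P0[j] = O1[j] + dist * math.cos(a[j] - ad)
    return P2, O1

refs = [20.0, 0.0, 100.0]        # X force, Y force, tone
calm, o_calm = simulate(refs)
pushed, o_pushed = simulate(refs, dist=10.0, dist_angle=0.0)
```

In both runs all three level-2 perceptions settle close to their references, while the level-1 outputs redistribute among the muscles to absorb the push: the X, Y, and tone systems keep control simultaneously even though every muscle serves all three.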
Conventional models of the brain have always had a problem with coordinated actions. The standard description is that something high in the brain thinks of a general command like "push!" and sends the equivalent signals downward toward lower systems. Those lower systems receive the general commands and elaborate on them, turning them into more detailed commands at every step. At the lowest level, all of the detailed commands converge into the final common pathway, the relatively few channels running from the spinal cord to the muscles. There, at last, the neural signals are turned into tensions that create motions that create behavior.
The problem that nobody has ever been able to figure out is how a simple general command gets turned into specific commands that will have effects that satisfy the general command. Unfortunately, neurology is full of sentences that sound like explanations but are really restatements of the effect that is to be explained. When such sentences are uttered, they create the impression that the problem has been solved and needs no further investigation.
The simulator described here shows a different way for commands to get turned into actions. The command that specifies an X force doesn't tell any muscle what to do; it sets a reference level for a perception, and the level-1 systems produce whatever tensions are needed to make the perception match it.
Since we are specifying 3 functions of 3 variables, and setting reference levels for the value of each function, there is only one state of the muscles that will allow zero error in all 3 systems at once. What we have done, in fact, is set up an analog computer for the simultaneous solution of 3 equations in 3 variables.
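The same convergence can be watched in a few lines of Python. This sketch (the matrix and constants are illustrative assumptions, not the article's values) runs 3 integrating control loops that share 3 "muscle tensions"; they settle at the one state that zeroes all 3 errors.

```python
# Sketch: 3 level-2 control loops sharing 3 "muscle tensions" settle at
# the one state that zeroes all 3 errors -- an analog computer solving
# A @ t = r.  The matrix A (3 perceptual functions of 3 tensions) and
# all constants are illustrative assumptions.

A = [[1.0, 0.5, 0.0],   # "X force" as a function of the 3 tensions
     [0.0, 1.0, 0.5],   # "Y force"
     [0.3, 0.3, 1.0]]   # "tone"

r = [10.0, 5.0, 8.0]    # reference signals for the 3 level-2 systems
o = [0.0, 0.0, 0.0]     # output signals (integrators)
k = 0.1                 # error sensitivity per iteration

for _ in range(2000):
    # motor side: each tension is a weighted sum of the 3 outputs
    # (weights are the transpose of A, so the loops cooperate)
    t = [sum(A[i][j] * o[i] for i in range(3)) for j in range(3)]
    # sensory side: each perception is a function of all 3 tensions
    p = [sum(A[i][j] * t[j] for j in range(3)) for i in range(3)]
    for i in range(3):
        o[i] += k * (r[i] - p[i])   # integrate each system's error

# at equilibrium every perceptual signal matches its reference signal
```

No loop "knows" about the others; the simultaneous solution simply falls out of 3 negative-feedback loops acting on shared variables.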
This simulator shows that the reference signals for the lower-level systems do not correspond to any one output from a higher-level system. Nevertheless, the perceptual signal sensed by each higher-level system matches the corresponding reference signal. The higher systems each sense a different function of the set of lower-level perceptual signals. Independent control is possible only because the functions represent independent dimensions of variation of the lower-level world.
In the environment of this 2-level system, there is no such thing as X force, Y force, or tone. There are simply 3 tendons in various states of tension. I have created the idea of an X force, a Y force, and a tone simply by choosing 3 perceptual functions; the order lies in the perceiving, not in the environment.
If there were sensors on each muscle to detect muscle length as well as force, we could add 3 more control systems at level 1, and 3 more independent aspects of the external world to control at level 2. In fact, there are muscle-length sensors, and I am working on several models that take them into account.
If you now imagine 500 to 800 muscles involved with at least twice as many level-1 control systems (length and force surely; rate of change highly likely), you will begin to perceive the richness of the world in which level-2 systems exist. Add to this the millions of sensors for heat, cold, vibration, joint angle, light, sound, taste, smell, hunger, pain, illness, angular acceleration, joint compression, and so on, and you might begin to glimpse the complexity of the real system we are modeling. Since perceptions that arise from sources other than direct effects of muscles exist in large numbers, there can clearly be far more level-2 systems than level-1 systems, although the number of level-2 systems that can act independently at the same time is limited by the total number of comparators available at level-1.
Perhaps you can now see why this approach to a model of a human being (rudimentary as it is at this point) has some powerful implications for the building of robots. I suggest a formal distinction between a robot (an imitation of a living system) and an automaton (a device which automatically produces complex actions). An automaton is designed to create preselected movements; a robot is designed to control preselected perceptions (its own). In order for an automaton to produce precise and repeatable behavior, it must be built so strongly that normal disturbances cannot alter its movements, or it must be protected from disturbances that might interfere with its movements. In order for a robot to create, for itself, precise and repeatable perceptions (and thus precise and repeatable consequences of behavior), it need only perceive precisely, have a sufficiently high error sensitivity, and be capable of producing forces as large as the largest disturbances that might reasonably occur.
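To make the automaton/robot distinction concrete, here is a toy comparison in Python (my own construction, not one of the article's listings): both devices try to hold a load at position 50 while a constant disturbance pushes on it.

```python
# Toy contrast (my construction): both devices try to keep a load at
# position 50 while a constant disturbance of 20 units pushes on it.

def run(controller, disturbance):
    pos, out = 0.0, 0.0
    for step in range(200):
        out = controller(pos, out)        # device computes its output
        pos = out + disturbance(step)     # environment adds the disturbance
    return pos

automaton = lambda pos, out: 50.0                   # preselected action
robot = lambda pos, out: out + 0.5 * (50.0 - pos)   # opposes perceived error

steady = lambda step: 20.0                          # constant disturbance

auto_final = run(automaton, steady)    # action preserved: load sits at 70
robot_final = run(robot, steady)       # perception preserved: load near 50
```

The automaton faithfully reproduces its preselected action and ends up 20 units off; the robot reproduces its preselected perception and settles where it intended, without ever representing the disturbance explicitly.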
There is much more that can be said about the general relationship of one level of control to another, but this installment has raised enough points to ponder. To prepare for part 4, you should run this simulator and observe what happens to all of the variables in it. Try keeping the disturbance constant in magnitude and rotating its angle; try altering the muscle angles; change line 3 to use different error sensitivities (G(x)) and slowing factors (K(x)). Use the C command for longer iterations, and convince yourself that a steady state has really been reached. See what happens if the muscle tone isn't set high enough (there is a very good reason for muscle tone control). Do a series of iterations with slowly changing reference signals, and plot muscle tension against each reference signal. Get the feel of this small extract of the whole human hierarchy because in part 4 we will widen the field of view to include everything, and we will begin to look at some experiments with human subjects. These experiments will be noninvasive, nondestructive - more like video games than science - but far more useful than the games.
In this last part of my series of articles, a simple experiment with a human subject will be attempted; an experiment that can be expanded almost indefinitely. All of the principles from the previous parts will be used. Before the experiment starts, note the following main points that have been established:
The behavior of an organism is not its output, but some consequence of its motor outputs acting together with unpredictable forces or other disturbances.
For a more or less remote consequence of motor outputs to be repeatable in a disturbance-prone world, the behaving system must sense the consequence, and act to keep it matching some static or dynamic reference condition. By definition, that makes the organism a control system.
Organisms acting as control systems control what they sense, not what they do.
What is controlled is what is sensed, even when the sensing involves one or more stages of real-time computations based on primitive sensory signals.
In a multiple-level control system, the higher levels act by varying the reference signals for lower-level systems. They control perceptions computed from many lower-level perceptions, some or all of which are controlled by the same lower-level systems.
If there are n degrees of freedom at one level of control, in principle n higher-level systems could act independently and simultaneously by sharing the use of the lower-level systems. Any higher-level system acts by sending amplified copies of its error signal to many lower-level systems, each with the proper sign to achieve a negative feedback effect. Any lower-level system receives a reference signal that is the net effect of superimposed higher-level output signals. This worked for a 2-level system with 3 control systems at each level; there is no limit, in principle, to the number of levels or the number of systems at each level. In practice, there is reason to anticipate finding hundreds of systems at a given level, but no more than 10 or 12 distinct levels in a human being. This will be commented on later.
Abstract models and simulations are fine for conveying general ideas. However, if one does nothing but make models and simulations, it is easy to get involved in the math and engineering and forget that the real thing is there to be seen. The models described in the first 3 articles in this series represent something real. Real organisms work much the same way control systems work. They do not work in any of the other ways that have been proposed over the centuries (as far as their behavior is concerned). I am not talking metaphorically. There are excellent reasons to think that when the properties of organisms begin to be investigated in terms of control theory, hard data about the way we are organized will start to accumulate (up to a point, anyway).
The experiment to be described in this article is so simple that it may look elementary. Nevertheless, it is the starting point for a new approach to exploring the organization of human beings. Most new ideas start by looking like old ones, but with a twist that leads in unexpected directions. If you are familiar with tracking experiments, do not be too quick to decide what this is all about.
The basic equipment needed to do this experiment is:
A joystick with 1 degree of freedom (ie: a potentiometer with a stick on the shaft will suffice).
A reasonably fast analog-to-digital (A/D) converter with 7-bit or more accuracy. My system uses the Cromemco D+7A, which has 7 analog channels in and 7 out, as well as 1 input and 1 output 8-bit port.
A memory-mapped display, in which points are plotted on a video screen by depositing appropriate codes in a reserved segment of memory. This, or something equivalent, is essential for creating the moving objects that are involved in the experiment. I use the Polymorphic VTI with the display area in the 1 K bytes of memory starting at hexadecimal location D000. Out of deference to systems that do not have the VTI's graphics capability (however crude), I have used 64 horizontal elements in the alphabetic mode. Higher resolution would be much more desirable, but this much is enough to show the principles well.
If no memory-mapped display is available, but 2 digital-to-analog (D/A) outputs and a triggered oscilloscope are, the display that is needed can be created. Use 1 D/A output to position the beam horizontally and the other to set the vertical position of each cursor trace in turn.
Systems with built-in graphics under BASIC control, such as Apple, PET, or TRS-80, will probably allow the experiment to be done more simply than I did it in listing 5. The basic requirement is to be able to read a number from a stored table, add the handle position to it, erase the old cursor, and use the sum to position the new cursor, doing this for 3 cursors at least 4 times per second - the faster the better. (An example of the simulation on the Apple II is shown in listing 6.)
Imagine a display with 3 cursors on it, one above the other. Each cursor can move left and right. The subject looks at this display while holding a control handle. The instructions for the first experiment are very simple: the subject is asked to select 1 of the cursors, and hold it still, somewhere near the center of the screen as accurately as possible for the duration of the run. Engineering psychologists call this "compensatory tracking." They use it to investigate the limits of speed and accuracy of control in the presence of rapid disturbances of various kinds.
If the handle is held centered, each cursor will be seen to wander back and forth in a pattern that is independent of the other 2 cursors. In this experiment, the disturbances causing this wandering are made very slow and smooth. With even a slight amount of practice, every subject will be able to maintain essentially perfect control. We will not measure transfer functions, nor test the limits of control in the manner traditional in engineering psychology. What we want is a subject acting well within the range of normal operation, under conditions where the phenomena of control can be seen clearly. The subject selects a visual variable (position of 1 of the cursors), selects a reference level for that variable (a particular position), and maintains the perceived position at the reference position, while disturbances act that tend to move the cursor away from the reference position.
Figure 17 shows the setup in schematic form. The 3 disturbances are labeled D1, D2, and D3. The 3 cursor positions are labeled C1, C2, and C3. The position of the control handle is H. The position of each cursor is determined by the sum of H and one of the Ds. For cursor 2 the effect of the handle is reversed, so the 3 cursor positions are:

C1 = D1 + H
C2 = D2 - H
C3 = D3 + H
If the subject controls C3 in relation to a reference position of 0 (ie: mid-screen), and does so perfectly, then 0 = D3 + H, or H = -D3. The handle position should be an accurate mirror image of the magnitude of the disturbance D3 at every moment, and the cursor C3 does not move at all. You will find that all subjects, after a little practice, will closely approximate these predictions.
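These relationships are easy to simulate. The following Python sketch (not listing 5; the control law and constants are illustrative) gives a simulated subject a simple integrating control law and a slow sinusoidal disturbance; the handle ends up closely mirroring -D3 while the cursor stays near mid-screen.

```python
import math

# Sketch of the compensatory tracking loop (not listing 5): a simulated
# subject integrates the error between cursor 3 and a reference of 0.
# The disturbance is slow and smooth, as in the experiment; the control
# law and all constants are assumptions.

N = 240
D3 = [30.0 * math.sin(2 * math.pi * t / 120.0) for t in range(N)]

H, handle = [], 0.0
for t in range(N):
    C3 = D3[t] + handle               # the cursor the subject sees
    handle += 0.4 * (0.0 - C3)        # act to keep C3 at mid-screen
    H.append(handle)
# after a brief transient, H is nearly the mirror image of D3
```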
This may seem elementary, obvious, boring, and hardly worth the labor of getting the experiment up and running. Do not be deceived; this experiment appears to be simple because it is fundamental. It is fundamental because it can prove that all of the life sciences have been using the wrong model. There are also several extensions of the experiment that will show how to get started mapping the whole hierarchy of human control systems. There is no theory and no simulation that carries the impact of seeing how a real living control system works, especially when you can understand every detail of what is happening, either as subject or observer. The 3 previous articles in this series were designed to give you the ability to grasp what is happening here. This experiment is designed to give you the gut feeling of knowing.
The program in listing 5 is written in North Star BASIC, Version 6, Release 3. It contains a machine-language subroutine for an 8080/Z80 processor.
The machine-language subroutine reads in the handle position, adds it with the appropriate sign to the value of a disturbance that is passed to the subroutine by the CALL command (in the DE register pair), erases the old cursor, and deposits the new cursor, a rubout, on the screen. Each time the subroutine is called it steps to the next cursor, recycling as necessary. On return from the subroutine, the handle position is passed back to the main program (in the HL registers). The machine-language program is in lines 200 thru 230, expressed as a string of hexadecimal bytes with no punctuation. Thus, if your machine is not an 8080/Z80 type, you can assemble an equivalent routine, copy its listing into these lines, and possibly make this program work with little other modification.
The program asks for the most significant byte of the place where the machine-language subroutine is stored. The loader adjusts memory references by inserting the value of this byte in memory wherever necessary, after the program is loaded (lines 300 thru 330).
The display area consists of 1 K bytes of memory starting on any 256-byte boundary. Lines 370 thru 400 ask for the starting location of the memory area devoted to the display, and set up base registers in the machine-language program for the left margin of each cursor's movement. The FILL command is like POKE. If the computer has graphics capability built in, everything from line 60 thru 400, and the plotting subroutine (later), can be accomplished in a simpler way.
Disturbance tables are set up in lines 510 thru 620. The unnecessary use of symbols, instead of constants, is meant to make the program easier to modify.
Disturbance D1 is a sine wave and D3 is a triangular wave. D2 is a smoothed random disturbance. On reruns, only D2 is reloaded, taking about 20 seconds.
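If you want to generate your own tables, here is one way to build the three kinds of disturbance in Python (the table length, amplitudes, and periods are assumptions, not the listing's values).

```python
import math
import random

# One way the 3 disturbance tables might be built (a sketch; the table
# length, amplitudes, and periods are assumptions, not the listing's).

N = 240
AMP = 40.0

# D1: one cycle of a sine wave over the whole table
d1 = [AMP * math.sin(2 * math.pi * t / N) for t in range(N)]

# D3: a triangular wave with a 60-sample period
d3 = []
for t in range(N):
    phase = (t % 60) / 60.0
    d3.append(AMP * (4 * phase - 1) if phase < 0.5 else AMP * (3 - 4 * phase))

# D2: random values passed through a simple low-pass filter, so the
# result wanders smoothly instead of jumping
random.seed(1)
d2, level = [], 0.0
for t in range(N):
    level += 0.1 * (random.uniform(-AMP, AMP) - level)
    d2.append(level)
```

The low-pass filter is what makes D2 slow and smooth; raw random numbers would produce jumps no subject could track.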
The experimental run is controlled by lines 660 thru 780. Lines 660 and 680 lay down 3 arbitrary scales on the screen, while the rest repeatedly call the machine-language subroutine. For each stored value of each disturbance, all 3 cursor positions are computed and plotted, and the handle position is stored in the table H1$. The inner loop from line 710 to line 770 adjusts the duration of the experimental run; here it is set up so that the disturbances change and a handle position is recorded only every fourth time the display is generated. On my system, this works out so the display is refreshed 16 times per second, and data is sampled and stored 4 times per second. The 2 OUT statements reflect my laziness; I use 2 digital-to-analog outputs to supply the voltage to the potentiometer that measures handle position.
The data plotting routine (lines 820 thru 1010) is entered at the end of an experimental run. This routine is set up to plot either on the video screen or on a hard-copy device; it asks for the X and Y dimensions of the plot, which cursor is to be plotted, and which device is to be used. My system is set up so the typewriter is device 2 and the screen is any other device number. If you do not have this ability in your BASIC or system, delete lines 1060 and 1070 (in the subroutine that requests information about the display), and eliminate the "#2," in lines 970 and 990. In North Star BASIC, the exclamation point is short for PRINT.
Only the handle position is stored as data; the cursor positions are reconstructed during plotting from the list of handle positions and the corresponding tables of disturbances.
The plotting scheme is designed to work with any teletypewriter -like device. If you have legitimate graphics, you can rewrite this part and get a more pleasing result.
There are 3 choices for plotting, each associated with cursors C1, C2, and C3. Each plot shows the cursor as a C, the handle position as an H, and the disturbance acting on the cursor as a D. A dot indicates the center of the display when nothing else is there. After each plot is finished, there is a pause; hitting the carriage return will cause the program to ask about the next plot. If the question about the Y dimension of the display is responded to with a 0, the program will reload the random disturbance table and issue a prompt for another experimental run. The old data will be destroyed. Remember, it takes about 20 seconds to reload the random disturbance table. Do not panic if nothing seems to happen while it does.
At line 1260 there is a utility routine that converts any hexadecimal number up to 10 digits to a decimal number. I used it while writing the program. It calls the conversion subroutine starting at line 1130.
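For what it is worth, the same conversion is a one-liner in Python, since int() accepts an explicit base:

```python
# The equivalent utility in Python: int() already accepts a base, so a
# hexadecimal string of any length converts in one line.

def hex_to_decimal(s):
    """Convert a hexadecimal string to a decimal integer."""
    return int(s, 16)

hex_to_decimal("D000")   # the display-area address mentioned above: 53248
```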
Running the Experiments
If you possibly can, take the trouble to set this experiment up. Nothing can take the place of actually experiencing yourself as a control system and understanding things that you have taken for granted all your life.
For the benefit of the many readers who do not have the equipment, here is a typical run; then we will look at the data. Here is an old friend, Chip Chad (from part 1 of this series), glaring at the screen and maintaining a choke-hold on the handle, waiting for the experimenter to hit the return key at line 610. The experimenter reaches in and taps the key. The reference scales slide up into place and the 3 cursors pop into view, moving. Chip picks the middle one, as most people do the first time, decides to keep it on the middle + mark, and after a few wobbles succeeds.
"So what ?" he says.
If we were studying learning, we could get good information from this first run. But the plan is to see Chip acting as a competent control system, so we praise his first effort and give him another run (answering the query about the Y dimension with a 0). After the second run, the data is plotted for each cursor.
Figure 18 shows the data for each cursor, number 1 on the left, 2 in the middle, and 3 on the right. The 2 end plots are a mess, but the middle plot shows a striking symmetry. The Cs march more or less down the center of the screen, deviating a little to left and right, but maintaining a constant position on the average. The Ds make a random-looking pattern, and the Hs follow almost the mirror image of the D pattern.
Looking carefully at the middle plot, could it be said that the handle position or motion looks like any sort of regular function of the cursor position or motion? There may be some relationship, but it certainly is not clear. Nobody would claim that the large, smooth motions of the handle could be reconstructed accurately on the basis of measurements of cursor position; at best they could be reconstructed roughly or statistically, and then only if, say, handle acceleration is compared with cursor deviation from the average position. The best that could be hoped for is some statistical relationship (eg: a small signal buried in much noise).
On the other hand, the relationship between the handle position and the magnitude of the invisible disturbance is obvious and quantitative. The handle position is the mirror image of the disturbance magnitude, with an error of only a few percent of full scale. There is much signal and little noise in that relationship.
Here is the situation. There is 1 measure of Chip's behavior, H. There are 2 variables, D and C, either of which might have some relationship to that behavior. Which variable, D or C, would be selected by any statistical test as the most probable cause of the behavior? Of course, D would be selected. In fact, a formal statistical analysis, like those done in every scientific study of behavior, shows D to be the only significant contributor to the behavior, while C, the cursor position, is rejected as an irrelevant variable!
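You can watch the statistics make this choice yourself. The sketch below (Python; the control law, constants, and correlation routine are mine, not the article's) simulates a run and then computes ordinary correlation coefficients.

```python
import math

# Sketch of the statistical trap: simulate a run, then ask which
# variable, D or C, "predicts" the behavior H.  Control law, constants,
# and the correlation routine are mine, not the article's.

def corr(xs, ys):
    """Ordinary Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

N, handle = 480, 0.0
D, C, H = [], [], []
for t in range(N):
    d = 30.0 * math.sin(2 * math.pi * t / 160.0)   # slow disturbance
    c = d + handle                                 # cursor = D + H
    handle += 0.4 * (0.0 - c)                      # keep the cursor at zero
    D.append(d); C.append(c); H.append(handle)

r_dh = corr(D, H)   # strong: D "explains" the behavior
r_ch = corr(C, H)   # weak: the cursor looks irrelevant
```

The correlation of H with D comes out near -1, while the correlation of H with C is near zero: any analysis based on correlation will crown D the cause of the behavior and dismiss C.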
That is a paradox, however, from the standpoint of any cause-effect theory of behavior: the variable selected as the cause is one the subject cannot even sense, while the one variable the subject actually watches is rejected as irrelevant.
That is the proof mentioned earlier. The old cause -effect model fails utterly when applied to this situation. The question then is, why have generations of intelligent people believed that behavior is caused by sensory stimulation? The answer is clear: they have been fooled by a monstrous illusion.
The illusion would be easier to see if there were some visible, direct indication of the magnitude of the disturbance. Suppose there were a moving D (or a number that continually reflected the magnitude of D) on the display. [Figure: D and C both reach the subject, who produces H; the correlation of H with D is greater than 0.99, while the correlations involving C are less than 0.1.] Clearly, if Chip managed to control C without that indication, he could still do so; he could ignore it and perform as well as ever. However, something has now been added that would mislead a bystander who did not understand control theory.
That bystander could now see 2 variables, both able to affect Chip's senses. Taking the apparent relationships at face value, it would be clear that the indication of D was accurately mirrored by the motion of the handle, and the bystander would naturally conclude that this visible stimulus was causing the behavior.
An organism is surrounded by a world full of variables; variables that change within widely diverse ranges. The organism receives many signals from its internal parts, too. In that sort of situation, if the organism is controlling some of the variables, it will react strongly and smoothly to any disturbance tending to alter 1 of the controlled variables. The result is that it will seem to be responding directly to the disturbances. There will be no obvious indication that it is controlling anything at all. There is every excuse for even the best of scientists to have observed the relationship between disturbance and behavior, and to have missed the very existence of controlled variables.
The name for such disturbances is stimuli. Once in a while, an experimenter must have accidentally picked a real controlled variable to call a stimulus, but the chances are against that. If an attempt is made to manipulate a real controlled variable, the organism will have to be strapped down to keep it from interfering. That is what is done in such cases. If the organism insists on acting like a control system, forcibly break the loop and make the organism conform to the theory. As a famous psychologist said, the theme is "Behave, damn it!" It never occurs to such strong-willed individuals that they might have the wrong idea about what is happening.
There is more in this elementary experiment than meets the eye. If all psychologists were to experience it, and try to meet the challenge of explaining these effects using any standard theory, the result would be a total collapse of that science, followed by a rebirth. However, many jobs would be threatened. What has happened instead is that a handful of psychologists has supported this theory, another handful has taken up arms against it, and most have resolutely ignored it.
I suggest that you run this experiment many times, with subjects controlling all 3 cursors. Every case will show that mirror-image relationship between D and H and little relationship between C and either D or H. If you study the previous parts of this series and think carefully about all the relationships that make up a control system, it will be evident that there is no other explanation for what is going on here. If you get nothing else out of this, you should acquire an intuitive feel for a new theory of how behavior works. You might even begin to understand how to design a robot in a new way.
It is time now to try to fulfill a promise made earlier in this series.
More Controlled Variables
Once you have seen subjects controlling all 3 cursors, it might seem that the possibilities of this experiment have been exhausted; this is not the case at all. There are controllable variables all over that screen; all of them can be controlled by the same means, movements of the handle in 1 dimension. Discovering them is a good way to get out of the habit of thinking that we simply perceive our environment, and start a new way of thinking: to recognize that we construct perceptions, imposing order on our experiences far more than recognizing order. As you will see, a controlled variable does not have to be "real" at all.
Here is an example. It is possible to perceive the relative position of any 2 of the cursors. The handle affects C2 in a direction opposite to its effects on C1 and C3, so the relative position of C1 and C3 cannot be controlled, because the handle does not affect it. However, it is possible to keep C1 even with C2, or C2 even with C3; in fact, it is easy. A plot of the results would involve plotting C2 - C1 or C3 - C2 instead of just C, and D2 - D1 or D3 - D2 instead of just 1 disturbance. The mirror-image relationship with H would be as good as ever. Do not forget that C2 - C1 and C3 - C2 are variables. Any value of the variables can be selected as a reference level (eg: C1 to be 1 inch to the left of C2).
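A sketch of this higher-level control in Python (constants assumed, as before): the simulated subject perceives C2 - C1 and acts to keep that difference at zero, so the handle must come to mirror half the difference of the two disturbances.

```python
import math

# Sketch: controlling the higher-level variable C2 - C1 (keeping the 2
# cursors even).  The handle enters the 2 cursors with opposite signs,
# so it must come to mirror (D2 - D1) / 2.  Constants are illustrative.

N, handle = 480, 0.0
rel_err = []
for t in range(N):
    d1 = 25.0 * math.sin(2 * math.pi * t / 200.0)
    d2 = 25.0 * math.cos(2 * math.pi * t / 240.0)
    c1 = d1 + handle
    c2 = d2 - handle
    rel = c2 - c1                  # the perception being controlled
    handle += 0.2 * rel            # act to keep C2 - C1 at zero
    rel_err.append(rel)
# rel_err starts at 25 (the initial disturbance) and collapses toward 0
```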
These are examples of higher-level controlled variables. If the subject could not perceive the present positions of the cursors, he or she certainly could not perceive their relative positions. Relative position is derived from perceptions of individual positions, but not vice versa. In order to control relative positions, it is necessary to control (or at least vary) individual positions, but individual positions can be controlled without controlling relative positions. These are the relationships one looks for to map out a hierarchy of perception and control.
Other relative perceptions can be controlled. All 3 cursors can be kept lying in a straight line, at least within the range where 1 of them does not fall off the edge of the display and pop up at the other edge. Reducing the amplitude of the disturbances would eliminate that problem. Also, the 3 cursors can be made to form any fixed angle, subject to the same limitation. There may be more static patterns that can be controlled, but I have not thought of any. This is, after all, a simple display.
It is not, however, limited to static conditions. Suppose the subject visualizes a pattern in which 1 cursor moves back and forth slowly between 2 limits. This pattern can easily be maintained, the handle moving just enough to produce it, and enough more to cancel the effects of any of the disturbances. A similar oscillation could be maintained for the relative position of any controllable pair of cursors.
There is clearly an infinite range of different temporal patterns, ranging from a simple steady motion in 1 direction to completely arbitrary motions and rhythms. There is an unlimited number of potential controlled variables in this simple display. Anything that can be perceived, and that the handle can affect in a systematic way, can be controlled.
For all of these examples of controllable perceptions, it is essential to remember that the disturbances are acting all the time. This is not a matter of producing any particular behavior. The cursor cannot be made to move slowly back and forth between fixed limits just by moving the handle slowly back and forth between fixed limits. The handle might be moving the wrong way at many moments, when the disturbance tends to make the cursor move faster than the reference pattern being considered. There is no one-to-one correspondence between handle position or velocity and cursor position and velocity, because of those ever-present disturbances. Regularities of behavior are not being looked at here, but regularities of controlled perceptions. If there were a slowly oscillating prism between the display and the subject's eyes, a regular pattern of movement of the cursor on the screen would not be seen. The subject controls the visual image, not the reality. For the higher-level variables, the subject controls some function of the visual image (often the controlled variable could not be found, even on the retinas).
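The same sketch extends to a time-varying reference signal. Here (again in Python, with illustrative constants) the intended pattern is a slow oscillation, and the cursor traces it even though the disturbance never stops acting.

```python
import math

# Sketch: the reference signal itself can be a slow oscillation.  The
# cursor then traces the intended pattern even though the disturbance
# never stops acting.  All constants are illustrative assumptions.

N, handle = 600, 0.0
pattern_err = []
for t in range(N):
    ref = 20.0 * math.sin(2 * math.pi * t / 150.0)   # intended pattern
    dist = 35.0 * math.sin(2 * math.pi * t / 97.0)   # ever-present disturbance
    cursor = dist + handle
    handle += 0.5 * (ref - cursor)                   # track the moving reference
    pattern_err.append(cursor - ref)
# the cursor stays close to the moving reference despite the disturbance
```

Notice that the handle's motion resembles neither the reference pattern nor the cursor's path; it is whatever combination cancels the disturbance while producing the intended perception.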
One could create displays of far greater complexity, and provide means of affecting the display that have more than 1 degree of freedom, to explore a staggering range of possible controlled variables. This is what I suggest be done. The first step in the development of any new science is to acquire the facts; here the most needed facts concern what variables human beings can actually control. What is needed is a large and simpleminded program of recording the obvious and the obscure: a body of definitions of variables, in every sensory mode, that people have proven able to control. Order and system count much less than sheer volume of data at this point. In fact, an unsystematic gathering of data may be the best kind, since it will not be constrained by theories about what people ought to be able to control. Anything that can be tested is worth testing at this stage. The possibilities are limited only by the imagination.
We do need some sort of ordering principle -some criterion for judging the reality of any proposed controlled variable. This is where the test appears; here is how it works.
Test for Controlled Variables
The first thing to remember when investigating a possible controlled variable is that in order for something to be controllable it has to be variable. There is neither the means nor the need to control the existence of the Empire State Building or the planet Jupiter. Not all perceptions are controlled. Some are just disturbances; some are just there.
One might think initially about controlling, for instance, a car. People often speak casually about controlling things. But what is meant is controlling something about those things. A person cannot really control a car; but under proper circumstances its shape, its color, its price, its speed, its direction, its parking place, its dirtiness, its dangerousness, its desirability, its altitude, or the flatness of its tires can be controlled. A car, after close inspection, proves to be composed entirely of hundreds or even thousands of variables. Together they create "car-ness" in our perceptions. Individually, or in groups, most of them can be affected by one means or another, and can be controlled if it is worth the effort. You can even make the car disappear instantly by closing your eyes. Keep remembering that what is controlled is really a perception.
The first step in applying the test for the controlled variable is to define a variable. You do not have to know in advance whether it is a controlled variable; that is what the test will decide. The second step is to push on the variable.
By push I mean to apply a disturbance that under normal circumstances should have a predictable direction and amount of effect on the variable. If I push hard enough on a life-sized statue, it should tilt in the direction of the push. Perhaps it will topple in that direction according to the simple laws of mechanics.
Having selected a variable and applied a push to it, the next step is to measure the actual effect of the push. I predict that pushing on this statue should make it tilt a certain amount in a certain direction. I apply the push and observe the tilt that actually occurs.
If the actual effect is far smaller than the predicted effect, common sense indicates that something must be pushing back. If the pushing-back is always just enough to cancel any amount or direction of disturbance (within some limits), it can be concluded that the pushing-back is systematic. This mirror-image effect is exactly what we are looking for.
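The mirror-image effect can be seen in a minimal simulation. This sketch is my own illustration, not part of the test itself: it assumes a simple integrating controller with a reference level of zero, and shows that whatever disturbance is applied, the actual effect on the variable is far smaller than the effect predicted from the disturbance alone.

```python
# A minimal sketch (my own assumption, not from the article) of the
# mirror-image effect: an integrating controller opposes whatever
# disturbance is applied, so the net effect on the variable is far
# smaller than predicted.

def run(disturbance, gain=10.0, steps=200, dt=0.05):
    """Simulate a controlled variable v = output + disturbance."""
    reference = 0.0   # the system's reference level (an assumption)
    output = 0.0      # the system's opposing action
    v = 0.0
    for _ in range(steps):
        v = output + disturbance      # disturbance and output add
        error = reference - v         # sensed error
        output += gain * error * dt   # integrate the error away
    return v

predicted = 5.0                 # effect of the push with no control system
actual = run(disturbance=5.0)   # effect with the control system acting
print(predicted, round(actual, 3))   # the actual effect is nearly zero
```

Notice that the opposition is not built in for any particular push: reversing the sign or changing the size of the disturbance produces an equal and opposite change in the system's output.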
It is necessary to discover what is pushing back, and how it is doing the pushing. Perhaps, on examining the statue carefully, you find an iron rod supporting its back from the base. In that case, you conclude that you did not have enough facts to make a correct prediction of the effects of the push; the bending moment of the rod should have been taken into account. But if no simple explanation for the failure of the prediction is found, you must look further.
Suppose it is discovered that the base of the statue seems to move when pushed. If there is a push to the east, the base tilts to the west, moving the center of support east of the center of gravity of the statue and thus creating a counterforce. Suppose this tilt of the base is found to be always just what is required to offset the effects of the push. It can be concluded that one may be on the trail of a control system.
What has been done is to find out something about the means of control, the path by which the output of the control system, if it exists, might be linked to the controlled variable (the angle between the statue's longitudinal centerline and the vertical). Finding this link is a necessary step in the test.
That step will usually lead to discovering the physical control system. Tracing the wires that work the motors that tilt the base of the statue, you find a black box a few yards away from the statue. That may be the control system, or at least all of it except its actuators, which have already been found.
There is still one step to be taken. You cannot be completely sure of the nature of the control system until you discover the variable it is really sensing. The situation has been approached with human prejudices; to me, it seems that the controlled variable is the orientation of the statue, a geometric or visual variable. Perhaps that variable is only related to the real controlled variable. What must be found now are the sensors that the control system is using.
Thinking in visual terms, you might look for a photocell that detects the tilt. Suppose a photocell is found on a stand near the statue. The test calls for breaking this link, preventing the sensing of the statue. The result should be that the effect of the push returns to what would be predicted from mechanical laws. So the photocell is covered and the disturbances are applied again. What happens is that the floodlights illuminating the statue turn on. The statue still resists the push; the photocell was for something else.
By careful searching, four strain gauges built into the base of the statue are discovered. These provide a signal showing where the center of thrust is, and the wires from the strain gauges run over to that black box. Disconnecting the wires shows that now the push succeeds in tilting the statue. As soon as its tilt becomes marked, an angry groundskeeper comes leaping out of the bushes and arrests the experimenter. Aha! You may have discovered another control system controlling the state of the statue.
To recapitulate, the test for the controlled variable involves the following steps:
1. Define a variable.
2. Apply various amounts and directions of disturbances directly to the variable.
3. Predict the expected effects of the disturbances, assuming no control system is acting.
4. Measure the actual effect of the disturbances.
5. If the actual effect is essentially the same as the predicted effect, stop. No control system is found.
6. If the actual effect is markedly smaller than the predicted effect, look for the cause of the opposition to the disturbance, and determine that it results from systematic variations in some other variable. If such a cause is found, it may be associated with the output of a control system.
7. Look for a means of sensing the controlled variable. If none is found, stop: no control system is proven to exist.
8. If a means of sensing is found, block it, so the variable cannot be sensed. If control is not lost, the sensor is not the right one; look for another. If no sensor whose blocking disrupts control can be found, stop: no control system is proven to exist.
9. If all steps of the test are passed, the variable is a controlled variable, its state is its reference level, and the control system has been identified.
To apply step 8 of the test to our computer experiment, cover the cursor suspected of being controlled with a cardboard strip. Control should be lost. Cover each cursor in turn; the cursor that is not being controlled will never pass the test. The other steps are easily carried out.
Now it is up to you. You can test controlled variables involving intensity, sensation, configuration, change, sequence, relationship, strategy, principle, and system concepts having to do with visual, auditory, tactile, kinesthetic, and other senses.
Good luck with the programs, and good hunting for controlled variables. I will be interested to receive word about what people are doing with the information covered in these articles.
Powers, W T, Behavior: The Control of Perception, Aldine Publishing Co, 200 Saw Mill River Rd, Hawthorne NY 10532, 1973.