THE ANALOG/DIGITAL DISTINCTION IN THE PHILOSOPHY OF MIND


II. Philosophers' Distinctions

II.1 Von Neumann

The nervous system is based on two types of communications: those which do not involve arithmetical formalisms, and those which do, i.e. communications of orders (logical ones) and communications of numbers (arithmetical ones). The former may be described as language proper, the latter as mathematics. (von Neumann, 1958, 80)

Von Neumann's The Computer and the Brain, written in 1956 and published in 1958, made the analog/digital distinction relevant to philosophy by claiming that the logics and mathematics of the central nervous system, viewed as representational systems, must "structurally be essentially different from those languages to which our common experience refers" (1958, 82). He has in mind here both natural language and binary mathematics.

Von Neumann did not claim, as is often said, that the brain must be an analog computer. He thought it was a system using both analog and digital signal types, organized so there is frequent interaction throughout the system. His sense of 'analog' comes, of course, from analog computers: "in an analog machine each number is represented by a suitable physical quantity" (1958, 3). He is thinking of analog computers in their computational rather than their analogical aspect here, and so of voltage or current magnitudes as representing (continuous) numbers by non-coded means. His sense of 'digital' emphasizes code: "in a decimal digital machine each number is represented in the same way as in conventional writing or printing, i.e. as a sequence of decimal digits" which are, in turn, instantiated as physical magnitudes (1958, 6).

His sense of computation is in accord with the kind of signal being used: analog computation is systematic transformation of signal magnitudes by physical means. Digital processes are "patterns of alternative actions, organized in highly repetitive sequences, and governed by strict and logical rules" (1958, 10).

Von Neumann thinks of "nerve pulses", spiking frequencies in axons or dendrites, as having a digital character because he thinks of them as discrete and notational. Given the possibility of dual description, von Neumann is giving spiking frequencies a linguistic description: they are "communicating orders", telling other physical systems what to do. Chemical changes in the nervous system are analog for von Neumann because they are a form of communication which is directly causal and not coded: they have representational significance but do not involve symbols. Because they have computational effect von Neumann thinks of them as arithmetical, quantitative. These continuous quantities are then transduced:

Intensities of quantitative stimuli are rendered by periodic or nearly periodic pulse trains, the frequency always being a function of the intensity of the stimulus ... That it is a monotone function ... permits the introduction of all kinds of scale effects and expressions of precision in terms that are conveniently and favorably dependent on the scales that arise. (von Neumann, 1958, 77)

Notice here that analogy is still present, though von Neumann is talking about digital signals. To his mind we have gone from physical/implicit magnitudes to signals encoding numbers, but the code employs proportional relation between magnitudes and pulse frequency. Of course this is not true of codes in a digital computer, but in the nervous system it would give neural coding an essentially statistical character:

What matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence, i.e. frequencies ... Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics. (von Neumann, 1958, 79)
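A toy sketch may make this concrete (the gain, time window, and Gaussian noise model below are my own illustrative assumptions, not von Neumann's): a stimulus intensity is rendered as a pulse count whose expected frequency is a monotone function of intensity, and the intensity is recovered statistically, by averaging over windows, rather than by reading the positions of definite markers.

    import random

    rng = random.Random(0)

    def encode_intensity(intensity, window=1.0, gain=40.0):
        # Expected pulse frequency is a monotone (here linear) function of
        # intensity; individual counts jitter, so only the statistics of
        # their occurrence carry the message. (Gain, window, noise: assumptions.)
        expected = gain * intensity * window
        return max(0, round(rng.gauss(expected, expected ** 0.5)))

    def decode_frequency(mean_count, window=1.0, gain=40.0):
        # Recover intensity from pulse frequency, not from the precise
        # position of any definite marker.
        return (mean_count / window) / gain

    counts = [encode_intensity(0.5) for _ in range(200)]
    print(decode_frequency(sum(counts) / len(counts)))   # close to 0.5

The precision of the decoded value depends on how long and how often one averages - just the kind of scale-dependent precision von Neumann describes.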

Von Neumann set a challenge to cognitive studies because he said that if the brain is a computer it is not necessarily or wholly a digital computer. Even the aspects of neural function that seem to be using discrete signals seem to be using them in a different form of code than we use in digital programs. That the CNS might be, or be significantly like, an analog computer was the obvious alternative, but most writers who followed von Neumann had technological or philosophical or political reasons for wanting to show that the brain is digital after all.

I will look first at a group who attempted to understand computation by means of functional analogy, and then at a line of argument committed to digital description.

II.2 Working analogy

We recall that two different natural systems N1, N2 are analogous when they realize a common formalism F. As we saw, analogy is like a modeling relation except that it relates two natural systems, rather than a natural system and a formal one.

The relation of analogy between natural systems is in fact independent of their material constitution. The most materially disparate systems can still be analogous. All that is required is that each natural system, individually and separately, be describable by the same formalism F. (Rosen, 1990, 119)

Margaret Boden uses 'analog' in a way that applies to representation in general and is far removed from its roots in computing technology. Consequently she ignores the continuity/discreteness aspect of the contrast and settles on this definition: "since significant similarity contributes to all senses of analogical representation, it may be regarded as the basic, minimalist definition of the term" (Boden, 1988, 29).

Because she is disregarding the question of continuity, her definition can cut across the symbolic/nonsymbolic line. Consequently her "significant similarity" can be either a modeling relation or an analogical relation, in Rosen's sense of those terms - it can be a relation between two physical processes that both realize the same formalism, or it can be a relation between a physical thing and a formalism. Boden is a classical cognitivist; her definition allows for mental representation that is both analog and symbolic.

For her the relevant contrast is between representation that has some and representation that has no "interpretive mapping, or significant isomorphism, between the structure of the representation and the structure of the thing represented" (Boden, 1988, 29). Her contrasting term is not 'digital' but (Sloman's term) 'Fregean'. Normal word order in a sentence would count as Fregean representation when it does not reflect the order of events described; it is Fregean because it involves the application of logical functions to arguments by a process like filling in slots in a pre-built sentence structure. On those occasions where word order does reflect some significant order of events ("Jehane drank her cocoa and went to bed" means something different from "Jehane went to bed and drank her cocoa") the sentence is said to be both analog and Fregean.

This is a definition that could be fun. It would give us 'tall' as analog because of its preponderance of (relatively) tall consonants, 'bite' as analog because of its snap of the teeth, 'look' as analog because it shows two eyes, 'howl' as analog for its sound, and 'concatenated' as analog because it is what it means. But none of this is relevant to whether human brains are analog or digital computers. We need a reading of 'significant similarity' that is systematic enough to give us inferential structure.

Johnson-Laird (1988) offers a contrast between mental models and propositional representation which does not mention analogicity but which could stand as a gloss for the similarly ambiguous notion of an analog medium of representation found in Kosslyn (1980). Johnson-Laird cites as his forebear Craik (1943) who, with a sense of computation not yet informed by digital computers, thought human brains might construct some form of

physical working model which works in the same way as the processes it parallels, in the aspects under consideration at any moment ... By a model we thus mean any physical or chemical system which has a similar relation-structure to that of the processes it imitates. (Craik, 1943, 51)

What we are talking about here is analogy between two physical processes whose causal structure is 'the same' - i.e. whose respective causal relations can be modeled in the inferential relations of some common formalism. Johnson-Laird does not commit himself to a non-symbolic understanding of what I will call working analogs. For him a 'mental model' is "not propositional", and by 'propositional' he seems to mean something like what Boden and Sloman mean by Fregean representation - he gives the predicate calculus, semantic networks and natural language sentences as examples. But does 'not propositional' imply 'not notational', not coded? He does speak of a "set of tokens that corresponds to the set of men, a set of tokens that corresponds to the set of cars, and a mapping relation between these two sets of entities". On the face of it, these "tokens" and "mappings" could be just a way of speaking about something actually to be understood in hardware terms directly, as weighted neural connections passing a flow of patterned activation, perhaps.
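On that hardware-friendly reading, nothing about the tokens need be sentence-like. Here is a minimal sketch of the men-and-cars model, with all names and structures my own illustrative choices rather than Johnson-Laird's notation:

    # A set of tokens for the men, a set for the cars, and a mapping
    # relation between the two sets of entities:
    men = ["m1", "m2", "m3"]
    cars = ["c1", "c2", "c3"]
    owns = dict(zip(men, cars))

    # An inference ("every man owns a car") is read directly off the
    # model's structure, not derived by syntactic rules over sentences:
    assert all(m in owns for m in men)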

The sorts of mental model or analog Johnson-Laird postulates involve some of the dimensions of analogy I have discussed. A "simple relational" model/analog provides analogy of elements and their properties and their relations. A spatial model/analog provides element-relation analogy in which relations are spatial. A dynamic model or analog provides causal analogy along with temporal analogy. (See Johnson-Laird, 1983, 422-3.) Any of these may also be part-whole analogies.

It is difficult to know how to take this proposal. A mental model has a "similar relation-structure to the process it models", and this differentiates it from "a simulation which merely mimics the phenomenon without relying on a similar underlying relation-structure" (Johnson-Laird, 1983, 4). What is ambiguous is whether the "underlying relation-structure" is implemented as a physical/causal structure or as a data structure like an array. This is the ambiguity that also troubles Kosslyn's story about an analog medium of image representation.

What is troublesome is that both Kosslyn and Johnson-Laird want cognition to be entirely modelable in formalisms suited to digital computers. Both are aware that a code-modeled procedure is not necessarily a code-using procedure; both want the determinacy and compositionality of code structures; and yet both want to talk about representational structures that have inference built into them in ways that are importantly unlike the ways inference is built into, for instance, a number system, or other systems normally used in digital computation. The question is, how do we understand an inferential relation over symbols, which is semantics-driven and not syntax-driven? There will be more about semantics in chapter IV. For now I would like just to pursue what might be meant by a physical relation-structure.

Any relation-structure, temporal succession for instance, can be modeled (in Rosen's sense) as a data structure in a code. Another physical event, modelable by the same formalism, may provide an analog of its temporal relation. If our cognitive creature is using a code - if the mental model Johnson-Laird proposes is a relation-structure implemented in a data structure - then that data structure must itself be physically realized. What is the nature of the mapping (through intermediate states if necessary) between data structure and the physical states realizing it? Does the data structure model its realization? Yes: modeling and realization are symmetrical with respect to each other. So, technically speaking, even when the creature is using a code there will be a relation of analogy between the world-event and its brain state representation. In other words, if we have a coded model physically realized, we will always have an analogy - of some kind. Now we need to ask whether there is an important difference in kinds of analogy.
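A trivial sketch of the point, with invented contents: temporal succession modeled as the order of elements in a list, a data structure that is itself physically realized in memory.

    # Order of elements models order in time:
    events = ["lightning", "thunder", "rain"]

    # The same formalism ("x precedes y") is realized twice over: once by
    # the world-events in time, once by the physical layout of this list
    # in memory - so coded model and world-event are analogs of a kind.
    def precedes(x, y, seq=events):
        return seq.index(x) < seq.index(y)

    assert precedes("lightning", "thunder")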

Let's say we have some physical process we want to model - a bridge subjected to a flood for example. We want to know whether the bridge will hold. We write an equation that describes the response properties of the parts of the bridge and the stressing properties of the flood at various points and times. We tie these descriptions together with mathematical operators. Our equation can take different forms which nonetheless are mathematically equivalent.

We write the equations in a form readily interpretable into the bridge and storm scenario. We have a set of simultaneous equations where parts of the equation model particular parts of the causal story, and where mathematical operations are ordered the way the causal sequence is ordered. Then we set up an analog computer to realize this equation - to be a working analog of the bridge and storm. It will give us a result that tells us whether the bridge will hold.

Alternatively we can write the equations in a parsimonious form in which operations are differently ordered and values are differently lumped. This equation will give us the same result. When we realize this form of the equation on our analog computer, the analog computer and the bridge-with-flood are still analogs, because they realize a common formalism. But the computer is no longer a working analog of the bridge system. It is a functional analog: if we consider it a black box we will say that it implements the same global input-output function.

Yet another alternative is that we write the equations in an extremely prolix form, in which recursive functions are defined over binary digits. If we realize this equation in an analog computer, our analog computer is a digital computer. Once again, the physical machine will be the analog of the bridge with flood; they realize the same formalism. Once again it will be a functional or black-box analog but not a working analog.
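A toy computation may fix the contrast between the first two cases. In the sketch below the stress and response functions are invented placeholders, not structural engineering: the stepwise form orders its operations the way the causal story is ordered, while the lumped form computes the same global function through intermediate quantities that correspond to no part of the scenario.

    def stress_at(p, t):                  # flood stress at point p, time t
        return 2.0 * p + 0.5 * t

    def response_of(p):                   # load-bearing capacity at point p
        return 10.0 - p

    def margin_stepwise(points, t):
        # Causally ordered: each intermediate value models a particular
        # part of the bridge at a particular moment (a working analog).
        worst = float("inf")
        for p in points:
            load = stress_at(p, t)               # the flood loads this part,
            capacity = response_of(p)            # the part resists,
            worst = min(worst, capacity - load)  # and the weakest margin is tracked
        return worst

    def margin_lumped(points, t):
        # Parsimonious and reordered: same input-output function, but the
        # intermediate quantities map onto no part of the causal story
        # (a merely functional analog).
        return min(10.0 - 3.0 * p - 0.5 * t for p in points)

    pts = [0.0, 1.0, 2.0]
    assert margin_stepwise(pts, 4.0) == margin_lumped(pts, 4.0)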

So now our question is this: Are brain states working analogs or merely functional analogs of world states? The answer, it seems, might be a matter of detail.

Let's go back to Craik's naive formulation in which human brains may be thought to construct a "physical working model which works in the same way as the processes it parallels." (Recall that I want to preserve Rosen's distinction between a model and an analogy, and so am speaking of working analogs rather than working models.) What is involved in a physical analog "working the same way" as something else? We want to disqualify any sort of global input-output functional correspondence because we have in mind some of the ways the details of representing structure can be more rather than less representationally relevant.

Sloman (1978) (in a chapter called "Intuition and Analogical Reasoning") puts it this way:

Analogical representations have parts which denote parts of what they represent. Moreover, some properties of, and relations between, the parts of the representation represent properties of, and relations between, parts of the things denoted. ... An analogical representation has a structure which gives information about the structure of the thing denoted, depicted or represented. (Sloman, 1978, 165)

Sloman is principally interested in non-formal inference and he wants to establish the plausibility of forms of representation that would make it possible. He uses 'analogical' and not 'analog' and, like Boden, he has an understanding of the term dissociated from analog computation with its necessary continuity. So his contrast is not between discrete and continuous, or between notational and non-notational, or between model and analogy. His use of 'symbol' also does not discriminate between explicit/notational code and representation generally. This should be kept in mind throughout the following passage:

The contrast between Fregean and analogical symbolisms is concerned with the ways in which complex symbols work. In both cases complex symbols have parts which are significant, and significant relations between parts ... The essential feature of Fregean symbolism is that all complex symbols are interpreted as representing the applications of functions to arguments ... It will suffice to notice that although a complex Fregean symbol, "the brother of the wife of Tom," has "Tom" as a part, the thing it denotes (Tom's brother-in-law) does not have Tom as a part. The structure of a complex symbol bears no relation to the structure of what it denotes, though it can be interpreted as representing the structure of a procedure for identifying what it denotes. (Sloman, 1978, 164)

An arithmetic expression like

3 x 5 + 4 x 3
---------------
11 - 2

which Sloman takes as Fregean, can certainly be seen as representing a procedure for finding what the expression denotes. But notice that if this expression were realized on an analog computer, and if individual numerical values were encoded magnitudes from our bridge-in-flood system, and if addition, subtraction, multiplication and division encoded causal relations, then the analog computer realization of the expression would (intuitively speaking, since we still lack a definition) be a working analog of the bridge in flood.
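The procedural reading is easy to make concrete. In the sketch below (the tree encoding is my own illustrative choice), the expression is a structure of functions applied to arguments, and evaluating that structure is a procedure for identifying the number denoted.

    from operator import add, mul, sub, truediv

    # (3 x 5 + 4 x 3) / (11 - 2) as applications of functions to arguments:
    expr = (truediv, (add, (mul, 3, 5), (mul, 4, 3)), (sub, 11, 2))

    def evaluate(node):
        # Recursively apply each function to the values of its arguments.
        if isinstance(node, tuple):
            fn, left, right = node
            return fn(evaluate(left), evaluate(right))
        return node

    print(evaluate(expr))   # 3.0

Realized on an analog computer, each operator node would instead be a physical component and each argument an encoded magnitude, so the same tree would be present as causal structure rather than as a procedure over symbols.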

It may be that if we think of 'procedure' in brain terms - a procedure being what the brain does, and in what order - then a "procedure for identifying what is denoted" would be just whatever the brain does when something is being understood. If there has to be an activation of neural patterns instantiating the many things known about Tom, in order to activate the neural patterns instantiating "Tom's wife", in order to activate the neural patterns instantiating "her brother", on the way to activating the neural patterns instantiating "Jerry Rigg", then it will also be true of the structure of a brain procedure (the sequence of structurally-related system states) that "some properties of, and relations between, the parts of the representation, represent properties of, and relations between, parts of the things denoted".

I will emphasize again that the Fregean expression is a model not an analogy. The way the expression is realized in the brain hardware of a person understanding it is an analog if there is some world state which also realizes that formal expression. This analogy may be a functional analogy, or what I have called a working analogy. At this point our Fregean model does not give us enough information to be able to tell which, but there could be another formal expression, more complex, relationally more specific, detailing more entailment structure. (See Rosen, 1991, 98-103.) Two physical configurations which were analogs in relation to the above formalism may also be analogs in relation to this much more detailed formalism. We could thus define 'working analogy' as a relation mediated by models with more detailed, more informative, entailment structure. This would put working analogy and functional analogy on two ends of a continuum, and I would think this is correct.

Where does this leave us with analog and digital, symbolic and non-symbolic? We have agreed that analog computers are continuous function computers and that they are therefore not code-using devices. We have seen that an analog computer configured to realize some differential equation is a functional analog of every other physical process describable by that equation - and that this property is one they share with digital computers. I have proposed that they will be working analogs of another physical process when the formal description that models them both is an expression in a formalism capable of noting more complex and detailed entailment structure. This property is the one that can distinguish them from digital computers, whose causal states and relations will not be mappable onto the more detailed inferential structure of the expanded formalism.

II.3 The rationalist line

David Lewis published a paper in 1973 attempting a definition of analog and digital representation of numbers. Fodor and Block, in an often-cited unpublished paper written the same year, attempted to improve on Lewis's definition. Both papers inspired replies, the most important of which was Demopoulos (1987). Although the spirit of Pylyshyn's (1984) definition would make him part of this group, I will look at his distinction separately because the detail of his treatment makes it a useful foil for the discussion of connectionism which will follow.

Lewis draws the analog/digital distinction in relation to the fundamental versus derived nature of the physical magnitudes that characterize the relevant physical realizations of computational states. "Analog representation of numbers is representation of numbers by physical magnitudes that are either primitive or almost primitive" in some "good reconstruction of the language of physics" (1973, 324-5). Digital representation of numbers is "representation of numbers by differentiated multidigital magnitudes" (1973, 327).

A physical magnitude as Lewis is defining it is the measuring practice which assigns numbers to physical systems. A primitive magnitude is one that is straightforward in its operation - something like measuring electrical resistance or weighing a volume of fluid. These measurement functions - voltage, current, resistance, weight, velocity, and the rest - are "definable in the language of physics more or less as we know it" (1973, 323).

Digital representation of numbers in a computer is also representation by means of physical magnitudes. Why then isn't it analog representation? Lewis imagines a 36-bit register where each bit position indicates a binary 0 or 1 by means of negative or positive voltage. This is digital representation of a number, and it is representation by means of physical magnitudes. At each bit position the magnitude is also primitive in his terms. But the number is not represented by the individual magnitudes. It is represented by a particular pattern of magnitudes, and pattern of magnitudes is not a primitive magnitude in the language of physics as we know it. Lewis would call it a derived magnitude. Describing a pattern of magnitudes is no straightforward matter, and this complexity of description is what differentiates digital from analog representation.
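A toy rendering of the contrast, with voltages and decoding that are illustrative assumptions in the spirit of Lewis's example:

    def analog_read(voltage):
        # Analog: a single primitive magnitude stands for the number directly.
        return voltage                    # e.g. 7.0 volts represents 7.0

    def digital_read(bit_voltages):
        # Digital: the number lives in a pattern of magnitudes - a derived,
        # "multidigital" magnitude - not in any single primitive one.
        n = 0
        for v in bit_voltages:
            n = 2 * n + (1 if v > 0 else 0)   # sign of each voltage codes a digit
        return n

    register = [-5.0] * 33 + [5.0, 5.0, 5.0]  # 36 positions; the pattern codes 7
    print(analog_read(7.0), digital_read(register))   # 7.0 7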

The pattern of voltages in Lewis's 36-bit register resembles what in section I.5 I was calling the state space description of the electrical configuration of the machine. It too could be described in the form of a vector, an ordered set of primitive magnitudes representing voltage at the 36 bit positions. What Lewis's bit pattern and the state space of the machine have in common is that the causal properties of the relevant physical magnitudes are emergent.

(See P. M. Churchland, 1989, 51, for a discussion of this sense of emergence along with the other sense that specifies an emergent property as one that does not consist in an organizational property of a substrate and hence is not reducible. It should be noted, however, that Churchland's understanding of property reduction does not involve deducibility. "Formal considerations alone guarantee that, for any predicate F not already in the proprietary lexicon of the aspirant reducing theory TN, no statements whatever involving F ... will be deducible from TN. The deducibility requirement would thus trivialize the notion of reduction by making it impossible for any conceptual framework to reduce any other, distinct conceptual framework": 1989, 51.)

They are network properties that appear exactly when the elements of some substrate are suitably organized, when they stand in certain relations to each other. This sense of an emergent causal property, the sense in which it is understood as consisting exactly of an organizational feature of a substrate, implies reducibility. So we are not dealing with a representational property that is not reducible to primitives of physics. What the difference between analog and digital seems to come to for Lewis, then, is a question of whether reduction is necessary or not.

In this paper Lewis also rejects Goodman's definition of 'analog'. He produces two counterexamples to show that mathematical continuity - denseness of representational types - is not essential to forms of computation accepted as analog. The first is an analog computer whose input is controlled by a potentiometer with differentiated click-positions. The rest of its operation is the usual analog computer setup. He says this computer is analog even though input signal magnitudes are quantized, because numbers are being represented by electrical resistances. His second counterexample is a complicated device for multiplying by means of pellets draining from buckets through spring-loaded valves. Here the signal is disjoint and differentiated, and yet we would call this analog multiplication because, again, numbers are being realized as physical magnitudes, weight perhaps, simply describable in the language of physics.

It is true that engineers would have no trouble calling either of these devices analog, although they propagate amplitude-quantized signals. They are both still continuous function computers. The implementations of mathematical operators in both systems, op amps in the first and spring-loaded valves in the second, are still continuous-time processors.

But what if they were not? What if processing elements could operate only with quantized, time-sampled signals? Then we would call the computers either analog or digital, I think, depending on whether we thought of the discretized signals and processes as implementing a code. Mechanical digital computers must have had something of this character - gear wheels with teeth, some large and some small. We would have lost mathematical continuity but we would still have processing that depends on causal proportionalities among angles of rotation and so on - computational elements whose output was proportional to their input. Here the gear teeth would also be thought of as realizing number symbols. The point about Goodman's criteria for notational systems is that we can't think of continuous signals as instantiating symbols. Analog computation has to depend on causal proportionalities because there are no elements in an analog signal for syntactic rules to get a grip on. Again, it is always possible to introduce thresholding devices into analog circuits, but analog computation occurs precisely without the computational use of such thresholds.
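A sketch of such a geared machine, with invented tooth counts: the signal advances in whole-tooth increments, so mathematical continuity is gone, yet the computation still rides on a causal proportionality - the gear ratio - rather than on syntactic rules defined over digits.

    def driven_rotations(driver_rotations, driver_teeth=40, driven_teeth=10):
        # Output angle is proportional to input angle (ratio 4:1 here),
        # advancing tooth by tooth rather than continuously.
        teeth_meshed = driver_rotations * driver_teeth
        return teeth_meshed // driven_teeth

    print(driven_rotations(3))   # 12: multiplication by the gear ratio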

Fodor and Block's (1973) reply to Lewis's paper also rejects the continuous/discrete characterization of the analog/digital distinction, because they think that in this form the distinction will not carry the weight they want it to carry - i.e. the full weight of the distinction between cognitive and noncognitive processes:

From the point of view of the fundamental organization of the machine, the distinction between continuous and discrete systems is a very trivial one. Any continuous system can be rendered discrete simply by reducing its sensitivity (e.g., by incorporating threshold elements). Thus the theorist who wants to rest the weight of the cognitivism issue on the continuous/discrete distinction is likely to be embarrassed by the existence of pairs of machines which obviously operate fundamentally in the same way, even though one is digital and the other analog on the present way of drawing the distinction. (Fodor and Block, 1973, 8-9)

But the continuous/discrete distinction really is not so trivial. A discretized analog machine will be like the geared digital machine I spoke of earlier: It will be possible to see it as a symbol-using device, whose inferential rules are implemented in angles of rotation. With an analog machine whose representational processes are not digitized, we do not have that option.

Fodor and Block, and Demopoulos after them,

wish to be able to say for some principled and not merely pragmatical reason that some components of the cognitive system are more accurately described in biological (and ultimately physical) terms, while others demand a computational description. And we would like this distinction to coincide with the one between analog and digital. (Demopoulos, 1987, 83)

But why do we want "this distinction to coincide with the one between analog and digital"? For reasons, as Demopoulos admits, having to do with disciplinary territory. It is being assumed that cognitive processes, representation and inference, must be language-like processes - must be like language thought of as a formal system. Representation must be symbolic, and inference must be directed by rules. And 'rule', here, is not just another name for a formal description of a nonformal process. In a cognitive system, in Fodor's sense of it, rules are deployed to control the behavior of the organism (Fodor and Block, 1973, 4); cognitive processing relies on the analysis of signals into language-like constituents recognized by these rules. (Recall how in section I.5 the order of signals was representationally relevant only in relation to divisions of the bit-stream into code words.)

Given the assumption that cognition depends on code, Fodor and Block do not want it to be possible for analog and digital machines to "operate in the same way". A discretized analog machine, if it were seen as digital, would have to be thought of as a symbol-using machine - a representing machine - whose inferential processes nonetheless were not rule-governed but physical/causal in an immediately transparent way. If inferential processing of representations can be accomplished without the intermediacy of rules themselves implemented as symbol-expressions in code, then cognitive psychology loses its special science autonomy. Human psychology becomes a division of biophysics. Machine psychology becomes a division of engineering (which it is anyway). Cognitive science would lose its founding assurance that logic-using systems constitute a natural kind. So 'analog' and 'digital' must be defined in a way that divides logic-users from non-logic-users.

Lewis's way of drawing the distinction won't do because, as we saw, the causal properties of an organization of primitive magnitudes are, in the end, reducible to physical properties. So Fodor and Block say it this way: a computing device is analog if its input-output behavior instantiates a physical law, and digital otherwise. An example of the instantiation of a law is an analog component set up in such a way that relations of resistance, voltage and current - described by Ohm's Law - do the computational work. That is, those of the machine's state transitions that are representationally relevant fall under Ohm's Law. The state transition map just is Ohm's Law.
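In a toy rendering (the framing as division and the numbers are mine; the Ohm's Law example is Fodor and Block's), set the dividend as a voltage and the divisor as a resistance, and the current the law dictates just is the quotient:

    def divide_by_ohms_law(v_volts, r_ohms):
        # I = V / R: the representationally relevant state transition
        # falls under - is - Ohm's Law.
        return v_volts / r_ohms

    print(divide_by_ohms_law(12.0, 4.0))   # 3.0 amperes, i.e. 12 / 4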

The same computation done in a digital machine will also have a physical description: the state space description of its electrical configuration. Will the machine's transitions in state space instantiate a law or laws? Fodor and Block say no. Its description will be a true empirical generalization, not a law. The difference, they admit, seems to be a matter of degree.

Demopoulos argues that none of this will work, because transitions in the electrical state space of any individual digital machine will always be subsumable under some combination of physical laws. To get to a principled distinction between cognitive and noncognitive computational systems, we have to talk about classes of machines:

A class of machines is analog if its state-transition functions are subsumable under a physical law. A machine-class is digital if its computational behavior can be captured only in computational terms - if there is no single set of physical principles which captures the generalizations governing the behavior of all the devices in the class. (Demopoulos, 1987, 84)

By "computational behavior" Demopoulos means behavior described in terms of its task domain. We may want to say different models of digital computers are multiplying 23 x 47. The state transition functions of all these machines will not have anything in common that can be described by means of a physical principle. "Multiplying 23 x 47" will not be a projectible predicate.

This is a version of cognitive psychology's argument from multiple instantiation. Psychology, it is argued, wants explanation over such functional entities as 'believing it will snow' or 'knowing it is time to go home'. Functionalist kinds - psychological or computational - are physically realized in such diverse ways that it would be impossible to find a common physical description. So functionalist kinds cannot be reduced to physical kinds. If we want to talk about what a class of computers is doing, or can do - its functions - then we have to give it a computational description. For the class of digital computers, Demopoulos says, this task-domain description will be the only sort of computational description we can give.

A consequence of this way of drawing the analog/digital distinction is that analog computers will also be classed as digital to the extent that we view them as representing and computing. There is more than one way for analog computers, also, to multiply 23 x 47; and any variation in their component materials will require that they be described by different physical laws. So all classes of computers will be digital, inasmuch as they realize task-domain functions; and all singleton computers will be analog, inasmuch as their computational state-transitions are law-describable as well as rule-describable. An odd consequence.

"It would be worthwhile if the analog/digital distinction were drawn in a way which illuminated the computationalist's interest in the computer analogy", Demopoulos says (1987, 80). He himself draws it in a way that certainly does not make the appropriateness of the digital computer metaphor less plausible; indeed he designs a distinction which merely brings the analog/digital distinction into line under the physicalist/functionalist distinction. This is a pragmatic rather than a principled move because he has disregarded the features of analog computers that suggest an alternative to logic-using computation over symbols.

 

 
