THE ANALOG/DIGITAL DISTINCTION IN THE PHILOSOPHY OF MIND


III. Pylyshyn and Symbolic Systems

It was all over by 1970. The field of computers came to mean exclusively digital computers. Analog systems faded to become a small sub-part of electrical engineering. The finish was spelled not just by the increased speed and cost-efficiency of digital systems, but by the discovery of the Fast Fourier Transform, which created the field of digital signal processing and thus penetrated the major bastion of analog computation. The transformation of the field is so complete that many young computer scientists hardly know what analog computers are.

The main significance of this issue, with its resolution, was to help create the discipline of computer science and separate it from electrical engineering. Its effect on AI lies mostly in the loss of an analytic point of view, in which the contrast between analog and digital computation is taken as the starting point for asking what sort of information-processing the nervous system does. (Newell, 1983, 195)

Allen Newell's 1983 account of intellectual issues in the history of artificial intelligence makes the interesting claim that the issue of continuous versus symbolic systems was the issue that, from about 1955, resulted in the institutional separation of artificial intelligence from engineering and cybernetics. Taking a position on this issue would result in coordinated choices on four other issues: (1) pattern-recognition versus problem-solving as research areas; (2) learning versus performance; (3) parallel versus serial processing; and (4) neurophysiology versus psychology as background from which problems are drawn.

Continuous system folk ended up in electrical engineering departments; the AI folk ended up in computer science departments. (It must be remembered that initially computer science departments were almost exclusively focused on software systems and almost all concern with hardware systems was in electrical engineering departments.)

Adopting a class of systems has a profound influence on the course of a science. Alternative theories that are expressed within the same class are comparable in many ways, but theories expressed in different classes of systems are almost totally incomparable. Even more, the scientist's intuitions are tied strongly to the class of systems he or she adopts - what is important, what problems can be solved, what possibilities exist for theoretical extension, and so forth. Thus, the major historical effect of this issue in the 1960s was the rather complete separation of those who thought in terms of continuous systems from those who thought in terms of programming systems. The former were the cyberneticians and engineers concerned with pattern recognition; the latter became the AI community. (Newell, 1983, 198)

Classical cognitive science takes its computational bearings from computer science, which evolved with AI in the symbol-system camp. Looking at what I have called the rationalist line on the analog/digital distinction, it is helpful to keep these institutional alliances in mind.

Pylyshyn is a computational psychologist, a cognitive scientist - one of those who adopt "the programming system itself as the way to describe intelligent systems" (Newell, 1983, 198). His 1984 monograph Computation and Cognition offers a definition of 'analog' that is like Fodor and Block's, and which serves to drive a wedge between what are considered fixed, stimulus-bound, biological components of cognition and those that may participate in rationality. Before I outline Pylyshyn's notion of analog processing I will look briefly at the more general approach that informs it.

III.1 Logical functionalism

Functionalism is an approach to explanation which defines its explanatory kinds over causal relations rather than physical structures. Freudian psychology is an instance of a functionalist theory, in which 'ego', 'id' and 'superego' are defined in relation to their causal effects. They are hypothetical functions from some input to some output, which have not been localized to any organ in the brain although it is assumed they are neurally implemented in some way.

Pylyshyn's computationalism is a logical functionalism because his explanatory entities are identified with the elements of formal languages or, more precisely, with the elements of those formal languages which are also programming languages. Any formal language has these characteristics (a minimal sketch follows the list):

(1) a finite lexicon of logical primitives - discrete, context-independent elementary symbols;

(2) well-formation rules and inferential operators which compose and re-order strings of these atomic symbols in purely syntactic ways that are nonetheless guaranteed to be reference- and truth-preserving;

(3) a semantic function from symbols to referents which is combinatorial - which guarantees that the meaning or reference of any string is a function of the meaning or reference of its parts.
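These three characteristics can be fixed with a toy example. The following sketch is my own illustration, not anything Pylyshyn offers - the lexicon and denotations are invented - but it exhibits a finite lexicon of discrete symbols, a purely syntactic well-formation rule, and a combinatorial semantic function under which the referent of any string is computed from the referents of its parts.

```python
# A toy formal language: finite lexicon, syntactic composition,
# combinatorial semantics. Hypothetical illustration only.

# (1) Finite lexicon of discrete, context-independent symbols.
LEXICON = {"two", "three", "plus", "times"}

# (3) A semantic function from atomic symbols to referents.
DENOTATION = {"two": 2, "three": 3}
OPERATION = {"plus": lambda x, y: x + y, "times": lambda x, y: x * y}

def well_formed(s):
    """(2) Well-formation rule: numerals and operators must alternate,
    beginning and ending with a numeral - a purely syntactic test."""
    tokens = s.split()
    if not all(t in LEXICON for t in tokens):
        return False
    return all(
        (t in DENOTATION) == (i % 2 == 0) for i, t in enumerate(tokens)
    ) and len(tokens) % 2 == 1

def meaning(s):
    """Combinatorial semantics: the referent of a string is a function
    of the referents of its parts, composed left to right."""
    assert well_formed(s)
    tokens = s.split()
    value = DENOTATION[tokens[0]]
    for op, num in zip(tokens[1::2], tokens[2::2]):
        value = OPERATION[op](value, DENOTATION[num])
    return value

print(meaning("two plus three times two"))  # ((2 + 3) * 2) = 10
```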

Pylyshyn's logical functionalism, then, is a psychological theory which attributes certain kinds of behavioral regularities to the causal interaction of symbols and rules. An explanation of a psychological event would take the form of a program specifying a series of procedures performed on symbols. In this context 'program' has the same explanatory status as 'symbol' or 'rule'. It too is a functionally defined entity. Like id, ego and superego, symbol, rule and program are not localized to any identifiable structure or process in the brain: they are said to be implemented or instantiated or realized in neural structures and procedures, but the details of this instantiation need not be considered. Indeed they cannot be considered, because they are not known. Cognitive psychologists are in the Kantian position of trying to extrapolate from experimental data to the rules and representational structures which must be thought to be responsible for that data.

There are many psychologies, and functionalism is an approach that lends itself to many systems of functional kinds - we can think of Gestalt theory, Reichian orgone theory and even well-developed theologies in this light. Cognitive psychologists are motivated to the choice of logical functionalism by the class of behaviors that most interest them. They develop a logical or linguistic functionalism because they are interested in rational behaviors mediated by language. Rational behavior is thought to be goal-directed behavior arrived at through the intermediacy of inferential operations on beliefs. Goals or beliefs are themselves thought to require internal representation in the form of propositions, because, as is the case with those of our beliefs and goals articulated in natural language, they are thought to refer, to abstract, to have truth conditions and to be in logical relation to each other. They are also thought to have constituent structure: to have parts which may be connected and disconnected and moved around. Consequently they are thought to be productive and systematic. Linguistic productivity - the indefinite variability of what may be said in any language - is given by recursive processes operating on syntactic/semantic units. Our ability to believe just about anything is thought to be given by similar recursive operations over units of belief. Linguistic systematicity - the fact that we can say that eggs lay ducks as easily as we can say that ducks lay eggs - is also thought to be a feature of the internal representations of beliefs.
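Productivity and systematicity can both be made concrete with a miniature generative grammar. The sketch below is hypothetical and mine - the duck-and-egg vocabulary is borrowed from the example just given - but it shows how recursion yields an unbounded set of strings, and how the same combinatorial rules generate 'ducks lay eggs' and 'eggs lay ducks' alike.

```python
# Productivity and systematicity in a toy recursive grammar.
# Hypothetical sketch; not a model of any actual language faculty.
import itertools

NOUNS = ["ducks", "eggs"]
VERBS = ["lay"]

def sentences(depth):
    """Recursively generate sentences: S -> N V N | N V N 'and' S.
    Recursion makes the set unbounded (productivity)."""
    for n1, v, n2 in itertools.product(NOUNS, VERBS, NOUNS):
        base = f"{n1} {v} {n2}"
        yield base
        if depth > 0:
            for rest in sentences(depth - 1):
                yield f"{base} and {rest}"

# Systematicity: the same rules generate both orders of constituents.
out = list(sentences(0))
assert "ducks lay eggs" in out and "eggs lay ducks" in out
print(len(list(sentences(1))), "sentences at depth 1")  # 20
```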

A psychology which looks for explanations in terms of beliefs and goals which themselves name some external features of a task domain is called propositional attitude psychology or intentional psychology or belief-desire psychology. An intentional psychology could theoretically be neutral as to the form taken by internal representations of beliefs and goals, but intentional psychologies of Pylyshyn's sort invariably posit sentence-like strings, both for the reasons given above and because their explanatory medium and construct - the program - comes in the form of linguistic strings. The internal language posited by such psychologies is called mentalese; and the hypothesis itself is called LOT (language of thought) theory. So Pylyshyn is a belief-desire psychologist and a LOT theorist, and logical functionalism is the methodology that makes these theories computationally workable. "Sentential predicate-argument structures," Pylyshyn says, "are the only known means of encoding truth-valuable assertions for which there exist a partial calculus and at least certain combinatorial aspects of a semantic theory" (1984, 196). We have well-developed theories of deductive inference defined over formal elements of a language. We know how to implement formal languages in lower-level languages and ultimately in hardware processes. Digital computation yields an existence proof for the physical instantiability of formal languages. But what exactly are symbols and rules and programs, functionally defined?

III.2 Functionalist symbols

We know the work 'symbol' is doing in Pylyshyn's theory: it is the theoretical atom - the logical primitive, the epistemological and computational foundation. (Pylyshyn cites Goodman on this.) A symbol must have disjoint types and finitely differentiated tokens. It must be discrete, non-fractionable, and context-independent. But what does it mean to say a functionally defined entity is discrete?

A symbol in a digital computer is something we can know top-down, bottom-up, both coming and going. Seen at the hardware level, as just this temporary setting of flip-flops in a register, an instantiation of a symbol is a structure not a function. It is some particular inflection of a physical material. This identifiability of token symbols with a material state gives the notion of a digital symbol a clarity it does not have when we are talking about human cognition. We know what it means for digital symbols to be discrete: it means a digital machine's switches are bistable devices with no intermediate states. But where in the massively interconnected structure of the central nervous system should we imagine a similar discreteness? It seems very unlikely that the firing decisions of individual neural synapses should be identified with instantiations of logical atoms, since there are more than 10^14 distinct synapses in the human brain (see P.M. Churchland, 1989, 209-10). If symbols are thought to be instantiated by cell assemblies (Hebb, 1949), do we say an activation pattern a 'is discrete' if it results in decision A rather than decision B somewhere downstream? That would allow connectionist patterns of activation to be thought of as realizing symbols. But all patterns of activation that result in decision A will be called symbol a in a functionalist identification. And if the activation patterns which instantiated a on one occasion instantiate b on another occasion when surrounding cells are differently activated, we have the same local state instantiating different symbol types at different times. This violates Goodman's disjointness criterion.
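The disjointness worry can be put in mock-computational terms. In the hypothetical sketch below - the numbers and names are invented - a symbol type is assigned functionally, by downstream effect, and one and the same local activation pattern then realizes different symbol types in different surrounding contexts, which is just what Goodman's criterion forbids.

```python
# Functional typing of activation patterns, and how it can violate
# disjointness. Hypothetical illustration of the argument in the text.

def downstream_decision(pattern, context):
    """Which decision a pattern produces depends on surrounding
    activation, not on the local pattern alone."""
    return "A" if sum(pattern) + sum(context) > 1.0 else "B"

def symbol_type(pattern, context):
    """Functionalist identification: the symbol a pattern realizes is
    read off the decision it leads to downstream."""
    return {"A": "a", "B": "b"}[downstream_decision(pattern, context)]

local = (0.4, 0.3)                  # one and the same local state
print(symbol_type(local, (0.6,)))   # 'a' when neighbours are active
print(symbol_type(local, (0.1,)))   # 'b' when they are quiet
# Same token state, two symbol types: disjointness fails.
```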

My point is that the status of a functionalist symbol is unclear as long as neural realization is unknown. There is no problem with external description in symbol terms. We identify symbol a with whatever results in decision A, whatever the local circumstance. This is perfectly good functionalist description. We can use it to write programs in which decision A is always followed by decision E given the copresence of decisions C and D. Our program is a formal description of the causal relations in a sequence of events - it is a model, in Rosen's terms. Like any model it can be run on a digital computer. The problem with functionalist programs whose neural realization is unknown comes when we want to say the nervous system is a computer executing - literally running - the program we have written. There is nothing in a functionalist definition of symbols or rules or programs that licenses us to do this. And there is nothing in a functionalist description that requires it. We have seen that analog computation can be given a formal description without having to be seen as itself employing symbols. A claim that the human brain is using the symbols named in a program describing it seems so manifestly to overstep the functionalist mandate that one wonders if Pylyshyn really wishes to make this claim.
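A minimal sketch of such a purely descriptive, functionalist program might look as follows. The decision names are the hypothetical ones used above, and no claim is made that any nervous system literally executes the code: it is a runnable model of a causal regularity, nothing more.

```python
# A functionalist model: a formal description of causal regularities
# among decisions, runnable as a simulation. Hypothetical example only.

def model_step(decisions):
    """Rule: decision A is always followed by decision E given the
    copresence of decisions C and D."""
    if {"A", "C", "D"} <= decisions:
        return decisions | {"E"}
    return decisions

print(model_step({"A", "C", "D"}))  # E is added
print(model_step({"A", "C"}))       # no E: D was absent
```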

The difference between weak and strong equivalence of program and cognitive process, Pylyshyn says, is a matter of detail. A program is weakly equivalent if it realizes the same global input-output function as the process modeled:

Clearly, if the computational system is to be viewed as a model of the cognitive process rather than a simulation of cognitive behavior, it must correspond to the mental process in greater detail than is implied by weak equivalence. On the other hand, it is equally clear that computers not only are made of quite different material than brains but, through details of realizing particular operations (for example, certain register-transfer paths or binary mechanisms and bit-shifting operations), differ from the way the brain works. The correspondence between computational models and cognitive processes appears to fall somewhere between these two extremes. (Pylyshyn, 1984, 89)

Pylyshyn's suggestion for the appropriate level at which to define strong equivalence is the level of the algorithm - a level which specifies the steps a system is to follow. Elementary steps in the algorithm are to be formally isomorphic to primitive operations of the cognitive architecture - and primitive operations in the architecture are those defined over the most elementary cognitive units, which are identified with symbols. Strong equivalence of an algorithm written for a digital machine and the procedure followed by a computing brain would imply that brain and machine have the same functional architecture. Whether equivalence of functional architecture would rule out connectionist realization of elementary symbols of the language of thought remains unclear. I will come back to the notion of functional architecture, but first I would like to look more closely at what a functionalist version of a rule is like.
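The weak/strong contrast can be illustrated - this is my example, not Pylyshyn's - with two programs that compute the same input-output function by different algorithms. They are weakly equivalent to one another, but a process strongly equivalent to one cannot be strongly equivalent to the other, since the steps taken differ.

```python
# Weak vs. strong equivalence: same input-output function, different
# algorithms. Hypothetical illustration of the distinction in the text.

def sum_iterative(n):
    """Adds 1..n step by step: n additions."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Gauss's formula: a single multiplication and division."""
    return n * (n + 1) // 2

# Weak equivalence: identical global input-output behaviour.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
# Strong equivalence would further require the same intermediate
# steps, and here the step structures differ (n additions vs. one
# multiplication), so a brain strongly equivalent to the first
# program is not strongly equivalent to the second.
```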

III.3 Functionalist rules

An algorithm is a sequence of well-defined rules or procedures; strong equivalence of digital algorithms and biological cognition will imply that cognitive creatures perform computational operations by means of the same steps taken in the same order. Again, these steps are functionally not physically defined:

Computational operators deal only with computational events. Although there are physical-event tokens which underlie instances of computational events, the regularities among the computational events are not captured by physical laws, but by computational rules instead ... Operators map computational states onto computational states, or symbols onto symbols, but not physical descriptions onto physical descriptions. That is why physical descriptions are irrelevant in characterizing computational operators, though, of course, they are relevant to showing how primitive operators are realized in a particular system. (Pylyshyn, 1984, 170)

That 'rule', 'operator', and 'procedure' are often used synonymously follows from the functionalist nature of their description. 'Rule' is a term that is at home in logical calculi, where rules of inference are prescriptive rather than descriptive. It is also at home in grammars, where its status is mixed or ambiguous. The rules of grammar are taught, and thus prescriptive; but when grammars are studied as an empirical domain, as is the case with Chomsky's generative grammar of natural languages, grammatical rules are descriptions of regularities assumed to lie behind the construction or comprehension of sentences having constituent structure. It may or may not be thought that these grammatical rules are also inscribed as conditional sentences in mentalese, somewhere in the human language system. A functionalist characterization is compatible with either a descriptive or a prescriptive reading of 'computational rule'.

High-level programs for digital computers are written in the form of procedures, simple commands and repetition statements. These can be seen as rules in the prescriptive sense: they are orders telling the machine what to do. Instantiated at the hardware level they are, of course, causal sequences, a series of events which reset switches and thus reorganize the physical machine. Here the relation of the written program to the machine in operation can also be seen as descriptive - a modeling relation. This interesting overlay of the prescriptive and descriptive senses of 'rule' at different conceptual levels in a digital computer is made possible by the fact that digital machines are string processors some of whose instructions are programmed and so enter the machine in the same form as the sentences to be processed. Programmed rules (as opposed to those implicit in hardware) are sentences designed to have logical effects on other sentences. We see them as causes because we are thinking of them as task initiators in an intentional domain. They are commands, orders which are also causal determinants. A parallel in cognitive terms would be what happens when someone says to us "Give me the sum of your birth year and your telephone number", and we do. Their sentence, somehow input and implemented in personal mentalese, i.e. realized in personal wetware, programs us and results in a computation.
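The point that programmed rules enter the machine in the same form as the sentences they operate on can itself be sketched. In the hypothetical miniature below - invented for illustration - a rule is stored as a string alongside the data strings it rewrites; nothing about the format distinguishes rule from datum until the processor reads it.

```python
# Programmed rules are sentences designed to have effects on other
# sentences: rules and data share one string format. Hypothetical
# miniature, not a description of any real machine.

memory = [
    "RULE replace ducks geese",   # a rule, stored as a string
    "ducks lay eggs",             # a data sentence, stored the same way
]

def run(memory):
    rules = [m.split()[1:] for m in memory if m.startswith("RULE")]
    data = [m for m in memory if not m.startswith("RULE")]
    for action, old, new in rules:
        if action == "replace":
            # The rule-sentence has a logical effect on data-sentences.
            data = [d.replace(old, new) for d in data]
    return data

print(run(memory))  # ['geese lay eggs']
```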

When we are dealing with digital machines, then, we seem to have a process that can be seen as simultaneously rule-programmed or rule-using - at the intentional or functional level - and rule-describable - at the structural or physical level. We have seen that this is not the case with analog machines, which are physically configured - 'programmed' - without the intermediacy of coded rules. Analog machines 'obey' mathematical rules inasmuch as their materials have been chosen for their proportion-preserving causal properties, but they cannot be seen as responding to internalized sentences. They are rule-describable but not rule-using. What we noticed above, however, is that digital machines, described at the hardware level, are rule-describable rather than rule-using as well. The description of a computational machine as rule-using, then, presupposes two things: (1) the description must be a description at a level higher than the hardware level; and (2) the machine must be programmed at least partially by the intermediacy of coded sentences.
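The contrast can be simulated. In the hedged sketch below, with invented component values, an 'analog' integrator simply obeys an update law: the equation dV/dt = I/C describes the system's behavior, but no sentence encoding that equation is stored anywhere in it. It is rule-describable without being rule-using.

```python
# Rule-describable without rule-using: a simulated analog integrator.
# The system obeys dV/dt = I/C because of its material organisation;
# the equation describes it but is nowhere stored in it as a sentence.
# Hypothetical sketch, with made-up component values.

C = 0.5          # capacitance (farads) - a material property
V = 0.0          # voltage across the capacitor
dt = 0.01        # simulation time step (seconds)

for step in range(100):
    I = 1.0                 # constant input current (amperes)
    V += (I / C) * dt       # the physics, not an internalized rule

print(round(V, 3))  # ~2.0 volts after 1 second: the integral of I/C
```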

As we will see in chapter IV, a connectionist criticism of LOT theories such as Pylyshyn's is that a description of human cognition as rule-using is implausible and unnecessary. Pylyshyn's reply would emphasize the sorts of cognitive instance where humans very evidently compute by means of sentences. Rationalist tradition thinks of intelligence as rationality, and rationality as inference. So paradigm instances of cognition, for a rationalist, are those where an issue is debated internally - evidence marshalled, implications explored, and conclusions reached. Our experience of this sort of episode is an experience of 'hearing ourselves think'. Pylyshyn would not be so naive as to think the phenomenal experience of cognition is a window onto the cognitive substrate, but, like Fodor and other LOT theorists, he does wish to support the possibility that creatures with speech are able to use sentences to program themselves.

With digital machines, we are the programmers, we input the rules, which are 'obeyed' when the machine does what we tell it to do. We have this masterful relation to our machines because we design them. Generally we speak of the relation of the descriptive levels available with digital machines in a sequential way that supports this sense of mastery: we write a program in source code, i.e. in a task-domain language; we then compile this upper-level language into an assembly language of some kind; and we then assemble this intermediate language into absolute code suitable for direct execution by the central processor. The chain of command flows top-down from program to hardware realization - the linguistic seen as commanding the material in a fashion that's downright theocratic. But rules in the head cannot, of course, be implemented in this order: they are not linguistic before they are physical. As rules in the head they are immediately physical; their linguistic description will be a functional description of some structure which is already a physical structure. Without temporal priority, intention-level description does not so easily seem description of a command structure. When we have both top-down and bottom-up descriptions, and they are descriptions of the same event seen simultaneously as material and linguistic, there is no special reason to think of the linguistic (or voluntary, perhaps) as commanding the material. There is as much reason to think of the material as self-organizing, and the linguistic as self-organized along with it.

III.4 Analog as functional architecture

The functional architecture of a digital computer is a description of the machine at its lowest functional level. Because these functions are thought of as directly realized in the physical machine, they can be given physical descriptions. We can explain their workings either in terms of physical laws describing material properties of components, or in terms of the ways components are configured. This is the level where a bistable device is called a switch and a certain configuration of bistable devices is called a register - both descriptions being descriptions of computational function at its lowest level. More general aspects of a computer's functioning are also part of its functional architecture - the way memory and addressing are organized, the operation of control functions and user interfaces, and so on. A description of the functional architecture of a computing system is an engineer's description of parts and their functions and configurations. It can be given a mathematical description, and although there is still room for a variety of hardware realizations, these mathematical descriptions can be seen as descriptions of constraints on the computational behavior of the system. While these constraints must be thought of as having computational effect, they are not themselves modifiable by computational (read 'linguistic') means. They are computationally impenetrable.

Pylyshyn uses a similar notion of cognitive impenetrability in relation to human computation. The notion of cognitive impenetrability is Pylyshyn's device for attempting an empirical boundary between (what amounts to) the physical and the mental. A cognitive process will belong to functional architecture if it is fixed and stimulus-bound, or modifiable only by non-cognitive means such as maturation or blood chemistry. It will be called a cognitive or symbolic or representational process if it is modifiable by linguistic or rational or logical means. In the psychology lab we can apply tests to discover whether instructions or information change the outcome of a process: if so, it is not part of the functional architecture, Pylyshyn says. The boundary between functional architecture and rational processes obviously is not a boundary that can be drawn within a hardware description of a computational process, though: the modification of a synapse as a response to recency of use looks exactly like the modification of a synapse due to the input of a sentence S of experimenter instruction - when it may in fact be the same event. For Pylyshyn this is irrelevant. What he (like Demopoulos) wants is a separation of descriptive domains, descriptional formalisms:
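The experimental logic of the penetrability test can be caricatured in a few lines of code. The two processes below are invented for illustration; the criterion is that a process counts as penetrable just in case varying the instruction, with the stimulus held fixed, varies the outcome.

```python
# A caricature of Pylyshyn's penetrability test: vary the instruction
# while holding the stimulus fixed; a process whose outcome changes is
# cognitively penetrable, hence not functional architecture.
# Hypothetical processes, invented for illustration.

def pupil_reflex(stimulus, instruction):
    """Fixed and stimulus-bound: the instruction makes no difference
    (the parameter is deliberately ignored)."""
    return "constrict" if stimulus == "bright" else "dilate"

def size_judgement(stimulus, instruction):
    """Outcome shifts with what the subject is told."""
    return "large" if "expect large" in instruction else "small"

def penetrable(process, stimulus):
    outcomes = {process(stimulus, instr)
                for instr in ("expect large", "expect small")}
    return len(outcomes) > 1

print(penetrable(pupil_reflex, "bright"))     # False: architecture
print(penetrable(size_judgement, "bright"))   # True: representational
```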

The distinction between functional architecture and representation-governed processes ... marks the boundary in the theory between two highly different kinds of principles - the kind stated in terms of semantics, in terms of properties of the intentional objects or contents of thoughts or other representations - and those that can be explained in more traditional, functional terms. (Pylyshyn, 1984, 262)

This is, in fact, just Demopoulos' distinction between "components of the cognitive system ... more accurately described in biological (and ultimately physical) terms", and components which "demand a computational description". And Pylyshyn, too, wants his distinction to "coincide with the one between analog and digital". His definition of 'analog' is wider than Fodor and Block's, however. For Pylyshyn analog processes are not only those whose regularities are consequences of direct instantiations of physical laws, but also any processes for which a system "behaves as it does because of the kind of system it is, because of the way its nervous system or circuits are structured to encode information and to process these codes" (1984, 212).

So the boundary between 'digital' and 'analog', for Pylyshyn, is exactly the boundary between processes which are thought to require a cognitive or task-domain or representational description and those that can be given a physical or configural description. Pylyshyn does not exactly stint the computational nature of analog processes: he grants they are "a form of complex process that avoids both the logical step-by-step character of reasoning and the discrete language-like character" of the representation typical of cognitivist theories. They deliver their output in one bold step and "can be achieved within material causal systems, just as the syntactic alternative can" (1984, 199). But he considers there is nothing to be gained from considering that human representation might employ analog forms:

We have no idea what explanatory advantage the proposal is supposed to gain us, because (a) there is nothing equivalent to a calculus, or even principles of composition, for what are called "dense symbol systems"; so we do not know what can be done with them, how elementary symbols can be combined to form complex representations, or the kinds of regularities that such symbols can enter into; and (b) there is no idea of a physical mechanism that could instantiate and process such symbols. The "analog computing" view ... does not do this; it is not a symbolic system at all, inasmuch as none of its regularities require explanation at a separate symbol level, as is true of digital computing. (Pylyshyn, 1984, 199)

We want human cognition to have a "sentential-representation format" because formalist ideas of the sort that are native to twentieth-century logic and mathematics are "the only scheme we have for capturing normative patterns of reasoning" (Pylyshyn, 1984, 198).

Where "normative patterns of reasoning" are the paradigm of intelligence in cognitive creatures, other discriminations tend to follow. Pylyshyn's logic-based view of knowledge leads him to think of any process as unintelligent just insofar as it is non-digital. The operations of functional architecture, which most likely will turn out to include learning and development, emotion, mood, psychopathology, motor skills, early stages of perception, and sensory store, will - when they can be said to involve representation at all - involve non-articulated, continuous, or "wholistic" representation that disqualifies them from those combinatorial possibilities of systematicity and productivity that mark anything we want to call a thought or belief. It follows that perception will be seen as intelligent just to the extent that it involves inference, and it will be suspected of involving inference just to the extent that it is seen as intelligent.

But cognitive units - 'ideas' in the old-fashioned parlance - must come from somewhere, at least for Pylyshyn, who does not wish to be a methodological solipsist. So Pylyshyn's theory provides transducers, those magical devices for inputting the physical and outputting the mental.

III.5 Pylyshyn and transducers

Transducers are part of the functional architecture, that is to say, their operation is thought of as cognitively impenetrable, stimulus-bound and fixed. A perceptual transducer is thought of as encoding exactly those (proximal) characteristics of an input signal which, inferentially combined, will yield a description that can be reliably correlated with distal properties of an object. Its biological fixity at this first level of perception would guarantee the reliability of perception, its independence of cognitive influences. A transducer is thought to provide a point of contact between world and mind, and between physical and computational vocabularies, because the cognitive state resulting from the operation of a transducer "happens to be type-equivalent to a physiological state" (Pylyshyn, 1984, 142). It might help to remember, here, the operation of an A/D converter, which accepts a continuous waveform and emits a pulse train taken as instantiating a binary '1'. Any pulse train emerging from the transducer with just that sequence of pulses and no-pulses will also instantiate a '1'. There is type equivalence between pulse-pulse-no-pulse and '1'. Transduction for Pylyshyn is strictly local: there are neural sites the fixed, biologically-given function of which is to emit coded signals in response to specific physical magnitudes. Transducers may also be internal to cognitive processes. Any change in a belief must be arrived at by rational means, so any alteration of functional architecture which results in cognitive change must do so by means of transducers. Emotional changes, if thought of as chemical changes, cannot affect beliefs directly, since syntactic compositionality and inferential integrity depend on uninflectable logical atoms. The only way mood or emotion could moderate a belief would be if an emotion-transducer were to produce token propositions able to interact logically with belief-tokens already present (Pylyshyn, 1984, 270). The presence of a specific magnitude of some chemical concentration would have to be coded to have cognitive effect. An extraordinary tale, which Pylyshyn apparently does not find counterintuitive.
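The A/D analogy can be made concrete with a toy converter; the threshold and the coding are invented for the purpose. Every waveform that crosses the threshold at the same sample positions is mapped to the same pulse train, and every such pulse train instantiates the same symbol type.

```python
# A toy A/D transducer: continuous waveform in, discrete pulse train
# out. All pulse trains with the same pattern instantiate the same
# symbol type. Hypothetical threshold and coding, for illustration.

THRESHOLD = 0.5

def transduce(samples):
    """Fixed, stimulus-bound encoding: pulse wherever the continuous
    magnitude exceeds the threshold."""
    return tuple(1 if s > THRESHOLD else 0 for s in samples)

def symbol_type(pulse_train):
    """Type identity is fixed by the pulse pattern alone."""
    return {(1, 1, 0): "'1'", (0, 0, 0): "'0'"}.get(pulse_train, "?")

wave_a = (0.9, 0.7, 0.1)   # two different continuous waveforms...
wave_b = (0.6, 0.8, 0.3)
print(symbol_type(transduce(wave_a)))  # '1'
print(symbol_type(transduce(wave_b)))  # '1' - type-equivalent tokens
```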

There is, incidentally, what I take to be a major hedge in Pylyshyn's account of topological property transduction. If a perceptual transducer is to qualify as an analog or functional architecture process, then it must produce atomic symbols and not symbolic expressions. It is difficult to know how such atomic transductions will not lose topological information. Pylyshyn's solution is to say that transducers must signal their identity or their relative location in such a way that global properties may be reconstructed by combinatorial means. "The location (or other unique identifier) of the source of a local code must be available ... This information might even be signaled topographically, by the actual location of the neural connections from the other transducers" (Pylyshyn, 1984, 163). It would certainly make sense to have topological information signaled topographically, but as soon as we have the physical configuration of the machine given computational importance in this way, 'analog' and 'digital' seem to drift into register: a feature of the functional architecture is at the same time a feature of the code. This, to the connectionist, is just as it should be.
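A sketch of the hedge, much simplified and with an invented retina-like array: each local transducer emits an atomic code tagged with its location, and a global, topological property - here mere adjacency - is reconstructed combinatorially from the tagged atoms.

```python
# Local transducers that signal their location, so that a global
# property can be reconstructed combinatorially from atomic codes.
# Hypothetical retina-like array, invented for illustration.

stimulus = {(0, 0): 0.9, (0, 1): 0.8, (1, 0): 0.1, (1, 1): 0.7}

def transduce_all(stimulus, threshold=0.5):
    """Each transducer outputs an atomic (location, code) pair."""
    return [(loc, 1) for loc, mag in stimulus.items() if mag > threshold]

def connected(codes):
    """A global, topological property - simple adjacency of the
    active locations - recovered from the location tags alone."""
    locs = [loc for loc, _ in codes]
    return any(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
               for a in locs for b in locs if a != b)

print(connected(transduce_all(stimulus)))  # True: actives are adjacent
```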
