THE ANALOG/DIGITAL DISTINCTION IN THE PHILOSOPHY OF MIND

  title page

  intro

  I.

  II.

  III.

  IV.

  V.


Introduction

Analog and digital systems are similar in some ways and different in more than one way, and I am going to assume there is no single right way to draw the analog/digital distinction. How we draw the contrast matters only because the notion of 'analog' has had, and in a new form continues to have, an important oppositional role in discussions of the functioning of human nervous systems. There is something about the way analog computers work - the way they compute - that seems to provide us with an alternative picture of what representation and computation might be and, indeed, with an alternative sense of what language we should be speaking when we attempt the discussion.

Notions of 'digital' are quite uniform and well understood, but 'analog' gives us trouble. It is vague, compendious, and not anchored to a particular, ubiquitous and ascendant technology the way 'digital' is. So I am going to look mainly at the ways 'analog' is used, first in the communications engineering community and then in the community of cognitive scholars, including psychologists and philosophers.

I am looking first at the engineering community because I believe it is helpful to ground the term in the technologies that were its original context. Engineers are not concerned with providing definitions that will exclude bizarre counterexamples, and their definitions frankly accommodate the range and overlap which characterize our intuitive senses of the term. I am taking this as a virtue because it will allow me to lay out some of the several dimensions of what 'analog' means in practice. This in turn will allow me to show how and why philosophers can carve the concept in the several ways they do.

My larger intention is a defence of the cognitive alternative originally suggested by the existence of analog computers. Some of those who have defended the idea of analog cognition have been ineffective because they were missing parts of the picture that have arrived since. I believe the development of parallel distributed processing supports, extends and refines the oppositional role played by earlier pictures of analog computation. This is not to say that parallel distributed processing is analog processing. (Sometimes it is and sometimes it isn't.) What I am going to suggest is that analog computing and parallel distributed processing can play something like the same oppositional role in the face of classical, digital cognitivism because they are both pictures of nonsymbolic cognition.

This amounts to saying that what is important in the analog/digital debate is just what is important in the connectionist/language of thought debate - the question of whether or not we have to think of creature computation as employing a symbol system. And further: whether or not the modes of discourse we have developed to talk about operations in formal systems are suitable when we are talking about what brains do.

Analog computers are uncommon engines now, so in section I.1 I will offer a description of what they are, what they do and how they do it. In section I.2 I will use Nelson Goodman's specifications of the requirements for notational codes to show that they are symbol-describable but that they cannot be symbol-using, and to show what this has to do with mathematical continuity.

More than one kind and locus of continuity is cited in engineering definitions of 'analog'. Also important are notions of physicality, implicit representation, material implication and analogy. Analogy is tricky: just what is analogous about analog computation? This question will require a section of its own (section I.4), as will a discussion of transducers (section I.5).

Philosophers' use of 'analog' falls into two general camps. There is a rationalist line of argument - from Lewis through Fodor and Block to Demopoulos - that wishes to support the autonomy of psychology as a special science by emphasizing the language-like aspects of cognition. This group, discussed in section II.3, defines analog computation as law-governed rather than as rule-governed; in other words, as physical and not properly mind-like.

Another group, which includes Sloman and Boden and which is discussed in section II.2, defines analog computation as the operation of automated working models - computation by means of functional analogy. Analogy of function can of course be modeled formally, and most members of this group are not opposed to rationalist cognitivism.

Those who, in the empiricist style, wish to naturalize cognitive studies by emphasizing the evolutionary and developmental and perception-like aspects of intelligence, readily agree that analog computation should be described nomologically. The notion of functional analogy is not very useful to connectionists, however, since what makes nonsymbolic computation and representation possible in analog devices is not what makes it possible in creatures that construct their own representations in concert with a structured environment. Here another sense of representation is involved - representation which is intrinsic to the representer, not read into it by our uses of a device. The sort of representation ascribed to connectionist nets is, of course, still of the latter kind. What connectionist computation does have in common with analog computation and what it has in common with creature computation will be the subject of chapter IV.

A common argument from the rationalist camp says that anything an analog computer can do without explicit programs or symbols can also be done, although differently, by a digital computer. Any real number can, after all, be approximated as closely as we like by a rational number. This is correct and it can be granted at the outset. What is at issue, however, is not whether cognition can be described, or simulated, by digital computers: anything that can be modeled mathematically can be modeled in base-2 notation, even a gust of wind or a topological transformation. What is at issue is whether the mind-brain is a digital computer: whether representation and processing in the brain are what they are in digital computers.
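The approximation claim conceded here can be made concrete with a small sketch (mine, not the text's): any real quantity can be matched to within any tolerance by a rational number, here a finite base-2 expansion truncated to n binary places, whose error is bounded by 2 to the minus n.

```python
# Illustrative sketch: approximating a real quantity by rationals,
# i.e. by finite base-2 expansions of increasing length.
from fractions import Fraction
import math

def binary_approximation(x, bits):
    """Truncate x to `bits` binary places, returning the rational p / 2**bits."""
    scaled = math.floor(x * 2**bits)
    return Fraction(scaled, 2**bits)

# The error of an n-bit truncation is below 2**-n, so the digital
# approximation can be made as close as we like.
x = math.sqrt(2)
for n in (4, 8, 16):
    approx = binary_approximation(x, n)
    assert abs(x - float(approx)) < 2**-n
```

The point of the sketch is only the one granted in the paragraph above: arbitrarily close digital description is always available, which is precisely why closeness of description settles nothing about whether the brain itself computes digitally.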

The issue is important politically because if we think of human cognition as digital computation, then we will also think of those things a digital computer does easily as being central to human intelligence: we will think of the kinds of people who are good at what computers do easily as being the most intelligent, and even the most human. And it is important in another way. If human brains are not digital computers, and if human cognitive virtue is of a different kind than digital competence, then we could misunderstand our own capability and fail to be as intelligent as we might.

 

 
