THE ANALOG/DIGITAL DISTINCTION IN THE PHILOSOPHY OF MIND
I. The Engineering Distinction
I.1 Analog computers
When we talk about analog computation we have to talk about hardware, because there is no program as such. The machine consists of a moderate to large number of functional modules, based on operational amplifiers, that perform such operations as summation, inversion, integration and nonlinear function generation. In the early days these components would be strung together by hand. Later, patchboards were used. The set-up was generally thought of as implementing a differential equation or a set of simultaneous differential equations. Potentiometer settings would provide parameter values. Often an oscilloscope or pen plotter would graph the output.
Analog computers were good at modeling dynamical systems - physical, chemical, electrical. They could for instance model the response of the suspension system of a car to various levels and locations of impact. This kind of modeling presupposes a good understanding of the laws describing both the physical properties of the modeled system and the components of the analog model.
So an analog set-up is a physical implementation of a mathematical description of another physical system. It is thus an analog in the strict sense defined by Robert Rosen who says two natural (i.e. physical) systems are analogous when they realize a common formalism (Rosen, 1991, 119). I will have more to say about this later.
Analog computation has found a new use recently in what has been called experimental mathematics. Linear time-invariant systems are the tractable part of dynamical systems studies. They can be modeled in the form of ordinary differential or difference equations. Those equations have modular virtue: any signal is a decomposable sum of its zero-input response (the response the system would evolve without input) and its zero-state response (the response the system would evolve from input excitation if its initial state were zero). The principle of superposition holds here - if x produces y, and z produces w, then x + z produces y + w .
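The principle of superposition can be put in the form of a minimal sketch. The first-order difference equation and its coefficient below are illustrative choices of mine, not drawn from the text; the point is only that, for a linear time-invariant system started from zero state, the response to a sum of inputs is the sum of the responses.

```python
# Sketch: superposition in a linear time-invariant (LTI) system.
# Illustrative system: the first-order difference equation
#   y[n] = 0.5 * y[n-1] + x[n]

def lti_response(x, a=0.5, y0=0.0):
    """Response of y[n] = a*y[n-1] + x[n]; y0=0 gives the zero-state response."""
    y, out = y0, []
    for xn in x:
        y = a * y + xn
        out.append(y)
    return out

x = [1.0, 0.0, 0.0, 2.0]
z = [0.0, 3.0, 1.0, 0.0]
xz = [xi + zi for xi, zi in zip(x, z)]

y = lti_response(x)    # x produces y
w = lti_response(z)    # z produces w
yw = lti_response(xz)  # x + z produces ...

# ... y + w, exactly as superposition requires
assert all(abs(a - (b + c)) < 1e-12 for a, b, c in zip(yw, y, w))
```

Replacing the coefficient `a` with a function of the signal itself (a nonlinear system) makes the assertion fail, which is the trouble the next paragraphs describe.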
Nonlinear and time-varying systems are another matter. A system is nonlinear if even one system component changes its characteristics as a function of excitation applied, the way viscosity changes as a function of pressure, or friction as a function of velocity. A system is time-varying if even one system parameter changes over time, the way medium-frequency radio transmission changes over the course of a day due to changes in ionospheric reflectivity.
Neither nonlinear nor time-varying systems can be described by means of modular differential equations. Nonlinear systems, in which variables are mutually dependent, require the use of nonlinear differential equations; and time-varying systems will need partial differential equations. Both sorts of equation are generally insoluble by analytic means. The principle of superposition does not hold: the form of the solution will change depending on the values of inputs and initial conditions. (In system terms, zero-input response and zero-state response will not be summable because they are mutually dependent.) Examples of models which are both nonlinear and time-varying are the equations describing thermal equilibria, weather patterns, and neural signaling. So many modifications, compromises and approximations are needed when we try to digitize our descriptions of these systems that digital computers are sometimes said to simulate the equations involved rather than solve them - they will give us some kind of an idea of what's going on, but subject to so much error at intermediate stages that we must accept our results with caution.
Although global solutions can seldom be found for nonlinear and partial differential equations, there are ways of finding solutions for individual values of the variables; and analog computers, if correctly configured to model mutual dependencies among variables, can solve these more complex systems almost as easily as linear time-invariant systems. The new mathematics of nonlinear dynamics has found analog computers to be a direct, hands-on means of exploring the global behavior of complex equations by probing their behavior over wide ranges of values (Gleick, 1987, 244). At times this exploration is purely mathematical: we want to see how a particular equation responds over its range. At other times we are concerned to develop models of complex physical systems. In this context analog computers have unusual computational virtues.
I.2 Why analog systems are non-notational
When we talk about analog computation or transmission we talk about hardware; when we talk about digital computation or transmission we talk about symbols, programs, languages. How do we account for this difference, which seems to make the analog/digital distinction a distinction across descriptional languages rather than a distinction within a common language?
Nelson Goodman says that digital technologies are able to employ a code or notational system, and analog technologies are not. Some form of representation is involved in analog signaling, but it does not qualify as notational representation.
What we want from a notational language is unambiguous identifiability of symbols and their compound expressions. Goodman prefers to talk about marks rather than tokens, and about characters rather than types, but I will adopt Peirce's token-type terminology and ground it in Allen Newell's notion of a physical symbol. For Newell a symbol is a "physical pattern in some natural or artifactual physical machine, which can be physically related to other such patterns in various ways to form compound symbol-structures" (Newell, 1980, 135).
Newell emphasizes the materiality of computational symbols, as Goodman does as well ( "I am speaking here of symbols only, not of numbers or anything else the symbols may stand for": Goodman, 1967, 136). A character or type is an equivalence class of marks or tokens. 'Symbol' applies both to individual physical marks and to the type of which they are instances. Goodman himself uses 'symbol' in the loose sense by which any form of representation is said to be symbolic. In my use it will be synonymous with 'element of a code'. By 'code' I will mean what Goodman means by 'notational scheme'.
For Goodman a representational scheme may be notational only if it meets two kinds of syntactic requirement:
(1) Disjointness of types. No token can belong to two different types, whether at the same time or at different times.
(2) Finite differentiation. It must be theoretically possible to distinguish system states instantiating a 'one', say, from system states instantiating a 'zero'. "Determination of membership of a mark in a character will depend upon the acuteness of our perception, and the sensitivity of the instruments we can devise" (Goodman, 1967, 135), but not on processes requiring infinite (non)differentiation. In short, all characters of a symbol system must be disjoint classes of (theoretically if not actually) unambiguous inscriptions.
Practical code design relies on these properties of notational schemes and attempts to realize them in ways that make good use of channel properties. Disjointness of types and finite differentiation of tokens guarantee that signals will not degrade over stages, or can be re-formed at intervals: a degraded '6.38' will still be read as a '6'. This robustness also makes possible storage and retrieval of the sort we assume in von Neumann architectures.
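The regeneration point can be sketched in a few lines. The nominal levels and the noise values below are illustrative: because a token need only fall on the correct side of a decision threshold to be assigned to its type, each stage can restore the signal exactly, and degradation does not accumulate.

```python
# Sketch: finite differentiation permits regeneration. Noisy two-level
# samples (nominal levels 0 and 1; the received values are invented)
# are restored exactly by thresholding at the midpoint.

def regenerate(samples, threshold=0.5):
    """Assign each received value to the nearest nominal level."""
    return [1 if s > threshold else 0 for s in samples]

received = [0.12, 0.91, 0.38, 1.07, -0.05, 0.66]
print(regenerate(received))  # → [0, 1, 0, 1, 0, 1]
```

An analog signal has no such decision thresholds, so every stage's noise is passed on to the next; this is the practical face of the non-notational character of analog schemes.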
Here we have the relation between 'notational' and 'digital'.
So Goodman defines 'digital' in this way: a digital scheme is not merely discontinuous, but differentiated throughout. Since 'discontinuous' here means the same as 'disjoint', we have a digital scheme defined the same way as a notational scheme. 'Digital' and 'code' are synonymous in Goodman's definition. And the properties of notational systems are essential to the operation of digital systems:
"'Zero' or a 'one'" gives us a paradigm set of disjoint types; and two-state pulsed signaling synced to a clocked bit-period gives us finite differentiation of tokens.
A representational scheme is analog for Goodman if it is syntactically dense, in other words, if there are indefinitely many characters so ordered that between each two there is always a third. The reals and the rationals are dense in this way. Density puts the disjointness requirement for notational representation out of reach because there are no gaps between types. We cannot say a token is a member of only one type, because every finite position will be representable by, and hence 'belong to', an indefinite number of characters.
Syntactic density is Goodman's only defining property for analog schemes, but he points out that syntactic density also makes finite differentiation impossible. Where every token is a member of indefinitely many types it will be impossible to say of any two tokens that they are members of the same or different types. Thus analog schemes are doubly disqualified from being notational.
Any continuous signal may of course be discretized. We can set a ruled grid against the column of mercury in a thermometer and then disregard the intervals between marks, for instance. When we do so we are substituting a notational or digital scheme for a non-notational or analog scheme. The column of mercury is an analog signal only when the unmarked values between marked intervals are considered to have representational relevance even though they have not been discretized. Unthresholded electronic signals are analog in that all of the continuous values of certain magnitudes have causal and therefore computational relevance. Since none are disregarded by the system itself, we cannot think of the system as using a notational scheme. If physical reality is in fact granular at some ultimate level, then we could indeed consider every signal to be ultimately discrete and therefore potentially notational. But we do not have to think of a system as notational just because it has discrete elements. Discreteness of signal is a necessary but not a sufficient condition for notational systems. (See also page 71 on code-using systems.)
Analog computers are continuous-time systems and digital computers are discrete-time systems. This is the standard engineering take on the analog/digital distinction. What is meant will become clearer in section I.3. For the moment I only want to point out the relation between mathematical continuity and Goodman's definition of 'analog'.
Analog computers compute continuous functions. The equations they implement are differential equations, thought of as taking values over the whole range of some real interval. Digital computers are discrete function computers; the equations they implement are difference equations, which take their values over a (rounded-off) subset of the rationals. Difference equations are discrete approximations of differential equations, adapted to handle the clocked nature and finite expression length of digital signals. In digital modeling we are not, and cannot be, interested in what happens between discrete time instants. Disjointness and differentiation depend on and follow from the minute divisions of the bit-period, and they also involve us in the approximation, the quantization error, that characterizes discrete mathematics.
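The relation between a differential equation and its difference-equation approximation can be made concrete with a sketch. The equation, step sizes, and Euler's method are my illustrative choices, not the text's: the discrete machine only ever computes values at clocked instants, and what happens between them shows up as approximation error that shrinks with the step size.

```python
# Sketch: a difference equation as a discrete approximation of a
# differential equation. Illustrative example: dy/dt = -y, y(0) = 1,
# discretized by Euler's method as y[n+1] = y[n] + h*(-y[n]).

import math

def euler(h, t_end, y0=1.0):
    """March the difference equation from t=0 to t=t_end in steps of h."""
    y = y0
    for _ in range(round(t_end / h)):
        y = y + h * (-y)   # one clocked step; nothing exists between steps
    return y

exact = math.exp(-1.0)  # the continuous solution gives y(1) = e^-1
print(abs(euler(0.1, 1.0) - exact))    # coarse time-quantization: larger error
print(abs(euler(0.001, 1.0) - exact))  # finer time-quantization: smaller error
```

An analog integrator implementing the same equation takes values over the whole real interval and incurs no such discretization error, which is the contrast the paragraph above draws.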
So the non-notational nature of analog computers does not imply that analog computing is less exact than digital computing. The contrary may be true, within the limits of component manufacture. What mathematical continuity does imply is just that analog computation cannot be thought of as instantiating symbols. It can be thought of as implementing functions, and the functions being computed at any multiplier or integrator can be supplied with numerical descriptions if we choose (we could rig a device that measures current and gives us a numerical reading of an op amp's output, for instance), but it cannot be thought of as using symbols in its computation. Computation for an analog computer is plainly a matter of causal relations among physical materials, and the sort of causal relations involved are barred from implementing a code because they involve mathematical continuities.
It is also plain that the analog computer is implementing mathematical equations, which we think of as expressions in a formal language. If we see the equation-expression as a formal model of what the physical machine is doing, then we have a picture of something which is clearly computation, clearly describable in terms of a code, and also clearly not computation by means of a code.
I.3 Engineering definitions
'Analog' and 'digital' came into use in the 1940s and '50s to distinguish between kinds of computing technology. The terms received an extended use in other areas of electronic communications engineering when digital technologies were brought in. The old signal processing technologies - radio, telegraphy, telephone - had been around since the mid-1800s without needing a special term; now, however, these old technologies were called 'analog' to distinguish them from the digital technologies beginning to replace them. In communications engineering 'analog' also picked up senses less obviously related to analog computing. The term has had other even more general applications to just about any kind of working system, mechanical or not, representational or not. I will not concern myself with these uses because they derive by metaphor from one or another of the primary engineering uses I will outline here. The inconsistencies in these wide-ranging applications are attributable to the inconsistencies in engineering usage.
In electrical engineering, which includes computer engineering, 'analog' is applied to information sources and sinks, signals, entire communication systems, processing technologies and components, and channels.
Telephones are said to have analog sources and sinks because they receive continuous waveforms and emit reconstructed approximations of these waveforms. A word processor would have a digital source and sink. (See Couch, 1983, 3.)
There is disagreement about how to define an analog signal. One writer (Kunt, 1986, 1) says an analog signal is any signal whose dependent variable is defined over a continuous range of time. It may have either continuous or discrete amplitude values. If they are discrete, it will be called a quantized analog signal. A discrete-time signal, on the other hand, is a digital signal only if its amplitude is also quantized. Other writers (Illingworth, 1990, 14) say an analog signal varies continuously in both time and amplitude.
A communications system may be said to be analog if it transmits information from a continuous source by either continuous or discrete means (Couch, 1983, 3); or, on the other hand, if it describes a continuous source by means of continuous signals (Proakis, 1983, 60).
Lafrance (1990, 7) uses 'analog' only in relation to representation schemes. Analog registration (of acoustic waveforms as physical engravings on vinyl, for instance) does not employ code, while digital representation (of acoustic information as numbers on compact disk) does.
Analog filters are old-technology processing elements like resistors, constructed to modify a physical waveform directly. A digital filter is an algorithm, part of a program, an instruction for altering a signal by changing the symbols that represent it.
Every channel or medium - whether coaxial cable or seawater - is, as a physical substance, inherently continuous and therefore analog. But some writers will call any channel used to carry the pulsed signals instantiating binary symbols a digital channel, because it is used to transmit a digital code (Couch, 1983, 4).
To summarize, some of the dimensions of contrast picked out by the analog/digital distinction in engineering practice are these:
(1) A source is analog if the information we are picking up arrives continuously.
(2) A signal is analog if it is a continuous function of space or time parameters; or
it is analog if it is either a continuous or a discrete function of the continuous independent variable.
(3) Systems are analog if they describe a continuous source by means of continuous signals; or
they are analog if they transmit signals from an analog source, regardless of how transmitted.
(4) Analog processing, analog filters, are realized in hardware directly, while digital processing is implemented in a program which is then realized in hardware.
(5) Channels may be called analog simply in virtue of their physical continuity; or
they may be called digital when they carry pulsed signals; or
they may be called digital whenever they are carrying signals from a digital source, even when the carrier is continuous.
I.4 "An analogous electrical network"
Engineers often remark that in analog technologies there is some sort of analogy between representing signal and represented source, while in digital technologies there is not. A standard example would be analog and digital acoustic storage media: in an analog tape medium, variation in an acoustic waveform is represented as a proportional variation in degree of magnetization of a ferrous coating; the degree of magnetization of a digital tape is not proportional to amplitude of the source signal because, on a DAT tape, magnetization represents binary numerals. Hyndman calls an analog computer "a direct analogy between the behavior of variables in a mechanical system and those in an electrical system" - it is an "analogous electrical network" (Hyndman, 1970, 6).
Proportional relations between representing and represented values can of course be exploited in filtering and other sorts of processing. Where the original value had representational significance, the transformed value will also have representational significance. This is the principle behind computation of any sort - the structure of relations in the representing system is set up to preserve the structure of relations in the represented, regardless of transformations, and in such a way that systematic transformation in the representation models some transformation, often causal, in the represented. Where our representing relations are logical relations in a code, transformations will be truth-preserving. In digital systems representational relations are of this logical kind: they are nonproportional, but otherwise systematic, rule-governed relations among expressions of a code.
Where representational relations are among values of current at various components of an analog computer, systematic physical transformation is what preserves predictive relevance. An analog computer "mimics a physical system" by being a physical system whose materials preserve representational relevance through systematic causal transformation.
There is a long tradition of working models in engineering, and the analog computer is seen as a superior sort of working model - superior because of its flexibility and general applicability. Analog computation is the operation of a working model. This is just what makes analog computers relevant to cognitive questions - they offer a picture of nonsymbolic inference, systematic transformation without code. This is what Campbell means when he says analog computers solve nonlinear equations by mimicking physical systems (Campbell et al., 1985, 383).
It is sometimes said (see Peterson, 1967, 2) that an analog computer is analogous not to some other physical system, but to a differential equation, a set of mathematical operations. Its input and output are, after all, numbers, at least from our point of view. Its components are called 'summers', 'scalers', 'integrators', 'nonlinear function generators'. General purpose versions of the machine are often called differential analyzers. The device does combine different values of some electrical magnitude in the same manner as numbers are combined in a mathematical operation.
What is wrong with saying analog computers are analogs of equations is that it conflates analogy with modeling: it does not preserve the distinction between physical things and their descriptions. An equation is a description in a mathematical formalism. It is a formal expression. It can, therefore, be implemented or realized in an analog computer's system states. And the computer's system states can be modeled by the equation. This allows us to reserve 'analogy' for the relation between two physical systems whose system states realize a common description.
There are more and less informational ways of writing system equations. I will have more to say about equation forms in section II.2. For now I will just mention that engineers find it valuable to implement equations given in the form that allows the most direct and detailed modeling of components and their relations. A system equation giving the global input-output function of the system will be some n-th order differential equation. If this equation is put into its normal form of n simultaneous first-order equations, and implemented in this form by simultaneous parallel circuits, then the engineer watching the computation can 'see' more of the internal dynamics of the system.
Some of the ways physical systems can be analogous are these:
(1) They may be analogous with respect to time. An analog computer which is continuously computing the evolution of some system may be operating in real time, that is, in parallel with the represented system, or it may be time-scaled, with proportionality preserved.
(2) They may be analogous with respect to spatial relation. Analog computers may, if we like, be set up so that components are in the same spatial orientation to each other as working parts in the represented system. This would have no mathematical purpose, but it would make the computing system easier to 'read'.
(3) They may be analogous with respect to complex, simultaneous, causal interrelations among system components independently identifiable. The analogous systems might be thought of as realizing a weighted directed graph of causal connections.
(4) They may be analogous in some higher dimensional way, where their mutual description is realized in relations among relations of first-order magnitudes in the two physical systems.
(5) All of these dimensions of possible analogy can also be analogies with respect to part-whole relations.
I.5 Transduction and codes
"We need no longer be concerned with the physical parameters themselves": the most interesting aspect of the analog/digital distinction is this leap in category from physical to linguistic. A code, it is admitted, is always 'realized' by physical markers in some physical medium; but it is assumed that realization may be dropped from the discussion. Why this is so, and what we import with this assumption, is the question at the heart of this thesis. Answering it would help us see through old disagreements about analog and digital representation, and now disagreements about the import of connectionism.
I will make a start at answering it by looking more closely at transducers, and in particular at A/D converters. A/D converters are the technical bridge between analog and digital systems. Are they also a bridge - is such a thing possible - between a domain of physical description and a domain of linguistic description?
A transducer, in its most general sense, is any device by which energy is transmitted from one system to another, whether of the same or a different kind. The term is usually used, however, for devices which convert one form of energy - sound, light, pressure - into another, usually electrical energy. An example is a photomultiplier tube which converts light energy into electrical energy.
In any old-style communications system, transducers would accept non-electronic signals and emit electronic signals which could then be filtered or otherwise processed in this electronic form. Transduction here was called 'source coding', because a form of representation was involved: light waves were being represented by electrical impulses, for example. If yet another sort of channel - airwaves maybe - was to be involved in signal propagation, there would be a second transduction called channel coding, which often involved modulation of carrier waves.
When we describe analog information systems we do not easily lose sight of the fact that physical energy is being propagated through a physical medium. And we don't have to think of the message as being in existence when it is in transit. It - the analog form of it that human senses can recognize - is reconstructed by channel decoders and source decoders at the other end. There is no music in the grooves of the record. And, oddly, no speech in the telephone lines. Nothing but electrical currents. Potential messages, we might want to say.
With digital media something changes. We are not tempted to say there is music on the DAT tape or compact disk, but we are tempted to say there are numbers. We might say we have our texts on floppy disk but we more seriously believe our binary 1's and 0's are stored there. These phenomena are a consequence of digitization, which is the task of a transducer called an A/D converter. Interestingly, an A/D converter is not usually called a transducer, although it is thought to require a transducer as its first stage.
Digitization involves representing a continuous-time signal in the form of an n-bit binary word. Two steps are usually recognized: sampling and quantization. Sampling measures the amplitude of a continuous-time signal at discrete intervals. Formally, it is the mapping of a continuous-time signal onto a finite subset of its coordinates in signal space. Nyquist's sampling theorem assures us that a (band-limited, i.e. prefiltered) signal waveform can be uniquely represented by samples taken at a rate at least twice the highest frequency contained in the signal. Sampling, also called time-quantization, leaves us with sampled real signals - signals whose amplitude values still range over the real numbers.
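What the sampling theorem rules out can be sketched directly. The frequencies below are my own illustrative choices: a 5 Hz sine sampled at 8 Hz (below its Nyquist rate of 10 Hz) produces exactly the same samples as a -3 Hz sine, so the two waveforms are indistinguishable from the samples alone.

```python
# Sketch: aliasing when a signal is sampled below its Nyquist rate.
# Illustrative frequencies: 5 Hz sine, 8 Hz sampling rate.

import math

fs = 8.0  # sampling rate in Hz; Nyquist rate for a 5 Hz sine is 10 Hz
n = range(16)

s5 = [math.sin(2 * math.pi * 5.0 * k / fs) for k in n]         # 5 Hz sine
s3 = [math.sin(2 * math.pi * (5.0 - fs) * k / fs) for k in n]  # -3 Hz alias

# The two sample sequences coincide: the samples underdetermine the signal
assert all(abs(a - b) < 1e-9 for a, b in zip(s5, s3))
```

Sampling above the Nyquist rate removes this ambiguity, which is why unique reconstruction is possible at all.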
Amplitude quantization is sometimes called coding, because it matches an absolute amplitude value to a digital word representing some amplitude closest to the actual sampled value. Amplitude quantization and coding are conceptually separable but physically identical. A sampled signal is said to be pulse amplitude modulated (PAM), and still analog (half-analog maybe, i.e. analog in the sense that signal amplitude still "denotes analogic information"). A sampled quantized signal is said to be pulse code modulated (PCM) and digital (see Couch, 1983, 82-88).
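The quantization-and-coding step can be sketched as follows. The 3-bit word length and the amplitude range are my illustrative choices: each real-valued sample is rounded to the nearest of eight levels, and the level index is emitted as a binary code word. This is where the '+6.85 counts as +7, and both emerge as a code word' decision lives.

```python
# Sketch: amplitude quantization and coding (the PCM step).
# Illustrative scheme: 3-bit words over the amplitude range [-1, 1].

def quantize(sample, bits=3, lo=-1.0, hi=1.0):
    """Round a sample to the nearest of 2**bits levels; emit the level's code word."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    index = round((sample - lo) / step)   # nearest quantization level
    return format(index, f'0{bits}b')     # the binary code word

samples = [-1.0, -0.31, 0.02, 0.7, 1.0]
print([quantize(s) for s in samples])  # → ['000', '010', '100', '110', '111']
```

Note that 0.7 and any nearby value within half a quantization step all emerge as '110': the rounding is the quantization, and the naming of the chosen level is the coding, conceptually separable but produced in one physical act.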
There are half a dozen different serial-bit signaling formats. In unipolar signaling a binary 1 is realized as a high level and a binary 0 as a zero level of current. In differential encoding a binary 0 is realized by a change in level and a binary 1 as no change in level. This format has the advantage that polarity may be inverted without affecting recoverability of data.
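The polarity-invariance of differential encoding can be verified in a short sketch, using the convention just stated (a 0 is a change of level, a 1 is no change). Since the decoder compares adjacent levels only, inverting the entire line signal leaves the recovered data intact.

```python
# Sketch: differential encoding, with the text's convention:
# binary 0 -> change of level, binary 1 -> no change of level.

def diff_encode(bits, start_level=0):
    """Produce the line-level sequence for a bit sequence."""
    levels, level = [start_level], start_level
    for b in bits:
        if b == 0:           # a 0 is signaled by changing the level
            level = 1 - level
        levels.append(level)
    return levels

def diff_decode(levels):
    """Recover the bits by comparing each level with its predecessor."""
    return [1 if a == b else 0 for a, b in zip(levels, levels[1:])]

data = [1, 0, 0, 1, 1, 0]
line = diff_encode(data)
inverted = [1 - v for v in line]   # polarity inversion somewhere in the channel

assert diff_decode(line) == data       # normal recovery
assert diff_decode(inverted) == data   # recovery survives inversion
```

The information rides on transitions rather than absolute levels, so the format survives a channel that cannot guarantee which level is 'high'.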
Physically, transduction is a process that takes us from
(1) a continuous waveform in some physical channel, to
(2) a continuous waveform carried intermittently in an electronic channel, to
(3) a patterned sequence of pulses and absences of pulse carried on a similar electronic channel.
Conceptually, what we have is a process that takes us from
(1) a real-valued and linguistically inexplicit representation of quantities by the measurable but unmeasured magnitudes of some physical parameter, to
(2) an intermittent but still real-valued representation of magnitudes in such a way that the order of representing signals is time-correlated with the order of represented signals, and the amplitude of representing signals is proportional to the amplitude of represented signals, to
(3) intermittent pulses whose order of amplitudes is not at all related to the order of amplitudes of represented signals.
The overall sequence of code words does retain some temporal relation to the sequence of waveform values coded, but the sequence of pulses within a code word is unrelated to the values coded. Transducer output is determined rather by the designer of the code, who has decided that a measured value of +7 (or +6.85, or +7.16) will be coded as the binary sequence 110, and a measured value of -3 as 001.
The transduction process loses two, correlated dimensions of analogy - amplitude and temporal order - and it gives us a temporal mapping between code expressions and sampled regions of our original signal: code expressions naming measured quantized amplitudes flow from the transducer in the same order as the signals they describe. But there is no analogy between the elements of the code words - the pulses and no pulses - and the pulses or no pulses of the original signal. The dependence of the operation of the code on some form of temporal correspondence is worth noticing but the 'analogy' now is between the ordering of physical entities, waveforms, and the ordering of linguistic entities, code words. What has happened is that we are only able to identify the chunks of the transduced signal relevant to representational order by knowing they are code words.
There is no longer an immediately systematic relation - analogy - between the physical/causal properties of the source signal and the physical/causal properties of the transduced signal. It does not follow that transformations within the post-transducer medium will be unsystematic with respect to the physical/causal properties of the original signal. The contrary is true: any transformation will be effectively reversible; we will always be able to approximate the original signal's physical properties. But the systematicity which allows for computation - which preserves relevance to the original signal - now is the systematicity of the code.
There have to be two sorts of systematicity to give us this reversibility and computational relevance between signals that are not physically analogous. One is the systematicity of the formal system: the discrete nature of the elements, the syntactic specificity of the rules for combining them into expressions, and the total explicitness of the procedures which determine how transformations will take place. The formal system is our inferential engine.
The other sort of systematicity is the systematicity of our encoding schemes and practices. The systematicity of the formal system is typically axiomatic, but the systematicity of the modeling relation - and that is what we are talking about - involves empirical decisions, trial-and-error, curve-fitting, testing. The transducer's designer first has to decide that a measured signal amplitude of +6.85 will be encoded as if it were +7, and that both values will emerge from the transducer named '110'. The designer then has to be able to implement the decision in circuitry. If quantization is to be nonlinear, for instance, with more importance given to some areas of dynamic range than others, the nonlinearity of the relation between source signal and quantized signal must also be encoded into system equations, or inference will be lost.
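The nonlinear-quantization case can be sketched with μ-law companding, a standard telephony example not taken from the text (the constant μ = 255 follows common practice): small amplitudes are given a disproportionately large share of the coded range, and the decoder must apply the exact inverse curve or the encoded proportionality, and with it inference, is lost.

```python
# Sketch: nonlinear quantization via mu-law companding (illustrative
# standard-telephony example; mu = 255). Small amplitudes get finer
# resolution; decoding must invert the curve exactly.

import math

MU = 255.0

def mu_compress(x):
    """Compress x in [-1, 1] onto the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """The exact inverse: recover the original amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.01
y = mu_compress(x)
print(y)                              # a hundredth of the input range maps well up the coded range
assert abs(mu_expand(y) - x) < 1e-12  # the inverse recovers the value exactly
```

Here `mu_compress(0.01)` comes out near 0.23: one percent of the amplitude range occupies nearly a quarter of the coded range, which is exactly the designer's decision about which regions of dynamic range matter most.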
The systematicity of computation by means of formal system and encoding practices replaces the analogous systematicity of computation by means of proportional transformations. It does this precisely by encoding the proportionality itself. Any number system and all measurement practices are designed to do this. So the fact that computation over symbols preserves inference is not mysterious.
What is mysterious, what continues to have a feeling of legerdemain about it, is the way physical computation seems to turn into nonphysical computation by passing through an electronic transducer. One way to handle the transubstantiation is to talk about dual description. We might say something like this: as soon as we have a code we have the option of talking about what is happening in terms of linguistic expressions or, as often happens, in terms of their referents. We also, if we happen to be engineers or neurologists, still have the option of talking about energy propagated in a physical medium. Both descriptions are valid. Which we choose depends on our interest. If we are interested in the representational function of the signals we must talk about code words and their referents because, as we saw earlier, there is now no other way to identify the units doing computational work. So the story goes.
This story does work for digital computers. When we are talking about human cognition, though, the problem with dual description is that it perpetuates a form of mind-body split. We do not know how to translate from the terms of one description into the terms of the other. We have (to anticipate chapters III and IV) top-down functionalist-intentionalist description working its way downward toward finer-grained functional description, and bottom-up structural-biological description working its way upward toward more comprehensive structural organization, and a curious blank where we would want there to be a join.

For the transducer, though, there is no problem. So I would like to look again at how the transducer does what it does. Input: physical waveforms. Output: physical waveforms. I have said there is no longer a relation of analogy between the form of the input waveforms and the form of the output waveforms. But there is some other systematic relation between the two physical signals. If there were not, machines could not compute or represent. And it is a physical relation too, but it is a relation that cannot be seen if we look just at the output of the transducer.
We also have to look at how the transducer output signal is processed in the digital machine. Incoming pulsed signals - one by one, or in batches corresponding to register capacity - simply reset switches. It is said that all a computer can do is add and compare. But even this is anthropomorphizing. All a computer can do is reset switches. It resets switches not because it can 'read' but because the materials it is made of have response properties. It can't read; but it does not have our problem recognizing what the representational units in a stream of pulses and no pulses are. It does not have to move into intentional or linguistic description to do so. And this is because the materiality of the machine - the physical organization of the machine - is doing the work.
Even the Turing machine, that icon of virtual computation, is a description and not a computer as long as it does not have a minimal physicality - a power source, a bit clock, a mechanical tape mover, a real tape with right and left sides, a magnetize/demagnetize head, and, yes, a tiny transducer setting up pulses or no pulses in response to magnetization magnitudes. All of these mechanical and electronic functions would have to be enabled by the causal properties and spatial organization of a whole slew of little switches that either open or don't open when each pulse or no pulse is evoked. It is this whole contraption that instantiates the code.
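The contrast can be made vivid. As a formal description, a Turing machine is nothing but a transition table and a rule for applying it, as in this toy sketch (a hypothetical machine that inverts bits and halts). Everything listed above - power source, bit clock, tape mover, head, transducer - is absent from it:

```python
# A Turing machine as pure formal description: a transition table plus
# an update rule. This hypothetical toy machine inverts bits and halts
# at the first blank ('_'). Nothing here is physical.

TABLE = {
    ('scan', '0'): ('scan', '1', +1),   # read 0: write 1, move right
    ('scan', '1'): ('scan', '0', +1),   # read 1: write 0, move right
    ('scan', '_'): ('halt', '_', 0),    # blank: stop
}

def run(tape):
    cells = list(tape) + ['_']
    state, head = 'scan', 0
    while state != 'halt':
        state, cells[head], move = TABLE[(state, cells[head])]
        head += move
    return ''.join(cells).rstrip('_')

print(run('1101'))   # '0010'
```

What the code describes is a mathematical object; only the contraption of switches, clocks and heads that realizes such a table instantiates the code.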
We have seen that physical computation, either of the analog or the digital kind, requires a systematic relation of some physical sort between represented and representing signals. Now we are in a position to see that the physical form of the input waveform has a systematic relation not (immediately) to the form of the output waveform, but to the electrical state of the entire machine. An incoming pulse-pulse-no pulse will only function as a binary 110 if certain switches are in certain states; and an incoming 110 will only function to represent a source magnitude of +7 if many other switches are in certain states. This is what compilers and assemblers are about: making sure the entire electrical configuration of a machine is such that it will conform to the program we think we are running. Or we could say it the other way: making sure the program we are running is able to make inferential use of the causal properties of circuit materials. We can say it either way because there is reciprocity here. We have no philosophical problem getting top-down functional description (what we say the computer is doing in task domain language) and bottom-up electrical engineering description (what we say current is doing in the machine) into register at any scale we like. All that is needed is massive technical labour.
With brains, massive technical labour has not yet been enough. The important difference is that we make computers and don't make brains. We know how hardware realizes software because we have built it to do just what it does.
One question remains. Is the relation between the physical form of the source signal and the electrical configuration of the entire machine a relation of analogy? There will certainly be a mathematical mapping possible between a signal space description of signal waveform and a state space description of the electrical configuration of the machine, and this mapping will be subject to systematic transformation. In other words, seen in the right way there is an analogy between source signal and representing electrical configuration.
This does not give us logical parity between 'analog' and 'digital', however. Why not? Because 'digital' never is seen in the 'right' way. It is always seen in terms of the referents of the instantiations of the code - in terms of 0's and 1's, or in terms of higher level program entities. 'Digital' and 'analog' belong to different language-games.