Marr’s three levels of inquiry

Preparation for this class

Read the General Introduction (pp. 1-7) of Vision: A Computational Investigation into the Human Representation and Processing of Visual Information by David Marr.  Here is the PDF for this reading.   You have access to the entire e-book via SWEM, but you need only read the General Introduction.

Watch “How can we study the human mind and brain: Marr’s levels of analysis” by Nancy Kanwisher.

Summary

There are two take-home messages from David Marr’s book on the computational aspects of vision.

(1) Levels of inquiry in computational neuroscience

In order to understand a device that performs an information-processing task, one needs to employ a hierarchy of explanations.  In other words,

… computational neuroscience also involves [three different] levels of analysis. First, there is the level of what a neural subsystem does and why. Does it see or does it hear? Does it control the arm or the head? And what function does it compute in order to perform this function? Answering these what and why questions leads to what Marr called a ‘computational theory’ of the system. The theory specifies the function computed and why it is computed, without saying what representations and procedures are used in computing it. Specifying the representations and procedures is the job of the ‘algorithmic theory’. Finally, an ‘implementation theory’ specifies the mechanisms by which the representations and algorithms are implemented. [Piccinini and Shagrir 2014]

In brief, Marr’s Three Levels of Inquiry are

  • Computational: What computations does the central nervous system perform and why?
  • Algorithmic: What representations and procedures are used in the neural computation?
  • Implementation: What are the physiological mechanisms that bring about these representations and carry out these algorithms? (See the toy sketch just after this list.)
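
To make the distinction concrete, here is a minimal toy sketch in Python. It is my own illustration, loosely inspired by Marr’s cash-register example, and the names total_cost and add_decimal are invented for this sketch, not anything from the book. The computational level states what function is computed and why; the algorithmic level commits to a particular representation and procedure; the implementation level, the physical hardware, never appears in the source code at all.

```python
# Computational level: WHAT is computed, and WHY.
# Here: the sum of item prices, because a bill must reflect total cost.
def total_cost(prices):
    return sum(prices)

# Algorithmic level: a particular REPRESENTATION and PROCEDURE.
# Here: numbers represented as lists of decimal digits (least significant
# digit first), added digit by digit with carries.
def add_decimal(a_digits, b_digits):
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        carry, digit = divmod(a + b + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# Implementation level: the physical mechanism that carries out the
# procedure -- silicon logic gates for this script, neurons for a brain.
# It is not visible in the source code at all.

print(total_cost([3, 7, 12]))      # 22
print(add_decimal([3], [9, 1]))    # [2, 2], i.e. 3 + 19 = 22
```

Many different algorithmic choices (binary digits, floating point, an abacus procedure) would satisfy the same computational-level description, which is exactly why Marr keeps the levels separate.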

It seems to me that the “vision” of contemporary neuroscience is to have a “multi-level explanation” of information processing by the nervous system that is “integrated” in the sense that explanations at different mechanistic levels are “linked.”  Consider,

Nervous systems as well as artificial computational systems have many levels of mechanistic organization. They contain large systems like the brain and the cerebellum, which decompose into subsystems like the cortex and the brainstem, which decompose into areas and nuclei, which in turn decompose into maps, columns, networks, circuits, neurons, and subneuronal structures. Computational neuroscience studies neural systems at all of these mechanistic levels, and then it attempts to discover how the properties exhibited by the components of a system at one level, when they are suitably organized into a larger system, give rise to the properties exhibited by that larger system. If this process of linking explanations at different mechanistic levels is carried out, the hoped result is an integrated, multi-level explanation of neural activity. [Piccinini and Shagrir 2014, p. 28.]

But what do “we” mean by the linking of different mechanistic levels of explanation?  Is this linkage a one-way “bottom up” inheritance as in the philosophical position known as reductionism?  Is it possible that the linkage is two-way, “top down” as well as “bottom up”?

Let’s say that in my experience, 90% of physicists believe that the linkage between different mechanistic levels is “bottom up” only, while 90% of psychologists believe the linkage works both ways: “bottom up” and “top down.”  What is the significance of that observation?

(2) Representation versus processing of information

Marr also emphasizes a duality between the representation and the processing of information:

Vision is therefore, first and foremost, an information-processing task, but we cannot think of it just as a process. For if we are capable of knowing what is where in the world, our brains must somehow be capable of representing this information — in all its profusion of color and form, beauty, motion and detail.  The study of vision must therefore include not only the study of how to extract from images the various aspects of the world that are useful to us, but also an inquiry into the nature of the internal representations by which we capture this information and thus make it available as a basis for decisions about our thoughts and actions.  This duality — the representation and the processing of information — lies at the heart of most information-processing tasks and will profoundly shape our investigation of the particular problems posed by vision.

Modern representational theories conceive of the mind as having access to systems of internal representations; mental states are characterized by asserting what the internal representations currently specify, and mental processes by how such internal representations are obtained and how they interact.
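
To make this duality concrete, here is a minimal sketch in Python; it is my own toy example rather than anything from Marr (in particular, it is not his primal sketch). A grid of intensity values serves as the representation, and a small function that reads off intensity jumps between neighboring pixels serves as the process that extracts useful information from it.

```python
# Representation: a tiny grayscale "image" as a grid of intensity values.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# Process: extract something useful from the representation -- here, the
# positions where intensity jumps between horizontal neighbors (a crude
# stand-in for edge detection).
def horizontal_edges(img, threshold=5):
    edges = []
    for r, row in enumerate(img):
        for c in range(len(row) - 1):
            if abs(row[c + 1] - row[c]) >= threshold:
                edges.append((r, c))
    return edges

print(horizontal_edges(image))   # [(0, 1), (1, 1), (2, 1)]
```

The same representation could support many other processes (region finding, brightness statistics), and the same question could be answered over quite different representations, which is why Marr treats representation and processing as distinct objects of study.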


Notebook/Response

Do you think Hilary Putnam would agree that mental states are characterized by asserting what the internal representations currently specify?

Before reading Ch 1 of Vision by David Marr, give a few examples of internal and/or neural representations that we did not discuss in class.