Large Scale Analysis of Neural Structures


CSL-89-10 November 1989, [P89-00173]

Copyright 1989 Xerox Corporation. All rights reserved.

Abstract: Advances in computer image analysis, the falling cost of computing power, and advances in light and electron microscopy (and possibly in staining techniques) will make it possible in the next few years to analyze neural structures of unprecedented size at the cellular level. A complete analysis of the cellular connectivity of a structure as large as the human brain is only a few decades away.

CR Categories: J.3 [Life and Medical Sciences]: Biology; I.4.m [Image Processing]: Miscellaneous.

Additional Keywords and Phrases: Three dimensional reconstruction, neural reconstruction, nervous structure, EM tomography, microscopy, electron microscopy, neural function, nervous system, brain, brain reconstruction.

1. Introduction

"The great enigma in the organization of the brain was the way in which the nervous ramifications ended and in which neurons were mutually connected."

The nervous system is virtually unique in its exquisitely complex three dimensional structure. Other tissues can be understood without knowing the precise shape and specific connections of each cell, but not the nervous system. The tools being used today to analyze shape and structure at the cellular level are hard pressed to describe in detail more than a handful of nerve cells -- yet we must analyze the shape and connections of at least thousands, probably millions and perhaps billions of nerve cells before we can lay claim to a full understanding of the structures that guide the behavior of flies, worms, mice -- and people.

One of the significant motivations of this research is the simple desire to better understand neural structures -- both animal and human -- because of natural curiosity and the general awareness that such knowledge must surely touch our own lives in ways that we can scarcely predict beforehand. Beyond this general desire for knowledge, there are somewhat more specific reasons to expect valuable results which we briefly consider here.

1.1 Correlating structure with function

A major reason for analyzing the large scale structure of nerve cells is the presumption that structure will yield valuable and indeed otherwise unobtainable insights into function. The ultimate objective must not simply be to determine cellular structure but also to correlate that structure with the detailed electrical behavior of the cells -- at the level of membrane and synaptic potentials. We will concentrate primarily on structure and will consider function all too briefly; an examination of this important issue would easily double the size of this paper. It seems clear, though, that a sufficiently detailed structural analysis is essential to a functional model of any reasonably complex neural system. Critical requirements for modeling include identifying the location of synapses and their precise electrical behavior; determining the electrical properties of different regions of nerve cell membrane -- e.g., spiking versus non-spiking regions in a dendritic tree, threshold levels of spike initiation zones, etc.; and characterizing the longer-term modulation of synaptic function (by, for example, neuro-peptides, which can alter neural response over periods of minutes to hours). The appearance of a synapse in the electron microscope reveals valuable qualitative information about its function. At the most basic level, most synapses in the cerebral cortex can be divided into two types based on their appearance: Type I and Type II. Type I synapses have proven to be excitatory and Type II to be inhibitory in all cases studied so far [346]. In addition, the degree of excitation or inhibition has been inferred in some cases based purely on appearance. Bailey [6], for example, saw differences in the appearance of an identified synapse from trained sea snails (Aplysia) versus the appearance of the same synapse in untrained sea snails that correlated with synaptic strength. Others have found qualitative correlations between synaptic function and appearance in some cases [345]. Active research in this area continues.

It is possible to do more than simply examine unstained neurons in a microscope. Because the basic mechanisms of neural function are governed by specific molecules, it is feasible to recover functional information by using specific stains; e.g., tetrodotoxin binds very tightly to voltage activated sodium channels, and alpha-bungarotoxin binds very tightly to acetylcholine receptors [340]. Chemical variants of these tightly binding nerve poisons can be used to identify the precise distribution of the corresponding molecular species, and hence to infer functionality. For example, the presence of voltage activated sodium channels can reasonably be taken to imply an active membrane patch that will exhibit spiking behavior (spike initiation or propagation). While not all molecules of interest yet have a corresponding high-affinity stain, and while new neurotransmitters and receptors are continually being discovered [329], the use of various stains should allow the recovery of substantial functional information and -- coupled with structural information -- should eventually provide sufficient information to allow functional modeling. An excellent reason for optimism regarding the eventual development of the needed stains is the existence of monoclonal antibody-based methods which provide a marvelous specificity and accuracy. These issues, however, deserve a paper of their own.

1.2 Medicine

The neurological reconstruction work being done today is usually motivated by a desire to better understand the workings of specific neural circuits, particularly circuits whose failure is at the root of human illness. Depression, anxiety, mania, schizophrenia, Alzheimer's disease, memory impairment, paralysis, epilepsy, multiple sclerosis, Parkinson's disease, Huntington's chorea, stroke and a host of other problems are today the object of intense research efforts whose aim is to better understand and better treat these conditions. Analyzing how the connections of individual nerve cells are changed is often crucial. Beyond providing a better understanding of disease conditions, there is a broad desire to understand normal development and function. Detailed reconstructions are often essential to determine precise patterns of development -- reconstruction work has often been motivated by research into embryology and developmental biology.

1.3 Impact on Artificial Intelligence

Even after decades of research we still seem no closer to duplicating the "common-sense" intelligence that is the holy grail of AI (artificial intelligence) research. While many major advances have been made -- some immensely valuable -- they work by avoiding the central problems of AI rather than by solving them. We might discover how to duplicate natural intelligence on a computer tomorrow (after some truly brilliant breakthrough), or we might find the solution eluding our grasp for years, decades, or longer. This uncertainty is widespread in the AI community -- to quote Marvin Minsky, "...the process could take 500 years ... or it could be just around the corner!" [20, page 24].

Despite the uncertainty about when, eventual success seems assured. The fundamental reason for long-term optimism in AI research is the existence of natural intelligence -- both in lower animals and in humans. If nature can do it, we can do it too -- eventually. Unfortunately, it is unclear how long such success will take. If evolution took 100 million years, how long will we take? Significantly less -- but even one thousand years (about one hundred thousand times faster than nature) is still a very long time.

Direct analysis of natural systems might well be faster than rediscovering the basic principles of intelligence from scratch. Neurological research has already had significant influence on vision research [195, 344] (largely because the retina [193,341] and visual areas [342,343] are among the best studied neural systems.) It has also influenced work on motor control [192, 306] and has inspired the "neural network" approach to AI [117, 148] which has recently aroused a great deal of interest [105, 275] and has already led to a commercial product [196]. A better understanding of natural systems will increase this influence -- a deep understanding of natural systems might make this influence a dominant factor in the design of AI systems.

2. Automated Analysis: A Brief Review of Past and Current Projects

Reconstructions of neural structures of up to hundreds of neurons [1, 2, 10, 11, 13, 14, 15, 16, 19, 31, 33, 34, 124, 111, 149] (and even the reconstruction of sub-cellular organelles [3, 30, 326, 327, 328]) have been done by examining sections of neural tissue with either light or electron microscopy. Most dramatic was the complete reconstruction of all 959 cells in the nematode [349], including the 338 cells in its nervous system [350, 351], as well as their complete lineage from the fertilized egg. All of this work required very tedious analysis of the images by a human -- to date, there has been no successful fully automated reconstruction of neural tissue (although Hibbard et al. [339] successfully completed a fully automated reconstruction of a capillary bed -- the lumen of the capillary is sufficiently distinct that current image analysis techniques work). In light microscopic work an entire cell body is injected with contrast material -- sometimes fluorescent. The presence of even fine fibers can be detected this way, although their dimensions (sometimes as small as .1 micron) and fine structure cannot always be accurately determined. A single neuron can be traced in a section as thick as a few hundred microns. The X and Y coordinates can be entered into a computer by centering a cross hair or other computer-coupled pointing device on the feature of interest. Depth (or Z-axis) information can be recovered by using a microscope with a shallow depth-of-field, and measuring the focus adjustment at which the feature of interest produces a sharp image.
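
The depth-from-focus step just described can be sketched in a few lines. The fragment below is purely illustrative (no such code appears in the systems cited): it assumes the focal series is available as 2-D grey-level arrays and uses intensity variance as the sharpness measure, one common choice among many.

```python
def sharpness(image):
    """Sharpness score for a grey-level image (a list of rows):
    the variance of pixel intensities.  An in-focus feature has
    higher local contrast, and hence higher variance."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def depth_from_focus(focal_stack, z_positions):
    """Return the focus setting (Z, in microns) at which the imaged
    feature appears sharpest.  focal_stack holds one image per
    focus setting; z_positions holds the matching focus readings."""
    scores = [sharpness(img) for img in focal_stack]
    return z_positions[scores.index(max(scores))]
```

In practice the sharpness curve is noisy, so a real system would interpolate around the peak rather than simply take the single best image.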

Because many features of interest are much smaller than can be resolved with optical microscopy (.1 to .2 microns), TEM (transmission electron microscopy) is indispensable. In TEM work, sections anywhere from a few hundred to a few thousand angstroms thick are typical. Cell membranes are readily resolved, and stains are often oriented towards enhancing membrane visibility. The resulting images are analyzed by eye. The most popular systems today use a digitizing pad or similar graphic entry device so that a computer can do further "bookkeeping" computations [2, 12, 14, 19, 31, 44, 49, 50, 51, 53, 111, 139, 149]. The computer can "stack" the images from each section atop one another to produce a 3-dimensional reconstruction of the neuronal cell bodies and synaptic connections, which is then displayed using graphical display routines (e.g., perspective computation, hidden line removal, rotation, shading, etc.). Volumes that are tens of microns deep (a few hundred to a thousand sections) by a few hundred microns wide and long have been reconstructed.
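
The "stacking" operation itself is geometrically trivial, which is worth making explicit: each traced contour needs only a Z coordinate derived from its section number. A hypothetical sketch (the function name and data layout are inventions for illustration):

```python
def stack_sections(section_contours, section_thickness):
    """Lift 2-D contours traced on serial sections into 3-D.

    section_contours  -- one list of (x, y) points per section,
                         in cutting order (coordinates in microns)
    section_thickness -- thickness of each section, in microns
    """
    points_3d = []
    for index, contour in enumerate(section_contours):
        z = index * section_thickness  # depth of this section
        points_3d.append([(x, y, z) for (x, y) in contour])
    return points_3d
```

The hard part, of course, is not the stacking but aligning the sections with one another and tracing the contours in the first place.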

Three dimensional reconstruction is now a standard analytical technique, despite its tedium. Eutectic Electronics of Raleigh, NC, sells a commercial system based on the work of Capowski [12, 49, 53, 139] for the three dimensional reconstruction of neural and other structures. It has been used primarily for light microscopic reconstructions [124, 163] but can also be used with other imaging modalities. Stevens et al. have done extensive serial section reconstruction work using TEM [2, 15, 30, 64, 111, 197], as have Levinthal and coworkers with the CARTOS system [11, 17, 19, 149].

The prospect of automating the tedious task of analyzing neural structures has attracted many good researchers. The idea of designing a computer vision system to help analyze natural vision systems -- which would in turn lead to better computer vision systems -- has a certain recursive attractiveness to it. Ballard and Brown, in their respected textbook "Computer Vision", list neuroanatomy as one of seven major application areas [208, page 11]. Reddy et al. proposed a computerized system in 1973 [42] to trace individual stained neurons. Coleman et al. [44] built a system in the late '70s. Tucker built a system for his Ph.D. thesis [113, 114, 115] in the early '80s. Researchers at the CARTOS project at Columbia have been pursuing this area for well over a decade [17, 149], and work there by Schehr [46] continues. Selfridge at Bell Labs [141, 164] is pursuing algorithm design in this area in cooperation with continuing work on CARTOS. Ze-Nian and Leonard Uhr [337] used neuron recognition to illustrate work on a computer architecture specialized for visual processing. The most recent and most sophisticated proposal is that by Levitt [151].

Reddy's proposed system [42] used serial sections imaged in a light microscope and then digitized for analysis on a PDP-10. They planned to use dye-injected neurons, whose very high contrast makes simple edge detection, edge following and thresholding algorithms relatively effective. Even so, imperfections in the image due to noise, dirt, tissue preparation artifacts or uneven staining can confuse simple algorithms.
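
The reason dye-injected material is attractive for automation is that a single intensity threshold nearly separates neuron from background. A minimal sketch (assuming darkly stained tissue on a bright background; the names are illustrative):

```python
def threshold(image, level):
    """Binary mask of a grey-level image: 1 where the pixel is darker
    than `level` (presumed dye-filled neuron), 0 elsewhere."""
    return [[1 if p < level else 0 for p in row] for row in image]
```

A speck of dirt darker than the threshold is indistinguishable from the neuron in such a mask, which is exactly the fragility noted above.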

Coleman et al. [44, 55] built a system to solve the same problem but with a slightly different approach. They used a computer controlled light microscope to follow a stained neuron in three dimensions. Two stepping motors controlled the X and Y coordinates of the microscope stage, and a third controlled the focal plane (or Z coordinate). The system worked with reasonable reliability, but required human monitoring and intervention if it went astray. The speed of analysis was limited by the required human oversight, not by the hardware or algorithms employed.

Tucker developed a system for his Ph.D. thesis to apply model driven image understanding techniques to two dimensional reconstruction of single images of nerve cells. In such techniques, the program is actively looking for the various features that it expects to find, based on an internal model of what a cell "should" look like (e.g., the cell boundary should be closed, the nucleus should be within the cell boundary, etc). This approach is very useful in overcoming noise and errors because it uses domain-specific information to reject impossible or improbable conclusions. His technique was successful in the limited domain in which he applied it and in principle should extend to the analysis of more general cell images [113, 114, 115].

The CARTOS project at Columbia has been the most extensive and longest lived reconstruction project [11, 17, 19, 101, 149] and has used both light and electron microscopy. One aspect of the CARTOS effort is CARTOS-ACE [149, 46], a research effort aimed at automating the reconstruction process. This system traces neuron boundaries from serial sections imaged with an electron microscope. Stain or contrast material is not injected into a specific neuron; instead, all cell boundaries are enhanced by the use of a stain (such as osmium tetroxide) which is freely absorbed from a bath and which has a chemical affinity for the lipid bilayer surrounding all cells. The outline of a single neuron can then be followed using a fast and simple edge detection and tracking algorithm. This method has been used to follow individual neurons through several sections and is relatively reliable for tracking axons when the image provides good contrast and is free of imperfections. Again, human oversight is required. Work on CARTOS-ACE continues [46] and is producing algorithms with improved reliability.
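
Membrane tracking of this kind can be caricatured as "follow the dark line": from a starting point on the stained membrane, repeatedly step to the darkest neighbouring pixel not yet visited. The sketch below is not the CARTOS-ACE algorithm itself, merely an illustration of the style of computation involved:

```python
def trace_membrane(image, start, steps):
    """Follow a darkly stained membrane by repeatedly stepping to the
    darkest not-yet-visited 8-neighbour.  `image` is a list of rows
    of grey levels (low = dark stain); `start` is a (row, col) pair."""
    rows, cols = len(image), len(image[0])
    path = [start]
    visited = {start}
    r, c = start
    for _ in range(steps):
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < rows and 0 <= c + dc < cols
                      and (r + dr, c + dc) not in visited]
        if not neighbours:
            break
        # step to the darkest candidate pixel
        r, c = min(neighbours, key=lambda p: image[p[0]][p[1]])
        visited.add((r, c))
        path.append((r, c))
    return path
```

The fragility is evident: a gap in the stain or a dark speck adjacent to the membrane will divert the trace, which is why human oversight remains necessary.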

3. A Brief Look at Stains

Stains are critical both in human analysis and even more so in automated analysis of tissue sections. A stain that provided sufficiently high contrast of the nerve cell membranes would allow the successful use of simple algorithms and reduce the need for more complex ones. Sobel [45] has a photograph of stained zebra fish neuropil "...that was accidentally produced by Neil Bodick at [Cyrus] Levinthal's lab showing spectacular extracellular staining ...". Apparently, osmium tetroxide flooded the extra-cellular space but did not achieve any significant penetration intracellularly, resulting in an extremely dark and uniform extracellular stain. Unfortunately, the stain was not reproducible; to make matters worse, a human would reject such a stain as "bad" because of the complete lack of intra-cellular detail -- yet it is precisely this simplification that would make the stain useful for automated analysis. Essentially all current stains were developed for the benefit of the human visual system -- which is remarkably competent at recovering reliable information from a complexly stained specimen. Current automated image understanding systems work much better on simple, high contrast images that have minimal "noise" or clutter -- and stains that produce such "simple" images will have been systematically rejected as being less informative, less useful and less interesting for a human. The development of stains specifically to ease the problems of automated image analysis is an important and worthwhile objective.

The Golgi stain, discovered in the early 1870s, was the earliest and most remarkably useful stain for neurological work. It leaves most cells (perhaps 95%) completely unstained, but when it does stain a cell it stains the entire cell body and all axonal and dendritic extensions completely. The precise shape of a single neuron can be easily traced under a light microscope. The more modern and more controllable equivalent is the direct injection of dye into a single neuron of interest -- again, the whole neuron is rendered clearly visible under a light microscope. The most recent efforts to clearly mark a single neuron involve recombinant DNA techniques. A modified retrovirus is used to infect individual cells with a gene which produces a protein product that can be made visible by suitable treatment [152, 338]. Currently, this method is used to mark all the descendants of the infected cell -- an ability of great value in developmental biology. More selective control of the implanted gene's expression would produce an extremely powerful tool for labeling specific types of cells. At present, developing even a single retroviral stain is difficult -- it must simultaneously be produced in adequate volume by the cell, be adequately visible even in small dendrites, not significantly affect the cell's biochemistry and conform with other restrictions. Despite the difficulties, work is progressing.

An alternative approach is the use of transgenic organisms that have been altered by promoter fusion with a suitable marker -- i.e., animals in which the DNA coding for the stain or marker is incorporated into the DNA of every cell but in which the marker is expressed only in specific cells. The most dramatic example of a transgenic marker to date was produced by incorporating the luciferase gene from the firefly into a tobacco plant -- which would then literally glow in the dark [180]. While this particular example did not illustrate selective control over expression of the luciferase gene in different plant cells (it was produced by all cells in the plant), control over expression of a marker can be accomplished by fusing the marker DNA with a promoter sequence which permits expression only in cells of a specific type. While this kind of analysis can only be done on animals specially bred and genetically modified for the purpose, it provides a powerful method of staining almost any desired type of cell. Its only drawback is that ALL cells of the selected type will be stained. With either staining technique (transgenic animals or retroviral infection of individual cell lines), if multiple optically distinguishable stains (a dozen or so, in different colors) were developed, it might prove possible to "color code" individual neurons. This would be extremely useful in neural reconstruction.

Another extremely powerful technique is in-situ hybridization [220], which uses nucleotide probes to detect specific nucleotide sequences in sections. It is most often used to determine which cells are expressing a known messenger RNA (and hence are manufacturing the protein-product specified by that messenger RNA). The stained RNA is left "in-situ" on the section (as opposed to being analyzed in bulk while in solution) and the complementary probe hybridizes (combines) very selectively with only RNA that has the correct sequence. This method has already been of great utility in neurological work and in favorable circumstances allows localization of the messenger RNA to individual cells. While in-situ hybridization will not aid in determining cell boundaries, it could offer invaluable aid in classifying the type of a specific cell.

Finally, one of the most widespread techniques is the use of immunologically based stains. An antibody to a specific molecule will selectively attach to its target and can be produced in large quantity. The antibodies can be made clearly visible for light microscopy by attaching fluorescent markers (fluorescein or rhodamine dye) to them. Alternatively, by linking an enzyme to the antibody (such as peroxidase) a locally specific chemical reaction can be used to generate a visible stain. Electron-dense labels (such as colloidal gold) can be attached to the antibody to make it visible when viewed with an electron microscope -- this technique can provide a resolution of 50 angstroms [353]. These techniques offer a very powerful and general mechanism for mapping the distribution of specific complex molecules.

4. Imaging Technologies and Computer Enhancement of Images

Another revolution is taking place in imaging technologies. The pace is so fast that the journal "Medical Imaging" is entirely devoted to describing new developments. In general, these new imaging technologies are computationally intensive and take advantage of a wide diversity of fundamental physical interactions. Sound, light, X-ray, gamma-rays, radio waves, magnetic fields, electrons, ions and the like are used to spray, sweep or otherwise interact with the specimen and the resulting changes in the original beam and the various secondary fields and particles emitted are then analyzed by a bewildering array of detectors and analyzed by computer. Even traditional imaging technologies can be greatly enhanced by the use of extensive computer analysis, as work on optical sectioning microscopy and 3D electron tomography show. A beautiful pictorial presentation of some of the new imaging technologies is given in National Geographic [294].

While we are concentrating in this article almost exclusively on optical and transmission electron microscopy, this is not to say that other imaging technologies currently being developed are not of interest -- quite the contrary. However, optical and electron microscopy have been around for quite a while, are relatively cheap, and significant neural reconstruction work has been done with them: we have a baseline of experience. The newer technologies should prove to be extremely useful in the future.

4.1 A Brief Overview of some Imaging Technologies

An imaging technology creating quite a stir at the moment is MRI (Magnetic Resonance Imaging, also called NMR for Nuclear Magnetic Resonance) [185,294]. It is based on the observation that the nucleus of the hydrogen atom (a proton) acts like a small bar magnet which, when it is immersed in a magnetic field, resonates at a frequency proportional to the strength of that field. If a specimen is immersed in a magnetic field that varies in strength in different regions of space, the protons at different positions will resonate at different frequencies (typically radio frequencies) -- the strength of the resonance at a particular frequency therefore measures the number of protons in a particular region of space. A complete map of the spatial distribution of the protons can be built up by varying the shape of the magnetic field over time and repeatedly measuring the strength of the resonance at different frequencies.
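
The relationship that makes this work is linear and easily stated: resonance frequency equals field strength times the (known) proton gyromagnetic ratio, so with a linear field gradient each frequency maps to a position. A sketch of the arithmetic (the gradient-inversion function is an illustrative simplification of real pulse-sequence reconstruction):

```python
PROTON_GAMMA_BAR = 42.58e6  # Hz per tesla (proton gyromagnetic ratio / 2*pi)

def resonance_frequency(field_tesla):
    """Larmor frequency of a proton immersed in the given field."""
    return PROTON_GAMMA_BAR * field_tesla

def position_of_frequency(freq_hz, base_field, gradient):
    """Invert a linear field gradient B(x) = base_field + gradient * x
    (tesla, tesla per meter) to find the position x resonating at
    `freq_hz` -- the principle behind spatial localization in MRI."""
    field = freq_hz / PROTON_GAMMA_BAR
    return (field - base_field) / gradient
```

Thus a proton in a 1 tesla field resonates near 42.58 MHz, and the strength of the signal at each frequency counts the protons at the corresponding position.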

MRI is routinely used to image various parts of human beings -- the brain is particularly popular. The cover of Nature was graced by an MRI micrograph of a single living cell at a resolution of 10 microns [133] -- what makes this particularly interesting is that no theoretical resolution limit for this technology is yet in sight, and the practical limit to resolution is still unclear. The recent dramatic advances in superconductors [348] should provide magnets that are more powerful and much cheaper than exist today, which will make MRI microscopy cheaper and improve its resolution. Further progress is expected.

Another technology that offers unique advantages in imaging is the ion beam microprobe [254, 186]. It is similar to the scanning electron microscope, but instead of electrons, it uses relatively low-energy ions. The bombarding ion beam (argon, oxygen, nitrogen and cesium are common) literally knocks loose atoms from the surface of the specimen -- and the charged atoms that are knocked loose (secondary ions) can then be collected and analyzed. Resolution of better than 500 angstroms has been obtained, and 100 angstroms (close to the theoretical limit) seems achievable. While this method can only analyze the surface of the specimen, the "surface" can be eroded away (sputtered) allowing progressive access to deeper layers.

Acoustic microscopy can be used to obtain images, but does not normally provide adequate resolution for the kind of reconstruction we are considering. However, acoustic microscopy in superfluid liquid helium at a fraction of a degree Kelvin has already reached a resolution of 200 angstroms [363] and has a theoretical limit of only a few angstroms. This high-resolution imaging technique is still in the research phase.

Photoelectron imaging [36] has recently been used to provide images of the surface of a biological specimen from the electrons ejected when the surface is exposed to ultra-violet light. Because the image is formed from the ejected electrons, the resolution is not limited by optical effects. Resolution of 100 angstroms has already been achieved, and higher resolution is expected. The electrons are usually ejected from the first 10 to 30 angstroms of the surface [251, 352] and so surface detail is not confused by emissions from deeper layers (as happens in SEM (Scanning Electron Microscopy)).

Another way to provide high-resolution images of a surface is the scanning tunneling microscope [194]. In this imaging method, a very fine needle is scanned less than 10 angstroms from the surface of the specimen. Electrons from the surface of the needle can "tunnel" this short distance and cause a current flow. An image is built up by measuring the change in current flow as the needle moves, and by moving the needle in a raster scan pattern over the surface of the specimen. Lateral (X-Y) resolution of a few angstroms and depth (Z) resolution of a fraction of an angstrom have already been achieved. Moving a needle with a precision and stability of less than .1 angstroms was a major challenge. While this technology clearly has more than adequate resolution for imaging nerve cells, the problems involved in imaging a "large" area (more than a few microns across) have not yet been addressed.
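
The extreme depth sensitivity of the tunneling microscope comes from the exponential dependence of tunneling current on the tip-surface gap; in constant-current mode the feedback loop effectively inverts this exponential to recover surface height. A sketch with an illustrative decay constant (roughly one inverse angstrom; the true value depends on the work function of the surface):

```python
import math

KAPPA = 1.0  # decay constant in inverse angstroms (illustrative value)

def tunnel_current(gap_angstroms, i0=1.0):
    """Tunneling current falls off exponentially with the gap."""
    return i0 * math.exp(-2.0 * KAPPA * gap_angstroms)

def gap_from_current(current, i0=1.0):
    """Recover the tip-surface gap (and hence surface height) from a
    measured current -- what constant-current feedback does implicitly."""
    return -math.log(current / i0) / (2.0 * KAPPA)
```

With these numbers a one angstrom change in the gap changes the current by a factor of e squared, roughly 7.4, which is why sub-angstrom depth resolution is attainable.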

A related technology is the atomic force microscope [365]. This is essentially a modified scanning tunneling microscope in which the atom at the tip of the probe is pressed against the surface under examination. The resulting force is measured, and an outline of the atomic structure of the surface is provided. Resolutions of a few angstroms laterally have already been demonstrated (sufficient to distinguish adjacent carbon atoms in a graphite surface) with a force sensitivity of a few piconewtons. The limit of force sensitivity is several orders of magnitude smaller than this [365]. Future developments in these very new technologies are awaited with great interest.

4.2 Optical Sectioning

Optical sectioning microscopy combines optical images taken from many different focal planes throughout a specimen into a single three-dimensional optical reconstruction. Most of the common imaging methods available with light microscopy may be used. The images from the different focal planes are digitized, and the data is processed by computer to remove the blurred and out-of-focus information present in each individual image. The result provides resolution near the limits of optical microscopy (.1 to .2 microns) in all three spatial dimensions. The usual array of dyes and fluorescent techniques can be used -- only now providing data in three dimensions instead of two. Relatively thick specimens can be examined with this technique (100 to 1000 microns). An excellent review by Agard [32] provides a more detailed look at this method.
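
The simplest of these computational deblurring schemes, nearest-neighbour deconvolution, subtracts a fraction of the adjacent focal planes from each plane, on the reasoning that most of the out-of-focus haze in a plane comes from its immediate neighbours. A sketch, with an illustrative value for the empirical constant:

```python
def nearest_neighbour_deblur(planes, c=0.45):
    """Subtract a fraction `c` of the average of the adjacent focal
    planes from each plane; edge planes use their single neighbour.
    `planes` is a list of 2-D grey-level arrays (the focal series).
    The constant c is empirical; 0.45 is an illustrative value."""
    result = []
    n = len(planes)
    for j in range(n):
        neighbours = [planes[k] for k in (j - 1, j + 1) if 0 <= k < n]
        plane = []
        for r in range(len(planes[j])):
            row = []
            for col in range(len(planes[j][r])):
                haze = sum(nb[r][col] for nb in neighbours) / len(neighbours)
                row.append(max(planes[j][r][col] - c * haze, 0.0))
            plane.append(row)
        result.append(plane)
    return result
```

More elaborate constrained iterative methods perform considerably better, at much higher computational cost; Agard's review [32] compares the alternatives.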

4.3 3-D Electron Microscopic Tomography

Most neural reconstruction work uses sections that are thinner than can be comfortably handled mechanically in order to improve visualization (not resolution, which is adequate). Ward [1] used 50 nm (500 angstrom) sections. Stevens et al. [2] found 1 micron thick sections unsuitable despite "excellent pictures" because the complex overlapping structural details could not be disentangled, even with stereo pairs. They adopted .1 micron sections in their reconstruction work. Lindsey and Ellisman [3], in their reconstruction of a sub-cellular organelle, employed both thin sections (.17 microns) and some uncluttered thick sections (2 to 3 microns). Stereo slides (obtained by tilting the specimen by a few degrees between taking electron microscopic photographs) were used with both the thin and thick sections. The primary purpose in using the thinner sections was to resolve the confusion created in the thick sections by the "piling up" of detail from various levels of the section -- the thin sections were essential to the reconstruction. Excellent resolution with thick sections has been obtained [47, 48, 262, 263, 264] and is not a limiting factor in neural reconstruction.

Despite the use of thin sections, there can be a considerable change in the image from section to section. For example, a dendrite that is .1 microns in diameter and whose direction of travel fluctuates near the plane of sectioning can appear very different from one section to the next. Folding and crumpling introduced during sectioning will also change the image. Following structures from section to section (both by eye and with computer analysis) would be easier with better resolution along the Z axis.

Thus, the image resolution obtainable with thick sections (1 micron thick) is adequate for the reconstruction of neural fibers -- but thin sections (.1 micron thick) are used despite significant drawbacks because the human eye is unable to recover the detail present in a "cluttered" section even using stereo pairs. However, by using the technology developed for medical imaging it is possible to "look inside" a solid object without actually cutting it open. This has been done with electron microscopy by four groups [25, 26, 187, 189] using algorithms similar to those used for X-ray CAT scans (the algorithms are somewhat different because of the limited tilt-angle of the specimen in the electron microscope [354,358]). Multiple images of the section taken at different angles are combined by a computer into a coherent picture of the interior. Resolution of 50 to 75 angstroms in three dimensions for .25 micron sections has already been obtained [25].
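
The core of all these tomographic methods is back-projection: each projection is smeared back across the volume along the rays that formed it, and the smears from many angles reinforce at the true structure. A toy two-angle sketch (real EM tomography uses many intermediate tilt angles within the limited tilt range, plus a filtering step; all names here are illustrative):

```python
def project(grid, angle):
    """Parallel projection of a square 2-D grid: angle 0 sums each
    column (view from above), angle 90 sums each row (side view)."""
    n = len(grid)
    if angle == 0:
        return [sum(grid[r][c] for r in range(n)) for c in range(n)]
    if angle == 90:
        return [sum(grid[r][c] for c in range(n)) for r in range(n)]
    raise ValueError("only 0 and 90 degrees in this sketch")

def back_project(projections, n):
    """Unfiltered back-projection: smear each projection back across
    the grid along its rays and sum the contributions."""
    recon = [[0.0] * n for _ in range(n)]
    for angle, proj in projections:
        for r in range(n):
            for c in range(n):
                recon[r][c] += proj[c] if angle == 0 else proj[r]
    return recon
```

Even in this crude form the reconstruction peaks at the true feature; the filtering step in real algorithms removes the residual smearing.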

Imaging technology allows the viewer to see computer reconstructed "slices" that are as thin as the resolving limits of the microscope employed. Slices taken from any plane through the specimen -- horizontal, vertical, or at some angle are readily available. The mechanical difficulties of dealing with thin sections are thus traded for the computational difficulties of medical imaging -- but computer costs will continue to fall dramatically for the foreseeable future [281, 282].

4.4 Image Understanding by Computer

Once we have obtained the raw image data we must still determine the outlines of individual neurons (and other relevant information). This problem is not as well understood as the tomographic reconstruction problems discussed above, and is the most challenging problem that must be solved before fully automated reconstruction can become a reality. However, we know that the visual apparatus of many neurological researchers can solve this problem (because they have done it) so we can safely conclude that computerized systems will eventually be able to do the same. Unfortunately, this line of reasoning does not let us estimate how long "eventually" might be -- whether it is a few years or a few centuries. As discussed later, current research in fully automated reconstruction provides good grounds for optimism.

4.5 Correcting Imaging and Preparation Artifacts

Ideally, the image obtained should correspond precisely to the original tissue prior to preparation. In fact, a variety of factors cause spatial distortion of the image. A good first step in analyzing the raw data is to compensate, as much as possible, for the damage done during preparation and imaging. Sectioning creates compression, uneven distortion and tears. Stabilization of the specimen with an electron beam will also cause uneven shrinkage and distortion (once the specimen has been stabilized by pre-exposure to the electron beam, further distortion while the specimen is being viewed should be minimal). Even if the topology of the specimen is preserved within a single section (despite distortion) the distortions are unlikely to be the same between sections -- and so serious discontinuities from section to section will arise. These problems have not prevented human analysis but represent a significant challenge for computer analysis.

It might be possible to simplify the problem for computer analysis by the use of additional image data, in particular by imaging the block face prior to sectioning and using this "correct template" as an aid in removing the distortions. If we assume that the distortion and compression errors are on a larger scale than the limits of optical resolution -- i.e., that the distortion is more or less uniform over distances of a few microns -- then the analysis of the block face can be done with optical microscopy. The optical image of the block face can then be compared with the optical image of the section to determine distortions caused by sectioning. (This comparison is computationally similar to the section-to-section comparison done when reconstruction is done from section images only. Some algorithms for the section-to-section comparison have already been developed [259, 330].) Comparing the optical image with the TEM image presumably would allow compensation for the beam-induced distortions as well as the sectioning distortions, though direct comparison of such disparate imaging modalities might be difficult and awaits an experimental test.
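The core matching step -- estimating how one image is displaced relative to another -- can be sketched in miniature. The code below is a hypothetical illustration, assuming a synthetic binary feature and a simple rigid translation recovered by brute-force correlation; practical systems such as those of [259, 330] must handle grey-scale data and non-uniform distortion.

```python
# Estimate the rigid shift between two adjacent section images by
# exhaustive cross-correlation over candidate shifts.

N = 16

def make_image(ox, oy):
    # Synthetic section: an L-shaped "cell boundary" at offset (ox, oy).
    img = [[0] * N for _ in range(N)]
    for k in range(5):
        img[oy + k][ox] = 1      # vertical stroke
        img[oy][ox + k] = 1      # horizontal stroke
    return img

section_a = make_image(3, 4)
section_b = make_image(5, 6)     # same feature, shifted by (+2, +2)

def correlate(a, b, dx, dy):
    # Overlap score of image b shifted by (-dx, -dy) onto image a.
    s = 0
    for y in range(N):
        for x in range(N):
            if 0 <= y + dy < N and 0 <= x + dx < N:
                s += a[y][x] * b[y + dy][x + dx]
    return s

best = max((correlate(section_a, section_b, dx, dy), dx, dy)
           for dx in range(-4, 5) for dy in range(-4, 5))
print(best)  # (9, 2, 2): all 9 feature pixels align at shift (2, 2)
```

The exhaustive search is quadratic in the shift range; real registration code uses Fourier-domain correlation or feature matching to make the same estimate cheaply on large images.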

Any other surface-imaging technique could be employed to examine the block face: scanning electron microscopy, photoelectron imaging, and so forth. Which method provides the most useful data is unclear and awaits experimentation. A general advantage of such "block face analysis" techniques is that they provide better estimates of the size and exact position of the neurons -- sections are much more vulnerable to distortion. Correct estimates of neural diameter and length are important in electrical modeling of neural behavior [34].

Alternatively, the computer analysis could work only with the image data from the sections (as human analysis has done) and infer the distortion from section to section by either (a) matching low-level grey-scale data and prominent "local features", or (b) actually matching high-level recognized objects (cell boundaries, mitochondria, etc.). The former approach has been considered by Dierker [259], while the latter has not yet been attempted because no one has yet extracted high-level object information. This general approach would presumably yield less precise information about cell geometry, but would avoid the need to image the block face and compare the resulting block-face image with the section image.

5. Can We Analyze the Human Brain?

At the present time, a reasonable research objective is the fully automated analysis of a cube of complex neuropil about 100 microns on a side. When this has been accomplished (in a time frame of perhaps a decade and presumably after considerable effort) we might reasonably consider what the next target should be. Would it be possible to consider at that time a full analysis of the human brain? The following discussion suggests that the only limits remaining will be budgetary -- and that the budget required will be within the reach of a major research project.

We shall consider here only evolutionary improvements in current technologies that might reasonably be available in the next one to two decades and will exclude possible major technological breakthroughs [38, 174, 175, 274, 283].

Continued increases in computational power and decreases in the cost of electronic devices that are in keeping with historic trends are assumed -- a factor of 100 or more per decade [281, 282]. The overall cost should be no greater than the cost devoted to other scientific projects of major interest, e.g., sending a man to the moon, building a high-energy accelerator, or sequencing the human genome. As the following analysis suggests, the cost required should be less than one billion dollars within one to two decades from now -- using fairly conservative estimates.

5.1 The Basic Objective

In the following analysis, we assume that a complete reconstruction of the entire human brain is needed. An alternative possibility would be to analyze selected small regions and assume they are replicated over large volumes. For example, analyzing a single representative cortical column from area 17 of the cortex might allow a reasonable inference about how the whole of that area is organized. By analyzing a small region from each "different" area of the brain, and by then establishing the inter-regional projections, it might be possible to infer the structure of the whole with only partial information. Such smaller scale efforts must precede any more ambitious effort and will certainly provide a great deal of very valuable information about neural function, but it seems probable that we will eventually desire information that can only be provided by the more complete analysis. Global structure might not be deducible from the structure of isolated components. In the presence of specific long range connections, for example, the specificity of the connections would not be evident from examination of the isolated regions. It might also prove difficult to understand the highly specific details involved in complex aspects of higher cortical function from just a general description -- especially when we consider that the final form of the adult brain is acquired only after substantial interaction with the environment. Providing a reasonable understanding of the cortical activity involved in composing Mozart's "Mass in C Minor" might well require more than a general description of isolated regions of the brain. Whether or not the additional cost of a complete analysis is justified by the additional information gained is certain to be a subject of lively debate as the technical feasibility of such a project draws closer. 
This debate will be similar in form to the current debate over complete sequencing of the entire human genome versus selective sequencing of specific regions. In any event, we shall assume that a full analysis of the human brain is desired -- certainly if this is feasible, then more selective analysis of small regions would also be feasible.

The results of such an analysis will not be a raw three-dimensional image. Such an image would require about 10^22 bits to store. While this storage capacity does not appear to be infeasible [283] it is probably beyond our self-imposed planning horizon of ten to twenty years. Instead, we will assume that image analysis is done on the raw data as it is generated, and that a "stick figure" model of the neural structure is generated. In such a model, each neuron is represented by a "stick figure" giving branching information, neuronal type, synaptic types, synaptic connectivity, and the like. This representation has two major advantages. First, it captures the global information that smaller scale analysis might find difficult to provide. Capturing this global information is the major purpose of analyzing a large number of neurons. Information that can be derived from purely local analysis obviously does not require analysis of a large volume. Detailed information about the local structure (and detailed inferences about local function) of individual neurons need not be included in a global analysis. Summary information about local structure and inferred local function (e.g., local information that is likely to affect the global interpretation) does need to be included. This information is typified by a description of the type of synapse, numerical data concerning the inferred strength of the synaptic connection, and so on.

Second, a "stick figure" model reduces the total amount of information in a description of the neural structures of the brain to about 10^17 = 10^15 * 100 bits (the number of synapses times roughly 100 bits per synapse to store synaptic type and other relevant summary parameters). CREO Products [366] sells a high density optical tape system that stores a terabyte (10^12 bytes) on a single 880 meter by 35 millimeter reel of optical tape. Media costs are under $10.00 per gigabyte, allowing the storage of 10^16 bytes (about 10^17 bits) for under $100,000,000. The transfer rate per drive is about 3 megabytes/second, allowing the transfer of 10^16 bytes in three years (10^8 seconds) with about 30 such drives. Each drive costs under $300,000, so drive costs should be below $10,000,000. Even allowing for the need to read tapes several times, it is unlikely that the cost for drives would exceed $100,000,000. We can reasonably conclude that the raw storage costs required to hold the output of the analysis are within the required range even using today's technology, and are likely to drop significantly over the next ten to twenty years.
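The storage arithmetic above can be checked directly, using the figures quoted in the text (the exact drive count comes out to 34 rather than the rounded "about 30"):

```python
# Storage arithmetic for the "stick figure" model.

synapses = 10**15
bits_per_synapse = 100
total_bits = synapses * bits_per_synapse      # 10^17 bits
total_bytes = 10**16                          # the text's rounded byte figure

media_cost = total_bytes // 10**9 * 10        # $10/gigabyte -> $100,000,000
bytes_per_drive = 3 * 10**6 * 10**8           # 3 MB/s sustained for 10^8 s
drives = -(-total_bytes // bytes_per_drive)   # ceiling: 34 drives ("about 30")
drive_cost = drives * 300_000                 # roughly $10,000,000

print(total_bits, media_cost, drives, drive_cost)
```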

5.2 The Basic Problems

There are two major problems that we must face in analyzing a structure as large as the brain: (1) getting the raw image data and (2) analyzing the flood of data once we have it. We assume the image data is acquired via TEM analysis, and thus that relatively thin sections (thin enough to be penetrated by the electron beam) are required. In outline, the following paragraphs consider (1) how thick the sections should be, (2) how to produce that many viewable sections, (3) what resolution is required for reconstruction, (4) the total number of electrons required to image the brain at the required resolution, (5) how to put that many electrons through that much tissue and capture the resulting images in some reasonable period of time, (6) how to convert that many images into a flood of digital information, and (7) how to analyze that much digital information.

5.3 Section Thickness

We shall adopt 1 micron as a reasonable section thickness. Sections this thick have a number of useful properties. First, such sections can be penetrated easily by an electron beam. Sections from .25 to .5 microns are now recommended for routine use [48] with beam energies as low as 100 keV, and tissue sections ranging up to 10 microns thick have been examined [47, 262, 263] (although the higher beam energies required are more expensive; most current work is done at lower energies -- tens to hundreds of keV rather than a few MeV). Second, large sections (on the order of one square centimeter) can be prepared fairly easily [221 pages G116-G123, 255 page 165] and significantly larger sections seem possible. Third, use of 1 micron sections avoids the use of thin sections (.1 micron or thinner) -- which are more fragile, more numerous, more difficult to section, and more prone to produce artifacts from buckling, warping, and tearing.

5.4 Section Support

Sections are generally supported in an electron microscope on a grid of metal which has holes in it -- the holes allow the beam of electrons to pass freely through the specimen. If the holes are too large, the section will collapse through them under its own weight. To prevent this, a continuous film of support material is sometimes used to add strength, although even very thin support films tend to blur and obscure the specimen. Even a section supported by a thin film will eventually collapse if the hole is large enough. A wide variety of grids with holes of different sizes and shapes are available. Slot-shaped holes are often used, and are commonly available with slot-widths ranging from over 1000 microns down to about 20 microns [255 page 133]. Supporting films are usually used with slot-widths over 100 microns.

Clearly, if a section is laid out on a slot-grid only the parts over one of the slots can be viewed -- and so a large portion of the section is effectively lost. Three methods for avoiding this problem seem possible.

First, we could arrange matters so that an unsupported specimen would not collapse. Given the size of section we are considering (several centimeters) this approach is probably only feasible in a micro-gravity environment (for example, in an orbital facility). While this would clearly prevent the section from collapsing under its own weight, the additional cost of providing a laboratory in low earth orbit would be substantial. We will not consider this possibility further.

Second, we could arrange matters so that the hidden portion of the section is not of interest. This could be done by first pre-sectioning the specimen into 1 millimeter slices and then interleaving these with 1 millimeter "fill" slices. The combined layered material (somewhat like Neapolitan ice cream) can then be embedded and sectioned. The resulting sections would have alternate 1 millimeter stripes of "fill" and tissue. If the sections were laid out on the slot grid so that the "fill" was directly supported by the metal of the grid while the tissue was over the slot, then all of the tissue could be examined. This seems to require fairly large slot widths (1 millimeter in this example) and doubles the volume required during later sectioning steps -- a minor disadvantage.

The third approach would be to move the section on the slot grid after the visible portions of the section had been examined, thus exposing the rest of the section to view. This is not normally done because present slots are wide enough to allow the full area of interest to be examined. There seems no reason in principle, however, why a 1 micron section could not be lifted from the surface of the slot grid and moved over. There are many techniques that involve lifting a section off a glass slide (after viewing with a light microscope) and then re-embedding and re-sectioning it for viewing with the electron microscope [255]. Simply moving a section (without re-embedding or re-sectioning) would seem to be an easier operation. This approach does not require large slots; more conventional slot widths of perhaps slightly more than 100 microns could be used and would provide good support. While no one has yet demonstrated feasibility, there has been no pressing need to do so. Many mechanisms for lifting the section from the grid are possible. It could be raised up by a rising fluid (such as water) or pushed up by probes thrust upwards through the slots; it could be pulled upwards by a flat sheet glued to the section's upper surface, and the glue later dissolved; it could be vibrated loose from the grid with ultrasonics, or dissolved loose from the grid by some chemical bath. Which of the many possible methods for moving the section will prove simplest and most convenient is unclear -- it seems probable that some method will work.

5.5 Feasibility of Large Sections

The logistics of converting a brain into a series of 1 micron sections that can be viewed under an electron microscope requires some thought. Current techniques can reliably produce 1 micron thick sections which are 12 by 16 millimeters in size, which is "...about 200 times larger than those used in electron microscopy, ..." [255 page 165]. There is today no great need to make larger sections -- even if they could be made, imaging them in an electron microscope and then analyzing the resulting images (by eye) would be tedious using current techniques. There are no theoretical barriers to producing larger sections, and there is no reason to presume the practical barriers cannot be dealt with -- no one has done so because no one has really wanted to. Many commercially available microtomes are mechanically accurate enough to produce 1 micron sections from blocks a few decimeters on a side [221 page 118]. Suitable glass knives 40 mm long have been made and larger ones are quite possible [255 page 165]. Recent work on diamond coatings [285, 362, 364] should lead to very high quality low cost diamond coated knives (Sumitomo Electric of Japan has already made a tweeter with a 1 micron diamond coating). Because of the large size and the requirement that reproducible serial sections be produced, it seems likely that diamond knives will be required.

If the blade and microtome are suitable, then the only remaining obstacle must be the specimen block -- and large block faces can deform under the pressure of sectioning. The usual solution is to use a harder embedding medium, and this has produced quite satisfactory results. With care, it might be possible to extend this method to larger section sizes. An alternative method would be to provide direct bracing to the face of the specimen block being sectioned. The principle should be familiar to anyone who has seen a deli meat slicer, in which the meat slides along a flat plate and is sectioned by a blade which is parallel to and slightly above the plane of the flat plate. The meat itself is quite soft and could not be sectioned easily with a "free hand" knife blade because it would deform too easily, but with the aid of the meat slicer it can be converted into very thin uniform sections with little difficulty. A similar design in microtomes would replace the "free blade" and unsupported block face of the conventional microtome with an optically flat support block against which to lay the specimen, and an optically flat knife perhaps a fraction of a millimeter beyond the edge of the support block and 1 micron above the plane of the support block. The specimen block would then be slid along the support block and into the knife -- even a soft embedding medium should produce good results.

Such a microtome would clearly be more expensive than a conventional microtome: the optically flat support block, the optically flat blade, and the close alignment between the two would require additional effort to build. Given that its only real advantage is the ability to produce large sections, and given the limited value of such sections to previous research, it is not surprising that it has not been built. However, it appears to be a technically feasible undertaking and could be built if simpler approaches prove inadequate.

In view of the foregoing, we shall assume that the entire brain is sectioned into a series of large (roughly 14 by 18 centimeter) sections which are each 1 micron thick, and that the sections are supported on slot grids with fairly narrow slots -- about 100 microns wide. The human brain is roughly 7 centimeters high, so roughly 70,000 sections will be required. While this is a large number, automated handling techniques should reduce the per-section costs to a few dollars per section. Even if it costs $100/section (with a suitably automated section-handling system) this comes to about $7,000,000 -- an acceptable cost as part of a major project. It is important in keeping costs under control that mechanical handling is done on a per-section basis. There is no requirement for making or mechanically handling smaller sections.
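The section count and handling cost follow directly from the assumed dimensions:

```python
# Sectioning the whole brain: section count and handling cost.

brain_height_microns = 7 * 10**4             # roughly 7 centimeters
section_thickness_microns = 1
sections = brain_height_microns // section_thickness_microns   # 70,000
cost_per_section = 100                       # pessimistic $100/section
total_cost = sections * cost_per_section     # $7,000,000

print(sections, total_cost)
```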

5.6 Resolution Requirements

We now consider what resolution is required. The smallest features that we must reliably analyze are small axons and dendrites -- which are .1 micron or 1000 angstroms in diameter. If the entire volume of the cell were filled with contrast material, this size might also suffice as the resolution limit (making optical analysis just feasible). If however the membrane boundary itself is stained with a contrast agent, then higher resolution is required. A circle -- the appearance of a small axon or dendrite boundary viewed in two-dimensional cross section -- drawn with lines that are 1/10 its diameter can be resolved reliably, and so 100 angstroms appears to be an adequate resolution. This will readily allow resolution of even the finest nerve fibers and is almost sufficient to resolve the presence of the larger protein molecules. Considering that a typical nerve cell membrane might be 40 angstroms thick, that significant membrane proteins are perhaps 100 angstroms, that microtubules (longitudinal structural fibers within the nerve cell) are about 250 angstroms in diameter, and that current reconstruction work is typically done with sections that are 500 to 1000 angstroms (or more) thick, it might well be possible to use a poorer resolution successfully. We shall, however, adopt 100 angstroms.

The presence of synaptic vesicles is extremely useful in identifying the location of synapses. Vesicles range from about 400 angstroms up to 2000 angstroms in diameter. Even small vesicles (400 angstroms) are just visible with 100 angstrom resolution -- though higher resolution would be useful. The reliable recognition of synapse location and function might well require the use of a specific stain -- in which case, the stain would serve to identify synaptic location, and synaptic vesicles would serve simply as an additional marker. It seems likely, therefore, that reliable identification of certain very small features can be done with staining techniques rather than by attempting to increase EM resolution.

The resolution limit is smaller than the section thickness by a factor of 100. We will therefore require EM tomography of the sections to obtain sufficient resolution.

5.7 Number and Beam Current Requirements for the Electron Microscopes

The human brain occupies about 1350 cc; 1 cc is 1000 cubic millimeters, 1 cubic millimeter is 10^9 cubic microns, and 1 cubic micron is 10^6 of our 100 angstrom minimally resolvable cubes (which we shall call "voxels", or volume elements, in keeping with the 3-d computer graphics literature). Multiplying these together yields 1350 x 1000 x 10^9 x 10^6, or about 1.3 x 10^21 voxels. This is the most fundamental parameter with which we must deal, and will appear throughout the following analysis.
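As a check, the voxel count works out as follows:

```python
# Voxel count for the whole brain at 100 angstrom resolution.

brain_cc = 1350
cubic_microns_per_cc = 1000 * 10**9     # mm^3 per cc times micron^3 per mm^3
voxels_per_cubic_micron = 10**6         # (1 micron / 100 angstroms)^3
voxels = brain_cc * cubic_microns_per_cc * voxels_per_cubic_micron

print(voxels)   # 1.35 x 10^21, i.e. "about 1.3 x 10^21"
```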

An electron yields information about an object in its path by having its path deflected, and perhaps by having its energy diminished. Electrons that pass through the specimen can thus be divided into two categories: those that followed their normal path with minimal deviation and energy loss, and those that didn't. (While electron microscopes that measure the energy loss of an electron as it passes through the specimen are in use, they have not been used for neural reconstruction. This might well be a useful strategy but awaits further research and will not be considered further here). By collecting and counting all the electrons that passed undeflected through a given spot on the specimen, the tendency of the specimen at that spot to scatter or retard electrons can be inferred. Because the scattering process is random, and because the exact number of electrons that passed through a given spot is also random, a large number of electrons is required to yield an accurate estimate of the scattering tendency (or "electron density"). In particular, the fractional error in the estimate of the electron density at a point is roughly the reciprocal of the square root of the number of electrons that passed through that point. If we desire an electron image accurate to 7 bits (or 1 part in 128) we must expose each voxel of the specimen to about 128^2 or 1.6 x 10^4 electrons. This number is not quite right -- the specimen is not one voxel thick, it is 100 voxels thick (1 micron) and we are using EM tomography to reconstruct its interior. If we required exactly 100 views from 100 different angles for the reconstruction of a 100 voxel thick section, then our calculations would still be correct. However, it takes between 200 and 600 views through such a section to allow reconstruction to an accuracy of about 100 angstroms [187, 188, 354]. We shall assume that 300 views are required for EM tomography, in keeping with the empirical findings of Belmont et al. 
[187], and therefore that our estimate of the number of electrons required per voxel must be multiplied by 3. This yields 4.8 * 10^4 electrons per voxel.
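The dose arithmetic can be restated compactly. Note that the text rounds 128^2 to 1.6 x 10^4 before tripling, giving 4.8 x 10^4; the unrounded figure is 49152.

```python
# Electrons per voxel for a 7-bit image reconstructed from 300 views.

grey_levels = 128                     # 7 bits -> accuracy of 1 part in 128
single_view_dose = grey_levels**2     # shot noise: N = (1 / fractional error)^2
views = 300                           # tomographic views assumed [187]
thickness_in_voxels = 100             # 1 micron section / 100 angstrom voxels
electrons_per_voxel = views * single_view_dose // thickness_in_voxels

print(electrons_per_voxel)   # 49152
```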

The selection of 7 bit resolution, as opposed to 8 bit, 6 bit, or 5 bit is rather arbitrary. Many image analysis systems now in use have fewer than 7 bits and work quite effectively. Much work has been dedicated to 1 bit (black and white dots) imaging systems, which are effective in many applications. While 7 bits should suffice, the total electron dose could be substantially reduced if less accuracy were required -- if 6 bit accuracy were sufficient, then only (2^6)^2 or 4096 electrons would be required. We shall use the higher estimate of 4.8 * 10^4 electrons per voxel in this analysis.

If there are 1.3 x 10^21 voxels, and each voxel must receive 4.8 x 10^4 electrons, then we require a total of 1.3 x 4.8 x 10^25 or 6.2 x 10^25 electrons. Now, the charge on an individual electron is 1.6 x 10^-19 coulombs, so the total charge is 1.6 x 6.2 x 10^6 or 9.9 x 10^6 coulombs. If we now assume that the analysis of a brain takes 3 years (a rather arbitrary number, but a reasonable one for a major project) then this 9.9 * 10^6 coulombs will be spread across 3 x 365 x 24 x 60 x 60 or 9.5 x 10^7 seconds (the number of seconds in 3 years). This yields .10 coulombs per second. A single coulomb per second is by definition 1 ampere, so this rate of flow is .10 amperes or 100 milliamperes. If we (again rather arbitrarily) use 1000 electron microscopes, each one must have a beam current of about .1 milliamperes to yield the needed total of 100 milliamperes. Current instruments have maximum beam currents around .1 milliamperes (the Philips 430 has a maximum beam current of .1 milliamperes), which is just what we assumed.
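The chain of unit conversions above can be verified numerically (floating point, using the text's rounded inputs):

```python
# Numerical check of the beam-current derivation.

voxels = 1.3e21
electrons_per_voxel = 4.8e4
electron_charge = 1.6e-19                 # coulombs per electron
seconds = 3 * 365 * 24 * 60 * 60          # ~9.5 x 10^7 s in 3 years

total_electrons = voxels * electrons_per_voxel     # ~6.2 x 10^25
total_charge = total_electrons * electron_charge   # ~9.9 x 10^6 coulombs
total_current = total_charge / seconds             # ~0.10 amperes
microscopes = 1000
per_scope_amps = total_current / microscopes       # ~0.1 milliamperes each

print(total_electrons, total_charge, total_current, per_scope_amps)
```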

There is no reason to believe that beam currents substantially higher than .1 milliamperes cannot be achieved. Current transmission electron microscopes have not been designed to maximize beam current. They assume that the specimen can be viewed for relatively long periods of time without moving it (seconds to minutes). Beam currents higher than .1 milliamperes are sufficient to destroy most specimens during exposure for such a time (a feature of dubious value). In addition, increased beam current increases the power consumption of the microscope, which means the power supply costs more money both to build and to operate. Given that the function of current electron microscopes is to produce a high resolution image of good quality in a span of a few seconds, at present there seems little reason to have a beam current significantly above .1 milliampere. In one second, this beam can deliver 6.2 x 10^14 electrons. If we were to divide this number of electrons by the number of pixels in a high-resolution photograph, we would have the electron dose per pixel. A picture of 10^4 x 10^4 pixels is more than adequate for all current work, and this implies 6.2 x 10^6 electrons per pixel. A satisfactory image can be made with a few thousand electrons per pixel, so a .1 milliampere beam current is more than adequate for any current requirement (if the reader will pardon the pun). Providing a higher beam current on present electron microscopes would be like providing a carbon-arc light on a flash-camera -- there is simply no reason to do so. The limitation is thus one of need rather than physics, and it is reasonable to assume that electron microscopes of the future will provide higher beam current, particularly if they are specifically designed for this purpose. However, because reliable predictions of beam currents in future electron microscopes are difficult to find (unlike predictions of future computational power) we will (conservatively) assume no progress in this area. 
This means we will use 1000 electron microscopes each one of which has a 100 microampere beam current.

If the electron microscopes are assumed to cost half a million dollars each, this implies a cost of half a billion dollars -- half of our estimated one billion dollar budget and the single most expensive item.

5.8 Viewing Requirements

We now come to the next significant problem -- moving the entire volume of the human brain through the viewing fields of these 1000 microscopes during the 3 years of the analysis. Viewing fields a few microns across are typical in current transmission electron microscopes and increasing the viewing field might prove awkward. Current electron lenses have severe spherical aberration which limits the angle at which individual electrons can go through the lens (the resolution decreases roughly as the cube of the beam angle). This in turn places limits on the viewing field size. While the development of newer electron lenses might be possible, and the re-design of current instruments to enhance field size at the cost of resolution is feasible, we shall not consider these possibilities but will instead (again somewhat arbitrarily) consider a field size of 100 microns by 100 microns. (This is considered a large field by current standards, but is within the range of many current microscopes. The Zeiss EM 10C, for example, can view a 2 millimeter diameter field with its "wide field" imaging mode.) This corresponds to a square which is 10,000 pixels by 10,000 pixels. (A pixel is a two dimensional PICture ELement). The size of the individual viewing field is not as fundamental a parameter as the total number of voxels that must be resolved (which is dictated by the volume and required resolution) or the number of electrons required per voxel (which is dictated by the required accuracy). The viewing field size and geometry might change significantly depending on the technology and engineering trade-offs -- we will not consider these possibilities here. It is important to note that the viewing field size does NOT imply that the section is physically cut into squares of 100 microns by 100 microns -- the physical sections are large (on the order of 14 centimeters by 18 centimeters). 
The viewing field logically divides the section into many small squares -- handling costs, however, are proportional to the number of physical sections (about 70,000) and not the number of viewing fields.

A single field of view has 10^4 * 10^4 or 10^8 pixels, and there are 3.9 * 10^21 pixels to be scanned (3 times the 1.3 * 10^21 voxels because of the use of EM tomography). This means that we must scan 3.9 * 10^21 / 10^8 or 3.9 * 10^13 viewing fields. Dividing the 9.5 * 10^7 seconds (or 3 years) allotted for the task by the number of viewing fields yields the time (in seconds) that we can spend to examine each field. This gives 9.5 * 10^7 / (3.9 * 10^13) or 2.4 * 10^-6 seconds/field. Because we are assuming 1000 electron microscopes working together, each microscope must scan one viewing field in 2.4 milliseconds.
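The per-field timing budget can be checked as follows:

```python
# Check of the viewing-field budget.

pixels_per_field = 10**4 * 10**4     # 10^8 pixels in a 100 x 100 micron field
total_pixels = 3 * 1.3e21            # tomography triples the 1.3e21 voxels
fields = total_pixels / pixels_per_field     # ~3.9 x 10^13 viewing fields
seconds = 9.5e7                              # 3 years
time_per_field = seconds / fields            # ~2.4 x 10^-6 s across all scopes
per_microscope_s = time_per_field * 1000     # ~2.4 ms for each of 1000 scopes

print(fields, per_microscope_s)
```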

Even though bringing a new viewing field into position requires only that the physical section be "stepped" by 100 microns, the requirement that this be done in 2.4 milliseconds suggests that improvements on current mechanical or digitally controlled viewing stages might be inadequate. Therefore, we presume that the whole physical section is moving smoothly and continuously through the field of view, at a rate of 100 microns every 2.4 milliseconds (this corresponds to 1 meter/24 seconds or about .15 kilometers per hour -- a slow crawl). (As a minor aside, we note that if the slots in the supporting grid are at right angles to the direction of travel of the physical section, then the field of view will remain uninterrupted by the slot grid). If the specimen is moving continuously, though, what prevents the image from being a complete blur? If 2.4 milliseconds is too short a time for mechanical action to take place, we must adopt some electronic technique to compensate for the motion of the specimen.
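The stage speed implied by that timing is easily verified:

```python
# The continuous stage speed implied by one 100 micron field
# every 2.4 milliseconds.

field_width_m = 100e-6
dwell_s = 2.4e-3
speed_m_per_s = field_width_m / dwell_s     # ~0.042 m/s, i.e. 1 m per 24 s
speed_km_per_h = speed_m_per_s * 3.6        # ~0.15 km/h -- a slow crawl

print(speed_m_per_s, speed_km_per_h)
```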

Several methods for eliminating specimen motion are possible. What appears to be the simplest is for the electron microscope to track the motion of the specimen. In this arrangement, a set of deflection coils is placed close to the objective lens, where it can "sweep" the beam over the specimen at precisely the speed with which the specimen is moving. These are the same deflection coils used in scanning electron microscopy, except that there the beam is in motion while the specimen is held still. Thus, the image of the specimen will appear steady for the 2.4 milliseconds that one frame is in view -- and then the deflection coils will "fly back" and lock onto the next viewing frame. The 2.4 milliseconds is quite long compared with the time required for a single scan line on a standard television set -- 63.5 microseconds (including fly-back time). The repetitious "saw tooth" signal required to generate this electronic scanning action has a primary frequency of 15.75 kilohertz in the case of a standard television set (which causes the high-pitched whine some people hear), and would have a frequency of only 420 hertz in the case of the electron microscope.
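The stage speed and sweep frequency quoted above follow directly from the 100 micron field width and the 2.4 millisecond frame time; a sketch in the same spirit (Python for concreteness, constants taken from the text):

```python
# Stage speed and deflection-sweep frequency implied by the text's figures.
FIELD_WIDTH = 100e-6   # meters (100 microns per viewing field)
T_FRAME = 2.4e-3       # seconds each viewing field is under examination

speed = FIELD_WIDTH / T_FRAME      # meters/second of continuous stage motion
kmh = speed * 3600 / 1000          # ~0.15 km/h -- "a slow crawl"
sweep_hz = 1 / T_FRAME             # ~420 Hz saw-tooth sweep frequency

print(f"{speed:.3f} m/s = {kmh:.2f} km/h, sweep at {sweep_hz:.0f} Hz")
```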

Perhaps the simplest mechanism to move the specimen smoothly through the field of view of the TEM would be to place several 1-micron sections of the specimen on the edge of a rotating disk, and let the microscope view the different fields much as a phonograph needle "views" a record -- with the field of view slowly spiraling inwards.

5.9 Conversion of Electron Beam Images to Digital Information

Having stabilized the image for 2.4 milliseconds, we must now convert the image into some 10^8 digital samples for analysis by a computer. This is done by projecting the electron image onto a fluorescent screen that converts it into an optical image, and then detecting it with optical sensors. (Direct imaging of electrons by a CCD imager has been done, but beam damage limits the lifetime of the imager [224]. Other mechanisms that directly detect the passage of a 100 keV electron are possible -- but we shall confine the discussion to the most commonly used technique). The fluorescent screen must not retain the image for any significant fraction of 2.4 milliseconds, or separate images will blur into one. Fluorescent coatings with an "after glow" under 100 nanoseconds are commercially available: P47 decays to 10% of its original brightness 80 nanoseconds after the electron beam is removed [219 page 182]. This phosphor is already used in SEM's, where short decay times are essential.

Once the image is stabilized, it must be converted into a digital stream. This is normally done in two steps: converting the image (made up of photons) into an electrical analog, and then converting the analog form into a digital stream. The first step can be done with CCD (Charge Coupled Device) imaging devices (typically used in video cameras), while the second step requires an ADC (Analog-to-Digital Converter). Because the second step is more expensive in this application, we shall first compute how many ADCs are required and then provide enough CCD imaging chips to supply the raw analog data.

The fastest one-chip ADC currently (1987) on the market is the Honeywell HADC77100B. This converts at a rate of 150 million samples per second, costs $200 in lots of 100 and is accurate to almost 8 bits [250]. (Sony and NTT have presented 350 and 400 megahertz 8-bit one-chip ADC's at the 1987 International Solid State Circuits Conference [280]). (Sony has recently announced the 8-bit CXA1076K that converts at 200 million samples/second and costs $385 in lots of 100 [355]).

There are 3 x 1.3 x 10^21 pixels to be imaged in the 9.5 x 10^7 seconds of the 3 years, which gives 4.1 x 10^13 pixels/second. The total number of ADCs required can be computed by dividing this by the sampling rate of one ADC: 4.1 x 10^13/1.5 x 10^8 or 2.7 x 10^5 one-chip ADCs. At a cost of $200 each we get a raw cost of $54,000,000. While high, this is still in keeping with the general costs of a major research project -- and we can reasonably expect this price to drop significantly over the next few years. The raw cost of analog to digital conversion does not appear to be a major factor.
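As a check, the ADC count and raw cost can be recomputed from the quoted figures (a Python sketch; the constants are those given above):

```python
# Analog-to-digital conversion budget from the figures in the text.
PIXELS = 3 * 1.3e21   # image points (3x the voxel count, for tomography)
SECONDS = 9.5e7       # 3 years
ADC_RATE = 1.5e8      # samples/second for one HADC77100-class ADC
ADC_COST = 200        # dollars per chip in quantity

rate = PIXELS / SECONDS        # ~4.1e13 samples/second overall
adcs = rate / ADC_RATE         # ~2.7e5 one-chip ADCs
print(f"{adcs:.1e} ADCs, ${adcs * ADC_COST / 1e6:.0f} million")
```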

Given that we have 2.7 x 10^5 ADCs, we require 2.7 x 10^5 sources of 150,000,000 samples/second of video data to drive them, i.e., some imaging chips. Currently the largest available CCD imagers are about 1,280 by 980 (corresponding to the size required for proposed high-definition television systems) and have over 1,000,000 pixels -- this will prove to be larger than we need. If we limit ourselves to currently available output rates of about 50,000,000 samples/second, then it will take 3 CCDs to drive a single ADC -- or 8.1 x 10^5 CCDs. Given that each image will be presented for only 2.4 milliseconds, each CCD can produce data for only 2.4 milliseconds before a new image must be processed. At a rate of 50,000,000 samples/second, each CCD can produce only 120,000 image points. This means each CCD need have only 120,000 pixels.
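The CCD count and the pixels-per-chip figure follow mechanically (again a Python sketch using only the figures above):

```python
# CCD imager budget: each 150 Msample/s ADC is fed by three
# 50 Msample/s CCDs, so the CCD count follows from the ADC count.
ADCS = 2.7e5          # ADC count, from the previous calculation
CCD_RATE = 5e7        # samples/second read out of one CCD
T_FRAME = 2.4e-3      # seconds each image is presented

ccds = 3 * ADCS                      # ~8.1e5 CCD chips
pixels_per_ccd = CCD_RATE * T_FRAME  # 120,000 pixels read per frame
print(f"{ccds:.1e} CCDs, {pixels_per_ccd:.0f} pixels needed per CCD")
```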

If we assume that we actually package 1,200,000 pixels per CCD imaging chip (which is in keeping with current technology, and would be very conservative by future standards), then even at $100 per chip the 8.1 x 10^5 CCDs cost only $81,000,000 -- so raw costs of the CCD imaging chips are well within our allotted budget.

The raw task of converting a volume of neural tissue the size of the human brain into a series of digital samples at 100 angstrom resolution can be done for several hundreds of millions of dollars within a few years. The most expensive item in this process will be the electron microscopes -- and it seems likely that technical progress will substantially reduce their cost in the next ten to twenty years -- a reduction that we did not take into account in the cost estimate. The actual cost of such a project will probably be lower than our estimate for this reason.

5.10 Analyzing the Raw Image Data

We now turn to the question of analyzing this volume of data. We first consider the computational requirements for the tomographic reconstruction of the interior of each 1 micron slice. There are 100 horizontal "slices" of 100 angstroms in a single 1 micron physical slice. If we assume that vertical slices of 100 x 100 voxels are reconstructed by the imaging algorithm (n=100) and that 1000 (10 x n) operations per voxel are required [184, 185] then the computational effort is 1.3 x 10^21 x 1000 or 1.3 x 10^24 elementary computations. At this point, we must estimate the cost per elementary operation, and so a digression on the current and projected costs of computation is in order.
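Before that digression, the operation count itself can be verified from the quoted figures (a Python sketch; constants from the text):

```python
# Tomographic-reconstruction workload from the estimate in the text.
VOXELS = 1.3e21
OPS_PER_VOXEL = 1000   # ~10 * n operations per voxel for n = 100
SECONDS = 9.5e7        # 3 years

ops = VOXELS * OPS_PER_VOXEL   # 1.3e24 elementary computations
rate = ops / SECONDS           # ~1.4e16 operations/second, sustained
print(f"{ops:.1e} operations, {rate:.1e} per second for 3 years")
```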

We shall confine our attention to present and relatively near-term projections of computational technology -- we shall not consider molecular [38, 174, 175, 273], quantum-mechanical [274], or nanotechnological [283] approaches which, though almost certainly feasible in the future, have not yet been demonstrated. Given the absence of theoretical limits to computation [24], it seems probable that computational power substantially greater than that considered here will eventually be available.

Currently available advanced one-chip microcomputers have 300,000 to 400,000 transistors, cost $100 to $400, and typically have 30 to 67 nanosecond clock cycles (15 to 30 MHz) [252]. It is now possible to fabricate chips with 10^6 to 10^7 transistors, and evolutionary improvements will yield 10^8 to 10^9 transistors on a single chip [277]. Among others [357], James D. Meindl (co-director of Stanford University's Center for Integrated Systems) has predicted "gigascale integration" before the turn of the century [37]. By substantially increasing the area of a single chip (wafer scale integration [279]) or by building up multi-layered three-dimensional devices [278] we can reasonably expect to pack even more transistors per "chip" using principles that are fundamentally similar to those in use today.

The most powerful computer proposed to date is the IBM TF-1, or Tera Flop processor. This machine will be able to execute over 10^12 floating point operations per second. It will be built from 32,768 general purpose processors, each of which will have 12 megabytes of memory. Each processor will have a 20 nanosecond cycle time and can theoretically execute two floating point operations per cycle, for a peak theoretical rating of 10^8 floating point operations per processor per second, or 3.2 * 10^12 for the whole system. The peak execution rate cannot be sustained for most programs, so the more realistic 10^12 number is generally used. The processors are connected by a communications network of 24,000 special purpose communication chips. The network can transfer one byte per processor per cycle, with a delay of 20 cycles from the time data leaves the source processor to the time it reaches the destination processor when the communications network is unloaded. A modest additional delay would be expected in most actual applications due to collisions in the network. Although the system will not be made commercially available, the estimated total system cost is expected to be around $120,000,000. The system should be completed within two years (by 1990) [331,359,360,361]. This computer could execute a total of almost 10^20 flops in 3 years. If we consider that each processor is general purpose, that the image processing tasks required in reconstruction work can probably be done with properly scaled fixed-point integer arithmetic (as opposed to the more complex floating point arithmetic provided in the TF-1), and that the TF-1 is based on currently available technology, then we can reasonably conclude that computing power substantially in excess of this can be made available in the next 10 to 20 years.
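The TF-1 figures quoted above are mutually consistent, as a short calculation shows (Python for concreteness; the constants are those given in the text):

```python
# Cross-check of the TF-1 figures quoted in the text.
PROCS = 32768             # general purpose processors
CYCLE = 20e-9             # seconds per processor cycle
FLOPS_PER_CYCLE = 2       # peak floating point operations per cycle
SECONDS = 9.5e7           # 3 years

per_proc = FLOPS_PER_CYCLE / CYCLE   # 1e8 flops/processor, peak
peak = per_proc * PROCS              # ~3.3e12 flops peak for the system
total = 1e12 * SECONDS               # ~1e20 flops sustained over 3 years
print(f"peak {peak:.1e} flops/s; ~{total:.1e} flops in 3 years")
```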

One of the most powerful computers actually built and delivered commercially is the Connection Machine, which has 65,536 processors and costs about $4,000,000 [347]. The processors in the Connection Machine are much less powerful than those in the TF-1. These and other massively parallel designs share a common theme -- a very large number of relatively cheap processors connected by a flexible communications network producing a very large aggregate computational power.

Current processors vary in their ability to rapidly execute different classes of programs -- we are specifically interested in high-speed execution of a very special class of image processing tasks. A general purpose chip will typically execute only a single multiply or add per instruction, and a single instruction can take from 1 to a few cycles. Utilizing 300,000 to 400,000 transistors organized as a very general purpose processor to repeatedly execute a few adds and a few multiplies in a highly structured fashion is wasteful. A chip of similar complexity which was specifically designed for such a function could execute many such adds and multiplies at the same time. (Special purpose image-processing devices are an area of great interest, and advanced designs are already being considered. "For example, it is possible to prepare a video sensor on the top layer, then an A/D converter, ALU, memory, and CPU in the lower layers to realize an intelligent image processor in a multilayered 3-D structure." [278 page 1705]). A variety of increasingly specialized chips are available which are progressively worse at executing "general" programs and progressively better at executing specialized programs. The fastest available high-speed specialized processor is the IMS A100 by Inmos, which has 32 multiplication and addition units on a single chip, costs $406, and executes 320 million "operations" (a 4x16-bit multiply and a 36-bit addition) each second [234]. Using the Inmos chip as a guide, we can estimate the cost of executing 1.3 x 10^24 "operations" (roughly equivalent to those performed by the A100) over 3 years. (Note that although there are 3.9 x 10^21 image points, we assume that this overhead is taken into account in the estimate of 1000 operations per voxel -- i.e., there are 333 operations per image point).
First, we compute the number of chips required as: chips required = (total operations) / ((operations/second) x (seconds in 3 years)) = 1.3 x 10^24/(320 x 10^6 x 9.5 x 10^7) = 4.3 x 10^7 chips. Multiplying this by the cost per chip gives 17 billion dollars. An additional factor of 2 takes into account various other costs (boards, support chips, etc.; a larger overhead multiplier does not seem appropriate, given the high cost of the chip itself). This gives an estimated cost of 34 billion dollars, and is the first cost estimate significantly above the one billion dollar range we are aiming for. Here, however, we can invoke the well known factor of 100 or more decrease per decade in the cost of computation [281, 282]. In one decade alone, the cost will drop to 340 million dollars -- which is within the desired range.

While this "pre-processing" of the data has a relatively straightforward cost, the actual computational costs involved in image analysis and recognition of the various neuronal elements is more difficult to assess -- we don't know what computations need to be performed. Despite this, we can make plausibility arguments concerning this based on (1) current work to date and (2) estimates of the computations performed by the human visual system. By either of these standards 10,000 computations per voxel is reasonable. This increases the cost computed previously (which assumed 1000 computations per voxel) by a factor of 10. This would mean the computational costs would be about 3.4 billion dollars in 10 years. A full 20 years from now this would be 34 million dollars.

5.11 Software

We can reasonably conclude that we'll have hardware capable of analyzing the neural structure of the human brain within 20 years -- but will we have the software? Fortunately, we can start work on the software today (and researchers are working in this area now). More extensive work in this area is clearly required. While the hardware capacity to undertake a large analysis is not yet available, essentially all the system design and software problems must be solved in even a small scale analysis -- and such an effort could begin at once. A small scale effort (such as analysis of the nervous system of a fruit-fly or a single region of cortex) is valuable in its own right -- and takes on a greater value when we consider that the lessons learned can be applied almost directly to a more ambitious effort. A "small scale" project should be started today, and should focus on a specific neural system -- either a sub-system of interest (retina, cortical column, etc.) or a small but entire nervous system (fruit fly, grasshopper, etc.). Such an effort would provide a sharp focus for the work of people from many different backgrounds -- neurology, biology, image analysis, electron microscopy, neurochemistry, computer science, etc.

Current work [46,151,141,113,115] gives good grounds for optimism about development of the needed image processing software, and the successful fully automated reconstruction by Hibbard et al. of a capillary bed [339] clearly shows that biological reconstructions can be done if sufficiently clear and high-contrast images can be produced. Success in image analysis tasks is generally found when sharply limited problems in specific domains are attacked. The problem posed here -- analysis of neural structures -- is such a problem. Depth perception is not involved, nor are variations in lighting or viewing conditions. Complete information on an entire volume is available, though with some noise and distortion. The types of objects typically seen in an EM micrograph are modest in number -- the various sub-cellular organelles number no more than 20 or 30. In short -- this is the kind of problem where success seems probable.

6. Conclusion

Successful work in elucidating the behavior of individual synapses [6, 8, 54, 213] has led to increased interest in networks of synapses [284, 148, 117]. Tests of complex theories of network function require significant advances in our knowledge of the actual connectivity of real networks. Today, we can analyze only small numbers of neurons by hand and must infer the topology of large networks by indirect evidence. By automating the analysis process we can extend our knowledge to networks of significant size using currently available techniques and hardware. If we use the technology that will be available in 10 to 20 years, if we increase the budget to about one billion dollars, and if we use specially designed special purpose hardware -- then we can determine the structure of an organ that has long been of the greatest interest to all humanity, the human brain.

We can and should begin work on automated analysis of a "small" neural system today. Not only will it improve our too sketchy knowledge of real neural networks, but the understanding (and the software) that such preliminary efforts provide will be directly applicable to the more ambitious projects that will inevitably follow. As shown here, success on a small project can be scaled up to success on much larger projects -- up to and including the human brain.

Appendix: A Short List of Key Assumptions

The feasibility and cost estimates for a complete analysis of the human brain depend on a number of assumptions, which are here presented in tabular form. Irwin Sobel suggested the use of STEM (Scanning Transmission Electron Microscopy) as a means of easing or eliminating requirements 6, 7, 8, 10 and 15 below. Further analysis of this option seems warranted. It should be re-emphasized that the purpose of the current analysis is to demonstrate technical feasibility and so encourage others to consider the problem -- if and when such a system is actually implemented it might well differ radically from the present proposal to take advantage of technologies not considered here.

1.) The resolution in three dimensions required is 100 angstroms, or .01 microns. This implies there are 1.3 x 10^21 resolvable elements in the human brain. The presence of smaller features (proteins) must be detected by the use of appropriate stains.

2.) Sections of the whole brain 1 micron thick can be made. The largest sections made to date are 12 by 16 millimeters [255 page 165] -- an increase by about a factor of 10 (to 14 by 18 centimeters) over current state-of-the-art is presumed to be possible. If this should not be feasible, handling costs will be increased but should still be acceptable.

3.) Electron microscopes capable of imaging 1 micron sections at a resolution of 100 angstroms are needed. Such microscopes exist today. Costs are tolerable -- perhaps $500,000 for one such microscope. 1000 microscopes are assumed for the analysis.

4.) The interior of the physical section can be computationally reconstructed using algorithms developed for CAT scanners. This will require that each physical section be viewed some 300 different times at 300 different tilt angles to obtain 100 angstrom resolution in three dimensions. The feasibility of this approach has already been demonstrated.

5.) The specimen must be able to withstand the large electron flux implied by assumption (4). This has been demonstrated in existing systems.

6.) The sections must be rotated rapidly through the field of view of the electron microscopes to allow imaging of the entire human brain in a reasonable time. This requires the design and construction of novel EM stages -- however, this appears conceptually straightforward.

7.) The section rotation required by (6) must proceed smoothly - - vibrational motion must be less than 100 angstroms (the limit of resolution) in 2.4 milliseconds (the time during which a single viewing field will be under examination). This is equivalent to 10 micrometers/2.4 seconds, or 4.2 micrometers/second. This appears achievable.

8.) The moving image of a section required by (6) must be electronically stabilized. This will require the design of an image stabilizing system which is novel in EM applications. The required image stabilization seems within the electronic state of the art.

9.) The field of view of the EM is assumed to be 10,000 by 10,000 pixels. This should be within the state of the art. Should it prove expensive to achieve, it would be possible to change the current proposal by assuming a frame size of 1,000 by 1,000 pixels -- this would increase the speed requirement mentioned in (7) above from 2.4 milliseconds/frame to 24 microseconds/frame. This would increase the frame rate from 420 frames/second to about 42,000 frames per second -- which seems achievable given current electronics.

10.) Assumption (9), along with the image-stabilization requirement of (8), implies that the electron lens must limit distortion (pin-cushion, barrel, etc.) to less than one part in 10,000. This is stringent, but appears to be within the state of the art. Again, should this prove expensive a smaller frame size could be adopted (see discussion in (9) above).

11.) The total time allotted for analysis is (arbitrarily) set at 3 years.

12.) There are a total of 3 x 1.3 x 10^21 image points that must be converted to digital form (three times larger than the number of voxels in the brain because some redundancy is required by the 3-D image reconstruction algorithm). Analog-to-digital conversion costs using current technology would be about $54,000,000, and should drop by at least a factor of 10 during the next 10 to 20 years.

13.) Each image element must be accurately measured to one part in 128 (7-bit accuracy). The selection of 7-bit accuracy is somewhat arbitrary.

14.) Assumption (13) implies not only that the analog-to-digital conversion step must be this accurate, but also implies a lower bound on the number of electrons that must be collected for each image element -- and hence a lower bound on the electron beam current. Each of the 1000 electron microscopes must have a beam current of .1 milliamperes (100 microamperes) effectively available at the specimen. This is within the current state of the art.

15.) CCD imaging chips can be used and will cost less than the associated analog-to-digital conversion chips.

16.) Total computational requirements are presumed to be 10,000 "operations" per voxel. While necessarily somewhat imprecise (the image analysis software has not yet been written and algorithmic design issues are unsettled) this appears a plausible and probably somewhat conservative estimate. An "operation" is probably a few 16 or 32 bit additions.

17.) The total computational cost using current technology is estimated at $340,000,000,000. This cost estimate assumes the custom design of components specifically optimized for this application. This is both the highest individual cost estimate and the estimate that will most reliably fall over the next two decades. In 20 years, this cost should be about $34,000,000.

18.) An optical analysis phase will almost certainly have to precede the EM high-resolution analysis. It is presumed that the overall costs of this optical phase will be significantly lower than the costs of the EM phase. A detailed analysis of this phase has not been presented. The optical analysis phase will be done at the limits of optical resolution -- .1 to .2 microns.

19.) Extensive use of optical staining techniques to recover biologically relevant information (distribution of neurotransmitters, receptors, channels, etc.) will almost certainly be required. The possible stains that might be used have only been touched on, and the problems inherent in simultaneous use of multiple staining techniques have not been considered. The successful development of appropriate stains will have a significant impact on the utility of the information generated.

20.) Software to analyze the EM and optical images and determine cell structure has not yet been written but is estimated to be "close" to the current state of the art. It is clear from extensive human success in such reconstructions that such software can be written. Additional software to integrate the data obtained from both optical and EM analysis will be required. While forecasts of future image analysis capabilities are notoriously error prone, the current research in this area suggests that optimism is both warranted and realistic.


It is the author's pleasant duty to acknowledge the many people who have provided encouragement, information, and help as this manuscript has taken its final form. They are: David Agard, Joe Capowski, Corey Goodman, Roger Jacobs, Tod Levitt, Vic Nalwa, Robert Schehr, Carla Shatz, Irwin Sobel, John Stevens and Brian Wandell.

The author would also like to thank the many people who so kindly gave a few minutes of their time to patiently answer questions and provide references.


[1] Samuel Ward, Nichol Thomson, John G. White and Sydney Brenner, "Electron microscopical reconstruction of the anterior sensory anatomy of the nematode Caenorhabditis elegans," J. Comp. Neur. 160, 313-338.

[2] John K. Stevens, Thomas L. Davis, Neil Friedman and Peter Sterling, "A systematic approach to reconstructing microcircuitry by electron microscopy of serial sections," Brain Research Reviews, 2 (1980) 265-293.

[3] J.D. Lindsey, M.H. Ellisman, "The Neuronal Endomembrane Systems," J. of Neuroscience Vol 5 No 12, pp. 3111-3144, December 1985.

[5] The Cold Spring Harbor Symposia on Quantitative Biology, volume 48, Molecular Neurobiology, 1983.

[6] Craig H. Bailey, Mary Chen, "Morphological basis of long-term habituation and sensitization in Aplysia," Science 220, 1983.04.01, 91-93

[7] Eric R. Kandel, "Cellular mechanisms of learning and the biological basis of individuality," page 817 in [8].

[8] Eric R. Kandel, James H. Schwartz, "Principles of Neural Science," 2nd edition, Elsevier 1985.

[9] Deborah M. Barnes, "Lessons from Snails and other models," Science 231, 86.03.14, 1246-1249

[10] Pasko Rakic, Larry J. Stensas, Edward P. Sayre, Richard L. Sidman "Computer-aided three dimensional reconstruction and quantitative analysis of cells from serial electron microscopic montages of foetal monkey brain," Nature 250, July 5, 1974, 31.

[11] Cyrus Levinthal, Randle Ware "Three dimensional reconstruction from serial sections," Nature, 236, Mar. 31, 1972, 207-210

[12] J.J. Capowski, "An automatic neuron reconstruction system," Journal of Neuroscience Methods, 8, 1983, 353-364

[13] H. Mannen, "Three-dimensional reconstruction of individual neurons in higher mammals," International review of cytology, supplement 7, 329-372

[14] Randle W. Ware, Vincent LoPresti, "Three-dimensional reconstruction from serial sections," International Review of Cytology 40, 1975, 325-440

[15] Barbara A. McGuire, John K. Stevens, Peter Sterling, "Microcircuitry of bipolar cells in cat retina," The Journal of Neuroscience 4, 1984.12, 2920-2938

[16] Barbara A. McGuire, Jean-Pierre Hornung, Charles D. Gilbert, Torsten N. Wiesel, "Patterns of synaptic input to layer 4 of cat striate cortex," The Journal of Neuroscience 4, 1984.12, 3021-3033

[17] I. Sobel, C. Levinthal, E.R. Macagno, "Special techniques for the automatic computer reconstruction of neuronal structures," Annual Review of Biophysics and Bioengineering 9, 1980, 347-362

[18] Cameron H. Street, R. Ranney Mize, "A simple microcomputer-based three-dimensional serial section reconstruction system (MICROS)," Journal of Neuroscience Methods 7, 1983, 359-375

[19] E.R. Macagno, C. Levinthal, I. Sobel, "Three-dimensional computer reconstruction of neurons and neuronal assemblies," Ann. Rev. Biophys. Bioeng 8, 1979, 323-351

[20] Marvin Minsky, "Robotics," Anchor Press/Doubleday 1985

[24] Charles H. Bennett, Rolf Landauer, "The fundamental physical limits of computation," Scientific American 253, July 1985, 48-56

[25] Donald E. Olins, Ada L. Olins, Henri A. Levy, Richard C. Durfee, Stephen M. Margle, Ed P. Tinnel, S. David Dover, "Electron microscope tomography: transcription in three dimensions," Science 220, April 29, 1983, 498-500

[26] Juan A. Subirana, Sebastian Munoz-Guerra, Joan Aymami, Michael Radermacher, Joachim Frank, "The layered organization of nucleosomes in 30 nm chromatin fibers," Chromosoma 91, 1985, 377-390

[27] Bertil Hille, "Ionic Channels of Excitable Membranes," Sinauer 1984.

[28] Stephen W. Kuffler, John G. Nichols, A. Robert Martin, "From Neuron to Brain," Second Edition Sinauer 1984.

[29] J.G. Sutcliffe, R.J. Milner, F.E. Bloom, "Cellular localization and function of the proteins encoded by brain-specific mRNA's," in [5] 477-484.

[30] Sharon E. Sasaki-Sherrington, J. Roger Jacobs, John K. Stevens, "Intracellular control of axial shape in non-uniform neurites: a serial electron microscopic analysis of organelles and microtubules in AI and AII retinal amacrine neurites," Journal of Cell Biology 98, 1984.04, 1279-1290

[31] John C. Mazziotta, Betty L. Hamilton, "Three-dimensional computer reconstruction and display of neuronal structure," Computers in Biology and Medicine 7, 1977, 265-279

[32] David A. Agard, "Optical sectioning microscopy: cellular architecture in three dimensions," Annual Review of Biophysics and Bioengineering 13, 1984, 191-219

[33] Alfredo A. Sadun, Judith D. Schaechter, "Tracing axons in the human brain: a method utilizing light and TEM techniques," Journal of Electron Microscopy Technique 2, 1985, 175-186.

[34] H.L. Atwood, J.K. Stevens, L. Marin, "Axoaxonal synapse location and consequences for presynaptic inhibition in crustacean motor axon terminals," The Journal of Comparative Neurology 225, 1984, 64-74

[35] R. Ranney Mize, "The Microcomputer in Cell and Neurobiology Research," Elsevier, 1985

[36] O. Hayes Griffith, "Photoelectron imaging in cell biology," Annual Review of Biophysics and Biophysical Chemistry 14, 1985, 113-130

[37] John W. Wilson, Scott Ticer, "Superchips: the new frontier," Business Week, June 10, 1985 page 82-85

[38] Michael Rand, "Molecular Electronics Research Growing Despite Controversy," ElectronicsWeek, May 6 1985, page 36-37.

[39] "Biochip Research gets a Sugar Daddy," Business Week, April 15 1985, page 150C

[40] F. E. Yates, "Report on Conference on Chemically-Based Computer Designs," Crump Institute for Medical Engineering Report CIME TR/84/1, 1984 University of California, Los Angeles, CA 90024.

[41] Stanley B. Kater, Charles Nicholson, "Intracellular Staining in Neurobiology," Springer-Verlag, New York 1973

[42] D.R. Reddy, W.J. Davis, R.B. Ohlander, D.J. Bihary, "Computer analysis of neuronal structure," Chapter 16 of [41] 227-253.

[43] Robert D. Lindsay, ed. "Computer analysis of neuronal structures," Plenum Press, 1977

[44] Paul D. Coleman, Catherine F. Garvey, John H. Young, William Simon "Semiautomatic Tracking of Neuronal Process," Chapter 5 of [43] 91-109.

[45] Irwin Sobel, private communication, December, 1986

[46] Robert Schehr, informal talk near Stanford in August, 1986

[47] Pierre Favard, Nina Carasso, "The preparation and observation of thick biological sections in the high voltage electron microscope," Journal of microscopy, Jan. 1973 Vol. 97, page 59-81.

[48] Conly L. Rieder, Gerald Rupp, Samuel S. Bowser, "Electron microscopy of semithick sections: advantages for biomedical research," Journal of electron microscopy technique, Feb. 1985 Vol. 2, No. 11, page 11-28.

[49] J.J. Capowski, M.J. Sedivec, "Accurate computer reconstruction and graphics display of complex neurons utilizing state-of-the-art interactive techniques," Computers and Biomedical Research 14, 1981, 518-532

[50] M.J. Shantz, G.D. McCann, "Computational morphology: three-dimensional computer graphics for electron microscopy," IEEE Trans. on Biomedical Eng. (25, 1), 1978.01 99-103

[51] T. Joe Willey, Robert L. Schultz, Allan H. Gott, "Computer graphics in three dimensions for perspective reconstruction of brain ultrastructure," IEEE Trans. on Biomedical Eng. 1973.07, 288-291

[52] Steven R. Reuman, J.J. Capowski, "Automated neuron tracing using the Marr-Hildreth zero-crossing technique," Computers and biomedical research 17, 1984, 93-115

[53] Ellen M. Johnson, J.J. Capowski, "A system for the three-dimensional reconstruction of biological structures," Computers and biomedical research 16, 1983, 79-87

[54] Eric R. Kandel, James H. Schwartz, "Molecular biology of learning: modulation of transmitter release," Science, Oct. 29, 1982 Vol. 218, page 433-443.

[55] Paul D. Coleman, Dorothy G. Flood, Mark C. Whitehead, Robert C. Emerson, "Spatial sampling by dendritic trees in visual cortex," Brain Research, 1981 Vol. 214, page 1-21.

[64] Barbara A. McGuire, John K. Stevens, Peter Sterling, "Microcircuitry of bipolar cells in cat retina," The Journal of Neuroscience, Dec. 1984 Vol. 4, No. 12, page 2920-2938.

[101] Research Resources Information Center, NIH, "CARTOS: modeling nerves in three dimensions," May 1981.

[105] Otis Port, "Computers that come awfully close to thinking," Business Week, Jun. 2, 1986 page 92-96.

[111] John K. Stevens, Judy Trogadis, "Computer-assisted reconstruction from serial electron micrographs: a tool for the systematic study of neuronal form and function," Advances in Cellular Neurobiology, 1984 Vol. 5, page 341-369.

[113] Lewis W. Tucker, "Computer vision using quadtree refinement," Ph.D. thesis, May 1984.

[114] Lewis W. Tucker, "Model-guided segmentation using quadtrees," Seventh international conference on pattern recognition, Montreal, Can., 1984.

[115] Lewis W. Tucker, Hector J. Cornejo, Donald J. Reis, "Image understanding and the cell world model," Quantitative neuroanatomy in transmitter research, 1984.

[117] John J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, Apr. 1982 Vol. 79, page 2554-2558.

[124] M. J. Sedivec, J. J. Capowski, L. M. Mendell, "Morphology of HRP-injected spinocervical tract neurons: effect of dorsal rhizotomy," the Journal of Neuroscience, Mar. 1986 Vol. 6, No. 3, page 661-672.

[133] James B. Aguayo, Stephen J. Blackband, Joseph Schoeniger, Mark A. Mattingly, Markus Hintermann, "Nuclear magnetic resonance imaging of a single cell," Nature, Jul. 22, 1986 Vol. 322, page 190-191.

[139] Ellen M. Johnson, J.J. Capowski, "Principles of reconstruction and three-dimensional display of serial sections using a computer," The microcomputer in cell and neurobiology research (Mize), 1985.

[141] Peter Selfridge, "Locating neuron boundaries in electron micrograph images using 'primal sketch' primitives," Computer Vision, Graphics, and Image Processing, 1986 Vol. 34, page 156-165.

[148] John J. Hopfield, David W. Tank, "Computing with neural circuits: a model," Science, Aug. 8, 1986 Vol. 233, page 625-633.

[149] Noel Kropf, Irwin Sobel, Cyrus Levinthal, "Serial section reconstruction using CARTOS," The microcomputer in cell and neurobiology research, 1985 page 266-292.

[151] Tod S. Levitt, Corey Goodman, Daniel J. Edelson and John W. Dye, "Feasibility of a next generation computer environment for assisting 3D reconstruction and analysis of neural anatomy," Feb. 11, 1987. Available from Advanced Decision Systems, 1500 Plymouth, Mt. View, CA 94043-1230. Phone: 415-960-7300. Prepared under NSF contract #ISI-8660489.

[152] Jack Price, David Turner, Constance Cepko, "Lineage analysis in the vertebrate nervous system by retrovirus-mediated gene transfer," Oct. 1986. (Pre-print, Harvard)

[163] Y. Sugiura, C.L. Lee, E.R. Perl, "Central projections of identified, unmyelinated (C) afferent fibers innervating mammalian skin," Science, Oct. 17, 1986 Vol. 234.

[164] Peter G. Selfridge, "Progress towards automatic 3D neuron reconstruction from serial sections," pre-print.

[174] Forrest L. Carter, "The chemistry in future molecular computers," Computer applications in chemistry, 1983 page 225-262.

[175] Forrest L. Carter, "The molecular device computer: point of departure for large scale cellular automata," Physica 10D, 1984 page 175-194.

[180] David W. Ow, Keith V. Wood, Marlene DeLuca, Jeffrey R. De Wet, Donald R. Helinski, Stephen H. Howell, "Transient and stable expression of the firefly luciferase gene in plant cells and transgenic plants," Science, Nov. 14, 1986 Vol. 234, page 856-859.

[184] Albert Macovski, "Medical Imaging," published by Prentice Hall, 1983.

[185] "Special issue on computerized tomography," Proceedings of the IEEE, (71, 3) pages 289-448, Mar. 1983.

[187] Andrew S. Belmont, John W. Sedat, David A. Agard, "A three- dimensional approach to mitotic chromosome structure: evidence for a complex hierarchical organization," Nov. 1986.

[188] A. Klug, R. A. Crowther, "Three-dimensional image reconstruction from the viewpoint of information theory," Nature, 1972 Vol. 238, page 435-440.

[192] Shimon Ullman, "Artificial intelligence and the neurosciences," Trends in Neurosciences, Oct. 1986 Vol. 9, No. 10, page 530-533.

[193] Richard H. Masland, "The functional architecture of the retina," Scientific American, Dec. 1986 Vol. 255, No. 6, page 102-111.

[194] Gerd Binnig, Heinrich Rohrer, "The scanning tunneling microscope," Scientific American, Aug. 1985 Vol. 253, No. 2, page 50-56.

[195] David Marr, "Vision," published by Freeman, 1982.

[196] TRW, "MARK III Artificial Neural System Processor," Rancho Carmel AI Center, 1986.

[197] John K. Stevens, "Reverse engineering the brain," Byte, Apr. 1985 page 286-299.

[208] Dana H. Ballard, Christopher M. Brown, "Computer vision," published by Prentice Hall, 1982.

[213] P.G. Montarolo, P. Goelet, V.F. Castellucci, J. Morgan, E.R. Kandel, S. Schacher, "A critical period for macromolecular synthesis in long-term heterosynaptic facilitation in Aplysia," Science, Dec. 5, 1986 Vol. 234, page 1249-1254.

[219] Illes P. Csorba, "Image Tubes," published by Sams, 1985.

[220] W. Scott Young, III, "In-situ hybridization histochemistry and the study of the nervous system," Trends in neurosciences, Dec. 1986 Vol. 9, No. 12, page 549.

[221] "Biotechnology products & instruments 1986," Science, May 30, 1986 Vol. 232, part II.

[223] "The ISSCC's menu ranges from 4-Mb DRAMs to GaAs memories," Electronics, Nov. 27, 1986 page 84.

[234] "Data-flow IC samples at 320-million/s rate," Electronics, Nov. 13, 1986 page 96.

[244] P.T.E. Roberts, J.N. Chapman, A.M. MacLeod, "A CCD-based image recording system for the CTEM," Ultramicroscopy, 1982 Vol. 8, page 385-396.

[250] "Recent IC announcements," Computer, Jan. 1987 page 126.

[251] William A. Houle, Hugh M. Brown, O. Hayes Griffith, "Photoelectric properties and detection of the aromatic carcinogens benzo[a]pyrene and dimethylbenzanthracene," Proc. Natl. Acad. Sci. USA, Sep. 1979 Vol. 76, No. 9, page 4180-4184.

[254] Arthur L. Robinson, "High spatial resolution ion microprobe," Science, Sep. 14, 1984 Vol. 225, page 1139.

[255] M. A. Hayat, "Basic techniques for transmission electron microscopy," published by Academic Press, 1986.

[259] M.L. Dierker, "An algorithm for the alignment of serial sections," Computer Technology in Neuroscience, 1976.

[262] Audrey M. Glauert, "Recent advances of high voltage electron microscopy in biology," Journal of Microscopy, Sep. 1979 Vol. 117, No. 1, page 93-101.

[263] Audrey M. Glauert, "The high voltage electron microscope in biology," The Journal of Cell Biology, 1974 Vol. 63, page 717-748.

[264] Mircea Fotino, "Experimental studies on resolution in high-voltage transmission electron microscopy," Electron Microscopy in Biology, Vol I, 1981 page 89-138.

[273] F.L. Carter, "Prospects for computation at the molecular-size level," Digest of Papers, Spring Compcon, Feb. 1984 page 110-114.

[274] Richard P. Feynman, "Quantum mechanical computers," Optics News, Feb. 1985 Vol. 11, page 11-20.

[275] Otis Port, John W. Wilson, "They're here: computers that `think'," Business Week, Jan. 26, 1987 page 94-95.

[277] W.C. Holton, R.K. Cavin, III, "A perspective on CMOS technology trends," Proceedings of the IEEE, Dec. 1986 Vol. 74, No. 12, page 1646-1668.

[278] Y. Akasaka, "Three-dimensional IC trends," Proceedings of the IEEE, Dec. 1986 Vol. 74, No. 12, page 1703-1714.

[279] Richard O. Carlson, Constantine A. Neugebauer, "Future trends in wafer scale integration," Proceedings of the IEEE, Dec. 1986 Vol. 74, No. 12, page 1741-1752.

[280] The 1987 IEEE International Solid-State Circuits Conference Digest of Technical Papers, Vol. 30, IEEE Cat. No. 87CH2367-1, Feb. 25-27, New York, NY.

[281] G.J. Myers, A.Y.C. Yu, D.L. House, "Microprocessor technology trends," Proceedings of the IEEE, Dec. 1986 Vol. 74, No. 12, page 1605-1622.

[282] S. Asai, "Semiconductor memory trends," Proceedings of the IEEE, Dec. 1986 Vol. 74, No. 12, page 1623-1635.

[283] K. Eric Drexler, "Engines of creation," Anchor Press/Doubleday, 1986.

[284] Gary Lynch, "Synapses, circuits, and the beginnings of memory," published by MIT Press, 1986.

[285] Rustum Roy, "Diamonds at low pressure," Nature, Jan. 1, 1987 Vol. 325, page 17-18.

[294] Howard Sochurek, Peter Miller, "Medicine's new vision," National Geographic, Jan. 1987 Vol. 171, No. 1, page 2-41.

[306] Keir Pearson, "The control of walking," Scientific American, Dec. 1976 page 72-86.

[307] Gordon M. Shepherd, "Microcircuits in the nervous system," Scientific American, Feb. 1978 page 93-103.

[325] L.G. Briarty, P.H. Jenkins, "GRIDSS: an integrated suite of microcomputer programs for three-dimensional graphical reconstruction from serial sections," Journal of Microscopy, Apr. 1984 Vol. 134, No. 1, page 121-124.

[326] J. Roger Jacobs, John K. Stevens, "Experimental modification of PC12 neurite shape with the microtubule-depolymerizing drug nocodazole: a serial electron microscopic study of neurite shape control," The Journal of Cell Biology, Sep. 1986 Vol. 103, page 907-915.

[327] J. Roger Jacobs, John K. Stevens, "Changes in the organization of the neuritic cytoskeleton during nerve growth factor-activated differentiation of PC12 cells: a serial electron microscopic study of the development and control of neurite shape," The Journal of Cell Biology, Sep. 1986 Vol. 103, page 895-906.

[328] J. Roger Jacobs, "The ontogeny of structure and organization of PC12 neurites," 1985, Ph.D. thesis at University of Toronto, 79-104.

[329] Solomon H. Snyder, "Drug and neurotransmitter receptors in the brain," Science, Apr. 6, 1984 Vol. 224, page 22-31.

[330] Tod Levitt, unpublished work.

[331] Monty Denneau, IBM Yorktown, personal communication.

[337] Ze-nian Li, Leonard Uhr, "A pyramidal approach for the recognition of neurons using key features," Pattern Recognition, 1986 Vol. 19, No. 1, page 55-62.

[338] R.L. Gardner, P.A. Lawrence, "Single cell marking and cell lineage in animal development," published by The Royal Society, 1986.

[339] Lyndon S. Hibbard, Barbara J. Dovey-Hartman, Robert B. Page, "Three-dimensional reconstruction of median eminence microvascular modules," Comput. Biol. Med., 1986 Vol. 16, No. 6, page 411.

[340] Stephen W. Kuffler, John G. Nicholls, A. Robert Martin, "From Neuron To Brain," 2nd edition, published by Sinauer, 1984.

[341] "Information processing in the retina, special issue," Trends in NeuroSciences, May 1986 Vol. 9, No. 5.

[342] D.H. Hubel, T.N. Wiesel, "Brain mechanisms of vision," Sci. Am., Mar. 1979 Vol. 241, No. 3, page 150-162.

[343] D.H. Hubel, T.N. Wiesel, "Ferrier Lecture: Functional architecture of macaque monkey visual cortex," Proc. R. Soc. Lond [Biol.], 1977 Vol. 198, page 1-59.

[344] Shimon Ullman, "Artificial intelligence and the brain: computational studies of the visual system," Ann. Rev. Neurosci., 1986 Vol. 9, page 1-26.

[345] H.L. Atwood, G.A. Lnenicka, "Structure and function in synapses: emerging correlations," Trends in NeuroSciences, Jun. 1986 Vol. 9, No. 6, page 248-250.

[346] Francis Crick, C. Asanuma, "Certain aspects of the anatomy and physiology of the cerebral cortex," Chapter 20 from Parallel Distributed Processing by McClelland and Rumelhart, published by MIT Press, 1986.

[347] W. Daniel Hillis, "The Connection Machine," published by MIT Press, 1986.

[348] Emily T. Smith, Jo Ellen Davis, "Superconductors," Business Week, page 94-100.

[349] J.E. Sulston, E. Schierenberg, J.G. White, J.N. Thomson, "The embryonic cell lineage of the nematode Caenorhabditis elegans," Developmental Biology, 1983 Vol. 100, page 64-119.

[350] J.G. White, "Computer-aided reconstruction of the nervous system of C. elegans," Ph.D. thesis, University of Cambridge, 1974.

[351] Samuel Ward, Nichol Thomson, John G. White, Sydney Brenner, "Electron microscopical reconstruction of the anterior sensory anatomy of the nematode Caenorhabditis elegans," J. Comp. Neur., 1975 Vol. 160, page 313-338.

[352] O.H. Griffith, G.H. Lesch, G.F. Rempfer, G.B. Birrell, C.A. Burke, D.W. Schlosser, M.H. Mallon, G.B. Lee, R.G. Stafford, P.C. Jost, T.B. Marriott, "Photoelectron microscopy: a new approach to mapping organic and biological surfaces," Proc. Nat. Acad. Sci. USA, Mar. 1972 Vol. 69, No. 3, page 561-565.

[353] James F. Hainfeld, "A small gold-conjugated antibody label: improved resolution for electron microscopy," Science, Apr. 24, 1987 Vol. 236, page 450-453.

[354] J.A. Reeds, L.A. Shepp, "Limited angle reconstruction in tomography via squashing," Medical Imaging, June 1987, Vol. 6 No. 2, 89-97.

[355] "Recent IC announcements," Computer, June 1987 page 110.

[357] B.C. Cole, "Here comes the billion transistor IC," Electronics, Apr. 2 1987, Vol. 60 No. 7, 81-85.

[358] Barry P. Medoff, "Image Reconstruction from limited data: theory and applications in computerized tomography," chapter 9 of Image Recovery: Theory and Application.

[359] Philip Elmer-DeWitt, Thomas McCarrol, Madeleine Nash, and Charles Pelton, "Fast and Smart: designers race to build supercomputers of the future," in Time, March 28 1988, page 54-58.

[360] Monty M. Denneau, Peter H. Hochschild, and Gideon Shichman, "The Switching Network of the TF-1 Parallel Supercomputer," in Supercomputing, Winter 1988 pages 7-10.

[361] Talk given by Gideon Shichman at Xerox PARC in July 1988.

[362] John C. Angus and Cliff C. Hayman, "Low-Pressure, Metastable Growth of Diamond and 'Diamondlike' Phases," Science, Vol. 241, Aug. 19, 1988 pages 913-921.

[363] B. Hadimioglu and J. S. Foster, "Advances in superfluid helium acoustic microscopy," J. Appl. Phys. Vol 56 No. 7, Oct. 1, 1984, pages 1976-1980.

[364] Arthur L. Robinson, "Is Diamond the new wonder material?" Science, Vol 234, Nov. 28, 1986, pages 1074-1076.

[365] G. Binnig and C. F. Quate, "Atomic Force Microscope," in Physical Review Letters, Vol. 56 No. 9, Mar. 3 1986, pages 930-933.

[366] CREO products, Inc. 110 Discovery Park, 3700 Gilmore Way, Burnaby, B.C. Canada V5G 4M1. 604-437-6879.
