Complexity Digest 1999.09

  Archive: http://comdig.unam.mx
  "I think the next century will be the century of complexity." Stephen Hawking, 2000.

  1. Earth System Analysis & 2nd Copernican Revolution, Nature Millenn. Suppl.
  2. From Molecular To Modular Cell Biology, Nature Millennium Suppl.
  3. Genetics and general cognitive ability, Nature Millennium Suppl.
  4. The Future Of Evolutionary Developmental Biology, Nature Millennium Suppl.
  5. Computing 2010: From Black Holes To Biology, Nature Millennium Suppl.
  6. The neurobiology of cognition, Nature Millennium Suppl.
  7. New Theories Help Explain Mysteries of Autism, New York Times
  8. Transitions and Resonances in the Behavior of Heart Cells, Chaos
  9. Why Do Microorganisms Survive Deep Underground?, Science Daily, INEEL
  10. Adaptive Agents and Collective Intelligence on the Internet?, arXiv
  11. Cortical Mechanisms of Human Imitation, Science
  12. What Kills Motor Neurons in Locked-In Patients?, Science

  1. Earth System Analysis & 2nd Copernican Revolution, Nature Millenn. Suppl.

    The Copernican revolution in the middle of the outgoing millennium put people's world, the Earth, into its proper context within our solar system and the cosmos. Today, at the end of the second millennium, our "world" goes far beyond planet Earth: the Apollo missions to the Moon thirty years ago gave us an impressive visual image of our planet as a limited space and environment. The last few decades have brought an enormous increase in Earth-observing sensors and in computing power for simulation, so that we can begin to get a view of the complex and intricate workings of the interwoven subsystems of the terrestrial environment and ecosystem. H. J. Schellnhuber, Director of the Potsdam Institute for Climate Impact Research (PIK), compares this emerging view of the Earth system with that of an organism, "Gaia's body", and suggests that its consequences will amount to a second Copernican revolution (see figure).

    He sees three different principles by which one can attempt to get a holistic picture of the Earth system. The first is the "bird's eye" principle of remote sensors, which are already providing a torrent of geo-data. The second is "digital mimicry", or simulation, where different "what-if" scenarios can be tested without having to experiment with the real system, of which we have only one sample. The third principle Schellnhuber calls "Lilliputian", after the land of little people that Gulliver encountered on his travels: ecological laboratories like the Biosphere 2 project can build an Earth model in hardware and teach researchers important lessons about what real living systems might do that cannot, a priori, be captured in a digital simulation (see the Jurassic Park syndrome).

    Understanding the Earth system brings up the challenge of a geo-cybernetic task: if we can foresee the consequences of greenhouse-gas emissions under a 'business-as-usual' policy, what could be done to induce a transition to a more desirable future? With his scientific background, Schellnhuber cannot resist writing down an 'Earth equation' in which the Earth variable E depends on a 'natural, N' and a 'human, H' subsystem. Interestingly, he then continues by splitting the human subsystem H into a physical 'anthroposphere, A' and a 'spiritual (?)' subsystem, an emerging 'global subject, S'. The global subject S manifests itself in global actions, such as international protocols for climate protection, that increase the 'evolutionary fitness' of the Earth system and prevent it from terminating in a Martian or Venusian regime.
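    Schematically, and using the labels from the summary above rather than necessarily the paper's exact notation, the decomposition reads

        E = (N, H),    H = (A, S)

    where N collects the natural subsystems (atmosphere, hydrosphere, biosphere, ...) and H is the human factor, itself split into the physical anthroposphere A and the emerging global subject S.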

    By defining the global subject as a subsystem of the human subsystem, one can expect that a "sustainable" solution in which, say, ants become the dominant species on this planet will be avoided. On the other hand, it seems a little risky to entrust the global subject to a species that has been around for barely a few million years, and after that short time has only a single human species left standing.

    The author ends by presenting three geo-strategic approaches to a sustainable future. The first is optimization, through a more organic distribution of the global tasks of food and energy production. The second is a stabilization strategy to fix the ozone layer and atmospheric climate problems through iron fertilization and the injection of "designer" greenhouse gases to prevent the next ice age. It is clear, of course, that the complexity of the problem is tremendous because of the interconnectedness of the subsystems. For instance, it is conceivable that the storms and the mild, rainy winters that Europe has been experiencing will contribute to the shutdown of the North Atlantic conveyor belt and thereby, in fact, facilitate a new European ice age.

    The third geo-strategy is that of pessimization: the creation of minimal safety standards for operating the Earth system, geo-engineering only as much as is necessary to establish "'guardrails' for responsible planetary management".

    'Earth system' analysis and the second Copernican revolution, H. J. Schellnhuber, Millennium Supplement, Nature 402: 6761 C19 (1999)


  2. From Molecular To Modular Cell Biology, Nature Millennium Suppl.

    Traditional, reductionist 20th-century biology tries to explain biological phenomena (mysteries) by finding molecules with special (miracle) functions. According to Hartwell et al., this will change dramatically in the biology of the third millennium: most biological functions arise from the interactions among a large number of different(!) components. The fact that the components (subsystems) are themselves quite complex, and not all identical, makes biology much harder to understand than, say, particle physics, where all matter is built up from a few different types of subatomic particles (protons, electrons, etc.). But we already know that simple, identical rules, such as those of cellular automata, can produce structures of amazing complexity. How much more difficult to analyze and understand must a system with non-uniform and complex subsystems be. One other feature distinguishes today's biological systems from simple physical systems: they have evolved for a couple of billion years, and only those biological systems that took part in evolutionary progress and selection are around today. For the analysis, evolutionary fitness can be formulated as a purpose of the biological system: biological systems "want" to reproduce.
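    The cellular-automaton point is easy to make concrete. The following minimal sketch (ours, not from the paper) runs elementary rule 30: identical binary cells, one identical local update rule, and yet the pattern that unfolds is famously intricate.

      # Elementary cellular automaton, rule 30: identical cells and one fixed
      # local rule nevertheless generate a highly complex pattern.
      RULE = 30
      WIDTH, STEPS = 64, 32

      cells = [0] * WIDTH
      cells[WIDTH // 2] = 1  # a single live cell in the middle

      for _ in range(STEPS):
          print("".join("#" if c else "." for c in cells))
          # New state of cell i: look up the (left, center, right) neighborhood
          # as a 3-bit number in the rule's binary expansion.
          cells = [
              (RULE >> (4 * cells[(i - 1) % WIDTH]
                        + 2 * cells[i]
                        + cells[(i + 1) % WIDTH])) & 1
              for i in range(WIDTH)
          ]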

    Hartwell et al. propose to study biological systems not only at the genetic, molecular level but also within the framework of "modules" that enhance evolutionary fitness. The authors give a number of examples where an understanding of the function of modules leads to predictions about the properties of molecules such as DNA.

    In the analysis of modular functions, complex computer simulations ("in silico" reconstructions) have become an important heuristic tool for biological research. Thus the biology of the next millennium will benefit greatly from contributions from "synthetic sciences" like computer science and engineering, but also from insights gained in abstract research on complex systems: bifurcations and non-equilibrium phase transitions appear to be ubiquitous in biological systems. Another property that follows naturally from the framework of modules is complexity as a consequence of a sequence of random historical events: once a module is in place, mutations are unlikely to change it; rather, they lead to modifications in the arrangement of modules that might not be optimal in a global sense. An example of such a recent historical accident in human design is the QWERTY keyboard, which the authors view as a "living fossil".

    They conclude with a discussion of some of the key questions for modular biology: "A major challenge for science in the twenty-first century is to develop an integrated understanding of how cells and organisms survive and reproduce. Cell biology is in transition from a science that was preoccupied with assigning functions to individual proteins or genes, to one that is now trying to cope with the complex sets of molecules that interact to form functional modules. There are several questions that we want to answer. What are the parts of modules, how does their interaction produce a given function, and which of their properties are robust and which are evolvable? How are modules constructed during evolution and how can their functions change under selective pressure? How do connections between modules change during evolution to alter the behavior of cells and organisms?"

    From Molecular To Modular Cell Biology, Leland H. Hartwell, John J. Hopfield, Stanislas Leibler & Andrew W. Murray, Millennium Supplement, Nature 402: 6761 C47 (1999)



  3. Genetics and general cognitive ability, Nature Millennium Suppl.

    One of the big scientific challenges for the next millennium will be what E. O. Wilson once called "consilience", or the 'unity of knowledge', in which scientific arguments between different sciences, and even between political and religious organizations, will be resolved on a higher level. The question of "nature vs. nurture" in human development has played a central role in this discussion. Specifically, the issue of general intelligence, how it is defined, and to what degree it has genetic origins has sparked public discussion, especially after the publication of 'The Bell Curve'.

    Plomin presents arguments for a genetic contribution to behavioral disorders ranging from autism to schizophrenia to the very common reading disability. Strong evidence for genetic contributions has been established through a large number of statistical correlations discovered under diverse genetic (siblings and twins) and environmental (upbringing) conditions. But instead of looking for individual genes that are responsible for these disorders, it seems more realistic (and more complex) to assume that there is a whole set of genes whose interactions can produce a given disorder. This view is known as the quantitative trait locus (QTL) perspective and has profoundly different implications than the case where a single gene is the sole culprit for a disorder. For instance, newborn babies in the US are already routinely screened for phenylketonuria (PKU), a disorder based on a defect in a single gene that can lead to mental retardation. Early diagnosis and nutritional adjustment can greatly ameliorate the effects of the disease.
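    The difference between the single-gene and the QTL picture is easy to illustrate with a toy "liability threshold" model (our sketch, not Plomin's; all numbers are hypothetical): many loci each nudge a trait a little, and a disorder appears only when the summed liability crosses a threshold.

      # Toy QTL model: many genes of small effect plus environmental noise
      # sum into a liability; a disorder appears only above a threshold.
      # All parameter values are hypothetical, for illustration only.
      import random

      N_LOCI = 10        # number of contributing loci (hypothetical)
      EFFECT = 0.5       # effect of each risk allele (hypothetical)
      FREQ = 0.3         # risk-allele frequency (hypothetical)
      THRESHOLD = 4.0    # liability threshold for diagnosis (hypothetical)

      def liability():
          genetic = sum(EFFECT for _ in range(N_LOCI) if random.random() < FREQ)
          environment = random.gauss(0.0, 1.0)
          return genetic + environment

      affected = sum(liability() > THRESHOLD for _ in range(100_000))
      print(f"prevalence: {affected / 100_000:.1%}")

    Unlike the PKU case, no single locus here is necessary or sufficient; removing any one gene merely shifts the risk slightly, which is why QTL-based disorders call for statistical rather than single-mutation diagnostics.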

    Functional genomics will, according to Plomin, be able to define and predict traits as complex as general cognitive ability, "g", a variant of the traditional intelligence quotient (IQ). To the science-fiction scenario of classifying babies at birth into alpha, beta, and gamma humans, he responds with a prediction of early intervention for those children at risk of mental retardation, similar to the PKU example. He also expects a beneficial impact of functional genomics when it helps parents recognize the genetic limitations (and talents) of their children instead of developing unrealistic expectations based on family history, social status, etc.

    Genetics And General Cognitive Ability, Robert Plomin, Millennium Supplement, Nature 402: 6761 C25 (1999)



  4. The Future Of Evolutionary Developmental Biology, Nature Millennium Suppl.

    "Combining fields as diverse as comparative embryology, palaeontology, molecular phylogenetics and genome analysis, the new discipline of evolutionary developmental biology aims at explaining how developmental processes and mechanisms become modified during evolution, and how these modifications produce changes in animal morphology and body plans. In the next century this should give us far greater mechanistic insight into how evolution has produced the vast diversity of living organisms, past and present."
    The Future Of Evolutionary Developmental Biology, Peter W. H. Holland, Millennium Supplement, Nature 402: 6761 C41 (1999)



  5. Computing 2010: From Black Holes To Biology, Nature Millennium Suppl.

    Futurologists from half a century ago produced a major howler: they predicted flying cars and rolling sidewalks everywhere but completely missed how computers would change our lives. Today we anticipate a significant increase in the role of computers over the next decades, but perhaps their influence will be replaced by something completely different. That may be why Butler only projects out to the year 2010. Like most science-fiction movies, he predicts a strange blend of today's habits with beefed-up technology: are we really going to use ten-gigabyte lines to make conference calls where we can "shake hands" with avatars of our collaborators?

    It is more likely that increased computing power will be used to make simulations more realistic. Everyone who has worked in the field knows that one can always saturate the fastest machines simply by cranking up the resolution of a simulation (which is why typical simulation jobs on supercomputers have always taken about one hour, and always will).
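    The arithmetic behind that observation is simple. In a three-dimensional, time-dependent simulation, halving the grid spacing yields eight times as many cells and (through a CFL-type stability constraint) roughly twice as many time steps, a sixteen-fold increase in work; a back-of-the-envelope sketch:

      # Cost of refining a 3D time-dependent simulation: each halving of the
      # grid spacing gives 2**3 more cells and ~2x more time steps, i.e. ~16x
      # the work. Base resolution figures are arbitrary.
      base_cells, base_steps = 100**3, 1_000

      for doublings in range(4):
          cells = base_cells * 8**doublings
          steps = base_steps * 2**doublings
          print(f"{doublings} resolution doublings: {cells * steps:.2e} cell-updates")

    One or two refinements therefore swallow any conceivable hardware speed-up, and the one-hour supercomputer job survives intact.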

    As the title indicates, astrophysical and biological simulations will be at the core of number-crunching activities. The next-generation IBM supercomputer is even designed specifically to simulate protein folding (see ComDig 1999.beta6.10). Other computational goals posing complexity challenges are simulations of cells and of the ecosystem of planet Earth. Most of the computational speed-up is gained by linking thousands or tens of thousands of PC processors together, either in a dedicated machine or even across the Internet, as in a Beowulf cluster of Linux machines. The introduction of 64-bit machines later in 2000 will greatly enhance the capabilities of such clusters.

    One application of this concept has led to the largest distributed-computing project in history, running on the planet's largest virtual supercomputer: using a downloadable screensaver, the more than a million PC users of SETI@home donate unused CPU cycles to the search for extraterrestrial intelligence by scanning through the 35 gigabytes of signals received daily at the Arecibo radio telescope in Puerto Rico (see the movie "Contact" for details).

    Problems like SETI are easy to process in parallel because the data can be split into chunks that can be processed without much communication between processors. In most complex simulations, however, what goes on in one part of the system influences most other parts, so fast communication between the processors becomes critical. A new software toolkit available at www.cactuscode.org significantly facilitates this process of "parallelization" of computing jobs.
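    The SETI-style pattern can be sketched in a few lines (a generic illustration, not the actual SETI@home client or the Cactus toolkit): split the data, let workers process chunks independently, and merge at the end.

      # Embarrassingly parallel processing: independent chunks, no
      # communication between workers until the final merge.
      from multiprocessing import Pool

      def analyze(chunk):
          # Stand-in for real signal analysis: score each chunk independently.
          return max(chunk)

      if __name__ == "__main__":
          signal = list(range(1_000_000))  # fake "telescope" data
          chunks = [signal[i:i + 10_000] for i in range(0, len(signal), 10_000)]
          with Pool() as pool:
              scores = pool.map(analyze, chunks)  # workers never talk to each other
          print("best candidate score:", max(scores))

    A tightly coupled simulation is the opposite case: every step needs data from neighboring regions of the system, so the communication pattern, not the arithmetic, sets the speed limit.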

    But even the fastest Internet connections would be too slow for efficient parallel simulations. Some researchers are therefore working on the next step, taking advantage of a computational trick that has evolved in nature: brains are not split into CPU and RAM; instead, computation is done directly in the memory elements, the neurons. Putting the central processing unit inside the memory therefore promises much more exciting developments than fancy avatars with whom you can shake hands.

    Computing 2010: From Black Holes To Biology, Declan Butler, Millennium Supplement, Nature 402: 6761 C67 (1999)


  6. The neurobiology of cognition, Nature Millennium Suppl.

    Our brain, one of our most complex organs, performs a number of functions whose neurobiological mechanisms remain a challenge for the science of the next millennium. Among the most important functions are cognition (perception, learning, memory, attention, decision-making, language, and motor planning) and emotion, neither of which can be implemented in today's computers. Nichols and Newsome expect significant progress in the next few decades based on improved technology for monitoring and measuring neuronal activity, but they do not expect many changes in conceptual understanding. They describe three levels of understanding: localization, representation, and microcircuitry.

    With the help of PET scans and functional magnetic resonance imaging (fMRI), both of which measure changes in blood flow to active brain areas, as well as techniques that measure the electromagnetic fields of neuronal activity directly, much progress has been made in associating different brain regions with different cognitive events. One important insight, however, is that we cannot expect a one-to-one map between events and brain regions: many events are mapped to different regions simultaneously, and the same region can respond to different types of events. Because of the complexity of a single neuron, even dramatic improvements in the understanding of neuronal microcircuitry will still leave a major challenge: synthesizing the information to understand the coupling across hierarchies of organization that leads to observed behavior. It is not clear whether the question of consciousness can be answered in a scientific manner, since there is no agreed-upon definition of consciousness that does not depend on subjective reports.

    Conceptually easier are questions about how decisions are made. In monkeys, researchers have been able to identify neurons whose activity precedes the behavioral manifestation of a decision by several seconds. Will it be possible, with the help of smart brain-monitoring devices, to see what decision a person will make before that person is consciously aware of making it? And if I can see what decision I am going to make in a few seconds, can I still change my mind? We can anticipate that results in this direction will revive philosophical discussions about determinism and free will.

    The Neurobiology Of Cognition, M. James Nichols & William T. Newsome, Millennium Supplement, Nature 402: 6761 C35 (1999)



  7. New Theories Help Explain Mysteries of Autism, New York Times

    Autism, the disorder portrayed by Dustin Hoffman in the movie "Rain Man", has a number of peculiar symptoms. Common to them are an inability to relate to others in social interactions and a peculiar focus on details that sometimes leads to the phenomenon of "idiot savants", patients with extraordinary mental capabilities, e.g., in mental arithmetic or photographic memory. In the early days of psychology, traumatic childhood experiences and deprivation were postulated as the cause of the disorder. More recent results point instead toward genetic origins (involving five or six genes) that lead to changes in the organization of different brain regions. For instance, it seems that specific neurons respond selectively to the actions of other individuals, independently of one's own actions (see "Cortical Mechanisms of Human Imitation", ComDig 1999.9.11). Another indication of genetic origins is that the first clear symptoms appear at specific developmental stages, often around an age of 14 to 22 months.

    It seems that autistic brains are generally bigger and heavier, with abnormalities in three different regions, all of which are relevant to the control of social behavior: the frontal lobes, involved in decision-making and planning, are thicker; cells in the limbic system, where emotions are processed, are a third smaller and more numerous, but more immature; finally, cells in the cerebellum, involved in predicting the next movements, thoughts, and emotions, are reduced in number by up to 50%.

    Some of these cells (in the amygdala) respond to faces, whereas autistic children tend to ignore facial expressions. Neuronal activity corresponding to arousal is several times higher than average, which might explain why details that most people would ignore do not bore them. Today the prognosis for therapy is good if the disorder is diagnosed early, by age two to three (a strong warning sign is a two-year-old's inability to speak short sentences).

    New Theories Help Explain Mysteries of Autism, New York Times


  8. Transitions and Resonances in the Behavior of Heart Cells, Chaos

    Heart rhythms have long been a prominent example of how non-linear dynamics can describe biological oscillations that were inaccessible to traditional, linear approaches. Yehia et al. show in detail how transitions to such oscillations can be induced even in single cells, which might lay the foundation for the understanding of a number of heart diseases.

    Abstract: The transmembrane potential of a single quiescent cell isolated from rabbit ventricular muscle was recorded using a suction electrode in whole-cell recording mode. The cell was then driven with a periodic train of current pulses injected into the cell through the same recording electrode. When the interpulse interval or basic cycle length (BCL) was sufficiently long, 1:1 rhythm resulted, with each stimulus pulse producing an action potential. Gradual decrease in BCL invariably resulted in loss of 1:1 synchronization at some point. When the pulse amplitude was set to a fixed low level and BCL gradually decreased, N + 1:N rhythms (N ≥ 2) reminiscent of clinically observed Wenckebach rhythms were seen. Further decrease in BCL then yielded a 2:1 rhythm. In contrast, when the pulse amplitude was set to a fixed high level, a period-doubled 2:2 rhythm resembling alternans rhythm was seen before a 2:1 rhythm occurred. With the pulse amplitude set to an intermediate level (i.e., to a level between those at which Wenckebach and alternans rhythms were seen), there was a direct transition from 1:1 to 2:1 rhythm as the BCL was decreased: Wenckebach and alternans rhythms were not seen. When at that point the BCL was increased, the transition back to 1:1 rhythm occurred at a longer BCL than that at which the {1:1 → 2:1} transition had initially occurred, demonstrating hysteresis. With the BCL set to a value within the hysteresis range, injection of a single well-timed extrastimulus converted 1:1 rhythm into 2:1 rhythm or vice versa, providing incontrovertible evidence of bistability (the coexistence of two different periodic rhythms at a fixed set of stimulation parameters). Hysteresis between 1:1 and 2:1 rhythms was also seen when the stimulus amplitude, rather than the BCL, was changed. Simulations using numerical integration of an ionic model of a single ventricular cell formulated as a nonlinear system of differential equations provided results that were very similar to those found in the experiments. The steady-state action potential duration restitution curve, which is a plot of the duration of the action potential during 1:1 rhythm as a function of the recovery time or diastolic interval immediately preceding that action potential, was determined. Iteration of a finite-difference equation derived using the restitution curve predicted the direct {1:1 → 2:1} transition, as well as bistability, in both the experimental and modeling work. However, prediction of the action potential duration during 2:1 rhythm was not as accurate in the experiments as in the model. Finally, we point out a few implications of our findings for cardiac arrhythmias (e.g., Mobitz type II block, ischemic alternans). ©1999 American Institute of Physics.
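    The finite-difference equation mentioned at the end of the abstract can be made concrete with a toy map: the next action potential duration is a function of the preceding diastolic interval, APD(n+1) = f(DI(n)) with DI(n) = BCL - APD(n), and a stimulus arriving before a minimal DI is blocked. The exponential restitution curve and all constants below are illustrative stand-ins, not the paper's experimentally fitted values:

      # Toy iteration of a cardiac restitution map. Curve shape and constants
      # are illustrative only, not the experimentally fitted ones.
      import math

      def f(di):
          # Next action potential duration (ms) from the preceding diastolic
          # interval (ms); a saturating exponential is a common toy choice.
          return 200.0 * (1.0 - math.exp(-di / 50.0))

      DI_MIN = 10.0  # minimal diastolic interval before a stimulus is blocked

      for bcl in (400.0, 250.0, 150.0):
          apd, blocked = 150.0, 0
          for _ in range(200):
              di = bcl - apd
              if di < DI_MIN:    # stimulus falls in the refractory period
                  di += bcl      # the cell only responds to the next stimulus
                  blocked += 1
              apd = f(di)
          label = "2:1-like" if blocked > 100 else "1:1"
          print(f"BCL={bcl:.0f} ms: {blocked}/200 stimuli blocked ({label})")

    Shortening the drive period in this toy map already reproduces the transition from 1:1 to 2:1-type rhythms; the fitted map in the paper additionally predicts the direct transition and the bistability described above.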

    Hysteresis and bistability in the direct transition from 1:1 to 2:1 rhythm in periodically driven single ventricular cells, Ali R. Yehia, Dominique Jeandupeux, Francisco Alonso, and Michael R. Guevara, Chaos, Volume 9, Issue 4, pp. 916-931 (1999)


  9. Why Do Microorganisms Survive Deep Underground?, Science Daily, INEEL

    "Even Dante would blanch at the conditions kilometers below the earth's surface. Temperatures climb past 100 degrees Celsius, pressures hundreds of times greater than atmospheric pressure bear down, and space is so tight even microorganisms can barely budge. Yet, even there life persists. Now subsurface scientists have begun to identify the factors that determine why microorganisms survive deep underground in some places, but not others, report researchers from the Department of Energy's Idaho National Engineering and Environmental Laboratory and Princeton University. The INEEL specializes in subsurface science as part of its environmental mission. High temperatures ensure nothing can live too far below the earth's surface. But pressure, the availability of water, the porosity of the surrounding rock and the flow of chemical nutrients also limit where extremophiles--microorganisms that relish harsh conditions--can exist. (…)"We're at the point of recognizing that microorganisms have remarkable abilities to colonize these environments and trying to understand the parameters that control that colonization," said INEEL microbiologist Rick Colwell, who presented a synthesis of recent findings in the Biogeoscience: Deep Biospheres: Where and How? poster session today at the American Geophysical Society meeting in San Francisco. A better understanding of how extremophiles survive deep underground may shed light on how life endured the earth's violent youth, or show scientists where to look for life on other planets, said Princeton geochemist T.C. Onstott. Temperature appears to be the primary factor in limiting how deep extremophiles can go. No known microorganism can live for long at 120 degrees Celsius. Since the surface temperature averages 15 degrees Celsius and the temperature in the ground increases with depth about 19 degrees Celsius every kilometer, extremophiles should die off between five and six kilometers below the surface of dry land. (…)Pressure limits the range of extremophiles less than temperature does. Most microorganisms can survive pressure 600 times atmospheric pressure, which corresponds to a depth of six kilometers. At that depth in most locations the temperature likely exceeds 120 degrees Celsius. Lack of water and chemical nutrients likely prohibits deep subsurface life in arid, geologically stable regions. For instance, little grows between the surface and the groundwater of the Snake River Plain, on which INEEL sits. Conversely, extremophiles may be more abundant deep in active geological features, such as faults, mid-ocean ridges and salt deposits, where fluids and nutrients flow more freely. Subsurface environments are so austere some extremophiles live in a state of nearly suspended animation. Microorganisms living on the surface divide after hours or days. Those living deep underground may divide only after hundreds of thousands of years. Life may have persevered below the surface 4 billion years ago, when asteroids routinely pelted the earth and caused the oceans to boil, Onstott said, so microorganisms living deep underground may provide clues about the emergence of life on the developing planet. "If you want to understand primitive microbial ecosystems, the only place you can go is into the subsurface," he said. If life exists elsewhere in the solar system, it may be tucked beneath the surface of other planets or their moons. By studying subsurface extremophiles on earth, researchers may learn where to look in their search for extraterrestrial life. 
(…)Onstott agrees. "In terms of microbiology," he said, "I think the field is headed toward identifying energy sources for these microorganisms, correlating these sources to microbial activity and determining whether that activity has changed the subsurface environment." "
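    The quoted depth limit is straightforward arithmetic on the numbers in the release (15 degrees Celsius at the surface, roughly 19 degrees more per kilometer of depth, and a 120-degree ceiling for known life):

        d ≈ (120 - 15) °C / (19 °C per km) ≈ 5.5 km,

    squarely within the "between five and six kilometers" cited above.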
    Life In The Inferno: Researchers Identify Factors That Determine Where Microorganisms Can Survive In The Hellish World Deep Underground, Science Daily 12/22/99. The original news release can be found at: Idaho National E & E Laboratory


  10. Adaptive Agents and Collective Intelligence on the Internet?, arXiv

    A major ingredient of the Internet is its "routers", which tell pieces of information where to go next in the network on the way to their destination. Performance degrades if some routers attract too much traffic; routers can therefore be viewed as interacting agents in the complex system of the overall network. Wolpert et al. observe that "Adaptivity, both of the individual agents and of the interaction structure among the agents, seems indispensable for scaling up multi-agent systems (MAS's) in noisy environments. One important consideration in designing adaptive agents is choosing their action spaces to be as amenable as possible to machine learning techniques, especially to reinforcement learning (RL) techniques."

    They found " … the perhaps surprising fact that simply changing the action space of the agents to be better suited to RL can result in very large improvements in their potential performance: at their best settings, our learning­amenable router agents achieve throughputs up to three and one half times better than that of the standard Bellman­Ford routing algorithm, even when the Bellman­Ford protocol traffic is maintained. We then demonstrate that much of that potential improvement can be realized by having the agents learn their settings when the agent interaction structure is itself adaptive. "

    In a related paper, Tumer and Wolpert show that sub-optimal routing can be caused by "'side-effects', in this case of current routing decisions on future traffic." They develop a theory that addresses this problem: "The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the Ideal Shortest Path Algorithm (ISPA), even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA."

    Adaptivity in Agent-Based Routing for Data Networks, David H. Wolpert, Sergey Kirshner, Chris J. Merz, Kagan Tumer, Report-no: NASA-ARC-IC-99-122

    Avoiding Braess' Paradox through Collective Intelligence, Kagan Tumer, David H. Wolpert, Report-no: NASA-ARC-IC-99-124



  11. Cortical Mechanisms of Human Imitation, Science

    The ability to imitate someone else is a major factor that allows infants (already a few hours after birth) and small children to learn from their parents and older siblings. It is also fundamental to social interaction, for instance in initiating communication as a signal of understanding what the other person means. This capability seems to be a central part of what is missing in autistic children.

    Imitation of behavior, on the other hand, is not limited to humans: dolphins can be very creative in imitating the movements of humans, even when they lack the body parts they observe moving. Monkeys, too, are known to imitate ("monkey") the movements of others.

    Iacoboni et al. conducted a series of careful experiments to identify brain regions that specialize in the essence of imitation. To demonstrate this effect beyond reasonable scientific doubt, they had to exclude alternative explanations based on the observation of the body part itself, etc. Their findings indicate that two regions, the "left inferior cortex (opercular region) and the rostral-most region of the right parietal lobule", are selectively active during imitation, regardless of how it is evoked. This distribution of activity seems to help keep the distinction between the "actor" and the "imitator".

    Cortical Mechanisms of Human Imitation, Marco Iacoboni, Roger P. Woods, Marcel Brass, Harold Bekkering, John C. Mazziotta, Giacomo Rizzolatti, Science



  12. What Kills Motor Neurons in Locked-In Patients?, Science

    Nitric oxide is a chemical with powerful cellular-signaling capabilities (see Ancient origins of nitric oxide signaling in biological systems, ComDig 1999.beta6.11). Now it also seems to be involved in signaling to motor neurons that it is time to commit cellular suicide, apoptosis. Such large-scale dying-off of motor neurons happens in amyotrophic lateral sclerosis (ALS), also called Lou Gehrig's disease, which is incurable and leaves patients in a locked-in state where their brain is fully functional but they are unable to move a single muscle (see Turning Thoughts Into Actions, ComDig 1999.beta1.4).

    'Its cause in most cases is not known, but 2% of ALS patients carry mutations in Cu,Zn superoxide dismutase (SOD), an enzyme that scavenges the superoxide free radical. Estévez et al. report that mutant SOD, which is unable to bind zinc (but still binds copper), induces cultured motor neurons to undergo apoptosis. If the wild-type SOD was forced to give up its zinc, it also caused motor neurons to die. When both wild-type and mutant SOD were replete with zinc, then both SODs protected motor neurons from apoptosis upon removal of nurturing growth factors. The authors propose that loss of zinc from SOD induces motor neuron apoptosis through an oxidative mechanism that produces nitric oxide. '

    Induction of Nitric Oxide-Dependent Apoptosis in Motor Neurons by Zinc-Deficient Superoxide Dismutase, Alvaro G. Estévez, John P. Crow, Jacinda B. Sampson, Christopher Reiter, Yingxin Zhuang, Gloria J. Richardson, Margaret M. Tarpey, Luis Barbeito, and Joseph S. Beckman, Science 286: 2498-2500 (1999)




Complexity Digest is an independent publication available to organizations that may wish to repost ComDig to their own mailing lists. ComDig is published by the Computer Sciences Department, IIMAS and the C3, Universidad Nacional Autonoma de Mexico and edited by Carlos Gershenson. To unsubscribe from this list, please go to Subscriptions.