Found this book at a bookstore and was hooked after reading the first few pages, drawn in by its topic and its clear, well-written prose.
James Davies, who holds a PhD in social and medical anthropology from Oxford, begins with a history of psychiatry, starting with a crisis of confidence the field faced in the 1970s, when a series of experiments questioned the validity and reliability of psychiatric diagnosis.
The 1973 Rosenhan experiment, “On Being Sane in Insane Places,” questioned the validity of psychiatric diagnoses. Neurotypical confederates checked themselves into asylums claiming to have heard a voice once, then acted normally once admitted. They recorded what they were diagnosed with, and saw the great lengths it took to be deemed healthy again and discharged from the institutions. After the results were published, a hospital challenged Rosenhan to send pseudopatient imposters back in. Rosenhan agreed to the challenge but sent no one; a month later, the hospital reported it had suspected 41 imposters.
Another experiment showed that diagnoses were not consistent between psychiatrists: when the same patients were sent to different psychiatrists, they received different diagnoses about a third of the time. Additionally, the prevalence of particular diagnoses appeared to be regional, with some diagnoses more common in certain countries.
These criticisms of psychiatry led to a drastic rewrite of the third edition of psychiatry’s diagnostic manual, the DSM-III, headed by Robert Spitzer, who attempted to increase diagnostic reliability by making the definitions of disorders more precise and adding an explicit checklist to each disorder. These checklists created thresholds between psychiatric disease and “normal” human experience that, while decided by expert consensus, were ultimately arbitrary.
Additionally, the DSM-III removed many disorders, especially those that had been introduced by psychoanalysts, as that discipline was falling out of favor. Davies brings up an interesting story about the removal of homosexuality as a “sexual deviation,” which he says was largely due to pressure from the Gay Rights movement and came down to a vote of the American Psychiatric Association’s membership in the 70s, with 5,854 psychiatrists voting to remove homosexuality as a disease and 3,810 voting to keep it in. Davies makes the point that many of these decisions were political and not directly based on any changes in scientific research.
Despite the reforms made in the DSM-III and subsequent manuals, diagnostic reliability remains a difficulty. Interestingly, I looked up a citation Davies makes to Aboraya, 2006, which he says “showed that reliability actually has not improved in thirty years.” I’m uncertain how Davies reached that conclusion from this paper, as it clearly states:
- that while diagnostic reliability remains a problem, the third generation of psychiatric diagnoses “from 1980 to present… more reliability papers were published and the reliability of psychiatric diagnosis has improved,” and
- “The development of the DSM-III and its subsequent versions has been a major accomplishment in the history of psychiatric nomenclature. Clinicians use the DSM criteria in clinical practice as an effective way to communicate the clinical picture, the course of illness, and efficacy of treatment.”
This citation seems academically sloppy and perhaps shows that Davies seeks to oversimplify a complex and murky issue into a one-sided story (though this also might reflect my innate bias against pop-science books).
Chapter 1 ends by questioning the validity of psychiatric diagnoses even if we fix the reliability problem. Even if we could get every psychiatrist to agree on a diagnosis, does that mean it’s a real disease entity, or have we just made a reliable but arbitrary construct? He argues that we need biomarkers to prove it’s a “discrete, identifiable biological disease.” While I agree, I think that psychiatric definitions do a good job of separating normal-but-different from disease by often requiring that the condition disrupt the patient’s social relationships or occupational function. What makes psychiatric illnesses diseases is that they are problematic for people’s lives, and people, whether the patients themselves or their friends and family, want something done about it. I’m unsure whether we will be able to find, or need to find, biomarkers for every disease. While some diagnoses may ultimately be arbitrary, if they are clinically helpful and can show statistical, long-term improvements in patients’ quality of life, then they are valuable.
Let me know if you have any comments on this blog post. I hope to continue blogging my thoughts on this book as I read through it.
Filed under: Uncategorized | Leave a Comment
Striatal time cells, transgenic birdsongs, stuttering mice and more – Birdsong4 Satellite and #SFN14 Notes
Joe Paton from the Saltzman lab – Time encoding cells in the rodent striatum.
You can use operant conditioning to train animals to press a lever and get a reward. In a fixed-interval paradigm, you then stop rewarding lever presses until a certain time interval has passed. Animals will roughly learn this time association: they pause their lever pressing, then resume at some intermediate point before the interval expires.
If you record in the striatum of rodents that have learned this task, you see neurons that fire at every point in the fixed interval, and that rescale their firing if the interval changes. They saw that the rescaling was always slightly subproportional, and also that striatal cells multiplex information about action and time.
I think this is a really great clear electrophysiological link between striatal activity and the task being performed. I wonder if labs have tested mutant mice such as FOXP2 KOs or Shank3B knockouts with proposed striatal defects in tasks like this.
Also, it’s harder to learn the association between a conditioned stimulus (CS) and unconditioned stimulus (US) if you stretch out the delay between the CS and US while keeping the inter-trial interval the same. However, for some reason, if you increase the inter-trial interval proportionally with the CS–US delay, this increased difficulty of learning the association doesn’t occur.
Maybe it’s more difficult to make the CS/US association with a longer delay, so the brain needs longer to process the association. But if you leave the brain to process the old trial offline, without interference from new trials, it can still form the association. In other words, when trials come too quickly, processing of the subsequent trial interferes with ongoing processing of the previous one and impairs learning.
Carlos Lois – Transgenic Songbirds – Genetic tools to investigate brain circuit assembly and cellular basis of behavior.
Lois mentioned that HVC→RA neurogenesis occurs during song learning. Neurons migrate into HVC after making soma–soma contact with resident neurons. A thought I had never really had: how much is going on during the windows of time when fetuses are learning about speech? A lot of developmental psychology work has shown that babies learn to identify prosodic cues of language, their mother’s voice, etc., while in the womb. I wonder to what extent these processes are concurrent with, and depend on, neurogenesis. More generally, do we have a good sense from autopsy and/or radiation-exposure studies of when neurogenesis ends in normal human development for cortical areas, the striatum, etc.?
Collaborating with the Gardner lab, Lois designed an adeno-associated virus (AAV) to express GCaMP6 in a small population of neurons in HVC. (For some reason, with the current viruses and promoters they’re getting much smaller yields of infected cells than people normally do with mice: ~2,000 neurons in birds compared to ~20 million in mice. Of course this isn’t always bad; sparse labeling can be good for measuring morphology or separating cell-autonomous effects from emergent effects.) By mounting a lightweight CMOS camera on top of the bird’s head, they could record activation of many cells simultaneously in a singing, behaving bird (and know where these cells lie relative to each other along the rostro-caudal and medial-lateral axes).
Lois’s group also infected HVC with a virus driving expression of a bacterial Na+ channel, NaChBac. Neurons expressing this channel went from firing single discrete spikes to prolonged depolarizations. As the channel starts being expressed, song gets completely distorted, but remarkably, a few days later the bird has found a way to compensate and get the song back to normal. They said that they did histology to confirm the infected neurons were still alive and expressing the channel. However, maybe the bird simply down-regulated all of the synaptic strengths of the infected neurons, and there is enough redundancy to produce the song with the remaining neurons.
Finally, Lois showed data from transgenic zebra finches that have been germline transfected with RNAi to knock down CNTNAP2 expression, a gene associated with developmental language disorders and autism. The CNTNAP2 knockdown birds showed normal learning of simple syllables but impaired learning of complex syllables. He also mentioned that they had developed a line of GCaMP6 transgenic animals, which is pretty exciting. I believe viral infection of germline cells can be combined with the CRISPR-Cas system to cause deletions and premature stops at target genes, and theoretically even to induce homologous recombination (by also providing a template strand complementary to the region where the nuclease has cut the DNA).
Someone asked a question about Cre lines in birds, and Lois said that economically it probably wasn’t viable. Not enough people do research on birds to justify that kind of investment, and making such animals would be a largely thankless job. However, there are still a ton of experiments that could be done with simpler transgenics: plain KOs, KDs, and transgenics expressing fluorescent proteins and optogenetic channels. I think just having that level of genetic tools, combined with the song system’s anatomical modularity, the strength of song as a behavior, and the strong phenotypes seen in FOXP2 KDs and CNTNAP2 KDs, shows that genetic songbirds could be extremely powerful models for diseases of speech, social behavior, and motor learning. Further, birds have similar reproductive cycle lengths to mice and live longer, so stocks of transgenics can be bred in reasonable amounts of time. I wanted to ask Lois what he thought about creating inbred strains of zebra finches, but I didn’t have the chance. (I’m almost tempted to start this as a side project in my apartment, but I’m afraid of then bringing an infection into the lab.)
Lois had the impression that the NIH was not interested in funding zebra finch transgenics; his grants came from mouse projects, and his zebra finch work was a side project.
A member of the Simons Foundation “SFARI program” said that they have wide agreement that rodents aren’t “sophisticated.” However, the program is most concerned with investigating the rare genetic causes of autism and generating high-throughput screening models for autism. Personally, I think this is short-sighted, and the basic science just isn’t there yet. We really need to investigate the basic mechanisms of communication, imitative learning, etc., and I think songbirds are one of the best model organisms for these behaviors, and without a doubt better than rodents. So, while genetics has enabled amazing work on specific etiologies like Rett syndrome, as someone in the audience rightly pointed out, there are still almost no examples of rational drug design in neuroscience and neurology. The new orexin antagonists for insomnia are one exception, and how good a drug they will prove in the long term is unclear.
Dennis Drayna – genetics of stuttering and mouse models
Stuttering affects 4% of the population at some point in their life, but only 0.5–1% of adults are persistently affected, and in that population there is a 4:1 ratio of males to females.
Most human genetics has focused on this persistent stuttering population. They’ve shown that it’s about 85%…
In 1972, Horace Barlow, great-grandson of Darwin, wrote an article in which he put forth “A neuron doctrine for perceptual psychology?” (The question mark at the end shows that Barlow was a contemplative guy.) With this doctrine, Barlow tried to relate the firing of neurons in sensory pathways to subjectively experienced sensation. One of the principles he introduces: at progressively higher levels of sensory processing, information is carried by fewer neurons, because the system is organized to achieve as complete a representation as possible with the fewest active neurons. In other terms, the encoding of sensory information gets ‘sparser’ as one moves up to higher levels of sensory processing. Since then, a number of lines of evidence have converged to support Barlow’s proposition and the general value of ‘sparse codes.’
On the blog today I’ll be reviewing one such paper, “Sparse coding of sensory inputs” by Bruno Olshausen and David Field. This paper defines ‘sparse coding’ as a computational strategy in which brains encode sensory information using a small number of simultaneously active neurons.
This paper puts forth four ideas for why sparse coding theoretically might be a good strategy:
- It allows more memories to be stored
- It makes use of the statistical structure of natural signals
- It represents data in a convenient form for further processing
- It saves metabolic energy by decreasing neuronal firing rates
When model neurons are trained to optimize sparse representations of natural scenes, the receptive fields that emerge resemble those of simple cells in the primary visual cortex.
One interesting feature of sensory representations this paper points out: as visual signals move from the thalamus to the visual cortex, there is a 25:1 expansion (of axonal projections to cortex versus axonal projections to the LGN of the thalamus). They suggest this 25:1 expansion possibly emerges as a compromise between trade-offs: high sparseness eventually ends in ‘grandmother cells,’ where a single unique neuron represents each element of a sensory scene, while low sparseness incurs the developmental and metabolic costs of using many neurons to encode each element. Additionally, the more a neuron spikes, the more energy is needed to maintain its electrochemical gradients through pumps like the sodium–potassium ATPase. From what we know about the metabolic use of the cortex, scientists have estimated that only about 1/50th of neurons are active at any given time.
Another interesting point this paper makes is that some experimental results show neurons with much higher firing rates than metabolic estimates would predict. Because these experiments often involve searching for firing neurons with an electrode, we may be systematically biasing our studies towards a minority of neurons that fire “less sparsely” than the general population. Solutions to this bias include chronically implanting electrodes whose positioning is set anatomically, or identifying neurons by antidromic stimulation rather than by stimulus-elicited firing.
Sparse coding beyond sensory systems
This paper also points out that sparsely firing neurons are observed in motor cortex during movements, and that experimentally driving a single neuron can be enough to initiate whisker movements in rats (Brecht et al., 2004). In the zebra finch song production pathway, HVC neurons fire sparsely at precise points in song, and precise spike-timing in RA may be important as well.
How does one actually measure “sparseness”?
Olshausen and Field write that a standard measure of “sparseness” is kurtosis, with a larger value indicating a “sparser” distribution.
They also describe another method developed by Rolls and Tovee, the activity ratio, which is suited to one-sided distributions (and therefore to neural data, since firing rates cannot drop below zero).
Finally, the activity ratio a can be rescaled to lie between 0 and 1 using Vinje and Gallant’s transformation, S = (1 − a) / (1 − 1/n), where n is the number of neurons (or time bins): S is 0 for a uniform distribution of activity and approaches 1 as activity concentrates in a single unit.
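To make these measures concrete, here is a small sketch in plain Python. The firing-rate vectors are made-up illustrative values, not data from any of these papers:

```python
# Sketch of the three sparseness measures described above, in plain Python.

def kurtosis(rates):
    """Excess kurtosis: larger values indicate a heavier-tailed,
    'sparser' distribution of firing rates."""
    n = len(rates)
    mean = sum(rates) / n
    var = sum((r - mean) ** 2 for r in rates) / n
    m4 = sum((r - mean) ** 4 for r in rates) / n
    return m4 / var ** 2 - 3.0

def activity_ratio(rates):
    """Treves-Rolls activity ratio a = (mean r)^2 / mean(r^2).
    Suited to one-sided distributions, since rates cannot go below zero;
    a is small when a few neurons carry most of the activity."""
    n = len(rates)
    return (sum(rates) / n) ** 2 / (sum(r * r for r in rates) / n)

def vinje_gallant_sparseness(rates):
    """Rescale the activity ratio to [0, 1]: S = (1 - a) / (1 - 1/n),
    so S = 0 for uniform firing and S -> 1 as one neuron dominates."""
    n = len(rates)
    return (1.0 - activity_ratio(rates)) / (1.0 - 1.0 / n)

dense = [5.0, 5.0, 5.0, 5.0]    # every neuron fires equally
sparse = [20.0, 0.1, 0.1, 0.1]  # one neuron carries the signal
print(round(vinje_gallant_sparseness(dense), 2))   # 0.0
print(round(vinje_gallant_sparseness(sparse), 2))  # 0.99
```

The same vector of rates can equally be read as one neuron’s firing across time bins, which is how Vinje and Gallant applied the measure.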
I think there is compelling evidence for brains using sparse coding in sensory systems, and there are good theoretical reasons why they should. It will be interesting to see whether these findings hold up for motor systems as well. If evidence of sparse coding is found even in relatively simple, blunt movements like locomotion, I would guess it will be more prominent still in higher premotor areas encoding complicated, learned movements.
A big issue in machine learning is how to process data before unleashing learning algorithms on it. I think neurophysiology suggests a good strategy might be to duplicate the data and turn it into overcomplete sparse representations. I assume scientists working with big data are already doing this, but honestly I have no idea.
Barlow HB: Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1972, 1:371-394.
Brecht M, Schneider M, Sakmann B, Margrie TW: Whisker movements evoked by stimulation of single pyramidal cells in rat motor cortex. Nature 2004, 427:704-710
Olshausen BA, Field DJ: Sparse coding of sensory inputs. Curr Opin Neurobiol 2004, 14:481-487.
Rolls ET, Tovee MJ: Sparseness of the neuronal representation of stimuli in the primate temporal visual cortex. J Neurophysiol 1995, 73:713-726.
Vinje WE, Gallant JL: Sparse coding and decorrelation in primary visual cortex during natural vision. Science 2000, 287:1273-1276.
 I had always thought of sparse coding from the perspective of individual neurons, as a strategy where a given sensory neuron fires only to a very specific stimulus or aspect of a stimulus, and therefore fires ‘sparsely.’ Reading this, I realized that this paper’s definition and my mental picture are two sides of the same coin, provided the neurons fire independently. A population of neurons that each fire only rarely might still not be ‘sparsely coding’ by Olshausen and Field’s definition if they all fire together and fall silent together.
 How does that neuron spur the rest of the brain into action and what occurs when that neuron dies?
 This is the same Gallant who did some of the coolest experiments ever, decoding neural activity from fMRI activation: https://neuroamer.wordpress.com/2011/10/31/scientists-record-lucid-dreams-with-eeg-and-fmri-simultaneously/
First off, I wanted to say I’m working in a songbird lab now, so while I’m keeping this a general neuroscience blog, you’re probably going to start seeing more blogposts about bird brains.
So, is the bird vocal learning pathway specialized for song and independent from other tasks? A new paper by the Okanoya lab addresses this very question.
What is the vocal learning pathway?
We know that the vocal learning pathway resembles general thalamo-cortico-basal ganglia circuitry, but it’s generally thought to be very specialized for song because:
- lesion studies show brain areas in the pathway are necessary for song
- the brain areas are much larger in males (and only males sing)
- the areas are much larger in songbirds than birds that aren’t vocal learners
- firing of neurons in these areas is modulated during singing (and in some conditions during playback of the bird’s own song), as shown by direct neural recordings and immediate-early gene expression studies.
One of the brain areas in the vocal learning pathway is the striatum-like Area X, which appears to be a specialized structure surrounded by the more general avian medial striatum (MSt). Like the mammalian striatum, Area X receives strong dopaminergic inputs, but unlike the classic thalamo-cortico-basal ganglia circuit, Area X does not reciprocally connect to the dopaminergic ventral tegmental area (VTA) or substantia nigra pars compacta (Person et al., 2008). It is unclear whether parts of the song system evolved out of basal ganglia circuits, but I believe all known vertebrates have a basal ganglia, making it one of the most conserved brain regions.
Testing if Area X performs basal-ganglia like calculations
Because electrophysiological studies in mammalian striatum and avian MSt have shown that neurons fire in response to stimuli signaling food rewards and the rewards themselves, the Okanoya lab wondered whether Bengalese finch Area X neurons are modulated by food rewards as well. They designed two operant conditioning studies for birds to perform while they recorded from neurons in Area X.
In the first experiment, if the birds pecked at a red LED while it was illuminated, they were rewarded with food 50% of the time. If they pecked at the LED while it was off, they received no food, and there was a greater delay before it illuminated again.
In a second experiment, the LED lit up either red or green, with 50% probability, in a random sequence. If the bird pecked the LED while it was red, it received food 100% of the time. If the LED was green when the bird pecked, it received no food; however, the bird was still required to peck to proceed to further trials. This way the experimenters could separate a visual stimulus from its association with reward, and also control for the general motor movement of pecking. It seems to me, though, that to properly learn the task the bird must learn to peck the green LED to continue the session, see more red lights, and get more food rewards.
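As a note to myself, the experiment-2 contingency as I understand it can be sketched like this (the function names, and the simplification that the bird always pecks, are mine, not the paper’s):

```python
# Schematic of the experiment-2 trial structure: red and green trials are
# interleaved at random, only red pecks are rewarded, but a peck is always
# required to advance to the next trial.
import random

def run_trial(rng):
    color = rng.choice(["red", "green"])  # 50/50, random sequence
    pecked = True                          # the bird must peck either way
    rewarded = (color == "red") and pecked # red pecks rewarded 100% of the time
    return color, rewarded

rng = random.Random(0)
trials = [run_trial(rng) for _ in range(1000)]
reward_rate = sum(rewarded for _, rewarded in trials) / len(trials)
print(round(reward_rate, 2))  # close to 0.5, since about half the trials are red
```

Writing it out this way makes the confound I mentioned explicit: color and reward are perfectly correlated, so only the required green peck dissociates motor output from reward.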
After the animals learned these tasks, they performed them while the experimenters recorded extracellularly from neurons in Area X that had previously been shown to be modulated by song (n = 19 neurons for task one and n = 25 neurons for task two), and from a few neurons that were not modulated by song (judging by the inter-spike interval).
Results from Experiment 1
Two examples of Area X neuron firing during the task are shown above. Many neurons showed differential responses between reward trials (when the peck was followed by food) and non-reward trials (when it wasn’t). Standard deviations in spiking rate were significantly greater in reward trials than non-reward trials (p < 0.001).
Results from Experiment 2
In experiment 2, birds were slower to key-peck the non-rewarded key. As in experiment 1, the SDs of firing rate were significantly higher in reward trials than non-reward trials. Neurons recorded in experiment 2 showed different firing on the red LED reward trials, than the green LED no-reward trials, before they even pecked or got their food reward.
I’m going to ramble a bit here, to leave some notes for myself…
I think the authors did a fairly convincing job of showing that Area X neurons are modulated by non-song related tasks. How important this signaling is for normal behavior is unclear. The majority of neurons they recorded from seemed to modulate their firing much more during song.
I wonder if Area X might be not a song-learning area per se, but an elaborated thalamo-cortico-basal ganglia loop specialized for beak and vocal-organ movements. I wonder if the modulation seen in this study could relate to motor planning for the actual eating movements. For example, the birds showed a much longer delay in pecking in the unrewarded condition of experiment 2. Maybe the difference in activity upon seeing the reward or non-reward light relates to a difference in the movement made, as opposed to directly coding the reward value of the visual signal. Monkey studies sometimes try to control for this by training the animals to respond to non-reward trials with the same speed as reward trials. As the experimenters mention, monkey studies also try to separate cognitive processes from related motor activity by recording directly from muscles with electromyography (EMG).
Another way to minimize this confound would be to change the reward from a food pellet to some sort of juice delivery (though the bird would still have to swallow), or perhaps an IV drug delivery, though the drug’s effect on dopamine systems might also modulate Area X activity. Similarly, perhaps the changes in activity we see are due to dopamine signaling but will be irrelevant to the plasticity of synapses in this circuit, because they are not directly locked to a movement. So perhaps Area X is song-specific, but VTA and SNpc are not: Area X is slightly modulated by VTA and SNpc but can effectively ignore their signaling.
One strange part of this study is that they implanted the birds with testosterone-secreting tubes to increase the rate of song. However, as they mention, testosterone has been shown to change behavior in some cognitive tasks, and so could confound the results. Getting Bengalese finches to sing with electrodes in their brains is challenging, but it is done all the time without testosterone supplementation.
A possible confound of these experiments is that the birds only heard the mechanical feeder sound on the trials in which they received food. A better control would be to play the exact same sounds on every trial, to make sure the Area X neurons are not being modulated by sound.
Finally, I’m curious about the converse of this question. Is the rest of the striatum modulated during song, similarly to how this study showed Area X was modulated by non-song stimuli?
Hope you enjoyed this article, please check out other posts on my blog. If you liked this blogpost about songbirds, you might also like: Of mice and birds – what is a zebra finch, really?
Person, A.L., Gale, S.D., Farries, M.A. & Perkel, D.J. (2008) Organization of the songbird basal ganglia, including Area X. J. Comp. Neurol., 508, 840-866.
Feenders, G., Liedvogel, M., Rivas, M., Zapka, M., Horita, H., Hara, E., Wada, K., Mouritsen, H. & Jarvis, E.D. (2008) Molecular mapping of movement-associated areas in the avian brain: a motor theory for vocal learning origin. PLoS One, 3, e1768.
Seki, Y., Hessler, N.A., Xie, K. & Okanoya, K. (2014) Food rewards modulate the activity of song neurons in Bengalese finches. Eur. J. Neurosci., 39, 975-983.
 (also known as the anterior-forebrain pathway, AFP)
Zebra finch genetics
The zebra finch, Taeniopygia guttata, is an Australian songbird with a black and white striped breast. It is used by neuroscientists as a model organism to study the learning and production of a complex motor behavior–birdsong. Like other songbirds, zebra finches are genetically predisposed to learn their species’ particular song, but they have to hear the song being produced and slowly learn to mimic it through a trial and error process, similar to how babies learn to speak.
So, out of the over 5,000 identified passerine songbirds, why did neuroscience choose the zebra finch as its model songbird? For similar reasons to why biology chose the mouse as a model mammal: zebra finches are small, easy to care for, and breed well in captivity. However, with any lab-bred animal there is a risk of inbreeding, which could alter the results of studies and effectively create subspecies, reducing comparability between labs.
Lab bred animals – inbred strains of mice
To reduce genetic variability within studies and increase comparability between labs, mouse geneticists have intentionally mated strains of mice to be inbred. By breeding mice brother to sister for twenty or more generations, they generated animals homozygous at essentially every locus. In other words, all of the animals born from this process have the same genome, with the same copies of each gene on each chromosome. These animals are essentially genetic clones of one another, and if they reproduce with each other their young will be clones as well. Through this process we created standardized inbred strains of mice such as C57BL/6J or BALB/cByJ that are used in labs around the world and maintained by organizations such as The Jackson Laboratory.
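How quickly sib mating drives a line toward homozygosity can be sketched with Wright’s classical recurrence for the inbreeding coefficient F, the probability that the two alleles at a locus are identical by descent (a textbook population-genetics result, not something from the sources above):

```python
# Wright's recurrence for repeated full-sibling mating:
#   F(t) = (1 + 2*F(t-1) + F(t-2)) / 4
# starting from outbred (F = 0) founders.

def sib_mating_inbreeding(generations):
    f_prev2, f_prev1 = 0.0, 0.0  # outbred founders
    history = []
    for _ in range(generations):
        f = (1.0 + 2.0 * f_prev1 + f_prev2) / 4.0
        history.append(f)
        f_prev2, f_prev1 = f_prev1, f
    return history

f = sib_mating_inbreeding(20)
print(round(f[9], 3))   # after 10 generations: 0.886
print(round(f[19], 3))  # after 20 generations: 0.986
```

So ten generations of sib mating leaves roughly 11% of loci still segregating, which is why strains are usually only called fully inbred after about twenty generations.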
Inbred strains have been essential for research; however, they can sometimes produce idiosyncratic results that don’t generalize to other strains, let alone to mammals in general. For example, knocking out a gene in one strain may produce an obvious phenotype that is conspicuously absent in another, likely because other genes compensate in the second strain. In extreme cases, inbreeding can fundamentally alter normal characteristics of the species and predispose animals to pathological states: BALB/cWah1 mice, for example, are predisposed to lack a corpus callosum (CC), with about 20% of the mice having no CC and another 20 to 30% having an unusually small one. Scientists have therefore argued that while data generated from inbred strains might be more reproducible, it may also be less generalizable: you have reduced experimental noise and can reproduce the same results, but are they robust findings that generalize to all mice, or only to a particular strain?
Are zebra finches inbred?
There have been no efforts (to my knowledge) to generate an inbred laboratory strain of zebra finches. However, there could have been bottlenecks during domestication and during the shipping of birds between continents that reduced genetic diversity within laboratory populations of the species. To investigate the population genetics of laboratory zebra finches, researchers genotyped 1,000 zebra finches from 18 laboratory populations (held in Europe, North America, and Australia) and two wild populations at 10 microsatellites. Microsatellites are sequences of 2-5-base-pair repeats that tend to be quite polymorphic, making them good markers for genetic diversity.
As might be expected, laboratory populations showed a loss of genetic variability due to genetic drift: if a particular allele’s frequency ever drifts to zero, it is lost from that population. Lab populations on average had roughly half the number of alleles per locus of wild zebra finches, 11.7 for captive versus 19.3 for wild. The most inbred population studied, bred for the recessive trait of white plumage, showed only 6.4 alleles per locus, roughly a third of what is seen in the wild populations. The researchers found genetic differences between their European and North American populations and pointed out that there could be functional differences between these populations as well.
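This kind of allele loss is exactly what a simple Wright-Fisher model of drift predicts. Here is a minimal sketch; the population size, allele count, and generation number are made-up illustrative values, not parameters from the study:

```python
# Minimal Wright-Fisher sketch of allele loss by drift at one
# microsatellite-like locus in a small, closed population.
import random

def alleles_after_drift(n_alleles=20, pop_size=50, generations=100, seed=1):
    random.seed(seed)
    # start with n_alleles equally common alleles among the 2N gene copies
    pool = [i % n_alleles for i in range(2 * pop_size)]
    for _ in range(generations):
        # each generation, the 2N gene copies are drawn with replacement
        pool = [random.choice(pool) for _ in range(2 * pop_size)]
    return len(set(pool))

# Fewer than the original 20 alleles survive after 100 generations of drift.
print(alleles_after_drift())
```

Rare alleles are the first to go, which is consistent with the founder-bottlenecked white-plumage population showing the fewest alleles per locus.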
The study also looked at functional differences between populations of zebra finches. Populations varied greatly in body mass, with some laboratory populations weighing almost double that of newly domesticated populations (and although body mass may be affected by animal husbandry and housing, this result held even for three genetically separate populations housed in the same lab). Body weight matters for scientists who may wish to place electrodes, cannulae, or other devices on birds’ heads, but here it was mainly used as an easy and reliable measure to show real differences between the populations. It is possible there are also differences in brain structures, and in genes involved in learning and plasticity, that affect song structure. Such differences are more difficult to demonstrate, but also important to know, as they could affect and possibly complicate results from studies.
In sum, the laboratory populations of zebra finches studied are somewhat inbred, but still show diversity. If studies conflict between North American and European laboratories, we should consider whether genetic effects might explain the differences. Perhaps we should begin inbreeding strains of zebra finches to facilitate genetic studies in the future. We may also need to investigate other lab-bred songbird species used by researchers, such as Bengalese finches, which have been bred in captivity for centuries and selectively bred for their plumage.
 (Parrots and hummingbirds are also vocal learners.)
 This process of sibling-sibling breeding is analogous to “selfing” (self-fertilization) in plants. The Ancient Egyptians, believing the royal family to be gods, bred brothers and sisters for 10 generations, likely creating humans homozygous at most loci.
 Of course, due to the sex chromosomes, males and females will have different genomes, and a small number of new mutations occur with each generation.
 Though they are far from perfect: http://en.wikipedia.org/wiki/Microsatellite#Limitations. Although mutations can complicate analyses, previous studies by Forstmeier et al. demonstrated that they are extremely rare in zebra finches — they did not observe a single event at their microsatellite loci in 7,368 parent-offspring comparisons.
Forstmeier W, Segelbacher G, Mueller JC, Kempenaers B. Genetic variation and differentiation in captive and wild zebra finches (Taeniopygia guttata). Mol Ecol. 2007 Oct;16(19):4039-50.
Alzheimer’s disease (AD) is by far the number one cause of dementia, and the number one risk factor for AD is age, with almost 50% of people above the age of 85 suffering from the disease.
With the aging baby boomer generation and medical advances squaring off the life expectancy curve, the United States is projected to have over 9 million cases of AD by 2050. This huge expected increase in cases has been dubbed by some the “Alzheimer’s Disease Epidemic.”
Alzheimer’s disease is marked primarily by memory loss and inability to form new memories, but it can also impair language, higher thinking, visuospatial skills, and can even cause personality changes and delusions. By erasing patients’ identities, memories, and personalities, Alzheimer’s disease robs patients of their humanity, and can be devastating for the friends and family watching the disease unfold.
Alzheimer’s disease was first described in 1901 in a patient, Auguste D., a 51-year-old woman suffering from debilitating memory loss, cognitive deficits, and persecutory delusions. She underwent a progressive decline and died five years later. The case was described by Alois Alzheimer, a German psychiatrist and neuropathologist (though now the disease is treated by neurologists).
The fact that Alzheimer was a pathologist was of crucial importance: because of it, he performed an autopsy on Auguste D. and discovered proteinaceous plaques and neurofibrillary tangles in her brain, which to this day remain the pathological hallmarks of Alzheimer’s disease.
Alzheimer’s mentor, Kraepelin, was the first to use the eponym “Alzheimer’s Disease,” in 1910, and his writing about the disease was prescient: “Although the anatomical findings suggest that we are dealing with a particularly serious form of senile dementia… this disease sometimes starts as early as in the late forties.”
This clinical observation was borne out by later genetics. There is an early-onset familial variant of AD caused by rare mutations. Familial AD devastates otherwise normal, healthy patients in their 40s or younger, but luckily it accounts for a minority of cases. The late-onset variant has complex genetic and environmental risk factors and accounts for the vast majority of cases. In this variant, genetic factors such as the ApoE4 allele are known to increase the risk of Alzheimer’s, but a doctor cannot genetically predict who will get it and who won’t.
What was really interesting about the first three genes identified in familial Alzheimer’s disease is that they are part of the same pathway. These genes code for the amyloid precursor protein (APP) and presenilins 1 and 2. Amyloid precursor protein is cleaved by the “gamma secretase complex,” an enzyme made up of multiple subunits, including presenilins 1 and 2. If APP is cleaved at the wrong locations, it forms a protein fragment called beta-amyloid, which can induce other proteins to misfold, forming a protein aggregate that grows like a snowball rolling down a hill. This protein aggregate eventually forms a giant extracellular plaque—the very same plaques that Alzheimer described in his patient Auguste D!
The Amyloid Cascade Hypothesis
This genetic work led to the amyloid cascade hypothesis: that extracellular amyloid accumulates and initiates a sequence of events, eventually leading to neurotoxicity and the clinical symptoms of AD. This hypothesis has inspired a number of clinical trials of drugs intended to treat or slow the course of Alzheimer’s by stopping the formation of these beta-amyloid plaques or removing them. Yet even when trials have successfully removed patients’ plaques, patients have not shown clinical improvement. It is unclear whether the amyloid cascade hypothesis is wrong (perhaps amyloid plaques correlate with but do not cause AD), or whether we are simply starting our treatments too late, once the cascade of neurodegeneration is already underway, and the treatments could succeed if we used them earlier in the disease course.
In fact, we know that patients with Alzheimer’s disease don’t show clinical symptoms until late in the disease, perhaps due to a phenomenon known as cognitive reserve. New techniques, however, allow us to determine whether people showing subtle cognitive deficits are at high or low risk of developing Alzheimer’s, so we can continue to test the amyloid cascade hypothesis by performing clinical trials of the plaque-clearing drugs in people at high risk of developing Alzheimer’s.
For example, the same proteins found in the plaques and tangles of Alzheimer’s disease brains, such as amyloid beta, make their way into the cerebrospinal fluid that bathes the brain and spinal cord. In patients with mild cognitive impairment (MCI), meaning they are starting to show cognitive problems compared to age-matched, education-matched controls, but not problems severe enough to be classified as Alzheimer’s, we can perform lumbar punctures to harvest their cerebrospinal fluid. Then, by measuring levels of these plaque proteins, we can predict which of these patients with MCI are at high risk of progressing to Alzheimer’s.
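The logic of that prediction can be sketched in a few lines. Everything here is invented for illustration: the cutoff values, field names, and the two-marker rule are hypothetical stand-ins, not clinically validated thresholds (real studies derive cutoffs from large, validated cohorts):

```python
# Hypothetical sketch of CSF-biomarker risk stratification for MCI patients.
# All numbers and names are invented for illustration.
from dataclasses import dataclass

@dataclass
class CsfSample:
    abeta42_pg_ml: float  # CSF amyloid-beta 42 concentration (hypothetical units)
    tau_pg_ml: float      # CSF total tau concentration (hypothetical units)

def high_risk(sample: CsfSample,
              abeta_cutoff: float = 500.0,  # hypothetical threshold
              tau_cutoff: float = 350.0) -> bool:
    """Low CSF amyloid-beta (sequestered into brain plaques) together with
    elevated tau is treated here as a marker of higher progression risk."""
    return sample.abeta42_pg_ml < abeta_cutoff and sample.tau_pg_ml > tau_cutoff

print(high_risk(CsfSample(abeta42_pg_ml=420.0, tau_pg_ml=600.0)))  # True
print(high_risk(CsfSample(abeta42_pg_ml=800.0, tau_pg_ml=200.0)))  # False
```

Note the direction of the amyloid comparison: counterintuitively, CSF amyloid-beta tends to be *lower* when plaques are forming, because the protein is being deposited in the brain rather than circulating.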
Additionally, we can look at the burden and distribution of plaques in the brain using radioactive PET ligands that bind to the plaques.
The Cholinergic Hypothesis
While the previously described techniques will inevitably assist with diagnosing AD, the amyloid cascade hypothesis is still unproven, and so far it has not led to any effective therapy. Currently, one of the best treatments for Alzheimer’s disease is lifestyle modification: labeling and arranging one’s life as memory declines. The mainstay of pharmacological treatment for AD is the cholinesterase inhibitors developed in the 1990s and 2000s, which can improve performance on cognitive tests.
These drugs block acetylcholinesterase, the enzyme that breaks down the neurotransmitter acetylcholine into acetate and choline; the drugs therefore increase levels of acetylcholine in the synapse and signaling to the postsynaptic cell.
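A toy steady-state model shows why blocking the breakdown enzyme raises synaptic acetylcholine: if release is constant and clearance is proportional to concentration, partially inhibiting clearance raises the equilibrium level. All numbers here are illustrative, not physiological:

```python
# Toy steady-state model of synaptic acetylcholine: released at a constant
# rate, cleared by acetylcholinesterase at a rate proportional to
# concentration. At steady state, release = clearance, so
# concentration = release_rate / effective_clearance_rate.
def steady_state_ach(release_rate: float,
                     clearance_rate: float,
                     inhibition: float = 0.0) -> float:
    """Steady-state ACh level when esterase activity is reduced by
    `inhibition` (0.0 = no drug, 0.5 = half the enzyme blocked)."""
    effective_clearance = clearance_rate * (1.0 - inhibition)
    return release_rate / effective_clearance

baseline = steady_state_ach(release_rate=10.0, clearance_rate=2.0)                    # 5.0
with_drug = steady_state_ach(release_rate=10.0, clearance_rate=2.0, inhibition=0.5)   # 10.0
print(baseline, with_drug)
```

Blocking half the enzyme doubles the steady-state level in this toy model; real synaptic kinetics are far more complicated, but the qualitative direction is the same.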
You might be wondering, how do these drugs help treat Alzheimer’s if it is a disease caused by plaques, tangles and neurodegeneration? Why did we develop and test them in the first place?
These drugs were actually developed based on an earlier theory of Alzheimer’s: the cholinergic hypothesis.
In the 1960s and ’70s, psychologists gave young, healthy subjects cholinergic antagonists (drugs that prevent acetylcholine from signaling to postsynaptic cells). These subjects developed memory and cognitive problems resembling those of patients with Alzheimer’s disease, showing that acetylcholine is important for memory.
Most of the brain’s supply of acetylcholine derives from the nucleus basalis of Meynert in the basal forebrain, so doctors investigated this area during the autopsies of patients with Alzheimer’s and showed that the nucleus basalis seems to be particularly vulnerable to the neurodegeneration seen in AD. Patients showed decreased numbers of healthy neurons in this area, and decreased cholinergic projections to the hippocampus and entorhinal cortex (areas we also knew were important for memory), suggesting that patients with AD may have decreased acetylcholine.
Going back to the drugs we use for Alzheimer’s, cholinesterase inhibitors are thought to boost the acetylcholine signaling remaining in synapses, similar to how SSRIs increase the amount of serotonin. However, because they do not prevent the neurodegeneration that causes AD, the disease continues to progress, eventually killing patients by interfering with their ability to swallow and breathe.
However, if the cholinergic hypothesis is correct, and disruptions to the acetylcholine signaling from the basal forebrain are especially important for the symptoms of AD, then maybe selectively preventing degeneration of this area can slow the disease. Clinical trials are currently delivering nerve growth factor (NGF) to the basal forebrain using gene therapy approaches: implanting stem cells that produce NGF, or using viruses to modify cells in the basal forebrain to produce it themselves. Nerve growth factor is known to increase the growth and vitality of cholinergic neurons, and may therefore help preserve these cells as they suffer the early insults of Alzheimer’s.
Even if nerve growth factor gene therapy can only slow the disease, it would give patients more time with their memories intact to enjoy their families and a dignified, independent life. It could decrease the time families and friends spend caring for a person whose face they recognize but whose mind becomes increasingly unfamiliar, and decrease the time patients must spend institutionalized or with a full-time caretaker.
Clinical research is slow, and research is never certain, but these new approaches give hope for Alzheimer’s disease, a disease whose prognosis for patients like Auguste D. has remained gloomy for over a century, and where an epidemic of new cases looms in the near future.