I, Pencil: The Movie

A film from the Competitive Enterprise Institute, adapted from the 1958 essay by Leonard E. Read. For more about I, Pencil, visit http://www.ipencilmovie.org

neurosciencestuff:

Study ties father’s age at childbearing to higher rates of psychiatric, academic problems in kids

An Indiana University study in collaboration with medical researchers from Karolinska Institute in Stockholm has found that advancing paternal age at childbearing can lead to higher rates of psychiatric and academic problems in offspring than previously estimated.

Examining an immense data set — everyone born in Sweden from 1973 until 2001 — the researchers documented a compelling association between advancing paternal age at childbearing and numerous psychiatric disorders and educational problems in their children, including autism, ADHD, bipolar disorder, schizophrenia, suicide attempts and substance abuse problems. Academic problems included failing grades, low educational attainment and low IQ scores.

Among the findings: When compared to a child born to a 24-year-old father, a child born to a 45-year-old father is 3.5 times more likely to have autism, 13 times more likely to have ADHD, two times more likely to have a psychotic disorder, 25 times more likely to have bipolar disorder and 2.5 times more likely to have suicidal behavior or a substance abuse problem. For most of these problems, the likelihood of the disorder increased steadily with advancing paternal age, suggesting there is no particular paternal age at childbearing that suddenly becomes problematic. 
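
To make the ratios above concrete, here is a small, purely illustrative Python calculation of how a relative risk scales a baseline rate. The baseline prevalences are invented round numbers, not figures from the study; only the relative-risk multipliers echo the paragraph above.

```python
# Illustrative only: how relative risks scale a hypothetical baseline rate.
# The baseline prevalences are made-up round numbers, NOT from the JAMA
# Psychiatry study; only the relative-risk multipliers echo the article above.

relative_risk = {          # child of a 45-year-old vs. a 24-year-old father
    "autism": 3.5,
    "ADHD": 13.0,
    "psychotic disorder": 2.0,
    "bipolar disorder": 25.0,
    "suicidal behavior / substance abuse": 2.5,
}

hypothetical_baseline = {  # assumed cases per 1,000 children of 24-year-old fathers
    "autism": 5,
    "ADHD": 10,
    "psychotic disorder": 5,
    "bipolar disorder": 1,
    "suicidal behavior / substance abuse": 30,
}

for condition, rr in relative_risk.items():
    base = hypothetical_baseline[condition]
    print(f"{condition}: {base}/1,000 -> about {base * rr:.0f}/1,000 "
          f"(x{rr} relative risk)")
```

The point of the exercise is that even a large relative risk can correspond to a modest absolute risk when the baseline condition is rare, which is consistent with the caveat at the end of the article that most children of older fathers will not have these problems.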

"We were shocked by the findings," said Brian D’Onofrio, lead author and associate professor in the Department of Psychological and Brain Sciences in the College of Arts and Sciences at IU Bloomington. "The specific associations with paternal age were much, much larger than in previous studies. In fact, we found that advancing paternal age was associated with greater risk for several problems, such as ADHD, suicide attempts and substance use problems, whereas traditional research designs suggested advancing paternal age may have diminished the rate at which these problems occur."

The study, “Parental Age at Childbearing and Offspring Psychiatric and Academic Morbidity,” was published today in JAMA Psychiatry.

Notably, the researchers found converging evidence for the associations with advancing paternal age at childbearing from multiple research designs for a broad range of problems in offspring. By comparing siblings, which accounts for all factors that make children living in the same house similar, researchers discovered that the associations with advancing paternal age were much greater than estimates in the general population. By comparing cousins, including first-born cousins, the researchers could examine whether birth order or the influences of one sibling on another could account for the findings.
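
For readers curious about what comparing siblings buys you statistically, below is a minimal, hypothetical sketch of the design logic in Python: within a family, ask whether the child born when the father was older fares worse than the child born when he was younger, which cancels everything the siblings share. All of the numbers, variable names and effect sizes are invented for illustration; the study’s actual models are far more sophisticated.

```python
# Hypothetical sketch of a sibling (within-family) comparison.
# All data here are simulated; this only illustrates the design logic,
# not the models actually fitted in the JAMA Psychiatry study.
import numpy as np

rng = np.random.default_rng(0)
n_families = 100_000

# Father's age at the first child's birth, and 2-8 years later for the second.
age_first = rng.uniform(20, 40, n_families)
age_second = age_first + rng.uniform(2, 8, n_families)

# An unobserved trait shared by siblings (genes, household), here assumed to
# be lower (protective) in families where the father starts later -- the kind
# of confounding that can mask a paternal-age effect in population data.
family_trait = -0.05 * (age_first - 30) + rng.normal(size=n_families)

def problem_score(paternal_age, trait):
    """Assumed data-generating process: a small true paternal-age effect
    plus a large contribution from the shared family trait."""
    return 0.02 * paternal_age + 1.0 * trait + rng.normal(size=paternal_age.shape)

y_first = problem_score(age_first, family_trait)
y_second = problem_score(age_second, family_trait)

# Naive population estimate: pool all children and ignore family membership.
ages = np.concatenate([age_first, age_second])
ys = np.concatenate([y_first, y_second])
naive_slope = np.polyfit(ages, ys, 1)[0]

# Sibling comparison: regress the sibling *difference* in outcome on the
# difference in paternal age, which cancels everything the siblings share.
sibling_slope = np.polyfit(age_second - age_first, y_second - y_first, 1)[0]

print(f"naive population slope: {naive_slope:+.3f} per year of paternal age")
print(f"within-sibling slope:   {sibling_slope:+.3f} per year of paternal age")
# With this toy setup the pooled estimate understates (here even reverses)
# the simulated paternal-age effect, while the sibling comparison recovers it.
```

That is the sense in which a sibling comparison can surface an association that a conventional population comparison understates, as the article describes.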

The authors also statistically controlled for parents’ highest level of education and income, factors often thought to counteract the negative effects of advancing paternal age because older parents are more likely to be more mature and financially stable. The findings were remarkably consistent, however, as the specific associations with advancing paternal age remained.

"The findings in this study are more informative than many previous studies," D’Onofrio said. "First, we had the largest sample size for a study on paternal age. Second, we predicted numerous psychiatric and academic problems that are associated with significant impairment. Finally, we were able to estimate the association between paternal age at childbearing and these problems while comparing differentially exposed siblings, as well as cousins. These approaches allowed us to control for many factors that other studies could not."

In the past 40 years, the average age at childbearing has been increasing steadily for both men and women. Since 1970, for instance, the average age of first-time mothers in the U.S. has risen about four years, from 21.5 to 25.4; for men, the average is three years older, and in the Northeast the ages are higher still. Yet the implications of this shift, both socially and in terms of the long-term effects on the health and well-being of the population as a whole, are not yet fully understood.

Moreover, while maternal age has been under scrutiny for a number of years, a more recent body of research has begun to explore the possible effects of advancing paternal age on a variety of physical and mental health issues in offspring. Existing studies have pointed to increasing risks for some psychological disorders with advancing paternal age. Yet the results are often inconsistent with one another, statistically inconclusive or unable to take certain confounding factors into account.

The working hypothesis for D’Onofrio and his colleagues who study this phenomenon centers on sperm biology: unlike women, who are born with all their eggs, men continue to produce new sperm throughout their lives, and each time the sperm-producing cells replicate there is a chance for a new mutation in the DNA to occur. As men age, they are also exposed to numerous environmental toxins, which have been shown to cause mutations in the DNA found in sperm. Molecular genetic studies have, in fact, shown that the sperm of older men carry more genetic mutations.

This study and others like it, however, perhaps signal some of the unforeseen negative consequences of a relatively new trend in human history. As such, D’Onofrio said, it may have important social and public policy implications. Given the increased risk associated with advancing paternal age at childbearing, policymakers may want to make it easier for men and women to have children earlier in their lives without having to set aside other goals.

"While the findings do not indicate that every child born to an older father will have these problems," D’Onofrio said, "they add to a growing body of research indicating that advancing paternal age is associated with increased risk for serious problems. As such, the entire body of research can help to inform individuals in their personal and medical decision-making."


fuckyeahfluiddynamics:

Hospital-acquired infections are a serious health problem. One potential source of contamination is through the spread of pathogen-bearing droplets emanating from toilet flushes. The video above includes high-speed flow visualization of the large and small droplets that get atomized during the flush of a standard hospital toilet. Both are problematic for the spread of pathogens; the large droplets settle quickly and contaminate nearby surfaces, but the small droplets can remain suspended in the air for an hour or more. Even more distressing is the finding that conventional cleaning products lower surface tension within the toilet, aggravating the problem by allowing even more small droplets to escape. (Video credit: G. Traverso et al.)

(Source: arxiv.org)
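
As rough physical context for why droplet size matters so much (a back-of-the-envelope estimate, not an analysis from the video): under Stokes' law a small droplet's settling speed grows with the square of its diameter, so a modest reduction in size buys a large increase in airborne time. The droplet sizes below are assumed illustrative values.

```python
# Back-of-the-envelope Stokes settling speeds for water droplets in still air.
# Illustrative only; the droplet sizes are assumed, not measured from the video.
g = 9.81          # gravitational acceleration, m/s^2
rho_water = 1000  # droplet density, kg/m^3
rho_air = 1.2     # air density, kg/m^3
mu_air = 1.8e-5   # dynamic viscosity of air, Pa*s

def settling_speed(diameter_m):
    """Stokes terminal velocity v = (rho_p - rho_f) * g * d^2 / (18 * mu)."""
    return (rho_water - rho_air) * g * diameter_m**2 / (18 * mu_air)

for d_um in (100, 10, 3):
    v = settling_speed(d_um * 1e-6)
    fall_time_min = 1.5 / v / 60   # time to fall 1.5 m, in minutes
    print(f"{d_um:>3} um droplet: settles at {100 * v:.2f} cm/s, "
          f"~{fall_time_min:.1f} min to fall 1.5 m")
```

Stokes' law is only approximate for the biggest drops and ignores evaporation and air currents, but it captures the point the video makes: coarse droplets rain out onto nearby surfaces within seconds, while the fine aerosol fraction can stay aloft for tens of minutes to more than an hour.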

neurosciencestuff:

Listen to this: Research upends understanding of how humans perceive sound

A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.

The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have significant impact on future research for treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.

“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”

The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.

Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ears’ sensitivity to match the noise level of the environment.

The traditional explanation for how adaptation works, based on earlier research on frogs and turtles, is that it is controlled by at least two complex cellular mechanisms both requiring calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in auditory cochlear hair cells.

Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells as well as high-speed, high-resolution imaging to track calcium signals quickly before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.

“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.

The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.

“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”

Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.

As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of this limited supply of hair cells.

“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”


neurosciencestuff:

Seeing in the Dark

Find a space with total darkness and slowly move your hand from side to side in front of your face. What do you see?

If the answer is a shadowy shape moving past, you are probably not imagining things. With the help of computerized eye trackers, a new cognitive science study finds that at least 50 percent of people can see the movement of their own hand even in the absence of all light.

"Seeing in total darkness? According to the current understanding of natural vision, that just doesn’t happen," says Duje Tadin, a professor of brain and cognitive sciences at the University of Rochester who led the investigation. "But this research shows that our own movements transmit sensory signals that also can create real visual perceptions in the brain, even in the complete absence of optical input."

Through five separate experiments involving 129 individuals, the authors found that this eerie ability to see our hand in the dark points to how our brain combines information from different senses to create our perceptions. The ability also “underscores that what we normally perceive of as sight is really as much a function of our brains as our eyes,” says first author Kevin Dieter, a post-doctoral fellow in psychology at Vanderbilt University.

The study seems to confirm anecdotal reports that spelunkers in lightless caves often are able to see their hands. In other words, the “spelunker illusion,” as one blogger dubbed it, is likely not an illusion after all.

For most people, this ability to see self-motion in darkness probably is learned, the authors conclude. “We get such reliable exposure to the sight of our own hand moving that our brains learn to predict the expected moving image even without actual visual input,” says Dieter.

Tadin, Dieter, and their team from the University of Rochester and Vanderbilt University reported their findings online October 30 in Psychological Science, the flagship journal of the Association for Psychological Science.

Although seeing one’s hand move in the dark may seem simple, the experimental challenge in this study was to measure objectively a perception that is, at its core, subjective. That hurdle at first stumped Tadin and his postdoctoral advisor at Vanderbilt, Randolph Blake, after they stumbled upon the puzzling observation in 2005. “While the phenomenon looked real to us, how could we determine if other people were really seeing their own moving hand rather than just telling us what they thought we wanted to hear?” asks Blake, the Centennial Professor of Psychology at Vanderbilt and a co-author on the paper.

Years later, Dieter, at the time a doctoral student working in Tadin’s Rochester lab, helped devise several experiments to probe the sight-without-light mystery. For starters, the researchers set up false expectations. In one scenario, they led subjects to expect to see “motion under low lighting conditions” with blindfolds that appeared to have tiny holes in them. In a second setup, the same participants wore similar blindfolds without the “holes” and were led to believe they would see nothing. In both setups, the blindfolds were, in fact, equally effective at blocking out all light. A third experiment consisted of the experimenter waving his hand in front of the blindfolded subject. Ultimately, participants were fitted with a computerized eye tracker in total darkness to confirm whether self-reported perceptions of movement lined up with objective measures.

In addition to testing typical subjects, the team also recruited people who experience a blending of their senses in daily life. Known as synesthetes, these individuals may, for example, see colors when they hear music or even taste sounds. This study focused on grapheme-color synesthetes, individuals who always see numbers or letters in specific colors.

The researchers enlisted individuals from Rochester; Nashville; Fenton, Michigan; and Seoul, South Korea, but, in a lucky coincidence, one synesthete could not have been closer. At the time, Lindsay Bronnenkant was working as a lab technician for co-author David Knill, a professor of brain and cognitive sciences at Rochester.

"As a child, I just assumed that everybody associated colors with letters," says the 2010 Rochester graduate who majored in brain and cognitive sciences. For Bronnenkant, "A is always yellow, but Y is an oranger yellow." B is navy, C burnt orange, and so on. She thought of these associations as normal, "like when you smell apple pie and you think of grandma." She doesn’t remember a time when she did not see numbers and letters in color, but she does wonder if the particular colors she associates with numbers derived from the billiard balls her family had going up. When she donned the blindfold and waved her hand in the experiment, "what I saw was a blur. It was very dim, but it was almost like I was looking at a light source."

Bronnenkant was not atypical in that respect. Across all types of participants, about half detected the motion of their own hand and they did so consistently, despite the expectations created with the faux holes. And very few subjects saw motion when the experimenter waved his hand, underscoring the importance of self-motion in this visual experience. As measured by the eye tracker, subjects who reported seeing motion were also able to smoothly track the motion of their hand in darkness more accurately than those who reported no visual sensation—46 percent versus 20 percent of the time.

Reports of the strength of visual images varied widely among participants, but synesthetes were strikingly better at not just seeing movement, but also experiencing clear visual form. As an extreme example in the eye tracking experiment, one synesthete exhibited near perfect smooth eye movement—95 percent accuracy—as she followed her hand in darkness. In other words, she could track her hand in total darkness as well as if the lights were on.

"You can’t just imagine a target and get smooth eye movement," explains Knill. "If there is no moving target, your eye movements will be noticeably jerky."

The link with synesthesia suggests that our human ability to see self-motion is based on neural connections between the senses, says Knill. “We know that sensory cross talk underlies synesthesia. But seeing color with numbers is probably just the tip of the iceberg; synesthesia may involve many areas of atypical brain processing.”

Does that mean that most humans are preprogrammed to see themselves in the dark? Not likely, says Tadin. “Innate or experience? I’m pretty sure it’s experience,” he concludes. “Our brains are remarkably good at finding such reliable patterns. The brain is there to pick up patterns—visual, auditory, thinking, movement. And this is one association that is so highly repeatable that it is logical our brains picked up on it and exploited it.”

Whether hardwired or learned, Bronnenkant finds the cross talk between her senses a potent reminder of the underlying interconnectivity of nature. “It’s almost a spiritual thing,” she says. “Sometimes, yeah, I think to myself, ‘I just got this sense from a billiard ball,’ but other times I think that being able to cross modalities actually reflects how unified the world is. We think of math and chemistry and art as different fields, but really they are facets of the same world; they are just ways of looking at the world through different lenses.”