
Friday, December 7, 2012

USA - Deception can be perfected

With a little practice, one could learn to tell a lie that may be indistinguishable from the truth.

New Northwestern University research shows that lying is more malleable than previously thought, and with a certain amount of training and instruction, the art of deception can be perfected.

People generally take longer and make more mistakes when telling lies than telling the truth, because they are holding two conflicting answers in mind and suppressing the honest response, previous research has shown. Consequently, researchers in the present study investigated whether lying can be trained to be more automatic and less task demanding.

This research could have implications for law enforcement and the administering of lie detector tests to better handle deceptions in more realistic scenarios.

Researchers found that instruction alone significantly reduced reaction times associated with participants' deceptive responses.

They used three groups: a control group; an instruction group, whose participants were told to speed up their lies and make fewer errors but were given no time to prepare those lies; and a training group, whose participants were trained to speed up their deceptive responses and given time to prepare their lies. In the training group, which practiced its lies, the differences between deceptive and truthful responses were completely eliminated.

"We found that lying is more malleable and can be changed upon intentional practice," said Xiaoqing Hu, lead author of the study and a doctoral candidate in the department of psychology at Northwestern.

Hu said they were surprised that even in the instruction group, members who were not given time to prepare their lies and told only to try to speed up their responses and make fewer errors were able to significantly reduce their deceptive response reaction time.

"This was really unexpected because it suggests that people can be really flexible, and after they know what is expected from them, they want to avoid being detected," Hu said, noting the findings could help in crime fighting.

"In real life, there's usually a time delay between the crime and interrogation," said Hu. "Most people would have time to prepare and practice their lies prior to the interrogation." However, previous research in deception usually gave participants very little time to prepare their lies.

Lie detector tests most often rely on physiological responses. Therefore, Hu said, further research is warranted to examine whether additional training could produce physiological changes in addition to the behavioral changes observed in this study.

More information: "A Repeated Lie Becomes a Truth? The Effect of Intentional Control and Training on Deception" was recently published in Frontiers in Cognitive Science. www.frontiersin.or… 488/abstract

Provided by Northwestern University

"Deception can be perfected." December 6th, 2012. http://medicalxpress.com/news/2012-12-deception.html

USA - Antiseptic products can be contaminated, study says

They aren't always produced in sterile environments and can cause infections, experts say.

Antiseptics are meant to keep bacteria and other pathogens from entering the body through breaks in the skin, but sometimes these products can be contaminated with the very organisms they're supposed to guard against, new research shows.

In the Dec. 6 issue of the New England Journal of Medicine, scientists from the U.S. Food and Drug Administration detail recent outbreaks that have occurred in products such as single-use alcohol swabs and pre-surgery antiseptics.

"It is important that health care providers be aware that topical antiseptic products, if contaminated, pose a risk of infection and that particular microbes isolated from clinical specimens have been traced to the contamination of such products," the FDA experts wrote in the report.

How can products that are supposed to kill germs contain germs?

"Nothing is 100 percent. Bacteria are really diverse and they're adapted to living in different environments," said Dr. Bruce Hirsch, an attending physician in infectious diseases at North Shore University Hospital in Manhasset, N.Y.

However, Hirsch added that he believes the FDA "should require sterilized manufacturing whenever possible."

Currently, companies that produce antiseptic products aren't required to manufacture these products in sterile environments.

The regulations surrounding the production of antiseptic products were designed in the 1970s. At the time, it was assumed that antiseptic products didn't need to be produced in a sterile environment, because experts believed that any pathogens present would be killed by the antiseptic.

But, according to the FDA researchers, a number of outbreaks associated with these products have been reported in medical journals and to the U.S. Centers for Disease Control and Prevention. The authors also noted that there are probably more outbreaks related to these products than have been reported, because this method of contamination is quite difficult to detect.

"Sometimes it's really tricky to trace where an organism came from," explained Dr. Mohamed Fakih, medical director of infection prevention and control at St. John Hospital and Medical Center in Detroit. He said that most hospitals use one product to clean the skin before surgery, but even if that product is contaminated, not everyone will get sick from it. And, because everyone—both the sick and healthy—had the same product used on their skin, it's difficult to isolate the antiseptic as the cause of the infection.

In the case of single-use products, such as an alcohol swab, by the time someone has reported an infection, the packaging is gone and can't be tested for contamination, Fakih added.

Contamination of these products can occur during manufacturing or at the point of use, according to previous research. Products that have been found to be contaminated include iodophors (antiseptics that contain iodine), alcohol products, chlorhexidine gluconate (used in hand sanitizers and as a pre-surgical antiseptic), and quaternary ammonium (an antiseptic used to clean surgical and medical procedure equipment).

The FDA is currently reviewing whether or not sterile manufacturing should be mandated for topical antiseptics that are meant to be used on broken skin, such as from injury, medical procedures or surgery. Public hearings on the subject are scheduled for Dec. 12 and Dec. 13.

Fakih said he believes that these products should be manufactured under sterile conditions. If the FDA doesn't mandate sterile manufacturing processes, he said it would be very helpful if they at least mandated that labeling contain information about whether or not the product was manufactured in sterile conditions. He said it's very difficult to find that information right now.

Hirsch said when using these types of products, it's important to remember that "exceptions are exceptions. The products we're using are safe and helpful, and make a positive difference, but there's no 100 percent guarantee."

More information: To learn more about the public hearing on the issue of antiseptic products, visit the U.S. Food and Drug Administration's website.

Journal reference: New England Journal of Medicine

"Antiseptic products can be contaminated, study says." December 6th, 2012. http://medicalxpress.com/news/2012-12-antiseptic-products-contaminated.html

USA - New prenatal test, chromosomal microarray, proposed as standard of care

A large, multi-center clinical trial led by researchers from Columbia University Medical Center (CUMC) shows that a new genetic test resulted in significantly more clinically relevant information than the current standard method of prenatal testing. 

The test uses microarray analysis to conduct a more comprehensive examination of a fetus's DNA than is possible with the current standard method, karyotyping—a visual analysis of the fetus's chromosomes. Results were published in the Dec. 6, 2012, issue of The New England Journal of Medicine (NEJM).

The prospective, blinded trial involved 4,400 patients at 29 centers nationwide; the data took four years to compile. The study involved women with advanced maternal age and those whose fetuses were shown in early screening to be at heightened risk for Down syndrome, to have structural abnormalities (as seen with ultrasound), or to have indications of other problems. Ronald J. Wapner, MD, professor and vice chairman for research at the Department of Obstetrics and Gynecology at CUMC and director of reproductive genetics at NewYork-Presbyterian/Columbia, was principal investigator. This is the first and only study to prospectively compare karyotyping with microarray in a blinded head-to-head trial.

The trial found that microarray analysis, which compares a fetus's DNA with a normal (control) DNA, performed as well as karyotyping in identifying common aneuploidies (an abnormal number of chromosomes—an extra or missing chromosome causes genetic disorders such as Down syndrome and Edwards syndrome); it also identified additional abnormalities undetected by karyotyping.

Among fetuses in which a growth or structural anomaly had already been detected with ultrasound, microarray found clinically relevant chromosomal deletions or duplications in one out of 17 cases (6%) that were not observed with karyotyping. In cases sampled for advanced maternal age or positive screening results, microarray analysis picked up an abnormality in one out of every 60 pregnancies (1.7%) that had a normal karyotype.
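As a quick check on the arithmetic behind those figures, here is a short Python sketch (ours, not from the study) that converts the reported case counts into the quoted percentages:

```python
# Convert the reported case counts into the quoted percentages.
def extra_detection_rate(found: int, total: int) -> float:
    """Fraction of karyotype-normal cases in which microarray found
    an additional clinically relevant deletion or duplication."""
    return found / total

anomaly_rate = extra_detection_rate(1, 17)    # ultrasound-anomaly group
screening_rate = extra_detection_rate(1, 60)  # advanced age / screen-positive group

print(f"anomaly group:   {anomaly_rate:.1%}")    # ~5.9%, reported as 6%
print(f"screening group: {screening_rate:.1%}")  # ~1.7%
```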

"Based on our findings, we believe that microarray will and should replace karyotyping as the standard for evaluating chromosomal abnormalities in fetuses," said Dr. Wapner. "These chromosomal micro-deletions and duplications found with microarray are often associated with significant clinical problems."

As with karyotyping, microarray requires fetal cells obtained via an invasive procedure such as amniocentesis, in which fetal cells are taken from the amniotic fluid, or chorionic villus sampling, in which cells are taken from the placenta. Parents deciding whether to take these tests must weigh a number of factors, including their individual risk of fetal abnormalities, possible procedure-induced miscarriage, and the consequences of having an affected child. Work is underway to develop a non-invasive test that would provide the same information as microarray from a blood sample taken from the mother, but no such test is yet available.

"We hope that in the future—when microarray can be done non-invasively—every woman who wishes will be offered microarray, so that she can have as complete information as possible about her pregnancy," said Dr. Wapner.

Second Paper Published in Same Issue of NEJM re: Microarray in Stillbirth

In a second paper published in the same issue of NEJM, on the use of microarray in stillbirth, microarray produced a clinically relevant result in 87% of the 532 cases analyzed with both karyotyping and microarray. In contrast, standard methods for analyzing a stillbirth, which include karyotyping, have been shown in previous research to fail to return information in 25% of cases.

"Microarray was significantly more successful at returning clinically relevant information because, unlike karyotyping, it does not require cultured cells. Viability does not come into play at all—DNA can be extracted from tissue that is no longer living," said study senior author Brynn Levy, MSc(Med), PhD, associate professor of pathology and cell biology, co-director of the Division of Personalized Genomic Medicine at CUMC, and director of the Clinical Cytogenetics Laboratory at NewYork-Presbyterian/Columbia. "Not being able to explain why a stillbirth occurred can be very hard for families. These findings are important because they give us a significantly more reliable method to provide information to families and their physicians."

"The primary benefit of using microarray analysis rather than karyotype analysis is the greater likelihood of obtaining results," said Uma Reddy, MD, MPH of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) of the National Institutes of Health, first author of the paper.

"Microarray analysis is especially useful in stillbirth cases in which the karyotype has failed or there is a birth defect present. However, microarray analysis is currently more expensive than karyotyping – and this may be a barrier to some."

The study used data compiled by NICHD's Stillbirth Collaborative Research Network (SCRN), a population-based study of stillbirth in five geographic catchment areas.

Clinical Significance of Chromosomal Abnormalities Growing by Leaps and Bounds

Both microarray and karyotyping reveal clinically relevant information about conditions that can be life-threatening for a newborn baby or that can signal a possible health threat that might be treatable. However, as with all current genetic testing, including karyotyping, the results may reveal findings that have not been described in the literature or whose exact implications are not yet known.

"While the vast majority of abnormalities found with microarray are associated with known conditions, not all are. But with time, knowledge and understanding of what these abnormalities mean will continue to grow," said Dr. Wapner. "When we started this study five years ago, the incidence of findings we did not understand was about 2.5%—now, with more information, that has fallen to 1.5%."

Dr. Wapner is currently leading phase two of his microarray study. Supported by the NICHD (NIH Grant No. 2U01HD055651-06; Project No. GG006961), he has begun a five-year study to follow children born to mothers who underwent microarray, to learn the clinical implications of micro-deletions or duplications that are not yet understood.

"Unfortunately, it is sometimes difficult to predict the full spectrum of some diseases indicated by a particular deletion or duplication," said Dr. Wapner. "Genetic medicine is about obtaining genomic information about an individual and predicting what effect it will have on that person. But we are all different—so a genetic abnormality in one person may behave differently than in someone else. For example, an inherited disease could be mild in the mother but severe in her child. We are studying what these abnormalities mean clinically, as science continues to catch up with our ability to obtain the information."

More information: Dr. Wapner's paper is titled, "Chromosomal Microarray Compared with Karyotyping for Prenatal Diagnosis." An abstract was presented in Feb. 2012 at the Society for Maternal-Fetal Medicine annual meeting.

Journal reference: New England Journal of Medicine


"New prenatal test, chromosomal microarray, proposed as standard of care." December 5th, 2012. http://medicalxpress.com/news/2012-12-prenatal-chromosomal-microarray-standard.html

USA - Research on blood vessel proteins holds promise for controlling 'blood-brain barrier'

This shows blood vessels near the center of a healthy mouse retina -- arteries in green, veins in red.

Working with mice, Johns Hopkins researchers have shed light on the activity of a protein pair found in cells that form the walls of blood vessels in the brain and retina, experiments that could lead to therapeutic control of the blood-brain barrier and of blood vessel growth in the eye.

Their work reveals a dual role for the protein pair, called Norrin/Frizzled-4, in managing the blood vessel network that serves the brain and retina. The first job of the protein pair's signaling is to form the network's proper 3-D architecture in the retina during fetal development. The second job, after birth, is to continue signaling to maintain the blood-brain barrier, which gives the brain an extra layer of protection against infection transmitted through the circulatory system.

The Hopkins researchers say results of the study, published online in Cell on Dec. 7, could have treatment implications for disorders of the retinal blood vessels caused by diabetes, and age-related loss of central vision. They also could help clinicians develop a way to temporarily increase the penetrability of the blood-brain barrier, allowing critical drugs to pass through to the brain, says Jeremy Nathans, M.D., Ph.D., a Howard Hughes researcher and professor of molecular biology and genetics at the Institute for Basic Biomedical Sciences at the Johns Hopkins School of Medicine.

Scientists already knew that Frizzled-4 is a protein located on the surface of the cells that form blood vessel walls throughout the body. Genetic mutations that cause Frizzled-4's absence in mice and humans create severe defects in blood vessel development, but only in the retina, the light-absorbing sheet of cells at the back of the eye. Retinal tissue consumes more oxygen per gram than any other tissue in the body, so three networked layers of blood vessels are required to fulfill its oxygen needs. Blood vessel defects in the retina therefore generally starve it of oxygen, causing blindness.

In an effort to understand how Frizzled-4 and its activator Norrin work normally, Nathans' team deleted Norrin in mice. As a result, the rodents' retinal arteries and veins became confused and crisscrossed.

Alternatively, if they turned Norrin on earlier than usual, the networks began to develop earlier. And in mice missing either Norrin or Frizzled-4, retinal blood vessels grew radially, but they grew slowly and failed to create the second and third networked layers. All of these results suggest that Norrin and Frizzled-4 play an important role in the proper timing and arrangement of the retinal blood vessel network, Nathans says.

The team also found that mice missing just Frizzled-4, besides having major structural defects in their retinal blood vessels, showed signs of a leaky blood-brain barrier and, similarly, a leaky blood-retina barrier. To get at the cause of this, the team used special genetic tricks to control the activity of Frizzled-4 in a time- and cell-specific manner, creating mice that were missing Frizzled-4 in only about one out of every 20 endothelial cells. What they found is that only the cells missing Frizzled-4 were leaky and, surprisingly, the general architecture of the networks was fine.

Nathans explains that, normally, these blood vessel endothelial cells contain permeable "windows" and relatively loose "bolts" connecting the cells together. When in the brain and retina, they have no "windows" and their "bolts" connect them tightly. Nathans adds, "We now know that endothelial cells that make up the blood-brain barrier have to receive signals constantly from nearby brain or retinal cells telling them, 'You're in the brain. Tighten your bolts and close your windows.'"

The "windows" in the other endothelial cells in the body are protein portals that allow large molecules to pass through easily—to be filtered by the kidneys, for example. The central nervous system, including the retina, is a privileged area. If toxins were to pass through an endothelial "window" into the brain, the resulting damage could be detrimental to the brain's activity. So the body seals off these areas from bloodborne pathogens by tightening the "bolts" between and closing the "windows" of the endothelial cells that form the blood vessels servicing those areas. This reinforcement of the endothelial cells is what is known as the blood-brain barrier.

Although crucial to protecting the central nervous system, the blood-brain barrier also prevents drugs in the bloodstream from getting inside the brain to treat diseases, such as cancer. "Our research shows that blood vessel cells lacking Frizzled-4 are leaky. With this information in hand, we hope that someday it may be possible to temporarily loosen the blood-brain barrier, allowing life-saving drugs to pass through," says Nathans.

Journal reference: Cell


"Research on blood vessel proteins holds promise for controlling 'blood-brain barrier'." December 6th, 2012. http://medicalxpress.com/news/2012-12-blood-vessel-proteins-blood-brain-barrier.html

USA - After 100 years, understanding the electrical role of dendritic spines

This is a computer simulation showing color-coded voltage (in millivolts) in a portion of a neuron’s dendritic tree. Computer-generated spines have been attached to the dendrites, and synapses on seven spines near the center have been activated, raising the voltage at those locations. The simulation quantifies the spread of electric charge and the accompanying voltage rise in neighboring parts of the dendritic tree a short time (1-1/3 milliseconds) after the synapses have been activated.

It's the least understood organ in the human body: the brain, a massive network of electrically excitable neurons, all communicating with one another via receptors on their tree-like dendrites. Somehow these cells work together to enable great feats of human learning and memory. But how?

Researchers know dendritic spines play a vital role. These tiny membranous structures protrude from dendrites' branches; spread across the entire dendritic tree, the spines on one neuron collect signals from an average of 1,000 others. But more than a century after they were discovered, their function still remains only partially understood.

A Northwestern University researcher, working in collaboration with scientists at the Howard Hughes Medical Institute (HHMI) Janelia Farm Research Campus, has recently added an important piece of the puzzle of how neurons "talk" to one another. The researchers have demonstrated that spines serve as electrical compartments in the neuron, isolating and amplifying electrical signals received at the synapses, the sites at which neurons connect to one another.

The discovery is the result of innovative experiments at the Janelia Farm Research Campus and computer simulations performed at Northwestern University that together measure electrical responses on spines throughout the dendrites.

A paper about the findings, "Synaptic Amplification by Dendritic Spines Enhances Input Cooperativity," was published November 22 in the journal Nature.

"This research conclusively shows that dendritic spines respond to and process synaptic inputs not just chemically, but also electrically," said William Kath, professor of engineering sciences and applied mathematics at Northwestern's McCormick School of Engineering, professor of neurobiology at the Weinberg College of Arts and Sciences, and one of the paper's authors.

Dendritic spines come in a variety of shapes, but typically consist of a bulbous spine head at the end of a thin tube, or neck. Each spine head contains one or more synapses and is located in very close proximity to an axon coming from another neuron.

Scientists have gained insight into the chemical properties of dendritic spines: receptors on their surface are known to respond to a number of neurotransmitters, such as glutamate and glycine, released by other neurons. But because of the spines' incredibly small size—roughly 1/100 the diameter of a human hair—their electrical properties have been harder to study.

In this study, researchers at the HHMI Janelia Farm Research Campus used three experimental techniques to assess the electrical properties of dendritic spines in rats' hippocampi, a part of the brain that plays an important role in memory and spatial navigation. First, the researchers used two miniature electrodes to administer current and measure its voltage response at different sites throughout the dendrites.

They also used a technique called "glutamate uncaging," a process that involves releasing glutamate, an excitatory neurotransmitter, to evoke electrical responses from specific synapses, as if the synapse had just received a signal from a neighboring neuron. A third process utilized a calcium-sensitive dye—calcium is a chemical indicator of a synaptic event—injected into the neuron to provide an optical representation of voltage changes within the spine.

At Northwestern, researchers used computational models of real neurons—reconstructed from the same type of rat neurons—to build a 3D representation of the neuron with accurate information about each dendrite's placement, diameter, and electrical properties. The computer simulations, in concert with the experiments, indicated that spines' electrical resistance is consistent throughout the dendrites, regardless of where on the dendritic tree they are located.
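The paper's detailed model is far more elaborate, but the basic electrical idea (synaptic current forced through a high-resistance spine neck produces a much larger voltage in the spine head than in the parent dendrite) can be sketched with Ohm's law. The current and resistance values below are illustrative assumptions, not measurements from the study:

```python
# Steady-state Ohm's-law sketch of spine-head amplification.
# Units: nA * MOhm = mV, so the arithmetic stays in millivolts.
def spine_voltages(i_syn_nA: float, r_neck_MOhm: float, r_dend_MOhm: float):
    """Return (head_mV, dendrite_mV) for a synaptic current injected at
    the spine head; the current flows through the neck resistance in
    series with the dendrite's local input resistance."""
    v_dend = i_syn_nA * r_dend_MOhm                   # voltage at the parent dendrite
    v_head = i_syn_nA * (r_neck_MOhm + r_dend_MOhm)   # voltage in the spine head
    return v_head, v_dend

head, dend = spine_voltages(i_syn_nA=0.1, r_neck_MOhm=500, r_dend_MOhm=100)
print(f"spine head: {head:.0f} mV, dendrite: {dend:.0f} mV")
```

With these hypothetical values, the spine head sees a voltage about six times larger than the dendrite, illustrating how a spine can act as an electrically isolated, amplifying compartment.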

While much research is still needed to gain a full understanding of the brain, knowledge about spines' electrical processing could lead to advances in the treatment of diseases like Alzheimer's and Huntington's diseases.

"The brain is much more complicated than any computer we've ever built, and understanding how it works could lead to advances not just in medicine, but in areas we haven't considered yet," Kath said. "We could learn how to process information in ways we can only guess at now."

Journal reference: Nature


"After 100 years, understanding the electrical role of dendritic spines." December 5th, 2012. http://medicalxpress.com/news/2012-12-years-electrical-role-dendritic-spines.html

Wednesday, December 5, 2012

USA - Infants learn to look and look to learn

Researchers at the University of Iowa have documented an activity by infants that begins nearly from birth: They learn by taking inventory of the things they see.

In a new paper, the psychologists contend that infants create knowledge by looking at and learning about their surroundings. The activities should be viewed as intertwined, rather than considered separately, to fully appreciate how infants gain knowledge and how that knowledge is seared into memory.

"The link between looking and learning is much more intricate than what people have assumed," says John Spencer, a psychology professor at the UI and a co-author on the paper published in the journal Cognitive Science.

The researchers created a mathematical model that mimics, in real time and through months of child development, how infants use looking to understand their environment. Such a model is important because it validates the importance of looking to learning and to forming memories. It also can be adapted by child development specialists to help special-needs children and infants born prematurely to combine looking and learning more effectively.

"The model can look, like infants, at a world that includes dynamic, stimulating events that influence where it looks. We contend (the model) provides a critical link to studying how social partners influence how infants distribute their looks, learn, and develop," the authors write.

The model examines the looking-learning behavior of infants as young as 6 weeks through one year of age, through 4,800 simulations at various points in development involving multiple stimuli and tasks. As would be expected, most infants introduced to new objects tend to look at them to gather information about them; once they do, they are "biased" to look away from them in search of something new. In other words, an infant will linger on something being shown to it for the first time as it learns about it, and its total looking time will decrease as it becomes more familiar with the object.
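The UI model itself is a dynamic neural field model and far richer than this, but the look-to-learn loop described above can be caricatured in a few lines of Python: looking builds a memory trace, and a stronger trace shortens subsequent looks. All numbers here are made up for illustration:

```python
# Toy habituation loop (not the authors' model): looking time to a
# familiar object falls as an internal memory trace for it accumulates.
def looking_times(n_trials: int, learn_rate: float = 0.4, base_look: float = 10.0):
    """Simulate look durations across repeated presentations of one object.
    Each look shrinks in proportion to the memory trace, and the trace
    grows with the time just spent looking (the look-to-learn loop)."""
    memory, times = 0.0, []
    for _ in range(n_trials):
        look = base_look * (1.0 - memory)         # novel object => long look
        memory = min(1.0, memory + learn_rate * look / base_look)
        times.append(look)
    return times

times = looking_times(5)
print([round(t, 1) for t in times])  # look times shrink with familiarity
```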

But the researchers found that infants who don't spend a sufficient amount of time studying a new object—in effect, failing to learn about it and to catalog that knowledge into memory—don't catch on as well, which can affect their learning later on.

"Infants need to dwell on things for a while to learn about them," says Sammy Perone, a post-doctoral researcher in psychology at the UI and corresponding author on the paper.

To examine why infants need to dwell on objects to learn about them, the researchers created two different models. One model learned in a "responsive" world: Every time the model looked away from a new object, the object was jiggled to get the model to look at it again. The other model learned in a "nonresponsive" world: When this model looked at a new object, objects elsewhere were jiggled to distract it. The results showed that the responsive models "learned about new objects more robustly, more quickly, and are better learners in the end," says Perone, who earned his doctorate at the UI in 2010.

The model captures infant looking and learning as young as 6 weeks. Even at that age, the UI researchers were able to document that infants can familiarize themselves with new objects and store them in memory well enough that, when shown the objects again, they quickly recognized them.

"To our knowledge, these are the first quantitative simulations of looking data from infants this young," the authors write.

The results underscore the notion that looking is a critical entry point into the cognitive processes in the brain that begin in children nearly from birth. And, "if that's the case, we can manipulate and change what the brain is doing" to aid infants born prematurely or who have special needs, Perone adds.

"The promise of a model that implements looking as an active behavior is that it might explain and predict how specific manipulations of looking over time will impact subsequent learning," the researchers write.

Provided by University of Iowa 

Canada - Could high insulin make you fat? Mouse study says yes

When we eat too much, obesity may develop as a result of chronically high insulin levels, not the other way around. That's according to new evidence in mice reported in the December 4th Cell Metabolism, a Cell Press publication, which challenges the widespread view that rising insulin is a secondary consequence of obesity and insulin resistance.

The new study helps to solve this chicken-or-the-egg dilemma by showing that animals with persistently lower insulin stay trim even as they indulge themselves on a high-fat, all-you-can-eat buffet. The findings come as some of the first direct evidence in mammals that circulating insulin itself drives obesity, the researchers say.

The results are also consistent with clinical studies showing that long-term insulin use by people with diabetes tends to come with weight gain, says James Johnson of the University of British Columbia.

"We are very inclined to think of insulin as either good or bad, but it's neither," Johnson said. "This doesn't mean anyone should stop taking insulin; there are nuances and ranges at which insulin levels are optimal."

Johnson and his colleagues took advantage of a genetic quirk in mice: they have two insulin genes. Insulin1 shows up primarily in the pancreas, and insulin2 in the brain as well as the pancreas. By eliminating insulin2 altogether and varying the number of good copies of insulin1, the researchers produced mice that varied only in their fasting blood insulin levels. When presented with high-fat food, those with one copy and lower fasting insulin were completely protected from obesity, even without any loss of appetite. They also enjoyed lower levels of inflammation and less fat in their livers.

Those differences traced to a "reprogramming" of the animals' fat tissue to burn and waste more energy in the form of heat. In other words, the mice had white fat that looked and acted more like the coveted, calorie-burning brown fat most familiar for keeping babies warm.

Johnson says it isn't clear what the findings might mean in the clinic just yet, noting that drugs designed to block insulin have been shown to come with unwanted side effects. But, he added, "there are ways to eat and diets that keep insulin levels lower or that allow insulin levels to return to a healthy baseline each day."

Unfortunately, constant snacking is probably not the answer.

More information: Mehran et al., "Hyperinsulinemia drives diet-induced obesity independently of brain insulin production." Cell Metabolism, DOI: 10.1016/j.cmet.2012.10.019

Journal reference: Cell Metabolism

Provided by Cell Press

Europe - Moderate coffee consumption may reduce risk of diabetes by up to 25 percent

Drinking three to four cups of coffee per day may help to prevent type 2 diabetes, according to research highlighted in a session report published by the Institute for Scientific Information on Coffee (ISIC), a not-for-profit organisation devoted to the study and disclosure of science related to coffee and health.

Recent scientific evidence has consistently linked regular, moderate coffee consumption with a possible reduced risk of developing type 2 diabetes. An update of this research and key findings presented during a session at the 2012 World Congress on Prevention of Diabetes and Its Complications (WCPD) is summarised in the report.

The report outlines the epidemiological evidence linking coffee consumption to diabetes prevention, highlighting research showing that three to four cups of coffee per day is associated with an approximately 25 per cent lower risk of developing type 2 diabetes, compared with consuming no coffee or fewer than two cups per day. Another study found an inverse dose-dependent effect, with each additional cup of coffee reducing the relative risk by 7-8 per cent.
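
Taken at face value, those two figures are roughly consistent with each other: if each additional daily cup independently cut relative risk by about 7 per cent, three to four cups would compound to roughly a 25 per cent reduction. A toy calculation (per-cup independence is a simplifying assumption of this sketch, not something the cited studies establish):

```python
# Toy compounding of the per-cup figure quoted above. Assumes each
# additional daily cup independently multiplies relative risk by ~0.93
# (an illustrative assumption, not a claim of the studies).
def relative_risk(cups, per_cup_reduction=0.07):
    return (1 - per_cup_reduction) ** cups

for cups in range(5):
    print(f"{cups} cups/day: relative risk ~ {relative_risk(cups):.2f}")
# At 4 cups/day this gives ~0.75, i.e. roughly the 25 per cent lower
# risk reported for three to four cups.
```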

Whilst these epidemiological studies suggest an association between moderate coffee consumption and a reduced risk of developing diabetes, they are unable to infer a causal effect. As such, clinical intervention trials are required to study the effect in a controlled setting. One prospective randomized controlled trial tested glucose and insulin responses after an oral glucose tolerance test given with 12 g decaffeinated coffee, 1 g chlorogenic acid, 500 mg trigonelline, or placebo. The study demonstrated that chlorogenic acid and trigonelline reduced early glucose and insulin responses, and may contribute to the putative beneficial effect of coffee.

The report notes that the association between coffee consumption and a reduced risk of type 2 diabetes could be seen as counter-intuitive, as drinking coffee is often linked to unhealthier habits, such as smoking and low levels of physical activity. Furthermore, studies have illustrated that moderate coffee consumption is not associated with an increased risk of hypertension, stroke or coronary heart disease. Research in patients with CVD has also shown that moderate coffee consumption is inversely associated with risk of heart failure, with a J-shaped relationship.

Finally, the report puts forward some of the key mechanistic theories that underlie the possible relationship between coffee consumption and the reduced risk of diabetes. These include the 'Energy Expenditure Hypothesis', which suggests that the caffeine in coffee stimulates metabolism and increases energy expenditure, and the 'Carbohydrate Metabolic Hypothesis', whereby it is thought that coffee components play a key role by influencing the glucose balance within the body. There is also a subset of theories suggesting that coffee contains components that may improve insulin sensitivity through mechanisms such as modulating inflammatory pathways, mediating the oxidative stress of cells, hormonal effects or reducing iron stores.

Dr. Pilar Riobó Serván, Associate Chief of Endocrinology and Nutrition, Jiménez Díaz-Capio Hospital of Madrid and a speaker at the WCPD session, concludes the report, commenting: "A dose-dependent inverse association between coffee drinking and total mortality has been demonstrated in the general population, and it persists among diabetics. Although more research on the effects of coffee on health is still needed, current information suggests that coffee is not as bad as previously considered!"



USA - Unexpected toughness may mark out cancer cells in the blood

A syringe needle serves as the heart of a new, experimental microfluidic protocol to expose cancer cells to fluid shear stress. A new University of Iowa (UI) study suggests that resistance to fluid shear stress may be a biomarker for cancer cells, which could be used to improve detection and monitoring of circulating cancer cells in blood.

A surprising discovery about the physical properties of cancer cells could help improve a new diagnostic approach – a liquid biopsy – that detects, measures, and evaluates cancer cells in blood.

Cancer cells circulating in the bloodstream can form metastases – new tumors. Detecting these rare circulating cancer cells in a blood sample is much less invasive than a standard tumor biopsy, and could prove useful for monitoring cancer progression and detecting recurrence.

While studying what happens to cancer cells when they are subjected to powerful fluid forces, like those encountered in the bloodstream, researchers at the University of Iowa unexpectedly discovered that cancer cells are actually more likely to survive this turbulent fluid environment than normal epithelial cells.

The researchers suggest this surprising "hardiness" could be a potential biomarker for detecting and studying cancer cells in the blood. The findings were published Dec. 3 in the journal PLOS ONE.

"For many years, it's been assumed that these circulating cancer cells are quite fragile, and they essentially get 'blended' by the fluid forces in the blood," says Michael Henry, Ph.D., associate professor of molecular physiology and biophysics at the UI Carver College of Medicine and lead study author. "But there was no real direct evidence for how fluid forces in the blood affect cancer cells.

"What we found was that normal cells were, as expected, quite sensitive to the fluid forces and most did not survive. But, surprisingly, the cancer cells seemed to be remarkably resistant."

Henry suggests that resistance to fluid shear stress might be a way to distinguish benign from malignant cells in circulating tumor cell samples.

"By adding this really simple physical test to the isolation of circulating tumor cells, this technique might let us sort out malignant cells from benign cells. Being able to quantify the numbers of 'dangerous' cells might be a more accurate prognostic marker for the patient than simply counting the total number of circulating tumor cells," says Henry, who also is deputy director for research with Holden Comprehensive Cancer Center at the UI.

A simple system

Using a simple syringe and precise mathematical calculations of fluid dynamics, the UI team created an experimental system to mimic the short bursts of turbulent flow that a cancer cell might experience in the blood. Passing a suspension of cells through the syringe needle allowed the researchers to study the effect of a series of millisecond pulses of high fluid shear stress on a variety of different cancer cell types (prostate, breast, and melanoma) as well as normal epithelial cells from breast and prostate tissue.

After 10 passages through the syringe needle at high flow rate, around half of the cancer cells were still alive. In contrast, very few normal epithelial cells survived the process.

Closer examination of the cell survival data revealed an additional twist. The rate at which the cancer cells are destroyed by passage through the syringe is not constant over all 10 passages. Instead, exposure to fluid shear stress during the earlier passages seems to trigger adaptive responses in cancer cells that actually increase the cells' resistance to fluid shear stress.
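
To see why a non-constant destruction rate points to adaptation, compare two toy survival regimes (all numbers below are illustrative, not the study's data): a fixed per-passage survival probability, which decays exponentially, versus one that improves with each exposure.

```python
# Two toy survival regimes over 10 syringe passages (illustrative only).

def constant_survival(p=0.80, passages=10):
    # Fixed per-passage survival probability: simple exponential decay.
    return p ** passages

def adaptive_survival(p0=0.80, gain=0.03, passages=10):
    # Per-passage survival improves with each exposure (capped at 0.99),
    # mimicking an adaptive resistance response.
    alive, p = 1.0, p0
    for _ in range(passages):
        alive *= p
        p = min(0.99, p + gain)
    return alive

print(constant_survival())   # ~0.11: most cells lost
print(adaptive_survival())   # ~0.42: closer to the ~half observed
```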

The UI team went on to show that this "toughening up" process appears to involve expression of common cancer-causing genes. They also showed that blocking the signaling pathway controlled by one of these oncogenes reduced the cancer cells' resistance to fluid shear stress.

Many different cellular pathways can go wrong to create cancer cells. Henry suggests that this newly discovered physical characteristic of cancer cells may be a common, convergent manifestation of these various, separate molecular abnormalities.

If that is true, simply measuring cancer cells' ability to resist fluid shear stresses might allow researchers to examine the behavior of cancer cells and investigate the effects of cancer drugs on tumor cells.

Translating to clinical

With the surprising findings and their potential for clinical use, the study has grown from a side project of study author and then-graduate student J. Matthew Barnes, and has now garnered external funding to support further research. Jones Nauseef, study author and a UI MSTP student in Henry's lab, has taken up the research, and Henry has established a collaboration with UI urologist James Brown, M.D., to test the technique with blood samples from patients with prostate cancer. That study is being funded by a grant from the Department of Defense.

"A next step for us is to translate these findings into patient specimens and determine whether this can be useful in a context that is clinically meaningful," Henry says, such as determining whether a cancer is progressing or if it may respond to a particular therapy.

Journal reference: PLoS ONE

Provided by University of Iowa

USA - Do brain cells need to be connected to have meaning?

Roy proposes that the only difference between distributed representation and localist representation brain models is that localist neurons have meaning by themselves, and distributed neurons do not. He argues that experimental evidence supports the view that localist neurons are widespread throughout the brain, in contrast with the connectionist brain model in which a pattern of neuronal activity is needed to represent a concept.

The classic theory of the brain is one of connections, in which the brain consists of a network of neurons that interact with each other to allow us to think, see, interpret, and understand the world around us. In this model, called distributed representation, an individual neuron by itself has no inherent meaning, but only contributes to a pattern of neuronal activity that has meaning. For example, a certain pattern of many neurons fires when you think "dog" and another pattern for "cat."

"The belief in distributed representation theory is that a concept or object is not represented by a single neuron in the brain but by a pattern of activations over a number of neurons," explains Asim Roy, a professor of information systems at Arizona State University, to Medical Xpress . "Thus there is no single neuron in the brain representing a cat or a dog. Proponents of this theory claim that a cat or a dog is represented by its microfeatures such as legs, ears, body, tail, and so on. However, they think that neurons have absolutely no meaning on a stand-alone basis. Therefore, they go further and claim that these microfeatures are at the subsymbolic level, which means that meaning arises only when you consider the pattern of activations as a whole. Therefore, there are no neurons representing legs, ears, body, tail, etc. The representation is at a much lower level."

Roy is among a number of scientists working in the fields of neuroscience and artificial intelligence (AI) who suspect that the brain may not be as connected as distributed representation suggests. The basis of their alternative model, called localist representation, is that a single neuron can represent a dog, a cat, or any other object or concept. These neurons can be considered symbols since they have meaning on a stand-alone basis. However, as Roy explains, this doesn't necessarily mean only one neuron represents a dog; such "concept cells" are high-level neurons, which fire in response to the firing of an assortment of low-level neurons that represent the legs, ears, body, tail, etc.

"In localist representation, there could be separate neurons for a dog and a cat, and also neurons for legs, ears, body, tail, etc.," he said. "It's very similar to the model in my paper for word recognition, which is an old model from James McClelland [Chair of the Psychology Department at Stanford University] and [the late pioneering neuroscientist] David Rumelhart. You have low-level neurons that detect letters of the alphabet and then high-level neurons for individual words. So letter neurons and word neurons, they both exist."

The origins of this dispute between localist and distributed representation go back to the early '80s, to a clash between the symbol-processing hypothesis of artificial intelligence and the subsymbolic paradigm of connectionists. In the past 30 years, the debate has only intensified.

Not so different after all?

Staunchly on the side of the symbol model, Roy has published a paper in a recent issue of Frontiers in Cognitive Science in which he makes two main claims that he thinks will ramp up support for localist representation. First, he proposes that distributed representation and localist representation models are essentially the same, with just one small but important difference: localist neurons have meaning by themselves, and distributed neurons do not. Traditionally, the two models have been thought to have inherent structural differences. Roy's second claim is that localist representation and its symbolic, meaningful neurons are widespread throughout the brain. Up to now, even the strongest proponents of localist representation considered that the brain may use symbolic neurons only in some areas at certain levels of processing.

Regarding his first point, he explains that several misconceptions about the two models have led scientists to assume that they differ more than they actually do.

"The first misconception is that the property where 'each concept is represented by many units, and each unit represents many different concepts' is exclusive to distributed representation," he said. "I show that that property is actually a property of the model that one builds, not of the units. A second misconception, which is partly related to the first, is that a localist unit should respond to one and only one concept. I show that that is not true either, that localist units can indeed respond to many different higher-level concepts. All these false notions haunt localist representation, and the first thing I did was show that they are false notions. And you can show them to be false only if you stick to the basic property of localist units, that they have 'meaning and interpretation on a stand-alone basis.'"

If Roy is correct, it would mean that many of the arguments used against localist representation – in particular, against University of Bristol Psychology Professor Jeff Bowers' "grandmother cell theory" – are invalid. (Put simply, grandmother cells are high-level concept neurons.) But perhaps more importantly, Roy's interpretation also means that any model built with distributed neurons can be built with localist neurons, since there is no structural difference. In other words, a model in which a neuron responds to multiple concepts can be either distributed or localist.

A neuron for everyone and everything

This interpretation clears the path to Roy's second claim, that the brain processes information using symbols, not subsymbolic connections. He explains that experimental support for symbol-based localist representation is robust, with some of the earliest evidence coming from studies of the visual system.

"There's more than four decades of research on receptive fields in the primary visual cortex and even in retinal ganglion cells that shows that the functionality of the cells in those regions can be interpreted," Roy said. "Researchers have found cells that detect orientation, edges, color, motion, and so on. David H. Hubel and Torsten Wiesel won the Nobel Prize in physiology and medicine in 1981 for breaking this 'secret code' of the brain."

The discovery of these vision cells is just one piece of neurophysiological evidence suggesting that individual neuron cells have meaning and interpretation. Roy also cites several recent studies that have identified individual neurons in the hippocampus and the medial temporal lobe that represent specific objects or concepts and do not depend on the activity of other neurons. For example, in 2005, neuroscientists discovered that an epilepsy patient had one neuron cell that fired whenever a photo of Jennifer Aniston was presented. Various photos showing the blonde actress in different poses and from different angles all elicited a response from the same concept cell, a neuron in the hippocampus.

"Concept cells were also found in different regions of the medial temporal lobe," Roy said. "For example, a 'James Brolin cell' was found in the right hippocampus, a 'Venus Williams cell' was in the left hippocampus, a 'Marilyn Monroe cell' was in the left parahippocampal cortex and a 'Michael Jackson cell' was in the right amygdala."

Roy thinks that one of most supportive studies of his argument is the Cerf experiment from 2010. In this experiment, Moran Cerf, a neuroscientist at New York University and UCLA, asked epilepsy patients to look at several different images on a screen while the researchers attempted to identify one neuron in the medial temporal lobe that independently fired for each of the different images. One of the images was then randomly selected to become the target image, and patients were shown the target image at 50% visibility and a distractor image at 50% visibility and asked to focus their thoughts on the target image.

The visibility of the target image increased when the firing rate of the previously identified target neuron increased compared to the firing rate of the distractor neuron. By focusing on the target images, the patients could increase the target neuron's firing rate, with 69% of the patients succeeding in making the target image 100% visible.
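
The feedback rule can be pictured as the target image's opacity tracking the target neuron's share of the two firing rates. The specific normalization below is an assumption of this sketch, not the published analysis:

```python
# Toy visibility rule for the Cerf-style feedback task: the target
# image's visibility is the target neuron's share of total firing
# (the exact normalization used in the study may differ).
def target_visibility(target_rate, distractor_rate):
    total = target_rate + distractor_rate
    return 0.5 if total == 0 else target_rate / total

print(target_visibility(10, 10))  # 0.5: both images equally visible
print(target_visibility(30, 10))  # 0.75: target image dominates
print(target_visibility(40, 0))   # 1.0: target fully visible
```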

In Roy's perspective, these results suggest that the neuron the researchers originally identified as the representative neuron for the target image was indeed a localist neuron. In other words, when that neuron fired, it had one specific meaning: the patient was thinking of the target image.

Roy emphasized that he did not look exclusively for studies to support his claim and ignore studies that contradicted it; he says he found no evidence that might contradict his claims.

"Although I have not exhaustively searched this literature, from what I looked at, there was not much to 'pick and choose' from," he said. "In the paper, I have cited some recent studies. And although I have not covered the universe of single cell studies on insects, animals, and humans, the ones I have looked at don't contradict my broad claim.

"There are some studies that show that a population of neurons has meaning," he acknowledged. "But that doesn't contradict my theory. For example, one can read the outputs of cells representing legs, ears, body, tail, and so on, and say that represents a cat. However, that doesn't contradict the claim that all of these cells have meaning and interpretation on a stand-alone basis, even though only when their outputs are combined can you say that it's a cat."

Future developments

All this evidence further solidifies Roy's impression that the brain is a system of symbols rather than a network of connections. If he's correct, then it would have implications for our understanding of the brain and future AI developments.

"The brain would need fewer connections with localist representation than with distributed representation," he said. "There is efficiency and filtering associated with localist representation. We can quickly filter out aspects of a scene without further processing. And that saves computations and energy consumed. Our brains would be exhausted if they didn't filter out irrelevant things quickly."

Applying the brain's symbolic representation to create AI systems may sound more straightforward than attempting to build AI systems using a subsymbolic mode, but it's far from simple.

"Localist representation may sound simplistic, but we are still struggling with the mathematics to replicate those functionalities, even for the visual system," Roy said. "So maybe it's not that simple."

Commentary on Roy's paper by David Plaut

David Plaut, Psychology Professor at Carnegie Mellon University, carries out research using the connectionist framework for computational modeling of brain functions. He has found issues with a few ideas in Roy's paper, starting with the fact that Roy frames the argument on neural representation differently than how it's usually framed.

"Asim's main argument is that what makes a neural representation localist is that the activation of a single neuron has meaning and interpretation on a stand-alone basis," Plaut said. "This claim is about how scientists interpret neural activity. It differs from the standard argument on neural representation, which is about how the system actually works, not whether we as scientists can make sense of a single neuron. These are two separate questions."

Plaut also thinks that Roy needs to clearly define what he means when he says that a neuron has "meaning and interpretation."

"My problem is that his claim is a bit vacuous because he's never very clear about what a coherent 'meaning and interpretation' has to be like," he said. "He brings up some examples that he claims are supportive of neurons having meaning and interpretation, such as in the medial temporal lobe and hippocampal regions, but never lays out what would count as evidence against his claim. On his view, if we can't yet characterize the function of a neuron, it just means we haven't figured it out yet. There's no way to prove him wrong."

In fact, Plaut thinks that much of the experimental evidence that Roy cites as support for his view may not be as supportive as Roy claims.

"If you look at what he says 'meaning and interpretation' is supposed to be coding for, if you look into the examples he gives, they're not actually quite like that," Plaut said. "If you look at the hippocampal cells (the Jennifer Aniston neuron), the problem is that it's been demonstrated that the very same cell can respond to something else that's pretty different. For example, the same Jennifer Aniston cell responds to Lisa Kudrow, another actress on the TV show Friends with Aniston. Are we to believe that Lisa Kudrow and Jennifer Aniston are the same concept? Is this neuron a Friends TV show cell?"

He notes that there are other examples; for instance, there is one neuron that fires for both spiders and snakes, and another that fires for both the Eiffel Tower and the Leaning Tower of Pisa – somewhat related concepts, perhaps, but still with quite distinct meanings.

"Only a few experiments show the degree of selectivity and interpretability that he's talking about," Plaut said. "For example, Young and Yamane published a study in 1992 in which, out of 850 neurons, they found only one that had this high level of selectivity, while the other cells had varying degrees of responses. If we ignore what the vast majority of what neurons are doing, it's selection bias. In some regions of the medial temporal lobe and hippocampus, there seem to be fairly highly selective responses, but the notion that most cells respond to one concept that is interpretable isn't supported by the data."

Commentary on Roy's paper by James McClelland

As mentioned above, one of the papers that Roy cites is coauthored by James McClelland, a psychology professor at Stanford University whose work has played a pivotal role in developing the connectionist framework. In response to Roy's paper, McClelland explained why he still favors the distributed representation model:

"Roy's paper lays out his claim that the brain uses localist representation – the view that individual neurons in the brain have 'meaning and interpretation' on a stand-alone basis – and contrasts this with the distributed representation view – the view that each neuron participates in many representations, and that it is therefore not possible to determine what concept is being represented by looking at the activity of a single neuron. Although my collaboration with David Rumelhart exploring neural networks began with the exploration of localist models (McClelland & Rumelhart, 1981), we soon became convinced that the localist view is unlikely to be correct (McClelland & Rumelhart, 1985). Here I briefly explain why I still hold the distributed representation view.

"One problem with localist representation is the question, when to start and when to stop using a localist representation. Suppose I encounter a new kind of bread – one baked in thin sheets with sesame and cardamom seeds. In order to understand that this new kind of bread might smell or taste like, I would likely rely on representations of other kinds of bread and of sesame and cardamom seeds, and also on my knowledge of other kinds of foods in thin sheets that I may know about. I already have a great deal of knowledge about this thin bread, having never encountered it before. Did I already have a localist representation for it, or did I compose my understanding of it out of knowledge I had previously acquired for other things? If the latter, what basis do I have for thinking that the representation I have for any concept – even a very familiar one – as associated with a single neuron, or even a set of neurons dedicated only to that concept?

"A further problem arises when we note that I may have useful knowledge of many different instances of every concept I know – for example, the particular type of chicken I purchased yesterday evening at the supermarket, and the particular type of avocados I found to put in my salad. Each of these is a class of objects, a class for which we may need a representation if we were to encounter a member of the class again. Is each such class represented by a localist representation in the brain? The same problem arises with specific individuals, since we know each individual in many different roles and phases. Do I have a localist representation for each phase of every individual that I know? Given these questions, my work since the 1985 paper has focused on understanding how the brain may use what it has learned about many different and partially related experiences, without relying exclusively on localist representations.

"On this view, the knowledge arising from an experience is the set of adjustments made to connection weights among participating neurons – neurons that participate in representing many different things.

"Roy lays out several lines of argument in support of his point of view. Perhaps the central argument is that recordings from neurons show that the neurons in some parts of the brain have what some might consider to be surprisingly specific responses. Let us discuss one such neuron – the neuron that fires substantially more when an individual sees either the Eiffel Tower or the Leaning Tower of Pisa than when he sees other objects. Does this neuron 'have meaning and interpretation independent of other neurons'? It can have meaning for an external observer, who knows the results of the experiment – but exactly what meaning should we say it has? An even harder question is, what meaning does the neuron have for the individual in whose brain it has been found? Let's take the simpler question first.

"First, for the external observer: it should be apparent that the full range of test stimuli used affects what meaning we assign to such a neuron. The Japanese neuroscientist Keiji Tanaka found neurons in monkeys' brains that others had called 'monkey paw detectors' and others they might have called 'cheshire cat detectors,' but he then constructed many special test stimuli to use in testing each neuron.
He found that the neurons generally responded even better to schematic stimuli that were not recognizably paws or cats but had features in common with them. Such neurons surely participate in representing cats or paws but may also participate in representing other objects with similar shape features. Critically, however, the response of the neuron is difficult to pin down in simple verbal terms and neighboring neurons have similar responses that shade continuously from one combination of features to another. Is the same true of the Eiffel Tower/Leaning Tower of Pisa neuron? In the context of these observations, the Cerf experiment considered by Roy may not be as impressive. A neuron can respond to one of four different things without really having a meaning and interpretation equivalent to any one of these items.

"Second, to the individual in whose brain the neuron has been found: Roy's analysis ignores the question of how a neuron assigned to represent a concept is then used by the observer to mediate use of the observer's knowledge of the concept. This is the issue my colleagues and I have sought to explore with explicit models that rely on distributed representations over populations of simulated neuron-like processing units. While we sometimes (Kumeran & McClelland, 2012, as in McClelland & Rumelhart, 1981) use localist units in our simulation models, it is not the neurons, but their interconnections with other neurons, that gives them meaning and interpretation. The sight of a picture of Saddam Hussein brings to mind heinous crimes against the citizens of Iraq and Kuwait, not because a particular neuron is activated but because it (and many other neurons) participates in activating other neurons that are involved in the representation of other heinous crimes and/or in verbal expressions and imagined scenes involving such crimes. And it participates in activating these other neurons because of its connections to these neurons. Again we come back to the patterns of interconnections as the seat of knowledge, the basis on which one or more neurons in the brain can have meaning and interpretation.

"In our work we have proposed that different parts of the brain rely on representations that differ in their relative specificity (McClelland et al, 1995; Goddard & McClelland, 1996). The Medial Temporal Lobes are thought to represent items, locations, events, and situations in terms of sparse patterns of activation, but even here each neuron is thought of as participating in many representations. Even here, the principles of distributed representation apply: the same place cell can represent very different places in different environments, for example, and two place cells that represent overlapping places in one environment can represent completely non-overlapping places in other environments. Other parts of the neocortex of the brain are thought to rely on denser distributed representations, where a somewhat larger overall fraction of the neurons are activated by a particular item, location, etc. There is a lot more to understand about these representations. Studies involving very small numbers of neurons may be misleading in this regard. Progress will depend on recording from large numbers of neurons, so that we can more readily visualize the activity across the entire population."

Roy has responded to Plaut's and McClelland's comments in a follow-up online.

More information: 
Roy, A. "A theory of the brain: localist representation is used widely in the brain." Frontiers in Cognitive Science.
Kumaran, D. & McClelland, J. L. (2012). "Generalization through the recurrent interaction of episodic memories: A model of the hippocampal system." Psychological Review, 119, 573-616. DOI: 10.1037/a0028681
Rogers, T. T. & McClelland, J. L. (2004). Semantic Cognition: A Parallel Distributed Processing Approach. Cambridge, MA: MIT Press
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory." Psychological Review, 102, 419-457
McClelland, J. L. & Rumelhart, D. E. (1985). "Distributed memory and the representation of general and specific information." Journal of Experimental Psychology: General, 114, 159-197
McClelland, J. L. & Rumelhart, D. E. (1981). "An interactive activation model of context effects in letter perception: Part 1. An account of Basic Findings." Psychological Review, 88, 375-407

Journal reference: Frontiers in Cognitive Science