Is the human visual / motor system able to track, and move in response to, objects of certain colours more quickly and reliably than for others? By more reliably, I mean with greater accuracy in judging position and velocity, and with greater accuracy when moving in response.
Which colours are "most visible" in that sense, as a function of background and illumination?
This interests me specifically in the context of juggling: I like to juggle, and I wonder whether there's a difference in my response time and accuracy with different colours of balls/clubs. Usually I'm indoors, and ceilings are usually white, sometimes black or brown, with strip lighting. My only personal observations are rather obvious: dark colours can be hard to see against dark backgrounds, and white can be hard to see against white.
The answer is yes and yes. Firstly, motion cannot be processed at equiluminance. In other words, if your clubs are exactly as bright as their background, you will not be able to see them move, even though they remain clearly visible because of the color difference. It's a remarkable effect (demo link below), as you become in effect motion-blind. You eventually notice that things have moved, but you don't see them move smoothly the way you are used to. This is similar to what people with akinetopsia experience (lesions to brain areas involved in motion processing). So use clubs that have a clear luminance contrast with the background (very dark, or very bright).
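To make that recommendation concrete, here is a minimal sketch of how you might check a prop color against a background for luminance contrast. It uses the standard sRGB relative-luminance weights; the example colors and the 0.1 Michelson-contrast threshold are my own illustrative assumptions, not values from the vision literature.

```python
# Sketch: check whether a prop color risks being near-equiluminant with the
# background. Relative luminance follows the standard sRGB/Rec. 709 weights.

def relative_luminance(rgb):
    """Approximate relative luminance of an sRGB color (channels 0-255)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def michelson_contrast(rgb_a, rgb_b):
    """Michelson luminance contrast between two colors, in [0, 1]."""
    la, lb = relative_luminance(rgb_a), relative_luminance(rgb_b)
    return abs(la - lb) / (la + lb)

white_ceiling = (240, 240, 240)
dark_red_club = (200, 30, 30)
# A dark red on a white ceiling gives strong luminance contrast,
# comfortably above the illustrative 0.1 threshold.
print(michelson_contrast(white_ceiling, dark_red_club) > 0.1)
```

If the contrast comes out near zero, the prop and background are close to equiluminant and motion will be hard to track, whatever the hue difference.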
Second, there are known differences in processing latencies for different colors. More specifically, yellow is processed faster than blue, and black is processed faster than white (if you count them as colors). These effects can be large, up to a few hundred milliseconds in some cases, which is very long. There is also a known relationship between stimulus intensity (here, color contrast) and response time, called Piéron's law. However, that effect saturates at fairly low contrast, so as long as you can see the clubs clearly, it is unlikely that more contrasted colors will shorten your response time.
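Piéron's law is usually written RT = t0 + k·I^(−β): response time falls as a power function of intensity I and saturates toward an asymptote t0. A toy sketch (parameter values are illustrative, not fitted to any data) shows the saturation just mentioned:

```python
# Piéron's law sketch: RT = t0 + k * I**(-beta).
# t0, k and beta below are made-up illustrative values.

def pieron_rt(intensity, t0=0.25, k=0.05, beta=1.0):
    """Predicted response time (s) for a given stimulus intensity/contrast."""
    return t0 + k * intensity ** (-beta)

# At low contrast, small changes matter a lot; at high contrast the
# predicted benefit of extra contrast is tiny (saturation).
print(pieron_rt(0.1) - pieron_rt(0.2))   # large gain at low contrast
print(pieron_rt(0.8) - pieron_rt(1.6))   # negligible gain at high contrast
```

This is why, once the clubs are clearly visible, pushing contrast further buys almost nothing.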
Finally, there are discrimination differences between colors. For example, yellows and oranges are more easily discriminated than blues. But that only matters when you have to perceive a low-contrast color difference (a yellow club on a yellow background). Here again, as long as you see your clubs very clearly, the specific color of the clubs is unlikely to make a difference.
So overall I would recommend clubs that contrast as strongly as possible with the background (which is common sense). Then prefer dark colors to light ones, and prefer yellows to blues. The ideal color is brown, which is in fact a dark yellow.
Komban, S. J., Alonso, J. M., & Zaidi, Q. (2011). Darks are processed faster than lights. Journal of Neuroscience, 31(23), 8654-8658.
Wool, L. E., Komban, S. J., Kremkow, J., Jansen, M., Li, X., Alonso, J. M., & Zaidi, Q. (2015). Salience of unique hues and implications for color theory. Journal of Vision, 15(2), 10.
Witzel, C., & Gegenfurtner, K. R. (2013). Categorical sensitivity to color differences. Journal of Vision, 13(7), 1.
A faster and more accurate heuristic for cyclic edit distance computation
This letter describes a new heuristic algorithm to compute the cyclic edit distance.
We extend an existing algorithm which compares circular sequences using q-grams.
Theoretical insight supporting the suitability of the algorithm is provided.
Experiments show the heuristic is more accurate than existing heuristics.
Experiments show the heuristic is faster than existing heuristics.
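For context, the exact quantity the heuristic approximates can be computed by brute force as the minimum edit distance over all rotations of one string. The sketch below is this naive exact baseline (O(n²·m)), not the paper's q-gram heuristic:

```python
# Brute-force cyclic edit distance: the exact value that faster
# heuristics (such as q-gram based ones) approximate.

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cyclic_edit_distance(a, b):
    """Minimum edit distance between b and every rotation of a."""
    return min(edit_distance(a[i:] + a[:i], b) for i in range(len(a)))

# Rotations of the same string are at cyclic distance 0:
print(cyclic_edit_distance("abcde", "cdeab"))  # 0
```

The heuristic's job is to get close to this value while avoiding the n-fold repetition of the quadratic edit-distance computation.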
A faster and more equitable disaster recovery system is possible
Many American communities do not fully understand their disaster risk when faced with natural disasters. Low-income communities and communities of color are most severely affected by natural disasters, but they only learn how poorly the system works for them after disastrous events. As a result, families and individuals suffer unnecessarily, often for years, and many just give up.
In an op-ed previously published by The Hill, the authors make a compelling argument for prioritizing low- and moderate-income families after disasters. The authors deeply understand the issues, and they highlight several opportunities for public- and private-sector innovation that will shrink the time between disaster and recovery, ensuring fewer survivors “slip through the cracks.” A single application for assistance, shared between responding government agencies at the federal, state, and local levels, would help simplify the process for survivors, but more can be done.
The authors mention two significant challenges that must be overcome: expanding access to federal resources in a more equitable manner, and providing faster assistance to those who lack access to savings and/or credit.
Below I offer two recommendations that would amplify the effects of a single application for federal assistance and directly address these problems.
Expanding access to federal resources in an equitable manner
For homeowners, access to federal assistance begins with a FEMA application and a home damage assessment. Prior to COVID-19, FEMA relied on contract inspectors, deployed to disaster-impacted communities, to inspect damage one home at a time. This method is slow, inconsistent, and subject to human error and bias.
Since COVID-19, FEMA has resorted to having homeowners self-report damage to FEMA personnel over the phone. FEMA has not shared the call script with the public. Most survivors are neither construction nor disaster experts. For many, FEMA assistance (average award $4,300) may be the only kind they receive until Community Development Block Grant Disaster Recovery Program (CDBG-DR) funds arrive (years in the future, maybe never). If survivors answer questions incorrectly, it could prevent them from receiving all the assistance they may be eligible for. When damages are underreported and undercompensated, vulnerable survivors begin a long and uncertain road to recovery with even fewer resources.
Meanwhile, the insurance industry uses aerial imagery, flyovers, satellites, image-to-estimate and other technology to assess damage and pay claims faster and more accurately than ever before, and much faster than traditional government assistance. FEMA has access to incredible technology and should expand its capabilities. If FEMA adopts similar technology and practices, low-income survivors will receive more accurate assistance more quickly, with less risk of human error or bias.
Lack of liquidity/access to credit for the most vulnerable
Consolidating duplicative applications and ensuring that damage assessments are faster and more accurate will expand the amount of early FEMA resources available to low- and moderate-income communities. Expanded access to assistance will allow many to recover more quickly and predictably.
Still, the most vulnerable survivors will require long-term housing repair programs that take two years or more to operationalize at the state and local level. These programs do not offer clarity or predictability to homeowners who urgently need assistance. Those without insurance or access to private resources are left staring into an abyss of uncertainty.
When individual rebuilding and recovery efforts are delayed and unpredictable, they create the potential for community deterioration at every level. Tax bases shrink as survivors relocate, public services suffer, opportunities to attract additional investment become scarcer and, most depressingly, those survivors who can least afford to wait are served last and least predictably.
But this too can be solved. This two-plus year gap can be erased through a Recovery Acceleration Fund.
Many CDBG-DR programs include a reimbursement pathway, where eligible homeowners who made well-documented repairs out of pocket can be reimbursed some or all of their expenses when state and local CDBG-DR funded programs are up and running.
Charitable and social-impact investment can create a Recovery Acceleration Fund to make repairs for low- and moderate-income families who will be eligible for CDBG-DR assistance when it arrives but cannot self-fund repairs today. This would reduce the overall cost to taxpayers by making critical repairs more quickly, preventing the rising costs of repair when homes are left untouched for months or years after disasters. More importantly, it would erase the two-plus years of delay and uncertainty for families who can least afford to wait. Most importantly, it could create a new public-private partnership model for social-impact investment, one that prioritizes the reduction of needless human suffering in low-income communities when help is needed most.
Without a doubt, these are challenging times for our country. But our nation has proven time and again that it can adapt to new challenges and overcome any obstacle. Disaster recovery can be improved but it will require innovation and new approaches. And the time is now.
Reese May is the chief strategy and innovation officer for SBP, a national disaster resilience and recovery organization. He leads SBP’s disaster preparedness and recovery efforts across the country, advises state and local decision-makers on effective long-term disaster recovery programs and advocates for policy change at the federal level. He is a Truman National Security Fellow.
New optical method promises faster, more accurate diagnosis of breast cancer
Figure 3 from a new article in the Journal of Biomedical Optics compares stained bright-field microscopy (top row) and SLIM (bottom row) images in their respective abilities to show malignant and benign tissue. The images were obtained from adjacent sections. Color bars are in radians. Credit: Hassaan Majeed, Univ. of Illinois, et al./SPIE
A new optical method for more quickly and accurately determining whether breast tissue lesions are cancerous is described by University of Illinois researchers in the Journal of Biomedical Optics, published by SPIE, the international society for optics and photonics.
In "Breast cancer diagnosis using spatial light interference microscopy," published 20 August, University of Illinois at Urbana-Champaign and Chicago researchers Hassaan Majeed, Mikhail Kandel, Kevin Han, Zelun Luo, Virgilia Macias, Krishnarao Tangella, Andre Balla, and Gabriel Popescu report on a quantitative method for diagnosing breast cancer using spatial light interference microscopy (SLIM).
Because this method is based on quantitative data rather than a subjective assessment, the researchers expect that these preliminary results show the potential of their technique to become the basis for an automated image analysis system that would provide a fast and accurate diagnostic method.
"Conventional methods for diagnosis of breast cancer have several limitations, including observer discrepancy," said YongKeun Park, a professor at KAIST and a guest editor of the special section on Quantitative Phase Imaging in Biomedicine in which the article appears.
When an abnormality in the breast is discovered, standard practice is for the physician to take a tissue biopsy, which is then stained to provide enough contrast for a pathologist to study key features of the tissue under a microscope. The tissue analysis is done manually. Due to variations such as staining intensity and the illumination used, the process does not lend itself to automation.
Manual inspections, however, are subject to investigator bias, and the process is time-consuming. This can, in some cases, result in late diagnosis, a critical shortcoming given that early diagnosis significantly improves the chances of survival.
Using the breast tissue biopsies of 400 different patients, researchers selected two parallel, adjacent sections from each biopsy. One was stained and the other left unstained.
The unstained samples were analyzed using a SLIM module attached to a commercial phase-contrast microscope to generate interferograms, photographic images derived from data based on how the tissue refracts light.
Four interferograms were used to produce one quantitative image showing areas with different refractive properties in different colors. The boundary between tumors and the cells around them was clearly delineated, making it possible to assess whether the tumors were malignant or benign.
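The article does not give the reconstruction formula, but a common four-step phase-shifting scheme recovers a phase map from four frames shifted by quarter periods. The sketch below illustrates that generic scheme under stated assumptions; it is not the authors' exact SLIM pipeline.

```python
import numpy as np

# Generic four-step phase-shifting reconstruction (an assumption, not the
# paper's published method): with frames I_k = A + B*cos(phi + delta_k) at
# shifts delta = 0, pi/2, pi, 3*pi/2, the wrapped phase is
#   phi = atan2(I4 - I2, I1 - I3).

def phase_from_four_frames(i1, i2, i3, i4):
    """Recover the wrapped phase map (radians) from four shifted frames."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: build four frames from a known phase and recover it.
true_phase = np.linspace(-1.0, 1.0, 5)
frames = [1.0 + 0.5 * np.cos(true_phase + d)
          for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = phase_from_four_frames(*frames)
print(np.allclose(recovered, true_phase))  # True
```

The recovered map is in radians, which matches the "color bars are in radians" note on the figure.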
HONGKUI ZENG: Map brain connections
Executive director of structured science, Allen Institute for Brain Science, Seattle, Washington.
The connections between individual cells and various cell types are so complex that mapping their connectivity at the global and population level is no longer sufficient to understand them. So, we are mapping connections based on cell type and at the single-cell level.
We can accomplish this with ‘anterograde’ and ‘retrograde’ tracing, which reveal the structures that protrude from specific cells, called axon projections. We are also using more methods based on single-neuron morphology, looking at where the projections arise and terminate for individual neurons.
A big advance is the generation of electron microscopy data sets that cover substantially larger volumes than has been possible before. At the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, for instance, researchers are working to map every neuron and synapse in the Drosophila fruit fly.
Improvements in image acquisition and sample handling are key for these advances; so, too, are improvements in computing. At the Allen Institute for Brain Science, we are involved in an effort to build a virtual map of mouse brain neural connectivity with the help of machine-learning algorithms.
Tremendous specificity is encoded in the brain’s connections. But without knowing that specificity at both global and local scales, our ability to understand behaviour or function is essentially based on a black box: we lack the physical foundations to understand neuronal activity and behaviour. Connectomics will fill in that missing ‘ground-truth’ information.
Faster and more accurate classification of time series by exploiting a novel dynamic time warping averaging algorithm
A concerted research effort over the past two decades has heralded significant improvements in both the efficiency and effectiveness of time series classification. The consensus that has emerged in the community is that the best solution is a surprisingly simple one. In virtually all domains, the most accurate classifier is the nearest neighbor algorithm with dynamic time warping as the distance measure. The time complexity of dynamic time warping means that successful deployments on resource-constrained devices remain elusive. Moreover, the recent explosion of interest in wearable computing devices, which typically have limited computational resources, has greatly increased the need for very efficient classification algorithms. A classic technique to obtain the benefits of the nearest neighbor algorithm, without inheriting its undesirable time and space complexity, is to use the nearest centroid algorithm. Unfortunately, the unique properties of (most) time series data mean that the centroid typically does not resemble any of the instances, an unintuitive and underappreciated fact. In this paper we demonstrate that we can exploit a recent result by Petitjean et al. to allow meaningful averaging of “warped” time series, which then allows us to create super-efficient nearest “centroid” classifiers that are at least as accurate as their more computationally challenged nearest neighbor relatives. We demonstrate empirically the utility of our approach by comparing it to all the appropriate strawmen algorithms on the ubiquitous UCR Benchmarks and with a case study in supporting insect classification on resource-constrained sensors.
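As a minimal sketch of the two ingredients the abstract discusses, here is a classic DTW distance paired with a nearest-centroid classifier. For brevity the centroids are given directly; the paper's contribution is precisely that a meaningful DTW-averaged (DBA) centroid makes this cheap classifier competitive with nearest neighbor.

```python
import numpy as np

# Two ingredients: dynamic time warping (DTW) as the distance measure, and
# nearest-centroid classification (one distance computation per class
# instead of one per training instance).

def dtw(x, y):
    """Classic O(len(x)*len(y)) dynamic time warping distance."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return np.sqrt(cost[n, m])

def nearest_centroid_predict(centroids, series):
    """Label of the centroid with the smallest DTW distance to `series`."""
    return min(centroids, key=lambda label: dtw(centroids[label], series))

# Toy centroids; in the paper these would come from DBA averaging.
centroids = {"flat": np.zeros(8), "ramp": np.linspace(0.0, 1.0, 8)}
print(nearest_centroid_predict(centroids, np.linspace(0.0, 1.1, 8)))  # ramp
```

With k classes, prediction costs k DTW computations regardless of training-set size, which is what makes the approach attractive on resource-constrained sensors.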
New technology to speed cleanup of nuclear contaminated sites
Members of the engineering faculty at Oregon State University have invented a new type of radiation detection and measurement device that will be particularly useful for cleanup of sites with radioactive contamination, making the process faster, more accurate and less expensive.
A patent has been granted on this new type of radiation spectrometer, and the first production of devices will begin soon. The advance has also led to creation of a Corvallis-based spinoff company, Avicenna Instruments, based on the OSU research. The market for these instruments may ultimately be global, and thousands of them could be built, researchers say.
Hundreds of millions of dollars are spent on cleanup of some major sites contaminated by radioactivity, primarily from the historic production of nuclear weapons during and after World War II. These include the Hanford site in Washington, Savannah River site in South Carolina, and Oak Ridge National Laboratory in Tennessee.
"Unlike other detectors, this spectrometer is more efficient, and able to measure and quantify both gamma and beta radiation at the same time," said David Hamby, an OSU professor of health physics. "Before this, two different types of detectors and other chemical tests were needed, in a time-consuming process."
"This system will be able to provide accurate results in 15 minutes that previously might have taken half a day," Hamby said. "That saves steps, time and money."
The spectrometer, developed over 10 years by Hamby and Abi Farsoni, an assistant professor in the College of Engineering, can quickly tell the type and amount of radionuclides that are present in something like a soil sample -- contaminants such as cesium 137 or strontium 90 -- that were produced from reactor operations. And it can distinguish between gamma rays and beta particles, which is necessary to determine the level of contamination.
"Cleaning up radioactive contamination is something we can do, but the process is costly, and often the question when working in the field is how clean is clean enough," Hamby said. "At some point the remaining level of radioactivity is not a concern. So we need the ability to do frequent and accurate testing to protect the environment while also controlling costs."
This system should allow that, Hamby said, and may eventually be used in monitoring processes in the nuclear energy industry, or possibly medical applications in the use of radioactive tracers.
The OSU College of Engineering has contracted with Ludlum Instruments, a Sweetwater, Texas, manufacturer, to produce the first instruments, and the OSU Office of Technology Transfer is seeking a licensee for commercial development. The electronic systems for the spectrometers will be produced in Oregon by Avicenna Instruments, the researchers said.
Materials provided by Oregon State University. Note: Content may be edited for style and length.
Scientists propose an algorithm to study DNA faster and more accurately
Stylized image of DNA. Credit: bioinformatics101.wordpress.com
A team of scientists from Germany, the United States and Russia, including Dr. Mark Borodovsky, a Chair of the Department of Bioinformatics at MIPT, have proposed an algorithm to automate the process of searching for genes, making it more efficient. The new development combines the advantages of the most advanced tools for working with genomic data. The new method will enable scientists to analyse DNA sequences faster and more accurately and identify the full set of genes in a genome.
Although the paper describing the algorithm only appeared recently in the journal Bioinformatics, which is published by Oxford Journals, the proposed method has already proven to be very popular—the computer software program has been downloaded by more than 1500 different centres and laboratories worldwide. Tests of the algorithm have shown that it is considerably more accurate than other similar algorithms.
The development involves applications of the cross-disciplinary field of bioinformatics, which combines mathematics, statistics and computer science to study biological molecules such as DNA, RNA and protein structures. DNA, which is fundamentally an information molecule, is even sometimes depicted in computerized form (see Fig. 1) to emphasize its role as a molecule of biological memory. Bioinformatics is a very topical subject: every newly sequenced genome raises so many additional questions that scientists simply do not have time to answer them all. So automating processes is key to the success of any bioinformatics project, and these algorithms are essential for solving a wide variety of problems.
One of the most important areas of bioinformatics is annotating genomes – determining which particular DNA regions are used to synthesize RNA and proteins (see Fig. 2). These parts – genes – are of great scientific interest. In many studies, scientists do not need information about the entire genome (which is around 2 metres long for a single human cell), but about its most informative part – the genes. Gene sections are identified by searching for similarities between sequence fragments and known genes, or by detecting consistent patterns in the nucleotide sequence. This process is carried out using predictive algorithms.
Locating gene sections is no easy task, especially in eukaryotic organisms, which include almost all widely known types of organism except bacteria. This is because, in these cells, the transfer of genetic information is complicated by "gaps" in the coding regions (introns), and because there are no definite indicators of whether a region is a coding region or not.
Diagram showing the transmission of hereditary information in a cell. Credit: dnkworld.ru/transkripciya-i-translyaciya-dnk
The algorithm proposed by the scientists determines which regions in the DNA are genes and which are not. The scientists used a Markov chain: a sequence of random events in which the probability of the next state depends only on the current one. The states of the chain in this case are either nucleotides or nucleotide words (k-mers). The algorithm determines the most probable division of a genome into coding and noncoding regions, classifying the genomic fragments in the best possible way according to their ability to encode proteins or RNA. Experimental data obtained from RNA give additional useful information that can be used to train the model used in the algorithm. Certain gene prediction programs can use these data to improve the accuracy of finding genes. However, these algorithms require species-specific training of the model. The AUGUSTUS software program, for example, which has a high level of accuracy, needs a training set of genes. This set can be obtained using another program, GeneMark-ET, which is a self-training algorithm. These two algorithms were combined in the BRAKER1 algorithm, which was proposed jointly by the developers of AUGUSTUS and GeneMark-ET.
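The core idea of scoring sequences under competing Markov models can be sketched in a few lines. The transition probabilities below are invented for illustration; real gene finders learn them from training data and use far richer models than this first-order toy.

```python
import math

# Toy coding-vs-noncoding classifier: score a DNA fragment under two
# first-order Markov chains and pick the model with the higher likelihood.
# All probabilities here are made up for illustration.

CODING = {"A": {"A": 0.2, "C": 0.3, "G": 0.4, "T": 0.1},
          "C": {"A": 0.3, "C": 0.2, "G": 0.4, "T": 0.1},
          "G": {"A": 0.1, "C": 0.4, "G": 0.3, "T": 0.2},
          "T": {"A": 0.2, "C": 0.3, "G": 0.3, "T": 0.2}}
NONCODING = {b: {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25} for b in "ACGT"}

def log_likelihood(seq, model):
    """Log-probability of the transitions in seq under a Markov chain."""
    return sum(math.log(model[a][b]) for a, b in zip(seq, seq[1:]))

def classify(seq):
    """'coding' if the coding model explains seq better, else 'noncoding'."""
    llr = log_likelihood(seq, CODING) - log_likelihood(seq, NONCODING)
    return "coding" if llr > 0 else "noncoding"

print(classify("ACGCGCGACG"))  # GC-rich, favored by this toy coding model
```

Programs like GeneMark-ET do essentially this at genome scale, while also learning the model parameters from the genome itself (self-training) and handling introns and other eukaryotic complications.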
BRAKER1 has demonstrated a high level of efficiency. Its example running time on a single processor is ∼17.5 hours for training and the prediction of genes in a genome 120 megabases long. This is a good result, considering that the time can be reduced significantly by using parallel processors, meaning that in the future the algorithm might run even faster and more efficiently.
Tools such as these solve a variety of problems. Accurately annotating genes in a genome is extremely important; an example is the global 1000 Genomes Project, the initial results of which have already been published. Launched in 2008, the project involves researchers from 75 different laboratories and companies. Sequences of rare gene variants and gene substitutions were discovered, some of which can cause disease. When diagnosing genetic diseases, it is very important to know which substitutions in gene sections cause the disease to develop. The project mapped the genomes of different people, noting their coding sections, and rare nucleotide substitutions were identified. In the future, this will help doctors to diagnose complex diseases such as heart disease, diabetes, and cancer.
BRAKER1 enables scientists to work effectively with the genomes of new organisms, speeding up the process of annotating genomes and acquiring essential knowledge about life sciences.
Lactoferrin-iCre: a new mouse line to study uterine epithelial gene function
Transgenic animal models are valuable for studying gene function in various tissue compartments. Mice with conditional deletion of genes in the uterus using the Cre-loxP system serve as powerful tools to study uterine biology. The uterus is comprised of 3 major tissue types: myometrium, stroma, and epithelium. Proliferation and differentiation in each uterine cell type are differentially regulated by ovarian hormones, resulting in spatiotemporal control of gene expression. Therefore, examining gene function in each uterine tissue type will provide more meaningful information regarding uterine biology during pregnancy and disease states. Although currently available Cre mouse lines have been very useful in exploring functions of specific genes in uterine biology, overlapping expression of these Cre lines in more than 1 tissue type and in other reproductive organs sometimes makes interpretation of results difficult. In this article, we report the generation of a new iCre knock-in mouse line, in which iCre is expressed from endogenous lactoferrin (Ltf) promoter. Ltf-iCre mice primarily direct recombination in the uterine epithelium in adult females and in immature females after estrogen treatment. These mice will allow for specific interrogation of gene function in the mature uterine epithelium, providing a helpful tool to uncover important aspects of uterine biology.
Generation of Ltf-iCre knock-in mice. A, Map of the Ltf genomic region.
Fast, accurate cystic fibrosis test developed
Researchers at the Stanford University School of Medicine have developed a fast, inexpensive and highly accurate test to screen newborns for cystic fibrosis. The new method detects virtually all mutations in the CF gene, preventing missed diagnoses that delay babies' ability to begin receiving essential treatment.
A paper describing the new test was published online Feb. 1 in The Journal of Molecular Diagnostics. Cystic fibrosis, which causes mucus to build up in the lungs, pancreas and other organs, is the most common fatal genetic disease in the United States, affecting 30,000 people. To develop the disease, a child must inherit two mutated copies of the CF gene, one from each parent. Newborns in every U.S. state have been screened for CF since 2010, but the current tests have limitations.
"The assays in use are time-consuming and don't test the entire cystic fibrosis gene," said the study's senior author, Curt Scharfe, MD, PhD. "They don't tell the whole story." Scharfe was a senior scientist at the Stanford Genome Technology Center when the study was conducted and is now associate professor of genetics at the Yale School of Medicine.
"Cystic fibrosis newborn screening has shown us that early diagnosis really matters," said Iris Schrijver, MD, a co-author of the study and professor of pathology at Stanford. Schrijver directs the Stanford Molecular Pathology Laboratory, which has a contract with California for the state's newborn CF testing.
Advantages of early diagnosis, medical attention
Prior studies have shown that newborn screening and prompt medical follow-up reduce symptoms of CF such as lung infections, airway inflammation, digestive problems and growth delays. "When the disease is caught early, physicians can prevent some of its complications, and keep the patients in better shape longer," Schrijver said. Although classic CF still limits patients' life spans, many of those who receive good medical care now live into or beyond their 40s.
In the current test, babies' blood is first screened for immunoreactive trypsinogen, an enzyme that is elevated in CF cases but also can be high for other reasons, such as in infants with one mutated copy and one normal copy of the CF gene. Since the majority of infants with high trypsinogen will not develop CF, most U.S. states follow up with genetic screening to detect mutations in the CF gene. California, which has the most comprehensive screening process, tests for 40 CF-causing mutations common in the state. (More than 2,000 mutations in the CF gene are known, though many are rare). If one of the common mutations is identified, the infant's entire CF gene is sequenced to try to confirm whether the baby has a second, less common CF mutation.
The process takes up to two weeks and can miss infants who carry two rare CF mutations, particularly in nonwhite populations, whose CF-causing mutations are less well characterized.
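The two-tier process described above can be sketched as a simple decision flow. This is purely illustrative: the IRT cutoff, the mutation names, and the function itself are invented for this sketch and are not the actual California screening algorithm or its real 40-mutation panel.

```python
# Illustrative sketch (NOT a clinical algorithm) of the two-tier newborn CF
# screening flow described in the article. Thresholds and the mutation panel
# are hypothetical placeholders.

COMMON_MUTATIONS = {"F508del", "G542X", "W1282X"}  # hypothetical subset of the panel

def screen_newborn(irt_level, detected_mutations, irt_cutoff=60.0):
    """Return a screening outcome for one dried-blood-spot sample.

    irt_level: immunoreactive trypsinogen level (units and cutoff are illustrative)
    detected_mutations: set of CF-gene mutations found by the panel test
    """
    # Tier 1: elevated IRT flags the sample, but it is not diagnostic --
    # carriers and other conditions can also raise it.
    if irt_level < irt_cutoff:
        return "screen-negative"

    # Tier 2: panel of common CF-causing mutations.
    if not (detected_mutations & COMMON_MUTATIONS):
        # This is where infants with two *rare* mutations can be missed:
        # with no panel hit, full-gene sequencing is never triggered.
        return "screen-negative (rare mutations missed here)"

    # Tier 3: sequence the entire CF gene to look for a second mutation.
    return "refer for full CF gene sequencing"
```

The sketch makes the article's point concrete: an infant carrying two mutations outside the common panel falls through at tier 2, which is exactly the gap the new one-step full-gene sequencing test closes.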
DNA from dried blood spots
The Stanford-developed method greatly improves the gene-sequencing portion of screening, comprehensively detecting CF-causing mutations in one step, at a lower cost and in about half the time now required. Stanford University is exploring the possibility of filing a patent for the technique.
To enable these improvements, the team developed a new way to extract and make many copies of the CF gene from a tiny sample of DNA -- about 1 nanogram -- from the dried blood spots that are collected on cards from babies for newborn screening. "These samples are a very limited and precious resource," Scharfe said. The entire CF gene then undergoes high-throughput sequencing. This is the first time scientists have found a way to reliably use dried blood spots for this type of sequencing for CF, which typically requires much more DNA.
"In our new assay, we are reading every letter in the book of the CF gene," Schrijver said. "Whatever mutations pop up, the technique should be able to identify. It's a very flexible approach."
Before the new test can be adopted, the molecular pathology lab needs to train its staff on the new procedure and run thorough validation studies, required for regulatory and quality purposes, to show that the test's reliability in a research setting will be maintained in the larger-scale clinical laboratory. California newborn screening officials will then have the opportunity to decide whether they want the new test to replace the current method. Schrijver expects the process will take less than a year. "Regardless of how the state decides, the new technique can be widely adopted in different settings," she said, noting that the technique could also be used for carrier and diagnostic testing and to screen for other genetic diseases, not just CF.
"Ultimately, we would like to develop a broader assay to include the most common and most troublesome newborn conditions, and be able to do the screening much faster, more comprehensively and much more cheaply," Scharfe said.