Tag Archives: Johns Hopkins University

Artificial intelligence (AI) brings together International Telecommunication Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit held in mid-May, there’s now a followup announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the current state of chemical testing in his July 25, 2018 essay (written for The Conversation and republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous. Even more likely if many toxic substances are close, harmless substances are far. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
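To make the read-across idea concrete, here’s a rough sketch of nearest-neighbour prediction over chemical fingerprints. To be clear, this is my own illustrative toy, not the published RASAR algorithm: the feature sets, the Jaccard similarity measure, and the five-chemical ‘database’ are all invented, and the real system works with millions of structures and 74 characteristics.

```python
# Hypothetical illustration of read-across by chemical similarity.
# Not the published RASAR method; the real system uses millions of
# structures (10 million structures give roughly 5 x 10^13 pairwise
# comparisons, the "50 trillion pairs" mentioned above) and 74 descriptors.

def jaccard(a: set, b: set) -> float:
    """Similarity between two sets of structural features."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def read_across(query: set, training: list, k: int = 5) -> bool:
    """Predict toxicity of `query` from its k most similar tested chemicals.

    `training` is a list of (feature_set, is_toxic) pairs.
    Returns True if the majority of the nearest neighbours are toxic.
    """
    ranked = sorted(training, key=lambda item: jaccard(query, item[0]), reverse=True)
    neighbours = ranked[:k]
    toxic_votes = sum(1 for _, is_toxic in neighbours if is_toxic)
    return toxic_votes > len(neighbours) / 2

# Toy data: the feature sets stand in for structural fingerprints.
training_data = [
    ({"nitro", "aromatic_ring"}, True),
    ({"nitro", "chloride"}, True),
    ({"hydroxyl", "aromatic_ring"}, False),
    ({"hydroxyl", "alkyl_chain"}, False),
    ({"aldehyde", "aromatic_ring"}, True),
]

print(read_across({"nitro", "aromatic_ring", "alkyl_chain"}, training_data, k=3))
```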

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.
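That 89 percent figure is essentially a sensitivity check: of the chemicals already known to be toxic, how many does the model flag? Here’s a minimal sketch of that calculation, using made-up labels rather than the study’s 48,000 characterized chemicals.

```python
# Hypothetical check of how often known toxic chemicals are flagged.
# The actual study compared predictions against data for 48,000 chemicals.

def sensitivity(predictions, labels) -> float:
    """Fraction of truly toxic chemicals that the model predicts as toxic."""
    true_positives = sum(1 for p, y in zip(predictions, labels) if p and y)
    actual_positives = sum(labels)
    return true_positives / actual_positives if actual_positives else 0.0

predicted = [True, True, False, True, False, True]   # model output (toy)
actual =    [True, True, True,  True, False, False]  # known outcomes (toy)
print(f"{sensitivity(predicted, actual):.0%}")        # 75% on this toy data
```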

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers would have not registered chemicals after 2013, and instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting or eliminating animal testing noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Prosthetic pain

“Feeling no pain” can be a euphemism for being drunk. However, there are some people for whom it’s not a euphemism; they literally feel no pain for one reason or another. Amputees, for example, feel nothing (including pain) through their prosthetic limbs, and researchers at Johns Hopkins University (Maryland, US) have found a way for them to feel pain again.

A June 20, 2018 news item on ScienceDaily provides an introduction to the research and to the reason for it,

Amputees often experience the sensation of a “phantom limb” — a feeling that a missing body part is still there.

That sensory illusion is closer to becoming a reality thanks to a team of engineers at the Johns Hopkins University that has created an electronic skin. When layered on top of prosthetic hands, this e-dermis brings back a real sense of touch through the fingertips.

“After many years, I felt my hand, as if a hollow shell got filled with life again,” says the anonymous amputee who served as the team’s principal volunteer tester.

Made of fabric and rubber laced with sensors to mimic nerve endings, e-dermis recreates a sense of touch as well as pain by sensing stimuli and relaying the impulses back to the peripheral nerves.

A June 20, 2018 Johns Hopkins University news release (also on EurekAlert), which originated the news item, explores the research in more depth,

“We’ve made a sensor that goes over the fingertips of a prosthetic hand and acts like your own skin would,” says Luke Osborn, a graduate student in biomedical engineering. “It’s inspired by what is happening in human biology, with receptors for both touch and pain.

“This is interesting and new,” Osborn said, “because now we can have a prosthetic hand that is already on the market and fit it with an e-dermis that can tell the wearer whether he or she is picking up something that is round or whether it has sharp points.”

The work – published June 20 in the journal Science Robotics – shows it is possible to restore a range of natural, touch-based feelings to amputees who use prosthetic limbs. The ability to detect pain could be useful, for instance, not only in prosthetic hands but also in lower limb prostheses, alerting the user to potential damage to the device.

Human skin contains a complex network of receptors that relay a variety of sensations to the brain. This network provided a biological template for the research team, which includes members from the Johns Hopkins departments of Biomedical Engineering, Electrical and Computer Engineering, and Neurology, and from the Singapore Institute of Neurotechnology.

Bringing a more human touch to modern prosthetic designs is critical, especially when it comes to incorporating the ability to feel pain, Osborn says.

“Pain is, of course, unpleasant, but it’s also an essential, protective sense of touch that is lacking in the prostheses that are currently available to amputees,” he says. “Advances in prosthesis designs and control mechanisms can aid an amputee’s ability to regain lost function, but they often lack meaningful, tactile feedback or perception.”

That is where the e-dermis comes in, conveying information to the amputee by stimulating peripheral nerves in the arm, making the so-called phantom limb come to life. The e-dermis device does this by electrically stimulating the amputee’s nerves in a non-invasive way, through the skin, says the paper’s senior author, Nitish Thakor, a professor of biomedical engineering and director of the Biomedical Instrumentation and Neuroengineering Laboratory at Johns Hopkins.

“For the first time, a prosthesis can provide a range of perceptions, from fine touch to noxious to an amputee, making it more like a human hand,” says Thakor, co-founder of Infinite Biomedical Technologies, the Baltimore-based company that provided the prosthetic hardware used in the study.

Inspired by human biology, the e-dermis enables its user to sense a continuous spectrum of tactile perceptions, from light touch to noxious or painful stimulus. The team created a “neuromorphic model” mimicking the touch and pain receptors of the human nervous system, allowing the e-dermis to electronically encode sensations just as the receptors in the skin would. Tracking brain activity via electroencephalography, or EEG, the team determined that the test subject was able to perceive these sensations in his phantom hand.

The researchers then connected the e-dermis output to the volunteer by using a noninvasive method known as transcutaneous electrical nerve stimulation, or TENS. In a pain-detection task, the team determined that the test subject and the prosthesis were able to experience a natural, reflexive reaction to both pain while touching a pointed object and non-pain when touching a round object.
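The signal path described here runs from fingertip sensor, through the neuromorphic encoding, to transcutaneous stimulation of the nerves. Here’s a highly simplified sketch of how a rate-coded version of that encoding might look; the sensor values, thresholds, and stimulation parameters are invented for illustration and are not the ones used in the Science Robotics paper.

```python
# Simplified, hypothetical rate-coding of fingertip contact into
# stimulation parameters; not the actual e-dermis/TENS implementation.

from dataclasses import dataclass

@dataclass
class Stimulation:
    frequency_hz: float   # pulse rate delivered to the nerve site
    amplitude_ma: float   # stimulation current
    percept: str          # intended perception ("touch" or "pain")

def encode_contact(pressure_kpa: float, sharpness: float) -> Stimulation:
    """Map sensor readings to a stimulation pattern.

    `sharpness` is a 0-1 estimate of object curvature (1 = pointed).
    Sharp, forceful contact is encoded as a noxious (pain-like) pattern;
    everything else is encoded as innocuous touch.
    """
    if sharpness > 0.7 and pressure_kpa > 20.0:
        # Noxious: higher pulse rate and amplitude (illustrative values only).
        return Stimulation(frequency_hz=80.0, amplitude_ma=2.0, percept="pain")
    # Innocuous touch: frequency scales with pressure, capped at 40 Hz.
    freq = min(40.0, 4.0 * pressure_kpa)
    return Stimulation(frequency_hz=freq, amplitude_ma=0.8, percept="touch")

print(encode_contact(pressure_kpa=5.0, sharpness=0.1))   # light touch
print(encode_contact(pressure_kpa=30.0, sharpness=0.9))  # sharp object -> pain
```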

The e-dermis is not sensitive to temperature–for this study, the team focused on detecting object curvature (for touch and shape perception) and sharpness (for pain perception). The e-dermis technology could be used to make robotic systems more human, and it could also be used to expand or extend to astronaut gloves and space suits, Osborn says.

The researchers plan to further develop the technology and better understand how to provide meaningful sensory information to amputees in the hopes of making the system ready for widespread patient use.

Johns Hopkins is a pioneer in the field of upper limb dexterous prostheses. More than a decade ago, the university’s Applied Physics Laboratory led the development of the advanced Modular Prosthetic Limb, which an amputee patient controls with the muscles and nerves that once controlled his or her real arm or hand.

In addition to the funding from Space@Hopkins, which fosters space-related collaboration across the university’s divisions, the team also received grants from the Applied Physics Laboratory Graduate Fellowship Program and the Neuroengineering Training Initiative through the National Institute of Biomedical Imaging and Bioengineering through the National Institutes of Health under grant T32EB003383.

The e-dermis was tested over the course of one year on an amputee who volunteered in the Neuroengineering Laboratory at Johns Hopkins. The subject frequently repeated the testing to demonstrate consistent sensory perceptions via the e-dermis. The team has worked with four other amputee volunteers in other experiments to provide sensory feedback.

Here’s a video about this work,

Sarah Zhang’s June 20, 2018 article for The Atlantic reveals a few more details while covering some of the material in the news release,

Osborn and his team added one more feature to make the prosthetic hand, as he puts it, “more lifelike, more self-aware”: When it grasps something too sharp, it’ll open its fingers and immediately drop it—no human control necessary. The fingers react in just 100 milliseconds, the speed of a human reflex. Existing prosthetic hands have a similar degree of theoretically helpful autonomy: If an object starts slipping, the hand will grasp more tightly. Ideally, users would have a way to override a prosthesis’s reflex, like how you can hold your hand on a stove if you really, really want to. After all, the whole point of having a hand is being able to tell it what to do.
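In control terms, the reflex Zhang describes is a fast local loop that bypasses the user: if the encoded percept is noxious, release the grip within roughly 100 milliseconds unless the user overrides it. Here’s a hedged sketch of that logic; the hand interface, polling rate, and override flag are hypothetical, not the team’s actual controller.

```python
# Hypothetical pain-reflex loop for a prosthetic hand controller.
# The hand API, loop period, and override flag are invented for illustration.

import time

REFLEX_WINDOW_S = 0.1  # target reaction time, roughly 100 ms

def reflex_step(percept: str, user_override: bool, open_hand) -> bool:
    """Release the grip on a noxious percept unless the user overrides.

    Returns True if the reflex fired.
    """
    if percept == "pain" and not user_override:
        open_hand()          # drop the object
        return True
    return False

def control_loop(read_percept, read_override, open_hand, period_s=0.02):
    """Poll the e-dermis at 50 Hz, reacting well inside the reflex window."""
    while True:
        if reflex_step(read_percept(), read_override(), open_hand):
            break
        time.sleep(period_s)

# Toy demonstration: a single sharp contact triggers the reflex.
events = iter(["touch", "touch", "pain"])
control_loop(lambda: next(events), lambda: False,
             lambda: print("grip released"), period_s=0.0)
```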

Here’s a link to and a citation for the paper,

Prosthesis with neuromorphic multilayered e-dermis perceives touch and pain by Luke E. Osborn, Andrei Dragomir, Joseph L. Betthauser, Christopher L. Hunt, Harrison H. Nguyen, Rahul R. Kaliki, and Nitish V. Thakor. Science Robotics 20 Jun 2018: Vol. 3, Issue 19, eaat3818 DOI: 10.1126/scirobotics.aat3818

This paper is behind a paywall.

Mixing the unmixable for all new nanoparticles

This news comes out of the University of Maryland, and the discovery could lead to nanoparticles that have never before been imagined. From a March 29, 2018 news item on ScienceDaily,

Making a giant leap in the ‘tiny’ field of nanoscience, a multi-institutional team of researchers is the first to create nanoscale particles composed of up to eight distinct elements generally known to be immiscible, or incapable of being mixed or blended together. The blending of multiple, unmixable elements into a unified, homogenous nanostructure, called a high entropy alloy nanoparticle, greatly expands the landscape of nanomaterials — and what we can do with them.

This research makes a significant advance on previous efforts that have typically produced nanoparticles limited to only three different elements and to structures that do not mix evenly. Essentially, it is extremely difficult to squeeze and blend different elements into individual particles at the nanoscale. The team, which includes lead researchers at University of Maryland, College Park (UMD)’s A. James Clark School of Engineering, published a peer-reviewed paper based on the research featured on the March 30 [2018] cover of Science.

A March 29, 2018 University of Maryland press release (also on EurekAlert), which originated the news item, delves further (Note: Links have been removed),

“Imagine the elements that combine to make nanoparticles as Lego building blocks. If you have only one to three colors and sizes, then you are limited by what combinations you can use and what structures you can assemble,” explains Liangbing Hu, associate professor of materials science and engineering at UMD and one of the corresponding authors of the paper. “What our team has done is essentially enlarged the toy chest in nanoparticle synthesis; now, we are able to build nanomaterials with nearly all metallic and semiconductor elements.”

The researchers say this advance in nanoscience opens vast opportunities for a wide range of applications that includes catalysis (the acceleration of a chemical reaction by a catalyst), energy storage (batteries or supercapacitors), and bio/plasmonic imaging, among others.

To create the high entropy alloy nanoparticles, the researchers employed a two-step method of flash heating followed by flash cooling. Metallic elements such as platinum, nickel, iron, cobalt, gold, copper, and others were exposed to a rapid thermal shock of approximately 3,000 degrees Fahrenheit, or about half the temperature of the sun, for 0.055 seconds. The extremely high temperature resulted in uniform mixtures of the multiple elements. The subsequent rapid cooling (more than 100,000 degrees Fahrenheit per second) stabilized the newly mixed elements into the uniform nanomaterial.
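Those numbers imply the whole thermal cycle lasts well under a tenth of a second. A back-of-the-envelope check, assuming the particles cool from the roughly 3,000 degrees Fahrenheit shock temperature back toward room temperature at the quoted minimum rate:

```python
# Rough timescale check for the flash heating/cooling step described above.
# The values come from the press release; the calculation is only an estimate.

peak_f = 3000.0           # approximate shock temperature, deg F
ambient_f = 70.0          # assumed room temperature, deg F
cooling_rate = 100_000.0  # minimum quoted cooling rate, deg F per second
heating_time_s = 0.055    # quoted duration of the thermal shock

quench_time_s = (peak_f - ambient_f) / cooling_rate
print(f"heating: {heating_time_s*1000:.0f} ms, quench: <= {quench_time_s*1000:.0f} ms")
# -> heating: 55 ms, quench: <= 29 ms
```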

“Our method is simple, but one that nobody else has applied to the creation of nanoparticles. By using a physical science approach, rather than a traditional chemistry approach, we have achieved something unprecedented,” says Yonggang Yao, a Ph.D. student at UMD and one of the lead authors of the paper.

To demonstrate one potential use of the nanoparticles, the research team used them as advanced catalysts for ammonia oxidation, which is a key step in the production of nitric acid (a liquid acid that is used in the production of ammonium nitrate for fertilizers, making plastics, and in the manufacturing of dyes). They were able to achieve 100 percent oxidation of ammonia and 99 percent selectivity toward desired products with the high entropy alloy nanoparticles, proving their ability as highly efficient catalysts.

Yao says another potential use of the nanoparticles as catalysts could be the generation of chemicals or fuels from carbon dioxide.

“The potential applications for high entropy alloy nanoparticles are not limited to the field of catalysis. With cross-discipline curiosity, the demonstrated applications of these particles will become even more widespread,” says Steven D. Lacey, a Ph.D. student at UMD and also one of the lead authors of the paper.

This research was performed through a multi-institutional collaboration of Prof. Liangbing Hu’s group at the University of Maryland, College Park; Prof. Reza Shahbazian-Yassar’s group at University of Illinois at Chicago; Prof. Ju Li’s group at the Massachusetts Institute of Technology; Prof. Chao Wang’s group at Johns Hopkins University; and Prof. Michael Zachariah’s group at the University of Maryland, College Park.

What outside experts are saying about this research:

“This is quite amazing; Dr. Hu creatively came up with this powerful technique, carbo-thermal shock synthesis, to produce high entropy alloys of up to eight different elements in a single nanoparticle. This is indeed unthinkable for bulk materials synthesis. This is yet another beautiful example of nanoscience!,” says Peidong Yang, the S.K. and Angela Chan Distinguished Professor of Energy and professor of chemistry at the University of California, Berkeley and member of the American Academy of Arts and Sciences.

“This discovery opens many new directions. There are simulation opportunities to understand the electronic structure of the various compositions and phases that are important for the next generation of catalyst design. Also, finding correlations among synthesis routes, composition, and phase structure and performance enables a paradigm shift toward guided synthesis,” says George Crabtree, Argonne Distinguished Fellow and director of the Joint Center for Energy Storage Research at Argonne National Laboratory.

More from the research coauthors:

“Understanding the atomic order and crystalline structure in these multi-element nanoparticles reveals how the synthesis can be tuned to optimize their performance. It would be quite interesting to further explore the underlying atomistic mechanisms of the nucleation and growth of high entropy alloy nanoparticle,” says Reza Shahbazian-Yassar, associate professor at the University of Illinois at Chicago and a corresponding author of the paper.

“Carbon metabolism drives ‘living’ metal catalysts that frequently move around, split, or merge, resulting in a nanoparticle size distribution that’s far from the ordinary, and highly tunable,” says Ju Li, professor at the Massachusetts Institute of Technology and a corresponding author of the paper.

“This method enables new combinations of metals that do not exist in nature and do not otherwise go together. It enables robust tuning of the composition of catalytic materials to optimize the activity, selectivity, and stability, and the application will be very broad in energy conversions and chemical transformations,” says Chao Wang, assistant professor of chemical and biomolecular engineering at Johns Hopkins University and one of the study’s authors.

Here’s a link to and a citation for the paper,

Carbothermal shock synthesis of high-entropy-alloy nanoparticles by Yonggang Yao, Zhennan Huang, Pengfei Xie, Steven D. Lacey, Rohit Jiji Jacob, Hua Xie, Fengjuan Chen, Anmin Nie, Tiancheng Pu, Miles Rehwoldt, Daiwei Yu, Michael R. Zachariah, Chao Wang, Reza Shahbazian-Yassar, Ju Li, Liangbing Hu. Science 30 Mar 2018: Vol. 359, Issue 6383, pp. 1489-1494 DOI: 10.1126/science.aan5412

This paper is behind a paywall.

Hallucinogenic molecules and the brain

Psychedelic drugs seem to be enjoying a ‘moment’. After decades of being vilified and declared illegal (in many jurisdictions), psychedelic (or hallucinogenic) drugs are once again being tested for use in therapy. A Sept. 1, 2017 article by Diana Kwon for The Scientist describes some of the latest research (I’ve excerpted the section on molecules; Note: Links have been removed),

Mind-bending molecules


All the classic psychedelic drugs—psilocybin, LSD, and N,N-dimethyltryptamine (DMT), the active component in ayahuasca—activate serotonin 2A (5-HT2A) receptors, which are distributed throughout the brain. In all likelihood, this receptor plays a key role in the drugs’ effects. Krähenmann [Rainer Krähenmann, a psychiatrist and researcher at the University of Zurich] and his colleagues in Zurich have discovered that ketanserin, a 5-HT2A receptor antagonist, blocks LSD’s hallucinogenic properties and prevents individuals from entering a dreamlike state or attributing personal relevance to the experience.12,13

Other research groups have found that, in rodent brains, 2,5-dimethoxy-4-iodoamphetamine (DOI), a highly potent and selective 5-HT2A receptor agonist, can modify the expression of brain-derived neurotrophic factor (BDNF)—a protein that, among other things, regulates neuronal survival, differentiation, and synaptic plasticity. This has led some scientists to hypothesize that, through this pathway, psychedelics may enhance neuroplasticity, the ability to form new neuronal connections in the brain.14 “We’re still working on that and trying to figure out what is so special about the receptor and where it is involved,” says Katrin Preller, a postdoc studying psychedelics at the University of Zurich. “But it seems like this combination of serotonin 2A receptors and BDNF leads to a kind of different organizational state in the brain that leads to what people experience under the influence of psychedelics.”

This serotonin receptor isn’t limited to the central nervous system. Work by Charles Nichols, a pharmacology professor at Louisiana State University, has revealed that 5-HT2A receptor agonists can reduce inflammation throughout the body. Nichols and his former postdoc Bangning Yu stumbled upon this discovery by accident, while testing the effects of DOI on smooth muscle cells from rat aortas. When they added this drug to the rodent cells in culture, it blocked the effects of tumor necrosis factor-alpha (TNF-α), a key inflammatory cytokine.

“It was completely unexpected,” Nichols recalls. The effects were so bewildering, he says, that they repeated the experiment twice to convince themselves that the results were correct. Before publishing the findings in 2008,15 they tested a few other 5-HT2A receptor agonists, including LSD, and found consistent anti-inflammatory effects, though none of the drugs’ effects were as strong as DOI’s. “Most of the psychedelics I have tested are about as potent as a corticosteroid at their target, but there’s something very unique about DOI that makes it much more potent,” Nichols says. “That’s one of the mysteries I’m trying to solve.”

After seeing the effect these drugs could have in cells, Nichols and his team moved on to whole animals. When they treated mouse models of system-wide inflammation with DOI, they found potent anti-inflammatory effects throughout the rodents’ bodies, with the strongest effects in the small intestine and a section of the main cardiac artery known as the aortic arch.16 “I think that’s really when it felt that we were onto something big, when we saw it in the whole animal,” Nichols says.

The group is now focused on testing DOI as a potential therapeutic for inflammatory diseases. In a 2015 study, they reported that DOI could block the development of asthma in a mouse model of the condition,17 and last December, the team received a patent to use DOI for four indications: asthma, Crohn’s disease, rheumatoid arthritis, and irritable bowel syndrome. They are now working to move the treatment into clinical trials. The benefit of using DOI for these conditions, Nichols says, is that because of its potency, only small amounts will be required—far below the amounts required to produce hallucinogenic effects.

In addition to opening the door to a new class of diseases that could benefit from psychedelics-inspired therapy, Nichols’s work suggests “that there may be some enduring changes that are mediated through anti-inflammatory effects,” Griffiths [Roland Griffiths, a psychiatry professor at Johns Hopkins University] says. Recent studies suggest that inflammation may play a role in a number of psychological disorders, including depression18 and addiction.19

“If somebody has neuroinflammation and that’s causing depression, and something like psilocybin makes it better through the subjective experience but the brain is still inflamed, it’s going to fall back into the depressed rut,” Nichols says. But if psilocybin is also treating the inflammation, he adds, “it won’t have that rut to fall back into.”

If it turns out that psychedelics do have anti-inflammatory effects in the brain, the drugs’ therapeutic uses could be even broader than scientists now envision. “In terms of neurodegenerative disease, every one of these disorders is mediated by inflammatory cytokines,” says Juan Sanchez-Ramos, a neuroscientist at the University of South Florida who in 2013 reported that small doses of psilocybin could promote neurogenesis in the mouse hippocampus.20 “That’s why I think, with Alzheimer’s, for example, if you attenuate the inflammation, it could help slow the progression of the disease.”

For anyone who was never exposed to the anti-hallucinogenic drug campaigns, this turn of events is mindboggling. There was a great deal of concern, especially about LSD, in the 1960s, and it was not entirely unfounded. In my own family, a distant cousin, while under the influence of the drug, jumped off a building believing he could fly. So, the fact that Kwon’s article opens with an account of someone being treated successfully for depression with a psychedelic drug was surprising to me. Why these drugs can now be used successfully for psychiatric conditions, when so much damage was apparently done under their influence in decades past, may have something to do with taking them in a controlled environment and, possibly, at smaller dosages.

Nanofiber coating for artificial joints and implants

The researchers have a great image to accompany their research, which fits well with the Hallowe’en and Day of the Dead celebrations taking place around the time the research was published.

A titanium implant (blue) without a nanofiber coating in the femur of a mouse. Bacteria are shown in red and responding immune cells in yellow. Credit: Lloyd Miller/Johns Hopkins Medicine

An Oct. 24, 2016 news item on ScienceDaily announces the research on nanofibers,

In a proof-of-concept study with mice, scientists at The Johns Hopkins University show that a novel coating they made with antibiotic-releasing nanofibers has the potential to better prevent at least some serious bacterial infections related to total joint replacement surgery.

An Oct. 24, 2016 Johns Hopkins Medicine news release (also on EurekAlert), provides further details (Note: Links have been removed),

A report on the study, published online the week of Oct. 24 [2016] in Proceedings of the National Academy of Sciences, was conducted on the rodents’ knee joints, but, the researchers say, the technology would have “broad applicability” in the use of orthopaedic prostheses, such as hip and knee total joint replacements, as well as pacemakers, stents and other implantable medical devices. In contrast to other coatings in development, the researchers report the new material can release multiple antibiotics in a strategically timed way for an optimal effect.

“We can potentially coat any metallic implant that we put into patients, from prosthetic joints, rods, screws and plates to pacemakers, implantable defibrillators and dental hardware,” says co-senior study author Lloyd S. Miller, M.D., Ph.D., an associate professor of dermatology and orthopaedic surgery at the Johns Hopkins University School of Medicine.

Surgeons and biomedical engineers have for years looked for better ways —including antibiotic coatings — to reduce the risk of infections that are a known complication of implanting artificial hip, knee and shoulder joints.

Every year in the U.S., an estimated 1 to 2 percent of the more than 1 million hip and knee replacement surgeries are followed by infections linked to the formation of biofilms — layers of bacteria that adhere to a surface, forming a dense, impenetrable matrix of proteins, sugars and DNA. Immediately after surgery, an acute infection causes swelling and redness that can often be treated with intravenous antibiotics. But in some people, low-grade chronic infections can last for months, causing bone loss that leads to implant loosening and ultimately failure of the new prosthesis. These infections are very difficult to treat and, in many cases of chronic infection, prostheses must be removed and patients placed on long courses of antibiotics before a new prosthesis can be implanted. The cost per patient often exceeds $100,000 to treat a biofilm-associated prosthesis infection, Miller says.

Major downsides to existing options for local antibiotic delivery, such as antibiotic-loaded cement, beads, spacers or powder, during the implantation of medical devices are that they can typically only deliver one antibiotic at a time and the release rate is not well-controlled. To develop a better approach that addresses those problems, Miller teamed up with Hai-Quan Mao, Ph.D., a professor of materials science and engineering at the Johns Hopkins University Whiting School of Engineering, and a member of the Institute for NanoBioTechnology, Whitaker Biomedical Engineering Institute and Translational Tissue Engineering Center.

Over three years, the team focused on designing a thin, biodegradable plastic coating that could release multiple antibiotics at desired rates. This coating is composed of a nanofiber mesh embedded in a thin film; both components are made of polymers used for degradable sutures.

To test the technology’s ability to prevent infection, the researchers loaded the nanofiber coating with the antibiotic rifampin in combination with one of three other antibiotics: vancomycin, daptomycin or linezolid. “Rifampin has excellent anti-biofilm activity but cannot be used alone because bacteria would rapidly develop resistance,” says Miller. The coatings released vancomycin, daptomycin or linezolid for seven to 14 days and rifampin over three to five days. “We were able to deploy two antibiotics against potential infection while ensuring rifampin was never present as a single agent,” Miller says.

The team then used each combination to coat titanium Kirschner wires — a type of pin used in orthopaedic surgery to fix bone in place after wrist fractures — inserted them into the knee joints of anesthetized mice and introduced a strain of Staphylococcus aureus, a bacterium that commonly causes biofilm-associated infections in orthopaedic surgeries. The bacteria were engineered to give off light, allowing the researchers to noninvasively track infection over time.

Miller says that after 14 days of infection in mice that received an antibiotic-free coating on the pins, all of the mice had abundant bacteria in the infected tissue around the knee joint, and 80 percent had bacteria on the surface of the implant. In contrast, after the same time period in mice that received pins with either linezolid-rifampin or daptomycin-rifampin coating, none of the mice had detectable bacteria either on the implants or in the surrounding tissue.

“We were able to completely eradicate infection with this coating,” says Miller. “Most other approaches only decrease the number of bacteria but don’t generally or reliably prevent infections.”

After the two-week test, each of the rodents’ joints and adjacent bones were removed for further study. Miller and Mao found that not only had infection been prevented, but the bone loss often seen near infected joints — which creates the prosthetic loosening in patients — had also been completely avoided in animals that received pins with the antibiotic-loaded coating.

Miller emphasized that further research is needed to test the efficacy and safety of the coating in humans, and in sorting out which patients would best benefit from the coating — people with a previous prosthesis joint infection receiving a new replacement joint, for example.

The polymers they used to generate the nanofiber coating have already been used in many approved devices by the U.S. Food and Drug Administration, such as degradable sutures, bone plates and drug delivery systems.

Here’s a link to and a citation for the paper,

Polymeric nanofiber coating with tunable combinatorial antibiotic delivery prevents biofilm-associated infection in vivo by Alyssa G. Ashbaugh, Xuesong Jiang, Jesse Zheng, Andrew S. Tsai, Woo-Shin Kim, John M. Thompson, Robert J. Miller, Jonathan H. Shahbazian, Yu Wang, Carly A. Dillen, Alvaro A. Ordonez, Yong S. Chang, Sanjay K. Jain, Lynne C. Jones, Robert S. Sterling, Hai-Quan Mao, and Lloyd S. Miller. PNAS [Proceedings of the National Academy of Sciences] 2016 doi: 10.1073/pnas.1613722113 Published ahead of print October 24, 2016

This paper is behind a paywall.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who has seen those exuberantly speculative film shorts from the 1950s and ’60s knows.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education ­– and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Center for Sustainable Nanotechnology or how not to poison and make the planet uninhabitable

I received notice of the Center for Sustainable Nanotechnology’s newest deal with the US National Science Foundation via email; here’s more from the August 31, 2015 University of Wisconsin-Madison (UW-Madison) news release,

The Center for Sustainable Nanotechnology, a multi-institutional research center based at the University of Wisconsin-Madison, has inked a new contract with the National Science Foundation (NSF) that will provide nearly $20 million in support over the next five years.

Directed by UW-Madison chemistry Professor Robert Hamers, the center focuses on the molecular mechanisms by which nanoparticles interact with biological systems.

Nanotechnology involves the use of materials at the smallest scale, including the manipulation of individual atoms and molecules. Products that use nanoscale materials range from beer bottles and car wax to solar cells and electric and hybrid car batteries. If you read your books on a Kindle, a semiconducting material manufactured at the nanoscale underpins the high-resolution screen.

While there are already hundreds of products that use nanomaterials in various ways, much remains unknown about how these modern materials and the tiny particles they are composed of interact with the environment and living things.

“The purpose of the center is to explore how we can make sure these nanotechnologies come to fruition with little or no environmental impact,” explains Hamers. “We’re looking at nanoparticles in emerging technologies.”

In addition to UW-Madison, scientists from UW-Milwaukee, the University of Minnesota, the University of Illinois, Northwestern University and the Pacific Northwest National Laboratory have been involved in the center’s first phase of research. Joining the center for the next five-year phase are Tuskegee University, Johns Hopkins University, the University of Iowa, Augsburg College, Georgia Tech and the University of Maryland, Baltimore County.

At UW-Madison, Hamers leads efforts in synthesis and molecular characterization of nanomaterials. Soil science Professor Joel Pedersen and chemistry Professor Qiang Cui lead groups exploring the biological and computational aspects of how nanomaterials affect life.

Much remains to be learned about how nanoparticles affect the environment and the multitude of organisms – from bacteria to plants, animals and people – that may be exposed to them.

“Some of the big questions we’re asking are: How is this going to impact bacteria and other organisms in the environment? What do these particles do? How do they interact with organisms?” says Hamers.

For instance, bacteria, the vast majority of which are beneficial or benign organisms, tend to be “sticky” and nanoparticles might cling to the microorganisms and have unintended biological effects.

“There are many different mechanisms by which these particles can do things,” Hamers adds. “The challenge is we don’t know what these nanoparticles do if they’re released into the environment.”

To get at the challenge, Hamers and his UW-Madison colleagues are drilling down to investigate the molecular-level chemical and physical principles that dictate how nanoparticles interact with living things.

Pedersen’s group, for example, is studying the complexities of how nanoparticles interact with cells and, in particular, their surface membranes.

“To enter a cell, a nanoparticle has to interact with a membrane,” notes Pedersen. “The simplest thing that can happen is the particle sticks to the cell. But it might cause toxicity or make a hole in the membrane.”

Pedersen’s group can make model cell membranes in the lab using the same lipids and proteins that are the building blocks of nature’s cells. By exposing the lab-made membranes to nanomaterials now used commercially, Pedersen and his colleagues can see how the membrane-particle interaction unfolds at the molecular level – the scale necessary to begin to understand the biological effects of the particles.

Such studies, Hamers argues, promise a science-based understanding that can help ensure the technology leaves a minimal environmental footprint by identifying issues before they manifest themselves in the manufacturing, use or recycling of products that contain nanotechnology-inspired materials.

To help fulfill that part of the mission, the center has established working relationships with several companies to conduct research on materials in the very early stages of development.

“We’re taking a look-ahead view. We’re trying to get into the technological design cycle,” Hamers says. “The idea is to use scientific understanding to develop a predictive ability to guide technology and guide people who are designing and using these materials.”

What with this initiative and the LCnano Network at Arizona State University (my April 8, 2014 posting; scroll down about 50% of the way), it seems that environmental and health and safety studies of nanomaterials are kicking into a higher gear as commercialization efforts intensify.

New director starts at TRIUMF, Canada’s national laboratory for particle and nuclear physics

Here’s the announcement, straight from the March 18, 2014 TRIUMF news release,

After a seven-month, highly competitive, international search for TRIUMF’s next director, the laboratory’s Board of Management announced today that Dr. Jonathan Bagger, Krieger-Eisenhower Professor, Vice Provost, and former Interim Provost at the Johns Hopkins University, will join TRIUMF this summer as the laboratory’s next director.

TRIUMF is Canada’s national laboratory for particle and nuclear physics, focusing on probing the structure and origins of matter and advancing isotopes for science and medicine.  Located on the campus of the University of British Columbia, TRIUMF is owned and operated by a consortium of 18 leading Canadian universities and supported by the federal and provincial governments.

Bagger was attracted to TRIUMF because, “Its collaborative, interdisciplinary model represents the future for much of science.  TRIUMF helps Canada connect fundamental research to important societal goals, ranging from health and safety to education and innovation.”  Noting TRIUMF’s new strategic plan that recently secured five years of core funding from the Government of Canada, he added, “It is an exciting time to lead the laboratory.”

Bagger brings extensive experience to the job.  Professor Paul Young, Chair of TRIUMF’s Board of Management and Vice-President of Research and Innovation at the University of Toronto, said, “Jon is an outstanding, internationally renowned physicist with a wealth of leadership experience and a track record of excellence.  He is a welcome addition to Canada and I am confident that under his tenure, TRIUMF will continue to flourish.”

Jim Hanlon, Interim CEO/Chief Administrative Officer of TRIUMF and President and CEO of Advanced Applied Physics Solutions Inc., welcomed the news.  He said, “The laboratory has been shaped and served greatly by its past directors.  Today the need continues for an extraordinary combination of vision, leadership, and excellence.  Jon will bring all of this and more to TRIUMF.  On behalf of the staff, we’re excited about moving forward with Jon at the helm.”

Bagger expressed his enthusiasm in moving across the border to join TRIUMF as the next director. “TRIUMF is known internationally for its impressive capabilities in science and engineering, ranging from rare-isotope studies on its Vancouver campus to its essential contributions to the Higgs boson discovery at CERN.  All rest on the legendary dedication and commitment of TRIUMF’s researchers and staff.  I look forward to working with this terrific team to advance innovation and discovery in Vancouver, in Canada, and on the international stage.”

Bagger will lead the laboratory for a six-year term beginning July 1 [2014].  He reports he is ready to go:  “I have installed a metric speedometer in my car, downloaded the Air Canada app, and cleansed my home of all Washington Capitals gear.”

Nice of Bagger to start his new job on Canada Day. From a symbolic perspective, it’s an interesting start date. As for his metric speedometer and Air Canada app, bravo! Perhaps, though, he might have wanted the last clause to feature the Vancouver Canucks, e.g., ‘and set aside money (or space) for Vancouver Canucks gear’. You can find out more about TRIUMF here.

Does education kill the ability to do algebra?

Apparently, the ability to perform basic algebra is innate in humans, mice, fish, and others. Researchers at Johns Hopkins describe some of their findings about algebra and innate abilities in this video,

While the researchers don’t accuse the education system of destroying or damaging one’s ability to perform algebra, I will make that suggestion: the gut-level instinct the researchers are describing is educated out of most of us. Here’s more from the March 6, 2014 news item on ScienceDaily describing the research,

Millions of high school and college algebra students are united in a shared agony over solving for x and y, and for those to whom the answers don’t come easily, it gets worse: Most preschoolers and kindergarteners can do some algebra before even entering a math class.

In a just-published study in the journal Developmental Science, lead author and post-doctoral fellow Melissa Kibbe and Lisa Feigenson, associate professor of psychological and brain sciences at Johns Hopkins University’s Krieger School of Arts and Sciences, find that most preschoolers and kindergarteners, or children between 4 and 6, can do basic algebra naturally.

“These very young children, some of whom are just learning to count, and few of whom have even gone to school yet, are doing basic algebra and with little effort,” Kibbe said. “They do it by using what we call their ‘Approximate Number System:’ their gut-level, inborn sense of quantity and number.”

A Johns Hopkins University March 7, 2014 news piece by Latarsha Gatlin describes the research further,

The “Approximate Number System,” or ANS, is also called “number sense,” and describes humans’ and animals’ ability to quickly size up the quantity of objects in their everyday environments. We’re born with this ability, which is probably an evolutionary adaptation to help human and animal ancestors survive in the wild, scientists say.

Previous research has revealed some interesting facts about number sense, including that adolescents with better math abilities also had superior number sense when they were preschoolers, and that number sense peaks at age 35.

Kibbe, who works in Feigenson’s lab, wondered whether preschool-age children could harness that intuitive mathematical ability to solve for a hidden variable. In other words, could they do something akin to basic algebra before they ever received formal classroom mathematics instruction? The answer was “yes,” at least when the algebra problem was acted out by two furry stuffed animals—Gator and Cheetah—using “magic cups” filled with objects like buttons, plastic doll shoes, and pennies.

In the study, children sat down individually with an examiner who introduced them to the two characters, each of which had a cup filled with an unknown quantity of items. Children were told that each character’s cup would “magically” add more items to a pile of objects already sitting on a table. But children were not allowed to see the number of objects in either cup: they only saw the pile before it was added to, and after, so they had to infer approximately how many objects Gator’s cup and Cheetah’s cup contained.

At the end, the examiner pretended that she had mixed up the cups, and asked the children—after showing them what was in one of the cups—to help her figure out whose cup it was. The majority of the children knew whose cup it was, a finding that revealed for the researchers that the pint-sized participants had been solving for a missing quantity. In essence, this is the same as doing basic algebra.

“What was in the cup was the x and y variable, and children nailed it,” said Feigenson, director of the Johns Hopkins Laboratory for Child Development. “Gator’s cup was the x variable and Cheetah’s cup was the y variable. We found out that young children are very, very good at this. It appears that they are harnessing their gut level number sense to solve this task.”

If this kind of basic algebraic reasoning is so simple and natural for 4-, 5-, and 6-year-olds, then why is it so difficult for teens and others?

“One possibility is that formal algebra relies on memorized rules and symbols that seem to trip many people up,” Feigenson said. “So one of the exciting future directions for this research is to ask whether telling teachers that children have this gut level ability—long before they master the symbols—might help in encouraging students to harness these skills. Teachers may be able to help children master these kinds of computations earlier, and more easily, giving them a wedge into the system.”

While number sense helps children in solving basic algebra, more sophisticated concepts and reasoning are needed to master the complex algebra problems that are taught later in the school age years.

Another finding from the research was that an ANS aptitude does not follow gender lines. Boys and girls answered questions correctly in equal proportions during the experiments, the researchers said. Although other research shows that even young children can be influenced by gender stereotypes about girls’ versus boys’ math prowess, “we see no evidence for gender differences in our work on basic number sense,” Feigenson said.

Parents with numerically challenged kids shouldn’t worry that their child will be bad at math. The psychologists say it’s more important to nurture and support young children’s use of their number sense in solving problems that will later be introduced more formally in school.

“We find links at all ages between the precision of people’s Approximate Number System and their formal math ability,” Feigenson said. “But this does not necessarily mean that children with poorer precision grow up to be bad at math. For example, children with poorer number sense may need to rely on other strategies, besides their gut sense of number, to solve math problems. But this is an area where much future research is needed.”
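For readers who want the logic of the cup task spelled out, here is a minimal sketch of my own (not the researchers’ method or materials) of the inference being described: the child effectively estimates x and y from the pile sizes seen before and after each character’s cup is emptied, then matches the revealed cup to whichever estimate is closer. In other words, the child is roughly solving pile_before + x ≈ pile_after for x. All the quantities below are invented for illustration; the study’s actual stimuli are not given in the excerpt above.

```python
# Illustrative sketch only: the hidden-quantity inference described in the study,
# written out in Python with made-up numbers.

def infer_cup_contents(pile_before, pile_after):
    """Estimate how many items a cup added to the pile (the 'x' or 'y')."""
    return pile_after - pile_before

def whose_cup(revealed_amount, gator_estimate, cheetah_estimate):
    """Pick the character whose inferred cup contents is closer to the revealed amount."""
    if abs(revealed_amount - gator_estimate) <= abs(revealed_amount - cheetah_estimate):
        return "Gator"
    return "Cheetah"

# Hypothetical trial: Gator's cup turns a pile of 5 buttons into 12 (x ≈ 7);
# Cheetah's cup turns a pile of 4 buttons into 20 (y ≈ 16).
x = infer_cup_contents(5, 12)
y = infer_cup_contents(4, 20)
print(whose_cup(8, x, y))  # a revealed cup of about 8 items looks like Gator's
```

The point of the analogy is that children appear to make this kind of comparison approximately and without any symbols, which is what the researchers mean by harnessing the Approximate Number System to ‘solve for x’.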

Here’s a link to and a citation for the paper,

Young children ‘solve for x’ using the Approximate Number System by Melissa M. Kibbe and Lisa Feigenson. Developmental Science. Article first published online: 3 MAR 2014. DOI: 10.1111/desc.12177

© 2014 John Wiley & Sons Ltd

This paper is behind a paywall.

2013 International Science & Engineering Visualization Challenge Winners

Thanks to an RT from @coreyspowell I stumbled across a Feb. 7, 2014 article in Science (magazine) describing the 2013 International Science & Engineering Visualization Challenge Winners. I am highlighting a few of the entries here, but there are more images in the article and a slideshow.

First Place: Illustration

Cortex in Metallic Pastels. Credit: Greg Dunn and Brian Edwards, Greg Dunn Design, Philadelphia, Pennsylvania; Marty Saggese, Society for Neuroscience, Washington, D.C.; Tracy Bale, University of Pennsylvania, Philadelphia; Rick Huganir, Johns Hopkins University, Baltimore, Maryland

From the article, a description of Greg Dunn and his work,

With a Ph.D. in neuroscience and a love of Asian art, it may have been inevitable that Greg Dunn would combine them to create sparse, striking illustrations of the brain. “It was a perfect synthesis of my interests,” Dunn says.

Cortex in Metallic Pastels represents a stylized section of the cerebral cortex, in which axons, dendrites, and other features create a scene reminiscent of a copse of silver birch at twilight. An accurate depiction of a slice of cerebral cortex would be a confusing mess, Dunn says, so he thins out the forest of cells, revealing the delicate branching structure of each neuron.

Dunn blows pigments across the canvas to create the neurons and highlights some of them in gold leaf and palladium, a technique he is keen to develop further.

“My eventual goal is to start an art-science lab,” he says. It would bring students of art and science together to develop new artistic techniques. He is already using lithography to give each neuron in his paintings a different angle of reflectance. “As you walk around, different neurons appear and disappear, so you can pack it with information,” he says.

People’s Choice: Games & Apps

Meta!Blast: The Leaf. Credit: Eve Syrkin Wurtele, William Schneller, Paul Klippel, Greg Hanes, Andrew Navratil, and Diane Bassham, Iowa State University, Ames

More from the article,

“Most people don’t expect a whole ecosystem right on the leaf surface,” says Eve Syrkin Wurtele, a plant biologist at Iowa State University. Meta!Blast: The Leaf, the game that Wurtele and her team created, lets high school students pilot a miniature bioship across this strange landscape, which features nematodes and a lumbering tardigrade. They can dive into individual cells and zoom around a chloroplast, activating photosynthesis with their ship’s search lamp. Pilots can also scan each organelle they encounter to bring up more information about it from the ship’s BioLog—a neat way to put plant biology at the heart of an interactive gaming environment.

This is a second recognition for Meta!Blast, which won an Honorable Mention in the 2011 visualization challenge for a version limited to the inside of a plant cell.

The Meta!Blast website homepage describes the game,

The last remaining plant cell in existence is dying. An expert team of plant scientists have inexplicably disappeared. Can you rescue the lost team, discover what is killing the plant, and save the world?

Meta!Blast is a real-time 3D action-adventure game that puts you in the pilot’s seat. Shrink down to microscopic size and explore the vivid, dynamic world of a soybean plant cell spinning out of control. Interact with numerous characters, fight off plant pathogens, and discover how important plants are to the survival of the human race.

Enjoy!