Injectable bandages for internal bleeding and hydrogel for the brain

This injectable bandage could be a game-changer (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),

While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).

A May 22, 2018 US National Institute of Biomedical Imaging and Bioengineering (NIBIB) news release, which originated the news item, provides more detail (Note: Links have been removed),

The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”

Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.

In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”

The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.

When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.

“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”

The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.
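Those clotting-time figures are easy to sanity-check with a bit of arithmetic. The numbers below come straight from the release; everything else (variable names, the treatment of "less than three minutes" as an upper bound) is my own back-of-the-envelope sketch:

```python
# Clotting times reported in the NIBIB release: 8 minutes for the control,
# 6 minutes with the plain hydrogel, under 3 minutes with nanoparticles added.
# COMPOSITE_MIN is treated as an upper bound on the composite's clotting time.
BASELINE_MIN = 8.0
HYDROGEL_MIN = 6.0
COMPOSITE_MIN = 3.0

def percent_reduction(before: float, after: float) -> float:
    """Percent reduction in clotting time relative to the control."""
    return 100.0 * (before - after) / before

print(f"hydrogel alone: {percent_reduction(BASELINE_MIN, HYDROGEL_MIN):.1f}% reduction")
print(f"with nanoparticles: at least {percent_reduction(BASELINE_MIN, COMPOSITE_MIN):.1f}% reduction")
```

In other words, the hydrogel alone cuts clotting time by 25%, while the nanoparticle composite cuts it by at least 62.5%, which is why the release calls the nanoparticle result "significantly reduced."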

The paper was published back in April 2018 and there was an April 2, 2018 Texas A&M University news release on EurekAlert making the announcement (and providing a few unique details),

A penetrating injury from shrapnel is a serious obstacle in overcoming battlefield wounds that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need to quickly self-administer materials that prevent fatality due to excessive blood loss.

With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.

In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.

“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”

The study uses a commonly used thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are a 3-D water swollen polymer network, similar to Jell-O, simulating the structure of human tissues.

When kappa-carrageenan is mixed with clay-based nanoparticles, an injectable gel is obtained. The charged characteristics of the clay-based nanoparticles give the hydrogels their hemostatic ability. Specifically, plasma proteins and platelets in the blood adsorb onto the gel surface and trigger the blood clotting cascade.

“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound,” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of the nanoparticles enabled electrostatic interactions with therapeutics, thus resulting in the slow release of therapeutics.”

Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black), form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound-healing in emergency situations. Credit: Lokhande et al.

Here’s a link to and a citation for the paper,

Nanoengineered injectable hydrogels for wound healing application by Giriraj Lokhande, James K. Carrow, Teena Thakur, Janet R. Xavier, Madasamy Parani, Kayla J. Bayless, Akhilesh K. Gaharwar. Acta Biomaterialia Volume 70, 1 April 2018, Pages 35-47
https://doi.org/10.1016/j.actbio.2018.01.045

This paper is behind a paywall.

Hydrogel and the brain

It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),

In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.

“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”

The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.

After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.

Remarkable stuff.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former chief scientist of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And the evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi, who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Sciences and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world-leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
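The release describes a two-stage design: one network turns the raw OCT volume into a map of tissue features, and a second turns that feature map into a referral recommendation plus a confidence percentage. Here is a minimal sketch of that data flow only; both stages are untrained random stand-ins, and every name, shape, and label below is invented for illustration (the real system uses deep networks trained on thousands of scans):

```python
import numpy as np

N_FEATURES = 10  # the release mentions 10 features of eye disease
REFERRALS = ["observation", "routine", "semi-urgent", "urgent"]  # invented labels

def segment(scan: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: per-voxel probabilities over tissue features
    (softmax across the feature axis), in place of a trained segmentation net."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=scan.shape + (N_FEATURES,))
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def refer(feature_map: np.ndarray) -> tuple[str, float]:
    """Stage 2 stand-in: pool features over the whole scan, score each referral
    class, and report the top class with its confidence as a percentage."""
    pooled = feature_map.mean(axis=(0, 1, 2))           # average feature presence
    rng = np.random.default_rng(1)
    w = rng.normal(size=(N_FEATURES, len(REFERRALS)))   # untrained weights
    scores = pooled @ w
    probs = np.exp(scores) / np.exp(scores).sum()
    i = int(probs.argmax())
    return REFERRALS[i], float(probs[i]) * 100

scan = np.zeros((8, 64, 64))  # toy 3-D OCT volume
decision, confidence = refer(segment(scan))
print(decision, f"{confidence:.0f}%")
```

The point is only the shape of the pipeline: the intermediate feature map is what lets clinicians inspect which parts of the scan drove a recommendation, and the per-class probability is what the release describes as the system's confidence percentage.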

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

Ingenuity Lab (a nanotechnology initiative), the University of Alberta, and Carlo Montemagno—what is happening in Canadian universities? (2 of 2)

You can find Part 1 of the latest installment in this sad story here.

Who says Carlo Montemagno is a star nanotechnology researcher?

Unusually, and despite his eminent stature, Dr. Montemagno does not rate a Wikipedia entry. Luckily, his CV (curriculum vitae) is online (placed there by SIU) so we can get to know a bit more (the CV is a 63 pp. document) about the man’s accomplishments. (Note: There are some formatting differences, and, unusually, I will put my comments into the excerpted CV using [] i.e., square brackets to signify my input.)

Carlo Montemagno, PhD
University of Alberta
Department of Chemical and Materials Engineering
and
NRC/CNRC National Institute for Nanotechnology
Edmonton, AB T6G 2V4
Canada

Educational Background

1995, Ph.D., Department of Civil Engineering and Geological Sciences, College of Earth and Mineral Sciences, University of Notre Dame

1990, M.S., Petroleum and Natural Gas Engineering, College of Earth and Mineral Sciences, Pennsylvania State University

1980, B.S., Agricultural and Biological Engineering, College of Engineering, Cornell University

Supplemental Education

1986, Practical Environmental Law, Federal Publications, Washington, DC

1985, Effective Executive Training Program, Wharton Business School, University of Pennsylvania, Philadelphia, PA

1980, Civil Engineer Corp Officer Project, CECOS & General Management School, Port Hueneme, CA

[He doesn’t seem to have taken any courses in the last 30 years.]

Professional Experience

(Select Achievements)

Over three decades of experience in shepherding complex organizations both inside and outside academia. Working as a builder, I have led organizations in government, industry and higher education during periods of change and challenge to achieve goals that many perceived to be unattainable.

University of Alberta, Edmonton AB 9/12 to present

9/12 to present, Founding Director, Ingenuity Lab [largely defunct as of April 18, 2018], Province of Alberta

8/13 to present, Director Biomaterials Program, NRC/CNRC National Institute for Nanotechnology [It’s not clear if this position still exists.]

10/13 to present, Canada Research Chair, Government of Canada in Intelligent Nanosystems [Canadian universities receive up to $200,000 for an individual Canada Research Chair. The money can be used to fund the chair in its entirety or it can be added to other monies, e.g., faculty salary. There are two tiers, one for established researchers and one for new researchers. Montemagno would have been a Tier 1 Canada Research Chair. At McGill University {a major Canadian educational institution}, for example, total compensation including salary, academic stipend, benefits, and X-coded research funds would be a maximum of $200,000 at Montemagno’s Tier 1 level. See: here (scroll down about 90% of the way).]

3/13 to present, AITF iCORE Strategic Chair, Province of Alberta in BioNanotechnology and Biomimetic Systems [I cannot find this position in the current list of the University of Alberta Faculty of Science’s research chairs.]

9/12 to present, Professor, Faculty of Engineering, Chemical and Materials Engineering

Crafted and currently lead an Institute that bridges multiple organizations named Ingenuity Lab (www.ingenuitylab.ca). This Institute is a truly integrated multidisciplinary organization comprised of dedicated researchers from STEM, medicine, and the social sciences. Ingenuity Lab leverages Alberta’s strengths in medicine, engineering, science, and agriculture that are present in multiple academic enterprises across the province to solve grand challenges in the areas of energy, environment, and health and rapidly translate the solutions to the economy.

The exciting and relevant feature of Ingenuity Lab is that support comes from resources outside the normal academic funding streams. Core funding of approximately $8.6M/yr emerged by working and communicating a compelling vision directly with the Provincial Executive and Legislative branches of government. [In the material I’ve read, the money for the research was part of how Dr. Montemagno was wooed by the University of Alberta. My understanding is that he himself did not obtain the funding, which in CAD was $100M over 10 years. Perhaps the university was able to attract the funding based on Dr. Montemagno’s reputation and it was contingent on his acceptance?] I significantly augmented these base resources by developing Federal Government, and Industry partnership agreements with a suite of multinational corporations and SME’s across varied industry sectors.

Collectively, this effort is generating enhanced resource streams that support innovative academic programming, build new research infrastructure, and enable high risk/high reward research. Just as important, it has established new pathways to interact meaningfully with local and global communities.

Strategic Leadership

•Created the Ingenuity Lab organization including a governing board representing multiple academic institutions, government and industry sectors.

•Developed and executed a strategic plan to achieve near and long-term strategic objectives.

•Recruited ~100 researchers representing a wide range disciplnes. [sic] [How hard can it be to attract researchers in this job climate?]

•Built out ~36,000 S.F. of laboratory and administrative space.

•Crafted operational policy and procedures.

•Developed and implemented a unique stakeholder inclusive management strategy focused on the rapid translation of solutions to the economy.

Innovation and Economic Engagement

•Member of the Expert Panel on innovation, commissioned by the Government of Alberta, to assess opportunities, challenges and design and implementation options for Alberta’s multi-billion dollar investment to drive long-term economic growth and diversification. The developed strategy is currently being implemented. [Details?]

•Served as a representive [sic] on multiple Canadian national trade missions to Asia, United States and the Middle East. [Sounds like he got to enjoy some nice trips.]

•Instituted formal development partnerships with several multi-national corporations including Johnson & Johnson, Cenovus and Sabuto Inc. [Details?]

•Launched multiple for-profit joint ventures founded on technologies collaboratively developed with industry with funding from both private and public sources. [Details?]

Branding

•Developed and implemented a communication program focused on branding of Ingenuity Lab’s unique mission, both regionally and globally, to the lay public, academia, government, and industry. [Why didn’t the communication specialist do this?]

This effort employs traditional paper, online, and social media outlets to effectively reach different demographics.

•Awarded “Best Nanotechnology Research Organization–2014” by The New Economy. [What is the New Economy? The Economist, yes. New Economy, no.]

Global Development

•Executed formal research and education partnerships with the Yonsei Institute of Convergence Technology and the Yonsei Bio-IT MicroFab Center in Korea, Mahatma Gandhi University in India, and the Italian Institute of Technology. [{1} The Yonsei Institute of Convergence Technology doesn’t have any news items prior to 2015 or after 2016, and the Ingenuity Lab and/or Carlo Montemagno did not feature in them. {2} There are six Mahatma Gandhi Universities in India. {3} The Italian Institute of Technology does not have any news listings on the English-language version of its site.]

•Opened Ingenuity Lab, India in May 2015. Focused on translating 21st-century technology to enable solutions appropriate for developing nations in the Energy, Agriculture, and Health economic sectors. [Found this May 9, 2016 notice on the Asia Pacific Foundation of Canada website, noting this: “… opening of the Ingenuity Lab Research Hub at Mahatma Gandhi University in Kottayam, in the Indian state of Kerala.” There’s also this May 6, 2016 news release. I can’t find anything on the Mahatma Gandhi University Kerala website.]

•Established partnership research and development agreements with SME’s in both Israel and India.

•Developed active research collaborations with medical and educational institutions in Nepal, Qatar, India, Israel, India and the United States.

Community Outreach

•Created Young Innovators research experience program to educate, support and nurture tyro undergraduate researchers and entrepreneurs.

•Developed an educational game, “Scopey’s Nano Adventure” for iOS and Android platforms to educate 6yr to 10yr olds about Nanotechnology. [What did the children learn? Was this really part of the mandate?]

•Delivered educational science programs to the lay public at multiple, high profile events. [Which events? The ones on the trade junkets?]

University of Cincinnati, Cincinnati OH 7/06 to 8/12

7/10 to 8/12 Founding Dean, College of Engineering and Applied Science

7/09 to 6/10 Dean, College of Applied Science

7/06 to 6/10 Dean, College of Engineering

7/06 to 8/12 Geier Professor of Engineering Education, College of Engineering

7/06 to 8/12, Professor of Bioengineering, College of Engineering & College of Medicine

University of California, Los Angeles 7/01 to 6/06

5/03 to 6/06, Associate Director California Nanosystems Institute

7/02 to 6/06, Co-Director NASA Center for Cell Mimetic Space Exploration

7/02 to 6/06, Founding Department Chair, Department of Bioengineering

7/02 to 6/06, Chair Biomedical Engineering IDP

7/01 to 6/02, Chair of Academic Affairs, Biomedical Engineering IDP

7/01 to 6/06, Carol and Roy Doumani Professor of Biomedical Engineering, College of Engineering and Applied Sciences

7/01 to 6/06, Professor Mechanical and Aerospace Engineering

Recommending Montemagno

Presumably the folks at Southern Illinois University asked for recommendations from Montemagno’s previous employers. So, how did he get a recommendation from the folks in Alberta when, according to Spoerre’s April 10, 2018 article, the Ingenuity Lab had been under review by the province of Alberta’s Alberta Innovates programme since June 2017? I find it hard to believe that the folks at the University of Alberta were unaware of the review.

When you’re trying to get rid of someone, it’s pretty much standard practice that once they’ve gotten the message, you give a good recommendation to their prospective employer. The question begs to be asked, how many times have employers done this for Montemagno?

Stars in their eyes

Everyone exaggerates a bit on their résumé or CV. One of my difficulties with this whole affair lies in how Montemagno can be described as a ‘nanotechnology star’. The accomplishments foregrounded on Montemagno’s CV are administrative and, if memory serves, that was also true at the University of Cincinnati. Given the situation with the Ingenuity Lab, I’m wondering about these accomplishments.

Was due diligence performed by SIU, the University of Alberta, or anywhere else that Montemagno worked? I realize that you’re not likely to get much information from calling up the universities where he worked previously, especially if there was a problem and they wanted to get rid of him. Still, did someone check out his degrees and his start-ups, or dig a little deeper into some of his claims?

His credentials and stated accomplishments are quite impressive and I, too, would have been dazzled. (He also lists positions at the Argonne National Laboratory and at Cornell University.) I’ve picked at some bits but one thing that stands out to me is the move from UCLA to the University of Cincinnati. It’s all big names: UCLA, Cornell, NASA, Argonne and then, not: University of Cincinnati, University of Alberta, Southern Illinois University—what happened?

(If anyone better versed in the world of academe and career has answers, please do add them to the comments.)

It’s tempting to think the Peter Principle (one of them) was at work here. In brief, this principle states that as you keep getting better jobs based on past performance, you eventually reach a point where you can’t manage the new challenges; you have risen to your level of incompetence. In accepting the offer from the University of Alberta, had Dr. Montemagno risen to his level of incompetence? Or, perhaps it was just one big failure. Unfortunately, any excuses don’t hold up under the weight of a series of misjudgments and ethical failures. Still, I’m guessing that Dr. Montemagno was hoping for a big win on a project such as this (from an Oct. 19, 2016 news release on MarketWired),

Ingenuity Lab Carbon Solutions announced today that it has been named as one of the 27 teams advancing in the $20M NRG COSIA Carbon XPRIZE. The competition sees scientists develop technologies to convert carbon dioxide emissions into products with high net value.

The Ingenuity Lab Carbon Solutions team – headquartered in Edmonton of Alberta, Canada – has made it to the second round of competition. Its team of 14 has proposed to convert CO2 waste emitted from a natural gas power plant into usable chemical products.

Ingenuity Lab Carbon Solutions is comprised of a multidisciplinary group of scientists and engineers, and was formed in the winter of 2012 to develop new approaches for the chemical industry. Ingenuity Lab Carbon Solutions is sponsored by CCEMC, and has also partnered with Ensovi for access to intellectual property and know how.

I can’t identify CCEMC with any certainty but Ensovi is one of Montemagno’s six start-up companies, as listed in his CV,

Founder and Chief Technical Officer, Ensovi, LLC., Focused on the production of low-cost bioenergy and high-value added products from sunlight using bionanotechnology, Total Funding; ~$10M, November 2010-present.

Sadly, the April 9, 2018 NRG COSIA Carbon XPRIZE news release announcing the finalists in round 3 of the competition includes an Alberta track of five teams from which the Ingenuity Lab is notably absent.

The Montemagno affair seems to be a story of hubris, greed, and good intentions. Finally, the issues associated with Dr. Montemagno give rise to another, broader question.

Is something rotten in Canada’s higher education establishment?

Starting with the University of Alberta:

it would seem pretty obvious that if you’re hiring family member(s) as part of the deal to secure a new member of faculty, you place and follow very stringent rules. No rewriting of the job descriptions, no direct role in hiring or supervising, no extra benefits, no inflated salaries; in other words, no special treatment for your family, as they know at the University of Alberta since they have policies for this very situation.

Yes, universities do hire spouses (although a daughter, a nephew, and a son-in-law seem truly excessive) and even when the university follows all of the rules, there’s resentment from staff (I know because I worked in a university). There is a caveat to the rule: there’s resentment unless that spouse is a ‘star’ in his or her own right or an exceptionally pleasant person. It’s also very helpful if the spouse is both.

I have to say I loved Fraser Forbes, that crazy University of Alberta engineer who thought he’d make things better by telling us that the family’s salaries had been paid out of federal and provincial funds rather than university funds. (sigh) Forbes was the new dean of engineering at the time of his interview in the CBC’s April 10, 2018 online article, but that no longer seems to be the case as of April 19, 2018.

Given Montemagno’s misjudgments, it seems cruel that Forbes was removed after one foolish interview. But, perhaps he didn’t want the job after all. Regardless, those people who were afraid to speak out about Dr. Montemagno cannot feel reassured by Forbes’ apparent removal.

Money, money, money

Anyone who has visited a university in Canada (and presumably the US too) has to have noticed the number of ‘sponsored’ buildings and rooms. The hunger for money seems insatiable and any sensible person knows it’s unsupportable over the long term.

The scramble for students

Mel Broitman in a Sept. 22, 2016 article for Higher Education lays out some harsh truths,

Make no mistake. It is a stunning condemnation and a “wakeup call to higher education worldwide”. The recent UNESCO report states that academic institutions are rife with corruption and turning a blind eye to malpractice right under their noses. When UNESCO, a United Nations organization created after the chaos of World War II to focus on moral and intellectual solidarity, makes such an alarming allegation, it’s sobering and not to be dismissed.

So although Canadians typically think of their society and themselves as among the more honest and transparent found anywhere, how many Canadian institutions are engaging in activities that border on dishonest and are not entirely transparent around the world?

It is overwhelmingly evident that in the last two decades we have witnessed first-hand a remarkable and callous disregard for academic ethics and standards in a scramble by Canadian universities and colleges to sign up foreign students, who represent tens of millions of dollars to their bottom lines.

We have been in a school auditorium in China and listened to the school owner tell prospective parents that the Grade 12 marks from the Canadian provincial school board program can be manipulated to secure admission for their children into Canadian universities. This, while the Canadian teachers sat oblivious to the presentation in Chinese.

In hundreds of our own interaction with students who completed the Canadian provincial school board’s curriculum in China and who achieved grades of 70% and higher in their English class have been unable to achieve even a basic level of English literacy in the written tests we have administered.   But when the largest country of origin for incoming international students and revenue is China – the Canadian universities admitting these students salivate over the dollars and focus less on due diligence.

We were once asked by a university on Canada’s west coast to review 200 applications from Saudi Arabia, in order to identify the two or three Saudi students who were actually eligible for conditional admission to that university’s undergraduate engineering program. But the proposal was scuttled by the university’s ESL department that wanted all 200 to enroll in its language courses. It insisted on and managed conditional admissions for all 200. It’s common at Canadian universities for the ESL program “tail” to wag the campus “dog” when it comes to admissions. In fact, recent Canadian government regulations have been proposed to crack down on this practice as it is an affront to academic integrity.

If you have time, do read the rest as it’s eye-opening. As for the report Broitman cites, I was not able to find it. Broitman gives a link to the report in response to one of the later comments and there’s a link in Tony Bates’s July 31, 2016 posting, but you will get a “too bad, so sad” message should you follow either link. The closest I can get to it is this Advisory Statement for Effective International Practice; Combatting Corruption and Enhancing Integrity: A Contemporary Challenge for the Quality and Credibility of Higher Education (PDF). The ‘note’ was jointly published by the (US) Council for Higher Education Accreditation (CHEA) and UNESCO.

What about the professors?

As they scramble for students, the universities appear to be cutting their ‘teaching costs’, from an April 18, 2018 article by Charles Menzies (professor of anthropology and an elected member of the UBC [University of British Columbia] Board) for THE UBYSSEY (UBC) student newspaper,

For the first time ever at UBC the contributions of student tuition fees exceeded provincial government contributions to UBC’s core budget. This startling fact was the backdrop to a strenuous grilling of UBC’s VP Finance and Provost Peter Smailes by governors at the Friday the 13 meeting of UBC’s Board of Governors’ standing committee for finance.

Given the fact students contribute more to UBC’s budget than the provincial government, governors asked why more wasn’t being done to enhance the student experience. By way of explanation the provost reiterated UBC’s commitment to the student experience. In a back-and-forth with a governor the provost outlined a range of programs that focus on enhancing the student experience. At several points the chair of the Board would intervene and press the provost for more explanations and elaboration. For his part the provost responded in a measured and deliberate tone outlining the programs in play, conceding more could be done, and affirming the importance of students in the overall process.

As a faculty member listening to this, I wondered about the background discourse undergirding the discussion. How is focussing on a student’s experience at UBC related to our core mission: education and research? What is actually being meant by experience? Why is no one questioning the inadequacy of the government’s core contribution? What about our contingent colleagues? Our part-time precarious colleagues pick up a great deal of the teaching responsibilities across our campuses. Is there not something we can do to improve their working conditions? Remember, faculty working conditions are student learning conditions. From my perspective all these questions received short shrift.

I did take the opportunity to ask the provost, given how financially sound our university is, why more funds couldn’t be directed toward improving the living and working conditions of contingent faculty. However, this was never elaborated upon after the fact.

There is much about the university as a total institution that seems driven to cultivate experiences. A lot of Board discussion circles around ideas of reputation and brand. Who pays and how much they pay (be they governments, donors, or students) is also a big deal. Cultivating a good experience for students is central to many of these discussions.

What is this experience that everyone is talking about? I hear about classroom experience, residence experience, and student experience writ large. Very little of it seems to be specifically tied to learning (unless it’s about more engaging, entertaining, learning with technology). While I’m sure some Board colleagues will disagree with this conclusion, it does seem to me that the experience being touted is really the experience of a customer seeking fulfilment through the purchase of a service. What is seen as important is not what is learned, but the grade; not the productive struggle of learning but the validation of self in a great experience as a member of an imagined community. A good student experience very likely leads to a productive alumni relationship — one where the alumni feels good about giving money.

Inside UBC’s Board of Governors

Should anyone be under illusions as to what goes on at the highest levels of university governance, there is the telling description from Professor Jennifer Berdahl about her experience on a ‘search committee for a new university president’ of the shameful treatment of previous president, Arvind Gupta (from Berdahl’s April 25, 2018 posting on her eponymous blog),

If Prof. Chaudhry’s [Canada Research Chair and Professor Ayesha Chaudhry’s resignation was announced in an April 25, 2018 UBYSSEY article by Alex Nguyen and Zak Vescera] experience was anything like mine on the UBC Presidential Search Committee, she quickly realized how alienating it is to be one of only three faculty members on a 21-person corporate-controlled Board. It was likely even worse for Chaudhry as a woman of color. Combining this with the Board’s shenanigans that are designed to manipulate information and process to achieve desired decisions and minimize academic voices, a sense of helpless futility can set in. [emphasis mine]

These shenanigans include [emphasis mine] strategic seating arrangements, sudden breaks during meetings when conversation veers from the desired direction, hand-written notes from the secretary to speaking members, hundreds of pages of documents sent the night before a meeting, private tête-à-têtes arranged between a powerful board member and a junior or more vulnerable one, portals for community input vetted before sharing, and planning op-eds to promote preferred perspectives. These are a few of many tricks employed to sideline unpopular voices, mostly academic ones.

It’s impossible to believe that UBC’s BoG is the only site where these shenanigans take place. The question I have is how many BoGs are doing the same, and how much damage are they inflicting?

Finally getting back to my point: simultaneously with the cutbacks on teaching and other associated costs, and alongside manipulative, childish behaviour at BoG meetings, large amounts of money are being spent to attract ‘stars’ such as Dr. Montemagno. The idea is to attract students (and their money) to the institution where they can network with the ‘stars’. What the student actually learns does not seem to be the primary interest.

So, what kind of deals are the universities making with the ‘stars’?

The Montemagno affair provides a few hints but, in the end, I don’t know and I don’t think anyone outside the ‘sacred circle’ does either. UBC, for example, is quite secretive and, seemingly, quite liberal in its use of nondisclosure agreements (NDAs). There was the scandal a few years ago when president Arvind Gupta abruptly resigned after one year in his position. As far as I know, no one has ever gotten to the bottom of this mystery, although there certainly seems to have been a fair degree of skullduggery involved.

After a previous president, Martha Cook Piper, took over the reins in an interim arrangement, Dr. Santa J. Ono (his Wikipedia entry) was hired. Interestingly, he was previously at the University of Cincinnati, one of Montemagno’s previous employers. That university’s apparent eagerness to grant Montemagno’s extras seems to have led to the University of Alberta’s excesses. So, what deal did UBC make with Dr. Ono? I’m pretty sure both he and the university are covered by an NDA, but there is this about his tenure as president at the University of Cincinnati (from a June 14, 2016 article by Jack Hauen for THE UBYSSEY),

… in exchange for UC not raising undergraduate tuition, he didn’t accept a salary increase or bonus for two years. And once those two years were up, he kept going: his $200,000 bonus in 2015 went to “14 different organizations and scholarships, including a campus LGBTQ centre, a local science and technology-focused high school and a program for first-generation college students,” according to the Vancouver Sun.

In 2013 he toured around the States promoting UC with a hashtag of his own creation — #HottestCollegeInAmerica — while answering anything and everything asked of him during fireside chats.

He describes himself as a “servant leader,” which is a follower of a philosophy of leadership focused primarily on “the growth and well-being of people and the communities to which they belong.”

“I see my job as working on behalf of the entire UBC community. I am working to serve you, and not vice-versa,” he said in his announcement speech this morning.

Thank goodness it’s possible to end this piece on a more or less upbeat note. Ono seems to be what my father would have called ‘a decent human being’. It’s nice to be able to include a ‘happyish’ note.

Plea

There is huge money at stake where these ‘mega’ science and technology projects are concerned. The Ingenuity Lab was a $100M investment to be paid out over 10 years, and some basic questions don’t seem to have been asked. How does this person manage money? Leaving aside any issues with an individual’s ethics and moral compass, scientists don’t usually take any courses in business and yet they are expected to manage huge budgets. Had Montemagno handled a large budget or any budget? It’s certainly not foregrounded (and I’d like to see dollar amounts) in his CV.

As well, the Ingenuity Lab was funded as a 10 year project. Had Montemagno ever stayed in one job for 10 years? Not according to his CV. His longest stint was approximately eight years when he was in the US Navy in the 1980s. Otherwise, it was five to six years, including the Ingenuity Lab stint.

Meanwhile, our universities don’t appear to be applying the rules and protocols we have in place to ensure fairness. This unseemly rush for money seems to have infected how Canadian universities attract (local, interprovincial, and, especially, international) students to pay for their education. The infection also seems to have spread into the ways ‘star’ researchers and faculty members are recruited to Canadian universities, while the bulk of the teaching staff are ‘starved’ under one pretext or another and a BoG may or may not be indulging in shenanigans designed to drive decision-making to a preordained outcome. And, for the most part, this is occurring under terms of secrecy that our intelligence agencies must envy.

In the end, I can’t be the only person wondering how all this affects our science.

Using open-source software for a 3D look at nanomaterials

A 3-D view of a hyperbranched nanoparticle with complex structure, made possible by Tomviz 1.0, a new open-source software platform developed by researchers at the University of Michigan, Cornell University and Kitware Inc. Image credit: Robert Hovden, Michigan Engineering

An April 3, 2017 news item on ScienceDaily describes this new and freely available software,

Now it’s possible for anyone to see and share 3-D nanoscale imagery with a new open-source software platform developed by researchers at the University of Michigan, Cornell University and open-source software company Kitware Inc.

Tomviz 1.0 is the first open-source tool that enables researchers to easily create 3-D images from electron tomography data, then share and manipulate those images in a single platform.

A March 31, 2017 University of Michigan news release, which originated the news item, expands on the theme,

The world of nanoscale materials—things 100 nanometers and smaller—is an important place for scientists and engineers who are designing the stuff of the future: semiconductors, metal alloys and other advanced materials.

Seeing in 3-D how nanoscale flecks of platinum arrange themselves in a car’s catalytic converter, for example, or how spiky dendrites can cause short circuits inside lithium-ion batteries, could spur advances like safer, longer-lasting batteries; lighter, more fuel efficient cars; and more powerful computers.

“3-D nanoscale imagery is useful in a variety of fields, including the auto industry, semiconductors and even geology,” said Robert Hovden, U-M assistant professor of materials science engineering and one of the creators of the program. “Now you don’t have to be a tomography expert to work with these images in a meaningful way.”

Tomviz solves a key challenge: the difficulty of interpreting data from the electron microscopes that examine nanoscale objects in 3-D. The machines shoot electron beams through nanoparticles from different angles. The beams form projections as they travel through the object, a bit like nanoscale shadow puppets.

Once the machine does its work, it’s up to researchers to piece hundreds of shadows into a single three-dimensional image. It’s as difficult as it sounds—an art as well as a science. Like staining a traditional microscope slide, researchers often add shading or color to 3-D images to highlight certain attributes.
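The shadow-puppet analogy maps onto a classic reconstruction technique. The toy sketch below is my own illustration, not code from Tomviz: it simulates parallel-beam projections (“shadows”) of a 2-D phantom and recovers the object with simple unfiltered back-projection. Real electron tomography software uses far more sophisticated filtered and iterative algorithms, and works in 3-D, but the underlying geometry is the same.

```python
# Toy back-projection demo: simulate 1-D "shadows" of a 2-D object at many
# tilt angles, then smear each shadow back across the plane and sum.
# This is a sketch of the general idea only, not a Tomviz algorithm.
import numpy as np
from scipy.ndimage import rotate

def project(image, angles_deg):
    """Simulate parallel-beam projections: one 1-D shadow per tilt angle."""
    return [rotate(image, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def back_project(projections, angles_deg, size):
    """Smear each shadow back across the plane and sum (no filtering)."""
    recon = np.zeros((size, size))
    for shadow, a in zip(projections, angles_deg):
        smear = np.tile(shadow, (size, 1))  # constant along the beam path
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

# A simple phantom: a bright square "particle" on a dark background.
size = 64
phantom = np.zeros((size, size))
phantom[24:40, 24:40] = 1.0

angles = np.arange(0, 180, 5)  # 36 tilt angles over a half rotation
recon = back_project(project(phantom, angles), angles, size)

# Unfiltered back-projection is blurry, but the particle lands in the
# right place: the brightest pixel sits inside the true square.
peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)
```

Filtering the projections before smearing (filtered back-projection) or iterating between projection and correction sharpens the result considerably, which is part of the heavy lifting packages like Tomviz do for you.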

A 3-D view of a particle used in a hydrogen fuel cell powered vehicle. The gray structure is carbon; the red and blue particles are nanoscale flecks of platinum. The image is made possible by Tomviz 1.0. Image credit: Elliot Padget, Cornell University

Traditionally, researchers have had to rely on a hodgepodge of proprietary software to do the heavy lifting. The work is expensive and time-consuming; so much so that even big companies like automakers struggle with it. And once a 3-D image is created, it’s often impossible for other researchers to reproduce it or to share it with others.

Tomviz dramatically simplifies the process and reduces the amount of time and computing power needed to make it happen, its designers say. It also enables researchers to readily collaborate by sharing all the steps that went into creating a given image and enabling them to make tweaks of their own.

“These images are far different from the 3-D graphics you’d see at a movie theater, which are essentially cleverly lit surfaces,” Hovden said. “Tomviz explores both the surface and the interior of a nanoscale object, with detailed information about its density and structure. In some cases, we can see individual atoms.”

Key to making Tomviz happen was getting tomography experts and software developers together to collaborate, Hovden said. Their first challenge was gaining access to a large volume of high-quality tomography. The team rallied experts at Cornell, Berkeley Lab and UCLA to contribute their data, and also created their own using U-M’s microscopy center. To turn raw data into code, Hovden’s team worked with open-source software maker Kitware.

With the release of Tomviz 1.0, Hovden is looking toward the next stages of the project, where he hopes to integrate the software directly with microscopes. He believes that U-M’s atom probe tomography facilities and expertise could help him design a version that could ultimately uncover the chemistry of all atoms in 3-D.

“We are unlocking access to see new 3D nanomaterials that will power the next generation of technology,” Hovden said. “I’m very interested in pushing the boundaries of understanding materials in 3-D.”

There is a video about Tomviz,

You can download Tomviz from here and you can find Kitware here. Happy 3D nanomaterial viewing!

Curcumin gel for burns and scalds

The curcumin debate continues (see my Jan. 26, 2017 posting titled: Curcumin: a scientific literature review concludes health benefits may be overstated for more about that). In the meantime, scientists at the University of California at Los Angeles’ (UCLA) David Geffen School of Medicine found that curcumin gel might be effective as a treatment for burns. From a March 14, 2017 Pensoft Publishers news release on EurekAlert (Note: Links have been removed),

What is the effect of Topical Curcumin Gel for treating burns and scalds? In a recent research paper, published in the open access journal BioDiscovery, Dr. Madalene Heng, Clinical Professor of Dermatology at the David Geffen School of Medicine, stresses that use of topical curcumin gel for treating skin problems, like burns and scalds, is very different, and appears to work more effectively, when compared to taking curcumin tablets by mouth for other conditions.

“Curcumin gel appears to work much better when used on the skin because the gel preparation allows curcumin to penetrate the skin, inhibit phosphorylase kinase and reduce inflammation,” explains Dr Heng.

In this report, use of curcumin after burns and scalds were found to reduce the severity of the injury, lessen pain and inflammation, and improve healing with less than expected scarring, or even no scarring, of the affected skin. Dr. Heng reports her experience using curcumin gel on such injuries using three examples of patients treated after burns and scalds, and provides a detailed explanation why topical curcumin may work on such injuries.

Curcumin is an ingredient found in the common spice turmeric. Turmeric has been used as a spice for centuries in many Eastern countries and gives well known dishes, such as curry, their typical yellow-gold color. The spice has also been used for cosmetic and medical purposes for just as long in these countries.

In recent years, the medicinal value of curcumin has been the subject of intense scientific studies, with publication numbering in the thousands, looking into the possible beneficial effects of this natural product on many kinds of affliction in humans.

This study published reports that topical curcumin gel applied soon after mild to moderate burns and scalds appears to be remarkably effective in relieving symptoms and improved healing of the affected skin.

“When taken by mouth, curcumin is very poorly absorbed into the body, and may not work as well,” notes Dr. Heng. “Nonetheless, our tests have shown that when the substance is used in a topical gel, the effect is notable.”

The author of the study believes that the effectiveness of curcumin gel on the skin – or topical curcumin – is related to its potent anti-inflammatory activity. Based on studies that she has done both in the laboratory and in patients over 25 years, the key to curcumin’s effectiveness on burns and scalds is that it is a natural inhibitor of an enzyme called phosphorylase kinase.

This enzyme in humans has many important functions, including its involvement in wound healing. Wound healing is the vital process that enables healing of tissues after injury. The process goes through a sequence of acute and chronic inflammatory events, during which there is redness, swelling, pain and then healing, often with scarring in the case of burns and scalds of the skin. The sequence is started by the release of phosphorylase kinase about 5 mins after injury, which activates over 200 genes that are involved in wound healing.

Dr. Heng uses curcumin gel for burns, scalds and other skin conditions as complementary treatment, in addition to standard treatment usually recommended for such conditions.

Caption: These are results from 5 days upon application of curcumin gel to burns, and results after 6 weeks. Credit: Dr. Madalene Heng

Here’s a link to and a citation for the paper,

Phosphorylase Kinase Inhibition Therapy in Burns and Scalds by Madalene Heng. BioDiscovery 20: e11207 (24 Feb 2017) https://doi.org/10.3897/biodiscovery.20.e1120

This paper is in an open access journal.

Mapping 23,000 atoms in a nanoparticle

Identification of the precise 3-D coordinates of iron, shown in red, and platinum atoms in an iron-platinum nanoparticle. Courtesy of Colin Ophus and Florian Nickel/Berkeley Lab

The image of the iron-platinum nanoparticle (referenced in the headline) reminds me of foetal ultrasound images. A Feb. 1, 2017 news item on ScienceDaily tells us more,

In the world of the very tiny, perfection is rare: virtually all materials have defects on the atomic level. These imperfections — missing atoms, atoms of one type swapped for another, and misaligned atoms — can uniquely determine a material’s properties and function. Now, UCLA [University of California at Los Angeles] physicists and collaborators have mapped the coordinates of more than 23,000 individual atoms in a tiny iron-platinum nanoparticle to reveal the material’s defects.

The results demonstrate that the positions of tens of thousands of atoms can be precisely identified and then fed into quantum mechanics calculations to correlate imperfections and defects with material properties at the single-atom level.

A Feb. 1, 2017 UCLA news release, which originated the news item, provides more detail about the work,

Jianwei “John” Miao, a UCLA professor of physics and astronomy and a member of UCLA’s California NanoSystems Institute, led the international team in mapping the atomic-level details of the bimetallic nanoparticle, more than a trillion of which could fit within a grain of sand.

“No one has seen this kind of three-dimensional structural complexity with such detail before,” said Miao, who is also a deputy director of the Science and Technology Center on Real-Time Functional Imaging. This new National Science Foundation-funded consortium consists of scientists at UCLA and five other colleges and universities who are using high-resolution imaging to address questions in the physical sciences, life sciences and engineering.

Miao and his team focused on an iron-platinum alloy, a very promising material for next-generation magnetic storage media and permanent magnet applications.

By taking multiple images of the iron-platinum nanoparticle with an advanced electron microscope at Lawrence Berkeley National Laboratory and using powerful reconstruction algorithms developed at UCLA, the researchers determined the precise three-dimensional arrangement of atoms in the nanoparticle.

“For the first time, we can see individual atoms and chemical composition in three dimensions. Everything we look at, it’s new,” Miao said.

The team identified and located more than 6,500 iron and 16,600 platinum atoms and showed how the atoms are arranged in nine grains, each of which contains different ratios of iron and platinum atoms. Miao and his colleagues showed that atoms closer to the interior of the grains are more regularly arranged than those near the surfaces. They also observed that the interfaces between grains, called grain boundaries, are more disordered.

“Understanding the three-dimensional structures of grain boundaries is a major challenge in materials science because they strongly influence the properties of materials,” Miao said. “Now we are able to address this challenge by precisely mapping out the three-dimensional atomic positions at the grain boundaries for the first time.”

The researchers then used the three-dimensional coordinates of the atoms as inputs into quantum mechanics calculations to determine the magnetic properties of the iron-platinum nanoparticle. They observed abrupt changes in magnetic properties at the grain boundaries.

“This work makes significant advances in characterization capabilities and expands our fundamental understanding of structure-property relationships, which is expected to find broad applications in physics, chemistry, materials science, nanoscience and nanotechnology,” Miao said.

In the future, as the researchers continue to determine the three-dimensional atomic coordinates of more materials, they plan to establish an online databank for the physical sciences, analogous to protein databanks for the biological and life sciences. “Researchers can use this databank to study material properties truly on the single-atom level,” Miao said.

Miao and his team also look forward to applying their method called GENFIRE (GENeralized Fourier Iterative Reconstruction) to biological and medical applications. “Our three-dimensional reconstruction algorithm might be useful for imaging like CT scans,” Miao said. Compared with conventional reconstruction methods, GENFIRE requires fewer images to compile an accurate three-dimensional structure.

That means that radiation-sensitive objects can be imaged with lower doses of radiation.
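The core idea behind iterative Fourier reconstruction methods of this kind can be sketched in a few lines. The toy below (my own illustration, not the GENFIRE code) alternates between enforcing a sparse set of "measured" Fourier samples, standing in for the Fourier slices supplied by a tilt series of projections, and enforcing positivity of the density in real space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth "density": a few positive blobs on a 32x32 grid.
n = 32
truth = np.zeros((n, n))
for _ in range(5):
    x, y = rng.integers(4, n - 4, size=2)
    truth[x - 1:x + 2, y - 1:y + 2] += rng.uniform(0.5, 1.0)

# Simulate "measured" Fourier data on a sparse random mask (a stand-in
# for the Fourier information contributed by each 2-D projection).
F_true = np.fft.fft2(truth)
mask = rng.random((n, n)) < 0.4          # 40% of Fourier space is "measured"
F_meas = F_true * mask

# Iterative reconstruction: alternate between enforcing the measured
# Fourier samples and real-space positivity.
recon = np.zeros((n, n))
for _ in range(200):
    F = np.fft.fft2(recon)
    F = np.where(mask, F_meas, F)        # keep measured Fourier samples
    recon = np.real(np.fft.ifft2(F))
    recon[recon < 0] = 0                 # density cannot be negative

err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.3f}")
```

The positivity constraint is what lets the algorithm fill in the unmeasured Fourier samples, which is why such methods can get by with fewer projections, and hence lower radiation doses.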

The US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory issued their own Feb. 1, 2017 news release (also on EurekAlert) about the work with a focus on how their equipment made this breakthrough possible (it repeats a little of the info. from the UCLA news release),

Scientists used one of the world’s most powerful electron microscopes to map the precise location and chemical type of 23,000 atoms in an extremely small particle made of iron and platinum.

The 3-D reconstruction reveals the arrangement of atoms in unprecedented detail, enabling the scientists to measure chemical order and disorder in individual grains, which sheds light on the material’s properties at the single-atom level. Insights gained from the particle’s structure could lead to new ways to improve its magnetic performance for use in high-density, next-generation hard drives.

What’s more, the technique used to create the reconstruction, atomic electron tomography (which is like an incredibly high-resolution CT scan), lays the foundation for precisely mapping the atomic composition of other useful nanoparticles. This could reveal how to optimize the particles for more efficient catalysts, stronger materials, and disease-detecting fluorescent tags.

Microscopy data was obtained and analyzed by scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) at the Molecular Foundry, in collaboration with Foundry users from UCLA, Oak Ridge National Laboratory, and the United Kingdom’s University of Birmingham. …

Atoms are the building blocks of matter, and the patterns in which they’re arranged dictate a material’s properties. These patterns can also be exploited to greatly improve a material’s function, which is why scientists are eager to determine the 3-D structure of nanoparticles at the smallest scale possible.

“Our research is a big step in this direction. We can now take a snapshot that shows the positions of all the atoms in a nanoparticle at a specific point in its growth. This will help us learn how nanoparticles grow atom by atom, and it sets the stage for a materials-design approach starting from the smallest building blocks,” says Mary Scott, who conducted the research while she was a Foundry user, and who is now a staff scientist. Scott and fellow Foundry scientists Peter Ercius and Colin Ophus developed the method in close collaboration with Jianwei Miao, a UCLA professor of physics and astronomy.

Their nanoparticle reconstruction builds on an achievement they reported last year in which they measured the coordinates of more than 3,000 atoms in a tungsten needle to a precision of 19 trillionths of a meter (19 picometers), which is many times smaller than a hydrogen atom. Now, they’ve taken the same precision, added the ability to distinguish different elements, and scaled up the reconstruction to include tens of thousands of atoms.

Importantly, their method maps the position of each atom in a single, unique nanoparticle. In contrast, X-ray crystallography and cryo-electron microscopy plot the average position of atoms from many identical samples. These methods make assumptions about the arrangement of atoms, which isn’t a good fit for nanoparticles because no two are alike.

“We need to determine the location and type of each atom to truly understand how a nanoparticle functions at the atomic scale,” says Ercius.

A TEAM approach

The scientists’ latest accomplishment hinged on the use of one of the highest-resolution transmission electron microscopes in the world, called TEAM I. It’s located at the National Center for Electron Microscopy, which is a Molecular Foundry facility. The microscope scans a sample with a focused beam of electrons, and then measures how the electrons interact with the atoms in the sample. It also has a piezo-controlled stage that positions samples with unmatched stability and position-control accuracy.

The researchers began growing an iron-platinum nanoparticle from its constituent elements, and then stopped the particle’s growth before it was fully formed. They placed the “partially baked” particle in the TEAM I stage, obtained a 2-D projection of its atomic structure, rotated it a few degrees, obtained another projection, and so on. Each 2-D projection provides a little more information about the full 3-D structure of the nanoparticle.

They sent the projections to Miao at UCLA, who used a sophisticated computer algorithm to convert the 2-D projections into a 3-D reconstruction of the particle. The individual atomic coordinates and chemical types were then traced from the 3-D density based on the knowledge that iron atoms are lighter than platinum atoms. The resulting atomic structure contains 6,569 iron atoms and 16,627 platinum atoms, with each atom’s coordinates precisely plotted to less than the width of a hydrogen atom.
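That intensity-based tracing step, heavier platinum atoms produce brighter peaks in the 3-D density than lighter iron atoms, can be illustrated with a toy two-class split. Only the Fe/Pt atom counts below echo the paper; the intensity distributions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated integrated peak intensities: platinum (Z = 78) scatters more
# strongly than iron (Z = 26), so its atomic peaks are brighter. The atom
# counts match the paper (6,569 Fe / 16,627 Pt); the means and spreads
# are illustrative, not measured values.
fe = rng.normal(1.0, 0.15, 6569)
pt = rng.normal(3.0, 0.30, 16627)
intensity = np.concatenate([fe, pt])
true_label = np.concatenate([np.zeros(len(fe)), np.ones(len(pt))])

# Two-class assignment via a simple Lloyd (k-means, k=2) iteration in 1-D.
c = np.array([intensity.min(), intensity.max()])
for _ in range(20):
    label = (np.abs(intensity - c[1]) < np.abs(intensity - c[0])).astype(int)
    c = np.array([intensity[label == 0].mean(), intensity[label == 1].mean()])

accuracy = (label == true_label).mean()
print(f"Fe/Pt assignment accuracy: {accuracy:.4f}")
```

With well-separated intensity distributions the chemical assignment is essentially unambiguous; in the real data the separation is what makes element-resolved tomography possible.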

Translating the data into scientific insights

Interesting features emerged at this extreme scale after Molecular Foundry scientists used code they developed to analyze the atomic structure. For example, the analysis revealed chemical order and disorder in interlocking grains, in which the iron and platinum atoms are arranged in different patterns. This has large implications for how the particle grew and its real-world magnetic properties. The analysis also revealed single-atom defects and the width of disordered boundaries between grains, which was not previously possible in complex 3-D boundaries.

“The important materials science problem we are tackling is how this material transforms from a highly randomized structure, what we call a chemically-disordered structure, into a regular highly-ordered structure with the desired magnetic properties,” says Ophus.

To explore how the various arrangements of atoms affect the nanoparticle’s magnetic properties, scientists from DOE’s Oak Ridge National Laboratory ran computer calculations on the Titan supercomputer at ORNL–using the coordinates and chemical type of each atom–to simulate the nanoparticle’s behavior in a magnetic field. This allowed the scientists to see patterns of atoms that are very magnetic, which is ideal for hard drives. They also saw patterns with poor magnetic properties that could sap a hard drive’s performance.

“This could help scientists learn how to steer the growth of iron-platinum nanoparticles so they develop more highly magnetic patterns of atoms,” says Ercius.

Adds Scott, “More broadly, the imaging technique will shed light on the nucleation and growth of ordered phases within nanoparticles, which isn’t fully theoretically understood but is critically important to several scientific disciplines and technologies.”

The folks at the Berkeley Lab have created a video (notice where the still image from the beginning of this post appears),

The Oak Ridge National Laboratory (ORNL), not wanting to be left out, has been mentioned in a Feb. 3, 2017 news item on ScienceDaily,

… researchers working with magnetic nanoparticles at the University of California, Los Angeles (UCLA), and the US Department of Energy’s (DOE’s) Lawrence Berkeley National Laboratory (Berkeley Lab) approached computational scientists at DOE’s Oak Ridge National Laboratory (ORNL) to help solve a unique problem: to model magnetism at the atomic level using experimental data from a real nanoparticle.

“These types of calculations have been done for ideal particles with ideal crystal structures but not for real particles,” said Markus Eisenbach, a computational scientist at the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

A Feb. 2, 2017 ORNL news release on EurekAlert, which originated the news item, elucidates further on how their team added to the research,

Eisenbach develops quantum mechanical electronic structure simulations that predict magnetic properties in materials. Working with Paul Kent, a computational materials scientist at ORNL’s Center for Nanophase Materials Sciences, the team collaborated with researchers at UCLA and Berkeley Lab’s Molecular Foundry to combine world-class experimental data with world-class computing to do something new–simulate magnetism atom by atom in a real nanoparticle.

Using the new data from the research teams on the West Coast, Eisenbach and Kent were able to precisely model the measured atomic structure, including defects, from a unique iron-platinum (FePt) nanoparticle and simulate its magnetic properties on the 27-petaflop Titan supercomputer at the OLCF.

Electronic structure codes take atomic and chemical structure and solve for the corresponding magnetic properties. However, these structures are typically derived from many 2-D electron microscopy or x-ray crystallography images averaged together, resulting in a representative, but not true, 3-D structure.

“In this case, researchers were able to get the precise 3-D structure for a real particle,” Eisenbach said. “The UCLA group has developed a new experimental technique where they can tell where the atoms are–the coordinates–and the chemical resolution, or what they are — iron or platinum.”

The ORNL news release goes on to describe the work from the perspective of the people who ran the supercomputer simulations,

A Supercomputing Milestone

Magnetism at the atomic level is driven by quantum mechanics — a fact that has shaken up classical physics calculations and called for increasingly complex, first-principle calculations, or calculations working forward from fundamental physics equations rather than relying on assumptions that reduce computational workload.

For magnetic recording and storage devices, researchers are particularly interested in magnetic anisotropy, or what direction magnetism favors in an atom.

“If the anisotropy is too weak, a bit written to the nanoparticle might flip at room temperature,” Kent said.
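Kent's point can be made quantitative with the standard thermal-stability rule of thumb from magnetic recording: a written bit is retained when the anisotropy energy barrier K·V is large against thermal energy, commonly K·V/(k_B·T) of roughly 40-60 for about ten years of retention. The anisotropy constant below is a literature-range value for chemically ordered L1_0 FePt, not a number from this study:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
K = 6.6e6                   # J/m^3, order of magnitude for ordered L1_0 FePt

def stability_ratio(diameter_nm: float) -> float:
    """K*V / (k_B*T) for a spherical single-domain grain."""
    r = diameter_nm * 1e-9 / 2
    volume = 4.0 / 3.0 * math.pi * r ** 3
    return K * volume / (k_B * T)

for d in (3.0, 5.0, 8.0):
    print(f"{d:.0f} nm grain: KV/kT = {stability_ratio(d):.0f}")
```

The cubic dependence on grain size is why high-anisotropy materials like FePt matter: they keep the barrier high even as bits shrink.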

To solve for magnetic anisotropy, Eisenbach and Kent used two computational codes to compare and validate results.

To simulate a supercell of about 1,300 atoms from strongly magnetic regions of the 23,000-atom nanoparticle, they used the Linear Scaling Multiple Scattering (LSMS) code, a first-principles density functional theory code developed at ORNL.

“The LSMS code was developed for large magnetic systems and can tackle lots of atoms,” Kent said.

As principal investigator on 2017, 2016, and previous INCITE program awards, Eisenbach has scaled the LSMS code to Titan for a range of magnetic materials projects, and the in-house code has been optimized for Titan’s accelerated architecture, speeding up calculations more than 8 times on the machine’s GPUs. Exceptionally capable of crunching large magnetic systems quickly, the LSMS code received an Association for Computing Machinery Gordon Bell Prize in high-performance computing achievement in 1998 and 2009, and developments continue to enhance the code for new architectures.

Working with Renat Sabirianov at the University of Nebraska at Omaha, the team also ran VASP, a simulation package that is better suited for smaller atom counts, to simulate regions of about 32 atoms.

“With both approaches, we were able to confirm that the local VASP results were consistent with the LSMS results, so we have a high confidence in the simulations,” Eisenbach said.

Computer simulations revealed that grain boundaries have a strong effect on magnetism. “We found that the magnetic anisotropy energy suddenly transitions at the grain boundaries. These magnetic properties are very important,” Miao said.

In the future, researchers hope that advances in computing and simulation will make a full-particle simulation possible — as first-principles calculations are currently too intensive to solve small-scale magnetism for regions larger than a few thousand atoms.

Also, future simulations like these could show how different fabrication processes, such as the temperature at which nanoparticles are formed, influence magnetism and performance.

“There’s a hope going forward that one would be able to use these techniques to look at nanoparticle growth and understand how to optimize growth for performance,” Kent said.

Finally, here’s a link to and a citation for the paper,

Deciphering chemical order/disorder and material properties at the single-atom level by Yongsoo Yang, Chien-Chun Chen, M. C. Scott, Colin Ophus, Rui Xu, Alan Pryor, Li Wu, Fan Sun, Wolfgang Theis, Jihan Zhou, Markus Eisenbach, Paul R. C. Kent, Renat F. Sabirianov, Hao Zeng, Peter Ercius, & Jianwei Miao. Nature 542, 75–79 (02 February 2017) doi:10.1038/nature21042 Published online 01 February 2017

This paper is behind a paywall.

The mathematics of Disney’s ‘Moana’

The hit Disney movie “Moana” features stunning visual effects, including the animation of water to such a degree that it becomes a distinct character in the film. Courtesy of Walt Disney Animation Studios

Few people think to marvel over the mathematics when watching an animated feature, but without mathematicians the artists would not be able to achieve their artistic goals, as a Jan. 4, 2017 news item on phys.org makes clear (Note: A link has been removed),

UCLA [University of California at Los Angeles] mathematics professor Joseph Teran, a Walt Disney consultant on animated movies since 2007, is under no illusion that artists want lengthy mathematics lessons, but many of them realize that the success of animated movies often depends on advanced mathematics.

“In general, the animators and artists at the studios want as little to do with mathematics and physics as possible, but the demands for realism in animated movies are so high,” Teran said. “Things are going to look fake if you don’t at least start with the correct physics and mathematics for many materials, such as water and snow. If the physics and mathematics are not simulated accurately, it will be very glaring that something is wrong with the animation of the material.”

Teran and his research team have helped infuse realism into several Disney movies, including “Frozen,” where they used science to animate snow scenes. Most recently, they applied their knowledge of math, physics and computer science to enliven the new 3-D computer-animated hit, “Moana,” a tale about an adventurous teenage girl who is drawn to the ocean and is inspired to leave the safety of her island on a daring journey to save her people.

A Jan. 3, 2017 UCLA news release, which originated the news item, explains in further nontechnical detail,

Alexey Stomakhin, a former UCLA doctoral student of Teran’s and Andrea Bertozzi’s, played an important role in the making of “Moana.” After earning his Ph.D. in applied mathematics in 2013, he became a senior software engineer at Walt Disney Animation Studios. Working with Disney’s effects artists, technical directors and software developers, Stomakhin led the development of the code that was used to simulate the movement of water in “Moana,” enabling it to play a role as one of the characters in the film.

“The increased demand for realism and complexity in animated movies makes it preferable to get assistance from computers; this means we have to simulate the movement of the ocean surface and how the water splashes, for example, to make it look believable,” Stomakhin explained. “There is a lot of mathematics, physics and computer science under the hood. That’s what we do.”

“Moana” has been praised for its stunning visual effects in words the mathematicians love hearing. “Everything in the movie looks almost real, so the movement of the water has to look real too, and it does,” Teran said. “’Moana’ has the best water effects I’ve ever seen, by far.”

Stomakhin said his job is fun and “super-interesting, especially when we cheat physics and step beyond physics. It’s almost like building your own universe with your own laws of physics and trying to simulate that universe.

“Disney movies are about magic, so magical things happen which do not exist in the real world,” said the software engineer. “It’s our job to add some extra forces and other tricks to help create those effects. If you have an understanding of how the real physical laws work, you can push parameters beyond physical limits and change equations slightly; we can predict the consequences of that.”

To make animated movies these days, movie studios need to solve, or nearly solve, partial differential equations. Stomakhin, Teran and their colleagues build the code that solves the partial differential equations. More accurately, they write algorithms that closely approximate the partial differential equations because they cannot be solved perfectly. “We try to come up with new algorithms that have the highest-quality metrics in all possible categories, including preserving angular momentum perfectly and preserving energy perfectly. Many algorithms don’t have these properties,” Teran said.
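One of the properties Teran mentions, exact conservation in particle-grid transfers, is easy to demonstrate in a minimal particle-in-cell sketch. This is my own 1-D illustration of the building block behind such solvers, not Disney's production code or the full APIC scheme; it shows that transferring particle momentum to a grid conserves total linear momentum exactly when the interpolation weights sum to one:

```python
import numpy as np

rng = np.random.default_rng(2)

n_cells = 16
dx = 1.0 / n_cells
x = rng.uniform(0.05, 0.95, 50)          # particle positions (unit mass)
v = rng.normal(0.0, 1.0, 50)             # particle velocities

# Particle -> grid: linear (hat-function) weights to the two nearest nodes.
grid_mom = np.zeros(n_cells + 1)
grid_mass = np.zeros(n_cells + 1)
i = np.floor(x / dx).astype(int)
w = x / dx - i                           # fractional position within cell
np.add.at(grid_mom, i, (1 - w) * v)
np.add.at(grid_mom, i + 1, w * v)
np.add.at(grid_mass, i, 1 - w)
np.add.at(grid_mass, i + 1, w)

print(f"total particle momentum: {v.sum():.6f}")
print(f"total grid momentum:     {grid_mom.sum():.6f}")
```

APIC goes further, letting each particle carry a small affine velocity field so that angular momentum survives the round trip between particles and grid as well, which is the property the UCLA group's papers emphasize.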

Stomakhin was also involved in creating the ocean’s crashing waves that have to break at a certain place and time. That task required him to get creative with physics and use other tricks. “You don’t allow physics to completely guide it,” he said.  “You allow the wave to break only when it needs to break.”

Depicting boats on waves posed additional challenges for the scientists.

“It’s easy to simulate a boat traveling through a static lake, but a boat on waves is much more challenging to simulate,” Stomakhin said. “We simulated the fluid around the boat; the challenge was to blend that fluid with the rest of the ocean. It can’t look like the boat is splashing in a little swimming pool — the blend needs to be seamless.”

Stomakhin spent more than a year developing the code and understanding the physics that allowed him to achieve this effect.

“It’s nice to see the great visual effect, something you couldn’t have achieved if you hadn’t designed the algorithm to solve physics accurately,” said Teran, who has taught an undergraduate course on scientific computing for the visual-effects industry.

While Teran loves spectacular visual effects, he said the research has many other scientific applications as well. It could be used to simulate plasmas, simulate 3-D printing or for surgical simulation, for example. Teran is using a related algorithm to build virtual livers to substitute for the animal livers that surgeons train on. He is also using the algorithm to study traumatic leg injuries.

Teran describes the work with Disney as “bread-and-butter, high-performance computing for simulating materials, as mechanical engineers and physicists at national laboratories would. Simulating water for a movie is not so different, but there are, of course, small tweaks to make the water visually compelling. We don’t have a separate branch of research for computer graphics. We create new algorithms that work for simulating wide ranges of materials.”

Teran, Stomakhin and three other applied mathematicians — Chenfanfu Jiang, Craig Schroeder and Andrew Selle — also developed a state-of-the-art simulation method for fluids in graphics, called APIC, based on months of calculations. It allows for better realism and stunning visual results. Jiang is a UCLA postdoctoral scholar in Teran’s laboratory, who won a 2015 UCLA best dissertation prize.  Schroeder is a former UCLA postdoctoral scholar who worked with Teran and is now at UC Riverside. Selle, who worked at Walt Disney Animation Studios, is now at Google.

Their newest version of APIC has been accepted for publication by the peer-reviewed Journal of Computational Physics.

“Alexey is using ideas from high-performance computing to make movies,” Teran said, “and we are contributing to the scientific community by improving the algorithm.”

Unfortunately, the paper does not seem to have been published early online so I cannot offer a link.

Final comment, it would have been interesting to have had a comment from one of the film’s artists or animators included in the article but it may not have been possible due to time or space constraints.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first news item describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the coming months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in its industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for the further refinement and implemented in the final version of the software which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment conference, jointly organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper, to be held on 7-9 February 2017 in Malaga, Spain; the SUN-caLIBRAte Stakeholders workshop, to be held on 28 February - 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies, to be held on 1-3 March in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation for the development of caLIBRAte’s Risk Governance framework for the assessment and management of human and environmental risks of MN and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted till 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

Wearable microscopes

It never occurred to me that someone might want a wearable microscope but, apparently, there is a need. From a Sept. 27, 2016 news item on phys.org,

UCLA [University of California at Los Angeles] researchers working with a team at Verily Life Sciences have designed a mobile microscope that can detect and monitor fluorescent biomarkers inside the skin with a high level of sensitivity, an important tool in tracking various biochemical reactions for medical diagnostics and therapy.

A Sept. 26, 2016 UCLA news release by Meghan Steele Horan, which originated the news item, describes the work in more detail,

This new system weighs less than one-tenth of a pound, making it small and light enough for a person to wear around their bicep, among other parts of their body.

The research, which was published in the journal ACS Nano, was led by Aydogan Ozcan, UCLA's Chancellor's Professor of Electrical Engineering and Bioengineering and associate director of the California NanoSystems Institute, and by Vasiliki Demas of Verily Life Sciences (formerly Google Life Sciences).

Fluorescent biomarkers are routinely used for cancer detection and drug delivery and release among other medical therapies. Recently, biocompatible fluorescent dyes have emerged, creating new opportunities for noninvasive sensing and measuring of biomarkers through the skin.

However, detecting artificially added fluorescent objects under the skin is challenging. Collagen, melanin and other biological structures emit natural light in a process called autofluorescence. Various sensing systems have been tried to address this problem, but most are expensive and difficult to make small and inexpensive enough for a wearable imaging system.

To test the mobile microscope, researchers first designed a tissue phantom — an artificially created material that mimics human skin optical properties, such as autofluorescence, absorption and scattering. The target fluorescent dye solution was injected into a micro-well with a volume of about one-hundredth of a microliter, thinner than a human hair, and subsequently implanted into the tissue phantom half a millimeter to 2 millimeters from the surface — which would be deep enough to reach blood and other tissue fluids in practice.

To measure the fluorescent dye, the wearable microscope created by Ozcan and his team used a laser to hit the skin at an angle. The fluorescent image at the surface of the skin was captured via the wearable microscope. The image was then uploaded to a computer where it was processed using a custom-designed algorithm, digitally separating the target fluorescent signal from the autofluorescence of the skin, at a very sensitive parts-per-billion level of detection.
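The paper's custom algorithm isn't reproduced in the news release, but the core idea of digitally separating a target fluorescent signal from skin autofluorescence can be framed as linear spectral unmixing. Here is a minimal illustrative sketch (not the UCLA team's actual method) in which each detection channel of a pixel is modeled as a weighted sum of a dye signature and an autofluorescence signature; the spectra and weights below are invented for illustration:

```python
import numpy as np

# Hypothetical emission signatures sampled at four detection channels.
# In practice these would be calibrated from the dye and the skin phantom.
target_spectrum = np.array([0.1, 0.8, 1.0, 0.4])   # fluorescent dye
autofl_spectrum = np.array([1.0, 0.7, 0.5, 0.3])   # skin autofluorescence

def unmix(measured, basis):
    """Least-squares estimate of each component's contribution to a pixel."""
    coeffs, *_ = np.linalg.lstsq(basis.T, measured, rcond=None)
    return coeffs

# Simulate one pixel: a weak dye signal buried under strong autofluorescence.
true_dye, true_autofl = 0.02, 1.0
measured = true_dye * target_spectrum + true_autofl * autofl_spectrum

basis = np.vstack([target_spectrum, autofl_spectrum])
dye_est, autofl_est = unmix(measured, basis)
print(dye_est, autofl_est)  # recovers the weak dye contribution
```

Because the dye and autofluorescence signatures differ, the least-squares fit recovers the tiny dye weight even though the autofluorescence is fifty times stronger; achieving the parts-per-billion sensitivity reported in the paper additionally requires careful optical design and calibration.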

“We can place various tiny bio-sensors inside the skin next to each other, and through our imaging system, we can tell them apart,” Ozcan said. “We can monitor all these embedded sensors inside the skin in parallel, even understand potential misalignments of the wearable imager and correct it to continuously quantify a panel of biomarkers.”

This computational imaging framework might also be used in the future to continuously monitor various chronic diseases through the skin using an implantable or injectable fluorescent dye.

Here’s a link to and a citation for the paper,

Quantitative Fluorescence Sensing Through Highly Autofluorescent, Scattering, and Absorbing Media Using Mobile Microscopy by Zoltán Göröcs, Yair Rivenson, Hatice Ceylan Koydemir, Derek Tseng, Tamara L. Troy, Vasiliki Demas, and Aydogan Ozcan. ACS Nano, 2016, 10 (9), pp 8989–8999 DOI: 10.1021/acsnano.6b05129 Publication Date (Web): September 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Faster predictive toxicology of nanomaterials

As more nanotechnology-enabled products make their way to market and concerns about their safety rise, scientists are working to find better ways of assessing and predicting the safety of these materials. From an Aug. 13, 2016 news item on Nanowerk,

UCLA [University of California at Los Angeles] researchers have designed a laboratory test that uses microchip technology to predict how hazardous nanomaterials could potentially be.

According to UCLA professor Huan Meng, certain engineered nanomaterials, such as non-purified carbon nanotubes that are used to strengthen commercial products, have the potential to injure the lungs if inhaled during the manufacturing process. The new test he helped develop could be used to analyze the extent of that potential hazard.

An Aug. 12, 2016 UCLA news release, which originated the news item, expands on the theme,

The same test could also be used to identify biomarkers that help scientists and doctors detect cancer and infectious diseases. Currently, scientists identify those biomarkers using other tests; one of the most common is the enzyme-linked immunosorbent assay, or ELISA. But the new platform, called the semiconductor electronic label-free assay, or SELFA, costs less and is faster and more accurate, according to research published in the journal Scientific Reports.

The study was led by Meng, a UCLA assistant adjunct professor of medicine, and Chi On Chui, a UCLA associate professor of electrical engineering and bioengineering.

ELISA has been used by scientists for decades to analyze biological samples — for example, to detect whether epithelial cells in the lungs that have been exposed to nanomaterials are inflamed. But ELISA must be performed in a laboratory setting by skilled technicians, and a single test can cost roughly $700 and take five to seven days to process.

In contrast, SELFA uses microchip technology to analyze samples. The test can take between 30 minutes and two hours and, according to the UCLA researchers, could cost just a few dollars per sample when high-volume production begins.

The SELFA chip contains a T-shaped nanowire that acts as an integrated sensor and amplifier. To analyze a sample, scientists place it on a sensor on the chip. The vertical part of the T-shaped nanowire converts the current from the molecule being analyzed, and the horizontal portion amplifies that signal to distinguish the molecule from others.

The use of the T-shaped nanowires created in Chui's lab is a new application of a patented UCLA invention developed by Chui and his colleagues. This is the first time "lab-on-a-chip" analysis has been tested in a scenario that mimics a real-life situation.

The UCLA scientists exposed cultured lung cells to different nanomaterials and then compared their results using SELFA with results in a database of previous studies that used other testing methods.

“By measuring biomarker concentrations in the cell culture, we showed that SELFA was 100 times more sensitive than ELISA,” Meng said. “This means that not only can SELFA analyze much smaller sample sizes, but also that it can minimize false-positive test results.”

Chui said, “The results are significant because SELFA measurement allows us to predict the inflammatory potential of a range of nanomaterials inside cells and validate the prediction with cellular imaging and experiments in animals’ lungs.”

Here’s a link to and a citation for the paper,

Semiconductor Electronic Label-Free Assay for Predictive Toxicology by Yufei Mao, Kyeong-Sik Shin, Xiang Wang, Zhaoxia Ji, Huan Meng, & Chi On Chui. Scientific Reports 6, Article number: 24982 (2016) doi:10.1038/srep24982 Published online: 27 April 2016

This paper is open access.