Tag Archives: imaging

Using touch (bionic fingers) instead of x-rays

This is not the most exciting video but it is weirdly fascinating (thank you to ScientifiCult),

A February 15, 2023 news item on Nanowerk provides a textual description of what you’re seeing in the video (Note: A link has been removed),

What if, instead of using X-rays or ultrasound, we could use touch to image the insides of human bodies and electronic devices? In a study publishing in the journal Cell Reports Physical Science (“A smart bionic finger for subsurface tactile-tomography”), researchers present a bionic finger that can create 3D maps of the internal shapes and textures of complex objects by touching their exterior surface.

“We were inspired by human fingers, which have the most sensitive tactile perception that we know of,” says senior author Jianyi Luo, a professor at Wuyi University. “For example, when we touch our own bodies with our fingers, we can sense not only the texture of our skin, but also the outline of the bone beneath it.”

“Our bionic finger goes beyond previous artificial sensors that were only capable of recognizing and discriminating between external shapes, surface textures, and hardness,” says co-author Zhiming Chen, a lecturer at Wuyi University.

The bionic finger “scans” an object by moving across it and applying pressure—think of a constant stream of pokes or prods. With each poke, the carbon fibers compress, and the degree to which they compress provides information about the relative stiffness or softness of the object. Depending on the object’s material, it might also compress when poked by the bionic finger: rigid objects hold their shape, while soft objects will deform when enough pressure is applied. This information, along with the location at which it was recorded, is relayed to a personal computer and displayed onscreen as a 3D map.
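
For readers who like to think in code, here is a minimal sketch of the bookkeeping the passage describes: raster-scan a grid of poke positions, infer a relative stiffness value from how far the sensing element compresses at each position, and assemble those readings into a map that can then be rendered as a 3D surface. The grid size, the poke model and the stiffness formula are illustrative assumptions, not the researchers’ actual processing pipeline.

# Illustrative sketch only: assemble a stiffness map from simulated "pokes".
# The grid size, poke model and stiffness formula are assumptions for
# illustration, not the processing used with the actual bionic finger.
import numpy as np

def poke(x, y):
    """Stand-in for one tactile measurement: returns sensor compression (mm)."""
    # Pretend a stiff ridge runs through the middle of an otherwise soft sample.
    soft_compression = 1.0   # mm, the sensing element compresses a lot on soft material
    hard_compression = 0.2   # mm, it compresses little on hard material
    return hard_compression if 18 <= x <= 22 else soft_compression

def scan(nx=40, ny=40):
    """Raster-scan the surface and record relative stiffness at each location."""
    stiffness_map = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            compression = poke(i, j)
            # Less compression of the sensing element means stiffer material beneath.
            stiffness_map[j, i] = 1.0 / compression
    return stiffness_map

if __name__ == "__main__":
    m = scan()
    print(m.shape, m.min(), m.max())  # the map would then be rendered as a 3D surface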

A February 13, 2023 Cell Press news release on EurekAlert, which originated the news item, provides more details about the research and some hints at what the researchers may do next,

The researchers tested the bionic finger’s ability to map out the internal and external features of complex objects made of multiple types of material, such as a rigid “letter A” buried under a layer of soft silicone, as well as more abstractly shaped objects. When they used it to scan a small compound object made of three different materials—a rigid internal material, a soft internal material, and a soft outer coating—the bionic finger was able to discriminate not only between the soft outer coating and the internal hard ridges but also between the soft outer coating and the soft material that filled the internal grooves.

Next, the researchers tested the finger’s ability to sense and image simulated human tissue. They created this tissue—consisting of a skeletal component, made of three layers of hard polymer, and a soft silicone “muscle” layer—using 3D printing. The bionic finger was able to reproduce a 3D profile of the tissue’s structure and locate a simulated blood vessel beneath the muscle layer.

The team also explored the bionic finger’s ability to diagnose issues in electronic devices without opening them up. By scanning the surface of a defective electronic device with the bionic finger, the researchers were able to create a map of its internal electrical components and pinpoint the location at which the circuit was disconnected, as well as a mis-drilled hole, without breaking through the encapsulating layer.

“This tactile technology opens up a non-optical way for the nondestructive testing of the human body and flexible electronics,” says Luo. “Next, we want to develop the bionic finger’s capacity for omnidirectional detection with different surface materials.”

Here’s a link to and a citation for the paper,

A smart bionic finger for subsurface tactile tomography by Yizhou Li, Zhiming Chen, Youbin Chen, Hao Yang, Junyong Lu, Zhennan Li, Yongyao Chen, Dongyi Ding, Cuiying Zeng, Bingpu Zhou, Hongpeng Liang, Xingpeng Huang, Jiajia Hu, Jingcheng Huang, Jinxiu Wen, Jianyi Luo. Cell Reports Physical Science Volume 4, Issue 2, 15 February 2023, 101257 DOI: https://doi.org/10.1016/j.xcrp.2023.101257 Published online: February 15, 2023

This paper is open access.

Non-invasive chemical imaging reveals the Eyckian Lamb of God’s secrets

Left: color image after the 1950s treatment. The ears of the Eyckian Lamb were revealed after removal of the 16th-century overpaint obscuring the background. Right: color image after the 2019 treatment that removed all of the 16th century overpaint, revealing the face of the Eyckian Lamb. The dotted lines indicate the outline of the head before removal of 16th-century overpaint.

Fascinating, yes? More than one person has noticed that the ‘new’ lamb is “disturbingly human-like.” First, here’s more about this masterpiece and the technology used to restore it, from a July 29, 2020 University of Antwerp (Belgium) press release (Note: I do not have all of the figures [images] described in this press release embedded here),

Two non-invasive chemical imaging modalities were employed to help understand the changes made over time to the Lamb of God, the focal point of the Ghent Altarpiece (1432) by Hubert and Jan Van Eyck. Two major results were obtained: a prediction of the facial features of the Lamb of God that had been hidden beneath non-original overpaint dating from the 16th century (and later), and evidence for a smaller earlier version of the Lamb’s body with a more naturalistic build. These non-invasive imaging methods, combined with analysis of paint cross-sections and magnified examination of the paint surface, provide objective chemical evidence to understand the extent of overpaints and the state of preservation of the original Eyckian paint underneath.

The Ghent Altarpiece is one of the founding masterpieces of Western European painting. The central panel, The Adoration of the Lamb, represents the sacrifice of Christ with a depiction of the Lamb of God standing on an altar, blood pouring into a chalice. During conservation treatment and technical analysis in the 1950s, conservators recognized the presence of overpaint on the Lamb and the surrounding area. But based on the evidence available at that time, the decision was made to remove only the overpaint obscuring the background immediately surrounding the head. As a result, the ears of the Eyckian Lamb were uncovered, leading to the surprising effect of a head with four ears (Figure 1).

Figure 1: Left: Color image after the 1950s treatment. The ears of the Eyckian Lamb were revealed after removal of the 16th century overpaint obscuring the background. (© Lukasweb.be – Art in Flanders vzw). Right: Color image after the 2019 treatment that removed all of the 16th century overpaint, revealing the face of the Eyckian Lamb. The dotted lines indicate the outline of the head before removal of 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw).

During the recent conservation treatment of the central panel, chemical images collected before 16th century overpaint was removed revealed facial features that predicted aspects of the Eyckian Lamb, at that time still hidden below the overpaint. For example, the smaller, v-shaped nostrils of the Eyckian Lamb are situated higher than the 16th century nose, as revealed in the map for mercury, an element associated with the red pigment vermilion (Figure 2, red arrow). A pair of eyes that look forward, slightly lower than the 16th century eyes, can be seen in a false-color hyperspectral infrared reflectance image (Figure 2, right). This image also shows dark preparatory underdrawing lines that define pursed lips, and in conjunction with the presence of mercury in this area, suggest the Eyckian lips were more prominent. In addition, the higher, 16th century ears were painted over the gilded rays of the halo (Figure 2, yellow rays). Gilding is typically the artist’s final touch when working on a painting, which supports the conclusion that the lower set of ears is the Eyckian original. Collectively, these facial features indicate that, compared to the 16th century restorer’s overpainted face, the Eyckian Lamb has a smaller face with a distinctive expression.

Figure 2: Left: Colorized composite elemental map showing the distribution of gold (in yellow), mercury (in red), and lead (in white). The red arrow indicates the position of the Eyckian Lamb’s nostrils. (University of Antwerp). Right: Composite false-color infrared reflectance image (blue – 1000 nm, green – 1350 nm, red – 1650 nm) shows underdrawn lines indicating the position of facial features of the Eyckian Lamb, including forward-gazing eyes, the division between the lips, and the jawline. (National Gallery of Art, Washington). The dotted lines indicate the outline of the head before removal of 16th century overpaint.
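
As a rough illustration of how a composite like the ones in Figure 2 is assembled, the sketch below maps three co-registered grayscale images onto the red, green and blue channels of a single picture, the same trick used for the false-color infrared reflectance image (1650 nm to red, 1350 nm to green, 1000 nm to blue). The array shapes, normalization and synthetic data are my assumptions for illustration, not the teams’ actual workflow.

# Minimal sketch: build a false-colour composite from three grayscale band
# images, e.g. infrared reflectance bands mapped to red/green/blue as in the
# Figure 2 caption. Normalisation and synthetic data are illustrative assumptions.
import numpy as np

def normalise(band):
    """Stretch a band image to the 0-1 range so the three channels are comparable."""
    band = band.astype(float)
    return (band - band.min()) / (band.max() - band.min() + 1e-12)

def false_colour(band_red, band_green, band_blue):
    """Stack three co-registered band images into one RGB composite."""
    return np.dstack([normalise(band_red),
                      normalise(band_green),
                      normalise(band_blue)])

# Synthetic data standing in for the 1650 nm, 1350 nm and 1000 nm reflectance bands.
h, w = 256, 256
rgb = false_colour(np.random.rand(h, w), np.random.rand(h, w), np.random.rand(h, w))
print(rgb.shape)  # (256, 256, 3), ready for display with e.g. matplotlib's imshow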

The new imaging also revealed previously unrecognized revisions to the size and shape of the Lamb’s body: a more naturalistically shaped Lamb, with slightly sagging back, more rounded hindquarters and a smaller tail. The artist’s underdrawing lines used to lay out the design of the smaller shape can be seen in the false-color hyperspectral infrared reflectance image (Figure 3, lower left, white arrows). Mathematical processing of the reflectance dataset to emphasize a spectral feature associated with the pigment lead white resulted in a clearer image of the smaller Lamb (Figure 3, lower right). Differences between the paint handling of the fleece in the initial small Lamb and the revised area of the larger Lamb also were found upon reexamination of the x-radiograph and the paint surface under the microscope.

Figure 3: Upper left: Color image before removal of all 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw). Upper right: Color image after removal of all 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw). Lower left: False-color infrared reflectance image (blue – 1000 nm, green – 1350 nm, red – 1650 nm) reveals underdrawing lines that denote the smaller hindquarters of the initial Lamb. Lower right: Map derived from processing the infrared reflectance image cube showing the initial Lamb with a slightly sagging back, more rounded hindquarters and a smaller tail. Brighter areas of the map indicate stronger absorption from the -OH group associated with one of the forms of lead white. (National Gallery of Art, Washington).
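
The “mathematical processing of the reflectance dataset to emphasize a spectral feature” described above is commonly done with a band-depth map: for every pixel, the reflectance at the absorption wavelength is compared with a continuum interpolated between two shoulder wavelengths on either side. The sketch below shows that calculation on a synthetic image cube; the wavelengths, cube dimensions and linear-continuum model are my assumptions, not necessarily the exact processing applied to the altarpiece data.

# Illustrative band-depth calculation over a hyperspectral reflectance cube.
# Wavelength choices and the straight-line continuum are assumptions for
# illustration; the published processing may differ in detail.
import numpy as np

def band_depth(cube, wavelengths, left_nm, centre_nm, right_nm):
    """Depth of an absorption feature at centre_nm relative to a straight-line
    continuum drawn between the reflectances at left_nm and right_nm."""
    wavelengths = np.asarray(wavelengths)
    i_left = int(np.argmin(np.abs(wavelengths - left_nm)))
    i_mid = int(np.argmin(np.abs(wavelengths - centre_nm)))
    i_right = int(np.argmin(np.abs(wavelengths - right_nm)))

    r_left, r_mid, r_right = cube[..., i_left], cube[..., i_mid], cube[..., i_right]
    t = (centre_nm - left_nm) / (right_nm - left_nm)
    continuum = (1 - t) * r_left + t * r_right  # continuum interpolated at centre_nm
    # Larger values mean stronger absorption, i.e. brighter pixels in such a map.
    return 1.0 - r_mid / (continuum + 1e-12)

# Synthetic cube: 200 x 200 pixels, 100 spectral bands between 1000 and 2500 nm.
wl = np.linspace(1000, 2500, 100)
cube = np.random.rand(200, 200, 100)
depth_map = band_depth(cube, wl, left_nm=1350, centre_nm=1450, right_nm=1550)
print(depth_map.shape)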

During the conservation treatment completed in 2019, decisions were informed by well-established conservation methods (high-resolution color photography, X-radiography, infrared imaging, paint sample analysis) as well as the new chemical imaging. In this way, the conservation treatment uncovered the smaller face of the Eyckian Lamb, with forward-facing eyes that meet the viewer’s gaze. Only overpaints that could be identified as being later additions dating from the 16th century onward were carefully and safely removed. The body of the Lamb, however, has not changed. The material evidence indicates that the lead white paint layer used to define the larger squared-off hindquarters was applied prior to the 16th century restoration, but because analysis at the present time cannot definitively establish whether this was a change by the original artist(s) or a very early restoration or alteration by another artist, the enlarged contour of the Lamb was left untouched.

Chemical imaging technologies can be used to build confidence about the state of preservation of original paint and help guide the decision to remove overpaint. Combined with the conservators’ thorough optical examination, informed by years of experience and insights derived from paint cross-sections, chemical imaging methods will no doubt be central to ongoing interdisciplinary research, helping to resolve long-standing art-historical issues on the Ghent Altarpiece as well as other works of art. These findings were obtained by researchers from the University of Antwerp using macroscale X-ray fluorescence imaging and researchers at the National Gallery of Art, Washington using infrared reflectance imaging spectroscopy, interpreted in conjunction with the observations of the scientists and the conservation team from The Royal Institute for Cultural Heritage (KIK-IRPA), Brussels.

A January 22, 2020 British Broadcasting Corporation (BBC) online news item notes some of the response to the ‘new’ lamb (Note: A link has been removed),

Restorers found that the central panel of the artwork, known as the Adoration of the Mystic Lamb, had been painted over in the 16th Century.

Another artist had altered the Lamb of God, a symbol for Jesus depicted at the centre of the panel.

Now conservationists have stripped away the overpaint, revealing the lamb’s “intense gaze” and “large frontal eyes”.

Hélène Dubois, the head of the restoration project, told the Art Newspaper the original lamb had a more “intense interaction with the onlookers”.

She said the lamb’s “cartoonish” depiction, which departs from the painting’s naturalistic style, required more research.

The lamb has been described as having an “alarmingly humanoid face” with “penetrating, close-set eyes, full pink lips and flared nostrils” by the Smithsonian Magazine.

These features are “eye-catching, if not alarmingly anthropomorphic”, said the magazine, the official journal of the Smithsonian Institution.

There was also disbelief on social media, where the lamb was called “disturbing” by some and compared to an “alien creature”. Some said they felt it would have been better to not restore the lamb’s original face.

The painter of the panel, Jan Van Eyck, is considered to be one of the most technical and talented artists of his generation. However, it is widely believed that The Ghent Altarpiece was started by his brother, Hubert Van Eyck.

Taken away by the Nazis during World War Two and by Napoleon’s troops in the 1700s, the altarpiece is thought to be one of the most frequently stolen artworks of all time.

If you have the time, do read the January 22, 2020 BBC news item in its entirety as it conveys more of the controversy.

Jennifer Ouellette’s July 29, 2020 article for Ars Technica delves further into the technical detail along with some history about this particular 21st Century restoration. The conservators and experts used artificial intelligence (AI) to assist.

Here’s a link to and a citation for the paper,

Dual mode standoff imaging spectroscopy documents the painting process of the Lamb of God in the Ghent Altarpiece by J. and H. Van Eyck by Geert Van der Snickt, Kathryn A. Dooley, Jana Sanyova, Hélène Dubois, John K. Delaney, E. Melanie Gifford, Stijn Legrand, Nathalie Laquiere and Koen Janssens. Science Advances 29 Jul 2020: Vol. 6, no. 31, eabb3379 DOI: 10.1126/sciadv.abb3379

This paper is open access.

How to prevent your scanning tunneling microscope probe’s ‘tip crashes’

The microscopes used for nanoscale research were invented roughly 35 years ago and, as fabulous as they’ve been, there is a problem (from a February 12, 2018 news item on Nanowerk),

A University of Texas at Dallas graduate student, his advisor and industry collaborators believe they have addressed a long-standing problem troubling scientists and engineers for more than 35 years: How to prevent the tip of a scanning tunneling microscope from crashing into the surface of a material during imaging or lithography.

The researchers have prepared this video describing their work,

For those who like text, there’s more in this February 12, 2018 University of Texas at Dallas news release,

Scanning tunneling microscopes (STMs) operate in an ultra-high vacuum, bringing a fine-tipped probe with a single atom at its apex very close to the surface of a sample. When voltage is applied to the surface, electrons can jump or tunnel across the gap between the tip and sample.

“Think of it as a needle that is very sharp, atomically sharp,” said Farid Tajaddodianfar, a mechanical engineering graduate student in the Erik Jonsson School of Engineering and Computer Science. “The microscope is like a robotic arm, able to reach atoms on the sample surface and manipulate them.”

The problem is, sometimes the tungsten tip crashes into the sample. If it physically touches the sample surface, it may inadvertently rearrange the atoms or create a “crater,” which could damage the sample. Such a “tip crash” often forces operators to replace the tip many times, forfeiting valuable time.

Dr. John Randall is an adjunct professor at UT Dallas and president of Zyvex Labs, a Richardson, Texas-based nanotechnology company specializing in developing tools and products that fabricate structures atom by atom. Zyvex reached out to Dr. Reza Moheimani, a professor of mechanical engineering, to help address STMs’ tip crash problem. Moheimani’s endowed chair was a gift from Zyvex founder James Von Ehr MS’81, who was honored as a distinguished UTD alumnus in 2004.

“What they’re trying to do is help bring atomically precise manufacturing into reality,” said Randall, who co-authored the article with Tajaddodianfar, Moheimani and Zyvex Labs’ James Owen. “This is considered the future of nanotechnology, and it is extremely important work.”

Randall said such precise manufacturing will lead to a host of innovations.

“By building structures atom by atom, you’re able to create new, extraordinary materials,” said Randall, who is co-chair of the Jonsson School’s Industry Engagement Committee. “We can remove impurities and make materials stronger and more heat resistant. We can build quantum computers. It could radically lower costs and expand capabilities in medicine and other areas. For example, if we can better understand DNA at an atomic and molecular level, that will help us fine-tune and tailor health care according to patients’ needs. The possibilities are endless.”

In addition, Moheimani, a control engineer and expert in nanotechnology, said scientists are attempting to build transistors and quantum computers from a single atom using this technology.

“There’s an international race to build machines, devices and 3-D equipment from the atom up,” said Moheimani, the James Von Ehr Distinguished Chair in Science and Technology.

‘It’s a Big, Big Problem’

Randall said Zyvex Labs has spent a lot of time and money trying to understand what happens to the tips when they crash.

“It’s a big, big problem,” Randall said. “If you can’t protect the tip, you’re not going to build anything. You’re wasting your time.”

Tajaddodianfar and Moheimani said the issue is the controller.

“There’s a feedback controller in the STM that measures the current and moves the needle up and down,” Moheimani said. “You’re moving from one atom to another, across an uneven surface. It is not flat. Because of that, the distance between the sample and tip changes, as does the current between them. While the controller tries to move the tip up and down to maintain the current, it does not always respond well, nor does it regulate the tip correctly. The resulting movement of the tip is often unstable.”

It’s the feedback controller that fails to protect the tip from crashing into the surface, Tajaddodianfar said.

“When the electronic properties are variable across the sample surface, the tip is more prone to crash under conventional control systems,” he said. “It’s meant to be really, really sharp. But when the tip crashes into the sample, it breaks, curls backward and flattens.

“Once the tip crashes into the surface, forget it. Everything changes.”

The Solution

According to Randall, Tajaddodianfar took logical steps for creating the solution.

“The brilliance of Tajaddodianfar is that he looked at the problem and understood the physics of the tunneling between the tip and the surface, that there is a small electronic barrier that controls the rate of tunneling,” Randall said. “He figured out a way of measuring that local barrier height and adjusting the gain on the control system that demonstrably keeps the tip out of trouble. Without it, the tip just bumps along, crashing into the surface. Now, it adjusts to the control parameters on the fly.”
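
A toy version of the idea Randall describes, estimating the local tunnelling barrier height and adjusting the feedback gain on the fly, might look something like the loop below. The exponential current model, the square-root gain law and every number in it are illustrative assumptions; the published controller is considerably more sophisticated than this sketch.

# Toy sketch of a barrier-height-adaptive STM feedback loop (not the authors' code).
# The current model, gain law and all constants are illustrative assumptions.
import math

KAPPA_PER_SQRT_EV = 0.51  # decay constant prefactor, 1/angstrom per sqrt(eV)

def tunnel_current(gap_angstrom, barrier_ev, i0_na=10.0):
    """Simplified tunnelling current: decays exponentially with the tip-sample gap."""
    kappa = KAPPA_PER_SQRT_EV * math.sqrt(barrier_ev)
    return i0_na * math.exp(-2.0 * kappa * gap_angstrom)

def adaptive_gain(base_gain, barrier_ev, reference_barrier_ev=4.0):
    """Scale the loop gain with the measured local barrier height so the loop's
    effective sensitivity stays roughly constant as the surface properties change."""
    return base_gain * math.sqrt(reference_barrier_ev / max(barrier_ev, 0.1))

def run_loop(setpoint_na=1.0, steps=200):
    gap = 8.0        # angstrom, initial tip-sample gap
    base_gain = 0.05
    for step in range(steps):
        barrier = 4.0 if step < 100 else 1.0  # local barrier height drops mid-scan
        current = tunnel_current(gap, barrier)
        error = math.log(current) - math.log(setpoint_na)  # log-domain current error
        gap += adaptive_gain(base_gain, barrier) * error   # too much current -> retract
        if gap <= 0:
            raise RuntimeError("tip crash")
    return gap

print(run_loop())  # settles at a small, stable gap without ever reaching zero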

Moheimani said the group hopes to change their trajectory when it comes to building new devices.

“That’s the next thing for us. We set out to find the source of this problem, and we did that. And, we’ve come up with a solution. It’s like everything else in science: Time will tell how impactful our work will be,” Moheimani said. “But I think we have solved the big problem.”

Randall said Tajaddodianfar’s algorithm has been integrated into the company’s system software but is not yet available to customers. The research was made possible by funding from the Army Research Office and the Defense Advanced Research Projects Agency.

Here’s a link to and a citation for the paper,

On the effect of local barrier height in scanning tunneling microscopy: Measurement methods and control implications by Farid Tajaddodianfar, S. O. Reza Moheimani, James Owen, and John N. Randall. Review of Scientific Instruments 89, 013701 (2018); https://doi.org/10.1063/1.5003851 Published Online: January 2018

This paper is behind a paywall.

Quantum device provides capabilities of Dr. Who’s sonic screwdriver and Star Trek’s tricorder

I think these Australian scientists are bigger fans of Dr. Who than Star Trek if I read this March 8, 2017 news item on Nanowerk rightly (Note: A link has been removed),

Physicists have designed a handheld device inspired by the sonic screwdriver in Doctor Who and the tricorder in Star Trek that will use the power of MRI and mass spectrometry to perform a chemical analysis of objects (Nano Letters, “Nanomechanical Sensing Using Spins in Diamond”).

The sonic screwdriver is a tool used in Doctor Who to scan and identify matter, among other functions, while the multi-purpose tricorder in Star Trek can provide a detailed analysis of living things.

This video confirms the scientists’ Dr. Who fanhood,

A March 8, 2017 Australian National University (ANU) news release, which originated the news item, provides more technical detail about the research,

Lead researcher Dr Marcus Doherty from ANU said the team had proven the concept of a diamond-based quantum device to perform similar functions to these science fiction tools and would now develop a prototype.

“Laboratories and hospitals will have the power to do full chemical analyses to solve complex problems with our device that they can afford and move around easily,” said Dr Doherty from the ANU Research School of Physics and Engineering (RSPE).

“This device is going to enable many people to use powerful instruments like molecular MRI machines and mass spectrometers much more readily.”

Dr Doherty said medical researchers could use the device to weigh and identify complex molecules such as proteins, which drive diseases, such as cancer, and cures for those diseases.

“Every great advance for microscopy has driven scientific revolution,” he said.

“Our invention will help to solve many complex problems in a wide range of areas, including medical, environmental and biosecurity research.”

Molecular MRI is a form of the common medical imaging technology that is capable of identifying the chemical composition of individual molecules, while mass spectrometers measure the masses within a sample.

Co-researcher Michael Barson said the device would use tiny defects in a diamond to measure the mass and chemical composition of molecules with advanced quantum techniques borrowed from atomic clocks and gravitational wave detectors.

“For the mass spectrometry, when a molecule attaches to the diamond device, its mass changes, which changes the frequency, and we measure the change in frequency using the defects in the diamond,” said Mr Barson, a PhD student from RSPE.

“For the MRI, we are looking at how the magnetic fields in the molecule will influence the defects as well.”
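
The mass-sensing part of the proposal rests on a standard resonator relation: for a small attached mass, the fractional frequency shift is roughly half the fractional mass change, Delta_f/f ≈ -Delta_m/(2m). Here is a small worked sketch of that relation; the resonator frequency and effective mass are illustrative placeholders, not the parameters of the actual diamond device.

# Worked example of resonator-based mass sensing: a small added mass lowers the
# resonance frequency by roughly Delta_f / f = -Delta_m / (2 m). All numbers are
# illustrative placeholders, not parameters of the proposed diamond device.
def added_mass_from_shift(f0_hz, delta_f_hz, effective_mass_kg):
    """Infer the attached mass from the measured frequency shift (small-shift limit)."""
    return -2.0 * effective_mass_kg * delta_f_hz / f0_hz

f0 = 5.0e6        # Hz, assumed resonance frequency of the mechanical mode
m_eff = 1.0e-15   # kg, assumed effective mass of the resonator (one picogram)
delta_f = -0.5    # Hz, measured downward shift after a molecule attaches

print(added_mass_from_shift(f0, delta_f, m_eff))  # ~2e-22 kg, roughly a 120 kDa protein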

Here’s a link to and a citation for the paper,

Nanomechanical Sensing Using Spins in Diamond by Michael S. J. Barson, Phani Peddibhotla, Preeti Ovartchaiyapong, Kumaravelu Ganesan, Richard L. Taylor, Matthew Gebert, Zoe Mielens, Berndt Koslowski, David A. Simpson, Liam P. McGuinness, Jeffrey McCallum, Steven Prawer, Shinobu Onoda, Takeshi Ohshima, Ania C. Bleszynski Jayich, Fedor Jelezko, Neil B. Manson, and Marcus W. Doherty. Nano Lett., 2017, 17 (3), pp 1496–1503 DOI: 10.1021/acs.nanolett.6b04544 Publication Date (Web): February 1, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

A guide to producing transparent electronics

A blue light shines through a clear, implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of UW–Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group

Read this Oct. 13, 2016 news item on ScienceDaily if you want to find out how to make your own transparent electronics,

When University of Wisconsin-Madison engineers announced in the journal Nature Communications that they had developed transparent sensors for use in imaging the brain, researchers around the world took notice.

Then the requests came flooding in. “So many research groups started asking us for these devices that we couldn’t keep up,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW-Madison.

As a result, in a paper published in the journal Nature Protocols, the researchers have described in great detail how to fabricate and use transparent graphene neural electrode arrays in applications in electrophysiology, fluorescent microscopy, optical coherence tomography, and optogenetics. “We described how to do these things so we can start working on the next generation,” says Ma.

Although he and collaborator Justin Williams, the Vilas Distinguished Achievement Professor in biomedical engineering and neurological surgery at UW-Madison, patented the technology through the Wisconsin Alumni Research Foundation, they saw its potential for advancements in research. “That little step has already resulted in an explosion of research in this field,” says Williams. “We didn’t want to keep this technology in our lab. We wanted to share it and expand the boundaries of its applications.”

An Oct. 13, 2016 University of Wisconsin-Madison news release, which originated the news item, provides more detail about the paper and the researchers,

‘This paper is a gateway for other groups to explore the huge potential from here,’ says Ma. ‘Our technology demonstrates one of the key in vivo applications of graphene. We expect more revolutionary research will follow in this interdisciplinary field.’

Ma’s group is a world leader in developing revolutionary flexible electronic devices. The see-through, implantable micro-electrode arrays were light years beyond anything ever created.

Here’s a link to and a citation for the paper,

Fabrication and utility of a transparent graphene neural electrode array for electrophysiology, in vivo imaging, and optogenetics by Dong-Wook Park, Sarah K Brodnick, Jared P Ness, Farid Atry, Lisa Krugner-Higby, Amelia Sandberg, Solomon Mikael, Thomas J Richner, Joseph Novello, Hyungsoo Kim, Dong-Hyun Baek, Jihye Bong, Seth T Frye, Sanitta Thongpang, Kyle I Swanson, Wendell Lake, Ramin Pashaie, Justin C Williams, & Zhenqiang Ma. Nature Protocols 11, 2201–2222 (2016) doi:10.1038/nprot.2016.127 Published online 13 October 2016

Of course this paper is open access. The team’s previous paper published in 2014 was featured here in an Oct. 23, 2014 posting.

Replicating brain’s neural networks with 3D nanoprinting

An announcement about European Union funding for a project to reproduce neural networks by 3D nanoprinting can be found in a June 10, 2016 news item on Nanowerk,

The MESO-BRAIN consortium has received a prestigious award of €3.3million in funding from the European Commission as part of its Future and Emerging Technology (FET) scheme. The project aims to develop three-dimensional (3D) human neural networks with specific biological architecture, and the inherent ability to interrogate the network’s brain-like activity both electrophysiologically and optically. It is expected that the MESO-BRAIN will facilitate a better understanding of human disease progression, neuronal growth and enable the development of large-scale human cell-based assays to test the modulatory effects of pharmacological and toxicological compounds on neural network activity. The use of more physiologically relevant human models will increase drug screening efficiency and reduce the need for animal testing.

A June 9, 2016 Institute of Photonic Sciences (ICFO) press release (also on EurekAlert), which originated the news item, provides more detail,

About the MESO-BRAIN project

The MESO-BRAIN project’s cornerstone will use human induced pluripotent stem cells (iPSCs) that have been differentiated into neurons upon a defined and reproducible 3D scaffold to support the development of human neural networks that emulate brain activity. The structure will be based on a brain cortical module and will be unique in that it will be designed and produced using nanoscale 3D-laser-printed structures incorporating nano-electrodes to enable downstream electrophysiological analysis of neural network function. Optical analysis will be conducted using cutting-edge light sheet-based, fast volumetric imaging technology to enable cellular resolution throughout the 3D network. The MESO-BRAIN project will allow for a comprehensive and detailed investigation of neural network development in health and disease.

Prof Edik Rafailov, Head of the MESO-BRAIN project (Aston University) said: “What we’re proposing to achieve with this project has, until recently, been the stuff of science fiction. Being able to extract and replicate neural networks from the brain through 3D nanoprinting promises to change this. The MESO-BRAIN project has the potential to revolutionise the way we are able to understand the onset and development of disease and discover treatments for those with dementia or brain injuries. We cannot wait to get started!”

The MESO-BRAIN project will launch in September 2016 and research will be conducted over three years.

About the MESO-BRAIN consortium

Each of the consortium partners has been chosen for the highly specific skills & knowledge that they bring to this project. These include technologies and expertise in stem cells, photonics, physics, 3D nanoprinting, electrophysiology, molecular biology, imaging and commercialisation.

Aston University (UK): the Aston Institute of Photonic Technologies (School of Engineering and Applied Science) is one of the largest photonics groups in the UK and an internationally recognised research centre in the fields of lasers, fibre-optics, high-speed optical communications, and nonlinear and biomedical photonics. The Cell & Tissue Biomedical Research Group (Aston Research Centre for Healthy Ageing) combines collective expertise in genetic manipulation, tissue engineering and neuronal modelling with the electrophysiological and optical analysis of human iPSC-derived neural networks.

Axol Bioscience Ltd. (UK) was founded to fulfil the unmet demand for high quality, clinically relevant human iPSC-derived cells for use in biomedical research and drug discovery.

The Laser Zentrum Hannover (Germany) is a leading research organisation in the fields of laser development, material processing, laser medicine, and laser-based nanotechnologies.

The Neurophysics Group (Physics Department) at the University of Barcelona (Spain) are experts in combining experiments with theoretical and computational modelling to infer functional connectivity in neuronal circuits.

The Institute of Photonic Sciences (ICFO) (Spain) is a world-leading research centre in photonics with expertise in several microscopy techniques including light sheet imaging.

KITE Innovation (UK) helps to bridge the gap between the academic and business sectors in supporting collaboration, enterprise, and knowledge-based business development.

For anyone curious about the FET funding scheme, there’s this from the press release,

Horizon 2020 aims to ensure Europe produces world-class science by removing barriers to innovation through funding programmes such as the FET. The FET (Open) funds forward-looking collaborations between advanced multidisciplinary science and cutting-edge engineering for radically new future technologies. The published success rate is below 1.4%, making it amongst the toughest in the Horizon 2020 suite of funding schemes. The MESO-BRAIN proposal scored a perfect 5/5.

You can find out more about the MESO-BRAIN project on its ICFO webpage.

They don’t say anything about it but I can’t help wondering if the scientists aren’t also considering the possibility of creating an artificial brain.

Luminescent upconversion nanoparticles could make imaging more efficient

Researchers at the University of Adelaide (Australia) have found a way to embed luminescent nanoparticles in glass, according to a June 8, 2016 news item on Nanotechnology,

This new “hybrid glass” successfully combines the properties of these special luminescent (or light-emitting) nanoparticles with the well-known aspects of glass, such as transparency and the ability to be processed into various shapes including very fine optical fibres.

The research, in collaboration with Macquarie University and University of Melbourne, has been published online in the journal Advanced Optical Materials.

A June 7, 2016 University of Adelaide press release (also on EurekAlert), which originated the news item, offers more detail,

“These novel luminescent nanoparticles, called upconversion nanoparticles, have become promising candidates for a whole variety of ultra-high tech applications such as biological sensing, biomedical imaging and 3D volumetric displays,” says lead author Dr Tim Zhao, from the University of Adelaide’s School of Physical Sciences and Institute for Photonics and Advanced Sensing (IPAS).

“Integrating these nanoparticles into glass, which is usually inert, opens up exciting possibilities for new hybrid materials and devices that can take advantage of the properties of nanoparticles in ways we haven’t been able to do before. For example, neuroscientists currently use dye injected into the brain and lasers to be able to guide a glass pipette to the site they are interested in. If fluorescent nanoparticles were embedded in the glass pipettes, the unique luminescence of the hybrid glass could act like a torch to guide the pipette directly to the individual neurons of interest.”

Although this method was developed with upconversion nanoparticles, the researchers believe their new ‘direct-doping’ approach can be generalised to other nanoparticles with interesting photonic, electronic and magnetic properties. There will be many applications – depending on the properties of the nanoparticle.

“If we infuse glass with a nanoparticle that is sensitive to radiation and then draw that hybrid glass into a fibre, we could have a remote sensor suitable for nuclear facilities,” says Dr Zhao.

To date, the method used to integrate upconversion nanoparticles into glass has relied on the in-situ growth of the nanoparticles within the glass.

“We’ve seen remarkable progress in this area but the control over the nanoparticles and the glass compositions has been limited, restricting the development of many proposed applications,” says project leader Professor Heike Ebendorff-Heidepriem, Deputy Director of IPAS.

“With our new direct doping method, which involves synthesizing the nanoparticles and glass separately and then combining them using the right conditions, we’ve been able to keep the nanoparticles intact and well dispersed throughout the glass. The nanoparticles remain functional and the glass transparency is still very close to its original quality. We are heading towards a whole new world of hybrid glass and devices for light-based technologies.”

Here’s a link to and a citation for the paper,

Upconversion Nanocrystal-Doped Glass: A New Paradigm for Photonic Materials by Jiangbo Zhao, Xianlin Zheng, Erik P. Schartner, Paul Ionescu, Run Zhang, Tich-Lam Nguyen, Dayong Jin, and Heike Ebendorff-Heidepriem. Advanced Optical Materials DOI: 10.1002/adom.201600296 Version of Record online: 30 MAY 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Nano and a Unified Microbiome Initiative (UMI)

A Jan. 6, 2016 news item on Nanowerk features a proposal by US scientists for a Unified Microbiome Initiative (UMI),

In October [2015], an interdisciplinary group of scientists proposed forming a Unified Microbiome Initiative (UMI) to explore the world of microorganisms that are central to life on Earth and yet largely remain a mystery.

An article in the journal ACS Nano (“Tools for the Microbiome: Nano and Beyond”) describes the tools scientists will need to understand how microbes interact with each other and with us.

A Jan. 6, 2016 American Chemical Society (ACS) news release, which originated the news item, expands on the theme,

Microbes live just about everywhere: in the oceans, in the soil, in the atmosphere, in forests and in and on our bodies. Research has demonstrated that their influence ranges widely and profoundly, from affecting human health to the climate. But scientists don’t have the necessary tools to characterize communities of microbes, called microbiomes, and how they function. Rob Knight, Jeff F. Miller, Paul S. Weiss and colleagues detail what these technological needs are.

The researchers are seeking the development of advanced tools in bioinformatics, high-resolution imaging, and the sequencing of microbial macromolecules and metabolites. They say that such technology would enable scientists to gain a deeper understanding of microbiomes. Armed with new knowledge, they could then tackle related medical and other challenges with greater agility than what is possible today.

Here’s a link to and a citation for the paper,

Tools for the Microbiome: Nano and Beyond by Julie S. Biteen, Paul C. Blainey, Zoe G. Cardon, Miyoung Chun, George M. Church, Pieter C. Dorrestein, Scott E. Fraser, Jack A. Gilbert, Janet K. Jansson, Rob Knight, Jeff F. Miller, Aydogan Ozcan, Kimberly A. Prather, Stephen R. Quake, Edward G. Ruby, Pamela A. Silver, Sharif Taha, Ger van den Engh, Paul S. Weiss, Gerard C. L. Wong, Aaron T. Wright, and Thomas D. Young. ACS Nano, Article ASAP DOI: 10.1021/acsnano.5b07826 Publication Date (Web): December 22, 2015

Copyright © 2015 American Chemical Society

This is an open access paper.

I sped through very quickly and found a couple of references to ‘nano’,

Ocean Microbiomes and Nanobiomes

Life in the oceans is supported by a community of extremely small organisms that can be called a “nanobiome.” These nanoplankton particles, many of which measure less than 0.001× the volume of a white blood cell, harvest solar and chemical energy and channel essential elements into the food chain. A deep network of larger life forms (humans included) depends on these tiny microbes for its energy and chemical building blocks.

The importance of the oceanic nanobiome has only recently begun to be fully appreciated. Two dominant forms, Synechococcus and Prochlorococcus, were not discovered until the 1980s and 1990s.(32-34) Prochlorococcus has now been demonstrated to be so abundant that it may account for as much as 10% of the world’s living organic carbon. The organism divides on a diel cycle while maintaining constant numbers, suggesting that about 5% of the world’s biomass flows through this species on a daily basis.(35-37)

Metagenomic studies show that many other less abundant life forms must exist but elude direct observation because they can neither be isolated nor grown in culture.

The small sizes of these organisms (and their genomes) indicate that they are highly specialized and optimized. Metagenome data indicate a large metabolic heterogeneity within the nanobiome. Rather than combining all life functions into a single organism, the nanobiome works as a network of specialists that can only exist as a community, therein explaining their resistance to being cultured. The detailed composition of the network is the result of interactions between the organisms themselves and the local physical and chemical environment. There is thus far little insight into how these networks are formed and how they maintain steady-state conditions in the turbulent natural ocean environment.

The serendipitous discovery of Prochlorococcus happened by applying flow cytometry (developed as a medical technique for counting blood cells) to seawater.(34) With these medical instruments, the faint signals from nanoplankton can only be seen with great difficulty against noisy backgrounds. Currently, a small team is adapting flow cytometric technology to improve the capabilities for analyzing individual nanoplankton particles. The latest generation of flow cytometers enables researchers to count and to make quantitative observations of most of the small life forms (including some viruses) that comprise the nanobiome. To our knowledge, there are only two well-equipped mobile flow cytometry laboratories that are regularly taken to sea for real-time observations of the nanobiome. The laboratories include equipment for (meta)genome analysis and equipment to correlate the observations with the local physical parameters and (nutrient) chemistry in the ocean. Ultimately, integration of these measurements will be essential for understanding the complexity of the oceanic microbiome.

The ocean is tremendously undersampled. Ship time is costly and limited. Ultimately, inexpensive, automated, mobile biome observatories will require methods that integrate microbiome and nanobiome measurements, with (meta-) genomics analyses, with local geophysical and geochemical parameters.(38-42) To appreciate how the individual components of the ocean biome are related and work together, a more complete picture must be established.

The marine environment consists of stratified zones, each with a unique, characteristic biome.(43) The sunlit waters near the surface are mixed by wind action. Deeper waters may be mixed only occasionally by passing storms. The dark deepest layers are stabilized by temperature/salinity density gradients. Organic material from the photosynthetically active surface descends into the deep zone, where it decomposes into nutrients that are mixed with compounds that are released by volcanic and seismic action. These nutrients diffuse upward to replenish the depleted surface waters. The biome is stratified accordingly, sometimes with sudden transitions on small scales. Photo-autotrophs dominate near the surface. Chemo-heterotrophs populate the deep. The makeup of the microbial assemblages is dictated by the local nutrient and oxygen concentrations. The spatiotemporal interplay of these systems is highly relevant to such issues as the carbon budget of the planet but remains little understood.

And then, there was this,

Nanoscience and Nanotechnology Opportunities

The great advantage of nanoscience and nanotechnology in studying microbiomes is that the nanoscale is the scale of function in biology. It is this convergence of scales at which we can “see” and at which we can fabricate that heralds the contributions that can be made by developing new nanoscale analysis tools.(159-168) Microbiomes operate from the nanoscale up to much larger scales, even kilometers, so crossing these scales will pose significant challenges to the field, in terms of measurement, stimulation/response, informatics, and ultimately understanding.

Some progress has been made in creating model systems(143-145, 169-173) that can be used to develop tools and methods. In these cases, the tools can be brought to bear on more complex and real systems. Just as nanoscience began with the ability to image atoms and progressed to the ability to manipulate structures both directly and through guided interactions,(162, 163, 174-176) it has now become possible to control structure, materials, and chemical functionality from the submolecular to the centimeter scales simultaneously. Whereas substrates and surface functionalization have often been tailored to be resistant to bioadhesion, deliberate placement of chemical patterns can also be used for the growth and patterning of systems, such as biofilms, to be put into contact with nanoscale probes.(177-180) Such methods in combination with the tools of other fields (vide infra) will provide the means to probe and to understand microbiomes.

Key tools for the microbiome will need to be miniaturized and made parallel. These developments will leverage decades of work in nanotechnology in the areas of nanofabrication,(181) imaging systems,(182, 183) lab-on-a-chip systems,(184) control of biological interfaces,(185) and more. Commercialized and commoditized tools, such as smart phone cameras, can also be adapted for use (vide infra). By guiding the development and parallelization of these tools, increasingly complex microbiomes will be opened for study.(167)

Imaging and sensing, in general, have been enjoying a Renaissance over the past decades, and there are various powerful measurement techniques that are currently available, making the Microbiome Initiative timely and exciting from the broad perspective of advanced analysis techniques. Recent advances in various -omics technologies, electron microscopy, optical microscopy/nanoscopy and spectroscopy, cytometry, mass spectroscopy, atomic force microscopy, nuclear imaging, and other techniques, create unique opportunities for researchers to investigate a wide range of questions related to microbiome interactions, function, and diversity. We anticipate that some of these advanced imaging, spectroscopy, and sensing techniques, coupled with big data analytics, will be used to create multimodal and integrated smart systems that can shed light onto some of the most important needs in microbiome research, including (1) analyzing microbial interactions specifically and sensitively at the relevant spatial and temporal scales; (2) determining and analyzing the diversity covered by the microbial genome, transcriptome, proteome, and metabolome; (3) managing and manipulating microbiomes to probe their function, evaluating the impact of interventions and ultimately harnessing their activities; and (4) helping us identify and track microbial dark matter (referring to 99% of micro-organisms that cannot be cultured).

In this broad quest for creating next-generation imaging and sensing instrumentation to address the needs and challenges of microbiome-related research activities comprehensively, there are important issues that need to be considered, as discussed below.

The piece is extensive and quite interesting, if you have the time.

Nanoscale imaging of a mouse brain

Researchers have developed a new brain imaging tool they would like to use as a founding element for a national brain observatory. From a July 30, 2015 news item on Azonano,

A new imaging tool developed by Boston scientists could do for the brain what the telescope did for space exploration.

In the first demonstration of how the technology works, published July 30 in the journal Cell, the researchers look inside the brain of an adult mouse at a scale previously unachievable, generating images at a nanoscale resolution. The inventors’ long-term goal is to make the resource available to the scientific community in the form of a national brain observatory.

A July 30, 2015 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“I’m a strong believer in bottom up-science, which is a way of saying that I would prefer to generate a hypothesis from the data and test it,” says senior study author Jeff Lichtman, of Harvard University. “For people who are imagers, being able to see all of these details is wonderful and we’re getting an opportunity to peer into something that has remained somewhat intractable for so long. It’s about time we did this, and it is what people should be doing about things we don’t understand.”

The researchers have begun the process of mining their imaging data by looking first at an area of the brain that receives sensory information from mouse whiskers, which help the animals orient themselves and are even more sensitive than human fingertips. The scientists used a program called VAST, developed by co-author Daniel Berger of Harvard and the Massachusetts Institute of Technology, to assign different colors and piece apart each individual “object” (e.g., neuron, glial cell, blood vessel cell, etc.).

“The complexity of the brain is much more than what we had ever imagined,” says study first author Narayanan “Bobby” Kasthuri, of the Boston University School of Medicine. “We had this clean idea of how there’s a really nice order to how neurons connect with each other, but if you actually look at the material it’s not like that. The connections are so messy that it’s hard to imagine a plan to it, but we checked and there’s clearly a pattern that cannot be explained by randomness.”

The researchers see great potential in the tool’s ability to answer questions about what a neurological disorder actually looks like in the brain, as well as what makes the human brain different from other animals and different between individuals. Who we become is very much a product of the connections our neurons make in response to various life experiences. To be able to compare the physical neuron-to-neuron connections in an infant, a mathematical genius, and someone with schizophrenia would be a leap in our understanding of how our brains shape who we are (or vice versa).

The cost and data storage demands for this type of research are still high, but the researchers expect expenses to drop over time (as has been the case with genome sequencing). To facilitate data sharing, the scientists are now partnering with Argonne National Laboratory with the hopes of creating a national brain laboratory that neuroscientists around the world can access within the next few years.

“It’s bittersweet that there are many scientists who think this is a total waste of time as well as a big investment in money and effort that could be better spent answering questions that are more proximal,” Lichtman says. “As long as data is showing you things that are unexpected, then you’re definitely doing the right thing. And we are certainly far from being out of the surprise element. There’s never a time when we look at this data that we don’t see something that we’ve never seen before.”

Here’s a link to and a citation for the paper,

Saturated Reconstruction of a Volume of Neocortex by Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, Mike Roberts, Josh Lyskowski Morgan, Juan Carlos Tapia, H. Sebastian Seung, William Gray Roncal, Joshua Tzvi Vogelstein, Randal Burns, Daniel Lewis Sussman, Carey Eldin Priebe, Hanspeter Pfister, Jeff William Lichtman. Cell Volume 162, Issue 3, p648–661, 30 July 2015 DOI: http://dx.doi.org/10.1016/j.cell.2015.06.054

This appears to be an open access paper.

Molecules (arynes) seen for first time in 113 years

Arynes were first theorized in 1902 and they’ve been used as building blocks to synthesize a variety of compounds, but their existence wasn’t confirmed until now.

AFM image of an aryne molecule imaged with a CO tip. Courtesy: IBM

A July 13, 2015 news item in Nanowerk makes the announcement (Note: A link has been removed),

Chemistry teachers and students can breathe a sigh of relief. After teaching and learning about a particular family of molecules for decades, scientists have finally proven that they do in fact exist.

In a new paper published online today in Nature Chemistry (“On-surface generation and imaging of arynes by atomic force microscopy”), scientists from IBM Research and CIQUS at the University of Santiago de Compostela, Spain, have confirmed the existence and characterized the structure of arynes, a family of highly-reactive short-lived molecules which was first suggested 113 years ago. The technique has broad applications for on-surface chemistry and electronics, including the preparation of graphene nanoribbons and novel single-molecule devices.

A July 13, 2015 IBM news release by Chris Sciacca, which originated the news item, describes arynes and the imaging process used to capture them for the first time (Note: Links have been removed),

“Arynes are discussed in almost every undergraduate course on organic chemistry around the world. Therefore, it’s kind of a relief to find the final confirmation that these molecules truly exist,” said Prof. Diego Peña, a chemist at the University of Santiago de Compostela.

“I look forward to seeing new chemical challenges solved by the combination of organic synthesis and atomic force microscopy.”

There are trillions of molecules in the universe and some of them are stable enough to be isolated and characterized, but many others are so short-lived that they can only be proposed indirectly, via chemical reactions or spectroscopic methods.

One such species is the aryne. Arynes were first suggested in 1902 and have since been used as intermediates or building blocks in the synthesis of a variety of compounds for applications including medicine, organic electronics and molecular materials. The challenge with these particular molecules is that they only exist for several milliseconds, making them extremely challenging to image, until now.

The imaging was accomplished by means of atomic force microscopy (AFM), a scanning technique that can accomplish nanometer-level resolution. After the preparation of the key aryne precursor by CIQUS, IBM scientists used the sharp tip of a scanning tunneling microscope (STM) to generate individual aryne molecules from precursor molecules by atomic manipulation. The experiments were performed on films of sodium chloride, at temperatures near absolute zero, to stabilize the aryne.

Once the molecules were isolated, the team used AFM to measure the tiny forces between the STM tip, which is terminated with a single carbon monoxide molecule, and the sample to image the aryne’s molecular structure. The resulting image was so clear that the scientists could study their chemical nature based on the minute differences between individual bonds.

“Our team has developed several state-of-the-art techniques since 2009, which made this achievement possible,” said Dr. Niko Pavliček, a physicist at IBM Research – Zurich and lead author of the paper. “For this study, it was absolutely essential to pick an insulating film on which the molecules were adsorbed and to deliberately choose the atomic tip-terminations to probe them. We hope this technique will have profound effects on the future of chemistry and electronics.”

Prof. Peña, added that “These findings on arynes can be compared with the long-standing search for the giant squid. For centuries, fishermen had found clues of the existence of this legendary animal. But it was only very recently that scientists managed to film a giant squid alive. In both cases, state-of-the-art technologies were crucial to observe these elusive species alive: a low-noise submarine for the giant squid; a low-temperature AFM for the aryne.”

This research is part of IBM’s five-year, $3 billion investment to push the limits of chip technology and semiconductor innovations needed to meet the emerging demands of cloud computing and Big Data systems.

This work is a result of the large European project PAMS (Planar Atomic and Molecular Scale Devices). PAMS’ main objective is to develop and investigate novel electronic devices of nanometric-scale size. Part of this research is also funded by a European Research Council Advanced Grant awarded to IBM scientist Gerhard Meyer, who is also a co-author of the paper.

Here’s a link to and a citation for the paper,

On-surface generation and imaging of arynes by atomic force microscopy by Niko Pavliček, Bruno Schuler, Sara Collazos, Nikolaj Moll, Dolores Pérez, Enrique Guitián, Gerhard Meyer, Diego Peña, & Leo Gross. Nature Chemistry (2015) doi:10.1038/nchem.2300 Published online 13 July 2015

This paper is behind a paywall.