Category Archives: nanotechnology

Detecting peanut allergies with nanoparticles

Researchers at the University of Notre Dame are designing a platform that will make allergy detection easier and more precise, according to a June 26, 2017 news item on phys.org,

Researchers have developed a novel platform to more accurately detect and identify the presence and severity of peanut allergies, without directly exposing patients to the allergen, according to a new study published in the journal Scientific Reports.

A team of chemical and biomolecular engineers at the University of Notre Dame designed nanoparticles that mimic natural allergens by displaying each allergic component one at a time on their surfaces. The researchers named the nanoparticles “nanoallergens” and used them to dissect the critical components of major peanut allergy proteins and evaluate the potency of the allergic response using the antibodies present in a blood sample from a patient.

“The goal of this study was to show how nanoallergen technology could be used to provide a clearer and more accurate assessment of the severity of an allergic condition,” said Basar Bilgicer, associate professor of chemical and biomolecular engineering and a member of the Advanced Diagnostics and Therapeutics initiative at Notre Dame. “We are currently working with allergy specialist clinicians for further testing and verification of the diagnostic tool using a larger patient population. Ultimately, our vision is to take this technology and make it available to all people who suffer from food allergies.”

A June 26, 2017 University of Notre Dame news release, which originated the news item, explains the need for better allergy detection,

Food allergies are a growing problem in developing countries and are of particular concern to parents. According to the study, 8 percent of children under the age of 4 have a food allergy. Bilgicer said a need exists for more accurate testing, improved diagnostics and better treatment options.

Current food allergy testing methods carry risks or fail to provide detailed information on the severity of the allergic response. For instance, a test known as the oral food challenge requires exposing a patient to increasing amounts of a suspected allergen. Patients must remain under close observation in clinics with highly trained specialists. The test is stopped only when the patient exhibits an extreme allergic response, such as anaphylactic shock. Doctors then treat the reaction with epinephrine injections, antihistamines and steroids.

The skin prick test, another common diagnostic tool, can indicate whether a patient is allergic to a particular food. However, it provides no detail on the severity of those allergies.

During skin prick testing, doctors place a drop of liquid containing the allergen on the patient’s skin, typically on their back, and then scratch the skin to expose the patient. Skin irritations, such as redness, itching and white bumps, are indications that the patient has an allergy.

“Most of the time, parents of children with food allergies are not inclined to have their child go through such excruciating experiences of a food challenge,” Bilgicer said. “Rather than investigate the severity of the allergy, they respond to it with most extreme caution and complete avoidance of the allergen. Meanwhile, there are cases where the skin prick test might have yielded a positive result for a child, and yet the child can consume a handful of the allergen and demonstrate no signs of any allergic response.”

While the study focused on peanut allergens, Bilgicer said he and his team are working on testing the platform on additional allergens and allergic conditions.

Here’s a link to and a citation for the paper,

Determination of Crucial Immunogenic Epitopes in Major Peanut Allergy Protein, Ara h2, via Novel Nanoallergen Platform by Peter E. Deak, Maura R. Vrabel, Tanyel Kiziltepe & Basar Bilgicer. Scientific Reports 7, Article number: 3981 (2017) doi:10.1038/s41598-017-04268-6 Published online: 21 June 2017

This paper is open access.

Ora Sound, a Montréal-based startup, and its ‘graphene’ headphones

For all the excitement about graphene, there aren’t that many products, as Glenn Zorpette notes in a June 20, 2017 posting about Ora Sound and its headphones on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Graphene has long been touted as a miracle material that would deliver everything from tiny, ultralow-power transistors to the vastly long and ultrastrong cable [PDF] needed for a space elevator. And yet, 13 years of graphene development, and R&D expenditures well in the tens of billions of dollars have so far yielded just a handful of niche products. The most notable by far is a line of tennis racquets in which relatively small amounts of graphene are used to stiffen parts of the frame.

Ora Sound, a Montreal-based [Québec, Canada] startup, hopes to change all that. On 20 June [2017], it unveiled a Kickstarter campaign for a new audiophile-grade headphone that uses cones, also known as membranes, made of a form of graphene. “To the best of our knowledge, we are the first company to find a significant, commercially viable application for graphene,” says Ora cofounder Ari Pinkas, noting that the cones in the headphones are 95 percent graphene.

Kickstarter

It should be noted that participating in a Kickstarter campaign is an investment/gamble. I am not endorsing Ora Sound or its products. That said, this does look interesting (from the ORA: The World’s First Graphene Headphones Kickstarter campaign webpage),

ORA GQ Headphones uses nanotechnology to deliver the most groundbreaking audio listening experience. Scientists have long promised that one day Graphene will find its way into many facets of our lives including displays, electronic circuits and sensors. ORA’s Graphene technology makes it one of the first companies to have created a commercially viable application for this Nobel-prize winning material, a major scientific achievement.

The GQ Headphones come equipped with ORA’s patented GrapheneQ™ membranes, providing unparalleled fidelity. The headphones also offer all the features you would expect from a high-end audio product: wired/wireless operation, a gesture control track-pad, a digital MEMS microphone, breathable lambskin leather and an ear-shaped design optimized for sound quality and isolated comfort.

They have produced a slick video to promote their campaign.

At the time of publishing this post, the campaign had another eight days to run and had raised $650,949 CAD, more than $500,000 over the company’s original goal of $135,000. I’m sure they’re ecstatic, but this success can be a mixed blessing: they have many more people expecting a set of headphones than they anticipated, and that can mean production issues.

Further, there appears to be only one member of the team with business experience: Ari Pinkas, whose experience includes a few years in marketing strategy followed by founding an online marketplace for teachers. I would imagine Pinkas will be experiencing a very steep learning curve. Hopefully, Helge Seetzen, a member of the company’s advisory board, will be able to offer assistance. According to Seetzen’s Wikipedia entry, he is a “… German technologist and businessman known for imaging & multimedia research and commercialization,” as well as having a Canadian educational background and business experience. The rest of the team and advisory board appear to be academics.

The technology

A March 14, 2017 article by Andy Riga for the Montréal Gazette gives a general description of the technology,

A Montreal startup is counting on technology sparked by a casual conversation between two brothers pursuing PhDs at McGill University.

They were chatting about their disparate research areas — one, in engineering, was working on using graphene, a form of carbon, in batteries; the other, in music, was looking at the impact of electronics on the perception of audio quality.

At first glance, the invention that ensued sounds humdrum.

It’s a replacement for an item you use every day. It’s paper thin, you probably don’t realize it’s there and its design has not changed much in more than a century. Called a membrane or diaphragm, it’s the part of a loudspeaker that vibrates to create the sound from the headphones over your ears, the wireless speaker on your desk, the cellphone in your hand.

Membranes are normally made of paper, Mylar or aluminum.

Ora’s innovation uses graphene, a remarkable material whose discovery garnered two scientists the 2010 Nobel Prize in physics but which has yet to fulfill its promise.

“Because it’s so stiff, our membrane gets better sound quality,” said Robert-Eric Gaskell, who obtained his PhD in sound recording in 2015. “It can produce more sound with less distortion, and the sound that you hear is more true to the original sound intended by the artist.

“And because it’s so light, we get better efficiency — the lighter it is, the less energy it takes.”

In January, the company demonstrated its membrane in headphones at the Consumer Electronics Show, a big trade convention in Las Vegas.

Six cellphone manufacturers expressed interest in Ora’s technology, some of which are now trying prototypes, said Ari Pinkas, in charge of product marketing at Ora. “We’re talking about big cellphone manufacturers — big, recognizable names,” he said.

Technology companies are intrigued by the idea of using Ora’s technology to make smaller speakers so they can squeeze other things, such as bigger batteries, into the limited space in electronic devices, Pinkas said. Others might want to use Ora’s membrane to allow their devices to play music louder, he added.

Makers of regular speakers, hearing aids and virtual-reality headsets have also expressed interest, Pinkas said.

Ora is still working on headphones.

Riga’s article offers a good overview for people who are not familiar with graphene.

Zorpette’s June 20, 2017 posting (on Nanoclast) offers a few more technical details (Note: Links have been removed),

During an interview and demonstration in the IEEE Spectrum offices, Pinkas and Robert-Eric Gaskell, another of the company’s cofounders, explained graphene’s allure to audiophiles. “Graphene has the ideal properties for a membrane,” Gaskell says. “It’s incredibly stiff, very lightweight—a rare combination—and it’s well damped,” which means it tends to quell spurious vibrations. By those metrics, graphene soundly beats all the usual choices: mylar, paper, aluminum, or even beryllium, Gaskell adds.

The problem is making it in sheets large enough to fashion into cones. So-called “pristine” graphene exists as flakes, [emphasis mine] perhaps 10 micrometers across, and a single atom thick. To make larger, strong sheets of graphene, researchers attach oxygen atoms to the flakes, and then other elements to the oxygen atoms to cross-link the flakes and hold them together strongly in what materials scientists call a laminate structure. The intellectual property behind Ora’s advance came from figuring out how to make these structures suitably thick and in the proper shape to function as speaker cones, Gaskell says. In short, he explains, the breakthrough was, “being able to manufacture” in large numbers, “and in any geometry we want.”

Much of the R&D work that led to Ora’s process was done at nearby McGill University, by professor Thomas Szkopek of the Electrical and Computer Engineering department. Szkopek worked with Peter Gaskell, Robert-Eric’s younger brother. Ora is also making use of patents that arose from work done on graphene by the Nguyen Group at Northwestern University, in Evanston, Ill.

Robert-Eric Gaskell and Pinkas arrived at Spectrum with a preproduction model of their headphones, as well as some other headphones for the sake of comparison. The Ora prototype is clearly superior to the comparison models, but that’s not much of a surprise. …

… In the 20 minutes or so I had to audition Ora’s preproduction model, I listened to an assortment of classical and jazz standards and I came away impressed. The sound is precise, with fine details sharply rendered. To my surprise, I was reminded of planar-magnetic type headphones that are now surging in popularity in the upper reaches of the audiophile headphone market. Bass is smooth and tight. Overall, the unit holds up quite well against closed-back models in the $400 to $500 range I’ve listened to from Grado, Bowers & Wilkins, and Audeze.
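To put Gaskell’s “stiff, lightweight, well damped” claim into rough numbers: a common loudspeaker rule of thumb is that the speed of sound in the cone material, √(E/ρ) (Young’s modulus over density), sets how high in frequency the diaphragm behaves as a rigid piston before distortion-causing break-up modes appear. Here’s a minimal Python comparison using approximate handbook values; the graphene-laminate figure is my own assumption for illustration, not an Ora/GrapheneQ specification,

```python
# Rough figure-of-merit comparison for loudspeaker membrane materials.
# Sound speed c = sqrt(E / rho): higher c pushes cone "break-up"
# (distortion-causing flexural modes) to higher frequencies.
# E and rho are approximate handbook values; the graphene-laminate row
# is an assumed figure for illustration, not an Ora/GrapheneQ datum.
from math import sqrt

materials = {  # name: (Young's modulus E in Pa, density rho in kg/m^3)
    "paper":             (2e9,   700),
    "mylar":             (4e9,   1390),
    "aluminum":          (69e9,  2700),
    "beryllium":         (287e9, 1850),
    "graphene laminate": (500e9, 2000),  # assumed; pristine graphene is ~1 TPa
}

for name, (E, rho) in sorted(materials.items(), key=lambda kv: sqrt(kv[1][0] / kv[1][1])):
    print(f"{name:18s} c = {sqrt(E / rho):6.0f} m/s")
```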

Ora’s Kickstarter campaign page (Graphene vs GrapheneQ subsection) offers some information about their unique graphene composite,

A TECHNICAL INTRODUCTION TO GRAPHENE

Graphene is a new material, first isolated only 13 years ago. Formed from a single layer of carbon atoms, Graphene is a hexagonal crystal lattice in a perfect honeycomb structure. This fundamental geometry makes Graphene ridiculously strong and lightweight. In its pure form, Graphene is a single atomic layer of carbon. It can be very expensive and difficult to produce in sizes any bigger than small flakes. These challenges have prevented pristine Graphene from being integrated into consumer technologies.

THE GRAPHENEQ™ SOLUTION

At ORA, we’ve spent the last few years creating GrapheneQ, our own, proprietary Graphene-based nanocomposite formulation. We’ve specifically designed and optimized it for use in acoustic transducers. GrapheneQ is a composite material which is over 95% Graphene by weight. It is formed by depositing flakes of Graphene into thousands of layers that are bonded together with proprietary cross-linking agents. Rather than trying to form one, continuous layer of Graphene, GrapheneQ stacks flakes of Graphene together into a laminate material that preserves the benefits of Graphene while allowing the material to be formed into loudspeaker cones.

Scanning Electron Microscope (SEM) Comparison

If you’re interested in more technical information on sound, acoustics, loudspeakers, and Ora’s graphene-based headphones, it’s all there on Ora’s Kickstarter campaign page.

The Québec nanotechnology scene in context and graphite flakes for graphene

Two Canadian provinces are heavily invested in nanotechnology research and commercialization efforts. The province of Québec has poured money into its nanotechnology efforts, while the province of Alberta, which has also invested heavily, managed to snare additional federal funds to host Canada’s National Institute of Nanotechnology (NINT). (This appears to be a current NINT website, or you can try this one on the National Research Council website.) I’d rank Ontario as a third centre, with the other provinces considerably less invested. As for the North, I’ve not come across any nanotechnology research from that region. Finally, since I stumble across more material about nanotechnology in Québec than I do for any other province, I rate Québec as the most successful in its efforts.

Regarding graphene, Canada seems to have an advantage. We have great graphite flakes for making graphene. With mines in at least two provinces, Ontario and Québec, we have a ready source of supply. In my first posting (July 25, 2011) about graphite mines here, I had this,

Who knew large flakes could be this exciting? From the July 25, 2011 news item on Nanowerk,

Northern Graphite Corporation has announced that graphene has been successfully made on a test basis using large flake graphite from the Company’s Bissett Creek project in Northern Ontario. Northern’s standard 95%C, large flake graphite was evaluated as a source material for making graphene by an eminent professor in the field at the Chinese Academy of Sciences who is doing research making graphene sheets larger than 30cm2 in size using the graphene oxide methodology. The tests indicated that graphene made from Northern’s jumbo flake is superior to Chinese powder and large flake graphite in terms of size, higher electrical conductivity, lower resistance and greater transparency.

Approximately 70% of production from the Bissett Creek property will be large flake (+80 mesh) and almost all of this will in fact be +48 mesh jumbo flake which is expected to attract premium pricing and be a better source material for the potential manufacture of graphene. The very high percentage of large flakes makes Bissett Creek unique compared to most graphite deposits worldwide which produce a blend of large, medium and small flakes, as well as a large percentage of low value -150 mesh flake and amorphous powder which are not suitable for graphene, Li ion batteries or other high end, high growth applications.

Since then, I’ve stumbled across more information about Québec’s mines than Ontario’s. There are some other mentions of graphite mines in other postings, but they are tangential to what’s being featured:

  • my Oct. 26, 2015 posting about St. Jean Carbon and its superconducting graphene;
  • my Feb. 20, 2015 posting about Nanoxplore and graphene production in Québec; and
  • this Feb. 23, 2015 posting about Grafoid and its sister company, Focus Graphite, which gets its graphite flakes from a deposit in the northeastern part of Québec.


After reviewing these posts, I’ve begun to wonder where Ora’s graphite flakes come from. In any event, I wish the folks at Ora and their Kickstarter funders the best of luck.

Carbon nanotubes to repair nerve fibres (cyborg brains?)

Can cyborg brains be far behind now that researchers are looking at ways to repair nerve fibres with carbon nanotubes (CNTs)? A June 26, 2017 news item on ScienceDaily describes the scheme for using carbon nanotubes as a material for repairing nerve fibres,

Carbon nanotubes exhibit interesting characteristics rendering them particularly suited to the construction of special hybrid devices — consisting of biological tissue and synthetic material — planned to re-establish connections between nerve cells, for instance at spinal level, lost on account of lesions or trauma. This is the result of a piece of research published in the scientific journal Nanomedicine: Nanotechnology, Biology, and Medicine conducted by a multi-disciplinary team comprising SISSA (International School for Advanced Studies), the University of Trieste, ELETTRA Sincrotrone and two Spanish institutions, Basque Foundation for Science and CIC BiomaGUNE. More specifically, researchers have investigated the possible effects on neurons of the interaction with carbon nanotubes. Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. This result, which shows the extent to which the integration between nerve cells and these synthetic structures is stable and efficient, highlights the great potentialities of carbon nanotubes as innovative materials capable of facilitating neuronal regeneration or in order to create a kind of artificial bridge between groups of neurons whose connection has been interrupted. In vivo testing has actually already begun.

The researchers have included a gorgeous image to illustrate their work,

Caption: Scientists have proven that these nanomaterials may regulate the formation of synapses, specialized structures through which the nerve cells communicate, and modulate biological mechanisms, such as the growth of neurons, as part of a self-regulating process. Credit: Pixabay

A June 26, 2017 SISSA press release (also on EurekAlert), which originated the news item, describes the work in more detail while explaining future research needs,

“Interface systems, or, more in general, neuronal prostheses, that enable an effective re-establishment of these connections are under active investigation” explain Laura Ballerini (SISSA) and Maurizio Prato (UniTS-CIC BiomaGUNE), coordinating the research project. “The perfect material to build these neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities. After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries”. These nanomaterials are used both as scaffolds, a supportive framework for nerve cells, and as means of interfaces releasing those signals that empower nerve cells to communicate with each other.

Many aspects, however, still need to be addressed. Among them, the impact on neuronal physiology of the integration of these nanometric structures with the cell membrane. “Studying the interaction between these two elements is crucial, as it might also lead to some undesired effects, which we ought to exclude”. Laura Ballerini explains: “If, for example, the mere contact provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable”. “This”, Maurizio Prato adds, “is precisely what we have investigated in this study where we used pure carbon nanotubes”.

The results of the research are extremely encouraging: “First of all we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons. Membrane lipids play a very important role in the transmission of signals through the synapses. Nanotubes do not seem to influence this process, which is very important”.

There is more, however. The research has also highlighted the fact that the nerve cells growing on the substratum of nanotubes, thanks to this interaction, develop and reach maturity very quickly, eventually reaching a condition of biological homeostasis. “Nanotubes facilitate the full growth of neurons and the formation of new synapses. This growth, however, is not indiscriminate and unlimited since, as we proved, after a few weeks a physiological balance is attained. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance”. Maurizio Prato and Laura Ballerini conclude as follows: “We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient. Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions”.

Here’s a link to and a citation for the paper,

Sculpting neurotransmission during synaptic development by 2D nanostructured interfaces by Niccolò Paolo Pampaloni, Denis Scaini, Fabio Perissinotto, Susanna Bosi, Maurizio Prato, Laura Ballerini. Nanomedicine: Nanotechnology, Biology and Medicine, DOI: http://dx.doi.org/10.1016/j.nano.2017.01.020 Published online: May 25, 2017

This paper is open access.

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last four weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that up until now that was my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.
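For readers unfamiliar with the jargon, “data parallelism” and “model parallelism” are generic neural-network deployment patterns rather than anything TrueNorth-specific. Here’s a toy Python sketch of the distinction; the stand-in “networks” are placeholder functions, not IBM’s TrueNorth programming interface,

```python
# Toy illustration of the two parallelism modes described in the release.
# The "networks" are placeholder functions, not TrueNorth code.
from concurrent.futures import ThreadPoolExecutor

def net_a(x):  # placeholder model A
    return ("cat", 0.9) if x % 2 else ("dog", 0.8)

def net_b(x):  # placeholder model B (a different network)
    return ("cat", 0.7) if x % 3 else ("dog", 0.6)

samples = [1, 2, 3, 4]

with ThreadPoolExecutor() as pool:
    # Data parallelism: the SAME network scores many data items in parallel.
    data_parallel = list(pool.map(net_a, samples))

    # Model parallelism: DIFFERENT networks (an ensemble) score the SAME data item.
    model_parallel = list(pool.map(lambda net: net(samples[0]), [net_a, net_b]))

print(data_parallel)   # one network, four samples
print(model_parallel)  # two networks, one sample
```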

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy, while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
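The release’s headline figures hang together arithmetically, as a quick check shows,

```python
# Back-of-the-envelope check of the numbers quoted in the news release.
chips_per_system  = 64
neurons_per_chip  = 1_000_000
synapses_per_chip = 256_000_000

print(chips_per_system * neurons_per_chip)    # 64,000,000 neurons per system
print(chips_per_system * synapses_per_chip)   # 16,384,000,000 -- the "16 billion" synapses

systems_per_rack = 8
print(systems_per_rack * chips_per_system * neurons_per_chip)  # 512,000,000 neurons per rack

# CIFAR-100 efficiency: >1,500 frames/s at 200 mW
print(1500 / 0.2)  # 7,500 frames/s per watt, consistent with ">7,000"
```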

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.

There is an IBM video accompanying this news release, which seems more promotional than informational.

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to physical placement of neural network, to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.

Of course, the point here is not to make a handwriting calculator, it is to show how TrueNorth’s low power and real time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

Using only sunlight to desalinate water

The researchers seem to believe that this new desalination technique could be a game changer. From a June 20, 2017 news item on Azonano,

An off-grid technology using only the energy from sunlight to transform salt water into fresh drinking water has been developed as the outcome of a federally funded research effort.

The desalination system uses a combination of light-harvesting nanophotonics and membrane distillation technology and is considered to be the first major innovation from the Center for Nanotechnology Enabled Water Treatment (NEWT), which is a multi-institutional engineering research center located at Rice University.

NEWT’s “nanophotonics-enabled solar membrane distillation” technology (NESMD) integrates tried-and-true water treatment methods with cutting-edge nanotechnology capable of transforming sunlight to heat. …

A June 19, 2017 Rice University news release, which originated the news item, expands on the theme,

More than 18,000 desalination plants operate in 150 countries, but NEWT’s desalination technology is unlike any other used today.

“Direct solar desalination could be a game changer for some of the estimated 1 billion people who lack access to clean drinking water,” said Rice scientist and water treatment expert Qilin Li, a corresponding author on the study. “This off-grid technology is capable of providing sufficient clean water for family use in a compact footprint, and it can be scaled up to provide water for larger communities.”

The oldest method for making freshwater from salt water is distillation. Salt water is boiled, and the steam is captured and run through a condensing coil. Distillation has been used for centuries, but it requires complex infrastructure and is energy inefficient due to the amount of heat required to boil water and produce steam. More than half the cost of operating a water distillation plant is for energy.

An emerging technology for desalination is membrane distillation, where hot salt water is flowed across one side of a porous membrane and cold freshwater is flowed across the other. Water vapor is naturally drawn through the membrane from the hot to the cold side, and because the seawater need not be boiled, the energy requirements are less than they would be for traditional distillation. However, the energy costs are still significant because heat is continuously lost from the hot side of the membrane to the cold.
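For the curious, the driving force in membrane distillation is the difference in water vapour pressure across the membrane, which climbs steeply with the hot-side temperature. Here’s a textbook-style Python sketch using the Antoine equation for water; the membrane permeability B is an arbitrary illustrative number, not a NESMD parameter,

```python
# Why membrane distillation works: vapor pressure rises steeply with
# temperature, so a hot/cold temperature gap across a porous membrane
# creates a pressure difference that drives water vapor to the cold side.
def water_vapor_pressure_pa(t_celsius):
    """Antoine equation for water (valid roughly 1-100 degC), result in Pa."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 133.322

B = 1e-7  # membrane permeability, kg/(m^2 s Pa) -- illustrative value only

for t_hot in (40, 60, 80):
    dp = water_vapor_pressure_pa(t_hot) - water_vapor_pressure_pa(20)
    flux = B * dp * 3600  # kg/m^2/h, roughly liters per square meter per hour
    print(f"hot {t_hot} degC vs cold 20 degC: dP = {dp:6.0f} Pa, flux ~ {flux:.1f} L/m^2/h")
```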

“Unlike traditional membrane distillation, NESMD benefits from increasing efficiency with scale,” said Rice’s Naomi Halas, a corresponding author on the paper and the leader of NEWT’s nanophotonics research efforts. “It requires minimal pumping energy for optimal distillate conversion, and there are a number of ways we can further optimize the technology to make it more productive and efficient.”

NEWT’s new technology builds upon research in Halas’ lab to create engineered nanoparticles that harvest as much as 80 percent of sunlight to generate steam. By adding low-cost, commercially available nanoparticles to a porous membrane, NEWT has essentially turned the membrane itself into a one-sided heating element that alone heats the water to drive membrane distillation.

“The integration of photothermal heating capabilities within a water purification membrane for direct, solar-driven desalination opens new opportunities in water purification,” said Yale University’s Menachem “Meny” Elimelech, a co-author of the new study and NEWT’s lead researcher for membrane processes.

In the PNAS study, researchers offered proof-of-concept results based on tests with an NESMD chamber about the size of three postage stamps and just a few millimeters thick. The distillation membrane in the chamber contained a specially designed top layer of carbon black nanoparticles infused into a porous polymer. The light-capturing nanoparticles heated the entire surface of the membrane when exposed to sunlight. A thin half-millimeter-thick layer of salt water flowed atop the carbon-black layer, and a cool freshwater stream flowed below.

Li, the leader of NEWT’s advanced treatment test beds at Rice, said the water production rate increased greatly by concentrating the sunlight. “The intensity got up to 17.5 kilowatts per meter squared when a lens was used to concentrate sunlight by 25 times, and the water production increased to about 6 liters per meter squared per hour.”

Li said NEWT’s research team has already made a much larger system that contains a panel that is about 70 centimeters by 25 centimeters. Ultimately, she said, NEWT hopes to produce a modular system where users could order as many panels as they needed based on their daily water demands.

“You could assemble these together, just as you would the panels in a solar farm,” she said. “Depending on the water production rate you need, you could calculate how much membrane area you would need. For example, if you need 20 liters per hour, and the panels produce 6 liters per hour per square meter, you would order a little over 3 square meters of panels.”
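Her sizing rule is a simple division,

```python
# Panel-area sizing following Li's example: area = demand / production rate.
def area_needed(demand_l_per_h, production_l_per_m2_h=6.0):
    """Membrane area (m^2) needed to meet a freshwater demand."""
    return demand_l_per_h / production_l_per_m2_h

print(area_needed(20))  # 3.33... m^2 -- "a little over 3 square meters"
```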

Established by the National Science Foundation in 2015, NEWT aims to develop compact, mobile, off-grid water-treatment systems that can provide clean water to millions of people who lack it and make U.S. energy production more sustainable and cost-effective. NEWT, which is expected to leverage more than $40 million in federal and industrial support over the next decade, is the first NSF Engineering Research Center (ERC) in Houston and only the third in Texas since NSF began the ERC program in 1985. NEWT focuses on applications for humanitarian emergency response, rural water systems and wastewater treatment and reuse at remote sites, including both onshore and offshore drilling platforms for oil and gas exploration.

There is a video, but it is focused on the NEWT center rather than any specific water technologies.

For anyone interested in the technology, here’s a link to and a citation for the researchers’ paper,

Nanophotonics-enabled solar membrane distillation for off-grid water purification by Pratiksha D. Dongare, Alessandro Alabastri, Seth Pedersen, Katherine R. Zodrow, Nathaniel J. Hogan, Oara Neumann, Jinjian Wu, Tianxiao Wang, Akshay Deshmukh, Menachem Elimelech, Qilin Li, Peter Nordlander, and Naomi J. Halas. PNAS [Proceedings of the National Academy of Sciences] doi: 10.1073/pnas.1701835114 June 19, 2017

This paper appears to be open access.

Nanotechnology-enabled warming textile being introduced at Berlin (Germany) Fashion Week July 4 – 7, 2017

Acanthurus GmbH, a Frankfurt-based (Germany) nanotechnology company, announced its participation in Berlin Fashion Week’s (July 4 – 7, 2017) showcase for technology in fashion, Panorama Berlin (according to Berlin Fashion Week’s Fashion Fair Highlights in July 2017 webpage; scroll down to the Panorama Berlin subsection).

Here are more details about Acanthurus’ participation from a July 4, 2017 news item on innovationintextiles.com,

This week, Frankfurt-based nanotechnology company Acanthurus GmbH will introduce its innovative nanothermal warming textile technology nanogy at the Berlin FashionTech exhibition. An innovative warming technology was developed by Chinese market leader j-NOVA for the European market, under the brand name nanogy.

A July 3, 2017 nanogy press release, which originated the news item, offers another perspective on the story,

Too cold for your favorite dress? Leave your heavy coat at home and stay warm with ground-breaking nanotechnology instead.

Frankfurt-based nano technology company Acanthurus GmbH has brought an innovative warming technology developed by Chinese market leader j-NOVA© to the European market, under the brand name nanogy. “This will make freezing a thing of the past,” says Carsten Wortmann, founder and CEO of Acanthurus GmbH. The ultra-light, high-tech textiles can be integrated into any garment – including that go-to jacket everyone loves to wear on chilly days. All you need is a standard power bank to feel the warmth flow through your body, even on the coldest of days.

The innovative, lightweight technology is completely non-metallic, meaning it emits no radiation. The non-metallic nature of the technology allows it to be washed at any temperature, so there’s no need to worry about accidental spillages, whatever the circumstances. The technology is extremely thin and flexible and, as there is absolutely no metal included, can be scrunched or crumpled without damaging its function. This also means that the technology can be integrated into garments without any visible lines or hems, making it the optimal solution for fashion and textile companies alike.

nanogy measures an energy conversion rate of over 90%, making it one of the most sustainable and environmentally friendly warming solutions ever developed. The technology is also recyclable, so consumers can dispose of it as they would any other garment.

“Our focus is not just to provide world class technology, but also to improve people’s lives without harming our environment. We call this a nanothermal experience, and our current use cases have only covered a fraction of potential opportunities,” says Jeni Odley, Director of Acanthurus GmbH. As expected for any modern tech company, users can even control the temperature of the textile with a mobile app, making the integration of nanogy a simplified, one-touch experience.

I wasn’t able to find much about j-NOVA, but there was this from the ISPO Munich 2017 exhibitor details webpage,

j-NOVA.WORKS Co., Ltd.

4-B302, No. 328 Creative Industry Park, Xinhu St., Suzhou Industrial Park
215123 Jiangsu Prov.
China
P  +49 69 130277-70
F  +49 69 130277-75

As the new generation of warming technology, we introduce our first series of intelligent textiles: j-NOVA intelligent warming textiles.

The intelligent textiles are based on complex nano-technology, and maintain a constant temperature whilst preserving a low energy conversion rate. The technology can achieve an efficiency level of up to 90%, depending on its power source.

The combination of advanced nano material and intelligent modules bring warmth from the fabric and garment itself, which can be scrunched up or washed without affecting its function.

j-NOVA.WORKS aims to balance technology with tradition, and to improve the relationship between nature and humans.

Acanthurus GmbH is the sole European Distributor.

So, j-NOVA is the company with the nanotechnology and Acanthurus represents their interests in Europe. I wish I could find out more about the technology but this is the best I’ve been able to accomplish in the time I have available.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
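The two-group architecture being described is a restricted Boltzmann machine, the ansatz analyzed in the paper. Once the hidden neurons are summed out, every configuration of the visible spins gets a closed-form amplitude determined entirely by the connection weights and biases. Here’s a minimal Python sketch with random placeholder weights standing in for a trained network,

```python
# Restricted-Boltzmann-machine representation of a quantum state.
# Summing over hidden units h_j = +/-1 ("forgetting" them) gives:
#   psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ji s_i)
# Weights below are random placeholders, not a trained physical state.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 2
a = 0.1 * rng.normal(size=n_visible)              # visible biases
b = 0.1 * rng.normal(size=n_hidden)               # hidden biases
W = 0.1 * rng.normal(size=(n_hidden, n_visible))  # couplings (complex in general)

def amplitude(spins):
    """Unnormalized amplitude psi(s) for visible spins s_i = +/-1."""
    s = np.asarray(spins)
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

print(amplitude([1, -1, 1, 1]))
```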

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative that officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
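In topological terms, a clique of k all-to-all connected neurons is read as a (k−1)-dimensional simplex: two neurons form a 1D edge, three a 2D triangle, four a 3D tetrahedron, and so on. The study counts directed cliques in reconstructed cortical microcircuits; the undirected toy below merely illustrates the bookkeeping on a random stand-in graph,

```python
# Toy version of the counting behind the study: each all-to-all connected
# group of k nodes (a k-clique) is read as a (k-1)-dimensional simplex.
# The paper uses DIRECTED cliques in reconstructed cortical circuits;
# this undirected random graph only illustrates the bookkeeping.
from collections import Counter
import networkx as nx

G = nx.erdos_renyi_graph(n=30, p=0.3, seed=1)  # stand-in "connectome"

dims = Counter(len(c) - 1 for c in nx.enumerate_all_cliques(G) if len(c) >= 2)
for dim in sorted(dims):
    print(f"dimension {dim}: {dims[dim]} simplices")
```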

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

2017 S.NET annual meeting early bird registration open until July 14, 2017

The Society for the Study of New and Emerging Technologies (S.NET), which at one time was known as the Society for the Study of Nanoscience and Emerging Technologies, is holding its 2017 annual meeting in Arizona, US. Here’s more from a July 4, 2017 S.NET notice (received via email),

We have an exciting schedule planned for our 2017 meeting in Phoenix, Arizona. Our confirmed plenary speakers – Professors Langdon Winner, Alfred Nordmann and Ulrike Felt – and a diverse host of researchers from across the planet promise to make this conference intellectually engaging, as well as exciting.

If you haven’t already, make sure to register for the conference and the dinner. THE DEADLINE HAS BEEN MOVED BACK TO JULY 14, 2017.

I tried to find more information about the meeting and discovered the meeting theme here in the February 2017 S.NET Newsletter,

October 9-11, 2017, Arizona State University, Tempe (USA)

Conference Theme: Engaging the Flux

Even the most seemingly stable entities fluctuate over time. Facts and artifacts, cultures and constitutions, people and planets. As the new and the old act, interact and intra-act within broader systems of time, space and meaning, we observe—and necessarily engage with—the constantly changing forms of socio-technological orders. As scholars and practitioners of new and emerging sciences and technologies, we are constantly tracking these moving targets, and often from within them. As technologists and researchers, we are also acutely aware that our research activities can influence the developmental trajectories of our objects of concern and study, as well as ourselves, our colleagues and the governance structures in which we live and work.

“Engaging the Flux” captures this sense that ubiquitous change is all about us, operative at all observable scales. “Flux” points to the perishability of apparently natural orders, as well as apparently stable technosocial orders. In embracing flux as its theme, the 2017 conference encourages participants to examine what the widely acknowledged acceleration of change reverberating across the planet means for the production of the technosciences, the social studies of knowledge production, art practices that engage technosciences and public deliberations about the societal significance of these practices in the contemporary moment.

This year’s conference theme aims to encourage us to examine the ways we—as scholars, scientists, artists, experts, citizens—have and have not taken into account the myriad modulations flowing and failing to flow from our engagements with our objects of study. The theme also invites us to anticipate how the conditions that partially structure these engagements may themselves be changing.

Our goal is to draw a rich range of examinations of flux and its implications for technoscientific and technocultural practices, broadly construed. Questions of specific interest include: Given the pervasiveness of political, ecological and technological fluctuations, what are the most socially responsible roles for experts, particularly in the context of policymaking? What would it mean to not merely accept perishability, but to lean into it, to positively embrace the going under of technological systems? What value can imaginaries offer in developing navigational capacities in periods of accelerated change? How can young and junior researchers —in social sciences, natural sciences, humanities or engineering— position themselves for meaningful, rewarding careers given the complementary uncertainties? How can the growing body of research straddling art and science communities help us make sense of flux and chart a course through it? What types of recalibrations are called for in order to speak effectively to diverse, and increasingly divergent, publics about the value of knowledge production and scientific rigor?

There are a few more details about the conference here on the S.NET 2017 meeting registration page,

The 2017 S.NET conference is held in Phoenix, Arizona (USA) and hosted by Arizona State University. This year’s meeting will provide a forum for scholarly engagement and reflection on the meaning of coupled socio-technical change as a contemporary political phenomenon, a recurrent historical theme, and an object of future anticipation.

HOTEL BLOCK – the new Marriott in downtown Phoenix has reserved rooms at $139 (single) or $159 (double bed). Please use the link on the S.NET home page to book your room.

REGISTRATION for non-students:
Early bird pricing is available until Saturday, July 14, 2017.
Registration increases to $220 starting Sunday, July 15, 2017.

Registrant types:
  • Faculty/Postdoc/private industry/gov employee ($175)
  • Student – submitting abstract or poster ($50)
  • Student – not submitting abstract or poster ($100)

There you have it.