Tag Archives: University of Manchester

The need for Wi-Fi speed

Yes, it’s a ‘Top Gun’ movie quote (1986) or, more accurately, a paraphrasing of Tom Cruise’s line “I feel the need for speed.” I understand there’s a sequel, which is due to arrive in movie theatres or elsewhere sometime this decade.

Where wireless and Wi-Fi are concerned, I think there is a dog/poodle situation. ‘Dog’ is a general description where ‘poodle’ is a specific one. All poodles (specific) are dogs (general) but not all dogs are poodles. So, wireless is a general description and Wi-Fi is a specific type of wireless communication. All Wi-Fi is wireless but not all wireless is Wi-Fi. That said, on to the research.

Given what seems to be an insatiable desire for speed in the wireless world, the quote seems quite apropos in relation to the latest work on quantum tunneling and its impact on Wi-Fi speed from the Moscow Institute of Physics and Technology (from a February 3, 2021 news item on phys.org),

Scientists from MIPT (Moscow Institute of Physics and Technology), Moscow Pedagogical State University and the University of Manchester have created a highly sensitive terahertz detector based on the effect of quantum-mechanical tunneling in graphene. The sensitivity of the device is already superior to commercially available analogs based on semiconductors and superconductors, which opens up prospects for applications of the graphene detector in wireless communications, security systems, radio astronomy, and medical diagnostics. The research results are published in Nature Communications.

A February 3, 2021 MIPT press release (also on EurekAlert), which originated the news item, provides more technical detail about the work and its relation to Wi-Fi,

Information transfer in wireless networks is based on the transformation of a high-frequency continuous electromagnetic wave into a discrete sequence of bits. This technique is known as signal modulation. To transfer the bits faster, one has to increase the modulation frequency. However, this requires a synchronous increase in carrier frequency. A common FM radio transmits at frequencies of a hundred megahertz, a Wi-Fi receiver uses signals of roughly five gigahertz frequency, while 5G mobile networks can transmit signals up to 20 gigahertz. This is far from the limit, and a further increase in carrier frequency admits a proportional increase in data transfer rates. Unfortunately, picking up signals with frequencies of a hundred gigahertz and higher is an increasingly challenging problem.
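The press release’s point about carrier frequency and bit rate can be sketched numerically. The toy model below is my own illustration (real Wi-Fi uses far more elaborate OFDM/QAM schemes): if each bit occupies a fixed number of carrier cycles, the achievable bit rate scales in proportion to the carrier frequency.

```python
import numpy as np

def bit_rate(carrier_hz, cycles_per_bit=10):
    """If each bit occupies a fixed number of carrier cycles,
    the bit rate grows in proportion to the carrier frequency."""
    return carrier_hz / cycles_per_bit

def ook_modulate(bits, carrier_hz, cycles_per_bit=10, samples_per_cycle=20):
    """On-off keying: carrier on for a 1, off for a 0 -- the simplest
    form of the signal modulation described above."""
    samples_per_bit = cycles_per_bit * samples_per_cycle
    t = np.arange(samples_per_bit) / (carrier_hz * samples_per_cycle)
    symbol = np.sin(2 * np.pi * carrier_hz * t)
    return np.concatenate([b * symbol for b in bits])

# The three carriers mentioned in the text: FM radio, Wi-Fi, 5G
for f in (100e6, 5e9, 20e9):
    print(f"{f / 1e9:g} GHz carrier -> {bit_rate(f) / 1e6:g} Mbit/s")
```

The ten-cycles-per-bit figure is an arbitrary assumption; the point is only the proportionality between carrier frequency and data rate.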

A typical receiver used in wireless communications consists of a transistor-based amplifier of weak signals and a demodulator that rectifies the sequence of bits from the modulated signal. This scheme originated in the age of radio and television, and becomes inefficient at frequencies of hundreds of gigahertz desirable for mobile systems. The fact is that most of the existing transistors aren’t fast enough to recharge at such a high frequency.

An evolutionary way to solve this problem is simply to increase the maximum operating frequency of a transistor. Most specialists in the area of nanoelectronics are working hard in this direction. A revolutionary way to solve the problem was theoretically proposed in the early 1990s by physicists Michael Dyakonov and Michael Shur, and realized, among others, by the group of authors in 2018. It implies abandoning active amplification by a transistor, and abandoning a separate demodulator. What’s left in the circuit is a single transistor, but its role is now different. It transforms a modulated signal into a bit sequence or voice signal by itself, due to the non-linear relation between its current and voltage drop.

In the present work, the authors have proved that the detection of a terahertz signal is very efficient in the so-called tunneling field-effect transistor. To understand how it works, one can recall the principle of an electromechanical relay, where the passage of current through control contacts leads to a mechanical connection between two conductors and, hence, to the emergence of current. In a tunneling transistor, applying voltage to the control contact (termed the “gate”) leads to alignment of the energy levels of the source and channel. This also leads to the flow of current. A distinctive feature of a tunneling transistor is its very strong sensitivity to the control voltage. Even a small “detuning” of energy levels is enough to interrupt the subtle process of quantum mechanical tunneling. Similarly, a small voltage at the control gate is able to “connect” the levels and initiate the tunneling current.
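A toy model may help convey why the device is so sensitive (this is my own illustration, not the device physics from the paper): tunneling current falls off roughly exponentially as the source and channel levels are detuned, so even a few millielectronvolts of misalignment strongly suppresses the current. The 5 meV scale here is an arbitrary assumption.

```python
import math

def tunnel_current(detuning_mev, scale_mev=5.0, i_on=1.0):
    """Toy exponential suppression of the tunneling current as the
    source and channel energy levels are detuned (scale is arbitrary)."""
    return i_on * math.exp(-abs(detuning_mev) / scale_mev)

for d in (0, 5, 25):
    print(f"detuning {d:2d} meV -> current {tunnel_current(d):.3f} (arb. units)")
```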

“The idea of a strong reaction of a tunneling transistor to low voltages has been known for about fifteen years,” says Dr. Dmitry Svintsov, one of the authors of the study and head of the laboratory for optoelectronics of two-dimensional materials at the MIPT center for photonics and 2D materials. “But it’s been known only in the community of low-power electronics. No one realized before us that the same property of a tunneling transistor can be applied in the technology of terahertz detectors. Georgy Alymov (co-author of the study) and I were lucky to work in both areas. We realized then: if the transistor is opened and closed at a low power of the control signal, then it should also be good at picking up weak signals from the ambient surroundings.”

The created device is based on bilayer graphene, a unique material in which the position of energy levels (more strictly, the band structure) can be controlled using an electric voltage. This allowed the authors to switch between classical transport and quantum tunneling transport within a single device, with just a change in the polarities of the voltage at the control contacts. This possibility is of extreme importance for an accurate comparison of the detecting ability of a classical and quantum tunneling transistor.

The experiment showed that the sensitivity of the device in the tunneling mode is a few orders of magnitude higher than in the classical transport mode. The minimum signal distinguishable by the detector against the noisy background already competes with that of commercially available superconducting and semiconductor bolometers. However, this is not the limit – the sensitivity of the detector can be further increased in “cleaner” devices with a low concentration of residual impurities. The developed detection theory, tested by experiment, shows that the sensitivity of the “optimal” detector can be a hundred times higher.

“The current characteristics give rise to great hopes for the creation of fast and sensitive detectors for wireless communications,” says co-author Dr. Denis Bandurin. “And this area is not limited to graphene and is not limited to tunnel transistors. We expect that, with the same success, a remarkable detector can be created, for example, based on an electrically controlled phase transition. Graphene turned out to be just a good launching pad here, just a door, behind which is a whole world of exciting new research.”

The results presented in this paper are an example of a successful collaboration between several research groups. The authors note that it is this format of work that allows them to obtain world-class scientific results. For example, the same team of scientists earlier demonstrated how waves in the electron sea of graphene can contribute to the development of terahertz technology. “In an era of rapidly evolving technology, it is becoming increasingly difficult to achieve competitive results,” comments Dr. Georgy Fedorov, deputy head of the nanocarbon materials laboratory at MIPT. “Only by combining the efforts and expertise of several groups can we successfully realize the most difficult tasks and achieve the most ambitious goals, which we will continue to do.”

Here’s a link to and a citation for the latest paper,

Tunnel field-effect transistors for sensitive terahertz detection by I. Gayduchenko, S. G. Xu, G. Alymov, M. Moskotin, I. Tretyakov, T. Taniguchi, K. Watanabe, G. Goltsman, A. K. Geim, G. Fedorov, D. Svintsov & D. A. Bandurin. Nature Communications volume 12, Article number: 543 (2021) DOI: https://doi.org/10.1038/s41467-020-20721-z Published: 22 January 2021

This paper is open access.

One last comment: I’m assuming, since the University of Manchester is mentioned, that A. K. Geim is Sir Andre K. Geim (you can look him up here if you’re not familiar with his role in the graphene research community).

Architecture, the practice of science, and meaning

The 1979 book Laboratory Life: the Social Construction of Scientific Facts by Bruno Latour and Steve Woolgar immediately came to mind on reading about a new book (The New Architecture of Science: Learning from Graphene) linking architecture to the practice of science (research on graphene). It turns out that one of the authors studied with Latour. (For more about Laboratory Life see Bruno Latour’s Wikipedia entry; scroll down to Main Works.)

A June 19, 2020 news item on Nanowerk announces the new book (Note: A link has been removed),

How does the architecture of scientific buildings matter for science? How does the design of specific spaces such as laboratories, gas rooms, transportation roots, atria, meeting spaces, clean rooms, utilities blocks and mechanical workshops affect how scientists think, conduct experiments, interact and collaborate? What does it mean to design a science lab today? What are the new challenges for the architects of science buildings? And what is the best method to study the constantly evolving architectures of science?

Over the past four decades, the design of lab buildings has drawn the attention of scholars from different disciplines. Yet, existing research tends to focus either purely on the technical side of lab design or on the human interface and communication aspects.

To grasp the specificity of the new generation of scientific buildings, however, a more refined gaze is needed: one that accounts simultaneously for the complex technical infrastructure and the variability of human experience that it facilitates.

Weaving together two tales of the NGI [National Graphene Institute] building in Manchester, lead scientist and one of the designers, Kostya [or Konstantin] Novoselov, and architectural anthropologist, Albena Yaneva, combine an analysis of its distinctive design features with ethnographic observation of the practices of scientists, facility managers, technicians, administrators and house service staff in The New Architecture of Science: Learning from Graphene.

A June 19 (?), 2020 World Scientific press release (also on EurekAlert), which originated the news item, provides more insight into the book’s ambitions,

Drawing on a meticulous study of ‘the social life’ of the building, the book offers a fresh account of the mutual shaping of architecture and science at the intersection of scientific studies, cognitive anthropology and architectural theory. By bringing the voices of the scientist as a client and the architectural theorist into a dual narrative, The New Architecture of Science presents novel insights on the new generation of science buildings.

Glimpses into aspects of the ‘life’ of a scientific building and the complex sociotechnical and collective processes of design and dwelling, as well as into the practices of nanoscientists, will fascinate a larger audience of students across the fields of Architecture, Public Communication of Science, Science and Technology Studies, Physics, Material Science, Chemistry.

The volume is expected to appeal to academic faculty members looking for ways to teach architecture beyond authorship and seeking instead to develop a more comprehensive perspective of the built environment in its complexity of material and social meanings. The book can thus be used for undergraduate and post-graduate course syllabi on the theory of architecture, design and urban studies, science and technology studies, and science communication. It will be a valuable guidebook for innovative studio projects and an inspirational reading for live project courses.

The New Architecture of Science: Learning from Graphene retails for US$49 / £45 (hardcover). To order or know more about the book, visit http://www.worldscientific.com/worldscibooks/10.1142/11840.

The building and the architects

Here’s what it looks like,

© Daniel Shearing / Jestico + Whiles

In addition to occasioning a book, the building has also garnered an engineering award for Jestico + Whiles, according to a page dedicated to the UK’s National Graphene Institute on theplan.it website. Whoever wrote it did an excellent job of reviewing the history of graphene and its relation to the University of Manchester, and provides considerable insight into the thinking behind the design and construction of this building,

The RIBA [Royal Institute of British Architects] award-winning National Graphene Institute (NGI) is a world-leading research and incubator centre dedicated to the development of graphene. Located in Manchester, it is an essential component in the UK’s bid to remain at the forefront of the commercialisation of this pioneering and revolutionary material.

Jestico + Whiles was appointed lead architect of the new National Graphene Institute at the University of Manchester in 2012, working closely with Sir Kostya Novoselov – who, along with Sir Andre Geim, first isolated graphene at the University of Manchester in 2004. The two were jointly awarded the Nobel Prize in Physics in 2010. [emphases mine]

Located in the university campus’ science quarter, the institute is housed in a compact 7,600 m² five-storey building, with the main cleanroom located on the lower ground floor to achieve the best vibration performance. The ceiling of the viewing corridor that wraps around the cleanroom is cleverly angled so that scientists in the basement are visible to the public from street level.

On the insistence of Professor Novoselov, most of the laboratories, including the cleanrooms, have natural daylight and views, to ensure that intense work schedules do not deprive researchers of awareness and enjoyment of external conditions. All offices are naturally ventilated with openable windows controlled by occupants. Offices and labs on all floors are intermixed to create flexible and autonomous working zones which are easily changed and adapted to suit emerging new directions of research and changing team structures, including invited industry collaborators.

The building also provides generous collaborative and breakout spaces for meetings, relaxation and social interaction, including a double height roof-lit atrium and a top-floor multifunction seminar room/café that opens onto a south facing roof terrace with a biodiverse garden. A special design feature that has been incorporated to promote and facilitate informal exchanges of ideas is the full-height ‘writable’ walls along the corridors – a black PVC cladding that functions like traditional blackboards but obviates the health and safety issue of chalk dust.

The appearance and imagery of this building was of high importance to the client, who recognised the significant impact a cutting-edge research facility for such a potentially world-changing material could bring to the university. Nobel laureate end users, heads of departments, the Estates Directorate, and different members of the design and project team all made contributions to deciding what this was. Speaking in an article in the New Yorker, fellow graphene researcher James Tour of Rice University, Texas said ‘What Andre Geim and Kostya Novoselov did was to show the world the amazingness of graphene.’ Our design sought to convey this ‘amazingness’ through the imagery and materiality of the NGI.

The material chosen for the outer veil is a black Rimex stainless steel, which has the quality of mirror-like reflectivity, but infinitely varies in colour depending on light conditions and the angle of the view. The resulting image is that of a mysterious, ever-changing mirage that evokes the universal experience of scientific exploration. An exploration enveloped by a 2D, ultra-thin, black material that has a mercurial, undefinable character – a perfect visual reference for graphene.

This mystery is deepened by subtle delineation of the equations used in graphene research all over the façade through perforations in the panels. These are intentionally obscure and only apparent upon inspection. The equations include two hidden deliberate mistakes set by Professor Novoselov.

The perforations themselves are hexagonal in shape, representing the 2D atomic formation of graphene. They are laser cut based on a completely regular orthogonal grid, with only the variations in the size of each hole making the pattern of the letters and symbols of the equations. We believe this is a unique design in using parametric design tools to generate organic and random looking patterns out of a completely regular grid.
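The fixed-grid, size-modulated-hole idea can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Jestico + Whiles’ actual parametric model: sample a monochrome image of the equations on a completely regular grid and map brightness to hole diameter, so the grid never moves and only the sizes vary.

```python
import numpy as np

def hole_diameters(pattern, d_min=2.0, d_max=8.0):
    """Map brightness values (0..1) sampled on a fixed regular grid to
    per-hole diameters: the grid never moves, only the sizes change."""
    pattern = np.asarray(pattern, dtype=float)
    return d_min + (d_max - d_min) * pattern

# A 3x5 patch "drawing" a vertical stroke in its middle column.
patch = np.zeros((3, 5))
patch[:, 2] = 1.0
print(hole_diameters(patch))
```

The diameter range is an arbitrary assumption; in the facade the varying perforation sizes render the letters and symbols of the equations.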

Who are Albena Yaneva and Sir Konstantin (Kostya) Sergeevich Novoselov?

Yaneva is the author who studied with Latour, as you can see in this excerpt from her University of Manchester faculty webpage,

After a PhD in Sociology and Anthropology from the École Nationale Supérieure des Mines de Paris (2001) with Professor Bruno Latour, Yaneva has worked at Harvard University, the Max Planck Institute for the History of Science in Berlin and the Austrian Academy of Sciences in Vienna. Her research is intrinsically transdisciplinary and spans the boundaries of science studies, cognitive anthropology, architectural theory and political philosophy. Her work has been translated into German, Italian, Spanish, Portuguese, French, Thai, Polish, Turkish and Japanese.

Her book The Making of a Building: A Pragmatist Approach to Architecture (Oxford: Peter Lang, 2009) provides a unique anthropological account of architecture in the making, whereas Made by the OMA: An Ethnography of Design (Rotterdam: 010 Publishers, 2009) draws on an original approach of ethnography of design and was defined by the critics as “revolutionary in analyzing the day-to-day practice of designers.” For her innovative use of ethnography in the architectural discourses Yaneva was awarded the RIBA President’s Award for Outstanding University-located Research (2010).

Yaneva’s book Mapping Controversies in Architecture (Routledge, 2012) brought the newest developments in social sciences into architectural theory. It introduced Mapping Controversies as a research and teaching methodology for following design debates. A recent volume in collaboration with Alejandro Zaera-Polo What is Cosmopolitical Design? (Routledge, 2015) questioned the role of architectural design at the time of the Anthropocene and provided many examples of cosmopolitically correct design.  

Her monograph Five Ways to Make Architecture Political. An Introduction to the Politics of Design Practice (Bloomsbury, 2017) takes inspiration from object-oriented political thought and engages in an informed enquiry into the different ways architectural design can be political. The study contributes to a better understanding of the political outreach of the engagement of designers with their publics.  

Professor Yaneva’s monograph Crafting History: Archiving and the Quest for Architectural Legacy (Cornell University Press, 2020) explores the daily practices of archiving in their mundane and practical course and is based on ethnographic observation of the Canadian Centre for Architecture (CCA) [emphasis mine] in Montreal, a leading archival institution, and interviews with a range of practitioners around the world, including Álvaro Siza and Peter Eisenman. Unravelling the multiple epistemic dimensions of archiving, the book tells a powerful story about how collections form the basis of Architectural History.

I did not expect any Canadian content!

Oddly, I cannot find anything nearly as expansive for Novoselov on the University of Manchester website. There’s this rather concise faculty webpage and this fuller biography on the National Graphene Institute website. For the record, he’s a physicist.

For the most detail about his career in physics, I suggest the Konstantin Novoselov Wikipedia entry which includes this bit about his involvement in art,

Novoselov is known for his interest in art.[61] He practices traditional Chinese drawing[62] and has been involved in several projects on modern art.[63] Thus, in February 2015 he combined forces with Cornelia Parker to create a display for the opening of the Whitworth Art Gallery. Cornelia Parker’s meteorite shower firework (pieces of meteorites loaded in fireworks) was launched by Novoselov breathing on a graphene gas sensor (which changed the resistance of the graphene due to doping by water vapour). The graphene was obtained through exfoliation of graphite extracted from a drawing by William Blake. Novoselov suggested that he also exfoliated graphite obtained from the drawings of other prominent artists: John Constable, Pablo Picasso, J. M. W. Turner, and Thomas Girtin. He said that only microscopic amounts (flake size less than 100 micrometres) were extracted from each of the drawings.[63] In 2015 he participated in an “in conversation” session with Douglas Gordon during the Interdependence session at the Manchester International Festival.[64]

I have published two posts about Novoselov’s participation in art/science projects: the first was on August 13, 2018 and titled “See Nobel prize winner’s (Kostya Novoselov) collaborative art/science video project on August 17, 2018 (Manchester, UK)” and the second was on February 25, 2019 and titled “Watch a Physics Nobel Laureate make art on February 26, 2019 at Mobile World Congress 19 in Barcelona, Spain.” (I may have to seriously consider more variety in my titles.)

I hope that one of these days I’ll get my hands on this book. In the meantime, I intend to spend more time perusing Bruno Latour’s website.

Regulating body temperature, graphene-style

I find some illustrations a little difficult to decipher,

Caption: Graphene thermal smart materials. Credit: The University of Manchester

I believe the red in the ‘on/off’ images signifies heat from the surrounding environment, not body heat, and that the yellow square in the ‘on’ image indicates the shirt is working and repelling that heat.

Moving on, a June 18, 2020 news item on Nanowerk describes this latest work on a smart textile that can help regulate body temperature when it’s hot,

New research on the two-dimensional (2D) material graphene has allowed researchers to create smart adaptive clothing which can lower the body temperature of the wearer in hot climates.

A team of scientists from The University of Manchester’s National Graphene Institute has created a prototype garment to demonstrate dynamic thermal radiation control within a piece of clothing by utilising the remarkable thermal properties and flexibility of graphene. The development also opens the door to new applications such as interactive infrared displays and covert infrared communication on textiles.

A June 18, 2020 University of Manchester press release (also on EurekAlert), which originated the news item, provides more detail,

The human body radiates energy in the form of electromagnetic waves in the infrared spectrum (known as blackbody radiation). In a hot climate it is desirable to make full use of this infrared radiation to lower body temperature, which can be achieved by using infrared-transparent textiles. In the opposite case, infrared-blocking covers are ideal to minimise energy loss from the body. Emergency blankets are a common example, used to treat extreme cases of body temperature fluctuation.
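The blackbody claim is easy to check with textbook formulas (my own back-of-envelope numbers, not from the paper): Wien’s displacement law puts the peak emission of a ~310 K body in the mid-infrared, and the Stefan-Boltzmann law gives its gross radiated power.

```python
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

def peak_wavelength_um(temp_k):
    """Wavelength of peak blackbody emission, in micrometres."""
    return WIEN_B / temp_k * 1e6

def radiated_power_w(temp_k, area_m2=1.7, emissivity=0.98):
    """Gross power radiated by a grey body (skin emissivity ~0.98);
    the net heat loss is smaller because the surroundings radiate back."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

print(f"peak emission: {peak_wavelength_um(310):.1f} um (mid-infrared)")
print(f"gross radiated power: {radiated_power_w(310):.0f} W")
```

The ~9 µm peak is exactly the band that infrared-transparent or infrared-blocking textiles (and the graphene emissivity control described here) need to manage.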

The collaborative team of scientists demonstrated the dynamic transition between two opposite states by electrically tuning the infrared emissivity (the ability to radiate energy) of the graphene layers integrated onto textiles.

One-atom-thick graphene was first isolated and explored in 2004 at The University of Manchester. Its potential uses are vast, and research has already led to leaps forward in commercial products including batteries, mobile phones, sporting goods and automotive applications.

The new research, published today in the journal Nano Letters, demonstrates that the smart optical textile technology can change its thermal visibility. The technology uses graphene layers to control the thermal radiation from textile surfaces.

Professor Coskun Kocabas, who led the research, said: “The ability to control thermal radiation is a key necessity for several critical applications such as temperature management of the body in excessively hot or cold climates. Thermal blankets are a common example used for this purpose. However, maintaining these functionalities as the surroundings heat up or cool down has been an outstanding challenge.”

Prof Kocabas added: “The successful demonstration of the modulation of optical properties on different forms of textile can leverage the ubiquitous use of fibrous architectures and enable new technologies operating in the infrared and other regions of the electromagnetic spectrum for applications including textile displays, communication, adaptive space suits, and fashion”.

This study built on the same group’s previous research using graphene to create thermal camouflage which would fool infrared cameras. The new research can also be integrated into existing mass-manufacture textile materials such as cotton. To demonstrate, the team developed a prototype product within a t-shirt allowing the wearer to project coded messages invisible to the naked eye but readable by infrared cameras.

“We believe that our results are timely showing the possibility of turning the exceptional optical properties of graphene into novel enabling technologies. The demonstrated capabilities cannot be achieved with conventional materials.”

“The next step for this area of research is to address the need for dynamic thermal management of earth-orbiting satellites. Satellites in orbit experience excesses of temperature when they face the sun, and they freeze in the earth’s shadow. Our technology could enable dynamic thermal management of satellites by controlling the thermal radiation and regulating the satellite temperature on demand,” said Kocabas.

Professor Sir Kostya Novoselov was also involved in the research: “This is a beautiful effect, intrinsically rooted in the unique band structure of graphene. It is really exciting to see such effects give rise to high-tech applications,” he said.

Advanced materials is one of The University of Manchester’s research beacons – examples of pioneering discoveries, interdisciplinary collaboration and cross-sector partnerships that are tackling some of the biggest questions facing the planet. #ResearchBeacons

Here’s a link to and a citation for the paper,

Graphene-Enabled Adaptive Infrared Textiles by M. Said Ergoktas, Gokhan Bakan, Pietro Steiner, Cian Bartlam, Yury Malevich, Elif Ozden-Yenigun, Guanliang He, Nazmul Karim, Pietro Cataldi, Mark A. Bissett, Ian A. Kinloch, Kostya S. Novoselov, and Coskun Kocabas. Nano Lett. 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.nanolett.0c01694 Publication Date:June 18, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Understanding the fundamental limits of graphene electronics by way of a new quantum phenomenon

A July 26, 2019 news item on Nanowerk takes us into the world of quantum physics and graphene (Note: Links have been removed),

A team of researchers from the Universities of Manchester, Nottingham and Loughborough has discovered a quantum phenomenon that helps to understand the fundamental limits of graphene electronics.

As published in Nature Communications (“Strong magnetophonon oscillations in extra-large graphene”), the work describes how electrons in a single atomically-thin sheet of graphene scatter off the vibrating carbon atoms which make up the hexagonal crystal lattice.

By applying a magnetic field perpendicular to the plane of the graphene, the current-carrying electrons are forced to move in closed circular “cyclotron” orbits. In pure graphene, the only way in which an electron can escape from this orbit is by bouncing off a “phonon” in a scattering event. These phonons are particle-like bundles of energy and momentum and are the “quanta” of the sound waves associated with the vibrating carbon atoms. The phonons are generated in increasing numbers when the graphene crystal is warmed up from very low temperatures.
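For a sense of scale (illustrative numbers of my own, not from the paper), the cyclotron orbit radius for graphene’s massless Dirac electrons is R_c = ħk_F/(eB), with Fermi wavevector k_F = √(πn) at carrier density n. At typical densities and fields around a tesla, the orbits are on the order of a hundred nanometres.

```python
import math

HBAR = 1.0546e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602e-19   # elementary charge, C

def cyclotron_radius_m(n_per_m2, b_tesla):
    """R_c = hbar * k_F / (e * B) for graphene's Dirac electrons,
    with k_F = sqrt(pi * n) for a single Dirac cone."""
    k_f = math.sqrt(math.pi * n_per_m2)
    return HBAR * k_f / (E_CHARGE * b_tesla)

n = 1e16  # carrier density: 10^12 cm^-2, a typical assumed value
for b in (0.5, 1.0, 2.0):
    print(f"B = {b} T -> R_c = {cyclotron_radius_m(n, b) * 1e9:.0f} nm")
```

Orbits of ~0.1 µm give a feel for why, as noted below, devices only a few micrometres across are too small for the effect to be observed cleanly.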

By passing a small electrical current through the graphene sheet, the team were able to measure precisely the amount of energy and momentum that is transferred between an electron and a phonon during a scattering event.

A July 26, 2019 University of Manchester press release, which originated the news item, provides additional technical details,

Their experiment revealed that two types of phonon scatter the electrons: transverse acoustic (TA) phonons, in which the carbon atoms vibrate perpendicular to the direction of phonon propagation and wave motion (somewhat analogous to surface waves on water), and longitudinal acoustic (LA) phonons, in which the carbon atoms vibrate back and forth along the direction of the phonon and the wave motion (somewhat analogous to the motion of sound waves through air).

The measurements provide a very accurate measure of the speed of both types of phonons, a measurement which is otherwise difficult to make for the case of a single atomic layer. An important outcome of the experiments is the discovery that TA phonon scattering dominates over LA phonon scattering.

The observed phenomenon, commonly referred to as “magnetophonon oscillations”, was measured in many semiconductors years before the discovery of graphene. It is one of the oldest quantum transport phenomena, known for more than fifty years and predating the quantum Hall effect. Whereas graphene possesses a number of novel, exotic electronic properties, this rather fundamental phenomenon had remained hidden.

Laurence Eaves and Roshan Krishna Kumar, co-authors of the work, said: “We were pleasantly surprised to find such prominent magnetophonon oscillations appearing in graphene. We were also puzzled why people had not seen them before, considering the extensive amount of literature on quantum transport in graphene.”

Their appearance requires two key ingredients. First, the team had to fabricate high-quality graphene transistors with large areas at the National Graphene Institute. If the device dimensions are smaller than a few micrometres, the phenomenon cannot be observed.

Piranavan Kumaravadivel from The University of Manchester, lead author of the paper said: “At the beginning of quantum transport experiments, people used to study macroscopic, millimetre sized crystals. In most of the work on quantum transport on graphene, the studied devices are typically only a few micrometres in size. It seems that making larger graphene devices is not only important for applications but now also for fundamental studies.”

The second ingredient is temperature. Most graphene quantum transport experiments are performed at ultra-cold temperatures in order to slow down the vibrating carbon atoms and “freeze out” the phonons that usually break quantum coherence. Here, therefore, the graphene is warmed up, as the phonons need to be active to cause the effect.

Mark Greenaway, from Loughborough University, who worked on the quantum theory of this effect said: “This result is extremely exciting – it opens a new route to probe the properties of phonons in two-dimensional crystals and their heterostructures. This will allow us to better understand electron-phonon interactions in these promising materials, understanding which is vital to develop them for use in new devices and applications.”

Here’s a link to and a citation for the paper,

Strong magnetophonon oscillations in extra-large graphene by P. Kumaravadivel, M. T. Greenaway, D. Perello, A. Berdyugin, J. Birkbeck, J. Wengraf, S. Liu, J. H. Edgar, A. K. Geim, L. Eaves & R. Krishna Kumar. Nature Communications volume 10, Article number: 3334 (2019) DOI: https://doi.org/10.1038/s41467-019-11379-3 Published 26 July 2019

This paper is open access.

Red wine for making wearable electronics?

Courtesy: University of Manchester

A July 12, 2019 news item on Nanowerk may change how you view that glass of red wine,

A team of scientists are seeking to kick-start a wearable technology revolution by creating flexible fibres and adding acids from red wine.

Extracting tannic acid from red wine, coffee or black tea led a team of scientists from The University of Manchester to develop much more durable and flexible wearable devices. The addition of tannins improved the mechanical properties of materials such as cotton used to develop wearable sensors for rehabilitation monitoring, drastically increasing the devices’ lifespan.

A July 11, 2019 University of Manchester press release, which originated the news item, describes how this new approach could affect the scientists’ previous work,

The team have developed wearable devices such as capacitive breath sensors and artificial hands for extreme conditions by improving the durability of flexible sensors. Previously, wearable technology has been prone to failure after repeated bending and folding, which can interrupt the conductivity of such devices through tiny micro-cracks. Improving this could open the door to longer-lasting integrated technology.

Dr Xuqing Liu who led the research team said: “We are using this method to develop new flexible, breathable, wearable devices. The main research objective of our group is to develop comfortable wearable devices for flexible human-machine interface.

“Traditional conductive material suffers from weak bonding to the fibers which can result in low conductivity. When red wine, or coffee, or black tea, is spilled on a dress, it’s difficult to get rid of these stains. The main reason is that they all contain tannic acid, which can firmly adsorb the material on the surface of the fiber. This good adhesion is exactly what we need for durable wearable, conductive devices.”

The new research, published in the journal Small, demonstrated that without this layer of tannic acid, the conductivity is several hundred times, or even thousands of times, lower than in traditional conductive material samples, as the conductive coating becomes easily detached from the textile surface through repeated bending and flexing.

Here’s a link to and a citation for the paper,

A Nature‐Inspired, Flexible Substrate Strategy for Future Wearable Electronics by Chuang Zhu, Evelyn Chalmers, Liming Chen, Yuqi Wang, Ben Bin Xu, Yi Li, Xuqing Liu. Small 1902440 (Online Version of Record before inclusion in an issue) DOI: https://doi.org/10.1002/smll.201902440 First published: 19 June 2019

This paper is behind a paywall.

Human Brain Project: update

The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each, to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)

At a little more than half-way through the project period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),

Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.

On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …

It’s been exactly 10 years. He did not succeed.

One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.

Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, criticism of this approach was widespread, and to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …

Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”

No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …

I’m happy to see the update. As I recall, there was murmuring almost immediately about the Human Brain Project (HBP). I never got details but it seemed that people were quite actively unhappy about the disbursements. Of course, this kind of uproar is not unusual when great sums of money are involved and the Graphene Flagship also had its rocky moments.

As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and other efforts (e.g. genetics) which they often claim are the ‘future of medicine’.

To be fair, Yong focuses on the brain simulation aspect of the HBP (and Markram’s efforts in the Blue Brain Project), but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.

After reading the article, I looked up Henry Markram’s Wikipedia entry and found this,

In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.[7][8]

On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.[9]

I also looked up the Human Brain Project and, talking about their other efforts, was reminded that they have a neuromorphic computing platform, SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate the human brain’s ability to synthesize and process information in computing processors.

In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.

The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,

Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Neuromorphic Computing in the HBP

In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.

The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 Million neurons and 1 Billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.

The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs in real time; BrainScaleS is implemented as an accelerated system and operates at 10,000 times real time. Simulations on conventional supercomputers typically run factors of 1,000 slower than biology and cannot access the vastly different timescales involved in learning and development, ranging from milliseconds to years.

Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.
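As a rough illustration of what machines like SpiNNaker simulate, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic spiking model used across neuromorphic computing. This is a generic sketch, not SpiNNaker’s actual software stack, and all parameter values are illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage
# integrates input current, leaks back toward rest, and emits a "spike"
# when it crosses threshold, then resets. Parameters are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, r_mem=1.0):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau; return spike times (s)."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_mem * i_in) / tau
        if v >= v_thresh:              # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

# A constant suprathreshold current for 200 ms drives regular spiking.
spike_times = simulate_lif([1.5] * 200)
print(len(spike_times), "spikes")
```

Hardware like SpiNNaker runs millions of such update loops in parallel, exchanging only the spike events between processors rather than the full voltage traces.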

I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their effort, they were awarded the Nobel Prize for physics in 2010.)

Getting back to the HBP (and the Graphene Flagship for that matter), the funding should be drying up sometime around 2023 and I wonder if it will be possible to assess the impact.

Fake graphene

Michael Berger’s October 9, 2018 Nanowerk Spotlight article about graphene brings to light a problem, which in hindsight seems obvious, fake graphene (Note: Links have been removed),

Peter Bøggild over at DTU [Technical University of Denmark] just published an interesting opinion piece in Nature titled “The war on fake graphene”.

The piece refers to a paper published in Advanced Materials (“The Worldwide Graphene Flake Production”) that studied graphene purchased from 60 producers around the world.

The study’s [“The Worldwide Graphene Flake Production”] findings show unequivocally “that the quality of the graphene produced in the world today is rather poor, not optimal for most applications, and most companies are producing graphite microplatelets. This is possibly the main reason for the slow development of graphene applications, which usually require a customized solution in terms of graphene properties.”

A conclusion that sounds even more damning is that “our extensive studies of graphene production worldwide indicate that there is almost no high quality graphene, as defined by ISO [International Organization for Standardization], in the market yet.”

The team also points out that a large number of the samples on the market labelled as graphene are actually graphene oxide and reduced graphene oxide. Furthermore, carbon content analysis shows that in many cases there is substantial contamination of the samples, and a large number of companies produce material with a low carbon content. Contamination has many possible sources but most likely arises from the chemicals used in the processes.
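To make the layer-count issue concrete, here is a hypothetical sketch of the kind of bookkeeping behind the study’s conclusion: flakes thicker than roughly ten graphene layers are effectively graphite microplatelets. The class labels, cutoffs, and sample data below are illustrative, not taken from the paper.

```python
# Hedged sketch: bin measured flake thicknesses (in graphene layers)
# into material classes. The ~10-layer cutoff reflects common ISO-style
# usage; the exact thresholds here are illustrative.
def classify_flakes(layer_counts):
    """Count flakes per class given a list of layer numbers."""
    bins = {"graphene (1 layer)": 0,
            "few-layer graphene (2-10)": 0,
            "graphite microplatelets (>10)": 0}
    for n in layer_counts:
        if n == 1:
            bins["graphene (1 layer)"] += 1
        elif n <= 10:
            bins["few-layer graphene (2-10)"] += 1
        else:
            bins["graphite microplatelets (>10)"] += 1
    return bins

# A hypothetical sample: mostly thick flakes, as the study reported for
# many commercial powders sold as "graphene".
sample = [1, 3, 8, 15, 40, 12, 60, 2, 25, 30]
print(classify_flakes(sample))
```

Under this kind of tally, a powder dominated by the third bin is graphite being sold under a graphene label.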

From Peter Bøggild’s October 8, 2018 opinion piece in Nature,

Graphite is composed of layers of carbon atoms just a single atom in thickness, known as graphene sheets, to which it owes many of its remarkable properties. When the thickness of graphite flakes is reduced to just a few graphene layers, some of the material’s technologically most important characteristics are greatly enhanced — such as the total surface area per gram, and the mechanical flexibility of the individual flakes. In other words, graphene is more than just thin graphite. Unfortunately, it seems that many graphene producers either do not know or do not care about this. …

Imagine a world in which antibiotics could be sold by anybody, and were not subject to quality standards and regulations. Many people would be afraid to use them because of the potential side effects, or because they had no faith that they would work, with potentially fatal consequences. For emerging nanomaterials such as graphene, a lack of standards is creating a situation that, although not deadly, is similarly unacceptable.

It seems that the high-profile scientific discoveries, technical breakthroughs and heavy investment in graphene have created a Wild West for business opportunists: the study shows that some producers are labelling black powders that mostly contain cheap graphite as graphene, and selling them for top dollar. The problem is exacerbated because the entry barrier to becoming a graphene provider is exceptionally low — anyone can buy bulk graphite, grind it to powder and make a website to sell it on.

Nevertheless, the work [“The Worldwide Graphene Flake Production”] is a timely and ambitious example of the rigorous mindset needed to make rapid progress, not just in graphene research, but in work on any nanomaterial entering the market. To put it bluntly, there can be no quality without quality control.

Here are links to and citations for the study providing the basis for both Berger’s Spotlight article and Bøggild’s opinion piece,

The Worldwide Graphene Flake Production by Alan P. Kauling, Andressa T. Seefeldt, Diego P. Pisoni, Roshini C. Pradeep, Ricardo Bentini, Ricardo V. B. Oliveira, Konstantin S. Novoselov [emphasis mine], Antonio H. Castro Neto. Advanced Materials Volume 30, Issue 44, November 2, 2018, 1803784 DOI: https://doi.org/10.1002/adma.201803784

The study, which includes Konstantin Novoselov, who shared a Nobel Prize with Andre Geim for their work at the University of Manchester, where they first isolated graphene, is behind a paywall.

It’s a very ‘carbony’ time: graphene jacket, graphene-skinned airplane, and schwarzite

In August 2018, I stumbled across several stories about graphene-based products and a new form of carbon.

Graphene jacket

The company producing this jacket has as its goal “… creating bionic clothing that is both bulletproof and intelligent.” Well, ‘bionic‘ means biologically-inspired engineering and ‘intelligent‘ usually means there’s some kind of computing capability in the product. This jacket, which is the first step towards the company’s goal, is not bionic, bulletproof, or intelligent. Nonetheless, it represents a very interesting science experiment in which you, the consumer, are part of step two in the company’s R&D (research and development).

Onto Vollebak’s graphene jacket,

Courtesy: Vollebak

From an August 14, 2018 article by Jesus Diaz for Fast Company,

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that have long threatened to revolutionize everything from aerospace engineering to medicine. …

Despite its immense promise, graphene still hasn’t found much use in consumer products, thanks to the fact that it’s hard to manipulate and manufacture in industrial quantities. The process of developing Vollebak’s jacket, according to the company’s cofounders, brothers Steve and Nick Tidball, took years of intensive research, during which the company worked with the same material scientists who built Michael Phelps’ 2008 Olympic Speedo swimsuit (which was famously banned for shattering records at the event).

The jacket is made out of a two-sided material, which the company invented during the extensive R&D process. The graphene side looks gunmetal gray, while the flipside appears matte black. To create it, the scientists turned raw graphite into something called graphene “nanoplatelets,” which are stacks of graphene that were then blended with polyurethane to create a membrane. That, in turn, is bonded to nylon to form the other side of the material, which Vollebak says alters the properties of the nylon itself. “Adding graphene to the nylon fundamentally changes its mechanical and chemical properties–a nylon fabric that couldn’t naturally conduct heat or energy, for instance, now can,” the company claims.

The company says that it’s reversible so you can enjoy graphene’s properties in different ways as the material interacts with either your skin or the world around you. “As physicists at the Max Planck Institute revealed, graphene challenges the fundamental laws of heat conduction, which means your jacket will not only conduct the heat from your body around itself to equalize your skin temperature and increase it, but the jacket can also theoretically store an unlimited amount of heat, which means it can work like a radiator,” Tidball explains.

He means it literally. You can leave the jacket out in the sun, or on another source of warmth, as it absorbs heat. Then, the company explains on its website, “If you then turn it inside out and wear the graphene next to your skin, it acts like a radiator, retaining its heat and spreading it around your body. The effect can be visibly demonstrated by placing your hand on the fabric, taking it away and then shooting the jacket with a thermal imaging camera. The heat of the handprint stays long after the hand has left.”

There’s a lot more to the article, although it does feature some hype, and I’m not sure I believe Diaz’s claim (August 14, 2018 article) that ‘graphene-based’ hair dye is perfectly safe (Note: A link has been removed),

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that will one day revolutionize everything from aerospace engineering to medicine. Its diverse uses are seemingly endless: It can stop a bullet if you add enough layers. It can change the color of your hair with no adverse effects. [emphasis mine] It can turn the walls of your home into a giant fire detector. “It’s so strong and so stretchy that the fibers of a spider web coated in graphene could catch a falling plane,” as Vollebak puts it in its marketing materials.

Not unless things have changed greatly since March 2018. My August 2, 2018 posting featured the graphene-based hair dye announcement from March 2018 and a cautionary note from Dr. Andrew Maynard (scroll down about 50% of the way for a longer excerpt of Maynard’s comments),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

The full text of Dr. Maynard’s comments about graphene hair dyes and risk can be found here.

Bearing in mind that graphene-based hair dye is an entirely different class of product from the jacket, I wouldn’t necessarily dismiss risks; I would like to know what kind of risk assessment and safety testing has been done. Due to their understandable enthusiasm, the brothers Tidball have focused all their marketing on the benefits and the opportunity for the consumer to test their product (from the graphene jacket product webpage),

While it’s completely invisible and only a single atom thick, graphene is the lightest, strongest, most conductive material ever discovered, and has the same potential to change life on Earth as stone, bronze and iron once did. But it remains difficult to work with, extremely expensive to produce at scale, and lives mostly in pioneering research labs. So following in the footsteps of the scientists who discovered it through their own highly speculative experiments, we’re releasing graphene-coated jackets into the world as experimental prototypes. Our aim is to open up our R&D and accelerate discovery by getting graphene out of the lab and into the field so that we can harness the collective power of early adopters as a test group. No-one yet knows the true limits of what graphene can do, so the first edition of the Graphene Jacket is fully reversible with one side coated in graphene and the other side not. If you’d like to take part in the next stage of this supermaterial’s history, the experiment is now open. You can now buy it, test it and tell us about it. [emphasis mine]

How maverick experiments won the Nobel Prize

While graphene’s existence was first theorised in the 1940s, it wasn’t until 2004 that two maverick scientists, Andre Geim and Konstantin Novoselov, were able to isolate and test it. Through highly speculative and unfunded experimentation known as their ‘Friday night experiments,’ they peeled layer after layer off a shaving of graphite using Scotch tape until they produced a sample of graphene just one atom thick. After similarly leftfield thinking won Geim the 2000 Ig Nobel prize for levitating frogs using magnets, the pair won the Nobel prize in 2010 for the isolation of graphene.

Should you be interested in beta-testing the jacket, it will cost you $695 (presumably USD); order here. One last thing: Vollebak is based in the UK.

Graphene skinned plane

An August 14, 2018 news item (also published as an August 1, 2018 Haydale press release) by Sue Keighley on Azonano heralds a new technology for airplanes,

Haydale, (AIM: HAYD), the global advanced materials group, notes the announcement made yesterday from the University of Central Lancashire (UCLAN) about the recent unveiling of the world’s first graphene skinned plane at the internationally renowned Farnborough air show.

The prepreg material, developed by Haydale, has potential value for fuselage and wing surfaces in larger scale aero and space applications especially for the rapidly expanding drone market and, in the longer term, the commercial aerospace sector. By incorporating functionalised nanoparticles into epoxy resins, the electrical conductivity of fibre-reinforced composites has been significantly improved for lightning-strike protection, thereby achieving substantial weight saving and removing some manufacturing complexities.

Before getting to the photo, here’s a definition for pre-preg from its Wikipedia entry (Note: Links have been removed),

Pre-preg is “pre-impregnated” composite fibers where a thermoset polymer matrix material, such as epoxy, or a thermoplastic resin is already present. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture.

Haydale has supplied graphene enhanced prepreg material for Juno, a three-metre wide graphene-enhanced composite skinned aircraft, that was revealed as part of the ‘Futures Day’ at Farnborough Air Show 2018. [downloaded from https://www.azonano.com/news.aspx?newsID=36298]

A July 31, 2018 University of Central Lancashire (UCLan) press release provides a tiny bit more (pun intended) detail,

The University of Central Lancashire (UCLan) has unveiled the world’s first graphene skinned plane at an internationally renowned air show.

Juno, a three-and-a-half-metre wide graphene skinned aircraft, was revealed on the North West Aerospace Alliance (NWAA) stand as part of the ‘Futures Day’ at Farnborough Air Show 2018.

The University’s aerospace engineering team has worked in partnership with the Sheffield Advanced Manufacturing Research Centre (AMRC), the University of Manchester’s National Graphene Institute (NGI), Haydale Graphene Industries (Haydale) and a range of other businesses to develop the unmanned aerial vehicle (UAV), which also includes graphene batteries and 3D printed parts.

Billy Beggs, UCLan’s Engineering Innovation Manager, said: “The industry reaction to Juno at Farnborough was superb with many positive comments about the work we’re doing. Having Juno at one of the world’s biggest air shows demonstrates the great strides we’re making in leading a programme to accelerate the uptake of graphene and other nano-materials into industry.

“The programme supports the objectives of the UK Industrial Strategy and the University’s Engineering Innovation Centre (EIC) to increase industry relevant research and applications linked to key local specialisms. Given that Lancashire represents the fourth largest aerospace cluster in the world, there is perhaps no better place to be developing next generation technologies for the UK aerospace industry.”

Previous graphene developments at UCLan have included the world’s first flight of a graphene skinned wing and the launch of a specially designed graphene-enhanced capsule into near space using high altitude balloons.

UCLan engineering students have been involved in the hands-on project, helping build Juno on the Preston Campus.

Haydale supplied much of the material and all the graphene used in the aircraft. Ray Gibbs, Chief Executive Officer, said: “We are delighted to be part of the project team. Juno has highlighted the capability and benefit of using graphene to meet key issues faced by the market, such as reducing weight to increase range and payload, defeating lightning strike and protecting aircraft skins against ice build-up.”

David Bailey Chief Executive of the North West Aerospace Alliance added: “The North West aerospace cluster contributes over £7 billion to the UK economy, accounting for one quarter of the UK aerospace turnover. It is essential that the sector continues to develop next generation technologies so that it can help the UK retain its competitive advantage. It has been a pleasure to support the Engineering Innovation Centre team at the University in developing the world’s first full graphene skinned aircraft.”

The Juno project team represents the latest phase in a long-term strategic partnership between the University and a range of organisations. The partnership is expected to go from strength to strength following the opening of the £32m EIC facility in February 2019.

The next step is to fly Juno and conduct further tests over the next two months.

Next item, a new carbon material.


I love watching this gif of a schwarzite,

The three-dimensional cage structure of a schwarzite that was formed inside the pores of a zeolite. (Graphics by Yongjin Lee and Efrem Braun)

An August 13, 2018 news item on Nanowerk announces the new carbon structure,

The discovery of buckyballs [also known as fullerenes, C60, or buckminsterfullerenes] surprised and delighted chemists in the 1980s, nanotubes jazzed physicists in the 1990s, and graphene charged up materials scientists in the 2000s, but one nanoscale carbon structure – a negatively curved surface called a schwarzite – has eluded everyone. Until now.

University of California, Berkeley [UC Berkeley], chemists have proved that three carbon structures recently created by scientists in South Korea and Japan are in fact the long-sought schwarzites, which researchers predict will have unique electrical and storage properties like those now being discovered in buckminsterfullerenes (buckyballs or fullerenes for short), nanotubes and graphene.

An August 13, 2018 UC Berkeley news release by Robert Sanders, which originated the news item, describes how the Berkeley scientists and the members of their international collaboration from Switzerland, China, Germany, Italy, and Russia have contributed to the current state of schwarzite research,

The new structures were built inside the pores of zeolites, crystalline forms of silicon dioxide – sand – more commonly used as water softeners in laundry detergents and to catalytically crack petroleum into gasoline. Called zeolite-templated carbons (ZTC), the structures were being investigated for possible interesting properties, though the creators were unaware of their identity as schwarzites, which theoretical chemists have worked on for decades.

Based on this theoretical work, chemists predict that schwarzites will have unique electronic, magnetic and optical properties that would make them useful as supercapacitors, battery electrodes and catalysts, and with large internal spaces ideal for gas storage and separation.

UC Berkeley postdoctoral fellow Efrem Braun and his colleagues identified these ZTC materials as schwarzites based on their negative curvature, and developed a way to predict which zeolites can be used to make schwarzites and which can’t.

“We now have the recipe for how to make these structures, which is important because, if we can make them, we can explore their behavior, which we are working hard to do now,” said Berend Smit, an adjunct professor of chemical and biomolecular engineering at UC Berkeley and an expert on porous materials such as zeolites and metal-organic frameworks.

Smit, the paper’s corresponding author, Braun and their colleagues in Switzerland, China, Germany, Italy and Russia will report their discovery this week in the journal Proceedings of the National Academy of Sciences. Smit is also a faculty scientist at Lawrence Berkeley National Laboratory.

Playing with carbon

Diamond and graphite are well-known three-dimensional crystalline arrangements of pure carbon, but carbon atoms can also form two-dimensional “crystals” — hexagonal arrangements patterned like chicken wire. Graphene is one such arrangement: a flat sheet of carbon atoms that is not only the strongest material on Earth, but also has a high electrical conductivity that makes it a promising component of electronic devices.

schwarzite carbon cage

The cage structure of a schwarzite that was formed inside the pores of a zeolite. The zeolite is subsequently dissolved to release the new material. (Graphics by Yongjin Lee and Efrem Braun)

Graphene sheets can be wadded up to form soccer ball-shaped fullerenes – spherical carbon cages that can store molecules and are being used today to deliver drugs and genes into the body. Rolling graphene into a cylinder yields fullerenes called nanotubes, which are being explored today as highly conductive wires in electronics and storage vessels for gases like hydrogen and carbon dioxide. All of these are submicroscopic, 10,000 times smaller than the width of a human hair.

To date, however, only positively curved fullerenes and graphene, which has zero curvature, have been synthesized, feats rewarded by Nobel Prizes in 1996 and 2010, respectively.

In the 1880s, German physicist Hermann Schwarz investigated negatively curved structures that resemble soap-bubble surfaces, and when theoretical work on carbon cage molecules ramped up in the 1990s, Schwarz’s name became attached to the hypothetical negatively curved carbon sheets.

“The experimental validation of schwarzites thus completes the triumvirate of possible curvatures to graphene: positively curved, flat, and now negatively curved,” Braun added.

Minimize me

Like soap bubbles on wire frames, schwarzites are topologically minimal surfaces. When made inside a zeolite, a vapor of carbon-containing molecules is injected, allowing the carbon to assemble into a two-dimensional graphene-like sheet lining the walls of the pores in the zeolite. The surface is stretched tautly to minimize its area, which makes all the surfaces curve negatively, like a saddle. The zeolite is then dissolved, leaving behind the schwarzite.
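A quick aside on the mathematics (my gloss, not the press release’s): a minimal surface is one whose mean curvature vanishes everywhere, and that condition forces the Gaussian curvature to be negative (or zero) at every point, which is why these surfaces are saddle-shaped,

```latex
H = \frac{k_1 + k_2}{2} = 0
\;\Rightarrow\; k_2 = -k_1
\;\Rightarrow\; K = k_1 k_2 = -k_1^2 \le 0
```

Here \(k_1\) and \(k_2\) are the principal curvatures at a point and \(K\) is the Gaussian curvature, which is strictly negative wherever the surface isn’t flat.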

soap bubble schwarzite structure

A computer-rendered negatively curved soap bubble that exhibits the geometry of a carbon schwarzite. (Felix Knöppel image)

“These negatively-curved carbons have been very hard to synthesize on their own, but it turns out that you can grow the carbon film catalytically at the surface of a zeolite,” Braun said. “But the schwarzites synthesized to date have been made by choosing zeolite templates through trial and error. We provide very simple instructions you can follow to rationally make schwarzites and we show that, by choosing the right zeolite, you can tune schwarzites to optimize the properties you want.”

Researchers should be able to pack unusually large amounts of electrical charge into schwarzites, which would make them better capacitors than conventional ones used today in electronics. Their large interior volume would also allow storage of atoms and molecules, which is also being explored with fullerenes and nanotubes. And their large surface area, equivalent to the surface areas of the zeolites they’re grown in, could make them as versatile as zeolites for catalyzing reactions in the petroleum and natural gas industries.

Braun modeled ZTC structures computationally using the known structures of zeolites, and worked with topological mathematician Senja Barthel of the École Polytechnique Fédérale de Lausanne in Sion, Switzerland, to determine which of the minimal surfaces the structures resembled.

The team determined that, of the approximately 200 zeolites created to date, only 15 can be used as a template to make schwarzites, and only three of them have been used to date to produce schwarzite ZTCs. Over a million zeolite structures have been predicted, however, so there could be many more possible schwarzite carbon structures made using the zeolite-templating method.

Other co-authors of the paper are Yongjin Lee, Seyed Mohamad Moosavi and Barthel of the École Polytechnique Fédérale de Lausanne, Rocio Mercado of UC Berkeley, Igor Baburin of the Technische Universität Dresden in Germany and Davide Proserpio of the Università degli Studi di Milano in Italy and Samara State Technical University in Russia.

Here’s a link to and a citation for the paper,

Generating carbon schwarzites via zeolite-templating by Efrem Braun, Yongjin Lee, Seyed Mohamad Moosavi, Senja Barthel, Rocio Mercado, Igor A. Baburin, Davide M. Proserpio, and Berend Smit. PNAS August 14, 2018. 201805062; published ahead of print August 14, 2018. https://doi.org/10.1073/pnas.1805062115

This paper appears to be open access.

Watch a Physics Nobel Laureate make art on February 26, 2019 at Mobile World Congress 19 in Barcelona, Spain

Konstantin (Kostya) Novoselov (Nobel Prize in Physics 2010) strikes out artistically, again. The last time was in 2018 (see my August 13, 2018 posting about Novoselov’s project with artist Mary Griffiths).

This time around, Novoselov and artist, Kate Daudy, will be creating an art piece during a demonstration at the Mobile World Congress 19 (MWC 19) in Barcelona, Spain. From a February 21, 2019 news item on Azonano,

Novoselov is best known for his groundbreaking experiments on graphene, which is lightweight, flexible, stronger than steel, and more conductive than copper. For this work, Professors Andre Geim and Kostya Novoselov were awarded the Nobel Prize in Physics in 2010. Novoselov is also one of the founding principal researchers of the Graphene Flagship, a €1 billion research project funded by the European Commission.

At MWC 2019, Novoselov will collaborate with British textile artist Kate Daudy, continuing his ongoing interest in art projects. During the show, the pair will produce a piece of art using materials printed with embedded graphene. The installation will be named “Everything is Connected,” the slogan of the Graphene Flagship and reflective of the themes at MWC 2019.

The demonstration will be held on Tuesday, February 26th, 2019 at 11:30 CET in the Graphene Pavilion, an area devoted to showcasing inventions accomplished by funding from the Graphene Flagship. Apart from the art demonstration, exhibitors in the Graphene Pavilion will demonstrate 26 modern graphene-based prototypes and devices that will revolutionize the future of telecommunications, mobile phones, home technology, and wearables.

A February 20, 2019 University of Manchester press release, which originated the news item, goes on to describe what might be called the real point of this exercise,

Interactive demonstrations include a selection of health-related wearable technologies, which will be exhibited in the ‘wearables of the future’ area. Prototypes in this zone include graphene-enabled pressure sensing insoles, which have been developed by Graphene Flagship researchers at the University of Cambridge to accurately identify problematic walking patterns in wearers.

Another prototype will demonstrate how graphene can be used to reduce heat in mobile phone batteries, thereby prolonging their lifespan. In fact, the material required for this invention is the same one that will be used during the art installation demonstration.

Andrea Ferrari, Science and Technology Officer and Chair of the management panel of the Graphene Flagship said: “Graphene and related layered materials have steadily progressed from fundamental to applied research and from the lab to the factory floor. Mobile World Congress is a prime opportunity for the Graphene Flagship to showcase how the European Commission’s investment in research is beginning to create tangible products and advanced prototypes. Outreach is also part of the Graphene Flagship mission and the interplay between graphene, culture and art has been explored by several Flagship initiatives over the years. This unique live exhibition of Kostya is a first for the Flagship and the Mobile World Congress, and I invite everybody to attend.”

More information on the Graphene Pavilion, the prototypes on show and the interactive demonstrations at MWC 2019, can be found on the Graphene Flagship website. Alternatively, contact the Graphene Flagship directly at press@graphene-flagship.eu.

The Novoselov/Daudy project sounds as if it draws inspiration from performance art practices. In any case, it seems like a creative and fun way to engage the audience. For anyone curious about Kate Daudy‘s work,

[downloaded from https://katedaudy.com/]

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.
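As a toy illustration of that non-deterministic, non-linear behaviour (my own sketch, not IBM’s device model): each programming pulse nudges the conductance upward by a state-dependent amount plus random variability, with the step size shrinking as the device saturates.

```python
import random

class ToyMemristor:
    """Toy phase-change-like device: conductance grows non-linearly
    and stochastically with each programming ('SET') pulse.
    Parameter values are illustrative, not measured device values."""

    def __init__(self, g_min=0.1, g_max=1.0, sigma=0.05):
        self.g = g_min                  # current conductance (arbitrary units)
        self.g_min, self.g_max = g_min, g_max
        self.sigma = sigma              # relative programming noise

    def pulse(self):
        # Step size shrinks as the device approaches saturation (non-linear),
        # and each update carries random variability (non-deterministic).
        step = 0.1 * (self.g_max - self.g)
        self.g += step + random.gauss(0.0, self.sigma * step)
        self.g = min(max(self.g, self.g_min), self.g_max)  # physical bounds
        return self.g

m = ToyMemristor()
trace = [m.pulse() for _ in range(20)]  # conductance after each pulse
```

Run it twice and the two traces differ, which is exactly the reliability headache the researchers had to design around.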

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
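Nandakumar’s description translates into a surprisingly simple scheme. Here’s my own sketch of the idea (not the authors’ code): several devices in parallel sum to give one synaptic weight, and a rotating counter picks which single device receives each update.

```python
# Toy sketch of a multi-memristive synapse (after Boybat et al.):
# N devices in parallel represent one synapse; only the device selected
# by a global counter is programmed per update. Device behaviour here is
# idealized for illustration.

class MultiMemristiveSynapse:
    def __init__(self, n_devices=4):
        self.g = [0.0] * n_devices   # conductance of each device
        self.counter = 0             # global arbitration counter

    @property
    def weight(self):
        # Effective synaptic weight = sum of the device conductances
        return sum(self.g)

    def update(self, delta):
        # Program only the device the counter points at, then advance it,
        # spreading wear and device-level noise across all N devices.
        i = self.counter % len(self.g)
        self.g[i] = max(0.0, self.g[i] + delta)
        self.counter += 1

syn = MultiMemristiveSynapse(n_devices=4)
for _ in range(8):           # eight potentiation events
    syn.update(0.25)
# each of the 4 devices was selected twice, so the weight is 8 * 0.25 = 2.0
```

Because each device absorbs only a fraction of the updates, the non-deterministic errors of any single device average out across the group.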

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here, (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
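The “perform the computational tasks in place” idea rests on basic circuit physics: store the weights as conductances G, apply the inputs as voltages V, and Ohm’s and Kirchhoff’s laws deliver the matrix-vector product as output currents, I = G·V. A plain-Python sketch (mine, to illustrate the principle; the values are arbitrary),

```python
# In-memory matrix-vector multiply: weights stored as a conductance
# matrix G (siemens), inputs applied as voltages V (volts). Each output
# line's current is sum(G[i][j] * V[j]) -- Ohm's law per device,
# Kirchhoff's current law summing the contributions on each line.

G = [
    [0.5, 0.2, 0.1],   # conductances feeding output line 0
    [0.3, 0.4, 0.6],   # conductances feeding output line 1
]
V = [1.0, 0.5, 2.0]    # input voltages

I = [sum(g * v for g, v in zip(row, V)) for row in G]
# I[0] = 0.5*1.0 + 0.2*0.5 + 0.1*2.0 = 0.8
# I[1] = 0.3*1.0 + 0.4*0.5 + 0.6*2.0 = 1.7
```

The multiply-and-accumulate happens in the physics of the crossbar itself, with no data shuttling between memory and processor, which is where the energy savings come from.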

Now onto the next bit.


SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time are currently out of reach.” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]
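For readers wondering what ‘spiking’ means concretely: the workhorse model in simulators like NEST is the leaky integrate-and-fire neuron, where a membrane potential decays toward rest, integrates incoming current, and emits a spike (then resets) on crossing a threshold. A bare-bones version (my own illustration; the parameter values are arbitrary, not NEST or SpiNNaker defaults),

```python
# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# Units are arbitrary; dt and times are in milliseconds.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, duration=100.0):
    """Return spike times for a constant input current over `duration` ms."""
    v = v_rest
    spikes = []
    for step in range(int(duration / dt)):
        # Membrane leaks toward rest while integrating input (Euler step)
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:          # threshold crossing: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

strong = simulate_lif(input_current=2.0)   # drives v past threshold: spikes
weak = simulate_lif(input_current=0.5)     # settles at 0.5 < 1.0: silent
```

SpiNNaker’s half a million ARM cores each simulate many such neurons in parallel and exchange the spikes as small packets, which is what makes the architecture ‘brain-like’.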

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.