Monthly Archives: February 2017

nano tech 2017 being held in Tokyo from February 15-17, 2017

I found some news about the Alberta technology scene in the programme for Japan’s nano tech 2017 exhibition and conference to be held Feb. 15 – 17, 2017 in Tokyo. First, here’s more about the show in Japan from a Jan. 17, 2017 nano tech 2017 press release on Business Wire (also on Yahoo News),

The nano tech executive committee (chairman: Tomoji Kawai, Specially Appointed Professor, Osaka University) will be holding “nano tech 2017” – one of the world’s largest nanotechnology exhibitions, now in its 16th year – on February 15, 2017, at the Tokyo Big Sight convention center in Japan. 600 organizations (including over 40 first-time exhibitors) from 23 countries and regions are set to exhibit at the event in 1,000 booths, demonstrating revolutionary and cutting edge core technologies spanning such industries as automotive, aerospace, environment/energy, next-generation sensors, cutting-edge medicine, and more. Including attendees at the concurrently held exhibitions, the total number of visitors to the event is expected to exceed 50,000.

The theme of this year’s nano tech exhibition is “Open Nano Collaboration.” By bringing together organizations working in a wide variety of fields, the business matching event aims to promote joint development through cross-field collaboration.

Special Symposium: “Nanotechnology Contributing to the Super Smart Society”

Each year nano tech holds a Special Symposium, in which industry specialists from top organizations in Japan and abroad speak about the issues surrounding the latest trends in nanotech. The themes of this year’s Symposium are Life Nanotechnology, Graphene, AI/IoT, Cellulose Nanofibers, and Materials Informatics.

Notable sessions include:

Life Nanotechnology
“Development of microRNA liquid biopsy for early detection of cancer”
Takahiro Ochiya, National Cancer Center Research Institute Division of Molecular and Cellular Medicine, Chief

AI / IoT
“AI Embedded in the Real World”
Hideki Asoh, AIST Deputy Director, Artificial Intelligence Research Center

Cellulose Nanofibers [emphasis mine]
“The Current Trends and Challenges for Industrialization of Nanocellulose”
Satoshi Hirata, Nanocellulose Forum Secretary-General

Materials Informatics
“Perspective of Materials Research”
Hideo Hosono, Tokyo Institute of Technology Professor

View the full list of sessions:
>> http://nanotech2017.icsbizmatch.jp/Presentation/en/Info/List#main_theater

nano tech 2017 Homepage:
>> http://nanotechexpo.jp/

nano tech 2017, the 16th International Nanotechnology Exhibition & Conference
Date: February 15-17, 2017, 10:00-17:00
Venue: Tokyo Big Sight (East Halls 4-6 & Conference Tower)
Organizer: nano tech Executive Committee, JTB Communication Design

As you may have guessed, the Alberta information can be found in the Cellulose Nanofibers session. From the conference/seminar program page (scroll down about 25% of the way to find the Alberta presentation),

Production and Applications Development of Cellulose Nanocrystals (CNC) at InnoTech Alberta

Behzad (Benji) Ahvazi
InnoTech Alberta Team Lead, Cellulose Nanocrystals (CNC)

[ Abstract ]

The production and use of cellulose nanocrystals (CNC) is an emerging technology that has gained considerable interest from a range of industries that are working towards increased use of “green” biobased materials. The construction of one-of-a-kind CNC pilot plant [emphasis mine] at InnoTech Alberta and production of CNC samples represents a critical step for introducing the cellulosic based biomaterials to industrial markets and provides a platform for the development of novel high value and high volume applications. Major key components including feedstock, acid hydrolysis formulation, purification, and drying processes were optimized significantly to reduce the operation cost. Fully characterized CNC samples were provided to a large number of academic and research laboratories including various industries domestically and internationally for applications development.

[ Profile ]

Dr. Ahvazi completed his Bachelor of Science honours program at the Department of Chemistry and Biochemistry and graduated with distinction from Concordia University in Montréal, Québec. His Ph.D. program was completed in 1998 at the McGill Pulp and Paper Research Centre in the area of macromolecules, with a solid background in lignocellulosics and organic wood chemistry as well as pulping and paper technology. After completing his post-doctoral fellowship, he joined FPInnovations, formerly known as PAPRICAN, as a research scientist (R&D) focusing on a number of confidential chemical pulping and bleaching projects. In 2006, he worked at Tembec as a senior research scientist and as a Leader in Alcohol and Lignin (R&D). In April 2009, he took a position as a Research Officer in both the National Bioproducts (NBP1 & NBP2) and Industrial Biomaterials Flagship programs at the National Research Council Canada (NRC). During his tenure, he directed and performed innovative R&D activities within both programs on extraction, modification, and characterization of biomass as well as polymer synthesis and formulation for industrial applications. Currently, he is working at InnoTech Alberta as Team Lead for Biomass Conversion and Processing Technologies.

Canada scene update

InnoTech Alberta was until Nov. 1, 2016 known as Alberta Innovates – Technology Futures. Here’s more about InnoTech Alberta from the Alberta Innovates … home page,

Effective November 1, 2016, Alberta Innovates – Technology Futures is one of four corporations now consolidated into Alberta Innovates and a wholly owned subsidiary called InnoTech Alberta.

You will find all the existing programs, services and information offered by InnoTech Alberta on this website. To access the basic research funding and commercialization programs previously offered by Alberta Innovates – Technology Futures, explore here. For more information on Alberta Innovates, visit the new Alberta Innovates website.

As for InnoTech Alberta’s “one-of-a-kind CNC pilot plant,” I’d like to know more about its one-of-a-kind status since there are two other CNC production plants in Canada. (Is the status a consequence of regional chauvinism or a writer unfamiliar with the topic?) Getting back to the topic, the largest company (and I believe the first) with a CNC plant was CelluForce, which started as a joint venture between Domtar and FPInnovations, backed by some very heavy investment from the government of Canada. (See my July 16, 2010 posting about the construction of the plant in Quebec and my June 6, 2011 posting about the newly named CelluForce.) Interestingly, CelluForce will have a booth at nano tech 2017 (according to its Jan. 27, 2017 news release) although the company doesn’t seem to have any presentations on the schedule. The other Canadian company is Blue Goose Biorefineries in Saskatchewan. Here’s more about Blue Goose from the company website’s home page,

Blue Goose Biorefineries Inc. (Blue Goose) is pleased to introduce our R3™ process. R3™ technology incorporates green chemistry to fractionate renewable plant biomass into high value products.

Traditionally, separating lignocellulosic biomass required high temperatures, harsh chemicals, and complicated processes. R3™ breaks this costly compromise to yield high quality cellulose, lignin and hemicellulose products.

The robust and environmentally friendly R3™ technology has numerous applications. Our current product focus is cellulose nanocrystals (CNC). Cellulose nanocrystals are “Mother Nature’s Building Blocks” possessing unique properties. These unique properties encourage the design of innovative products from a safe, inherently renewable, sustainable, and carbon neutral resource.

Blue Goose assists companies and research groups in the development of applications for CNC, by offering CNC for sale without Intellectual Property restrictions. [emphasis mine]

Bravo to Blue Goose! Unfortunately, I was not able to determine if the company will be at nano tech 2017.

One final comment: there was some excitement about CNC a while back, when more than one person contacted me asking for information about how to buy CNC. I wasn’t able to be helpful because there was, apparently, an attempt by producers to control sales and limit CNC access to a select few for competitive advantage. Coincidentally or not, CelluForce developed a stockpile which has persisted for some years, as I noted in my Aug. 17, 2016 posting (scroll down about 70% of the way), where the company announced, amongst other events, that it expected to deplete its stockpile by mid-2017.

‘Superhemophobic’ medical implants

Counterintuitively, repelling blood is the concept behind a new type of medical implant according to a Jan. 18, 2017 news item on ScienceDaily,

Medical implants like stents, catheters and tubing introduce risk for blood clotting and infection — a perpetual problem for many patients.

Colorado State University engineers offer a potential solution: A specially grown, “superhemophobic” titanium surface that’s extremely repellent to blood. The material could form the basis for surgical implants with lower risk of rejection by the body.

Blood, plasma and water droplets beading on a superomniphobic surface. CSU researchers have created a superhemophobic titanium surface, repellent to blood, that has potential applications for biocompatible medical devices. Courtesy: Colorado State University

A Jan. 18, 2017 Colorado State University news release by Anne Ju Manning, which originated the news item, explains more,

t’s an outside-the-box innovation achieved at the intersection of two disciplines: biomedical engineering and materials science. The work, recently published in Advanced Healthcare Materials, is a collaboration between the labs of Arun Kota, assistant professor of mechanical engineering and biomedical engineering; and Ketul Popat, associate professor in the same departments.

Kota, an expert in novel, “superomniphobic” materials that repel virtually any liquid, joined forces with Popat, an innovator in tissue engineering and bio-compatible materials. Starting with sheets of titanium, commonly used for medical devices, their labs grew chemically altered surfaces that act as perfect barriers between the titanium and blood. Their teams conducted experiments showing very low levels of platelet adhesion, a biological process that leads to blood clotting and eventual rejection of a foreign material.

Chemical compatibility

A material “phobic” (repellent) to blood might seem counterintuitive, the researchers say, as often biomedical scientists use materials “philic” (with affinity) to blood to make them biologically compatible. “What we are doing is the exact opposite,” Kota said. “We are taking a material that blood hates to come in contact with, in order to make it compatible with blood.” The key innovation is that the surface is so repellent, that blood is tricked into believing there’s virtually no foreign material there at all.

The undesirable interaction of blood with foreign materials is an ongoing problem in medical research, Popat said. Over time, stents can form clots, obstructions, and lead to heart attacks or embolisms. Often patients need blood-thinning medications for the rest of their lives – and the drugs aren’t foolproof.

“The reason blood clots is because it finds cells in the blood to go to and attach,” Popat said. “Normally, blood flows in vessels. If we can design materials where blood barely contacts the surface, there is virtually no chance of clotting, which is a coordinated set of events. Here, we’re targeting the prevention of the first set of events.”

Nanotubes

Fluorinated nanotubes provided the best superhemophobic surface in the researchers’ experiments.

The researchers analyzed variations of titanium surfaces, including different textures and chemistries, and they compared the extent of platelet adhesion and activation. Fluorinated nanotubes offered the best protection against clotting, and they plan to conduct follow-up experiments.

Growing a surface and testing it in the lab is only the beginning, the researchers say. They want to continue examining other clotting factors, and eventually, to test real medical devices.

Here’s a link to and a citation for the paper,

Hemocompatibility of Superhemophobic Titania Surfaces by Sanli Movafaghi, Victoria Leszczak, Wei Wang, Jonathan A. Sorkin, Lakshmi P. Dasi, Ketul C. Popat, and Arun K. Kota. Advanced Healthcare Materials DOI: 10.1002/adhm.201600717 Version of Record online: 21 DEC 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

Communicating science effectively—a December 2016 book from the US National Academy of Sciences

I stumbled across this Dec. 13, 2016  essay/book announcement by Dr. Andrew Maynard and Dr. Dietram A. Scheufele on The Conversation,

Many scientists and science communicators have grappled with disregard for, or inappropriate use of, scientific evidence for years – especially around contentious issues like the causes of global warming, or the benefits of vaccinating children. A long debunked study on links between vaccinations and autism, for instance, cost the researcher his medical license but continues to keep vaccination rates lower than they should be.

Only recently, however, have people begun to think systematically about what actually works to promote better public discourse and decision-making around what is sometimes controversial science. Of course scientists would like to rely on evidence, generated by research, to gain insights into how to most effectively convey to others what they know and do.

As it turns out, the science on how to best communicate science across different issues, social settings and audiences has not led to easy-to-follow, concrete recommendations.

About a year ago, the National Academies of Sciences, Engineering and Medicine brought together a diverse group of experts and practitioners to address this gap between research and practice. The goal was to apply scientific thinking to the process of how we go about communicating science effectively. Both of us were a part of this group (with Dietram as the vice chair).

The public draft of the group’s findings – “Communicating Science Effectively: A Research Agenda” – has just been published. In it, we take a hard look at what effective science communication means and why it’s important; what makes it so challenging – especially where the science is uncertain or contested; and how researchers and science communicators can increase our knowledge of what works, and under what conditions.

At some level, all science communication has embedded values. Information always comes wrapped in a complex skein of purpose and intent – even when presented as impartial scientific facts. Despite, or maybe because of, this complexity, there remains a need to develop a stronger empirical foundation for effective communication of and about science.

Addressing this, the National Academies draft report makes an extensive number of recommendations. A few in particular stand out:

  • Use a systems approach to guide science communication. In other words, recognize that science communication is part of a larger network of information and influences that affect what people and organizations think and do.
  • Assess the effectiveness of science communication. Yes, researchers try, but often we still engage in communication first and evaluate later. Better to design the best approach to communication based on empirical insights about both audiences and contexts. Very often, the technical risks that scientists think must be communicated have nothing to do with the hopes or concerns public audiences have.
  • Get better at meaningful engagement between scientists and others to enable that “honest, bidirectional dialogue” about the promises and pitfalls of science that our committee chair Alan Leshner and others have called for.
  • Consider social media’s impact – positive and negative.
  • Work toward better understanding when and how to communicate science around issues that are contentious, or potentially so.

The paper version of the book has a cost but you can get a free online version. Unfortunately, I cannot copy and paste the book’s table of contents here, and I was not able to find an index, although there is a handy list of reference texts.

I have taken a very quick look at the book. If you’re in the field, it’s definitely worth a look. It is, however, written for and by academics. If you look at the list of writers and reviewers, you will find over 90% are professors at one university or another. That said, I was happy to see Dan Kahan’s work at the Yale Law School’s Cultural Cognition Project cited. As it happens, they weren’t able to cite his latest work [***see my xxx, 2017 curiosity post***], released about a month after “Communicating Science Effectively: A Research Agenda.”

I was unable to find any reference to science communication via popular culture. I’m a little dismayed, as I feel this is a source of information seriously ignored by science communication specialists and academicians but not by the folks at MIT (Massachusetts Institute of Technology), who announced a wireless app the same week it was featured in an episode of the US television comedy, The Big Bang Theory. Here’s more about MIT’s emotion-detection wireless app from a Feb. 1, 2017 news release (also on EurekAlert),

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.
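For curious readers, here’s a rough sense of how the two-algorithm design described above (one classifier for the overall story, one for each five-second block) might fit together. To be clear, this is not the MIT team’s code: the feature names, thresholds, and scoring rules below are invented for illustration, and the real system uses deep learning on Simband waveforms and audio rather than hand-set rules.

```python
# Hypothetical sketch (not MIT's code): score each 5-second segment,
# then aggregate the segment labels into an overall happy/sad call,
# mirroring the two-algorithm design described in the news release.
# All feature names and thresholds here are invented.

def score_segment(seg):
    """Classify one 5-second segment as positive, negative, or neutral.

    `seg` is a dict of made-up features: pitch variance, vocal energy,
    and the fraction of the interval spent in pauses.
    """
    score = seg["pitch_var"] + seg["energy"] - 2.0 * seg["pause_frac"]
    if score > 0.5:
        return "positive"
    if score < -0.5:
        return "negative"
    return "neutral"

def classify_story(segments):
    """Aggregate per-segment labels into an overall happy/sad label."""
    labels = [score_segment(s) for s in segments]
    pos = labels.count("positive")
    neg = labels.count("negative")
    return ("happy" if pos >= neg else "sad"), labels

# Three 5-second segments of a story, as invented feature dicts.
story = [
    {"pitch_var": 0.9, "energy": 0.8, "pause_frac": 0.1},  # lively speech
    {"pitch_var": 0.1, "energy": 0.2, "pause_frac": 0.6},  # long pauses
    {"pitch_var": 0.7, "energy": 0.9, "pause_frac": 0.2},
]
overall, per_segment = classify_story(story)
print(overall, per_segment)
```

In the actual system, each segment’s features would come from physiological waveforms and text transcripts, and the segment classifier would be a trained neural network rather than a threshold rule.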

Episode 14 of season 10 of The Big Bang Theory was titled “The Emotion Detection Automation”  (full episode can be found on this webpage) and broadcast on Feb. 2, 2017. There’s also a Feb. 2, 2017 recap (recapitulation) by Lincee Ray for EW.com (it seems Ray is unaware that there really is such a machine),

Who knew we would see the day when Sheldon and Raj figured out solutions for their social ineptitudes? Only The Big Bang Theory writers would think to tackle our favorite physicists’ lack of social skills with an emotion detector and an ex-girlfriend focus group. It’s been a while since I enjoyed both storylines as much as I did in this episode. That’s no bazinga.

When Raj tells the guys that he is back on the market, he wonders out loud what is wrong with his game. Why do women reject him? Sheldon receives the information like a scientist and runs through many possible answers. Raj shuts him down with a simple, “I’m fine.”

Sheldon is irritated when he learns that this obligatory remark is a mask for what Raj is really feeling. It turns out, Raj is not fine. Sheldon whines, wondering why no one just says exactly what’s on their mind. It’s quite annoying for those who struggle with recognizing emotional cues.

Lo and behold, Bernadette recently read about a gizmo that was created for people who have this exact same anxiety. MIT has a prototype, and because Howard is an alum, he can probably submit Sheldon’s name as a beta tester.

Of course this is a real thing. If anyone can build an emotion detector, it’s a bunch of awkward scientists with zero social skills.

This is the first time I’ve noticed an academic institution’s news release appearing almost simultaneously with a mention of its research in a popular culture television program, which suggests things have come a long way since my Aug. 28, 2012 post featuring news about a webinar by the National Academies’ Science and Entertainment Exchange for film and television productions collaborating with scientists.

One last science/popular culture moment: Hidden Figures, a movie about the African American women who were human computers supporting NASA (US National Aeronautics and Space Administration) efforts during the 1960s space race to get a man on the moon, was (shockingly) no. 1 at the US box office for a few weeks (there’s more about the movie in my Sept. 2, 2016 post covering then upcoming movies featuring science). After the movie was released, Mary Elizabeth Williams wrote up a Jan. 23, 2017 interview with the ‘Hidden Figures’ scriptwriter for Salon.com,

I [Allison Schroeder] got on the phone with her [co-producer Renee Witt] and Donna [co-producer Donna Gigliotti] and I said, “You have to hire me for this; I was born to write this.” Donna sort of rolled her eyes and was like, “God, these Hollywood types would say anything.” I said, “No, no, I grew up at Cape Canaveral. My grandmother was a computer programmer at NASA, my grandfather worked on the Mercury prototype, and I interned there all through high school and then the summer after my freshman year at Stanford I interned. I worked at a missile launch company.”

She was like, “OK that’s impressive.” And I said, “No, I literally grew up climbing on the Mercury capsule — hitting all the buttons, trying to launch myself into space.”

She said, “Well do you think you can handle the math?” I said that I had to study a certain amount of math at Stanford for my economics degree. She said, “Oh, all right, that sounds pretty good.”

I pitched her a few scenes. I pitched her the end of the movie that you saw with Katherine running the numbers as John Glenn is trying to get up in space. I pitched her the idea of one of the women as a mechanic and to see her legs underneath the engine. You’re used to seeing a guy like that, but what would it be like to see heels and pantyhose and a skirt and she’s a mechanic and fixing something? Those are some of the scenes that I pitched them, and I got the job.

I love that the film begins with setting up their mechanical aptitude. You set up these are women; you set up these women of color. You set up exactly what that means in this moment in history. It’s like you just go from there.

I was on a really tight timeline because this started as an indie film. It was just Donna Gigliotti, Renee Witt, me and the author Margot Lee Shetterly for about a year working on it. I was only given four weeks for research and 12 weeks for writing the first draft. I’m not sure if I hadn’t known NASA and known the culture and just knew what the machines would look like, knew what the prototypes looked like, if I could have done it that quickly. I turned in that draft and Donna was like, “OK you’ve got the math and the science; it’s all here. Now go have fun.” Then I did a few more drafts and that was really enjoyable because I could let go of the fact I did it and make sure that the characters and the drive of the story and everything just fit what needed to happen.

For anyone interested in the science/popular culture connection, David Bruggeman of the Pasco Phronesis blog does a better job than I do of keeping up with the latest doings.

Getting back to ‘Communicating Science Effectively: A Research Agenda’, even without a mention of popular culture, it is a thoughtful book on the topic.

Fireworks for fuel?

Scientists are attempting to harness the power in fireworks for use as fuel according to a Jan. 18, 2017 news item on Nanowerk,

The world relies heavily on gasoline and other hydrocarbons to power its cars and trucks. In search of an alternative fuel type, some researchers are turning to the stuff of fireworks and explosives: metal powders. And now one team is reporting a method to produce a metal nanopowder fuel with high energy content that is stable in air and doesn’t go boom until ignited.

A Jan. 18, 2017 American Chemical Society (ACS) news release, which originated the news item, expands on the theme,

Hydrocarbon fuels are liquid at room temperature, are simple to store, and their energy can be used easily in cars and trucks. Metal powders, which can contain large amounts of energy, have long been used as a fuel in explosives, propellants and pyrotechnics. It might seem counterintuitive to develop them as a fuel for vehicles, but some researchers have proposed to do just that. A major challenge is that high-energy metal nanopowder fuels tend to be unstable and ignite on contact with air. Albert Epshteyn and colleagues wanted to find a way to harness and control them, producing a fuel with both high energy content and good air stability.

The researchers developed a method using an ultrasound-mediated chemical process to combine the metals titanium, aluminum and boron with a sprinkle of hydrogen in a mixed-metal nanopowder fuel. The resulting material was both more stable and had a higher energy content than the standard nano-aluminum fuels. With an energy density of at least 89 kilojoules/milliliter, which is significantly superior to hydrocarbons’ 33 kilojoules/milliliter, this new titanium-aluminum-boron nanopowder packs a big punch in a small package.
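For a sense of scale, here’s a quick back-of-the-envelope comparison of the two energy-density figures quoted in the news release (the 1 MJ reference quantity is my own choice for illustration):

```python
# Compare the volumetric energy densities quoted above:
# at least 89 kJ/mL for the Ti-Al-B nanopowder vs. 33 kJ/mL for hydrocarbons.
nanopowder_kj_per_ml = 89
hydrocarbon_kj_per_ml = 33

# How many times more energy the nanopowder packs per unit volume.
advantage = nanopowder_kj_per_ml / hydrocarbon_kj_per_ml
print(f"{advantage:.1f}x the volumetric energy density")  # roughly 2.7x

# Fuel volume needed to deliver the same 1 MJ (1,000 kJ) of energy.
ml_needed_powder = 1000 / nanopowder_kj_per_ml
ml_needed_hydrocarbon = 1000 / hydrocarbon_kj_per_ml
print(f"{ml_needed_powder:.1f} mL of nanopowder vs "
      f"{ml_needed_hydrocarbon:.1f} mL of hydrocarbon fuel")
```

In other words, litre for litre, the new nanopowder carries nearly three times the energy of a conventional hydrocarbon fuel.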

Here’s a link to and a citation for the paper,

Optimization of a High Energy Ti-Al-B Nanopowder Fuel by Albert Epshteyn, Michael Raymond Weismiller, Zachary John Huba, Emily L. Maling, and Adam S. Chaimowitz. Energy Fuels, DOI: 10.1021/acs.energyfuels.6b02321 Publication Date (Web): December 30, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Little black graphene dress

Graphene Dress. Courtesy: intu

I don’t think there are many women who can carry off this garment. Of course that’s not the point as the dress is designed to show off its technical capabilities. A Jan. 31, 2017 news item on Nanowerk announces the little black graphene dress (lbgd?),

Science and fashion have been brought together to create the world’s most technically advanced dress, the intu Little Black Graphene Dress.

The new prototype garment showcases future uses of the revolutionary, Nobel Prize-winning material graphene, incorporating it into fashion for the first time in the ultimate wearable tech statement garment.

A Jan. 25, 2017 National Graphene Institute at University of Manchester press release, which originated the news item, expands on the theme,

The project between intu Trafford Centre, renowned wearable tech company Cute Circuit, which has made dresses for the likes of Katy Perry and Nicole Scherzinger, and the National Graphene Institute at The University of Manchester uses graphene in a number of innovative ways to create the world’s most high-tech LBD, highlighting the material’s incredible properties.

The dress is fitted with a graphene sensor that captures the rate at which the wearer is breathing via a contracting graphene band around the model’s waist; micro LEDs featured across the bust on translucent conductive graphene respond to the sensor, flashing and changing colour depending on breathing rate. It marks a major step in the future uses of graphene in fashion, where it is hoped the highly conductive, transparent material could be used to create designs that act as screens showcasing digital imagery, which could be programmed to change and be updated by the wearer, meaning one garment could take on any colour, hue or design.

The 3D printed graphene filament shows the intricate structural detail of graphene in raised diamond shaped patterns and showcases graphene’s unrivalled conductivity with flashing LED lights.

The high tech LBD can be controlled by The Q app created by Cute Circuit to change the way the garment illuminates.

The dress was created by the Manchester shopping centre to celebrate Manchester’s crown as the European City of Science. The dress will be on display at intu Trafford Centre and will then be available for museums and galleries to borrow for fashion displays.

Richard Paxton, general manager at intu Trafford Centre, said: “We have a real passion for fashion and fashion firsts; this dress is a celebration of Manchester and its amazing love for innovation and textiles, showcasing this new wonder material in a truly unique and exciting way. It really is the world’s most high-tech dress featuring the most advanced super-material and something intu is very proud to have created in collaboration with Cute Circuit and the National Graphene Institute. Hopefully this project inspires more people to experiment with graphene and its wide range of uses.”

Francesca Rosella, Chief Creative Director for Cute Circuit, said: “This was such an exciting project for us to get involved in; graphene has never been used in the fashion industry, and being the first to use it was a real honour, allowing us to have a lot of fun creating the stunning intu Little Black Graphene Dress and showcasing graphene’s amazing properties.”

Dr Paul Wiper, Research Associate, National Graphene Institute, said: “This is a fantastic project; graphene is still very much in its infancy for real-world applications, and showcasing its amazing properties through the forum of fashion is very exciting. The dress is truly one of a kind and shows what creativity, imagination and a desire to innovate can create using graphene and related two-dimensional materials.”

The dress is modelled by Britain’s Next Top Model finalist Bethan Sowerby, who is from Manchester and used to work at intu Trafford Centre’s Topshop.

Probably not coming soon to a store near you.

Synthetic genetics and imprinting a sequence of a single DNA (deoxyribonucleic acid) strand

Caption: A polymer negative of a sequence of the genetic code, chemically active and able to bind complementary nucleobases, has been created by researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw. Credit: IPC PAS, Grzegorz Krzyzewski

Those are very large hands! In any event, I think they left out the word ‘model’ when describing what the researcher is holding.

A Jan. 19, 2017 news item on phys.org announces the research from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS),

In a carefully designed polymer, researchers at the Polish Academy of Sciences have imprinted a sequence of a single strand of DNA. The resulting negative remained chemically active and was capable of binding the appropriate nucleobases of a genetic code. The polymer matrix—the first of its type—thus functioned exactly like a sequence of real DNA.

A Jan. 18, 2017 IPC PAS press release, which originated the news item, provides more detail about the breakthrough and explains how it could lead to synthetic genetics,

Imprinting of chemical molecules in a polymer, or molecular imprinting, is a well-known method that has been under development for many years. However, no-one has ever before used it to construct a polymer chain complementing a sequence of a single strand of DNA. This feat has just been accomplished by researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw in collaboration with the University of North Texas (UNT) in Denton, USA, and the University of Milan in Italy. In an appropriately selected polymer, they reproduced a genetically important DNA sequence, constructed of six nucleobases.

Typically, molecular imprinting is accomplished in several steps. The molecules intended for imprinting are first placed in a solution of monomers (i.e. the basic “building blocks” from which the future polymer is to be formed). The monomers are selected so as to arrange themselves automatically around the molecules being imprinted. Next, the resulting complex is electrochemically polymerized, and then the imprinted molecules are extracted from the fixed structure. This process results in a polymer structure with molecular cavities matching the original molecules in size and shape, and even in their local chemical properties.

“Using molecular imprinting, we can produce, e.g. recognition films for chemical sensors, capturing molecules of only a specific chemical compound from the surroundings – since only these molecules fit into the existing molecular cavities. However, there’s no rose without a thorn. Molecular imprinting is perfect for smaller chemical molecules, but the larger the molecule, the more difficult it is to imprint it accurately into the polymer,” explains Prof. Wlodzimierz Kutner (IPC PAS).

Molecules of deoxyribonucleic acid, or DNA, are really large: their lengths are of the order of centimetres. These molecules generally consist of two long strands, paired up with each other. A single strand is made up of many repeating nucleotides, each of which contains one of the nucleobases: adenine (A), guanine (G), cytosine (C), or thymine (T). The bases on the two strands are not arranged freely: adenine on one strand always corresponds to thymine on the other, and guanine to cytosine. So, when we have one strand, we can always recreate its complement, which is the second strand.
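The pairing rule described above is simple enough to sketch in a few lines of Python. Because paired DNA strands run antiparallel, the complementary strand read in the conventional 5'→3' direction is the reverse complement; this is just a minimal illustration of the textbook rule, not anything from the researchers' own workflow:

```python
# Watson-Crick pairing: adenine<->thymine, guanine<->cytosine.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, read 5'->3' (i.e. reversed)."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("TATAAA"))  # -> TTTATA
```

TTTATA is exactly the sequence the IPC PAS team report finding in the polymer’s molecular cavities.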

The complementarity of nucleobases in DNA strands is very important for cells. Not only does it increase the permanence of the record of the genetic code (damage in one strand can be repaired based on the construction of the other), but it also makes it possible to transfer it from DNA to RNA in the process known as transcription. Transcription is the first step in the synthesis of proteins.

“Our idea was to try to imprint in the polymer a sequence of a single-stranded DNA. At the same time, we wanted to reproduce not only the shape of the strand, but also the sequential order of the constituent nucleobases,” says Dr. Agnieszka Pietrzyk-Le (IPC PAS).

In the study, financed on the Polish side by grants from the Foundation for Polish Science and the National Science Centre, researchers from the IPC PAS used a sequence of the genetic code known as TATAAA. This sequence plays an important biological role: it participates in deciding on the activation of the gene behind it. TATAAA is found in most eukaryotic cells (those containing a nucleus); in humans it is present in about every fourth gene.

A key step of the research was to design synthetic monomers undergoing electrochemical polymerization. These had to be capable of accurately surrounding the imprinted molecule in such a way that each of the adenines and thymines on the DNA strand were accompanied by their complementary bases. The mechanical requirements were also important, because after polymerization the matrix had to be stable. Suitable monomers were synthesized by the group of Prof. Francis D’Souza (UNT).

“When all the reagents and apparatus have been prepared, the imprinting itself of the TATAAA oligonucleotide is not especially complicated. The most important processes take place automatically in solutions in no more than a few dozen minutes. Finally, on the electrode used for electropolymerization, we obtain a layer of conductive polymer with molecular cavities where the nucleobases are arranged in the TTTATA sequence, that is, complementary to the extracted original”, describes doctoral student Katarzyna Bartold (IPC PAS).

Do polymer matrices prepared in this manner really reconstruct the original sequence of the DNA chain? To answer this question, at the IPC PAS careful measurements were carried out on the properties of the new polymers and a series of experiments was performed that confirmed the interaction of the polymers with various nucleobases in solutions. The results leave no doubt: the polymer DNA negative really is chemically active and selectively binds the TATAAA oligonucleotide, correctly reproducing the sequence of nucleobases.

The possibility of the relatively simple and low-cost production of stable polymer equivalents of DNA sequences is an important step in the development of synthetic genetics, especially in terms of its widespread applications in biotechnology and molecular medicine. If an improvement in the method developed at the IPC PAS is accomplished in the future, it will be possible to reproduce longer sequences of the genetic code in polymer matrices. This opens up inspiring perspectives associated not only with learning about the details of the process of transcription in cells or the construction of chemosensors for applications in nanotechnologies operating on chains of DNA, but also with the permanent archiving and replicating of the genetic code of different organisms.

Here’s a link to and a citation for the paper,

Programmed transfer of sequence information into molecularly imprinted polymer (MIP) for hexa(2,2’-bithien-5-yl) DNA analog formation towards single nucleotide polymorphism (SNP) detection by Katarzyna Bartold, Agnieszka Pietrzyk-Le, Tan-Phat Huynh, Zofia Iskierko, Marta I. Sosnowska, Krzysztof Noworyta, Wojciech Lisowski, Francesco Maria Enrico Sannicolo, Silvia Cauteruccio, Emanuela Licandro, Francis D’Souza, and Wlodzimierz Kutner. ACS Appl. Mater. Interfaces, Just Accepted Manuscript
DOI: 10.1021/acsami.6b14340 Publication Date (Web): January 10, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Nanoparticles can activate viruses lying dormant in lung cells

The nanoparticles in question are from combustion engines, which means that we are exposed to them. One other note, the testing has not been done on humans but rather on cells. From a Jan. 16, 2017 news item on ScienceDaily,

Nanoparticles from combustion engines can activate viruses that are dormant in lung tissue cells. This is the result of a study by researchers of Helmholtz Zentrum München, a partner in the German Center for Lung Research (DZL), which has now been published in the journal Particle and Fibre Toxicology.

To evade the immune system, some viruses hide in cells of their host and persist there. In medical terminology, this state is referred to as a latent infection. If the immune system becomes weakened or if certain conditions change, the viruses become active again, begin to proliferate and destroy the host cell. A team of scientists led by Dr. Tobias Stöger of the Institute of Lung Biology and Prof. Dr. Heiko Adler, deputy head of the research unit Lung Repair and Regeneration at Helmholtz Zentrum München, now report that nanoparticles can also trigger this process.

A Jan. 16, 2017 Helmholtz Zentrum München press release (also on EurekAlert), which originated the news item, provides more detail,

“From previous model studies we already knew that the inhalation of nanoparticles has an inflammatory effect and alters the immune system,” said study leader Stöger. Together with his colleagues Heiko Adler and Prof. Dr. Philippe Schmitt-Kopplin, he showed that “an exposure to nanoparticles can reactivate latent herpes viruses in the lung.”

Specifically, the scientists tested the influence of nanoparticles typically generated by fossil fuel combustion in an experimental model for a particular herpes virus infection. They detected a significant increase in viral proteins, which are only produced with active virus proliferation. “Metabolic and gene expression analyses also revealed patterns resembling acute infection,” said Philippe Schmitt-Kopplin, head of the research unit Analytical BioGeoChemistry (BGC). Moreover, further experiments with human cells demonstrated that Epstein-Barr viruses are also ‘awakened’ when they come into contact with the nanoparticles.

Potential approach for chronic lung diseases

In further studies, the research team would like to test whether the results can also be transferred to humans. “Many people carry herpes viruses, and patients with idiopathic pulmonary fibrosis are particularly affected,” said Heiko Adler. “If the results are confirmed in humans, it would be important to investigate the molecular process of the reactivation of latent herpes viruses induced by particle inhalation. Then we could try to influence this pathway therapeutically.”

Special cell culture models will therefore elucidate the exact mechanism of virus reactivation by nanoparticles. “In addition,” Stöger said, “in long-term studies we would like to investigate to what extent repeated nanoparticle exposure with corresponding virus reactivation leads to chronic inflammatory and remodeling processes in the lung.”

Further Information

Background:
In 2015 another group at the Helmholtz Zentrum München demonstrated how the Epstein-Barr virus hides in human cells. In March 2016 researchers also showed that microRNAs silence immune alarm signals of cells infected with the Epstein-Barr virus.

Original Publication:
Sattler, C. et al. (2016): Nanoparticle exposure reactivates latent herpesvirus and restores a signature of acute infection. Particle and Fibre Toxicology, DOI 10.1186/s12989-016-0181-1

Here’s a link to and a citation for the paper on investigating latent herpes virus,

Nanoparticle exposure reactivates latent herpesvirus and restores a signature of acute infection by Christine Sattler, Franco Moritz, Shanze Chen, Beatrix Steer, David Kutschke, Martin Irmler, Johannes Beckers, Oliver Eickelberg, Philippe Schmitt-Kopplin, Heiko Adler. Particle and Fibre Toxicology 2017 14:2 DOI: 10.1186/s12989-016-0181-1 Published: 10 January 2017

© The Author(s). 2017

This paper is open access and, so too, is the 2016 paper.

Hairy strength could lead to new body armour

A Jan. 18, 2017 news item on Nanowerk announces research into hair strength from the University of California at San Diego (UCSD or UC San Diego),

In a new study, researchers at the University of California San Diego investigate why hair is incredibly strong and resistant to breaking. The findings could lead to the development of new materials for body armor and help cosmetic manufacturers create better hair care products.

Hair has a strength-to-weight ratio comparable to steel. It can be stretched up to one and a half times its original length before breaking. “We wanted to understand the mechanism behind this extraordinary property,” said Yang (Daniel) Yu, a nanoengineering Ph.D. student at UC San Diego and the first author of the study.

A Jan. 18 (?), 2017 UCSD news release, which originated the news item, provides more information,

“Nature creates a variety of interesting materials and architectures in very ingenious ways. We’re interested in understanding the correlation between the structure and the properties of biological materials to develop synthetic materials and designs — based on nature — that have better performance than existing ones,” said Marc Meyers, a professor of mechanical engineering at the UC San Diego Jacobs School of Engineering and the lead author of the study.

In a study published online in December in the journal Materials Science and Engineering C, researchers examined at the nanoscale how a strand of human hair behaves when it is deformed, or stretched. The team found that hair behaves differently depending on how fast or slow it is stretched. The faster hair is stretched, the stronger it is. “Think of a highly viscous substance like honey,” Meyers explained. “If you deform it fast it becomes stiff, but if you deform it slowly it readily pours.”

Hair consists of two main parts — the cortex, which is made up of parallel fibrils, and the matrix, which has an amorphous (random) structure. The matrix is sensitive to the speed at which hair is deformed, while the cortex is not. The combination of these two components, Yu explained, is what gives hair the ability to withstand high stress and strain.

And as hair is stretched, its structure changes in a particular way. At the nanoscale, the cortex fibrils in hair are each made up of thousands of coiled spiral-shaped chains of molecules called alpha helix chains. As hair is deformed, the alpha helix chains uncoil and become pleated sheet structures known as beta sheets. This structural change allows hair to handle a large amount of deformation without breaking.

This structural transformation is partially reversible. When hair is stretched under a small amount of strain, it can recover its original shape. Stretch it further and the structural transformation becomes irreversible. “This is the first time evidence for this transformation has been discovered,” Yu said.

“Hair is such a common material with many fascinating properties,” said Bin Wang, a UC San Diego PhD alumna from the Department of Mechanical and Aerospace Engineering and co-author on the paper. Wang is now at the Shenzhen Institutes of Advanced Technology in China continuing research on hair.

The team also conducted stretching tests on hair at different humidity levels and temperatures. At higher humidity levels, hair can withstand up to 70 to 80 percent deformation before breaking (dry hair can undergo up to 50 percent deformation). Water essentially “softens” hair — it enters the matrix and breaks the sulfur bonds connecting the filaments inside a strand of hair. Researchers also found that hair starts to undergo permanent damage at 60 degrees Celsius (140 degrees Fahrenheit). Beyond this temperature, hair breaks faster at lower stress and strain.

“Since I was a child I always wondered why hair is so strong. Now I know why,” said Wen Yang, a former postdoctoral researcher in Meyers’ research group and co-author on the paper.

The team is currently conducting further studies on the effects of water on the properties of human hair. Moving forward, the team is investigating the detailed mechanism of how washing hair causes it to return to its original shape.

Here’s a link to and a citation for the paper,

Structure and mechanical behavior of human hair by Yang Yu, Wen Yang, Bin Wang, Marc André Meyers. Materials Science and Engineering: C Volume 73, 1 April 2017, Pages 152–163. http://dx.doi.org/10.1016/j.msec.2016.12.008

This paper is behind a paywall.

Going underground to observe atoms in a bid for better batteries

A Jan. 16, 2017 news item on ScienceDaily describes what lengths researchers at Stanford University (US) will go to in pursuit of their goals,

In a lab 18 feet below the Engineering Quad of Stanford University, researchers in the Dionne lab camped out with one of the most advanced microscopes in the world to capture an unimaginably small reaction.

The lab members conducted arduous experiments — sometimes requiring a continuous 30 hours of work — to capture real-time, dynamic visualizations of atoms that could someday help our phone batteries last longer and our electric vehicles go farther on a single charge.

Toiling underground in the tunneled labs, they recorded atoms moving in and out of nanoparticles less than 100 nanometers in size, with a resolution approaching 1 nanometer.

A Jan. 16, 2017 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, provides more detail,

“The ability to directly visualize reactions in real time with such high resolution will allow us to explore many unanswered questions in the chemical and physical sciences,” said Jen Dionne, associate professor of materials science and engineering at Stanford and senior author of the paper detailing this work, published Jan. 16 [2017] in Nature Communications. “While the experiments are not easy, they would not be possible without the remarkable advances in electron microscopy from the past decade.”

Their experiments focused on hydrogen moving into palladium, a class of reactions known as an intercalation-driven phase transition. This reaction is physically analogous to how ions flow through a battery or fuel cell during charging and discharging. Observing this process in real time provides insight into why nanoparticles make better electrodes than bulk materials and fits into Dionne’s larger interest in energy storage devices that can charge faster, hold more energy and stave off permanent failure.

Technical complexity and ghosts

For these experiments, the Dionne lab created palladium nanocubes, a form of nanoparticle, ranging in size from about 15 to 80 nanometers, and then placed them in a hydrogen gas environment within an electron microscope. The researchers knew that hydrogen would change both the dimensions of the lattice and the electronic properties of the nanoparticle. They thought that, with the appropriate microscope lens and aperture configuration, techniques called scanning transmission electron microscopy and electron energy loss spectroscopy might show hydrogen uptake in real time.

After months of trial and error, the results were extremely detailed, real-time videos of the changes in the particle as hydrogen was introduced. The entire process was so complicated and novel that the first time it worked, the lab didn’t even have the video software running, leading them to capture their first movie success on a smartphone.

Following these videos, they examined the nanocubes during intermediate stages of hydrogenation using a second technique in the microscope, called dark-field imaging, which relies on scattered electrons. In order to pause the hydrogenation process, the researchers plunged the nanocubes into a bath of liquid nitrogen mid-reaction, dropping their temperature to 100 Kelvin (-280 F). These dark-field images served as a way to check that the application of the electron beam hadn’t influenced the previous observations and allowed the researchers to see detailed structural changes during the reaction.
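As a quick sanity check on the quoted temperature, the kelvin and Fahrenheit figures do agree:

```python
# Convert the quoted cooling temperature from kelvin to Fahrenheit.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    return (kelvin - 273.15) * 9 / 5 + 32

print(round(kelvin_to_fahrenheit(100)))  # about -280 F, matching the press release
```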

“With the average experiment spanning about 24 hours at this low temperature, we faced many instrument problems and called Ai Leen Koh [co-author and research scientist at Stanford’s Nano Shared Facilities] at the weirdest hours of the night,” recalled Fariah Hayee, co-lead author of the study and graduate student in the Dionne lab. “We even encountered a ‘ghost-of-the-joystick problem,’ where the joystick seemed to move the sample uncontrollably for some time.”

While most electron microscopes operate with the specimen held in a vacuum, the microscope used for this research has the advanced ability to allow the researchers to introduce liquids or gases to their specimen.

“We benefit tremendously from having access to one of the best microscope facilities in the world,” said Tarun Narayan, co-lead author of this study and recent doctoral graduate from the Dionne lab. “Without these specific tools, we wouldn’t be able to introduce hydrogen gas or cool down our samples enough to see these processes take place.”

Pushing out imperfections

Aside from being a widely applicable proof of concept for this suite of visualization techniques, watching the atoms move provides greater validation for the high hopes many scientists have for nanoparticle energy storage technologies.

The researchers saw the atoms move in through the corners of the nanocube and observed the formation of various imperfections within the particle as hydrogen moved within it. This sounds like an argument against the promise of nanoparticles, but it's not the whole story.

“The nanoparticle has the ability to self-heal,” said Dionne. “When you first introduce hydrogen, the particle deforms and loses its perfect crystallinity. But once the particle has absorbed as much hydrogen as it can, it transforms itself back to a perfect crystal again.”

The researchers describe this as imperfections being “pushed out” of the nanoparticle. This ability of the nanocube to self-heal makes it more durable, a key property needed for energy storage materials that can sustain many charge and discharge cycles.

Looking toward the future

As the efficiency of renewable energy generation increases, the need for high-quality energy storage is more pressing than ever. It’s likely that the future of storage will rely on new chemistries, and the findings of this research, including the microscopy techniques the researchers refined along the way, will apply to nearly any solution in those categories.

For its part, the Dionne lab has many directions it can go from here. The team could look at a variety of material compositions, or compare how the sizes and shapes of nanoparticles affect the way they work, and, soon, take advantage of new upgrades to their microscope to study light-driven reactions. At present, Hayee has moved on to experimenting with nanorods, which have more surface area for the ions to move through, promising potentially even faster kinetics.

Here’s a link to and a citation for the paper,

Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles by Tarun C. Narayan, Fariah Hayee, Andrea Baldi, Ai Leen Koh, Robert Sinclair, & Jennifer A. Dionne. Nature Communications 8, Article number: 14020 (2017) doi:10.1038/ncomms14020 Published online: 16 January 2017

This paper is open access.

What is a multiregional brain-on-a-chip?

Having created a multiregional brain-on-a-chip, the team at Harvard University offers an explanation (which answers my question) in a Jan. 13, 2017 Harvard John A. Paulson School of Engineering and Applied Sciences news release (also on EurekAlert) by Leah Burrows,

Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.

“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”

“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients are not only the human thing to do, they are the best means of reducing this cost.”

Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.

They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.

“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”

Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.

The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.

“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”

To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the brain with the drug phencyclidine hydrochloride, commonly known as PCP, which simulates schizophrenia. The brain-on-a-chip allowed the researchers for the first time to look at both the drug’s impact on the individual regions as well as its downstream effect on the interconnected regions in vitro.

The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post traumatic stress disorder, and traumatic brain injury.

“To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will not only enable the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”

Here’s an image from the researchers,

Caption: Image of the in vitro model showing three distinct regions of the brain connected by axons. Credit: Disease Biophysics Group/Harvard University

Here’s a link to and a citation for the paper,

Neurons derived from different brain regions are inherently different in vitro: A novel multiregional brain-on-a-chip by Stephanie Dauth, Ben M Maoz, Sean P Sheehy, Matthew A Hemphill, Tara Murty, Mary Kate Macedonia, Angie M Greer, Bogdan Budnik, Kevin Kit Parker. Journal of Neurophysiology. Published 28 December 2016. Vol. no. [?], DOI: 10.1152/jn.00575.2016

This paper is behind a paywall and they haven’t included the vol. no. in the citation I’ve found.