Monthly Archives: July 2016

Peruvian scientist Marino Morikawa nanoremediates wetlands

Peru’s El Cascajo Lake has undergone a successful remediation with a nanotechnology-enabled technique developed by scientist Marino Morikawa, one he hopes can also be used to clean up Lake Titicaca, according to a July 6, 2016 news item on news.co.cr,

Peruvian scientist Marino Morikawa, who “revived” polluted wetlands in 15 days using nanotechnology, now plans to try to clean up Lake Titicaca and the Huacachina lagoon, an oasis in the middle of the desert.

El Cascajo, an ecosystem of roughly 50 hectares (123 acres) in Chancay district, located north of Lima, began its recovery in 2010 with two inventions that Morikawa came up with using his own resources and money.

The idea of restoring the wetlands came from a call from Morikawa’s father, who told the scientist that El Cascajo, where they used to go fishing when Marino was a child, “was in very bad condition,” Morikawa told EFE.

Marino Morikawa, who earned a degree in environmental science from Japan’s Tsukuba University, visited the wetlands and found a dump for sewage ringed by an illegal landfill where migratory birds fed.

The stinky swamp was covered by aquatic plants, Morikawa said.

The fifteen-day timeline for the cleanup seems to be contradicted by this June 22, 2014 article by Rosana Pinheiro for Agencia Plano (a Latin American news portal), which describes the situation at Lake El Cascajo and the nanotechnology in more detail,

Peruvian scientist Marino Morikawa created a cleansing system using nanobubbles to decontaminate lake El Cascajo, located in Chancay district, north of Lima, Peru’s capital. Nearly four years after the start of the project, 90% of the lake waters have recovered, and the place is once again visited by at least 70 species of migratory birds.

The lake was once home to more than a thousand species of migratory birds in the 1990s. …

The [nanotechnology-enabled] treatment is done with tiny bubbles, the nanobubbles, a thousand times smaller than the ones we can see in a glass of soda. These bubbles attract bacteria and metals using static charge and then decompose, releasing free radicals which destroy viruses present in water. The process has been recognized by the Commission of Science, Technology and Innovation of the Peruvian Congress.

Biofilters were also deployed to ease the cleaning process of the water. Morikawa divided the wetland area with pieces of bamboo, creating sectors to order the withdrawal of the aquatic weeds.

… At the beginning, in December 2010, he worked alone, making daily visits to the region to develop the project. After some time, he started receiving help from friends, local population and local government.

A few months after the beginning of the treatment, it was possible to see that El Cascajo waters were more crystalline. However, it was only in January 2013 that “a miracle happened” as Morikawa says: Thousands of migratory birds returned to the lake and occupied about 70% of the area, forming a white cover around the water.

Whether this took fifteen days or several months seems less important than the remediation of the wetlands, Lake El Cascajo, the return of the birds, and a better functioning ecosystem. Let’s hope the same success can be enjoyed at Lake Titicaca.

There are more details in both pieces which I encourage you to read in their entirety.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit broader in my interpretation of the ‘society’ part of my commentary, I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in Saint Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included: linguists; anthropologists; biogeographers; data scientists; ethnobiologists; and evolutionary ecologists, who employ a variety of research methods including field-based primary data collection; compilation of cross-cultural data sources; and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin.  PLOS ONE, 2016; 11 (7): e0158391 DOI: 10.1371/journal.pone.0158391 Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.
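
For anyone who downloads a set of search results, here is a minimal sketch of how the data might be explored in Python. The file name and column names are purely illustrative assumptions on my part; the actual D-PLACE export format may differ,

# A rough sketch, not the official D-PLACE interface: exploring a hypothetical
# CSV export downloaded from d-place.org. The file name and the column names
# ("language_family", "variable", "code_label") are illustrative assumptions,
# not the database's actual schema.
import pandas as pd

data = pd.read_csv("dplace_export.csv")  # hypothetical export of a search result

# Keep Austronesian societies and look at one cultural variable.
subset = data[
    (data["language_family"] == "Austronesian")
    & (data["variable"] == "Domestic organization")
]

# Tally the coded answers across societies.
print(subset["code_label"].value_counts())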

While it might not seem that there would be a close link between anthropology and physics, information about societies gathered in the 19th and early 20th centuries can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public; it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s backgrounds inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts and possibly others believe that science is the discovery and accumulation of facts. In this worldview science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

Trans-Atlantic Platform (T-AP) is a unique collaboration of humanities and social science researchers from Europe and the Americas

Launched in 2013, the Trans-Atlantic Platform is co-chaired by Dr. Ted Hewitt, president of the Social Sciences and Humanities Research Council of Canada (SSHRC), and Dr. Renée van Kessel-Hagesteijn, Netherlands Organisation for Scientific Research—Social Sciences (NWO—Social Sciences).

An EU (European Union) publication, International Innovation, features an interview about T-AP with Ted Hewitt in a June 30, 2016 posting,

The Trans-Atlantic Platform is a unique collaboration of humanities and social science funders from Europe and the Americas. International Innovation’s Rebecca Torr speaks with Ted Hewitt, President of the Social Sciences and Humanities Research Council and Co-Chair of T-AP to understand more about the Platform and its pilot funding programme, Digging into Data.

Many commentators have called for better integration between natural and social scientists, to ensure that the societal benefits of STEM research are fully realised. Does the integration of diverse scientific disciplines form part of T-AP’s remit, and if so, how are you working to achieve this?

T-AP was designed primarily to promote and facilitate research across SSH. However, given the Platform’s thematic priorities and the funding opportunities being contemplated, we anticipate that a good number of non-SSH [emphasis mine] researchers will be involved.

As an example, on March 1, T-AP launched its first pilot funding opportunity: the T-AP Digging into Data Challenge. One of the sponsors is the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada’s federal funding agency for research in the natural sciences and engineering. Their involvement ensures that the perspective of the natural sciences is included in the challenge. The Digging into Data Challenge is open to any project that addresses research questions in the SSH by using large-scale digital data analysis techniques, and is then able to show how these techniques can lead to new insights. And the challenge specifically aims to advance multidisciplinary collaborative projects.

When you tackle a research question or undertake research to address a social challenge, you need collaboration between various SSH disciplines or between SSH and STEM disciplines. So, while proposals must address SSH research questions, the individual teams often involve STEM researchers, such as computer scientists.

In previous rounds of the Digging into Data Challenge, this has led to invaluable research. One project looked at how the media shaped public opinion around the 1918 Spanish flu pandemic. Another used CT scans to examine hundreds of mummies, ultimately discovering that atherosclerosis, a form of heart disease, was prevalent 4,000 years ago. In both cases, these multidisciplinary historical research projects have helped inform our thinking of the present.

Of course, Digging into Data isn’t the only research area in which T-AP will be involved. Since its inception, T-AP partners have identified three priority areas beyond digital scholarship: diversity, inequality and difference; resilient and innovative societies; and transformative research on the environment. Each of these areas touches on a variety of SSH fields, while the transformative research on the environment area has strong connections with STEM fields. In September 2015, T-AP organised a workshop around this third priority area; environmental science researchers were among the workshop participants.

I wish Hewitt hadn’t described researchers from disciplines other than the humanities and social sciences as “non-SSH.” The designation divides the world in two: us and the non-us (take your pick: non-Catholic, non-Muslim, non-American, non-STEM, non-SSH, and so on).

Getting back to the interview, it is surprisingly Canuck-centric in places,

How does T-AP fit in with Social Sciences and Humanities Research Council of Canada (SSHRC)’s priorities?

One of the objectives in SSHRC’s new strategic plan is to develop partnerships that enable us to expand the reach of our funding. As T-AP provides SSHRC with links to 16 agencies across Europe and the Americas, it is an efficient mechanism for us to broaden the scope of our support and promotion of post-secondary-based research and training in SSH.

It also provides an opportunity to explore cutting edge areas of research, such as big data (as we did with the first call we put out, Digging into Data). The research enterprise is becoming increasingly international, by which I mean that researchers are working on issues with international dimensions or collaborating in international teams. In this globalised environment, SSHRC must partner with international funders to support research excellence. By developing international funding opportunities, T-AP helps researchers create teams better positioned to tackle the most exciting and promising research topics.

Finally, it is a highly effective way of broadly promoting the value of SSH research throughout Canada and around the globe. There are significant costs and complexities involved in international research, and uncoordinated funding from multiple national funders can actually create barriers to collaboration. A platform like T-AP helps funders coordinate and streamline processes.

The interview gets a little more international scope when it turns to the data project,

What is the significance of your pilot funding programme in digital scholarship and what types of projects will it support?

The T-AP Digging into Data Challenge is significant for several reasons. First, the geographic reach of Digging is truly significant. With 16 participants from 11 countries, this round of Digging has significantly broader participation than previous rounds. This is also the first time Digging into Data includes funders from South America.

The T-AP Digging into Data Challenge is open to any research project that addresses questions in SSH. What those projects will end up being is anybody’s guess – projects from past competitions have involved fields ranging from musicology to anthropology to political science.

The Challenge’s main focus is, of course, the use of big data in research.

You may want to read the interview in its entirety here.

I have checked out the Trans-Atlantic Platform website but cannot determine how someone or some institution might consult it for information on getting involved in T-AP projects or obtaining funding. However, there is a T-AP Digging into Data website where there is evidence of the first international call for funding submissions. Sadly, the deadline for the 2016 call has passed, if the website is to be believed (sometimes deadline changes aren’t posted promptly).

Exploring the fundamental limits of invisibility cloaks

There’s some interesting work on invisibility cloaks coming from the University of Texas at Austin according to a July 6, 2016 news item on Nanowerk,

Researchers in the Cockrell School of Engineering at The University of Texas at Austin have been able to quantify fundamental physical limitations on the performance of cloaking devices, a technology that allows objects to become invisible or undetectable to electromagnetic waves including radio waves, microwaves, infrared and visible light.

A July 5, 2016 University of Texas at Austin news release (also on EurekAlert), which originated the news item, expands on the theme,

The researchers’ theory confirms that it is possible to use cloaks to perfectly hide an object for a specific wavelength, but hiding an object from an illumination containing different wavelengths becomes more challenging as the size of the object increases.

Andrea Alù, an electrical and computer engineering professor and a leading researcher in the area of cloaking technology, along with graduate student Francesco Monticone, created a quantitative framework that now establishes boundaries on the bandwidth capabilities of electromagnetic cloaks for objects of different sizes and composition. As a result, researchers can calculate the expected optimal performance of invisibility devices before designing and developing a specific cloak for an object of interest. …

Cloaks are made from artificial materials, called metamaterials, that have special properties enabling a better control of the incoming wave, and can make an object invisible or transparent. The newly established boundaries apply to cloaks made of passive metamaterials — those that do not draw energy from an external power source.

Understanding the bandwidth and size limitations of cloaking is important to assess the potential of cloaking devices for real-world applications such as communication antennas, biomedical devices and military radars, Alù said. The researchers’ framework shows that the performance of a passive cloak is largely determined by the size of the object to be hidden compared with the wavelength of the incoming wave, and it quantifies how, for shorter wavelengths, cloaking gets drastically more difficult.

For example, it is possible to cloak a medium-size antenna from radio waves over relatively broad bandwidths for clearer communications, but it is essentially impossible to cloak large objects, such as a human body or a military tank, from visible light waves, which are much shorter than radio waves.

“We have shown that it will not be possible to drastically suppress the light scattering of a tank or an airplane for visible frequencies with currently available techniques based on passive materials,” Monticone said. “But for objects comparable in size to the wavelength that excites them (a typical radio-wave antenna, for example, or the tip of some optical microscopy tools), the derived bounds show that you can do something useful, the restrictions become looser, and we can quantify them.”

In addition to providing a practical guide for research on cloaking devices, the researchers believe that the proposed framework can help dispel some of the myths that have been developed around cloaking and its potential to make large objects invisible.

“The question is, ‘Can we make a passive cloak that makes human-scale objects invisible?’ ” Alù said. “It turns out that there are stringent constraints in coating an object with a passive material and making it look as if the object were not there, for an arbitrary incoming wave and observation point.”

Now that bandwidth limits on cloaking are available, researchers can focus on developing practical applications with this technology that get close to these limits.

“If we want to go beyond the performance of passive cloaks, there are other options,” Monticone said. “Our group and others have been exploring active and nonlinear cloaking techniques, for which these limits do not apply. Alternatively, we can aim for looser forms of invisibility, as in cloaking devices that introduce phase delays as light is transmitted through, camouflaging techniques, or other optical tricks that give the impression of transparency, without actually reducing the overall scattering of light.”

Alù’s lab is working on the design of active cloaks that use metamaterials plugged to an external energy source to achieve broader transparency bandwidths.

“Even with active cloaks, Einstein’s theory of relativity fundamentally limits the ultimate performance for invisibility,” Alù said. “Yet, with new concepts and designs, such as active and nonlinear metamaterials, it is possible to move forward in the quest for transparency and invisibility.”
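
To put rough numbers on that size-to-wavelength argument (my own back-of-the-envelope figures, not the researchers’): visible light has wavelengths of roughly 400 to 700 nanometres, so a 7-metre tank spans on the order of ten million wavelengths, while the same tank illuminated by a 3-metre FM radio wave spans barely two wavelengths. That enormous difference in relative size is what makes optical cloaking of large objects so much harder than radio-frequency cloaking of antennas.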

The researchers have prepared a diagram illustrating their work,

The graph shows the trade-off between how much an object can be made transparent (scattering reduction; vertical axis) and the color span (bandwidth; horizontal axis) over which this phenomenon can be achieved. Courtesy: University of Texas at Austin

Here’s a link to and a citation for the paper,

Invisibility exposed: physical bounds on passive cloaking by Francesco Monticone and Andrea Alù. Optica Vol. 3, Issue 7, pp. 718-724 (2016) doi: 10.1364/OPTICA.3.000718

This paper is open access.

Nanotechnology in the house; a guide to what you already have

A July 4, 2016 essay by Cameron Shearer of Flinders University (Australia) on The Conversation website describes how nanotechnology can be found in our homes (Note: Links have been removed),

All kitchens have a sink, most of which are fitted with a water filter. This filter removes microbes and compounds that can give water a bad taste.

Common filter materials are activated carbon and silver nanoparticles.

Activated carbon is a special kind of carbon that’s made to have a very high surface area. This is achieved by milling it down to a very small size. Its high surface area gives more room for unwanted compounds to stick to it, removing them from water.

The antimicrobial properties of silver make it one of the most common nanomaterials today. Silver nanoparticles kill algae and bacteria by releasing silver ions (single silver atoms) that enter into the cell wall of the organisms and become toxic.

It is so effective and fashionable that silver nanoparticles are now used to coat cutlery, surfaces, fridges, door handles, pet bowls and almost anywhere else microorganisms are unwanted.

Other nanoparticles are used to prepare heat-resistant and self-cleaning surfaces, such as floors and benchtops. By applying a thin coating containing silicon dioxide or titanium dioxide nanoparticles, a surface can become water repelling, which prevents stains (similar to how scotch guard protects fabrics).

Nanoparticle films can be so thin that they can’t be seen. The materials also have very poor heat conductivity, which means they are heat resistant.

The kitchen sink (or dishwasher) is used for washing dishes with the aid of detergents. Detergents form nanoparticles called micelles.

A micelle is formed when detergent molecules self-assemble into a sphere. The centre of this sphere is chemically similar to grease, oils and fats, which are what you want to wash off. The detergent traps oils and fats within the cavity of the sphere to separate them from water and aid dish washing.

Your medicine cabinet may include nanotechnology similar to micelles, with many pharmaceuticals using liposomes.

A liposome is an extended micelle where there is an extra interior cavity within the sphere. Making liposomes from tailored molecules allows them to carry therapeutics inside; the outside of the nanoparticle can be made to target a specific area of the body.

Shearer’s essay goes on to cover the laundry, bathroom, closets, and garage. (h/t July 5, 2016 news item on phys.org)

Re-envisioning the laboratory: an art/sci or sci-art (take your pick) symposium

DFA186 Hades. 2012. Unique digital C-print on watercolor paper. Artist: Brandon Ballengée

Artist (whose work is seen above), biologist, and environmental activist Brandon Ballengée will be a keynote speaker at the Re-envisioning the Laboratory: Sci-Art Symposium being held at the University of Wyoming. An evening reception takes place Thursday, Sept. 8, 2016, and the symposium itself runs Friday, Sept. 9 to Saturday, Sept. 10, 2016. You can read more about the symposium (the schedule is not yet complete) in a July 12, 2016 posting by CommNatural (Bethann G. Merkle) on her CommNatural blog,

I’m super excited to invite you to register for a Sci-Art Symposium I’ve been co-planning for the past year. The big idea is to bring together a wide-ranging set of ideas, examples, and thinkers/do-ers to build a powerful foundation for on-going SciArt synergy on the University of Wyoming campus, in Wyoming communities, and beyond. We’re organizing sessions around not just beautiful examples and great ideas, but also challenges and funding opportunities, with the intent to address not just what works, but how it works, what gets in the way, and how to move ahead with the SciArt initiatives you envision.

The rest of this blog post provides essential information about the symposium. If you have any questions, don’t hesitate to contact me or any of the other organizers – there’s a slew of us from art and science disciplines across campus!

Hope to see you there!

SYMPOSIUM INFORMATION

The 2016 Sci-Art Symposium will provide a forum for inspiration, scholarly research, networking and opportunities to get the tools, methods and momentum to take on innovative interdisciplinary work across community, disciplinary, and topical boundaries. Sessions will be organized into five thematic categories: influences and opportunities, processes and methods, outcomes and products, challenges and opportunities, and next steps and future applications. Keynote address will feature artist-biologist Brandon Ballengée, and other sessions will feature presenters from throughout the nation.

Registration Fees:

$75  General Admission
$0    Full-time Student Admission (Only applicable to students enrolled in full-time schedule, may be asked for verification)

Click here for transportation and lodging information, on the event website.

CONTACT INFORMATION

If you have questions about your registration or if you need to cancel your attendance, please contact Katie Christensen, Curator of Education and Statewide Engagement, at katie.christensen@uwyo.edu or 307-766-3496.

Re-envisioning the Lab: 2016 Sci-Art Symposium is made possible by University of Wyoming Art Museum, in partnership with the Biodiversity Institute, Haub School of Environment and Natural Resources, Department of Art and Art History, Science and Math Teaching Center and MFA in Creative Writing.

I’m a little surprised that the US National Science Foundation is not one of the funders. In fact, most, if not all, of the funders are part of the University of Wyoming.

As to whether there is a correct form: artsci or sciart; art/sci or sci/art; sci-art or art-sci; SciArt or ArtSci, and whether the terms refer to the same thing or two different approaches to bringing together art and science in a project, I have no idea. Perhaps they’ll discuss terminology at the symposium.

One final thought: since they don’t have the final schedule nailed down, perhaps it’s still possible to submit a proposal for a talk or an entry for a sciart piece. Good luck!

Viewing RNA (ribonucleic acid) more closely at the nanoscale with expansion microscopy (ExM) and off-the-shelf parts

A close cousin of DNA (deoxyribonucleic acid), RNA (ribonucleic acid) acts as a communicator, according to a July 4, 2016 news item on ScienceDaily describing how a team at the Massachusetts Institute of Technology (MIT) managed to image RNA more precisely,

Cells contain thousands of messenger RNA molecules, which carry copies of DNA’s genetic instructions to the rest of the cell. MIT engineers have now developed a way to visualize these molecules in higher resolution than previously possible in intact tissues, allowing researchers to precisely map the location of RNA throughout cells.

Key to the new technique is expanding the tissue before imaging it. By making the sample physically larger, it can be imaged with very high resolution using ordinary microscopes commonly found in research labs.

“Now we can image RNA with great spatial precision, thanks to the expansion process, and we also can do it more easily in large intact tissues,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, a member of MIT’s Media Lab and McGovern Institute for Brain Research, and the senior author of a paper describing the technique in the July 4, 2016 issue of Nature Methods.

A July 4, 2016 MIT news release (also on EurekAlert), which originated the news item, explains why scientists want a better look at RNA and how the MIT team accomplished the task,

Studying the distribution of RNA inside cells could help scientists learn more about how cells control their gene expression and could also allow them to investigate diseases thought to be caused by failure of RNA to move to the correct location.

Boyden and colleagues first described the underlying technique, known as expansion microscopy (ExM), last year, when they used it to image proteins inside large samples of brain tissue. In a paper appearing in Nature Biotechnology on July 4, the MIT team has now presented a new version of the technology that employs off-the-shelf chemicals, making it easier for researchers to use.

MIT graduate students Fei Chen and Asmamaw Wassie are the lead authors of the Nature Methods paper, and Chen and graduate student Paul Tillberg are the lead authors of the Nature Biotechnology paper.

A simpler process

The original expansion microscopy technique is based on embedding tissue samples in a polymer that swells when water is added. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes. However, that method posed some challenges because it requires generating a complicated chemical tag consisting of an antibody that targets a specific protein, linked to both a fluorescent dye and a chemical anchor that attaches the whole complex to a highly absorbent polymer known as polyacrylate. Once the targets are labeled, the researchers break down the proteins that hold the tissue sample together, allowing it to expand uniformly as the polyacrylate gel swells.

In their new studies, to eliminate the need for custom-designed labels, the researchers used a different molecule to anchor the targets to the gel before digestion. This molecule, which the researchers dubbed AcX, is commercially available and therefore makes the process much simpler.

AcX can be modified to anchor either proteins or RNA to the gel. In the Nature Biotechnology study, the researchers used it to anchor proteins, and they also showed that the technique works on tissue that has been previously labeled with either fluorescent antibodies or proteins such as green fluorescent protein (GFP).

“This lets you use completely off-the-shelf parts, which means that it can integrate very easily into existing workflows,” Tillberg says. “We think that it’s going to lower the barrier significantly for people to use the technique compared to the original ExM.”

Using this approach, it takes about an hour to scan a piece of tissue 500 by 500 by 200 microns, using a light sheet fluorescence microscope. The researchers showed that this technique works for many types of tissues, including brain, pancreas, lung, and spleen.

Imaging RNA

In the Nature Methods paper, the researchers used the same kind of anchoring molecule but modified it to target RNA instead. All of the RNAs in the sample are anchored to the gel, so they stay in their original locations throughout the digestion and expansion process.

After the tissue is expanded, the researchers label specific RNA molecules using a process known as fluorescence in situ hybridization (FISH), which was originally developed in the early 1980s and is widely used. This allows researchers to visualize the location of specific RNA molecules at high resolution, in three dimensions, in large tissue samples.

This enhanced spatial precision could allow scientists to explore many questions about how RNA contributes to cellular function. For example, a longstanding question in neuroscience is how neurons rapidly change the strength of their connections to store new memories or skills. One hypothesis is that RNA molecules encoding proteins necessary for plasticity are stored in cell compartments close to the synapses, poised to be translated into proteins when needed.

With the new system, it should be possible to determine exactly which RNA molecules are located near the synapses, waiting to be translated.

“People have found hundreds of these locally translated RNAs, but it’s hard to know where exactly they are and what they’re doing,” Chen says. “This technique would be useful to study that.”

Boyden’s lab is also interested in using this technology to trace the connections between neurons and to classify different subtypes of neurons based on which genes they are expressing.
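
As a rough sanity check on the 70-nanometre figure mentioned earlier (my own approximation, not a number from the papers): if a conventional light microscope is diffraction-limited to roughly 300 nanometres and the gel expands the tissue about 4.5-fold in each dimension, the linear expansion factor reported for the original ExM work, then features roughly 300/4.5 ≈ 65 to 70 nanometres apart in the original sample become resolvable after expansion.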

There’s a brief (30 secs.), silent video illustrating the work (something about a ‘Brainbow Hippocampus’) made available by MIT,


Here’s a link to and a citation for the paper,

Nanoscale imaging of RNA with expansion microscopy by Fei Chen, Asmamaw T Wassie, Allison J Cote, Anubhav Sinha, Shahar Alon, Shoh Asano, Evan R Daugharthy, Jae-Byum Chang, Adam Marblestone, George M Church, Arjun Raj, & Edward S Boyden. Nature Methods (2016) doi:10.1038/nmeth.3899 Published online 04 July 2016

This paper is behind a paywall.

Pushing efficiency of perovskite-based solar cells to 31%

This atomic force microscopy image of the grainy surface of a perovskite solar cell reveals a new path to much greater efficiency. Individual grains are outlined in black, low-performing facets are red, and high-performing facets are green. A big jump in efficiency could possibly be obtained if the material can be grown so that more high-performing facets develop. (Credit: Berkeley Lab)

It’s always fascinating to observe a trend (or a craze) in science, an endeavour that outsiders (like me) tend to think of as impervious to such vagaries. Perovskite seems to be making its way past the trend/craze phase and moving into a more meaningful phase. From a July 4, 2016 news item on Nanowerk,

Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have discovered a possible secret to dramatically boosting the efficiency of perovskite solar cells hidden in the nanoscale peaks and valleys of the crystalline material.

Solar cells made from compounds that have the crystal structure of the mineral perovskite have captured scientists’ imaginations. They’re inexpensive and easy to fabricate, like organic solar cells. Even more intriguing, the efficiency at which perovskite solar cells convert photons to electricity has increased more rapidly than any other material to date, starting at three percent in 2009 — when researchers first began exploring the material’s photovoltaic capabilities — to 22 percent today. This is in the ballpark of the efficiency of silicon solar cells.

Now, as reported online July 4, 2016 in the journal Nature Energy (“Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite”), a team of scientists from the Molecular Foundry and the Joint Center for Artificial Photosynthesis, both at Berkeley Lab, found a surprising characteristic of a perovskite solar cell that could be exploited for even higher efficiencies, possibly up to 31 percent.

A July 4, 2016 Berkeley Lab news release (also on EurekAlert), which originated the news item, details the research,

Using photoconductive atomic force microscopy, the scientists mapped two properties on the active layer of the solar cell that relate to its photovoltaic efficiency. The maps revealed a bumpy surface composed of grains about 200 nanometers in length, and each grain has multi-angled facets like the faces of a gemstone.

Unexpectedly, the scientists discovered a huge difference in energy conversion efficiency between facets on individual grains. They found poorly performing facets adjacent to highly efficient facets, with some facets approaching the material’s theoretical energy conversion limit of 31 percent.

The scientists say these top-performing facets could hold the secret to highly efficient solar cells, although more research is needed.

“If the material can be synthesized so that only very efficient facets develop, then we could see a big jump in the efficiency of perovskite solar cells, possibly approaching 31 percent,” says Sibel Leblebici, a postdoctoral researcher at the Molecular Foundry.

Leblebici works in the lab of Alexander Weber-Bargioni, who is a corresponding author of the paper that describes this research. Ian Sharp, also a corresponding author, is a Berkeley Lab scientist at the Joint Center for Artificial Photosynthesis. Other Berkeley Lab scientists who contributed include Linn Leppert, Francesca Toma, and Jeff Neaton, the director of the Molecular Foundry.

A team effort

The research started when Leblebici was searching for a new project. “I thought perovskites are the most exciting thing in solar right now, and I really wanted to see how they work at the nanoscale, which has not been widely studied,” she says.

She didn’t have to go far to find the material. For the past two years, scientists at the nearby Joint Center for Artificial Photosynthesis have been making thin films of perovskite-based compounds, and studying their ability to convert sunlight and CO2 into useful chemicals such as fuel. Switching gears, they created perovskite solar cells composed of methylammonium lead iodide. They also analyzed the cells’ performance at the macroscale.

The scientists also made a second set of half cells that didn’t have an electrode layer. They packed eight of these cells on a thin film measuring one square centimeter. These films were analyzed at the Molecular Foundry, where researchers mapped the cells’ surface topography at a resolution of ten nanometers. They also mapped two properties that relate to the cells’ photovoltaic efficiency: photocurrent generation and open circuit voltage.

This was performed using a state-of-the-art atomic force microscopy technique, developed in collaboration with Park Systems, which utilizes a conductive tip to scan the material’s surface. The method also eliminates friction between the tip and the sample. This is important because the material is so rough and soft that friction can damage the tip and sample, and cause artifacts in the photocurrent.

Surprise discovery could lead to better solar cells

The resulting maps revealed an order of magnitude difference in photocurrent generation, and a 0.6-volt difference in open circuit voltage, between facets on the same grain. In addition, facets with high photocurrent generation had high open circuit voltage, and facets with low photocurrent generation had low open circuit voltage.

“This was a big surprise. It shows, for the first time, that perovskite solar cells exhibit facet-dependent photovoltaic efficiency,” says Weber-Bargioni.

Adds Toma, “These results open the door to exploring new ways to control the development of the material’s facets to dramatically increase efficiency.”

In practice, the facets behave like billions of tiny solar cells, all connected in parallel. As the scientists discovered, some cells operate extremely well and others very poorly. In this scenario, the current flows towards the bad cells, lowering the overall performance of the material. But if the material can be optimized so that only highly efficient facets interface with the electrode, the losses incurred by the poor facets would be eliminated.

“This means, at the macroscale, the material could possibly approach its theoretical energy conversion limit of 31 percent,” says Sharp.

A theoretical model that describes the experimental results predicts these facets should also impact the emission of light when used as an LED. …

The Molecular Foundry is a DOE Office of Science User Facility located at Berkeley Lab. The Joint Center for Artificial Photosynthesis is a DOE Energy Innovation Hub led by the California Institute of Technology in partnership with Berkeley Lab.
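
The “billions of tiny solar cells, all connected in parallel” picture lends itself to a toy calculation. Here is a minimal sketch in Python using an ideal-diode model; every number is invented for illustration and none comes from the paper,

# Toy model only: two "facets" treated as ideal solar cells wired in parallel.
# Parameters are invented for illustration and are not taken from the paper.
import numpy as np
from scipy.optimize import brentq

KT_Q = 0.02585  # thermal voltage at room temperature, in volts

def cell_current(v, j_ph, j_0):
    """Ideal-diode model: photocurrent minus diode recombination current."""
    return j_ph - j_0 * (np.exp(v / KT_Q) - 1.0)

# A "good" facet and a "poor" facet (arbitrary current-density units).
good = dict(j_ph=20.0, j_0=1e-15)  # high photocurrent, low recombination
poor = dict(j_ph=2.0, j_0=1e-9)    # low photocurrent, high recombination

def open_circuit_voltage(cells):
    """Open-circuit voltage of cells in parallel: the net current sums to zero."""
    total = lambda v: sum(cell_current(v, **c) for c in cells)
    return brentq(total, 0.0, 1.5)

print("good facet alone: %.2f V" % open_circuit_voltage([good]))
print("poor facet alone: %.2f V" % open_circuit_voltage([poor]))
print("both in parallel: %.2f V" % open_circuit_voltage([good, poor]))

In this toy example the weak facet drags the combined open-circuit voltage down from roughly 0.97 V to about 0.62 V, which is the qualitative effect the Berkeley Lab team describes: current flows toward the poorly performing facets and the whole film underperforms.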

Here’s a link to and a citation for the paper,

Facet-dependent photovoltaic efficiency variations in single grains of hybrid halide perovskite by Sibel Y. Leblebici, Linn Leppert, Yanbo Li, Sebastian E. Reyes-Lillo, Sebastian Wickenburg, Ed Wong, Jiye Lee, Mauro Melli, Dominik Ziegler, Daniel K. Angell, D. Frank Ogletree, Paul D. Ashby, Francesca M. Toma, Jeffrey B. Neaton, Ian D. Sharp, & Alexander Weber-Bargioni. Nature Energy 1, Article number: 16093 (2016) doi:10.1038/nenergy.2016.93 Published online: 04 July 2016

This paper is behind a paywall.

Dexter Johnson’s July 6, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) presents his take on the impact that this new finding may have,

The rise of the crystal perovskite as a potential replacement for silicon in photovoltaics has been impressive over the last decade, with its conversion efficiency improving from 3.8 to 22.1 percent over that time period. Nonetheless, there has been a vague sense that this rise is beginning to peter out of late, largely because when a solar cell made from perovskite gets larger than 1 square centimeter the best conversion efficiency had been around 15.6 percent. …

Artificial pancreas in 2018?

According to Dr. Roman Hovorka and Dr. Hood Thabit of the University of Cambridge, UK, an artificial pancreas is likely to be available by 2018, assuming issues such as cybersecurity are resolved. From a June 30, 2016 Diabetologia press release on EurekAlert,

The artificial pancreas — a device which monitors blood glucose in patients with type 1 diabetes and then automatically adjusts levels of insulin entering the body — is likely to be available by 2018, conclude authors of a paper in Diabetologia (the journal of the European Association for the Study of Diabetes). Issues such as speed of action of the forms of insulin used, reliability, convenience and accuracy of glucose monitors plus cybersecurity to protect devices from hacking, are among the issues that are being addressed.

The press release describes the current technology available for diabetes type 1 patients and alternatives other than an artificial pancreas,

Currently available technology allows insulin pumps to deliver insulin to people with diabetes after taking a reading or readings from glucose meters, but these two components are separate. It is the joining together of both parts into a ‘closed loop’ that makes an artificial pancreas, explain authors Dr Roman Hovorka and Dr Hood Thabit of the University of Cambridge, UK. “In trials to date, users have been positive about how use of an artificial pancreas gives them ‘time off’ or a ‘holiday’ from their diabetes management, since the system is managing their blood sugar effectively without the need for constant monitoring by the user,” they say.

One part of the clinical need for the artificial pancreas is the variability of insulin requirements between and within individuals — on one day a person could use one third of their normal requirements, and on another 3 times what they normally would. This is dependent on the individual, their diet, their physical activity and other factors. The combination of all these factors together places a burden on people with type 1 diabetes to constantly monitor their glucose levels, to ensure they don’t end up with too much blood sugar (hyperglycaemic) or more commonly, too little (hypoglycaemic). Both of these complications can cause significant damage to blood vessels and nerve endings, making complications such as cardiovascular problems more likely.

There are alternatives to the artificial pancreas, with improvements in technology in both whole pancreas transplantation and also transplants of just the beta cells from the pancreas which produce insulin. However, recipients of these transplants require drugs to supress their immune systems just as in other organ transplants. In the case of whole pancreas transplantation, major surgery is required; and in beta cell islet transplantation, the body’s immune system can still attack the transplanted cells and kill off a large proportion of them (80% in some cases). The artificial pancreas of course avoids the need for major surgery and immunosuppressant drugs.

Researchers are working to solve one of the major problems with an artificial pancreas according to the press release,

Researchers globally continue to work on a number of challenges faced by artificial pancreas technology. One such challenge is that even fast-acting insulin analogues do not reach their peak levels in the bloodstream until 0.5 to 2 hours after injection, with their effects lasting 3 to 5 hours. So this may not be fast enough for effective control in, for example, conditions of vigorous exercise. Use of the even faster acting ‘insulin aspart’ analogue may remove part of this problem, as could use of other forms of insulin such as inhaled insulin. Work also continues to improve the software in closed loop systems to make it as accurate as possible in blood sugar management.

The press release also provides a brief outline of some of the studies being run on one artificial pancreas or another, offers an abbreviated timeline for when the medical device may be found on the market, and notes specific cybersecurity issues,

A number of clinical studies have been completed using the artificial pancreas in its various forms, in various settings such as diabetes camps for children, and real life home testing. Many of these trials have shown as good or better glucose control than existing technologies (with success defined by time spent in a target range of ideal blood glucose concentrations and reduced risk of hypoglycaemia). A number of other studies are ongoing. The authors say: “Prolonged 6- to 24-month multinational closed-loop clinical trials and pivotal studies are underway or in preparation including adults and children. As closed loop devices may be vulnerable to cybersecurity threats such as interference with wireless protocols and unauthorised data retrieval, implementation of secure communications protocols is a must.”

The actual timeline to availability of the artificial pancreas, as with other medical devices, encompasses regulatory approvals with reassuring attitudes of regulatory agencies such as the US Food and Drug Administration (FDA), which is currently reviewing one proposed artificial pancreas with approval possibly as soon as 2017. And a recent review by the UK National Institute of Health Research (NIHR) reported that automated closed-loop systems may be expected to appear in the (European) market by the end of 2018. The authors say: “This timeline will largely be dependent upon regulatory approvals and ensuring that infrastructures and support are in place for healthcare professionals providing clinical care. Structured education will need to continue to augment efficacy and safety.”

The authors say: “Cost-effectiveness of closed-loop is to be determined to support access and reimbursement. In addition to conventional endpoints such as blood sugar control, quality of life is to be included to assess burden of disease management and hypoglycaemia. Future research may include finding out which sub-populations may benefit most from using an artificial pancreas. Research is underway to evaluate these closed-loop systems in the very young, in pregnant women with type 1 diabetes, and in hospital in-patients who are suffering episodes of hyperglycaemia.”

They conclude: “Significant milestones moving the artificial pancreas from laboratory to free-living unsupervised home settings have been achieved in the past decade. Through inter-disciplinary collaboration, teams worldwide have accelerated progress and real-world closed-loop applications have been demonstrated. Given the challenges of beta-cell transplantation, closed-loop technologies are, with continuing innovation potential, destined to provide a viable alternative for existing insulin pump therapy and multiple daily insulin injections.”

Here’s a link to and a citation for the paper,

Coming of age: the artificial pancreas for type 1 diabetes by Hood Thabit, Roman Hovorka. Diabetologia (2016). doi:10.1007/s00125-016-4022-4 First Online: 30 June 2016

This is an open access paper.
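
For readers curious about what ‘closing the loop’ looks like in practice, here is a deliberately simplified sketch: a basic proportional controller rather than the model-predictive algorithms used in the clinical systems, with every number invented for illustration,

# Toy closed-loop glucose controller: NOT a medical algorithm, just an
# illustration of the sense-compute-dose cycle. All numbers are invented.

TARGET_GLUCOSE = 6.0  # target blood glucose, mmol/L
BASAL_RATE = 1.0      # baseline insulin delivery, units/hour
GAIN = 0.3            # extra units/hour per mmol/L above target
MAX_RATE = 5.0        # safety cap on insulin delivery

def insulin_rate(glucose_reading):
    """Adjust insulin delivery around the basal rate from the sensor reading."""
    error = glucose_reading - TARGET_GLUCOSE
    rate = BASAL_RATE + GAIN * error
    return min(max(rate, 0.0), MAX_RATE)  # never negative, never above the cap

# Simulated continuous-glucose-monitor readings arriving every few minutes.
for reading in [5.2, 6.8, 9.5, 12.0, 7.1]:
    print("glucose %.1f mmol/L -> pump set to %.2f units/hour" % (reading, insulin_rate(reading)))

A real closed-loop system also has to account for insulin already on board, meals, exercise, and sensor error, which is why the clinical algorithms are considerably more sophisticated than this sketch.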

Wireless, wearable carbon nanotube-based gas sensors for soldiers

Researchers at MIT (Massachusetts Institute of Technology) are hoping to make wireless toxic gas detectors the size of badges. From a June 30, 2016 news item on Nanowerk,

MIT researchers have developed low-cost chemical sensors, made from chemically altered carbon nanotubes, that enable smartphones or other wireless devices to detect trace amounts of toxic gases.

Using the sensors, the researchers hope to design lightweight, inexpensive radio-frequency identification (RFID) badges to be used for personal safety and security. Such badges could be worn by soldiers on the battlefield to rapidly detect the presence of chemical weapons — such as nerve gas or choking agents — and by people who work around hazardous chemicals prone to leakage.

A June 30, 2016 MIT news release (also on EurekAlert), which originated the news item, describes the technology further,

“Soldiers have all this extra equipment that ends up weighing way too much and they can’t sustain it,” says Timothy Swager, the John D. MacArthur Professor of Chemistry and lead author on a paper describing the sensors that was published in the Journal of the American Chemical Society. “We have something that would weigh less than a credit card. And [soldiers] already have wireless technologies with them, so it’s something that can be readily integrated into a soldier’s uniform that can give them a protective capacity.”

The sensor is a circuit loaded with carbon nanotubes, which are normally highly conductive but have been wrapped in an insulating material that keeps them in a highly resistive state. When exposed to certain toxic gases, the insulating material breaks apart, and the nanotubes become significantly more conductive. This sends a signal that’s readable by a smartphone with near-field communication (NFC) technology, which allows devices to transmit data over short distances.

The sensors are sensitive enough to detect less than 10 parts per million of target toxic gases in about five seconds. “We are matching what you could do with benchtop laboratory equipment, such as gas chromatographs and spectrometers, that is far more expensive and requires skilled operators to use,” Swager says.

Moreover, the sensors each cost about a nickel to make; roughly 4 million can be made from about 1 gram of the carbon nanotube materials. “You really can’t make anything cheaper,” Swager says. “That’s a way of getting distributed sensing into many people’s hands.”

The paper’s other co-authors are from Swager’s lab: Shinsuke Ishihara, a postdoc who is also a member of the International Center for Materials Nanoarchitectonics at the National Institute for Materials Science, in Japan; and PhD students Joseph Azzarelli and Markrete Krikorian.

Wrapping nanotubes

In recent years, Swager’s lab has developed other inexpensive, wireless sensors, called chemiresistors, that have detected spoiled meat and the ripeness of fruit, among other things [go to the end of this post for links to previous posts about Swager’s work]. All are designed similarly, with carbon nanotubes that are chemically modified, so their ability to carry an electric current changes when exposed to a target chemical.

This time, the researchers designed sensors highly sensitive to “electrophilic,” or electron-loving, chemical substances, which are often toxic and used for chemical weapons.

To do so, they created a new type of metallo-supramolecular polymer, a material made of metals binding to polymer chains. The polymer acts as an insulation, wrapping around each of the sensor’s tens of thousands of single-walled carbon nanotubes, separating them and keeping them highly resistant to electricity. But electrophilic substances trigger the polymer to disassemble, allowing the carbon nanotubes to once again come together, which leads to an increase in conductivity.

In their study, the researchers drop-cast the nanotube/polymer material onto gold electrodes, and exposed the electrodes to diethyl chlorophosphate, a skin irritant and reactive simulant of nerve gas. Using a device that measures electric current, they observed a 2,000 percent increase in electrical conductivity after five seconds of exposure. Similar conductivity increases were observed for trace amounts of numerous other electrophilic substances, such as thionyl chloride (SOCl2), a reactive simulant in choking agents. Conductivity was significantly lower in response to common volatile organic compounds, and exposure to most nontarget chemicals actually increased resistivity.

Creating the polymer was a delicate balancing act but critical to the design, Swager says. As a polymer, the material needs to hold the carbon nanotubes apart. But as it disassembles, its individual monomers need to interact more weakly, letting the nanotubes regroup. “We hit this sweet spot where it only works when it’s all hooked together,” Swager says.

Resistance is readable

To build their wireless system, the researchers created an NFC tag that turns on when its electrical resistance dips below a certain threshold.

Smartphones send out short pulses of electromagnetic fields that resonate with an NFC tag at radio frequency, inducing an electric current, which relays information to the phone. But smartphones can’t resonate with tags that have a resistance higher than 1 ohm.

The researchers applied their nanotube/polymer material to the NFC tag’s antenna. When exposed to 10 parts per million of SOCl2 for five seconds, the material’s resistance dropped to the point that the smartphone could ping the tag. Basically, it’s an “on/off indicator” to determine if toxic gas is present, Swager says.

According to the researchers, such a wireless system could be used to detect leaks in Li-SOCl2 (lithium thionyl chloride) batteries, which are used in medical instruments, fire alarms, and military systems.

The next step, Swager says, is to test the sensors on live chemical agents, outside of the lab, which are more dispersed and harder to detect, especially at trace levels. In the future, there’s also hope for developing a mobile app that could make more sophisticated measurements of the signal strength of an NFC tag: Differences in the signal will mean higher or lower concentrations of a toxic gas. “But creating new cell phone apps is a little beyond us right now,” Swager says. “We’re chemists.”
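
The ‘on/off indicator’ logic is simple enough to sketch. The numbers below are loosely based on figures in the news release (a roughly 1-ohm readability threshold and a conductivity jump of about 2,000 percent on exposure), but the model itself is my own toy illustration, not the MIT design,

# Toy model of the chemiresistor-gated NFC tag described in the news release.
# The baseline resistance is a hypothetical value chosen for illustration.

READ_THRESHOLD_OHMS = 1.0  # the release says phones can't read tags above ~1 ohm

def exposed_resistance(baseline_ohms, conductivity_increase_pct):
    """Resistance after exposure, given a percentage increase in conductivity."""
    factor = 1.0 + conductivity_increase_pct / 100.0
    return baseline_ohms / factor

def tag_readable(resistance_ohms):
    return resistance_ohms <= READ_THRESHOLD_OHMS

baseline = 10.0                             # ohms, hypothetical pre-exposure value
after = exposed_resistance(baseline, 2000)  # ~2,000% conductivity increase

print("before exposure: %.2f ohms, readable: %s" % (baseline, tag_readable(baseline)))
print("after exposure:  %.2f ohms, readable: %s" % (after, tag_readable(after)))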

Here’s a link to and a citation for the paper,

Ultratrace Detection of Toxic Chemicals: Triggered Disassembly of Supramolecular Nanotube Wrappers by Shinsuke Ishihara, Joseph M. Azzarelli, Markrete Krikorian, and Timothy M. Swager. J. Am. Chem. Soc., Article ASAP DOI: 10.1021/jacs.6b03869 Publication Date (Web): June 23, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Here are links to other posts about Swager’s work featured here previously:

Carbon nanotubes sense spoiled food (April 23, 2015 post)

Smart suits for US soldiers—an update of sorts from the Lawrence Livermore National Laboratory (Feb. 25, 2014 post)

Come, see my etchings … they detect poison gases (Oct. 9, 2012 post)

Soldiers sniff overripe fruit (May 1, 2012 post)