Tag Archives: University of Wisconsin-Madison

Nanojuice in your gut

A July 7, 2014 news item on Azonano features a new technique that could help doctors better diagnose problems in the intestines (guts),

Located deep in the human gut, the small intestine is not easy to examine. X-rays, MRIs and ultrasound images provide snapshots but each suffers limitations. Help is on the way.

University at Buffalo [State University of New York] researchers are developing a new imaging technique involving nanoparticles suspended in liquid to form “nanojuice” that patients would drink. Upon reaching the small intestine, doctors would strike the nanoparticles with a harmless laser light, providing an unparalleled, non-invasive, real-time view of the organ.

A July 5, 2014 University at Buffalo news release (also on EurekAlert) by Cory Nealon, which originated the news item, describes some of the challenges associated with medical imaging of small intestines,

“Conventional imaging methods show the organ and blockages, but this method allows you to see how the small intestine operates in real time,” said corresponding author Jonathan Lovell, PhD, UB assistant professor of biomedical engineering. “Better imaging will improve our understanding of these diseases and allow doctors to more effectively care for people suffering from them.”

The average human small intestine is roughly 23 feet long and 1 inch thick. Sandwiched between the stomach and large intestine, it is where much of the digestion and absorption of food takes place. It is also where symptoms of irritable bowel syndrome, celiac disease, Crohn’s disease and other gastrointestinal illnesses occur.

To assess the organ, doctors typically require patients to drink a thick, chalky liquid called barium. Doctors then use X-rays, magnetic resonance imaging and ultrasounds to assess the organ, but these techniques are limited with respect to safety, accessibility and lack of adequate contrast, respectively.

Also, none are highly effective at providing real-time imaging of movement such as peristalsis, which is the contraction of muscles that propels food through the small intestine. Dysfunction of these movements may be linked to the previously mentioned illnesses, as well as side effects of thyroid disorders, diabetes and Parkinson’s disease.

The news release goes on to describe how the researchers manipulated dyes that are usually unsuitable for the purpose of imaging an organ in the body,

Lovell and a team of researchers worked with a family of dyes called naphthalcyanines. These small molecules absorb large portions of light in the near-infrared spectrum, which is the ideal range for biological contrast agents.

They are unsuitable for the human body, however, because they don’t disperse in liquid and they can be absorbed from the intestine into the blood stream.

To address these problems, the researchers formed nanoparticles called “nanonaps” that contain the colorful dye molecules and added the abilities to disperse in liquid and move safely through the intestine.

In laboratory experiments performed with mice, the researchers administered the nanojuice orally. They then used photoacoustic tomography (PAT), in which pulsed laser light generates pressure waves that, when measured, provide a real-time and more nuanced view of the small intestine.

The researchers plan to continue to refine the technique for human trials, and move into other areas of the gastrointestinal tract.

Here’s an image of the nanojuice in the guts of a mouse,

The combination of “nanojuice” and photoacoustic tomography illuminates the intestine of a mouse. (Credit: Jonathan Lovell)

This is an international collaboration both from a research perspective and a funding perspective (from the news release),

Additional authors of the study come from UB’s Department of Chemical and Biological Engineering, Pohang University of Science and Technology in Korea, Roswell Park Cancer Institute in Buffalo, the University of Wisconsin-Madison, and McMaster University in Canada.

The research was supported by grants from the National Institutes of Health, the Department of Defense and the Korean Ministry of Science, ICT and Future Planning.

Here’s a link to and a citation for the paper,

Non-invasive multimodal functional imaging of the intestine with frozen micellar naphthalocyanines by Yumiao Zhang, Mansik Jeon, Laurie J. Rich, Hao Hong, Jumin Geng, Yin Zhang, Sixiang Shi, Todd E. Barnhart, Paschalis Alexandridis, Jan D. Huizinga, Mukund Seshadri, Weibo Cai, Chulhong Kim, & Jonathan F. Lovell. Nature Nanotechnology (2014) doi:10.1038/nnano.2014.130 Published online 06 July 2014

This paper is behind a paywall.

Good lignin, bad lignin: Florida researchers use plant waste to create lignin nanotubes while researchers in British Columbia develop trees with less lignin

An April 4, 2014 news item on Azonano describes some nanotube research at the University of Florida that reaches past carbon to a new kind of nanotube,

Researchers with the University of Florida’s [UF] Institute of Food and Agricultural Sciences took what some would consider garbage and made a remarkable scientific tool, one that could someday help to correct genetic disorders or treat cancer without chemotherapy’s nasty side effects.

Wilfred Vermerris, an associate professor in UF’s department of microbiology and cell science, and Elena Ten, a postdoctoral research associate, created from plant waste a novel nanotube, one that is much more flexible than rigid carbon nanotubes currently used. The researchers say the lignin nanotubes – about 500 times smaller than a human eyelash – can deliver DNA directly into the nucleus of human cells in tissue culture, where this DNA could then correct genetic conditions. Experiments with DNA injection are currently being done with carbon nanotubes, as well.

“That was a surprising result,” Vermerris said. “If you can do this in actual human beings you could fix defective genes that cause disease symptoms and replace them with functional DNA delivered with these nanotubes.”

An April 3, 2014 University of Florida Institute of Food and Agricultural Sciences news release, which originated the news item, describes the lignin nanotubes (LNTs) and future applications in more detail,

The nanotube is made up of lignin from plant material obtained from a UF biofuel pilot facility in Perry, Fla. Lignin is an integral part of the secondary cell walls of plants and enables water movement from the roots to the leaves, but it is not used to make biofuels and would otherwise be burned to generate heat or electricity at the biofuel plant. The lignin nanotubes can be made from a variety of plant residues, including sorghum, poplar, loblolly pine and sugar cane. [emphasis mine]

The researchers first tested to see if the nanotubes were toxic to human cells and were surprised to find that they were less so than carbon nanotubes. Thus, they could deliver a higher dose of medicine to the human cell tissue.  Then they researched if the nanotubes could deliver plasmid DNA to the same cells and that was successful, too. A plasmid is a small DNA molecule that is physically separate from, and can replicate independently of, chromosomal DNA within a cell.

“It’s not a very smooth road because we had to try different experiments to confirm the results,” Ten said. “But it was very fruitful.”

In cases of genetic disorders, the nanotube would be loaded with a functioning copy of a gene, and injected into the body, where it would target the affected tissue, which then makes the missing protein and corrects the genetic disorder.

Although Vermerris cautioned that treatment in humans is many years away, the conditions that these gene-carrying nanotubes could correct include cystic fibrosis and muscular dystrophy. But, he added, patients would have to take the corrective DNA via nanotubes on a continuing basis.

Another application under consideration is to use the lignin nanotubes for the delivery of chemotherapy drugs in cancer patients. The nanotubes would ensure the drugs only get to the tumor without affecting healthy tissues.

Vermerris said they created different types of nanotubes, depending on the experiment. They could also adapt nanotubes to a patient’s specific needs, a process called customization.

“You can think about it as a chest of drawers and, depending on the application, you open one drawer or use materials from a different drawer to get things just right for your specific application,” he said.  “It’s not very difficult to do the customization.”

The next step in the research process is for Vermerris and Ten to begin experiments on mice. They are in the application process for those experiments, which would take several years to complete.  If those are successful, permits would need to be obtained for their medical school colleagues to conduct research on human patients, with Vermerris and Ten providing the nanotubes for that research.

“We are a long way from that point,” Vermerris said. “That’s the optimistic long-term trajectory.”

I hope they have good luck with this work. I have emphasized the plant waste the University of Florida scientists studied due to the inclusion of poplar, which is featured in the University of British Columbia research work also being mentioned in this post.

Getting back to Florida for a moment, here’s a link to and a citation for the paper,

Lignin Nanotubes As Vehicles for Gene Delivery into Human Cells by Elena Ten, Chen Ling, Yuan Wang, Arun Srivastava, Luisa Amelia Dempere, and Wilfred Vermerris. Biomacromolecules, 2014, 15 (1), pp 327–338 DOI: 10.1021/bm401555p Publication Date (Web): December 5, 2013
Copyright © 2013 American Chemical Society

This is an open access paper.

Meanwhile, researchers at the University of British Columbia (UBC) are trying to limit the amount of lignin in trees (specifically poplars, which are not mentioned in this excerpt but in the next). From an April 3, 2014 UBC news release,

Researchers have genetically engineered trees that will be easier to break down to produce paper and biofuel, a breakthrough that will mean using fewer chemicals, less energy and creating fewer environmental pollutants.

“One of the largest impediments for the pulp and paper industry as well as the emerging biofuel industry is a polymer found in wood known as lignin,” says Shawn Mansfield, a professor of Wood Science at the University of British Columbia.

Lignin makes up a substantial portion of the cell wall of most plants and is a processing impediment for pulp, paper and biofuel. Currently the lignin must be removed, a process that requires significant chemicals and energy and causes undesirable waste.

Researchers used genetic engineering to modify the lignin to make it easier to break down without adversely affecting the tree’s strength.

“We’re designing trees to be processed with less energy and fewer chemicals, and ultimately recovering more wood carbohydrate than is currently possible,” says Mansfield.

Researchers had previously tried to tackle this problem by reducing the quantity of lignin in trees by suppressing genes, which often resulted in trees that were stunted in growth or susceptible to wind, snow, pests and pathogens.

“It is truly a unique achievement to design trees for deconstruction while maintaining their growth potential and strength.”

The study, a collaboration between researchers at the University of British Columbia, the University of Wisconsin-Madison, and Michigan State University, funded by the Great Lakes Bioenergy Research Center, was published today in Science.

Here’s more about lignin and how a decrease would free up more material for biofuels in a more environmentally sustainable fashion, from the news release,

The structure of lignin naturally contains ether bonds that are difficult to degrade. Researchers used genetic engineering to introduce ester bonds into the lignin backbone that are easier to break down chemically.

The new technique means that the lignin may be recovered more effectively and used in other applications, such as adhesives, insulation, carbon fibres and paint additives.

Genetic modification

The genetic modification strategy employed in this study could also be used on other plants like grasses to be used as a new kind of fuel to replace petroleum.

Genetic modification can be a contentious issue, but there are ways to ensure that the genes do not spread to the forest. These techniques include growing crops away from native stands so cross-pollination isn’t possible; introducing genes to make both the male and female trees or plants sterile; and harvesting trees before they reach reproductive maturity.

In the future, genetically modified trees could be planted like an agricultural crop, not in our native forests. Poplar is a potential energy crop for the biofuel industry because the tree grows quickly and on marginal farmland. [emphasis mine] Lignin makes up 20 to 25 per cent of the tree.

“We’re a petroleum reliant society,” says Mansfield. “We rely on the same resource for everything from smartphones to gasoline. We need to diversify and take the pressure off of fossil fuels. Trees and plants have enormous potential to contribute carbon to our society.”

As noted earlier, the researchers in Florida mention poplars in their paper (Note: Links have been removed),

Gymnosperms such as loblolly pine (Pinus taeda L.) contain lignin that is composed almost exclusively of G-residues, whereas lignin from angiosperm dicots, including poplar (Populus spp.) contains a mixture of G- and S-residues. [emphasis mine] Due to the radical-mediated addition of monolignols to the growing lignin polymer, lignin contains a variety of interunit bonds, including aryl–aryl, aryl–alkyl, and alkyl–alkyl bonds.(3) This feature, combined with the association between lignin and cell-wall polysaccharides, which involves both physical and chemical interactions, make the isolation of lignin from plant cell walls challenging. Various isolation methods exist, each relying on breaking certain types of chemical bonds within the lignin, and derivatizations to solubilize the resulting fragments.(5) Several of these methods are used on a large scale in pulp and paper mills and biorefineries, where lignin needs to be removed from woody biomass and crop residues(6) in order to use the cellulose for the production of paper, biofuels, and biobased polymers. The lignin is present in the waste stream and has limited intrinsic economic value.(7)

Since hydroxyl and carboxyl groups in lignin facilitate functionalization, its compatibility with natural and synthetic polymers for different commercial applications have been extensively studied.(8-12) One of the promising directions toward the cost reduction associated with biofuel production is the use of lignin for low-cost carbon fibers.(13) Other recent studies reported development and characterization of lignin nanocomposites for multiple value-added applications. For example, cellulose nanocrystals/lignin nanocomposites were developed for improved optical, antireflective properties(14, 15) and thermal stability of the nanocomposites.(16) [emphasis mine] Model ultrathin bicomponent films prepared from cellulose and lignin derivatives were used to monitor enzyme binding and cellulolytic reactions for sensing platform applications.(17) Enzymes/“synthetic lignin” (dehydrogenation polymer (DHP)) interactions were also investigated to understand how lignin impairs enzymatic hydrolysis during the biomass conversion processes.(18)

The synthesis of lignin nanotubes and nanowires was based on cross-linking a lignin base layer to an alumina membrane, followed by peroxidase-mediated addition of DHP and subsequent dissolution of the membrane in phosphoric acid.(1) Depending upon monomers used for the deposition of DHP, solid nanowires, or hollow nanotubes could be manufactured and easily functionalized due to the presence of many reactive groups. Due to their autofluorescence, lignin nanotubes permit label-free detection under UV radiation.(1) These features make lignin nanotubes suitable candidates for numerous biomedical applications, such as the delivery of therapeutic agents and DNA to specific cells.

The synthesis of LNTs in a sacrificial template membrane is not limited to a single source of lignin or a single lignin isolation procedure. Dimensions of the LNTs and their cytotoxicity to HeLa cells appear to be determined primarily by the lignin isolation procedure, whereas the transfection efficiency is also influenced by the source of the lignin (plant species and genotype). This means that LNTs can be tailored to the application for which they are intended. [emphasis mine] The ability to design LNTs for specific purposes will benefit from a more thorough understanding of the relationship between the structure and the MW of the lignin used to prepare the LNTs, the nanomechanical properties, and the surface characteristics.

We have shown that DNA is physically associated with the LNTs and that the LNTs enter the cytosol, and in some cases the nucleus. The LNTs made from NaOH-extracted lignin are of special interest, as they were the shortest in length, substantially reduced HeLa cell viability at levels above approximately 50 mg/mL, and, in the case of pine and poplar, were the most effective in the transfection [penetrating the cell with a bacterial plasmid to leave genetic material in this case] experiments. [emphasis mine]

As I see the issues presented with these two research efforts, there are environmental and energy issues with extracting the lignin while there seem to be some very promising medical applications possible with lignin ‘waste’. These two research efforts aren’t necessarily antithetical but they do raise some very interesting issues as to how we approach our use of resources and future policies.

ETA May 16, 2014: The beat goes on, with the Georgia (US) Institute of Technology issuing a roadmap for making money from lignin. From a Georgia Tech May 15, 2014 news release on EurekAlert,

When making cellulosic ethanol from plants, one problem is what to do with a woody agricultural waste product called lignin. The old adage in the pulp industry has been that one can make anything from lignin except money.

A new review article in the journal Science points the way toward a future where lignin is transformed from a waste product into valuable materials such as low-cost carbon fiber for cars or bio-based plastics. Using lignin in this way would create new markets for the forest products industry and make ethanol-to-fuel conversion more cost-effective.

“We’ve developed a roadmap for integrating genetic engineering with analytical chemistry tools to tailor the structure of lignin and its isolation so it can be used for materials, chemicals and fuels,” said Arthur Ragauskas, a professor in the School of Chemistry and Biochemistry at the Georgia Institute of Technology. Ragauskas is also part of the Institute for Paper Science and Technology at Georgia Tech.

The roadmap was published May 15 [2014] in the journal Science. …

Here’s a link to and citation for the ‘roadmap’,

Lignin Valorization: Improving Lignin Processing in the Biorefinery by Arthur J. Ragauskas, Gregg T. Beckham, Mary J. Biddy, Richard Chandra, Fang Chen, Mark F. Davis, Brian H. Davison, Richard A. Dixon, Paul Gilna, Martin Keller, Paul Langan, Amit K. Naskar, Jack N. Saddler, Timothy J. Tschaplinski, Gerald A. Tuskan, and Charles E. Wyman. Science 16 May 2014: Vol. 344 no. 6185 DOI: 10.1126/science.1246843

This paper is behind a paywall.

Cleaning up oil* spills with cellulose nanofibril aerogels

Given the ever-expanding scope of oil and gas production, as previously unreachable sources are breached and previously unusable contaminated sources are purified for use, while major pipelines and mega tankers are built to transport all this product, it’s good to see that research into cleaning up oil spills is taking place. A Feb. 26, 2014 news item on Azonano features a project at the University of Wisconsin–Madison,

Cleaning up oil spills and metal contaminants in a low-impact, sustainable and inexpensive manner remains a challenge for companies and governments globally.

But a group of researchers at the University of Wisconsin–Madison is examining alternative materials that can be modified to absorb oil and chemicals without absorbing water. If further developed, the technology may offer a cheaper and “greener” method to absorb oil and heavy metals from water and other surfaces.

Shaoqin “Sarah” Gong, a researcher at the Wisconsin Institute for Discovery (WID) and associate professor of biomedical engineering, graduate student Qifeng Zheng, and Zhiyong Cai, a project leader at the USDA Forest Products Laboratory in Madison, have recently created and patented the new aerogel technology.

The Feb. 25, 2014 University of Wisconsin–Madison news release, which originated the news item, explains a little bit about aerogels and about what makes these cellulose nanofibril-based aerogels special,

Aerogels, which are highly porous materials and the lightest solids in existence, are already used in a variety of applications, ranging from insulation and aerospace materials to thickening agents in paints. The aerogel prepared in Gong’s lab is made of cellulose nanofibrils (sustainable wood-based materials) and an environmentally friendly polymer. Furthermore, these cellulose-based aerogels are made using an environmentally friendly freeze-drying process without the use of organic solvents.

It’s the combination of this “greener” material and its high performance that got Gong’s attention.

“For this material, one unique property is that it has superior absorbing ability for organic solvents — up to nearly 100 times its own weight,” she says. “It also has strong absorbing ability for metal ions.”

Treating the cellulose-based aerogel with specific types of silane after it is made through the freeze-drying process is a key step that gives the aerogel its water-repelling and oil-absorbing properties.

The researchers have produced a video showing their aerogel in operation,

For those who don’t have the time for a video, the news release describes some of the action taking place,

“So if you had an oil spill, for example, the idea is you could throw this aerogel sheet in the water and it would start to absorb the oil very quickly and efficiently,” she says. “Once it’s fully saturated, you can take it out and squeeze out all the oil. Although its absorbing capacity reduces after each use, it can be reused for a couple of cycles.”

In addition, this cellulose-based aerogel exhibits excellent flexibility as demonstrated by compression mechanical testing.

Though much work needs to be done before the aerogel can be mass-produced, Gong says she’s eager to share the technology’s potential benefits beyond the scientific community.

“We are living in a time where pollution is a serious problem — especially for human health and for animals in the ocean,” she says. “We are passionate to develop technology to make a positive societal impact.”
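For the curious, the reuse behaviour described earlier (absorbing up to nearly 100 times its own weight, with capacity dropping after each squeeze cycle) lends itself to a quick back-of-envelope calculation. The sketch below is my own illustration, not the researchers’ data; in particular, the 20% capacity loss per cycle is an invented figure, since the news release doesn’t quantify the reduction,

```python
# Back-of-envelope sketch of the aerogel reuse behaviour described in
# the news release. The ~100x absorption figure comes from the release;
# the 20% capacity loss per squeeze cycle is an invented number used
# purely for illustration.

def oil_recovered(aerogel_mass_g, cycles, initial_ratio=100.0, loss_per_cycle=0.2):
    """Return (total grams of oil recovered, grams recovered per cycle)."""
    total = 0.0
    ratio = initial_ratio
    per_cycle = []
    for _ in range(cycles):
        absorbed = aerogel_mass_g * ratio   # grams of oil soaked up this cycle
        total += absorbed
        per_cycle.append(absorbed)
        ratio *= 1 - loss_per_cycle         # capacity drops with each reuse
    return total, per_cycle

total, per_cycle = oil_recovered(10.0, cycles=3)
# 10 g of aerogel: roughly 1000 g, 800 g and 640 g over three cycles
```

With those made-up numbers, 10 grams of aerogel would recover roughly 2.4 kilograms of oil over three cycles, which makes it easy to see why “a couple of cycles” of reuse is still economically interesting.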

Here’s a link to and a citation for the research paper,

Green synthesis of polyvinyl alcohol (PVA)–cellulose nanofibril (CNF) hybrid aerogels and their use as superabsorbents by Qifeng Zheng, Zhiyong Cai, and Shaoqin Gong.  J. Mater. Chem. A, 2014,2, 3110-3118 DOI: 10.1039/C3TA14642A First published online 16 Dec 2013

This paper is behind a paywall. I last wrote about oil-absorbing nanosponges in an April 17, 2012 posting. Those sponges were based on carbon nanotubes (CNTs).

* ‘oils’ in headline changed to ‘oil’ on May 6, 2014.

Tweet your nano

Researchers at the University of Wisconsin-Madison have published a study titled, “Tweeting nano: how public discourses about nanotechnology develop in social media environments,” which analyzes, for the first time, nanotechnology discourse on the social media platform Twitter. From the University of Wisconsin-Madison Life Sciences Communication research webpage,

The study, “Tweeting nano: how public discourses about nanotechnology develop in social media environments,” mapped social media traffic about nanotechnology, finding that Twitter traffic expressing opinion about nanotechnology is more likely to originate from states with a federally-funded National Nanotechnology Initiative center or network than states without such centers.

Runge [Kristin K. Runge, doctoral student] and her co-authors used computational linguistic software to analyze a census of all English-language nanotechnology-related tweets expressing opinion posted on Twitter over one calendar year. In addition to mapping tweets by state, the team coded sentiment along two axes: certain vs. uncertain, and optimistic-neutral-pessimistic. They found 55% of nanotechnology-related opinions expressed certainty, 41% expressed pessimistic outlooks and 32% expressed neutral outlooks.

In addition to shedding light on how social media is used in communicating about an emerging technology, this is believed to be the first published study to use a census of social media messages rather than a sample.

“We likely wouldn’t have captured these results if we had to rely on a sample rather than a complete census,” said Runge. “That would have been unfortunate, because the distinct geographic origins of the tweets and the tendency toward certainty in opinion expression will be useful in helping us understand how key online influencers are shaping the conversation around nanotechnology.”
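The two-axis coding scheme described above (certain vs. uncertain, and optimistic-neutral-pessimistic) is easy to picture with a toy example. The sketch below is purely my own illustration; the keyword rules and sample tweets are invented and are far cruder than the computational linguistic software the team actually used,

```python
# Hypothetical sketch of a two-axis sentiment coding scheme: each
# opinion tweet is coded on certainty (certain/uncertain) and valence
# (optimistic/neutral/pessimistic), then tallied as percentages of the
# census. Cue lists and tweets are invented for illustration only.
from collections import Counter

CERTAIN_CUES = ("definitely", "will", "proves", "clearly")
PESSIMIST_CUES = ("risk", "danger", "toxic", "fear")
OPTIMIST_CUES = ("breakthrough", "cure", "promising", "hope")

def code_tweet(text):
    t = text.lower()
    certainty = "certain" if any(c in t for c in CERTAIN_CUES) else "uncertain"
    if any(c in t for c in PESSIMIST_CUES):
        valence = "pessimistic"
    elif any(c in t for c in OPTIMIST_CUES):
        valence = "optimistic"
    else:
        valence = "neutral"
    return certainty, valence

def tally(tweets):
    certainty, valence = Counter(), Counter()
    for tw in tweets:
        c, v = code_tweet(tw)
        certainty[c] += 1
        valence[v] += 1
    n = len(tweets)

    def as_pct(counts):
        return {k: round(100 * cnt / n) for k, cnt in counts.items()}

    return as_pct(certainty), as_pct(valence)

tweets = [
    "Nanotech will definitely change medicine",
    "Worried about the toxic risk of nanoparticles",
    "New nano paper out today",
    "A promising nano breakthrough for cancer",
]
certainty_pct, valence_pct = tally(tweets)
```

Run over a complete census rather than a sample, tallies like these are what let the team report figures such as 55% of opinions expressing certainty.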

It’s not obvious from this notice or the title of the study but it is stated clearly in the study itself that the focus is the world of US nano, not the English-language world of nano. After reading the study (very quickly), I can say it’s interesting and, hopefully, it will stimulate more work about public opinion that takes social media into account. (I’d love to know how they limited their study to US tweets only and how they determined the region that spawned each tweet.)

The one thing which puzzles me is that they don’t mention retweets (RTs) specifically. Did they consider only original tweets? If not, did they take into account the possibility that someone might RT an item that does not reflect their own opinion? I occasionally RT something that doesn’t reflect my opinion when there isn’t sufficient space to include a comment indicating otherwise, because I want to promote discussion, and that doesn’t necessarily take place on Twitter or in Twitter’s public space. This leads to another question: did the researchers include direct messages in their study? Unfortunately, there’s no mention of any of this in the two sections (Discussion and Implications for future research) of the conclusion.

For those who would like to see the research for themselves (Note: The article is behind a paywall),

Tweeting nano: how public discourses about nanotechnology develop in social media environments by Kristin K. Runge, Sara K. Yeo, Michael Cacciatore, Dietram A. Scheufele, Dominique Brossard, Michael Xenos, Ashley Anderson, Doo-hun Choi, Jiyoun Kim, Nan Li, Xuan Liang, Maria Stubbings, and Leona Yi-Fan Su. Journal of Nanoparticle Research: An Interdisciplinary Forum for Nanoscale Science and Technology (Springer). DOI: 10.1007/s11051-012-1381-8 Published online Jan. 4, 2013

It’s no surprise to see Dietram Scheufele and Dominique Brossard, who are both located at the University of Wisconsin-Madison and publish steadily on the topic of nanotechnology and public opinion, listed as authors.

Unintended consequences of reading science news online

University of Wisconsin-Madison researchers Dominique Brossard and  Dietram Scheufele have written a cautionary piece for the AAAS’s (American Association for the Advancement of Science) magazine, Science, according to a Jan. 3, 2013 news item on ScienceDaily,

A science-inclined audience and wide array of communications tools make the Internet an excellent opportunity for scientists hoping to share their research with the world. But that opportunity is fraught with unintended consequences, according to a pair of University of Wisconsin-Madison life sciences communication professors.

Dominique Brossard and Dietram Scheufele, writing in a Perspectives piece for the journal Science, encourage scientists to join an effort to make sure the public receives full, accurate and unbiased information on science and technology.

“This is an opportunity to promote interest in science — especially basic research, fundamental science — but, on the other hand, we could be missing the boat,” Brossard says. “Even our most well-intended effort could backfire, because we don’t understand the ways these same tools can work against us.”

The Jan. 3, 2013 University of Wisconsin-Madison news release by Chris Barncard (which originated the news item) notes,

Recent research by Brossard and Scheufele has described the way the Internet may be narrowing public discourse, and new work shows that a staple of online news presentation — the comments section — and other ubiquitous means to provide endorsement or feedback can color the opinions of readers of even the most neutral science stories.

Online news sources pare down discussion or limit visibility of some information in several ways, according to Brossard and Scheufele.

Many news sites use the popularity of stories or subjects (measured by the numbers of clicks they receive, or the rate at which users share that content with others, or other metrics) to guide the presentation of material.

The search engine Google offers users suggested search terms as they make requests, offering up “nanotechnology in medicine,” for example, to those who begin typing “nanotechnology” in a search box. Users often avail themselves of the list of suggestions, making certain searches more popular, which in turn makes those search terms even more likely to appear as suggestions.
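The suggestion feedback loop described above is a classic rich-get-richer dynamic, and a toy simulation makes the concentration effect easy to see. Everything in this sketch (the query strings, the starting counts, the assumption that eight out of ten users accept the top suggestion) is my own invention for illustration, not anything from the researchers’ work,

```python
# A toy, deterministic model of the search-suggestion feedback loop:
# suggestions are ranked by past popularity, most users accept the top
# suggestion, and each acceptance reinforces that ranking. The query
# strings and all the numbers here are invented for illustration.

def simulate(start_counts, n_users, accept_top=8, out_of=10):
    # Deterministic stand-in for "80% of users pick the top suggestion":
    # of every `out_of` users, `accept_top` take the top-ranked query
    # and the rest cycle through the remaining suggestions.
    counts = dict(start_counts)
    for i in range(n_users):
        ranked = sorted(counts, key=counts.get, reverse=True)
        if i % out_of < accept_top:
            counts[ranked[0]] += 1                          # accept top suggestion
        else:
            counts[ranked[1 + i % (len(ranked) - 1)]] += 1  # pick another one
    return counts

start = {"nanotechnology in medicine": 50,
         "nanotechnology risks": 48,
         "nanotechnology definition": 45}
end = simulate(start, n_users=1000)
top = max(end, key=end.get)  # the early leader ends up with the lion's share
```

With these made-up numbers, the early leader’s share grows from about a third of all searches to the large majority, which is exactly the narrowing of discourse the researchers warn about.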

Brossard and Scheufele have published an earlier study about the ‘narrowing’ effects of search engines such as Google, using the example of the topic ‘nanotechnology’, as per my May 19, 2010 posting. The researchers appear to be building on this earlier work,

The consequences become more daunting for the researchers as Brossard and Scheufele uncover more surprising effects of Web 2.0.

In their newest study, they show that independent of the content of an article about a new technological development, the tone of comments posted by other readers can make a significant difference in the way new readers feel about the article’s subject. The less civil the accompanying comments, the more risk readers attributed to the research described in the news story.

“The day of reading a story and then turning the page to read another is over,” Scheufele says. “Now each story is surrounded by numbers of Facebook likes and tweets and comments that color the way readers interpret even truly unbiased information. This will produce more and more unintended effects on readers, and unless we understand what those are and even capitalize on them, they will just cause more and more problems.”

If even some of the for-profit media world and advocacy organizations are approaching the digital landscape from a marketing perspective, Brossard and Scheufele argue, scientists need to turn to more empirical communications research and engage in active discussions across disciplines of how to most effectively reach large audiences.

“It’s not because there is not decent science writing out there. We know all kinds of excellent writers and sources,” Brossard says. “But can people be certain that those are the sites they will find when they search for information? That is not clear.”

It’s not about preparing for the future. It’s about catching up to the present. And the present, Scheufele says, includes scientific subjects — think fracking, or synthetic biology — that need debate and input from the public.

Here’s a citation and link for the Science article,

Science, New Media, and the Public by Dominique Brossard and Dietram A. Scheufele. Science, 4 January 2013: Vol. 339, no. 6115, pp. 40-41. DOI: 10.1126/science.1232329

This article is behind a paywall.

Better night vision goggles for the military

I remember a military type, a friend who served as a Canadian peacekeeper (Infantry) in the Balkans, describing night-vision goggles and mentioning that they are loud. After all, it's imaging equipment, which requires a power source that is, in this case, also a source of noise. The Dec. 29, 2012 news item on Nanowerk about improved imaging for night vision goggles doesn't mention noise but hopefully the problem has been addressed or mitigated (assuming this technology is meant to be worn),

Through some key breakthroughs in flexible semiconductors, electrical and computer engineering Professor Zhenqiang “Jack” Ma has created two imaging technologies that have potential applications beyond the 21st century battlefield.

With $750,000 in support from the Air Force Office of Scientific Research (AFOSR), Ma has developed curved night-vision goggles using germanium nanomembranes.

The Dec. 28, 2012 University of Wisconsin-Madison news release, which originated the news item, describes the Air Force project and another night vision project for the US Department of Defense,

Creating night-vision goggles with a curved surface allows a wider field of view for pilots, but requires highly photosensitive materials with mechanical bendability; the silicon used in conventional image sensors doesn't cut it.

…  Ma’s design employs flexible germanium nanomembranes: a transferrable flexible semiconductor that until now has been too challenging to use in imagers due to a high dark current, the background electrical current that flows through photosensitive materials even when they aren’t exposed to light.

“Because of their higher dark current, the image often comes up much noisier on germanium-based imagers,” says Ma. “We solved that problem.”
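For readers unfamiliar with dark current: it adds a signal to every pixel even in total darkness, which is why germanium imagers looked noisy before Ma's fix. Ma's solution is in the device itself, but the conventional software workaround, subtracting a "dark frame" captured with the shutter closed, gives a feel for the problem. Here's a minimal toy sketch (my own illustration, not Ma's method; the array sizes and leakage values are invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed-pattern dark current: each pixel "leaks" signal even with no light.
dark_current = rng.uniform(5.0, 15.0, size=(4, 4))

# A light frame records the scene signal plus the dark-current leakage.
scene = np.full((4, 4), 100.0)
light_frame = scene + dark_current

# A dark frame (shutter closed) records only the leakage.
dark_frame = dark_current.copy()

# Subtracting the dark frame recovers the underlying scene.
corrected = light_frame - dark_frame
print(np.allclose(corrected, scene))  # True
```

In a real sensor the leakage also fluctuates randomly from frame to frame, and that residual noise can't be subtracted away, which is why reducing dark current in the material itself, as Ma did, matters.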

Ma’s dark current reduction technology has also been recently licensed to Intel.

In another imaging project, the U.S. Department of Defense has provided Ma with $750,000 in support of development of imagers for military surveillance that span multiple spectra, combining infrared and visible light into a single image.

“The reason they are interested in IR is because visible light can be blocked by clouds, dust, smoke,” says Ma. “IR can go through, so simultaneous visible and IR imaging allows them to see everything.”

Inexpensive silicon makes production of visible light imagers a simple task, but IR relies on materials incompatible with silicon.

The current approach involves a sensor for IR images and a sensor for visible light, combining the two images in post-processing, which requires greater computing power and hardware complexity. Instead, Ma will employ a heterogeneous semiconductor nanomembrane, stacking the two incompatible materials in each pixel of the new imager to layer IR and visible images on top of one another in a single image.
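To illustrate what "combining the two images in post-processing" involves, here is a deliberately naive sketch of fusing two pre-registered grayscale frames as a weighted blend (an assumption of mine for illustration; real two-sensor systems must first align images taken from slightly different viewpoints, which is where the extra computing power and hardware complexity come in, and which Ma's per-pixel stacked design avoids):

```python
import numpy as np

def fuse(visible, infrared, alpha=0.5):
    """Naive weighted blend of two pre-registered grayscale frames."""
    assert visible.shape == infrared.shape  # registration assumed already done
    return alpha * visible + (1 - alpha) * infrared

# Toy 2x2 frames with pixel intensities in [0, 1]
visible = np.array([[0.2, 0.8], [0.5, 0.1]])
infrared = np.array([[0.6, 0.4], [0.9, 0.3]])

print(fuse(visible, infrared))  # the elementwise average of the two frames
```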

The result will be imagers that can seamlessly shift between IR and visible images, allowing the picture to be richer and more quickly utilized for strategic decisionmaking.

It’s impossible to tell from the description whether this particular technology will be worn by foot soldiers or other military personnel but, in the event it is worn, it’s worth remembering that it will need a power source. Interestingly, the average soldier already carries a lot of weight in batteries (up to 35 pounds!), as per my May 9, 2012 posting about energy-harvesting textiles and the military.

Sunflower season when thoughts turn to solar power systems

Sunflowers in Fargo, North Dakota, USA. This image was released by the Agricultural Research Service, the research agency of the United States Department of Agriculture, with the ID K5751-1. (Downloaded from Wikipedia; http://en.wikipedia.org/wiki/Sunflower)

I love the big sunflowers, the ones where the stalks extend many feet past my 5’4″ and which are topped with those improbable, lush, huge flowers. The flowers’ height always puts me in mind of trees.  While scientists may appreciate the aesthetics and poetry as much as I do, their thoughts tend to turn to less fanciful matters. From the Aug. 16, 2012 news item on ScienceDaily,

A field of young sunflowers will slowly rotate from east to west during the course of a sunny day, each leaf seeking out as much sunlight as possible as the sun moves across the sky through an adaptation called heliotropism.

It’s a clever bit of natural engineering that inspired imitation from a UW-Madison electrical and computer engineer, who has found a way to mimic the passive heliotropism seen in sunflowers for use in the next crop of solar power systems.

Unlike other “active” solar systems that track the sun’s position with GPS and reposition panels with motors, electrical and computer engineering professor Hongrui Jiang’s concept leverages the properties of unique materials in concert to create a passive method of re-orienting solar panels in the direction of the most direct sunlight.

Here’s a demonstration of Jiang’s concept, not as pretty as a sunflower, in a very bare-bones video where you have to watch closely or you might miss the action,

Here’s a description of what you’re witnessing from Mark Reichers’ Aug. 15, 2012 news release for the University of Wisconsin-Madison,

His design, published Aug. 1 in Advanced Functional Materials and recently highlighted in Nature, employs a combination of liquid crystalline elastomer (LCE), which goes through a phase change and contracts in the presence of heat, with carbon nanotubes, which can absorb a wide range of light wavelengths.

“Carbon nanotubes have a very wide range of absorption, visible light all the way to infrared,” says Jiang. “That is something we can take advantage of, since it is possible to use sunlight to drive it directly.”

Direct sunlight hits a mirror beneath the solar panel, focused onto one of multiple actuators composed of LCE laced with carbon nanotubes. The carbon nanotubes heat up as they absorb light, and the heat differential between the environment and inside the actuator causes the LCE to shrink.

This causes the entire assembly to bow in the direction of the strongest sunlight. As the sun moves across the sky, the actuators will cool and re-expand, and new ones will shrink, re-positioning the panel over the 180 degrees of sky that the sun covers in the course of the day.
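The mechanism described above can be caricatured in a few lines of code. This is strictly my own toy model, not Jiang's published design: I assume just two actuators (east and west), invented heating and cooling rates, and a tilt simply proportional to the temperature difference between them.

```python
import math

def simulate(steps=180, heat_gain=0.1, cool_rate=0.05):
    """Toy passive tracker: two LCE/CNT actuators heat under focused
    sunlight, cool toward ambient, and the panel bows toward the sun."""
    temp_e, temp_w = 0.0, 0.0  # actuator temperatures above ambient
    tilts = []
    for t in range(steps):
        sun = math.pi * t / steps            # sun angle: 0 (east) to pi (west)
        illum_e = max(0.0, math.cos(sun))    # east actuator lit in the morning
        illum_w = max(0.0, -math.cos(sun))   # west actuator lit in the afternoon
        # Each actuator warms with illumination and relaxes toward ambient.
        temp_e += heat_gain * illum_e - cool_rate * temp_e
        temp_w += heat_gain * illum_w - cool_rate * temp_w
        # The hotter actuator contracts more, bowing the panel its way.
        tilts.append(temp_w - temp_e)
    return tilts

tilts = simulate()
print(tilts[0] < 0, tilts[-1] > 0)  # leans east at dawn, west at dusk
```

The point of the sketch is the passivity: nothing in the loop looks up the sun's position; the tilt emerges entirely from which actuator happens to be absorbing light, just as in Jiang's motor-free design.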

This new approach improves solar panel efficiency by 10%. This is significant in a field where an increase of even a few percentage points is cause for celebration (my July 30, 2012 posting makes reference to this phenomenon of celebrating relatively small increases in solar power systems efficiencies).

Science communication at the US National Academy of Sciences

I guess it’s going to be a science communication kind of day on this blog. Dr. Andrew Maynard on his 2020 Science blog posted a May 22, 2012 piece about a recent two-day science communication event at the US National Academy of Sciences in Washington, DC.

The event, titled The Science of Science Communication and held May 21 – 22, 2012, had me a little concerned about the content since its description suggests a dedication to metrics (which are useful but, I find, often misused) and the possibility of a predetermined result for science communication. After watching a webcast of the first session (Introduction and Overviews, offered by Baruch Fischhoff [Carnegie Mellon University] and Dietram Scheufele [University of Wisconsin at Madison], 55:35 mins.), I’m relieved to say that the first two presenters mostly avoided those pitfalls.

You can go here to watch any of the sessions held during those two days, although I will warn you that these are not TED talks. The shortest run roughly 27 mins., most run over an hour, and a couple run over two hours.

Getting back to Andrew and his take on the proceedings, excerpted from his May 22, 2012 posting,

It’s important that the National Academies of Science are taking the study of science communication (and its practice) seriously.  Inviting a bunch of social scientists into the National Academies – and into a high profile colloquium like this – was a big deal.  And irrespective of the meeting’s content, it flags a commitment to work closely with researchers studying science communication and decision analysis to better ensure informed and effective communication strategies and practice.  Given the substantial interest in the colloquium – on the web as well as at the meeting itself – I hope that the National Academies build on this and continue to engage fully in this area.

Moving forward, there needs to be more engagement between science communication researchers and practitioners.  Practitioners of science communication – and the practical insight they bring – were notable by their absence (in the main) from the colloquium program.  If the conversation around empirical research is to connect with effective practice, there must be better integration of these two communities.

It’s interesting to read about the colloquia (the science communication event was one of a series of events known as the Arthur M. Sackler Colloquia) from the perspective of someone who was present in real time.