Tag Archives: US National Science Foundation

US White House establishes new initiatives to commercialize nanotechnology

As I’ve noted several times, there’s a strong push in the US to commercialize nanotechnology, and May 20, 2015 was a banner day for those efforts. The White House announced a series of new initiatives to speed commercialization in a May 20, 2015 posting by Lloyd Whitman, Tom Kalil, and JJ Raynor,

Today, May 20 [2015], the National Economic Council and the Office of Science and Technology Policy held a forum at the White House to discuss opportunities to accelerate the commercialization of nanotechnology.

In recognition of the importance of nanotechnology R&D, representatives from companies, government agencies, colleges and universities, and non-profits are announcing a series of new and expanded public and private initiatives that complement the Administration’s efforts to accelerate the commercialization of nanotechnology and expand the nanotechnology workforce:

  • The Colleges of Nanoscale Science and Engineering at SUNY Polytechnic Institute in Albany, NY and the National Institute for Occupational Safety and Health are launching the Nano Health & Safety Consortium to advance research and guidance for occupational safety and health in the nanoelectronics and other nanomanufacturing industry settings.
  • Raytheon has brought together a group of representatives from the defense industry and the Department of Defense to identify collaborative opportunities to advance nanotechnology product development, manufacturing, and supply-chain support with a goal of helping the U.S. optimize development, foster innovation, and take more rapid advantage of new commercial nanotechnologies.
  • BASF Corporation is taking a new approach to finding solutions to nanomanufacturing challenges. In March, BASF launched a prize-based “NanoChallenge” designed to drive new levels of collaborative innovation in nanotechnology while connecting with potential partners to co-create solutions that address industry challenges.
  • OCSiAl is expanding the eligibility of its “iNanoComm” matching grant program that provides low-cost, single-walled carbon nanotubes to include more exploratory research proposals, especially proposals for projects that could result in the creation of startups and technology transfers.
  • The NanoBusiness Commercialization Association (NanoBCA) is partnering with Venture for America and working with the National Science Foundation (NSF) to promote entrepreneurship in nanotechnology. Three companies (PEN, NanoMech, and SouthWest NanoTechnologies) are offering to support NSF’s Innovation Corps (I-Corps) program with mentorship for entrepreneurs-in-training and, along with three other companies (NanoViricides, mPhase Technologies, and Eikos), will partner with Venture for America to hire recent graduates into nanotechnology jobs, thereby strengthening new nanotech businesses while providing needed experience for future entrepreneurs.
  • TechConnect is establishing a Nano and Emerging Technologies Student Leaders Conference to bring together the leaders of nanotechnology student groups from across the country. The conference will highlight undergraduate research and connect students with venture capitalists, entrepreneurs, and industry leaders.  Five universities have already committed to participating, led by the University of Virginia Nano and Emerging Technologies Club.
  • Brewer Science, through its Global Intern Program, is providing more than 30 students from high schools, colleges, and graduate schools across the country with hands-on experience in a wide range of functions within the company.  Brewer Science plans to increase the number of its science and engineering interns by 50% next year and has committed to sharing best practices with other nanotechnology businesses interested in how internship programs can contribute to a small company’s success.
  • The National Institute of Standards and Technology’s Center for Nanoscale Science and Technology is expanding its partnership with the National Science Foundation to provide hands-on experience for students in NSF’s Advanced Technology Education program. The partnership will now run year-round and will include opportunities for students at Hudson Valley Community College and the University of the District of Columbia Community College.
  • Federal agencies participating in the NNI [US National Nanotechnology Initiative], supported by the National Nanotechnology Coordination Office [NNCO], are launching multiple new activities aimed at educating students and the public about nanotechnology, including image and video contests highlighting student research, a new webinar series focused on providing nanotechnology information for K-12 teachers, and a searchable web portal on nano.gov of nanoscale science and engineering resources for teachers and professors.

Interestingly, May 20, 2015 is also the day the NNCO held its second webinar for small- and medium-size businesses in the nanotechnology community. You can find out more about that webinar and future ones by following the links in my May 13, 2015 posting.

Since the US White House announcement, OCSiAl has issued a May 26, 2015 news release which provides a brief history and more details about its newly expanded iNanoComm program,

OCSiAl launched the iNanoComm, which stands for the Integrated Nanotube Commercialization Award, program in February 2015 to help researchers lower the cost of their most promising R&D projects dedicated to SWCNT [single-walled carbon nanotube] applications. The first round received 33 applications from 28 university groups, including The Smalley-Curl Center for Nanoscale Science and Technology at Rice University and the Concordia Center for Composites at Concordia University [Canada] among others. [emphasis mine] The aim of iNanoComm is to stimulate universities and research organizations to develop innovative market products based on nano-augmented materials, also known as clean materials.

Now the program’s criteria are being broadened to enable greater private sector engagement in potential projects and the creation of partnerships in commercializing nanotechnology. The program will now support early stage commercialization efforts connected to university research in the form of start-ups, technology transfers, new businesses and university spinoffs to support the mass commercialization of SWCNT products and technologies.

The announcement of the program’s expansion took place at the 2015 Roundtable of the US NanoBusiness Commercialization Association (NanoBCA), the world’s first non-profit association focused on the commercialization of nanotechnologies. NanoBCA is dedicated to creating an environment that nurtures research and innovation in nanotechnology, promotes tech-transfer of nanotechnology from academia to industry, encourages private capital investments in nanotechnology companies, and helps its corporate members bring innovative nanotechnology products to market.

“Enhancing iNanoComm as a ‘start-up incubator’ is a concrete step in promoting single-wall carbon nanotube applications in the commercial world,” said Max Atanassov, CEO of OCSiAl USA. “It was the logical thing for us to do, now that high quality carbon nanotubes have become broadly available and are affordably priced to be used on a mass industrial scale.”

Vince Caprio, Executive Director of NanoBCA, added that “iNanoComm will make an important contribution to translating fundamental nanotechnology research into commercial products. By facilitating the formation of more start-ups, it will encourage more scientists to pursue their dreams and develop their ideas into commercially successful businesses.”

For more information on the program expansion and how it can reduce the cost of early stage research connected to university projects, visit the iNanoComm website at www.inanocomm.org or contact info@inanocomm.org.

h/t Azonano May 27, 2015 news item

I sing the body cyber: two projects funded by the US National Science Foundation

Points to anyone who recognized the reference to Walt Whitman’s poem, “I sing the body electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.

More practically, a May 15, 2015 news item on Nanowerk describes two cyber physical systems (CPS) research projects newly funded by the NSF,

Today [May 12, 2015] the National Science Foundation (NSF) announced two, five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).

One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ re-generation.

CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.

“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”

Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.

These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.

A May 12, 2015 NSF news release (also on EurekAlert), which originated the news item, describes the two CPS projects,

Bio-CPS for engineering living cells

A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.

“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”

The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.

In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.

Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.

Medical-CPS and the ‘Cyberheart’

CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]

Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.

“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.

The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.
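The project’s actual heart and device models are far more sophisticated than anything that fits in a blog post, but the closed-loop idea can be sketched in a few lines. Everything below (the heart model, the intervals, the thresholds, the safety property) is a hypothetical illustration of coupling a virtual heart to a virtual pacemaker and checking a timing property in simulation; it is not the CyberHeart platform itself,

```python
# Toy closed-loop simulation: a virtual heart coupled to a virtual pacemaker.
# All models and numbers are illustrative placeholders, not the project's.
import random

LRI_MS = 1000  # lower-rate interval: device paces if no beat within 1 s
URI_MS = 400   # upper-rate interval: device must never pace sooner than this

def virtual_heart():
    """Yield intrinsic beat-to-beat intervals (ms); 2000 ms models a pause."""
    while True:
        yield random.choice([800, 850, 900, 2000])

def paced_intervals(heart, n_beats=10_000):
    """Close the loop: the device cuts any pause short at the LRI."""
    for _, intrinsic in zip(range(n_beats), heart):
        yield min(intrinsic, LRI_MS)

# Safety check over the closed loop: pacing must never violate the URI.
violations = sum(1 for iv in paced_intervals(virtual_heart()) if iv < URI_MS)
print(f"URI violations in simulation: {violations}")
```

The real work replaces the toy heart with patient-specific cardiac dynamics and replaces the brute-force simulation check with formal analysis that can establish the property for all behaviors, not just the ones sampled.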

“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.

The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.

Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.

It is fascinating to observe how terminology is shifting from pacemakers and deep brain stimulators as implants to “CPS such as wearable sensors and implantable devices … .” A new category has been created, CPS, which conjoins medical devices with other sensing devices such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into or adding strange things to your body—microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. Here’s an excerpt from my March 19, 2013 post about a poll on synthetic biology concerns,

In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.

I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.

Synthesizing nerve tissues with 3D printers and cellulose nanocrystals (CNC)

There are lots of stories about bioprinting and tissue engineering here and I think it’s time (again) for one with some good, detailed descriptions and, bonus, it features cellulose nanocrystals (CNC) and graphene. From a May 13, 2015 news item on Azonano,

The printer looks like a toaster oven with the front and sides removed. Its metal frame is built up around a stainless steel circle lit by an ultraviolet light. Stainless steel hydraulics and thin black tubes line the back edge, which lead to an inner, topside box made of red plastic.

In front, the metal is etched with the red Bio Bot logo. All together, the gray metal frame is small enough to fit on top of an old-fashioned school desk, but nothing about this 3D printer is old school. In fact, the tissue-printing machine is more like a sci-fi future in the flesh—and it has very real medical applications.

Researchers at Michigan Technological University hope to use this newly acquired 3D bioprinter to make synthesized nerve tissue. The key is developing the right “bioink” or printable tissue. The nanotechnology-inspired material could help regenerate damaged nerves for patients with spinal cord injuries, says Tolou Shokuhfar, an assistant professor of mechanical engineering and biomedical engineering at Michigan Tech.

Shokuhfar directs the In-Situ Nanomedicine and Nanoelectronics Laboratory at Michigan Tech, and she is an adjunct assistant professor in the Bioengineering Department and the College of Dentistry at the University of Illinois at Chicago.

In the bioprinting research, Shokuhfar collaborates with Reza Shahbazian-Yassar, the Richard and Elizabeth Henes Associate Professor in the Department of Mechanical Engineering-Engineering Mechanics at Michigan Tech. Shahbazian-Yassar’s highly interdisciplinary background on cellulose nanocrystals as biomaterials, funded by the National Science Foundation’s (NSF) Biomaterials Program, helped inspire the lab’s new 3D printing research. “Cellulose nanocrystals with extremely good mechanical properties are highly desirable for bioprinting of scaffolds that can be used for live tissues,” says Shahbazian-Yassar. [emphases mine]

A May 11, 2015 Michigan Technological University (MTU) news release by Allison Mills, which originated the news item, explains the ‘why’ of the research,

“We wanted to target a big issue,” Shokuhfar says, explaining that nerve regeneration is a particularly difficult biomedical engineering conundrum. “We are born with all the nerve cells we’ll ever have, and damaged nerves don’t heal very well.”

Other facilities are trying to address this issue as well. Many feature large, room-sized machines that have built-in cell culture hoods, incubators and refrigeration. The precision of this equipment allows them to print full organs. But innovation is more nimble at smaller scales.

“We can pursue nerve regeneration research with a simpler printer set-up,” says Shayan Shafiee, a PhD student working with Shokuhfar. He gestures to the small gray box across the lab bench.

He opens the red box under the top side of the printer’s box. Inside the plastic casing, a large syringe holds a red jelly-like fluid. Shafiee replenishes the needle-tipped printer, pulls up his laptop and, with a hydraulic whoosh, he starts to print a tissue scaffold.

The news release expands on the theme,

At his lab bench in the nanotechnology lab at Michigan Tech, Shafiee holds up a petri dish. Inside is what looks like a red gummy candy, about the size of a half-dollar.

Here’s a video from MTU illustrating the printing process,

Back to the news release, which notes graphene could be instrumental in this research,

“This is based on fractal geometry,” Shafiee explains, pointing out the small crenulations and holes pockmarking the jelly. “These are similar to our vertebrae—the idea is to let a nerve pass through the holes.”

Making the tissue compatible with nerve cells begins long before the printer starts up. Shafiee says the first step is to synthesize a biocompatible polymer that is syrupy—but not too thick—that can be printed. That means Shafiee and Shokuhfar have to create their own materials to print with; there is no Amazon.com or even a specialty shop for bioprinting nerves.

Nerves don’t just need a biocompatible tissue to act as a carrier for the cells. Nerve function is all about electric pulses. This is where Shokuhfar’s nanotechnology research comes in: Last year, she was awarded a CAREER grant from NSF for her work using graphene in biomaterials research. [emphasis mine] “Graphene is a wonder material,” she says. “And it has very good electrical conductivity properties.”

The team is extending the application of this material for nerve cell printing. “Our work always comes back to the question, is it printable or not?” Shafiee says, adding that a successful material—a biocompatible, graphene-bound polymer—may just melt, mush or flat out fail under the pressure of printing. After all, imagine building up a substance more delicate than a soufflé using only the point of a needle. And in the nanotechnology world, a needlepoint is big, even clumsy.

Shafiee and Shokuhfar see these issues as mechanical obstacles that can be overcome.

“It’s like other 3D printers, you need a design to work from,” Shafiee says, adding that he will tweak and hone the methodology for printing nerve cells throughout his dissertation work. He is also hopeful that the material will have use beyond nerve regeneration.

This looks like a news release designed to publicize work funded at MTU by the US National Science Foundation (NSF), which is why there is no mention of published work.

One final comment regarding cellulose nanocrystals (CNC). They have also been called nanocrystalline cellulose (NCC), which you will still see but it seems CNC is emerging as the generic term. NCC has been trademarked by CelluForce, a Canadian company researching and producing CNC (or if you prefer, NCC) from forest products.

Entangling thousands of atoms

Quantum entanglement as an idea seems extraordinary to me, like something from a fevered imagination made possible only by certain kinds of hallucinogens. I suppose you could call theoretical physicists who’ve conceptualized entanglement a different breed, as they don’t seem to need chemical assistance for their flights of fancy, which turn out to be reality. Researchers at MIT (Massachusetts Institute of Technology) and the University of Belgrade (Serbia) have entangled thousands of atoms with a single photon, according to a March 26, 2015 news item on Nanotechnology Now,

Physicists from MIT and the University of Belgrade have developed a new technique that can successfully entangle 3,000 atoms using only a single photon. The results, published today in the journal Nature, represent the largest number of particles that have ever been mutually entangled experimentally.

The researchers say the technique provides a realistic method to generate large ensembles of entangled atoms, which are key components for realizing more-precise atomic clocks.

“You can make the argument that a single photon cannot possibly change the state of 3,000 atoms, but this one photon does — it builds up correlations that you didn’t have before,” says Vladan Vuletic, the Lester Wolfe Professor in MIT’s Department of Physics, and the paper’s senior author. “We have basically opened up a new class of entangled states we can make, but there are many more new classes to be explored.”

A March 26, 2015 MIT news release by Jennifer Chu (also on EurekAlert but dated March 25, 2015), which originated the news item, describes entanglement with particular attention to how it relates to atomic timekeeping,

Entanglement is a curious phenomenon: As the theory goes, two or more particles may be correlated in such a way that any change to one will simultaneously change the other, no matter how far apart they may be. For instance, if one atom in an entangled pair were somehow made to spin clockwise, the other atom would instantly be known to spin counterclockwise, even though the two may be physically separated by thousands of miles.

The phenomenon of entanglement, which physicist Albert Einstein once famously dismissed as “spooky action at a distance,” is described not by the laws of classical physics, but by quantum mechanics, which explains the interactions of particles at the nanoscale. At such minuscule scales, particles such as atoms are known to behave differently from matter at the macroscale.

Scientists have been searching for ways to entangle not just pairs, but large numbers of atoms; such ensembles could be the basis for powerful quantum computers and more-precise atomic clocks. The latter is a motivation for Vuletic’s group.

Today’s best atomic clocks are based on the natural oscillations within a cloud of trapped atoms. As the atoms oscillate, they act as a pendulum, keeping steady time. A laser beam within the clock, directed through the cloud of atoms, can detect the atoms’ vibrations, which ultimately determine the length of a single second.

“Today’s clocks are really amazing,” Vuletic says. “They would be less than a minute off if they ran since the Big Bang — that’s the stability of the best clocks that exist today. We’re hoping to get even further.”

The accuracy of atomic clocks improves as more and more atoms oscillate in a cloud. Conventional atomic clocks’ precision is proportional to the square root of the number of atoms: For example, a clock with nine times more atoms would only be three times as accurate. If these same atoms were entangled, a clock’s precision could be directly proportional to the number of atoms — in this case, nine times as accurate. The larger the number of entangled particles, then, the better an atomic clock’s timekeeping.
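In symbols, this is the textbook scaling from quantum metrology (my notation; the news release states it only in words),

```latex
% Phase (and hence frequency) uncertainty of a clock interrogating N atoms
\Delta\phi_{\text{SQL}} \propto \frac{1}{\sqrt{N}}  % unentangled atoms: standard quantum limit
\qquad
\Delta\phi_{\text{H}} \propto \frac{1}{N}           % entangled atoms: Heisenberg limit
```

So nine times as many unentangled atoms buys only a factor of three in precision (the square root of nine), while nine times as many entangled atoms buys the full factor of nine, exactly the example given above.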

It seems weak lasers make big entanglements possible (from the news release),

Scientists have so far been able to entangle large groups of atoms, although most attempts have only generated entanglement between pairs in a group. Only one team has successfully entangled 100 atoms — the largest mutual entanglement to date, and only a small fraction of the whole atomic ensemble.

Now Vuletic and his colleagues have successfully created a mutual entanglement among 3,000 atoms, virtually all the atoms in the ensemble, using very weak laser light — down to pulses containing a single photon. The weaker the light, the better, Vuletic says, as it is less likely to disrupt the cloud. “The system remains in a relatively clean quantum state,” he says.

The researchers first cooled a cloud of atoms, then trapped them in a laser trap, and sent a weak laser pulse through the cloud. They then set up a detector to look for a particular photon within the beam. Vuletic reasoned that if a photon has passed through the atom cloud without event, its polarization, or direction of oscillation, would remain the same. If, however, a photon has interacted with the atoms, its polarization rotates just slightly — a sign that it was affected by quantum “noise” in the ensemble of spinning atoms, with the noise being the difference in the number of atoms spinning clockwise and counterclockwise.
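Schematically, and in my notation rather than the paper’s, the rotation angle tracks the imbalance between the two spin populations, and the chance of catching a photon in the orthogonal polarization is small but diagnostic,

```latex
% Faraday-rotation heralding, schematically (my notation, not the paper's)
\theta \;\propto\; N_{\text{cw}} - N_{\text{ccw}}   % rotation angle tracks the spin imbalance
\qquad
P_{\perp} \;\approx\; \sin^{2}\theta \;\approx\; \theta^{2}  % chance the photon exits orthogonally polarized
```

Detecting one of those rare orthogonally polarized photons is the ‘herald’: it certifies that the photon interacted with the cloud, and that detection is what projects the whole ensemble into an entangled state.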

“Every now and then, we observe an outgoing photon whose electric field oscillates in a direction perpendicular to that of the incoming photons,” Vuletic says. “When we detect such a photon, we know that must have been caused by the atomic ensemble, and surprisingly enough, that detection generates a very strongly entangled state of the atoms.”

Vuletic and his colleagues are currently using the single-photon detection technique to build a state-of-the-art atomic clock that they hope will overcome what’s known as the “standard quantum limit” — a limit to how accurate measurements can be in quantum systems. Vuletic says the group’s current setup may be a step toward developing even more complex entangled states.

“This particular state can improve atomic clocks by a factor of two,” Vuletic says. “We’re striving toward making even more complicated states that can go further.”

This research was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.

Here’s a link to and a citation for the paper,

Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon by Robert McConnell, Hao Zhang, Jiazhong Hu, Senka Ćuk & Vladan Vuletić. Nature 519, 439–442 (26 March 2015). doi:10.1038/nature14293. Published online 25 March 2015.

This article is behind a paywall but there is a free preview via ReadCube Access.

This image illustrates the entanglement of a large number of atoms. The atoms, shown in purple, are mutually entangled with one another.
Image: Christine Daniloff/MIT and Jose-Luis Olivares/MIT

From monitoring glucose in kidneys to climate change in trees

That headline is almost poetic but I admit it’s a bit of a stretch rhyme-wise, kidneys/trees. In any event, a Feb. 6, 2015 news item on Azonano describes research into monitoring the effects of climate change on trees,

Serving as a testament to the far-reaching impact of Governor Andrew M. Cuomo’s commitment to maintaining New York State’s global leadership in nanotechnology innovation, SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) today announced the National Science Foundation (NSF) has awarded $837,000 to support development of a first of its kind nanoscale sensor to monitor the effects of climate change on trees.

A Feb. 5, 2015 SUNY Poly CNSE news release, which originated the news item, provides more details including information about the sensor’s link to measuring glucose in kidneys,

The NSF grant was generated through the Instrument Development for Biological Research (IDBR) program, which provides funds to develop new classes of devices for bio-related research. The NANAPHID, a novel aphid-like nanosensor, will provide real-time measurements of carbohydrates in live plant tissue. Carbohydrate levels in trees are directly connected to plant productivity, such as maple sap production and survival. The NANAPHID will enable researchers to determine the effects of a variety of environmental changes including temperature, precipitation, carbon dioxide, soil acidity, pests and pathogens. The nanosensor can also provide real-time monitoring of sugar concentration levels, which are of significant importance in maple syrup production and apple and grape farming.

“The technology for the NANAPHID is rooted in a nanoscale sensor SUNY Poly CNSE developed to monitor glucose levels in human kidneys being prepared for transplant. Our team determined that certain adjustments would enable the sensor to provide similar monitoring for plants, and provide a critical insight to the effects of climate change on the environment,” said Dr. James Castracane, professor and head of the Nanobioscience Constellation at SUNY Polytechnic Institute. “This is a perfect example of the cycle of innovation made possible through the ongoing nanotechnology research and development at SUNY Poly CNSE’s NanoTech Complex.”

“This new sensor will be used in several field experiments on measuring sensitivity of boreal forest to climate warming. Questions about forest response to rising air and soil temperatures are extremely important for forecasting future atmospheric carbon dioxide levels, climate change and forest health,” said Dr. Andrei Lapenas, principal investigator and associate professor of climatology at the University at Albany. “At the same time, we already see some potential commercial application for NANAPHID-type sensors in agriculture, food industry and other fields. Our collaboration with SUNY Poly CNSE has been extremely productive and I look forward to continuing our work together.”

The NANAPHID project began in 2014 with a $135,000 SUNY Research Foundation Network of Excellence grant. SUNY Poly CNSE will receive $400,000 of the NSF award for the manufacturing aspects of the sensor array development and testing. The remaining funds will be shared between Dr. Lapenas and researchers Dr. Ruth Yanai (ESF), Dr. Thomas Horton (ESF), and Dr. Pamela Templer (Boston University) for data collection and analysis.

“With current technology, analyzing carbohydrates in plant tissues requires hours in the lab or more than $100 a sample if you want to send them out. And you can’t sample the same tissue twice, the sample is destroyed in the analysis,” said Dr. Yanai. “The implantable device will be cheap to produce and will provide continuous monitoring of sugar concentrations, which is orders of magnitude better in both cost and in the information provided. Research questions we never dreamed of asking before will become possible, like tracking changes in photosynthate over the course of a day or along the stem of a plant, because it’s a nondestructive assay.”

“I see incredible promise for the NANAPHID device in plant ecology. We can use the sensors at the root tip where plants give sugars to symbiotic fungi in exchange for soil nutrients,” said Dr. Horton. “Some fungi are believed to be significant carbon sinks because they produce extensive fungal networks in soils and we can use the sensors to compare the allocation of photosynthate to roots colonized by these fungi versus the allocation to less carbon demanding fungi. Further, the vast majority of these symbiotic fungi cannot be cultured in lab. These sensors will provide valuable insights into plant-microbe interactions under field conditions.”

“The creation of this new sensor will make understanding the effects of a variety of environmental changes, including climate change, on the health and productivity of forests much easier to measure,” said Dr. Templer. “For the first time, we will be able to measure concentrations of carbohydrates in living trees continuously and in real-time, expanding our ability to examine controls on photosynthesis, sap flow, carbon sequestration and other processes in forest ecosystems.”

Fascinating, eh? I wonder who made the connection between human kidneys and plants and how that person made the connection.

Poopy gold, silver, platinum, and more

In the future, gold rushes could occur in sewage plants. Researchers investigating waste and the passage of nanoparticles (gold, silver, platinum, etc.) into our water have found precious metals in large quantities. From a Jan. 29, 2015 news article by Adele Peters for Fast Company (Note: Links have been removed),

One unlikely potential source of gold, silver, platinum, and other metals: Sewage sludge. A new study estimates that in a city of a million people, $13 million of metals could be collecting in sewage every year, or $280 per ton of sludge. There’s gold (and silver, copper, and platinum) in them thar poop.

Funded in part by a grant for “nano-prospecting,” the researchers looked at a huge sample of sewage from cities across the U.S., and then studied several specific waste treatment plants. “Initially we thought gold was at just one or two hotspots, but we find it even in smaller wastewater treatment plants,” says Paul Westerhoff, an engineering professor at Arizona State University, who led the new study.

Some of the metals likely come from a variety of sources—we may ingest tiny particles of silver, for example, when we eat with silverware or when we drink water from pipes that have silver alloys. Medical diagnostic tools often use gold or silver. …

The metallic particles Peters is describing are nanoparticles, some of which are naturally occurring as she notes, but, increasingly, we are dealing with engineered nanoparticles making their way into the environment.

Engineered or naturally occurring, a shocking quantity of these metallic nanoparticles can be found in our sewage. For example, a waste treatment centre in Japan recorded 1,890 grammes of gold per tonne of ash from incinerated sludge as compared to the 20 – 40 grammes of gold per tonne of ore recovered from one of the world’s top producing gold mines (Miho Yoshikawa’s Jan. 30, 2009 article for Reuters).
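Putting those two figures side by side gives a sense of the grade,

```latex
\frac{1890~\text{g Au per tonne of incinerated sludge ash}}
     {20\text{--}40~\text{g Au per tonne of mined ore}}
\;\approx\; 47\text{--}95\times
```

that is, roughly fifty to nearly a hundred times the gold concentration of ore from a top-producing mine.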

While finding it is one thing, extracting it is going to be something else as Paul Westerhoff notes in Peters’ article. For the curious, here’s a link to and a citation for the research paper,

Characterization, Recovery Opportunities, and Valuation of Metals in Municipal Sludges from U.S. Wastewater Treatment Plants Nationwide by Paul Westerhoff, Sungyun Lee, Yu Yang, Gwyneth W. Gordon, Kiril Hristovski, Rolf U. Halden, and Pierre Herckes. Environ. Sci. Technol., Article ASAP. DOI: 10.1021/es505329q. Publication Date (Web): January 12, 2015.

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

On a completely other topic, this is the first time I’ve noticed this type of note prepended to an abstract,

 Note

This article published January 26, 2015 with errors throughout the text. The corrected version published January 27, 2015.

Getting back to the topic at hand, I checked into nano-prospecting and found this Sept. 19, 2013 Arizona State University news release describing the project launch,

Growing use of nanomaterials in manufactured products is heightening concerns about their potential environmental impact – particularly in water resources.

Tiny amounts of materials such as silver, titanium, silica and platinum are being used in fabrics, clothing, shampoos, toothpastes, tennis racquets and even food products to provide antibacterial protection, self-cleaning capability, food texture and other benefits.

Nanomaterials are also put into industrial polishing agents and catalysts, and are released into the environment when used.

As more of these products are used and disposed of, increasing amounts of the nanomaterials are accumulating in soils, waterways and water-systems facilities. That’s prompting efforts to devise more effective ways of monitoring the movement of the materials and assessing their potential threat to environmental safety and human health.

Three Arizona State University faculty members will lead a research project to help improve methods of gathering accurate information about the fate of the materials and predicting when, where and how they may pose a hazard.

Their “nanoprospecting” endeavor is supported by a recently awarded $300,000 grant from the National Science Foundation.

You can find out more about Paul Westerhoff and his work here.

Bone implants and restorative dentistry at the University of Malaya

The research into biomedical implants at the University of Malaya is part of an international effort and is in response to a demographic reality: hugely increased populations of the aged. From a Sept. 18, 2014 news item on ScienceDaily,

A major success in developing new biomedical implants with the ability to accelerate bone healing has been reported by a group of scientists from the Department of Restorative Dentistry, University of Malaya. This stems from a project partly funded by HIR [High Impact Research] and also involves Mr. Alireza Yaghoubi, HIR Young Scientist.

According to WHO (World Health Organization), between 2000 and 2050, the world’s population over 60 years is expected to increase from 605 million to more than 2 billion. This trend is particularly more prominent in Asia and Europe where in some countries by 2050, the majority of people will be older than 50. That is why in recent years, regenerative medicine has been among the most active and well-funded research areas in many developing nations.

As part of this global effort to realize better treatments for age-related conditions, a group of scientists from the department of restorative dentistry, University of Malaya and four other universities in the US have recently reported a major success in developing new biomedical implants with the ability to accelerate bone healing.

Two studies were published, according to the Sept. 15, 2014 University of Malaya news release, which originated the news item,

The two studies, funded by the National Science Foundation (NSF) in the US and the High Impact Research (HIR) program in Malaysia, tackled the issue of bone-implant integration from different angles. In the first study, appearing on the front cover of the July issue of Applied Surface Science, researchers demonstrated a mechanically superior bioactive coating based on magnesium silicates rather than the commercially available calcium phosphate, which develops microcracks during preparation and delaminates under pressure. The new material, owing to its lower thermal mismatch with titanium, can prolong the durability of load-bearing orthopedic implants and reduce chances of post-surgery complications.

The other study published in the American Chemical Society’s Applied Materials & Interfaces reported a method for fabricating titanium implants with special surface topographies which double the chance of cell viability in early stages. The new technique is also much simpler as compared to the existing ones and therefore enables the preparation of personalized implants at the fraction of time and cost while offering a higher mechanical reliability.

Alireza Yaghoubi, the corresponding author of both studies believes that we are moving toward a future of personalized products. “It is very much like your taste in music and TV shows. People are different and the new trend in biotechnology is to make personalized medicine that matches the patient’s needs” Yaghoubi said. He continued “With regard to implants, we have the problem of variations in bone density in patients with osteoporosis and in some cases, even healthy individuals. Finding ways to integrate the implants with bone tissues can be challenging. There are also problems with the long-term performance of implants, such as release of debris from bioactive films which can potentially lead to osteolysis and chronic inflammation”.

The new technique employed by the scientists to create titanium implants with desirable surface properties uses microwave heating to create a porosity gradient on top of a dense core. The principles are very similar to a kitchen microwave and how it can make cooking easier, however apparently the fast heating capability is not only useful in cooking but it has numerous industrial applications. Prof. Bhaduri, the Director of Multi-functional materials laboratory at University of Toledo says that they have been using microwave for years to simplify fabrication of complex metallic components. “We needed a way to streamline the process and microwave sintering was a natural fit. With our new method, making the implant from titanium powder in custom sizes and with specific surface topographies is achieved through one easy step.” Bhaduri elaborated.

Researchers are hoping to carry out the clinical trial for this new generation of implants in order to make them available to the market soon. Dr. Kutty, one of the lead authors suggests that there is still room for improvement. Kutty concluded that “Roughened surfaces and bioceramics have desirable effects on osseointegration, but we are not stopping there. We are now developing new ways to use peptides for enhancing the performance of implants even further.”

This image provides an illustration of the proposed new material for implants,

The artwork, which appeared on the front cover of Applied Surface Science, summarizes the benefits of a new bioceramic coating versus the commercially available calcium phosphate, which develops microcracks during processing and may later cause osteolysis in load-bearing orthopedic implants. Courtesy: University of Malaya

Here are links to and citations for the papers,

Electrophoretic deposition of magnesium silicates on titanium implants: Ion migration and silicide interfaces by M. Afshar-Mohajer, A. Yaghoubi, S. Ramesh, A.R. Bushroa, K.M.C. Chin, C.C. Tin, and W.S. Chiu. Applied Surface Science, Volume 307, 15 July 2014, Pages 1–6. DOI: 10.1016/j.apsusc.2014.04.033.

Microwave-assisted Fabrication of Titanium Implants with Controlled Surface Topography for Rapid Bone Healing by Muralithran G. Kutty, Alok De, Sarit B. Bhaduri, and Alireza Yaghoubi. ACS Appl. Mater. Interfaces, 2014, 6 (16), pp 13587–13593. DOI: 10.1021/am502967n. Publication Date (Web): August 6, 2014.

Copyright © 2014 American Chemical Society

Both of these papers are behind paywalls.

Robo Brain: a new robot learning project

Having covered the RoboEarth project (a European Union funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and most recently in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.
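As a minimal sketch of that idea in Python (all concepts, relations, and probabilities below are hypothetical placeholders; Robo Brain’s actual representation and query machinery are far richer),

```python
# Toy probabilistic knowledge graph in the spirit described above.
# Nodes are concepts; edges carry a relation and a confidence value.
from collections import defaultdict

graph = defaultdict(list)  # node -> [(relation, neighbor, probability)]

def add_edge(src, relation, dst, p):
    graph[src].append((relation, dst, p))

# Hypothetical entries, not Robo Brain's actual data
add_edge("coffee_mug", "is_a", "container", 0.95)
add_edge("coffee_mug", "grasped_by", "handle", 0.90)
add_edge("coffee_mug", "carried", "upright_when_full", 0.85)
add_edge("container", "can_hold", "liquid", 0.99)

def query(start, min_p=0.5, max_depth=2):
    """Walk the graph, keeping chains whose joint probability stays above min_p."""
    results, stack = [], [(start, [], 1.0)]
    while stack:
        node, path, p = stack.pop()
        for rel, nxt, edge_p in graph.get(node, []):
            joint = p * edge_p
            if joint >= min_p:
                chain = path + [(node, rel, nxt)]
                results.append((chain, joint))
                if len(chain) < max_depth:
                    stack.append((nxt, chain, joint))
    return results

for chain, p in query("coffee_mug"):
    print(f"{p:.2f}  " + " -> ".join(f"{s} [{r}] {d}" for s, r, d in chain))
```

The matching-within-probability-limits step the news release describes corresponds to the `min_p` threshold here: a chain of facts counts as a match only while its joint confidence stays high enough.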

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Intel’s 14nm chip: architecture revealed and a scientist discusses the limits of computation

Anxieties about how much longer we can design and manufacture smaller, faster computer chips are commonplace even as companies continue to announce new, faster, smaller chips. Just before the US National Science Foundation (NSF) issued a press release concerning a Nature (journal) essay on the limits of computation, Intel announced a new microarchitecture for its 14nm chips.

First, there’s Intel. In an Aug. 12, 2014 news item on Azonano, Intel announced its newest microarchitecture optimization,

Intel today disclosed details of its newest microarchitecture that is optimized with Intel’s industry-leading 14nm manufacturing process. Together these technologies will provide high-performance and low-power capabilities to serve a broad array of computing needs and products from the infrastructure of cloud computing and the Internet of Things to personal and mobile computing.

An Aug. 11, 2014 Intel news release, which originated the news item, lists key points,

  • Intel disclosed details of the microarchitecture of the Intel® Core™ M processor, the first product to be manufactured using 14nm.
  • The combination of the new microarchitecture and manufacturing process will usher in a wave of innovation in new form factors, experiences and systems that are thinner and run silent and cool.
  • Intel architects and chip designers have achieved greater than two times reduction in the thermal design point when compared to a previous generation of processor while providing similar performance and improved battery life.
  • The new microarchitecture was optimized to take advantage of the new capabilities of the 14nm manufacturing process.
  • Intel has delivered the world’s first 14nm technology in volume production. It uses second-generation Tri-gate (FinFET) transistors with industry-leading performance, power, density and cost per transistor.
  • Intel’s 14nm technology will be used to manufacture a wide range of high-performance to low-power products including servers, personal computing devices and Internet of Things.
  • The first systems based on the Intel® Core™ M processor will be on shelves for the holiday selling season followed by broader OEM availability in the first half of 2015.
  • Additional products based on the Broadwell microarchitecture and 14nm process technology will be introduced in the coming months.

The company has made available supporting materials including videos titled ‘Advancing Moore’s Law in 2014’, ‘Microscopic Mark Bohr: 14nm Explained’, and ‘Intel 14nm Manufacturing Process’, which can be found here. An earlier mention of Intel and its 14nm manufacturing process can be found in my July 9, 2014 posting.

Meanwhile, in a more contemplative mood, Igor Markov of the University of Michigan has written an essay for Nature questioning the limits of computation as per an Aug. 14, 2014 news item on Azonano,

From their origins in the 1940s as sequestered, room-sized machines designed for military and scientific use, computers have made a rapid march into the mainstream, radically transforming industry, commerce, entertainment and governance while shrinking to become ubiquitous handheld portals to the world.

This progress has been driven by the industry’s ability to continually innovate techniques for packing increasing amounts of computational circuitry into smaller and denser microchips. But with miniature computer processors now containing millions of closely-packed transistor components of near atomic size, chip designers are facing both engineering and fundamental limits that have become barriers to the continued improvement of computer performance.

Have we reached the limits to computation?

In a review article in this week’s issue of the journal Nature, Igor Markov of the University of Michigan reviews limiting factors in the development of computing systems to help determine what is achievable, identifying “loose” limits and viable opportunities for advancements through the use of emerging technologies. His research for this project was funded in part by the National Science Foundation (NSF).

An Aug. 13, 2014 NSF news release, which originated the news item, describes Markov’s Nature essay in greater detail,

“Just as the second law of thermodynamics was inspired by the discovery of heat engines during the industrial revolution, we are poised to identify fundamental laws that could enunciate the limits of computation in the present information age,” says Sankar Basu, a program director in NSF’s Computer and Information Science and Engineering Directorate. “Markov’s paper revolves around this important intellectual question of our time and briefly touches upon most threads of scientific work leading up to it.”

The article summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

“What are these limits, and are some of them negotiable? On which assumptions are they based? How can they be overcome?” asks Markov. “Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”

Limits related to materials and manufacturing are immediately perceptible. In a material layer ten atoms thick, missing one atom due to imprecise manufacturing changes electrical parameters by ten percent or more. Shrinking designs of this scale further inevitably leads to quantum physics and associated limits.

Limits related to engineering are dependent upon design decisions, technical abilities and the ability to validate designs. While very real, these limits are difficult to quantify. However, once the premises of a limit are understood, obstacles to improvement can potentially be eliminated. One such breakthrough has been in writing software to automatically find, diagnose and fix bugs in hardware designs.

Limits related to power and energy have been studied for many years, but only recently have chip designers found ways to improve the energy consumption of processors by temporarily turning off parts of the chip. There are many other clever tricks for saving energy during computation. But moving forward, silicon chips will not maintain the pace of improvement without radical changes. Atomic physics suggests intriguing possibilities but these are far beyond modern engineering capabilities.

Limits relating to time and space can be felt in practice. The speed of light, while a very large number, limits how fast data can travel. Traveling through copper wires and silicon transistors, a signal can no longer traverse a chip in one clock cycle today. A formula limiting parallel computation in terms of device size, communication speed and the number of available dimensions has been known for more than 20 years, but only recently has it become important now that transistors are faster than interconnections. This is why alternatives to conventional wires are being developed, but in the meantime mathematical optimization can be used to reduce the length of wires by rearranging transistors and other components.
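A rough back-of-envelope calculation (my numbers, for illustration) shows why. Even light in a vacuum covers only about

```latex
\frac{c}{f} \;=\; \frac{3\times10^{8}~\text{m/s}}{3\times10^{9}~\text{Hz}}
\;=\; 10~\text{cm per clock cycle at 3 GHz}
```

and signals in resistance- and capacitance-limited copper interconnect propagate at a small fraction of that, so crossing even a die a couple of centimetres across within a single cycle is out of reach.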

Several key limits related to information and computational complexity have been reached by modern computers. Some categories of computational tasks are conjectured to be so difficult to solve that no proposed technology, not even quantum computing, promises consistent advantage. But studying each task individually often helps reformulate it for more efficient computation.

When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it. Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.

What about breakthrough technologies? New techniques and materials can be helpful in several ways and can potentially be “game changers” with respect to traditional limits. For example, carbon nanotube transistors provide greater drive strength and can potentially reduce delay, decrease energy consumption and shrink the footprint of an overall circuit. On the other hand, fundamental limits – sometimes not initially anticipated – tend to obstruct new and emerging technologies, so it is important to understand them before promising a new revolution in power, performance and other factors.

“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”

Here’s a link to and a citation for Markov’s article,

Limits on fundamental limits to computation by Igor L. Markov. Nature 512, 147–154 (14 August 2014). doi:10.1038/nature13570. Published online 13 August 2014.

This paper is behind a paywall but a free preview is available via ReadCube Access.

It’s a fascinating question, what are the limits? It’s one being asked not only with regard to computation but also to medicine, human enhancement, and artificial intelligence for just a few areas of endeavour.