Tag Archives: Cornell University

Robo Brain; a new robot learning project

Having covered the RoboEarth project (a European Union-funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and, most recently, in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
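Since the release describes the knowledge base as a probabilistic graph, here is a minimal sketch of the idea in Python (my own illustration with invented facts and numbers, not the actual Robo Brain code): concepts are nodes, relations are edges, each connection carries a probability, and a query succeeds when the queried chain matches the knowledge base within the probability limits.

# Toy knowledge base: (node, relation, node) -> probability.
knowledge = {
    ("mug", "is_a", "container"): 0.98,
    ("mug", "grasped_by", "handle"): 0.90,
    ("mug", "carried", "upright_when_full"): 0.85,
    ("easy_chair", "is_a", "chair"): 0.99,
    ("chair", "is_a", "furniture"): 0.99,
    ("sitting", "done_on", "chair"): 0.95,
    ("sitting", "done_on", "lawn"): 0.70,
}

def query(chain, min_p=0.75):
    """Does every link in the queried chain match the knowledge base
    within the given probability limit?"""
    return [(fact, knowledge.get(fact, 0.0) >= min_p) for fact in chain]

# A robot that has just seen a mug asks what it is and how to carry it.
print(query([("mug", "is_a", "container"),
             ("mug", "carried", "upright_when_full")]))

A real system would of course be vastly larger (Jami compares it to the Milky Way), but the idea of matching a chain against probability limits is the same.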

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language, but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Two-organ tests (body-on-a-chip) show liver damage possible from nanoparticles

This is the first time I’ve seen testing of two organs for possible adverse effects from nanoparticles. In this case, the researchers were especially interested in the liver. From an Aug. 12, 2014 news item on Azonano,

Nanoparticles in food, sunscreen and other everyday products have many benefits. But Cornell [University] biomedical scientists are finding that at certain doses, the particles might cause human organ damage.

A recently published study in Lab on a Chip, a Royal Society of Chemistry journal, led by senior research associate Mandy Esch shows that nanoparticles injure liver cells in microfluidic devices designed to mimic organs of the human body. The injury was worse in two-organ systems than in single organs – potentially raising concerns for humans and animals.

Anne Ju’s Aug. 11, 2014 article for Cornell University’s Chronicle describes the motivation for this work and the research itself in more detail,

“We are looking at the effects of what are considered to be harmless nanoparticles in humans,” Esch said. “These particles are not necessarily lethal, but … are there other consequences? We’re looking at the non-lethal consequences.”

She used 50-nanometer carboxylated polystyrene nanoparticles, found in some animal food sources and considered model inert particles. Shuler’s lab [Michael L. Shuler, a co-author of the study] specializes in “body-on-a-chip” microfluidics, which are engineered chips with carved compartments that contain cell cultures to represent the chemistry of individual organs.

In Esch’s experiment, she made a human intestinal compartment, a liver compartment and a compartment to represent surrounding tissues in the body. She then observed the effects of fluorescently labeled nanoparticles as they traveled through the system.

Esch found that both single nanoparticles and small clusters crossed the gastrointestinal barrier and reached liver cells, and the liver cells released an enzyme called aspartate transaminase, known to be released during cell death or damage.

It’s unclear exactly what damage is occurring or why, but the results indicate that the nanoparticles must be undergoing changes as they cross the gastrointestinal barrier, and that these alterations may change their toxic potential, Esch said. Long-term consequences for organs in proximity could be a concern, she said.

“The motivation behind this study was twofold: one, to show that multi-organ, in vitro systems give us more information when testing for the interaction of a substance with the human body, and two … to look at nanoparticles because they have a huge potential for medicine, yet adverse effects have not been studied in detail yet,” Esch said.

Mary Macleod’s July 3, 2014 article for Chemistry World features a diagram of the two-organ system and more technical details about the research,

Schematic of the two-organ system [downloaded from http://www.rsc.org/chemistryworld/2014/07/nanoparticle-liver-gastrointestinal-tract-microfluidic-chip]

HepG2/C3A cells were used to represent the liver, with the intestinal cell co-culture consisting of enterocytes (Caco-2) and mucin-producing (HT29-MTX) cells. Carboxylated polystyrene nanoparticles were fluorescently labelled so their movement between the chambers could be tracked. Levels of aspartate transaminase, a cytosolic enzyme released into the culture medium upon cell death, were measured to give an indication of liver damage.

The study saw that single nanoparticles and smaller nanoparticle aggregates were able to cross the GI barrier and reach the liver cells. The increased zeta potentials of these nanoparticles suggest that crossing the barrier may raise their toxic potential. However, larger nanoparticles, which interact with cell membranes and aggregate into clusters, were stopped much more effectively by the GI tract barrier.

The gastrointestinal tract is an important barrier preventing ingested substances crossing into systemic circulation. Initial results indicate that soluble mediators released upon low-level injury to liver cells may enhance the initial injury by damaging the cells which form the GI tract. These adverse effects were not seen in conventional single-organ tests.

Here’s a link to and a citation for the paper,

Body-on-a-chip simulation with gastrointestinal tract and liver tissues suggests that ingested nanoparticles have the potential to cause liver injury by Mandy B. Esch, Gretchen J. Mahler, Tracy Stokol, and Michael L. Shuler. Lab Chip, 2014, 14, 3081-3092. DOI: 10.1039/C4LC00371C. First published online 27 Jun 2014

This paper is open access until Aug. 12, 2014.

While this research is deeply concerning, it should be noted the researchers are being very careful in their conclusions as per Ju’s article, “It’s unclear exactly what damage is occurring or why, but the results indicate that the nanoparticles must be undergoing changes as they cross the gastrointestinal barrier, and that these alterations may change their toxic potential … Long-term consequences for organs in proximity could be a concern … .”

TrueNorth, a brain-inspired chip architecture from IBM and Cornell University

As a Canadian, I find “true north” is invariably followed by “strong and free” when singing our national anthem. For many Canadians it is almost the only phrase remembered without hesitation. Consequently, some of the buzz surrounding the publication of a paper celebrating ‘TrueNorth’, a brain-inspired chip, is a bit disconcerting. Nonetheless, here is the latest news from IBM (in collaboration with Cornell University), from an Aug. 8, 2014 news item on Nanowerk,

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

An Aug. 7, 2014 IBM news release, which originated the news item, provides an overview of the multi-year process this breakthrough represents (Note: Links have been removed),

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist—an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other—building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs, Ltd.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech [aka Cornell University] and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20 mW/cm², which is nearly four orders of magnitude less than today’s microprocessors.

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.
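Before moving on, the headline numbers are easy to sanity-check. A back-of-the-envelope calculation (my own arithmetic; the 256-neuron, 256×256-synapse core layout is from IBM’s published description of the chip):

cores = 4096                      # 64 x 64 on-chip mesh of neurosynaptic cores
neurons_per_core = 256            # per the published core design
synapses_per_core = 256 * 256     # each core is a 256 x 256 crossbar

print(cores * neurons_per_core)   # 1,048,576 -> the "one million neurons"
print(cores * synapses_per_core)  # 268,435,456 -> "256 million synapses"
                                  # (256 million counted in powers of two)

# The 16-chip demonstration system scales linearly:
print(16 * cores * neurons_per_core)   # ~16.8 million neurons
print(16 * cores * synapses_per_core)  # ~4.3 billion synapses

# Power density: 20 mW/cm2 against a conventional CPU at roughly
# 50-100 W/cm2 (a typical figure, my assumption) is a factor of
# 2,500-5,000 -- "nearly four orders of magnitude."
print(50 / 0.020)                 # 2500x at the conservative end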

I have two articles that may prove of interest. Peter Stratton’s Aug. 7, 2014 article for The Conversation provides an easy-to-read introduction to both brains, human and computer (as they apply to this research), and to TrueNorth (h/t phys.org, which also hosts Stratton’s article). There’s also an Aug. 7, 2014 article by Rob Farber for techenablement.com, which includes information from a range of text and video sources about TrueNorth and cognitive computing, as it’s also known (well worth checking out).

Here’s a link to and a citation for the paper,

A million spiking-neuron integrated circuit with a scalable communication network and interface by Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. Science 8 August 2014: Vol. 345 no. 6197 pp. 668-673 DOI: 10.1126/science.1254642

This paper is behind a paywall.

Graphene-based sensor mimics pain (mu-opioid) receptor

I once had a job where I had to perform literature searches and read papers on pain research as it related to morphine tolerance. Not a pleasant task, it has left me eager to encourage and write about alternatives to animal testing, a key component of pain research. So, with a ‘song in my heart’, I feature this research from the University of Pennsylvania written up in a May 12, 2014 news item on ScienceDaily,

Almost every biological process involves sensing the presence of a certain chemical. Finely tuned over millions of years of evolution, the body’s different receptors are shaped to accept certain target chemicals. When they bind, the receptors tell their host cells to produce nerve impulses, regulate metabolism, defend the body against invaders or myriad other actions depending on the cell, receptor and chemical type.

Now, researchers from the University of Pennsylvania have led an effort to create an artificial chemical sensor based on one of the human body’s most important receptors, one that is critical in the action of painkillers and anesthetics. In these devices, the receptors’ activation produces an electrical response rather than a biochemical one, allowing that response to be read out by a computer.

By attaching a modified version of this mu-opioid receptor to strips of graphene, they have shown a way to mass produce devices that could be useful in drug development and a variety of diagnostic tests. And because the mu-opioid receptor belongs to the most common class of such chemical sensors, the findings suggest that the same technique could be applied to detect a wide range of biologically relevant chemicals.

A May 6, 2014 University of Pennsylvania news release, which originated the news item, describes the main teams involved in this research along with why and how they worked together (Note: Links have been removed),

The study, published in the journal Nano Letters, was led by A.T. Charlie Johnson, director of Penn’s Nano/Bio Interface Center and professor of physics in Penn’s School of Arts & Sciences; Renyu Liu, assistant professor of anesthesiology in Penn’s Perelman School of Medicine; and Mitchell Lerner, then a graduate student in Johnson’s lab. It was made possible through a collaboration with Jeffery Saven, professor of chemistry in Penn Arts & Sciences. The Penn team also worked with researchers from Seoul National University in South Korea.

Their study combines recent advances from several disciplines.

Johnson’s group has extensive experience attaching biological components to nanomaterials for use in chemical detectors. Previous studies have involved wrapping carbon nanotubes with single-stranded DNA to detect odors related to cancer and attaching antibodies to nanotubes to detect the presence of the bacteria associated with Lyme disease.

After Saven and Liu addressed these problems with the redesigned receptor, they saw that it might be useful to Johnson, who had previously published a study on attaching a similar receptor protein to carbon nanotubes. In that case, the protein was difficult to grow genetically, and Johnson and his colleagues also needed to include additional biological structures from the receptors’ natural membranes in order to keep them stable.

In contrast, the computationally redesigned protein could be readily grown and attached directly to graphene, opening up the possibility of mass producing biosensor devices that utilize these receptors.

“Due to the challenges associated with isolating these receptors from their membrane environment without losing functionality,” Liu said, “the traditional methods of studying them involved indirectly investigating the interactions between opioid and the receptor via radioactive or fluorescent labeled ligands, for example. This multi-disciplinary effort overcame those difficulties, enabling us to investigate these interactions directly in a cell free system without the need to label any ligands.”

With Saven and Liu providing a version of the receptor that could stably bind to sheets of graphene, Johnson’s team refined their process of manufacturing those sheets and connecting them to the circuitry necessary to make functional devices.

The news release provides more technical details about the graphene sensor,

“We start by growing a piece of graphene that is about six inches wide by 12 inches long,” Johnson said. “That’s a pretty big piece of graphene, but we don’t work with the whole thing at once. Mitchell Lerner, the lead author of the study, came up with a very clever idea to cut down on chemical contamination. We start with a piece that is about an inch square, then separate them into ribbons that are about 50 microns across.

“The nice thing about these ribbons is that we can put them right on top of the rest of the circuitry, and then go on to attach the receptors. This really reduces the potential for contamination, which is important because contamination greatly degrades the electrical properties we measure.”

Because the mechanism by which the device reports on the presence of the target molecule relies only on the receptor’s proximity to the nanostructure when it binds to the target, Johnson’s team could employ the same chemical technique for attaching the antibodies and other receptors used in earlier studies.

Once attached to the ribbons, the opioid receptors would produce changes in the surrounding graphene’s electrical properties whenever they bound to their target. Those changes would then produce electrical signals that would be transmitted to a computer via neighboring electrodes.

The high reliability of the manufacturing process — only one of the 193 devices on the chip failed — enables applications in both clinical diagnostics and further research. [emphasis mine]

“We can measure each device individually and average the results, which greatly reduces the noise,” said Johnson. “Or you could imagine attaching 10 different kinds of receptors to 20 devices each, all on the same chip, if you wanted to test for multiple chemicals at once.”
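Johnson’s point about averaging is worth unpacking: for independent noise, averaging N devices shrinks the random error by roughly the square root of N, so 192 devices cut it about 14-fold. A toy simulation (my own, with invented numbers, not the team’s data):

import random

TRUE_RESPONSE = 1.0   # hypothetical "real" sensor response
NOISE = 0.2           # hypothetical per-device noise level
DEVICES = 192         # working devices on the one-inch chip

def reading():
    # One device's measurement: the true response plus random noise.
    return random.gauss(TRUE_RESPONSE, NOISE)

single = reading()
averaged = sum(reading() for _ in range(DEVICES)) / DEVICES
print(f"single device:   {single:.3f}")
print(f"192-device mean: {averaged:.3f}")  # error ~ NOISE/sqrt(192), ~14x smaller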

In the researchers’ experiment, they tested their devices’ ability to detect the concentration of a single type of molecule. They used naltrexone, a drug used in alcohol and opioid addiction treatment, because it binds to and blocks the natural opioid receptors that produce the narcotic effects patients seek.

“It’s not clear whether the receptors on the devices are as selective as they are in the biological context,” Saven said, “as the ones on your cells can tell the difference between an agonist, like morphine, and an antagonist, like naltrexone, which binds to the receptor but does nothing. By working with the receptor-functionalized graphene devices, however, not only can we make better diagnostic tools, but we can also potentially get a better understanding of how the biomolecular system actually works in the body.”

“Many novel opioids have been developed over the centuries,” Liu said. “However, none of them has achieved potent analgesic effects without notorious side effects, including devastating addiction and respiratory depression. This novel tool could potentially aid the development of new opioids that minimize these side effects.”

Wherever these devices find applications, they are a testament to the potential usefulness of the Nobel-prize winning material they are based on.

“Graphene gives us an advantage,” Johnson said, “in that its uniformity allows us to make 192 devices on a one-inch chip, all at the same time. There are still a number of things we need to work out, but this is definitely a pathway to making these devices in large quantities.”

There is no mention of animal research but it seems likely to me that this work could lead to a decreased use of animals in pain research.

This project must have been quite something as it involved collaboration across many institutions (from the news release),

Also contributing to the study were Gang Hee Han, Sung Ju Hong and Alexander Crook of Penn Arts & Sciences’ Department of Physics and Astronomy; Felipe Matsunaga and Jin Xi of the Department of Anesthesiology at the Perelman School of Medicine, José Manuel Pérez-Aguilar of Penn Arts & Sciences’ Department of Chemistry; and Yung Woo Park of Seoul National University. Mitchell Lerner is now at SPAWAR Systems Center Pacific, Felipe Matsunaga at Albert Einstein College of Medicine, José Manuel Pérez-Aguilar at Cornell University and Sung Ju Hong at Seoul National University.

Here’s a link to and a citation for the paper,

Scalable Production of Highly Sensitive Nanosensors Based on Graphene Functionalized with a Designed G Protein-Coupled Receptor by Mitchell B. Lerner, Felipe Matsunaga, Gang Hee Han, Sung Ju Hong, Jin Xi, Alexander Crook, Jose Manuel Perez-Aguilar, Yung Woo Park, Jeffery G. Saven, Renyu Liu, and A. T. Charlie Johnson. Nano Lett., Article ASAP. DOI: 10.1021/nl5006349. Publication Date (Web): April 17, 2014. Copyright © 2014 American Chemical Society.

This paper is behind a paywall.

Art and nanotechnology at Cornell University’s (US) 2014 Biennial/Biennale

The 2014 Cornell [University located in New York State, US] Council for the Arts (CCA) Biennial, “Intimate Cosmologies: The Aesthetics of Scale in an Age of Nanotechnology” was announced in a Dec. 5, 2013 news item on Nanowerk,

A campuswide exhibition next fall will explore the cultural and human consequences of seeing the world at the micro and macro levels, through nanoscience and networked communications.

From Sept. 15 to Dec. 22, the 2014 Cornell Council for the Arts (CCA) Biennial, “Intimate Cosmologies: The Aesthetics of Scale in an Age of Nanotechnology”, will feature several events and principal projects by faculty and student investigators and guest artists – artist-in-residence kimsooja, Trevor Paglen and Rafael Lozano-Hemmer – working in collaboration with Cornell scientists and researchers.

The Dec. 5, 2013 Cornell University news release written by Daniel Aloi, which originated the news item, describes the plans for and events leading up to the biennale in Fall 2014,

The inaugural biennial theme was chosen to frame dynamic changes in 21st-century culture and art practice, and in nanoscale technology. The multidisciplinary initiative intends to engage students, faculty and the community in demonstrations of how radical shifts in scale have become commonplace, and how artists address realms of human experience lying beyond immediate sensory perception.

“Participating in the biennial is very exciting. We’re engaging the idea of nano and investigating scale as part of the value of art in performance,” said Beth Milles ’88, associate professor in the Department of Performing and Media Arts, who is collaborating on a project with students and with artist Lynn Tomlinson ’88.

A series of events and curricula this fall and spring are preceding the main Biennial exhibition. Joe Davis and Nathaniel Stern ’99 presented talks this semester, and CCA will bring Paul Thomas, Stephanie Rothenberg, Ana Viseu and others to campus in the coming months.

kimsooja, an acclaimed multimedia artist in performance, video and installation, addresses issues of the displaced self and recently represented Korea in the 55th Venice Biennale. She visited the campus Nov. 22-23 to meet with Uli Wiesner and students from his research group, who will work with her to realize her large-scale installation here next fall.

Lozano-Hemmer has worked on both ends of the scale spectrum, from laser-etched poetry on human hairs to an interactive light sculpture over Mexico City, Toronto and Yamaguchi, Japan. Paglen’s researched-based work blurs lines between science, contemporary art, journalism and other disciplines.

The Biennial focus brings together artists and scientists who share a common curiosity regarding the position of the individual within the larger world, CCA Director Stephanie Owens said.

“Scientists are suddenly designers creating new forms,” she said. “And artists are increasingly interested in how things are structured, down to the biological level. Both are designing and discovering new ways of synthesizing natural properties of the material world with the fabricated experiences that extend and express the impact of these properties on our lives.”

Here’s a sample of the work that will be featured at the Biennale,

A prototype image of architecture professor Jenny Sabin’s “eSkin” CCA Biennial project, an interactive simulation of a building façade that behaves like a living organism. Credit: Jenny Sabin. Courtesy: Cornell University

Aloi includes a description of some of the exhibits and shows to be featured,

 The principal projects to be presented are:

  • “eSkin” – Architecture professor Jenny Sabin addresses ecology and sustainability issues with buildings that behave like organisms. Her project is an interactive simulation of a façade material incorporating nano- and microscale substrates plated with human cells.
  • “Nano Performance: In 13 Boxes” – Performing and media arts professor Beth Milles ‘88, animator/visual artist Lynn Tomlinson ‘88 and students from different majors will collaborate on 13 media installations and live performances situated across campus. Computer mapping and clues linking the project’s components will assist in “synthesizing the 13 events as a whole experience – it has a lot to do with discovering the performance,” Milles said.
  • “Nano Where: Gas In, Light Out” – Juan Hinestroza, fiber science, and So-Yeon Yoon, design and environmental analysis, will demonstrate the potential of molecular-level metal-organic frameworks as wearable sensors to detect methane and poisonous gases, using a sealed gas chamber and 3-D visual art.
  • “Paperscapes” – Three architecture students – teaching associate Caio Barboza ’13; Joseph Kennedy ’15 and Sonny Xu ’13 – will render the microscopic textures of a sheet of paper as a 3-D inhabitable landscape.
  • “When Art Exceeds Perception” – Ph.D. student in applied physics Robert Hovden will explore replication and plagiarism in nanoscale reproductions, 1,000 times smaller than the naked eye can see, of famous works of art inscribed onto a silicon crystal.

The Cornell Council for the Arts (CCA) has more information about their 2014 ‘nano Biennale’ here. This looks very exciting and I wish I could be there.

One final note, I’ve used Biennale rather than Biennial as, in a US context, ‘Biennial’ brings to mind the Bicentennial of 1976, when the country celebrated the 200th anniversary of 1776.

Nano-enabled fique fiber filters harmful dyes from water

A Sept. 30, 2013 news item on ScienceDaily highlights a new technique for cleaning water,

A cheap and simple process using natural fibers embedded with nanoparticles can almost completely rid water of harmful textile dyes in minutes, report Cornell University and Colombian researchers who worked with native Colombian plant fibers.

Dyes, such as indigo blue used to color blue jeans, threaten waterways near textile plants in South America, India and China. Such dyes are toxic, and they discolor the water, thereby reducing light to the water plants, which limits photosynthesis and lowers the oxygen in the water.

The study, published in the August issue of the journal Green Chemistry, describes a proof of principle, but the researchers are testing how effectively their method treats such endocrine-disrupting water pollutants as phenols, pesticides, antibiotics, hormones and phthalates.

The Sept. 30, 2013 Cornell University news release on EurekAlert, which originated the news item, describes the research in more detail,

The research takes advantage of nano-sized cavities found in cellulose that co-author Juan Hinestroza, Cornell associate professor of fiber science, has previously used to produce nanoparticles inside cotton fibers.

The paper describes the method: Colombian fique plant fibers, commonly used to make coffee bags, are immersed in a solution of sodium permanganate and then treated with ultrasound; as a result, manganese oxide molecules grow in the tiny cellulose cavities. Manganese oxides in the fibers react with the dyes and break them down into non-colored forms.

In the study, the treated fibers removed 99 percent of the dye from water within minutes. Furthermore, the same fibers can be used repeatedly — after eight cycles, the fibers still removed between 97 percent and 99 percent of the dye.

“No expensive or particular starting materials are needed to synthesize the biocomposite,” said Combariza [Marianny Combariza, co-author and researcher at Colombia’s Universidad Industrial de Santander]. “The synthesis can be performed in a basic chemistry lab.”

“This is the first evidence of the effectiveness of this simple technique,” said Hinestroza. “It uses water-based chemistry, and it is easily transferable to real-world situations.”

The researchers are testing their process on other types of pollutants, other fibers and composite materials. “We are working now on developing a low-cost filtering unit prototype to treat polluted waters,” said Combariza. “We are not only focused on manganese oxides, we also work on a variety of materials based on transition metal oxides that show exceptional degradation activity.”

Here’s a link to and a citation for the paper,

Biocomposite of nanostructured MnO2 and fique fibers for efficient dye degradation by Martha L. Chacón-Patiño, Cristian Blanco-Tirado, Juan P. Hinestroza, and Marianny Y. Combariza. Green Chem., 2013, 15, 2920-2928. DOI: 10.1039/C3GC40911B. First published online 19 Aug 2013

This paper is behind a paywall.

For anyone not familiar with the fique plant,

The native Colombian fique plant, Furcraea andina. (Credit: Vasyl Kacapyr)

I have mentioned Juan Hinestroza and the research he and his students perform on nano-enabled textiles a number of times including this May 15, 2012 posting on anti-malaria textiles.

Erasing time to create a temporal invisibility cloak

The idea of taking an eraser and just rubbing out embarrassing (or worse) incidents in one’s life is tempting but not yet possible despite efforts by researchers at Purdue University (Indiana, US). From a June 5, 2013 news item on ScienceDaily,

Researchers have demonstrated a method for “temporal cloaking” of optical communications, representing a potential tool to thwart would-be eavesdroppers and improve security for telecommunications.

“More work has to be done before this approach finds practical application, but it does use technology that could integrate smoothly into the existing telecommunications infrastructure,” said Purdue University graduate student Joseph Lukens, working with Andrew Weiner, the Scifres Family Distinguished Professor of Electrical and Computer Engineering.

Other researchers in 2012 invented temporal cloaking, but it cloaked only a tiny fraction — about a 10,000th of a percent — of the time available for sending data in optical communications. Now the Purdue researchers have increased that to about 46 percent, potentially making the concept practical for commercial applications.

The Purdue University June 5, 2013 news release, which originated the news item, describes the new technique,

The technique works by manipulating the phase, or timing, of light pulses. The propagation of light can be likened to waves in the ocean. If one wave is going up and interacts with another wave that’s going down, they cancel each other and the light has zero intensity. The phase determines the level of interference between these waves.

“By letting them interfere with each other you are able to make them add up to a one or a zero,” Lukens said. “The zero is a hole where there is nothing.”

Any data in regions where the signal is zero would be cloaked.

Controlling phase allows the transmission of signals in ones and zeros to send data over optical fibers. A critical piece of hardware is a component called a phase modulator, which is commonly found in optical communications to modify signals.

In temporal cloaking, two phase modulators are used to first create the holes and two more to cover them up, making it look as though nothing was done to the signal.

“It’s a potentially higher level of security because it doesn’t even look like you are communicating,” Lukens said. “Eavesdroppers won’t realize the signal is cloaked because it looks like no signal is being sent.”

Such a technology also could find uses in the military, homeland security or law enforcement.

“It might be used to prevent communication between people, to corrupt their communication links without them knowing,” he said. “And you can turn it on and off, so if they suspected something strange was going on you could return it to normal communication.”

The technique could be improved to increase its operational bandwidth and the percentage of cloaking beyond 46 percent, he said.
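The cancellation Lukens describes is ordinary wave interference, and it’s easy to see numerically. A minimal sketch (my own illustration, not the Purdue setup): two waves of equal amplitude but opposite phase sum to zero at every instant, and anything sitting in that zero-intensity ‘hole’ is invisible downstream.

import math

def wave(t, phase):
    # A simple sinusoidal wave sampled at time t.
    return math.sin(2 * math.pi * t + phase)

for t in [0.0, 0.125, 0.25, 0.375]:
    up = wave(t, 0.0)          # one wave "going up"
    down = wave(t, math.pi)    # a second wave half a cycle out of step
    print(f"t={t:.3f}  sum={up + down:+.6f}")  # ~0 every time: a hole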

In a July 14, 2011 posting I wrote about some of the research that laid the groundwork for this breakthrough at Purdue University,

Ian Sample in his July 13, 2011 posting on The Guardian Science blogs describes an entirely different approach, one that focusses on cloaking events rather than objects. From Sample’s posting,

The theoretical prospect of a “space-time” cloak – or “history editor” – was raised by Martin McCall and Paul Kinsler at Imperial College in a paper published earlier this year. The physicists explained that when light passes through a material, such as a lens, the light waves slow down. But it is possible to make a lens that splits the light in two, so that half – say the shorter wavelengths – speed up, while the other half, the longer wavelengths, slow down. This opens a gap in the light in which an event can be hidden, because half the light arrives before it has happened, and the other half arrives after the event.

In their paper, McCall and Kinsler outline a scenario whereby a video camera would be unable to record a crime being committed because there was a means of splitting the light such that half of it reached the camera before the crime occurred and the other half reached the camera afterwards. Fascinating, non?

It seems researchers at Cornell University have developed a device that can in a rudimentary fashion cloak events (from Sample’s posting),

The latest device, which has been shown to work for the first time by Moti Fridman and Alexander Gaeta at Cornell University, goes beyond the more familiar invisibility cloak, which aims to hide objects from view, by making entire events invisible.

Zeeya Merali in her extensive June 5, 2013 article (Temporal cloak erases data from history) for Nature provides an in depth explanation of the Purdue research,

To speed up the cloaking rate, Lukens and his colleagues exploited a wave phenomenon that was first discovered by British inventor Henry Fox Talbot in 1836. When a light wave passes through a series of parallel slits called a diffraction grating, it splits apart. The rays emanating from the slits combine on the other side to create an intricate interference pattern of peaks and troughs. Talbot discovered that this pattern repeats at regular intervals, creating what is now known as a Talbot carpet. There is also a temporal version of this effect in which you manipulate light over time to generate regular periods with zero light intensity, says Lukens. Data can then be hidden in these holes in time.

Lukens’ team created its Talbot carpet in time by passing laser light through a ‘phase modulator’, a waveguide that also had an oscillating electrical voltage applied to it. As the voltage varied, the speed at which the light travelled through the waveguide was altered, splitting the light into its constituent frequencies and knocking these out of step. As predicted, at regular time intervals, the separate frequencies recombined destructively to generate time holes. Lukens’ team then used a second round of phase modulation to compress the energy further, expanding the duration of the time windows to 36 picoseconds (or 36 trillionths of a second).

The researchers tested the cloak to see if it was operating correctly by inserting a separate encoded data stream into the fibre during the time windows. They then applied two more rounds of phase modulation — to “undo the damage of the first two rounds”, says Lukens — decompressing the energy again and then combining the separated frequencies back into one. They confirmed that a user downstream would pick up the original laser signal alone, as though it had never been disturbed. The cloak successfully hid data added at a rate of 12.7 gigabits per second.
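That last step, four rounds of phase modulation with the later rounds undoing the earlier ones, has a simple mathematical core: a phase modulator multiplies the optical field by exp(iφ(t)), so applying the opposite phase later restores the field exactly. A sketch of just that cancellation (my own, with an arbitrary invented drive waveform, not the experimental parameters):

import numpy as np

t = np.linspace(0.0, 1.0, 1000)
field = np.exp(2j * np.pi * 5 * t)        # a clean optical carrier
phi = 0.8 * np.sin(2 * np.pi * 40 * t)    # arbitrary modulator drive (assumed)

scrambled = field * np.exp(1j * phi)      # rounds 1-2: impose the phase
restored = scrambled * np.exp(-1j * phi)  # rounds 3-4: undo the damage

print(np.allclose(restored, field))       # True: downstream sees no trace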

Unfortunately, the researchers were a little too successful and managed to erase the event entirely, which seems to answer a question I posed facetiously in my July 14, 2011 posting,

If you can’t see the object (light bending cloak), and you never saw the event (temporal cloak), did it exist and did it happen?

In addition to the military applications that Lukens imagines for temporal invisibility cloaks, Merali notes another possibility in her Nature article,

Ironically, the first application of temporal cloaks may not be to hide data, but to help them to be read more accurately. The team has shown that splitting and recombining light waves in time creates increased periods in which the main data stream can be made immune to corruption by inserted data. “This could be useful to cut down crosstalk when multiple data streams share the same fibre,” says Lukens.

Gaeta agrees that the primary use for cloaking will probably be for innocent, mundane purposes. “People always imagine doing something illicit when they hear ‘cloaking’,” he says. “But these ways for manipulating light will probably be used to make current non-secret communication techniques more sophisticated.”

The research paper can be found here,

A temporal cloak at telecommunication data rate by Joseph M. Lukens, Daniel E. Leaird & Andrew M. Weiner. Nature (2013) doi:10.1038/nature12224 Published online 05 June 2013

This paper is behind a paywall. Fortunately, anyone can access my June 5, 2013 posting (Memories, science, archiving, and authenticity), which seems relevant here for two reasons. First, it mentions a new open access initiative in the US that could make research like this more freely available in the future: a proposal (there may be others as this initiative develops) called the Clearinghouse for the Open Research of the United States (CHORUS). I imagine there would be some caveats, and I notice that Nature magazine has signed up for this proposal. I think the second reason for mentioning yesterday’s post is pretty obvious: memory/erasing, etc.

Buildable, bendable, and biological; a kirigami-based project at Cornell University

A May 18, 2013 news item on Azonano highlights a new project at Cornell University,

Cornell researchers Jenny Sabin, assistant professor of architecture, and Dan Luo, professor of biological and environmental engineering, are among the lead investigators on a new research project to produce “buildable, bendable and biological materials” for a wide range of applications.

The project is intended to bring new ideas, motifs, portability and design to the formation of intricate chemical, biological and architectural materials.

Based on Kirigami (from the Japanese word kiru, “to cut”), the project “offers a previously unattainable level of design, dynamics and deployability” to self-folding and unfolding materials from the molecular scale to the architectural level, according to the researchers.

The May 16, 2013 Cornell University news release by Daniel Aloi, which originated the news item, describes the project’s intent,

The project is intended to illuminate new principles of architecture, materials synthesis and biological structures, and advance several technologies – including meta-materials, sensors, stealth aircraft and adaptive and sustainable buildings. A complementary goal is to generate public interest through an enhanced impact on science, art and engineering.

“Like the opening and closing of flowers, satellites and even greeting cards, our research will offer a rich and diverse set of intricate surprises, problems and challenges for students at all levels, and broaden their interest and awareness of emerging science and engineering,” according to the project proposal, “Cutting and Pasting: Kirigami in Architecture, Technology and Science” (KATS).

The Emerging Frontiers in Research Innovation grant from the NSF is in the research category of Origami Design for Integration of Self-assembling Systems for Engineering Innovation.

I wish they had a few sample illustrations of how this project might look as a macroscale architectural (or other type of) project, even if it is a complete fantasy.

Nanotechnology-enabled fashion at Cornell University

The image you see below is one of several featuring work from Cornell University’s Textiles Nanotechnology Laboratory,

Wearable Charging Station. Credit: Textiles Nanotechnology Laboratory/Cornell University. Abbey Liebman, a design student at Cornell University in Ithaca, N.Y., created a dress made with conductive cotton that can charge an iPhone via solar panels.

It’s part of a May 7, 2013 slide show put together by Denise Chow at the LiveScience website. Also shown in the slide show are Olivia Ong’s anti-bacterial clothing (featured here in an Aug. 5, 2011 posting) and some anti-malarial clothing by Matilda Ceesay (featured here in a May 15, 2012 posting). I have more details about the textiles and the work but the pictures on LiveScience are better.

As I’ve not come across LiveScience before, my curiosity was piqued and, to satisfy it, I found this on their About page,

LiveScience, launched in 2004, is the trusted and provocative source for highly accessible science, health and technology news for people who are curious about their minds, bodies, and the world around them. Our team of experienced science reporters, editors and video producers explore the latest discoveries, trends and myths, interviewing expert sources and offering up deep and broad analyses of topics that affect peoples’ lives in meaningful ways. LiveScience articles are regularly featured on the web sites of our media partners: MSNBC.com, Yahoo!, the Christian Science Monitor and others.

Most of the science on LiveScience is ‘bite-sized’ and provides information for people who are busy and/or don’t want much detail.