Tag Archives: Cornell University

Café Scientifique (Vancouver, Canada) and noise on Oct. 27, 2015

On Tuesday, October 27, 2015, Café Scientifique, in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), will be hosting a talk on the history of noise (from the Oct. 13, 2015 announcement),

Our speaker for the evening will be Dr. Shawn Bullock. The title of his talk is:

The History of Noise: Perspectives from Physics and Engineering

The word “noise” is often synonymous with “nuisance,” which implies something to be avoided as much as possible. We label blaring sirens, the space between stations on the radio dial and the din of a busy street as “noise.” Is noise simply a sound we don’t like? We will consider the evolution of how scientists and engineers have thought about noise, beginning in the 19th century and continuing to the present day. We will explore the idea of noise both as a social construction and as a technological necessity. We’ll also touch on critical developments in the study of sound, the history of physics and engineering, and the development of communications technology.

This description is almost identical to the one Bullock gave for a November 2014 talk titled Snap, Crackle, Pop!: A Short History of Noise, which he summarized this way after delivering it,

I used ideas from the history of physics, the history of music, the discipline of sound studies, and the history of electrical engineering to make the point that understanding “noise” is essential to understanding advancements in physics and engineering in the last century. We began with a discussion of 19th-century attitudes toward noise (and its association with “progress” and industry) before moving on to examine the early history of recorded sound and music, early attempts to measure noise, and the noise abatement movement. I concluded with a brief overview of my recent work on the role of noise in the development of the modem during the early Cold War.

You can find out more about Dr. Bullock, an assistant professor of science education at Simon Fraser University, at his website.

On the subject of noise, although not directly related to Bullock’s work, there’s some research suggesting that noise may be having a serious impact on marine life. From an Oct. 8, 2015 Elsevier press release on EurekAlert,

Quiet areas should be sectioned off in the oceans to give us a better picture of the impact human generated noise is having on marine animals, according to a new study published in Marine Pollution Bulletin. By assigning zones through which ships cannot travel, researchers will be able to compare the behavior of animals in these quiet zones to those living in noisier areas, helping decide the best way to protect marine life from harmful noise.

The authors of the study, from the University of St Andrews, UK, the Oceans Initiative, Cornell University, USA, and Curtin University, Australia, say focusing on protecting areas that are still quiet will give researchers a better insight into the true impact we are having on the oceans.

Almost all marine organisms, including mammals like whales and dolphins, fish and even invertebrates, use sound to find food, avoid predators, choose mates and navigate. Chronic noise from human activities such as shipping can have a big impact on these animals, since it interferes with their acoustic signaling – increased background noise can mean animals are unable to hear important signals, and they tend to swim away from sources of noise, disrupting their normal behavior.

The number of ships in the oceans has increased fourfold since 1992, increasing marine noise dramatically. Ships are also getting bigger, and therefore noisier: in 2000 the biggest cargo ships could carry 8,000 containers; today’s biggest carry 18,000.

“Marine animals, especially whales, depend on a naturally quiet ocean for survival, but humans are polluting major portions of the ocean with noise,” said Dr. Christopher Clark from the Bioacoustics Research Program, Cornell University. “We must make every effort to protect quiet ocean regions now, before they grow too noisy from the din of our activities.”

For the new study, lead author Dr. Rob Williams and the team mapped out areas of high and low noise pollution in the oceans around Canada. Using shipping route and speed data from Environment Canada, the researchers put together a model of noise based on ships’ location, size and speed, calculating the cumulative sound they produce over the course of a year. They used the maps to predict how noisy they thought a particular area ought to be.

To test their predictions, in partnership with Cornell University, they deployed 12 autonomous hydrophones – devices that can measure noise in water – and found a correlation in terms of how the areas ranked from quietest to noisiest. The quiet areas are potential noise conservation zones.
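
Out of curiosity, here is a rough sketch of how that kind of cumulative noise map can be put together: add up each ship’s contribution at every grid cell, weighted by its size, speed and time on station, then convert back to decibels and look for the quietest cells. The source-level formula, the spreading loss and every number below are my own illustrative assumptions, not the model the researchers actually used.

```python
import numpy as np

# Hypothetical ship records: (x_km, y_km, length_m, speed_knots, hours_at_location)
ships = [
    (10.0, 20.0, 300.0, 18.0, 5.0),
    (12.5, 21.0, 180.0, 12.0, 3.0),
]

# 100 km x 100 km grid at 1 km resolution
grid_x, grid_y = np.meshgrid(np.arange(100.0), np.arange(100.0))
energy = np.zeros_like(grid_x)  # accumulated acoustic energy (linear units)

def source_level_db(length_m, speed_knots):
    """Illustrative source level: bigger and faster ships are louder."""
    return 170.0 + 10.0 * np.log10(length_m / 100.0) + 20.0 * np.log10(speed_knots / 10.0)

for x, y, length, speed, hours in ships:
    r_km = np.hypot(grid_x - x, grid_y - y)
    # Spherical spreading loss: 20*log10(range), with a 100 m floor to avoid log(0)
    received_db = source_level_db(length, speed) - 20.0 * np.log10(np.maximum(r_km * 1000.0, 100.0))
    # Convert to linear intensity, weight by exposure time, and accumulate
    energy += hours * 10.0 ** (received_db / 10.0)

# Annual cumulative sound map back in decibels; the quietest cells are
# the candidates for noise conservation zones described above.
cumulative_db = 10.0 * np.log10(energy + 1e-12)
quietest = np.unravel_index(np.argmin(cumulative_db), cumulative_db.shape)
print("Quietest grid cell:", quietest)
```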

“We tend to focus on problems in conservation biology. This was a fun study to work on, because we looked for opportunities to protect species by working with existing patterns in noise and animal distribution, and found that British Columbia offers many important habitats for whales that are still quiet,” said Dr. Rob Williams, lead author of the study. “If we think of quiet, wild oceans as a natural resource, we are lucky that Canada is blessed with globally rare pockets of acoustic wilderness. It makes sense to talk about protecting acoustic sanctuaries before we lose them.”

Although it is clear that noise has an impact on marine organisms, the exact effect is still not well understood. By changing their acoustic environment, we could be inadvertently choosing winners and losers in terms of survival; researchers are still at an early stage of predicting who will win or lose under different circumstances. The quiet areas the team identified could serve as experimental control sites for research like the International Quiet Ocean Experiment to see what effects ocean noise is having on marine life.

“Sound is perceived differently by different species, and some are more affected by noise than others,” said Christine Erbe, co-author of the study and Director of the Marine Science Center, Curtin University, Australia.

So far, the researchers have focused on marine mammals – whales, dolphins, porpoises, seals and sea lions. With a Pew Fellowship in Marine Conservation, Dr. Williams now plans to look at the effects of noise on fish, which are less well understood. By starting to quantify those effects and letting people know the likely economic impact on fisheries, or on fish that are culturally important, Dr. Williams hopes to get the attention of the people who make decisions that affect ocean noise.

“When protecting highly mobile and migratory species that are poorly studied, it may make sense to focus on threats rather than the animals themselves. Shipping patterns decided by humans are often more predictable than the movements of whales and dolphins,” said Erin Ashe, co-author of the study and co-founder of the Oceans Initiative from the University of St Andrews.

Keeping areas of the ocean quiet is easier than reducing noise in already busy zones, say the authors of the study. However, if future research that stems from noise protected zones indicates that overall marine noise should be reduced, there are several possible approaches to reducing noise. The first is speed reduction: the faster a ship goes, the noisier it gets, so slowing down would reduce overall noise. The noisiest ships could also be targeted for replacement: by reducing the noise produced by the noisiest 10% of ships in use today, overall marine noise could be reduced by more than half. The third, more long-term, option would be to build quieter ships from the outset.
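
That claim about the noisiest 10 per cent of ships is easier to appreciate with a little arithmetic: decibels are logarithmic, so a fleet whose source levels span even a modest range in dB has its total radiated energy dominated by the loudest few vessels. Here is a toy calculation; the fleet size and the spread of source levels are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented fleet: 10,000 ships with source levels spread around 175 dB.
# The 7 dB spread is an assumption chosen purely to illustrate the point.
source_levels_db = rng.normal(loc=175.0, scale=7.0, size=10_000)

energy = 10.0 ** (source_levels_db / 10.0)    # decibels -> linear acoustic energy
cutoff = np.quantile(source_levels_db, 0.9)   # threshold for the noisiest 10% of ships
share = energy[source_levels_db >= cutoff].sum() / energy.sum()

print(f"Share of total radiated energy from the loudest 10% of ships: {share:.0%}")
# With this assumed spread the share comes out well above 50%, which is why quieting
# or replacing just the noisiest vessels can cut overall marine noise by more than half.
```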

I can’t help wondering why Canadian scientists aren’t involved in this research taking place off our shores. Regardless, here’s a link to and a citation for the paper,

Quiet(er) marine protected areas by Rob Williams, Christine Erbe, Erin Ashe, & Christopher W. Clark. Marine Pollution Bulletin, available online 16 September 2015, in press, corrected proof. DOI: 10.1016/j.marpolbul.2015.09.012

This is an open access paper.

A soft heart from Cornell University (US)

Caption: This is an artificial foam heart created by Rob Shepherd and his engineering team at Cornell University. Credit: Cornell University

It’s not exactly what I imagined on seeing the words “foam heart” but this is what researchers at Cornell University have produced as a ‘working concept’. An Oct. 14, 2015 Cornell University news release (also on EurekAlert but dated Oct. 15, 2015) describes the research in more detail,

Cornell University researchers have developed a new lightweight and stretchable material with the consistency of memory foam that has potential for use in prosthetic body parts, artificial organs and soft robotics. The foam is unique because it can be formed and has connected pores that allow fluids to be pumped through it.

The polymer foam starts as a liquid that can be poured into a mold to create shapes, and because of the pathways for fluids, when air or liquid is pumped through it, the material moves and can change its length by 300 percent.

While applications for use inside the body require federal approval and testing, Cornell researchers are close to making prosthetic body parts with the so-called “elastomer foam.”

“We are currently pretty far along for making a prosthetic hand this way,” said Rob Shepherd, assistant professor of mechanical and aerospace engineering, and senior author of a paper appearing online and in an upcoming issue of the journal Advanced Materials. Benjamin Mac Murray, a graduate student in Shepherd’s lab, is the paper’s first author.

In the paper, the researchers demonstrated a pump they made into a heart, mimicking both shape and function.

The researchers used carbon fiber and silicone on the outside to fashion a structure that expands at different rates on the surface – to make a spherical shape into an egg shape, for example, that would hold its form when inflated.

“This paper was about exploring the effect of porosity on the actuator, but now we would like to make the foam actuators faster and with higher strength, so we can apply more force. We are also focusing on biocompatibility,” Shepherd said.

Cornell has made a video of researcher Rob Shepherd describing the work,

Here’s a link to and a citation for the paper,

Poroelastic Foams for Simple Fabrication of Complex Soft Robots by Benjamin C. Mac Murray, Xintong An, Sanlin S. Robinson, Ilse M. van Meerbeek, Kevin W. O’Brien, Huichan Zhao, and Robert F. Shepherd. Advanced Materials DOI: 10.1002/adma.201503464 Article first published online: 19 SEP 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are up to five years and range from $500,000 to $1.6 million each per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

A pragmatic approach to alternatives to animal testing

Retitled and cross-posted from the June 30, 2015 posting (Testing times: the future of animal alternatives) on the International Innovation blog (a CORDIS-listed project dissemination partner for FP7 and H2020 projects).

Maryse de la Giroday explains how emerging innovations can provide much-needed alternatives to animal testing. She also shares highlights of the 9th World Congress on Alternatives to Animal Testing.

‘Guinea pigging’ is the practice whereby drugs that have passed in vitro and in vivo tests are tried out on healthy humans in a Phase I clinical trial. In fact, healthy humans can make quite a bit of money as guinea pigs. The practice is sufficiently well-entrenched that there is a magazine, Guinea Pig Zero, devoted to professionals. While most participants anticipate some unpleasant side effects, guinea pigging can sometimes be a dangerous ‘profession’.


One infamous incident highlighting the dangers of guinea pigging occurred in 2006 at Northwick Park Hospital outside London. Volunteers were offered £2,000 to participate in a Phase I clinical trial to test a prospective treatment – a monoclonal antibody designed for rheumatoid arthritis and multiple sclerosis. The drug, called TGN1412, caused catastrophic systemic organ failure in participants. All six individuals receiving the drug required hospital treatment. One participant reportedly underwent amputation of fingers and toes. Another reacted with symptoms comparable to John Merrick, the Elephant Man.

The root of the disaster lay in subtle immune system differences between humans and cynomolgus monkeys – the model animal tested prior to the clinical trial. The drug was designed for the CD28 receptor on T cells. The monkeys’ receptors closely resemble those found in humans. However, unlike these monkeys, humans have other immune cells that carry CD28. The trial participants received a starting dosage that was 0.2 per cent of what the monkeys received in their final tests, but failure to take these additional receptors into account meant a dosage that was supposed to occupy 10 per cent of the available CD28 receptors instead occupied 90 per cent. After the event, a Russian inventor purchased the commercial rights to the drug and renamed it TAB08. It has been further developed by Russian company, TheraMAB, and TAB08 is reportedly in Phase II clinical trials.


While animal testing has been a powerful and useful tool for determining safe usage for pharmaceuticals and other types of chemicals, it is also a cruel and imperfect practice. Moreover, it typically only predicts 30-60 per cent of human responses to new drugs. Nanotechnology and other emerging innovations present possibilities for reducing, and in some cases eliminating, the use of animal models.

People for the Ethical Treatment of Animals (PETA), still better known for its publicity stunts, maintains a webpage outlining a number of alternatives including in silico testing (computer modelling), and, perhaps most interestingly, human-on-a-chip and organoid (tissue engineering) projects.

Organ-on-a-chip projects use stem cells to create human tissues that replicate the functions of human organs. Discussions about human-on-a-chip activities – a phrase used to describe 10 interlinked organ chips – were a highlight of the 9th World Congress on Alternatives to Animal Testing held in Prague, Czech Republic, last year. One project highlighted at the event was a joint US National Institutes of Health (NIH), US Food and Drug Administration (FDA) and US Defense Advanced Research Projects Agency (DARPA) project led by Dan Tagle that claimed it would develop a functioning human-on-a-chip by 2017. However, he and his team were surprisingly close-mouthed and provided few details, making it difficult to assess how close they are to achieving their goal.

By contrast, Uwe Marx – Leader of the ‘Multi-Organ-Chip’ programme in the Institute of Biotechnology at the Technical University of Berlin and Scientific Founder of TissUse, a human-on-a-chip start-up company – claims to have sold two-organ chips. He also claims to have successfully developed a four-organ chip and that he is on his way to building a human-on-a-chip. Though these chips have yet to be seen, if and when they are, they will integrate microfluidics, cultured cells and materials patterned at the nanoscale to mimic various organs, and will allow chemical testing in an environment that somewhat mirrors a human.

Another interesting alternative to animal testing is the organoid – a feature of regenerative medicine that can function as a test site. Engineers based at Cornell University recently published a paper on their functional, synthetic immune organ. Inspired by the lymph node, the organoid is composed of gelatin-based biomaterials, which are reinforced with silicate nanoparticles (to keep the tissue from melting when it reaches body temperature) and seeded with cells, allowing it to mimic the anatomical microenvironment of a lymph node. It behaves like its inspiration, converting B cells to germinal centres, which activate, mature and mutate their antibody genes when the body is under attack. The engineers claim to be able to control the immune response and to outperform 2D cultures with their 3D organoid. If the results are reproducible, the organoid could be used to develop new therapeutics.

Maryse de la Giroday is a science communications consultant and writer.

Full disclosure: Maryse de la Giroday received transportation and accommodation for the 9th World Congress on Alternatives to Animal Testing from SEURAT-1, a European Union project, making scientific inquiries to facilitate the transition to animal testing alternatives, where possible.

ETA July 1, 2015: I would like to acknowledge more sources for the information in this article,


The guinea pigging term, the ‘professional’ aspect, the Northwick Park story, and the Guinea Pig Zero magazine can be found in Carl Elliott’s excellent 2006 story titled ‘Guinea-Pigging’ for New Yorker magazine.


Information about the drug used in the Northwick Park Hospital disaster, the sale of the rights to a Russian inventor, and the June 2015 date for the current Phase II clinical trials was found in the Wikipedia entry titled TGN 1412.


Additional information about the renamed drug, TAB08, and its Phase II clinical trials was found (a) on a US government website for information on clinical trials and (b) in a Dec. 2014 (?) TheraMAB advertisement in a Nature group magazine and a Jan. 2014 press release.




An April 2015 article (Experimental drug that injured UK volunteers resumes in human trials) by Owen Dyer for the British Medical Journal also mentioned the 2015 TheraMAB Phase II clinical trials and provided information about the Macaque (cynomolgus) monkey tests.


BMJ 2015; 350 doi: http://dx.doi.org/10.1136/bmj.h1831 (Published 02 April 2015) Cite this as: BMJ 2015;350:h1831

A 2009 study by Christopher Horvath and Mark Milton somewhat contradicts the Dyer article’s contention that a species of Macaque monkey was used as the animal model. (As the Dyer article is more recent and the Horvath/Milton analysis is more complex, covering TGN 1412 in the context of other MAB drugs and their precursor tests along with specific TGN 1412 tests, I opted for the simpler description.)

The TeGenero Incident [another name for the Northwick Park Accident] and the Duff Report Conclusions: A Series of Unfortunate Events or an Avoidable Event? by Christopher J. Horvath and Mark N. Milton. Published online before print February 24, 2009, doi: 10.1177/0192623309332986 Toxicol Pathol April 2009 vol. 37 no. 3 372-383


Philippa Roxby’s May 24, 2013 BBC news online article provided confirmation and an additional detail or two about the Northwick Park Hospital accident. It notes that other models, in addition to animal models, are being developed.


Anne Ju’s excellent June 10, 2015 news release about the Cornell University organoid (synthetic immune organ) project was very helpful.


There will also be a magazine article in International Innovation, which will differ somewhat from the blog posting, due to editorial style and other requirements.

ETA July 22, 2015: I now have a link to the magazine article.

Cornell University’s (US) immune organoid

A synthetic immune organ that produces antibodies has been developed at Cornell University. From a June 11, 2015 news item on Azonano,

Cornell engineers have created a functional, synthetic immune organ that produces antibodies and can be controlled in the lab, completely separate from a living organism. The engineered organ has implications for everything from rapid production of immune therapies to new frontiers in cancer or infectious disease research.

The immune organoid was created in the lab of Ankur Singh, assistant professor of mechanical and aerospace engineering, who applies engineering principles to the study and manipulation of the human immune system. …

A June 10, 2015 Cornell University news release (also on EurekAlert) by Anne Ju, which originated the news item, describes how the organ/organoid functions,

The synthetic organ is bio-inspired by secondary immune organs like the lymph node or spleen. It is made from gelatin-based biomaterials reinforced with nanoparticles and seeded with cells, and it mimics the anatomical microenvironment of lymphoid tissue. Like a real organ, the organoid converts B cells – which make antibodies that respond to infectious invaders – into germinal centers, which are clusters of B cells that activate, mature and mutate their antibody genes when the body is under attack. Germinal centers are a sign of infection and are not present in healthy immune organs.

The engineers have demonstrated how they can control this immune response in the organ and tune how quickly the B cells proliferate, get activated and change their antibody types. According to their paper, their 3-D organ outperforms existing 2-D cultures and can produce activated B cells up to 100 times faster.

The immune organ, made of a hydrogel, is a soft, nanocomposite biomaterial. The engineers reinforced the material with silicate nanoparticles to keep the structure from melting at the physiologically relevant temperature of 98.6 degrees.

The organ could lead to increased understanding of B cell functions, an area of study that typically relies on animal models to observe how the cells develop and mature.

What’s more, Singh said, the organ could be used to study specific infections and how the body produces antibodies to fight those infections – from Ebola to HIV.

“You can use our system to force the production of immunotherapeutics at much faster rates,” he said. Such a system also could be used to test toxic chemicals and environmental factors that contribute to infections or organ malfunctions.

The process of B cells becoming germinal centers is not well understood, and in fact, when the body makes mistakes in the genetic rearrangement related to this process, blood cancer can result.

“In the long run, we anticipate that the ability to drive immune reaction ex vivo at controllable rates grants us the ability to reproduce immunological events with tunable parameters for better mechanistic understanding of B cell development and generation of B cell tumors, as well as screening and translation of new classes of drugs,” Singh said.

The researchers have provided an image of their work,

When exposed to a foreign agent, such as an immunogenic protein, B cells in lymphoid organs undergo germinal center reactions. The image on the left is an immunized mouse spleen with activated B cells (brown) that produce antibodies. At right, top: a scanning electron micrograph of porous synthetic immune organs that enable rapid proliferation and activation of B cells into antibody-producing cells. At right, bottom: primary B cell viability and distribution is visible 24 hours following encapsulation procedure. Courtesy: Cornell University

Here’s a link to and a citation for the paper,

Ex vivo Engineered Immune Organoids for Controlled Germinal Center Reactions by Alberto Purwada, Manish K. Jaiswal, Haelee Ahn, Takuya Nojima, Daisuke Kitamura, Akhilesh K. Gaharwar, Leandro Cerchietti, & Ankur Singh. Biomaterials DOI: 10.1016/j.biomaterials.2015.06.002 Available online 3 June 2015

This paper is behind a paywall.

Robo Brain: a new robot learning project

Having covered the RoboEarth project (a European Union-funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and most recently in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.
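
To make the graph idea a little more concrete, here is a minimal sketch of a probability-weighted knowledge graph and a query against it. The node names, probabilities and matching rule are my own invention for illustration only; Robo Brain’s actual representation and query machinery are far richer than this.

```python
# Each node carries a probability (how much you can vary it and still be correct)
# and a set of edges to related nodes. All values here are illustrative.
knowledge = {
    "coffee_mug": (0.95, {"graspable_by_handle", "holds_liquid", "carry_upright_when_full"}),
    "graspable_by_handle": (0.90, {"coffee_mug"}),
    "holds_liquid": (0.85, {"coffee_mug"}),
    "carry_upright_when_full": (0.80, {"coffee_mug", "holds_liquid"}),
}

def query(chain, min_prob=0.5):
    """Return True if every node in the chain is known with sufficient probability
    and each consecutive pair of nodes is connected by an edge."""
    for node in chain:
        if node not in knowledge or knowledge[node][0] < min_prob:
            return False
    return all(b in knowledge[a][1] for a, b in zip(chain, chain[1:]))

# A robot that has recognized a mug can ask how to handle it:
print(query(["coffee_mug", "graspable_by_handle"]))       # True
print(query(["coffee_mug", "carry_upright_when_full"]))   # True
```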

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Two-organ tests (body-on-a-chip) show liver damage possible from nanoparticles

This is the first time I’ve seen testing of two organs for possible adverse effects from nanoparticles. In this case, the researchers were especially interested in the liver. From an Aug. 12, 2014 news item on Azonano,

Nanoparticles in food, sunscreen and other everyday products have many benefits. But Cornell [University] biomedical scientists are finding that at certain doses, the particles might cause human organ damage.

A recently published study in Lab on a Chip by the Royal Society of Chemistry and led by senior research associate Mandy Esch shows that nanoparticles injure liver cells when they are in microfluidic devices designed to mimic organs of the human body. The injury was worse when tested in two-organ systems, as opposed to single organs – potentially raising concerns for humans and animals.

Anne Ju’s Aug. 11, 2014 article for Cornell University’s Chronicle describes the motivation for this work and the research itself in more detail,

“We are looking at the effects of what are considered to be harmless nanoparticles in humans,” Esch said. “These particles are not necessarily lethal, but … are there other consequences? We’re looking at the non-lethal consequences.”

She used 50-nanometer carboxylated polystyrene nanoparticles, found in some animal food sources and considered model inert particles. Shuler’s lab specializes in “body-on-a-chip” microfluidics, which are engineered chips with carved compartments that contain cell cultures to represent the chemistry of individual organs.

In Esch’s experiment, she made a human intestinal compartment, a liver compartment and a compartment to represent surrounding tissues in the body. She then observed the effects of fluorescently labeled nanoparticles as they traveled through the system.

Esch found that both single nanoparticles as well as small clusters crossed the gastrointestinal barrier and reached liver cells, and the liver cells released an enzyme called aspartate transaminase, known to be released during cell death or damage.

It’s unclear exactly what damage is occurring or why, but the results indicate that the nanoparticles must be undergoing changes as they cross the gastrointestinal barrier, and that these alterations may change their toxic potential, Esch said. Long-term consequences for organs in proximity could be a concern, she said.

“The motivation behind this study was twofold: one, to show that multi-organ, in vitro systems give us more information when testing for the interaction of a substance with the human body, and two … to look at nanoparticles because they have a huge potential for medicine, yet adverse effects have not been studied in detail yet,” Esch said.

Mary Macleod’s July 3, 2014 article for Chemistry World features a diagram of the two-organ system and more technical details about the research,

Schematic of the two-organ system [downloaded from http://www.rsc.org/chemistryworld/2014/07/nanoparticle-liver-gastrointestinal-tract-microfluidic-chip]

HepG2/C3A cells were used to represent the liver, with the intestinal cell co-culture consisting of enterocytes (Caco-2) and mucin-producing (HT29-MTX) cells. Carboxylated polystyrene nanoparticles were fluorescently labelled so their movement between the chambers could be tracked. Levels of aspartate transaminase, a cytosolic enzyme released into the culture medium upon cell death, were measured to give an indication of liver damage.

The study saw that single nanoparticles and smaller nanoparticle aggregates were able to cross the GI barrier and reach the liver cells. The increased zeta potentials of these nanoparticles suggest that crossing the barrier may raise their toxic potential. However, larger nanoparticles, which interact with cell membranes and aggregate into clusters, were stopped much more effectively by the GI tract barrier.

The gastrointestinal tract is an important barrier preventing ingested substances crossing into systemic circulation. Initial results indicate that soluble mediators released upon low-level injury to liver cells may enhance the initial injury by damaging the cells which form the GI tract. These adverse effects were not seen in conventional single-organ tests.

Here’s a link to and a citation for the paper,

Body-on-a-chip simulation with gastrointestinal tract and liver tissues suggests that ingested nanoparticles have the potential to cause liver injury by Mandy B. Esch, Gretchen J. Mahler, Tracy Stokol, and Michael L. Shuler. Lab Chip, 2014,14, 3081-3092 DOI: 10.1039/C4LC00371C First published online 27 Jun 2014

This paper is open access until Aug. 12, 2014.

While this research is deeply concerning, it should be noted the researchers are being very careful in their conclusions as per Ju’s article, “It’s unclear exactly what damage is occurring or why, but the results indicate that the nanoparticles must be undergoing changes as they cross the gastrointestinal barrier, and that these alterations may change their toxic potential … Long-term consequences for organs in proximity could be a concern … .”

TrueNorth, a brain-inspired chip architecture from IBM and Cornell University

As a Canadian, I find that “true north” is invariably followed by “strong and free” when I sing our national anthem. For many Canadians it is almost the only phrase that is remembered without hesitation. Consequently, some of the buzz surrounding the publication of a paper celebrating ‘TrueNorth’, a brain-inspired chip, is a bit disconcerting. Nonetheless, here is the latest IBM (in collaboration with Cornell University) news from an Aug. 8, 2014 news item on Nanowerk,

Scientists from IBM unveiled the first neurosynaptic computer chip to achieve an unprecedented scale of one million programmable neurons, 256 million programmable synapses and 46 billion synaptic operations per second per watt. At 5.4 billion transistors, this fully functional and production-scale chip is currently one of the largest CMOS chips ever built, yet, while running at biological real time, it consumes a minuscule 70mW—orders of magnitude less power than a modern microprocessor. A neurosynaptic supercomputer the size of a postage stamp that runs on the energy equivalent of a hearing-aid battery, this technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory applications.

An Aug. 7, 2014 IBM news release, which originated the news item, provides an overview of the multi-year process this breakthrough represents (Note: Links have been removed),

There is a huge disparity between the human brain’s cognitive capability and ultra-low power consumption when compared to today’s computers. To bridge the divide, IBM scientists created something that didn’t previously exist—an entirely new neuroscience-inspired scalable and efficient computer architecture that breaks path with the prevailing von Neumann architecture used almost universally since 1946.

This second generation chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013.

The new cognitive chip architecture has an on-chip two-dimensional mesh network of 4096 digital, distributed neurosynaptic cores, where each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion. To enable system scaling beyond single-chip boundaries, adjacent chips, when tiled, can seamlessly connect to each other—building a foundation for future neurosynaptic supercomputers. To demonstrate scalability, IBM also revealed a 16-chip system with sixteen million programmable neurons and four billion programmable synapses.
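
The headline figures follow directly from the per-core layout, and a quick back-of-the-envelope check reproduces them. The per-core numbers below (256 neurons and a 256 x 256 synapse crossbar per core) come from IBM’s published TrueNorth descriptions rather than from this news release, so treat this as a sanity check, not an official specification.

```python
# Sanity check of the chip- and system-level figures quoted above.
cores_per_chip = 4096
neurons_per_core = 256            # per IBM's published TrueNorth core descriptions (assumption here)
synapses_per_core = 256 * 256     # fully connected crossbar within a core

neurons_per_chip = cores_per_chip * neurons_per_core    # 1,048,576 -> "one million programmable neurons"
synapses_per_chip = cores_per_chip * synapses_per_core  # 268,435,456 -> "256 million" (256 x 2**20)

# The 16-chip system mentioned in the release:
print(16 * neurons_per_chip)    # 16,777,216    -> "sixteen million programmable neurons"
print(16 * synapses_per_chip)   # 4,294,967,296 -> "four billion programmable synapses"
```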

“IBM has broken new ground in the field of brain-inspired computers, in terms of a radically new architecture, unprecedented scale, unparalleled power/area/speed efficiency, boundless scalability, and innovative design techniques. We foresee new generations of information technology systems – that complement today’s von Neumann machines – powered by an evolving ecosystem of systems, software, and services,” said Dr. Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-Inspired Computing, IBM Research. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi. This achievement underscores IBM’s leadership role at pivotal transformational moments in the history of computing via long-term investment in organic innovation.”

The Defense Advanced Research Projects Agency (DARPA) has funded the project since 2008 with approximately $53M via Phase 0, Phase 1, Phase 2, and Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. Current collaborators include Cornell Tech and iniLabs, Ltd.

Building the Chip

The chip was fabricated using Samsung’s 28nm process technology that has a dense on-chip memory and low-leakage transistors.

“It is an astonishing achievement to leverage a process traditionally used for commercially available, low-power mobile devices to deliver a chip that emulates the human brain by processing extreme amounts of sensory information with very little power,” said Shawn Han, vice president of Foundry Marketing, Samsung Electronics. “This is a huge architectural breakthrough that is essential as the industry moves toward the next-generation cloud and big-data processing. It’s a pleasure to be part of technical progress for next-generation through Samsung’s 28nm technology.”

The event-driven circuit elements of the chip used the asynchronous design methodology developed at Cornell Tech [aka Cornell University] and refined with IBM since 2008.

“After years of collaboration with IBM, we are now a step closer to building a computer similar to our brain,” said Professor Rajit Manohar, Cornell Tech.

The combination of cutting-edge process technology, hybrid asynchronous-synchronous design methodology, and new architecture has led to a power density of 20mW/cm2 which is nearly four orders of magnitude less than today’s microprocessors.

Advancing the SyNAPSE Ecosystem

The new chip is a component of a complete end-to-end vertically integrated ecosystem spanning a chip simulator, neuroscience data, supercomputing, neuron specification, programming paradigm, algorithms and applications, and prototype design models. The ecosystem supports all aspects of the programming cycle from design through development, debugging, and deployment.

To bring forth this fundamentally different technological capability to society, IBM has designed a novel teaching curriculum for universities, customers, partners, and IBM employees.

Applications and Vision

This ecosystem signals a shift in moving computation closer to the data, taking in vastly varied kinds of sensory data, analyzing and integrating real-time information in a context-dependent way, and dealing with the ambiguity found in complex, real-world environments.

Looking to the future, IBM is working on integrating multi-sensory neurosynaptic processing into mobile devices constrained by power, volume and speed; integrating novel event-driven sensors with the chip; real-time multimedia cloud services accelerated by neurosynaptic systems; and neurosynaptic supercomputers by tiling multiple chips on a board, creating systems that would eventually scale to one hundred trillion synapses and beyond.

Building on previously demonstrated neurosynaptic cores with on-chip, online learning, IBM envisions building learning systems that adapt in real world settings. While today’s hardware is fabricated using a modern CMOS process, the underlying architecture is poised to exploit advances in future memory, 3D integration, logic, and sensor technologies to deliver even lower power, denser package, and faster speed.

I have two articles that may prove of interest. Peter Stratton’s Aug. 7, 2014 article for The Conversation provides an easy-to-read introduction to both brains, human and computer (as they apply to this research), and to TrueNorth (h/t phys.org, which also hosts Stratton’s article). There’s also an Aug. 7, 2014 article by Rob Farber for techenablement.com, which includes information from a range of text and video sources about TrueNorth and cognitive computing, as it’s also known (well worth checking out).

Here’s a link to and a citation for the paper,

A million spiking-neuron integrated circuit with a scalable communication network and interface by Paul A. Merolla, John V. Arthur, Rodrigo Alvarez-Icaza, Andrew S. Cassidy, Jun Sawada, Filipp Akopyan, Bryan L. Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, Bernard Brezzo, Ivan Vo, Steven K. Esser, Rathinakumar Appuswamy, Brian Taba, Arnon Amir, Myron D. Flickner, William P. Risk, Rajit Manohar, and Dharmendra S. Modha. Science 8 August 2014: Vol. 345 no. 6197 pp. 668-673 DOI: 10.1126/science.1254642

This paper is behind a paywall.

Graphene-based sensor mimics pain (mu-opioid) receptor

I once had a job where I had to perform literature searches and read papers on pain research as it related to morphine tolerance. Not a pleasant task, it has left me eager to encourage and write about alternatives to animal testing, a key component of pain research. So, with a ‘song in my heart’, I feature this research from the University of Pennsylvania written up in a May 12, 2014 news item on ScienceDaily,

Almost every biological process involves sensing the presence of a certain chemical. Finely tuned over millions of years of evolution, the body’s different receptors are shaped to accept certain target chemicals. When they bind, the receptors tell their host cells to produce nerve impulses, regulate metabolism, defend the body against invaders or myriad other actions depending on the cell, receptor and chemical type.

Now, researchers from the University of Pennsylvania have led an effort to create an artificial chemical sensor based on one of the human body’s most important receptors, one that is critical in the action of painkillers and anesthetics. In these devices, the receptors’ activation produces an electrical response rather than a biochemical one, allowing that response to be read out by a computer.

By attaching a modified version of this mu-opioid receptor to strips of graphene, they have shown a way to mass produce devices that could be useful in drug development and a variety of diagnostic tests. And because the mu-opioid receptor belongs to the most common class of such chemical sensors, the findings suggest that the same technique could be applied to detect a wide range of biologically relevant chemicals.

A May 6, 2014 University of Pennsylvania news release, which originated the news item, describes the main teams involved in this research along with why and how they worked together (Note: Links have been removed),

The study, published in the journal Nano Letters, was led by A.T. Charlie Johnson, director of Penn’s Nano/Bio Interface Center and professor of physics in Penn’s School of Arts & Sciences; Renyu Liu, assistant professor of anesthesiology in Penn’s Perelman School of Medicine; and Mitchell Lerner, then a graduate student in Johnson’s lab. It was made possible through a collaboration with Jeffery Saven, professor of chemistry in Penn Arts & Sciences. The Penn team also worked with researchers from the Seoul National University in South Korea.

Their study combines recent advances from several disciplines.

Johnson’s group has extensive experience attaching biological components to nanomaterials for use in chemical detectors. Previous studies have involved wrapping carbon nanotubes with single-stranded DNA to detect odors related to cancer and attaching antibodies to nanotubes to detect the presence of the bacteria associated with Lyme disease.

After Saven and Liu addressed these problems with the redesigned receptor, they saw that it might be useful to Johnson, who had previously published a study on attaching a similar receptor protein to carbon nanotubes. In that case, the protein was difficult to grow genetically, and Johnson and his colleagues also needed to include additional biological structures from the receptors’ natural membranes in order to keep them stable.

In contrast, the computationally redesigned protein could be readily grown and attached directly to graphene, opening up the possibility of mass producing biosensor devices that utilize these receptors.

“Due to the challenges associated with isolating these receptors from their membrane environment without losing functionality,” Liu said, “the traditional methods of studying them involved indirectly investigating the interactions between opioid and the receptor via radioactive or fluorescent labeled ligands, for example. This multi-disciplinary effort overcame those difficulties, enabling us to investigate these interactions directly in a cell free system without the need to label any ligands.”

With Saven and Liu providing a version of the receptor that could stably bind to sheets of graphene, Johnson’s team refined their process of manufacturing those sheets and connecting them to the circuitry necessary to make functional devices.

The news release provides more technical details about the graphene sensor,

“We start by growing a piece of graphene that is about six inches wide by 12 inches long,” Johnson said. “That’s a pretty big piece of graphene, but we don’t work with the whole thing at once. Mitchell Lerner, the lead author of the study, came up with a very clever idea to cut down on chemical contamination. We start with a piece that is about an inch square, then separate them into ribbons that are about 50 microns across.

“The nice thing about these ribbons is that we can put them right on top of the rest of the circuitry, and then go on to attach the receptors. This really reduces the potential for contamination, which is important because contamination greatly degrades the electrical properties we measure.”

Because the mechanism by which the device reports on the presence of the target molecule relies only on the receptor’s proximity to the nanostructure when it binds to the target, Johnson’s team could employ the same chemical technique for attaching the antibodies and other receptors used in earlier studies.

Once attached to the ribbons, the opioid receptors would produce changes in the surrounding graphene’s electrical properties whenever they bound to their target. Those changes would then produce electrical signals that would be transmitted to a computer via neighboring electrodes.

The high reliability of the manufacturing process — only one of the 193 devices on the chip failed — enables applications in both clinical diagnostics and further research. [emphasis mine]

“We can measure each device individually and average the results, which greatly reduces the noise,” said Johnson. “Or you could imagine attaching 10 different kinds of receptors to 20 devices each, all on the same chip, if you wanted to test for multiple chemicals at once.”
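
The averaging point is ordinary statistics: combining many nominally identical devices shrinks random measurement noise roughly as one over the square root of the number of devices. Here is a quick illustration; the signal level and per-device noise are made up, and only the count of 192 working devices comes from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

true_signal = 1.0        # arbitrary units: the response one ideal device would report
device_noise_sd = 0.5    # assumed per-device measurement noise
n_devices = 192          # working receptor-functionalized devices on the chip

readings = true_signal + rng.normal(0.0, device_noise_sd, size=n_devices)

print(f"single device reading: {readings[0]:.3f}")
print(f"chip-averaged reading: {readings.mean():.3f}")
# Averaging N independent devices reduces the noise roughly by a factor of sqrt(N):
print(f"expected noise reduction factor: {np.sqrt(n_devices):.1f}x")
```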

In the researchers’ experiment, they tested their devices’ ability to detect the concentration of a single type of molecule. They used naltrexone, a drug used in alcohol and opioid addiction treatment, because it binds to and blocks the natural opioid receptors that produce the narcotic effects patients seek.

“It’s not clear whether the receptors on the devices are as selective as they are in the biological context,” Saven said, “as the ones on your cells can tell the difference between an agonist, like morphine, and an antagonist, like naltrexone, which binds to the receptor but does nothing. By working with the receptor-functionalized graphene devices, however, not only can we make better diagnostic tools, but we can also potentially get a better understanding of how the bimolecular system actually works in the body.”

“Many novel opioids have been developed over the centuries,” Liu said. “However, none of them has achieved potent analgesic effects without notorious side effects, including devastating addiction and respiratory depression. This novel tool could potentially aid the development of new opioids that minimize these side effects.”

Wherever these devices find applications, they are a testament to the potential usefulness of the Nobel-prize winning material they are based on.

“Graphene gives us an advantage,” Johnson said, “in that its uniformity allows us to make 192 devices on a one-inch chip, all at the same time. There are still a number of things we need to work out, but this is definitely a pathway to making these devices in large quantities.”

There is no mention of animal research but it seems likely to me that this work could lead to a decreased use of animals in pain research.

This project must have been quite something as it involved collaboration across many institutions (from the news release),

Also contributing to the study were Gang Hee Han, Sung Ju Hong and Alexander Crook of Penn Arts & Sciences’ Department of Physics and Astronomy; Felipe Matsunaga and Jin Xi of the Department of Anesthesiology at the Perelman School of Medicine, José Manuel Pérez-Aguilar of Penn Arts & Sciences’ Department of Chemistry; and Yung Woo Park of Seoul National University. Mitchell Lerner is now at SPAWAR Systems Center Pacific, Felipe Matsunaga at Albert Einstein College of Medicine, José Manuel Pérez-Aguilar at Cornell University and Sung Ju Hong at Seoul National University.

Here’s a link to and a citation for the paper,

Scalable Production of Highly Sensitive Nanosensors Based on Graphene Functionalized with a Designed G Protein-Coupled Receptor by Mitchell B. Lerner, Felipe Matsunaga, Gang Hee Han, Sung Ju Hong, Jin Xi, Alexander Crook, Jose Manuel Perez-Aguilar, Yung Woo Park, Jeffery G. Saven, Renyu Liu, and A. T. Charlie Johnson. Nano Lett., Article ASAP
DOI: 10.1021/nl5006349 Publication Date (Web): April 17, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.