The latest buzz in the information technology industry regards “the Internet of things” — the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have their own embedded sensors that report information directly to networked servers, aiding with maintenance and the coordination of tasks.
Realizing that vision, however, will require extremely low-power sensors that can run for months without battery changes — or, even better, that can extract energy from the environment to recharge.
Last week, at the Symposia on VLSI Technology and Circuits, MIT [Massachusetts Institute of Technology] researchers presented a new power converter chip that can harvest more than 80 percent of the energy trickling into it, even at the extremely low power levels characteristic of tiny solar cells. [emphasis mine] Previous experimental ultralow-power converters had efficiencies of only 40 or 50 percent.
Moreover, the researchers’ chip achieves those efficiency improvements while assuming additional responsibilities. Where its predecessors could use a solar cell to either charge a battery or directly power a device, this new chip can do both, and it can power the device directly from the battery.
All of those operations also share a single inductor — the chip’s main electrical component — which saves on circuit board space but increases the circuit complexity even further. Nonetheless, the chip’s power consumption remains low.
“We still want to have battery-charging capability, and we still want to provide a regulated output voltage,” says Dina Reda El-Damak, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “We need to regulate the input to extract the maximum power, and we really want to do all these tasks with inductor sharing and see which operational mode is the best. And we want to do it without compromising the performance, at very limited input power levels — 10 nanowatts to 1 microwatt — for the Internet of things.”
The prototype chip was manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.
The MIT news release goes on to describe chip specifics,
The circuit’s chief function is to regulate the voltages between the solar cell, the battery, and the device the cell is powering. If the battery operates for too long at a voltage that’s either too high or too low, for instance, its chemical reactants break down, and it loses the ability to hold a charge.
To control the current flow across their chip, El-Damak and her advisor, Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering, use an inductor, which is a wire wound into a coil. When a current passes through an inductor, it generates a magnetic field, which in turn resists any change in the current.
Throwing switches in the inductor’s path causes it to alternately charge and discharge, so that the current flowing through it continuously ramps up and then drops back down to zero. Keeping a lid on the current improves the circuit’s efficiency, since the rate at which it dissipates energy as heat is proportional to the square of the current.
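The quadratic dependence mentioned here is easy to illustrate in a few lines of Python; the resistance value below is hypothetical, chosen only to show the scaling:

```python
# The quadratic relationship described above (P = I^2 * R): halving the
# current through a fixed resistance quarters the heat dissipated.
# R_OHMS is a made-up value for illustration only.
R_OHMS = 2.0

def dissipation(i_amps: float) -> float:
    """Power dissipated as heat in a resistance, in watts."""
    return i_amps ** 2 * R_OHMS

p_full = dissipation(0.010)   # 10 mA
p_half = dissipation(0.005)   # 5 mA
print(p_full / p_half)        # 4.0: halving the current quarters the loss
```

This is why the converter keeps "a lid on the current": losses fall off much faster than the current itself.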
Once the current drops to zero, however, the switches in the inductor’s path need to be thrown immediately; otherwise, current could begin to flow through the circuit in the wrong direction, which would drastically diminish its efficiency. The complication is that the rate at which the current rises and falls depends on the voltage generated by the solar cell, which is highly variable. So the timing of the switch throws has to vary, too.
To control the switches’ timing, El-Damak and Chandrakasan use an electrical component called a capacitor, which can store electrical charge. The higher the current, the more rapidly the capacitor fills. When it’s full, the circuit stops charging the inductor.
The rate at which the current drops off, however, depends on the output voltage, whose regulation is the very purpose of the chip. Since that voltage is fixed, the variation in timing has to come from variation in capacitance. El-Damak and Chandrakasan thus equip their chip with a bank of capacitors of different sizes. As the current drops, it charges a subset of those capacitors, whose selection is determined by the solar cell’s voltage. Once again, when the capacitor fills, the switches in the inductor’s path are flipped.
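The selection logic described above can be sketched in Python. This is a highly simplified model, not the actual MIT circuit: the inductor value, peak current, comparator threshold, timing current, and capacitor-bank values are all invented for illustration.

```python
# Illustrative sketch (not the MIT design): pick the capacitor from a bank
# whose fill time best matches the inductor's current ramp-down time.
# All component values below are hypothetical.

L_H = 4.7e-6        # inductor value (henries)
I_PEAK = 0.01       # peak inductor current (amps)
V_REF = 1.0         # comparator threshold on the timing capacitor (volts)
I_TIMING = 1e-3     # constant current charging the timing capacitor (amps)
CAP_BANK = [10e-12 * 2 ** k for k in range(6)]  # binary-weighted, 10-320 pF

def ramp_down_time(v_out: float) -> float:
    """Time for inductor current to fall from I_PEAK to zero: t = L*I/V."""
    return L_H * I_PEAK / v_out

def fill_time(c: float) -> float:
    """Time for the constant current to charge capacitor c to V_REF: t = C*V/I."""
    return c * V_REF / I_TIMING

def pick_capacitor(v_out: float) -> float:
    """Select the bank capacitor whose fill time best matches the ramp-down."""
    target = ramp_down_time(v_out)
    return min(CAP_BANK, key=lambda c: abs(fill_time(c) - target))

# A lower voltage means a slower current ramp-down, so a larger capacitor
# (longer fill time) is selected to delay the switch throw.
print(pick_capacitor(3.3), pick_capacitor(0.9))
```

The point of the sketch is the inverse relationship: as the relevant voltage drops, the current ramps more slowly, and a larger timing capacitor is chosen so the switches are thrown just as the current reaches zero.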
“In this technology space, there’s usually a trend to lower efficiency as the power gets lower, because there’s a fixed amount of energy that’s consumed by doing the work,” says Brett Miwa, who leads a power conversion development project as a fellow at the chip manufacturer Maxim Integrated. “If you’re only coming in with a small amount, it’s hard to get most of it out, because you lose more as a percentage. [El-Damak’s] design is unusually efficient for how low a power level she’s at.”
“One of the things that’s most notable about it is that it’s really a fairly complete system,” he adds. “It’s really kind of a full system-on-a chip for power management. And that makes it a little more complicated, a little bit larger, and a little bit more comprehensive than some of the other designs that might be reported in the literature. So for her to still achieve these high-performance specs in a much more sophisticated system is also noteworthy.”
I wonder how close they are to commercializing this chip (see below),
The MIT researchers’ prototype for a chip measuring 3 millimeters by 3 millimeters. The magnified detail shows the chip’s main control circuitry, including the startup electronics; the controller that determines whether to charge the battery, power a device, or both; and the array of switches that control current flow to an external inductor coil. This active area measures just 2.2 millimeters by 1.1 millimeters. Courtesy: MIT
Points to anyone who recognized the reference to Walt Whitman’s poem, “I Sing the Body Electric,” from his classic collection, Leaves of Grass (1867 edition; h/t Wikipedia entry). I wonder if the cyber-physical systems (CPS) work being funded by the US National Science Foundation (NSF) will occasion poetry too.
More practically, a May 15, 2015 news item on Nanowerk describes two cyber-physical systems (CPS) research projects newly funded by the NSF,
Today [May 12, 2015] the National Science Foundation (NSF) announced two five-year, center-scale awards totaling $8.75 million to advance the state-of-the-art in medical and cyber-physical systems (CPS).
One project will develop “Cyberheart”–a platform for virtual, patient-specific human heart models and associated device therapies that can be used to improve and accelerate medical-device development and testing. The other project will combine teams of microrobots with synthetic cells to perform functions that may one day lead to tissue and organ regeneration.
CPS are engineered systems that are built from, and depend upon, the seamless integration of computation and physical components. Often called the “Internet of Things,” CPS enable capabilities that go beyond the embedded systems of today.
“NSF has been a leader in supporting research in cyber-physical systems, which has provided a foundation for putting the ‘smart’ in health, transportation, energy and infrastructure systems,” said Jim Kurose, head of Computer & Information Science & Engineering at NSF. “We look forward to the results of these two new awards, which paint a new and compelling vision for what’s possible for smart health.”
Cyber-physical systems have the potential to benefit many sectors of our society, including healthcare. While advances in sensors and wearable devices have the capacity to improve aspects of medical care, from disease prevention to emergency response, and synthetic biology and robotics hold the promise of regenerating and maintaining the body in radical new ways, little is known about how advances in CPS can integrate these technologies to improve health outcomes.
These new NSF-funded projects will investigate two very different ways that CPS can be used in the biological and medical realms.
A team of leading computer scientists, roboticists and biologists from Boston University, the University of Pennsylvania and MIT have come together to develop a system that combines the capabilities of nano-scale robots with specially designed synthetic organisms. Together, they believe this hybrid “bio-CPS” will be capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.
“We bring together synthetic biology and micron-scale robotics to engineer the emergence of desired behaviors in populations of bacterial and mammalian cells,” said Calin Belta, a professor of mechanical engineering, systems engineering and bioinformatics at Boston University and principal investigator on the project. “This project will impact several application areas ranging from tissue engineering to drug development.”
The project builds on previous research by each team member in diverse disciplines and early proof-of-concept designs of bio-CPS. According to the team, the research is also driven by recent advances in the emerging field of synthetic biology, in particular the ability to rapidly incorporate new capabilities into simple cells. Researchers so far have not been able to control and coordinate the behavior of synthetic cells in isolation, but the introduction of microrobots that can be externally controlled may be transformative.
In this new project, the team will focus on bio-CPS with the ability to sense, transport and work together. As a demonstration of their idea, they will develop teams of synthetic cell/microrobot hybrids capable of constructing a complex, fabric-like surface.
Vijay Kumar (University of Pennsylvania), Ron Weiss (MIT), and Douglas Densmore (BU) are co-investigators of the project.
Medical-CPS and the ‘Cyberheart’
CPS such as wearable sensors and implantable devices are already being used to assess health, improve quality of life, provide cost-effective care and potentially speed up disease diagnosis and prevention. [emphasis mine]
Extending these efforts, researchers from seven leading universities and centers are working together to develop far more realistic cardiac and device models than currently exist. This so-called “Cyberheart” platform can be used to test and validate medical devices faster and at a far lower cost than existing methods. CyberHeart also can be used to design safe, patient-specific device therapies, thereby lowering the risk to the patient.
“Innovative ‘virtual’ design methodologies for implantable cardiac medical devices will speed device development and yield safer, more effective devices and device-based therapies than is currently possible,” said Scott Smolka, a professor of computer science at Stony Brook University and one of the principal investigators on the award.
The group’s approach combines patient-specific computational models of heart dynamics with advanced mathematical techniques for analyzing how these models interact with medical devices. The analytical techniques can be used to detect potential flaws in device behavior early on during the device-design phase, before animal and human trials begin. They also can be used in a clinical setting to optimize device settings on a patient-by-patient basis before devices are implanted.
“We believe that our coordinated, multi-disciplinary approach, which balances theoretical, experimental and practical concerns, will yield transformational results in medical-device design and foundations of cyber-physical system verification,” Smolka said.
The team will develop virtual device models which can be coupled together with virtual heart models to realize a full virtual development platform that can be subjected to computational analysis and simulation techniques. Moreover, they are working with experimentalists who will study the behavior of virtual and actual devices on animals’ hearts.
Co-investigators on the project include Edmund Clarke (Carnegie Mellon University), Elizabeth Cherry (Rochester Institute of Technology), W. Rance Cleaveland (University of Maryland), Flavio Fenton (Georgia Tech), Rahul Mangharam (University of Pennsylvania), Arnab Ray (Fraunhofer Center for Experimental Software Engineering [Germany]) and James Glimm and Radu Grosu (Stony Brook University). Richard A. Gray of the U.S. Food and Drug Administration is another key contributor.
It is fascinating to observe how terminology is shifting from implants such as pacemakers and deep brain stimulators to “CPS such as wearable sensors and implantable devices … .” A new category, CPS, has been created, conjoining medical devices with other sensing devices, such as the wearable fitness monitors found in the consumer market. I imagine it’s an attempt to quell fears about injecting strange things into, or attaching strange things to, your body: microrobots and nanorobots partially derived from synthetic biology research which are “… capable of performing heretofore impossible functions, from microscopic assembly to cell sensing within the body.” They’ve also sneaked in a reference to synthetic biology, an area of research where some concerns have been expressed. From my March 19, 2013 post about a poll on synthetic biology concerns,
In our latest survey, conducted in January 2013, three-fourths of respondents say they have heard little or nothing about synthetic biology, a level consistent with that measured in 2010. While initial impressions about the science are largely undefined, these feelings do not necessarily become more positive as respondents learn more. The public has mixed reactions to specific synthetic biology applications, and almost one-third of respondents favor a ban “on synthetic biology research until we better understand its implications and risks,” while 61 percent think the science should move forward.
I imagine that for scientists, 61% in favour of more research is not particularly comforting given how easily and quickly public opinion can shift.
Making a graphene filter that allows water to pass through while screening out salt and/or noxious materials has been more challenging than one might think. According to a May 7, 2015 news item on Nanowerk, graphene filters can be ‘leaky’,
For faster, longer-lasting water filters, some scientists are looking to graphene — thin, strong sheets of carbon — to serve as ultrathin membranes, filtering out contaminants to quickly purify high volumes of water.
Graphene’s unique properties make it a potentially ideal membrane for water filtration or desalination. But there’s been one main drawback to its wider use: Making membranes in one-atom-thick layers of graphene is a meticulous process that can tear the thin material — creating defects through which contaminants can leak.
Now engineers at MIT [Massachusetts Institute of Technology], Oak Ridge National Laboratory, and King Fahd University of Petroleum and Minerals (KFUPM) have devised a process to repair these leaks, filling cracks and plugging holes using a combination of chemical deposition and polymerization techniques. The team then used a process it developed previously to create tiny, uniform pores in the material, small enough to allow only water to pass through.
Combining these two techniques, the researchers were able to engineer a relatively large defect-free graphene membrane — about the size of a penny. The membrane’s size is significant: To be exploited as a filtration membrane, graphene would have to be manufactured at a scale of centimeters, or larger.
In experiments, the researchers pumped water through a graphene membrane treated with both defect-sealing and pore-producing processes, and found that water flowed through at rates comparable to current desalination membranes. The graphene was able to filter out most large-molecule contaminants, such as magnesium sulfate and dextran.
Rohit Karnik, an associate professor of mechanical engineering at MIT, says the group’s results, published in the journal Nano Letters, represent the first success in plugging graphene’s leaks.
“We’ve been able to seal defects, at least on the lab scale, to realize molecular filtration across a macroscopic area of graphene, which has not been possible before,” Karnik says. “If we have better process control, maybe in the future we don’t even need defect sealing. But I think it’s very unlikely that we’ll ever have perfect graphene — there will always be some need to control leakages. These two [techniques] are examples which enable filtration.”
Sean O’Hern, a former graduate research assistant at MIT, is the paper’s first author. Other contributors include MIT graduate student Doojoon Jang, former graduate student Suman Bose, and Professor Jing Kong.
A delicate transfer
“The current types of membranes that can produce freshwater from saltwater are fairly thick, on the order of 200 nanometers,” O’Hern says. “The benefit of a graphene membrane is, instead of being hundreds of nanometers thick, we’re on the order of three angstroms — 600 times thinner than existing membranes. This enables you to have a higher flow rate over the same area.”
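A quick sanity check of the thickness ratio O’Hern quotes (the figures come from the quote above; the arithmetic is mine):

```python
# Back-of-envelope check of the quoted thickness comparison.
# 1 angstrom = 0.1 nm, so a 3-angstrom graphene membrane is 0.3 nm thick.
conventional_nm = 200.0   # typical existing desalination membrane (from the quote)
graphene_nm = 3 * 0.1     # 3 angstroms, expressed in nanometers

ratio = conventional_nm / graphene_nm
print(round(ratio))       # ~667, consistent with the "600 times thinner" figure
```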
O’Hern and Karnik have been investigating graphene’s potential as a filtration membrane for the past several years. In 2009, the group began fabricating membranes from graphene grown on copper — a metal that supports the growth of graphene across relatively large areas. However, copper is impermeable, requiring the group to transfer the graphene to a porous substrate following fabrication.
However, O’Hern noticed that this transfer process would create tears in graphene. What’s more, he observed intrinsic defects created during the growth process, resulting perhaps from impurities in the original material.
Plugging graphene’s leaks
To plug graphene’s leaks, the team came up with a technique to first tackle the smaller intrinsic defects, then the larger transfer-induced defects. For the intrinsic defects, the researchers used a process called “atomic layer deposition,” placing the graphene membrane in a vacuum chamber, then pulsing in a hafnium-containing chemical that does not normally interact with graphene. However, if the chemical comes in contact with a small opening in graphene, it will tend to stick to that opening, attracted by the area’s higher surface energy.
The team applied several rounds of atomic layer deposition, finding that the deposited hafnium oxide successfully filled in graphene’s nanometer-scale intrinsic defects. However, O’Hern realized that using the same process to fill in much larger holes and tears — on the order of hundreds of nanometers — would require too much time.
Instead, he and his colleagues came up with a second technique to fill in larger defects, using a process called “interfacial polymerization” that is often employed in membrane synthesis. After they filled in graphene’s intrinsic defects, the researchers submerged the membrane at the interface of two solutions: a water bath and an organic solvent that, like oil, does not mix with water.
In the two solutions, the researchers dissolved two different molecules that can react to form nylon. Once O’Hern placed the graphene membrane at the interface of the two solutions, he observed that nylon plugs formed only in tears and holes — regions where the two molecules could come in contact because of tears in the otherwise impermeable graphene — effectively sealing the remaining defects.
Using a technique they developed last year, the researchers then etched tiny, uniform holes in graphene — small enough to let water molecules through, but not larger contaminants. In experiments, the group tested the membrane with water containing several different molecules, including salt, and found that the membrane rejected up to 90 percent of larger molecules. However, it let salt through at a faster rate than water.
The preliminary tests suggest that graphene may be a viable alternative to existing filtration membranes, although Karnik says techniques to seal its defects and control its permeability will need further improvements.
“Water desalination and nanofiltration are big applications where, if things work out and this technology withstands the different demands of real-world tests, it would have a large impact,” Karnik says. “But one could also imagine applications for fine chemical- or biological-sample processing, where these membranes could be useful. And this is the first report of a centimeter-scale graphene membrane that does any kind of molecular filtration. That’s exciting.”
De-en Jiang, an assistant professor of chemistry at the University of California at Riverside, sees the defect-sealing technique as “a great advance toward making graphene filtration a reality.”
“The two-step technique is very smart: sealing the defects while preserving the desired pores for filtration,” says Jiang, who did not contribute to the research. “This would make the scale-up much easier. One can produce a large graphene membrane first, not worrying about the defects, which can be sealed later.”
I have featured graphene and water desalination work before from these researchers at MIT in a Feb. 27, 2014 posting. Interestingly, there was no mention of problems with defects in the news release highlighting this previous work.
Here’s a link to and a citation for the latest paper,
Courtesy: MIT (Massachusetts Institute of Technology)
I love this .gif; it says a lot without a word. However, for details you need words, and here’s what an April 15, 2015 news item on Nanowerk has to say about the research illustrated by the .gif,
MIT [Massachusetts Institute of Technology] chemists have devised an inexpensive, portable sensor that can detect gases emitted by rotting meat, allowing consumers to determine whether the meat in their grocery store or refrigerator is safe to eat.
The sensor, which consists of chemically modified carbon nanotubes, could be deployed in “smart packaging” that would offer much more accurate safety information than the expiration date on the package, says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT.
It could also cut down on food waste, he adds. “People are constantly throwing things out that probably aren’t bad,” says Swager, who is the senior author of a paper describing the new sensor this week in the journal Angewandte Chemie.
This latest study builds on previous work at Swager’s lab (Note: Links have been removed),
The sensor is similar to other carbon nanotube devices that Swager’s lab has developed in recent years, including one that detects the ripeness of fruit. All of these devices work on the same principle: Carbon nanotubes can be chemically modified so that their ability to carry an electric current changes in the presence of a particular gas.
In this case, the researchers modified the carbon nanotubes with metal-containing compounds called metalloporphyrins, which contain a central metal atom bound to several nitrogen-containing rings. Hemoglobin, which carries oxygen in the blood, is a metalloporphyrin with iron as the central atom.
For this sensor, the researchers used a metalloporphyrin with cobalt at its center. Metalloporphyrins are very good at binding to nitrogen-containing compounds called amines. Of particular interest to the researchers were the so-called biogenic amines, such as putrescine and cadaverine, which are produced by decaying meat.
When the cobalt-containing porphyrin binds to any of these amines, it increases the electrical resistance of the carbon nanotube, which can be easily measured.
“We use these porphyrins to fabricate a very simple device where we apply a potential across the device and then monitor the current. When the device encounters amines, which are markers of decaying meat, the current of the device will become lower,” Liu says.
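The read-out principle Liu describes can be sketched in a few lines of Python. This is a hypothetical illustration, not the lab’s actual instrumentation: the bias voltage, currents, and the 1.5x threshold are all invented.

```python
# Hypothetical sketch of the read-out principle described above: apply a
# fixed potential, monitor the current, and flag spoilage when resistance
# rises past a threshold. All values are invented for illustration.

V_BIAS = 0.1  # applied potential across the device (volts), hypothetical

def resistance(v: float, i: float) -> float:
    """Ohm's law: R = V / I."""
    return v / i

def is_spoiled(current_amps: float, baseline_amps: float,
               factor: float = 1.5) -> bool:
    """Amine binding raises nanotube resistance, so the current falls;
    flag the sample when resistance exceeds `factor` times the
    fresh-sample baseline."""
    r_now = resistance(V_BIAS, current_amps)
    r_base = resistance(V_BIAS, baseline_amps)
    return r_now > factor * r_base

print(is_spoiled(9e-6, 10e-6))   # small current drop -> still fresh
print(is_spoiled(5e-6, 10e-6))   # current halved -> resistance doubled -> spoiled
```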
In this study, the researchers tested the sensor on four types of meat: pork, chicken, cod, and salmon. They found that when refrigerated, all four types stayed fresh over four days. Left unrefrigerated, the samples all decayed, but at varying rates.
There are other sensors that can detect the signs of decaying meat, but they are usually large and expensive instruments that require expertise to operate. “The advantage we have is these are the cheapest, smallest, easiest-to-manufacture sensors,” Swager says.
“There are several potential advantages in having an inexpensive sensor for measuring, in real time, the freshness of meat and fish products, including preventing foodborne illness, increasing overall customer satisfaction, and reducing food waste at grocery stores and in consumers’ homes,” says Roberto Forloni, a senior science fellow at Sealed Air, a major supplier of food packaging, who was not part of the research team.
The new device also requires very little power and could be incorporated into a wireless platform Swager’s lab recently developed that allows a regular smartphone to read output from carbon nanotube sensors such as this one.
The funding sources are interesting, as I am appreciating with increasing frequency these days (from the news release),
The researchers have filed for a patent on the technology and hope to license it for commercial development. The research was funded by the National Science Foundation and the Army Research Office through MIT’s Institute for Soldier Nanotechnologies.
There are other posts here about the quest to create food sensors, including a Sept. 26, 2013 piece that features a critique (by another blogger) of food sensors that may cost more than the items they protect, a problem Swager claims to have overcome, according to an April 17, 2015 article by Ben Schiller for Fast Company (Note: Links have been removed),
Swager has set up a company to commercialize the technology and he expects to do the first demonstrations to interested clients this summer. The first applications are likely to be for food workers working with meat and fish, but there’s no reason why consumers shouldn’t get their own devices in due time.
There are efforts to create visual clues for food status. But Swager says his method is better because it doesn’t rely on perception: it produces hard data that can be logged and tracked. And it also has potential to be very cheap.
“The resistance method is a game-changer because it’s two to three orders of magnitude cheaper than other technology. It’s hard to imagine doing this cheaper,” he says.
It seems that ovens are an essential piece of equipment when manufacturing aircraft parts but that may change if research from MIT (Massachusetts Institute of Technology) proves successful. An April 14, 2015 news item on ScienceDaily describes the current process and the MIT research,
Composite materials used in aircraft wings and fuselages are typically manufactured in large, industrial-sized ovens: Multiple polymer layers are blasted with temperatures up to 750 degrees Fahrenheit, and solidified to form a solid, resilient material. Using this approach, considerable energy is required first to heat the oven, then the gas around it, and finally the actual composite.
Aerospace engineers at MIT have now developed a carbon nanotube (CNT) film that can heat and solidify a composite without the need for massive ovens. When connected to an electrical power source, and wrapped over a multilayer polymer composite, the heated film stimulates the polymer to solidify.
The group tested the film on a common carbon-fiber material used in aircraft components, and found that the film created a composite as strong as that manufactured in conventional ovens — while using only 1 percent of the energy.
The new “out-of-oven” approach may offer a more direct, energy-saving method for manufacturing virtually any industrial composite, says Brian L. Wardle, an associate professor of aeronautics and astronautics at MIT.
“Typically, if you’re going to cook a fuselage for an Airbus A350 or Boeing 787, you’ve got about a four-story oven that’s tens of millions of dollars in infrastructure that you don’t need,” Wardle says. “Our technique puts the heat where it is needed, in direct contact with the part being assembled. Think of it as a self-heating pizza. … Instead of an oven, you just plug the pizza into the wall and it cooks itself.”
Wardle says the carbon nanotube film is also incredibly lightweight: After it has fused the underlying polymer layers, the film itself — a fraction of a human hair’s diameter — meshes with the composite, adding negligible weight.
An April 14, 2015 MIT news release, which originated the news item, describes the origins of the team’s latest research, the findings, and the implications,
Carbon nanotube deicers
Wardle and his colleagues have experimented with CNT films in recent years, mainly for deicing airplane wings. The team recognized that in addition to their negligible weight, carbon nanotubes heat efficiently when exposed to an electric current.
The group first developed a technique to create a film of aligned carbon nanotubes composed of tiny tubes of crystalline carbon, standing upright like trees in a forest. The researchers used a rod to roll the “forest” flat, creating a dense film of aligned carbon nanotubes.
In experiments, Wardle and his team integrated the film into airplane wings via conventional, oven-based curing methods, showing that when voltage was applied, the film generated heat, preventing ice from forming.
The deicing tests inspired a question: If the CNT film could generate heat, why not use it to make the composite itself?
How hot can you go?
In initial experiments, the researchers investigated the film’s potential to fuse two types of aerospace-grade composite typically used in aircraft wings and fuselages. Normally the material, composed of about 16 layers, is solidified, or cross-linked, in a high-temperature industrial oven.
The researchers manufactured a CNT film about the size of a Post-It note, and placed the film over a square of Cycom 5320-1. They connected electrodes to the film, then applied a current to heat both the film and the underlying polymer in the Cycom composite layers.
The team measured the energy required to solidify, or cross-link, the polymer and carbon fiber layers, finding that the CNT film used one-hundredth the electricity required for traditional oven-based methods to cure the composite. Both methods generated composites with similar properties, such as cross-linking density.
Wardle says the results pushed the group to test the CNT film further: As different composites require different temperatures in order to fuse, the researchers looked to see whether the CNT film could, quite literally, take the heat.
“At some point, heaters fry out,” Wardle says. “They oxidize, or have different ways in which they fail. What we wanted to see was how hot could this material go.”
To do this, the group tested the film’s ability to generate higher and higher temperatures, and found it topped out at over 1,000 F. In comparison, some of the highest-temperature aerospace polymers require temperatures up to 750 F in order to solidify.
“We can process at those temperatures, which means there’s no composite we can’t process,” Wardle says. “This really opens up all polymeric materials to this technology.”
The team is working with industrial partners to find ways to scale up the technology to manufacture composites large enough to make airplane fuselages and wings.
“There needs to be some thought given to electroding, and how you’re going to actually make the electrical contact efficiently over very large areas,” Wardle says. “You’d need much less power than you are currently putting into your oven. I don’t think it’s a challenge, but it has to be done.”
Gregory Odegard, a professor of computational mechanics at Michigan Technological University, says the group’s carbon nanotube film may go toward improving the quality and efficiency of fabrication processes for large composites, such as wings on commercial aircraft. The new technique may also open the door to smaller firms that lack access to large industrial ovens.
“Smaller companies that want to fabricate composite parts may be able to do so without investing in large ovens or outsourcing,” says Odegard, who was not involved in the research. “This could lead to more innovation in the composites sector, and perhaps improvements in the performance and usage of composite materials.”
It can be interesting to find out who funds the research (from the news release),
This research was funded in part by Airbus Group, Boeing, Embraer, Lockheed Martin, Saab AB, Toho Tenax, ANSYS Inc., the Air Force Research Laboratory at Wright-Patterson Air Force Base, and the U.S. Army Research Office.
Here’s a link to and citation for the research paper,
Quantum entanglement as an idea seems extraordinary to me, like something from a fevered imagination made possible only with certain kinds of hallucinogens. I suppose you could call theoretical physicists who’ve conceptualized entanglement a different breed, as they don’t seem to need chemical assistance for their flights of fancy, which turn out to be reality. Researchers at MIT (Massachusetts Institute of Technology) and the University of Belgrade (Serbia) have entangled thousands of atoms with a single photon, according to a March 26, 2015 news item on Nanotechnology Now,
Physicists from MIT and the University of Belgrade have developed a new technique that can successfully entangle 3,000 atoms using only a single photon. The results, published today in the journal Nature, represent the largest number of particles that have ever been mutually entangled experimentally.
The researchers say the technique provides a realistic method to generate large ensembles of entangled atoms, which are key components for realizing more-precise atomic clocks.
“You can make the argument that a single photon cannot possibly change the state of 3,000 atoms, but this one photon does — it builds up correlations that you didn’t have before,” says Vladan Vuletic, the Lester Wolfe Professor in MIT’s Department of Physics, and the paper’s senior author. “We have basically opened up a new class of entangled states we can make, but there are many more new classes to be explored.”
A March 26, 2015 MIT news release by Jennifer Chu (also on EurekAlert but dated March 25, 2015), which originated the news item, describes entanglement with particular attention to how it relates to atomic timekeeping,
Entanglement is a curious phenomenon: As the theory goes, two or more particles may be correlated in such a way that any change to one will simultaneously change the other, no matter how far apart they may be. For instance, if one atom in an entangled pair were somehow made to spin clockwise, the other atom would instantly be known to spin counterclockwise, even though the two may be physically separated by thousands of miles.
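For readers who like to see things concretely, here’s a toy calculation (mine, not the researchers’) for the simplest case of an entangled pair — the two-particle “singlet” state, in which measurement outcomes are always opposite, just as in the clockwise/counterclockwise example above:

```python
import numpy as np

# Two-qubit singlet state (|01> - |10>) / sqrt(2), written in the
# computational basis ordered |00>, |01>, |10>, |11>.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Born rule: probability of each joint measurement outcome.
probs = np.abs(singlet) ** 2

# Only the anti-correlated outcomes |01> and |10> ever occur;
# the particles are never found spinning the same way.
```

Of course, the 3,000-atom state the MIT/Belgrade team created is vastly more complex than this two-particle sketch, but the basic idea — correlations that classical physics can’t produce — is the same.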
The phenomenon of entanglement, which physicist Albert Einstein once famously dismissed as “spooky action at a distance,” is described not by the laws of classical physics, but by quantum mechanics, which explains the interactions of particles at the nanoscale. At such minuscule scales, particles such as atoms are known to behave differently from matter at the macroscale.
Scientists have been searching for ways to entangle not just pairs, but large numbers of atoms; such ensembles could be the basis for powerful quantum computers and more-precise atomic clocks. The latter is a motivation for Vuletic’s group.
Today’s best atomic clocks are based on the natural oscillations within a cloud of trapped atoms. As the atoms oscillate, they act as a pendulum, keeping steady time. A laser beam within the clock, directed through the cloud of atoms, can detect the atoms’ vibrations, which ultimately determine the length of a single second.
“Today’s clocks are really amazing,” Vuletic says. “They would be less than a minute off if they ran since the Big Bang — that’s the stability of the best clocks that exist today. We’re hoping to get even further.”
The accuracy of atomic clocks improves as more and more atoms oscillate in a cloud. Conventional atomic clocks’ precision is proportional to the square root of the number of atoms: For example, a clock with nine times more atoms would only be three times as accurate. If these same atoms were entangled, a clock’s precision could be directly proportional to the number of atoms — in this case, nine times as accurate. The larger the number of entangled particles, then, the better an atomic clock’s timekeeping.
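The two numbers quoted above — less than a minute of drift since the Big Bang, and the square-root versus linear scaling — are easy to check with back-of-the-envelope arithmetic (my own sketch, using a rough 13.8-billion-year age for the universe):

```python
import math

# Fractional stability implied by "less than a minute off since the Big Bang."
seconds_since_big_bang = 13.8e9 * 365.25 * 24 * 3600
fractional_error = 60 / seconds_since_big_bang  # on the order of 1e-16

# Precision scaling with atom number N:
# unentangled atoms improve as sqrt(N); entangled atoms improve as N.
n = 9
unentangled_gain = math.sqrt(n)  # nine times more atoms -> 3x more precise
entangled_gain = n               # with entanglement -> 9x more precise
```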
It seems weak lasers make big entanglements possible (from the news release),
Scientists have so far been able to entangle large groups of atoms, although most attempts have only generated entanglement between pairs in a group. Only one team has successfully entangled 100 atoms — the largest mutual entanglement to date, and only a small fraction of the whole atomic ensemble.
Now Vuletic and his colleagues have successfully created a mutual entanglement among 3,000 atoms, virtually all the atoms in the ensemble, using very weak laser light — down to pulses containing a single photon. The weaker the light, the better, Vuletic says, as it is less likely to disrupt the cloud. “The system remains in a relatively clean quantum state,” he says.
The researchers first cooled a cloud of atoms, then trapped them in a laser trap, and sent a weak laser pulse through the cloud. They then set up a detector to look for a particular photon within the beam. Vuletic reasoned that if a photon passes through the atom cloud without event, its polarization, or direction of oscillation, remains the same. If, however, a photon interacts with the atoms, its polarization rotates just slightly — a sign that it was affected by quantum “noise” in the ensemble of spinning atoms, the noise being the difference between the number of atoms spinning clockwise and counterclockwise.
“Every now and then, we observe an outgoing photon whose electric field oscillates in a direction perpendicular to that of the incoming photons,” Vuletic says. “When we detect such a photon, we know that must have been caused by the atomic ensemble, and surprisingly enough, that detection generates a very strongly entangled state of the atoms.”
Vuletic and his colleagues are currently using the single-photon detection technique to build a state-of-the-art atomic clock that they hope will overcome what’s known as the “standard quantum limit” — a limit to how accurate measurements can be in quantum systems. Vuletic says the group’s current setup may be a step toward developing even more complex entangled states.
“This particular state can improve atomic clocks by a factor of two,” Vuletic says. “We’re striving toward making even more complicated states that can go further.”
This research was supported in part by the National Science Foundation, the Defense Advanced Research Projects Agency, and the Air Force Office of Scientific Research.
I have two items about implants and brains and an item about being able to exert remote control of the brain, all of which hint at a cyborg future for at least a few of us.
e-Dura, the spinal column, and the brain
The first item concerns some research at the École Polytechnique Fédérale de Lausanne (EPFL), which features flexible electronics. From a March 24, 2015 article by Ben Schiller for Fast Company (Note: Links have been removed),
Researchers at the Swiss Federal Institute of Technology, in Lausanne, have developed the e-Dura—a tiny skinlike device that attaches directly to damaged spinal cords. By sending out small electrical pulses, it stimulates the cord as if it were receiving signals from the brain, thus allowing movement.
“The purpose of the neuro-prosthesis is to excite the neurons that are on the spinal cord below the site of the injury and activate them, just like if they were receiving information from the brain,” says Stéphanie Lacour, a professor at the institute.
EPFL scientists have managed to get rats walking on their own again using a combination of electrical and chemical stimulation. But applying this method to humans would require multifunctional implants that could be installed for long periods of time on the spinal cord without causing any tissue damage. This is precisely what the teams of professors Stéphanie Lacour and Grégoire Courtine have developed. Their e-Dura implant is designed specifically for implantation on the surface of the brain or spinal cord. The small device closely imitates the mechanical properties of living tissue, and can simultaneously deliver electric impulses and pharmacological substances. The risks of rejection and/or damage to the spinal cord have been drastically reduced. An article about the implant will appear in early January in Science Magazine.
So-called “surface implants” have reached a roadblock; they cannot be applied long term to the spinal cord or brain, beneath the nervous system’s protective envelope, otherwise known as the “dura mater,” because when nerve tissues move or stretch, they rub against these rigid devices. After a while, this repeated friction causes inflammation, scar tissue buildup, and rejection.
Here’s what the implant looks like,
The press release describes how the implant is placed (Note: A link has been removed),
Flexible and stretchy, the implant developed at EPFL is placed beneath the dura mater, directly onto the spinal cord. Its elasticity and its potential for deformation are almost identical to the living tissue surrounding it. This reduces friction and inflammation to a minimum. When implanted into rats, the e-Dura prototype caused neither damage nor rejection, even after two months. More rigid traditional implants would have caused significant nerve tissue damage during this period of time.
The researchers tested the device prototype by applying their rehabilitation protocol — which combines electrical and chemical stimulation — to paralyzed rats. Not only did the implant prove its biocompatibility, but it also did its job perfectly, allowing the rats to regain the ability to walk on their own again after a few weeks of training.
“Our e-Dura implant can remain for a long period of time on the spinal cord or the cortex, precisely because it has the same mechanical properties as the dura mater itself. This opens up new therapeutic possibilities for patients suffering from neurological trauma or disorders, particularly individuals who have become paralyzed following spinal cord injury,” explains Lacour, co-author of the paper, and holder of EPFL’s Bertarelli Chair in Neuroprosthetic Technology.
The press release goes on to describe the engineering achievements,
Developing the e-Dura implant was quite a feat of engineering. As flexible and stretchable as living tissue, it nonetheless includes electronic elements that stimulate the spinal cord at the point of injury. The silicone substrate is covered with cracked gold electric conducting tracks that can be pulled and stretched. The electrodes are made of an innovative composite of silicone and platinum microbeads. They can be deformed in any direction, while still ensuring optimal electrical conductivity. Finally, a fluidic microchannel enables the delivery of pharmacological substances – neurotransmitters in this case – that will reanimate the nerve cells beneath the injured tissue.
The implant can also be used to monitor electrical impulses from the brain in real time. When they did this, the scientists were able to extract with precision the animal’s motor intention before it was translated into movement.
“It’s the first neuronal surface implant designed from the start for long-term application. In order to build it, we had to combine expertise from a considerable number of areas,” explains Courtine, co-author and holder of EPFL’s IRP Chair in Spinal Cord Repair. “These include materials science, electronics, neuroscience, medicine, and algorithm programming. I don’t think there are many places in the world where one finds the level of interdisciplinary cooperation that exists in our Center for Neuroprosthetics.”
For the time being, the e-Dura implant has been primarily tested in cases of spinal cord injury in paralyzed rats. But the potential for applying these surface implants is huge – for example in epilepsy, Parkinson’s disease and pain management. The scientists are planning to move towards clinical trials in humans, and to develop their prototype in preparation for commercialization.
EPFL has provided a video of researcher Stéphanie Lacour describing e-Dura and expressing hopes for its commercialization,
Here’s a link to and a citation for the paper,
Electronic dura mater for long-term multimodal neural interfaces by Ivan R. Minev, Pavel Musienko, Arthur Hirsch, Quentin Barraud, Nikolaus Wenger, Eduardo Martin Moraud, Jérôme Gandar, Marco Capogrosso, Tomislav Milekovic, Léonie Asboth, Rafael Fajardo Torres, Nicolas Vachicouras, Qihan Liu, Natalia Pavlova, Simone Duis, Alexandre Larmagnac, Janos Vörös, Silvestro Micera, Zhigang Suo, Grégoire Courtine, Stéphanie P. Lacour. Science 9 January 2015: Vol. 347 no. 6218 pp. 159-163 DOI: 10.1126/science.1260318
This paper is behind a paywall.
Carbon nanotube fibres could connect to the brain
Researchers at Rice University (Texas, US) are excited about the possibilities that carbon nanotube fibres offer in the field of implantable electronics for the brain. From a March 25, 2015 news item on Nanowerk,
Carbon nanotube fibers invented at Rice University may provide the best way to communicate directly with the brain.
The fibers have proven superior to metal electrodes for deep brain stimulation and to read signals from a neuronal network. Because they provide a two-way connection, they show promise for treating patients with neurological disorders while monitoring the real-time response of neural circuits in areas that control movement, mood and bodily functions.
New experiments at Rice demonstrated the biocompatible fibers are ideal candidates for small, safe electrodes that interact with the brain’s neuronal system, according to the researchers. They could replace much larger electrodes currently used in devices for deep brain stimulation therapies in Parkinson’s disease patients.
They may also advance technologies to restore sensory or motor functions and brain-machine interfaces as well as deep brain stimulation therapies for other neurological disorders, including dystonia and depression, the researchers wrote.
The fibers created by the Rice lab of chemist and chemical engineer Matteo Pasquali consist of bundles of long nanotubes originally intended for aerospace applications where strength, weight and conductivity are paramount.
The individual nanotubes measure only a few nanometers across, but when millions are bundled in a process called wet spinning, they become thread-like fibers about a quarter the width of a human hair.
“We developed these fibers as high-strength, high-conductivity materials,” Pasquali said. “Yet, once we had them in our hand, we realized that they had an unexpected property: They are really soft, much like a thread of silk. Their unique combination of strength, conductivity and softness makes them ideal for interfacing with the electrical function of the human body.”
The simultaneous arrival in 2012 of Caleb Kemere, a Rice assistant professor who brought expertise in animal models of Parkinson’s disease, and lead author Flavia Vitale, a research scientist in Pasquali’s lab with degrees in chemical and biomedical engineering, prompted the investigation.
“The brain is basically the consistency of pudding and doesn’t interact well with stiff metal electrodes,” Kemere said. “The dream is to have electrodes with the same consistency, and that’s why we’re really excited about these flexible carbon nanotube fibers and their long-term biocompatibility.”
Weeks-long tests on cells and then in rats with Parkinson’s symptoms proved the fibers are stable and as efficient as commercial platinum electrodes at only a fraction of the size. The soft fibers caused little inflammation, which helped maintain strong electrical connections to neurons by preventing the body’s defenses from scarring and encapsulating the site of the injury.
The highly conductive carbon nanotube fibers also show much more favorable impedance — the quality of the electrical connection — than state-of-the-art metal electrodes, making for better contact at lower voltages over long periods, Kemere said.
The working end of the fiber is the exposed tip, which is about the width of a neuron. The rest is encased with a three-micron layer of a flexible, biocompatible polymer with excellent insulating properties.
The challenge is in placing the tips. “That’s really just a matter of having a brain atlas, and during the experiment adjusting the electrodes very delicately and putting them into the right place,” said Kemere, whose lab studies ways to connect signal-processing systems and the brain’s memory and cognitive centers.
Doctors who implant deep brain stimulation devices start with a recording probe able to “listen” to neurons that emit characteristic signals depending on their functions, Kemere said. Once a surgeon finds the right spot, the probe is removed and the stimulating electrode gently inserted. Rice carbon nanotube fibers that send and receive signals would simplify implantation, Vitale said.
The fibers could lead to self-regulating therapeutic devices for Parkinson’s and other patients. Current devices include an implant that sends electrical signals to the brain to calm the tremors that afflict Parkinson’s patients.
“But our technology enables the ability to record while stimulating,” Vitale said. “Current electrodes can only stimulate tissue. They’re too big to detect any spiking activity, so basically the clinical devices send continuous pulses regardless of the response of the brain.”
Kemere foresees a closed-loop system that can read neuronal signals and adapt stimulation therapy in real time. He anticipates building a device with many electrodes that can be addressed individually to gain fine control over stimulation and monitoring from a small, implantable device.
“Interestingly, conductivity is not the most important electrical property of the nanotube fibers,” Pasquali said. “These fibers are intrinsically porous and extremely stable, which are both great advantages over metal electrodes for sensing electrochemical signals and maintaining performance over long periods of time.”
The paper is open access provided you register on the website.
Remote control for stimulation of the brain
Mo Costandi, neuroscientist and freelance science writer, has written a March 24, 2015 post for the Guardian science blog network focusing on neuronal remote control,
Two teams of scientists have developed new ways of stimulating neurons with nanoparticles, allowing them to activate brain cells remotely using light or magnetic fields. The new methods are quicker and far less invasive than other hi-tech methods available, so could be more suitable for potential new treatments for human diseases.
Researchers have various methods for manipulating brain cell activity, arguably the most powerful being optogenetics, which enables them to switch specific brain cells on or off with unprecedented precision, and simultaneously record their behaviour, using pulses of light.
This is very useful for probing neural circuits and behaviour, but involves first creating genetically engineered mice with light-sensitive neurons, and then inserting the optical fibres that deliver light into the brain, so there are major technical and ethical barriers to its use in humans.
Nanomedicine could get around this. Francisco Bezanilla of the University of Chicago and his colleagues knew that gold nanoparticles can absorb light and convert it into heat, and several years ago they discovered that infrared light can make neurons fire nervous impulses by heating up their cell membranes.
Polina Anikeeva’s team at the Massachusetts Institute of Technology adopted a slightly different approach, using spherical iron oxide particles that give off heat when exposed to an alternating magnetic field.
Although still in the experimental stages, research like this may eventually allow for wireless and minimally invasive deep brain stimulation of the human brain. Bezanilla’s group aim to apply their method to develop treatments for macular degeneration and other conditions that kill off light-sensitive cells in the retina. This would involve injecting nanoparticles into the eye so that they bind to other retinal cells, allowing natural light to excite them into firing impulses to the optic nerve.
Costandi’s article is intended for an audience that either understands the science or can deal with the uncertainty of not understanding absolutely everything. Provided you fall into either of those categories, the article is well written and it provides links and citations to the papers for both research teams being featured.
Taken together, the research at EPFL, Rice University, University of Chicago, and Massachusetts Institute of Technology provides a clue as to how much money and intellectual power is being directed at the brain.
The blue-rayed limpet is a tiny mollusk that lives in kelp beds along the coasts of Norway, Iceland, the United Kingdom, Portugal, and the Canary Islands. These diminutive organisms — as small as a fingernail — might escape notice entirely, if not for a very conspicuous feature: bright blue dotted lines that run in parallel along the length of their translucent shells. Depending on the angle at which light hits, a limpet’s shell can flash brilliantly even in murky water.
Now scientists at MIT and Harvard University have identified two optical structures within the limpet’s shell that give its blue-striped appearance. The structures are configured to reflect blue light while absorbing all other wavelengths of incoming light. The researchers speculate that such patterning may have evolved to protect the limpet, as the blue lines resemble the color displays on the shells of more poisonous soft-bodied snails.
The findings, reported this week in the journal Nature Communications, represent the first evidence of an organism using mineralized structural components to produce optical displays. While birds, butterflies, and beetles can display brilliant blues, among other colors, they do so with organic structures, such as feathers, scales, and plates. The limpet, by contrast, produces its blue stripes through an interplay of inorganic, mineral structures, arranged in such a way as to reflect only blue light.
The researchers say such natural optical structures may serve as a design guide for engineering color-selective, controllable, transparent displays that require no internal light source and could be incorporated into windows and glasses.
“Let’s imagine a window surface in a car where you obviously want to see the outside world as you’re driving, but where you also can overlay the real world with an augmented reality that could involve projecting a map and other useful information on the world that exists on the other side of the windshield,” says co-author Mathias Kolle, an assistant professor of mechanical engineering at MIT. “We believe that the limpet’s approach to displaying color patterns in a translucent shell could serve as a starting point for developing such displays.”
The news release then reveals how this research came about,
Kolle, whose research is focused on engineering bioinspired, optical materials — including color-changing, deformable fibers — started looking into the optical features of the limpet when his brother Stefan, a marine biologist now working at Harvard, brought Kolle a few of the organisms in a small container. Stefan Kolle was struck by the mollusk’s brilliant patterning, and recruited his brother, along with several others, to delve deeper into the limpet shell’s optical properties.
To do this, the team of researchers — which also included Ling Li and Christine Ortiz at MIT and James Weaver and Joanna Aizenberg at Harvard — performed a detailed structural and optical analysis of the limpet shells. They observed that the blue stripes first appear in juveniles, resembling dashed lines. The stripes grow more continuous as a limpet matures, and their shade varies from individual to individual, ranging from deep blue to turquoise.
The researchers scanned the surface of a limpet’s shell using scanning electron microscopy, and found no structural differences in areas with and without the stripes — an observation that led them to think that perhaps the stripes arose from features embedded deeper in the shell.
To get a picture of what lay beneath, the researchers used a combination of high-resolution 2-D and 3-D structural analysis to reveal the 3-D nanoarchitecture of the photonic structures embedded in the limpets’ translucent shells.
What they found was revealing: In the regions with blue stripes, the shells’ top and bottom layers were relatively uniform, with dense stacks of calcium carbonate platelets and thin organic layers, similar to the shell structure of other mollusks. However, about 30 microns beneath the shell surface the researchers noted a stark difference. In these regions, the researchers found that the regular plates of calcium carbonate morphed into two distinct structural features: a multilayered structure with regular spacing between calcium carbonate layers resembling a zigzag pattern, and beneath this, a layer of randomly dispersed, spherical particles.
The researchers measured the dimensions of the zigzagging plates, and found the spacing between them was much wider than the more uniform plates running through the shell’s unstriped sections. They then examined the potential optical roles of both the multilayer zigzagging structure and the spherical particles.
Kolle and his colleagues used optical microscopy, spectroscopy, and diffraction microscopy to quantify the blue stripe’s light-reflection properties. They then measured the zigzagging structures and their angle with respect to the shell surface, and determined that this structure is optimized to reflect blue and green light.
The researchers also determined that the disordered arrangement of spherical particles beneath the zigzag structures serves to absorb transmitted light that otherwise could de-saturate the reflected blue color.
From these results, Kolle and his team deduced that the zigzag pattern acts as a filter, reflecting only blue light. As the rest of the incoming light passes through the shell, the underlying particles absorb this light — an effect that makes a shell’s stripes appear even more brilliantly blue.
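The multilayer interference at work here can be roughed out with the standard first-order Bragg condition for a periodic stack. The numbers below are my own illustrative assumptions (an approximate refractive index for calcium carbonate and a plausible layer period), not measurements from the paper — the point is simply that layers spaced on the order of 150 nanometres naturally pick out blue light:

```python
import math

n_calcite = 1.6      # assumed refractive index of calcium carbonate (illustrative)
spacing_nm = 147.0   # assumed layer period of the multilayer stack (illustrative)
theta = 0.0          # normal incidence

# First-order Bragg condition: the wavelength reflected constructively
# by a periodic stack is lambda = 2 * n * d * cos(theta).
reflected_nm = 2 * n_calcite * spacing_nm * math.cos(theta)

# Comes out around 470 nm — squarely in the blue part of the spectrum.
```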
And, for those who can never get enough detail, the news release provides a bit more than the video,
The team then sought to tackle a follow-up question: What purpose do the blue stripes serve? The limpets live either concealed at the base of kelp plants, or further up in the fronds, where they are visually exposed. Those at the base grow a thicker shell with almost no stripes, while their blue-striped counterparts live higher on the plant.
Limpets generally don’t have well-developed eyes, so the researchers reasoned that the blue stripes must not serve as a communication tool, attracting one organism to another. Rather, they think that the limpet’s stripes may be a defensive mechanism: The mollusk sits largely exposed on a frond, so a plausible defense against predators may be to appear either invisible or unappetizing. The researchers determined that the latter is more likely the case, as the limpet’s blue stripes resemble the patterning of poisonous marine snails that also happen to inhabit similar kelp beds.
Kolle says the group’s work has revealed an interesting insight into the limpet’s optical properties, which may be exploited to engineer advanced transparent optical displays. The limpet, he points out, has evolved a microstructure in its shell to satisfy an optical purpose without overly compromising the shell’s mechanical integrity. Materials scientists and engineers could take inspiration from this natural balancing act.
“It’s all about multifunctional materials in nature: Every organism — no matter if it has a shell, or skin, or feathers — interacts in various ways with the environment, and the materials with which it interfaces to the outside world frequently have to fulfill multiple functions simultaneously,” Kolle says. “[Engineers] are more and more focusing on not only optimizing just one single property in a material or device, like a brighter screen or higher pixel density, but rather on satisfying several … design and performance criteria simultaneously. We can gain inspiration and insight from nature.”
Peter Vukusic, an associate professor of physics at the University of Exeter in the United Kingdom, says the researchers “have done an exquisite job” in uncovering the optical mechanism behind the limpet’s conspicuous appearance.
“By using multiple and complementary analysis techniques they have elucidated, in glorious detail, the many structural and physiological factors that have given rise to the optical signature of this highly evolved system,” says Vukusic, who was not involved in the study. “The animal’s complex morphology is highly interesting for photonics scientists and technologists interested in manipulating light and creating specialized appearances.”
I have two items about the CRISPR gene editing technique. The first concerns a new use for the CRISPR technique developed by researchers at Johns Hopkins University School of Medicine described in a Jan. 5, 2015 Johns Hopkins University news release on EurekAlert,
A powerful “genome editing” technology known as CRISPR has been used by researchers since 2012 to trim, disrupt, replace or add to sequences of an organism’s DNA. Now, scientists at Johns Hopkins Medicine have shown that the system also precisely and efficiently alters human stem cells.
“Stem cell technology is quickly advancing, and we think that the days when we can use iPSCs [human-induced pluripotent stem cells] for human therapy aren’t that far away,” says Zhaohui Ye, Ph.D., an instructor of medicine at the Johns Hopkins University School of Medicine. “This is one of the first studies to detail the use of CRISPR in human iPSCs, showcasing its potential in these cells.”
CRISPR originated from a microbial immune system that contains DNA segments known as clustered regularly interspaced short palindromic repeats. The engineered editing system makes use of an enzyme that nicks DNA, together with a piece of small RNA that guides the tool to where researchers want to introduce cuts or other changes in the genome.
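That “guided search, then edit” idea is why CRISPR is often described as a search-and-replace function for DNA. Here’s a caricature of the concept as string substitution (a toy analogy of mine — all sequences below are made up, and real genome editing involves enzymatic cutting and cellular DNA repair, not text replacement):

```python
genome = "ATGGTTCACGAAGGTGACCTG"   # made-up stretch of DNA
guide  = "CACGAAGGT"               # made-up target sequence the guide RNA matches
repair = "CACAAAGGT"               # made-up template with the desired one-letter change

# The guide RNA "searches" for its matching sequence; the cell's repair
# machinery "replaces" the cut region using the supplied template.
edited = genome.replace(guide, repair)
```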
Previous research has shown that CRISPR can generate genomic changes or mutations through these interventions far more efficiently than other gene editing techniques, such as TALEN, short for transcription activator-like effector nuclease.
Despite CRISPR’s advantages, a recent study suggested that it might also produce a large number of “off-target” effects in human cancer cell lines, specifically modification of genes that researchers didn’t mean to change.
To see if this unwanted effect occurred in other human cell types, Ye; Linzhao Cheng, Ph.D., a professor of medicine and oncology in the Johns Hopkins University School of Medicine; and their colleagues pitted CRISPR against TALEN in human iPSCs, adult cells reprogrammed to act like embryonic stem cells. Human iPSCs have already shown enormous promise for treating and studying disease.
The researchers compared the ability of both genome editing systems to either cut out pieces of known genes in iPSCs or cut out a piece of these genes and replace it with another. As model genes, the researchers used JAK2, a gene that when mutated causes a bone marrow disorder known as polycythemia vera; SERPINA1, a gene that when mutated causes alpha1-antitrypsin deficiency, an inherited disorder that may cause lung and liver disease; and AAVS1, a gene that’s been recently discovered to be a “safe harbor” in the human genome for inserting foreign genes.
Their comparison found that when simply cutting out portions of genes, the CRISPR system was significantly more efficient than TALEN in all three gene systems, inducing up to 100 times more cuts. However, when using these genome editing tools for replacing portions of the genes, such as the disease-causing mutations in JAK2 and SERPINA1 genes, CRISPR and TALEN showed about the same efficiency in patient-derived iPSCs, the researchers report.
Contrary to the results of the human cancer cell line study, both CRISPR and TALEN showed the same targeting specificity in human iPSCs, hitting only the genes they were designed to affect, the team says. The researchers also found that the CRISPR system has an advantage over TALEN: It can be designed to target only the mutation-containing copy of a gene, leaving the healthy copy untouched in patients in whom only one of the two copies is affected.
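The allele-specific idea can be sketched in the same toy style. Again, sequences and names here are invented for illustration, and the exact-match assumption overstates real-world discrimination (an actual guide can tolerate a single mismatch, so one-base selectivity is not guaranteed):

```python
# Toy sketch of allele-specific targeting: a guide designed over the
# mutation site matches the mutant copy of a gene but not the healthy
# copy. Invented sequences; real guides tolerate some mismatches, so
# single-base discrimination is not guaranteed in practice.

GUIDE_LEN = 8  # shortened from the real ~20 nt for readability

def matches(allele: str, guide: str) -> bool:
    """Exact-match check: does the guide occur anywhere in the allele?"""
    return guide in allele

healthy_allele = "CCGATTACGGA"  # invented wild-type sequence
mutant_allele = "CCGATTGCGGA"   # single A->G change at position 6

# A guide spanning the mutation site distinguishes the two copies:
guide = mutant_allele[3 : 3 + GUIDE_LEN]  # "ATTGCGGA"
print(matches(mutant_allele, guide))   # -> True
print(matches(healthy_allele, guide))  # -> False
```

The point of the sketch is only that placing the guide over the mutated base lets the tool distinguish the two copies, which is what gives CRISPR the advantage the researchers describe.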
The findings, together with a related study that was published earlier in a leading journal of stem cell research (Cell Stem Cell), offer reassurance that CRISPR will be a useful tool for editing the genes of human iPSCs with little risk of off-target effects, say Ye and Cheng.
“CRISPR-mediated genome editing opens the door to many genetic applications in biologically relevant cells that can lead to better understanding of and potential cures for human diseases,” says Cheng.
Here’s a link to and citation for the paper by the Johns Hopkins researchers,
Not mentioned in the Johns Hopkins Medicine news release is a brewing patent battle over the CRISPR technique. A Dec. 31, 2014 post by Glyn Moody for Techdirt lays out the situation (Note: Links have been removed),
Although not many outside the world of the biological sciences have heard of it yet, the CRISPR gene editing technique may turn out to be one of the most important discoveries of recent years — if patent battles don’t ruin it. Technology Review describes it as:
an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys.
Unfortunately, rivalry between scientists claiming the credit for key parts of CRISPR threatens to spill over into patent litigation …
Moody describes three scientists vying for control via their patents,
[A researcher at the MIT-Harvard Broad Institute, Feng] Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up.
That’s because [Jennifer] Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property — in the form of her own pending patent — has been licensed to Intellia, a competing startup unveiled only last month.
Making matters still more complicated, [another CRISPR researcher, Emmanuelle] Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.
Whether obvious or not, it looks like the patent granted may complicate turning the undoubtedly important CRISPR technique into products. That, in its turn, will mean delays for life-changing and even life-saving therapies: for example, CRISPR could potentially allow the defective gene that causes serious problems for those with cystic fibrosis to be edited to produce normal proteins, thus eliminating those problems.
It’s dispiriting to think that potentially valuable therapies could be lost to litigation battles, particularly since the researchers are academics and their work was funded by taxpayers. In any event, I hope sanity reigns and they are able to avoid actions that would grind research to a standstill.
The deal signed by the Massachusetts Institute of Technology (MIT) and one of the largest universities in Latin America covers a five-year period, and its initial focus is on nanoscience and nanotechnology. From a Nov. 3, 2014 news item on Azonano,
MIT has established a formal relationship with Tecnológico de Monterrey, one of Latin America’s largest universities, to bring students and faculty from Mexico to Cambridge [Massachusetts, US] for fellowships, internships, and research stays in MIT labs and centers. The agreement will initially focus on research at the frontier of nanoscience and nanotechnology.
The agreement was celebrated today with a signing ceremony at MIT attended by a delegation from Tecnológico de Monterrey that included President Salvador Alva; the chairman of the board of trustees, José Antonio Fernández Carbajal; Mexico’s ambassador to the United States, Eduardo Medina Mora; and Daniel Hernández Joseph, the consul general of Mexico in Boston.
“We feel honored for the confidence that the MIT community has placed in us,” Alva says. “Our goal is to educate even more entrepreneurial leaders with the capacity and the motivation to solve humanity’s grand challenges. Leaders capable of creating and sustaining economic and social value. Leaders that will transform the lives of millions of people.”
The agreement sets the stage for increasing long-term cooperation and collaboration between the two universities with an initial academic program that will enable undergraduates, graduate students, postdocs, and junior faculty from Tecnológico de Monterrey to visit the MIT campus, where they will be embedded in labs and centers alongside MIT faculty and students. The participants will gain direct experience in disciplines and topics that match their interests. The program may change or expand its focus after five years.
“The goal for the first five years is to provide students and scholars from Tecnológico de Monterrey with a world-class research experience in nanoscience and nanotechnology and to accelerate research programs of critical importance to Mexico and the world,” says Jesús del Álamo, the Donner Professor of Electrical Engineering, who will coordinate the program at MIT. “And because faculty hosts of participants in the initial program will be recruited from any MIT academic department with relevant activities, we will be able to accommodate interests in nanoscale research over a very broad intellectual front.”
MIT is currently constructing a new facility, MIT.nano, that will be a key resource for future extensions of the program. The new 200,000-square-foot facility, which is being constructed on the site of Building 12 at the center of the MIT campus, will house state-of-the-art cleanroom, imaging, and prototyping facilities supporting research with nanoscale materials and processes — in fields including energy, health, life sciences, quantum sciences, electronics, and manufacturing.
In honor of the new relationship, the facility’s Computer-Aided Visualization Environment will be named after Tecnológico de Monterrey, says Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology and faculty lead for the MIT.nano building project.
“When it is completed, MIT.nano will enable students and faculty from Tecnológico de Monterrey to learn and work in one of the most advanced facilities in the world and will give them invaluable experience at the forefront of innovation,” says Bulović, who is also the associate dean for innovation in MIT’s School of Engineering and co-chair of the MIT Innovation Initiative.
Tecnológico de Monterrey is one of the largest universities in Latin America, with nearly 100,000 high school, undergraduate, and graduate students; 31 campuses in Mexico; and 19 international locations and branches in the Americas, Europe, and Japan. This week’s agreement establishes a new relationship between MIT and Tecnológico de Monterrey, but the two institutions have a shared history.
Tecnológico de Monterrey was founded in 1943 by Eugenio Garza Sada, who graduated from MIT in 1914 with a degree in civil engineering. After studying at MIT, Garza Sada — with his brother, Roberto, who graduated from MIT in 1918 — grew his family’s brewery in Mexico into a company that today is known as FEMSA, the largest beverage company in Mexico and Latin America. Tecnológico de Monterrey’s founding director-general was León Ávalos Vez, a mechanical engineer from the MIT Class of 1929.
“We believe that both MIT and Tecnológico de Monterrey play a leadership role in shaping minds and creating knowledge, in serving as catalysts for innovation, entrepreneurship and economic growth, but they also have a responsibility to address the critical problems in the world,” says Fernández, the chairman of the board of trustees at Tecnológico de Monterrey. “This agreement will encourage the implementation of educational programs and accelerate research in nanotechnology in ways that will truly make a difference.”
The new program will commence next spring, with the first students and faculty targeted to come to MIT next summer.
It’ll be interesting to see whether this exchange ever reverses and MIT students start visiting Tecnológico de Monterrey campuses. It seems there’s quite a selection, with 31 campuses in Mexico and 19 in various international locations.