Tag Archives: University of Tokyo

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.
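The "brute force" mapping idea can be sketched in a few lines of analysis code: scatter droplets with random reagent concentrations, record each droplet's outcome, and bin everything into a concentration map. All numbers and the outcome function below are illustrative placeholders, not the team's actual chemistry or pipeline,

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # droplets screened in parallel

# Each droplet gets random reagent concentrations (read back, in the real
# experiment, from fluorescent barcodes) and a measured reaction outcome.
# The outcome function is a stand-in for the fluorescence readout.
c1 = rng.uniform(0, 100, n)                  # reagent 1, arbitrary units
c2 = rng.uniform(0, 100, n)                  # reagent 2, arbitrary units
outcome = np.sin(c1 / 15) * np.cos(c2 / 20)

# Bin the scattered droplets into a high-resolution concentration-outcome map.
sums, _, _ = np.histogram2d(c1, c2, bins=50, weights=outcome)
counts, _, _ = np.histogram2d(c1, c2, bins=50)
mean_map = sums / np.maximum(counts, 1)      # average outcome per bin
```

With ten thousand droplets and a 50 × 50 grid, each bin averages a handful of droplets, which is why the random "colored dots" resolve into a smooth map once enough droplets are read.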

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016)
doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

pH dependent nanoparticle-based contrast agent for MRIs (magnetic resonance images)

This news about a safer and more effective contrast agent for MRIs (magnetic resonance images) developed by Japanese scientists comes from a June 6, 2016 article by Heather Zeiger on phys.org. First some explanations,

Magnetic resonance imaging relies on the excitation and subsequent relaxation of protons. In clinical MRI studies, the signal is determined by the relaxation time of the hydrogen protons in water. To get a stronger signal, scientists can use contrast agents to shorten the relaxation time of the protons.
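The effect of shortening the relaxation time is easy to see in the common saturation-recovery approximation, where the T1-weighted signal grows as 1 − exp(−TR/T1): a shorter T1 means more recovery, hence more signal, at a fixed repetition time TR. A quick illustration with hypothetical, order-of-magnitude values,

```python
import math

def t1_signal(tr_ms, t1_ms):
    """Relative T1-weighted signal in the saturation-recovery model:
    1 - exp(-TR/T1)."""
    return 1.0 - math.exp(-tr_ms / t1_ms)

# Hypothetical numbers for illustration only: tissue water T1 ~ 1000 ms,
# shortened to ~300 ms near a T1 contrast agent, at a repetition time of 500 ms.
baseline = t1_signal(500, 1000)   # ~0.39
enhanced = t1_signal(500, 300)    # ~0.81
print(f"signal without agent: {baseline:.2f}, with agent: {enhanced:.2f}")
```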

MRI is non-invasive and does not involve radiation, making it a safe diagnostic tool. However, its weak signal makes tumor detection difficult. The ideal contrast agent would select for malignant tumors, making its location and diagnosis much more obvious.

Nanoparticle contrast agents have been of interest because nanoparticles can be functionalized and, as in this study, can contain various metals. Researchers have attempted to functionalize nanoparticles with ligands that attach to chemical factors on the surface of cancer cells. However, cancer cells tend to be compositionally heterogeneous, leading some researchers to look for nanoparticles that respond to differences in pH or redox potential compared to normal cells.

Now for the research,

Researchers from the University of Tokyo, Tokyo Institute of Technology, Kawasaki Institute of Industry Promotion, and the Japan Agency for Quantum and Radiological Science and Technology have developed a contrast agent from calcium phosphate-based nanoparticles that release a manganese ion in an acidic environment. …

Peng Mi, Daisuke Kokuryo, Horacio Cabral, Hailiang Wu, Yasuko Terada, Tsuneo Saga, Ichio Aoki, Nobuhiro Nishiyama, and Kazunori Kataoka developed a contrast agent composed of Mn2+-doped CaP nanoparticles with a PEG shell. They reasoned that using CaP nanoparticles, which are known to be pH sensitive, would allow the targeted release of Mn2+ ions in the tumor microenvironment. The tumor microenvironment tends to have a lower pH than normal tissue due to rapid cell metabolism in an oxygen-depleted environment. Manganese ions were tested because they are paramagnetic, which makes for a good contrast agent. They also bind to proteins, creating a slowly rotating manganese-protein system that results in sharp contrast enhancement.

These results were promising, so Peng Mi, et al. then tested whether the CaPMnPEG contrast agent worked in solid tumors. Because Mn2+ remains confined within the nanoparticle matrix at physiological pH, CaPMnPEG demonstrated a much lower toxicity [emphasis mine] compared to MnCl2. MRI studies showed a tumor-to-normal contrast of 131% after 30 minutes, which is much higher than Gd-DTPA [emphasis mine], a clinically approved contrast agent. After an hour, the tumor-to-normal ratio was 160% and remained around 170% for several hours.

Three-dimensional MRI studies of solid tumors showed that without the addition of CaPMnPEG, only blood vessels were visible. However, upon adding CaPMnPEG, the tumor was easily distinguishable. Additionally, there is evidence that excess Mn2+ leaves the plasma after an hour. The contrast signal remained strong for several hours indicating that protein binding rather than Mn2+ concentration is important for signal enhancement.

Finally, tests with metastatic tumors in the liver (C26 colon cancer cells) showed that CaPMnPEG works well in solid organ analysis and is highly sensitive to detecting millimeter-sized micrometastasis [emphasis mine]. Unlike other contrast agents used in the clinic, CaPMnPEG provided a contrast signal that lasted for several hours after injection. After an hour, the signal was enhanced by 25% and after two hours, the signal was enhanced by 39%.

This is exciting stuff. Bravo to the researchers!

Here’s a link to and citation for the paper,

A pH-activatable nanoparticle with signal-amplification capabilities for non-invasive imaging of tumour malignancy by Peng Mi, Daisuke Kokuryo, Horacio Cabral, Hailiang Wu, Yasuko Terada, Tsuneo Saga, Ichio Aoki, Nobuhiro Nishiyama, & Kazunori Kataoka. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.72 Published online 16 May 2016

This paper is behind a paywall.

Fingertip pressure sensors from Japan

Pressure sensor
The pressure sensor wraps around and conforms to the shape of the fingers while still accurately measuring pressure distribution.
© 2016 Someya Laboratory

Those fingertip sensors could be jewellery but they’re not. From a March 8, 2016 news item on Nanowerk (Note: A link has been removed),

Researchers at the University of Tokyo working with American colleagues have developed a transparent, bendable and sensitive pressure sensor (“A Transparent, Bending Insensitive Pressure Sensor”). Healthcare practitioners may one day be able to physically screen for breast cancer using pressure-sensitive rubber gloves to detect tumors, owing to this newly developed sensor.

A March 7, 2016 University of Tokyo press release, which originated the news item, expands on the theme,

Conventional pressure sensors are flexible enough to fit to soft surfaces such as human skin, but they cannot measure pressure changes accurately once they are twisted or wrinkled, making them unsuitable for use on complex and moving surfaces. Additionally, it is difficult to reduce them below 100 micrometers thickness because of limitations in current production methods.

To address these issues, an international team of researchers led by Dr. Sungwon Lee and Professor Takao Someya of the University of Tokyo’s Graduate School of Engineering has developed a nanofiber-type pressure sensor that can measure pressure distribution of rounded surfaces such as an inflated balloon and maintain its sensing accuracy even when bent over a radius of 80 micrometers, equivalent to just twice the width of a human hair. The sensor is roughly 8 micrometers thick and can measure the pressure in 144 locations at once.

The device demonstrated in this study consists of organic transistors, electronic switches made from carbon and oxygen based organic materials, and a pressure sensitive nanofiber structure. Carbon nanotubes and graphene were added to an elastic polymer to create nanofibers with a diameter of 300 to 700 nanometers, which were then entangled with each other to form a transparent, thin and light porous structure.

“We’ve also tested the performance of our pressure sensor with an artificial blood vessel and found that it could detect small pressure changes and speed of pressure propagation,” says Lee. He continues, “Flexible electronics have great potential for implantable and wearable devices. I realized that many groups are developing flexible sensors that can measure pressure but none of them are suitable for measuring real objects since they are sensitive to distortion. That was my main motivation and I think we have proposed an effective solution to this problem.”

Here’s a link to and a citation for the paper,

A transparent bending-insensitive pressure sensor by Sungwon Lee, Amir Reuveny, Jonathan Reeder, Sunghoon Lee, Hanbit Jin, Qihan Liu, Tomoyuki Yokota, Tsuyoshi Sekitani, Takashi Isoyama, Yusuke Abe, Zhigang Suo & Takao Someya. Nature Nanotechnology (2016)  doi:10.1038/nnano.2015.324 Published online 25 January 2016

This paper is behind a paywall.

Origami and our pop-up future

They should have declared Jan. 25, 2016 ‘L. Mahadevan Day’ at Harvard University. The researcher was listed as an author on two major papers. I covered the first piece of research, 4D printed hydrogels, in this Jan. 26, 2016 posting. Now for Mahadevan’s other work, from a Jan. 27, 2016 news item on Nanotechnology Now,

What if you could make any object out of a flat sheet of paper?

That future is on the horizon thanks to new research by L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He is also a core faculty member of the Wyss Institute for Biologically Inspired Engineering, and member of the Kavli Institute for Bionano Science and Technology, at Harvard University.

Mahadevan and his team have characterized a fundamental origami fold, or tessellation, that could be used as a building block to create almost any three-dimensional shape, from nanostructures to buildings. …

A Jan. 26, 2016 Harvard University news release by Leah Burrows, which originated the news item, provides more detail about the specific fold the team has been investigating,

The folding pattern, known as the Miura-ori, is a periodic way to tile the plane using the simplest mountain-valley fold in origami. It was used as a decorative item in clothing at least as long ago as the 15th century. A folded Miura can be packed into a flat, compact shape and unfolded in one continuous motion, making it ideal for packing rigid structures like solar panels.  It also occurs in nature in a variety of situations, such as in insect wings and certain leaves.
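The flat crease pattern itself is easy to generate: the creases run straight in one direction and zigzag in the other, tiling the plane with parallelograms. A minimal sketch (the dimensions and zigzag offset are arbitrary choices, not values from the paper),

```python
def miura_vertices(m, n, a=1.0, b=1.0, offset=0.3):
    """Vertex grid of an (m x n) Miura-ori crease pattern.

    Creases at fixed i form straight vertical lines; creases at fixed j
    zigzag, because alternate columns are shifted vertically by `offset`.
    The cells between creases are the parallelograms of the classic fold.
    """
    return [[(i * a, j * b + (i % 2) * offset) for j in range(n + 1)]
            for i in range(m + 1)]

grid = miura_vertices(4, 3)
```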

“Could this simple folding pattern serve as a template for more complicated shapes, such as saddles, spheres, cylinders, and helices?” asked Mahadevan.

“We found an incredible amount of flexibility hidden inside the geometry of the Miura-ori,” said Levi Dudte, graduate student in the Mahadevan lab and first author of the paper. “As it turns out, this fold is capable of creating many more shapes than we imagined.”

Think surgical stents that can be packed flat and pop-up into three-dimensional structures once inside the body or dining room tables that can lean flat against the wall until they are ready to be used.

“The collapsibility, transportability and deployability of Miura-ori folded objects makes it a potentially attractive design for everything from space-bound payloads to small-space living to laparoscopic surgery and soft robotics,” said Dudte.

Here’s a .gif demonstrating the fold,

This spiral folds rigidly from flat pattern through the target surface and onto the flat-folded plane (Image courtesy of Mahadevan Lab) Harvard University

The news release offers some details about the research,

To explore the potential of the tessellation, the team developed an algorithm that can create certain shapes using the Miura-ori fold, repeated with small variations. Given the specifications of the target shape, the program lays out the folds needed to create the design, which can then be laser printed for folding.

The program takes into account several factors, including the stiffness of the folded material and the trade-off between the accuracy of the pattern and the effort associated with creating finer folds – an important characterization because, as of now, these shapes are all folded by hand.

“Essentially, we would like to be able to tailor any shape by using an appropriate folding pattern,” said Mahadevan. “Starting with the basic mountain-valley fold, our algorithm determines how to vary it by gently tweaking it from one location to the other to make a vase, a hat, a saddle, or to stitch them together to make more and more complex structures.”

“This is a step in the direction of being able to solve the inverse problem – given a functional shape, how can we design the folds on a sheet to achieve it,” Dudte said.

“The really exciting thing about this fold is it is completely scalable,” said Mahadevan. “You can do this with graphene, which is one atom thick, or you can do it on the architectural scale.”

Co-authors on the study include Etienne Vouga, currently at the University of Texas at Austin, and Tomohiro Tachi from the University of Tokyo. …

Here’s a link to and a citation for the paper,

Programming curvature using origami tessellations by Levi H. Dudte, Etienne Vouga, Tomohiro Tachi, & L. Mahadevan. Nature Materials (2016) doi:10.1038/nmat4540 Published online 25 January 2016

This paper is behind a paywall.

Happy Thanksgiving! Oct. 12, 2015, my last mention of science debates in the Canadian 2015 federal election, and my 4001st posting

Two things for me to celebrate today: Thanksgiving (in Canada, we celebrate on the 2nd Monday of October) and my 4001st posting (this one).

Science for the people

Plus, there’s much to celebrate about science discussion during the 2015 Canadian federal election. I stumbled across Science for the People, which is a weekly radio show based in Canada (from the About page),

Science for the People is a syndicated radio show and podcast that broadcasts weekly across North America. We are a long-format interview show that explores the connections between science, popular culture, history, and public policy, to help listeners understand the evidence and arguments behind what’s in the news and on the shelves.

Every week, our hosts sit down with science researchers, writers, authors, journalists, and experts to discuss science from the past, the science that affects our lives today, and how science might change our future.


If you have comments, show ideas, or questions about Science for the People, email feedback@scienceforthepeople.ca.

Theme Song

Our theme song music comes from the song “Binary Consequence” by the band Fractal Pattern. You can find the full version of it on their album No Hope But Mt. Hope.

License & Copyright

All Science for the People episodes are under the Creative Commons license. You are free to distribute unedited versions of the episodes for non-commercial purposes. If you would like to edit the episode please contact us.

Episode #338 (2015 Canadian federal election and science) was originally broadcast on Oct. 9,  2015 and features,

This week, we’re talking about politics, and the prospects for pro-science politicians, parties and voters in Canada. We’ll spend the hour with panelists Katie Gibbs, Executive Director of Evidence for Democracy, science librarian John Dupuis, journalist Mike De Souza, and former Canadian government scientist Steven Campana, for an in-depth discussion about the treatment of science by the current Canadian government, and what’s at stake for science in the upcoming federal election.

The podcast is approximately one hour long and Désirée Schell (sp?) hosts/moderates an interesting discussion where one of the participants notes that issues about science and science muzzles predate Harper. The speaker dates the issues back to the Chrétien/Martin years. Note: Jean Chrétien was Prime Minister from 1993 to 2003 and Paul Martin, his successor, was Prime Minister from 2003 to 2006 when he was succeeded by current Prime Minister, Stephen Harper. (I attended a Philosophers’ Cafe event on Oct. 1, 2015 where the moderator dated the issues back to the Mulroney years. Note: Brian Mulroney was Prime Minister from 1984 – 1993.) So, it’s been 10, 20, or 30 years depending on your viewpoint and when you started noticing (assuming you’re of an age to have noticed something happening 30 years ago).

The participants also spent some time discussing why Canadians would care about science. Interestingly, one of the speakers claimed the current Syrian refugee crisis has its roots in climate change, a science issue, and he noted the US Dept. of Defense views climate change as a threat multiplier. For anyone who doesn’t know, the US Dept. of Defense funds a lot of science research.

It’s a far ranging discussion, which doesn’t really touch on science as an election issue until some 40 mins. into the podcast.

One day later on Oct. 10, 2015 (where you’ll find the podcast), the Canadian Broadcasting Corporation’s Quirks & Quarks radio programme broadcast and made available its podcast of a 2015 Canadian election science debate/panel,

There is just over a week to go before Canadians head to the polls to elect a new government. But one topic that hasn’t received much attention on the campaign trail is science.

So we thought we’d gather together candidates from each of the major federal parties to talk about science and environmental issues in this election.

We asked each of them where they and their parties stood on federal funding of science; basic vs. applied research; the controversy around federal scientists being permitted to speak about their research, and how to cut greenhouse gas emissions while protecting jobs and the economy.

Our panel of candidates were:

– Lynne Quarmby, The Green Party candidate [and Green Party Science critic] in Burnaby North-Seymour, and  professor and Chair of the Department of Molecular Biology and Biochemistry at Simon Fraser University

– Gary Goodyear, Conservative Party candidate in Cambridge, Ontario, and former Minister of State for Science and Technology

– Marc Garneau, Liberal Party candidate in NDG-Westmount, and a former Canadian astronaut

– Megan Leslie, NDP candidate in Halifax and her party’s environment critic

It was a crackling debate. Gary Goodyear was the biggest surprise in that he was quite vigorous and informed in his defence of the government’s track record. Unfortunately, he was also quite patronizing.

The others didn’t seem to have as much information and data at their fingertips. Goodyear quoted OECD reports of Canada doing well in the sciences, and the others had no statistics of their own to provide a counterargument. Quarmby, Garneau, and Leslie did at one time or another come back strongly on one point or another, but none of them seriously damaged Goodyear’s defence. I can’t help wondering if Kennedy Stewart, NDP science critic, Laurin Liu, NDP deputy science critic, or Ted Hsu, Liberal science critic, might have been better choices for this debate.

The Quirks & Quarks debate was approximately 40 or 45 mins. with the remainder of the broadcast devoted to Canadian 2015 Nobel Prize winner in Physics, Arthur B. McDonald (Takaaki Kajita of the University of Tokyo shared the prize) for the discovery of neutrino oscillations, i.e., neutrinos have mass.

Kate Allen writing an Oct. 9, 2015 article for thestar.com got a preview of the pretaped debate and excerpted a few of the exchanges,

On science funding

Gary Goodyear: Currently, we spend more than twice what the Liberals spent in their last year. We have not cut science, and in fact our science budget this year is over $10 billion. But the strategy is rather simple. We are very strong in Canada on basic research. Where we fall down sometimes as compared to other countries is moving the knowledge that we discover in our laboratories out of the laboratory onto our factory floors where we can create jobs, and then off to the hospitals and living rooms of the world — which is how we make that home run. No longer is publishing an article the home run, as it once was.

Lynne Quarmby: I would take issue with the statement that science funding is robust in this country … The fact is that basic scientific research is at starvation levels. Truly fundamental research, without an obvious immediate application, is starving. And that is the research that is feeding the creativity — it’s the source of new ideas, and new understanding about the world, that ultimately feeds innovation.

If you’re looking for a good representation of the discussion and you don’t have time to listen to the podcast, Allen’s article is a good choice.

Finally, Research2Reality, a science outreach and communication project I profiled earlier in 2015 has produced an Oct. 9, 2015 election blog posting by Karyn Ho, which in addition to the usual ‘science is dying in Canada’ talk includes links to more information and to the official party platforms, as well as, an exhortation to get out there and vote.

Something seems to be in the air as voter turnout for the advance polls is somewhere from 24% to 34% higher than usual.

Happy Thanksgiving!

ETA Oct. 14, 2015:  There’s been some commentary about the Quirks & Quarks debate elsewhere. First, there’s David Bruggeman’s Oct. 13, 2015 post on his Pasco Phronesis blog (Note: Links have been removed),

Chalk it up to being a Yank who doesn’t give Canadian science policy his full attention, but one thing (among several) I learned from the recent Canadian cross-party science debate concerns open access policy.

As I haven’t posted anything on Canadian open access policies since 2010, clearly I need to catch up.  I am assuming Goodyear is referring to the Tri-Agency Open Access Policy, introduced in February by his successor as Minister of State for Science and Technology.  It applies to all grants issued from May 1, 2015 and forward (unless the work was already applicable to preexisting government open access policy), and applies most of the open access policy of the Canadian Institutes for Health Research (CIHR) to the other major granting agencies (the Natural Sciences and Engineering Research Council of Canada and the Social Sciences and Humanities Research Council of Canada).

The policy establishes that grantees must make research articles coming from their grants available free to the public within 12 months of publication. …

Then, there’s Michael Rennie, an Assistant Professor at Lakehead University and a former Canadian government scientist whose Oct. 14, 2015 posting on his unmuzzled science blog notes this,

This [Gary Goodyear’s debate presentation] pissed me off so much it made me come out of retirement on this blog.

Listening to Gary Goodyear (Conservative representative, and MP in Cambridge and former Minister of State for Science and Technology), I became furious with the level of misinformation given. …

Rennie went ahead and Storified the twitter responses to the Goodyear’s comments (Note: Links have been removed),

Here’s my Storify of tweets that help clarify a good deal of the misinformation Gary Goodyear presented during the debate, as well as some rebuttals from folks who are in the know: I was a Canadian Government Scientist with DFO [Department of Fisheries and Oceans] from 2010-2014, and was a Research Scientist at the Experimental Lakes Area [ELA], who heard about the announcement regarding the intention of the government to close the facility first-hand on the telephone at ELA.

Goodyear: “I was involved in that decision. With respect to the Experimental Lakes, we never said we would shut it down. We said that we wanted to transfer it to a facility that was better suited to operate it. And that’s exactly what we’ve done. Right now, DFO is up there undertaking some significant remediation effects to clean up those lakes that are contaminated by the science that’s been going on up there. We all hope these lakes will recover soon so that science and experimentation can continue but not under the federal envelope. So it’s secure and it’s misleading to suggest that we were trying to stop science there.”
There are so many inaccuracies in here, it’s hard to know where to start. First, Goodyear’s assertion that there are “contaminated lakes” at ELA is nonsense. Experiments conducted there are done using environmentally-relevant exposures; in other words, what you’d see going on somewhere else on earth, and in every case, each lake has recovered to its natural state, simply by stopping the experiment.

Second, there ARE experiments going on at ELA currently, many of which I am involved in; the many tours, classes and researchers on site this year can attest to this.

Third, this “cleanup” that is ongoing is to clean up all the crap that was left behind by DFO staff during 40 years of experiments- wood debris, old gear, concrete, basically junk that was left on the shorelines of lakes. No “lake remediation” to speak of.

Fourth, the conservative government DID stop science at ELA- no new experiments were permitted to begin, even ones that were already funded and on the books like the nanosilver experiment which was halted until 2014, jeopardizing the futures of many students involved. Only basic monitoring occurred between 2012-2014.

Last, the current government deserves very little credit for the transfer of ELA to another operator; the successful move was conceived and implemented largely by other people and organizations, and the attempts made by the government to try and move the facility to a university were met with incredulity by the deans and vice presidents invited to the discussion.

There’s a lot more and I strongly recommend reading Rennie’s Storify piece.

It was unfortunate that the representatives from the other parties were not able to seriously question Goodyear’s points.

Perhaps next time (fingers crossed), the representatives from the various parties will be better prepared. I’d also like to suggest that there be some commentary from experts afterwards in the same way the leaders’ debates are followed by commentary. And while I’m dreaming, maybe there could be an opportunity for phone-in or Twitter questions.

Quantum teleportation

It’s been two years (my Aug. 16, 2013 posting features a German-Japanese collaboration) since the last quantum teleportation posting here. First, a little visual stimulation,

Captain James T Kirk (credit: http://www.comicvine.com/james-t-kirk/4005-20078/)

Captain Kirk, also known as William Shatner, is from Montréal, Canada and that’s not the only Canadian connection to this story which is really about some research at the University of York (UK). From an Oct. 1, 2015 news item on Nanotechnology Now,

Mention the word ‘teleportation’ and for many people it conjures up “Beam me up, Scottie” images of Captain James T Kirk.

But in the last two decades quantum teleportation – transferring the quantum structure of an object from one place to another without physical transmission — has moved from the realms of Star Trek fantasy to tangible reality.

A Sept. 30, 2015 University of York (UK) press release, which originated the news item, describes the quantum teleportation research problem and solution,

Quantum teleportation is an important building block for quantum computing, quantum communication and quantum network and, eventually, a quantum Internet. While theoretical proposals for a quantum Internet already exist, the problem for scientists is that there is still debate over which of various technologies provides the most efficient and reliable teleportation system. This is the dilemma which an international team of researchers, led by Dr Stefano Pirandola of the Department of Computer Science at the University of York, set out to resolve.

In a paper published in Nature Photonics, the team, which included scientists from the Freie Universität Berlin and the Universities of Tokyo and Toronto [emphasis mine], reviewed the theoretical ideas around quantum teleportation focusing on the main experimental approaches and their attendant advantages and disadvantages.

None of the technologies alone provide a perfect solution, so the scientists concluded that a hybridisation of the various protocols and underlying structures would offer the most fruitful approach.

For instance, systems using photonic qubits work over distances up to 143 kilometres, but they are probabilistic in that only 50 per cent of the information can be transported. To resolve this, such photon systems may be used in conjunction with continuous variable systems, which are 100 per cent effective but currently limited to short distances.
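As a toy illustration of what these systems implement (not the review's methods or code), the standard discrete-variable teleportation protocol can be simulated with a few lines of linear algebra: Alice entangles the input qubit with her half of a shared Bell pair, measures, and Bob applies a correction conditioned on her two classical bits,

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # projectors |0><0|, |1><1|

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def teleport(psi, m0, m1):
    """Teleport the single-qubit state psi, given Alice's measurement
    outcomes (m0, m1); returns Bob's corrected qubit."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # shared pair on qubits 1, 2
    state = np.kron(psi, bell)                        # qubit 0 carries psi
    state = (kron3(P[0], I2, I2) + kron3(P[1], X, I2)) @ state   # CNOT 0 -> 1
    state = kron3(H, I2, I2) @ state                  # Hadamard on qubit 0
    state = kron3(P[m0], P[m1], I2) @ state           # Alice's measurement outcome
    state /= np.linalg.norm(state)
    corr = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
    state = kron3(I2, I2, corr) @ state               # Bob's classical-feedback fix
    base = 4 * m0 + 2 * m1                            # slice out Bob's qubit
    return state[base:base + 2]

psi = np.array([0.6, 0.8])                            # arbitrary test state
assert all(np.allclose(teleport(psi, a, b), psi) for a in (0, 1) for b in (0, 1))
```

The practical difficulties the review discusses — lossy channels, probabilistic Bell measurements, the lack of good quantum memories — are exactly what this idealized simulation leaves out.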

Most importantly, teleportation-based optical communication needs an interface with suitable matter-based quantum memories where quantum information can be stored and further processed.

Dr Pirandola, who is also a member of the York Centre for Quantum Technologies, said: “We don’t have an ideal or universal technology for quantum teleportation. The field has developed a lot but we seem to need to rely on a hybrid approach to get the best from each available technology.

“The use of quantum teleportation as a building block for a quantum network depends on its integration with quantum memories. The development of good quantum memories would allow us to build quantum repeaters, therefore extending the range of teleportation. They would also give us the ability to store and process the transmitted quantum information at local quantum computers.

“This could ultimately form the backbone of a quantum Internet. The revised hybrid architecture will likely rely on teleportation-based long-distance quantum optical communication, interfaced with solid state devices for quantum information processing.”

Here’s a link to and a citation for the paper,

Advances in quantum teleportation by S. Pirandola, J. Eisert, C. Weedbrook, A. Furusawa, & S. L. Braunstein. Nature Photonics 9, 641–652 (2015) doi:10.1038/nphoton.2015.154 Published online 29 September 2015

This paper is behind a paywall.


Japanese researchers note the emergence of the ‘Devil’s staircase’

I wanted to know why it’s called the ‘Devil’s staircase’ and this is what I found. According to Wikipedia there are several of them,

I gather the scientists are referring to the Cantor function (mathematics), Note: Links have been removed,

In mathematics, the Cantor function is an example of a function that is continuous, but not absolutely continuous. It is also referred to as the Cantor ternary function, the Lebesgue function, Lebesgue’s singular function, the Cantor-Vitali function, the Devil’s staircase,[1] the Cantor staircase function,[2] and the Cantor-Lebesgue function.[3]
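To make the ‘staircase’ concrete, here is a short sketch of my own (an illustration only, not anything from the research) that approximates the Cantor function by walking the ternary expansion of its argument; the flat plateaus are what give the staircase its devilish look,

```python
def cantor(x, depth=40):
    """Approximate the Cantor function ('Devil's staircase') on [0, 1]
    by reading off the ternary (base-3) digits of x."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    result = 0.0
    for n in range(1, depth + 1):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            # x lies in a removed middle third: the function is flat here
            return result + 1 / 2 ** n
        # digit is 0 or 2; map it to the binary digit 0 or 1
        result += (digit // 2) / 2 ** n
    return result

# The entire middle third [1/3, 2/3] sits on the same plateau at 1/2
print(cantor(1 / 3), cantor(0.5), cantor(2 / 3))  # all 0.5
```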

Here’s a diagram illustrating the Cantor function (from the Wikipedia entry),

Diagram of the Cantor function. CC BY-SA 3.0. File:CantorEscalier.svg. Uploaded by Theon. Created: January 24, 2009

As for this latest ‘Devil’s staircase’, a June 17, 2015 news item on Nanowerk announces the research (Note: A link has been removed),

Researchers at the University of Tokyo have revealed a novel magnetic structure named the “Devil’s staircase” in cobalt oxides using soft X-rays (“Observation of a Devil’s Staircase in the Novel Spin-Valve System SrCo6O11“). This is an important result since the researchers succeeded in determining the detailed magnetic structure of a very small single crystal invisible to the human eye.

A June 17, 2015 University of Tokyo press release, which originated the news item on Nanowerk, describes why this research is now possible and the impact it could have,

Recent remarkable progress in resonant soft x-ray diffraction performed in synchrotron facilities has made it possible to determine spin ordering (magnetic structure) in small-volume samples including thin films and nanostructures, and thus is expected to lead not only to advances in materials science but also application to spintronics, a technology which is expected to form the basis of future electronic devices. Cobalt oxide is known as one material that is suitable for spintronics applications, but its magnetic structure was not fully understood.

The research group of Associate Professor Hiroki Wadati at the University of Tokyo Institute for Solid State Physics, together with researchers at Kyoto University and in Germany, performed a resonant soft X-ray diffraction study of cobalt (Co) oxides at the synchrotron facility BESSY II in Germany. They observed all the spin orderings that are theoretically possible and determined how these orderings change with the application of magnetic fields. The plateau-like behavior of the magnetic structure as a function of magnetic field is called the “Devil’s staircase,” and this is the first such discovery in spin systems in 3d transition metal oxides, such as those of cobalt, iron, and manganese.

Further resonant soft X-ray diffraction studies can be expected to find similar “Devil’s staircase” behavior in other materials, and increasing the spatial resolution of microscopic observation of the “Devil’s staircase” may lead to the development of novel types of spintronics materials.

Here’s an example of the ‘cobalt’ Devil’s staircase,

The magnetic structure that gives rise to the Devil’s Staircase: magnetization (vertical axis) of cobalt oxide shows plateau-like behavior as a function of the externally applied magnetic field (horizontal axis). The researchers succeeded in determining the magnetic structures that create such plateaus. Red and blue arrows indicate spin direction. © 2015 Hiroki Wadati

Here’s a link to and a citation for the paper,

Observation of a Devil’s Staircase in the Novel Spin-Valve System SrCo6O11 by T. Matsuda, S. Partzsch, T. Tsuyama, E. Schierle, E. Weschke, J. Geck, T. Saito, S. Ishiwata, Y. Tokura, and H. Wadati. Phys. Rev. Lett. 114, 236403. Published 11 June 2015. DOI: 10.1103/PhysRevLett.114.236403

This paper is behind a paywall.

Magnetic sensitivity under the microscope

Humans do not have the sense of magnetoreception (the ability to detect magnetic fields) unless they’ve been enhanced. On the other hand, various species of fish, insects, birds, and some other mammals possess the sense naturally. Scientists at the University of Tokyo (Japan) have developed a microscope capable of observing magnetoreception, according to a June 4, 2015 news item on Nanowerk (Note: A link has been removed),

Researchers at the University of Tokyo have succeeded in developing a new microscope capable of observing the magnetic sensitivity of photochemical reactions believed to be responsible for the ability of some animals to navigate in the Earth’s magnetic field, on a scale small enough to follow these reactions taking place inside sub-cellular structures (Angewandte Chemie International Edition, “Optical Absorption and Magnetic Field Effect Based Imaging of Transient Radicals”).

A June 4, 2015 University of Tokyo news release on EurekAlert, which originated the news item, describes the research in more detail,

Several species of insects, fish, birds and mammals are believed to be able to detect magnetic fields – an ability known as magnetoreception. For example, birds are able to sense the Earth’s magnetic field and use it to help navigate when migrating. Recent research suggests that a group of proteins called cryptochromes and particularly the molecule flavin adenine dinucleotide (FAD) that forms part of the cryptochrome, are implicated in magnetoreception. When cryptochromes absorb blue light, they can form what are known as radical pairs. The magnetic field around the cryptochromes determines the spins of these radical pairs, altering their reactivity. However, to date there has been no way to measure the effect of magnetic fields on radical pairs in living cells.

The research group of Associate Professor Jonathan Woodward at the Graduate School of Arts and Sciences specializes in radical pair chemistry and in investigating the magnetic sensitivity of biological systems. In this latest research, PhD student Lewis Antill made measurements using a special microscope to detect radical pairs formed from FAD, and the influence of very weak magnetic fields on their reactivity, in volumes less than 4 millionths of a billionth of a liter (4 femtoliters). This was possible using a technique the group developed called TOAD (transient optical absorption detection) imaging, employing a microscope built by postdoctoral research associate Dr. Joshua Beardmore based on a design by Beardmore and Woodward.
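To put that detection volume in perspective, a little arithmetic of my own (not a figure from the news release): 4 femtoliters corresponds to a cube roughly 1.6 micrometres on a side, comfortably smaller than a typical cell,

```python
# Express a 4-femtolitre detection volume as the edge of an equivalent cube
volume_litres = 4e-15              # 4 fL
volume_m3 = volume_litres * 1e-3   # 1 L = 1e-3 cubic metres
edge_m = volume_m3 ** (1 / 3)      # side length of a cube with that volume
print(f"{edge_m * 1e6:.2f} micrometres")  # prints 1.59 micrometres
```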

“In the future, using another mode of the new microscope called MIM (magnetic intensity modulation), also introduced in this work, it may be possible to directly image only the magnetically sensitive regions of living cells,” says Woodward. “The new imaging microscope developed in this research will enable the study of the magnetic sensitivity of photochemical reactions in a variety of important biological and other contexts, and hopefully help to unlock the secrets of animals’ miraculous magnetic sense.”

Here’s a link to and a citation for the paper,

Optical Absorption and Magnetic Field Effect Based Imaging of Transient Radicals by Dr. Joshua P. Beardmore, Lewis M. Antill, and Prof. Jonathan R. Woodward. Angewandte Chemie International Edition DOI: 10.1002/anie.201502591 Article first published online: 3 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

I mentioned human enhancement earlier with regard to magnetoreception. There are people (body hackers) who’ve had implants that give them this extra sense. Dann Berg in a March 21, 2012 post on his website blog (iamdann.com) describes why he implanted a magnet into his finger and his experience with it (at that time, three years and counting),

I quickly learned that magnetic surfaces provided almost no sensation at all. Rather, it was movement that caused my finger to perk up. Things like power cord transformers, microwaves, and laptop fans became interactive in a whole new way. Each object has its own unique field, with different strength and “texture.” I started holding my finger over almost everything that I could, getting a feeling for each object’s invisible reach.

Portable electronics proved to be an experience as well. There were two fairly large electronic items that hit the shelves around the same time as I got my implant: the first iPad and the Kindle 2.

Something to consider,

Courtesy: iamdann.com (Dann Berg)


A city of science in Japan: Kawasaki (Kanagawa)

Happily, I’m getting more nanotechnology (for the most part) information from Japan. Given Japan’s prominence in this field of endeavour I’ve long felt FrogHeart has not adequately represented Japanese contributions. Now that I’m receiving English language translations, I hope to better address the situation.

This morning (March 26, 2015), there were two news releases from Kawasaki INnovation Gateway at SKYFRONT (KING SKYFRONT), Coastal Area International Strategy Office, Kawasaki City, Japan in my mailbox. Before getting on to the news releases, here’s a little about  the city of Kawasaki and about its innovation gateway. From the Kawasaki, Kanagawa entry in Wikipedia (Note: Links have been removed),

Kawasaki (川崎市 Kawasaki-shi) is a city in Kanagawa Prefecture, Japan, located between Tokyo and Yokohama. It is the 9th most populated city in Japan and one of the main cities forming the Greater Tokyo Area and Keihin Industrial Area.

Kawasaki occupies a belt of land stretching about 30 kilometres (19 mi) along the south bank of the Tama River, which divides it from Tokyo. The eastern end of the belt, centered on JR Kawasaki Station, is flat and largely consists of industrial zones and densely built working-class housing, while the western end is mountainous and more suburban. The coastline of Tokyo Bay is occupied by vast heavy industrial complexes built on reclaimed land.

There is a 2014 video about Kawasaki’s innovation gateway, which despite its 14 mins. 39 secs. running time I am embedding here. (Caution: They highlight their animal testing facility at some length.)

Now on to the two news releases. The first concerns research on gold nanoparticles that was published in 2014. From a March 26, 2015 Kawasaki INnovation Gateway news release,

Gold nanoparticles size up to cancer treatment

Incorporating gold nanoparticles helps optimise treatment carrier size and stability to improve delivery of cancer treatment to cells.

Treatments that attack cancer cells through the targeted silencing of cancer genes could be developed using small interfering RNA molecules (siRNA). However, delivering siRNA into cells intact is a challenge, as it is readily degraded by enzymes in the blood and is small enough to be eliminated from the bloodstream by kidney filtration. Now Kazunori Kataoka at the University of Tokyo and colleagues at Tokyo Institute of Technology have designed a protective treatment delivery vehicle with optimum stability and size for delivering siRNA to cells.

The researchers formed a polymer complex with a single siRNA molecule. The siRNA-loaded complex was then bonded to a 20 nm gold nanoparticle, which, thanks to advances in synthesis techniques, can be produced with a reliably low size distribution. The resulting nanoarchitecture had the optimum overall size – small enough to infiltrate cells while large enough to avoid kidney filtration and accumulate in tumours.

In an assay containing heparin – a biological anti-coagulant with a high negative charge density – the complex was found to release the siRNA due to electrostatic interactions. However when the gold nanoparticle was incorporated the complex remained stable. Instead, release of the siRNA from the complex with the gold nanoparticle could be triggered once inside the cell by the presence of glutathione, which is present in high concentrations in intracellular fluid. The glutathione bonded with the gold nanoparticles and the complex, detaching them from each other and leaving the siRNA prone to release.

The researchers further tested their carrier in a subcutaneous tumour model. The authors concluded that the complex bonded to the gold nanoparticle “enabled the efficient tumor accumulation of siRNA and significant in vivo gene silencing effect in the tumor, demonstrating the potential for siRNA-based cancer therapies.”

The news release provides links to the March 2015 newsletter which highlights this research and to the specific article and video,

March 2015 Issue of Kawasaki SkyFront iNewsletter: http://inewsletter-king-skyfront.jp/en/


Feature video on Professor Kataoka’s research : http://inewsletter-king-skyfront.jp/en/video_feature/vol_3/feature01/

Research highlights: http://inewsletter-king-skyfront.jp/en/research_highlights/vol_3/research01/

Here’s a link to and a citation for the paper,

Precise Engineering of siRNA Delivery Vehicles to Tumors Using Polyion Complexes and Gold Nanoparticles by Hyun Jin Kim, Hiroyasu Takemoto, Yu Yi, Meng Zheng, Yoshinori Maeda, Hiroyuki Chaya, Kotaro Hayashi, Peng Mi, Frederico Pittella, R. James Christie, Kazuko Toh, Yu Matsumoto, Nobuhiro Nishiyama, Kanjiro Miyata, and Kazunori Kataoka. ACS Nano, 2014, 8 (9), pp 8979–8991 DOI: 10.1021/nn502125h Publication Date (Web): August 18, 2014
Copyright © 2014 American Chemical Society

This article is behind a paywall.

The second March 26, 2015 Kawasaki INnovation Gateway news release concerns a DNA chip and food-borne pathogens,

Rapid and efficient DNA chip technology for testing 14 major types of food borne pathogens

Conventional methods for testing food-borne pathogens are based on the cultivation of pathogens, a process that is complicated and time consuming. So there is demand for alternative methods to test for food-borne pathogens that are simpler, quicker and applicable to a wide range of potential applications.

Now Toshiba Ltd and the Kawasaki City Institute for Public Health have collaborated in the development of a rapid and efficient automatic abbreviated DNA detection technology that can test for 14 major types of food-borne pathogens. The so-called ‘DNA chip card’ employs electrochemical DNA chips and overcomes the complicated procedures associated with conventional genetic testing methods. The ‘DNA chip card’ is expected to find applications in hygiene management in food manufacture, pharmaceuticals, and cosmetics.


The so-called automatic abbreviated DNA detection technology ‘DNA chip card’ was developed by Toshiba Ltd in collaboration with the Kawasaki City Institute for Public Health and used to simultaneously detect 14 different types of food-borne pathogens in less than 90 minutes. The detection sensitivity depends on the target pathogen, with a range of 10¹ to 10⁵ cfu/mL.

Notably, such tests would usually take 4-5 days using conventional methods based on pathogen cultivation. Furthermore, in contrast to conventional DNA protocols that require high levels of skill and expertise, the ‘DNA chip card’ only requires the operator to inject nucleic acid, thereby making the procedure easier to use and eliminating the need for specialized operating skills.

Examples of pathogens associated with food poisoning that were tested with the “DNA chip card”

Enterohemorrhagic Escherichia coli
Vibrio parahaemolyticus
Staphylococcus aureus
Enterotoxigenic Escherichia coli
Enteroaggregative Escherichia coli
Enteropathogenic Escherichia coli
Clostridium perfringens
Bacillus cereus
Vibrio cholerae

I think 14 is the highest number of tests I’ve seen for one of these chips. This chip is quite an achievement.

One final bit from the news release about the DNA chip provides a brief description of the gateway and something they call King SkyFront,


The Kawasaki INnovation Gateway (KING) SKYFRONT is the flagship science and technology innovation hub of Kawasaki City. KING SKYFRONT is a 40 hectare area located in the Tonomachi area of the Keihin Industrial Region, which spans Tokyo and Kanagawa Prefecture, directly across the Tama River from Tokyo International Airport (also often referred to as Haneda Airport).

KING SKYFRONT was launched in 2013 as a base for scholars, industrialists and government administrators to work together to devise real life solutions to global issues in the life sciences and environment.

I find this emphasis on the city interesting. It seems that cities are becoming increasingly important and active where science research and development are concerned. Europe seems to have adopted a biennial event wherein a city is declared a European City of Science in conjunction with the EuroScience Open Forum (ESOF) conferences. The first such city was Dublin in 2012 (I believe the Irish came up with the concept themselves); the designation was later adopted by Copenhagen for 2014. The latest city to embrace the banner will be Manchester in 2016.

Quantum teleportation from a Japan-Germany collaboration

An Aug. 15, 2013 Johannes Gutenberg University Mainz press release (also on EurekAlert) has somewhat gobsmacked me with its talk of teleportation,

By means of the quantum-mechanical entanglement of spatially separated light fields, researchers in Tokyo and Mainz have managed to teleport photonic qubits with extreme reliability. This means that a decisive breakthrough has been achieved some 15 years after the first experiments in the field of optical teleportation. The success of the experiment conducted in Tokyo is attributable to the use of a hybrid technique in which two conceptually different and previously incompatible approaches were combined. “Discrete digital optical quantum information can now be transmitted continuously – at the touch of a button, if you will,” explained Professor Peter van Loock of Johannes Gutenberg University Mainz (JGU). As a theoretical physicist, van Loock advised the experimental physicists in the research team headed by Professor Akira Furusawa of the University of Tokyo on how they could most efficiently perform the teleportation experiment to ultimately verify the success of quantum teleportation.

The press release goes on to describe quantum teleportation,

Quantum teleportation involves the transfer of arbitrary quantum states from a sender, dubbed Alice, to a spatially distant receiver, named Bob. This requires that Alice and Bob initially share an entangled quantum state across the space in question, e.g., in the form of entangled photons. Quantum teleportation is of fundamental importance to the processing of quantum information (quantum computing) and quantum communication. Photons are especially valued as ideal information carriers for quantum communication since they can be used to transmit signals at the speed of light. A photon can represent a quantum bit or qubit analogous to a binary digit (bit) in standard classical information processing. Such photons are known as ‘flying quantum bits’.
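For readers who want to see the Alice-and-Bob choreography in action, here is a small simulation of my own of the textbook single-qubit teleportation scheme (the standard Bell-pair protocol with a Bell-basis measurement and classical correction, not the hybrid continuous-variable technique used in the Tokyo experiment; the variable names are mine),

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates and the two-qubit CNOT (control on the first qubit)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def teleport(psi):
    """Teleport a single-qubit state psi (length-2 normalized vector)
    from Alice to Bob and return Bob's corrected state."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared pair
    state = np.kron(psi, bell)             # qubit order: psi, Alice, Bob
    state = np.kron(CNOT, I) @ state       # Alice: CNOT, then Hadamard,
    state = np.kron(np.kron(H, I), I) @ state  # i.e. a Bell-basis rotation
    amps = state.reshape(2, 2, 2)          # indices: [m0, m1, Bob's qubit]
    probs = (np.abs(amps) ** 2).sum(axis=2).ravel()  # each outcome is 1/4
    outcome = rng.choice(4, p=probs)       # Alice measures her two qubits
    m0, m1 = outcome >> 1, outcome & 1     # the two classical bits sent
    bob = amps[m0, m1] / np.sqrt(probs[outcome])
    if m1:                                 # Bob's conditional corrections
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

psi = np.array([0.6, 0.8j])                # an arbitrary test state
out = teleport(psi)
fidelity = abs(np.vdot(psi, out)) ** 2     # 1.0 up to numerical error
```

Whatever pair of bits Alice happens to measure, Bob’s corrected qubit matches the input state exactly, which is the “deterministic, in each run” behavior the press release describes.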

Before explaining the new technique, there’s an overview of previous efforts,

The first attempts to teleport single photons or light particles were made by the Austrian physicist Anton Zeilinger. Various other related experiments have been performed in the meantime. However, teleportation of photonic quantum bits using conventional methods proved to have its limitations because of experimental deficiencies and difficulties with fundamental principles.

What makes the experiment in Tokyo so different is the use of a hybrid technique. With its help, a completely deterministic and highly reliable quantum teleportation of photonic qubits has been achieved. The accuracy of the transfer was 79 to 82 percent for four different qubits. In addition, the qubits were teleported much more efficiently than in previous experiments, even at a low degree of entanglement.

The concept of entanglement was first formulated by Erwin Schrödinger and involves a situation in which two quantum systems, such as two light particles for example, are in a joint state, so that their behavior is mutually dependent to a greater extent than is normally (classically) possible. In the Tokyo experiment, continuous entanglement was achieved by means of entangling many photons with many other photons. This meant that the complete amplitudes and phases of two light fields were quantum correlated. Previous experiments only had a single photon entangled with another single photon – a less efficient solution.

“The entanglement of photons functioned very well in the Tokyo experiment – practically at the press of a button, as soon as the laser was switched on,” said van Loock, Professor for Theory of Quantum Optics and Quantum Information at Mainz University. This continuous entanglement was accomplished with the aid of so-called ‘squeezed light’, which takes the form of an ellipse in the phase space of the light field. Once entanglement has been achieved, a third light field can be attached to the transmitter. From there, in principle, any state and any number of states can be transmitted to the receiver.

“In our experiment, there were precisely four sufficiently representative test states that were transferred from Alice to Bob using entanglement. Thanks to continuous entanglement, it was possible to transmit the photonic qubits in a deterministic fashion to Bob, in other words, in each run,” added van Loock.

Earlier attempts to achieve optical teleportation were performed differently and, before now, the concepts used have proved to be incompatible. Although in theory it had already been assumed that the two different strategies, from the discrete and the continuous world, needed to be combined, it represents a technological breakthrough that this has actually now been experimentally demonstrated with the help of the hybrid technique. “The two separate worlds, the discrete and the continuous, are starting to converge,” concluded van Loock.

The researchers have provided an image illustrating quantum teleportation,

Deterministic quantum teleportation of a photonic quantum bit. Each qubit that flies from the left into the teleporter leaves the teleporter on the right with a loss of quality of only around 20 percent, a value not achievable without entanglement. Courtesy University of Tokyo

Here’s a citation for and a link to the published paper,

Deterministic quantum teleportation of photonic quantum bits by a hybrid technique by Shuntaro Takeda, Takahiro Mizuta, Maria Fuwa, Peter van Loock & Akira Furusawa. Nature 500, 315–318 (15 August 2013) doi:10.1038/nature12366 Published online 14 August 2013

This article is behind a paywall, although there is a preview capability (ReadCube Access) available.