Tag Archives: University of Tokyo

A city of science in Japan: Kawasaki (Kanagawa)

Happily, I’m getting more information from Japan, mostly about nanotechnology. Given Japan’s prominence in this field of endeavour, I’ve long felt FrogHeart has not adequately represented Japanese contributions. Now that I’m receiving English-language translations, I hope to better address the situation.

This morning (March 26, 2015), there were two news releases from Kawasaki INnovation Gateway at SKYFRONT (KING SKYFRONT), Coastal Area International Strategy Office, Kawasaki City, Japan in my mailbox. Before getting on to the news releases, here’s a little about the city of Kawasaki and its innovation gateway. From the Kawasaki, Kanagawa entry in Wikipedia (Note: Links have been removed),

Kawasaki (川崎市 Kawasaki-shi) is a city in Kanagawa Prefecture, Japan, located between Tokyo and Yokohama. It is the 9th most populated city in Japan and one of the main cities forming the Greater Tokyo Area and Keihin Industrial Area.

Kawasaki occupies a belt of land stretching about 30 kilometres (19 mi) along the south bank of the Tama River, which divides it from Tokyo. The eastern end of the belt, centered on JR Kawasaki Station, is flat and largely consists of industrial zones and densely built working-class housing; the western end is mountainous and more suburban. The coastline of Tokyo Bay is occupied by vast heavy industrial complexes built on reclaimed land.

There is a 2014 video about Kawasaki’s innovation gateway, which, despite its running time of 14 mins. 39 secs., I am embedding here. (Caution: they highlight their animal testing facility at some length.)

Now on to the two news releases. The first concerns research on gold nanoparticles that was published in 2014. From a March 26, 2015 Kawasaki INnovation Gateway news release,

Gold nanoparticles size up to cancer treatment

Incorporating gold nanoparticles helps optimise treatment carrier size and stability to improve delivery of cancer treatment to cells.

Treatments that attack cancer cells through the targeted silencing of cancer genes could be developed using small interfering RNA molecules (siRNA). However, delivering the siRNA into cells intact is a challenge, as it is readily degraded by enzymes in the blood and small enough to be eliminated from the bloodstream by kidney filtration. Now Kazunori Kataoka at the University of Tokyo and colleagues at the Tokyo Institute of Technology have designed a protective delivery vehicle with optimum stability and size for delivering siRNA to cells.

The researchers formed a polymer complex with a single siRNA molecule. The siRNA-loaded complex was then bonded to a 20 nm gold nanoparticle, which, thanks to advances in synthesis techniques, can be produced with a reliably narrow size distribution. The resulting nanoarchitecture had the optimum overall size – small enough to infiltrate cells yet large enough to avoid rapid clearance and accumulate in tumours.

In an assay containing heparin – a biological anticoagulant with a high negative charge density – the complex was found to release its siRNA through electrostatic interactions. However, when the gold nanoparticle was incorporated, the complex remained stable. Instead, release of the siRNA from the gold-nanoparticle complex could be triggered once inside the cell by glutathione, which is present at high concentrations in intracellular fluid. The glutathione bonded with the gold nanoparticle and the complex, detaching them from each other and freeing the siRNA for release.

The researchers further tested their carrier in a subcutaneous tumour model. The authors concluded that the complex bonded to the gold nanoparticle “enabled the efficient tumor accumulation of siRNA and significant in vivo gene silencing effect in the tumor, demonstrating the potential for siRNA-based cancer therapies.”

The news release provides links to the March 2015 newsletter which highlights this research and to the specific article and video,

March 2015 Issue of Kawasaki SkyFront iNewsletter: http://inewsletter-king-skyfront.jp/en/

Contents

Feature video on Professor Kataoka’s research: http://inewsletter-king-skyfront.jp/en/video_feature/vol_3/feature01/

Research highlights: http://inewsletter-king-skyfront.jp/en/research_highlights/vol_3/research01/

Here’s a link to and a citation for the paper,

Precise Engineering of siRNA Delivery Vehicles to Tumors Using Polyion Complexes and Gold Nanoparticles by Hyun Jin Kim, Hiroyasu Takemoto, Yu Yi, Meng Zheng, Yoshinori Maeda, Hiroyuki Chaya, Kotaro Hayashi, Peng Mi, Frederico Pittella, R. James Christie, Kazuko Toh, Yu Matsumoto, Nobuhiro Nishiyama, Kanjiro Miyata, and Kazunori Kataoka. ACS Nano, 2014, 8 (9), pp 8979–8991. DOI: 10.1021/nn502125h. Publication Date (Web): August 18, 2014.
Copyright © 2014 American Chemical Society

This article is behind a paywall.

The second March 26, 2015 Kawasaki INnovation Gateway news release concerns a DNA chip and food-borne pathogens,

Rapid and efficient DNA chip technology for testing 14 major types of food-borne pathogens

Conventional methods for testing food-borne pathogens are based on the cultivation of pathogens, a process that is complicated and time-consuming. So there is demand for alternative testing methods that are simpler, quicker, and applicable to a wide range of potential applications.

Now Toshiba Ltd and the Kawasaki City Institute for Public Health have collaborated in the development of a rapid and efficient automatic abbreviated DNA detection technology that can test for 14 major types of food-borne pathogens. The so-called ‘DNA chip card’ employs electrochemical DNA chips and avoids the complicated procedures associated with conventional genetic testing methods. The ‘DNA chip card’ is expected to find applications in hygiene management in food manufacture, pharmaceuticals, and cosmetics.

Details

The automatic abbreviated DNA detection technology, the ‘DNA chip card’, was developed by Toshiba Ltd and, in collaboration with the Kawasaki City Institute for Public Health, used to simultaneously detect 14 different types of food-borne pathogens in less than 90 minutes. The detection sensitivity depends on the target pathogen and ranges from 10¹ to 10⁵ cfu/mL.

Notably, such tests would usually take 4–5 days using conventional methods based on pathogen cultivation. Furthermore, in contrast to conventional DNA protocols that require high levels of skill and expertise, the ‘DNA chip card’ only requires the operator to inject nucleic acid, making the procedure easier to use and eliminating the need for specialized operating skills.

Examples of pathogens associated with food poisoning that were tested with the “DNA chip card”

Enterohemorrhagic Escherichia coli

Salmonella

Campylobacter

Vibrio parahaemolyticus

Shigella

Staphylococcus aureus

Enterotoxigenic Escherichia coli

Enteroaggregative Escherichia coli

Enteropathogenic Escherichia coli

Clostridium perfringens

Bacillus cereus

Yersinia

Listeria

Vibrio cholerae

I think 14 is the highest number of tests I’ve seen for one of these chips. This chip is quite an achievement.

One final bit from the news release about the DNA chip provides a brief description of the gateway and something they call King SkyFront,

About KING SKYFRONT

The Kawasaki INnovation Gateway (KING) SKYFRONT is the flagship science and technology innovation hub of Kawasaki City. KING SKYFRONT is a 40-hectare site in the Tonomachi district of the Keihin Industrial Region, which spans Tokyo and Kanagawa Prefecture, located close to Tokyo International Airport (also often referred to as Haneda Airport).

KING SKYFRONT was launched in 2013 as a base for scholars, industrialists and government administrators to work together to devise real life solutions to global issues in the life sciences and environment.

I find this emphasis on the city interesting. It seems that cities are becoming increasingly important and active where science research and development are concerned. Europe seems to have adopted a biennial event in which a city is declared a European City of Science in conjunction with the EuroScience Open Forum (ESOF) conferences. The first such city was Dublin in 2012 (I believe the Irish came up with the concept themselves); Copenhagen followed in 2014, and the latest city to embrace the banner will be Manchester in 2016.

Quantum teleportation from a Japan-Germany collaboration

An Aug. 15, 2013 Johannes Gutenberg University Mainz press release (also on EurekAlert) has somewhat gobsmacked me with its talk of teleportation,

By means of the quantum-mechanical entanglement of spatially separated light fields, researchers in Tokyo and Mainz have managed to teleport photonic qubits with extreme reliability. This means that a decisive breakthrough has been achieved some 15 years after the first experiments in the field of optical teleportation. The success of the experiment conducted in Tokyo is attributable to the use of a hybrid technique in which two conceptually different and previously incompatible approaches were combined. “Discrete digital optical quantum information can now be transmitted continuously – at the touch of a button, if you will,” explained Professor Peter van Loock of Johannes Gutenberg University Mainz (JGU). As a theoretical physicist, van Loock advised the experimental physicists in the research team headed by Professor Akira Furusawa of the University of Tokyo on how they could most efficiently perform the teleportation experiment to ultimately verify the success of quantum teleportation.

The press release goes on to describe quantum teleportation,

Quantum teleportation involves the transfer of arbitrary quantum states from a sender, dubbed Alice, to a spatially distant receiver, named Bob. This requires that Alice and Bob initially share an entangled quantum state across the space in question, e.g., in the form of entangled photons. Quantum teleportation is of fundamental importance to the processing of quantum information (quantum computing) and quantum communication. Photons are especially valued as ideal information carriers for quantum communication since they can be used to transmit signals at the speed of light. A photon can represent a quantum bit or qubit analogous to a binary digit (bit) in standard classical information processing. Such photons are known as ‘flying quantum bits’.

Before explaining the new technique, there’s an overview of previous efforts,

The first attempts to teleport single photons or light particles were made by the Austrian physicist Anton Zeilinger. Various other related experiments have been performed in the meantime. However, teleportation of photonic quantum bits using conventional methods proved to have its limitations because of experimental deficiencies and difficulties with fundamental principles.

What makes the experiment in Tokyo so different is the use of a hybrid technique. With its help, a completely deterministic and highly reliable quantum teleportation of photonic qubits has been achieved. The accuracy of the transfer was 79 to 82 percent for four different qubits. In addition, the qubits were teleported much more efficiently than in previous experiments, even at a low degree of entanglement.

The concept of entanglement was first formulated by Erwin Schrödinger and involves a situation in which two quantum systems, such as two light particles for example, are in a joint state, so that their behavior is mutually dependent to a greater extent than is normally (classically) possible. In the Tokyo experiment, continuous entanglement was achieved by means of entangling many photons with many other photons. This meant that the complete amplitudes and phases of two light fields were quantum correlated. Previous experiments only had a single photon entangled with another single photon – a less efficient solution.

“The entanglement of photons functioned very well in the Tokyo experiment – practically at the press of a button, as soon as the laser was switched on,” said van Loock, Professor for Theory of Quantum Optics and Quantum Information at Mainz University. This continuous entanglement was accomplished with the aid of so-called ‘squeezed light’, which takes the form of an ellipse in the phase space of the light field.

Once entanglement has been achieved, a third light field can be attached to the transmitter. From there, in principle, any state and any number of states can be transmitted to the receiver. “In our experiment, there were precisely four sufficiently representative test states that were transferred from Alice to Bob using entanglement. Thanks to continuous entanglement, it was possible to transmit the photonic qubits in a deterministic fashion to Bob, in other words, in each run,” added van Loock.

Earlier attempts to achieve optical teleportation were performed differently and, before now, the concepts used have proved to be incompatible. Although in theory it had already been assumed that the two different strategies, from the discrete and the continuous world, needed to be combined, it represents a technological breakthrough that this has actually now been experimentally demonstrated with the help of the hybrid technique. “The two separate worlds, the discrete and the continuous, are starting to converge,” concluded van Loock.
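The Tokyo experiment used a continuous-variable hybrid technique, but the underlying idea is easiest to see in the textbook discrete-variable protocol. Here is a small numpy sketch (my own illustration, not the researchers’ method) of teleporting one qubit using a shared Bell pair, Alice’s Bell-basis measurement, and Bob’s classically signalled correction:

```python
import numpy as np

# Toy discrete-variable teleportation (an illustration, not the paper's
# continuous-variable hybrid scheme). Alice teleports an unknown qubit to
# Bob using a shared Bell pair plus a two-bit classical message.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell basis for Alice's joint measurement, with Bob's matching correction
BELL = [
    (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),  # Phi+
    (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),  # Phi-
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),  # Psi+
    (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),  # Psi-
]
FIX = [I2, Z, X, Z @ X]

def teleport(psi, rng=None):
    """Teleport single-qubit state psi from Alice to Bob; return Bob's qubit."""
    rng = rng or np.random.default_rng(0)
    state = np.kron(psi, BELL[0])      # qubits: input, Alice's half, Bob's half
    outcomes = []
    for b in BELL:                     # Alice's Bell-basis measurement
        bob = np.kron(b.conj().reshape(1, 4), I2) @ state  # <b|_{12} (x) I_3
        outcomes.append((float(np.vdot(bob, bob).real), bob))
    probs = np.array([p for p, _ in outcomes])
    k = rng.choice(4, p=probs / probs.sum())
    bob = FIX[k] @ outcomes[k][1]      # Bob applies the signalled fix-up
    return bob / np.linalg.norm(bob)

psi = np.array([0.6, 0.8j])            # an arbitrary input qubit
out = teleport(psi)
print(round(abs(np.vdot(psi, out)) ** 2, 6))  # fidelity -> 1.0 (noiseless model)
```

In this noiseless toy model the output fidelity is exactly 1; the 79 to 82 percent figures quoted above reflect real experimental imperfections in the continuous-variable setup.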

The researchers have provided an image illustrating quantum teleportation,

Deterministic quantum teleportation of a photonic quantum bit. Each qubit that flies from the left into the teleporter leaves the teleporter on the right with a loss of quality of only around 20 percent, a value not achievable without entanglement. Courtesy University of Tokyo


Here’s a citation for and a link to the published paper,

Deterministic quantum teleportation of photonic quantum bits by a hybrid technique by Shuntaro Takeda, Takahiro Mizuta, Maria Fuwa, Peter van Loock & Akira Furusawa. Nature 500, 315–318 (15 August 2013) doi:10.1038/nature12366 Published online 14 August 2013

This article is behind a paywall although there is a preview capability (ReadCube Access) available.

Special coating eliminates need to de-ice airplanes

There was a big airplane accident years ago in which the crew failed to adequately de-ice the wings just before takeoff. The plane took off from a Washington, DC airport and crashed minutes later, killing the crew and almost all of the passengers.

I read the story in a book about sociolinguistics and work. When the ‘black box’ (the flight recorder carried in all airplanes) was recovered, sociolinguists were included in the team tasked with trying to establish the cause(s). From the sociolinguists’ perspective, it came down to this: the chief pilot hadn’t flown from Washington, DC very often and was unaware that icing could be as prevalent there as it is at more northern airports. He did de-ice the wings, but the plane did not take off in its assigned time slot (busy airport). After several minutes and just prior to takeoff, the chief pilot’s second-in-command, who was more familiar with Washington’s weather conditions, gently suggested de-icing the wings a second time and was ignored. (Some of the dialogue was reproduced in the text I was reading.)

The story made quite an impact on me since I’m very familiar with the phenomenon of comments in the workplace being ignored (confession: I’ve been on both sides of the equation), although not with such devastating consequences. Predictably, the sociolinguists suggested changing the crew’s communication habits (always a good idea), but it never occurred to them (or to me at the time of reading the text) that technology might help provide an answer.

A Japanese research team (Riho Kamada, Chuo University;  Katsuaki Morita, The University of Tokyo; Koji Okamoto, The University of Tokyo; Akihito Aoki, Kanagawa Institute of Technology; Shigeo Kimura, Kanagawa Institute of Technology; Hirotaka Sakaue, Japan Aerospace Exploration Agency [JAXA]) presented an anti-icing (or de-icing) solution for airplanes at the 65th Annual Meeting of the APS* Division of Fluid Dynamics, November 18–20, 2012 in San Diego, California, from the Nov. 16, 2012 news release on EurekAlert,

To help planes fly safely through cold, wet, and icy conditions, a team of Japanese scientists has developed a new super water-repellent surface that can prevent ice from forming in these harsh atmospheric conditions. Unlike current inflight anti-icing techniques, the researchers envision applying this new anti-icing method to an entire aircraft like a coat of paint.

As airplanes fly through clouds of super-cooled water droplets, areas around the nose, the leading edges of the wings, and the engine cones experience low airflow, says Hirotaka Sakaue, a researcher in the fluid dynamics group at the Japan Aerospace Exploration Agency (JAXA). This enables water droplets to contact the aircraft and form an icy layer. If ice builds up on the wings it can change the way air flows over them, hindering control and potentially making the airplane stall. Other members of the research team are with the University of Tokyo, the Kanagawa Institute of Technology, and Chuo University.

Current anti-icing techniques include diverting hot air from the engines to the wings, preventing ice from forming in the first place, and inflatable membranes known as pneumatic boots, which crack ice off the leading edge of an aircraft’s wings. The super-hydrophobic, or water repelling, coating being developed by Sakaue, Katsuaki Morita – a graduate student at the University of Tokyo – and their colleagues works differently, by preventing the water from sticking to the airplane’s surface in the first place.

The researchers developed a coating containing microscopic particles of a Teflon-based material called polytetrafluoroethylene (PTFE), which reduces the energy needed to detach a drop of water from a surface. “If this energy is small, the droplet is easy to remove,” says Sakaue. “In other words, it’s repelled,” he adds.

The PTFE microscale particles created a rough surface, and the rougher it is on a microscopic scale, the less energy it takes to detach water from that surface. The researchers varied the size of the PTFE particles in their coatings, from 5 to 30 micrometers, in order to find the most water-repellent size. By measuring the contact angle – the angle between the coating and the drop of water – they could determine how well a surface repelled water.
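For interpreting those contact-angle measurements, here is a minimal sketch that classifies a coating by its static contact angle. The 90° and 150° cut-offs are the conventional textbook thresholds for hydrophobic and superhydrophobic behaviour, and are my addition, not values from this study:

```python
def classify_wettability(contact_angle_deg):
    """Rough wettability class from a static contact angle in degrees.

    The 90 and 150 degree thresholds are conventional definitions,
    not values taken from the JAXA / University of Tokyo work.
    """
    if contact_angle_deg < 90:
        return "hydrophilic"       # water spreads and clings
    if contact_angle_deg < 150:
        return "hydrophobic"       # water beads up somewhat
    return "superhydrophobic"      # droplets detach with very little energy

for angle in (45, 120, 160):
    print(angle, classify_wettability(angle))
# 45 hydrophilic, 120 hydrophobic, 160 superhydrophobic
```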

While this work isn’t occurring at the nanoscale, I thought I’d make an exception due to my interest in the subject.

*APS is the American Physical Society

Sometimes when we touch: Touché, a sensing project from Disney Research and Carnegie Mellon

Researchers at Carnegie Mellon University and Disney Research, Pittsburgh (Pennsylvania, US) have taken capacitive sensing, the technology used for touchscreens in smartphones, and added new capabilities. From the May 4, 2012 news item on Nanowerk,

A doorknob that knows whether to lock or unlock based on how it is grasped, a smartphone that silences itself if the user holds a finger to her lips and a chair that adjusts room lighting based on recognizing if a user is reclining or leaning forward are among the many possible applications of Touché, a new sensing technique developed by a team at Disney Research, Pittsburgh, and Carnegie Mellon University.

Touché is a form of capacitive touch sensing, the same principle underlying the types of touchscreens used in most smartphones. But instead of sensing electrical signals at a single frequency, like the typical touchscreen, Touché monitors capacitive signals across a broad range of frequencies.

This Swept Frequency Capacitive Sensing (SFCS) makes it possible to not only detect a “touch event,” but to recognize complex configurations of the hand or body that is doing the touching. An object thus could sense how it is being touched, or might sense the body configuration of the person doing the touching.

Disney Research, Pittsburgh made this video describing the technology and speculating on some of the possible applications (this is a research-oriented video, not your standard Disney fare),

Here’s a bit more about the technology (from the May 4, 2012 news item),

Both Touché and smartphone touchscreens are based on the phenomenon known as capacitive coupling. In a capacitive touchscreen, the surface is coated with a transparent conductor that carries an electrical signal. That signal is altered when a person’s finger touches it, providing an alternative path for the electrical charge.

By monitoring the change in the signal, the device can determine if a touch occurs. By monitoring a range of signal frequencies, however, Touché can derive much more information. Different body tissues have different capacitive properties, so monitoring a range of frequencies can detect a number of different paths that the electrical charge takes through the body.

Making sense of all of that SFCS information, however, requires analyzing hundreds of data points. As microprocessors have become steadily faster and less expensive, it now is feasible to use SFCS in touch interfaces, the researchers said.

“Devices keep getting smaller and increasingly are embedded throughout the environment, which has made it necessary for us to find ways to control or interact with them, and that is where Touché could really shine,” Harrison [Chris Harrison, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute] said. Sato [Munehiko Sato, a Disney intern and a Ph.D. student in engineering at the University of Tokyo] said Touché could make computer interfaces as invisible to users as the embedded computers themselves. “This might enable us to one day do away with keyboards, mice and perhaps even conventional touchscreens for many applications,” he said.
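Conceptually, the swept-frequency approach described above reduces each touch to a vector of capacitance readings, one per frequency, which is then matched against stored gesture templates. The sketch below illustrates that idea with a simple nearest-neighbour match; the frequency range, response curves, and labels are invented for illustration, not Touché data:

```python
import numpy as np

def classify_touch(profile, templates):
    """Return the label of the stored template closest (Euclidean) to profile."""
    best_label, best_dist = None, np.inf
    for label, ref in templates.items():
        dist = np.linalg.norm(profile - ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

freqs = np.linspace(1e3, 3.5e6, 200)        # hypothetical swept frequencies (Hz)
templates = {                               # invented capacitive response curves
    "one_finger": np.exp(-freqs / 1e6),
    "full_grasp": 2.0 * np.exp(-freqs / 4e5),
}
rng = np.random.default_rng(1)
reading = templates["full_grasp"] + rng.normal(0, 0.01, freqs.size)  # noisy sweep
print(classify_touch(reading, templates))   # -> full_grasp
```

A real system would use many more template classes and a trained classifier rather than raw nearest-neighbour matching, but the shape of the problem – profile in, gesture label out – is the same.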

We’re seeing more of these automatic responses to a gesture or movement. For example, common spelling errors are corrected as you key in (type) text in word processing packages and in search engines. In fact, there are times when an application insists on its own correction and I have to push back (and I don’t always manage to override the system) when I have something nonstandard. As I watch these videos and read about these new technical possibilities, I keep asking myself: where is the override?