Researchers in Singapore have proposed a new technology for cleaning up oil spills, according to a June 17, 2016 news item on Nanowerk,
Large-scale oil spills, where hundreds of tons of petroleum products are accidentally released into the oceans, not only have devastating effects on the environment, but have significant socio-economic impact as well.
Current techniques of cleaning up oil spills are not very efficient and may even cause further pollution or damage to the environment. These methods, which include the use of toxic detergent-like compounds called dispersants or burning of the oil slick, result in incomplete removal of the oil. The oil molecules remain in the water over long periods and may even be spread over a larger area as they are carried by wind and waves. Further, burning can only be applied to fresh oil slicks at least 3 millimeters thick, and this process also causes secondary environmental pollution.
In a bid to improve the technology utilized by cleanup crews to manage and contain such large spills, researchers from the Institute of Bioengineering and Nanotechnology (IBN) of A*STAR [located in Singapore] have invented smart oil-scavenging materials, or supergelators, that could help clean up oil spills efficiently and rapidly to prevent secondary pollution.
These supergelators are derived from highly soluble small organic molecules, which instantly self-assemble into nanofibers to form a 3D net that traps the oil molecules so that they can be removed easily from the surface of the water.
“Marine oil spills have a disastrous impact on the environment and marine life, and result in an enormous economic burden on society. Our rapid-acting supergelators offer an effective cleanup solution that can help to contain the severe environmental damage and impact of such incidents in the future,” said IBN Executive Director Professor Jackie Y. Ying.
Motivated by the urgent need for a more effective oil spill control solution, the IBN researchers developed new compounds that dissolve easily in environmentally friendly solvents and gel rapidly upon contact with oil. The supergelator molecules arrange themselves into a 3D network, entangling the oil molecules into clumps that can then be easily skimmed off the water’s surface.
“The most interesting and useful characteristic of our molecules is their ability to stack themselves on top of each other. These stacked columns allow our researchers to create and test different molecular constructions, while finding the best structure that will yield the desired properties,” said IBN Team Leader and Principal Research Scientist Dr Huaqiang Zeng. (Animation: Click to see how the supergelators stack themselves into columns.)
IBN’s supergelators have been tested on various types of weathered and unweathered crude oil in seawater, and have been found to be effective in solidifying all of them. The supergelators take only minutes to solidify the oil at room temperature for easy removal from water. In addition, tests carried out by the research team showed that the supergelator was not toxic to human cells, zebrafish embryos or larvae. The researchers believe that these qualities would make the supergelators suitable for use in large oil spill areas.
The Institute is looking for industrial partners to further develop its technology for commercial use. [emphasis mine]
The well documented BP Gulf of Mexico oil well accident in 2010 was a catastrophe on an unprecedented scale, with damages amounting to hundreds of billions of dollars. Its wide-ranging effects on the marine ecosystem, as well as the fishing and tourism industries, can still be felt six years on.
I have featured other nanotechnology-enabled oil spill cleanup solutions here. One of the more recent pieces is my Dec. 7, 2015 post about boron nitride sponges. The search terms: ‘oil spill’ and ‘oil spill cleanup’ will help you unearth more.
There have been some promising possibilities and I hope one day these cleanup technologies will be brought to market.
This work on quantum networks comes from a joint Singapore/UK research project, from a June 2, 2016 news item on ScienceDaily,
You can’t sign up for the quantum internet just yet, but researchers have reported a major experimental milestone towards building a global quantum network — and it’s happening in space.
With a network that carries information in the quantum properties of single particles, you can create secure keys for secret messaging and potentially connect powerful quantum computers in the future. But scientists think you will need equipment in space to get global reach.
Researchers from the National University of Singapore (NUS) and the University of Strathclyde, UK, have become the first to test in orbit technology for satellite-based quantum network nodes.
They have put a compact device carrying components used in quantum communication and computing into orbit. And it works: the team report their first data in a paper published 31 May 2016 in the journal Physical Review Applied.
The team’s device, dubbed SPEQS, creates and measures pairs of light particles, called photons. Results from space show that SPEQS is making pairs of photons with correlated properties – an indicator of performance.
Team-leader Alexander Ling, an Assistant Professor at the Centre for Quantum Technologies (CQT) at NUS said, “This is the first time anyone has tested this kind of quantum technology in space.”
The team had to be inventive to redesign a delicate, table-top quantum setup to be small and robust enough to fly inside a nanosatellite only the size of a shoebox. The whole satellite weighs just 1.65 kilogrammes.
Making correlated photons is a precursor to creating entangled photons. Described by Einstein as “spooky action at a distance”, entanglement is a connection between quantum particles that lends security to communication and power to computing.
Professor Artur Ekert, Director of CQT, invented the idea of using entangled particles for cryptography. He said, “Alex and his team are taking entanglement, literally, to a new level. Their experiments will pave the road to secure quantum communication and distributed quantum computation on a global scale. I am happy to see that Singapore is one of the world leaders in this area.”
Local quantum networks already exist [emphasis mine]. The problem Ling’s team aims to solve is a distance limit. Losses limit quantum signals sent through air at ground level or optical fibre to a few hundred kilometers – but we might ultimately use entangled photons beamed from satellites to connect points on opposite sides of the planet. Although photons from satellites still have to travel through the atmosphere, going top-to-bottom is roughly equivalent to going only 10 kilometres at ground level.
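To see why ground links cap out at a few hundred kilometres, here is a back-of-the-envelope calculation using the standard telecom-fibre attenuation of roughly 0.2 dB per kilometre (a textbook figure, not one taken from the paper):

```python
# Photon loss in optical fibre is exponential in distance: at roughly
# 0.2 dB/km, survival drops by a factor of 100 every 100 km. The loss
# figure is the standard telecom value, used here for illustration.

def photon_survival(distance_km, loss_db_per_km=0.2):
    """Fraction of photons that survive a fibre link of the given length."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (10, 100, 500, 1000):
    # By 1,000 km, fewer than one photon in 10^20 survives, which is
    # why satellites (with their short effective atmospheric path) help.
    print(d, photon_survival(d))
```

Quantum signals cannot simply be amplified along the way without destroying their quantum properties, which is why this exponential loss is a hard limit rather than an engineering inconvenience.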
The group’s first device is a technology pathfinder. It takes photons from a Blu-ray laser and splits them into two, then measures the pair’s properties, all on board the satellite. To do this it contains a laser diode, crystals, mirrors and photon detectors carefully aligned inside an aluminum block. This sits on top of a printed circuit board, measuring 10 centimetres by 10 centimetres, packed with control electronics.
Through a series of pre-launch tests – and one unfortunate incident – the team became more confident that their design could survive a rocket launch and space conditions. The team had a device in the October 2014 Orbital-3 rocket which exploded on the launch pad. The satellite containing that first device was later found on a beach intact and still in working order.
Even with the success of the more recent mission, a global network is still a few milestones away. The team’s roadmap calls for a series of launches, with the next space-bound SPEQS slated to produce entangled photons. SPEQS stands for Small Photon-Entangling Quantum System.
With later satellites, the researchers will try sending entangled photons to Earth and to other satellites. The team are working with standard “CubeSat” nanosatellites, which can get relatively cheap rides into space as rocket ballast. Ultimately, completing a global network would mean having a fleet of satellites in orbit and an array of ground stations.
In the meantime, quantum satellites could also carry out fundamental experiments – for example, testing entanglement over distances bigger than Earth-bound scientists can manage. “We are reaching the limits of how precisely we can test quantum theory on Earth,” said co-author Dr Daniel Oi at the University of Strathclyde.
There’s some rather intriguing Swiss research into atoms and so-called Bell correlations, according to an April 21, 2016 news item on ScienceDaily,
The microscopic world is governed by the rules of quantum mechanics, where the properties of a particle can be completely undetermined and yet strongly correlated with those of other particles. Physicists from the University of Basel have observed these so-called Bell correlations for the first time between hundreds of atoms. Their findings are published in the scientific journal Science.
Everyday objects possess properties independently of each other and regardless of whether we observe them or not. Einstein famously asked whether the moon still exists if no one is there to look at it; we answer with a resounding yes. This apparent certainty does not exist in the realm of small particles. The location, speed or magnetic moment of an atom can be entirely indeterminate and yet still depend greatly on the measurements of other distant atoms.
With the (false) assumption that atoms possess their properties independently of measurements and independently of each other, a so-called Bell inequality can be derived. If it is violated by the results of an experiment, it follows that the properties of the atoms must be interdependent. This is described as Bell correlations between atoms, which also imply that each atom takes on its properties only at the moment of the measurement. Before the measurement, these properties are not only unknown – they do not even exist.
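For readers curious what such an inequality looks like, here is the textbook two-particle (CHSH) form, shown purely for orientation; the Basel experiment relies on a recently derived many-particle Bell inequality, not this one:

```latex
% CHSH inequality, shown for illustration only. E(a,b) denotes the
% measured correlation between setting a on one particle and b on another.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Any theory in which particles carry pre-existing, independent
% properties obeys |S| \le 2, while quantum mechanics allows |S| up to
% 2\sqrt{2}, so a measured violation rules out such "local" explanations.
```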
A team of researchers led by professors Nicolas Sangouard and Philipp Treutlein from the University of Basel, along with colleagues from Singapore, have now observed these Bell correlations for the first time in a relatively large system, specifically among 480 atoms in a Bose-Einstein condensate. Earlier experiments showed Bell correlations with a maximum of four light particles or 14 atoms. The results mean that these peculiar quantum effects may also play a role in larger systems.
Large number of interacting particles
In order to observe Bell correlations in systems consisting of many particles, the researchers first had to develop a new method that does not require measuring each particle individually – which would require a level of control beyond what is currently possible. The team succeeded in this task with the help of a Bell inequality that was only recently discovered. The Basel researchers tested their method in the lab with small clouds of ultracold atoms cooled with laser light down to a few billionths of a degree above absolute zero. The atoms in the cloud constantly collide, causing their magnetic moments to become slowly entangled. When this entanglement reaches a certain magnitude, Bell correlations can be detected. Author Roman Schmied explains: “One would expect that random collisions simply cause disorder. Instead, the quantum-mechanical properties become entangled so strongly that they violate classical statistics.”
More specifically, each atom is first brought into a quantum superposition of two states. After the atoms have become entangled through collisions, researchers count how many of the atoms are actually in each of the two states. This division varies randomly between trials. If these variations fall below a certain threshold, it appears as if the atoms have ‘agreed’ on their measurement results; this agreement describes precisely the Bell correlations.
New scientific territory
The work presented, which was funded by the National Centre of Competence in Research Quantum Science and Technology (NCCR QSIT), may open up new possibilities in quantum technology; for example, for generating random numbers or for quantum-secure data transmission. New prospects in basic research open up as well: “Bell correlations in many-particle systems are a largely unexplored field with many open questions – we are entering uncharted territory with our experiments,” says Philipp Treutlein.
Here’s a link to and a citation for the paper,
Bell correlations in a Bose-Einstein condensate by Roman Schmied, Jean-Daniel Bancal, Baptiste Allard, Matteo Fadel, Valerio Scarani, Philipp Treutlein, Nicolas Sangouard. Science 22 Apr 2016: Vol. 352, Issue 6284, pp. 441-444 DOI: 10.1126/science.aad8665
A century ago, more than 60,000 tigers roamed the wild. Today, the worldwide estimate has dwindled to around 3,200. Poaching is one of the main drivers of this precipitous drop. Whether killed for skins, medicine or trophy hunting, humans have pushed tigers to near-extinction. The same applies to other large animal species like elephants and rhinoceros that play unique and crucial roles in the ecosystems where they live.
Human patrols serve as the most direct form of protection of endangered animals, especially in large national parks. However, protection agencies have limited resources for patrols.
With support from the National Science Foundation (NSF) and the Army Research Office, researchers are using artificial intelligence (AI) and game theory to combat poaching, illegal logging and other problems worldwide, in collaboration with researchers and conservationists in the U.S., Singapore, the Netherlands and Malaysia.
“In most parks, ranger patrols are poorly planned, reactive rather than pro-active, and habitual,” according to Fei Fang, a Ph.D. candidate in the computer science department at the University of Southern California (USC).
Fang is part of an NSF-funded team at USC led by Milind Tambe, professor of computer science and industrial and systems engineering and director of the Teamcore Research Group on Agents and Multiagent Systems.
Their research builds on the idea of “green security games” — the application of game theory to wildlife protection. Game theory uses mathematical and computer models of conflict and cooperation between rational decision-makers to predict the behavior of adversaries and plan optimal approaches for containment. The Coast Guard and Transportation Security Administration have used similar methods developed by Tambe and others to protect airports and waterways.
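As a toy illustration of the game-theoretic idea (the areas, values and payoffs below are invented, not taken from the papers): a ranger randomizes a single patrol over two areas, and a rational poacher strikes wherever coverage is thinnest. The defender's best randomization can be found numerically:

```python
# Toy "green security game": the defender patrols area 0 with probability
# p and area 1 with probability 1 - p; the poacher attacks whichever area
# minimizes the chance of being caught. The defender picks p to minimize
# the worst-case expected loss (a maximin strategy).

def best_coverage(values, grid=1000):
    """values[i] is the wildlife value at stake in area i.
    Expected loss if the poacher hits area 0: (1 - p) * values[0];
    if the poacher hits area 1: p * values[1] (since its coverage is 1 - p).
    Grid-search p to minimize the maximum of the two."""
    best_p, best_loss = 0.0, float("inf")
    for step in range(grid + 1):
        p = step / grid
        loss = max((1 - p) * values[0], p * values[1])
        if loss < best_loss:
            best_p, best_loss = p, loss
    return best_p, best_loss

# Area 0 holds twice the wildlife value, so it gets patrolled more often:
# the optimal p works out to about 2/3, equalizing the poacher's options.
p, loss = best_coverage([10.0, 5.0])
```

Real deployed systems such as those from Tambe's group solve far larger Stackelberg games with many targets and resources, but the equalizing logic is the same: randomize so that no single target is an obviously safe bet for the adversary.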
“This research is a step in demonstrating that AI can have a really significant positive impact on society and allow us to assist humanity in solving some of the major challenges we face,” Tambe said.
PAWS puts the claws in anti-poaching
The team presented papers describing how they use their methods to improve the success of human patrols around the world at the AAAI Conference on Artificial Intelligence in February.
The researchers first created an AI-driven application called PAWS (Protection Assistant for Wildlife Security) in 2013 and tested the application in Uganda and Malaysia in 2014. Pilot implementations of PAWS revealed some limitations, but also led to significant improvements.
Here’s a video describing the issues and PAWS,
For those who prefer to read the details rather than listen, there’s more from the news release,
PAWS uses data on past patrols and evidence of poaching. As it receives more data, the system “learns” and improves its patrol planning. Already, the system has led to more observations of poacher activities per kilometer.
Its key technical advance lies in its ability to incorporate complex terrain information, including the topography of protected areas. That results in practical patrol routes that minimize elevation changes, saving time and energy. Moreover, the system can take into account the natural transit paths that have the most animal traffic – and thus the most poaching – creating a “street map” for patrols.
“We need to provide actual patrol routes that can be practically followed,” Fang said. “These routes need to go back to a base camp and the patrols can’t be too long. We list all possible patrol routes and then determine which is most effective.”
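The route-selection step Fang describes can be sketched in a few lines. All routes, distances and scores below are invented for illustration; PAWS itself uses far richer terrain and poaching-evidence data:

```python
# Toy route selection: keep only patrol routes that fit the length budget
# (so patrols can return to base), then rank the feasible ones by animal
# traffic covered, penalizing elevation gain that costs time and energy.

def pick_route(routes, max_length_km):
    """Each route is (name, length_km, elevation_gain_m, traffic_score)."""
    feasible = [r for r in routes if r[1] <= max_length_km]
    # Higher traffic coverage is better; more climbing is worse.
    return max(feasible, key=lambda r: r[3] - 0.01 * r[2])

routes = [
    ("ridge", 12.0, 900.0, 8.0),  # good coverage, punishing climb
    ("river", 10.0, 150.0, 7.0),  # nearly as good, far less climbing
    ("loop",  18.0,  50.0, 9.0),  # too long to return to base in time
]
print(pick_route(routes, max_length_km=15.0))  # picks the "river" route
```

The hypothetical 0.01 weight trading traffic against climbing is a stand-in for whatever calibration a real planner would learn from patrol data.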
The application also randomizes patrols to avoid falling into predictable patterns.
“If the poachers observe that patrols go to some areas more often than others, then the poachers place their snares elsewhere,” Fang said.
Since 2015, two non-governmental organizations, Panthera and Rimbat, have used PAWS to protect forests in Malaysia. The research won the Innovative Applications of Artificial Intelligence award for deployed application, as one of the best AI applications with measurable benefits.
The team recently combined PAWS with a new tool called CAPTURE (Comprehensive Anti-Poaching Tool with Temporal and Observation Uncertainty Reasoning) that predicts the probability of attacks even more accurately.
In addition to helping patrols find poachers, the tools may assist them with intercepting trafficked wildlife products and other high-risk cargo, adding another layer to wildlife protection. The researchers are in conversations with wildlife authorities in Uganda to deploy the system later this year. They will present their findings at the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) in May.
“There is an urgent need to protect the natural resources and wildlife on our beautiful planet, and we computer scientists can help in various ways,” Fang said. “Our work on PAWS addresses one facet of the problem, improving the efficiency of patrols to combat poaching.”
There is yet another potential use for PAWS, the prevention of illegal logging,
While Fang and her colleagues work to develop effective anti-poaching patrol planning systems, other members of the USC team are developing complementary methods to prevent illegal logging, a major economic and environmental problem for many developing countries.
The World Wildlife Fund estimates trade in illegally harvested timber to be worth between $30 billion and $100 billion annually. The practice also threatens ancient forests and critical habitats for wildlife.
Researchers at USC, the University of Texas at El Paso and Michigan State University recently partnered with the non-profit organization Alliance Vohoary Gasy to limit the illegal logging of rosewood and ebony trees in Madagascar, which has caused a loss of forest cover on the island nation.
Forest protection agencies also face limited budgets and must cover large areas, making sound investments in security resources critical.
The research team worked to determine the balance of security resources in which Madagascar should invest to maximize protection, and to figure out how to best deploy those resources.
Past work in game theory-based security typically involved specified teams — the security workers assigned to airport checkpoints, for example, or the air marshals deployed on flights. Finding optimal security solutions for those scenarios is difficult; a solution involving an open-ended team had not previously been feasible.
To solve this problem, the researchers developed a new method called SORT (Simultaneous Optimization of Resource Teams) that they have been experimentally validating using real data from Madagascar.
The research team created maps of the national parks, modeled the costs of all possible security resources using local salaries and budgets, and computed the best combination of resources given these conditions.
“We compared the value of using an optimal team determined by our algorithm versus a randomly chosen team and the algorithm did significantly better,” said Sara Mc Carthy, a Ph.D. student in computer science at USC.
The algorithm is simple and fast, and can be generalized to other national parks with different characteristics. The team is working to deploy it in Madagascar in association with the Alliance Vohoary Gasy.
“I am very proud of what my PhD students Fei Fang and Sara Mc Carthy have accomplished in this research on AI for wildlife security and forest protection,” said Tambe, the team lead. “Interdisciplinary collaboration with practitioners in the field was key in this research and allowed us to improve our research in artificial intelligence.”
Moreover, the project shows other computer science researchers the potential impact of applying their research to the world’s problems.
“This work is not only important because of the direct beneficial impact that it has on the environment, protecting wildlife and forests, but because of the way that it can inspire others to dedicate their efforts to making the world a better place,” Mc Carthy said.
The curious can find out more about Panthera here and about Alliance Vohoary Gasy here (be prepared to use your French language skills). Unfortunately, I could not find more information about Rimbat.
This research from Singapore could make neuroprosthetics and exoskeletons a little easier to manage as long as you don’t mind having a neural implant. From a Feb. 11, 2016 news item on ScienceDaily,
A versatile chip that offers multiple applications in various electronic devices has raised hopes that a low-powered, wireless neural implant may soon be a reality, researchers report. Neural implants embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.
Caption: NTU Asst Prof Arindam Basu is holding his low-powered smart chip. Credit: NTU Singapore
Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a small smart chip that can be paired with neural implants for efficient wireless transmission of brain signals.
Neural implants, when embedded in the brain, can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.
However, they need to be connected by wires to an external device outside the body. For a prosthetic patient, the neural implant is connected to a computer that decodes the brain signals so the artificial limb can move.
These external wires are not only cumbersome, but the permanent openings that allow the wires into the brain increase the risk of infection.
The new chip by NTU scientists can allow the transmission of brain data wirelessly and with high accuracy.
Assistant Professor Arindam Basu from NTU’s School of Electrical and Electronic Engineering said the research team have tested the chip on data recorded from animal models, which showed that it could decode the brain’s signal to the hand and fingers with 95 per cent accuracy.
“What we have developed is a very versatile smart chip that can process data, analyse patterns and spot the difference,” explained Prof Basu.
“It is about a hundred times more efficient than current processing chips on the market. It will lead to more compact medical wearable devices, such as portable ECG monitoring devices and neural implants, since we no longer need large batteries to power them.”
Different from other wireless implants
To achieve high accuracy in decoding brain signals, implants require thousands of channels of raw data. To wirelessly transmit this large amount of data, more power is also needed which means either bigger batteries or more frequent recharging.
This is not feasible as there is limited space in the brain for implants while frequent recharging means the implants cannot be used for long-term recording of signals.
Current wireless implant prototypes thus suffer from a lack of accuracy as they lack the bandwidth to send out thousands of channels of raw data.
Instead of enlarging the power source to support the transmission of raw data, Asst Prof Basu tried to reduce the amount of data that needs to be transmitted.
Designed to be extremely power-efficient, NTU’s patented smart chip will analyse and decode the thousands of signals from the neural implants in the brain, before compressing the results and sending them wirelessly to a small external receiver.
This invention and its findings were published last month [December 2015] in the prestigious journal, IEEE Transactions on Biomedical Circuits & Systems, by the Institute of Electrical and Electronics Engineers, the world’s largest professional association for the advancement of technology.
Its underlying science was also featured in three international engineering conferences (two in Atlanta, USA and one in China) over the last three months.
Versatile smart chip with multiple uses
This new smart chip is designed to analyse data patterns and spot any abnormal or unusual patterns.
For example, in a remote video camera, the chip can be programmed to send a video back to the servers only when a specific type of car or something out of the ordinary is detected, such as an intruder.
This would be extremely beneficial for the Internet of Things (IoT), where every electrical and electronic device is connected to the Internet through a smart chip.
With a report by marketing research firm Gartner Inc predicting that 6.4 billion smart devices and appliances will be connected to the Internet in 2016, rising to 20.8 billion devices by 2020, reducing network traffic will be a priority for most companies.
Using NTU’s new chip, the devices can process and analyse the data on site, before sending back important details in a compressed package, instead of sending the whole data stream. This will reduce data usage by over a thousand times.
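The data-reduction idea can be sketched in a few lines: analyse on the device, transmit only the anomalies. The thresholding below is invented for illustration; the chip’s actual pattern-analysis algorithms are not described in the release:

```python
# Sketch of edge processing: instead of streaming every reading to a
# server, the device keeps a baseline and transmits only the samples
# that deviate strongly from it — the "compressed package".

def summarise(readings, baseline, threshold=3.0):
    """Return only the (index, value) pairs that deviate from the
    baseline by more than the threshold."""
    events = []
    for i, x in enumerate(readings):
        if abs(x - baseline) > threshold:
            events.append((i, x))
    return events

stream = [0.1, 0.2, 9.8, 0.0, -4.2, 0.3]
print(summarise(stream, baseline=0.0))  # → [(2, 9.8), (4, -4.2)]
```

Here six readings shrink to two events; on a sensor producing thousands of mostly unremarkable channels, the same filter-then-transmit pattern is what makes the claimed thousand-fold reduction plausible.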
Asst Prof Basu is now in talks with Singapore Technologies Electronics Limited to adapt his smart chip that can significantly reduce power consumption and the amount of data transmitted by battery-operated remote sensors, such as video cameras.
The team is also looking to expand the applications of the chip into commercial products, such as to customise it for smart home sensor networks, in collaboration with a local electronics company.
The chip, measuring 5mm by 5mm, can now be licensed by companies from NTU’s commercialisation arm, NTUitive.
Earlier this month there was a Feb. 9, 2016 announcement about a planned human clinical trial in Australia for a new brain-machine interface (neural implant). Before proceeding with the news, here’s what this implant looks like,
Caption: This tiny device, the size of a small paperclip, is implanted in to a blood vessel next to the brain and can read electrical signals from the motor cortex, the brain’s control centre. These signals can then be transmitted to an exoskeleton or wheelchair to give paraplegic patients greater mobility. Users will need to learn how to communicate with their machinery, but over time, it is thought it will become second nature, like driving or playing the piano. The first human trials are slated for 2017 in Melbourne, Australia. Credit: The University of Melbourne.
Melbourne medical researchers have created a new minimally invasive brain-machine interface, giving people with spinal cord injuries new hope to walk again with the power of thought.
The brain-machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.
The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.
The results published today in Nature Biotechnology show the device is capable of recording high-quality signals emitted from the brain’s motor cortex, without the need for open brain surgery.
Principal author and Neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne, Dr Thomas Oxley, said the stentrode was revolutionary.
“The development of the stentrode has brought together leaders in medical research from The Royal Melbourne Hospital, The University of Melbourne and the Florey Institute of Neuroscience and Mental Health. In total 39 academic scientists from 16 departments were involved in its development,” Dr Oxley said.
“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high-risk open brain surgery.
“Our vision, through this device, is to return function and mobility to patients with complete paralysis by recording brain activity and converting the acquired signals into electrical commands, which in turn would lead to movement of the limbs through a mobility assist device like an exoskeleton. In essence, this is a bionic spinal cord.”
Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, the typical patient being a 19-year-old male, and about 150,000 Australians left severely disabled after stroke.
Co-principal investigator and biomedical engineer at the University of Melbourne, Dr Nicholas Opie, said the concept was similar to an implantable cardiac pacemaker – electrical interaction with tissue using sensors inserted into a vein, but inside the brain.
“Utilising stent technology, our electrode array self-expands to stick to the inside wall of a vein, enabling us to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers,” Dr Opie said.
“In our first-in-human trial, that we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis.”
“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking – stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”
Neurophysiologist at The Florey, Professor Clive May, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use.
“Through our pre-clinical study we were able to successfully record brain activity over many months. The quality of recording improved as the device was incorporated into tissue,” Professor May said.
“Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open brain surgery.
“The brain-computer interface is a revolutionary device that holds the potential to overcome paralysis, by returning mobility and independence to patients affected by various conditions.”
Professor Terry O’Brien, Head of Medicine at the Departments of Medicine and Neurology, The Royal Melbourne Hospital and University of Melbourne, said the development of the stentrode has been the “holy grail” of research in bionics.
“To be able to create a device that can record brainwave activity over long periods of time, without damaging the brain, is an amazing development in modern medicine,” Professor O’Brien said.
“It can also potentially be used in people with a range of diseases aside from spinal cord injury, including epilepsy, Parkinson’s and other neurological disorders.”
The development of the minimally invasive stentrode and the subsequent pre-clinical trials to prove its effectiveness could not have been possible without the support from the major funding partners – US Defense Department DARPA [Defense Advanced Research Projects Agency] and Australia’s National Health and Medical Research Council.
So, DARPA is helping fund this, eh? Interesting but not a surprise given the agency’s previous investments in brain research and neuroprosthetics.
For those who like to get their news via video,
Here’s a link to and a citation for the paper,
Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity by Thomas J Oxley, Nicholas L Opie, Sam E John, Gil S Rind, Stephen M Ronayne, Tracey L Wheeler, Jack W Judy, Alan J McDonald, Anthony Dornom, Timothy J H Lovell, Christopher Steward, David J Garrett, Bradford A Moffat, Elaine H Lui, Nawaf Yassi, Bruce C V Campbell, Yan T Wong, Kate E Fox, Ewan S Nurse, Iwan E Bennett, Sébastien H Bauquier, Kishan A Liyanage, Nicole R van der Nagel, Piero Perucca, Arman Ahnood et al. Nature Biotechnology (2016) doi:10.1038/nbt.3428 Published online 08 February 2016
This paper is behind a paywall.
I wish the researchers in Singapore, Australia, and elsewhere, good luck!
The Mustafa Prize is a top science and technology award granted to the top researchers and scientists of the Organization of Islamic Cooperation (OIC) member states biennially.
The Prize seeks to encourage education and research and is set to play a pioneering role in developing relations among science and technology institutions in the OIC member countries.
It also aims to improve scientific relations among academics and researchers, facilitating the growth and advancement of science in the OIC member states.
The laureates in each category are awarded US$500,000, financed through endowments made to the Prize. The winners also receive a special medal and certificate.
The Mustafa Prize was established in 2013. The Prize’s policy-making council, which supervises the various procedures of the event, is composed of high-profile universities and academic centres of the OIC member states.
The prize is granted to work that has improved human life, made tangible, cutting-edge innovations at the boundaries of science, or presented new scientific methodologies.
Dr. Hossein Zohour, chairman of the science committee of the Mustafa Prize, announced the laureates on Wednesday [Dec. 16, 2015].
According to the Public Relations Department of Mustafa (PBUH) Prize, Professor Jackie Y. Ying from Singapore and Professor Omar Yaghi from Jordan won the top science and technology award of the Islamic world.
Zohour noted that the Mustafa (PBUH) Prize is awarded in four categories: Life Sciences and Medicine, Nanoscience and Nanotechnology, Information and Communication Technologies, and Top Scientific Achievement in general fields. “In the first three categories, the nominees must be citizens of one of the 57 Islamic countries, while in the fourth category the nominee must be Muslim, but being a citizen of an Islamic country is not mandatory,” he added.
Professor Jackie Y. Ying, CEO and faculty member of the Institute of Bioengineering and Nanotechnology of Singapore, and Professor Omar Yaghi, president of the Kavli Nano-energy Organization and faculty member of the University of California, Berkeley, are the laureates in the fields of nano-biotechnology and of nanoscience and nanotechnology, respectively.
Zohour continued, “Professor Ying is awarded in recognition of her efforts in the development of ‘stimulus-responsive systems for the targeted delivery of drugs’ in the field of nano-biotechnology.”
These systems consist of polymeric nanoparticles that auto-regulate the release of therapeutic insulin according to blood glucose levels, without the need for blood sampling. The technology was first developed in her knowledge-based company and is now being commercialized by major pharmaceutical firms in the service of human health.
Professor Omar Yaghi, prominent Jordanian chemist, has also been selected for his extensive research in the field of metal-organic frameworks (MOFs) in the category of nanoscience and nanotechnology.
It’s worth noting that this [sic] MOFs have a wide range of applications in clean energy technologies, carbon dioxide capturing and hydrogen and methane storage systems due to their extremely high surface areas.
The Mustafa (PBUH) Prize Award Ceremony will take place on Friday, December 25, at Vahdat Hall to honor the laureates.
The molecule-car of a registered team has at its disposal a runway prepared on a small portion of the (111) face of the same crystalline gold surface. The surface is maintained at a very low temperature (LT) of 5 kelvin (−268 °C) in ultra-high vacuum (UHV) of 10⁻⁸ Pa (10⁻¹⁰ mbar, roughly 10⁻¹⁰ Torr) for at least the duration of the competition. The race itself lasts no more than 2 days and 2 nights, including the construction time needed to build up, atom by atom, the same identical runway for each competitor. The construction and imaging of a given runway are carried out with a low-temperature scanning tunneling microscope (LT-UHV-STM) and certified by independent Track Commissioners before the start of the race itself.
On this gold surface, one runway per competitor is constructed atom by atom from a few gold ad-atoms on the surface. A molecule-car has to circulate around those ad-atoms, from the starting line to the arrival line, each line being delimited by 2 gold ad-atoms. The spacing between two metal ad-atoms along a runway is less than 4 nm. A line of at least 5 gold ad-atoms has to be constructed per team and per runway.
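For readers who want to check the numbers, the track conditions above are just unit conversions; a quick sketch (plain Python, nothing race-specific) confirms that 5 kelvin is about −268 °C (or −450 °F, as quoted further down in the Rice press release) and that 10⁻⁸ Pa is 10⁻¹⁰ mbar, on the order of 10⁻¹⁰ Torr:

```python
# Sanity-check the NanoCar Race track conditions quoted above:
# 5 kelvin, ultra-high vacuum of 1e-8 Pa.

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    return k * 9 / 5 - 459.67

def pa_to_mbar(pa: float) -> float:
    return pa / 100.0      # 1 mbar = 100 Pa exactly

def pa_to_torr(pa: float) -> float:
    return pa / 133.322    # 1 Torr is about 133.322 Pa

print(kelvin_to_celsius(5))      # -268.15, i.e. about -268 degrees C
print(kelvin_to_fahrenheit(5))   # about -450.67, i.e. roughly -450 degrees F
print(pa_to_mbar(1e-8))          # 1e-10 mbar
print(pa_to_torr(1e-8))          # about 7.5e-11 Torr, on the order of 1e-10
```

So the three pressure figures in the race rules are the same vacuum expressed in different units, not three different conditions.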
The organizers have included an example of a runway,
A preliminary runway constructed by C. Manzano and We Hyo Soe (A*STAR, IMRE) in Singapore, with the 2 starting gold ad-atoms, the 5 gold ad-atoms of the track, and the 2 arrival gold ad-atoms, all constructed atom by atom.
The French southwestern town of Toulouse is preparing for the first-ever international race of molecule-cars: five teams will present their car prototype during the Futurapolis event on November 27, 2015. These cars, which only measure a few nanometers in length and are propelled by an electric current, are scheduled to compete on a gold atom surface next year. Participants will be able to synthesize and test their molecule-car until October 2016 prior to taking part in the NanoCar Race organized at the CNRS Centre d’élaboration des matériaux et d’études structurales (CEMES) by Christian Joachim, senior researcher at the CNRS and Gwénaël Rapenne, professor at Université Toulouse III-Paul Sabatier, with the support of the CNRS.
There is a video describing the upcoming 2016 race (English, spoken and in subtitles),
Rice University will send an entry to the first international NanoCar Race, which will be held next October at Pico-Lab CEMES-CNRS in Toulouse, France.
Nobody will see this miniature grand prix, at least not directly. But cars from five teams, including a collaborative effort by the Rice lab of chemist James Tour and scientists at the University of Graz, Austria, will be viewable through sophisticated microscopes developed for the event.
Time trials will determine which nanocar is the fastest, though there may be head-to-head races with up to four cars on the track at once, according to organizers.
A nanocar is a single-molecule vehicle of 100 or so atoms that incorporates a chassis, axles and freely rotating wheels. Each of the entries will be propelled across a custom-built gold surface by an electric current supplied by the tip of a scanning tunneling microscope. The track will be cold at 5 kelvins (minus 450 degrees Fahrenheit) and in a vacuum.
Rice’s entry will be a new model and the latest in a line that began when Tour and his team built the world’s first nanocar more than 10 years ago.
“It’s challenging because, first of all, we have to design a car that can be manipulated on that specific surface,” Tour said. “Then we have to figure out the driving techniques that are appropriate for that car. But we’ll be ready.”
Victor Garcia, a graduate student at Rice, is building what Tour called his group’s Model 1, which will be driven by members of Professor Leonhard Grill’s group at Graz. The labs are collaborating to optimize the design.
The races are being organized by the Center for Materials Elaboration and Structural Studies (CEMES) of the French National Center for Scientific Research (CNRS).
The race was first proposed in a 2013 ACS Nano paper by Christian Joachim, a senior researcher at CNRS, and Gwénaël Rapenne, a professor at Paul Sabatier University.
Joining Rice are teams from Ohio University; Dresden University of Technology; the National Institute for Materials Science, Tsukuba, Japan; and Paul Sabatier [Université Toulouse III-Paul Sabatier].
To register for the first edition of the molecule-car Grand Prix in Toulouse, a team has to deliver to the organizers well before March 2016:
The details of its institution (academic, public, or private)
The design of its molecule-vehicle including the delivery of the xyz file coordinates of the atomic structure of its molecule-car
The propulsion mode, preferably by inelastic tunneling effects
The evaporation conditions of the molecule-vehicles
If possible a first UHV-STM image of the molecule-vehicle
The name and nationality of the LT-UHV-STM driver
This information is used by the organizers to select the teams and to organize training sessions for the accepted teams, in order to optimize their molecule-car designs and to learn the driving conditions on the LT-Nanoprobe instrument in Toulouse. The organizers will then deliver an official invitation letter giving a team the right to experiment on the Toulouse LT-Nanoprobe instrument with its own drivers. A detailed training calendar will be determined starting in September 2015.
A new and stable phase of gold with different physical and optical properties from those of conventional gold has been synthesized by Agency for Science, Technology and Research (A*STAR) researchers in Singapore, and promises to be useful for a wide range of applications, including plasmonics and catalysis.
Many materials exist in a variety of crystal structures, known as phases or polymorphs. These different phases have the same chemical composition but different physical structures, which give rise to different properties. For example, two well-known polymorphs of carbon, graphite and diamond, have radically different physical properties despite being the same element, because their atoms are arranged differently.
Gold has been used for many purposes throughout history, including jewelry, electronics and catalysis. Until now it has always been produced in one phase ― a face-centered cubic structure in which atoms are located at the corners and the center of each face of the constituent cubes.
Now, Lin Wu and colleagues at the A*STAR Institute of High Performance Computing have modeled the optical and plasmonic properties of nanoscale ribbons of a new phase of gold — the 4H hexagonal phase (…) — produced and characterized by collaborators at other institutes in Singapore, China and the USA. The team synthesized nanoribbons of the new phase simply by heating gold(III) chloride hydrate (HAuCl4) with a mixture of three organic solvents and then centrifuging and washing the product. This gave a high yield of about 60 per cent.
The researchers also produced 4H hexagonal phases of the precious metals silver, platinum and palladium by growing them on top of the gold 4H hexagonal phase.
The cubic phase looks identical when viewed front on, from one side or from above. In contrast, the new 4H hexagonal phase lacks this cubic symmetry and hence varies more with direction — a property known as anisotropy. This lower symmetry gives it more directionally varying optical properties, which may make it useful for plasmonic applications. “Our finding not only is of fundamental interest, but it also provides a new avenue for unconventional applications of plasmonic devices,” says Wu.
The team is keen to explore the potential of their new phase. “In the future, we hope to leverage the unconventional anisotropic properties of the new gold phase and design new devices with excellent performances not achievable with conventional face-centered-cubic gold,” says Wu. The synthesis method also gives rise to the potential for new strategies for controlling the crystalline phase of nanomaterials made from the noble metals.
Here’s a link to and a citation for the paper,
Stabilization of 4H hexagonal phase in gold nanoribbons by Zhanxi Fan, Michel Bosman, Xiao Huang, Ding Huang, Yi Yu, Khuong P. Ong, Yuriy A. Akimov, Lin Wu, Bing Li, Jumiati Wu, Ying Huang, Qing Liu, Ching Eng Png, Chee Lip Gan, Peidong Yang & Hua Zhang. Nature Communications 6, Article number: 7684. doi:10.1038/ncomms8684. Published 28 July 2015.
It’s not always easy to get perspective about nanotechnology research and commercialization efforts in Japan and South Korea. So, it was good to see Marjo Johne’s Nov. 9, 2015 article for the Globe and Mail,
Nanotechnology, a subfield in advanced manufacturing [?] that produces technologies less than 100 nanometres in size (a human hair is about 800 times wider), is a burgeoning industry that’s projected to grow to about $135-billion in Japan by 2020. South Korea’s government said it is aiming to boost its share of the sector to 20 per cent of the global market in 2020.
“Japan and Korea are active markets for nanotechnology,” says Mark Foley, a consultant with NanoGlobe Pte. Ltd., a Singapore-based firm that helps nanotech companies bring their products to market. “Japan is especially strong on the research side and [South] Korea is very fast in plugging nanotechnology into applications.”
Andrej Zagar, author of a research paper on nanotechnology in Japan, points to maturing areas in Japan’s nanotechnology sector: applications such as nanoelectronics, coatings, power electronics, and nano-micro electromechanical systems for sensors. “Japan’s IT sector is making the most progress as the implementations here are made most quickly,” says Mr. Zagar, who works as business development manager at LECIP Holdings Corp., a Tokyo-based company that manufactures intelligent transport systems for global markets. “As Japan is very environmentally focused, the environment sector in nanotech – fuel-cell materials, lithium-ion nanomaterials – is worth focusing on.”
A very interesting article, although don’t take everything as gospel. The definition of nanotechnology as a subfield of advanced manufacturing is problematic to me, since nanotechnology has medical and agricultural applications, which wouldn’t typically be described as part of an advanced manufacturing subfield. As well, I’m not sure where biomimicry would fit into this advanced manufacturing scheme. In any event, the applications mentioned in the article do fit that definition; it’s just not a comprehensive one.
Anyone who’s read this blog for a while knows I’m not a big fan of patents or the practice of using filed patents as a measure of scientific progress, but in the absence of a viable alternative, there’s this from Johne’s article,
Patent statistics suggest accelerated rates of nanotech-related innovations in these countries. According to StatNano, a website that monitors nanotechnology developments in the world, Japan and South Korea have the second and third highest number of nanotechnology patents filed this year with the United States Patent and Trademark Office.
As of September, Japan had filed 3,283 patents while South Korea’s total was 1,845. While these numbers are but a fraction of the United States’ 13,759 nanotech patents filed so far this year, they top Germany, which has only 1,100 USPTO nanotech patent filings this year, and Canada, which ranks 10th worldwide with 375 filings.
In South Korea, the rise of nanotechnology can be traced back to 2001, when the South Korean government launched its nanotechnology development plan, along with $94-million in funding. Since then, South Korea has poured more money into nanotechnology. As of 2012, it had invested close to $2-billion in nanotech research and development.
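To put those filing counts in perspective, a few lines of arithmetic (using only the StatNano figures quoted in the excerpt above) express each country’s USPTO nanotech filings as a share of the US count:

```python
# USPTO nanotech patent filings for 2015 (through September),
# using the StatNano figures quoted in the excerpt above.
filings = {
    "United States": 13759,
    "Japan": 3283,
    "South Korea": 1845,
    "Germany": 1100,
    "Canada": 375,
}

us_total = filings["United States"]
for country, count in sorted(filings.items(), key=lambda item: -item[1]):
    share = 100 * count / us_total
    print(f"{country}: {count:>6} filings ({share:.1f}% of the US count)")
```

Japan’s 3,283 filings work out to roughly 24% of the US count, South Korea’s to about 13%, Germany’s to 8%, and Canada’s to under 3% — “a fraction,” as the article says, but a sizeable one in Japan’s case.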
The applications mentioned in the article are the focus of competition not only in Japan and South Korea but internationally,
Mr. Foley says nanofibres and smart clothing are particularly hot areas in Japan these days. Nanofibres have broad applications and can be used in water and air filtration systems. He points to Toray Industries Inc. and Teijin Ltd. as leaders in advanced fibre technology.
“We’ve also seen advances in smart clothing in the last year or two, with clothing that can conduct electricity and measure things like heart rate, body temperature and sweat,” he says. “Last year, a sporting company in Japan released smart clothing based on Toray technology.”
How did Foley determine that ‘smart clothing’ is a particularly hot area in Japan? Is it the number of patents filed? Is it the amount of product in the marketplace? Is it consumer demand? And, how do those numbers compare with other countries? Also, I would have liked a little more detail as to what Foley meant by ‘nanofibres’.
This is a very Asia-centric story, which is a welcome change from US-centric and European-centric stories on this topic, and inevitably, China is mentioned,
As the nanotechnology industry continues to gain traction on a global scale, Mr. Foley says Japan and South Korea may have a hard time holding on to their top spots in the international market; China is moving up fast from behind.
“Top Chinese researchers from Harvard and Cambridge are returning to China, where in Suzhou City they’ve built a nanocity with over 200 nanotechnology-related companies,” he says …
The ‘nano city’ Foley mentions is called Nanopolis or Nanopolis Suzhou. It’s been mentioned here twice, first in a Jan. 20, 2014 posting and again in a Sept. 26, 2014 posting. It’s a massive project and I gather that while some buildings are occupied there are still a significant percentage under construction.
The United Nations (UN) and cultural rights don’t immediately leap to mind when the subjects of copyright and patents are discussed. A Mar. 13, 2015 posting by Tim Cushing on Techdirt and an Oct. 14, 2015 posting by Glyn Moody also on Techdirt explain the connection in the person of Farida Shaheed, the UN Special Rapporteur on cultural rights and the author of two UN reports one on copyright and one on patents.
Shaheed said a “widely shared concern stems from the tendency for copyright protection to be strengthened with little consideration to human rights issues.” This is illustrated by trade negotiations conducted in secrecy, and with the participation of corporate entities, she said.
She stressed the fact that one of the key points of her report is that intellectual property rights are not human rights. “This equation is false and misleading,” she said.
The last statement fires shots over the bows of “moral rights” purveyors, as well as those who view infringement as a moral issue, rather than just a legal one.
Shaheed also points out that the protections being installed around the world at the behest of incumbent industries are not necessarily reflective of creators’ desires. …
There is no human right to patent protection. The right to protection of moral and material interests cannot be used to defend patent laws that inadequately respect the right to participate in cultural life, to enjoy the benefits of scientific progress and its applications, to scientific freedoms and the right to food and health and the rights of indigenous peoples and local communities.
Patents, when properly structured, may expand the options and well-being of all people by making new possibilities available. Yet, they also give patent-holders the power to deny access to others, thereby limiting or denying the public’s right of participation to science and culture. The human rights perspective demands that patents do not extend so far as to interfere with individuals’ dignity and well-being. Where patent rights and human rights are in conflict, human rights must prevail.
The report touches on many issues previously discussed here on Techdirt. For example, how pharmaceutical patents limit access to medicines by those unable to afford the high prices monopolies allow — a particularly hot topic in the light of TPP’s rules on data exclusivity for biologics. The impact of patents on seed independence is considered, and there is a warning about corporate sovereignty chapters in trade agreements, and the chilling effects they can have on the regulatory function of states and their ability to legislate in the public interest — for example, with patent laws.
I have two Canadian examples for data exclusivity and corporate sovereignty issues, both from Techdirt. There’s an Oct. 19, 2015 posting by Glyn Moody featuring a recent Health Canada move to threaten a researcher into suppressing information from human clinical trials,
… one of the final sticking points of the TPP negotiations [Trans Pacific Partnership] was the issue of data exclusivity for the class of drugs known as biologics. We’ve pointed out that the very idea of giving any monopoly on what amounts to facts is fundamentally anti-science, but that’s a rather abstract way of looking at it. A recent case in Canada makes plain what data exclusivity means in practice. As reported by CBC [Canadian Broadcasting Corporation] News, it concerns unpublished clinical trial data about a popular morning sickness drug:
Dr. Navindra Persaud has been fighting for four years to get access to thousands of pages of drug industry documents being held by Health Canada.
He finally received the material a few weeks ago, but now he’s being prevented from revealing what he has discovered.
That’s because Health Canada required him to sign a confidentiality agreement, and has threatened him with legal action if he breaks it.
The clinical trials data is so secret that he’s been told that he must destroy the documents once he’s read them, and notify Health Canada in writing that he has done so….
For those who aren’t familiar with it, the Trans Pacific Partnership is a proposed trade agreement including 12 countries (Australia, Brunei Darussalam, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore, United States, and Vietnam) from the Pacific Rim. If all the countries sign on (it looks as if they will; Canada’s new Prime Minister as of Oct. 19, 2015 seems to be in favour of the agreement although he has yet to make a definitive statement), the TPP will represent a trading block that is almost double the size of the European Union.
An Oct. 8, 2015 posting by Mike Masnick provides a description of corporate sovereignty and of the Eli Lilly suit against the Canadian government.
We’ve pointed out a few times in the past that while everyone refers to the Trans Pacific Partnership (TPP) agreement as a “free trade” agreement, the reality is that there’s very little in there that’s actually about free trade. If it were truly a free trade agreement, then there would be plenty of reasons to support it. But the details show it’s not, and yet, time and time again, we see people supporting the TPP because “well, free trade is good.” …
… it’s that “harmonizing regulatory regimes” thing where the real nastiness lies, and where you quickly discover that most of the key factors in the TPP are not at all about free trade, but the opposite. It’s about as protectionist as can be. That’s mainly because of the really nasty corporate sovereignty clauses in the agreement (which are officially called “investor state dispute settlement” or ISDS in an attempt to make it sound so boring you’ll stop paying attention). Those clauses basically allow large incumbents to force the laws of countries to change to their will. Companies who feel that some country’s regulation somehow takes away “expected profits” can convene a tribunal, and force a country to change its laws. Yes, technically a tribunal can only issue monetary sanctions against a country, but countries who wish to avoid such monetary payments will change their laws.
Remember how Eli Lilly is demanding $500 million from Canada after Canada rejected some Eli Lilly patents, noting that the new compound didn’t actually do anything new and useful? Eli Lilly claims that using such a standard to reject patents unfairly attacks its expected future profits, and thus it can demand $500 million from Canadian taxpayers. Now, imagine that on all sorts of other systems.
Cultural rights, human rights, corporate rights. It would seem that corporate rights are going to run counter to human rights, if nothing else.