Tag Archives: Oxford University

Nanoscientists speculate that artificial life forms could be medicine of the future

Even after all these years, my jaw is still capable of dropping but then I read the details. This looks a lot like ‘medical nanobots’ which researchers have been talking about for a long time. Nice twist on a familiar theme. From an October 5, 2023 news item on ScienceDaily,

Imagine a life form that doesn’t resemble any of the organisms found on the tree of life. One that has its own unique control system, and that a doctor would want to send into your body. It sounds like a science fiction movie, but according to nanoscientists, it can—and should—happen in the future.

Creating artificial life is a recurring theme in both science and popular literature, where it conjures images of creeping slime creatures with malevolent intentions or super-cute designer pets. At the same time, the question arises: What role should artificial life play in our environment here on Earth, where all life forms are created by nature and have their own place and purpose?

Associate professor Chenguang Lou from the Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark, together with Professor Hanbin Mao from Kent State University, is the parent of a special artificial hybrid molecule that could lead to the creation of artificial life forms. They have now published a review in the journal Cell Reports Physical Science on the state of research in the field behind their creation. The field is called “hybrid peptide-DNA nanostructures,” and it is an emerging field, less than ten years old.

An October 5, 2023 University of Southern Denmark press release (also on EurekAlert) by Birgitte Svennevig, which originated the news item, shares the researcher’s (Chenguang Lou) vision for the research and more technical details about “hybrid peptide-DNA nanostructures” along with other international research efforts,

Lou’s vision is to create viral vaccines (modified and weakened versions of a virus) and artificial life forms that can be used for diagnosing and treating diseases.

“In nature, most organisms have natural enemies, but some do not. For example, some disease-causing viruses have no natural enemy. It would be a logical step to create an artificial life form that could become an enemy to them,” he says.

Similarly, he envisions such artificial life forms can act as vaccines against viral infection and can be used as nanorobots [also known as nanobots] or nanomachines loaded with medication or diagnostic elements and sent into a patient’s body.

“An artificial viral vaccine may be about 10 years away. An artificial cell, on the other hand, is on the horizon because it consists of many elements that need to be controlled before we can start building with them. But with the knowledge we have, there is, in principle, no hindrance to produce artificial cellular organisms in the future,” he says.

What are the building blocks that Lou and his colleagues in this field will use to create viral vaccines and artificial life? DNA and peptides are some of the most important biomolecules in nature, making DNA technology and peptide technology the two most powerful molecular tools in the nanotechnological toolkit today. DNA technology provides precise control over programming, from the atomic level to the macro level, but it can only provide limited chemical functions since it only has four bases: A, C, G, and T. Peptide technology, on the other hand, can provide sufficient chemical functions on a large scale, as there are 20 amino acids to work with. Nature uses both DNA and peptides to build various protein factories found in cells, allowing them to evolve into organisms.

Recently, Hanbin Mao and Chenguang Lou have succeeded in linking designed three-stranded DNA structures with three-stranded peptide structures, thus creating an artificial hybrid molecule that combines the strengths of both. This work was published in Nature Communications in 2022. (read the article here “Chirality transmission in macromolecular domains” and the press release at https://www.sdu.dk/en/om_sdu/fakulteterne/naturvidenskab/nyheder-2022/supermolekyle)

Elsewhere in the world, other researchers are also working on connecting DNA and peptides because this connection forms a strong foundation for the development of more advanced biological entities and life forms.

At Oxford University, researchers have succeeded in building a nanomachine made of DNA and peptides that can drill through a cell membrane, creating an artificial membrane channel through which small molecules can pass. (Spruijt et al., Nat. Nanotechnol. 2018, 13, 739-745)

At Arizona State University, Nicholas Stephanopoulos and colleagues have enabled DNA and peptides to self-assemble into 2D and 3D structures. (Buchberger et al., J. Am. Chem. Soc. 2020, 142, 1406-1416)

At Northwest University [Northwestern University?], researchers have shown that microfibers can form in conjunction with DNA and peptides self-assembling. DNA and peptides operate at the nano level, so when considering the size differences, microfibers are huge. (Freeman et al., Science, 2018, 362, 808-813)

At Ben-Gurion University of the Negev, scientists have used hybrid molecules to create an onion-like spherical structure containing cancer medication, which holds promise to be used in the body to target cancerous tumors. (Chotera et al., Chem. Eur. J., 2018, 24, 10128-10135)

“In my view, the overall value of all these efforts is that they can be used to improve society’s ability to diagnose and treat sick people. Looking forward, I will not be surprised that one day we can arbitrarily create hybrid nanomachines, viral vaccines and even artificial life forms from these building blocks to help the society to combat those difficult-to-cure diseases. It would be a revolution in healthcare,” says Chenguang Lou.

Here’s a link to and a citation for the latest paper,

Peptide-DNA conjugates as building blocks for de novo design of hybrid nanostructures by Mathias Bogetoft Danielsen, Hanbin Mao, Chenguang Lou. Cell Reports Physical Science Volume 4, Issue 10, 18 October 2023, 101620 DOI: https://doi.org/10.1016/j.xcrp.2023.101620

This paper is open access.

Creating time crystals with a quantum computer

This November 30, 2021 news item on phys.org about time crystals caught my attention,

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today’s early prototypes are still capable of remarkable feats.

For example, the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy—like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment—one that has now finally come to fruition.

In research published Nov. 30 [2021] in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.

The Google Sycamore chip used in the creation of a time crystal. Credit: Google Quantum AI [downloaded from https://phys.org/news/2021-11-physicists-crystals-quantum.html]

A November 30, 2021 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into the work and into the nature of time crystals,

“The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right,” said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. “Instead of computation, we’re putting the computer to work as a new experimental platform to realize and detect new phases of matter.”

For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)

“Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. “While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics.”

What a time crystal is and isn’t

The basic ingredients to make this time crystal are as follows: The physics equivalent of a fruit fly and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena – including phase transitions and magnetism – which consists of a lattice where each site is occupied by a particle that can be in two states, represented as a spin up or down.

During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems – systems where the particles get “stuck” in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically “kicked” by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.

The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the “dance” of the spins should sync up with this rhythm, but in a time crystal it doesn’t. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system’s “time translation symmetry” is broken. Symmetries play a fundamental role in physics, and they are often broken – explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because unlike other symmetries, it can’t be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.

The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it’s a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.

“It’s a completely robust phase of matter, where you’re not fine-tuning parameters or states but your system is still quantum,” said Sondhi, professor of physics at Oxford and co-author of the paper. “There’s no feed of energy, there’s no drain of energy, and it keeps going forever and it involves many strongly interacting particles.”

While this may sound suspiciously close to a “perpetual motion machine,” a closer look reveals that time crystals don’t break any laws of physics. Entropy – a measure of disorder in the system – remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.

Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for “many-body localization” (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.

For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google’s Sycamore quantum computing hardware to program 20 “spins” using the quantum version of a classical computer’s bits of information, known as qubits.

Revealing just how intense the interest in time crystals currently is, another time crystal was published in Science this month [November 2021]. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.

Quantum opportunities

The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration – so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely – the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.

“We managed to use the versatility of the quantum computer to help us analyze its own limitations,” said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. “It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations.”

A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.

“A unique feature of our quantum processor is its ability to create highly complex quantum states,” said Xiao Mi, a researcher at Google and co-lead author of the paper. “These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space – an otherwise intractable task.”

Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. “I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics,” said Pedram Roushan, researcher at Google and senior author of the paper.

“We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics,” said Ippoliti. “With the unique capabilities of these systems, there’s hope that you might discover some new phenomenon that you hadn’t predicted.”

A view of the Google dilution refrigerator, which houses the Sycamore chip. Credit: Google Quantum AI [downloaded from https://scitechdaily.com/stanford-and-google-team-up-to-create-time-crystals-with-quantum-computers/]

Here’s a link to and a citation for the paper,

Time-Crystalline Eigenstate Order on a Quantum Processor by Xiao Mi, Matteo Ippoliti, Chris Quintana, Ami Greene, Zijun Chen, Jonathan Gross, Frank Arute, Kunal Arya, Juan Atalaya, Ryan Babbush, Joseph C. Bardin, Joao Basso, Andreas Bengtsson, Alexander Bilmes, Alexandre Bourassa, Leon Brill, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Benjamin Chiaro, Roberto Collins, William Courtney, Dripto Debroy, Sean Demura, Alan R. Derk, Andrew Dunsworth, Daniel Eppens, Catherine Erickson, Edward Farhi, Austin G. Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Matthew P. Harrigan, Sean D. Harrington, Jeremy Hilton, Alan Ho, Sabrina Hong, Trent Huang, Ashley Huff, William J. Huggins, L. B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Tanuj Khattar, Seon Kim, Alexei Kitaev, Paul V. Klimov, Alexander N. Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Joonho Lee, Kenny Lee, Aditya Locharla, Erik Lucero, Orion Martin, Jarrod R. McClean, Trevor McCourt, Matt McEwen, Kevin C. Miao, Masoud Mohseni, Shirin Montazeri, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Michael Newman, Murphy Yuezhen Niu, Thomas E. O’Brien, Alex Opremcak, Eric Ostby, Balint Pato, Andre Petukhov, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vladimir Shvarts, Yuan Su, Doug Strain, Marco Szalay, Matthew D. Trevithick, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Adam Zalcman, Hartmut Neven, Sergio Boixo, Vadim Smelyanskiy, Anthony Megrant, Julian Kelly, Yu Chen, S. L. Sondhi, Roderich Moessner, Kostyantyn Kechedzhi, Vedika Khemani & Pedram Roushan. Nature (2021) DOI: https://doi.org/10.1038/s41586-021-04257-w Published 30 November 2021

This is a preview of the unedited paper being provided by Nature. Click on the Download PDF button (to the right of the title) to get access.

Want to help Arctic science and look at polar bears from the comfort of home?

Two polar bears scored according to the Polar Bear Score Card Standard Fatness Index. The bear on the left is categorized as thin, a score of 2/5, while the bear on the right is considered very fat, 5/5. (Photo: Doug Clark, USask

A March 1, 2021 news item on phys.org announced a call for volunteers from University of Saskatchewan (USask) polar bear researcher Doug Clark (the response was tremendous),

University of Saskatchewan (USask) researcher Doug Clark is launching a first-of-its-kind research project that will engage citizen volunteers to help advance knowledge about polar bear behavior by analyzing a decade’s worth of images captured by trail cameras at Wapusk National Park in northern Manitoba.

“This is a totally different way to do polar bear research,” said Clark, an associate professor at USask’s School of Environment and Sustainability. “It’s non-invasive, it involves the public for the first time, and it’s being done in a way that can carry on through the pandemic without endangering anyone in northern communities.”

A February 26, 2021 University of Saskatchewan news release by Sarath Peiris, which originated the news item, described the project

Clark is collaborating with Oxford University penguinologist Tom Hart on the project, which will be run on Zooniverse—a “people-powered” online platform that has more than two million volunteers worldwide who assist researchers in almost every discipline to sort and organize data.

Hart has been using Zooniverse to help with his Antarctic Penguin Watch and Seabird Watch projects. He’s helping Clark and his students to set up the polar bear project by aggregating and uploading data, and will work with Clark on the analysis. (The platform gets institutional support from Oxford University and the Adler Planetarium, and receives grants from a variety of sources.)

“This allows people, who might otherwise just passively consume images on TV and social media, to participate in polar bear research and understand how these bears are interacting with people and other wildlife in what we know is a rapidly changing environment,” said Clark.

The volunteers are supplied with a field guide and asked to count the number of bears in photos, their gender, cubs, body condition and other factors, choosing from provided options. Beta testing with more than 60 volunteers showed the process works well. The photos will be uploaded in tranches over the coming months, allowing volunteers to work through one batch before moving on to the next.

“Volunteers can help us process data in ways that are incredibly labour-intensive, which otherwise would take us and our students years to do. Frankly, Zooniverse produces more robust data and more robust analyses than if we were tiredly flipping through photos on our own.”

The project … launched Feb. 27 [2021\, on International Polar Bear Day.

The research project began in 2011 when Clark was asked by Parks Canada to find out if the field camps it established in Wapusk attracted or repelled polar bears—a question that still hasn’t been conclusively answered.

Other questions his team is trying to answer are:

  • What are the drivers of polar bear visits to human infrastructure/activity? (i.e. is it environmental, is it a result of a lack of sea ice/nutritional stress, or is it a response to human activity?)
  • Are there changes over time in where/when polar bears, and all the other Arctic and boreal species seen in the photos, are observed?

Researchers have installed five non-invasive trail cameras at each of three field camp sites, and eight more at the Churchill Northern Studies Centre that operate year round, and have captured more than 600 discrete polar bear observations over 10 years, along with images of other species such as wolf, caribou, grizzly bears, moose, Arctic and red foxes, and even occasional wolverines.

The four sites are along the Hudson Bay coast and are separated by almost 200 kilometres, across the ecological boundary between boreal forest and tundra providing invaluable data on multiple species in a changing environment.

Ryan Brook, an associate professor in USask’s College of Agriculture and Bioresources, is taking advantage of the lucky “by-catch” of Clark’s project—the images of caribou and wolves—to conduct research on these species, especially caribou populations, at a time of Arctic warming and changing weather patterns.

Here’s more about the project from The Arctic Bears Project on Zooniverse,

Work with us to understand how polar, grizzly, and black bears behave in a changing environment

About The Arctic Bears Project

We’re learning how polar, grizzly, and black bears behave in the changing Arctic environment, with special attention to how they interact with people. The images you’ll see come from remote cameras set up on the fences of field camps in Wapusk National Park, on the west coast of Hudson Bay in Manitoba, Canada. Wapusk means “white bear” in the Cree language, and the park was established in 1996. At the time the park was established the area was well-known for its importance as polar bear denning habitat, and local people knew black bears lived in the forests there, but the appearance of grizzly bears in the late 1990s was a surprise. Read more about our research findings here.

When we say “we”, that includes a whole lot of people who all contribute to making this project happen: and not just the researchers! Wapusk National Park’s staff in Churchill, Manitoba, got the ball rolling in 2010 and since then community members in Churchill and elsewhere have helped us shape this project. Their enthusiasm for non-invasive wildlife research tools, and for the unexpected things we see on the cameras, motivates our team. In the early days of this work we were just excited that our cameras survived over the winter, but pretty soon we were realizing just how many photos we were collecting. This is where you come in: Zooniverse volunteers. Your help processing a decade’s worth of pictures from a changing sub-Arctic landscape is a critical task, and we’re so grateful to have your assistance with this research. These photos are downloaded once a year from most cameras, and the days when we finally see those images are special treats that every one of our team enjoys. We hope you experience the same feeling.

As of Wednesday, March 3, 2021, The Arctic Bears Project is now out of data but hopefully there will be more in the future. In the meantime, you can check out the Zooniverse for other projects.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eyeopening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]

And evidence keeps mounting, I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’;, it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments abut how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system, which can recommend the correct referral decision for more than 50 eye diseases, as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research than can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.

Meet Pepper, a robot for health care clinical settings

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org,

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

A June 22, 2017 McMaster University news release, which originated the news item, provides more detail,

With the help of Softbank’s humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University and Hermenio Lima, a dermatologist and professor of medicine at McMaster’s Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients’ behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the [Canada] Science and Technology Museum in Ottawa.

“Pepper will help us highlight some very important aspects and motives of human behaviour and communication,” said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, ‘read’ emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: “We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients’ communications needs.”

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD’s Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

“This partnership is a testament to the collaborative nature of innovation,” said dean of FCAD, Charles Falzon. “I’m thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom.”

“This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care,” says McMaster’s Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper, offers a rich source of research potential for the projects at Ryerson and McMaster. This integration is also supported by IBM Canada and [Southern Ontario Smart Computing Innovation Platform] SOSCIP by providing the project access to high performance research computing resources and staff in Ontario.

“We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations,” said Harris Smith.

I just went to a presentation at the facility where my mother lives and it was all about delivering more individualized and better care for residents. Given that most seniors in British Columbia care facilities do not receive the number of service hours per resident recommended by the province due to funding issues, it seemed a well-meaning initiative offered in the face of daunting odds against success. Now with this news, I wonder what impact ‘Pepper’ might ultimately have on seniors and on the people who currently deliver service. Of course, this assumes that researchers will be able to tackle problems with understanding various accents and communication strategies, which are strongly influenced by culture and, over time, the aging process.

After writing that last paragraph I stumbled onto this June 27, 2017 Sage Publications press release on EurekAlert about a related matter,

Existing digital technologies must be exploited to enable a paradigm shift in current healthcare delivery which focuses on tests, treatments and targets rather than the therapeutic benefits of empathy. Writing in the Journal of the Royal Society of Medicine, Dr Jeremy Howick and Dr Sian Rees of the Oxford Empathy Programme, say a new paradigm of empathy-based medicine is needed to improve patient outcomes, reduce practitioner burnout and save money.

Empathy-based medicine, they write, re-establishes relationship as the heart of healthcare. “Time pressure, conflicting priorities and bureaucracy can make practitioners less likely to express empathy. By re-establishing the clinical encounter as the heart of healthcare, and exploiting available technologies, this can change”, said Dr Howick, a Senior Researcher in Oxford University’s Nuffield Department of Primary Care Health Sciences.

Technology is already available that could reduce the burden of practitioner paperwork by gathering basic information prior to consultation, for example via email or a mobile device in the waiting room.

During the consultation, the computer screen could be placed so that both patient and clinician can see it, a help to both if needed, for example, to show infographics on risks and treatment options to aid decision-making and the joint development of a treatment plan.

Dr Howick said: “The spread of alternatives to face-to-face consultations is still in its infancy, as is our understanding of when a machine will do and when a person-to-person relationship is needed.” However, he warned, technology can also get in the way. A computer screen can become a barrier to communication rather than an aid to decision-making. “Patients and carers need to be involved in determining the need for, and designing, new technologies”, he said.

I sincerely hope that the Canadian project has taken into account some of the issues described in the ’empathy’ press release and in the article, which can be found here,

Overthrowing barriers to empathy in healthcare: empathy in the age of the Internet
by J Howick and S Rees. Journaly= of the Royal Society of Medicine Article first published online: June 27, 2017 DOI: https://doi.org/10.1177/0141076817714443

This article is open access.

Brown recluse spider, one of the world’s most venomous spiders, shows off unique spinning technique

Caption: American Brown Recluse Spider is pictured. Credit: Oxford University

According to scientists from Oxford University this deadly spider could teach us a thing or two about strength. From a Feb. 15, 2017 news item on ScienceDaily,

Brown recluse spiders use a unique micro looping technique to make their threads stronger than that of any other spider, a newly published UK-US collaboration has discovered.

One of the most feared and venomous arachnids in the world, the American brown recluse spider has long been known for its signature necro-toxic venom, as well as its unusual silk. Now, new research offers an explanation for how the spider is able to make its silk uncommonly strong.

Researchers suggest that if applied to synthetic materials, the technique could inspire scientific developments and improve impact absorbing structures used in space travel.

The study, published in the journal Material Horizons, was produced by scientists from Oxford University’s Department of Zoology, together with a team from the Applied Science Department at Virginia’s College of William & Mary. Their surveillance of the brown recluse spider’s spinning behaviour shows how, and to what extent, the spider manages to strengthen the silk it makes.

A Feb. 15, 2017 University of Oxford press release, which originated the news item,  provides more detail about the research,

From observing the arachnid, the team discovered that unlike other spiders, who produce round ribbons of thread, recluse silk is thin and flat. This structural difference is key to the thread’s strength, providing the flexibility needed to prevent premature breakage and withstand the knots created during spinning which give each strand additional strength.

Professor Hannes Schniepp from William & Mary explains: “The theory of knots adding strength is well proven. But adding loops to synthetic filaments always seems to lead to premature fibre failure. Observation of the recluse spider provided the breakthrough solution; unlike all spiders its silk is not round, but a thin, nano-scale flat ribbon. The ribbon shape adds the flexibility needed to prevent premature failure, so that all the microloops can provide additional strength to the strand.”

By using computer simulations to apply this technique to synthetic fibres, the team were able to test and prove that adding even a single loop significantly enhances the strength of the material.

William & Mary PhD student Sean Koebley adds: “We were able to prove that adding even a single loop significantly enhances the toughness of a simple synthetic sticky tape. Our observations open the door to new fibre technology inspired by the brown recluse.”

Speaking on how the recluse’s technique could be applied more broadly in the future, Professor Fritz Vollrath, of the Department of Zoology at Oxford University, expands: “Computer simulations demonstrate that fibres with many loops would be much, much tougher than those without loops. This right away suggests possible applications. For example carbon filaments could be looped to make them less brittle, and thus allow their use in novel impact absorbing structures. One example would be spider-like webs of carbon-filaments floating in outer space, to capture the drifting space debris that endangers astronaut lives’ and satellite integrity.”

Here’s a link to and a citation for the paper,

Toughness-enhancing metastructure in the recluse spider’s looped ribbon silk by
S. R. Koebley, F. Vollrath, and H. C. Schniepp. Mater. Horiz., 2017, Advance Article DOI: 10.1039/C6MH00473C First published online 15 Feb 2017

This paper is open access although you may need to register with the Royal Society of Chemistry’s publishing site to get access.

The character of water: both types

This is to use an old term, ‘mindblowing’. Apparently, there are two types of the liquid we call water according to a Nov. 10, 2016 news item on phys.org,

There are two types of liquid water, according to research carried out by an international scientific collaboration. This new peculiarity adds to the growing list of strange phenomena in what we imagine is a simple substance. The discovery could have implications for making and using nanoparticles as well as in understanding how proteins fold into their working shape in the body or misfold to cause diseases such as Alzheimer’s or CJD [Creutzfeldt-Jakob Disease].

A Nov. 10, 2016 Inderscience Publishers news release, which originated the news item, expands on the theme,

Writing in the International Journal of Nanotechnology, Oxford University’s Laura Maestro and her colleagues in Italy, Mexico, Spain and the USA, explain how the physical and chemical properties of water have been studied for more than a century and revealed some odd behavior not seen in other substances. For instance, when water freezes it expands. By contrast, almost every other known substance contracts when it is cooled. Water also exists as solid, liquid and gas within a very small temperature range (100 degrees Celsius) whereas the melting and boiling points of most other compounds span a much greater range.

Many of water’s bizarre properties are due to the molecule’s ability to form short-lived connections with each other known as hydrogen bonds. There is a residual positive charge on the hydrogen atoms in the V-shaped water molecule either or both of which can form such bonds with the negative electrons on the oxygen atom at the point of the V. This makes fleeting networks in water possible that are frozen in place when the liquid solidifies. They bonds are so short-lived that they do not endow the liquid with any structure or memory, of course.

The team has looked closely at several physical properties of water like its dielectric constant (how well an electric field can permeate a substance) or the proton-spin lattice relaxation (the process by which the magnetic moments of the hydrogen atoms in water can lose energy having been excited to a higher level). They have found that these phenomena seem to flip between two particular characters at around 50 degrees Celsius, give or take 10 degrees, i.e. from 40 to 60 degrees Celsius. The effect is that thermal expansion, speed of sound and other phenomena switch between two different states at this crossover temperature.

These two states could have important implications for studying and using nanoparticles where the character of water at the molecule level becomes important for the thermal and optical properties of such particles. Gold and silver nanoparticles are used in nanomedicine for diagnostics and as antibacterial agents, for instance. Moreover, the preliminary findings suggest that the structure of liquid water can strongly influence the stability of proteins and how they are denatured at the crossover temperature, which may well have implications for understanding protein processing in the food industry but also in understanding how disease arises when proteins misfold.

Here’s a link to and a citation for the paper,

On the existence of two states in liquid water: impact on biological and nanoscopic systems
by L.M. Maestro, M.I. Marqués, E. Camarillo, D. Jaque, J. García Solé, J.A. Gonzalo, F. Jaque, Juan C. Del Valle, F. Mallamace, H.E. Stanley.
International Journal of Nanotechnology (IJNT), Vol. 13, No. 8/9, 2016 DOI: 10.1504/IJNT.2016.079670

This paper is behind a paywall.

Spider silk as a bio super-lens

Bangor University (Wales, UK) is making quite the impact these days. I’d never heard of the institution until their breakthrough with nanobeads (Sept. 7, 2016 posting) to break through a resolution barrier and now there’s a second breakthrough with their partners at Oxford University (England, UK). From an Aug. 19, 2016 news item on ScienceDaily (Note: A link has been removed),

Scientists at the UK’s Bangor and Oxford universities have achieved a world first: using spider-silk as a superlens to increase the microscope’s potential.

Extending the limit of classical microscope’s resolution has been the ‘El Dorado’ or ‘Holy Grail’ of microscopy for over a century. Physical laws of light make it impossible to view objects smaller than 200 nm — the smallest size of bacteria, using a normal microscope alone. However, superlenses which enable us to see beyond the current magnification have been the goal since the turn of the millennium.

Hot on the heels of a paper (Sci. Adv. 2 e1600901,2016) revealing that a team at Bangor University’s School of Electronic Engineering has used a nanobead-derived superlens to break the perceived resolution barrier, the same team has achieved another world first.

Now the team, led by Dr Zengbo Wang and in colloboration with Prof. Fritz Vollrath’s silk group at Oxford University’s Department of Zoology, has used a naturally occurring material — dragline silk of the golden web spider, as an additional superlens, applied to the surface of the material to be viewed, to provide an additional 2-3 times magnification.

This is the first time that a naturally occurring biological material has been used as a superlens.

An Aug. 19, 2016 Bangor University press release (also on EurekAlert), which originated the news item, provides more information about the new work,

In the paper in Nano Letters (DOI: 10.1021/acs.nanolett.6b02641, Aug 17 2016), the joint team reveals how they used a cylindrical piece of spider silk from the thumb-sized Nephila spider as a lens.

Dr Zengbo Wang said:

“We have proved that the resolution barrier of the microscope can be broken using a superlens, but production of manufactured superlenses involves some complex engineering processes which are not widely accessible to other researchers. This is why we have been interested in looking for naturally occurring superlenses provided by ‘Mother Nature’, which may exist around us, so that everyone can access superlenses.”

Prof Fritz Vollrath adds:

“It is very exciting to find yet another cutting edge and totally novel use for a spider silk, which we have been studying for over two decades in my laboratory.”

These lenses could be used for viewing previously ‘invisible’ structures, including engineered nano-structures and biological micro-structures, as well as, potentially, native germs and viruses.

The natural cylindrical structure at micron and submicron scales makes silks ideal candidates; in this case, the individual filaments had diameters of one tenth that of a thin human hair.

The spider filament enabled the group to view details on a micro-chip and a Blu-ray disk that would be invisible under an unmodified optical microscope.
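
(A little arithmetic of my own, not the researchers’, shows why even a modest 2-3 times magnification matters here: it lifts the 100 nm Blu-ray features mentioned in the figure caption below past the roughly 200 nm diffraction limit, where a conventional objective can then resolve them.)

```python
feature_nm = 100          # Blu-ray line width, from the figure caption below
silk_magnification = 2.1  # magnification reported for the silk superlens
diffraction_limit_nm = 200

apparent_nm = feature_nm * silk_magnification
resolvable = apparent_nm > diffraction_limit_nm
print(f"apparent feature size: {apparent_nm:.0f} nm; resolvable: {resolvable}")
```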

In much the same way as when you look through a cylindrical glass or bottle, where the clearest image runs only along the narrow strip directly opposite your line of vision or resting on the surface being viewed, the single filament provides a one-dimensional image along its length.

Wang explains:

“The cylindrical silk lens has advantages in the larger field-of-view when compared to a microsphere superlens. Importantly for potential commercial applications, a spider silk nanoscope would be robust and economical, which in turn could provide excellent manufacturing platforms for a wide range of applications.”

James Monks, a co-author on the paper, comments: “It has been an exciting time to be able to develop this project as part of my honours degree in electronic engineering at Bangor University and I am now very much looking forward to joining Dr Wang’s team as a PhD student in nano-photonics.”

The researchers have provided a close up image with details,

Caption: (a) Nephila edulis spider in its web. (b) Schematic drawing of reflection mode silk biosuperlens imaging. The spider silk was placed directly on top of the sample surface using a soft tape, which magnifies underlying nano objects 2-3 times. (c) SEM image of a Blu-ray disk with 200/100 nm grooves and lines. (d) Clear magnified image (2.1x) of the Blu-ray disk under the spider silk superlens. Credit: Bangor University/ University of Oxford

Here’s a link to and a citation for the ‘spider silk’ superlens paper,

Spider Silk: Mother Nature’s Bio-Superlens by James N. Monks, Bing Yan, Nicholas Hawkins, Fritz Vollrath, and Zengbo Wang. Nano Letters, Article ASAP. DOI: 10.1021/acs.nanolett.6b02641. Publication Date (Web): August 17, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Brushing your way to nanofibres

The scientists are using what looks like a hairbrush to create nanofibres,

Figure 2: Brush-spinning of nanofibers. (Reprinted with permission by Wiley-VCH Verlag) [downloaded from http://www.nanowerk.com/spotlight/spotid=41398.php]

A Sept. 23, 2015 Nanowerk Spotlight article by Michael Berger provides an in-depth look at this technique (developed by a joint research team of scientists from the University of Georgia, Princeton University, and Oxford University), which could make producing nanofibers for scaffolds (tissue engineering and other applications) easier and cheaper,

Polymer nanofibers are used in a wide range of applications such as the design of new composite materials, the fabrication of nanostructured biomimetic scaffolds for artificial bones and organs, biosensors, fuel cells or water purification systems.

“The simplest method of nanofiber fabrication is direct drawing from a polymer solution using a glass micropipette,” Alexander Tokarev, Ph.D., a Research Associate in the Nanostructured Materials Laboratory at the University of Georgia, tells Nanowerk. “This method, however, does not scale up and thus has not found practical applications. In our new work, we introduce a scalable method of nanofiber spinning named touch-spinning.”

James Cook in a Sept. 23, 2015 article for Materials Views provides a description of the technology,

A glass rod is glued to a rotating stage, whose diameter can be chosen over a wide range, from a few centimeters to more than 1 m. A polymer solution is supplied, for example, from the needle of a syringe pump that faces the glass rod. The distance between the droplet of polymer solution and the tip of the glass rod is adjusted so that the glass rod contacts the polymer droplet as it rotates.

Following the initial “touch”, the polymer droplet forms a liquid bridge. As the stage rotates, the bridge stretches and the fiber length increases, with the diameter decreasing due to mass conservation. It was shown that the diameter of the fiber can be precisely controlled down to 40 nm by the speed of the stage rotation.
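
(An aside from me: the mass-conservation step is easy to make concrete. Here’s a minimal sketch, my own illustration rather than the authors’ code: if the liquid bridge’s volume is fixed, a cylindrical fiber stretched from length L0 to L thins as d = d0·√(L0/L). The numbers below are hypothetical, chosen only to land on the paper’s 40 nm figure.)

```python
import math

def stretched_diameter(d0: float, L0: float, L: float) -> float:
    """Diameter of a cylindrical liquid bridge stretched from length L0 to L,
    assuming its volume (pi/4 * d^2 * length) is conserved."""
    return d0 * math.sqrt(L0 / L)

# Illustrative: a bridge 10 um wide (10,000 nm) and 0.01 mm long,
# drawn out to 625 mm, thins by a factor of 250.
d_nm = stretched_diameter(d0=10_000.0, L0=0.01, L=625.0)
print(f"final fiber diameter: {d_nm:.0f} nm")  # 40 nm
```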

The method can be easily scaled-up by using a round hairbrush composed of 600 filaments.

When the rotating brush touches the surface of a polymer solution, the brush filaments draw many fibers simultaneously, producing hundreds of kilometers of fibers in minutes.
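
(Some back-of-envelope arithmetic of mine suggests how “hundreds of kilometers in minutes” is plausible; the article supplies only the 600-filament count, so the brush size and rotation speed below are my assumptions.)

```python
import math

# Rough throughput for a 600-filament brush spinner.
n_filaments = 600
brush_diameter_m = 0.10   # assumed
rpm = 1000                # assumed

rim_speed_m_s = math.pi * brush_diameter_m * rpm / 60  # drawing speed per filament
km_per_minute = rim_speed_m_s * 60 * n_filaments / 1000
print(f"~{km_per_minute:.0f} km of fiber per minute")  # ~188 km/min
```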

The drawn fibers are uniform since the fiber diameter depends on only two parameters: polymer concentration and speed of drawing.

Returning to Berger’s Spotlight article, there is an important benefit with this technique,

As the team points out, one important aspect of the method is the drawing of single filament fibers.

These single filament fibers can be easily wound onto spools of different shapes and dimensions so that well-aligned one-directional, orthogonal, or randomly oriented fiber meshes with a well-controlled average mesh size can be fabricated using this very simple method.

“Owing to the simplicity of the method, our set-up could be used in any biomedical lab and facility,” notes Tokarev. “For example, a scaffold customized by size, dimensions and other morphologic characteristics can be fabricated using donor biomaterials.”

Berger’s and Cook’s articles offer more illustrations and details.

Here’s a link to and a citation for the paper,

Touch- and Brush-Spinning of Nanofibers by Alexander Tokarev, Darya Asheghal, Ian M. Griffiths, Oleksandr Trotsenko, Alexey Gruzd, Xin Lin, Howard A. Stone, and Sergiy Minko. Advanced Materials. DOI: 10.1002/adma.201502768. First published: 23 September 2015

This paper is behind a paywall.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company, featuring a predictive interactive tool designed by NPR (US National Public Radio) based on data from Oxford University researchers, which tells you how likely it is that your job could be automated (though no one knows for sure) (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]); none of them seem to be in imminent danger when you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.
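
(The news release doesn’t spell out the algorithm, but the loop it describes has the standard shape of an evolutionary search. Here’s a generic, minimal sketch of my own; the real system evolved regulatory networks and scored them against a database of planarian experiments, and random_network, fitness, crossover and mutate below are hypothetical stand-ins the caller would supply.)

```python
import random

def evolve(random_network, fitness, crossover, mutate,
           pop_size=100, generations=200):
    """Generic evolutionary search over candidate regulatory networks:
    keep the candidates whose simulated outcomes best match the data."""
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]  # truncation selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```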

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunching enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology. DOI: 10.1371/journal.pcbi.1004295. Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.