Tag Archives: University of Oxford

Nanotechnology book suggestions for 2020

A January 23, 2020 news item on Nanowerk features a number of new books. Here are summaries of a couple of them from the news item (Note: Links have been removed),

The main goal of “Nanotechnology in Skin, Soft Tissue, and Bone Infections” is to address the role of nanobiotechnology in skin, soft tissue, and bone infections, which have become difficult to treat because the pathogens involved have developed resistance to existing antibiotics.

This interdisciplinary book will be useful to a diverse group of readers including nanotechnologists, medical microbiologists, dermatologists, osteologists, biotechnologists, and bioengineers.

“Nanotechnology in Skin, Soft-Tissue, and Bone Infections” is divided into four sections. Section I covers the role of nanotechnology in skin infections such as atopic dermatitis, and nanomaterials for combating infections caused by bacteria and fungi. Section II describes how nanotechnology can be used against soft-tissue infections such as diabetic foot ulcers and other wound infections. Section III discusses nanomaterials in artificial scaffolds for bone engineering and bone infections caused by bacteria and fungi, as well as the toxicity issues raised by nanomaterials in general and nanoparticles in particular.

“Advanced Materials for Defense: Development, Analysis and Applications” is a collection of high-quality research and review papers submitted to the 1st World Conference on Advanced Materials for Defense (AUXDEFENSE 2018).

It covers a wide range of topics related to the defense area, such as ballistic protection, impact and energy absorption, composite materials, smart materials and structures, nanomaterials and nanostructures, CBRN protection, thermoregulation, camouflage, auxetic materials, and monitoring systems.

Written by leading experts in these subjects, the work discusses technological advances in materials as well as product design, analysis, and case studies.

This volume will prove to be a valuable resource for researchers and scientists from different engineering disciplines such as materials science, chemical engineering, biological sciences, textile engineering, mechanical engineering, environmental science, and nanotechnology.

Nanoengineering is a branch of engineering that exploits the unique properties of nanomaterials—their size and quantum effects—and the interaction between these materials, in order to design and manufacture novel structures and devices that possess entirely new functionality and capabilities, which are not obtainable by macroscale engineering.

While the term nanoengineering is often used synonymously with the general term nanotechnology, the former technically focuses more closely on the engineering aspects of the field, as opposed to the broader science and general technology aspects that are encompassed by the latter.

“Nanoengineering: The Skills and Tools Making Technology Invisible” puts a spotlight on some of the scientists who are pushing the boundaries of technology, giving examples of their work and of how they are advancing knowledge one little step at a time.

This book is a collection of essays about researchers involved in nanoengineering and many other facets of nanotechnologies. This research involves truly multidisciplinary and international efforts, covering a wide range of scientific disciplines such as medicine, materials sciences, chemistry, toxicology, biology and biotechnology, physics and electronics.

The book showcases 176 very specific research projects and you will meet the scientists who develop the theories, conduct the experiments, and build the new materials and devices that will make nanoengineering a core technology platform for many future products and applications.

On January 28, 2020, Azonano featured a book review for “Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology.” The review by Rebecca Megson-Smith, marketing lead, was originally published on the NuNano company blog

Covering science’s ‘greatest hits’ since we have been able to look at the world on the nanoscale, as well as where it is taking our understanding of life, Nano Comes to Life: How Nanotechnology is Transforming Medicine and the Future of Biology is an inspiring and joyful read.

As author Sonia Contera writes, biology is an area of intense interest and study. With the advent of nanotechnology, a more diverse range of scientists from across the disciplines are now coming together to solve some of the biggest issues of our time.

The ability to visualise, interact with, manipulate and create matter at the nanometer scale – the level of molecules, proteins and DNA – combined with the physicist’s quantitative and mathematical approach is revolutionising our understanding of the complexity which underpins life.

I particularly enjoyed the section that discussed the history of scanning tools. Here Contera highlights how profoundly the development of the STM [scanning tunneling microscope] transformed human interaction with matter.

Not only did it image at the atomic level with ‘unprecedented accuracy using a relatively simple, cheap tool’, but the STM was able to pick up and move the atoms around one by one. And what it couldn’t do effectively – work within the biological environments – was and is achievable through the introduction of the AFM [atomic force microscope].

She [Contera] writes:

“Physics urges us to consider life as a whole emergent from the greater whole – emanating from the same rules that govern the entire cosmos.”

I leave you with another bold declaration from Sonia about the good that the merging of the sciences has offered and, on behalf of everyone at NuNano, would like to wish you all a very Merry Christmas and Happy New Year – see you in 2020!

“As physics, engineering, computer science and materials science merge with biology, they are actually helping to reconnect science and technology with the deep questions that humans have asked themselves from the beginning of civilization: What is life? What does it mean to be human when we can manipulate and even exploit our own biology?”

Sonia Contera is professor of biological physics in the Department of Physics at the University of Oxford. She is a leading pioneer in the field of nanotechnology.

Megson-Smith certainly seems enthused about the book, and she reminded me of how interested I was in STMs and AFMs when I first started investigating and writing about nanotechnology. Judging by the review (I haven’t seen the book myself), it could be a good introduction.

My introductory book was the 2009 Soft Machines: Nanotechnology and Life by Richard Jones, a professor of physics and astronomy at the University of Sheffield. I have great affection for the book and, if memory serves, it hasn’t really aged. One more thing, Jones can be very funny. It’s not many people who can successfully combine humour and nanotechnology.

You can find Megson-Smith’s original posting here.

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong but I think this is the first ‘memristor’ type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured on this blog. In other words, it’s not, technically speaking, a memristor, but it does have the same properties, so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.
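To make the synapse idea concrete, here is a toy Python sketch of a phase-change optical synapse. This is my own illustration, not the researchers’ code: the transmission values, the linear map between phases, and the pulse model are all assumptions chosen to show the principle (a cell’s crystallinity sets how much light it transmits, and laser pulses shift that crystallinity).

```python
# Toy model (not from the paper): an optical synapse implemented as a
# phase-change cell. Transmission depends on crystallinity; laser pulses
# change the phase and hence the "weight" of the synapse.

class PhaseChangeSynapse:
    def __init__(self, crystallinity=1.0):
        # 1.0 = fully crystalline, 0.0 = fully amorphous.
        self.crystallinity = crystallinity

    @property
    def transmission(self):
        # Illustrative linear interpolation between two assumed
        # transmission levels for the two phases.
        t_crystalline, t_amorphous = 0.2, 0.9
        return (t_crystalline * self.crystallinity
                + t_amorphous * (1.0 - self.crystallinity))

    def write_pulse(self, energy):
        """A strong laser pulse melts and quenches the cell, amorphizing it."""
        self.crystallinity = max(0.0, self.crystallinity - energy)

    def anneal_pulse(self, energy):
        """A weaker, longer pulse recrystallizes the cell."""
        self.crystallinity = min(1.0, self.crystallinity + energy)

    def propagate(self, input_power):
        """Optical power emerging after the light passes through the cell."""
        return input_power * self.transmission


syn = PhaseChangeSynapse()
before = syn.propagate(1.0)  # crystalline cell: low transmission (0.2)
syn.write_pulse(0.5)         # partially amorphize the cell
after = syn.propagate(1.0)   # transmission, i.e. the weight, has risen
```

The point of the sketch is only that a single optical element can store a continuously adjustable weight and apply it to a passing signal in the same place, which is what makes the comparison to a synapse apt.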

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
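As a rough illustration of the supervised case, one can think of the synaptic transmissions as weights nudged toward a target pulse pattern. This is my own toy sketch, not the paper’s algorithm; the update rule and numbers are assumptions made for clarity.

```python
import random

# Toy sketch: supervised learning of a target light-pulse pattern.
# The "synapses" are plain weights standing in for phase-change
# transmissions; the update rule is illustrative only.

random.seed(0)

target = [1, 0, 1, 1]                        # pulse pattern being sought
weights = [random.random() for _ in target]  # initial random transmissions

def respond(pattern):
    # Output "neuron" activity: weighted sum of incoming pulse amplitudes.
    return sum(w * x for w, x in zip(weights, pattern))

# Supervised training: move each weight toward the target amplitude on its
# channel, strengthening channels that should carry pulses.
for _ in range(50):
    for i, x in enumerate(target):
        weights[i] += 0.1 * (x - weights[i])

match = respond(target)           # strong response to the trained pattern
mismatch = respond([0, 1, 0, 0])  # weak response to a different pattern
```

After training, the network responds strongly only when the sought pattern arrives, which is the behaviour the researchers demonstrated with light pulses on the chip.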

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019). DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the EC’s contribution, the list of partners, and more, there is the Fun-COMP webpage on fabiodisconzi.com.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation‘ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
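The three SAT levels map naturally onto a layered data structure. Here is a hypothetical Python sketch of one; the class and field names are my own inventions for illustration, not ARL’s model or API.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative sketch of the three SAT (Situation awareness-based Agent
# Transparency) levels described above. All names are assumptions.

@dataclass
class SATLevel1:
    """Level 1: the agent's current state, goals, intentions, and plans."""
    current_state: str
    goals: List[str]
    plan: List[str]

@dataclass
class SATLevel2:
    """Level 2: the reasoning process plus the constraints/affordances
    the agent considered when planning its actions."""
    reasoning: str
    constraints: List[str]

@dataclass
class SATLevel3:
    """Level 3: projected future states, predicted consequences, and the
    uncertainty attached to those projections."""
    projected_outcome: str
    likelihood_of_success: float
    uncertainty: float

@dataclass
class AgentTransparencyReport:
    level1: SATLevel1
    level2: SATLevel2
    # Higher-transparency experimental conditions add the level-3 projection.
    level3: Optional[SATLevel3] = None
```

Modelling the levels as nested, optional layers mirrors how the studies varied transparency: a low-transparency condition exposes only level 1, while richer conditions attach the reasoning and projection layers.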

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated – accepting the agent’s plan when it is correct and rejecting it when it is incorrect – when the agent had a higher level of transparency.
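“Calibrated” trust here simply means agreeing with the agent exactly when it is right. One hypothetical way to score that from trial data, as a fraction of trials where the operator’s decision matched the agent’s actual correctness, is sketched below; this is my own illustration, not the metric used in the ARL studies.

```python
def trust_calibration(trials):
    """Fraction of trials where the operator accepted a correct agent plan
    or rejected an incorrect one.

    `trials` is a list of (agent_was_correct, operator_accepted) booleans.
    Returns 0.0 for an empty list.
    """
    if not trials:
        return 0.0
    calibrated = sum(1 for correct, accepted in trials if correct == accepted)
    return calibrated / len(trials)


# Example: the operator accepts the agent's plan whenever it is right and
# rejects it the one time it is wrong, so trust is perfectly calibrated.
trials = [(True, True), (True, True), (False, False), (True, True)]
score = trust_calibration(trials)  # 1.0
```

A score near 1.0 means well-calibrated trust; blind acceptance or blanket rejection both drag the score down whenever the agent’s correctness varies.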

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without increase of perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design the user interfaces, which can include visual, auditory, and other modalities, that can support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: 10.1080/1463922X.2017.1315750

This paper is behind a paywall.

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add any new experts if the authors planned to have their paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK, and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or at least acknowledge (e.g., ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts, led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, captures the perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and the technology could lead to new socioeconomic classes, as only those who can pay for such care can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a project in New Zealand, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.
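Yong’s brown-and-white rat arithmetic can be sketched in code. The following is a minimal toy simulation I’ve put together for illustration only (the allele labels, population size and the assumption of perfect conversion are all mine; real drives are leakier than this):

```python
import random

def simulate(generations, pop_size, drive=False, seed=1):
    """Track the frequency of the 'B' (brown) allele over time.

    Each rat carries two alleles, 'B' (brown) or 'w' (white).
    Under ordinary Mendelian inheritance each parent passes one
    allele at random (the 50-50 coin toss). With a gene drive, any
    rat that inherits a single 'B' converts its other allele too,
    so it can only ever pass 'B' on.
    """
    rng = random.Random(seed)
    # Start as in the article: half brown-furred (B/B), half white (w/w)
    pop = [("B", "B")] * (pop_size // 2) + [("w", "w")] * (pop_size // 2)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            mom, dad = rng.choice(pop), rng.choice(pop)
            child = (rng.choice(mom), rng.choice(dad))
            if drive and "B" in child:
                child = ("B", "B")  # the drive copies itself over 'w'
            new_pop.append(child)
        pop = new_pop
    alleles = [a for rat in pop for a in rat]
    return alleles.count("B") / len(alleles)

print(simulate(10, 1000))              # Mendelian: hovers near 0.5
print(simulate(10, 1000, drive=True))  # gene drive: races toward 1.0
```

Run with the drive switched off, the brown allele drifts around its starting 50 percent; switched on, it sweeps to near fixation within a handful of generations, which is exactly the “coin toss into a guarantee” behaviour described above.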

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety  as it provides some insight into gene drives and, with some imagination on the reader’s part, the potential for the other technologies discussed in the report.

One last comment: I notice that Eric Drexler is cited as one of the report’s authors. He’s familiar to me as K. Eric Drexler, the author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).

Revisiting the scientific past for new breakthroughs

A March 2, 2017 article on phys.org features a thought-provoking (and, for some of us, confirming) take on scientific progress  (Note: Links have been removed),

The idea that science isn’t a process of constant progress might make some modern scientists feel a bit twitchy. Surely we know more now than we did 100 years ago? We’ve sequenced the genome, explored space and considerably lengthened the average human lifespan. We’ve invented aircraft, computers and nuclear energy. We’ve developed theories of relativity and quantum mechanics to explain how the universe works.

However, treating the history of science as a linear story of progression doesn’t reflect wholly how ideas emerge and are adapted, forgotten, rediscovered or ignored. While we are happy with the notion that the arts can return to old ideas, for example in neoclassicism, this idea is not commonly recognised in science. Is this constraint really present in principle? Or is it more a comment on received practice or, worse, on the general ignorance of the scientific community of its own intellectual history?

For one thing, not all lines of scientific enquiry are pursued to conclusion. For example, a few years ago, historian of science Hasok Chang undertook a careful examination of notebooks from scientists working in the 19th century. He unearthed notes from experiments in electrochemistry whose results received no explanation at the time. After repeating the experiments himself, Chang showed the results still don’t have a full explanation today. These research programmes had not been completed, simply put to one side and forgotten.

A March 1, 2017 essay by Giles Gasper (Durham University), Hannah Smithson (University of Oxford) and Tom Mcleish (Durham University) for The Conversation, which originated the article, expands on the theme (Note: Links have been removed),

… looping back into forgotten scientific history might also provide an alternative, regenerative way of thinking that doesn’t rely on what has come immediately before it.

Collaborating with an international team of colleagues, we have taken this hypothesis further by bringing scientists into close contact with scientific treatises from the early 13th century. The treatises were composed by the English polymath Robert Grosseteste – who later became Bishop of Lincoln – between 1195 and 1230. They cover a wide range of topics we would recognise as key to modern physics, including sound, light, colour, comets, the planets, the origin of the cosmos and more.

We have worked with paleographers (handwriting experts) and Latinists to decipher Grosseteste’s manuscripts, and with philosophers, theologians, historians and scientists to provide intellectual interpretation and context to his work. As a result, we’ve discovered that scientific and mathematical minds today still resonate with Grosseteste’s deeply physical and structured thinking.

Our first intuition and hope was that the scientists might bring a new analytic perspective to these very technical texts. And so it proved: the deep mathematical structure of a small treatise on colour, the De colore, was shown to describe what we would now call a three-dimensional abstract co-ordinate space for colour.

But more was true. During the examination of each treatise, at some point one of the group would say: “Did anyone ever try doing …?” or “What would happen if we followed through with this calculation, supposing he meant …”. Responding to this thinker from eight centuries ago has, to our delight and surprise, inspired new scientific work of a rather fresh cut. It isn’t connected in a linear way to current research programmes, but sheds light on them from new directions.

I encourage you to read the essay in its entirety.

Brown recluse spider, one of the world’s most venomous spiders, shows off unique spinning technique

Caption: American Brown Recluse Spider is pictured. Credit: Oxford University

According to scientists from Oxford University this deadly spider could teach us a thing or two about strength. From a Feb. 15, 2017 news item on ScienceDaily,

Brown recluse spiders use a unique micro looping technique to make their threads stronger than that of any other spider, a newly published UK-US collaboration has discovered.

One of the most feared and venomous arachnids in the world, the American brown recluse spider has long been known for its signature necro-toxic venom, as well as its unusual silk. Now, new research offers an explanation for how the spider is able to make its silk uncommonly strong.

Researchers suggest that if applied to synthetic materials, the technique could inspire scientific developments and improve impact absorbing structures used in space travel.

The study, published in the journal Materials Horizons, was produced by scientists from Oxford University’s Department of Zoology, together with a team from the Applied Science Department at Virginia’s College of William & Mary. Their surveillance of the brown recluse spider’s spinning behaviour shows how, and to what extent, the spider manages to strengthen the silk it makes.

A Feb. 15, 2017 University of Oxford press release, which originated the news item,  provides more detail about the research,

From observing the arachnid, the team discovered that, unlike the round threads produced by other spiders, recluse silk is thin and flat. This structural difference is key to the thread’s strength, providing the flexibility needed to prevent premature breakage and to withstand the knots created during spinning, which give each strand additional strength.

Professor Hannes Schniepp from William & Mary explains: “The theory of knots adding strength is well proven. But adding loops to synthetic filaments always seems to lead to premature fibre failure. Observation of the recluse spider provided the breakthrough solution; unlike all spiders its silk is not round, but a thin, nano-scale flat ribbon. The ribbon shape adds the flexibility needed to prevent premature failure, so that all the microloops can provide additional strength to the strand.”

By using computer simulations to apply this technique to synthetic fibres, the team were able to test and prove that adding even a single loop significantly enhances the strength of the material.

William & Mary PhD student Sean Koebley adds: “We were able to prove that adding even a single loop significantly enhances the toughness of a simple synthetic sticky tape. Our observations open the door to new fibre technology inspired by the brown recluse.”

Speaking on how the recluse’s technique could be applied more broadly in the future, Professor Fritz Vollrath, of the Department of Zoology at Oxford University, expands: “Computer simulations demonstrate that fibres with many loops would be much, much tougher than those without loops. This right away suggests possible applications. For example, carbon filaments could be looped to make them less brittle, and thus allow their use in novel impact-absorbing structures. One example would be spider-like webs of carbon filaments floating in outer space, to capture the drifting space debris that endangers astronauts’ lives and satellite integrity.”
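The toughening mechanism the researchers describe, loops that peel open sacrificially before the ribbon itself snaps, can be captured in a deliberately crude energy budget. This toy model and its numbers are purely illustrative (my own sketch, not the team’s actual simulation):

```python
def work_to_failure(strand_break_energy, loop_open_energy, n_loops):
    """Toy sacrificial-loop model of toughness.

    Toughness here is the total work absorbed before failure: every
    micro-loop must peel open (absorbing loop_open_energy) before the
    strand itself can snap, so each added loop raises the total.
    Units and values are arbitrary, for illustration only.
    """
    return n_loops * loop_open_energy + strand_break_energy

plain = work_to_failure(1.0, 0.3, 0)   # bare strand
looped = work_to_failure(1.0, 0.3, 1)  # a single micro-loop added
print(looped / plain)  # even one loop raises the energy absorbed
```

The point of the sketch is the qualitative one made in the quotes above: because each loop is an extra energy sink rather than a weak point, toughness grows with loop count, provided (as the flat ribbon geometry ensures) the loops don’t trigger premature failure.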

Here’s a link to and a citation for the paper,

Toughness-enhancing metastructure in the recluse spider’s looped ribbon silk by S. R. Koebley, F. Vollrath, and H. C. Schniepp. Mater. Horiz., 2017, Advance Article. DOI: 10.1039/C6MH00473C. First published online 15 Feb 2017

This paper is open access although you may need to register with the Royal Society of Chemistry’s publishing site to get access.

Developing cortical implants for future speech neural prostheses

I’m guessing that graphene will feature in these proposed cortical implants since the project leader is a member of the Graphene Flagship’s Biomedical Technologies Work Package. (For those who don’t know, the Graphene Flagship is one of two major funding initiatives, each receiving 1B euros over 10 years from the European Commission as part of its FET [Future and Emerging Technologies] Initiative.) A Jan. 12, 2017 news item on Nanowerk announces the new project (Note: A link has been removed),

BrainCom is a FET Proactive project, funded by the European Commission with 8.35M€ [8.35 million euros] for the next 5 years, holding its Kick-off meeting on January 12-13 at ICN2 (Catalan Institute of Nanoscience and Nanotechnology) and the UAB [Universitat Autònoma de Barcelona]. This project, coordinated by ICREA [Catalan Institution for Research and Advanced Studies] Research Prof. Jose A. Garrido from ICN2, will permit significant advances in understanding of cortical speech networks and the development of speech rehabilitation solutions using innovative brain-computer interfaces.

A Jan. 12, 2017 ICN2 press release, which originated the news item, expands on the theme (it is a bit repetitive),

More than 5 million people worldwide suffer annually from aphasia, an extremely debilitating condition in which patients lose the ability to comprehend and formulate language after brain damage or in the course of neurodegenerative disorders. Brain-computer interfaces (BCIs), enabled by forefront technologies and materials, are a promising approach to treat patients with aphasia. The principle of BCIs is to collect neural activity at its source and decode it by means of electrodes implanted directly in the brain. However, neurorehabilitation of higher cognitive functions such as language raises serious issues. The current challenge is to design neural implants that cover sufficiently large areas of the brain to allow for reliable decoding of detailed neuronal activity distributed in various brain regions that are key for language processing.

BrainCom is a FET Proactive project funded by the European Commission with 8.35M€ for the next 5 years. This interdisciplinary initiative involves 10 partners including technologists, engineers, biologists, clinicians, and ethics experts. They aim to develop a new generation of neuroprosthetic cortical devices enabling large-scale recordings and stimulation of cortical activity to study high-level cognitive functions. Ultimately, the BrainCom project will seed a novel line of knowledge and technologies aimed at developing the future generation of speech neural prostheses. It will cover different levels of the value chain: from technology and engineering to basic and language neuroscience, and from preclinical research in animals to clinical studies in humans.

This recently funded project is coordinated by ICREA Prof. Jose A. Garrido, Group Leader of the Advanced Electronic Materials and Devices Group at the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology – ICN2) and deputy leader of the Biomedical Technologies Work Package presented last year in Barcelona by the Graphene Flagship. The BrainCom Kick-Off meeting is held on January 12-13 at ICN2 and the Universitat Autònoma de Barcelona (UAB).

Recent developments show that it is possible to record cortical signals from a small region of the motor cortex and decode them to allow tetraplegic [also known as quadriplegic] people to activate a robotic arm to perform everyday life actions. Brain-computer interfaces have also been successfully used to help tetraplegic patients unable to speak to communicate their thoughts by selecting letters on a computer screen using non-invasive electroencephalographic (EEG) recordings. The performance of such technologies can be dramatically increased using more detailed cortical neural information.

BrainCom project proposes a radically new electrocorticography technology taking advantage of unique mechanical and electrical properties of novel nanomaterials such as graphene, 2D materials and organic semiconductors.  The consortium members will fabricate ultra-flexible cortical and intracortical implants, which will be placed right on the surface of the brain, enabling high density recording and stimulation sites over a large area. This approach will allow the parallel stimulation and decoding of cortical activity with unprecedented spatial and temporal resolution.

These technologies will help to advance the basic understanding of cortical speech networks and to develop rehabilitation solutions to restore speech using innovative brain-computer paradigms. The technology innovations developed in the project will also find applications in the study of other high cognitive functions of the brain such as learning and memory, as well as other clinical applications such as epilepsy monitoring.

The BrainCom project Consortium members are:

  • Catalan Institute of Nanoscience and Nanotechnology (ICN2) – Spain (Coordinator)
  • Institute of Microelectronics of Barcelona (CNM-IMB-CSIC) – Spain
  • University Grenoble Alpes – France
  • ARMINES/ Ecole des Mines de St. Etienne – France
  • Centre Hospitalier Universitaire de Grenoble – France
  • Multichannel Systems – Germany
  • University of Geneva – Switzerland
  • University of Oxford – United Kingdom
  • Ludwig-Maximilians-Universität München – Germany
  • Wavestone – Luxembourg

There doesn’t seem to be a website for the project but there is a BrainCom webpage on the European Commission’s CORDIS (Community Research and Development Information Service) website.

Epic Scottish poetry and social network science

It’s been a while since I’ve run a social network story here and this research into a 250-year-old controversy piqued my interest anew. From an Oct. 20, 2016 Coventry University (UK) press release (also on EurekAlert; Note: A link has been removed),

The social networks behind one of the most famous literary controversies of all time have been uncovered using modern networks science.

Since James Macpherson published what he claimed were translations of ancient Scottish Gaelic poetry by a third-century bard named Ossian, scholars have questioned the authenticity of the works and whether they were misappropriated from Irish mythology or, as heralded at the time, authored by a Scottish equivalent to Homer.

Now, in a joint study by Coventry University, the National University of Ireland, Galway and the University of Oxford, published today in the journal Advances in Complex Systems, researchers have revealed the structures of the social networks underlying Ossian’s works and their similarities to Irish mythology.

The researchers mapped the characters at the heart of the works and the relationships between them to compare the social networks found in the Scottish epics with classical Greek literature and Irish mythology.

The study revealed that the networks in the Scottish poems bore no resemblance to epics by Homer, but strongly resembled those in mythological stories from Ireland.

The Ossianic poems are considered to be some of the most important literary works ever to have emerged from Britain or Ireland, given their influence over the Romantic period in literature and the arts. Figures from Brahms to Wordsworth reacted enthusiastically; Napoleon took a copy on his military campaigns and US President Thomas Jefferson believed that Ossian was the greatest poet to have ever existed.

The poems launched the romantic portrayal of the Scottish Highlands which persists, in many forms, to the present day and inspired Romantic nationalism all across Europe.

Professor Ralph Kenna, a statistical physicist based at Coventry University, said:

“By working together, it shows how science can open up new avenues of research in the humanities. The opposite also applies, as social structures discovered in Ossian inspire new questions in mathematics.”

Dr Justin Tonra, a digital humanities expert from the National University of Ireland, Galway said:

“From a humanities point of view, while it cannot fully resolve the debate about Ossian, this scientific analysis does reveal an insightful statistical picture: close similarity to the Irish texts which Macpherson explicitly rejected, and distance from the Greek sources which he sought to emulate.”

A statistical physicist, eh? I find that specialty quite an unexpected addition to the team, one that stretches my ideas about social networks in new directions.

Getting back to the research, the scientists have supplied this image to illustrate their work,

Caption: In the social network underlying the Ossianic epic, the 325 nodes represent characters appearing in the narratives and the 748 links represent interactions between them. Credit: Coventry University
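The caption’s numbers (325 character nodes, 748 interaction links) come from exactly this kind of tally. Here is a minimal sketch of how such a character network is summarized; the interaction list and the Ossianic character names in it are purely illustrative, not the study’s data:

```python
from collections import defaultdict

def network_stats(edges):
    """Basic structural statistics of a character network.

    edges: list of (character_a, character_b) interaction pairs.
    Returns node count, link count and mean degree: the kinds of
    summary statistics used to compare epic narratives.
    """
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)                                  # nodes
    m = sum(len(v) for v in neighbours.values()) // 2    # links
    mean_degree = 2 * m / n if n else 0.0
    return n, m, mean_degree

# Toy interaction list (illustrative only)
edges = [("Fingal", "Ossian"), ("Ossian", "Oscar"),
         ("Fingal", "Oscar"), ("Oscar", "Malvina")]
print(network_stats(edges))  # → (4, 4, 2.0)
```

Computed over whole epics, statistics like these (along with degree distributions and clustering) are what let the researchers say the Ossianic networks look Irish rather than Homeric.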

Here’s a link to and a citation for the paper,

A networks-science investigation into the epic poems of Ossian by Joseph Yose, Ralph Kenna, Pádraig MacCarron, Thierry Platini, Justin Tonra. Advances in Complex Systems. DOI: http://dx.doi.org/10.1142/S0219525916500089 Published: 21 October 2016

This paper is behind a paywall.

Help find some siblings for the Higgs boson

This is the Higgs Hunters’ (or HiggsHunters) second call for volunteers; the first was described in my Dec. 2, 2014 posting. Some 18 months after the first call, over 20,000 volunteers have been viewing images from the Large Hadron Collider in a bid to assist physicists at CERN (European Organization for Nuclear Research).

These images show how particles appear in the ATLAS detector. The lines show the paths of charged particles travelling away from a collision at the centre. Volunteers are looking for tracks appearing ‘out of thin air’ away from the centre. (Image: CERN)

A July 6, 2016 news item on phys.org announces the call for more volunteers (Note: Links have been removed),

A citizen science project, called HiggsHunters gives everyone the chance to help search for the Higgs boson’s relatives.

Volunteers are searching through thousands of images from the ATLAS experiment on the HiggsHunters.org website, which makes use of the Zooniverse  citizen science platform.

They are looking for ‘baby Higgs bosons’, which leave a characteristic trace in the ATLAS detector.

This is the first time that images from the Large Hadron Collider have been examined on such a scale – 60,000 of the most interesting events were selected from collisions recorded throughout 2012 – the year of the Higgs boson discovery. About 20,000 of those collisions have been scanned so far, revealing interesting features.

A July 4, 2016 posting by Harriet Kim Jarlett on Will Kalderon’s CERN blog, which originated the news item, provides more details,

“There are tasks – even in this high-tech world – where the human eye and the human brain simply win out,” says Professor Alan Barr of the University of Oxford, who is leading the project.

Over the past two years, more than twenty thousand amateur scientists, from 179 countries, have been scouring images of LHC collisions,  looking for as-yet unobserved particles.

Dr Will Kalderon, who has been working on the project says “We’ve been astounded both by the number of responses and ability of people to do this so well, I’m really excited to see what we might find”.

July 4, 2016 was the fourth anniversary of the confirmation that the Higgs boson almost certainly exists (from the CERN blog),

Today, July 4 2016, is the fourth birthday of the Higgs boson discovery. Here, a toy Higgs is sat on top of a birthday cake decorated with a HiggsHunter event display. On the blackboard behind is the process people are looking for – Higgs-strahlung. (Image: Will Kalderon/CERN)

You can find the HiggsHunters website here. Should you be interested in other citizen science projects, you can find the Zooniverse website here.

Weather@Home citizen science project

It’s been a while since I’ve featured a citizen science story here. So, here’s more about Weather@Home from a June 9, 2016 Oregon State University news release on EurekAlert,

Tens of thousands of “citizen scientists” have volunteered some of their personal computer time to help researchers create one of the most detailed, high resolution simulations of weather ever done in the Western United States.

The data, obtained through a project called Weather@Home, is an important step forward for scientifically sound, societally relevant climate science, researchers say in an article published in the Bulletin of the American Meteorological Society. The analysis covered the years 1960-2009 and future projections for 2030-49.

Caption: The elevation of areas of the American West that were part of recent climate modeling as part of the Weather@Home Program. Credit: Graphic courtesy of Oregon State University

The news release expands on the theme,

“When you have 30,000 modern laptop computers at work, you can transcend even what a supercomputer can do,” said Philip Mote, professor and director of the Oregon Climate Change Research Institute at Oregon State University, and lead author on the study.

“With this analysis we have 140,000 one-year simulations that show all of the impacts that mountains, valleys, coasts and other aspects of terrain can have on local weather,” he said. “We can drill into local areas, ask more specific questions about management implications, and understand the physical and biological climate changes in the West in a way never before possible.”

The sheer number of simulations tends to improve accuracy and reduce the uncertainty associated with this type of computer analysis, experts say. The high resolution also makes it possible to better consider the multiple climate forces at work in the West – coastal breezes, fog, cold air in valleys, sunlight being reflected off snow – and vegetation that ranges from wet, coastal rain forests to ice-covered mountains and arid scrublands within a comparatively short distance.

Although more accurate than previous simulations, improvements are still necessary, researchers say. Weather@Home tends to be too cool in a few mountain ranges and too warm in some arid plains, such as the Snake River plain and Columbia plateau, especially in summer. While other models have similar errors, Weather@Home offers the unique capability to improve simulations by improving the physics in the model.

Ultimately, this approach will help improve future predictions of regional climate. The social awareness of these issues has “matured to the point that numerous public agencies, businesses and investors are asking detailed questions about the future impacts of climate change,” the researchers wrote in their report.

This has led to a skyrocketing demand for detailed answers to specific questions – what’s the risk of a flood in a particular area, what will be future wind speeds as wind farms are developed, how should roads and bridges be built to handle extremely intense rainfall? There will be questions about heat stress on humans, the frequency of droughts, future sea levels and the height of local storm surges.

This type of analysis, and more like it, will help answer some of those questions, researchers say.

New participants in this ongoing research are always welcome, officials said. Anyone interested can go online to “climateprediction.net” and click on “join,” then follow the instructions to download and install BOINC, the program that manages the tasks; create an account; and select a project. Climateprediction.net is one of many projects available on the platform.
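The press release’s claim that sheer numbers of simulations reduce uncertainty can be illustrated with a toy sketch (mine, not the study’s actual method): the error of an ensemble-mean estimate shrinks roughly as one over the square root of the number of independent simulations. All names and numbers below are hypothetical.

```python
import random
import statistics

TRUE_VALUE = 10.0  # hypothetical "true" quantity each simulation estimates

def noisy_simulation(rng):
    """Stand-in for one model run: the true value plus internal variability."""
    return TRUE_VALUE + rng.gauss(0, 2.0)

def mean_abs_error(n_members, trials=200, seed=42):
    """Average error of the ensemble mean over many repeated ensembles."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        ensemble_mean = statistics.fmean(
            noisy_simulation(rng) for _ in range(n_members)
        )
        errors.append(abs(ensemble_mean - TRUE_VALUE))
    return statistics.fmean(errors)

# A 1000-member ensemble lands much closer to the true value, on average,
# than a 10-member one -- the statistical payoff of 140,000 simulations.
small_ensemble_error = mean_abs_error(10)
large_ensemble_error = mean_abs_error(1000)
```

This is only the statistical half of the story; the study’s resolution gains come from the physics in the regional model, not just averaging.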

I checked out the About page on the climateprediction.net website, which hosts the Weather@Home project,

Climateprediction.net is a volunteer computing, climate modelling project based at the University of Oxford in the Environmental Change Institute, the Oxford e-Research Centre and Atmospheric, Oceanic and Planetary Physics.

We have a team of 13 climate scientists, computing experts and graduate students working on this project, as well as our partners and collaborators working at other universities, research and non-profit organisations around the world.

What we do

We run climate modelling experiments using the home computers of thousands of volunteers. This allows us to answer important and difficult questions about how climate change is affecting our world now and how it will affect our world in the future.

Climateprediction.net is a not-for-profit project.

Why we need your help

We run hundreds of thousands of state-of-the-art climate models, each very slightly different from the others, but still plausibly representing the real world.

This technique, known as ensemble modelling, requires an enormous amount of computing power.

Climate models are large and resource-intensive to run and it is not possible to run the large number of models we need on supercomputers.

Our solution is to appeal to volunteer computing, which combines the power of thousands of ordinary computers, each of which tackles one small part of the larger modelling task.

By using your computers, we can improve our understanding of, and confidence in, climate change predictions more than would ever be possible using the supercomputers currently available to scientists.

Please join our project and help us model the climate.

Our Experiments

When climateprediction.net first started, we were running very large, global models to answer questions about how climate change will pan out in the 21st century.

In addition, we are now running a number of smaller, regional experiments, under the umbrella of weather@home.

BOINC

Climateprediction.net uses a volunteer computing platform called BOINC (the Berkeley Open Infrastructure for Network Computing).

BOINC was originally developed to support SETI@home, which uses people’s home computers to analyse radio signals, searching for signs of extra-terrestrial intelligence.

BOINC is now used on over 70 projects covering a wide range of scientific areas, including mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics.
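The volunteer-computing pattern described above can be sketched in a few lines: a large modelling job is split into small, independent work units, and each volunteer machine runs one unit and returns its result. This is a hypothetical illustration of the idea only; the function names and numbers are mine, not climateprediction.net’s real task format.

```python
from concurrent.futures import ThreadPoolExecutor

def run_work_unit(unit):
    """Stand-in for one small simulation a volunteer's computer would run."""
    year, perturbation = unit
    simulated_temp = 14.0 + perturbation  # placeholder model output, deg C
    return (year, simulated_temp)

# One work unit per simulated year, each with a slightly perturbed parameter
work_units = [(year, 0.01 * i) for i, year in enumerate(range(1960, 2010))]

# The thread pool plays the role of many volunteer machines working in
# parallel; each unit is independent, so they can run in any order.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_work_unit, work_units))
```

The key property making this feasible is that the units share no state, so thousands of untrusted home computers can each chew on one small piece and simply mail results back to a central server.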

Getting back to Oregon State University and its regional project research, here’s a link to and a citation for the paper,

Superensemble Regional Climate Modeling for the Western United States by Philip W. Mote, Myles R. Allen, Richard G. Jones, Sihan Li, Roberto Mera, David E. Rupp, Ahmed Salahuddin, and Dean Vickers. Bulletin of the American Meteorological Society, February 2016, Vol. 97, No. 2. DOI: http://dx.doi.org/10.1175/BAMS-D-14-00090.1 Published online 14 March 2016.

This is an open access paper.