Tag Archives: UK

Paving the way for hardware neural networks?

I’m glad Imperial College London (ICL; UK) translated this research into something I can, more or less, understand because the research team’s title for their paper would have left me ‘confuzzled’. Thank you for this November 20, 2017 ICL press release (also on EurekAlert) by Hayley Dunning,

Researchers have shown how to write any magnetic pattern desired onto nanowires, which could help computers mimic how the brain processes information.

Much current computer hardware, such as hard drives, uses magnetic memory devices. These rely on magnetic states – the direction microscopic magnets are pointing – to encode and read information.

Exotic magnetic states – such as a point where three south poles meet – represent complex systems. These may act in a similar way to many complex systems found in nature, such as the way our brains process information.

Computing systems that are designed to process information in similar ways to our brains are known as ‘neural networks’. There are already powerful software-based neural networks – for example one recently beat the human champion at the game ‘Go’ – but their efficiency is limited as they run on conventional computer hardware.

Now, researchers from Imperial College London have devised a method for writing magnetic information in any pattern desired, using a very small magnetic probe called a magnetic force microscope.

With this new writing method, arrays of magnetic nanowires may be able to function as hardware neural networks – potentially more powerful and efficient than software-based approaches.

The team, from the Departments of Physics and Materials at Imperial, demonstrated their system by writing patterns that have never been seen before. They published their results today [November 20, 2017] in Nature Nanotechnology.

Interlocking hexagon patterns with complex magnetisation

‘Hexagonal artificial spin ice ground state’ – a pattern never demonstrated before. Coloured arrows show north or south polarisation

Dr Jack Gartside, first author from the Department of Physics, said: “With this new writing method, we open up research into ‘training’ these magnetic nanowires to solve useful problems. If successful, this will bring hardware neural networks a step closer to reality.”

As well as applications in computing, the method could be used to study fundamental aspects of complex systems, by creating magnetic states that are far from optimal (such as three south poles together) and seeing how the system responds.
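For anyone who wants a more concrete feel for why a vertex where three like poles meet is ‘far from optimal’, here’s a tiny sketch of my own in Python. It is not the team’s model; it just applies the standard simplified ‘magnetic charge’ picture of artificial spin ice, in which three nanowire moments meet at each vertex and the energy grows with the net charge sitting there,

```python
# Toy illustration (not the ICL team's model): the energy of a single
# vertex in kagome/honeycomb-style artificial spin ice, where three
# nanowire moments meet.  Each moment points either "in" toward the
# vertex (+1) or "out" of it (-1).  The ice rule favours the low-charge
# configurations (2-in-1-out or 1-in-2-out); 3-in or 3-out, i.e. "three
# like poles meeting", carries the largest magnetic charge and the
# highest energy in this simplified picture.

from itertools import product

def vertex_charge(moments):
    """moments: tuple of +1 (pointing in) / -1 (pointing out)."""
    return sum(moments)

def vertex_energy(moments):
    # Dumbbell-style approximation: energy grows with the square of the
    # net magnetic charge at the vertex.
    return vertex_charge(moments) ** 2

for config in product((+1, -1), repeat=3):
    q = vertex_charge(config)
    label = "ice-rule (low energy)" if abs(q) == 1 else "monopole-like (high energy)"
    print(f"{config}  charge={q:+d}  energy={vertex_energy(config)}  {label}")
```

As I read the press release, the point of the new writing method is that experimenters can now place those high-energy vertices wherever they like and then watch how the array responds.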

Here’s a link to and a citation for the paper,

Realization of ground state in artificial kagome spin ice via topological defect-driven magnetic writing by Jack C. Gartside, Daan M. Arroo, David M. Burn, Victoria L. Bemmer, Andy Moskalenko, Lesley F. Cohen & Will R. Branford. Nature Nanotechnology (2017) doi:10.1038/s41565-017-0002-1 Published online: 20 November 2017

This paper is behind a paywall.

*Odd spacing eliminated and a properly embedded video added on February 6, 2018 at 18:16 hours PT.

Crackpot or visionary? Teaching seven-year-olds about intellectual property

It’s been a while since I’ve devoted a posting to intellectual property issues and my focus is usually on science/technology and how intellectual property issues relate to those fields. As a writer, I support a more relaxed approach to copyright and patent law and, in particular, I want to see the continuation of ‘fair use’ as it’s called in the US and ‘fair dealing’ as it’s called in Canada. I support the principle of making money from your work so you can continue to contribute creatively. But, the application of intellectual property law seems to have been turned into a weapon against creativity of all sorts. (At the end of this post you’ll find links to three typical posts from the many I have written on this topic.)

I do take the point being made in the following video (but for seven-year-olds and up!!!) about trademarks/logos and trademark infringement from the UK’s Intellectual Property Office,

Here’s the description from YouTube’s Logo Mania webpage,

Published on Jan 16, 2018

Brian Wheeler’s January 17, 2018 article for BBC (British Broadcasting Corporation) online news on UK Politics sheds a bit of light on this ‘campaign’ (Note: A link has been removed),

A campaign to teach children about copyright infringement on the internet is employing cartoons and puns on pop stars’ names to get the message across.

Even its makers admit it is a “dry” and “niche” subject for a cartoon aimed at seven-year-olds.

But the Intellectual Property Office adds learning to “respect” copyrights and trademarks is a “key life skill”.

And it is hoping the adventures of Nancy and the Meerkats can finally make intellectual property “fun”.

The series, which began life five years ago on Fun Kids Radio, was re-launched this week with the aim of getting its message into primary schools.

The Intellectual Property Office is leading the government’s efforts to crack down on internet piracy and protect the revenues of Britain’s creative industries.

The government agency is spending £20,000 of its own money on the latest Nancy campaign, which is part-funded by the UK music industry.

Catherine Davies, head of the IPO’s education outreach department, which already produces teaching materials for GCSE students, admitted IP was a “complex subject” for small children and something of a challenge to make accessible and entertaining.

Wheeler’s article is definitely worth reading in its entirety. In fact, I was so intrigued I chased down the government press release (from the www.gov.uk webpage),

Nancy and the Meerkats logo

Nancy and the Meerkats, with the help of Big Joe, present a new radio series to engage pupils with the concept of intellectual property (IP). Aimed at primary education, the resource guides pupils through the process of setting up a band and recording and releasing a song, which is promoted and performed live on tour.

Building on the success of the previous two series, Nancy and the Meerkats consists of a new radio series, short videos, comic book, lesson plans and competition. The supporting teaching resource also includes themed activities and engaging lesson plans. Together, these support and develop pupils’ understanding of copyright, trade marks and the importance of respecting IP.

Curriculum links are provided for England, Northern Ireland, Scotland and Wales.

The series will launch on Monday 15th January [2018] at 5pm on Fun Kids Radio

Along with ‘Logo Mania’, you can find such gems as ‘Track Attack’ concerning song lyrics and, presumably, copyright issues, ‘The Hum Bone’ concerning patents, and ‘Pirates on the Internet’ about illegal downloading on the Fun Kids Radio website. Previous seasons have included ‘Are forks just for eating with?’, ‘Is a kaleidoscope useful?’, ‘Rubber Bands’, ‘Cornish Pasties’, and more. It seems Fun Kids Radio has moved from its focus on the types of questions and topics that might interest children to topics of interest for the music industry and the UK’s Intellectual Property Office. At a guess, those groups might be maximalists where copyright is concerned.

By the way, for those interested in teaching resources and more, go to http://crackingideas.com/third_party/Nancy+and+the+Meerkats.

Finally, I’m not sure whether to laugh or cry. I do know that I’m curious about how they decided to focus on seven to 11-year-olds. Are children in the UK heavily involved in content piracy? Is there a generation of grade school pop stars about to enter the music market? Where is the data and how did they gather it?

Should anyone be inclined to answer those questions, I look forward to reading your reply in the Comments section.

ETA January 19, 2018 (five minutes later) Oops! Here are the links promised earlier,

October 31, 2011: Patents as weapons and obstacles

June 28, 2012: Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP

March 28, 2013: Intellectual property, innovation, and hindrances

There are many, many more posts. Just click on the category for ‘intellectual property’.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced their hope of being just as successful as their AI brethren, in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia, to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor-General award winning designed building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
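Since ‘quantum annealing’ keeps coming up, here’s a rough illustration of the kind of problem an annealer is aimed at. This is a classical simulated-annealing sketch of my own in Python (not D-Wave’s hardware, software, or API); the only connection to the real thing is the shape of the problem, an Ising-style energy over ±1 spins with biases and couplers, which is how tasks like seating plans get encoded for an annealer,

```python
# Classical simulated-annealing sketch (my own toy; not D-Wave's system
# or API).  Annealers minimize an Ising-style energy
#     E(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j,  s_i in {-1, +1};
# problems such as seating plans or scheduling are encoded into h and J.

import math
import random

random.seed(0)
N = 12
h = [random.uniform(-1, 1) for _ in range(N)]   # per-spin biases
J = {(i, j): random.uniform(-1, 1)              # pairwise couplers
     for i in range(N) for j in range(i + 1, N)}

def energy(s):
    e = sum(h[i] * s[i] for i in range(N))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Start hot (accept many uphill moves) and cool slowly.
s = [random.choice((-1, 1)) for _ in range(N)]
best, best_e = s[:], energy(s)
T = 5.0
for step in range(20000):
    i = random.randrange(N)
    old_e = energy(s)
    s[i] *= -1                                  # propose flipping one spin
    new_e = energy(s)
    dE = new_e - old_e
    if dE > 0 and random.random() >= math.exp(-dE / T):
        s[i] *= -1                              # reject the uphill move
    elif new_e < best_e:
        best, best_e = s[:], new_e
    T *= 0.9997                                 # cooling schedule

print("lowest energy found:", round(best_e, 3))
```

A quantum annealer attacks the same kind of energy landscape with superconducting qubits and quantum tunnelling rather than thermal flips; the sketch is only meant to show what ‘finding the optimal seating plan’ looks like as a minimization problem.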

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

H. G. Wells’ Crystal Egg as an immersive multimedia experience in London, UK (January 6 – 13*, 2018)

Here’s the promotional trailer,

Exciting, eh? Tash Reith-Banks writes about this immersive theatre experience in a January 5, 2018 article for The Guardian (Links have been removed),

HG Wells holds a special place in the hearts of many sci-fi enthusiasts and scientists alike. Best known for his novels The War of the Worlds, The Island of Doctor Moreau, and The Invisible Man, Wells’s work is renowned for its prescience and has been revisited and adapted many times, so modern do some of his fears and preoccupations seem.

The Crystal Egg is a short story written in 1897. Set in a grimily familiar depiction of Victorian London, it is a disturbing piece combining an almost Dickensian family-run curiosity shop, a pleasing account of scientific method and altogether more eerie references to portals into other worlds and alien beings.

I asked the show’s producer, Mike Archer, and director, Elif Knight, to talk me through their interest in Wells, the challenges of adaptation and how Victorian sci-fi sits alongside more contemporary fiction, film and television.

What drew you to HG Wells in general and this story in particular?

Mike Archer: I have been a fan of Wells’s work since I was a boy. I encountered The Crystal Egg in 2005 and was drawn to the idea of it extending the mythos of the invasion from Mars in The War of the Worlds.

Recently, I started to feel the story had something to say about things that are happening in the world right now. When we went back to the story, myself and my partner Luisa Guerreiro thought about how we could use The Crystal Egg as an inspiration, and wanted to adapt it into an invasion story for the now.

Elif Knight: I was aware of HG Wells as a very prescient writer of science fiction. The fact that he had predicted many of the inventions and developments of the 20th century – not least manned flight and the internet – demonstrated that his imagination was not just wide-ranging but also accurate. But the question arose: how to show what an extraordinary piece of work The Crystal Egg is? And when the producers offered me the Vaults as a location, I had my answer – to recreate for the audience the atmosphere of the late nineteenth century, so that they could get a sense of how astonishing Wells’s vision was at that time.

Were there any particular challenges in staging the story?

MA: Yes, several. The biggest for me, was how to honour the source material whilst making it engaging on a relatable level and feeling somewhat fresh. The book is very scientific in its vision, but that scientific vision alone doesn’t necessarily translate to a two hour show.

Denizens of the curiosity shop attempt to unlock the strange object’s secrets. Photograph: Morgan Fraser PR

I like sci-fi to feel real. For me the best kind is when you have a world that is recognisable and believable and sci-fi just so happens to be a part of it. I think that is where the semi-immersive nature of part of the show came from. Bringing the audience, themselves aliens in a foreign world, face-to-face with the creation of Wells. This means you have to have a believable world in which to play. We did a lot of research into the Seven Dials area, the context of the story’s creation and began to extrapolate it out.

EK: That was a challenge: to re-create the slums of Victorian London in the Vaults. For example, with a small cast we had to create a busy market day in the London of 1897. But that is where things get interesting; that’s where I have used other media and interesting sonic and filmic devices to bring the area to life.

Here’s more about the show from the Crystal Egg Live! event page,

THE CRYSTAL EGG

An immersive adaptation of H.G. Wells’ mystery novel. Dive in to Victorian London deep underground, using multimedia to enhance your immersive experience.

They Are Watching!

London’s newest immersive, multi-media experience is about to land in a sci-fi extravaganza at The Vaults, Waterloo.

The Crystal Egg Live by H.G. Wells tells the story of Charley Cave. After watching his father dash into the night, Charley is taken in by Uncle Wace, an eccentric old man who, with his dysfunctional family, runs a curiosity shop in London’s Seven Dials Rookery.

When the body of his father is found in the river, Charley inherits the sole possession found with it – a crystal egg. Believing the object to be of value, the family plan to sell it quickly and improve their lives. However, one night Wace makes a chance discovery about this seemingly innocent item, a discovery that threatens to tear the family apart and plunge the world into a greater danger.

Old Lamp Entertainment invites you to The Vaults to uncover the secret for yourself. Fusing multiple art forms including light, sound, video, and performance; this production will bring to life the work of seminal writer H. G. Wells, author of ‘The War of the Worlds’ and ‘The Time Machine’ like never before.

Step back into 19th Century London to discover an object of immense power amongst the dusty relics of Wace’s curio shop, and come face to face with creatures of another world.

What would you do if you knew you were being watched? Watched by someone you were not even aware was there?

www.oldlamp.biz

Preview: 6th January 2018  6.00pm

Performances: 7th- 13th January 2018  4.00pm & 7.30pm daily

Strictly limited run

Press Performance: 7.30pm on 7th January [2018]

TICKETS

£20 – Preview performances

£30 – General Admission

Prices exclude Booking fee

Book online, by phone or in person at V3, 100 Lower Marsh. SE1

To book Step-free or Access tickets, please call 02074019603

ENTRANCE INFORMATION

Entrance to THE CRYSTAL EGG is via our Leake Street entrance

There may be an age limitation; please phone ahead.

For anyone not familiar with The Vaults, there’s a comprehensive description of the site and an explanation of how to get there. Enjoy!

*’Jan. 6 – 15′ corrected to ‘Jan. 6 – 13’ on January 8, 2018.

Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins of gold and other precious metals found in the stars is so fascinating and I’m determined to find a connection.


An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strewed them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc--cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
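If you’d like to put a number on just how dense these objects are, here’s a quick back-of-envelope calculation in Python. The solar mass is a standard constant; the two diameters are the figures quoted above (‘only 10 kilometres across’ in the Warwick release, ‘less than 20 miles wide’ in Spencer’s explainer), and everything else is arithmetic,

```python
# Back-of-envelope density check for the neutron-star figures quoted above.
import math

SOLAR_MASS_KG = 1.989e30       # standard value
WATER_DENSITY = 1.0e3          # kg per cubic metre

def density(mass_kg, diameter_m):
    radius = diameter_m / 2
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return mass_kg / volume

for label, diameter_m in [("10 km across", 10e3), ("20 miles (~32 km) across", 32e3)]:
    rho = density(SOLAR_MASS_KG, diameter_m)
    print(f"{label}: ~{rho:.1e} kg/m^3, roughly {rho / WATER_DENSITY:.0e} times the density of water")
```

Either way you run it, the answer lands in the range where the oft-quoted comparison, a teaspoon of neutron-star material weighing on the order of a billion tonnes, holds up.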

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it; which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits as well as the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article; those who need more information can read it in full. Here’s a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091-17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.
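For readers who want a concrete, if cartoonish, picture of what ‘tunable STDP synaptic plasticity’ means, here’s a toy sketch of my own in Python. It uses the generic textbook STDP window rather than the model from the Nanoscale paper, and the class name, the parameters and the `optical_gain` knob are all my own inventions, standing in very loosely for the idea that light can dial a device’s learning up or down,

```python
# Toy memristive-synapse sketch with a textbook STDP window (my own
# illustration; not the model from the Nanoscale paper).  The synapse's
# conductance g is nudged up when the presynaptic spike arrives before
# the postsynaptic one (potentiation) and down otherwise (depression).
# The optical_gain knob loosely mimics using light to scale how strongly
# the device learns.

import math

class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0):
        self.g, self.g_min, self.g_max = g, g_min, g_max

    def stdp_update(self, dt_ms, optical_gain=1.0,
                    a_plus=0.05, a_minus=0.05, tau_ms=20.0):
        """dt_ms = t_post - t_pre: positive -> potentiate, negative -> depress."""
        if dt_ms >= 0:
            dg = a_plus * math.exp(-dt_ms / tau_ms)
        else:
            dg = -a_minus * math.exp(dt_ms / tau_ms)
        self.g = min(self.g_max, max(self.g_min, self.g + optical_gain * dg))
        return self.g

syn = MemristiveSynapse()
for dt in (5, 5, 5, -5, -5):   # pre-before-post three times, then post-before-pre twice
    print(f"dt={dt:+d} ms -> g={syn.stdp_update(dt, optical_gain=1.0):.3f}")

# 'Turning down the light': with a smaller gain the same spike pair barely
# moves the conductance, more of a short-term change than a long-term one.
print(f"weak optical gain -> g={syn.stdp_update(5, optical_gain=0.1):.3f}")
```

The real devices use the polarization of light acting on the memristor material itself; the sketch is only meant to make the ‘conductance as connection strength, spike timing as teacher’ idea tangible.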

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
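For readers who want to see that weighting process in miniature, the sketch below trains a single artificial neuron with the classic delta rule on a toy question-and-answer set: the weights are nudged up or down to shrink the error between the network's output and the known answer. This is generic textbook code, not the Michigan team's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning task: learn logical OR from question/answer pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # "questions"
y = np.array([0, 1, 1, 1], dtype=float)                      # "answers"

w = rng.normal(scale=0.1, size=2)   # connection weights
b = 0.0                             # bias
lr = 0.5                            # learning rate

for epoch in range(100):
    for xi, yi in zip(X, y):
        pred = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # sigmoid output
        err = yi - pred
        w += lr * err * xi    # weight connections more heavily or lightly
        b += lr * err         # so the error shrinks

# After training, the outputs approach [0, 1, 1, 1].
print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b))), 2))
```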

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
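A minimal software analogue of that division of labour looks like the sketch below: a fixed, randomly connected recurrent "reservoir" turns a time series into a richer set of states, and only the linear readout is fitted (here with ridge regression). This is a generic echo-state-network illustration in Python, not the memristor hardware described in the paper, and all sizes and constants are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n_res = 100                                 # reservoir size (arbitrary)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))   # fixed input weights
W = rng.normal(size=(n_res, n_res))         # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy temporal task: predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, target = np.sin(t[:-1]), np.sin(t[1:])

S = run_reservoir(u)                        # the reservoir itself needs no training
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)

pred = S @ W_out                            # only this readout was trained
print("mean squared error:", np.mean((pred - target) ** 2))
```

Because only the readout weights are learned, training reduces to a single linear solve instead of the long, expensive optimization a full recurrent network would need, which is the efficiency argument Lu's group makes.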

Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Image courtesy Wei Lu.

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.
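To make that encoding more concrete, here is a hedged sketch of how rows of binarized digit pixels might be turned into voltage levels of the kind described, with 0 V for a dark pixel and a little over 1 V for a white one. The threshold and exact amplitude are my own guesses for illustration, not the values used in the experiment.

```python
import numpy as np

DARK_V = 0.0     # assumed voltage for a dark pixel
WHITE_V = 1.1    # assumed "a little over one volt" for a white pixel
THRESHOLD = 0.5  # assumed binarization threshold for grayscale pixels

def digit_rows_to_pulses(image):
    """Convert a 2-D grayscale digit image (values 0..1) into one voltage
    sequence per pixel row, Morse-code style (illustrative encoding only)."""
    binary = np.asarray(image, dtype=float) > THRESHOLD
    return np.where(binary, WHITE_V, DARK_V)   # one row of "pulses" per image row

# Example with a tiny fake 4x4 "digit"
fake_digit = np.array([[0.0, 0.9, 0.8, 0.0],
                       [0.0, 0.9, 0.0, 0.0],
                       [0.0, 0.9, 0.0, 0.0],
                       [0.0, 0.9, 0.9, 0.9]])
print(digit_rows_to_pulses(fake_digit))
```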

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.

Wei Lu, professor of electrical engineering and computer science at the University of Michigan, holds a memristor he created. Photo: Marcin Szczepanski.

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, they have married electoral data with consumer data as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, adds another layer of control to your life, and that layer is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight article/review of an earlier paper covering the topic of nanotechnology and neuroscience).

Interestingly, the European Union (EU) had announced its two €1-billion research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about them), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial and there is high interest in its potential use in biomedical technology, thus partially connecting both EU projects.

In any event, Berger highlights the nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper that updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implicates that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push such initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials Volume 27, Issue 39, October 19, 2017 DOI: 10.1002/adfm.201700489 Version of Record online: 14 AUG 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and in the UK, with a sole researcher based in the US.

Could CRISPR (clustered regularly interspaced short palindromic repeats) be weaponized?

On the occasion of an American team’s recent publication of research where they edited the germline (embryos), I produced a three-part series about CRISPR (clustered regularly interspaced short palindromic repeats), sometimes referred to as CRISPR/Cas9, (links offered at end of this post).

Somewhere in my series, there’s a quote about how CRISPR could be used as a ‘weapon of mass destruction’ and it seems this has been a hot topic for the last year or so as James Revill, research fellow at the University of Sussex, references in his August 31, 2017 essay on theconversation.com (h/t phys.org August 31, 2017 news item), Note: Links have been removed,

The gene editing technique CRISPR has been in the limelight after scientists reported they had used it to safely remove disease in human embryos for the first time. This follows a “CRISPR craze” over the last couple of years, with the number of academic publications on the topic growing steadily.

There are good reasons for the widespread attention to CRISPR. The technique allows scientists to “cut and paste” DNA more easily than in the past. It is being applied to a number of different peaceful areas, ranging from cancer therapies to the control of disease carrying insects.

Some of these applications – such as the engineering of mosquitoes to resist the parasite that causes malaria – effectively involve tinkering with ecosystems. CRISPR has therefore generated a number of ethical and safety concerns. Some also worry that applications being explored by defence organisations that involve “responsible innovation in gene editing” may send worrying signals to other states.

Concerns are also mounting that gene editing could be used in the development of biological weapons. In 2016, Bill Gates remarked that “the next epidemic could originate on the computer screen of a terrorist intent on using genetic engineering to create a synthetic version of the smallpox virus”. More recently, in July 2017, John Sotos, of Intel Health & Life Sciences, stated that gene editing research could “open up the potential for bioweapons of unimaginable destructive potential”.

An annual worldwide threat assessment report of the US intelligence community in February 2016 argued that the broad availability and low cost of the basic ingredients of technologies like CRISPR makes it particularly concerning.

A Feb. 11, 2016 news item on sciencemagazine.org offers a précis of some of the reactions while a February 9, 2016 article by Antonio Regalado for the Massachusetts Institute of Technology’s MIT Technology Review delves into the matter more deeply,

Genome editing is a weapon of mass destruction.

That’s according to James Clapper, [former] U.S. director of national intelligence, who on Tuesday, in the annual worldwide threat assessment report of the U.S. intelligence community, added gene editing to a list of threats posed by “weapons of mass destruction and proliferation.”

Gene editing refers to several novel ways to alter the DNA inside living cells. The most popular method, CRISPR, has been revolutionizing scientific research, leading to novel animals and crops, and is likely to power a new generation of gene treatments for serious diseases (see “Everything You Need to Know About CRISPR’s Monster Year”).

It is gene editing’s relative ease of use that worries the U.S. intelligence community, according to the assessment. “Given the broad distribution, low cost, and accelerated pace of development of this dual-use technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications,” the report said.

The choice by the U.S. spy chief to call out gene editing as a potential weapon of mass destruction, or WMD, surprised some experts. It was the only biotechnology appearing in a tally of six more conventional threats, like North Korea’s suspected nuclear detonation on January 6 [2016], Syria’s undeclared chemical weapons, and new Russian cruise missiles that might violate an international treaty.

The report is an unclassified version of the “collective insights” of the Central Intelligence Agency, the National Security Agency, and half a dozen other U.S. spy and fact-gathering operations.

Although the report doesn’t mention CRISPR by name, Clapper clearly had the newest and the most versatile of the gene-editing systems in mind. The CRISPR technique’s low cost and relative ease of use—the basic ingredients can be bought online for $60—seems to have spooked intelligence agencies.

….

However, one has to be careful with the hype surrounding new technologies and, at present, the security implications of CRISPR are probably modest. There are easier, cruder methods of creating terror. CRISPR would only get aspiring biological terrorists so far. Other steps, such as growing and disseminating biological weapons agents, would typically be required for it to become an effective weapon. This would require additional skills and places CRISPR-based biological weapons beyond the reach of most terrorist groups. At least for the time being.

A July 5, 2016 opinion piece by Malcolm Dando for Nature argues for greater safeguards,

In Geneva next month [August 2016], officials will discuss updates to the global treaty that outlaws the use of biological weapons. The 1972 Biological Weapons Convention (BWC) was the first agreement to ban an entire class of weapons, and it remains a crucial instrument to stop scientific research on viruses, bacteria and toxins from being diverted into military programmes.

The BWC is the best route to ensure that nations take the biological-weapons threat seriously. Most countries have struggled to develop and introduce strong and effective national programmes — witness the difficulty the United States had in agreeing what oversight system should be applied to gain-of-function experiments that created more-dangerous lab-grown versions of common pathogens.

As scientific work advances — the CRISPR gene-editing system has been flagged as the latest example of possible dual-use technology — this treaty needs to be regularly updated. This is especially important because it has no formal verification system. Proposals for declarations, monitoring visits and inspections were vetoed by the United States in 2001, on the grounds that such verification threatened national security and confidential business information.

Even so, issues such as the possible dual-use threat from gene-editing systems will not be easily resolved. But we have to try. Without the involvement of the BWC, codes of conduct and oversight systems set up at national level are unlikely to be effective. The stakes are high, and after years of fumbling, we need strong international action to monitor and assess the threats from the new age of biological techniques.

Revill notes the latest BWC agreement and suggests future directions,

This convention is imperfect and lacks a way to ensure that states are compliant. Moreover, it has not been adequately “tended to” by its member states recently, with the last major meeting unable to agree a further programme of work. Yet it remains the cornerstone of an international regime against the hostile use of biology. All 178 state parties declared in December of 2016 their continued determination “to exclude completely the possibility of the use of (biological) weapons, and their conviction that such use would be repugnant to the conscience of humankind”.

These states therefore need to address the hostile potential of CRISPR. Moreover, they need to do so collectively. Unilateral national measures, such as reasonable biological security procedures, are important. However, preventing the hostile exploitation of CRISPR is not something that can be achieved by any single state acting alone.

As such, when states party to the convention meet later this year, it will be important to agree to a more systematic and regular review of science and technology. Such reviews can help with identifying and managing the security risks of technologies such as CRISPR, as well as allowing an international exchange of information on some of the potential benefits of such technologies.

Most states supported the principle of enhanced reviews of science and technology under the convention at the last major meeting. But they now need to seize the opportunity and agree on the practicalities of such reviews in order to prevent the convention being left behind by developments in science and technology.

Experts (military, intelligence, medical, etc.) are not the only ones concerned about CRISPR according to a February 11, 2016 article by Sharon Begley for statnews.com (Note: A link has been removed),

Most Americans oppose using powerful new technology to alter the genes of unborn babies, according to a new poll — even to prevent serious inherited diseases.

They expressed the strongest disapproval for editing genes to create “designer babies” with enhanced intelligence or looks.

But the poll, conducted by STAT and Harvard T.H. Chan School of Public Health, found that people have mixed, and apparently not firm, views on emerging genetic techniques. US adults are almost evenly split on whether the federal government should fund research on editing genes before birth to keep children from developing diseases such as cystic fibrosis or Huntington’s disease.

“They’re not against scientists trying to improve [genome-editing] technologies,” said Robert Blendon, professor of health policy and political analysis at Harvard’s Chan School, perhaps because they recognize that one day there might be a compelling reason to use such technologies. An unexpected event, such as scientists “eliminating a terrible disease” that a child would have otherwise inherited, “could change people’s views in the years ahead,” Blendon said.

But for now, he added, “people are concerned about editing the genes of those who are yet unborn.”

A majority, however, wants government regulators to approve gene therapy to treat diseases in children and adults.

The STAT-Harvard poll comes as scientists and policy makers confront the ethical, social, and legal implications of these revolutionary tools for changing DNA. Thanks to a technique called CRISPR-Cas9, scientists can easily, and with increasing precision, modify genes through the genetic analog of a computer’s “find and replace” function.

I find it surprising that there’s resistance to removing diseases found in the germline (embryos). When they were doing public consultations on nanotechnology, the one area where people tended to be quite open to research was health and medicine. Where food was concerned, however, people had far more concerns.

If you’re interested in the STAT-Harvard poll, you can find it here. As for James Revill, he has written a more substantive version of this essay as a paper, which is available here.

On a semi-related note, I found STAT (statnews.com) to be a quite interesting and accessibly written online health science journal. Here’s more from the About Us page (Note: A link has been removed),

What’s STAT all about?
STAT is a national publication focused on finding and telling compelling stories about health, medicine, and scientific discovery. We produce daily news, investigative articles, and narrative projects in addition to multimedia features. We tell our stories from the places that matter to our readers — research labs, hospitals, executive suites, and political campaigns.

Why did you call it STAT?
In medical parlance, “stat” means important and urgent, and that’s what we’re all about — quickly and smartly delivering good stories. Read more about the origins of our name here.

Who’s behind the new publication?
STAT is produced by Boston Globe Media. Our headquarters is located in Boston but we have bureaus in Washington, New York, Cleveland, Atlanta, San Francisco, and Los Angeles. It was started by John Henry, the owner of Boston Globe Media and the principal owner of the Boston Red Sox. Rick Berke is executive editor.

So is STAT part of The Boston Globe?
They’re distinct properties but the two share content and complement one another.

Is it free?
Much of STAT is free. We also offer STAT Plus, a premium subscription plan that includes exclusive reporting about the pharmaceutical and biotech industries as well as other benefits. Learn more about it here.

Who’s working for STAT?
Some of the best-sourced science, health, and biotech journalists in the country, as well as motion graphics artists and data visualization specialists. Our team includes talented writers, editors, and producers capable of the kind of explanatory journalism that complicated science issues sometimes demand.

Who’s your audience?
You. Even if you don’t work in science, have never stepped foot in a hospital, or hated high school biology, we’ve got something for you. And for the lab scientists, health professionals, business leaders, and policy makers, we think you’ll find coverage here that interests you, too. The world of health, science, and medicine is booming and yielding fascinating stories. We explore how they affect us all.

….

As promised, here are the links to my three-part series on CRISPR,

Part 1 opens the series with a basic description of CRISPR and the germline research that occasioned the series along with some of the other (non-weapon) ethical issues and patent disputes that are arising from this new technology. CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2 covers three critical responses to the reporting and between them describe the technology in more detail and the possibility of ‘designer babies’.  CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of and need for according to a couple of social scientists. Informally, there is some discussion via pop culture and Joelle Renstrom notes although she is focused on the larger issues touched on by the television series, Orphan Black and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

Finally, I hope to stumble across studies from other countries about how they are responding to the possibilities presented by CRISPR/Cas9 so that I can offer a more global perspective than this largely US perspective. At the very least, it would be interesting to find out if there are differences.