Tag Archives: University of California at Berkeley

Cyborg bacteria to reduce carbon dioxide

This video is a bit technical, but then it is about work being presented to chemists at the American Chemical Society’s (ACS) 254th National Meeting & Exposition, Aug. 20-24, 2017,

For a more plain language explanation, there’s an August 22, 2017 ACS news release (also on EurekAlert),

Photosynthesis provides energy for the vast majority of life on Earth. But chlorophyll, the green pigment that plants use to harvest sunlight, is relatively inefficient. To enable humans to capture more of the sun’s energy than natural photosynthesis can, scientists have taught bacteria to cover themselves in tiny, highly efficient solar panels to produce useful compounds.

“Rather than rely on inefficient chlorophyll to harvest sunlight, I’ve taught bacteria how to grow and cover their bodies with tiny semiconductor nanocrystals,” says Kelsey K. Sakimoto, Ph.D., who carried out the research in the lab of Peidong Yang, Ph.D. “These nanocrystals are much more efficient than chlorophyll and can be grown at a fraction of the cost of manufactured solar panels.”

Humans increasingly are looking to find alternatives to fossil fuels as sources of energy and feedstocks for chemical production. Many scientists have worked to create artificial photosynthetic systems to generate renewable energy and simple organic chemicals using sunlight. Progress has been made, but the systems are not efficient enough for commercial production of fuels and feedstocks.

Research in Yang’s lab at the University of California, Berkeley, where Sakimoto earned his Ph.D., focuses on harnessing inorganic semiconductors that can capture sunlight to organisms such as bacteria that can then use the energy to produce useful chemicals from carbon dioxide and water. “The thrust of research in my lab is to essentially ‘supercharge’ nonphotosynthetic bacteria by providing them energy in the form of electrons from inorganic semiconductors, like cadmium sulfide, that are efficient light absorbers,” Yang says. “We are now looking for more benign light absorbers than cadmium sulfide to provide bacteria with energy from light.”

Sakimoto worked with a naturally occurring, nonphotosynthetic bacterium, Moorella thermoacetica, which, as part of its normal respiration, produces acetic acid from carbon dioxide (CO2). Acetic acid is a versatile chemical that can be readily upgraded to a number of fuels, polymers, pharmaceuticals and commodity chemicals through complementary, genetically engineered bacteria.

When Sakimoto fed cadmium and the amino acid cysteine, which contains a sulfur atom, to the bacteria, they synthesized cadmium sulfide (CdS) nanoparticles, which function as solar panels on their surfaces. The hybrid organism, M. thermoacetica-CdS, produces acetic acid from CO2, water and light. “Once covered with these tiny solar panels, the bacteria can synthesize food, fuels and plastics, all using solar energy,” Sakimoto says. “These bacteria outperform natural photosynthesis.”
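For readers who want the chemistry spelled out, here’s a simplified sketch (my own summary, not taken from the ACS materials) of the two steps as I understand them: the cysteine supplies sulfide, which precipitates the cadmium as nanoparticles on the cell surface, and the light-excited CdS then supplies the electrons the bacterium uses to reduce carbon dioxide to acetic acid,

```latex
% Simplified sketch only: charge-balancing species and the organic by-product
% of cysteine desulfurization are omitted. (Requires amsmath for \xrightarrow.)
\[ \mathrm{Cd^{2+} + HS^{-} \;\longrightarrow\; CdS\,(nanoparticle) + H^{+}} \]

% Overall light-driven, eight-electron reduction of carbon dioxide to acetic acid
\[ 2\,\mathrm{CO_{2}} + 8\,\mathrm{H^{+}} + 8\,e^{-}
   \;\xrightarrow{\;h\nu,\ \mathrm{CdS}\;}\;
   \mathrm{CH_{3}COOH} + 2\,\mathrm{H_{2}O} \]
```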

The bacteria operate at an efficiency of more than 80 percent, and the process is self-replicating and self-regenerating, making this a zero-waste technology. “Synthetic biology and the ability to expand the product scope of CO2 reduction will be crucial to poising this technology as a replacement, or one of many replacements, for the petrochemical industry,” Sakimoto says.

So, do the inorganic-biological hybrids have commercial potential? “I sure hope so!” he says. “Many current systems in artificial photosynthesis require solid electrodes, which is a huge cost. Our algal biofuels are much more attractive, as the whole CO2-to-chemical apparatus is self-contained and only requires a big vat out in the sun.” But he points out that the system still requires some tweaking to tune both the semiconductor and the bacteria. He also suggests that it is possible that the hybrid bacteria he created may have some naturally occurring analog. “A future direction, if this phenomenon exists in nature, would be to bioprospect for these organisms and put them to use,” he says.

For more insight into the work, check out Dexter Johnson’s Aug. 22, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

“It’s actually a natural, overlooked feature of their biology,” explains Sakimoto in an e-mail interview with IEEE Spectrum. “This bacterium has a detoxification pathway, meaning if it encounters a toxic metal, like cadmium, it will try to precipitate it out, thereby detoxifying it. So when we introduce cadmium ions into the growth medium in which M. thermoacetica is hanging out, it will convert the amino acid cysteine into sulfide, which precipitates out cadmium as cadmium sulfide. The crystals then assemble and stick onto the bacterium through normal electrostatic interactions.”

I’ve just excerpted one bit, there’s more in Dexter’s posting.

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a twitter feed, and an article in an online magazine to thank for this bumper crop of news.

Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, to be held Nov. 1 – 3, 2017 in Ottawa, Ontario for the third year in a row, has a SuperSaver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out: you have until September 3rd before prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page, and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary, although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada: Intégrer l’équité, promouvoir la diversité‘,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for the conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving on to the conference programme, the organizers have added two panels (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and, teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Conference: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and, so far, there’s no word as to who it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have one, after Québec and Alberta. Alberta apparently has a chief scientist, but there doesn’t seem to be a government webpage for the position and the incumbent’s LinkedIn profile doesn’t include the title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witness for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons- PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017, and the committee hasn’t yet scheduled any meetings, although they are to be held in September. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got their list of ‘science’ submissions but it’s definitely worth checking as there are some odd omissions such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s twitter feed,

 I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn, editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy. An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum. From 2001 to 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press. She is a three-time winner of a National Newspaper Award: twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

12-2 pm

Fairmont Château Laurier, Laurier Room
1 Rideau Street, Ottawa

rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event, so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments along with the other commentary I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for Maclean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:

 

Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”
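(A quick aside from me, not from Wells or the Naylor panel: those two percentages are consistent only if the pool of active researchers grew by roughly half over the period, since per-researcher resources are simply the total envelope divided by the headcount. A rough check, using my own arithmetic on the figures quoted above:)

```python
# Rough check (my arithmetic, not the report's) of the figures quoted above:
# the total investigator-led envelope fell ~3% while per-researcher resources
# fell ~35%, which only works if the researcher headcount grew substantially.
total_funding_ratio = 0.97       # envelope fell by 3%
per_researcher_ratio = 0.65      # resources per researcher fell by ~35%

# per-researcher resources = total envelope / headcount,
# so implied headcount ratio = total ratio / per-researcher ratio
headcount_ratio = total_funding_ratio / per_researcher_ratio
print(f"Implied growth in active researchers: {headcount_ratio - 1:.0%}")
# -> roughly +49%: about half again as many researchers sharing a slightly smaller pot
```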

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive government’s—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

Incisive and acerbic, the article is worth making time to read in its entirety.

Getting back to ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the CSPC conference. It could go either way: a surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rosza for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, the first letters of its paragraphs spelled out I-M-P-E-A-C-H. That followed a letter earlier this month from writer Jhumpa Lahiri and actor Kal Penn, who similarly spelled out R-E-S-I-S-T in their joint letter of resignation from the President’s Committee on Arts and Humanities.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3-D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with the science writing coming out of the Massachusetts Institute of Technology (MIT), it was a bit surprising to find that this news release (the basis for the news item) didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.
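To get a feel for why layering memory (and sensors) directly on top of logic matters, here’s a toy back-of-envelope model. It is mine, not the researchers’; the bandwidth figures and the bytes-per-reading are invented purely for illustration. It contrasts funnelling a million sensor readings through a single off-chip memory bus with writing them through many dense vertical connections in parallel,

```python
# Toy illustration (mine, with invented numbers) of the data-movement bottleneck
# the 3-D chip is meant to address: a million sensor readings either funnel
# through one off-chip memory bus, or are written in parallel through dense
# vertical interconnects to memory layers sitting directly above the logic.

N_SENSORS = 1_000_000            # the chip described above carries ~1 million sensors
BYTES_PER_READING = 2            # assume a 16-bit sample per sensor (my assumption)
total_bytes = N_SENSORS * BYTES_PER_READING

# Conventional 2-D system: one shared off-chip bus (hypothetical figure)
OFF_CHIP_BUS_BYTES_PER_S = 10e9          # ~10 GB/s to external DRAM

# 3-D system: many vertical links, each modest on its own but used simultaneously
N_VERTICAL_LINKS = 100_000               # hypothetical count of interlayer wires
BYTES_PER_S_PER_LINK = 1e6               # hypothetical 1 MB/s per link

t_off_chip = total_bytes / OFF_CHIP_BUS_BYTES_PER_S
t_3d = total_bytes / (N_VERTICAL_LINKS * BYTES_PER_S_PER_LINK)

print(f"Off-chip bus:        {t_off_chip * 1e6:7.1f} microseconds")
print(f"Parallel 3-D writes: {t_3d * 1e6:7.1f} microseconds")
```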

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s laws, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

After giving a basic explanation of the technology and some of the controversies in part 1, and offering more detail about the technology and the possibility of designer babies in part 2, this part covers public discussion: a call for one, and the suggestion that one is already taking place in popular culture.

But a discussion does need to happen

In a move that is either an exquisite coincidence or has been carefully orchestrated (I vote for the latter), researchers from the University of Wisconsin-Madison have released a study about attitudes in the US toward human genome editing. From an Aug. 11, 2017 University of Wisconsin-Madison news release (also on EurekAlert),

In early August 2017, an international team of scientists announced they had successfully edited the DNA of human embryos. As people process the political, moral and regulatory issues of the technology — which nudges us closer to nonfiction than science fiction — researchers at the University of Wisconsin-Madison and Temple University show the time is now to involve the American public in discussions about human genome editing.

In a study published Aug. 11 in the journal Science, the researchers assessed what people in the United States think about the uses of human genome editing and how their attitudes may drive public discussion. They found a public divided on its uses but united in the importance of moving conversations forward.

“There are several pathways we can go down with gene editing,” says UW-Madison’s Dietram Scheufele, lead author of the study and member of a National Academy of Sciences committee that compiled a report focused on human gene editing earlier this year. “Our study takes an exhaustive look at all of those possible pathways forward and asks where the public stands on each one of them.”

Compared to previous studies on public attitudes about the technology, the new study takes a more nuanced approach, examining public opinion about the use of gene editing for disease therapy versus for human enhancement, and about editing that becomes hereditary versus editing that does not.

The research team, which included Scheufele and Dominique Brossard — both professors of life sciences communication — along with Michael Xenos, professor of communication arts, first surveyed study participants about the use of editing to treat disease (therapy) versus for enhancement (creating so-called “designer babies”). While about two-thirds of respondents expressed at least some support for therapeutic editing, only one-third expressed support for using the technology for enhancement.

Diving even deeper, researchers looked into public attitudes about gene editing on specific cell types — somatic or germline — either for therapy or enhancement. Somatic cells are non-reproductive, so edits made in those cells do not affect future generations. Germline cells, however, are heritable, and changes made in these cells would be passed on to children.

Public support of therapeutic editing was high both in cells that would be inherited and those that would not, with 65 percent of respondents supporting therapy in germline cells and 64 percent supporting therapy in somatic cells. When considering enhancement editing, however, support depended more upon whether the changes would affect future generations. Only 26 percent of people surveyed supported enhancement editing in heritable germline cells and 39 percent supported enhancement of somatic cells that would not be passed on to children.

“A majority of people are saying that germline enhancement is where the technology crosses that invisible line and becomes unacceptable,” says Scheufele. “When it comes to therapy, the public is more open, and that may partly be reflective of how severe some of those genetically inherited diseases are. The potential treatments for those diseases are something the public at least is willing to consider.”

Beyond questions of support, researchers also wanted to understand what was driving public opinions. They found that two factors were related to respondents’ attitudes toward gene editing as well as their attitudes toward the public’s role in its emergence: the level of religious guidance in their lives, and factual knowledge about the technology.

Those with a high level of religious guidance in their daily lives had lower support for human genome editing than those with low religious guidance. Additionally, those with high knowledge of the technology were more supportive of it than those with less knowledge.

While respondents with high religious guidance and those with high knowledge differed on their support for the technology, both groups highly supported public engagement in its development and use. These results suggest broad agreement that the public should be involved in questions of political, regulatory and moral aspects of human genome editing.

“The public may be split along lines of religiosity or knowledge with regard to what they think about the technology and scientific community, but they are united in the idea that this is an issue that requires public involvement,” says Scheufele. “Our findings show very nicely that the public is ready for these discussions and that the time to have the discussions is now, before the science is fully ready and while we have time to carefully think through different options regarding how we want to move forward.”

Here’s a link to and a citation for the paper,

U.S. attitudes on human genome editing by Dietram A. Scheufele, Michael A. Xenos, Emily L. Howell, Kathleen M. Rose, Dominique Brossard, and Bruce W. Hardy. Science 11 Aug 2017: Vol. 357, Issue 6351, pp. 553-554 DOI: 10.1126/science.aan3708

This paper is behind a paywall.

A couple of final comments

Briefly, I notice that there’s no mention of the ethics of patenting this technology in the news release about the study.

Moving on, it seems surprising that the first team to engage in germline editing in the US is in Oregon; I would have expected the work to come from Massachusetts, California, or Illinois, where a lot of bleeding-edge medical research is performed. However, given the dearth of financial support from federal funding institutions, it seems likely that only an outsider would dare to engage in the research. Given the timing, Mitalipov’s work was already well underway before the recent about-face from the US National Academy of Sciences (Note: Kaiser’s Feb. 14, 2017 article does note that for some the recent recommendations do not represent any change).

As for discussion on issues such as editing of the germline, I’ve often noted here that popular culture (including advertising, as well as science fiction and other dramas in various media) often provides an informal forum for discussion. Joelle Renstrom, in an Aug. 13, 2017 article for slate.com, writes that Orphan Black (a BBC America series) opened up a series of questions about science and ethics in the guise of a thriller about clones. She offers a précis of the first four seasons (Note: A link has been removed),

If you stopped watching a few seasons back, here’s a brief synopsis of how the mysteries wrap up. Neolution, an organization that seeks to control human evolution through genetic modification, began Project Leda, the cloning program, for two primary reasons: to see whether they could and to experiment with mutations that might allow people (i.e., themselves) to live longer. Neolution partnered with biotech companies such as Dyad, using its big pharma reach and deep pockets to harvest people’s genetic information and to conduct individual and germline (that is, genetic alterations passed down through generations) experiments, including infertility treatments that result in horrifying birth defects and body modification, such as tail-growing.

She then provides the article’s thesis (Note: Links have been removed),

Orphan Black demonstrates Carl Sagan’s warning of a time when “awesome technological powers are in the hands of a very few.” Neolutionists do whatever they want, pausing only to consider whether they’re missing an opportunity to exploit. Their hubris is straight out of Victor Frankenstein’s playbook. Frankenstein wonders whether he ought to first reanimate something “of simpler organisation” than a human, but starting small means waiting for glory. Orphan Black’s evil scientists embody this belief: if they’re going to play God, then they’ll control not just their own destinies, but the clones’ and, ultimately, all of humanity’s. Any sacrifices along the way are for the greater good—reasoning that culminates in Westmoreland’s eugenics fantasy to genetically sterilize 99 percent of the population he doesn’t enhance.

Orphan Black uses sci-fi tropes to explore real-world plausibility. Neolution shares similarities with transhumanism, the belief that humans should use science and technology to take control of their own evolution. While some transhumanists dabble in body modifications, such as microchip implants or night-vision eye drops, others seek to end suffering by curing human illness and aging. But even these goals can be seen as selfish, as access to disease-eradicating or life-extending technologies would be limited to the wealthy. Westmoreland’s goal to “sell Neolution to the 1 percent” seems frighteningly plausible—transhumanists, who statistically tend to be white, well-educated, and male, and their associated organizations raise and spend massive sums of money to help fulfill their goals. …

On Orphan Black, denial of choice is tantamount to imprisonment. That the clones have to earn autonomy underscores the need for ethics in science, especially when it comes to genetics. The show’s message here is timely given the rise of gene-editing techniques such as CRISPR. Recently, the National Academy of Sciences gave germline gene editing the green light, just one year after academy scientists from around the world argued it would be “irresponsible to proceed” without further exploring the implications. Scientists in the United Kingdom and China have already begun human genetic engineering and American scientists recently genetically engineered a human embryo for the first time. The possibility of Project Leda isn’t farfetched. Orphan Black warns us that money, power, and fear of death can corrupt both people and science. Once that happens, loss of humanity—of both the scientists and the subjects—is inevitable.

In Carl Sagan’s dark vision of the future, “people have lost the ability to set their own agendas or knowledgeably question those in authority.” This describes the plight of the clones at the outset of Orphan Black, but as the series continues, they challenge this paradigm by approaching science and scientists with skepticism, ingenuity, and grit. …

I hope there are discussions such as those Scheufele and Brossard are advocating, but it might be worth considering that there is already some discussion underway, informal as it is.

-30-

Part 1: CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Having included an explanation of CRISPR-CAS9 technology, along with the news about the first US team to edit the germline and bits and pieces about ethics and a patent fight, in part 1, this part homes in on the details of the work and worries about ‘designer babies’.

The interest flurry

I found three articles addressing the research, and all three concur that, despite some of the early reporting, this is not the beginning of a ‘designer baby’ generation.

First up was Nick Thieme in a July 28, 2017 article for Slate,

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically-diseased embryos by injecting the host egg with CRISPR, a DNA-based gene editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were kept sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is comprised of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. …

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Ed Yong, in an Aug. 2, 2017 article for The Atlantic, offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

… the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Given the persistent confusion around CRISPR and its implications, I’ve laid out exactly what the team did, and what it means.

Who did the experiments?

Shoukhrat Mitalipov is a Kazakhstani-born cell biologist with a history of breakthroughs—and controversy—in the stem cell field. He was the first scientist to clone monkeys. He was the first to create human embryos by cloning adult cells—a move that could provide patients with an easy supply of personalized stem cells. He also pioneered a technique for creating embryos with genetic material from three biological parents, as a way of preventing a group of debilitating inherited diseases.

Although MIT Tech Review name-checked Mitalipov alone, the paper splits credit for the research between five collaborating teams—four based in the United States, and one in South Korea.

What did they actually do?

The project effectively began with an elevator conversation between Mitalipov and his colleague Sanjiv Kaul. Mitalipov explained that he wanted to use CRISPR to correct a disease-causing gene in human embryos, and was trying to figure out which disease to focus on. Kaul, a cardiologist, told him about hypertrophic cardiomyopathy (HCM)—an inherited heart disease that’s commonly caused by mutations in a gene called MYBPC3. HCM is surprisingly common, affecting 1 in 500 adults. Many of them lead normal lives, but in some, the walls of their hearts can thicken and suddenly fail. For that reason, HCM is the commonest cause of sudden death in athletes. “There really is no treatment,” says Kaul. “A number of drugs are being evaluated but they are all experimental,” and they merely treat the symptoms. The team wanted to prevent HCM entirely by removing the underlying mutation.

They collected sperm from a man with HCM and used CRISPR to change his mutant gene into its normal healthy version, while simultaneously using the sperm to fertilize eggs that had been donated by female volunteers. In this way, they created embryos that were completely free of the mutation. The procedure was effective, and avoided some of the critical problems that have plagued past attempts to use CRISPR in human embryos.

Wait, other human embryos have been edited before?

There have been three attempts in China. The first two—in 2015 and 2016—used non-viable embryos that could never have resulted in a live birth. The third—announced this March—was the first to use viable embryos that could theoretically have been implanted in a womb. All of these studies showed that CRISPR gene-editing, for all its hype, is still in its infancy.

The editing was imprecise. CRISPR is heralded for its precision, allowing scientists to edit particular genes of choice. But in practice, some of the Chinese researchers found worrying levels of off-target mutations, where CRISPR mistakenly cut other parts of the genome.

The editing was inefficient. The first Chinese team only managed to successfully edit a disease gene in 4 out of 86 embryos, and the second team fared even worse.

The editing was incomplete. Even in the successful cases, each embryo had a mix of modified and unmodified cells. This pattern, known as mosaicism, poses serious safety problems if gene-editing were ever to be used in practice. Doctors could end up implanting women with embryos that they thought were free of a disease-causing mutation, but were only partially free. The resulting person would still have many tissues and organs that carry those mutations, and might go on to develop symptoms.

What did the American team do differently?

The Chinese teams all used CRISPR to edit embryos at early stages of their development. By contrast, the Oregon researchers delivered the CRISPR components at the earliest possible point—minutes before fertilization. That neatly avoids the problem of mosaicism by ensuring that an embryo is edited from the very moment it is created. The team did this with 54 embryos and successfully edited the mutant MYBPC3 gene in 72 percent of them. In the other 28 percent, the editing didn’t work—a high failure rate, but far lower than in previous attempts. Better still, the team found no evidence of off-target mutations.

This is a big deal. Many scientists assumed that they’d have to do something more convoluted to avoid mosaicism. They’d have to collect a patient’s cells, which they’d revert into stem cells, which they’d use to make sperm or eggs, which they’d edit using CRISPR. “That’s a lot of extra steps, with more risks,” says Alta Charo. “If it’s possible to edit the embryo itself, that’s a real advance.” Perhaps for that reason, this is the first study to edit human embryos that was published in a top-tier scientific journal—Nature, which rejected some of the earlier Chinese papers.

Is this kind of research even legal?

Yes. In Western Europe, 15 countries out of 22 ban any attempts to change the human germ line—a term referring to sperm, eggs, and other cells that can transmit genetic information to future generations. No such stance exists in the United States but Congress has banned the Food and Drug Administration from considering research applications that make such modifications. Separately, federal agencies like the National Institutes of Health are banned from funding research that ultimately destroys human embryos. But the Oregon team used non-federal money from their institutions, and donations from several small non-profits. No taxpayer money went into their work. [emphasis mine]

Why would you want to edit embryos at all?

Partly to learn more about ourselves. By using CRISPR to manipulate the genes of embryos, scientists can learn more about the earliest stages of human development, and about problems like infertility and miscarriages. That’s why biologist Kathy Niakan from the Crick Institute in London recently secured a license from a British regulator to use CRISPR on human embryos.

Isn’t this a slippery slope toward making designer babies?

In terms of avoiding genetic diseases, it’s not conceptually different from PGD, which is already widely used. The bigger worry is that gene-editing could be used to make people stronger, smarter, or taller, paving the way for a new eugenics, and widening the already substantial gaps between the wealthy and poor. But many geneticists believe that such a future is fundamentally unlikely because complex traits like height and intelligence are the work of hundreds or thousands of genes, each of which have a tiny effect. The prospect of editing them all is implausible. And since genes are so thoroughly interconnected, it may be impossible to edit one particular trait without also affecting many others.

“There’s the worry that this could be used for enhancement, so society has to draw a line,” says Mitalipov. “But this is pretty complex technology and it wouldn’t be hard to regulate it.”

Does this discovery have any social importance at all?

“It’s not so much about designer babies as it is about geographical location,” says Charo. “It’s happening in the United States, and everything here around embryo research has high sensitivity.” She and others worry that the early report about the study, before the actual details were available for scrutiny, could lead to unnecessary panic. “Panic reactions often lead to panic-driven policy … which is usually bad policy,” wrote Greely [bioethicist Hank Greely].

As I understand it, despite the change in stance, there is no federal funding available for the research performed by Mitalipov and his team.

Finally, University College London (UCL) scientists Joyce Harper and Helen O’Neill wrote about CRISPR, the Oregon team’s work, and the possibilities in an Aug. 3, 2017 essay for The Conversation (Note: Links have been removed),

The genome editing tool used, CRISPR-Cas9, has transformed the field of biology in the short time since its discovery in that it not only promises, but delivers. CRISPR has surpassed all previous efforts to engineer cells and alter genomes at a fraction of the time and cost.

The technology, which works like molecular scissors to cut and paste DNA, is a natural defence system that bacteria use to fend off harmful infections. This system has the ability to recognise invading virus DNA, cut it and integrate this cut sequence into its own genome – allowing the bacterium to render itself immune to future infections of viruses with similar DNA. It is this ability to recognise and cut DNA that has allowed scientists to use it to target and edit specific DNA regions.
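For anyone who’d like the ‘molecular scissors’ metaphor made a little more concrete, here’s a toy Python sketch of my own (real genomes and real Cas9 behaviour are far messier, and this is not the authors’ code). It mimics how the Cas9–guide complex scans DNA for a 20-letter match to its guide RNA, checks for the adjacent ‘NGG’ motif (the PAM) that Cas9 requires, and then cuts a few letters upstream of that motif,

```python
# Toy model of CRISPR-Cas9 target recognition (illustrative only, not the
# authors' method). Cas9 needs (1) a ~20-letter stretch of DNA matching its
# guide RNA and (2) an adjacent "NGG" PAM motif; it then cuts roughly 3 bases
# upstream of that motif.

def find_and_cut(dna, guide):
    """Return the two fragments produced by a single simulated cut, or None."""
    dna, guide = dna.upper(), guide.upper()
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":    # "NGG" = any base, then GG
            cut_site = i + len(guide) - 3          # ~3 bases upstream of the PAM
            return dna[:cut_site], dna[cut_site:]
    return None

# Made-up example sequence and guide (not the real MYBPC3 locus)
dna = "TTACGGATCATGCCTGACTGGAAGCTTGGCATAA"
guide = "ATCATGCCTGACTGGAAGCT"
print(find_and_cut(dna, guide))
# -> ('TTACGGATCATGCCTGACTGGAA', 'GCTTGGCATAA')
```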

When this technology is applied to “germ cells” – the sperm and eggs – or embryos, it changes the germline. That means that any alterations made would be permanent and passed down to future generations. This makes it more ethically complex, but there are strict regulations around human germline genome editing, which is predominantly illegal. The UK received a licence in 2016 to carry out CRISPR on human embryos for research into early development. But edited embryos are not allowed to be inserted into the uterus and develop into a fetus in any country.

Germline genome editing came into the global spotlight when Chinese scientists announced in 2015 that they had used CRISPR to edit non-viable human embryos – cells that could never result in a live birth. They did this to modify the gene responsible for the blood disorder β-thalassaemia. While it was met with some success, it received a lot of criticism because of the premature use of this technology in human embryos. The results showed a high number of potentially dangerous, off-target mutations created in the procedure.

Impressive results

The new study, published in Nature, is different because it deals with viable human embryos and shows that the genome editing can be carried out safely – without creating harmful mutations. The team used CRISPR to correct a mutation in the gene MYBPC3, which accounts for approximately 40% of the myocardial disease hypertrophic cardiomyopathy. This is a dominant disease, so an affected individual only needs one abnormal copy of the gene to be affected.

The researchers used sperm from a patient carrying one copy of the MYBPC3 mutation to create 54 embryos. They edited them using CRISPR-Cas9 to correct the mutation. Without genome editing, approximately 50% of the embryos would carry the patient’s normal gene and 50% would carry his abnormal gene.

After genome editing, the aim would be for 100% of embryos to be normal. In the first round of the experiments, they found that 66.7% of embryos – 36 out of 54 – were normal after being injected with CRISPR. Of the remaining 18 embryos, five had remained unchanged, suggesting editing had not worked. In 13 embryos, only a portion of cells had been edited.
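(A quick arithmetic check of mine on those first-round figures; the counts are the ones quoted above, only the little script is my own.)

total, corrected, unchanged, mosaic = 54, 36, 5, 13   # counts reported for round one

print(round(100 * corrected / total, 1), "% fully corrected")         # 66.7 %
print(unchanged + mosaic, "embryos not fully edited, out of", total)  # 18 out of 54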

The level of efficiency is affected by the type of CRISPR machinery used and, critically, the timing in which it is put into the embryo. The researchers therefore also tried injecting the sperm and the CRISPR-Cas9 complex into the egg at the same time, which resulted in more promising results. This was done for 75 mature donated human eggs using a common IVF technique called intracytoplasmic sperm injection. This time, impressively, 72.4% of embryos were normal as a result. The approach also lowered the number of embryos containing a mixture of edited and unedited cells (these embryos are called mosaics).

Finally, the team injected a further 22 embryos which were grown into blastocysts – a later stage of embryo development. These were sequenced and the researchers found that the editing had indeed worked. Importantly, they could show that the level of off-target mutations was low.

A brave new world?

So does this mean we finally have a cure for debilitating, heritable diseases? It’s important to remember that the study did not achieve a 100% success rate. Even the researchers themselves stress that further research is needed in order to fully understand the potential and limitations of the technique.

In our view, it is unlikely that genome editing would be used to treat the majority of inherited conditions anytime soon. We still can’t be sure how a child with a genetically altered genome will develop over a lifetime, so it seems unlikely that couples carrying a genetic disease would embark on gene editing rather than undergoing already available tests – such as preimplantation genetic diagnosis or prenatal diagnosis – where the embryos or fetus are tested for genetic faults.

-30-

As might be expected there is now a call for public discussion about the ethics of this kind of work. See Part 3.

For anyone who started in the middle of this series, here’s Part 1 featuring an introduction to the technology and some of the issues.

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

There’s been a minor flurry of interest in CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats; also known as CRISPR-CAS9), a gene-editing technique, since a team in Oregon announced a paper describing their work editing the germline. Since I’ve been following the CRISPR-CAS9 story for a while this seems like a good juncture for a more in-depth look at the topic. In this first part I’m including an introduction to CRISPR, some information about the latest US work, and some previous writing about ethics issues raised when Chinese scientists first announced their work editing germlines in 2015 and during the patent dispute between the University of California at Berkeley and Harvard University’s Broad Institute.

Introduction to CRISPR

I’ve been searching for a good description of CRISPR and this helped to clear up some questions for me (Thank you to MIT Review),

For anyone who’s been reading about science for a while, this upbeat approach to explaining how a particular technology will solve all sorts of problems will seem quite familiar. It’s not the most hyperbolic piece I’ve seen but it barely mentions any problems associated with research (for some of the problems see: ‘The interest flurry’ later in part 2).

Oregon team

Steve Connor’s July 26, 2017 article for the MIT (Massachusetts Institute of Technology) Technology Review breaks the news (Note: Links have been removed),

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential “weapon of mass destruction.”

Here’s a link to a citation for the groundbreaking paper,

Correction of a pathogenic gene mutation in human embryos by Hong Ma, Nuria Marti-Gutierrez, Sang-Wook Park, Jun Wu, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, Dongmei Ji, Tomonari Hayama, Riffat Ahmed, Hayley Darby, Crystal Van Dyken, Ying Li, Eunju Kang, A.-Reum Park, Daesik Kim, Sang-Tae Kim, Jianhui Gong, Ying Gu, Xun Xu, David Battaglia, Sacha A. Krieg, David M. Lee, Diana H. Wu, Don P. Wolf, Stephen B. Heitner, Juan Carlos Izpisua Belmonte, Paula Amato, Jin-Soo Kim, Sanjiv Kaul, & Shoukhrat Mitalipov. Nature (2017) doi:10.1038/nature23305 Published online 02 August 2017

This paper appears to be open access.

CRISPR Issues: ethics and patents

In my May 14, 2015 posting I mentioned a ‘moratorium’ on germline research, the Chinese research paper, and the stance taken by the US National Institutes of Health (NIH),

The CRISPR technology has reignited a discussion about ethical and moral issues of human genetic engineering some of which is reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering: modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the entirety of the piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (nonviable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature, where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, from an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

The US has modified its stance according to a February 14, 2017 article by Jocelyn Kaiser for Science Magazine (Note: Links have been removed),

Editing the DNA of a human embryo to prevent a disease in a baby could be ethically allowable one day—but only in rare circumstances and with safeguards in place, says a widely anticipated report released today.

The report from an international committee convened by the U.S. National Academy of Sciences (NAS) and the National Academy of Medicine in Washington, D.C., concludes that such a clinical trial “might be permitted, but only following much more research” on risks and benefits, and “only for compelling reasons and under strict oversight.” Those situations could be limited to couples who both have a serious genetic disease and for whom embryo editing is “really the last reasonable option” if they want to have a healthy biological child, says committee co-chair Alta Charo, a bioethicist at the University of Wisconsin in Madison.

Some researchers are pleased with the report, saying it is consistent with previous conclusions that safely altering the DNA of human eggs, sperm, or early embryos—known as germline editing—to create a baby could be possible eventually. “They have closed the door to the vast majority of germline applications and left it open for a very small, well-defined subset. That’s not unreasonable in my opinion,” says genome researcher Eric Lander of the Broad Institute in Cambridge, Massachusetts. Lander was among the organizers of an international summit at NAS in December 2015 who called for more discussion before proceeding with embryo editing.

But others see the report as lowering the bar for such experiments because it does not explicitly say they should be prohibited for now. “It changes the tone to an affirmative position in the absence of the broad public debate this report calls for,” says Edward Lanphier, chairman of the DNA editing company Sangamo Therapeutics in Richmond, California. Two years ago, he co-authored a Nature commentary calling for a moratorium on clinical embryo editing.

One advocacy group opposed to embryo editing goes further. “We’re very disappointed with the report. It’s really a pretty dramatic shift from the existing and widespread agreement globally that human germline editing should be prohibited,” says Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, California.

Interestingly, this change of stance occurred just prior to a CRISPR patent decision (from my March 15, 2017 posting),

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology were handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property rights for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite the University of California having applied first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

….

I also noted this eyebrow-lifting statistic, “As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.”

-30-

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and discuss the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of it and the need for it, according to a couple of social scientists. Informally, there is some discussion via pop culture, as Joelle Renstrom notes, although she is focused on the larger issues touched on by the television series Orphan Black, and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
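(For anyone wondering what such a ‘computational model’ might look like in practice, here is a bare-bones sketch of my own. The ratings are invented and the decision rule, that metaphor flows from the more ‘external’ domain to the less external one, is only a crude stand-in for the statistical models the researchers actually fit; it is meant to show the directional idea being tested, nothing more.)

# Hypothetical 0-to-1 ratings of how strongly each semantic domain is tied
# to experience in the external world (numbers invented for illustration).
externality = {"water": 0.92, "plants": 0.88, "mind": 0.21, "fear": 0.15}

def predict_direction(domain_a, domain_b, ratings):
    """Guess which domain is the metaphorical source and which the target."""
    if ratings[domain_a] >= ratings[domain_b]:
        return domain_a, domain_b   # the more 'external' domain lends its words
    return domain_b, domain_a

# Toy stand-in for the historical record of (source, target) mappings.
observed = [("water", "mind"), ("plants", "fear")]

hits = sum(predict_direction(s, t, externality) == (s, t) for s, t in observed)
print("correct on toy data:", hits, "of", len(observed))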

Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, Mahesh Srinivasan. Cognitive Psychology Volume 96, August 2017, Pages 41–53 DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus‘.

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

This sucker (INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research, also known as Canada’s Fundamental Science Review 2017 or the Naylor report) is a 280 pp. document (PDF) and was released on Monday, April 10, 2017. I didn’t intend that this commentary should stretch out into three parts (sigh). Them’s the breaks. This first part provides an introduction to the panel and the report as well as some ‘first thoughts’. Part 2 offers more detailed thoughts and Part 3 offers ‘special cases’ and sums up some of the ideas first introduced in part 1.

I first wrote about this review in a June 15, 2017 posting where amongst other comments I made this one,

Getting back to the review and more specifically, the panel, it’s good to see that four of the nine participants are women but other than that there doesn’t seem to be much diversity, i.e., the majority (five) spring from the Ontario/Québec nexus of power and all the Canadians are from the southern part of the country. Back to diversity, there is one businessman, Mike Lazaridis, known primarily as the founder of Research in Motion (RIM or, more popularly, the BlackBerry company), making the panel not a wholly ivory tower affair. Still, I hope one day these panels will have members from the Canadian North and international members who come from somewhere other than the US, Great Britain, and/or if they’re having a particularly wild day, Germany. Here are some candidate countries for other places to look for panel members: Japan, Israel, China, South Korea, and India. Other possibilities include one of the South American countries, African countries, and/or the Middle Eastern countries.

Take the continent of Africa for example, where many countries seem to have successfully tackled one of the same issues we face, specifically, the problem of encouraging young researchers. …

Here’s a quick summary of the newly released report from the April 10, 2017 federal government news release on Canada’s Public Policy Forum,

Today [April 10, 2017], the Government of Canada published the final report of the expert panel on Canada’s Fundamental Science Review. Commissioned by the Honourable Kirsty Duncan, Minister of Science, the report by the blue-ribbon panel offers a comprehensive review of the mechanisms for federal funding that supports research undertaken at academic institutions and research institutes across Canada, as well as the levels of that funding. It provides a multi-year blueprint for improving the oversight and governance of what the panelists call the “research ecosystem.” The report also recommends making major new investments to restore support for front-line research and strengthen the foundations of Canadian science and research at this pivotal point in global history.

The review is the first of its type in more than 40 years. While it focused most closely on the four major federal agencies that support science and scholarly inquiry across all disciplines, the report also takes a wide-angle view of governance mechanisms ranging from smaller agencies to big science facilities. Another issue closely examined by the panel was the effect of the current configuration of funding on the prospects of early career researchers—a group that includes a higher proportion of women and is more diverse than previous generations of scientists and scholars.

The panel’s deliberations were informed by a broad consultative process. The panel received 1,275 written submissions [emphasis mine] from individuals, associations and organizations. It also held a dozen round tables in five cities, engaging some 230 researchers [emphasis mine] at different career stages.

Among the findings:

  • Basic research worldwide has led to most of the technological, medical and social advances that make our quality of life today so much better than a century ago. Canadian scientists and scholars have contributed meaningfully to these advances through the decades; however, by various measures, Canada’s research competitiveness has eroded in recent years.
  • This trend emerged during a period when there was a drop of more than 30 percent in real per capita funding for independent or investigator-led research by front-line scientists and scholars in universities, colleges, institutes and research hospitals. This drop occurred as a result of caps on federal funding to the granting councils and a dramatic change in the balance of funding toward priority-driven and partnership-oriented research.
  • Canada is an international outlier in that funding from federal government sources accounts for less than 25 percent of total spending on research and development in the higher education sector. While governments sometimes highlight that, relative to GDP, Canada leads the G7 in total spending by this sector, institutions themselves now underwrite 50 percent of these costs—with adverse effects on both research and education.
  • Coordination and collaboration among the four key federal research agencies [Canada Foundation for Innovation {CFI}; Social Sciences and Humanities Research Council {SSHRC}; Natural Sciences and Engineering Research Council {NSERC}; Canadian Institutes of Health Research {CIHR}] is suboptimal, with poor alignment of supports for different aspects of research such as infrastructure, operating costs and personnel awards. Governance and administrative practices vary inexplicably, and support for areas such as international partnerships or multidisciplinary research is uneven.
  • Early career researchers are struggling in some disciplines, and Canada lacks a career-spanning strategy for supporting both research operations and staff.
  • Flagship personnel programs such as the Canada Research Chairs have had the same value since 2000. Levels of funding and numbers of awards for students and post-doctoral fellows have not kept pace with inflation, peer nations or the size of applicant pools.

The report also outlines a comprehensive agenda to strengthen the foundations of Canadian extramural research. Recommended improvements in oversight include:

  • legislation to create an independent National Advisory Council on Research and Innovation (NACRI) that would work closely with Canada’s new Chief Science Advisor (CSA) to raise the bar in terms of ongoing evaluations of all research programming;
  • wide-ranging improvements to oversight and governance of the four agencies, including the appointment of a coordinating board chaired by the CSA; and
  • lifecycle governance of national-scale research facilities as well as improved methods for overseeing and containing the growth in ad-hoc funding of smaller non-profit research entities.

With regard to funding, the panel recommends a major multi-year reinvestment in front-line research, targeting several areas of identified need. Each recommendation is benchmarked and is focused on making long-term improvements in Canada’s research capacity. The panel’s recommendations, to be phased in over four years, would raise annual spending across the four major federal agencies and other key entities from approximately $3.5 billion today to $4.8 billion in 2022. The goal is to ensure that Canada benefits from an outsized concentration of world-leading scientists and scholars who can make exciting discoveries and generate novel insights while educating and inspiring the next generation of researchers, innovators and leaders.

Given global competition, the current conditions in the ecosystem, the role of research in underpinning innovation and educating innovators, and the need for research to inform evidence-based policy-making, the panel concludes that this is among the highest-yield investments in Canada’s future that any government could make.

The full report is posted on www.sciencereview.ca.

Quotes

“In response to the request from Prime Minister Trudeau and Minister Duncan, the Science Review panel has put together a comprehensive roadmap for Canadian pre-eminence in science and innovation far into the future. The report provides creative pathways for optimizing Canada’s investments in fundamental research in the physical, life and social sciences as well as the humanities in a cost effective way. Implementation of the panel’s recommendations will make Canada the destination of choice for the world’s best talent. It will also guarantee that young Canadian researchers can fulfill their dreams in their own country, bringing both Nobel Prizes and a thriving economy to Canada. American scientists will look north with envy.”

– Robert J. Birgeneau, Silverman Professor of Physics and Public Policy, University of California, Berkeley

“We have paid close attention not only to hard data on performance and funding but also to the many issues raised by the science community in our consultations. I sincerely hope the report will serve as a useful guide to policy-makers for years to come.”

– Martha Crago, Vice-President, Research and Professor of Human Communication Disorders, Dalhousie University

“Science is the bedrock of modern civilization. Our report’s recommendations to increase and optimize government investments in fundamental scientific research will help ensure that Canada’s world-class researchers can continue to make their critically important contributions to science, industry and society in Canada while educating and inspiring future generations. At the same time, such investments will enable Canada to attract top researchers from around the world. Canada must strategically build critical density in our researcher communities to elevate its global competitiveness. This is the path to new technologies, new businesses, new jobs and new value creation for Canada.”

– Mike Lazaridis, Founder and Managing Partner, Quantum Valley Investments

“This was a very comprehensive review. We heard from a wide range of researchers—from the newest to those with ambitious, established and far-reaching research careers. At all these levels, researchers spoke of their gratitude for federal funding, but they also described enormous barriers to their success. These ranged from personal career issues like gaps in parental leave to a failure to take gender, age, geographic location and ethnicity into account. They also included mechanical and economic issues like gaps between provincial and federal granting timelines and priorities, as well as a lack of money for operating and maintaining critical equipment.”

– Claudia Malacrida, Associate Vice-President, Research and Professor of Sociology, University of Lethbridge

“We would like to thank the community for its extensive participation in this review. We reflect that community perspective in recommending improvements to funding and governance for fundamental science programs to restore the balance with recent industry-oriented programs and improve both science and innovation in Canada.”

– Arthur B. McDonald, Professor Emeritus, Queen’s University

“This report sets out a multi-year agenda that, if implemented, could transform Canadian research capacity and have enormous long-term impacts across the nation. It proffers a legacy-building opportunity for a new government that has boldly nailed its colours to the mast of science and evidence-informed policy-making. I urge the Prime Minister to act decisively on our recommendations.”

– C. David Naylor, Professor of Medicine, University of Toronto (Chair)

“This report outlines all the necessary ingredients to advance basic research, thereby positioning Canada as a leading ‘knowledge’ nation. Rarely does a country have such a unique opportunity to transform the research landscape and lay the foundation for a future of innovation, prosperity and well-being.”

– Martha C. Piper, President Emeritus, University of British Columbia

“Our report shows a clear path forward. Now it is up to the government to make sure that Canada truly becomes a world leader in how it both organizes and financially supports fundamental research.”

– Rémi Quirion, Le scientifique en chef du Québec

“The government’s decision to initiate this review reflected a welcome commitment to fundamental research. I am hopeful that the release of our report will energize the government and research community to take the next steps needed to strengthen Canada’s capacity for discovery and research excellence. A research ecosystem that supports a diversity of scholars at every career stage conducting research in every discipline will best serve Canada and the next generation of students and citizens as we move forward to meet social, technological, economic and ecological challenges.”

– Anne Wilson, Professor of Psychology, Wilfrid Laurier University

Quick facts

  • The Fundamental Science Review Advisory Panel is an independent and non-partisan body whose mandate was to provide advice and recommendations to the Minister of Science on how to improve federal science programs and initiatives.
  • The panel was asked to consider whether there are gaps in the federal system of support for fundamental research and recommend how to address them.
  • The scope of the review included the federal granting councils along with some federally funded organizations such as the Canada Foundation for Innovation.

First thoughts

Getting to the report itself, I have quickly skimmed through it, but before getting to that and for full disclosure purposes, please note that I made a submission to the panel. That said, I’m a little disappointed. I would have liked to have seen a little more imagination in the recommendations which set forth future directions, although the questions themselves would not seem to encourage much creativity,

Our mandate was summarized in two broad questions:

1. Are there any overall program gaps in Canada’s fundamental research funding ecosystem that need to be addressed?

2. Are there elements or programming features in other countries that could provide a useful example for the Government of Canada in addressing these gaps? (p. 1 print; p. 35 PDF)

A new agency to replace the STIC (Science, Technology and Innovation Council)

There are no big surprises. Of course they’ve recommended another organization, NACRI [National Advisory Council on Research and Innovation], most likely to replace the Conservative government’s advisory group, the Science, Technology and Innovation Council (STIC), which seems to have died as of Nov. 2015, one month after the Liberals won. There was no Chief Science Advisor under the Conservatives. As I recall, the STIC replaced a previous Liberal government’s advisory group and Chief Science Advisor (Arthur Carty, now the executive director of the Waterloo [as in University of Waterloo] Institute of Nanotechnology).

Describing the NACRI as peopled by volunteers doesn’t quite capture the situation. This is the sort of ‘volunteer opportunity’ a dedicated careerist salivates over because it’s a career builder where you rub shoulders with movers and shakers in other academic institutions, in government, and in business. BTW, flights to meetings will be paid for along with per diems (accommodations and meals). These volunteers will also have a staff. Admittedly, it will be unpaid extra time for the ‘volunteer’ but the payoff promises to be considerable.

Canada’s eroding science position

There is considerable concern evinced over Canada’s eroding position although we still have bragging rights in some areas (regenerative medicine and artificial intelligence, to name two). As for erosion, the OECD (Organization for Economic Cooperation and Development) dates the erosion back to 2001 (from my June 2, 2014 posting),

Interestingly, the OECD (Organization for Economic Cooperation and Development) Science, Technology and Industry Scoreboard 2013 dates the decline to 2001. From my Oct. 30, 2013 posting (excerpted from the scorecard),

Canada is among the few OECD countries where R&D expenditure declined between 2000 and 2011 (Figure 1). This decline was mainly due to reduced business spending on R&D. It occurred despite relatively generous public support for business R&D, primarily through tax incentives. In 2011, Canada was amongst the OECD countries with the most generous tax support for R&D and the country with the largest share of government funding for business R&D being accounted for by tax credits (Figure 2). …

It should be noted, the Liberals have introduced another budget with flat funding for science (if you want to see a scathing review, see the April 10, 2017 posting by Nassif Ghoussoub, professor of mathematics at the University of British Columbia, on his Piece of Mind blog). Although the funding isn’t quite so flat as it might seem at first glance (see my March 24, 2017 posting about the 2017 budget). The government explained that the science funding agencies didn’t receive increased funding as the government was waiting on this report, which was released only weeks later (couldn’t they have had a sneak preview?). In any event, it seems it will be at least a year before the funding issues described in the report can be addressed through another budget unless there’s some ‘surprise’ funding ahead.

Again, here’s a link to the other parts:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report) Commentaries

Part 2

Part 3

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
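(The news release describes the training loop only in words, ‘compare actual outputs to expected ones, and correct the predictive error’, so here is a minimal sketch of that loop in plain NumPy. It is my own generic toy, with two layers, made-up data, and ordinary gradient descent; it is not the image-generating systems, such as DeepDream, discussed in the article.)

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # 64 toy inputs with 3 features each
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0  # toy target: is the feature sum positive?

W1 = rng.normal(size=(3, 8))   # first layer: raw features -> 8 'hidden' features
W2 = rng.normal(size=(8, 1))   # second layer: hidden features -> a single prediction

for step in range(500):
    h = np.tanh(X @ W1)                   # deeper layer: a more abstract re-description
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))   # the network's actual output
    err = p - y                           # compare actual output to expected output
    grad_W2 = h.T @ err / len(X)                            # each weight's share of the error
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W1 -= 0.1 * grad_W1                   # correct the predictive error, a little at a time
    W2 -= 0.1 * grad_W2

print("mean error after training:", float(np.abs(err).mean()))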

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics

held at the new Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These were the panels that are of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

Formation of a time (temporal) crystal

It’s a crystal arranged in time rather than in space, according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert), Note: Links have been removed,

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
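For readers who like to see that half-frequency idea in the simplest possible terms, here’s a minimal toy sketch in Python. It’s my own illustration, not anything from the research team: a single “spin” that is flipped by every drive pulse returns to its starting orientation only every second pulse, which is the period-doubled response Potter describes. The real experiment relies on ten interacting ytterbium ions, imperfect pulses and disorder to make that doubling rigid, none of which this toy captures.

# Toy illustration only (my own sketch, not the team's model): one "spin"
# flipped by each laser pulse. The drive repeats every pulse, but the
# spin's orientation repeats only every two pulses -- the half-frequency
# ("period-doubled") response described above.

drive_period = 1.0   # time between laser pulses (arbitrary units)
spin = +1            # start with the spin pointing "up"

for pulse in range(1, 9):
    spin = -spin     # each pulse flips the spin head over heels
    print(f"after pulse {pulse} (t = {pulse * drive_period:.1f}): spin = {spin:+d}")

# The printout alternates -1, +1, -1, ... so the spin pattern repeats
# every 2 * drive_period, i.e. at half the frequency of the drive.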

The team also includes researchers at the National Institute of Standards and Technology, the University of California, Berkeley, and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team did, in that case from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.