Tag Archives: blockchain

October 2019 science and art/science events in Vancouver and other parts of Canada

This is a scattering of events, which I’m sure will be augmented as we properly start the month of October 2019.

October 2, 2019 in Waterloo, Canada (Perimeter Institute)

If you want to be close enough to press the sacred flesh (Sir Martin Rees), you’re out of luck. However, there are still options ranging from watching a live webcast from the comfort of your home to watching the lecture via closed circuit television with other devoted fans at a licensed bistro located on site at the Perimeter Institute (PI) to catching the lecture at a later date via YouTube.

That said, here’s why you might be interested,

Here’s more from a September 11, 2019 Perimeter Institute (PI) announcement received via email,

Surviving the Century
MOVING TOWARD A POST-HUMAN FUTURE
Martin Rees, UK Astronomer Royal
Wednesday, Oct. 2 at 7:00 PM ET

Advances in technology and space exploration could, if applied wisely, allow a bright future for the 10 billion people living on earth by the end of the century.

But there are dystopian risks we ignore at our peril: our collective “footprint” on our home planet, as well as the creation and use of technologies so powerful that even small groups could cause a global catastrophe.

Martin Rees, the UK Astronomer Royal, will explore this unprecedented moment in human history during his lecture on October 2, 2019. A former president of the Royal Society and master of Trinity College, Cambridge, Rees is a cosmologist whose work also explores the interfaces between science, ethics, and politics. Read More.

Mark your calendar! Tickets will be available on Monday, Sept. 16 at 9 AM ET

Didn’t get tickets for the lecture? We’ve got more ways to watch.
Join us at Perimeter on lecture night to watch live in the Black Hole Bistro.
Catch the live stream on Inside the Perimeter or watch it on YouTube the next day
Become a member of our donor thank you program! Learn more.

It took me a while to locate an address for the PI venue since I expect that information to be part of the announcement. (insert cranky emoticon here) Here’s the address: Perimeter Institute, Mike Lazaridis Theatre of Ideas, 31 Caroline St. N., Waterloo, ON

Before moving onto the next event, I’m including a paragraph from the event description that was not included in the announcement (from the PI Outreach Surviving the Century webpage),

In his October 2 [2019] talk – which kicks off the 2019/20 season of the Perimeter Institute Public Lecture Series – Rees will discuss the outlook for humans (or their robotic envoys) venturing to other planets. Humans, Rees argues, will be ill-adapted to new habitats beyond Earth, and will use genetic and cyborg technology to transform into a “post-human” species.

I first covered Sir Martin Rees and his concerns about technology (robots and cyborgs run amok) in this November 26, 2012 posting about existential risk. He and his colleagues at Cambridge University, UK, proposed a Centre for the Study of Existential Risk, which opened in 2015.

Straddling Sept. and Oct. at the movies in Vancouver

The Vancouver International Film Festival (VIFF) opened today, September 26, 2019. During its run to October 11, 2019, there’ll be a number of documentaries that touch on science. Here are the three documentaries that most closely adhere to the topics I’m most likely to address on this blog. There is a fourth documentary included here as it touches on ecology in a more hopeful fashion than is the current trend.

Human Nature

From the VIFF 2019 film description and ticket page,

One of the most significant scientific breakthroughs in history, the discovery of CRISPR has made it possible to manipulate human DNA, paving the path to a future of great possibilities.

The implications of this could mean the eradication of disease or, more controversially, the possibility of genetically pre-programmed children.

Breaking away from scientific jargon, Human Nature pieces together a complex account of bio-research for the layperson as compelling as a work of science-fiction. But whether the gene-editing powers of CRISPR (described as “a word processor for DNA”) are used for good or evil, they’re reshaping the world as we know it. As we push past the boundaries of what it means to be human, Adam Bolt’s stunning work of science journalism reaches out to scientists, engineers, and people whose lives could benefit from CRISPR technology, and offers a wide-ranging look at the pros and cons of designing our futures.

Tickets
Friday, September 27, 2019 at 11:45 AM
Vancity Theatre

Saturday, September 28, 2019 at 11:15 AM
International Village 10

Thursday, October 10, 2019 at 6:45 PM
SFU Goldcorp

According to VIFF, the tickets for the Sept. 27, 2019 show are going fast.

Resistance Fighters

From the VIFF 2019 film description and ticket page,

Since mass-production in the 1940s, antibiotics have been nothing less than miraculous, saving countless lives and revolutionizing modern medicine. It’s virtually impossible to imagine hospitals or healthcare without them. But after years of abuse and mismanagement by the medical and agricultural communities, superbugs resistant to antibiotics are reaching apocalyptic proportions. The ongoing rise in multi-resistant bacteria – unvanquishable microbes, currently responsible for 700,000 deaths per year and projected to kill 10 million yearly by 2050 if nothing changes – and the people who fight them are the subjects of Michael Wech’s stunning “science-thriller.”

Peeling back the carefully constructed veneer of the medical corporate establishment’s greed and complacency to reveal the world on the cusp of a potential crisis, Resistance Fighters sounds a clarion call of urgency. It’s an all-out war, one which most of us never knew we were fighting, to avoid “Pharmageddon.” Doctors, researchers, patients, and diplomats testify about shortsighted medical and economic practices, while Wech offers refreshingly original perspectives on environment, ecology, and (animal) life in general. As alarming as it is informative, this is a wake-up call the world needs to hear.

Sunday, October 6, 2019 at 5:45 PM
International Village 8

Thursday, October 10, 2019 at 2:15 PM
SFU Goldcorp

According to VIFF, the tickets for the Oct. 6, 2019 show are going fast.

Trust Machine: The Story of Blockchain

Strictly speaking, this is more of a technology story than a science story but I have written about blockchain and cryptocurrencies before so I’m including it. From the VIFF 2019 film description and ticket page,

For anyone who has questions about cryptocurrencies like Bitcoin (and who doesn’t?), Alex Winter’s thorough documentary is an excellent introduction to the blockchain phenomenon. Trust Machine offers a wide range of expert testimony and a variety of perspectives that explicate the promises and the risks inherent in this new manifestation of high-tech wizardry. And it’s not just money that blockchains threaten to disrupt: innovators as diverse as UNICEF and Imogen Heap make spirited arguments that the industries of energy, music, humanitarianism, and more are headed for revolutionary change.

A propulsive and subversive overview of this little-understood phenomenon, Trust Machine crafts a powerful and accessible case that a technologically decentralized economy is more than just a fad. As the aforementioned experts – tech wizards, underground activists, and even some establishment figures – argue persuasively for an embrace of the possibilities offered by blockchains, others criticize its bubble-like markets and inefficiencies. Either way, Winter’s film suggests a whole new epoch may be just around the corner, whether the powers that be like it or not.

Tuesday, October 1, 2019 at 11:00 AM
Vancity Theatre

Thursday, October 3, 2019 at 9:00 PM
Vancity Theatre

Monday, October 7, 2019 at 1:15 PM
International Village 8

According to VIFF, tickets for all three shows are going fast.

The Great Green Wall

For a little bit of hope, from the VIFF 2019 film description and ticket page,

“We must dare to invent the future.” In 2007, the African Union officially began a massively ambitious environmental project planned since the 1970s. Stretching through 11 countries and 8,000 km across the desertified Sahel region, on the southern edges of the Sahara, The Great Green Wall – once completed, a mosaic of restored, fertile land – would be the largest living structure on Earth.

Malian musician-activist Inna Modja embarks on an expedition through Senegal, Mali, Nigeria, Niger, and Ethiopia, gathering an ensemble of musicians and artists to celebrate the pan-African dream of realizing The Great Green Wall. Her journey is accompanied by a dazzling array of musical diversity, celebrating local cultures and traditions as they come together into a community to stand against the challenges of desertification, drought, migration, and violent conflict.

An unforgettable, beautiful exploration of a modern marvel of ecological restoration, and so much more than a passive source of information, The Great Green Wall is a powerful call to take action and help reshape the world.

Sunday, September 29, 2019 at 11:15 AM
International Village 10

Wednesday, October 2, 2019 at 6:00 PM
International Village 8
Standby – advance tickets are sold out but a limited number are likely to be released at the door

Wednesday, October 9, 2019 at 11:00 AM
International Village 9

As you can see, one show is already offering standby tickets only and the other two are selling quickly.

For venue locations, information about what ‘standby’ means, and much more, go here and click on the Festival tab. As for more information about the individual films, you’ll find links to trailers, running times, and more on the pages for which I’ve supplied links.

Brain Talks on October 16, 2019 in Vancouver

From time to time I get notices about a series titled Brain Talks from the Dept. of Psychiatry at the University of British Columbia. A September 11, 2019 announcement (received via email) focuses attention on the ‘guts of the matter’,

YOU ARE INVITED TO ATTEND:

BRAINTALKS: THE BRAIN AND THE GUT

WEDNESDAY, OCTOBER 16TH, 2019 FROM 6:00 PM – 8:00 PM

Join us on Wednesday October 16th [2019] for a series of talks exploring the relationship between the brain, microbes, mental health, diet and the gut. We are honored to host three phenomenal presenters for the evening: Dr. Brett Finlay, Dr. Leslie Wicholas, and Thara Vayali, ND.

DR. BRETT FINLAY is a Professor in the Michael Smith Laboratories at the University of British Columbia. Dr. Finlay’s research interests are focused on host-microbe interactions at the molecular level, specializing in Cellular Microbiology. He has published over 500 papers and has been inducted into the Canadian Medical Hall of Fame. He is the co-author of the books Let Them Eat Dirt and The Whole Body Microbiome.

DR. LESLIE WICHOLAS is a psychiatrist with an expertise in the clinical understanding of the gut-brain axis. She has become increasingly involved in the emerging field of Nutritional Psychiatry, exploring connections between diet, nutrition, and mental health. Currently, Dr. Wicholas is the director of the Food as Medicine program at the Mood Disorder Association of BC.

THARA VAYALI, ND holds a BSc in Nutritional Sciences and an MA in Education and Communications. She has trained in naturopathic medicine and advocates for awareness about women’s physiology and body literacy. Ms. Vayali is a frequent speaker and columnist who prioritizes engagement, understanding, and community as pivotal pillars for change.

Our event on Wednesday, October 16th [2019] will start with presentations from each of the three speakers, and end with a panel discussion inspired by audience questions. After the talks, at 7:30 pm, we host a social gathering with a rich spread of catered healthy food and non-alcoholic drinks. We look forward to seeing you there!

Paetzhold Theater

Vancouver General Hospital; Jim Pattison Pavilion, Vancouver, BC


That’s it for now.

Blockchain made physical: BlocKit

Caption: Parts of BlocKit Credit: Irni Khairuddin

I’m always on the lookout for something that helps make blockchain and cryptocurrency more understandable. (For the uninitiated or anyone like me who needed to refresh their memories, I have links to good essays on the topic further down in this posting.)

A July 10, 2019 news item on ScienceDaily announces a new approach to understanding blockchain technology,

A kit made from everyday objects is bringing the blockchain into the physical world.

The ‘BlocKit’, which includes items such as plastic tubs, clay discs, padlocks, envelopes, sticky notes and battery-powered candles, is aimed to help people understand how digital blockchains work and can also be used by innovators designing new systems and services around blockchain.

A team of computer scientists from Lancaster University, the University of Edinburgh in the UK, and the Universiti Teknologi MARA, in Malaysia, created the prototype BlocKit because blockchain — the decentralised digital infrastructure that is used to organise the cryptocurrency Bitcoin and holds promise to revolutionise many other sectors from finance, supply-chain and healthcare — is so difficult for people to comprehend.

A July 10, 2019 Lancaster University press release (also on EurekAlert), which originated the news item, expands on the theme,

“Despite growing interest in its potential, the blockchain is so novel, disruptive and complex, it is hard for most people to understand how these systems work,” said Professor Corina Sas of Lancaster University’s School of Computing and Communications. “We have created a prototype kit consisting of physical objects that fulfil the roles of different parts of the blockchain. The kit really helps people visualise the different component parts of blockchain, and how they all interact.

“Having tangible physical objects, such as a transparent plastic box for a Bitcoin wallet, clay discs for Bitcoins, padlocks for passwords and candles representing miners’ computational power, makes thinking around processes and systems much easier to comprehend.”

The BlocKit consisted of physical items that represented 11 key aspects of blockchain infrastructure and it was used to explore key characteristics of blockchain, such as trust – an important challenge for Bitcoin users. The kit was evaluated as part of a study involving 15 experienced Bitcoin users.

“We received very positive feedback from the people who used the kit in our study and, interestingly, we found that the BlocKit can also be used by designers looking to develop new services based around blockchain – such as managing patients’ health records for example.”

I will be providing a link to and a citation for the paper but first, I’m excerpting a few bits,

We report on a workshop with 15 bitcoin experts, [emphasis mine] 12 males, 3 females, (mean age 29, range 21-39). All participants had at least 2 years of engaging in bitcoin transactions: 9 had between 2 and 3 years, 4 had between 4 and 5 years, 2 had more than 6 years. All participants have at least graduate education, i.e., 6 BSc, 7 MScs, and 2 Ph.D. Participants were recruited through the mailing lists of two universities, and through a local Bitcoins meetup group. [p. 3]

A striking finding was the overwhelmingly positive experience supported by BlocKit. Findings show that 10 participants deeply enjoyed physically touching [emphasis mine] its objects and enacting their movement in space while talking about blockchain processes: “there is going to be other transactions from other people essentially, so let’s put a few bitcoins in that box. I love this stuff, this is amazing” [P12]. Participants suggested that BlocKit could be a valuable tool for learning about blockchain: “I think this all makes sense and would be fine to explain to the novices. It is cool, this is really an interesting kit” [P7]. Other participants suggested leveraging gamification principles for learning about blockchain: “It’s almost like you could turn this into some kind of cool game like a monopoly” [P5] [p. 5]

A significant finding is the value of the kit in supporting experts to materialize and reflect on their understanding of blockchain infrastructure and its inner working. We argue that through its materiality, the kit allows bringing the mental models into question, which in turn helps experts confirm their understandings, develop more nuanced understandings, or even revise some previously held, less accurate assumptions. [emphasis mine]

Even experts are still learning about bitcoin and blockchain, according to this research sample. It’s also interesting to note that the workshop participants enjoyed the physicality. I don’t see many mentions of it in my wanderings but I can’t help wondering if all this digitization is going to leave people starved for touch.

Getting back to blockchain, here’s the link and citation I promised,

BlocKit: A Physical Kit for Materializing and Designing for Blockchain Infrastructure by Irni Eliana Khairuddin, Corina Sas, and Chris Speed. Presented at Designing Interactive Systems (DIS) 2019, ACM International Conference Series [downloaded from https://eprints.lancs.ac.uk/id/eprint/132467/1/Design_Kit_DIS_28.pdf]

This paper is open access. As for BlocKit, it exists only as a prototype, according to the July 10, 2019 Lancaster University press release.
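For anyone who wants to see the chaining the kit makes physical, here’s a minimal sketch in Python of the core idea: each block is time-stamped and its hash covers the previous block’s hash, so a retroactive edit breaks every later link. This is an illustration only; the field names and shipment records are made up, and real Bitcoin blocks are far more elaborate.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a time-stamped block whose hash covers its contents
    and the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any edit to an earlier block
    invalidates that block and every link after it."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("timestamp", "data", "prev_hash")},
            sort_keys=True,
        ).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain of (invented) supply-chain records.
chain = [make_block("olive oil bottled, lot 42", prev_hash="0" * 64)]
chain.append(make_block("shipped to distributor", chain[-1]["hash"]))
chain.append(make_block("arrived at retailer", chain[-1]["hash"]))

assert verify_chain(chain)             # untampered chain checks out
chain[0]["data"] = "seed oil bottled"  # a retroactive edit...
assert not verify_chain(chain)         # ...is immediately detectable
```

The point the kit’s clay discs and padlocks dramatize is the same one the asserts show: you can’t quietly rewrite history without re-doing all the downstream work.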

Introductory essays for blockchain and cryptocurrency

Here are two of my favourites. First, there’s this February 6, 2018 essay (part ii of a series) by Tim Schneider on artnet.com explaining it all by using the art world and art market as examples,

… the fraught relationship between art and value lies at the molten core of several pieces made using blockchain technology. Part one of this series addressed how, in theory, the blockchain strengthens the markets for new media by introducing the concept of digital scarcity. This innovation means that works as simple as an “original” JPG or GIF could be made as rare as Francis Bacon paintings. (This fact leads to a host of business implications that will be covered in Part III.)

However, a handful of forward-looking artists is using the blockchain to do more than reset the market’s perception of supply and demand. The technology, their work proves, is more than new software—it’s also a new medium.

The description of how artists are using blockchain as a medium provides some of the best explanations of cryptocurrency and blockchain that I’ve been able to find.

The other essay, a January 5, 2018 article for Slate.com by Joshua Oliver, provides some detail I haven’t seen anywhere else (Note: A link has been removed),

Already, blockchain has been hailed as likely to revolutionize … well … everything. Banks, health care, voting, supply chains, fantasy football, Airbnb, coffee: Nothing is beyond the hypothetical reach of blockchain as a revolutionary force. These predictions are easy to sell because blockchain is still little-understood. If you don’t quite know what blockchain is, it’s easier to imagine that it is whatever you want it to be. But before we can begin to search for the real potential amid the mass of blockchain conjecture and hype, we need to clear up what exactly we mean when we say blockchain.

One cause of confusion is the phrase the blockchain, which makes it sound like blockchain is one specific thing. In reality, the word blockchain is commonly used to describe two broad types of computer systems. [emphases mine] Both use similar underlying protocols, but they have other important differences. Bitcoin represents one approach to using blockchain, one wedded to principles of radical decentralization. The second approach—pioneered by more business-minded players—puts blockchain to use without adopting bitcoin’s revolutionary, decentralized governance. Both of these designs are short-handed as blockchains, so it’s easy to miss the crucial differences. Without grasping these differences, it’s hard to understand where we are today in the development of this promising technology, which blockchain ventures are worth your attention, and what might happen next.

That’s all I’ve got for now.

Counterfeiting olive oil, honey, wine, and more

This seems like the right thing to post on April Fool’s Day (April 1, 2019) as the upcoming news item concerns fooling people, although not in any friendly, amusing way. More pleasantly, the other story I’m including holds the possibility of foiling the would-be adulterators/counterfeiters.

The problem and blockchain anti-counterfeiting measures

Adulterating or outright counterfeiting products such as olive oil isn’t new. I’m willing to bet the ancient Greeks, Romans, Persians, Egyptians, and others were intimately familiar with the practice. It seems that 2019 might see an increase in the practice according to a March 22, 2019 article by Emma Woollacott for BBC (British Broadcasting Corporation) news online,

“Fraud in the olive oil market has been going on a very long time,” says Susan Testa, director of culinary innovation at Italian olive oil producer Bellucci.

“Seed oil is added maybe; or it may contain only a small percentage of Italian oil and have oil from other countries added, while it just says Italian oil on the label.”

In February [2019] the Canadian Food Inspection Agency (CFIA) warned that poor olive harvests are likely to lead to a big increase in such adulterated oil this year.

And it’s far from the only product affected, with the European Union’s Knowledge Centre for Food Fraud and Quality recently highlighting wine, honey, fish, dairy products, meat and poultry as being frequently faked.


Food suppliers, like Bellucci, are making efforts to guarantee the provenance of their food themselves, using new tools such as blockchain technology.

Best-known for its role in crypto-currencies like Bitcoin, blockchain is a way of keeping records in which each block of data is time-stamped and linked irreversibly to the last, in a way that can’t be subsequently altered.

That makes it possible to keep a secure record of the product’s journey to the supermarket shelf.

Since the company was founded in 2013, Bellucci has aimed to build a reputation around the traceability of its oil. Customers can enter the lot number of a particular bottle into an app to see its precise provenance, right back to the groves where the olives were harvested.


“We expect an improvement in the exchange of information throughout the supply chain,” says Andrea Biagianti, chief information officer for Certified Origins, Bellucci’s parent company.

“We would also like the ability [to have] more transparency in the supply chain and the genuine trust of consumers.”

IBM’s Food Trust network, formally launched late last year, uses similar techniques.

“In the registration phase, you define the product and its properties – for example, the optical spectrum you see when you look at a bottle of whisky,” explains Andreas Kind, head of blockchain at IBM Research.

The appearance of the whisky is precisely recorded within the blockchain, meaning that the description can’t later be altered. Then transport companies, border control, storage providers or retailers, can see if the look of the liquid no longer matches the description or “optical signature”.

Meanwhile, labels holding tamper-proof “cryptoanchors” are fixed to the bottles. These contain tiny computers holding the product data – encrypted, or encoded, so it can’t be tampered with. The labels break when the bottle is opened.

Linking the packaging and the product in this way offers a kind of proof says Mr Kind, “a bit like when you buy a diamond and get a certificate.”


Woollacott’s March 22, 2019 article is fascinating and well worth reading in its entirety.
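The “optical signature” check Kind describes can be sketched in a few lines. The wavelengths, tolerance, and numbers below are hypothetical; the idea is simply that an immutable registered signature gives transporters, border control, and retailers something concrete to compare a bottle against later.

```python
def spectra_match(reference, observed, tolerance=0.02):
    """Compare an observed optical reading against the registered
    reference signature, allowing small per-wavelength measurement noise."""
    if len(reference) != len(observed):
        return False
    return all(abs(r - o) <= tolerance for r, o in zip(reference, observed))

# Hypothetical registered signature for a bottle of whisky
# (normalized absorbance at a few wavelengths).
registered = [0.12, 0.45, 0.83, 0.31]

assert spectra_match(registered, [0.13, 0.44, 0.82, 0.31])      # genuine bottle
assert not spectra_match(registered, [0.05, 0.60, 0.70, 0.40])  # contents swapped
```

Because the registered values live in a blockchain record rather than on the label, a counterfeiter can’t simply rewrite the description to match a doctored bottle.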

The honey problem and nuclear detection

Getting back to Canada, specifically, the province of British Columbia (BC), it seems honey producers are concerned that adulterated product is affecting their sales. A January 25, 2019 news article by Glenda Luymes for the Vancouver Sun describes the technology to detect the problem (Note: Links have been removed),

A high-tech honey-testing machine unveiled Thursday [January 24, 2019] in Chilliwack could help B.C. beekeepers root out “adulterated” honey imports that threaten to cheapen their product.

Using a nuclear magnetic resonance (NMR) machine, Peter Awram’s lab will be able to determine if cheap sweeteners, such as corn syrup or rice syrup, have been added to particular brands of honey to increase producers’ profits.

The machine will also create a “fingerprint” for each honey sample, which will be kept in a database to help distinguish premium B.C. honey from a flood of untested, adulterated honey entering Canada from around the world.

“We’d eventually like to see it lead to a certification scheme, where producers submit their honey for testing and get a label,” said Awram, who runs Worker Bee Honey Company with his parents, Jerry and Pia Awram. “It would give security to the people buying it.”

A study published in October [2018] in Scientific Reports found evidence of global honey fraud, calling honey the world’s “third-most adulterated food.” Researchers tested 100 honey samples from 18 honey-producing countries. They discovered 27 per cent of the samples were “of questionable authenticity,” while 52 of the samples from Asia were adulterated.

There’s more about honey, adulteration, and detection in this Vancouver Sun video,

You can find the Worker Bee Honey Company here and you can find a 25-minute presentation about honey and the NMR by Peter Awram for the 2018 BC Honey Producers Association annual general meeting here.
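The “fingerprint” database idea lends itself to a small sketch. Assuming, hypothetically, that each NMR fingerprint is reduced to a vector of peak intensities, a sample can be matched against reference honeys by cosine similarity and flagged when nothing in the database comes close; all names and numbers below are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length intensity vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(sample, database, threshold=0.98):
    """Return the best-matching reference label, or flag the sample
    as suspect if nothing in the database is close enough."""
    label, score = max(
        ((name, cosine(sample, ref)) for name, ref in database.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else "suspect: possible adulteration"

# Hypothetical NMR fingerprints reduced to a few peak intensities.
database = {
    "BC clover honey":    [0.9, 0.1, 0.3, 0.05],
    "BC blueberry honey": [0.7, 0.4, 0.2, 0.10],
}

assert classify([0.88, 0.12, 0.31, 0.05], database) == "BC clover honey"
# A syrup-heavy sample sits far from every reference fingerprint.
assert classify([0.2, 0.9, 0.8, 0.6], database).startswith("suspect")
```

A certification scheme like the one Awram describes would amount to running this kind of lookup at scale, with a label issued only when a sample matches its declared origin.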

Why not monetize your DNA for 2019?

I’m not a big fan of DNA (deoxyribonucleic acid) companies that promise to tell you about your ancestors and, depending on the kit, predisposition to certain health issues as per their reports about your genetic code. (I regularly pray no one in my family has decided to pay one of these companies to analyze their spit.)

During Christmas season 2018, the DNA companies (23andMe and Ancestry) advertised special prices so you could gift someone in your family with a kit. All this corporate largesse may not be wholly in service of the Christmas spirit. After all, there’s money to be made once they’ve gotten your sample.

Monetizing your DNA in 2016

I don’t know when 23andMe started selling DNA information or if any similar company predated their efforts but this June 21, 2016 article by Antonio Regalado for MIT (Massachusetts Institute of Technology) Review offers the earliest information I found,

“Welcome to You.” So says the genetic test kit that 23andMe will send to your home. Pay $199, spit in a tube, and several weeks later you’ll get a peek into your DNA. Have you got the gene for blond hair? Which of 36 disease risks could you pass to a child?

Run by entrepreneur Anne Wojcicki, the ex-wife of Google founder Sergey Brin, and until last year housed alongside the Googleplex, the company created a test that has been attacked by regulators and embraced by a curious public. It remains, nine years after its introduction, the only one of its kind sold directly to consumers. 23andMe has managed to amass a collection of DNA information about 1.2 million people, which last year began to prove its value when the company revealed it had sold access to the data to more than 13 drug companies. One, Genentech, anted up $10 million for a look at the genes of people with Parkinson’s disease.

That means 23andMe is monetizing DNA rather the way Facebook makes money from our “likes.” What’s more, it gets its customers to pay for the privilege. That idea so appeals to investors that they have valued the still-unprofitable company at over $1 billion. “Money follows data,” says Barbara Evans, a legal scholar at the University of Houston, who studies personal genetics. “It takes a lot of labor and capital to get that information in a form that is useful.”

Monetizing your DNA in 2018 and privacy concerns

Starting with Adele Peters’ December 13, 2018 article for Fast Company (Note: A link has been removed),

When 23andMe made a $300 million deal with GlaxoSmithKline [GSK] in July [2018]–so the pharmaceutical giant could access a vast store of genetic data as it works on new drugs–the consumers who actually provided that data didn’t get a cut of the proceeds. A new health platform is taking a different approach: If you choose to share your own DNA data or other health records, you’ll get company shares that will later pay you dividends if that data is sold.

Before getting to the start-up that would allow you, rather than a company, to profit from (or at least somewhat monetize) your DNA, I’m including a general overview of the July 2018 GSK/23andMe deal from Jamie Ducharme’s July 26, 2018 article for TIME (Note: Links have been removed),

Consumer genetic testing company 23andMe announced on Wednesday [July 25, 2018] that GlaxoSmithKline purchased a $300 million stake in the company, allowing the pharmaceutical giant to use 23andMe’s trove of genetic data to develop new drugs — and raising new privacy concerns for consumers

The “collaboration” is a way to make “novel treatments and cures a reality,” 23andMe CEO Anne Wojcicki said in a company blog post. But, though it isn’t 23andMe’s first foray into drug discovery, the deal doesn’t seem quite so simple to some medical experts — or some of the roughly 5 million 23andMe customers who have sent off tubes of their spit in exchange for ancestry and health insights

Perhaps the most obvious issue is privacy, says Peter Pitts, president of the Center for Medicine in the Public Interest, a non-partisan non-profit that aims to promote patient-centered health care.

“If people are concerned about their social security numbers being stolen, they should be concerned about their genetic information being misused,” Pitts says. “This information is never 100% safe. The risk is magnified when one organization shares it with a second organization. When information moves from one place to another, there’s always a chance for it to be intercepted by unintended third parties.”

That risk is real, agrees Dr. Arthur Caplan, head of the division of medical ethics at the New York University School of Medicine. Caplan says that any genetic privacy concerns also extend to your blood relatives, who likely did not consent to having their DNA tested — echoing some of the questions that arose after law enforcement officials used a genealogy website to find and arrest the suspected Golden State Killer in April [2018].

“A lot of people paid money to 23andMe to get their ancestry determined — fun, recreational stuff,” Caplan says. “Even though they may have signed a thing saying, ‘I’m okay if you use this information for medical research,’ I’m not sure they understood what that really meant. I’m not sure they understood that it meant, ‘Yes, we’ll go to Glaxo, and that’s where we’re really going to make a lot of money off of you.’”

A 23andMe spokesperson told TIME that data privacy is a “top priority” for the company, emphasizing that customer data isn’t used in research without consent, and that GlaxoSmithKline will only receive “summary statistics from analyses 23andMe conducts so that no single individual can be identified.”

Yes, the data is supposed to be stripped of identifying information but given how many times similar claims about geolocation data have been disproved, I am skeptical. DJ Pangburn’s September 26, 2017 article (Even This Data Guru Is Creeped Out By What Anonymous Location Data Reveals About Us) for Fast Company illustrates the fragility of ‘anonymized data’,

… as a number of studies have shown, even when it’s “anonymous,” stripped of so-called personally identifiable information, geographic data can help create a detailed portrait of a person and, with enough ancillary data, identify them by name

Curious to see this kind of data mining in action, I emailed Gilad Lotan, now vice president of BuzzFeed’s data science team. He agreed to look at a month’s worth of two different users’ anonymized location data, and to come up with individual profiles that were as accurate as possible.

The results, produced in just a few days’ time, range from the expected to the surprisingly revealing, and demonstrate just how “anonymous” data can identify individuals.

Last fall Lotan taught a class at New York University on surveillance that kicked off with an assignment like the one I’d given him: link anonymous location data with other data sets–from LinkedIn, Facebook, home registration and mortgage records, and other online data.
“It’s not hard to figure out who this [unnamed] person is,” says Lotan. In class, students found that tracking location data around holidays proved to be the easiest way to determine who, exactly, the data belonged to. “Basically,” he says, “visits to private homes that are owned and publicly registered.”

In 2013, researchers at MIT and the Université Catholique de Louvain in Belgium published a paper reporting on 15 months of study of human mobility data for over 1.5 million individuals. What they found is that only four spatio-temporal points are required to “uniquely identify 95% of the individuals.” The researchers concluded that there was very little privacy even in raw location data. Four years later, their calls for policies rectifying concerns about location tracking have fallen largely on deaf ears.
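The MIT/Louvain finding, that a mere handful of spatio-temporal points suffices to single out an individual, can be illustrated with a toy simulation. All the numbers below are invented for illustration (the real study used 15 months of mobile phone data for 1.5 million people), but the mechanism is the same: each extra known point shrinks the set of consistent traces dramatically.

```python
import random

random.seed(42)

# Toy model (all numbers invented): 1,000 users each leave 200
# (cell, hour) location pings over a 720-hour month in a 500-cell city.
N_USERS, N_PINGS, N_CELLS, N_HOURS = 1000, 200, 500, 720

traces = [
    set(zip([random.randrange(N_CELLS) for _ in range(N_PINGS)],
            random.sample(range(N_HOURS), N_PINGS)))
    for _ in range(N_USERS)
]

def matching_users(points):
    """Count anonymized traces consistent with the known points."""
    return sum(1 for trace in traces if points <= trace)

# An adversary who learns a few of one person's points narrows the
# crowd of 1,000 down very quickly.
target = traces[0]
for k in (1, 2, 4):
    known = set(random.sample(sorted(target), k))
    print(f"{k} known point(s) -> {matching_users(known)} candidate user(s)")
```

With four points, the candidate list almost always collapses to a single user, mirroring the 95% uniqueness figure the researchers report.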

Getting back to DNA, there was also some concern expressed at Fox News.

Other than warnings, I haven’t seen much about any possible legislation regarding DNA and privacy in either Canada or the US.

Now, let’s get to how you can monetize your self.

Me making money off me

I’ve found two possibilities for an individual who wants to consider monetizing their own DNA.

Health shares

Adele Peters’ December 13, 2018 article describes a start-up company and the model they’re proposing to allow you to profit from your own DNA (Note: Links have been removed),

“You can’t say data is valuable and then take that data away from everybody,” says Dawn Barry, president and cofounder of LunaPBC, the public benefit corporation that manages the community-owned platform, called LunaDNA, which recently got SEC approval to recognize health data as currency. “What we’re finding is that [our early adopters are] very excited about the transparency of this model–that when we all come together and create value, that value flows down to the individuals who shared their data.”

The platform shares some anonymized data with nonprofits, such as foundations that study rare diseases. In that case, money wouldn’t initially change hands, but “there could be intellectual property that at some point in time is monetized, and the community would share in that,” says Bob Kain, CEO and cofounder of LunaPBC. “When we have enough data in the near future, then we’ll work with pharmaceutical companies, for instance, to drive discovery for those companies. And they will pay market rates.”

The company doesn’t offer DNA analysis itself, but chose to focus on data management. If you’ve sent a tube of spit to 23andMe, AncestryDNA, MyHeritage, or FamilyTree DNA, you can contribute that data to LunaDNA and get shares. (If you’d rather not let the original testing company keep your data, you can also separately take the steps to delete it.)

“We looked at a number of different models to enable people to have ownership, including cryptocurrency, which is a proxy for ownership, too,” says Kain. “Cryptocurrency is hard to understand for most people, and right now, the regulatory landscape is blurry. So we thought, to move forward, we’d go with something much more traditional and easy to understand, and that is stock shares, basically.”

For sharing targeted genes, you get 10 shares. For sharing your whole genome, you get 300 shares. At the moment, that’s not worth very much–the valuation takes into account the risk that the data might not be monetized, and the fact that the startup isn’t the exclusive owner of your data. The SEC filing says that the estimated fair market value of a whole genome is only $21. Some other health information is worth far less; 20 days of data from a fitness tracker garners two shares, valued at 14¢. But as more people contribute data, the research value of the whole database (and dividends) will increase. If the shareholders ever decided to sell the company itself, they would also make money that way.
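A quick back-of-envelope check of those figures is reassuringly consistent. The share counts and the $21 genome valuation come from the article; the per-share price is derived from them.

```python
# Figures quoted in the article; the per-share price is derived.
whole_genome_shares = 300
whole_genome_value = 21.00           # USD, estimated fair market value

price_per_share = whole_genome_value / whole_genome_shares

targeted_genes_shares = 10           # for sharing targeted genes
fitness_tracker_shares = 2           # for 20 days of tracker data

print(f"implied price per share:  ${price_per_share:.2f}")
print(f"targeted genes (10 sh.):  ${targeted_genes_shares * price_per_share:.2f}")
print(f"20 days of tracker data:  ${fitness_tracker_shares * price_per_share:.2f}")
```

The two tracker shares at seven cents each work out to the 14¢ the article mentions, so the quoted numbers hang together.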

Luna’s is a very interesting approach and I encourage you to read the December 13, 2018 article in its entirety.

Blockchain and crypto me

At least one effort to introduce blockchain/cryptocurrency technology to the process for monetizing your DNA garnered a lot of attention in February 2018.

A February 8, 2018 article by Eric Rosenbaum for CNBC (a US cable tv channel) explores an effort by George Church (Note: Links have been removed),

It’s probably wise to be skeptical of anyone who says they have a new idea for a blockchain-based company, or worse still, a company changing its business model to focus on the crypto world. That iced tea company that shifted its model to the blockchain, or Kodak saying its road back to riches was managing photo rights using a blockchain system. Raise eyebrow, or move directly onto outright shake of head.

However, when a world-renowned Harvard geneticist announces he’s launching a blockchain-based start-up, it merits some attention. And it’s not the crypto-angle itself that might make you do a double-take, but the assets that will be managed, and exchanged, using digital currency: your DNA.

Harvard University genetics guru George Church — one of the scientists at the forefront of the CRISPR genetic engineering revolution — announced on Wednesday a start-up, Nebula Genomics, that will use the blockchain to not only allow individuals to share their personal genome for research purposes, but retain ownership and monetize their DNA through trading of a custom digital currency.

The genomics revolution has been exponentially advanced by drastic reductions in cost. As Nebula noted in a white paper explaining its business model, the first human genome was sequenced in 2001 at a cost of $3 billion. Today, human genome sequencing costs less than $1,000, and in a few years the price will drop below $100.

In fact, some big Silicon Valley start-ups, led by 23andMe, have capitalized on this rapid advance and already offer personal DNA testing kits for around $100 (sometimes with discounts even less).

Nebula took direct aim at 23andMe in its white paper, giving one reason why it can offer genetic testing for less:

“Today, 23andMe (23andme.com) and Ancestry (ancestry.com) are the two leading personal genomics companies. Both use DNA microarray-based genotyping for their genetic tests. It is an outdated and significantly less powerful alternative to DNA sequencing. Instead of sequencing continuous stretches of DNA, genotyping identifies single letters spaced at approximately regular intervals across the genome. …
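The distinction the white paper draws, between sampling single letters at intervals and reading continuous stretches, can be sketched with a toy example. The random “genome” and the 100-base spacing here are invented for illustration; real microarrays target specific known variant sites rather than evenly spaced positions.

```python
import random

random.seed(1)

# A toy 3,000-base "genome" (illustrative only).
genome = "".join(random.choice("ACGT") for _ in range(3000))

def sequence(g):
    """Sequencing reads continuous stretches -- here, the whole thing."""
    return g

def genotype(g, spacing=100):
    """Genotyping samples single letters at roughly regular intervals,
    leaving everything in between unread."""
    return {pos: g[pos] for pos in range(0, len(g), spacing)}

full = sequence(genome)
snps = genotype(genome)
print(f"sequencing reads {len(full):,} of {len(genome):,} bases")
print(f"genotyping reads {len(snps):,} of {len(genome):,} bases "
      f"({100 * len(snps) / len(genome):.0f}%)")
```

The sketch makes the gap concrete: genotyping recovers a sparse sample of positions, while sequencing recovers the whole string.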

Outdated genetic tests? Interesting, eh? Zoë Corbyn provides more information about Church’s plans in her February 18, 2018 article for the Guardian,

“Under the current system, personal genomics companies effectively own your personal genomics data, and you don’t see any benefit at all,” says Grishin [Dennis Grishin, Nebula co-founder]. “We want to eliminate the middleman.”

Although the aim isn’t to provide a get-rich-quick scheme, the company believes there is potential for substantial returns. Though speculative, its modelling suggests that someone in the US could earn up to 50 times the cost of sequencing their genome – about $50,000 at current rates – taking into account both what could be made from a lifetime of renting out their genetic data, and reductions in medical bills if the results throw up a potentially preventable disease.

The startup also thinks it can solve the problem of the dearth of genetic data researchers have to draw on, due to individuals – put off by cost or privacy concerns – not getting sequenced.

Payouts when you grant access to your genome would come in the form of Nebula tokens, the company’s cryptocurrency, and companies would need to buy tokens from the startup to pay people whose data they wanted to access. Though the value of a token is yet to be set and the number of tokens defined, it might, for example, take one Nebula token to get your genome sequenced. An individual new to the system could begin to earn fractions of a token by taking part in surveys about their health posted by prospective data buyers. When someone had earned enough, they could get sequenced and begin renting out their data and amassing tokens. Alternatively, if an individual wasn’t yet sequenced they may find data buyers willing to pay for or subsidise their genome sequencing in exchange for access to it. “Potentially you wouldn’t have to pay out of pocket for the sequencing of your genome,” says Grishin.

In all cases, stress Grishin and Obbad [Kamal Obbad, Nebula co-founder], the sequence would belong to the individual, so they could rent it out over and over, including to multiple companies simultaneously. And the data buyer would never take ownership or possession of it – rather, it would be stored by the individual (for example in their computer or on their Dropbox account) with Nebula then providing a secure computation platform on which the data buyer could compute on the data. “You stay in control of your data and you can share it securely with who you want to,” explains Obbad. Nebula makes money not by taking any transaction fee but by being a participant providing computing and storage services. The cryptocurrency would be able to be cashed out for real money via existing cryptocurrency exchanges.
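The access pattern Obbad describes, where a buyer computes on the data without ever taking possession of it, can be sketched as follows. The class, method, and variant names are invented for illustration; Nebula’s actual secure computation platform is, of course, far more involved than this.

```python
# Illustrative only: the raw data stays with its owner, and a buyer
# submits a computation and receives only its result.

class OwnedGenome:
    """Genetic data held by the individual, never handed over."""

    def __init__(self, variants):
        self._variants = variants      # stored with the owner

    def compute(self, query):
        """Run a buyer-supplied query locally; return only the answer."""
        return query(self._variants)

# The individual holds their own (invented) variant data...
mine = OwnedGenome({"rs123": "AG", "rs456": "TT"})

# ...and a buyer learns an answer, not the underlying data.
result = mine.compute(lambda v: v.get("rs123") == "AG")
print(result)
```

Because the data never leaves the owner, the same genome can answer any number of buyers’ queries, which is what lets an individual “rent it out over and over.”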

Hopefully, Luna and Nebula, as well as any competitors in this race to allow individuals to monetize their own DNA, will have excellent security.

For the curious, you can find Luna here and Nebula here. Note: I am not endorsing either company or any others mentioned here. This posting is strictly informational.

Call for abstracts: Seventh annual conference on governance of emerging technologies & science (GETS)

The conference itself will be held from May 22 – 24, 2019 at Arizona State University (ASU) and the deadline for abstracts is January 31, 2019. Here’s the news straight from the January 8, 2019 email announcement,

The Seventh Annual Conference on Governance of Emerging Technologies & Science (GETS)

May 22-24, 2019 / ASU / Sandra Day O’Connor College of Law
111 E. Taylor St., Phoenix, AZ
 
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, digital health, human enhancement, artificial intelligence, virtual reality, internet of things (IoT), blockchain and much, much more!
 
Submit Your Abstract Here: 2019 Abstract
or
Conference Website
 
Call for abstracts:
 
The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested. 
Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above.
 
·         Abstracts should not exceed 500 words and must contain your name and email address.
·         Abstracts must be submitted by January 31, 2019 to be considered. 
·         The sponsors will pay for the conference registration (including all conference meals and events) for one presenter for each accepted abstract. In addition, we will have limited funds available for travel subsidies (application included in submission form).
For more information, contact our Executive Director Josh Abbott at Josh.Abbott@asu.edu.

Good luck on your submission!

Paint to Programming: exploring the role of algorithms in SciArt; a Dec. 4, 2018 ArtSci Salon event in Toronto, Canada

Here’s the latest from a November 20, 2018 ArtSci Salon announcement received via email,

Paint to Programming: exploring the role of algorithms in SciArt

Description

What is the role of programming in artwork creation? Is programming primarily a medium to be hidden from an audience more interested in the interface than in its algorithmic content? Or is it both medium and content, revealing the inner workings, the politics, and the tactical/strategic uses of code and algorithmic complexity in a culture increasingly withdrawn from its crucial implications?

Thanks to a collaboration between Art the Science and ArtSci Salon, this event is meant to initiate a conversation to understand the many uses of algorithms in artistic and scientific research, from ways to solve problems in fluid mechanics by drawing inspiration from the dripping technique of Jackson Pollock, to exploring and making visible the complex dynamics of the blockchain, to using algorithms to process and display data for science communication.

Join ArtSci Salon and Art the Science at Fields for an evening of presentation and discussion with:

Julia Krolik : Exploring algorithms in SciArt
Owen Fernley: Creative coding
Sarah Friend: Software as a medium
Bernardo Palacios Muñiz: Modern painting: A fluid mechanics perspective

Moderator: Roberta Buiani

December 4th | 6pm-8pm

The Fields Institute for Research in Mathematical Sciences
222 College Street | Room 230
Toronto ON | M5T 3J1
Please RSVP here.

Bios

Julia Krolik is the founder and Chief Executive Officer of Art the Science, an organization dedicated to uniting and empowering artists and scientists to collectively advance scientific knowledge. As an exhibiting artist, focusing on science, art and new media, Julia has created works for CBC, the Ontario Science Centre, the Toronto Urban Film Festival and the Scotia Bank Photography Festival. 

Owen Fernley is an engineer and experimentalist. He has experience programming computational engines in Fortran and C and is currently building front-end web tools in JavaScript to aid in exploration geology. He co-created Decomposing Pianos, an experimental music collective focusing on projects related to art, science, experimental music and new media.

Sarah Friend is an artist and software engineer working at a large blockchain development studio. When not doing that, she creates games and other interactive experiences. Her practice investigates murky dichotomies – like those between privacy and transparency, centralization and decentralization, and the environment and technology – with playfulness and absurdist humour.

Bernardo Palacios Muñiz is a mechanical engineer and a researcher from Mexico City. His thesis at UNAM, “Descifrando a Pollock: Arte y Mecánica de Fluidos,” explored the technique implemented by Jackson Pollock through the perspective of fluid mechanics.

Like so many of the events from the ArtSci Salon, this is very timely. On a somewhat related note, there’s an art/AI development mentioned in my August 31, 2018 posting (scroll down about 70% of the way to the subhead ‘Artworks generated by an AI system are to be sold at Christie’s auction house’).

I’ve also mentioned ArtSci Salon’s presentation partner, Art the Science, in an October 23, 2017 posting. Amongst other programmes, they advertise and promote artist residencies. I notice that their events are held exclusively in Ontario and the descriptions for participants in their 2018 online gallery exhibit feature a preponderance of Ontario-based artists. I’m sure they’d like to get more participation from across the country but that takes extra time and effort, and volunteer organizations such as this one don’t have much of either to spare. Their three-year life (they were founded in 2015) is quite an accomplishment.

As for a more national art/sci or sciart network, maybe it’s time to organize something, eh?

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history, etc.: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third, or possibly second, century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, but its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their limits for storage. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. That bit about cryptocurrency makes more sense when you read the response to one of the interview questions.

Memory

It seems University of Alberta researchers may have found a way to increase memory exponentially, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the art of the science behind nanotip technology, which, thanks to Wolkow and his team’s continued work, has now reached a tipping point, meaning scaling up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes per square inch, roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.

As noted in the news release, there is an atom-sized song, which is available in this video,

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications, volume 9, Article number: 2778 (2018) DOI: https://doi.org/10.1038/s41467-018-05171-y Published 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)

(1) SHRINKING THE MANUFACTURING PROCESS TO THE ATOMIC SCALE HAS
ATTRACTED A LOT OF ATTENTION OVER THE YEARS STARTING WITH SCIENCE
FICTION OR RICHARD FEYNMAN OR K. ERIC DREXLER, ETC. IN ANY EVENT, THE
ORIGINS ARE CONTESTED SO I WON’T PUT YOU ON THE SPOT BY ASKING WHO
STARTED IT ALL INSTEAD ASKING HOW DID YOU GET STARTED?

I got started in this field about 6 years ago, when I undertook an MSc
with Dr. Wolkow here at the University of Alberta. Before that point, I
had only ever heard of a scanning tunneling microscope from what was
taught in my classes. I was aware of the famous IBM logo made up from
just a handful of atoms using this machine, but I didn’t know what
else could be done. Here, Dr. Wolkow introduced me to his line of
research, and I saw the immense potential for growth in this area and
decided to pursue it further. I had the chance to interact with and
learn from nanofabrication experts and gain the skills necessary to
begin playing around with my own techniques and ideas during my PhD.

(2) AS I UNDERSTAND IT, THESE ARE THE PIECES YOU’VE BEEN
WORKING ON: (1) THE TUNGSTEN MICROSCOPE TIP, WHICH MAKE[s] (2) THE SMALLEST
QUANTUM DOTS (SINGLE ATOMS OF SILICON), (3) THE AUTOMATION OF THE
QUANTUM DOT PRODUCTION PROCESS, AND (4) THE “MOST DENSE SOLID-STATE
MEMORY EVER CREATED.” WHAT’S MISSING FROM THE LIST AND IS THAT WHAT
YOU’RE WORKING ON NOW?

One of the things missing from the list, that we are currently working
on, is the ability to easily communicate (electrically) from the
macroscale (our world) to the nanoscale, without the use of a scanning
tunneling microscope. With this, we would be able to then construct
devices using the other pieces we’ve developed up to this point, and
then integrate them with more conventional electronics. This would bring
us yet another step closer to the realization of atomic-scale
electronics.

(3) PERHAPS YOU COULD CLARIFY SOMETHING FOR ME. USUALLY WHEN SOLID STATE
MEMORY IS MENTIONED, THERE’S GREAT CONCERN ABOUT MOORE’S LAW. IS
THIS WORK GOING TO CREATE A NEW LAW? AND, WHAT, IF ANYTHING, DOES
YOUR MEMORY DEVICE HAVE TO DO WITH QUANTUM COMPUTING?

That is an interesting question. With the density we’ve achieved,
there are not too many surfaces where atomic sites are more closely
spaced to allow for another factor of two improvement. In that sense, it
would be difficult to improve memory densities further using these
techniques alone. In order to continue Moore’s law, new techniques, or
storage methods would have to be developed to move beyond atomic-scale
storage.

The memory design itself does not have anything to do with quantum
computing, however, the lithographic techniques developed through our
work, may enable the development of certain quantum-dot-based quantum
computing schemes.

(4) THIS MAY BE A LITTLE OUT OF LEFT FIELD (OR FURTHER OUT THAN THE
OTHERS), COULD YOUR MEMORY DEVICE HAVE AN IMPACT ON THE
DEVELOPMENT OF CRYPTOCURRENCY AND BLOCKCHAIN? IF SO, WHAT MIGHT THAT
IMPACT BE?

I am not very familiar with these topics, however, co-author Taleana
Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an
impact in and of itself. Cryptocurrencies fall into two categories.
Proof of Work and Proof of Stake. Proof of Work relies on raw
computational power to solve a difficult math problem. If you solve it,
you get rewarded with a small amount of that coin. The problem is that
it can take a lot of power and energy for your computer to crunch
through that problem. Faster access to memory alone could perhaps
streamline small parts of this slightly, but it would be very slight.
Proof of Stake is already quite power efficient and wouldn’t really
have a drastic advantage from better faster computers.

Now, atomic-scale circuitry built using these new lithographic
techniques that we’ve developed, which could perform computations at
significantly lower energy costs, would be huge for Proof of Work coins.
One of the things holding bitcoin back, for example, is that mining it
is now consuming power on the order of the annual energy consumption
required by small countries. A more efficient way to mine while still
taking the same amount of time to solve the problem would make bitcoin
much more attractive as a currency.”
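Huff’s description of Proof of Work, raw computation grinding through a difficult math problem, can be shown in miniature with a toy hash puzzle. This sketch is illustrative only (real Bitcoin mining double-hashes block headers at vastly higher difficulty), but it shows why the energy cost scales the way it does.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose hash with the data starts with `difficulty`
    zero hex digits. Each extra digit multiplies the expected work by
    16, which is where Proof of Work's energy appetite comes from."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=4)   # ~65,536 hashes on average
print("winning nonce:", nonce)

# Verifying the winner takes a single hash -- cheap for everyone else.
check = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
assert check.startswith("0000")
```

The asymmetry, expensive to solve but trivial to verify, is exactly what the lower-energy atomic-scale circuitry Huff mentions would attack: the same puzzle, solved with fewer joules per hash.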

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.

Comments

As usual, after receiving the replies I have more questions but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, “… storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage”. In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested; it’s not all surcharges, taxes, and fees, and some of his recommendations constitute encouragement. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se, but as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., this is the kind of innovative work being done by scientists at the University of Alberta that may have implications far beyond the researchers’ original intentions, such as more efficient computation and data storage.
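For readers unfamiliar with why Proof of Work is so energy-hungry: miners race to find a nonce whose hash clears a difficulty target, and every failed guess consumes electricity. Here’s a minimal, illustrative sketch (a toy model, not Bitcoin’s actual mining protocol):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 digest starts with
    `difficulty` zero hex digits -- the 'work' in proof of work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # every failed guess is wasted computation

nonce, digest = mine("example transactions", difficulty=4)
print(nonce, digest)
```

Real mining uses a vastly harder target and specialized hardware, but the brute-force structure, and hence the energy cost, is the same in kind.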

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, for which they later received the 1986 Nobel Prize in physics. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.
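A rough back-of-envelope check of the 138 terabytes per square inch figure Dexter cites, assuming decimal terabytes (10^12 bytes), shows each bit getting roughly half a square nanometre of surface, which is plausible for bits encoded as single-atom vacancies:

```python
# Sanity check: at 138 TB per square inch, how much
# surface area does each bit occupy?
NM_PER_INCH = 2.54e7          # 1 inch = 2.54 cm = 2.54e7 nm
BITS = 138e12 * 8             # 138 terabytes (10^12 bytes) in bits
area_nm2 = NM_PER_INCH ** 2   # one square inch in nm^2
area_per_bit = area_nm2 / BITS
print(f"{area_per_bit:.2f} nm^2 per bit")  # prints "0.58 nm^2 per bit"
```

That is atomic-scale real estate: on the hydrogen-terminated silicon surface the researchers use, individual atom sites are spaced a few tenths of a nanometre apart.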

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40 as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found and still do find that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living, we’ve been running things this way for over a century, and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said, where science is concerned, it’s the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) most of the answers came from. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
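Assuming the Growth Index is the simple normalization the report describes (a country’s growth in publication output divided by the world growth rate), it can be sketched as follows. The publication counts below are hypothetical, chosen only to echo the report’s scores; the CCA’s exact formula may differ:

```python
def growth_index(country_start: float, country_end: float,
                 world_start: float, world_end: float) -> float:
    """Growth in a country's publication output over a period,
    normalized by world growth. GI > 1.0 means faster-than-average
    growth (a sketch of the report's measure)."""
    country_growth = country_end / country_start
    world_growth = world_end / world_start
    return country_growth / world_growth

# Hypothetical counts: world output grows 1.5x over the period.
world = (1_000_000, 1_500_000)
print(growth_index(100_000, 225_000, *world))  # GI 1.50, China-like
print(growth_index(50_000, 66_000, *world))    # GI 0.88, Canada-like
```

A GI of 1.50 thus means output grew 50% faster than the world average, while Canada’s 0.88 means growth lagged the world average even though absolute output still rose.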

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period, and when it’s over, other people get access and can innovate further. It’s not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner, so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on  patents as they are the only intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
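Assuming patent flow is the net balance the report describes (patents owned by Canadian assignees versus patents invented by Canadians, normalized by the latter), the quoted figures can be reproduced with hypothetical counts; the CCA’s exact normalization may differ:

```python
def patent_flow(owned: int, invented: int) -> float:
    """Net flow of patent ownership: (owned - invented) / invented.
    A negative value means fewer patents are owned domestically than
    were invented domestically (a sketch of the report's measure)."""
    return (owned - invented) / invented

# Hypothetical counts matching the report's 2003 and 2014 figures:
print(patent_flow(9_600, 10_000))  # -0.04: a 4% ownership deficit
print(patent_flow(7_400, 10_000))  # -0.26: a 26% ownership deficit
```

The sign convention makes the Nortel effect easy to read: a large block of Canadian-invented patents sold to foreign assignees pushes the flow sharply negative.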

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier they managed to sneak in a few items.  One of the strategies they used was putting special inserts into text boxes including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).
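For readers wondering what a “decentralized transaction ledger” looks like in miniature: each block commits to the hash of its predecessor, so any tampering with history breaks the chain. A toy sketch (nothing like a production blockchain, which adds consensus, signatures, and networking):

```python
import hashlib, json

def make_block(transactions: list, prev_hash: str) -> dict:
    """Append-only ledger entry committing to its predecessor's hash."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def valid(chain: list) -> bool:
    """Recompute every block's hash and check the back-links."""
    prev = "0" * 64
    for blk in chain:
        body = {"transactions": blk["transactions"],
                "prev_hash": blk["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["hash"] != expected or blk["prev_hash"] != prev:
            return False
        prev = blk["hash"]
    return True

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block1 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])
print(valid([genesis, block1]))  # True
```

Editing any past transaction changes that block’s hash, which no longer matches the `prev_hash` stored in its successor, so the forgery is immediately detectable.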

I’m glad to see the mention of blockchain while AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high- risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa with robot being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too.  ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues that I mention but they seem to forget and don’t use that information in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress; open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.
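Incidentally, the Watts-Strogatz ‘small-world’ model Victor chose to redesign is itself compact enough to sketch: start from a ring lattice where every node links to its k nearest neighbours, then rewire each edge with probability p. This is a simplified rendering of the paper’s algorithm, not Victor’s interactive version:

```python
import random

def watts_strogatz(n: int, k: int, p: float, seed: int = 0) -> dict:
    """Watts-Strogatz small-world graph as an adjacency-set dict.
    Small p keeps the lattice's high clustering while the few
    rewired 'shortcut' edges slash average path length -- the
    paper's key finding."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):                      # build the ring lattice
        for j in range(1, k // 2 + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):                      # rewiring pass
        for j in range(1, k // 2 + 1):
            w = (v + j) % n
            if rng.random() < p and w in adj[v]:
                choices = [u for u in range(n)
                           if u != v and u not in adj[v]]
                if choices:                 # swap edge (v,w) for (v,new_w)
                    new_w = rng.choice(choices)
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(new_w); adj[new_w].add(v)
    return adj

g = watts_strogatz(n=20, k=4, p=0.1)
print(sum(len(nbrs) for nbrs in g.values()) // 2)  # 40 edges, unchanged by rewiring
```

Each rewiring removes exactly one edge and adds exactly one, so the edge count (n·k/2) is preserved while the topology shifts from regular lattice toward random graph as p grows.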

Subregional R&D

I didn’t find this quite as compelling as the last time, which may be because there’s less information; I think the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provinces) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that British Columbia (BC) has its own leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print; p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, the day’s inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, this assessment is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr: not really appreciated in all of our complexities, although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talk about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts, philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr an actress, and that their signature work set the foundation for the electrical engineers (or people with that specialty) whose work led to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on its Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch of “Competing in a Global Innovation Economy: The Current State of R&D in Canada,” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article in The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.