An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
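The node computation described above can be sketched in a few lines of Python. This is a toy illustration of the weighted-sum-and-threshold idea, not code from the article; the function and variable names are mine:

```python
def node_output(inputs, weights, threshold):
    """One neural-net node: multiply each incoming value by its
    weight, sum the products, and 'fire' (pass the sum along) only
    if the total exceeds the threshold; otherwise pass nothing
    (represented here as 0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0
```

For example, `node_output([1, 2], [0.5, 0.5], 1)` sums to 1.5 and fires, while the same inputs against a threshold of 2 yield 0.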

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
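As a toy illustration of that train-by-adjustment loop, here is a single trainable unit in Python using the classic perceptron update rule. This is a simplification — modern multi-layer nets are trained with backpropagation — and all names here are mine:

```python
import random

def train_unit(examples, epochs=20, lr=0.1):
    """Start with random weights and a random bias (the threshold),
    then nudge both whenever the unit's output disagrees with the
    hand-applied label: the 'continually adjusted' step described
    above."""
    random.seed(0)  # reproducible random initialization
    n = len(examples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n)]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            fired = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
            error = label - fired  # 0 when the unit is already right
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Hand-labeled training data for logical AND
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_unit(data)
```

After training, inputs with the same label consistently yield the same output: only `[1, 1]` pushes the weighted sum above the learned threshold.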

Minds and machines

The neural nets described by McCulloch and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”


By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t April 17, 2017

One final note, I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Biodegradable nanoparticles to program immune cells for cancer treatments

The Fred Hutchinson Cancer Research Center in Seattle, Washington, has announced a proposed cancer treatment using nanoparticle-programmed T cells, according to an April 12, 2017 news release (received via email; also on EurekAlert). Note: A link has been removed,

Researchers at Fred Hutchinson Cancer Research Center have developed biodegradable nanoparticles that can be used to genetically program immune cells to recognize and destroy cancer cells — while the immune cells are still inside the body.

In a proof-of-principle study to be published April 17 [2017] in Nature Nanotechnology, the team showed that nanoparticle-programmed immune cells, known as T cells, can rapidly clear or slow the progression of leukemia in a mouse model.

“Our technology is the first that we know of to quickly program tumor-recognizing capabilities into T cells without extracting them for laboratory manipulation,” said Fred Hutch’s Dr. Matthias Stephan, the study’s senior author. “The reprogrammed cells begin to work within 24 to 48 hours and continue to produce these receptors for weeks. This suggests that our technology has the potential to allow the immune system to quickly mount a strong enough response to destroy cancerous cells before the disease becomes fatal.”

Cellular immunotherapies have shown promise in clinical trials, but challenges remain to making them more widely available and to being able to deploy them quickly. At present, it typically takes a couple of weeks to prepare these treatments: the T cells must be removed from the patient and genetically engineered and grown in special cell processing facilities before they are infused back into the patient. These new nanoparticles could eliminate the need for such expensive and time consuming steps.

Although his T-cell programming method is still several steps away from the clinic, Stephan imagines a future in which nanoparticles transform cell-based immunotherapies — whether for cancer or infectious disease — into an easily administered, off-the-shelf treatment that’s available anywhere.

“I’ve never had cancer, but if I did get a cancer diagnosis I would want to start treatment right away,” Stephan said. “I want to make cellular immunotherapy a treatment option the day of diagnosis and have it able to be done in an outpatient setting near where people live.”

The body as a genetic engineering lab

Stephan created his T-cell homing nanoparticles as a way to bring the power of cellular cancer immunotherapy to more people.

In his method, the laborious, time-consuming T-cell programming steps all take place within the body, creating a potential army of “serial killers” within days.

As reported in the new study, Stephan and his team developed biodegradable nanoparticles that turned T cells into CAR T cells, a particular type of cellular immunotherapy that has delivered promising results against leukemia in clinical trials.

The researchers designed the nanoparticles to carry genes that encode for chimeric antigen receptors, or CARs, that target and eliminate cancer. They also tagged the nanoparticles with molecules that make them stick like burrs to T cells, which engulf the nanoparticles. The cell’s internal traffic system then directs the nanoparticle to the nucleus, and it dissolves.

The study provides proof-of-principle that the nanoparticles can educate the immune system to target cancer cells. Stephan and his team designed the new CAR genes to integrate into chromosomes housed in the nucleus, making it possible for T cells to begin decoding the new genes and producing CARs within just one or two days.

Once the team determined that their CAR-carrying nanoparticles reprogrammed a noticeable percent of T cells, they tested their efficacy. Using a preclinical mouse model of leukemia, Stephan and his colleagues compared their nanoparticle-programming strategy against chemotherapy followed by an infusion of T cells programmed in the lab to express CARs, which mimics current CAR-T-cell therapy.

The nanoparticle-programmed CAR-T cells held their own against the infused CAR-T cells. Treatment with nanoparticles or infused CAR-T cells improved survival by 58 days on average, up from a median survival of about two weeks.

The study was funded by Fred Hutch’s Immunotherapy Initiative, the Leukemia & Lymphoma Society, the Phi Beta Psi Sorority, the National Science Foundation and the National Cancer Institute.

Next steps and other applications

Stephan’s nanoparticles still have to clear several hurdles before they get close to human trials. He’s pursuing new strategies to make the gene-delivery-and-expression system safe in people and working with companies that have the capacity to produce clinical-grade nanoparticles. Additionally, Stephan has turned his sights to treating solid tumors and is collaborating to this end with several research groups at Fred Hutch.

And, he said, immunotherapy may be just the beginning. In theory, nanoparticles could be modified to serve the needs of patients whose immune systems need a boost, but who cannot wait for several months for a conventional vaccine to kick in.

“We hope that this can be used for infectious diseases like hepatitis or HIV,” Stephan said. This method may be a way to “provide patients with receptors they don’t have in their own body,” he explained. “You just need a tiny number of programmed T cells to protect against a virus.”

Here’s a link to and a citation for the paper,

In situ programming of leukaemia-specific T cells using synthetic DNA nanocarriers by Tyrel T. Smith, Sirkka B. Stephan, Howell F. Moffett, Laura E. McKnight, Weihang Ji, Diana Reiman, Emmy Bonagofski, Martin E. Wohlfahrt, Smitha P. S. Pillai, & Matthias T. Stephan. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.57 Published online 17 April 2017

This paper is behind a paywall.

Health technology and the Canadian Broadcasting Corporation’s (CBC) two-tier health system ‘Viewpoint’

There’s a lot of talk and handwringing about Canada’s health care system, which ebbs and flows in almost predictable cycles. Jesse Hirsh, in a May 16, 2017 ‘Viewpoints’ segment (an occasional series run as part of the CBC’s [Canadian Broadcasting Corporation] flagship daily news programme, The National), dared to reframe the discussion as one about technology and ‘those who get it’ [the technologically literate] and ‘those who don’t’, a state Hirsh described as being illiterate, as you can see and hear in the following video.

I don’t know about you, but I’m getting tired of being called illiterate when I don’t know something. To be illiterate means you can’t read or write, and, as it turns out, I do both of those things on a daily basis (sometimes even in two languages). Despite my efforts, I’m ignorant about any number of things, and those numbers keep increasing day by day. BTW, is there anyone who isn’t having trouble keeping up?

Moving on from my rhetorical question, Hirsh has a point about the tech divide and about the need for discussion. It’s a point that hadn’t occurred to me (although I think he’s taking it in the wrong direction). In fact, this business of a tech divide already exists if you consider that people who live in rural environments and need the latest lifesaving techniques or complex procedures or access to highly specialized experts have to travel to urban centres. I gather that Hirsh feels that this divide isn’t necessarily going to be an urban/rural split so much as an issue of how technically literate you and your doctor are.  That’s intriguing but then his argumentation gets muddled. Confusingly, he seems to be suggesting that the key to the split is your access (not your technical literacy) to artificial intelligence (AI) and algorithms (presumably he’s referring to big data and data analytics). I expect access will come down more to money than technological literacy.

For example, money is likely to be a key issue when you consider his big pitch is for access to IBM’s Watson computer. (My Feb. 28, 2011 posting titled: Engineering, entertainment, IBM’s Watson, and product placement focuses largely on Watson, its winning appearances on the US television game show, Jeopardy, and its subsequent adoption into the University of Maryland’s School of Medicine in a project to bring Watson into the examining room with patients.)

Hirsh’s choice of IBM’s Watson is particularly interesting for a number of reasons. (1) Presumably there are companies other than IBM in this sector. Why do they not rate a mention? (2) Given the current situation with IBM and the Canadian federal government’s introduction of the Phoenix payroll system (a PeopleSoft product customized by IBM), which is a failure of monumental proportions (see a Feb. 23, 2017 article by David Reevely for the Ottawa Citizen and a May 25, 2017 article by Jordan Press for the National Post), there may be a little hesitation, if not downright resistance, to a large-scale implementation of any IBM product or service, regardless of where the blame lies. (3) Hirsh notes on the home page for his eponymous website,

I’m presently spending time at the IBM Innovation Space in Toronto Canada, investigating the impact of artificial intelligence and cognitive computing on all sectors and industries.

Yes, it would seem he has some sort of relationship with IBM not referenced in his Viewpoints segment on The National. Also, his description of the relationship isn’t especially illuminating, but perhaps it’s this? (from the IBM Innovation Space – Toronto Incubator Application webpage),

Our incubator

The IBM Innovation Space is a Toronto-based incubator that provides startups with a collaborative space to innovate and disrupt the market. Our goal is to provide you with the tools needed to take your idea to the next level, introduce you to the right networks and help you acquire new clients. Our unique approach, specifically around client engagement, positions your company for optimal growth and revenue at an accelerated pace.


IBM Bluemix
IBM Global Entrepreneur
Softlayer – an IBM Company

Startups partnered with the IBM Innovation Space can receive up to $120,000 in IBM credits at no charge for up to 12 months through the Global Entrepreneurship Program (GEP). These credits can be used in our products such as our IBM Bluemix developer platform, Softlayer cloud services, and our world-renowned IBM Watson ‘cognitive thinking’ APIs. We provide you with enterprise grade technology to meet your clients’ needs, large or small.

Collaborative workspace in the heart of Downtown Toronto
Mentorship opportunities available with leading experts
Access to large clients to scale your startup quickly and effectively
Weekly programming ranging from guest speakers to collaborative activities
Help with funding and access to local VCs and investors​

Final comments

While I have some issues with Hirsh’s presentation, I agree that we should be discussing the issues around increased automation of our health care system. A friend of mine’s husband is a doctor, and according to him, those prescriptions and orders you get when leaving the hospital? They are not so much made up by a doctor as spit out by a computer, based on the data that the doctors and nurses have supplied.

GIGO, bias, and de-skilling

Leaving aside the wonders that Hirsh describes, there’s an oldish saying in the computer business: garbage in/garbage out (GIGO). At its simplest, who’s going to catch a mistake? (There are lots of mistakes made in hospitals and other health care settings.)

There are also issues around the quality of research. Are all the research papers included in the data used by the algorithms going to be considered equal? There’s more than one case where a piece of problematic research got through peer review, was accepted uncritically, and was subsequently cited many times over. One of the ways to measure impact, i.e., importance, is to track the number of citations. There’s also the matter of where the research is published. A ‘high impact’ journal, such as Nature, Science, or Cell, automatically gives a piece of research a boost.

There are other kinds of bias as well. Increasingly, there’s discussion about algorithms being biased and about how machine learning (AI) can become biased. (See my May 24, 2017 posting: Machine learning programs learn bias, which highlights the issues and cites other FrogHeart posts on that and other related topics.)

These problems are to a large extent already present. Doctors have biases, research can be wrong, and it can take a long time before there are corrections. However, the advent of an automated health diagnosis and treatment system is likely to exacerbate the problems. For example, if you don’t agree with your doctor’s diagnosis or treatment, you can seek other opinions. What happens when your diagnosis and treatment have become data? Will the system give you another opinion? Who will you talk to? The doctor who got an answer from ‘Watson’? Is she or he going to debate Watson? Are you?

This leads to another issue, and that’s automated systems getting more credit than they deserve. Futurists such as Hirsh tend to underestimate people and overestimate the positive impact that automation will have. A computer, data analytics, or an AI system are tools, not gods. You’ll have as much luck petitioning one of those tools as you would Zeus.

The unasked question is how your doctor or other health professional will gain experience and skills if they never have to practice the basic, boring aspects of health care (asking questions for a history, reading medical journals to keep up with the research, etc.) and instead leave them to the computers. There had to be a reason for calling it a medical ‘practice’.

There are definitely going to be advantages to these technological innovations but thoughtful adoption of these practices (pun intended) should be our goal.

Who owns your data?

Another issue which is increasingly making itself felt is ownership of data. Jacob Brogan has written a provocative May 23, 2017 piece asking that question about the data gathered for DNA testing (Note: Links have been removed),

AncestryDNA’s pitch to consumers is simple enough. For $99 (US), the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.

After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting the broadest possible rights to own and exploit their genetic information.”

Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” [emphasis mine] Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.

Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.

Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.

I’m glad to see the company’s representatives are open to discussion and, later in the article, you’ll see there’ve already been some changes made. Still, there is no guarantee that the situation won’t again change, for ill this time.

What data do they have and what can they do with it?

It’s not everybody who thinks data collection and data analytics constitute problems. While some people might balk at the thought of their genetic data being traded around and possibly used against them, e.g., while hunting for a job, or turned into a source of revenue, there tends to be a more laissez-faire attitude toward other types of data. Andrew MacLeod’s May 24, 2017 article highlights political implications and privacy issues (Note: Links have been removed),

After a small Victoria [British Columbia, Canada] company played an outsized role in the Brexit vote, government information and privacy watchdogs in British Columbia and Britain have been consulting each other about the use of social media to target voters based on their personal data.

The U.K.’s information commissioner, Elizabeth Denham [Note: Denham was formerly B.C.’s Office of the Information and Privacy Commissioner], announced last week [May 17, 2017] that she is launching an investigation into “the use of data analytics for political purposes.”

The investigation will look at whether political parties or advocacy groups are gathering personal information from Facebook and other social media and using it to target individuals with messages, Denham said.

B.C.’s Office of the Information and Privacy Commissioner confirmed it has been contacted by Denham.

MacLeod’s March 6, 2017 article provides more details about the company’s role (Note: Links have been removed),

The “tiny” and “secretive” British Columbia technology company [AggregateIQ; AIQ] that played a key role in the Brexit referendum was until recently listed as the Canadian office of a much larger firm that has 25 years of experience using behavioural research to shape public opinion around the world.

The larger firm, SCL Group, says it has worked to influence election outcomes in 19 countries. Its associated company in the U.S., Cambridge Analytica, has worked on a wide range of campaigns, including Donald Trump’s presidential bid.

In late February [2017], the Telegraph reported that campaign disclosures showed that Vote Leave campaigners had spent £3.5 million — about C$5.75 million [emphasis mine] — with a company called AggregateIQ, run by CEO Zack Massingham in downtown Victoria.

That was more than the Leave side paid any other company or individual during the campaign and about 40 per cent of its spending ahead of the June referendum that saw Britons narrowly vote to exit the European Union.

According to media reports, Aggregate develops advertising to be used on sites including Facebook, Twitter and YouTube, then targets messages to audiences who are likely to be receptive.

The Telegraph story described Victoria as “provincial” and “picturesque” and AggregateIQ as “secretive” and “low-profile.”

Canadian media also expressed surprise at AggregateIQ’s outsized role in the Brexit vote.

The Globe and Mail’s Paul Waldie wrote “It’s quite a coup for Mr. Massingham, who has only been involved in politics for six years and started AggregateIQ in 2013.”

Victoria Times Colonist columnist Jack Knox wrote “If you have never heard of AIQ, join the club.”

The Victoria company, however, appears to be connected to the much larger SCL Group, which describes itself on its website as “the global leader in data-driven communications.”

In the United States it works through related company Cambridge Analytica and has been involved in elections since 2012. Politico reported in 2015 that the firm was working on Ted Cruz’s presidential primary campaign.

And NBC and other media outlets reported that the Trump campaign paid Cambridge Analytica millions to crunch data on 230 million U.S. adults, using information from loyalty cards, club and gym memberships and charity donations [emphasis mine] to predict how an individual might vote and to shape targeted political messages.

That’s quite a chunk of change and I don’t believe that gym memberships, charity donations, etc. were the only sources of information (in the US, there’s voter registration, credit card information, and more) but the list did raise my eyebrows. It would seem we are under surveillance at all times, even in the gym.

In any event, I hope that Hirsh’s call for discussion is successful and that the discussion includes more critical thinking about the implications of Hirsh’s ‘Brave New World’.

Café Scientifique (Vancouver, Canada) May 30, 2017 talk: Jerilyn Prior redux

I’m not sure ‘redux’ is exactly the right term but I’m going to declare it ‘close enough’. This upcoming talk was originally scheduled for March 2016 (my March 29, 2016 posting) but cancelled when the venerable The Railway Club abruptly closed its doors after 84 years of operation.

Our next café will happen on TUESDAY MAY 30TH, 7:30PM in the back room
at YAGGER'S DOWNTOWN (433 W Pender). Our speaker for the evening
will be DR. JERILYNN PRIOR, Professor of Endocrinology and
Metabolism at the University of British Columbia, founder and scientific
director of the Centre for Menstrual Cycle and Ovulation Research
(CeMCOR), director of the BC Center of the Canadian Multicenter
Osteoporosis Study (CaMOS), and a past president of the Society for
Menstrual Cycle Research.  The title of her talk is:


A 43-year-old with teenagers and a full-time job as executive director of a
not-for-profit is not sleeping. She wakes soaked a couple of times a night, not
every night, but especially around the time her period comes. As it does
frequently—it is heavy, even flooding. Her sexual interest is
virtually gone and she feels dry when she tries.

Her family doctor offered her The Pill. When she took it she got very
sore breasts, ankle swelling and high blood pressure. Her brain feels
fuzzy, she’s getting migraines, gaining weight and just can’t cope.
. . .
What’s going on? Does she need estrogen “replacement”?  If yes,
why when she’s still getting flow? Does The Pill work for other women?

We hope to see you there!

As I noted in March 2016, this seems more like a description for a workshop on perimenopause and consequently of more interest to doctors and perimenopausal women than the audience that Café Scientifique usually draws. Of course, I could be completely wrong.

‘Mother of all bombs’ is a nanoweapon?

According to physicist Louis A. Del Monte, in an April 14, 2017 opinion piece for the Huffington Post, the ‘mother of all bombs’ is a nanoweapon (Note: Links have been removed),

The United States military dropped its largest non-nuclear bomb, the GBU-43/B Massive Ordnance Air Blast Bomb (MOAB), nicknamed the “mother of all bombs,” on an ISIS cave and tunnel complex in the Achin District of the Nangarhar province, Afghanistan [on Thursday, April 13, 2017]. The Achin District is the center of ISIS activity in Afghanistan. This was the first use in combat of the GBU-43/B Massive Ordnance Air Blast (MOAB).

… Although it carries only about 8 tons of explosives, the explosive mixture delivers a destructive impact equivalent to 11 tons of TNT.

There is little doubt the United States Department of Defense is likely using nanometals, such as nanoaluminum (alternately spelled nano-aluminum) mixed with TNT, to enhance the detonation properties of the MOAB. The use of nanoaluminum mixed with TNT has been known to boost the explosive power of TNT since the early 2000s. If true, this means that the largest known United States non-nuclear bomb is a nanoweapon. When most of us think about nanoweapons, we think of small, essentially invisible weapons, like nanobots (i.e., tiny robots made using nanotechnology). That can often be the case. But, as defined in my recent book, Nanoweapons: A Growing Threat to Humanity (Potomac 2017), “Nanoweapons are any military technology that exploits the power of nanotechnology.” This means even the largest munition, such as the MOAB, is a nanoweapon if it uses nanotechnology.

… The explosive is H6, which is a mixture of five ingredients (by weight):

  • 44.0% RDX & nitrocellulose (RDX is a well-known explosive, more powerful than TNT, often used with TNT and other explosives. Nitrocellulose is a propellant or low-order explosive, originally known as gun-cotton.)
  • 29.5% TNT
  • 21.0% powdered aluminum
  • 5.0% paraffin wax as a phlegmatizing (i.e., stabilizing) agent
  • 0.5% calcium chloride (to absorb moisture and eliminate the production of gas)

Note, the TNT and powdered aluminum account for over half the explosive payload by weight. It is highly likely that the “powdered aluminum” is nanoaluminum, since nanoaluminum can enhance the destructive properties of TNT. This argues that H6 is a nano-enhanced explosive, making the MOAB a nanoweapon.
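As a quick back-of-the-envelope check on the figures quoted above (my own sketch, not anything from Del Monte’s piece): the five ingredient percentages do sum to 100, and the quoted yield of 8 tons of explosive producing an 11-ton-TNT-equivalent blast implies an enhancement factor of about 1.4.

```python
# Ingredient percentages of H6 by weight, as listed in the quoted piece
h6 = {"RDX & nitrocellulose": 44.0, "TNT": 29.5,
      "powdered aluminum": 21.0, "paraffin wax": 5.0,
      "calcium chloride": 0.5}

total = sum(h6.values())
print(f"Total: {total}%")  # the composition sums to 100.0%

# TNT-equivalence factor implied by the quoted MOAB figures
explosive_tons, tnt_equivalent_tons = 8, 11
factor = tnt_equivalent_tons / explosive_tons
print(f"Enhancement factor: {factor}")  # 1.375
```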

The United States GBU-43/B Massive Ordnance Air Blast Bomb (MOAB) was the largest non-nuclear bomb known until Russia detonated the Aviation Thermobaric Bomb of Increased Power, termed the “father of all bombs” (FOAB), in 2007. It is reportedly four times more destructive than the MOAB, even though it carries only 7 tons of explosives versus the 8 tons of the MOAB. Interestingly, the Russians claim to achieve the more destructive punch using nanotechnology.

If you have the time, I encourage you to read the piece in its entirety.

Repairing a ‘broken’ heart with a 3D printed patch

The idea of using stem cells to help heal your heart so you don’t have scar tissue seems to be a step closer to reality. From an April 14, 2017 news item on ScienceDaily, which announces the research and explains why scar tissue in your heart is a problem,

A team of biomedical engineering researchers, led by the University of Minnesota, has created a revolutionary 3D-bioprinted patch that can help heal scarred heart tissue after a heart attack. The discovery is a major step forward in treating patients with tissue damage after a heart attack.

According to the American Heart Association, heart disease is the No. 1 cause of death in the U.S., killing more than 360,000 people a year. During a heart attack, a person loses blood flow to the heart muscle and that causes cells to die. Our bodies can’t replace those heart muscle cells so the body forms scar tissue in that area of the heart, which puts the person at risk for compromised heart function and future heart failure.

An April 13, 2017 University of Minnesota news release (also on EurekAlert but dated April 14, 2017), which originated the news item, describes the work in more detail,

In this study, researchers from the University of Minnesota-Twin Cities, University of Wisconsin-Madison, and University of Alabama-Birmingham used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab.

When the cell patch was placed on a mouse following a simulated heart attack, the researchers saw significant increase in functional capacity after just four weeks. Since the patch was made from cells and structural proteins native to the heart, it became part of the heart and absorbed into the body, requiring no further surgeries.

“This is a significant step forward in treating the No. 1 cause of death in the U.S.,” said Brenda Ogle, an associate professor of biomedical engineering at the University of Minnesota. “We feel that we could scale this up to repair hearts of larger animals and possibly even humans within the next several years.”

Ogle said that this research differs from previous research in that the patch is modeled after a digital, three-dimensional scan of the structural proteins of native heart tissue. The digital model is made into a physical structure by 3D printing with proteins native to the heart and further integrating cardiac cell types derived from stem cells. Only with 3D printing of this type can researchers achieve the one-micron resolution needed to mimic structures of native heart tissue.

“We were quite surprised by how well it worked given the complexity of the heart,” Ogle said.  “We were encouraged to see that the cells had aligned in the scaffold and showed a continuous wave of electrical signal that moved across the patch.”

Ogle said they are already beginning the next step to develop a larger patch that they would test on a pig heart, which is similar in size to a human heart.

The researchers have made this video of beating heart cells in a petri dish available,

Date: Published on Apr 14, 2017

Caption: Researchers used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab. Credit: Brenda Ogle, University of Minnesota

Here’s a link to and a citation for the paper,

Myocardial Tissue Engineering With Cells Derived From Human-Induced Pluripotent Stem Cells and a Native-Like, High-Resolution, 3-Dimensionally Printed Scaffold by Ling Gao, Molly E. Kupfer, Jangwook P. Jung, Libang Yang, Patrick Zhang, Yong Da Sie, Quyen Tran, Visar Ajeti, Brian T. Freeman, Vladimir G. Fast, Paul J. Campagnola, Brenda M. Ogle, Jianyi Zhang. Circulation Research, April 14, 2017, Volume 120, Issue 8, pp. 1318-1325. Originally published online January 9, 2017.

This paper appears to be open access.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.
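The response-time comparison described above can be sketched in a few lines; the millisecond values below are invented for illustration, not data from any actual IAT study:

```python
# Hypothetical response times (ms); values invented for illustration
congruent = [610, 580, 645, 600]    # e.g. flower+pleasant, insect+unpleasant pairings
incongruent = [790, 820, 760, 805]  # e.g. flower+unpleasant, insect+pleasant pairings

def mean(xs):
    return sum(xs) / len(xs)

# The IAT effect: how much slower subjects are on incongruent pairings
iat_effect = mean(incongruent) - mean(congruent)
print(f"IAT effect: {iat_effect:.0f} ms")
```

A larger positive gap indicates a stronger implicit association between the concepts paired in the congruent condition.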

The Princeton team devised an experiment using a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
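The target-word/attribute-word comparison can be sketched as a cosine-similarity association score: how much closer a word’s vector sits to one attribute set than to the other. This is a simplified sketch of the approach described above, with tiny three-dimensional toy vectors invented for illustration (real GloVe vectors have hundreds of dimensions):

```python
import numpy as np

def cos(u, v):
    # Cosine similarity between two word vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to set B
    return (np.mean([cos(word_vec, a) for a in attr_a])
            - np.mean([cos(word_vec, b) for b in attr_b]))

# Toy vectors, invented for illustration only
flower = np.array([0.9, 0.1, 0.0])
insect = np.array([0.1, 0.9, 0.0])
pleasant = [np.array([0.8, 0.2, 0.1]), np.array([0.9, 0.0, 0.2])]
unpleasant = [np.array([0.2, 0.8, 0.1]), np.array([0.0, 0.9, 0.2])]

print(association(flower, pleasant, unpleasant))  # positive: flower leans pleasant
print(association(insect, pleasant, unpleasant))  # negative: insect leans unpleasant
```

In the actual study, scores like these are computed over whole sets of target words (e.g., European American versus African American names) and summarized as an effect size, rather than measured one word at a time.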

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this accurately captured bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, they can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral third-person pronoun, “o.” Plugged into the well-known online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware, there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys more in depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer only used in industry to carry out repetitive or potentially dangerous tasks. In the past years, robots have entered our everyday lives and also children are more and more likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence of the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR) which comes into force in May 2018, which will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities, and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here; both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter and Dylan Yamada-Rice, and organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Nanocoating to reduce dental implant failures

Scientists at Plymouth University (UK) have developed a nanocoating that could reduce the number of dental implant failures. From a March 24, 2017 news item on Nanowerk (Note: A link has been removed),

According to the American Academy of Implant Dentistry (AAID), 15 million Americans have crown or bridge replacements and three million have dental implants — with this latter number rising by 500,000 a year. The AAID estimates that the value of the American and European market for dental implants will rise to $4.2 billion by 2022.

Dental implants are a successful form of treatment for patients, yet according to a study published in 2005, five to 10 per cent of all dental implants fail.

The reasons for this failure are several-fold – mechanical problems, poor connection to the bones in which they are implanted, infection or rejection. When failure occurs the dental implant must be removed.

The main reason for dental implant failure is peri-implantitis. This is the destructive inflammatory process affecting the soft and hard tissues surrounding dental implants. This occurs when pathogenic microbes in the mouth and oral cavity develop into biofilms, which protects them and encourages growth. Peri-implantitis is caused when the biofilms develop on dental implants.

A research team comprising scientists from the School of Biological Sciences, Peninsula Schools of Medicine and Dentistry, and the School of Engineering at the University of Plymouth has joined forces to develop and evaluate the effectiveness of a new nanocoating for dental implants to reduce the risk of peri-implantitis.

The results of their work are published in the journal Nanotoxicology (“Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings”).

A March 27, 2017 Plymouth University press release, which originated the news item, gives more details about the research,

In the study, the research team created a new approach using a combination of silver, titanium oxide and hydroxyapatite nanocoatings.

The application of the combination to the surface of titanium alloy implants successfully inhibited bacterial growth and reduced the formation of bacterial biofilm on the surface of the implants by 97.5 per cent.

Not only did the combination result in the effective eradication of infection, it created a surface with anti-biofilm properties which supported successful integration into surrounding bone and accelerated bone healing.

Professor Christopher Tredwin, Head of Plymouth University Peninsula School of Dentistry, commented:

“In this cross-Faculty study we have identified the means to protect dental implants against the most common cause of their failure. The potential of our work for increased patient comfort and satisfaction, and reduced costs, is great and we look forward to translating our findings into clinical practice.”

The University of Plymouth was the first university in the UK to secure Research Council Funding in Nanoscience and this project is the latest in a long line of projects investigating nanotechnology and human health.

Nanoscience activity at the University of Plymouth is led by Professor Richard Handy, who has represented the UK on matters relating to the Environmental Safety and Human Health of Nanomaterials at the Organisation for Economic Cooperation and Development (OECD). He commented:

“As yet there are no nano-specific guidelines in dental or medical implant legislation and we are, with colleagues elsewhere, guiding the way in this area. The EU recognises that medical devices and implants must: perform as expected for its intended use, and be better than similar items in the market; be safe for the intended use or safer than an existing item, and; be biocompatible or have negligible toxicity.”

He added:

“Our work has been about proving these criteria which we have done in vitro. The next step would be to demonstrate the effectiveness of our discovery, perhaps with animal models and then human volunteers.”

Dr Alexandros Besinis, Lecturer in Mechanical Engineering at the School of Engineering, University of Plymouth, led the research team. He commented:

“Current strategies to render the surface of dental implants antibacterial with the aim to prevent infection and peri-implantitis development, include application of antimicrobial coatings loaded with antibiotics or chlorhexidine. However, such approaches are usually effective only in the short-term, and the use of chlorhexidine has also been reported to be toxic to human cells. The significance of our new study is that we have successfully applied a dual-layered silver-hydroxyapatite nanocoating to titanium alloy medical implants which helps to overcome these risks.”

Dr Besinis has been an Honorary Teaching Fellow at the Peninsula School of Dentistry since 2011 and has recently joined the School of Engineering. His research interests focus on advanced engineering materials and the use of nanotechnology to build novel biomaterials and medical implants with improved mechanical, physical and antibacterial properties.

Here’s a link to and a citation for the paper,

Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings by A. Besinis, S. D. Hadi, H. R. Le, C. Tredwin & R. D. Handy. Nanotoxicology, Volume 11, Issue 3, 2017, pp. 327-338. Published online 17 Mar 2017.

This paper appears to be open access.

Edible water bottles by Ooho!

Courtesy: Skipping Rocks Lab

As far as I’m concerned, that looks more like a breast implant than a water bottle, which, from a psycho-social perspective, could lead to some interesting research papers. It is, in fact, a new type of water bottle. From an April 10, 2017 article by Adele Peters for Fast Company (Note: Links have been removed),

If you run in a race in London in the near future and pass a hydration station, you may be handed a small, bubble-like sphere of water instead of a bottle. The gelatinous packaging, called the Ooho, is compostable–or even edible, if you want to swallow it. And after two years of development, its designers are ready to bring it to market.

Three London-based design students first created a prototype of the edible bottle in 2014 as an alternative to plastic bottles. The idea gained internet hype (though also some scorn for a hilarious video that made the early prototypes look fairly impossible to use without soaking yourself).

The problem it was designed to solve–the number of disposable bottles in landfills–keeps growing. In the U.K. alone, around 16 million are trashed each day; another 19 million are recycled, but still have the environmental footprint of a product made from oil. In the U.S., recycling rates are even lower. …

The new packaging is based on the culinary technique of spherification, which is also used to make fake caviar and the tiny juice balls added to boba tea [bubble tea?]. Dip a ball of ice in calcium chloride and brown algae extract, and you can form a spherical membrane that keeps holding the ice as it melts and returns to room temperature.

An April 25, 2014 article by Kashmira Gander describes the technology and some of the problems that had to be solved before bringing this product to market,

To make the bottle [Ooho!], students at the Imperial College London gave a frozen ball of water a gelatinous layer by dipping it into a calcium chloride solution.

They then soaked the ball in another solution made from brown algae extract to encapsulate the ice in a second membrane, and reinforce the structure.

However, Ooho still has teething problems, as the membrane is only as thick as a fruit skin, and therefore makes transporting the object more difficult than a regular bottle of water.

“This is a problem we’re trying to address with a double container,” Rodrigo García González, who created Ooho with fellow students Pierre Paslier and Guillaume Couche, explained to the Smithsonian. “The idea is that we can pack several individual edible Oohos into a bigger Ooho container [to make] a thicker and more resistant membrane.”

According to Peters’ Fast Company article, the issues have been resolved,

Because the membrane is made from food ingredients, you can eat it instead of throwing it away. The Jell-O-like packaging doesn’t have a natural taste, but it’s possible to add flavors to make it more appetizing.

The package doesn’t have to be eaten every time, since it’s also compostable. “When people try it for the first time, they want to eat it because it’s part of the experience,” says Pierre Paslier, cofounder of Skipping Rocks Lab, the startup developing the packaging. “Then it will be just like the peel of a fruit. You’re not expected to eat the peel of your orange or banana. We are trying to follow the example set by nature for packaging.”

The outer layer of the package is always meant to be peeled like fruit–one thin outer layer of the membrane peels away to keep the inner layer clean and can then be composted. (While compostable cups are an alternative solution, many can only be composted in industrial facilities; the Ooho can be tossed on a simple home compost pile, where it will decompose within weeks).

The company is targeting both outdoor events and cafes. “Where we see a lot of potential for Ooho is outdoor events–festivals, marathons, places where basically there are a lot of people consuming packaging over a very short amount of time,” says Paslier.

I encourage you to read Peters’ article in its entirety if you have the time. You can also find more information on the Skipping Rocks Lab website and on the company’s crowdfunding campaign on CrowdCube.