Category Archives: nanotechnology

Health technology and the Canadian Broadcasting Corporation’s (CBC) two-tier health system ‘Viewpoint’

There’s a lot of talk and handwringing about Canada’s health care system, which ebbs and flows in almost predictable cycles. Jesse Hirsh, in a May 16, 2017 ‘Viewpoints’ segment (an occasional series run as part of the CBC’s [Canadian Broadcasting Corporation] flagship daily news programme, The National), dared to reframe the discussion as one about technology and ‘those who get it’ [the technologically literate] and ‘those who don’t’, a state Hirsh described as being illiterate, as you can see and hear in the following video.

I don’t know about you but I’m getting tired of being called illiterate when I don’t know something. To be illiterate means you can’t read and write and, as it turns out, I do both of those things on a daily basis (sometimes even in two languages). Despite my efforts, I’m ignorant about any number of things and the number keeps increasing day by day. BTW, is there anyone who isn’t having trouble keeping up?

Moving on from my rhetorical question, Hirsh has a point about the tech divide and about the need for discussion. It’s a point that hadn’t occurred to me (although I think he’s taking it in the wrong direction). In fact, this business of a tech divide already exists if you consider that people who live in rural environments and need the latest lifesaving techniques or complex procedures or access to highly specialized experts have to travel to urban centres. I gather that Hirsh feels that this divide isn’t necessarily going to be an urban/rural split so much as an issue of how technically literate you and your doctor are.  That’s intriguing but then his argumentation gets muddled. Confusingly, he seems to be suggesting that the key to the split is your access (not your technical literacy) to artificial intelligence (AI) and algorithms (presumably he’s referring to big data and data analytics). I expect access will come down more to money than technological literacy.

For example, money is likely to be a key issue when you consider his big pitch is for access to IBM’s Watson computer. (My Feb. 28, 2011 posting titled: Engineering, entertainment, IBM’s Watson, and product placement focuses largely on Watson, its winning appearances on the US television game show, Jeopardy, and its subsequent adoption into the University of Maryland’s School of Medicine in a project to bring Watson into the examining room with patients.)

Hirsh’s choice of IBM’s Watson is particularly interesting for a number of reasons. (1) Presumably there are companies other than IBM in this sector. Why do they not rate a mention? (2) Given the current situation with IBM and the Canadian federal government’s introduction of the Phoenix payroll system (a PeopleSoft product customized by IBM), which is a failure of monumental proportions (see a Feb. 23, 2017 article by David Reevely for the Ottawa Citizen and a May 25, 2017 article by Jordan Press for the National Post), there may be a little hesitation, if not downright resistance, to a large scale implementation of any IBM product or service, regardless of where the blame lies. (3) Hirsh notes on the home page for his eponymous website,

I’m presently spending time at the IBM Innovation Space in Toronto Canada, investigating the impact of artificial intelligence and cognitive computing on all sectors and industries.

Yes, it would seem he has some sort of relationship with IBM not referenced in his Viewpoints segment on The National. Also, his description of the relationship isn’t especially illuminating but perhaps it’s this? (from the IBM Innovation Space – Toronto Incubator Application webpage),

Our incubator

The IBM Innovation Space is a Toronto-based incubator that provides startups with a collaborative space to innovate and disrupt the market. Our goal is to provide you with the tools needed to take your idea to the next level, introduce you to the right networks and help you acquire new clients. Our unique approach, specifically around client engagement, positions your company for optimal growth and revenue at an accelerated pace.

OUR SERVICES

IBM Bluemix
IBM Global Entrepreneur
Softlayer – an IBM Company
Watson

Startups partnered with the IBM Innovation Space can receive up to $120,000 in IBM credits at no charge for up to 12 months through the Global Entrepreneurship Program (GEP). These credits can be used in our products such as our IBM Bluemix developer platform, Softlayer cloud services, and our world-renowned IBM Watson ‘cognitive thinking’ APIs. We provide you with enterprise grade technology to meet your clients’ needs, large or small.

Collaborative workspace in the heart of Downtown Toronto
Mentorship opportunities available with leading experts
Access to large clients to scale your startup quickly and effectively
Weekly programming ranging from guest speakers to collaborative activities
Help with funding and access to local VCs and investors​

Final comments

While I have some issues with Hirsh’s presentation, I agree that we should be discussing the issues around increased automation of our health care system. A friend of mine’s husband is a doctor and, according to him, those prescriptions and orders you get when leaving the hospital? They are not made up by a doctor so much as spit out by a computer based on the data that the doctors and nurses have supplied.

GIGO, bias, and de-skilling

Leaving aside the wonders that Hirsh describes, there’s an oldish saying in the computer business: garbage in/garbage out (GIGO). At its simplest, who’s going to catch a mistake? (There are lots of mistakes made in hospitals and other health care settings.)

There are also issues around the quality of research. Are all the research papers included in the data used by the algorithms going to be considered equal? There’s more than one case where a piece of problematic research has been accepted uncritically, even after getting through peer review, and subsequently cited many times over. One of the ways to measure impact, i.e., importance, is to track the number of citations. There’s also the matter of where the research is published. A ‘high impact’ journal, such as Nature, Science, or Cell, automatically gives a piece of research a boost.

There are other kinds of bias as well. Increasingly, there’s discussion about algorithms being biased and about how machine learning (AI) can become biased. (See my May 24, 2017 posting: Machine learning programs learn bias, which highlights the issues and cites other FrogHeart posts on that and other related topics.)

These problems are to a large extent already present. Doctors have biases, research can be wrong, and it can take a long time before there are corrections. However, the advent of an automated health diagnosis and treatment system is likely to exacerbate the problems. For example, if you don’t agree with your doctor’s diagnosis or treatment, you can seek other opinions. What happens when your diagnosis and treatment have become data? Will the system give you another opinion? Who will you talk to? The doctor who got an answer from ‘Watson’? Is she or he going to debate Watson? Are you?

This leads to another issue: automated systems getting more credit than they deserve. Futurists such as Hirsh tend to underestimate people and overestimate the positive impact that automation will have. A computer, a data analytics program, or an AI system is a tool, not a god. You’ll have as much luck petitioning one of those tools as you would Zeus.

The unasked question is this: how will your doctor or other health professional gain experience and skills if they never have to practice the basic, boring aspects of health care (taking a history, reading medical journals to keep up with the research, etc.) and instead leave them to the computers? There had to be a reason for calling it a medical ‘practice’.

There are definitely going to be advantages to these technological innovations but thoughtful adoption of these practices (pun intended) should be our goal.

Who owns your data?

Another issue which is increasingly making itself felt is ownership of data. Jacob Brogan has written a provocative May 23, 2017 piece for slate.com asking that question about the data Ancestry.com gathers for DNA testing (Note: Links have been removed),

AncestryDNA’s pitch to consumers is simple enough. For $99 (US), the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.

After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting Ancestry.com the broadest possible rights to own and exploit their genetic information.”

Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used Ancestry.com, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” [emphasis mine] Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.

Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.

Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.

I’m glad to see the company’s representatives are open to discussion and, later in the article, you’ll see there’ve already been some changes made. Still, there is no guarantee that the situation won’t again change, for ill this time.

What data do they have and what can they do with it?

It’s not everybody who thinks data collection and data analytics constitute problems. While some people might balk at the thought of their genetic data being traded around and possibly used against them, e.g., while hunting for a job, or turned into a source of revenue, there tends to be a more laissez-faire attitude to other types of data. Andrew MacLeod’s May 24, 2017 article for thetyee.ca highlights political implications and privacy issues (Note: Links have been removed),

After a small Victoria [British Columbia, Canada] company played an outsized role in the Brexit vote, government information and privacy watchdogs in British Columbia and Britain have been consulting each other about the use of social media to target voters based on their personal data.

The U.K.’s information commissioner, Elizabeth Denham [Note: Denham was formerly B.C.’s Office of the Information and Privacy Commissioner], announced last week [May 17, 2017] that she is launching an investigation into “the use of data analytics for political purposes.”

The investigation will look at whether political parties or advocacy groups are gathering personal information from Facebook and other social media and using it to target individuals with messages, Denham said.

B.C.’s Office of the Information and Privacy Commissioner confirmed it has been contacted by Denham.

MacLeod’s March 6, 2017 article for thetyee.ca provides more details about the company’s role (Note: Links have been removed),

The “tiny” and “secretive” British Columbia technology company [AggregateIQ; AIQ] that played a key role in the Brexit referendum was until recently listed as the Canadian office of a much larger firm that has 25 years of experience using behavioural research to shape public opinion around the world.

The larger firm, SCL Group, says it has worked to influence election outcomes in 19 countries. Its associated company in the U.S., Cambridge Analytica, has worked on a wide range of campaigns, including Donald Trump’s presidential bid.

In late February [2017], the Telegraph reported that campaign disclosures showed that Vote Leave campaigners had spent £3.5 million — about C$5.75 million [emphasis mine] — with a company called AggregateIQ, run by CEO Zack Massingham in downtown Victoria.

That was more than the Leave side paid any other company or individual during the campaign and about 40 per cent of its spending ahead of the June referendum that saw Britons narrowly vote to exit the European Union.

According to media reports, Aggregate develops advertising to be used on sites including Facebook, Twitter and YouTube, then targets messages to audiences who are likely to be receptive.

The Telegraph story described Victoria as “provincial” and “picturesque” and AggregateIQ as “secretive” and “low-profile.”

Canadian media also expressed surprise at AggregateIQ’s outsized role in the Brexit vote.

The Globe and Mail’s Paul Waldie wrote “It’s quite a coup for Mr. Massingham, who has only been involved in politics for six years and started AggregateIQ in 2013.”

Victoria Times Colonist columnist Jack Knox wrote “If you have never heard of AIQ, join the club.”

The Victoria company, however, appears to be connected to the much larger SCL Group, which describes itself on its website as “the global leader in data-driven communications.”

In the United States it works through related company Cambridge Analytica and has been involved in elections since 2012. Politico reported in 2015 that the firm was working on Ted Cruz’s presidential primary campaign.

And NBC and other media outlets reported that the Trump campaign paid Cambridge Analytica millions to crunch data on 230 million U.S. adults, using information from loyalty cards, club and gym memberships and charity donations [emphasis mine] to predict how an individual might vote and to shape targeted political messages.

That’s quite a chunk of change and I don’t believe that gym memberships, charity donations, etc. were the only sources of information (in the US, there’s voter registration, credit card information, and more) but the list did raise my eyebrows. It would seem we are under surveillance at all times, even in the gym.

In any event, I hope that Hirsh’s call for discussion is successful and that the discussion includes more critical thinking about the implications of Hirsh’s ‘Brave New World’.

Café Scientifique (Vancouver, Canada) May 30, 2017 talk: Jerilynn Prior redux

I’m not sure ‘redux’ is exactly the right term but I’m going to declare it ‘close enough’. This upcoming talk was originally scheduled for March 2016 (my March 29, 2016 posting) but cancelled when the venerable The Railway Club abruptly closed its doors after 84 years of operation.

Our next café will happen on TUESDAY MAY 30TH, 7:30PM in the back room
at YAGGER'S DOWNTOWN (433 W Pender). Our speaker for the evening
will be DR. JERILYNN PRIOR, Professor of Endocrinology and
Metabolism at the University of British Columbia, founder and scientific
director of the Centre for Menstrual Cycle and Ovulation Research
(CeMCOR), director of the BC Center of the Canadian Multicenter
Osteoporosis Study (CaMOS), and a past president of the Society for
Menstrual Cycle Research.  The title of her talk is:

IS PERIMENOPAUSE ESTROGEN DEFICIENCY?
SORTING ENGRAINED MISINFORMATION ABOUT WOMEN’S MIDLIFE REPRODUCTIVE
TRANSITION

43 years old with teenagers a full-time executive director of a not for
profit is not sleeping, she wakes soaked a couple of times a night, not
every night but especially around the time her period comes. As it does
frequently—it is heavy, even flooding. Her sexual interest is
virtually gone and she feels dry when she tries.

Her family doctor offered her The Pill. When she took it she got very
sore breasts, ankle swelling and high blood pressure. Her brain feels
fuzzy, she’s getting migraines, gaining weight and just can’t cope.
. . .
What’s going on? Does she need estrogen “replacement”?  If yes,
why when she’s still getting flow? Does The Pill work for other women?
_WHAT DO WE KNOW ABOUT THE WHAT, WHY, HOW LONG AND HOW TO HELP
SYMPTOMATIC PERIMENOPAUSAL WOMEN?_

We hope to see you there!

As I noted in March 2016, this seems more like a description for a workshop on perimenopause and consequently of more interest to doctors and perimenopausal women than to the audience that Café Scientifique usually draws. Of course, I could be completely wrong.

‘Mother of all bombs’ is a nanoweapon?

According to physicist Louis A. Del Monte, in an April 14, 2017 opinion piece for HuffingtonPost.com, the ‘mother of all bombs’ is a nanoweapon (Note: Links have been removed),

The United States military dropped its largest non-nuclear bomb, the GBU-43/B Massive Ordnance Air Blast Bomb (MOAB), nicknamed the “mother of all bombs,” on an ISIS cave and tunnel complex in the Achin District of the Nangarhar province, Afghanistan [on Thursday, April 13, 2017]. The Achin District is the center of ISIS activity in Afghanistan. This was the first use in combat of the GBU-43/B Massive Ordnance Air Blast (MOAB).

… Although it carries only about 8 tons of explosives, the explosive mixture delivers a destructive impact equivalent of 11 tons of TNT.

There is little doubt the United States Department of Defense is likely using nanometals, such as nanoaluminum (alternately spelled nano-aluminum) mixed with TNT, to enhance the detonation properties of the MOAB. The use of nanoaluminum mixed with TNT was known to boost the explosive power of the TNT since the early 2000s. If true, this means that the largest known United States non-nuclear bomb is a nanoweapon. When most of us think about nanoweapons, we think small, essentially invisible weapons, like nanobots (i.e., tiny robots made using nanotechnology). That can often be the case. But, as defined in my recent book, Nanoweapons: A Growing Threat to Humanity (Potomac 2017), “Nanoweapons are any military technology that exploits the power of nanotechnology.” This means even the largest munition, such as the MOAB, is a nanoweapon if it uses nanotechnology.

… The explosive is H6, which is a mixture of five ingredients (by weight):

  • 44.0% RDX & nitrocellulose (RDX is a well-known explosive, more powerful than TNT, often used with TNT and other explosives. Nitrocellulose is a propellant or low-order explosive, originally known as gun-cotton.)
  • 29.5% TNT
  • 21.0% powdered aluminum
  • 5.0% paraffin wax as a phlegmatizing (i.e., stabilizing) agent
  • 0.5% calcium chloride (to absorb moisture and eliminate the production of gas)

Note, the TNT and powdered aluminum account for over half the explosive payload by weight. It is highly likely that the “powdered aluminum” is nanoaluminum, since nanoaluminum can enhance the destructive properties of TNT. This argues that H6 is a nano-enhanced explosive, making the MOAB a nanoweapon.

The United States GBU-43/B Massive Ordnance Air Blast Bomb (MOAB) was the largest non-nuclear bomb known until Russia detonated the Aviation Thermobaric Bomb of Increased Power, termed the “father of all bombs” (FOAB), in 2007. It is reportedly four times more destructive than the MOAB, even though it carries only 7 tons of explosives versus the 8 tons of the MOAB. Interestingly, the Russians claim to achieve the more destructive punch using nanotechnology.

If you have the time, I encourage you to read the piece in its entirety.
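For the numerically inclined, here is a quick sanity check of the figures quoted above. This is just my own arithmetic written out as a short Python sketch (the variable names are mine, not anything from Del Monte’s piece),

# Back-of-the-envelope check of the numbers quoted from Del Monte's article.
h6_weight_percent = {
    "RDX & nitrocellulose": 44.0,
    "TNT": 29.5,
    "powdered aluminum": 21.0,
    "paraffin wax": 5.0,
    "calcium chloride": 0.5,
}

# The five ingredients should account for the whole mixture by weight.
print(sum(h6_weight_percent.values()))  # 100.0

# TNT plus powdered aluminum: just over half the payload, as the excerpt notes.
print(h6_weight_percent["TNT"] + h6_weight_percent["powdered aluminum"])  # 50.5

# About 8 tons of explosive delivering an 11-ton TNT-equivalent blast
# implies an effective TNT equivalence factor of roughly 1.4.
print(11 / 8)  # 1.375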

Repairing a ‘broken’ heart with a 3D printed patch

The idea of using stem cells to help heal your heart so you don’t have scar tissue seems to be a step closer to reality. From an April 14, 2017 news item on ScienceDaily which announces the research and explains why scar tissue in your heart is a problem,

A team of biomedical engineering researchers, led by the University of Minnesota, has created a revolutionary 3D-bioprinted patch that can help heal scarred heart tissue after a heart attack. The discovery is a major step forward in treating patients with tissue damage after a heart attack.

According to the American Heart Association, heart disease is the No. 1 cause of death in the U.S. killing more than 360,000 people a year. During a heart attack, a person loses blood flow to the heart muscle and that causes cells to die. Our bodies can’t replace those heart muscle cells so the body forms scar tissue in that area of the heart, which puts the person at risk for compromised heart function and future heart failure.

An April 13, 2017 University of Minnesota news release (also on EurekAlert but dated April 14, 2017), which originated the news item, describes the work in more detail,

In this study, researchers from the University of Minnesota-Twin Cities, University of Wisconsin-Madison, and University of Alabama-Birmingham used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab.

When the cell patch was placed on a mouse following a simulated heart attack, the researchers saw significant increase in functional capacity after just four weeks. Since the patch was made from cells and structural proteins native to the heart, it became part of the heart and absorbed into the body, requiring no further surgeries.

“This is a significant step forward in treating the No. 1 cause of death in the U.S.,” said Brenda Ogle, an associate professor of biomedical engineering at the University of Minnesota. “We feel that we could scale this up to repair hearts of larger animals and possibly even humans within the next several years.”

Ogle said that this research is different from previous research in that the patch is modeled after a digital, three-dimensional scan of the structural proteins of native heart tissue.  The digital model is made into a physical structure by 3D printing with proteins native to the heart and further integrating cardiac cell types derived from stem cells.  Only with 3D printing of this type can we achieve one micron resolution needed to mimic structures of native heart tissue.

“We were quite surprised by how well it worked given the complexity of the heart,” Ogle said.  “We were encouraged to see that the cells had aligned in the scaffold and showed a continuous wave of electrical signal that moved across the patch.”

Ogle said they are already beginning the next step to develop a larger patch that they would test on a pig heart, which is similar in size to a human heart.

The researchers have made this video of beating heart cells in a petri dish available,

Date: Published on Apr 14, 2017

Caption: Researchers used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab. Credit: Brenda Ogle, University of Minnesota

Here’s a link to and a citation for the paper,

Myocardial Tissue Engineering With Cells Derived From Human-Induced Pluripotent Stem Cells and a Native-Like, High-Resolution, 3-Dimensionally Printed Scaffold by Ling Gao, Molly E. Kupfer, Jangwook P. Jung, Libang Yang, Patrick Zhang, Yong Da Sie, Quyen Tran, Visar Ajeti, Brian T. Freeman, Vladimir G. Fast, Paul J. Campagnola, Brenda M. Ogle, Jianyi Zhang. Circulation Research, April 14, 2017, Volume 120, Issue 8, pp. 1318-1325. https://doi.org/10.1161/CIRCRESAHA.116.310277 Originally published online January 9, 2017

This paper appears to be open access.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” published April 14  [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at University of Bath, and CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program where it essentially functioned like a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.
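For readers who want a more concrete sense of how bias gets measured in word embeddings, here is a minimal sketch of the association-score idea described above. To be clear, this is my own illustration, not the researchers’ code: the tiny three-dimensional ‘vectors’ are invented stand-ins, whereas the actual study used GloVe vectors with hundreds of dimensions trained on roughly 840 billion words of web text. The principle is the same: a word’s bias shows up as the difference between its average cosine similarity to one set of attribute words (e.g., pleasant) and to another (e.g., unpleasant).

import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, pleasant_vecs, unpleasant_vecs):
    # Mean similarity to 'pleasant' attributes minus mean similarity to
    # 'unpleasant' attributes; a positive score means the word leans pleasant.
    return (np.mean([cosine(word_vec, p) for p in pleasant_vecs])
            - np.mean([cosine(word_vec, u) for u in unpleasant_vecs]))

# Hand-made, purely illustrative 3-dimensional 'embeddings'.
vectors = {
    "flower": np.array([0.9, 0.1, 0.0]),
    "insect": np.array([0.1, 0.9, 0.0]),
    "caress": np.array([0.8, 0.2, 0.1]),
    "love":   np.array([0.9, 0.0, 0.1]),
    "filth":  np.array([0.1, 0.8, 0.1]),
    "ugly":   np.array([0.2, 0.9, 0.0]),
}

pleasant = [vectors["caress"], vectors["love"]]
unpleasant = [vectors["filth"], vectors["ugly"]]

print("flower:", association(vectors["flower"], pleasant, unpleasant))  # positive
print("insect:", association(vectors["insect"], pleasant, unpleasant))  # negative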

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book that makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC  press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys more in depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer only used in industry to carry out repetitive or potentially dangerous tasks. In the past years, robots have entered our everyday lives and also children are more and more likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in class room. There is also evidence about the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e the sounds, images and movements recorded by connected toys is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often go unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR) which comes into force in May 2018, which will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies  are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter, and Dylan Yamada-Rice; organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Nanocoating to reduce dental implant failures

Scientists at Plymouth University (UK) have developed a nanocoating that could reduce the number of dental implant failures. From a March 24, 2017 news item on Nanowerk (Note: A link has been removed),

According to the American Academy of Implant Dentistry (AAID), 15 million Americans have crown or bridge replacements and three million have dental implants — with this latter number rising by 500,000 a year. The AAID estimates that the value of the American and European market for dental implants will rise to $4.2 billion by 2022.

Dental implants are a successful form of treatment for patients, yet according to a study published in 2005, five to 10 per cent of all dental implants fail.

The reasons for this failure are several-fold – mechanical problems, poor connection to the bones in which they are implanted, infection or rejection. When failure occurs the dental implant must be removed.

The main reason for dental implant failure is peri-implantitis. This is the destructive inflammatory process affecting the soft and hard tissues surrounding dental implants. This occurs when pathogenic microbes in the mouth and oral cavity develop into biofilms, which protects them and encourages growth. Peri-implantitis is caused when the biofilms develop on dental implants.

A research team comprising scientists from the School of Biological Sciences, Peninsula Schools of Medicine and Dentistry and the School of Engineering at the University of Plymouth, have joined forces to develop and evaluate the effectiveness of a new nanocoating for dental implants to reduce the risk of peri-implantitis.

The results of their work are published in the journal Nanotoxicology (“Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings”).

A March 27, 2017 Plymouth University press release, which originated the news item, gives more details about the research,

In the study, the research team created a new approach using a combination of silver, titanium oxide and hydroxyapatite nanocoatings.

The application of the combination to the surface of titanium alloy implants successfully inhibited bacterial growth and reduced the formation of bacterial biofilm on the surface of the implants by 97.5 per cent.

Not only did the combination result in the effective eradication of infection, it created a surface with anti-biofilm properties which supported successful integration into surrounding bone and accelerated bone healing.

Professor Christopher Tredwin, Head of Plymouth University Peninsula School of Dentistry, commented:

“In this cross-Faculty study we have identified the means to protect dental implants against the most common cause of their failure. The potential of our work for increased patient comfort and satisfaction, and reduced costs, is great and we look forward to translating our findings into clinical practice.”

The University of Plymouth was the first university in the UK to secure Research Council Funding in Nanoscience and this project is the latest in a long line of projects investigating nanotechnology and human health.

Nanoscience activity at the University of Plymouth is led by Professor Richard Handy, who has represented the UK on matters relating to the Environmental Safety and Human Health of Nanomaterials at the Organisation for Economic Cooperation and Development (OECD). He commented:

“As yet there are no nano-specific guidelines in dental or medical implant legislation and we are, with colleagues elsewhere, guiding the way in this area. The EU recognises that medical devices and implants must: perform as expected for its intended use, and be better than similar items in the market; be safe for the intended use or safer than an existing item, and; be biocompatible or have negligible toxicity.”

He added:

“Our work has been about proving these criteria which we have done in vitro. The next step would be to demonstrate the effectiveness of our discovery, perhaps with animal models and then human volunteers.”

Dr Alexandros Besinis, Lecturer in Mechanical Engineering at the School of Engineering, University of Plymouth, led the research team. He commented:

“Current strategies to render the surface of dental implants antibacterial with the aim to prevent infection and peri-implantitis development, include application of antimicrobial coatings loaded with antibiotics or chlorhexidine. However, such approaches are usually effective only in the short-term, and the use of chlorhexidine has also been reported to be toxic to human cells. The significance of our new study is that we have successfully applied a dual-layered silver-hydroxyapatite nanocoating to titanium alloy medical implants which helps to overcome these risks.”

Dr Besinis has been an Honorary Teaching Fellow at the Peninsula School of Dentistry since 2011 and has recently joined the School of Engineering. His research interests focus on advanced engineering materials and the use of nanotechnology to build novel biomaterials and medical implants with improved mechanical, physical and antibacterial properties.

Here’s a link to and a citation for the paper,

Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings by A. Besinis, S. D. Hadi, H. R. Le, C. Tredwin & R. D. Handy.  Nanotoxicology Volume 11, 2017 – Issue 3  Pages 327-338  http://dx.doi.org/10.1080/17435390.2017.1299890 Published online: 17 Mar 2017

This paper appears to be open access.

Edible water bottles by Ooho!

Courtesy: Skipping Rocks Lab

As far as I’m concerned, that looks more like a breast implant than a water bottle, which, from a psycho-social perspective, could lead to some interesting research papers. It is, in fact, a new type of water bottle. From an April 10, 2017 article by Adele Peters for Fast Company (Note: Links have been removed),

If you run in a race in London in the near future and pass a hydration station, you may be handed a small, bubble-like sphere of water instead of a bottle. The gelatinous packaging, called the Ooho, is compostable–or even edible, if you want to swallow it. And after two years of development, its designers are ready to bring it to market.

Three London-based design students first created a prototype of the edible bottle in 2014 as an alternative to plastic bottles. The idea gained internet hype (though also some scorn for a hilarious video that made the early prototypes look fairly impossible to use without soaking yourself).

The problem it was designed to solve–the number of disposable bottles in landfills–keeps growing. In the U.K. alone, around 16 million are trashed each day; another 19 million are recycled, but still have the environmental footprint of a product made from oil. In the U.S., recycling rates are even lower. …

The new packaging is based on the culinary technique of spherification, which is also used to make fake caviar and the tiny juice balls added to boba tea [bubble tea?]. Dip a ball of ice in calcium chloride and brown algae extract, and you can form a spherical membrane that keeps holding the ice as it melts and returns to room temperature.

An April 25, 2014 article by Kashmira Gander for Independent.co.uk describes the technology and some of the problems that had to be solved before bringing this product to market,

To make the bottle [Ooho!], students at the Imperial College London gave a frozen ball of water a gelatinous layer by dipping it into a calcium chloride solution.

They then soaked the ball in another solution made from brown algae extract to encapsulate the ice in a second membrane, and reinforce the structure.

However, Ooho still has teething problems, as the membrane is only as thick as a fruit skin, and therefore makes transporting the object more difficult than a regular bottle of water.

“This is a problem we’re trying to address with a double container,” Rodrigo García González, who created Ooho with fellow students Pierre Paslier and Guillaume Couche, explained to the Smithsonian. “The idea is that we can pack several individual edible Oohos into a bigger Ooho container [to make] a thicker and more resistant membrane.”

According to Peters’ Fast Company article, the issues have been resolved,

Because the membrane is made from food ingredients, you can eat it instead of throwing it away. The Jell-O-like packaging doesn’t have a natural taste, but it’s possible to add flavors to make it more appetizing.

The package doesn’t have to be eaten every time, since it’s also compostable. “When people try it for the first time, they want to eat it because it’s part of the experience,” says Pierre Paslier, cofounder of Skipping Rocks Lab, the startup developing the packaging. “Then it will be just like the peel of a fruit. You’re not expected to eat the peel of your orange or banana. We are trying to follow the example set by nature for packaging.”

The outer layer of the package is always meant to be peeled like fruit–one thin outer layer of the membrane peels away to keep the inner layer clean and can then be composted. (While compostable cups are an alternative solution, many can only be composted in industrial facilities; the Ooho can be tossed on a simple home compost pile, where it will decompose within weeks).

The company is targeting both outdoor events and cafes. “Where we see a lot of potential for Ooho is outdoor events–festivals, marathons, places where basically there are a lot of people consuming packaging over a very short amount of time,” says Paslier.

I encourage you to read Peters’ article in its entirety if you have the time. You can also find more information on the Skipping Rocks Lab website and on the company’s crowdfunding campaign on CrowdCube.

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft,Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor) to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation during the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science  07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73 DOI: 10.1126/science.aal4062

This paper is behind a paywall.

Ultra-thin superconducting film for outer space

Truth in a press release? But first, there’s this April 6, 2017 news item on Nanowerk announcing research that may have applications in aerospace and other sectors,

Experimental physicists in the research group led by Professor Uwe Hartmann at Saarland University have developed a thin nanomaterial with superconducting properties. Below about -200 °C these materials conduct electricity without loss, levitate magnets and can screen magnetic fields.

The particularly interesting aspect of this work is that the research team has succeeded in creating superconducting nanowires that can be woven into an ultra-thin film that is as flexible as cling film. As a result, novel coatings for applications ranging from aerospace to medical technology are becoming possible.

The research team will be exhibiting their superconducting film at Hannover Messe from April 24th to April 28th [2017] (Hall 2, Stand B46) and are looking for commercial and industrial partners with whom they can develop their system for practical applications.

An April 6, 2017 University of Saarland press release (also on EurekAlert), which originated the news item, provides more details along with a line that rings with the truth,

A team of experimental physicists at Saarland University have developed something that – it has to be said – seems pretty unremarkable at first sight. [emphasis mine] It looks like nothing more than a charred black piece of paper. But appearances can be deceiving. This unassuming object is a superconductor. The term ‘superconductor’ is given to a material that (usually at a very low temperatures) has zero electrical resistance and can therefore conduct an electric current without loss. Put simply, the electrons in the material can flow unrestricted through the cold immobilized atomic lattice. In the absence of electrical resistance, if a magnet is brought up close to a cold superconductor, the magnet effectively ‘sees’ a mirror image of itself in the superconducting material. So if a superconductor and a magnet are placed in close proximity to one another and cooled with liquid nitrogen they will repel each another and the magnet levitates above the superconductor. The term ‘levitation’ comes from the Latin word levitas meaning lightness. It’s a bit like a low-temperature version of the hoverboard from the ‘Back to the Future’ films. If the temperature is too high, however, frictionless sliding is just not going to happen.

Many of the common superconducting materials available today are rigid, brittle and dense, which makes them heavy. The Saarbrücken physicists have now succeeded in packing superconducting properties into a thin flexible film. The material is essentially a woven fabric of plastic fibres and high-temperature superconducting nanowires. ‘That makes the material very pliable and adaptable – like cling film (or ‘plastic wrap’ as it’s also known). Theoretically, the material can be made to any size. And we need fewer resources than are typically required to make superconducting ceramics, so our superconducting mesh is also cheaper to fabricate,’ explains Uwe Hartmann, Professor of Nanostructure Research and Nanotechnology at Saarland University.

The low weight of the film is particularly advantageous. ‘With a density of only 0.05 grams per cubic centimetre, the material is very light, weighing about a hundred times less than a conventional superconductor. This makes the material very promising for all those applications where weight is an issue, such as in space technology. There are also potential applications in medical technology,’ explains Hartmann. The material could be used as a novel coating to provide low-temperature screening from electromagnetic fields, or it could be used in flexible cables or to facilitate friction-free motion.

In order to be able to weave this new material, the experimental physicists made use of a technique known as electrospinning, which is usually used in the manufacture of polymeric fibres. ‘We force a liquid material through a very fine nozzle known as a spinneret to which a high electrical voltage has been applied. This produces nanowire filaments that are a thousand times thinner than the diameter of a human hair, typically about 300 nanometres or less. We then heat the mesh of fibres so that superconductors of the right composition are created. The superconducting material itself is typically an yttrium-barium-copper-oxide or similar compound,’ explains Dr. Michael Koblischka, one of the research scientists in Hartmann‘s group.

The research project received €100,000 in funding from the Volkswagen Foundation as part of its ‘Experiment!’ initiative. The initiative aims to encourage curiosity-driven, blue-skies research. The positive results from the Saarbrücken research team demonstrate the value of this type of funding. Since September 2016, the project has been supported by the German Research Foundation (DFG). Total funds of around €425,000 will be provided over a three-year period during which the research team will be carrying out more detailed investigations into the properties of the nanowires.

I’d say the “unremarkable but appearances can be deceiving” comments are true more often than not. I think that’s one of the hard things about science. Big advances can look nondescript.

What looks like a pretty unremarkable piece of burnt paper is in fact an ultrathin superconductor that has been developed by the team led by Uwe Hartmann (r.), shown here with doctoral student XianLin Zeng. Courtesy: Saarland University

In any event, here’s a link to and a citation for the paper,

Preparation of granular Bi-2212 nanowires by electrospinning by Xian Lin Zeng, Michael R Koblischka, Thomas Karwoth, Thomas Hauet, and Uwe Hartmann. Superconductor Science and Technology, Volume 30, Number 3 Published 1 February 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.