Monthly Archives: May 2017

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.

A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total.

Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.
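In an embedding such as GloVe (described further below), each word is a vector, and a word’s association with “pleasant” versus “unpleasant” can be scored as the difference between its average cosine similarity to each attribute set, the embedding analogue of an IAT response-time gap. Here is a minimal sketch of that score; the random stand-in vectors are purely illustrative, whereas the study used embeddings trained on real web text.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, pleasant, unpleasant):
    """Mean similarity to 'pleasant' attribute words minus mean
    similarity to 'unpleasant' ones; positive means 'more pleasant'."""
    return (np.mean([cosine(w, a) for a in pleasant]) -
            np.mean([cosine(w, b) for b in unpleasant]))

# Illustrative stand-ins; a real test uses vectors from a trained embedding.
rng = np.random.default_rng(0)
vectors = {word: rng.normal(size=50)
           for word in ["flower", "insect", "caress", "love", "filth", "ugly"]}

pleasant = [vectors["caress"], vectors["love"]]
unpleasant = [vectors["filth"], vectors["ugly"]]

print(association(vectors["flower"], pleasant, unpleasant))
print(association(vectors["insect"], pleasant, unpleasant))
```

A bias test along the paper’s lines then compares this score across two sets of target words, for example European American versus African American names.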

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment in which a program essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
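Those co-occurrence statistics are simply windowed pair counts. The sketch below shows only that counting step, on an invented toy corpus; GloVe additionally weights each pair by the distance between the words and then fits word vectors to the resulting counts, both of which are omitted here.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each pair of words appears within `window`
    tokens of one another: the raw statistics GloVe learns from."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbour in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbour)))] += 1
    return counts

corpus = "she is a nurse and he is a doctor".split()
for pair, n in sorted(cooccurrence_counts(corpus, window=5).items()):
    print(pair, n)
```

Words that share many windows accumulate high counts, and the vectors GloVe fits to those counts inherit the associations, which is how biases in the training text carry over into the embedding.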

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC  press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys in more depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer only used in industry to carry out repetitive or potentially dangerous tasks. In recent years, robots have entered our everyday lives, and children are more and more likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence about the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR), which comes into force in May 2018 and will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies  are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter, and Dylan Yamada-Rice, and organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Nanocoating to reduce dental implant failures

Scientists at Plymouth University (UK) have developed a nanocoating that could reduce the number of dental implant failures. From a March 24, 2017 news item on Nanowerk (Note: A link has been removed),

According to the American Academy of Implant Dentistry (AAID), 15 million Americans have crown or bridge replacements and three million have dental implants — with this latter number rising by 500,000 a year. The AAID estimates that the value of the American and European market for dental implants will rise to $4.2 billion by 2022.

Dental implants are a successful form of treatment for patients, yet according to a study published in 2005, five to 10 per cent of all dental implants fail.

The reasons for this failure are several-fold – mechanical problems, poor connection to the bones in which they are implanted, infection or rejection. When failure occurs, the dental implant must be removed.

The main reason for dental implant failure is peri-implantitis. This is the destructive inflammatory process affecting the soft and hard tissues surrounding dental implants. It occurs when pathogenic microbes in the mouth and oral cavity develop into biofilms, which protect them and encourage growth. Peri-implantitis is caused when these biofilms develop on dental implants.

A research team comprising scientists from the School of Biological Sciences, the Peninsula Schools of Medicine and Dentistry and the School of Engineering at the University of Plymouth has joined forces to develop and evaluate the effectiveness of a new nanocoating for dental implants to reduce the risk of peri-implantitis.

The results of their work are published in the journal Nanotoxicology (“Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings”).

A March 27, 2017 Plymouth University press release, which originated the news item, gives more details about the research,

In the study, the research team created a new approach using a combination of silver, titanium oxide and hydroxyapatite nanocoatings.

The application of the combination to the surface of titanium alloy implants successfully inhibited bacterial growth and reduced the formation of bacterial biofilm on the surface of the implants by 97.5 per cent.

Not only did the combination result in the effective eradication of infection, it created a surface with anti-biofilm properties which supported successful integration into surrounding bone and accelerated bone healing.

Professor Christopher Tredwin, Head of Plymouth University Peninsula School of Dentistry, commented:

“In this cross-Faculty study we have identified the means to protect dental implants against the most common cause of their failure. The potential of our work for increased patient comfort and satisfaction, and reduced costs, is great and we look forward to translating our findings into clinical practice.”

The University of Plymouth was the first university in the UK to secure Research Council Funding in Nanoscience and this project is the latest in a long line of projects investigating nanotechnology and human health.

Nanoscience activity at the University of Plymouth is led by Professor Richard Handy, who has represented the UK on matters relating to the Environmental Safety and Human Health of Nanomaterials at the Organisation for Economic Cooperation and Development (OECD). He commented:

“As yet there are no nano-specific guidelines in dental or medical implant legislation and we are, with colleagues elsewhere, guiding the way in this area. The EU recognises that medical devices and implants must: perform as expected for its intended use, and be better than similar items in the market; be safe for the intended use or safer than an existing item, and; be biocompatible or have negligible toxicity.”

He added:

“Our work has been about proving these criteria which we have done in vitro. The next step would be to demonstrate the effectiveness of our discovery, perhaps with animal models and then human volunteers.”

Dr Alexandros Besinis, Lecturer in Mechanical Engineering at the School of Engineering, University of Plymouth, led the research team. He commented:

“Current strategies to render the surface of dental implants antibacterial with the aim to prevent infection and peri-implantitis development, include application of antimicrobial coatings loaded with antibiotics or chlorhexidine. However, such approaches are usually effective only in the short-term, and the use of chlorhexidine has also been reported to be toxic to human cells. The significance of our new study is that we have successfully applied a dual-layered silver-hydroxyapatite nanocoating to titanium alloy medical implants which helps to overcome these risks.”

Dr Besinis has been an Honorary Teaching Fellow at the Peninsula School of Dentistry since 2011 and has recently joined the School of Engineering. His research interests focus on advanced engineering materials and the use of nanotechnology to build novel biomaterials and medical implants with improved mechanical, physical and antibacterial properties.

Here’s a link to and a citation for the paper,

Antibacterial activity and biofilm inhibition by surface modified titanium alloy medical implants following application of silver, titanium dioxide and hydroxyapatite nanocoatings by A. Besinis, S. D. Hadi, H. R. Le, C. Tredwin & R. D. Handy. Nanotoxicology, Volume 11, Issue 3 (2017), pp. 327-338. http://dx.doi.org/10.1080/17435390.2017.1299890 Published online: 17 Mar 2017

This paper appears to be open access.

Edible water bottles by Ooho!

Courtesy: Skipping Rocks Lab

As far as I’m concerned, that looks more like a breast implant than a water bottle, which, from a psycho-social perspective, could lead to some interesting research papers. It is, in fact, a new type of water bottle. From an April 10, 2017 article by Adele Peters for Fast Company (Note: Links have been removed),

If you run in a race in London in the near future and pass a hydration station, you may be handed a small, bubble-like sphere of water instead of a bottle. The gelatinous packaging, called the Ooho, is compostable–or even edible, if you want to swallow it. And after two years of development, its designers are ready to bring it to market.

Three London-based design students first created a prototype of the edible bottle in 2014 as an alternative to plastic bottles. The idea gained internet hype (though also some scorn for a hilarious video that made the early prototypes look fairly impossible to use without soaking yourself).

The problem it was designed to solve–the number of disposable bottles in landfills–keeps growing. In the U.K. alone, around 16 million are trashed each day; another 19 million are recycled, but still have the environmental footprint of a product made from oil. In the U.S., recycling rates are even lower. …

The new packaging is based on the culinary technique of spherification, which is also used to make fake caviar and the tiny juice balls added to boba tea [bubble tea?]. Dip a ball of ice in calcium chloride and brown algae extract, and you can form a spherical membrane that keeps holding the ice as it melts and returns to room temperature.

An April 25, 2014 article by Kashmira Gander for Independent.co.uk describes the technology and some of the problems that had to be solved before bringing this product to market,

To make the bottle [Ooho!], students at the Imperial College London gave a frozen ball of water a gelatinous layer by dipping it into a calcium chloride solution.

They then soaked the ball in another solution made from brown algae extract to encapsulate the ice in a second membrane, and reinforce the structure.

However, Ooho still has teething problems, as the membrane is only as thick as a fruit skin, and therefore makes transporting the object more difficult than a regular bottle of water.

“This is a problem we’re trying to address with a double container,” Rodrigo García González, who created Ooho with fellow students Pierre Paslier and Guillaume Couche, explained to the Smithsonian. “The idea is that we can pack several individual edible Oohos into a bigger Ooho container [to make] a thicker and more resistant membrane.”

According to Peters’ Fast Company article, the issues have been resolved,

Because the membrane is made from food ingredients, you can eat it instead of throwing it away. The Jell-O-like packaging doesn’t have a natural taste, but it’s possible to add flavors to make it more appetizing.

The package doesn’t have to be eaten every time, since it’s also compostable. “When people try it for the first time, they want to eat it because it’s part of the experience,” says Pierre Paslier, cofounder of Skipping Rocks Lab, the startup developing the packaging. “Then it will be just like the peel of a fruit. You’re not expected to eat the peel of your orange or banana. We are trying to follow the example set by nature for packaging.”

The outer layer of the package is always meant to be peeled like fruit–one thin outer layer of the membrane peels away to keep the inner layer clean and can then be composted. (While compostable cups are an alternative solution, many can only be composted in industrial facilities; the Ooho can be tossed on a simple home compost pile, where it will decompose within weeks).

The company is targeting both outdoor events and cafes. “Where we see a lot of potential for Ooho is outdoor events–festivals, marathons, places where basically there are a lot of people consuming packaging over a very short amount of time,” says Paslier.

I encourage you to read Peters’ article in its entirety if you have the time. You can also find more information on the Skipping Rocks Lab website and on the company’s crowdfunding campaign on CrowdCube.

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor), to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation during the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science  07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73 DOI: 10.1126/science.aal4062

This paper is behind a paywall.

Preserving heritage smells (scents)

Preserving a smell? It’s an intriguing idea and forms the research focus for scientists at the University College London’s (UCL) Institute for Sustainable Heritage according to an April 6, 2017 Biomed Central news release on EurekAlert,

A ‘Historic Book Odour Wheel’ which has been developed to document and archive the aroma associated with old books, is being presented in a study in the open access journal Heritage Science. Researchers at UCL Institute for Sustainable Heritage created the wheel as part of an experiment in which they asked visitors to St Paul’s Cathedral’s Dean and Chapter library in London to characterize its smell.

The visitors most frequently described the aroma of the library as ‘woody’ (selected by 100% of the visitors who were asked), followed by ‘smoky’ (86%), ‘earthy’ (71%) and ‘vanilla’ (41%). The intensity of the smells was assessed as between ‘strong odor’ and ‘very strong odor’. Over 70% of the visitors described the smell as pleasant, 14% as ‘mildly pleasant’ and 14% as ‘neutral’.

In a separate experiment, the researchers presented visitors to the Birmingham Museum and Art Gallery with an unlabelled historic book smell – sampled from a 1928 book they obtained from a second-hand bookshop in London – and collected the terms used to describe the smell. The word ‘chocolate’ – or variations such as ‘cocoa’ or ‘chocolatey’ – was used most often, followed by ‘coffee’, ‘old’, ‘wood’ and ‘burnt’. Participants also mentioned smells including ‘fish’, ‘body odour’, ‘rotten socks’ and ‘mothballs’.

Cecilia Bembibre, heritage scientist at UCL and corresponding author of the study said: “Our odour wheel provides an example of how scientists and historians could begin to identify, analyze and document smells that have cultural significance, such as the aroma of old books in historic libraries. The role of smells in how we perceive heritage has not been systematically explored until now.”

Attempting to answer the question of whether certain smells could be considered part of our cultural heritage and if so how they could be identified, protected and conserved, the researchers also conducted a chemical analysis of volatile organic compounds (VOCs) which they sampled from books in the library. VOCs are chemicals that evaporate at low temperatures, many of which can be perceived as scents or odors.

Combining their findings from the VOC analysis with the visitors’ characterizations, the authors created their Historic Book Odour wheel, which shows the chemical description of a smell (such as acetic acid) together with the sensory descriptions provided by the visitors (such as ‘vinegar’).
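Conceptually, the wheel links each compound identified in the VOC analysis to a general aroma category and to the sensory words visitors supplied, something like a lookup table. In the sketch below, only the acetic acid/vinegar pairing comes from the description above; the other entries are hypothetical placeholders included to show the shape of the data.

```python
# Sketch of the odour wheel's core mapping: each volatile organic
# compound (VOC) is linked to an aroma category and to the sensory
# descriptors visitors used. Only the acetic acid/vinegar pairing is
# taken from the study as reported; the rest are invented placeholders.
odour_wheel = {
    "acetic acid": {"category": "acidic", "descriptors": ["vinegar"]},
    "vanillin": {"category": "sweet", "descriptors": ["vanilla"]},  # placeholder
    "furfural": {"category": "smoky", "descriptors": ["woody"]},    # placeholder
}

for compound, entry in odour_wheel.items():
    print(f"{compound}: {entry['category']} -> {', '.join(entry['descriptors'])}")
```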

Cecilia Bembibre said: “By documenting the words used by the visitors to describe a heritage smell, our study opens a discussion about developing a vocabulary to identify aromas that have cultural meaning and significance.”

She added: “The Historic Book Odour Wheel also has the potential to be used as a diagnostic tool by conservators, informing on the condition of an object, for example its state of decay, through its olfactory profile.”

The authors suggest that, in addition to its use for the identification and conservation of smells, the Historic Book Odour Wheel could potentially be used to recreate smells and aid the design of olfactory experiences in museums, allowing visitors to form a personal connection with exhibits by allowing them to understand what the past smelled like.

Before this can be done, further research is needed to build on the preliminary findings in this study to allow them to inform and benefit heritage management, conservation, visitor experience design and heritage policy making.

Here’s what the Historic Book Odour Wheel looks like,

Odour wheel of historic book containing general aroma categories, sensory descriptors and chemical information on the smells as sampled (colours are arbitrary). Courtesy: Heritage Science [downloaded from https://heritagesciencejournal.springeropen.com/articles/10.1186/s40494-016-0114-1]

Here’s a link to and a citation for the paper,

Smell of heritage: a framework for the identification, analysis and archival of historic odours by Cecilia Bembibre and Matija Strlič. Heritage Science 2017, 5:2 DOI: 10.1186/s40494-016-0114-1 Published: 7 April 2017

©  The Author(s) 2017

This paper is open access.

Canada and its Vancouver tech scene get a boost

Prime Minister Justin Trudeau has been running around attending tech events both in the Vancouver area (Canada) and in Seattle these last few days (May 17 and May 18, 2017). First he attended the Microsoft CEO Summit as noted in a May 11, 2017 news release from the Prime Minister’s Office (Note: I have a few comments about this performance and the Canadian tech scene at the end of this post),

The Prime Minister, Justin Trudeau, today [May 11, 2017] announced that he will participate in the Microsoft CEO Summit in Seattle, Washington, on May 17 and 18 [2017], to promote the Cascadia Innovation Corridor, encourage investment in the Canadian technology sector, and draw global talent to Canada.

This year’s summit, under the theme “The CEO Agenda: Navigating Change,” will bring together more than 150 chief executive officers. While at the Summit, Prime Minister Trudeau will showcase Budget 2017’s Innovation and Skills Plan and demonstrate how Canada is making it easier for Canadian entrepreneurs and innovators to turn their ideas into thriving businesses.

Prime Minister Trudeau will also meet with Washington Governor Jay Inslee.

Quote

“Canada’s greatest strength is its skilled, hard-working, creative, and diverse workforce. Canada is recognized as a world leader in research and development in many areas like artificial intelligence, quantum computing, and 3D programming. Our government will continue to help Canadian businesses grow and create good, well-paying middle class jobs in today’s high-tech economy.”
— Rt. Honourable Justin Trudeau, Prime Minister of Canada

Quick Facts

  • Canada-U.S. bilateral trade in goods and services reached approximately $882 billion in 2016.
  • Nearly 400,000 people and over $2 billion-worth of goods and services cross the Canada-U.S. border every day.
  • Canada-Washington bilateral trade was $19.8 billion in 2016. Some 223,300 jobs in the State of Washington depend on trade and investment with Canada. Canada is among Washington’s top export destinations.

Here’s a little more about the Microsoft meeting from a May 17, 2017 article by Alan Boyle for GeekWire.com (Note: Links have been removed),

So far, this year’s Microsoft CEO Summit has been all about Canadian Prime Minister Justin Trudeau’s talk today, but there’s been precious little information available about who else is attending – and Trudeau may be one of the big reasons why.

Microsoft co-founder Bill Gates created the annual summit back in 1997, to give global business leaders an opportunity to share their experiences and learn about new technologies that will have an impact on business in the future. The event’s attendee list is kept largely confidential, as is the substance of the discussions.

This year, Microsoft says the summit’s two themes are “trust in technology” (as in cybersecurity, international hacking, privacy and the flow of data) and “the race to space” (as in privately funded space efforts such as Amazon billionaire Jeff Bezos’ Blue Origin rocket venture).

Usually, Microsoft lists a few folks who are attending the summit on the company’s Redmond campus, just to give a sense of the event’s cachet. For example, last year’s headliners included Berkshire Hathaway CEO Warren Buffett and Exxon Mobil CEO Rex Tillerson (who is now the Trump administration’s secretary of state).

This year, however, the spotlight has fallen almost exclusively on the hunky 45-year-old Trudeau, the first sitting head of government or state to address the summit. Microsoft isn’t saying anything about the other 140-plus VIPs attending the discussions. “Out of respect for the privacy of our guests, we are not providing any additional information,” a Microsoft spokesperson told GeekWire via email.

Even Trudeau’s remarks at the summit are hush-hush, although officials say he’s talking up Canada’s tech sector.  …

Laura Kane’s May 18, 2017 article for therecord.com provides a little more information about Trudeau’s May 18, 2017 activities in Washington state,

Prime Minister Justin Trudeau continued his efforts to promote Canada’s technology sector to officials in Washington state on Thursday [May 18, 2017], meeting with Gov. Jay Inslee a day after attending the secretive Microsoft CEO Summit.

Trudeau and Inslee discussed, among other issues, the development of the Cascadia Innovation Corridor, an initiative that aims to strengthen technology industry ties between British Columbia and Washington.

The pair also spoke about trade and investment opportunities and innovation in the energy sector, said Trudeau’s office. In brief remarks before the meeting, the prime minister said Washington and Canada share a lot in common.

But protesters clad in yellow hazardous material suits that read “Keystone XL Toxic Cleanup Crew” gathered outside the hotel to criticize Trudeau’s environmental record, arguing his support of pipelines is at odds with any global warming promises he has made.

Later that afternoon, Trudeau visited Electronic Arts (a US games company with offices in the Vancouver area) for more tech talk, as Stephanie Ip notes in her May 18, 2017 article for The Vancouver Sun,

Prime Minister Justin Trudeau was in Metro Vancouver Thursday [May 18, 2017] to learn from local tech and business leaders how the federal government can boost B.C.’s tech sector.

The roundtable discussion was organized by the Vancouver Economic Commission and hosted in Burnaby at Electronic Arts’ Capture Lab, where the video game company behind the popular FIFA, Madden and NHL franchises records human movement to add more realism to its digital characters. Representatives from Amazon, Launch Academy, Sony Pictures, Darkhorse 101 Pictures and Front Fundr were also there.

While the roundtable was not open to media, Trudeau met beforehand with media.

“We’re going to talk about how the government can be a better partner or better get out of your way in some cases to allow you to continue to grow, to succeed, to create great opportunities to allow innovation to advance success in Canada and to create good jobs for Canadians and draw in people from around the world and continue to lead the way in the world,” he said.

“Everything from clean tech, to bio-medical advances, to innovation in digital economy — there’s a lot of very, very exciting things going on.”

Comments on the US tech sector and the supposed Canadian tech sector

I wonder at all the secrecy. As for the companies mentioned as being at the roundtable, you’ll notice a preponderance of US companies, with Launch Academy and Front Fundr (which is not a tech company but a crowdfunding equity company) supplying the Canadian content. As for Darkhorse 101 Pictures, I strongly suspect (after an online search) it is part of Darkhorse Comics (a US company), which has an entertainment division.

Perhaps it didn’t seem worthwhile to mention the Canadian companies? In that case, that’s a sad reflection on how poorly we and our media support our tech sector.

In fact, it seems Trudeau’s version of the Canadian technology sector is for us to continue in our role as a branch plant remaining forever in service of the US economy or at least the US tech sector which may be experiencing some concerns with the US Trump administration and what appears to be an increasingly isolationist perspective with regard to trade and immigration. It’s a perspective that the tech sector, especially the entertainment component, can ill afford.

As for the Cascadia Innovation Corridor mentioned in the Prime Minister’s news release and in Kane’s article, I have more about that in a Feb. 28, 2017 posting about the Cascadia Data Analytics Cooperative.

I noticed he mentioned clean tech as an area of excitement. Well, we just lost a significant player not to the US this time but to the EU (European Union) or more specifically, Germany. (There’ll be more about that in an upcoming post.)

I’m glad to see that Trudeau remains interested in Canadian science and technology but perhaps he could concentrate on new ways of promoting sectoral health rather than relying on the same old thing.

Ultra-thin superconducting film for outer space

Truth in a press release? But first, there’s this April 6, 2017 news item on Nanowerk announcing research that may have applications in aerospace and other sectors,

Experimental physicists in the research group led by Professor Uwe Hartmann at Saarland University have developed a thin nanomaterial with superconducting properties. Below about -200 °C these materials conduct electricity without loss, levitate magnets and can screen magnetic fields.

The particularly interesting aspect of this work is that the research team has succeeded in creating superconducting nanowires that can be woven into an ultra-thin film that is as flexible as cling film. As a result, novel coatings for applications ranging from aerospace to medical technology are becoming possible.

The research team will be exhibiting their superconducting film at Hannover Messe from April 24th to April 28th [2017] (Hall 2, Stand B46) and are looking for commercial and industrial partners with whom they can develop their system for practical applications.

An April 6, 2017 University of Saarland press release (also on EurekAlert), which originated the news item, provides more details along with a line that rings with the truth,

A team of experimental physicists at Saarland University have developed something that – it has to be said – seems pretty unremarkable at first sight. [emphasis mine] It looks like nothing more than a charred black piece of paper. But appearances can be deceiving. This unassuming object is a superconductor. The term ‘superconductor’ is given to a material that (usually at a very low temperatures) has zero electrical resistance and can therefore conduct an electric current without loss. Put simply, the electrons in the material can flow unrestricted through the cold immobilized atomic lattice. In the absence of electrical resistance, if a magnet is brought up close to a cold superconductor, the magnet effectively ‘sees’ a mirror image of itself in the superconducting material. So if a superconductor and a magnet are placed in close proximity to one another and cooled with liquid nitrogen they will repel each another and the magnet levitates above the superconductor. The term ‘levitation’ comes from the Latin word levitas meaning lightness. It’s a bit like a low-temperature version of the hoverboard from the ‘Back to the Future’ films. If the temperature is too high, however, frictionless sliding is just not going to happen.

Many of the common superconducting materials available today are rigid, brittle and dense, which makes them heavy. The Saarbrücken physicists have now succeeded in packing superconducting properties into a thin flexible film. The material is essentially a woven fabric of plastic fibres and high-temperature superconducting nanowires. ‘That makes the material very pliable and adaptable – like cling film (or ‘plastic wrap’ as it’s also known). Theoretically, the material can be made to any size. And we need fewer resources than are typically required to make superconducting ceramics, so our superconducting mesh is also cheaper to fabricate,’ explains Uwe Hartmann, Professor of Nanostructure Research and Nanotechnology at Saarland University.

The low weight of the film is particularly advantageous. ‘With a density of only 0.05 grams per cubic centimetre, the material is very light, weighing about a hundred times less than a conventional superconductor. This makes the material very promising for all those applications where weight is an issue, such as in space technology. There are also potential applications in medical technology,’ explains Hartmann. The material could be used as a novel coating to provide low-temperature screening from electromagnetic fields, or it could be used in flexible cables or to facilitate friction-free motion.

In order to be able to weave this new material, the experimental physicists made use of a technique known as electrospinning, which is usually used in the manufacture of polymeric fibres. ‘We force a liquid material through a very fine nozzle known as a spinneret to which a high electrical voltage has been applied. This produces nanowire filaments that are a thousand times thinner than the diameter of a human hair, typically about 300 nanometres or less. We then heat the mesh of fibres so that superconductors of the right composition are created. The superconducting material itself is typically an yttrium-barium-copper-oxide or similar compound,’ explains Dr. Michael Koblischka, one of the research scientists in Hartmann’s group.

The research project received €100,000 in funding from the Volkswagen Foundation as part of its ‘Experiment!’ initiative. The initiative aims to encourage curiosity-driven, blue-skies research. The positive results from the Saarbrücken research team demonstrate the value of this type of funding. Since September 2016, the project has been supported by the German Research Foundation (DFG). Total funds of around €425,000 will be provided over a three-year period during which the research team will be carrying out more detailed investigations into the properties of the nanowires.

I’d say the “unremarkable but appearances can be deceiving” comments are true more often than not. I think that’s one of the hard things about science. Big advances can look nondescript.

What looks like a pretty unremarkable piece of burnt paper is in fact an ultrathin superconductor that has been developed by the team led by Uwe Hartmann (r.), shown here with doctoral student XianLin Zeng. Courtesy: Saarland University

In any event, here’s a link to and a citation for the paper,

Preparation of granular Bi-2212 nanowires by electrospinning by Xian Lin Zeng, Michael R Koblischka, Thomas Karwoth, Thomas Hauet, and Uwe Hartmann. Superconductor Science and Technology, Volume 30, Number 3 Published 1 February 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

Evolution of literature as seen by a classicist, a biologist and a computer scientist

Studying intertextuality shows how books are related in various ways and are reorganized and recombined over time. Image courtesy of Elena Poiata.

I find the image more instructive when I read it from the bottom up. For those who prefer to read from the top down, there’s this April 5, 2017 University of Texas at Austin news release (also on EurekAlert),

A classicist, biologist and computer scientist all walk into a room — what comes next isn’t the punchline but a new method to analyze relationships among ancient Latin and Greek texts, developed in part by researchers from The University of Texas at Austin.

Their work, referred to as quantitative criticism, is highlighted in a study published in the Proceedings of the National Academy of Sciences. The paper identifies subtle literary patterns in order to map relationships between texts and more broadly to trace the cultural evolution of literature.

“As scholars of the humanities well know, literature is a system within which texts bear a multitude of relationships to one another. Understanding what is distinctive about one text entails knowing how it fits within that system,” said Pramit Chaudhuri, associate professor in the Department of Classics at UT Austin. “Our work seeks to harness the power of quantification and computation to describe those relationships at macro and micro levels not easily achieved by conventional reading alone.”

In the study, the researchers create literary profiles based on stylometric features, such as word usage, punctuation and sentence structure, and use techniques from machine learning to understand these complex datasets. Taking a computational approach enables the discovery of small but important characteristics that distinguish one work from another — a process that could require years using manual counting methods.
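To make “literary profile” concrete, here is a toy sketch of the kind of feature vector such work starts from. The three features and the Latin snippet are illustrative stand-ins for the study’s much richer feature set; profiles computed this way for many texts would then be handed to a machine learning model for comparison or classification.

```python
import re

def stylometric_profile(text):
    """Reduce a text to a few stylometric numbers. These three
    features are illustrative stand-ins, not the study's own set."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "mean_word_length": sum(len(w) for w in words) / len(words),
        "words_per_sentence": len(words) / len(sentences),
        "commas_per_word": text.count(",") / len(words),  # punctuation habit
    }

print(stylometric_profile(
    "Gallia est omnis divisa in partes tres, quarum unam incolunt Belgae."
))
```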

“One aspect of the technical novelty of our work lies in the unusual types of literary features studied,” Chaudhuri said. “Much computational text analysis focuses on words, but there are many other important hallmarks of style, such as sound, rhythm and syntax.”

Another component of their work builds on Matthew Jockers’ literary “macroanalysis,” which uses machine learning to identify stylistic signatures of particular genres within a large body of English literature. Implementing related approaches, Chaudhuri and his colleagues have begun to trace the evolution of Latin prose style, providing new, quantitative evidence for the sweeping impact of writers such as Caesar and Livy on the subsequent development of Roman prose literature.

“There is a growing appreciation that culture evolves and that language can be studied as a cultural artifact, but there has been less research focused specifically on the cultural evolution of literature,” said the study’s lead author Joseph Dexter, a Ph.D. candidate in systems biology at Harvard University. “Working in the area of classics offers two advantages: the literary tradition is a long and influential one well served by digital resources, and classical scholarship maintains a strong interest in close linguistic study of literature.”

Unusually for a publication in a science journal, the paper contains several examples of the types of more speculative literary reading enabled by the quantitative methods introduced. The authors discuss the poetic use of rhyming sounds for emphasis and of particular vocabulary to evoke mood, among other literary features.

“Computation has long been employed for attribution and dating of literary works, problems that are unambiguous in scope and invite binary or numerical answers,” Dexter said. “The recent explosion of interest in the digital humanities, however, has led to the key insight that similar computational methods can be repurposed to address questions of literary significance and style, which are often more ambiguous and open ended. For our group, this humanist work of criticism is just as important as quantitative methods and data.”

The paper is the work of the Quantitative Criticism Lab (www.qcrit.org), co-directed by Chaudhuri and Dexter in collaboration with researchers from several other institutions. It is funded in part by a 2016 National Endowment for the Humanities grant and the Andrew W. Mellon Foundation New Directions Fellowship, awarded in 2016 to Chaudhuri to further his education in statistics and biology. Chaudhuri was one of 12 scholars selected for the award, which provides humanities researchers the opportunity to train outside of their own area of special interest with a larger goal of bridging the humanities and social sciences.

Here’s another link to the paper along with a citation,

Quantitative criticism of literary relationships by Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS Published online before print April 3, 2017, doi: 10.1073/pnas.1611910114

This paper appears to be open access.

Accurate spatial orientation for regenerating cells

German scientists have developed a system that helps guide nerve cells into proper spatial alignment when they are regenerating, according to an April 5, 2017 news item on Nanowerk,

In many tissues of the human body, such as nerve tissue, the spatial organization of cells plays an important role. Nerve cells and their long protrusions assemble into nerve tracts and transport information throughout the body. When such a tissue is injured, an accurate spatial orientation of the cells facilitates the healing process. Scientists from the DWI – Leibniz Institute for Interactive Materials in Aachen developed an injectable gel, which can act as a guidance system for nerve cells.

An April 5, 2017 Leibniz Institute press release, which originated the news item, explains more about the research,

Inside the body, an extracellular matrix surrounds the cells. It provides mechanical support and promotes spatial tissue organization. In order to regenerate damaged tissue, an artificial matrix can temporarily replace the natural extracellular matrix. This matrix needs to mimic the natural cell environment in order to efficiently stimulate the regenerative potential of the surrounding tissue. Solid implants, however, may impair remaining healthy tissue whereas soft, injectable materials allow for a minimally invasive therapy, which is particularly beneficial for sensitive tissues, such as the spinal cord. Unfortunately, artificial soft materials have not yet reproduced the complex structures and spatial properties of natural tissues.

A team of scientists, headed by Dr. Laura De Laporte from the DWI – Leibniz Institute for Interactive Materials, developed a new, minimally invasive material termed ‘Anisogel’. “If you aim to enhance the regeneration of damaged spinal cord tissue, you need to come up with a new material concept,” says Jonas Rose. He is a PhD student working on the Anisogel project. “We use micrometer-sized building blocks and assemble them into 3D hierarchically organized structures.” Anisogel consists of two gel components. Many microscopically small, soft, rod-shaped gels incorporating a low amount of magnetic nanoparticles are the first component. Using a weak magnetic field, scientists can orient the gel rods, after which a very soft surrounding gel matrix is crosslinked, forming the structural guidance system. The gel rods, being stabilized by the gel matrix, maintain their orientation even after removal of the magnetic field. Using cell culture experiments, the researchers demonstrate that cells can easily migrate through this gel matrix, and that nerve cells and fibroblasts orient along the paths provided by this guidance system. A gel rod content of just one percent of the entire Anisogel volume proved sufficient to induce linear nerve growth. The material, developed by the Aachen-based scientists, is the first injectable biomaterial that assembles into a controlled oriented structure after injection and provides a functional guidance system for cells. “To meet the complex requirements of this approach, the project team includes researchers with very different areas of expertise,” says Laura De Laporte, whose research is supported by a Starting Grant of the European Research Council. “This interdisciplinary work is what makes this project so fascinating.”

“Although our cell culture experiments were successful, we are prepared to go a long way to translate our Anisogel into a medical therapy. In collaboration with the Uniklinik RWTH Aachen, we currently plan pre-clinical studies to further test and optimize this material,” Laura De Laporte explains.

Here’s a link to and a citation for the paper,

Nerve Cells Decide to Orient inside an Injectable Hydrogel with Minimal Structural Guidance  by Jonas C. Rose, María Cámara-Torres, Khosrow Rahimi, Jens Köhler, Martin Möller, and Laura De Laporte. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b01123 Publication Date (Web): March 22, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.