Monthly Archives: October 2017

Brain composer

This is a representation of the work being done on brain-computer interfaces (BCI) at the Technical University of Graz (TU Graz; Austria).

A Sept. 11, 2017 news item on phys.org announces the research into thinking melodies and turning them into a musical score,

TU Graz researchers develop new brain-computer interface application that allows music to be composed by the power of thought. They have published their results in the current issue of the journal PLOS ONE.

Brain-computer interfaces (BCI) can replace bodily functions to a certain degree. Thanks to BCI, physically impaired persons can control special prostheses via their minds, surf the internet and write emails.

A group led by BCI expert Gernot Müller-Putz from TU Graz’s Institute of Neural Engineering shows that experiences of quite a different tone can be sounded from the keys of brain-computer interfaces. Derived from an established BCI method for writing, the team has developed a new application by which music can be composed and transferred onto a musical score through the power of thought. It employs a special cap that measures brain waves, the adapted BCI, music composition software, and a bit of musical knowledge.

A Sept. 6, 2017 TU Graz press release by Suzanne Eigner, which originated the news item, explains the research in more detail,

The basic principle of the BCI method used, which is called P300, can be briefly described: various options, such as letters or notes, pauses, chords, etc. flash by one after the other in a table. If you’re trained and can focus on the desired option while it lights up, you cause a minute change in your brain waves. The BCI recognises this change and draws conclusions about the chosen option.
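The selection step described in the press release can be sketched in a few lines of Python. To be clear, this is an illustrative toy and not the TU Graz pipeline: the sampling rate, epoch length, noise level, and note options below are all my assumptions, and a real P300 speller uses a trained statistical classifier rather than a simple window average.

```python
import numpy as np

# Toy P300-style selection: each option flashes repeatedly; epochs
# time-locked to the *attended* option contain a positive deflection
# roughly 300 ms after the flash. Averaging epochs per option lets the
# attended one stand out from the noise.

rng = np.random.default_rng(0)
fs = 250                              # sampling rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)         # one 800 ms epoch per flash
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # bump at ~300 ms

options = ["C4", "D4", "E4", "rest"]  # hypothetical note options
attended = "E4"                       # the option the user focuses on
n_flashes = 20                        # repetitions per option (assumed)

def record_epoch(option):
    """One simulated epoch: EEG noise, plus a P300 if attended."""
    noise = rng.normal(0.0, 2.0, t.size)
    return noise + (p300 if option == attended else 0.0)

# Average the epochs for each option, then score the 250-450 ms window.
window = (t >= 0.25) & (t <= 0.45)
scores = {}
for opt in options:
    avg = np.mean([record_epoch(opt) for _ in range(n_flashes)], axis=0)
    scores[opt] = avg[window].mean()

chosen = max(scores, key=scores.get)
print(chosen)  # the attended option should win
```

Averaging is the key trick: the noise shrinks with the square root of the number of flashes while the P300 bump does not, which is why the "minute change" in brain waves becomes detectable at all.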

Musical test persons

Eighteen test persons, chosen for the study by Gernot Müller-Putz, Andreas Pinegger and Selina C. Wriessnegger from TU Graz’s Institute of Neural Engineering as well as Hannah Hiebel, meanwhile at the Institute of Cognitive Psychology & Neuroscience at the University of Graz, had to “think” melodies onto a musical score. All test subjects were of sound bodily health during the study and had a certain degree of basic musical and compositional knowledge, since they all played musical instruments to some degree. Among the test persons was the late Graz composer and clarinettist, Franz Cibulka. “The results of the BCI compositions can really be heard. And what is more important: the test persons enjoyed it. After a short training session, all of them could start composing and seeing their melodies on the score and then play them. The very positive results of the study with bodily healthy test persons are the first step in a possible expansion of the BCI composition to patients,” stresses Müller-Putz.

Sideshow of BCI research

This little-noticed sideshow of the lively BCI research at TU Graz, with its distinct focus on disabled persons, shows us which other avenues may yet be worth exploring. Meanwhile there are some initial attempts at BCI systems on smart phones. This makes it easier for people to use BCI applications, since the smart phone, as a powerful computer, becomes part of the BCI system. It is thus conceivable, for instance, to have BCI apps which can analyse brain signals for various applications. “20 years ago, the idea of composing a piece of music using the power of the mind was unimaginable. Now we can do it, and at the same time have tens of new, different ideas which are in part, once again, a long way from becoming reality. We still need a bit more time before it is mature enough for daily applications. The BCI community is working in many directions at high pressure.”

Here’s a link to and a citation for the paper,

Composing only by thought: Novel application of the P300 brain-computer interface by Andreas Pinegger, Hannah Hiebel, Selina C. Wriessnegger, and Gernot R. Müller-Putz. PLOS ONE. https://doi.org/10.1371/journal.pone.0181584 Published: September 6, 2017

This paper is open access.

This BCI ‘sideshow’ reminded me of The Music Man, a musical by Meredith Willson. It was both a play and a film, and I’ve only ever seen the 1962 film. It features a con man, Harold Hill, who sells musical instruments and uniforms in small towns in Iowa. He has no musical training but, while he’s conning the townspeople, he convinces them that he can provide musical training with his ‘think method’. After falling in love with one of the townsfolk, he is hunted down and made to prove his method works. This is a clip from a Broadway revival of the play where Harold Hill is hoping that his ‘think method’ will yield results,

Of course, the people in this study had musical training, so they could think a melody into a musical score, but I find the echo from the past amusing nonetheless.

Vampire nanogenerators: 2017

Researchers have been working on ways to harvest energy from bloodstreams. I last wrote about this type of research in an April 3, 2009 posting about ‘vampire batteries’ (for use in pacemakers). The latest work, according to a Sept. 8, 2017 news item on Nanowerk, comes from China,

Men build dams and huge turbines to turn the energy of waterfalls and tides into electricity. To produce hydropower on a much smaller scale, Chinese scientists have now developed a lightweight power generator based on carbon nanotube fibers suitable to convert even the energy of flowing blood in blood vessels into electricity. They describe their innovation in the journal Angewandte Chemie (“A One-Dimensional Fluidic Nanogenerator with a High Power Conversion Efficiency”).

A Sept. 8, 2017 Wiley Publishing news release (also on EurekAlert), which originated the news item, expands on the theme,

For thousands of years, people have used the energy of flowing or falling water for their purposes, first to power mechanical engines such as watermills, then to generate electricity by exploiting height differences in the landscape or sea tides. Using naturally flowing water as a sustainable power source has the advantage that there are (almost) no dependencies on weather or daylight. Even flexible, minute power generators that make use of the flow of biological fluids are conceivable. How such a system could work is explained by a research team from Fudan University in Shanghai, China. Huisheng Peng and his co-workers have developed a fiber with a thickness of less than a millimeter that generates electrical power when surrounded by flowing saline solution—in a thin tube or even in a blood vessel.

The construction principle of the fiber is quite simple. An ordered array of carbon nanotubes was continuously wrapped around a polymeric core. Carbon nanotubes are well known to be electroactive and mechanically stable; they can be spun and aligned in sheets. In the as-prepared electroactive threads, the carbon nanotube sheets coated the fiber core with a thickness of less than half a micron. For power generation, the thread or “fiber-shaped fluidic nanogenerator” (FFNG), as the authors call it, was connected to electrodes and immersed into flowing water or simply repeatedly dipped into a saline solution. “The electricity was derived from the relative movement between the FFNG and the solution,” the scientists explained. According to the theory, an electrical double layer is created around the fiber, and then the flowing solution distorts the symmetrical charge distribution, generating an electricity gradient along the long axis.

The power output efficiency of this system was high. Compared with other types of miniature energy-harvesting devices, the FFNG was reported to show a superior power conversion efficiency of more than 20%. Other advantages are elasticity, tunability, lightweight, and one-dimensionality, thus offering prospects of exciting technological applications. The FFNG can be made stretchable just by spinning the sheets around an elastic fiber substrate. If woven into fabrics, wearable electronics become thus a very interesting option for FFNG application. Another exciting application is the harvesting of electrical energy from the bloodstream for medical applications. First tests with frog nerves proved to be successful.
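For a sense of what a “power conversion efficiency of more than 20%” means: efficiency here is simply the electrical power delivered to a load divided by the mechanical power drawn from the flow. The numbers in the sketch below are invented purely for illustration and do not come from the paper; only the arithmetic is the point.

```python
# Hypothetical numbers for illustration only -- NOT values from the
# Angewandte Chemie paper. Efficiency = electrical power out /
# mechanical power extracted from the flowing solution.

v_peak = 0.10        # peak open-circuit voltage in volts (assumed)
r_load = 1e6         # matched load resistance in ohms (assumed)

# Maximum power transfer into a matched load: V^2 / (4R)
p_electrical = v_peak ** 2 / (4 * r_load)   # -> 2.5e-9 W

p_mechanical = 1.0e-8   # mechanical input power in watts (assumed)

efficiency = p_electrical / p_mechanical
print(f"{efficiency:.0%}")   # prints 25%, i.e. "more than 20%"
```

The striking part of the claim is that this ratio stays high at sub-millimeter scale, where most energy harvesters lose badly to friction and internal resistance.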

Here’s a link to and a citation for the paper,

A One-Dimensional Fluidic Nanogenerator with a High Power Conversion Efficiency by Yifan Xu, Dr. Peining Chen, Jing Zhang, Songlin Xie, Dr. Fang Wan, Jue Deng, Dr. Xunliang Cheng, Yajie Hu, Meng Liao, Dr. Bingjie Wang, Dr. Xuemei Sun, and Prof. Dr. Huisheng Peng. Angewandte Chemie International Edition DOI: 10.1002/anie.201706620 Version of Record online: 7 SEP 2017

© 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Hallucinogenic molecules and the brain

Psychedelic drugs seem to be enjoying a ‘moment’. After decades of being vilified and declared illegal (in many jurisdictions), psychedelic (or hallucinogenic) drugs are once again being tested for use in therapy. A Sept. 1, 2017 article by Diana Kwon for The Scientist describes some of the latest research (I’ve excerpted the section on molecules; Note: Links have been removed),

Mind-bending molecules

© SEAN MCCABE

All the classic psychedelic drugs—psilocybin, LSD, and N,N-dimethyltryptamine (DMT), the active component in ayahuasca—activate serotonin 2A (5-HT2A) receptors, which are distributed throughout the brain. In all likelihood, this receptor plays a key role in the drugs’ effects. Krähenmann [Rainer Krähenmann, a psychiatrist and researcher at the University of Zurich] and his colleagues in Zurich have discovered that ketanserin, a 5-HT2A receptor antagonist, blocks LSD’s hallucinogenic properties and prevents individuals from entering a dreamlike state or attributing personal relevance to the experience.12,13

Other research groups have found that, in rodent brains, 2,5-dimethoxy-4-iodoamphetamine (DOI), a highly potent and selective 5-HT2A receptor agonist, can modify the expression of brain-derived neurotrophic factor (BDNF)—a protein that, among other things, regulates neuronal survival, differentiation, and synaptic plasticity. This has led some scientists to hypothesize that, through this pathway, psychedelics may enhance neuroplasticity, the ability to form new neuronal connections in the brain.14 “We’re still working on that and trying to figure out what is so special about the receptor and where it is involved,” says Katrin Preller, a postdoc studying psychedelics at the University of Zurich. “But it seems like this combination of serotonin 2A receptors and BDNF leads to a kind of different organizational state in the brain that leads to what people experience under the influence of psychedelics.”

This serotonin receptor isn’t limited to the central nervous system. Work by Charles Nichols, a pharmacology professor at Louisiana State University, has revealed that 5-HT2A receptor agonists can reduce inflammation throughout the body. Nichols and his former postdoc Bangning Yu stumbled upon this discovery by accident, while testing the effects of DOI on smooth muscle cells from rat aortas. When they added this drug to the rodent cells in culture, it blocked the effects of tumor necrosis factor-alpha (TNF-α), a key inflammatory cytokine.

“It was completely unexpected,” Nichols recalls. The effects were so bewildering, he says, that they repeated the experiment twice to convince themselves that the results were correct. Before publishing the findings in 2008,15 they tested a few other 5-HT2A receptor agonists, including LSD, and found consistent anti-inflammatory effects, though none of the drugs’ effects were as strong as DOI’s. “Most of the psychedelics I have tested are about as potent as a corticosteroid at their target, but there’s something very unique about DOI that makes it much more potent,” Nichols says. “That’s one of the mysteries I’m trying to solve.”

After seeing the effect these drugs could have in cells, Nichols and his team moved on to whole animals. When they treated mouse models of system-wide inflammation with DOI, they found potent anti-inflammatory effects throughout the rodents’ bodies, with the strongest effects in the small intestine and a section of the main cardiac artery known as the aortic arch.16 “I think that’s really when it felt that we were onto something big, when we saw it in the whole animal,” Nichols says.

The group is now focused on testing DOI as a potential therapeutic for inflammatory diseases. In a 2015 study, they reported that DOI could block the development of asthma in a mouse model of the condition,17 and last December, the team received a patent to use DOI for four indications: asthma, Crohn’s disease, rheumatoid arthritis, and irritable bowel syndrome. They are now working to move the treatment into clinical trials. The benefit of using DOI for these conditions, Nichols says, is that because of its potency, only small amounts will be required—far below the amounts required to produce hallucinogenic effects.

In addition to opening the door to a new class of diseases that could benefit from psychedelics-inspired therapy, Nichols’s work suggests “that there may be some enduring changes that are mediated through anti-inflammatory effects,” Griffiths [Roland Griffiths, a psychiatry professor at Johns Hopkins University] says. Recent studies suggest that inflammation may play a role in a number of psychological disorders, including depression18 and addiction.19

“If somebody has neuroinflammation and that’s causing depression, and something like psilocybin makes it better through the subjective experience but the brain is still inflamed, it’s going to fall back into the depressed rut,” Nichols says. But if psilocybin is also treating the inflammation, he adds, “it won’t have that rut to fall back into.”

If it turns out that psychedelics do have anti-inflammatory effects in the brain, the drugs’ therapeutic uses could be even broader than scientists now envision. “In terms of neurodegenerative disease, every one of these disorders is mediated by inflammatory cytokines,” says Juan Sanchez-Ramos, a neuroscientist at the University of South Florida who in 2013 reported that small doses of psilocybin could promote neurogenesis in the mouse hippocampus.20 “That’s why I think, with Alzheimer’s, for example, if you attenuate the inflammation, it could help slow the progression of the disease.”

For anyone who was never exposed to the anti-hallucinogenic drug campaigns, this turn of events is mindboggling. There was a great deal of concern, especially with LSD in the 1960s, and it was not entirely unfounded. In my own family, a distant cousin, while under the influence of the drug, jumped off a building believing he could fly. So, Kwon’s story opening with an account of someone being treated successfully for depression with a psychedelic drug was surprising to me. Why these drugs can be used successfully for psychiatric conditions, when so much damage was apparently done under their influence in decades past, may have something to do with taking the drugs in a controlled environment and, possibly, in smaller dosages.

Nanowire fingerprint technology

Apparently this technology from France’s Laboratoire d’électronique des technologies de l’information (CEA-Leti) will make fingerprinting more reliable. From a Sept. 5, 2017 news item on Nanowerk,

Leti today announced that the European R&D project known as PiezoMAT has developed a pressure-based fingerprint sensor that enables resolution more than twice as high as currently required by the U.S. Federal Bureau of Investigation (FBI).

The project’s proof of concept demonstrates that a matrix of interconnected piezoelectric zinc-oxide (ZnO) nanowires grown on silicon can reconstruct the smallest features of human fingerprints at 1,000 dots per inch (DPI).

“The pressure-based fingerprint sensor derived from the integration of piezo-electric ZnO nanowires grown on silicon opens the path to ultra-high resolution fingerprint sensors, which will be able to reach resolution much higher than 1,000 DPI,” said Antoine Viana, Leti’s project manager. “This technology holds promise for significant improvement in both security and identification applications.”
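The resolution figures quoted above translate directly into sensor geometry: dots per inch is just 25.4 mm divided by the pixel pitch. A quick back-of-the-envelope sketch (the 500 DPI figure is the commonly cited FBI baseline, used here as an assumption rather than a quote from the press release):

```python
# DPI <-> pixel pitch arithmetic for fingerprint sensors.
MM_PER_INCH = 25.4

def pitch_um(dpi):
    """Center-to-center pixel spacing, in micrometers, for a given DPI."""
    return MM_PER_INCH / dpi * 1000

baseline = pitch_um(500)     # commonly cited FBI baseline: 50.8 um pitch
piezomat = pitch_um(1000)    # the PiezoMAT demonstrator: 25.4 um pitch
print(baseline, piezomat)
```

Since individual ZnO nanowires are far narrower than 25 micrometers, the pitch of a nanowire matrix is limited by interconnects and electronics rather than by the sensing elements themselves, which is why Leti can talk about resolutions "much higher than 1,000 DPI".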

A Sept. 5, 2017 Leti press release, which originated the news item, delves further,

The eight-member project team of European companies, universities and research institutes fabricated a demonstrator embedding a silicon chip with 250 pixels, and its associated electronics for signal collection and post-processing. The chip was designed to demonstrate the concept and the major technological achievements, not the maximum potential nanowire integration density. Long-term development will pursue full electronics integration for optimal sensor resolution.

The project also provided valuable experience and know-how in several key areas, such as optimization of seed-layer processing, localized growth of well-oriented ZnO nanowires on silicon substrates, mathematical modeling of complex charge generation, and synthesis of new polymers for encapsulation. The research and deliverables of the project have been presented in scientific journals and at conferences, including Eurosensors 2016 in Budapest.

The 44-month, €2.9 million PiezoMAT (PIEZOelectric nanowire MATrices) research project was funded by the European Commission in the Seventh Framework Program. Its partners include:

  • Leti (Grenoble, France): A leading European center in the field of microelectronics, microtechnology and nanotechnology R&D, Leti is one of the three institutes of the Technological Research Division at CEA, the French Alternative Energies and Atomic Energy Commission. Leti’s activities span basic and applied research up to pilot industrial lines. www.leti-cea.com/cea-tech/leti/english
  • Fraunhofer IAF (Freiburg, Germany): Fraunhofer IAF, one of the leading research facilities worldwide in the field of III-V semiconductors, develops electronic and optical devices based on modern micro- and nanostructures. Fraunhofer IAF’s technologies find applications in areas such as security, energy, communication, health, and mobility. www.iaf.fraunhofer.de/en
  • Centre for Energy Research, Hungarian Academy of Sciences (Budapest, Hungary):  The Institute for Technical Physics and Materials Science, one of the institutes of the Research Centre, conducts interdisciplinary research on complex functional materials and nanometer-scale structures, exploration of physical, chemical, and biological principles, and their exploitation in integrated micro- and nanosystems www.mems.hu, www.energia.mta.hu/en
  • Universität Leipzig (Leipzig, Germany): Germany’s second-oldest university with continuous teaching, established in 1409, hosts about 30,000 students in liberal arts, medicine and natural sciences. One of its scientific profiles is “Complex Matter”, and contributions to PIEZOMAT are in the field of nanostructures and wide gap materials. www.zv.uni-leipzig.de/en/
  • Kaunas University of Technology (Kaunas, Lithuania): One of the largest technical universities in the Baltic States, focusing its R&D activities on novel materials, smart devices, advanced measurement techniques and micro/nano-technologies. The Institute of Mechatronics specializes on multi-physics simulation and dynamic characterization of macro/micro-scale transducers with well-established expertise in the field of piezoelectric devices. http://en.ktu.lt/
  • SPECIFIC POLYMERS (Castries, France): SME with twelve employees and an annual turnover of about 1M€, SPECIFIC POLYMERS acts as an R&D service provider and scale-up producer in the field of functional polymers with high specificity (>1000 polymers in catalogue; >500 customers; >50 countries). www.specificpolymers.fr/
  • Tyndall National Institute (Cork, Ireland): Tyndall National Institute is one of Europe’s leading research centres in Information and Communications Technology (ICT) research and development and the largest facility of its type in Ireland. The Institute employs over 460 researchers, engineers and support staff, with a full-time graduate cohort of 135 students. With a network of 200 industry partners and customers worldwide, Tyndall generates around €30M income each year, 85% from competitively won contracts nationally and internationally. Tyndall is a globally leading Institute in its four core research areas of Photonics, Microsystems, Micro/Nanoelectronics and Theory, Modeling and Design. www.tyndall.ie/
  • OT-Morpho (Paris, France): OT-Morpho is a world leader in digital security & identification technologies with the ambition to empower citizens and consumers alike to interact, pay, connect, commute, travel and even vote in ways that are now possible in a connected world. As our physical and digital, civil and commercial lifestyles converge, OT-Morpho stands precisely at that crossroads to leverage the best in security and identity technologies and offer customized solutions to a wide range of international clients from key industries, including Financial services, Telecom, Identity, Security and IoT. With close to €3bn in revenues and more than 14,000 employees, OT-Morpho is the result of the merger between OT (Oberthur Technologies) and Safran Identity & Security (Morpho) completed on 31 May 2017. Temporarily designated by the name “OT-Morpho”, the new company will unveil its new name in September 2017. For more information, visit www.morpho.com and www.oberthur.com

I have tended to take fingerprint technology for granted, but last fall (2016) I stumbled on a report suggesting that the forensic sciences, including fingerprinting, are perhaps not as conclusive as one might expect after watching fictional police procedural television programmes. My Sept. 23, 2016 posting features a report released by the US President’s Council of Advisors on Science and Technology (PCAST) (‘Forensic Science in Criminal Courts: Ensuring Scientific Validity of Feature-Comparison Methods‘, 174 pp PDF).

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (a television series) is probably the most recent and best known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward, when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about is AI systems, not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns accident data. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) News online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
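Braga’s description of the teleoperation loop can be sketched in a few lines of code. What follows is purely my own toy illustration of learning from demonstration, not Kindred’s actual system; the situations and actions are invented. The idea: the robot tallies what the human pilot did in each situation, imitates the majority choice later, and hands control back when it meets something new.

```python
from collections import Counter, defaultdict

class DemonstrationLearner:
    """Toy learning-from-demonstration loop: remember what the human
    pilot did in each situation, then imitate the majority choice."""

    def __init__(self):
        self.demos = defaultdict(Counter)  # situation -> tally of pilot actions

    def record(self, situation, human_action):
        # Called while the human pilot is teleoperating the robot.
        self.demos[situation][human_action] += 1

    def act(self, situation):
        # Autonomous mode: imitate the pilot's most common choice,
        # or request a takeover for situations never demonstrated.
        if situation not in self.demos:
            return "request human takeover"
        return self.demos[situation].most_common(1)[0][0]

robot = DemonstrationLearner()
robot.record("fragile item", "grip gently")
robot.record("fragile item", "grip gently")
robot.record("heavy box", "use two arms")
```

Notice that the learner absorbs whatever the pilot demonstrates, good habits and bad alike, which is exactly Gildert’s point about values, mannerisms, and biases all going into the AI.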

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and the issues with bias that have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even for someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then it became a two-part posting, with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

Su Song’s astronomical clock tower also featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and Nik Pai, analytics director at local social media company Hootsuite. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article on Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
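To make Moon’s “basic form” of AI concrete, here’s a toy sketch of my own (not from the Vikander article; the call-log data and threshold rule are invented) of a machine using a data set to produce “a piece of output that supports you in making a decision”:

```python
# Toy "AI" in Moon's basic sense: a rule derived from a small data set
# whose output supports (rather than replaces) a human decision.

# (hold_minutes, caller_hung_up) pairs from imaginary call-centre logs
history = [(5, False), (10, False), (25, True), (30, True)]

def learn_threshold(data):
    """Midpoint between the longest hold a caller tolerated
    and the shortest hold that caused a hang-up."""
    hung = [minutes for minutes, hung_up in data if hung_up]
    stayed = [minutes for minutes, hung_up in data if not hung_up]
    return (max(stayed) + min(hung)) / 2

def recommend(hold_minutes, threshold):
    """The 'piece of output' that supports a decision."""
    if hold_minutes >= threshold:
        return "route to a human agent"
    return "keep automated menu"

threshold = learn_threshold(history)  # 17.5 for the data above
```

Nothing here walks or talks, yet it fits Moon’s definition: a small piece of decision-making has been delegated to a rule derived from data.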

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been a matter of convenience or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI only.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See part two for the rest.

Alan Copperman and Amanda Marcotte have a very US-centric discussion about CRISPR and germline editing (designer babies?)

For anyone who needs more information, I ran a three-part series on CRISPR germline editing on August 15, 2017:

Part 1 opens the series with a basic description of CRISPR and the germline research that occasioned it, along with some of the ethical issues and patent disputes arising from this new technology. CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of it and the need for it, according to a couple of social scientists. Informally, there is some discussion via pop culture, as Joelle Renstrom notes, although she is focused on the larger issues touched on by the television series Orphan Black, and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

The news about CRISPR and germline editing by a US team made a bit of a splash, even being mentioned on Salon.com, which hardly ever covers any science news (except for some occasional climate change pieces). In a Sept. 4, 2017 Salon.com item (an excerpt from the full interview), Amanda Marcotte talks with Dr. Alan Copperman, director of the division of reproductive endocrinology and infertility at Mount Sinai Medical Center, about the technology and its implications. As noted in the headline, it’s a US-centric discussion where assumptions are made about who will be leading discussions about the future of the technology.

It’s been a while since I’ve watched it but I believe they do mention in passing that Chinese scientists published two studies about using CRISPR to edit the germline (I think there’s a third Chinese paper in the pipeline) before the American team announced its accomplishment in August 2017. By the way, the first paper by the Chinese caused quite the quandary in April 2015. (My May 14, 2015 posting covers some of the ethical issues; scroll down about 50% of the way for more about the impact of the published Chinese research.)

Also, you might want to notice just how smooth Copperman’s responses are, almost always emphasizing the benefits of the technology before answering the question. He’s had media training and he’s good at this.

They also talk about corn and CRISPR just about the time that agricultural research was announced. Interesting timing, non? (See my Oct. 11, 2017 posting about CRISPR edited corn coming to market in 2020.)

For anyone who wants to skip to the full Marcotte/Copperman interview, go here on Facebook.

Narrating neuroscience in Toronto (Canada) on Oct. 20, 2017 and knitting a neuron

What is it with the Canadian neuroscience community? First, there’s The Beautiful Brain, an exhibition of the extraordinary drawings of Santiago Ramón y Cajal (1852–1934) at the Belkin Gallery on the University of British Columbia (UBC) campus in Vancouver, and a series of events marking the exhibition (for more, see my Sept. 11, 2017 posting; scroll down about 30% for information about the drawings and the events still to come).

I guess there must be some money floating around for raising public awareness because now there’s a neuroscience and ‘storytelling’ event (Narrating Neuroscience) in Toronto, Canada. From a Sept. 25, 2017 ArtSci Salon announcement (received via email),

With NARRATING NEUROSCIENCE we plan to initiate a discussion on the role and the use of storytelling and art (both in verbal and visual forms) to communicate abstract and complex concepts in neuroscience to very different audiences, ranging from fellow scientists, clinicians and patients, to social scientists and the general public. We invited four guests to share their research through case studies and experiences stemming directly from their research or from other practices they have adopted and incorporated into their research, where storytelling and the arts have played a crucial role not only in communicating cutting edge research in neuroscience, but also in developing and advancing it.

OUR GUESTS

MATTEO FARINELLA, PhD, Presidential Scholar in Society and Neuroscience – Columbia University

SHELLEY WALL, AOCAD, MSc, PhD – Assistant professor, Biomedical Communications Graduate Program and Department of Biology, UTM

ALFONSO FASANO, MD, PhD, Associate Professor – University of Toronto Clinician Investigator – Krembil Research Institute Movement Disorders Centre – Toronto Western Hospital

TAHANI BAAKDHAH, MD, MSc, PhD candidate – University of Toronto

DATE: October 20, 2017
TIME: 6:00-8:00 pm
LOCATION: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON

Events Facilitators: Roberta Buiani and Stephen Morris (ArtSci Salon) and Nina Czegledy (Leonardo Network)

TAHANI BAAKDHAH is a PhD student at the University of Toronto studying how stem cells build our retina during development, the mechanism by which the light-sensing cells inside the eye enable us to see this beautiful world, and how we can regenerate these cells in case of disease or injury.

MATTEO FARINELLA combines a background in neuroscience with a lifelong passion for drawing, making comics and illustrations about the brain. He is the author of _Neurocomic_ (Nobrow 2013), published with the support of the Wellcome Trust, and _Cervellopoli_ (Editoriale Scienza 2017), and he has collaborated with universities and educational institutions around the world to make science more clear and accessible. In 2016 Matteo joined Columbia University as a Presidential Scholar in Society and Neuroscience, where he investigates the role of visual narratives in science communication. Working with science journalists, educators and cognitive neuroscientists, he aims to understand how these tools may affect the public perception of science and increase scientific literacy (cartoonscience.org).

ALFONSO FASANO graduated from the Catholic University of Rome, Italy, in 2002 and became a neurologist in 2007. After a 2-year fellowship at the University of Kiel, Germany, he completed a PhD in neuroscience at the Catholic University of Rome. In 2013 he joined the Movement Disorders Centre at Toronto Western Hospital, where he is the co-director of the surgical program for movement disorders. He is also an associate professor of medicine in the Division of Neurology at the University of Toronto and a clinician investigator at the Krembil Research Institute. Dr. Fasano’s main areas of interest are the treatment of movement disorders with advanced technology (infusion pumps and neuromodulation) and the pathophysiology and treatment of tremor and gait disorders. He is the author of more than 170 papers and book chapters and principal investigator of several clinical trials.

SHELLEY WALL is an assistant professor in the University of Toronto’s Biomedical Communications graduate program, a certified medical illustrator, and inaugural Illustrator-in-Residence in the Faculty of Medicine, University of Toronto. One of her primary areas of research, teaching, and creation is graphic medicine—the intersection of comics with illness, medicine, and caregiving—and one of her ongoing projects is a series of comics about caregiving and young onset Parkinson’s disease.

You can register for this free Toronto event here.

One brief observation: there aren’t any writers (other than academics) or storytellers included in this ‘storytelling’ event. The ‘storytelling’ being featured is visual. To be blunt, I’m not of the ‘one picture is worth a thousand words’ school of thinking (see my Feb. 22, 2011 posting). Yes, sometimes pictures are all you need, but that tiresome aphorism, which suggests communication can be reduced to a single mode, really needs to be retired. As for academic writing, it’s not noted for its storytelling qualities or experimentation. Academics are not judged on their writing or storytelling skills, although some are very good.

Getting back to the Toronto event, they seem to have the visual part of their focus, “… discussion on the role and the use of storytelling and art (both in verbal and visual forms) …,” covered. Having recently attended a somewhat similar event in Vancouver, announced in my Sept. 11, 2017 posting, I can say there were some exciting images and ideas presented.

The ArtSci Salon folks also announced this (from the Sept. 25, 2017 ArtSci Salon announcement; received via email),

ATTENTION ARTSCI SALONISTAS AND FANS OF ART AND SCIENCE!!
CALL FOR KNITTING AND CROCHET LOVERS!

In addition to being a PhD student at the University of Toronto, Tahani Baakdhah is a prolific knitter and crocheter and has been the motor behind two successful Knit-a-Neuron Toronto initiatives. We invite all Knitters and Crocheters among our ArtSci Salonistas to pick a pattern
(link below) and knit a neuron (or 2! Or as many as you want!!)

http://bit.ly/2y05hRR

BRING THEM TO OUR OCTOBER 20 ARTSCI SALON!
Come to the ArtSci Salon and knit there!
You can’t come?
Share a picture with @ArtSci_Salon @SciCommTO #KnitANeuronTO on
social media
Or…Drop us a line at artscisalon@gmail.com !

I think it’s been a few years since my last science knitting post. No, it was Oct. 18, 2016. Moving on, I found more neuron knitting while researching this piece. Here’s the Neural Knitworks group, which is part of Australia’s National Science Week (11-19 August 2018) initiative (from the Neural Knitworks webpage),

Neural Knitworks is a collaborative project about mind and brain health.

Whether you’re a whiz with yarn, or just discovering the joy of craft, now you can crochet, wrap, knit or knot—and find out about neuroscience.

During 2014 an enormous number of handmade neurons were donated (1665 in total!) and used to build a giant walk-in brain, as seen here at Hazelhurst Gallery [scroll to end of this post]. Since then Neural Knitworks have been held in dozens of communities across Australia, with installations created in Queensland, the ACT, Singapore, as part of the Cambridge Science Festival in the UK and in Philadelphia, USA.

In 2017, the Neural Knitworks team again invites you to host your own home-grown Neural Knitwork for National Science Week*. Together we’ll create a giant ‘virtual’ neural network by linking your displays visually online.

* If you wish to host a Neural Knitwork event outside of National Science Week or internationally we ask that you contact us to seek permission to use the material, particularly if you intend to create derivative works or would like to exhibit the giant brain. Please outline your plans in an email.

Your creation can be big or small, part of a formal display, or simply consist of neighbourhood neuron ‘yarn-bombings’. Knitworks can be created at home, at work or at school. No knitting experience is required and all ages can participate.

See below for how to register your event and download our scientifically informed patterns.

What is a neuron?

Neurons are electrically excitable cells of the brain, spinal cord and peripheral nerves. The billions of neurons in your body connect to each other in neural networks. They receive signals from every sense, control movement, create memories, and form the neural basis of every thought.

Check out the neuron microscopy gallery for some real-world inspiration.

What happens at a Neural Knitwork?

Neural Knitworks are based on the principle that yarn craft, with its mental challenges, social connection and mindfulness, helps keep our brains and minds sharp, engaged and healthy.

Have fun as you

  • design your own woolly neurons, or get inspired by our scientifically-informed knitting, crochet or knot patterns;
  • natter with neuroscientists and teach them a few of your crafty tricks;
  • contribute to a travelling textile brain exhibition;
  • increase your attention span and test your memory.

Calm your mind and craft your own brain health as you

  • forge friendships;
  • solve creative and mental challenges;
  • practice mindfulness and relaxation;
  • teach and learn;
  • develop eye-hand coordination and fine motor dexterity.

Interested in hosting a Neural Knitwork?

  1. Log your event on the National Science Week calendar to take advantage of multi-channel promotion.
  2. Share the link for this Neural Knitwork page on your own website or online newsletter and add your own event details.
  3. Use this flyer template (2.5 MB .docx) to promote your event in local shop windows and on noticeboards.
  4. Read our event organisers toolbox for tips on hosting a successful event.
  5. You’ll need plenty of yarn, needles, copies of our scientifically-based neuron crafting pattern books (3.4 MB PDF) and a comfy spot in which to create.
  6. Gather together a group of friends who knit, crochet, design, spin, weave and anyone keen to give it a go. Those who know how to knit can teach others how to do it, and there’s even an easy no knit pattern that you can knot.
  7. Download a neuroscience podcast to listen to, and you’ve got a Neural Knitwork!
  8. Join the Neural Knitworks community on Facebook  to share and find information about events including public talks featuring neuroscientists.
  9. Tweet #neuralknitworks to show us your creations.
  10. Find display ideas in the pattern book and on our Facebook page.

Finally, the knitted neurons from Australia’s 2014 National Science Week brain exhibit,

[downloaded from https://www.scienceweek.net.au/neural-knitworks/]

CRISPR corn to come to market in 2020

It seems most of the recent excitement around CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9) has focused on germline editing, specifically of human embryos. Most people don’t realize that the first ‘CRISPR’ product is slated to enter the US market in 2020. A June 14, 2017 American Chemical Society news release (also on EurekAlert) provides a preview,

The gene-editing technique known as CRISPR/Cas9 made a huge splash in the news when it was initially announced. But the first commercial product, expected around 2020, could make it to the market without much fanfare: It’s a waxy corn destined to contribute to paper glue and food thickeners. The cover story of Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society, explores what else is in the works.

Melody M. Bomgardner, a senior editor at C&EN [Chemical & Engineering News], notes that, compared with traditional biotechnology, CRISPR allows scientists to add and remove specific genes from organisms with greater speed and precision and, oftentimes, at a lower cost. Among other things, it could potentially lead to higher quality cotton, non-browning mushrooms, drought-resistant corn and — finally — tasty, grocery store tomatoes.

Some hurdles remain, however, before more CRISPR products become available. Regulators are assessing how they should approach crops modified with the technique, which often (though not always) splices genes into a plant from within the species rather than introducing a foreign gene. And scientists still don’t understand all the genes in any given crop, much less know which ones might be good candidates for editing. Luckily, researchers can use CRISPR to find out.

Melody M. Bomgardner’s June 12, 2017 article for C&EN describes in detail how CRISPR could significantly change agriculture (Note: Links have been removed),

When the seed firm DuPont Pioneer first announced the new corn in early 2016, few people paid attention. Pharmaceutical companies using CRISPR for new drugs got the headlines instead.

But people should notice DuPont’s waxy corn because using CRISPR—an acronym for clustered regularly interspaced short palindromic repeats—to delete or alter traits in plants is changing the world of plant breeding, scientists say. Moreover, the technique’s application in agriculture is likely to reach the public years before CRISPR-aided drugs hit the market.

Until CRISPR tools were developed, the process of finding useful traits and getting them into reliable, productive plants took many years. It involved a lot of steps and was plagued by randomness.

“Now, because of basic research in the lab and in the field, we can go straight after the traits we want,” says Zachary Lippman, professor of biological sciences at Cold Spring Harbor Laboratory. CRISPR has been transformative, Lippman says. “It’s basically a freight train that’s not going to stop.”

Proponents hope consumers will embrace gene-edited crops in a way that they did not accept genetically engineered ones, especially because they needn’t involve the introduction of genes from other species—a process that gave rise to the specter of Frankenfood.

But it’s not clear how consumers will react or if gene editing will result in traits that consumers value. And the potential commercial uses of CRISPR may narrow if agriculture agencies in the U.S. and Europe decide to regulate gene-edited crops in the same way they do genetically engineered crops.

DuPont Pioneer expects the U.S. to treat its gene-edited waxy corn like a conventional crop because it does not contain any foreign genes, according to Neal Gutterson, the company’s vice president of R&D. In fact, the waxy trait already exists in some corn varieties. It gives the kernels a starch content of more than 97% amylopectin, compared with 75% amylopectin in regular feed corn. The rest of the kernel is amylose. Amylopectin is more soluble than amylose, making starch from waxy corn a better choice for paper adhesives and food thickeners.

Like most of today’s crops, DuPont’s current waxy corn varieties are the result of decades of effort by plant breeders using conventional breeding techniques.

Breeders identify new traits by examining unusual, or mutant, plants. Over many generations of breeding, they work to get a desired trait into high-performing (elite) varieties that lack the trait. They begin with a first-generation cross, or hybrid, of a mutant and an elite plant and then breed several generations of hybrids with the elite parent in a process called backcrossing. They aim to achieve a plant that best approximates the elite version with the new trait.

But it’s tough to grab only the desired trait from a mutant and make a clean getaway. DuPont’s plant scientists found that the waxy trait came with some genetic baggage; even after backcrossing, the waxy corn plant did not offer the same yield as elite versions without the trait. The disappointing outcome is common enough that it has its own term: yield drag.

Because the waxy trait is native to certain corn plants, DuPont did not have to rely on the genetic engineering techniques that breeders have used to make herbicide-tolerant and insect-resistant corn plants. Those commonly planted crops contain DNA from other species.

In addition to giving some consumers pause, that process does not precisely place the DNA into the host plant. So researchers must raise hundreds or thousands of modified plants to find the best ones with the desired trait and work to get that trait into each elite variety. Finally, plants modified with traditional genetic engineering need regulatory approval in the U.S. and other countries before they can be marketed.

Instead, DuPont plant scientists used CRISPR to zero in on, and partially knock out, a gene for an enzyme that produces amylose. By editing the gene directly, they created a waxy version of the elite corn without yield drag or foreign DNA.

Plant scientists who adopt gene editing may still need to breed, measure, and observe because traits might not work well together or bring a meaningful benefit. “It’s not a panacea,” Lippman says, “but it is one of the most powerful tools to come around, ever.”

It’s an interesting piece which answers the question of why tomatoes from the grocery store don’t taste good.
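As a side note on why the conventional backcrossing described in the article takes so many generations: each backcross to the elite parent roughly halves the remaining mutant-line genome, so the expected elite fraction after n backcrosses is 1 − (1/2)^(n+1). That formula is the standard textbook expectation from plant breeding, not something stated in the article, but it makes the timescale concrete:

```python
def elite_genome_fraction(n_backcrosses: int) -> float:
    """Expected fraction of the elite (recurrent) parent's genome
    after the initial mutant x elite cross plus n backcrosses."""
    return 1 - 0.5 ** (n_backcrosses + 1)

# The F1 hybrid (no backcross yet) is 50% elite; each backcross halves the rest.
for n in range(7):
    print(f"after {n} backcrosses: {elite_genome_fraction(n):.1%} elite genome")
```

Even after six backcrosses the plant is, on average, only about 99.2% elite, which is why stray mutant-line genes causing yield drag are so hard to shake by breeding alone.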

Oops—Greg Gage does it again! With a ‘neuroscience’ talk for TED and launch for the Plant SpikerBox

I’ve written a couple of times about Greg Gage and his Backyard Brains: first, in a March 28, 2012 posting (scroll down about 40% of the way for the mention of the first [?] ‘SpikerBox’) and, most recently, in a June 26, 2013 posting (scroll down about 25% of the way for the mention of a RoboRoach Kickstarter project from Backyard Brains), which also featured the launch of a new educational product and a TED [technology, entertainment, design] talk.

Here’s the latest from an Oct. 10, 2017 news release (received via email),

Backyard Brains Releases Plant SpikerBox, unlocking the Secret Electrical Language used in Plants

The first consumer device to investigate how plants create behaviors through electrophysiology and to enable interspecies plant to plant communication.

ANN ARBOR, MI, OCTOBER 10, 2017–Today Backyard Brains launched the Plant SpikerBox, the first ever science kit designed to reveal the wonderful nature behind plant behavior through electrophysiology experiments done at home or in the classroom. The new SpikerBox launched alongside three new experiments, enabling users to explore Venus Flytrap and Sensitive Mimosa signals and to perform a jaw-dropping Interspecies Plant-Plant-Communicator experiment. The Plant SpikerBox and all three experiments are featured in a live talk from TED2017 given by Backyard Brains CEO and cofounder Dr. Greg Gage which was released today on ​​https://ted.com.

Backyard Brains received viral attention for their previous videos, TED talks, and for their mission to create hands-on neuroscience experiments for everyone. The company (run by professional neuroscientists) produces consumer-friendly versions of expensive graduate lab equipment used at top research universities around the world. The new plant experiments and device facilitate the growing movement of DIY [do it yourself] scientists, made up of passionate amateurs, students, parents, and teachers.

Like previous inventions, the Plant SpikerBox is extremely easy to use, making it accessible for students as young as middle school. The device works by recording the electrical activity responsible for different plant behaviors. For example, the Venus Flytrap uses an electrical signal to determine if prey has landed in its trap; the SpikerBox reveals these invisible messages and allows you to visualize them on your mobile device. For the first time ever, you can peer into the fascinating world of plant signaling and plant behaviors.

The new SpikerBox features an “Interspecies Plant-Plant-Communicator” which demonstrates the ubiquitous nature of electrical signaling seen in humans, insects, and plants. With this device, one can capture the electrical message (called an action potential) from one plant’s behavior, and send it to a different plant to activate another behavior.

Co-founder and CEO Greg Gage explains, “It is surprising to many people that plants use electrical messages similar to those used by the neurons in our brains. I was shocked to hear that. Many neuroscientists are. But if you think about it, it does make sense. Our nervous system evolved to react quickly. Electricity is fast. The plants we are studying also need to react quickly, so it makes sense they would develop a similar system. To be clear: No, plants don’t have brains, but they do exhibit behaviors and they do use electric messages called ‘Action Potentials’ like we do to send information. The benefit of these plant experiments then is twofold: First, we can simply demonstrate fundamental neuroscience principles, and second, we can spread the wonder of understanding how living creatures work and hopefully encourage others to make a career in life sciences!”

The Plant SpikerBox is a trailblazer, bringing plant electrophysiology to the public for the first time ever. It is designed to work with the Backyard Brains SpikeRecorder software which is available to download for free on their website or in mobile app stores. The three plant experiments are just a few of the dozens of free experiments available on the Backyard Brains website. The Plant SpikerBox is available now for $149.99.

About Backyard Brains

A staggering 1 in 5 people will develop a neurological disorder in their lifetime, making the need for neuroscience studies urgent. Backyard Brains passionately responds with their motto “Neuroscience for Everyone,” providing exposure, education, and experiment kits to students of all ages. Founded in 2010 in Ann Arbor, MI by University of Michigan Neuroscience graduate students Greg Gage and Tim Marzullo, Backyard Brains have been dubbed Champions of Change at an Obama White House ceremony and have won prestigious awards from the National Institutes of Health and the Society for Neuroscience. To learn more, visit BackyardBrains.com

You can find an embedded video of Greg Gage’s TED talk and Plant SpikerBox launch along with links to experiments you could run with it on Backyard Brains’ Plant SpikerBox product page.
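The release describes capturing an action potential from a recorded electrical trace. Backyard Brains doesn’t say how its SpikeRecorder software detects spikes, but the simplest textbook approach is upward threshold crossing; this sketch uses a made-up trace and threshold purely for illustration:

```python
def detect_spikes(trace, threshold=0.5):
    """Return the sample indices where the signal crosses the
    threshold upward -- a crude stand-in for detecting the
    action potentials described in the press release."""
    return [i for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

# Hypothetical voltage trace: a flat baseline with two brief depolarizations.
trace = [0.0, 0.1, 0.9, 0.2, 0.0, 0.05, 0.8, 0.1]
print(detect_spikes(trace))  # -> [2, 6]
```

Real plant electrophysiology signals are noisy and drift over time, so a practical detector would also filter and baseline-correct the trace, but the thresholding idea is the same.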

For a sample of what they have on offer, here’s an excerpt from the Venus Flytrap Electrophysiology experiment webpage (Note: Links have been removed),

Background

Your nervous system allows you to sense and respond quickly to the environment around you. You have a nervous system, animals have nervous systems, but plants do not. But not having a nervous system does not mean you cannot sense and respond to the world. Plants can certainly sense the environment around them and move. You have seen your plants slowly turn their leaves towards sunlight by the window over a week, open their flowers in the day, and close their flowers during the night. Some plants can move in much more dramatic fashion, such as the Venus Flytrap and the Sensitive Mimosa.

The Venus Flytrap comes from the swamps of North Carolina, USA, and lives in very nutrient-poor, water-logged soil. It photosynthesizes like other plants, but it can’t always rely on the sunlight for food. To supplement its food supply it traps and eats insects, extracting from them the nitrogen and phosphorous needed to form plant food (amino acids, nucleic acids, and other molecules).

If you look closely at the Venus Flytrap, you will notice it has very tiny “Trigger Hairs” inside its trap leaves.

If a wayward, unsuspecting insect touches a trigger hair, an Action Potential occurs in the leaves. This is a different Action Potential than what we are used to seeing in neurons, as it’s based on the movement of calcium, potassium, and chloride ions (vs. movement of potassium and sodium as in the Action Potentials of neurons and muscles), and it is muuuuuuuuucccchhhhhh longer than anything we’ve seen before.

If the trigger hair is touched twice within 20 seconds (firing two Action Potentials within 20 seconds), the trap closes. The trap is not closing due to muscular action (plants do not have muscles), but rather due to a rapid, osmotic change in the curvature of the trap leaves. Interestingly, the firing of Action Potentials is not always reliable, depending on time of year, temperature, health of the plant, and/or other factors. Quite different from us humans, Action Potential failure is not devastating to a Venus Flytrap.

We can observe this plant Action Potential using our Plant SpikerBox. Welcome to the Brave New World of Plant Electrophysiology.

Downloads

Before you begin, make sure you have the Backyard Brains SpikeRecorder. The Backyard Brains SpikeRecorder program allows you to visualize and save data on your computer when doing experiments.

….
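The double-trigger rule quoted above (two Action Potentials within 20 seconds) amounts to a tiny timing check. Here it is as a sketch; the spike times are invented for illustration and are assumed to be sorted:

```python
def trap_closes(spike_times_s, window_s=20.0):
    """True if any two consecutive action potentials fall within
    window_s seconds of each other -- the Venus Flytrap's
    double-trigger rule as described by Backyard Brains."""
    return any(later - earlier <= window_s
               for earlier, later in zip(spike_times_s, spike_times_s[1:]))

print(trap_closes([0.0, 35.0]))        # False: spikes too far apart
print(trap_closes([0.0, 35.0, 48.0]))  # True: 35 s -> 48 s is within 20 s
```

Only consecutive spikes matter here, which matches the plant’s behavior of effectively “forgetting” a single touch once the 20-second window has elapsed.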

I did feel a bit sorry for the Venus Flytrap in Greg Gage’s TED talk; it was fooled into closing its trap. According to Gage, the Venus Flytrap has a limited number of times it can close its trap, and after the last time, it dies. On the other hand, I eat meat and use leather goods, so there is no pedestal for me to perch on.

For anyone who caught the Britney Spears reference in the headline of this posting,

From exploring outer space with Britney Spears to exploring plant communication and neuroscience in your back yard, science can be found in many different places.