
Robotics where and how you don’t expect them: a wearable robot and a robot implant for regeneration

Generally, I expect robots to be machines that are external to my body, but recently there were two news bits about different approaches. First, the wearable robot.

A robot that supports your hip

A January 10, 2018 news item on ScienceDaily describes research into wearable artificial muscles,

Scientists are one step closer to artificial muscles. Orthotics have come a long way since their initial wood and strap designs, yet innovation lapsed when it came to compensating for muscle power — until now.

A collaborative research team has designed a wearable robot to support a person’s hip joint while walking. The team, led by Minoru Hashimoto, a professor of textile science and technology at Shinshu University in Japan, published the details of their prototype in Smart Materials and Structures, a journal published by the Institute of Physics.

A January 9, 2018 Shinshu University press release on EurekAlert, which originated the news item, provides more detail,

“With a rapidly aging society, an increasing number of elderly people require care after suffering from stroke, and other-age related disabilities. Various technologies, devices, and robots are emerging to aid caretakers,” wrote Hashimoto, noting that several technologies meant to assist a person with walking are often cumbersome to the user. “[In our] current study, [we] sought to develop a lightweight, soft, wearable assist wear for supporting activities of daily life for older people with weakened muscles and those with mobility issues.”

The wearable system consists of plasticized polyvinyl chloride (PVC) gel, mesh electrodes, and applied voltage. The mesh electrodes sandwich the gel, and when voltage is applied, the gel flexes and contracts, like a muscle. It’s a wearable actuator, the mechanism that causes movement.

“We thought that the electrical mechanical properties of the PVC gel could be used for robotic artificial muscles, so we started researching the PVC gel,” said Hashimoto. “The ability to add voltage to PVC gel is especially attractive for high speed movement, and the gel moves with high speed with just a few hundred volts.”

In a preliminary evaluation, a stroke patient with some paralysis on one side of his body walked with and without the wearable system.

“We found that the assist wear enabled natural movement, increasing step length and decreasing muscular activity during straight line walking,” wrote Hashimoto. The researchers also found that adjusting the charge could change the level of assistance the actuator provides.

The robotic system earned first place in demonstrations of its multilayer PVC gel artificial muscle at the “24th International Symposium on Smart Structures and Materials & Nondestructive Evaluation and Health Monitoring,” held by SPIE, the international society for optics and photonics.

Next, the researchers plan to create a string actuator using the PVC gel, which could potentially lead to the development of fabric capable of providing more manageable external muscular support with ease.

Here’s a link to and a citation for the paper,

PVC gel soft actuator-based wearable assist wear for hip joint support during walking by Yi Li and Minoru Hashimoto. Smart Materials and Structures, Volume 26, Number 12. DOI: 10.1088/1361-665X/aa9315. Published 30 October 2017.

© 2017 IOP Publishing Ltd

This paper is behind a paywall and I see it was published in the Fall of 2017. Either they postponed the publicity or this is the second wave. In any event, it was timely as it allowed me to post this along with the robotic research on regeneration.

Robotic implants and tissue regeneration

Boston Children’s Hospital in a January 10, 2018 news release on EurekAlert describes a new (to me) method for tissue regeneration,

An implanted, programmable medical robot can gradually lengthen tubular organs by applying traction forces — stimulating tissue growth in stunted organs without interfering with organ function or causing apparent discomfort, report researchers at Boston Children’s Hospital.

The robotic system, described today in Science Robotics, induced cell proliferation and lengthened part of the esophagus in a large animal by about 75 percent, while the animal remained awake and mobile. The researchers say the system could treat long-gap esophageal atresia, a rare birth defect in which part of the esophagus is missing, and could also be used to lengthen the small intestine in short bowel syndrome.

The most effective current operation for long-gap esophageal atresia, called the Foker process, uses sutures anchored on the patient’s back to gradually pull on the esophagus. To prevent the esophagus from tearing, patients must be paralyzed in a medically induced coma and placed on mechanical ventilation in the intensive care unit for one to four weeks. The long period of immobilization can also cause medical complications such as bone fractures and blood clots.

“This project demonstrates proof-of-concept that miniature robots can induce organ growth inside a living being for repair or replacement, while avoiding the sedation and paralysis currently required for the most difficult cases of esophageal atresia,” says Russell Jennings, MD, surgical director of the Esophageal and Airway Treatment Center at Boston Children’s Hospital, and a co-investigator on the study. “The potential uses of such robots are yet to be fully explored, but they will certainly be applied to many organs in the near future.”

The motorized robotic device is attached only to the esophagus, so would allow a patient to move freely. Covered by a smooth, biocompatible, waterproof “skin,” it includes two attachment rings, placed around the esophagus and sewn into place with sutures. A programmable control unit outside the body applies adjustable traction forces to the rings, slowly and steadily pulling the tissue in the desired direction.

The device was tested in the esophagi of pigs (five received the implant and three served as controls). The distance between the two rings (pulling the esophagus in opposite directions) was increased by small, 2.5-millimeter increments each day for 8 to 9 days. The animals were able to eat normally even with the device applying traction to their esophagi, and showed no sign of discomfort.

On day 10, the segment of esophagus had increased in length by 77 percent on average. Examination of the tissue showed a proliferation of the cells that make up the esophagus. The organ also maintained its normal diameter.

“This shows we didn’t simply stretch the esophagus — it lengthened through cell growth,” says Pierre Dupont, PhD, the study’s senior investigator and Chief of Pediatric Cardiac Bioengineering at Boston Children’s.

The research team is now starting to test the robotic system in a large animal model of short bowel syndrome. While long-gap esophageal atresia is quite rare, the prevalence of short bowel syndrome is much higher. Short bowel can be caused by necrotizing enterocolitis in the newborn, Crohn’s disease in adults, or a serious infection or cancer requiring a large segment of intestine to be removed.

“Short bowel syndrome is a devastating illness requiring patients to be fed intravenously,” says gastroenterologist Peter Ngo, MD, a coauthor on the study. “This, in turn, can lead to liver failure, sometimes requiring a liver or multivisceral (liver-intestine) transplant, outcomes that are both devastating and costly.”

The team hopes to get support to continue its tests of the device in large animal models, and eventually conduct clinical trials. They will also test other features.

“No one knows the best amount of force to apply to an organ to induce growth,” explains Dupont. “Today, in fact, we don’t even know what forces we are applying clinically. It’s all based on surgeon experience. A robotic device can figure out the best forces to apply and then apply those forces precisely.”
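Out of curiosity, here’s a quick back-of-envelope check of the figures quoted above (2.5 mm of added traction per day for 8 to 9 days, and a 77 percent length gain). It’s a minimal sketch in Python of my own arithmetic, not the paper’s, and it leans on the loose assumption that the length gained roughly tracks the distance the rings were driven apart.

```python
# Back-of-envelope check of the figures quoted above (my arithmetic, not the paper's).
daily_increment_mm = 2.5   # traction added per day
days = 9                   # upper end of the reported 8-9 day range
growth_fraction = 0.77     # reported average increase in segment length

total_traction_mm = daily_increment_mm * days  # ~22.5 mm of total pull

# ASSUMPTION: the length gained roughly equals the displacement applied by the rings.
# Under that assumption, the starting segment length would have been on the order of:
implied_initial_length_mm = total_traction_mm / growth_fraction  # ~29 mm

print(f"total traction applied: {total_traction_mm} mm")
print(f"implied initial segment length: {implied_initial_length_mm:.0f} mm")
```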

Here’s a link to and a citation for the paper,

In vivo tissue regeneration with robotic implants by Dana D. Damian, Karl Price, Slava Arabagi, Ignacio Berra, Zurab Machaidze, Sunil Manjila, Shogo Shimada, Assunta Fabozzo, Gustavo Arnal, David Van Story, Jeffrey D. Goldsmith, Agoston T. Agoston, Chunwoo Kim, Russell W. Jennings, Peter D. Ngo, Michael Manfredi, and Pierre E. Dupont. Science Robotics, 10 Jan 2018: Vol. 3, Issue 14, eaaq0018. DOI: 10.1126/scirobotics.aaq0018.

This paper is behind a paywall.

An exoskeleton for a cell-sized robot

A January 3, 2018 news item on phys.org announces work on cell-sized robots,

An electricity-conducting, environment-sensing, shape-changing machine the size of a human cell? Is that even possible?

Cornell physicists Paul McEuen and Itai Cohen not only say yes, but they’ve actually built the “muscle” for one.

With postdoctoral researcher Marc Miskin at the helm, the team has made a robot exoskeleton that can rapidly change its shape upon sensing chemical or thermal changes in its environment. And, they claim, these microscale machines – equipped with electronic, photonic and chemical payloads – could become a powerful platform for robotics at the size scale of biological microorganisms.

“You could put the computational power of the spaceship Voyager onto an object the size of a cell,” Cohen said. “Then, where do you go explore?”

“We are trying to build what you might call an ‘exoskeleton’ for electronics,” said McEuen, the John A. Newman Professor of Physical Science and director of the Kavli Institute at Cornell for Nanoscale Science. “Right now, you can make little computer chips that do a lot of information-processing … but they don’t know how to move or cause something to bend.”

Cornell University has produced a video of the researchers discussing their work (about 3 mins. running time).

For those who prefer text or need it to reinforce their understanding, there’s a January 2, 2018 Cornell University news release (also on EurekAlert but dated Jan. 3, 2018) by Tom Fleischman, which originated the news item,

The machines move using a motor called a bimorph. A bimorph is an assembly of two materials – in this case, graphene and glass – that bends when driven by a stimulus like heat, a chemical reaction or an applied voltage. The shape change happens because, in the case of heat, two materials with different thermal responses expand by different amounts over the same temperature change.

As a consequence, the bimorph bends to relieve some of this strain, allowing one layer to stretch out longer than the other. By adding rigid flat panels that cannot be bent by bimorphs, the researchers localize bending to take place only in specific places, creating folds. With this concept, they are able to make a variety of folding structures ranging from tetrahedra (triangular pyramids) to cubes.
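As a rough illustration of the thermal-mismatch bending just described, here is a minimal Python sketch using the classic Timoshenko bimetal-strip formula. The layer thicknesses, moduli and expansion coefficients below are placeholder assumptions of mine, not the graphene/glass values from the Cornell work; the point is only that nanometre-thin stacks bend with very tight radii.

```python
# Rough Timoshenko bimetal-strip estimate of how sharply a two-layer bimorph
# curls when heated.  All numeric values below are placeholder assumptions of
# mine, not the graphene/glass parameters from the Cornell paper.

def bimorph_curvature(t1, t2, E1, E2, alpha1, alpha2, delta_T):
    """Return curvature (1/m) of a two-layer strip heated by delta_T kelvin.

    t1, t2         -- layer thicknesses (m)
    E1, E2         -- Young's moduli (Pa)
    alpha1, alpha2 -- thermal expansion coefficients (1/K)
    """
    m = t1 / t2   # thickness ratio
    n = E1 / E2   # modulus ratio
    h = t1 + t2   # total thickness
    numerator = 6.0 * (alpha2 - alpha1) * delta_T * (1.0 + m) ** 2
    denominator = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return numerator / denominator

# Illustrative, made-up numbers: a ~2 nm oxide layer on a ~0.3 nm carbon layer,
# heated by 10 K.  The takeaway is that nanometre-thin stacks bend with
# micron-scale radii, which is what lets the flat panels fold into 3D shapes.
kappa = bimorph_curvature(
    t1=2e-9, t2=0.3e-9,            # thicknesses (m), assumed
    E1=70e9, E2=1000e9,            # moduli (Pa), assumed
    alpha1=0.5e-6, alpha2=7e-6,    # expansion coefficients (1/K), assumed
    delta_T=10.0,
)
print(f"curvature ~ {kappa:.2e} 1/m, bend radius ~ {1e6 / kappa:.1f} micrometres")
```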

In the case of graphene and glass, the bimorphs also fold in response to chemical stimuli by driving large ions into the glass, causing it to expand. Typically this chemical activity only occurs on the very outer edge of glass when submerged in water or some other ionic fluid. Since their bimorph is only a few nanometers thick, the glass is basically all outer edge and very reactive.

“It’s a neat trick,” Miskin said, “because it’s something you can do only with these nanoscale systems.”

The bimorph is built using atomic layer deposition – chemically “painting” atomically thin layers of silicon dioxide onto aluminum over a cover slip – then wet-transferring a single atomic layer of graphene on top of the stack. The result is the thinnest bimorph ever made. One of their machines was described as being “three times larger than a red blood cell and three times smaller than a large neuron” when folded. Folding scaffolds of this size have been built before, but this group’s version has one clear advantage.

“Our devices are compatible with semiconductor manufacturing,” Cohen said. “That’s what’s making this compatible with our future vision for robotics at this scale.”

And due to graphene’s relative strength, Miskin said, it can handle the types of loads necessary for electronics applications. “If you want to build this electronics exoskeleton,” he said, “you need it to be able to produce enough force to carry the electronics. Ours does that.”

For now, these tiniest of tiny machines have no commercial application in electronics, biological sensing or anything else. But the research pushes the science of nanoscale robots forward, McEuen said.

“Right now, there are no ‘muscles’ for small-scale machines,” he said, “so we’re building the small-scale muscles.”

Here’s a link to and a citation for the paper,

Graphene-based bimorphs for micron-sized, autonomous origami machines by Marc Z. Miskin, Kyle J. Dorsey, Baris Bircan, Yimo Han, David A. Muller, Paul L. McEuen, and Itai Cohen. PNAS [Proceedings of the National Academy of Sciences], 2018. DOI: 10.1073/pnas.1712889115. Published ahead of print January 2, 2018.

This paper is behind a paywall.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation’ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior and more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
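For readers who think in code, the three SAT levels map naturally onto a small data structure. The sketch below is my own illustrative Python rendering of the information each level is said to carry, based only on the description above; the class and field names are assumptions, not ARL’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of the three Situation awareness-based Agent Transparency
# (SAT) levels described above.  Class and field names are my own, not ARL's.

@dataclass
class SATLevel1:
    """Level 1: the agent's current state, goals, intentions and plans."""
    current_state: str
    goals: List[str]
    intentions: List[str]
    plans: List[str]

@dataclass
class SATLevel2:
    """Level 2: the reasoning behind the plan, plus constraints/affordances."""
    reasoning_process: str
    constraints: List[str] = field(default_factory=list)
    affordances: List[str] = field(default_factory=list)

@dataclass
class SATLevel3:
    """Level 3: projected future states, consequences, success odds, uncertainty."""
    projected_future_states: List[str]
    predicted_consequences: List[str]
    likelihood_of_success: float      # e.g. 0.82 for an 82% estimate
    uncertainty: str

@dataclass
class AgentTransparencyReport:
    """What an agent exposes to its human teammate at a given transparency level."""
    level1: SATLevel1
    level2: Optional[SATLevel2] = None   # revealed only at higher transparency
    level3: Optional[SATLevel3] = None
```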

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated — accepting the agent’s plan when it is correct and rejecting it when it is incorrect– when the agent had a higher level of transparency.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design the user interfaces, which can include visual, auditory, and other modalities, that can support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science, May 2018. DOI: 10.1080/1463922X.2017.1315750.

This paper is behind a paywall.

smARTcities SALON in Vaughan, Ontario, Canada on March 22, 2018

Thank goodness for the March 15, 2018 notice from the Art/Sci Salon in Toronto (received via email) announcing an event on smart cities being held in the nearby city of Vaughan (it borders Toronto to the north). The notice led me on quite the chase as I delved into a reference to Smart City projects taking place across the country; the results of that chase follow after this bit about the event.

smARTcities SALON

From the announcement,

SMARTCITIES SALON

Smart City projects are currently underway across the country, including Google SideWalk at Toronto Harbourfront. Canada’s first Smart Hospital is currently under construction in the City of Vaughan. It’s an example of the city working towards building a reputation as one of the world’s leading Smart Cities, by adopting new technologies consistent with priorities defined by citizen collaboration.

Hon. Maurizio Bevilacqua, P.C., Mayor, chairs the Smart City Advisory Task Force leading historic transformation in Vaughan. Working to become a Smart City is a chance to encourage civic engagement, accelerate economic growth, and generate efficiencies. His opening address will outline some of the priorities and opportunities that our panel will discuss.

PANELISTS

Lilian Radovac, PhD, Assistant Professor, Institute of Communication, Culture, Information & Technology, University of Toronto. Lilian is a historian of urban sounds and cultures and has a critical interest in SmartCity initiatives in two of the cities she has called home: New York City and Toronto.

Oren Berkovich is the CEO of Singularity University in Canada, an educational institution and a global network of experts and entrepreneurs that work together on solving the world’s biggest challenges. As a catalyst for long-term growth, Oren spends his time connecting people with ideas to facilitate strategic conversations about the future.

Frank Di Palma, the Chief Information Officer for the City of Vaughan, is a graduate of York University with more than 20 years’ experience in IT operations and services. Frank leads the many SmartCity initiatives already underway at Vaughan City Hall.

Ron Wild, artist and Digital Art/Science Collaborator, will moderate the discussion.

Audience Participation opportunities will enable attendees to forward questions for consideration by the panel.

You can register for the smARTcities SALON here on Eventbrite,

Art Exhibition Reception

Following the panel discussion, the audience is invited to view the art exhibition ‘smARTcities; exploring the digital frontier.’ Works commissioned by Vaughan specifically for the exhibition, including the SmartCity Map and SmartHospital Map, will be shown as well as other Art/Science-themed works. Many of these ‘maps’ were made by Ron in collaboration with mathematicians, scientists, and medical researchers, some of whom will be in attendance. Further examples of Ron’s art can be found HERE

Please click through to buy a FREE ticket so we know how many guests to expect. Thank you.

This event can be reached by taking the subway up the #1 west line to the new Vaughan Metropolitan Centre terminal station. Take the #20 bus to the Vaughan Mills transfer loop; transfer there to the #4/A which will take you to the stop right at City Hall. Free parking is available for those coming by car. Car-pooling and ride-sharing is encouraged. The facility is fully accessible.

Here’s one of Wild’s pieces,

144×96″ triptych, Vaughan, 2018. Artist: mrowade (Ron Wild?)

I’m pretty sure that mrowade is Ron Wild.

Smart Cities, the rest of the country, and Vancouver

Much to my surprise, I covered the ‘Smart Cities’ story in its early (but not earliest) days (and before it was Smart Cities) in two posts: January 30, 2015 and January 27, 2016 about the National Research Council of Canada (NRC) and its cities and technology public engagement exercises.

David Vogt in a July 12, 2016 posting on the Urban Opus website provides some catch up information,

Canada’s National Research Council (NRC) has identified Cities of the Future as a game-changing technology and economic opportunity.  Following a national dialogue, an Executive Summit was held in Toronto on March 31, 2016, resulting in an important summary report that will become the seed for Canadian R&D strategy in this sector.

The conclusion so far is that the opportunity for Canada is to muster leadership in the following three areas (in order):

  1. Better Infrastructure and Infrastructure Management
  2. Efficient Transportation; and
  3. Renewable Energy

The National Research Council (NRC) offers a more balanced view of the situation on its “NRC capabilities in smart infrastructure and cities of the future” webpage,

Key opportunities for Canada

  • North America is one of the most urbanised regions in the world (82% living in urban areas in 2014).
  • With growing urbanisation, sustainable development challenges will be increasingly concentrated in cities, requiring technology solutions.
  • Smart cities are data-driven, relying on broadband and telecommunications, sensors, social media, data collection and integration, automation, analytics and visualization to provide real-time situational analysis.
  • Most infrastructure will be “smart” by 2030 and transportation systems will be intelligent, adaptive and connected.
  • Renewable energy, energy storage, power quality and load measurement will contribute to smart grid solutions that are integrated with transportation.
  • “Green”, sustainable and high-performing construction and infrastructure materials are in demand.

Canadian challenges

  • High energy use: Transportation accounts for roughly 23% of Canada’s total greenhouse gas emissions, followed closely by the energy consumption of buildings, which accounts for 12% of Canada’s greenhouse gas emissions (Canada’s United Nations Framework Convention on Climate Change report).
  • Traffic congestion in Canadian cities is increasing, contributing to loss of productivity, increased stress for citizens as well as air and noise pollution.
  • Canadian cities are susceptible to extreme weather and events related to climate change (e.g., floods, storms).
  • Changing demographics: aging population (need for accessible transportation options, housing, medical and recreational services) and diverse (immigrant) populations.
  • Financial and jurisdictional issues: the inability of municipalities (who have primary responsibility) to finance R&D or large-scale solutions without other government assistance.

Opportunities being examined
Living lab

  • Test bed for smart city technology in order to quantify and demonstrate the benefits of smart cities.
  • Multiple partnering opportunities (e.g. municipalities, other government organizations, industry associations, universities, social sciences, urban planning).

The integrated city

  • Efficient transportation: integration of personal mobility and freight movement as key city and inter-city infrastructure.
  • Efficient and integrated transportation systems linked to city infrastructure.
  • Planning urban environments for mobility while repurposing redundant infrastructures (converting parking to the food-water-energy nexus) as population shifts away from personal transportation.

FOOD-WATER-ENERGY NEXUS

  • Sustainable urban bio-cycling.
  • System approach to the development of the technology platforms required to address the nexus.

Key enabling platform technologies
Artificial intelligence

  • Computer vision and image understanding
  • Adaptive robots; future robotic platforms for part manufacturing
  • Understanding human emotions from language
  • Next generation information extraction using deep learning
  • Speech recognition
  • Artificial intelligence to optimize talent management for human resources

Nanomaterials

  • Nanoelectronics
  • Nanosensing
  • Smart materials
  • Nanocomposites
  • Self-assembled nanostructures
  • Nanoimprint
  • Nanoplasmonic
  • Nanoclay
  • Nanocoating

Big data analytics

  • Predictive equipment maintenance
  • Energy management
  • Artificial intelligence for optimizing energy storage and distribution
  • Understanding and tracking of hazardous chemical elements
  • Process and design optimization

Printed electronics for Internet of Things

  • Inks and materials
  • Printing technologies
  • Large area, flexible, stretchable, printed electronics components
  • Applications: sensors for Internet of Things, wearables, antenna, radio-frequency identification tags, smart surfaces, packaging, security, signage

If you’re curious about the government’s plan with regard to implementation, this NRC webpage provides some fascinating insight into their hopes if not the reality. (I have mentioned artificial intelligence and the federal government before in a March 16, 2018 posting about the federal budget and science; scroll down approximately 50% of the way to the subsection titled, Budget 2018: Who’s watching over us? and scan for Michael Karlin’s name.)

As for the current situation, there’s a Smart Cities Challenge taking place. Both Toronto and Vancouver have webpages dedicated to their responses to the challenge. (You may want to check your own city’s website to find out if it’s participating.) I have a preference for the Toronto page as they immediately state that they’re participating in this challenge and explain what they want from you. Vancouver’s page is, by comparison, a bit confusing: two videos are immediately presented to the reader and, from there, too many graphics compete for your attention. They do, however, offer something valuable: links to explanations for smart cities and for the challenge.

Here’s a description of the Smart Cities Challenge (from its webpage),

The Smart Cities Challenge

The Smart Cities Challenge is a pan-Canadian competition open to communities of all sizes, including municipalities, regional governments and Indigenous communities (First Nations, Métis and Inuit). The Challenge encourages communities to adopt a smart cities approach to improve the lives of their residents through innovation, data and connected technology.

  • One prize of up to $50 million open to all communities, regardless of population;
  • Two prizes of up to $10 million open to all communities with populations under 500,000 people; and
  • One prize of up to $5 million open to all communities with populations under 30,000 people.

Infrastructure Canada is engaging Indigenous leaders, communities and organizations to finalize the design of a competition specific to Indigenous communities that will reflect their unique realities and issues. Indigenous communities are also eligible to compete for all the prizes in the current competition.

The Challenge will be an open and transparent process. Communities that submit proposals will also post them online, so that residents and stakeholders can see them. An independent Jury will be appointed to select finalists and winners.

Applications are due by April 24, 2018. Communities interested in participating should visit the Impact Canada Challenge Platform for the applicant guide and more information.

Finalists will be announced in the Summer of 2018 and winners in Spring 2019 according to the information on the Impact Canada Challenge Platform.

It’s not clear to me if she’s leading Vancouver’s effort to win the Smart Cities Challenge, but Jessie Adcock’s (City of Vancouver Chief Digital Officer) Twitter feed certainly features information on the topic. I suspect that if you’re looking for the most up-to-date information on Vancouver’s participation, you’re more likely to find it on her feed than on the City of Vancouver’s Smart Cities Challenge webpage.

Machine learning, neural networks, and knitting

In a recent (Tuesday, March 6, 2018) live stream ‘conversation’ (‘Science in Canada; Investing in Canadian Innovation’ now published on YouTube) between Canadian Prime Minister, Justin Trudeau, and US science communicator, Bill Nye, at the University of Ottawa, they discussed, amongst many other topics, what AI (artificial intelligence) can and can’t do. They seemed to agree that AI can’t be creative, i.e., write poetry, create works of art, make jokes, etc. A conclusion which is both (in my opinion) true and not true.

There are times when I think the joke may be on us (humans). Take for example this March 6, 2018 story by Alexis Madrigal for The Atlantic magazine (Note: Links have been removed),

SkyKnit: How an AI Took Over an Adult Knitting Community

Ribald knitters teamed up with a neural-network creator to generate new types of tentacled, cozy shapes.

Janelle Shane is a humorist [Note: She describes herself as a “Research Scientist in optics. Plays with neural networks. …” in her Twitter bio.] who creates and mines her material from neural networks, the form of machine learning that has come to dominate the field of artificial intelligence over the last half-decade.

Perhaps you’ve seen the candy-heart slogans she generated for Valentine’s Day: DEAR ME, MY MY, LOVE BOT, CUTE KISS, MY BEAR, and LOVE BUN.

Or her new paint-color names: Parp Green, Shy Bather, Farty Red, and Bull Cream.

Or her neural-net-generated Halloween costumes: Punk Tree, Disco Monster, Spartan Gandalf, Starfleet Shark, and A Masked Box.

Her latest project, still ongoing, pushes the joke into a new, physical realm. Prodded by a knitter on the knitting forum Ravelry, Shane trained a type of neural network on a series of over 500 sets of knitting instructions. Then, she generated new instructions, which members of the Ravelry community have actually attempted to knit.

“The knitting project has been a particularly fun one so far just because it ended up being a dialogue between this computer program and these knitters that went over my head in a lot of ways,” Shane told me. “The computer would spit out a whole bunch of instructions that I couldn’t read and the knitters would say, this is the funniest thing I’ve ever read.”

It appears that the project evolved,

The human-machine collaboration created configurations of yarn that you probably wouldn’t give to your in-laws for Christmas, but they were interesting. The user citikas was the first to post a try at one of the earliest patterns, “reverss shawl.” It was strange, but it did have some charisma.

Shane nicknamed the whole effort “Project Hilarious Disaster.” The community called it SkyKnit.

I’m not sure what’s meant by “community” as mentioned in the previous excerpt. Are we talking about humans only, AI only, or both humans and AI?

Here’s some of what underlies SkyKnit (Note: Links have been removed),

The different networks all attempt to model the data they’ve been fed by tuning a vast, funky flowchart. After you’ve created a statistical model that describes your real data, you can also roll the dice and generate new, never-before-seen data of the same kind.

How this works—like, the math behind it—is very hard to visualize because values inside the model can have hundreds of dimensions and we are humble three-dimensional creatures moving through time. But as the neural-network enthusiast Robin Sloan puts it, “So what? It turns out imaginary spaces are useful even if you can’t, in fact, imagine them.”

Out of that ferment, a new kind of art has emerged. Its practitioners use neural networks not to attain practical results, but to see what’s lurking in these vast, opaque systems. What did the machines learn about the world as they attempted to understand the data they’d been fed? Famously, Google released DeepDream, which produced trippy visualizations that also demonstrated how that type of neural network processed the textures and objects in its source imagery.
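If you want a concrete picture of ‘fit a statistical model to real data, then roll the dice,’ here is a deliberately tiny character-level model in Python (PyTorch). It is not Shane’s code, and the three fake ‘knitting’ lines it trains on are stand-ins I made up; it simply learns to predict the next character and then samples new text one character at a time.

```python
# Minimal character-level language model: fit a model to some text, then sample
# new text from it.  This is a toy sketch of my own, not Janelle Shane's code;
# the "training data" below is invented stand-in text, not her Ravelry dataset.
import torch
import torch.nn as nn

corpus = (
    "row 1: k2, p2, repeat to end.\n"
    "row 2: p2, k2, repeat to end.\n"
    "row 3: k1, yo, k2tog, repeat to end.\n"
) * 50  # repeat so the toy model has something to chew on

chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharLSTM(nn.Module):
    def __init__(self, vocab, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch = 64, 16

# Training: repeatedly show the model short snippets and ask it to predict the
# next character at every position.
for step in range(300):
    starts = torch.randint(0, len(data) - seq_len - 1, (batch,)).tolist()
    x = torch.stack([data[s:s + seq_len] for s in starts])
    y = torch.stack([data[s + 1:s + seq_len + 1] for s in starts])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Roll the dice": sample new, never-before-seen text one character at a time.
model.eval()
x, state, out = torch.tensor([[stoi["r"]]]), None, ["r"]
with torch.no_grad():
    for _ in range(200):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        out.append(itos[nxt])
        x = torch.tensor([[nxt]])
print("".join(out))
```

Trained long enough on a real pile of patterns, this kind of model starts producing plausible-looking (and occasionally hilarious) instructions, which is the effect the Ravelry knitters were playing with.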

Madrigal’s article is well worth reading if you have the time. You can also supplement Madrigal’s piece with an August 9, 2017 article about Janelle Shane’s algorithmic experiments by Jacob Brogan for slate.com.

I found some SkyKnit examples on Ravelry including this one from the Dollybird Workshop,

© Chatelaine

SkyKnit fancy addite rifopshent
by SkyKnit

Published in: Dollybird Workshop, SkyKnit
Craft: Knitting
Category: Stitch pattern
Published: February 2018
Yarn weight: Fingering (14 wpi)
Gauge: 24 stitches and 30 rows = 4 inches in stockinette stitch
Needle size: US 4 – 3.5 mm


This pattern is available as a free Ravelry download

SkyKnit is a type of machine learning algorithm called an artificial neural network. Its creator, Janelle Shane of AIweirdness.com, gave it 88,000 lines of knitting instructions from Stitch-Maps.com and Ravelry, and it taught itself how to make new patterns. Join the discussion!

SkyKnit seems to have created something that has parallel columns and is reversible. Perhaps a scarf?

Test-knitting & image courtesy of Chatelaine

Patterns may include notes from testknitters; yarn, needles, and gauge are totally at your discretion.

About the designer
SkyKnit’s favorites include lace, tentacles, and totally not the elimination of the human race.
For more information, see: http://aiweirdness.com/

Shane’s website, aiweirdness.com, is where she posts musings such as this (from a March 2, [?] 2018 posting), Note: A link has been removed,

If you’ve been on the internet today, you’ve probably interacted with a neural network. They’re a type of machine learning algorithm that’s used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies – including Google, Microsoft, IBM, and Facebook – have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes.

[image with its Azure-generated caption and tags omitted]

Microsoft Azure’s computer vision API [application programming interface] added the above caption and tags. But there are no sheep in the image above. None. I zoomed all the way in and inspected every speck.

….
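It’s easy to reproduce this kind of confident mislabeling yourself with an off-the-shelf classifier. The sketch below is a minimal example using a pretrained ResNet-50 from torchvision rather than Azure’s computer vision API (which is what Shane was testing), and ‘image.jpg’ is a placeholder path for whatever photo you want to try.

```python
# Label a photo with a pretrained ResNet-50 from torchvision and print the
# top-5 guesses.  This stands in for the commercial vision APIs Shane poked at;
# "image.jpg" is a placeholder for whatever photo you want to test.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()          # resize/crop/normalize for this model
categories = weights.meta["categories"]    # the 1,000 ImageNet class names

img = Image.open("image.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)       # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

for p, idx in zip(*probs.topk(5)):
    print(f"{categories[idx]:30s} {p.item():.1%}")
```

Feed it a cluttered or unusual photo and it will often produce the same kind of confidently wrong label that Shane highlights.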

I have become quite interested in Shane’s self-descriptions such as this one from the aiweirdness.com website,


About

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. Currently located on the occupied land of the Arapahoe Nation.
https://wandering.shop/@janellecshane

As for the joke being on us, I can’t help remembering the Facebook bots that developed their own language (Facebotlish), which were featured in my June 30, 2017 posting. There’s a certain eeriness to it all, which seems an appropriate response in a year celebrating the 200th anniversary of Mary Shelley’s 1818 book, Frankenstein; or, the Modern Prometheus. I’m closing with a video clip from the 1931 movie,

Happy Weekend!

Is technology taking our jobs? (a Women in Communications and Technology, BC Chapter event) and Brave New Work in Vancouver (Canada)

Awkwardly named as it is, the Women in Communications and Technology BC Chapter (WCTBC) has been reinvigorated after a moribund period (from a Feb. 21, 2018 posting by Rebecca Bollwitt for the Miss 604 blog),

There’s an exciting new organization and event series coming to Vancouver, which will aim to connect, inspire, and advance women in the communications and technology industries. I’m honoured to be on the Board of Directors for the newly rebooted Women in Communications and Technology, BC Chapter (“WCTBC”) and we’re ready to announce our first event!

Women in Debate: Is Technology Taking Our Jobs?

When: Tuesday, March 6, 2018 at 5:30pm
Where: BLG – 200 Burrard, 1200 Waterfront Centre, Vancouver
Tickets: Register online today. The cost is $25 for WCT members and $35 for non-members.

Automation, driven by technological progress, has been expanding for the past several decades. As the pace of development increases, so has the urgency in the debate about the potential effects of automation on jobs, employment, and human activity. Will new technology spawn mass unemployment, as the robots take jobs away from humans? Or is this part of a cycle that predates even the Industrial Revolution in which some jobs will become obsolete, while new jobs will be created?

Debaters:
Christin Wiedemann – Co-CEO, PQA Testing
Kathy Gibson – President, Catchy Consulting
Laura Sukorokoff – Senior Trainer & Communications, Hyperwallet
Sally Whitehead – Global Director, Sophos

Based on the Oxford style debates popularized by the podcast ‘Intelligence Squared’, the BC chapter of Women in Communications and Technology brings you Women in Debate: Is Technology Taking Our Jobs?

For anyone not familiar with “Intelligence Squared,”  there’s this from their About webpage,

Intelligence Squared is the world’s premier forum for debate and intelligent discussion. Live and online we take you to the heart of the issues that matter, in the company of some of the world’s sharpest minds and most exciting orators.

Intelligence Squared Live

Our events have captured the imagination of public audiences for more than a decade, welcoming the biggest names in politics, journalism and the arts. Our celebrated list of speakers includes President Jimmy Carter, Stephen Fry, Patti Smith, Richard Dawkins, Sean Penn, Marina Abramovic, Werner Herzog, Terry Gilliam, Anne Marie Slaughter, Reverend Jesse Jackson, Mary Beard, Yuval Noah Harari, Jonathan Franzen, Salman Rushdie, Eric Schmidt, Richard Branson, Professor Brian Cox, Nate Silver, Umberto Eco, Martin Amis and Grayson Perry.

Further digging into WCTBC unearthed this story about the reasons for its ‘reboot’, from the Who we are / Regional Chapters / British Columbia webpage,

“Earlier this month [October 2017?], Christin Wiedemann and Briana Sim, co-Chairs of the BC Chapter of WCT, attended a Women in IoT [Internet of Things] event in Vancouver. The event was organized by the GE Women’s Network and TELUS Connections, with WCT as an event partner. The event sold out after only two days, and close to 200 women attended.

Five female panelists representing different backgrounds and industries talked about the impact IoT is having on our lives today, and how they think IoT fits into the future of the technology landscape. Christin facilitated the Q&A portion of the event, and had an opportunity to share that the BC chapter is rebooting and hopes to launch a kickoff event later in November”

You can find a summary of the event here (http://gereports.ca/theres-lots-room-us-top-insights-five-canadas-top-women-business-leaders-iot/#), and you can also check out the Storify (https://storify.com/cwiedemann/women-in-iot).”

– October 6th, 2017

Simon Fraser University’s Brave New Work

Coincidentally or not, there’s a major series of events being offered by Simon Fraser University’s (SFU; located in Vancouver, British Columbia, Canada) Public Square Programme in its 2018 Community Summit Series, titled Brave New Work: How can we thrive in the changing world of work? The summit takes place February 26 to March 7, 2018.

There’s not a single mention (!!!!!) of Brave New World (by Aldous Huxley) in what is clearly word play based on Huxley’s title.

From the 2018 Community Summit: Brave New Work webpage on the SFU website (Note: Links have been removed),

How can we thrive in the changing world of work?

The 2018 Community Summit, Brave New Work, invites us to consider how we can all thrive in the changing world of work.

Technological growth is happening at an unprecedented rate and scale, and it is fundamentally altering the way we organize and value work. The work we do (and how we do it) is changing. One of the biggest challenges in effectively responding to this new world of work is creating a shared understanding of the issues at play and how they intersect. Individuals, businesses, governments, educational institutions, and civil society must collaborate to construct the future we want.

The future of work is here, but it’s still ours to define. From February 26th to March 7th, we will convene diverse communities through a range of events and activities to provoke thinking and encourage solution-finding. We hope you’ll join us.

The New World of Work: Thriving or Surviving?

As part of its 2018 Community Summit, Brave New Work, SFU Public Square is proud to present, in partnership with Vancity, an evening with Van Jones and Anne-Marie Slaughter, moderated by CBC’s Laura Lynch at the Queen Elizabeth Theatre.

Van Jones and Anne-Marie Slaughter, two leading commentators on the American economy, will discuss the role that citizens, governments and civil society can play in shaping the future of work. They will explore the challenges ahead, as well as how these challenges might be addressed through green jobs, emergent industries, education and public policy.

Join us for an important conversation about how the future of work can be made to work for all of us.

Are you a member of Vancity? As one of the many perks of being a Vancity member, you have access to a free ticket to attend the event. For your free ticket, please visit Vancity for more information. There are a limited number of seats reserved for Vancity members, so we encourage you to register early.

Tickets are now on sale, get yours today!

Future of Work in Canada: Emerging Trends and Opportunities

What are some of the trends currently defining the new world of work in Canada, and what does our future look like? What opportunities can be seized to build more competitive, prosperous, and inclusive organizations? This mini-conference, presented in partnership with Deloitte Canada, will feature panel discussions and presentations by representatives from Deloitte, Brookfield Institute for Innovation & Entrepreneurship, Vancity, Futurpreneur, and many more.

Work in the 21st Century: Innovations in Research

Research doesn’t just live in libraries and academic papers; it has a profound impact on our day to day lives. Work in the 21st Century is a dynamic evening that showcases the SFU researchers and entrepreneurs who are leading the way in making innovative impacts in the new world of work.

Basic Income

This lecture will examine the question of basic income (BI). A neoliberal version of BI is being considered and even developed by a number of governments and institutions of global capitalism. This form of BI could enhance the supply of low wage precarious workers, by offering a public subsidy to employers, paid for by cuts to others areas of social provision.

ReframeWork

ReframeWork is a national gathering of leading thinkers and innovators on the topic of Future of Work. We will explore how Canada can lead in forming new systems for good work and identify the richest areas of opportunity for solution-building that affects broader change.

The Urban Worker Project Skillshare

The Urban Worker Project Skillshare is a day-long gathering, bringing together over 150 independent workers to lean on each other, learn from each other, get valuable expert advice, and build community. Join us!

SFU City Conversations: Making Visible the Invisible

Are outdated and stereotypical gender roles contributing to the invisible workload? What is the invisible workload anyway? Don’t miss this special edition of SFU City Conversations on intersectionality and invisible labour, presented in partnership with the Simon Fraser Student Society Women’s Centre.

Climate of Work: How Does Climate Change Affect the Future of Work

What does our changing climate have to do with the future of work? Join Embark as they explore the ways our climate impacts different industries such as planning, communications or entrepreneurship.

Symposium: Art, Labour, and the Future of Work

One of the key distinguishing features of Western modernity is that the activity of labour has always been at the heart of our self-understanding. Work defines who we are. But what might we do in a world without work? Join SFU’s Institute for the Humanities for a symposium on art, aesthetics, and self-understanding.

Worker Writers and the Poetics of Labour

If you gave a worker a pen, what would they write? What stories would they tell, and what experiences might they share? Hear poetry about what it is to work in the 21st century directly from participants of the Worker Writers School at this free public poetry reading.

Creating a Diverse and Resilient Economy in Metro Vancouver

This panel conversation event will focus on the future of employment in Metro Vancouver, and planning for the employment lands that support the regional economy. What are the trends and issues related to employment in various sectors in Metro Vancouver, and how does land use planning, regulation, and market demand affect the future of work regionally?

Preparing Students for the Future World of Work

This event, hosted by CACEE Canada West and SFU Career and Volunteer Services, will feature presentations and discussions on how post-secondary institutions can prepare students for the future of work.

Work and Purpose Later in Life

How is the changing world of work affecting older adults? And what role should work play in our lives, anyway? This special Philosophers’ Cafe will address questions of retirement, purpose, and work for older adults.

Beyond Bitcoin: Blockchain and the Future of Work

Blockchain technology is making headlines. Whether you are enthusiastic or skeptical, the focus of this dialogue will be to better understand key concepts and to explore the wide-ranging applications of distributed ledgers and the implications for business here in BC and in the global economy.

Building Your Resilience

Being a university student can be stressful. This interactive event will share key strategies for enhancing your resilience and well-being that will support your success now and in your future career.

We may not be working because of robots (though there’s no mention of automation in the SFU descriptions?) but we sure will talk about work-related topics. Sarcasm aside, it’s good to see this interest in work and in public discussion, although I’m deeply puzzled by SFU’s decision to seemingly ignore technology, except for blockchain. Thank goodness for WCTBC. At any rate, I’m often somewhat envious of what goes on elsewhere, so it’s nice to see this level of excitement and effort here in Vancouver.

A bioengineered robot hand with its own nervous system: machine/flesh and a job opening

A November 14, 2017 news item on phys.org announces a grant for a research project which will see engineered robot hands combined with regenerative medicine to imbue neuroprosthetic hands with the sense of touch,

The sense of touch is often taken for granted. For someone without a limb or hand, losing that sense of touch can be devastating. While highly sophisticated prostheses with complex moving fingers and joints are available to mimic almost every hand motion, they remain frustratingly difficult and unnatural for the user. This is largely because they lack the tactile experience that guides every movement. This void in sensation results in limited use or abandonment of these very expensive artificial devices. So why not make a prosthesis that can actually “feel” its environment?

That is exactly what an interdisciplinary team of scientists from Florida Atlantic University and the University of Utah School of Medicine aims to do. They are developing a first-of-its-kind bioengineered robotic hand that will grow and adapt to its environment. This “living” robot will have its own peripheral nervous system directly linking robotic sensors and actuators. FAU’s College of Engineering and Computer Science is leading the multidisciplinary team that has received a four-year, $1.3 million grant from the National Institute of Biomedical Imaging and Bioengineering of the [US] National Institutes of Health for a project titled “Virtual Neuroprosthesis: Restoring Autonomy to People Suffering from Neurotrauma.”

A November 14, 2017 Florida Atlantic University (FAU) news release by Gisele Galoustian, which originated the news item, goes into more detail,

With expertise in robotics, bioengineering, behavioral science, nerve regeneration, electrophysiology, microfluidic devices, and orthopedic surgery, the research team is creating a living pathway from the robot’s touch sensation to the user’s brain to help amputees control the robotic hand. A neuroprosthesis platform will enable them to explore how neurons and behavior can work together to regenerate the sensation of touch in an artificial limb.

At the core of this project is a cutting-edge robotic hand and arm developed in the BioRobotics Laboratory in FAU’s College of Engineering and Computer Science. Just like human fingertips, the robotic hand is equipped with numerous sensory receptors that respond to changes in the environment. Controlled by a human, it can sense pressure changes, interpret the information it is receiving and interact with various objects. It adjusts its grip based on an object’s weight or fragility. But the real challenge is figuring out how to send that information back to the brain using living residual neural pathways to replace those that have been damaged or destroyed by trauma.

“When the peripheral nerve is cut or damaged, it uses the rich electrical activity that tactile receptors create to restore itself. We want to examine how the fingertip sensors can help damaged or severed nerves regenerate,” said Erik Engeberg, Ph.D., principal investigator, an associate professor in FAU’s Department of Ocean and Mechanical Engineering, and director of FAU’s BioRobotics Laboratory. “To accomplish this, we are going to directly connect these living nerves in vitro and then electrically stimulate them on a daily basis with sensors from the robotic hand to see how the nerves grow and regenerate while the hand is operated by limb-absent people.”

For the study, the neurons will not be kept in conventional petri dishes. Instead, they will be placed in  biocompatible microfluidic chambers that provide a nurturing environment mimicking the basic function of living cells. Sarah E. Du, Ph.D., co-principal investigator, an assistant professor in FAU’s Department of Ocean and Mechanical Engineering, and an expert in the emerging field of microfluidics, has developed these tiny customized artificial chambers with embedded micro-electrodes. The research team will be able to stimulate the neurons with electrical impulses from the robot’s hand to help regrowth after injury. They will morphologically and electrically measure in real-time how much neural tissue has been restored.

Jianning Wei, Ph.D., co-principal investigator, an associate professor of biomedical science in FAU’s Charles E. Schmidt College of Medicine, and an expert in neural damage and regeneration, will prepare the neurons in vitro, observe them grow and see how they fare and regenerate in the aftermath of injury. This “virtual” method will give the research team multiple opportunities to test and retest the nerves without any harm to subjects.

Using an electroencephalogram (EEG) to detect electrical activity in the brain, Emmanuelle Tognoli, Ph.D., co-principal investigator, associate research professor in FAU’s Center for Complex Systems and Brain Sciences in the Charles E. Schmidt College of Science, and an expert in electrophysiology and neural, behavioral, and cognitive sciences, will examine how the tactile information from the robotic sensors is passed onto the brain to distinguish scenarios with successful or unsuccessful functional restoration of the sense of touch. Her objective: to understand how behavior helps nerve regeneration and how this nerve regeneration helps the behavior.

Once the nerve impulses from the robot’s tactile sensors have gone through the microfluidic chamber, they are sent back to the human user manipulating the robotic hand. This is done with a special device that converts the signals coming from the microfluidic chambers into a controllable pressure at a cuff placed on the remaining portion of the amputated person’s arm. Users will know if they are squeezing the object too hard or if they are losing their grip.

Engeberg also is working with Douglas T. Hutchinson, M.D., co-principal investigator and a professor in the Department of Orthopedics at the University of Utah School of Medicine, who specializes in hand and orthopedic surgery. They are developing a set of tasks and behavioral neural indicators of performance that will ultimately reveal how to promote a healthy sensation of touch in amputees and limb-absent people using robotic devices. The research team also is seeking a post-doctoral researcher with multi-disciplinary experience to work on this breakthrough project.

Here’s more about the job opportunity from the FAU BioRobotics Laboratory job posting (I checked on January 30, 2018 and it seems applications are still being accepted),

Post-doctoral Opportunity

Date Posted: Oct. 13, 2017

The BioRobotics Lab at Florida Atlantic University (FAU) invites applications for a NIH NIBIB-funded Postdoctoral position to develop a Virtual Neuroprosthesis aimed at providing a sense of touch in amputees and limb-absent people.

Candidates should have a Ph.D. in one of the following degrees: mechanical engineering, electrical engineering, biomedical engineering, bioengineering or related, with interest and/or experience in transdisciplinary work at the intersection of robotic hands, biology, and biomedical systems. Prior experience in the neural field will be considered an advantage, though not a necessity. Underrepresented minorities and women are warmly encouraged to apply.

The postdoctoral researcher will be co-advised across the department of Mechanical Engineering and the Center for Complex Systems & Brain Sciences through an interdisciplinary team whose expertise spans Robotics, Microfluidics, Behavioral and Clinical Neuroscience and Orthopedic Surgery.

The position will be for one year with a possibility of extension based on performance. Salary will be commensurate with experience and qualifications. Review of applications will begin immediately and continue until the position is filled.

The application should include:

  1. a cover letter with research interests and experiences,
  2. a CV, and
  3. names and contact information for three professional references.

Qualified candidates can contact Erik Engeberg, Ph.D., Associate Professor, in the FAU Department of Ocean and Mechanical Engineering at eengeberg@fau.edu. Please reference AcademicKeys.com in your cover letter when applying for or inquiring about this job announcement.

You can find the apply button on this page. Good luck!

Liquid circuitry, shape-shifting fluids and more

I’d have to see it to believe it, but researchers at the US Dept. of Energy (DOE) Lawrence Berkeley National Laboratory (LBNL) have developed a new kind of ‘bijel’ which would allow for some pretty nifty robotics. From a Sept. 25, 2017 news item on ScienceDaily,

A new two-dimensional film, made of polymers and nanoparticles and developed by researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), can direct two different non-mixing liquids into a variety of exotic architectures. This finding could lead to soft robotics, liquid circuitry, shape-shifting fluids, and a host of new materials that use soft, rather than solid, substances.

The study, reported today in the journal Nature Nanotechnology, presents the newest entry in a class of substances known as bicontinuous jammed emulsion gels, or bijels, which hold promise as a malleable liquid that can support catalytic reactions, electrical conductivity, and energy conversion.

A Sept. 25, 2017 LBNL news release (also on EurekAlert), which originated the news item, expands on the theme,

Bijels are typically made of immiscible, or non-mixing, liquids. People who shake their bottle of vinaigrette before pouring the dressing on their salad are familiar with such liquids. As soon as the shaking stops, the liquids start to separate again, with the lower density liquid – often oil – rising to the top.

Trapping, or jamming, particles where these immiscible liquids meet can prevent the liquids from completely separating, stabilizing the substance into a bijel. What makes bijels remarkable is that, rather than just making the spherical droplets that we normally see when we try to mix oil and water, the particles at the interface shape the liquids into complex networks of interconnected fluid channels.

Bijels are notoriously difficult to make, however, involving exact temperatures at precisely timed stages. In addition, the liquid channels are normally more than 5 micrometers across, making them too large to be useful in energy conversion and catalysis.

“Bijels have long been of interest as next-generation materials for energy applications and chemical synthesis,” said study lead author Caili Huang. “The problem has been making enough of them, and with features of the right size. In this work, we crack that problem.”

Huang started the work as a graduate student with Thomas Russell, the study’s principal investigator, at Berkeley Lab’s Materials Sciences Division, and he continued the project as a postdoctoral researcher at DOE’s Oak Ridge National Laboratory.

Creating a new bijel recipe

The method described in this new study simplifies the bijel process by first using specially coated particles about 10-20 nanometers in diameter. The smaller-sized particles line the liquid interfaces much more quickly than the ones used in traditional bijels, making the smaller channels that are highly valued for applications.

Illustration shows key stages of bijel formation. Clockwise from top left, two non-mixing liquids are shown. Ligands (shown in yellow) with amine groups are dispersed throughout the oil or solvent, and nanoparticles coated with carboxylic acids (shown as blue dots) are scattered in the water. With vigorous shaking, the nanoparticles and ligands form a “supersoap” that gets trapped at the interface of the two liquids. The bottom panel is a magnified view of the jammed nanoparticle supersoap. (Credit: Caili Huang/ORNL)

“We’ve basically taken liquids like oil and water and given them a structure, and it’s a structure that can be changed,” said Russell, a visiting faculty scientist at Berkeley Lab. “If the nanoparticles are responsive to electrical, magnetic, or mechanical stimuli, the bijels can become reconfigurable and re-shaped on demand by an external field.”

The researchers were able to prepare new bijels from a variety of common organic, water-insoluble solvents, such as toluene, that had ligands dissolved in them, and deionized water, which contained the nanoparticles. To ensure thorough mixing of the liquids, they subjected the emulsion to a vortex spinning at 3,200 revolutions per minute.

“This extreme shaking creates a whole bunch of new places where these particles and polymers can meet each other,” said study co-author Joe Forth, a postdoctoral fellow at Berkeley Lab’s Materials Sciences Division. “You’re synthesizing a lot of this material, which is in effect a thin, 2-D coating of the liquid surfaces in the system.”

The liquids remained a bijel even after one week, a sign of the system’s stability.

Russell, who is also a professor of polymer science and engineering at the University of Massachusetts-Amherst, added that these shape-shifting characteristics would be valuable in microreactors, microfluidic devices, and soft actuators.

Nanoparticle supersoap

Nanoparticles had not been seriously considered in bijels before because their small size made them hard to trap in the liquid interface. To resolve that problem, the researchers coated nano-sized particles with carboxylic acids and put them in water. They then took polymers with an added amine group – a derivative of ammonia – and dissolved them in the toluene.

At left is a vial of bijel stabilized with nanoparticle surfactants. On the right is the same vial after a week of inversion, showing that the nanoparticles kept the liquids from moving. (Credit: Caili Huang/ORNL)

This configuration took advantage of the amine group’s affinity to water, a characteristic that is comparable to surfactants, like soap. Their nanoparticle “supersoap” was designed so that the nanoparticles join ligands, forming an octopus-like shape with a polar head and nonpolar legs that get jammed at the interface, the researchers said.

“Bijels are really a new material, and also excitingly weird in that they are kinetically arrested in these unusual configurations,” said study co-author Brett Helms, a staff scientist at Berkeley Lab’s Molecular Foundry. “The discovery that you can make these bijels with simple ingredients is a surprise. We all have access to oils and water and nanocrystals, allowing broad tunability in bijel properties. This platform also allows us to experiment with new ways to control their shape and function since they are both responsive and reconfigurable.”

The nanoparticles were made of silica, but the researchers noted that in previous studies they used graphene and carbon nanotubes to form nanoparticle surfactants.

“The key is that the nanoparticles can be made of many materials,” said Russell.  “The most important thing is what’s on the surface.”

This is an animation of the bijel

3-D rendering of the nanoparticle bijel taken by confocal microscope. (Credit: Caili Huang/ORNL [Oak Ridge National Laboratory] and Joe Forth/Berkeley Lab)

Here’s a link to and a citation for the paper,

Bicontinuous structured liquids with sub-micrometre domains using nanoparticle surfactants by Caili Huang, Joe Forth, Weiyu Wang, Kunlun Hong, Gregory S. Smith, Brett A. Helms & Thomas P. Russell. Nature Nanotechnology (2017) DOI: 10.1038/nnano.2017.182 Published 25 September 2017

This paper is behind a paywall.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem.” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires him to live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) News online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here) (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
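To make the teleoperation-to-learning loop a little more concrete, here is a minimal, hypothetical sketch of behavioural cloning, the general technique of fitting a model to (sensor reading, human action) pairs logged while a person pilots the robot. Kindred’s actual system is not public, so the feature names, grip labels, and model choice below are illustrative assumptions only,

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is a simplified observation recorded while a human piloted the robot:
# [object width in cm, estimated weight in g]. Labels are the grips the pilot chose.
observations = np.array([[4.0, 120.0], [9.5, 640.0], [3.0, 80.0], [11.0, 900.0]])
pilot_actions = np.array(["pinch_grip", "power_grip", "pinch_grip", "power_grip"])

# "Teaching" the robot amounts to fitting a model to the human demonstrations.
policy = KNeighborsClassifier(n_neighbors=1).fit(observations, pilot_actions)

# Later, faced with a new object, the robot imitates the closest demonstration.
new_object = np.array([[8.7, 700.0]])
print(policy.predict(new_object))  # -> ['power_grip']

Whatever shows up in those demonstration logs, including the pilot’s habits and quirks, is what the model will reproduce, which is precisely the point Gildert makes about the AI learning her values.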

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, I find it easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting with the robotics strategy (roadmap) moving to part two, along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer— capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with the invention of early Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in Slavic language as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article on Canadian Broadcasting Corporation (CBC) News online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem farfetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
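Lomas’s point rests on a basic property of Bluetooth Low Energy: until a peripheral is paired, it broadcasts advertisements that anyone in radio range can pick up. As a minimal sketch of what that kind of passive scan looks like (assuming Python and the cross-platform bleak library; this only listens for advertisements and never connects to or controls a device),

import asyncio
from bleak import BleakScanner

async def scan_for_advertisements(seconds: float = 10.0) -> None:
    # BLE peripherals advertise themselves until something pairs with them;
    # these broadcasts are visible to any listener in radio range.
    devices = await BleakScanner.discover(timeout=seconds)
    for device in devices:
        # Print the hardware address and whatever name the device advertises.
        print(device.address, device.name or "<no name advertised>")

if __name__ == "__main__":
    asyncio.run(scan_for_advertisements())

This is roughly the reconnaissance step Lomas describes; actually connecting to a discovered device without its owner’s consent is another matter entirely and, as he notes, not something anyone should do.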

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been a matter of convenience or a consequence of how the jargon is evolving, with ‘robot’ meaning a machine specifically or, sometimes, a machine with AI, or AI only.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See part two for the rest.