Tag Archives: cyborgs

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises concerns accident data. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data, and developing new safety techniques based on that data, have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
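For readers who like to see ideas in code, here’s what ‘learning from demonstration’ can look like at its very simplest: the robot logs the states it sees and the actions its human pilot takes, then fits a policy to those pairs. To be clear, this is my own toy sketch with made-up numbers, not Kindred’s system,

```python
# Minimal behaviour-cloning sketch (illustrative only; not Kindred's code).
# A human teleoperator's demonstrations are logged as (state, action) pairs,
# and the robot fits a policy mapping states to actions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log: 500 demonstrations, 8-dimensional sensor state,
# 3-dimensional action (say, gripper x, y, and closing force).
states = rng.normal(size=(500, 8))
true_policy = rng.normal(size=(8, 3))      # stands in for the human's behaviour
actions = states @ true_policy + 0.01 * rng.normal(size=(500, 3))

# Fit a linear policy by least squares: action ~ state @ W.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# At run time, the robot applies the learned policy to a new state.
new_state = rng.normal(size=(1, 8))
print("predicted action:", new_state @ W)
```

Gildert’s point drops out of the fitting step: whatever mannerisms and habits the demonstrator has are baked into the learned weights.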

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: If you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO), is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016, and as part of the publicity effort, the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

Long-term brain mapping with injectable electronics

Charles Lieber and his team at Harvard University announced a success with their injectable electronics last year (see my June 11, 2015 posting for more) and now they are reporting on more extensive animal studies, according to an Aug. 29, 2016 news item on psypost.org,

Scientists in recent years have made great strides in the quest to understand the brain by using implanted probes to explore how specific neural circuits work.

Though effective, those probes also come with their share of problems as a result of rigidity. The inflammation they produce induces chronic recording instability and means probes must be relocated every few days, leaving some of the central questions of neuroscience – like how the neural circuits are reorganized during development, learning and aging – beyond scientists’ reach.

But now, it seems, things are about to change.

Led by Charles Lieber, The Mark Hyman Jr. Professor of Chemistry and chair of the Department of Chemistry and Chemical Biology, a team of researchers that included graduate student Tian-Ming Fu, postdoctoral fellow Guosong Hong, graduate student Tao Zhou and others, has demonstrated that syringe-injectable mesh electronics can stably record neural activity in mice for eight months or more, with none of the inflammation …

An Aug. 29, 2016 Harvard University press release, which originated the news item, provides more detail,

“With the ability to follow the same individual neurons in a circuit chronically…there’s a whole suite of things this opens up,” Lieber said. “The eight months we demonstrate in this paper is not a limit, but what this does show is that mesh electronics could be used…to investigate neuro-degenerative diseases like Alzheimer’s, or processes that occur over long time, like aging or learning.”

Lieber and colleagues also demonstrated that the syringe-injectable mesh electronics could be used to deliver electrical stimulation to the brain over three months or more.

“Ultimately, our aim is to create these with the goal of finding clinical applications,” Lieber said. “What we found is that, because of the lack of immune response (to the mesh electronics), which basically insulates neurons, we can deliver stimulation in a much more subtle way, using lower voltages that don’t damage tissue.”

The possibilities, however, don’t end there.

The seamless integration of the electronics and biology, Lieber said, could open the door to an entirely new class of brain-machine interfaces and vast improvements in prosthetics, among other fields.

“Today, brain-machine interfaces are based on traditional implanted probes, and there has been some impressive work that’s been done in that field,” Lieber said. “But all the interfaces rely on the same technique to decode neural signals.”

Because traditional rigid implanted probes are invariably unstable, he explained, researchers and clinicians rely on decoding what they call the “population average” – essentially taking a host of neural signals and applying complex computational tools to determine what they mean.

Using tissue-like mesh electronics, by comparison, researchers may be able to read signals from specific neurons over time, potentially allowing for the development of improved brain-machine interfaces for prosthetics.

“We think this is going to be very powerful, because we can identify circuits and both record and stimulate in a way that just hasn’t been possible before,” Lieber said. “So what I like to say is: I think therefore it happens.”

Lieber even held out the possibility that the syringe-injectable mesh electronics could one day be used to treat catastrophic injuries to the brain and spinal cord.

“I don’t think that’s science-fiction,” he said. “Other people may say that will be possible through, for example, regenerative medicine, but we are pursuing this from a different angle.

“My feeling is that this is about a seamless integration between the biological and the electronic systems, so they’re not distinct entities,” he continued. “If we can make the electronics look like the neural network, they will work together…and that’s where you want to be if you want to exploit the strengths of both.”

In the 2015 posting, Lieber was discussing cyborgs; here he broaches the concept without using the word: “… seamless integration between the biological and the electronic systems, so they’re not distinct entities.”
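For the technically inclined, the “population average” decoding described in the press release can be illustrated in a few lines: pool many noisy, unstable channels and regress the signal of interest out of the lot. This is my own hedged toy example, with random numbers standing in for neural recordings, not anything from Lieber’s lab,

```python
# Toy "population average" decoder (my illustration, not Lieber's method).
# Many noisy channels are pooled by ridge regression to estimate a single
# movement signal, since no individual channel can be trusted on its own.

import numpy as np

rng = np.random.default_rng(1)

T, n_channels = 1000, 96                  # hypothetical: 1000 samples, 96 electrodes
velocity = np.sin(np.linspace(0, 20, T))  # the 1-D movement signal to recover

# Each channel sees the signal through a different gain, plus heavy noise.
gains = rng.normal(size=n_channels)
firing = np.outer(velocity, gains) + rng.normal(scale=2.0, size=(T, n_channels))

# Ridge regression pools all channels into a single decoded estimate.
lam = 10.0
w = np.linalg.solve(firing.T @ firing + lam * np.eye(n_channels),
                    firing.T @ velocity)
decoded = firing @ w

print("correlation with true signal:", np.corrcoef(decoded, velocity)[0, 1])
```

The pitch for stable mesh electronics is that a decoder could follow the same individual neurons for months, rather than leaning on this kind of pooled estimate.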

Here’s a link to and a citation for the paper,

Stable long-term chronic brain mapping at the single-neuron level by Tian-Ming Fu, Guosong Hong, Tao Zhou, Thomas G Schuhmann, Robert D Viveros, & Charles M Lieber. Nature Methods (2016) doi:10.1038/nmeth.3969 Published online 29 August 2016

This paper is behind a paywall.

Singapore’s* new chip could make low-powered wireless neural implants a possibility and Australians develop their own neural implant

Singapore

This research from Singapore could make neuroprosthetics and exoskeletons a little easier to manage as long as you don’t mind having a neural implant. From a Feb. 11, 2016 news item on ScienceDaily,

A versatile chip offers multiple applications in various electronic devices, report researchers, suggesting that there is now hope that a low-powered, wireless neural implant may soon be a reality. Neural implants when embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.

Caption: NTU Asst Prof Arindam Basu is holding his low-powered smart chip. Credit: NTU Singapore

A Feb. 11, 2016 Nanyang Technological University (NTU) press release (also on EurekAlert), which originated the news item, provides more detail,

Scientists at Nanyang Technological University, Singapore (NTU Singapore) have developed a small smart chip that can be paired with neural implants for efficient wireless transmission of brain signals.

Neural implants when embedded in the brain can alleviate the debilitating symptoms of Parkinson’s disease or give paraplegic people the ability to move their prosthetic limbs.

However, they need to be connected by wires to an external device outside the body. For a prosthetic patient, the neural implant is connected to a computer that decodes the brain signals so the artificial limb can move.

These external wires are not only cumbersome but the permanent openings which allow the wires into the brain increases the risk of infections.

The new chip by NTU scientists can allow the transmission of brain data wirelessly and with high accuracy.

Assistant Professor Arindam Basu from NTU’s School of Electrical and Electronic Engineering said the research team have tested the chip on data recorded from animal models, which showed that it could decode the brain’s signal to the hand and fingers with 95 per cent accuracy.

“What we have developed is a very versatile smart chip that can process data, analyse patterns and spot the difference,” explained Prof Basu.

“It is about a hundred times more efficient than current processing chips on the market. It will lead to more compact medical wearable devices, such as portable ECG monitoring devices and neural implants, since we no longer need large batteries to power them.”

Different from other wireless implants

To achieve high accuracy in decoding brain signals, implants require thousands of channels of raw data. To wirelessly transmit this large amount of data, more power is also needed which means either bigger batteries or more frequent recharging.

This is not feasible as there is limited space in the brain for implants while frequent recharging means the implants cannot be used for long-term recording of signals.

Current wireless implant prototypes thus suffer from a lack of accuracy as they lack the bandwidth to send out thousands of channels of raw data.

Instead of enlarging the power source to support the transmission of raw data, Asst Prof Basu tried to reduce the amount of data that needs to be transmitted.

Designed to be extremely power-efficient, NTU’s patented smart chip will analyse and decode the thousands of signals from the neural implants in the brain, before compressing the results and sending it wirelessly to a small external receiver.

This invention and its findings were published last month [December 2015] in the prestigious journal, IEEE Transactions on Biomedical Circuits & Systems, by the Institute of Electrical and Electronics Engineers, the world’s largest professional association for the advancement of technology.

Its underlying science was also featured in three international engineering conferences (two in Atlanta, USA and one in China) over the last three months.

Versatile smart chip with multiple uses

This new smart chip is designed to analyse data patterns and spot any abnormal or unusual patterns.

For example, in a remote video camera, the chip can be programmed to send a video back to the servers only when a specific type of car or something out of the ordinary is detected, such as an intruder.

This would be extremely beneficial for the Internet of Things (IOT), where every electrical and electronic device is connected to the Internet through a smart chip.

With a report by marketing research firm Gartner Inc predicting that 6.4 billion smart devices and appliances will be connected to the Internet by 2016, and will rise to 20.8 billion devices by 2020, reducing network traffic will be a priority for most companies.

Using NTU’s new chip, the devices can process and analyse the data on site, before sending back important details in a compressed package, instead of sending the whole data stream. This will reduce data usage by over a thousand times.

Asst Prof Basu is now in talks with Singapore Technologies Electronics Limited to adapt his smart chip that can significantly reduce power consumption and the amount of data transmitted by battery-operated remote sensors, such as video cameras.

The team is also looking to expand the applications of the chip into commercial products, such as to customise it for smart home sensor networks, in collaboration with a local electronics company.

The chip, measuring 5mm by 5mm, can now be licensed by companies from NTU’s commercialisation arm, NTUitive.

Here’s a link to and a citation for the paper,

A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces by Yi Chen, Enyi Yao, Arindam Basu. IEEE Transactions on Biomedical Circuits and Systems, 2015; 1 DOI: 10.1109/TBCAS.2015.2483618

This paper is behind a paywall.
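For anyone wondering about the “extreme learning machine” named in the paper’s title: the trick is a hidden layer whose weights are random and never trained, with only a small output layer solved by least squares. That keeps most of the computation fixed-function and cheap, which is presumably part of the appeal for a low-power chip that decodes on the implant and transmits only the results. Here’s a toy sketch of the general technique; it’s my own illustration with hypothetical data, not the NTU team’s code,

```python
# Toy extreme learning machine (ELM), the class of decoder named in the
# paper's title. My own illustration with hypothetical data; the actual
# 128-channel chip implementation differs.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 2000 samples of 128 channels, 5 output classes
# (e.g., hand and finger states).
X = rng.normal(size=(2000, 128))
labels = rng.integers(0, 5, size=2000)
Y = np.eye(5)[labels]                      # one-hot targets

# 1. Random, untrained hidden layer -- cheap to realize in hardware.
W_in = rng.normal(size=(128, 256))
b = rng.normal(size=256)
H = np.tanh(X @ W_in + b)

# 2. Only the output weights are learned, in a single least-squares solve.
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

# 3. Decoding a new sample is just two matrix multiplies.
h_new = np.tanh(X[:1] @ W_in + b)
print("predicted class:", np.argmax(h_new @ beta, axis=1))
```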

Australia

Earlier this month there was a Feb. 9, 2016 announcement about a planned human clinical trial in Australia for a new brain-machine interface (neural implant). Before proceeding with the news, here’s what this implant looks like,

Caption: This tiny device, the size of a small paperclip, is implanted into a blood vessel next to the brain and can read electrical signals from the motor cortex, the brain’s control centre. These signals can then be transmitted to an exoskeleton or wheelchair to give paraplegic patients greater mobility. Users will need to learn how to communicate with their machinery, but over time, it is thought it will become second nature, like driving or playing the piano. The first human trials are slated for 2017 in Melbourne, Australia. Credit: The University of Melbourne.

A Feb. 9, 2016 University of Melbourne press release (also on EurekAlert), which originated the news item, provides more detail,

Melbourne medical researchers have created a new minimally invasive brain-machine interface, giving people with spinal cord injuries new hope to walk again with the power of thought.

The brain machine interface consists of a stent-based electrode (stentrode), which is implanted within a blood vessel next to the brain, and records the type of neural activity that has been shown in pre-clinical trials to move limbs through an exoskeleton or to control bionic limbs.

The new device is the size of a small paperclip and will be implanted in the first in-human trial at The Royal Melbourne Hospital in 2017.

The results published today in Nature Biotechnology show the device is capable of recording high-quality signals emitted from the brain’s motor cortex, without the need for open brain surgery.

Principal author and Neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne, Dr Thomas Oxley, said the stentrode was revolutionary.

“The development of the stentrode has brought together leaders in medical research from The Royal Melbourne Hospital, The University of Melbourne and the Florey Institute of Neuroscience and Mental Health. In total 39 academic scientists from 16 departments were involved in its development,” Dr Oxley said.

“We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery.

“Our vision, through this device, is to return function and mobility to patients with complete paralysis by recording brain activity and converting the acquired signals into electrical commands, which in turn would lead to movement of the limbs through a mobility assist device like an exoskeleton. In essence this a bionic spinal cord.”

Stroke and spinal cord injuries are leading causes of disability, affecting 1 in 50 people. There are 20,000 Australians with spinal cord injuries, with the typical patient a 19-year old male, and about 150,000 Australians left severely disabled after stroke.

Co-principal investigator and biomedical engineer at the University of Melbourne, Dr Nicholas Opie, said the concept was similar to an implantable cardiac pacemaker – electrical interaction with tissue using sensors inserted into a vein, but inside the brain.

“Utilising stent technology, our electrode array self-expands to stick to the inside wall of a vein, enabling us to record local brain activity. By extracting the recorded neural signals, we can use these as commands to control wheelchairs, exoskeletons, prosthetic limbs or computers,” Dr Opie said.

“In our first-in-human trial, that we anticipate will begin within two years, we are hoping to achieve direct brain control of an exoskeleton for three people with paralysis.”

“Currently, exoskeletons are controlled by manual manipulation of a joystick to switch between the various elements of walking – stand, start, stop, turn. The stentrode will be the first device that enables direct thought control of these devices.”

Neurophysiologist at The Florey, Professor Clive May, said the data from the pre-clinical study highlighted that the implantation of the device was safe for long-term use.

“Through our pre-clinical study we were able to successfully record brain activity over many months. The quality of recording improved as the device was incorporated into tissue,” Professor May said.

“Our study also showed that it was safe and effective to implant the device via angiography, which is minimally invasive compared with the high risks associated with open brain surgery.

“The brain-computer interface is a revolutionary device that holds the potential to overcome paralysis, by returning mobility and independence to patients affected by various conditions.”

Professor Terry O’Brien, Head of Medicine at Departments of Medicine and Neurology, The Royal Melbourne Hospital and University of Melbourne said the development of the stentrode has been the “holy grail” for research in bionics.

“To be able to create a device that can record brainwave activity over long periods of time, without damaging the brain is an amazing development in modern medicine,” Professor O’Brien said.

“It can also be potentially used in people with a range of diseases aside from spinal cord injury, including epilepsy, Parkinsons and other neurological disorders.”

The development of the minimally invasive stentrode and the subsequent pre-clinical trials to prove its effectiveness could not have been possible without the support from the major funding partners – US Defense Department DARPA [Defense Advanced Research Projects Agency] and Australia’s National Health and Medical Research Council.

So, DARPA is helping fund this, eh? Interesting but not a surprise given the agency’s previous investments in brain research and neuroprosthetics.


Here’s a link to and a citation for the paper,

Minimally invasive endovascular stent-electrode array for high-fidelity, chronic recordings of cortical neural activity by Thomas J Oxley, Nicholas L Opie, Sam E John, Gil S Rind, Stephen M Ronayne, Tracey L Wheeler, Jack W Judy, Alan J McDonald, Anthony Dornom, Timothy J H Lovell, Christopher Steward, David J Garrett, Bradford A Moffat, Elaine H Lui, Nawaf Yassi, Bruce C V Campbell, Yan T Wong, Kate E Fox, Ewan S Nurse, Iwan E Bennett, Sébastien H Bauquier, Kishan A Liyanage, Nicole R van der Nagel, Piero Perucca, Arman Ahnood et al. Nature Biotechnology (2016)  doi:10.1038/nbt.3428 Published online 08 February 2016

This paper is behind a paywall.

I wish the researchers in Singapore, Australia, and elsewhere, good luck!

*’Sinagpore’ in head changed to ‘Singapore’ on May 14, 2019.

Injectable electronics

Because I taught a course on bioelectronics for Simon Fraser University’s (Vancouver, Canada) Continuing Studies Program, this latest work from Harvard University (US) caught my attention. A Harvard research team has developed a technique which could allow doctors to inject us with electronics, should we need them. From a June 8, 2015 news item on phys.org,

It’s a notion that might be pulled from the pages of a science-fiction novel – electronic devices that can be injected directly into the brain, or other body parts, and treat everything from neurodegenerative disorders to paralysis.

It sounds unlikely, until you visit Charles Lieber’s lab.

A team of international researchers, led by Lieber, the Mark Hyman, Jr. Professor of Chemistry, has developed a method for fabricating nano-scale electronic scaffolds that can be injected via syringe. Once connected to electronic devices, the scaffolds can be used to monitor neural activity, stimulate tissues and even promote regeneration of neurons. …

Here’s an image provided by the researchers,

Bright-field image showing the mesh electronics being injected through sub-100 micrometer inner diameter glass needle into aqueous solution. Image courtesy of Lieber Research Group, Harvard University

A June 8, 2015 Harvard University news release by Peter Reuell (also on EurekAlert), which originated the news item, describes the work in more detail,

“I do feel that this has the potential to be revolutionary,” Lieber said. “This opens up a completely new frontier where we can explore the interface between electronic structures and biology. For the past thirty years, people have made incremental improvements in micro-fabrication techniques that have allowed us to make rigid probes smaller and smaller, but no one has addressed this issue – the electronics/cellular interface – at the level at which biology works.”

The idea of merging the biological with the electronic is not a new one for Lieber.

In an earlier study, scientists in Lieber’s lab demonstrated that the scaffolds could be used to create “cyborg” tissue – when cardiac or nerve cells were grown with embedded scaffolds. [emphasis mine] Researchers were then able to use the devices to record electrical signals generated by the tissues, and to measure changes in those signals as they administered cardio- or neuro-stimulating drugs.

“We were able to demonstrate that we could make this scaffold and culture cells within it, but we didn’t really have an idea how to insert that into pre-existing tissue,” Lieber said. “But if you want to study the brain or develop the tools to explore the brain-machine interface, you need to stick something into the body. When releasing the electronics scaffold completely from the fabrication substrate, we noticed that it was almost invisible and very flexible like a polymer and could literally be sucked into a glass needle or pipette. From there, we simply asked, would it be possible to deliver the mesh electronics by syringe needle injection, a process common to delivery of many species in biology and medicine – you could go to the doctor and you inject this and you’re wired up.'”

Though not the first attempts at implanting electronics into the brain – deep brain stimulation has been used to treat a variety of disorders for decades – the nano-fabricated scaffolds operate on a completely different scale.

“Existing techniques are crude relative to the way the brain is wired,” Lieber explained. “Whether it’s a silicon probe or flexible polymers…they cause inflammation in the tissue that requires periodically changing the position or the stimulation. But with our injectable electronics, it’s as if it’s not there at all. They are one million times more flexible than any state-of-the-art flexible electronics and have subcellular feature sizes. They’re what I call “neuro-philic” – they actually like to interact with neurons.”

Despite their enormous potential, the fabrication of the injectable scaffolds is surprisingly easy.

“That’s the beauty of this – it’s compatible with conventional manufacturing techniques,” Lieber said.

The process is similar to that used to etch microchips, and begins with a dissolvable layer deposited on a substrate. To create the scaffold, researchers lay out a mesh of nanowires sandwiched in layers of organic polymer. The first layer is then dissolved, leaving the flexible mesh, which can be drawn into a syringe needle and administered like any other injection.

After injection, the input/output of the mesh can be connected to standard measurement electronics so that the integrated devices can be addressed and used to stimulate or record neural activity.

“These type of things have never been done before, from both a fundamental neuroscience and medical perspective,” Lieber said. “It’s really exciting – there are a lot of potential applications.”

Going forward, Lieber said, researchers hope to better understand how the brain and other tissues react to the injectable electronics over longer periods.

Lieber’s earlier work on “cyborg tissue” was briefly mentioned here in a Feb. 20, 2014 posting.

Getting back to the most recent work, here’s a link to and a citation for the paper,

Syringe-injectable electronics by Jia Liu, Tian-Ming Fu, Zengguang Cheng, Guosong Hong, Tao Zhou, Lihua Jin, Madhavi Duvvuri, Zhe Jiang, Peter Kruskal, Chong Xie, Zhigang Suo, Ying Fang, & Charles M. Lieber. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.115 Published online 08 June 2015

This paper is behind a paywall but there is a free preview via ReadCube Access.

One final note: the researchers have tested the injectable electronics (or mesh electronics) in vivo (in live animals).

Cyborgs (a presentation) at the American Chemical Society’s 248th meeting

There will be a plethora of chemistry news online over the next few days as the American Chemical Society’s (ACS) 248th meeting in San Francisco, CA from Aug. 10 – 14, 2014 takes place. Unexpectedly, an Aug. 11, 2014 news item on Azonano highlights a meeting presentation focused on cyborgs,

No longer just fantastical fodder for sci-fi buffs, cyborg technology is bringing us tangible progress toward real-life electronic skin, prosthetics and ultraflexible circuits. Now taking this human-machine concept to an unprecedented level, pioneering scientists are working on the seamless marriage between electronics and brain signaling with the potential to transform our understanding of how the brain works — and how to treat its most devastating diseases.

An Aug. 10, 2014 ACS news release on EurekAlert provides more detail about the presentation (Note: Links have been removed),

“By focusing on the nanoelectronic connections between cells, we can do things no one has done before,” says Charles M. Lieber, Ph.D. “We’re really going into a new size regime for not only the device that records or stimulates cellular activity, but also for the whole circuit. We can make it really look and behave like smart, soft biological material, and integrate it with cells and cellular networks at the whole-tissue level. This could get around a lot of serious health problems in neurodegenerative diseases in the future.”

These disorders, such as Parkinson’s, that involve malfunctioning nerve cells can lead to difficulty with the most mundane and essential movements that most of us take for granted: walking, talking, eating and swallowing.

Scientists are working furiously to get to the bottom of neurological disorders. But they involve the body’s most complex organ — the brain — which is largely inaccessible to detailed, real-time scrutiny. This inability to see what’s happening in the body’s command center hinders the development of effective treatments for diseases that stem from it.

By using nanoelectronics, it could become possible for scientists to peer for the first time inside cells, see what’s going wrong in real time and ideally set them on a functional path again.

For the past several years, Lieber has been working to dramatically shrink cyborg science to a level that’s thousands of times smaller and more flexible than other bioelectronic research efforts. His team has made ultrathin nanowires that can monitor and influence what goes on inside cells. Using these wires, they have built ultraflexible, 3-D mesh scaffolding with hundreds of addressable electronic units, and they have grown living tissue on it. They have also developed the tiniest electronic probe ever that can record even the fastest signaling between cells.

Rapid-fire cell signaling controls all of the body’s movements, including breathing and swallowing, which are affected in some neurodegenerative diseases. And it’s at this level where the promise of Lieber’s most recent work enters the picture.

In one of the lab’s latest directions, Lieber’s team is figuring out how to inject their tiny, ultraflexible electronics into the brain and allow them to become fully integrated with the existing biological web of neurons. They’re currently in the early stages of the project and are working with rat models.

“It’s hard to say where this work will take us,” he says. “But in the end, I believe our unique approach will take us on a path to do something really revolutionary.”

Lieber acknowledges funding from the U.S. Department of Defense, the National Institutes of Health and the U.S. Air Force.

I first covered Lieber’s work in an Aug. 27, 2012 posting highlighting some good descriptions from Lieber and his colleagues of their work. There’s also this Aug. 26, 2012 article by Peter Reuell in the Harvard Gazette (featuring a very good technical description for someone not terribly familiar with the field but able to grasp some technical information while managing their own [mine] ignorance). The posting and the article provide details about the foundational work for Lieber’s 2014 presentation at the ACS meeting.

Lieber will be speaking next at the IEEE (Institute of Electrical and Electronics Engineers) 14th International Conference on Nanotechnology sometime between August 18 – 21, 2014 in Toronto, Ontario, Canada.

As for some of Lieber’s latest published work, there’s more information in my Feb. 20, 2014 posting which features a link to a citation for the paper (behind a paywall) in question.

Making nanoelectronic devices last longer in the body could lead to ‘cyborg’ tissue

An American Chemical Society (ACS) Feb. 19, 2014 news release (also on EurekAlert) describes some research devoted to extending a nanoelectronic device’s ‘life’ when implanted in the body,

The debut of cyborgs who are part human and part machine may be a long way off, but researchers say they now may be getting closer. In a study published in ACS’ journal Nano Letters, they report development of a coating that makes nanoelectronics much more stable in conditions mimicking those in the human body. [emphases mine] The advance could also aid in the development of very small implanted medical devices for monitoring health and disease.

Charles Lieber and colleagues note that nanoelectronic devices with nanowire components have unique abilities to probe and interface with living cells. They are much smaller than most implanted medical devices used today. For example, a pacemaker that regulates the heart is the size of a U.S. 50-cent coin, but nanoelectronics are so small that several hundred such devices would fit in the period at the end of this sentence. Laboratory versions made of silicon nanowires can detect disease biomarkers and even single virus cells, or record heart cells as they beat. Lieber’s team also has integrated nanoelectronics into living tissues in three dimensions — creating a “cyborg tissue.” One obstacle to the practical, long-term use of these devices is that they typically fall apart within weeks or days when implanted. In the current study, the researchers set out to make them much more stable.

They found that coating silicon nanowires with a metal oxide shell allowed nanowire devices to last for several months. This was in conditions that mimicked the temperature and composition of the inside of the human body. In preliminary studies, one shell material appears to extend the lifespan of nanoelectronics to about two years.

Depending on how you define the term cyborg, it could be said there are already cyborgs amongst us, as I noted in an April 20, 2012 posting titled ‘My mother is a cyborg’. Personally, I’m fascinated by the news release’s mention of ‘cyborg tissue’, although there’s no further explanation of what the term might mean.

For the curious, here’s a link to and a citation for the paper,

Long Term Stability of Nanowire Nanoelectronics in Physiological Environments by Wei Zhou, Xiaochuan Dai, Tian-Ming Fu, Chong Xie, Jia Liu, and Charles M. Lieber. Nano Lett., Article ASAP DOI: 10.1021/nl500070h Publication Date (Web): January 30, 2014
Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Chemistry of Cyborgs: review of the state of the art by German researchers

Communication between man and machine – a fascinating area at the interface of chemistry, biomedicine, and engineering. (Figure: KIT/S. Giselbrecht, R. Meyer, B. Rapp)

German researchers at the Karlsruhe Institute of Technology (KIT), Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1), together with Dr. Bastian E. Rapp of the Institute of Microstructure Technology (IMT), have written a good overview of the current state of cyborgs while pointing out some of the ethical issues associated with this field. From the Jan. 10, 2014 news item on ScienceDaily,

Medical implants, complex interfaces between brain and machine or remotely controlled insects: Recent developments combining machines and organisms have great potentials, but also give rise to major ethical concerns. In a new review, KIT scientists discuss the state of the art of research, opportunities, and risks.

The Jan. ?, 2014 KIT press release (also on EurekAlert with a release date of Jan. 10, 2014), which originated the news item, describes the innovations and the work at KIT in more detail,

They are known from science fiction novels and films – technically modified organisms with extraordinary skills, so-called cyborgs. This name originates from the English term “cybernetic organism”. In fact, cyborgs that combine technical systems with living organisms are already reality. The KIT researchers Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp, Institute of Microstructure Technology (IMT), point out that this especially applies to medical implants.

In recent years, medical implants based on smart materials that automatically react to changing conditions, computer-supported design and fabrication based on magnetic resonance tomography datasets or surface modifications for improved tissue integration allowed major progress to be achieved. For successful tissue integration and the prevention of inflammation reactions, special surface coatings were developed also by the KIT under e.g. the multidisciplinary Helmholtz program “BioInterfaces”.

Progress in microelectronics and semiconductor technology has been the basis of electronic implants controlling, restoring or improving the functions of the human body, such as cardiac pacemakers, retina implants, hearing implants, or implants for deep brain stimulation in pain or Parkinson therapies. Currently, bioelectronic developments are being combined with robotics systems to design highly complex neuroprostheses. Scientists are working on brain-machine interfaces (BMI) for the direct physical contacting of the brain. BMI are used among others to control prostheses and complex movements, such as gripping. Moreover, they are important tools in neurosciences, as they provide insight into the functioning of the brain. Apart from electric signals, substances released by implanted micro- and nanofluidic systems in a spatially or temporarily controlled manner can be used for communication between technical devices and organisms.

BMI are often considered data suppliers. However, they can also be used to feed signals into the brain, which is a highly controversial issue from the ethical point of view. “Implanted BMI that feed signals into nerves, muscles or directly into the brain are already used on a routine basis, e.g. in cardiac pacemakers or implants for deep brain stimulation,” Professor Christof M. Niemeyer, KIT, explains. “But these signals are neither planned to be used nor suited to control the entire organism – brains of most living organisms are far too complex.”

Brains of lower organisms, such as insects, are less complex. As soon as a signal is coupled in, a certain movement program, such as running or flying, is started. So-called biobots, i.e. large insects with implanted electronic and microfluidic control units, are used in a new generation of tools, such as small flying objects for monitoring and rescue missions. In addition, they are applied as model systems in neurosciences in order to understand basic relationships.

Electrically active medical implants that are used for longer terms depend on reliable power supply. Presently, scientists are working on methods to use the patient body’s own thermal, kinetic, electric or chemical energy.

In their review the KIT researchers sum up that developments combining technical devices with organisms have a fascinating potential. They may considerably improve the quality of life of many people in the medical sector in particular. However, ethical and social aspects always have to be taken into account.

Having read the paper quickly, I can say the researchers are most interested in the science and technology aspects, but they do have this to say about ethical and social issues in the paper’s conclusion (Note: Links have been removed),

The research and development activities summarized here clearly raise significant social and ethical concerns, in particular, when it comes to the use of BMIs for signal injection into humans, which may lead to modulation or even control of behavior. The ethical issues of this new technology have been discussed in the excellent commentary of Jens Clausen,33 which we highly recommend for further reading. The recently described engineering of a synthetic polymer construct, which is capable of propulsion in water through a collection of adhered rat cardiomyocytes,77 a “medusoid” also described as a “cyborg jellyfish with a rat heart”, brings up an additional ethical aspect. The motivation of the work was to reverse-engineer muscular pumps, and it thus represents fundamental research in tissue engineering for biomedical applications. However, it is also an impressive, early demonstration that autonomous control of technical devices can be achieved through small populations of cells or microtissues. It seems reasonable that future developments along this line will strive, for example, to control complex robots through the use of brain tissue. Given the fact that the robots of today are already capable of autonomously performing complex missions, even in unknown territories,78 this approach might indeed pave the way for yet another entirely new generation of cybernetic organisms.

Here’s a link to and a citation for the English language version of the paper, which is open access (as of Jan. 10, 2014),

The Chemistry of Cyborgs—Interfacing Technical Devices with Organisms by Stefan Giselbrecht, Bastian E. Rapp, and Christof M. Niemeyer. Angewandte Chemie International Edition, Volume 52, Issue 52, pages 13942–13957, December 23, 2013. Article first published online: Nov. 29, 2013. DOI: 10.1002/anie.201307495

Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

For those with German language skills,

Chemie der Cyborgs – zur Verknüpfung technischer Systeme mit Lebewesen by Stefan Giselbrecht, Bastian E. Rapp, and Christof M. Niemeyer. Angewandte Chemie, Volume 125, Issue 52, page 14190, December 23, 2013. DOI: 10.1002/ange.201307495

I have written many times about cyborgs and neuroprosthetics including this Aug. 30, 2011 posting titled:  Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism, where I mention Gregor Wolbring, a Canadian academic (University of Calgary) who has written extensively on the social and ethical issues of human enhancement technologies. You can find out more on his blog, Nano and Nano- Bio, Info, Cogno, Neuro, Synbio, Geo, Chem…

For anyone wanting to search this blog for these pieces, try using the term machine/flesh as a tag, as well as human enhancement, neuroprostheses, cyborgs …

Almost Human (TV series), smartphones, and anxieties about life/nonlife

The US-based Fox Broadcasting Company is set to premiere a new futuristic television series, Almost Human, over two nights, Nov. 17 and 18, 2013, for US and Canadian viewers. Here’s a description of the premise from its Wikipedia essay (Note: Links have been removed),

The series is set thirty-five years in the future when humans in the Los Angeles Police Department are paired up with lifelike androids; a detective who has a dislike for robots partners with an android capable of emotion.

One of the showrunners, Naren Shankar, seems to have also been functioning as both a science consultant and a crime-writing consultant, in addition to his other duties. From a Sept. 4, 2013 article by Lisa Tsering for Indiawest.com,

FOX is the latest television network to utilize the formidable talents of Naren Shankar, an Indian American writer and producer best known to fans for his work on “Star Trek: Deep Space Nine,” “Star Trek: Voyager” and “Star Trek: The Next Generation” as well as “Farscape,” the recently cancelled ABC series “Zero Hour” and “The Outer Limits.”

Set 35 years in the future, “Almost Human” stars Karl Urban and Michael Ealy as a crimefighting duo of a cop who is part-machine and a robot who is part-human. [emphasis mine]

“We are extrapolating the things we see today into the near future,” he explained. For example, the show will comment on the pervasiveness of location software, he said. “There will also be issues of technology such as medical ethics, or privacy; or how technology enables the rich but not the poor, who can’t afford it.”

Speaking at Comic-Con July 20 [2013], Shankar told media there, “Joel [J.H. Wyman] was looking for a collaboration with someone who had come from the crime world, and I had worked on ‘CSI’ for eight years.

“This is like coming back to my first love, since for many years I had done science fiction. It’s a great opportunity to get away from dismembered corpses and autopsy scenes.”

There’s plenty of drama — in the new series, the year is 2048, and police officer John Kennex (Karl Urban, “Dr. Bones” from the new “Star Trek” films) is trying to bounce back from one of the most catastrophic attacks ever made against the police department. Kennex wakes up from a 17-month coma and can’t remember much, except that his partner was killed; his girlfriend left him and one of his legs has been amputated and is now outfitted with a high-tech synthetic appendage. According to police department policy, every cop must partner with a robot, so Kennex is paired with Dorian (Ealy), an android with an unusual glitch that makes it have human emotions.

Shankar took an unusual path into television. He started college at age 16 and attended Cornell University, where he earned a B.Sc., an M.S., and a Ph.D. in engineering physics and electrical engineering, and was a member of the elite Kappa Alpha Society. Despite all that, he decided he didn’t want to work as a scientist and moved to Los Angeles to try to become a writer.

Shankar is eager to move in a new direction with “Almost Human,” which he says comes at the right time. “People are so technologically sophisticated now that maybe the audience is ready for a show like this,” he told India-West.

I am particularly intrigued by the ‘man who’s part machine and the machine that’s part human’ concept (something I’ve called machine/flesh in previous postings, such as this May 9, 2012 posting titled ‘Everything becomes part machine’) and was looking forward to seeing how the stories would integrate this concept with some of the more recent scientific work on prosthetics and robots, given they had an engineer (albeit one with lots of crime-writing experience) on the team. Sadly, only days after Tsering’s article was published, Shankar parted ways with Almost Human, according to the Sept. 10, 2013 posting on the Almost Human blog,

So this was supposed to be the week that I posted a profile of Naren Shankar, for whom I have developed a full-on crush–I mean, he has a PhD in Electrical Engineering from Cornell, he was hired by Gene Roddenberry to be science consultant on TNG, he was saying all sorts of great things about how he wanted to present the future in AH…aaaand he quit as co-showrunner yesterday, citing “creative differences.” That leaves Wyman as sole showrunner, with no plans to replace Shankar.

I’d like to base some of my comments on the previews; unfortunately, Fox Broadcasting, in its infinite wisdom, has decided to block Canadians from watching Almost Human previews online. (Could someone please explain why? I mean, Canadians will be tuning in to watch, or record for future viewing, the series premiere on the 17th & 18th of November 2013 just like our US neighbours, so why can’t we watch the previews online?)

Getting back to machine/flesh (humans with prosthetics) and life/nonlife (androids with feelings), it seems that Almost Human (as did the latest version of Battlestar Galactica, 2004-2009) may be giving a popular culture voice to some contemporary anxieties about the boundary, or lack thereof, between humans and machines and between life and nonlife. I’ve touched on this topic many times, both within and without the popular culture context. Probably one of my more comprehensive essays on machine/flesh is Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism from August 30, 2011, which includes this quote from a still earlier posting on this topic,

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor [Gregor Wolbring, University of Calgary] is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman’s Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You’ll Envy for Fast Company]

Here’s something else from the Hochman article,

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

Bailey isn’t ‘almost human’, he’s ‘above human’. As Hochman points out repeatedly throughout his article, this sentiment is not confined to Bailey. My guess is that Kennex (Karl Urban’s character) in Almost Human doesn’t echo Bailey’s sentiments and, instead, feels he’s not quite human, while the android, Dorian (Michael Ealy’s character), struggles with his feelings in a human way that clashes with Kennex’s perspective on what is human and what is not (or what might be called the boundary between life and nonlife).

Into this mix, one could add the rising anxiety around ‘intelligent’ machines present in real life as well as in fiction, as per this November 12 (?), 2013 article by Ian Barker for Beta News,

The rise of intelligent machines has long been fertile ground for science fiction writers, but a new report by technology research specialists Gartner suggests that the future is closer than we think.

“Smartphones are becoming smarter, and will be smarter than you by 2017,” says Carolina Milanesi, research vice president at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague. The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data”.

Your smartphone will be able to predict your next move or your next purchase based on what it knows about you. This will be made possible by gathering data using a technique called “cognizant computing”.

Gartner analysts will be discussing the future of smart devices at the Gartner Symposium/ITxpo 2013 in Barcelona from November 10-14 [2013].

The Gartner Symposium/ITxpo in Barcelona is ending today (Nov. 14, 2013), but should you be curious about it, you can go here to learn more.
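To make ‘cognizant computing’ a little more concrete, here is a minimal sketch in Python of the kind of rule a phone might apply in Milanesi’s scenario: combine a calendar entry, an estimated traffic delay, and the attendee’s role to decide whether to wake the user early or send an apology. This is entirely my own toy illustration; none of the names, values, or logic come from Gartner or any phone vendor.

    from datetime import datetime, timedelta

    # Toy stand-ins for data a phone might gather from its calendar,
    # sensors, and location services. All values are hypothetical.
    meeting = {
        "start": datetime(2013, 11, 18, 9, 0),
        "with_boss": False,
    }

    def minutes_of_traffic_delay() -> int:
        # A real system would query a live traffic service; a fixed
        # value keeps this example self-contained and runnable.
        return 25

    def act_on_context(now: datetime) -> str:
        # Wake the user early for a meeting with the boss, or send
        # an apology if the meeting is with a colleague.
        late = now + timedelta(minutes=minutes_of_traffic_delay()) > meeting["start"]
        if not late:
            return "do nothing"
        if meeting["with_boss"]:
            return "wake the user up early"
        return "send an apology to the colleague"

    print(act_on_context(datetime(2013, 11, 18, 8, 50)))

The point of the sketch is only that everything in the Gartner scenario reduces to ordinary conditional logic over contextual data; the hard part, presumably, is gathering that data accurately and unobtrusively.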

This notion that machines might (or will) get smarter or more powerful than humans (or wizards) is explored by Will.i.am (of the Black Eyed Peas) and futurist Brian David Johnson in their upcoming comic book, Wizards and Robots (mentioned in my Oct. 6, 2013 posting). This notion of machines or technology overtaking human life is also being discussed at the University of Cambridge, where there’s talk of founding a Centre for the Study of Existential Risk (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly, but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Philosophers and scientists at Britain’s Cambridge University think the question deserves serious study. A proposed Center for the Study of Existential Risk will bring together experts to consider the ways in which super intelligent technology, including artificial intelligence, could “threaten our own existence,” the institution said Sunday.

“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Cambridge philosophy professor Huw Price said.

When that happens, “we’re no longer the smartest things around,” he said, and will risk being at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”

Our emerging technologies give rise to questions about what constitutes life and where humans might fit in. For example,

  • are sufficiently advanced machines a new form of life?
  • what does it mean when human bodies are partially integrated at the neural level with machinery?
  • what happens when machines have feelings?
  • etc.

While this doesn’t exactly fit into my theme of life/nonlife or machine/flesh, this does highlight how some popular culture efforts are attempting to integrate real science into the storytelling. Here’s an excerpt from an interview with Cosima Herter, the science consultant and namesake/model for one of the characters on Orphan Black (from the March 29, 2013 posting on the space.ca blog),

Cosima Herter is Orphan Black’s Science Consultant, and the inspiration for her namesake character in the series. In real-life, Real Cosima is a PhD. student in the History of Science, Technology, and Medicine Program at the University of Minnesota, working on the History and Philosophy of Biology. Hive interns Billi Knight & Peter Rowley spoke with her about her role on the show and the science behind it…

Q: Describe your role in the making of Orphan Black.

A: I’m a resource for the biology, particularly insofar as evolutionary biology is concerned. I study the history and the philosophy of biology, so I do offer some suggestions and some creative ideas, but also help correct some of the misconceptions about science.  I offer different angles and alternatives to look at the way biological science is represented, so (it’s) not reduced to your stereotypical tropes about evolutionary biology and cloning, but also to provide some accuracy for the scripts.

See more at: http://www.space.ca/article/Orphan-Black-science-consultant

For anyone not familiar with the series, from the Wikipedia essay (Note: Links have been removed),

Orphan Black is a Canadian science fiction television series starring Tatiana Maslany as several identical women who are revealed to be clones.

Medicine, nanoelectronics, social implications, and figuring it all out

Given today’s (Aug. 27, 2012) earlier posting about nanoelectronics and tissue engineering, I thought it was finally time to feature Michael Berger’s Aug. 16, 2012 Nanowerk Spotlight essay, The future of nanotechnology electronics in medicine, which discusses the integration of electronics into the human body.

First, Berger offers a summary of some of the latest research (Note: I have removed links),

In previous Nanowerk Spotlights we have already covered numerous research advances in this area: The development of a nanobioelectronic system that triggers enzyme activity and, in a similar vein, the electrically triggered drug release from smart nanomembranes; an artificial retina for color vision; nanomaterial-based breathalyzers as diagnostic tools; nanogenerators to power self-sustained biosystems and implants; future bio-nanotechnology might even use computer chips inside living cells.

A lot of nanotechnology work is going on in the area of brain research. For instance the use of a carbon nanotube rope to electrically stimulate neural stem cells; nanotechnology to repair the brain and other advances in fabricating nanomaterial-neural interfaces for signal generation.

International cooperation in this field has also picked up. Just recently, scientists have formed a global alliance for nanobioelectronics to rapidly find solutions for neurological disorders; the EuroNanoBio project is a Support Action funded under the 7th Framework Programme of the European Union; and ENIAC, the European Technology Platform on nanoelectronics, has decided to make the development of medical applications one of its main objectives.

Berger cites a recent article in ACS Nano (an American Chemical Society journal) by the scientists featured in today’s earlier posting about tissue scaffolding and 3-D electronics,

In a new perspective article in the July 31, 2012, online edition of ACS Nano (“The Smartest Materials: The Future of Nanoelectronics in Medicine” [behind a paywall]), Tzahi Cohen-Karni (a researcher in Kohane’s lab), Robert Langer, and Daniel S. Kohane provide an overview of nanoelectronics’ potential in the biomedical sciences.

They write that, as with many other areas of scientific endeavor in recent decades, continued progress will require the convergence of multiple disciplines, including chemistry, biology, electrical engineering, computer science, optics, material science, drug delivery, and numerous medical disciplines.

“Advances in this research could lead to extremely sophisticated smart materials with multifunctional capabilities that are built in – literally hard-wired. The impact of this research could cover the spectrum of biomedical possibilities from diagnostic studies to the creation of cyborgs.”

Berger finishes with this thought,

Ultimately, and here we are getting almost into science fiction territory, nanostructures could not only incorporate sensing and stimulating capabilities but also potentially introduce computational capabilities and energy-generating elements. “In this way, one could fabricate a truly independent system that senses and analyzes signals, initiates interventions, and is self-sustained. Future developments in this direction could, for example, lead to a synthetic nanoelectronic autonomic nervous system.”
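The system that “senses and analyzes signals, initiates interventions” is, at bottom, a closed-loop controller. Here is a minimal sketch in Python of such a loop: read a signal, check whether it has drifted outside a safe band, and apply a corrective stimulus when it has. This is my own toy illustration under invented thresholds; nothing in it comes from the ACS Nano article.

    import random

    # Hypothetical safe band for whatever the sensor measures,
    # e.g. a cell's signaling rate. All values are invented.
    SAFE_LOW, SAFE_HIGH = 60.0, 100.0

    def sense() -> float:
        # Stand-in for a nanowire sensor reading.
        return random.uniform(40.0, 120.0)

    def analyze(reading: float) -> float:
        # Return the correction needed to bring the reading back
        # inside the safe band (zero if no correction is needed).
        if reading < SAFE_LOW:
            return SAFE_LOW - reading
        if reading > SAFE_HIGH:
            return SAFE_HIGH - reading
        return 0.0

    def intervene(correction: float) -> None:
        # Stand-in for a stimulating element applying the correction.
        print(f"intervening with correction {correction:+.1f}")

    for _ in range(5):
        reading = sense()
        correction = analyze(reading)
        if correction:
            intervene(correction)
        else:
            print(f"reading {reading:.1f} is in the safe band; no action")

A ‘synthetic nanoelectronic autonomic nervous system’ would presumably amount to many such loops running concurrently, with the sensing, computation, and power generation all embedded in the nanostructures themselves rather than in an external computer.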

This Nanowerk Spotlight essay provides a good overview of nanoelectronics research in medicine, along with plenty of links to previous related essays and other materials.

I am intrigued that there is no mention of the social implications of this research. Equally, I find that social science and humanities research on the social implications of emerging technologies rarely discusses the technical aspects, revealing what seems to be an insurmountable gulf. I suppose that’s why we need writers, artists, musicians, dancers, pop culture, and the like to create experiences, installations, and narratives that help us examine these technologies and their social implications, up close.