Tag Archives: InteraXon

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
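The power argument turns on how much data must be radioed out of the skull. A back-of-envelope comparison shows why transmitting a small set of decoded, action-specific features beats streaming raw broadband recordings. Note: the channel counts, sampling rates, and bit depths below are illustrative figures of my own choosing, not numbers from the Stanford paper,

```python
# Illustrative comparison of wireless data rates: raw broadband neural
# recording vs. transmitting only a compact set of decoded features
# (e.g. binned spike counts). All figures are assumptions for illustration.

def data_rate_bps(channels, samples_per_s, bits_per_sample):
    """Data rate in bits per second for a given recording scheme."""
    return channels * samples_per_s * bits_per_sample

# Raw scheme: 96 electrodes sampled at 30 kHz, 16 bits per sample.
raw = data_rate_bps(channels=96, samples_per_s=30_000, bits_per_sample=16)

# Subset scheme: the same 96 channels reduced to threshold-crossing
# spike counts, binned at 50 Hz with 8 bits per count.
subset = data_rate_bps(channels=96, samples_per_s=50, bits_per_sample=8)

reduction = raw / subset
print(f"raw: {raw/1e6:.1f} Mbps, subset: {subset/1e3:.1f} kbps, "
      f"~{reduction:.0f}x less data to transmit")
# → raw: 46.1 Mbps, subset: 38.4 kbps, ~1200x less data to transmit
```

Since radio transmission power scales roughly with data rate, shrinking the payload by orders of magnitude is what makes the heat budget of a fully implanted wireless device plausible.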

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on technology and human enhancement ethical issues where the technology is gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement). Note: Links have been removed,

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses.[emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ as descriptions of technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful Paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre, my thesis being that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news which also raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

More about MUSE, a Canadian company and its brain sensing headband; women and startups; Canadianness

I first wrote about Ariel Garten and her Toronto-based (Canada) company, InteraXon, in a Dec. 5, 2012 posting where I featured a product, MUSE (Muse), then described as a brainwave controller. A March 5, 2015 article by Lydia Dishman for Fast Company provides an update on the product now described as a brainwave-sensing headband and on the company (Note: Links have been removed),

The technology that had captured the imagination of millions was then incorporated to develop a headband called Muse. It sells at retail stores like BestBuy for about $300 and works in conjunction with an app called Calm as a tool to increase focus and reduce stress.

If you always wanted to learn to meditate without those pesky distracting thoughts commandeering your mind, Muse can help by taking you through a brief exercise that translates brainwaves into the sound of wind. Losing focus or getting antsy brings on the gales. Achieving calm rewards you with a flock of birds across your screen.

The company has grown to 50 employees and has raised close to $10 million from investors including Ashton Kutcher. Garten [Ariel Garten, founder and Chief Executive Officer] says they’re about to close on a Series B round, “which will be significant.”

She says that listening plays an important role at InteraXon. Reflecting back on what you think you heard is an exercise she encourages, especially in meetings. When the development team is building a tool, for example, they use their Muses to meditate and focus, which then allows for listening more attentively and nonjudgmentally.

Women and startups

Dishman references gender and high tech financing in her article about Garten,

Garten doesn’t dwell on her status as a woman in a mostly male-dominated sector. That goes for securing funding for the startup too, despite the notorious bias venture-capital investors have against women startup founders.

“I am sure I lost deals because I am a woman, but also because the idea didn’t resonate,” she says, adding, “I’m sure I gained some because I am a woman, so it is unfair to put a blanket statement on it.”

Yet Garten is the only female member of her C-suite, something she says “is just the way it happened.” Casting the net recently to fill the role of chief operating officer [COO], Garten says there weren’t any women in the running, in part because the position required hardware experience as well as knowledge of working with the Chinese.

She did just hire a woman to be senior vice president of sales and marketing, and says, “When we are hiring younger staff, we are gender agnostic.”

I can understand wanting to introduce nuance into the ‘gender bias and tech startup discussion’ by noting that some rejections could have been due to issues with the idea or implementation. But the comment about being the only female in late stage funding as “just the way it happened” suggests she is extraordinarily naïve or willfully blind. Given her followup statement about her hiring practices, I’m inclined to go with willfully blind. It’s hard to believe she couldn’t find any woman with hardware experience and China experience. It seems more likely she needed a male COO to counterbalance a company with a female CEO. As for being gender agnostic where younger staff are concerned, that’s nice but it’s not reassuring as women have been able to get more junior positions. It’s the senior positions such as COO which remain out of reach and, troublingly, Garten seems to have blown off the question with a weak explanation and a glib assurance of equality at the lower levels of the company.

For more about gender, high tech companies, and hiring/promoting practices, you can read a March 5, 2015 article titled, Ellen Pao Trial Reveals the Subtle Sexism of Silicon Valley, by Amanda Marcotte for Slate.

Getting back to MUSE, you can find out more here. You can find out more about InteraXon here. Unusually, there doesn’t seem to be any information about the management team on the website.

Canadianness

I thought it was interesting that InteraXon’s status as a Canada-based company was mentioned nowhere in Dishman’s article. This is in stark contrast to Nancy Owano’s Dec. 5, 2012 article for phys.org,

A Canadian company is talking about having a window, aka computer screen, into your mind. … InteraXon, a Canadian company, is focused on making a business out of mind-control technology via a headband device, and they are planning to launch this as a $199 brainwave computer controller called Muse. … [emphases mine]

This is not the only recent instance I’ve noticed. My Sept. 1, 2014 posting mentions what was then an upcoming Margaret Atwood event at Arizona State University,

… (from the center’s home page [Note: The center is ASU’s Center for Science and the Imagination]),

Internationally renowned novelist and environmental activist Margaret Atwood will visit Arizona State University this November [2014] to discuss the relationship between art and science, and the importance of creative writing and imagination for addressing social and environmental challenges.

Atwood’s visit will mark the launch of the Imagination and Climate Futures Initiative … Atwood, author of the MaddAddam trilogy of novels that have become central to the emerging literary genre of climate fiction, or “CliFi,” will offer the inaugural lecture for the initiative on Nov. 5.

“We are proud to welcome Margaret Atwood, one of the world’s most celebrated living writers, to ASU and engage her in these discussions around climate, science and creative writing,” …  “A poet, novelist, literary critic and essayist, Ms. Atwood epitomizes the creative and professional excellence our students aspire to achieve.”

There’s not a single mention that she is Canadian there or in a recent posting by Martin Robbins about a word purge from the Oxford Junior Dictionary published by the Guardian science blog network (March 3, 2015 posting). In fact, Atwood was initially described by Robbins as one of Britain’s literary giants. I assume there were howls of anguish once Canadians woke up to read the article since the phrase was later amended to “a number of the Anglosphere’s literary giants.”

The omission of InteraXon’s Canadianness in Dishman’s article for an American online magazine, of Atwood’s Canadianness on the Arizona State University website, and Martin Robbins’ initial appropriation and later change to the vague-sounding ‘Anglosphere’ in his post for the British newspaper, The Guardian, mean the bulk of their readers will likely assume InteraXon is American and that Margaret Atwood, depending on where you read about her, is either an American or a Brit.

It’s flattering that others want to grab a little bit of Canada for themselves.

Coda: The Oxford Junior Dictionary and its excision of ‘nature’ words


Robbins’ March 3, 2015 posting focused on a heated literary discussion about the excision of these words from the Oxford Junior Dictionary (Note:  A link has been removed),

“The deletions,” according to Robert Macfarlane in another article on Friday, “included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail.”

I’m surprised the ‘junior’ dictionary didn’t have “attachment,” “celebrity,” and “committee” prior to the 2007 purge. By the way, it seems no one noticed the purge till recently. Robbins has an interesting take on the issue, one with which I do not entirely agree. I understand needing to purge words but what happens when a child reading a classic such as ‘The Wind in the Willows’ attempts to look up the word ‘willows’? (Thanks to Susan Baxter who in a private communication pointed out the problems inherent with reading new and/or classic books and not being able to find basic vocabulary.)

A brainwave computer controller named Muse

Toronto-based (Canada) company, InteraXon has just presented a portable brainwave controller at the ParisLeWeb 2012 meeting according to a Dec. 5, 2012 article by Nancy Owano for phys.org,

A Canadian company is talking about having a window, aka computer screen, into your mind. Another of the many ways to put it—they believe your computer can be so into you. And vice-versa. InteraXon, a Canadian company, is focused on making a business out of mind-control technology via a headband device, and they are planning to launch this as a $199 brainwave computer controller called Muse. The company is running an Indiegogo campaign to obtain needed funds. Muse is a Bluetooth-connected headset with four electroencephalography sensors, communicating with the person’s computer via the Bluetooth connection.

Here’s more about the technology from InteraXon’s How It Works webpage,

Your brain generates electrical patterns that resonate outside your head, which accumulate into brainwaves detectable by an Electroencephalograph (EEG). The EEG can’t read your thoughts, just your brain’s overall pattern of activity, like how relaxed or alert you are. With practice you can learn to manipulate your brainwave pattern, like flexing a muscle you’ve never used before.

InteraXon’s interface works by turning brainwaves into binary (ones and zeros). We’re like interpreters fluent in the language of the mind: our system analyses the frequency of your brainwaves and then translates them into a control signal for the computer to understand.

Just like a button or switch can activate whatever it’s connected to, your translated brainwaves can now control anything electric. InteraXon designers and engineers make the experience so seamless, the connected technology seems like an extension of your own body.

It would be nice to have found a little more technical detail.
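In the absence of detail from the company, the kind of ‘frequency analysis’ InteraXon describes can at least be sketched: compute the power of the EEG in standard frequency bands and turn the result into a control bit. The band boundaries, the alpha-versus-beta threshold rule, and the synthetic test signal below are my own illustrative assumptions, not InteraXon’s actual algorithm,

```python
import numpy as np

def band_power(eeg, fs, band):
    """Average spectral power of an EEG signal within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

def control_signal(eeg, fs):
    """Return 1 ('relaxed') if alpha power dominates beta power, else 0."""
    alpha = band_power(eeg, fs, (8, 12))   # alpha rhythms: relaxation
    beta = band_power(eeg, fs, (13, 30))   # beta rhythms: alertness
    return 1 if alpha > beta else 0

# Synthetic one-second 'recording': a strong 10 Hz (alpha) rhythm plus noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)
print(control_signal(eeg, fs))  # a dominant alpha rhythm yields 1
```

Real systems would use calibrated per-user baselines and more robust spectral estimates, but the principle matches the company’s description: brainwave frequencies in, a simple control signal out.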

InteraXon is currently featuring its work for the 2010 Olympics in Vancouver (Canada) as an example of past work,

When visitors arrive at Bright Ideas, InteraXon’s thought-controlled computing experience custom designed and built for the 2010 Olympics, they are led to their own pod. In front of each pod is a large projection screen as well as a small training screen. Once seated, a trained host hands them a headset that will measure their brain’s electrical signals.

With help from the host, the participants learn to deliberately alter their brainwaves. By focusing or relaxing their mind, they learn to change the display on their training screen; music and seat vibrations provide immediate feedback to speed the learning process to five minutes or less. Now they are ready for the main event.

Thoughts are turned into light patterns instantaneously as their brain’s digital signal is beamed over the Rocky Mountains, across vast prairies all the way to three major Ontario icons – a distance of 3000 km.

This project – a first at this grand scale – allows each participant to experience a very personal connection with these massive Ontario landmarks, and with every Canadian watching the lightshow, whether online, or in-person.

As for Muse, InteraXon’s latest project, the company has a campaign on Indiegogo to raise money. Here’s the video on the campaign website,

They seem very excited about it all, don’t they? The question that arises is whether or not you actually need a device to let you know when you’re concentrating or when your thoughts are wandering.  Apparently, the answer is yes. The campaign has raised over $240,000 (they asked for $150,000) and it’s open until Dec. 7, 2012.  If you go today, you will find that in addition to the other pledge inducements there’s a special ParisLeWeb $149 pledge for one day only (Dec. 5, 2012). Here’s where you go.

The Canadian Broadcasting Corporation’s Spark radio programme featured an interview (either in Nov. or Dec. 2012) with Ariel Garten, Chief Executive Officer of InteraXon, discussing her company’s work. You can find podcast no. 197 here (it is approximately 55 mins. and there are other interviews bundled with Garten’s). Thanks to Richard Boyer for the tip about the Spark interview.

I have mentioned brain-computer interfaces previously. There’s the Brain-controlled robotic arm means drinking coffee by yourself for the first time in 15 years May 17, 2012 posting and the Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control Oct. 6, 2011 posting amongst others.