Category Archives: ethics

3D bioprinting: a conference about the latest trends (May 3 – 5, 2017 at the University of British Columbia, Vancouver)

The University of British Columbia’s (UBC) Peter Wall Institute for Advanced Studies (PWIAS), along with local biotech firm Aspect Biosystems, is hosting a May 3 – 5, 2017 international research roundtable known as ‘Printing the Future of Therapeutics in 3D’.

A May 1, 2017 UBC news release (received via email) offers some insight into the field of bioprinting from one of the roundtable organizers,

This week, global experts will gather at the University of British
Columbia to discuss the latest trends in 3D bioprinting—a technology
used to create living tissues and organs.

In this Q&A, UBC chemical and biological engineering professor
Vikramaditya Yadav, who is also with the Regenerative Medicine
Cluster Initiative in B.C., explains how bioprinting could potentially
transform healthcare and drug development, and highlights Canadian
innovations in this field.

WHY IS 3D BIOPRINTING SIGNIFICANT?

Bioprinted tissues or organs could allow scientists to predict
beforehand how a drug will interact within the body. For every
life-saving therapeutic drug that makes its way into our medicine
cabinets, Health Canada blocks the entry of nine drugs because they are
proven unsafe or ineffective. Eliminating poor-quality drug candidates
to reduce development costs—and therefore the cost to consumers—has
never been more urgent.

In Canada alone, nearly 4,500 individuals are waiting to be matched with
organ donors. If and when bioprinters evolve to the point where they can
manufacture implantable organs, the concept of an organ transplant
waiting list would cease to exist. And bioprinted tissues and organs
from a patient’s own healthy cells could potentially reduce the risk
of transplant rejection and related challenges.

HOW IS THIS TECHNOLOGY CURRENTLY BEING USED?

Skin, cartilage and bone, and blood vessels are some of the tissue types
that have been successfully constructed using bioprinting. Two of the
most active players are the Wake Forest Institute for Regenerative
Medicine in North Carolina, which reports that its bioprinters can make
enough replacement skin to cover a burn with 10 times less healthy
tissue than is usually needed, and California-based Organovo, which
makes its kidney and liver tissue commercially available to
pharmaceutical companies for drug testing.

Beyond medicine, bioprinting has already been commercialized to print
meat and artificial leather. It’s been estimated that the global
bioprinting market will hit $2 billion by 2021.

HOW IS CANADA INVOLVED IN THIS FIELD?

Canada is home to some of the most innovative research clusters and
start-up companies in the field. The UBC spin-off Aspect Biosystems
has pioneered a bioprinting paradigm that rapidly prints on-demand
tissues. It has successfully generated tissues found in human lungs.

Many initiatives at Canadian universities are laying strong foundations
for the translation of bioprinting and tissue engineering into
mainstream medical technologies. These include the Regenerative Medicine
Cluster Initiative in B.C., which is headed by UBC, and the University
of Toronto’s Institute of Biomaterials and Biomedical Engineering.

WHAT ETHICAL ISSUES DOES BIOPRINTING CREATE?

There are concerns about the quality of the printed tissues. It’s
important to note that the U.S. Food and Drug Administration and Health
Canada are dedicating entire divisions to regulation of biomanufactured
products and biomedical devices, and the FDA also has a special division
that focuses on regulation of additive manufacturing – another name
for 3D printing.

These regulatory bodies have an impressive track record that should
assuage concerns about the marketing of substandard tissue. But cost and
pricing are arguably much more complex issues.

Some ethicists have also raised questions about whether society is not
too far away from creating Replicants, à la _Blade Runner_. The idea is
fascinating, scary and ethically grey. In theory, if one could replace
the extracellular matrix of bones and muscles with a stronger substitute
and use cells that are viable for longer, it is not too far-fetched to
create bones or muscles that are stronger and more durable than their
natural counterparts.

WILL DOCTORS BE PRINTING REPLACEMENT BODY PARTS IN 20 YEARS’ TIME?

This is still some way off. Optimistically, patients could see the
technology in certain clinical environments within the next decade.
However, some technical challenges must be addressed in order for this
to occur, beginning with faithful replication of the correct 3D
architecture and vascularity of tissues and organs. The bioprinters
themselves need to be improved in order to increase cell viability after
printing.

These developments are happening as we speak. Regulation, though, will
be the biggest challenge for the field in the coming years.

There are some events open to the public (from the international research roundtable homepage),

OPEN EVENTS

You’re invited to attend the open events associated with Printing the Future of Therapeutics in 3D.

Café Scientifique

Thursday, May 4, 2017
Telus World of Science
5:30 – 8:00pm [all tickets have been claimed as of May 2, 2017 at 3:15 pm PT]

3D Bioprinting: Shaping the Future of Health

Imagine a world where drugs are developed without the use of animals, where doctors know how a patient will react to a drug before prescribing it and where patients can have a replacement organ 3D-printed using their own cells, without dealing with long donor waiting lists or organ rejection. 3D bioprinting could enable this world. Join us for lively discussion and dessert as experts in the field discuss the exciting potential of 3D bioprinting and the ethical issues raised when you can print human tissues on demand. This is also a rare opportunity to see a bioprinter live in action!

Open Session

Friday, May 5, 2017
Peter Wall Institute for Advanced Studies
2:00 – 7:00pm

A Scientific Discussion on the Promise of 3D Bioprinting

The medical industry is struggling to keep our ageing population healthy. Developing effective and safe drugs is too expensive and time-consuming to continue unchanged. We cannot meet the current demand for transplant organs, and people are dying on the donor waiting list every day.  We invite you to join an open session where four of the most influential academic and industry professionals in the field discuss how 3D bioprinting is being used to shape the future of health and what ethical challenges may be involved if you are able to print your own organs.

ROUNDTABLE INFORMATION

The University of British Columbia and the award-winning bioprinting company Aspect Biosystems are proud to be co-organizing the first “Printing the Future of Therapeutics in 3D” International Research Roundtable. This event will congregate global leaders in tissue engineering research and pharmaceutical industry experts to discuss the rapidly emerging and potentially game-changing technology of 3D-printing living human tissues (bioprinting). The goals are to:

Highlight the state-of-the-art in 3D bioprinting research
Ideate on disruptive innovations that will transform bioprinting from a novel research tool to a broadly adopted systematic practice
Formulate an actionable strategy for industry engagement, clinical translation and societal impact
Present in a public forum, key messages to educate and stimulate discussion on the promises of bioprinting technology

The Roundtable will bring together a unique collection of industry experts and academic leaders to define a guiding vision to efficiently deploy bioprinting technology for the discovery and development of new therapeutics. As the novel technology of 3D bioprinting is more broadly adopted, we envision this Roundtable will become a key annual meeting to help guide the development of the technology both in Canada and globally.

We thank you for your involvement in this ground-breaking event and look forward to you all joining us in Vancouver for this unique research roundtable.

Kind Regards,
The Organizing Committee
Christian Naus, Professor, Cellular & Physiological Sciences, UBC
Vikram Yadav, Assistant Professor, Chemical & Biological Engineering, UBC
Tamer Mohamed, CEO, Aspect Biosystems
Sam Wadsworth, CSO, Aspect Biosystems
Natalie Korenic, Business Coordinator, Aspect Biosystems

I’m glad to see this event is taking place—and with public events too! (Wish I’d seen the Café Scientifique announcement when I first checked for tickets yesterday; I was hoping there’d been some cancellations today.) Finally, for the interested, you can find Aspect Biosystems here.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much-lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting (a meeting I covered in my ‘Brain research, ethics, and nanotechnology’ posting). Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)
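As an aside, for readers wondering what it means for a program to ‘learn’: here is a minimal, purely illustrative sketch of my own (not anything published by Hinton, Google, or the Vector Institute). It is a tiny two-layer neural network that learns the XOR function by gradient descent; the same basic mechanism, scaled up enormously, underlies the image and speech recognition systems mentioned above.

```python
import numpy as np

# Toy training data: the XOR function, a classic example that a single
# neuron cannot learn but a small multi-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))  # weights: inputs -> hidden layer
W2 = rng.normal(size=(4, 1))  # weights: hidden layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass (backpropagation): nudge every weight in the
    # direction that reduces the prediction error -- the "learning".
    delta2 = (pred - y) * pred * (1 - pred)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta2)
    W1 -= 0.5 * (X.T @ delta1)

print(pred.round(2))  # approaches [0, 1, 1, 0] as the network learns
```

Deep learning is essentially this loop, repeated across millions of weights and training examples.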

All this activity may reverse the ‘brain drain’, but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s my earlier posting today (March 31, 2017): China, US, and the race for artificial intelligence research domination.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped, as there was an additional shooting of police officers in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally: in the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016 and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driven cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question about the ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading of the Dallas Police Department’s actions, as well as asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles; if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears by some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, c) or wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”
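The pharmacy example is, at bottom, reward shaping as used in reinforcement learning. Here is a toy sketch of that idea; the plan names and reward values are hypothetical inventions of mine, not Quixote’s actual data (the real system trains a virtual agent in game-like scenarios built from the crowdsourced stories):

```python
# Toy illustration of value alignment via reward shaping. These action
# lists and reward values are hypothetical, not Quixote's actual data.
plans = {
    "grab_and_leave": ["enter", "take_medicine", "leave"],
    "wait_and_pay":   ["enter", "wait_in_line", "pay", "leave"],
    "greet_wait_pay": ["enter", "greet_pharmacist", "wait_in_line", "pay", "leave"],
}

TIME_COST = -1.0  # every action takes time; a naive agent sees only this

# Reward signal distilled from stories: protagonists wait their turn
# and pay for goods, so those actions earn positive reinforcement.
story_reward = {"wait_in_line": 3.0, "pay": 3.0, "greet_pharmacist": 2.0}

def score(actions, value_aligned):
    total = 0.0
    for action in actions:
        total += TIME_COST
        if value_aligned:
            total += story_reward.get(action, 0.0)
    return total

for aligned in (False, True):
    best = max(plans, key=lambda p: score(plans[p], aligned))
    print(f"value alignment={aligned}: agent picks '{best}'")
# Without alignment the fastest plan ('grab_and_leave') scores highest;
# with story-derived rewards the polite, lawful plan wins instead.
```

The design point is that the agent’s objective, not its action repertoire, is what changes: robbery remains possible, it simply stops being the highest-scoring plan.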

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals, and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

BRAIN and ethics in the US with some Canucks (not the hockey team) participating (part two of five)

The Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post kicked off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’, which brings together a number of developments in the worlds of neuroscience*, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills, which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Before further discussing the US Presidential Commission for the Study of Bioethical Issues ‘brain’ meetings mentioned in part one, I have some background information.

The US launched its self-explanatory BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative (originally called BAM; Brain Activity Map) in 2013. (You can find more about the history and details in this Wikipedia entry.)

From the beginning there has been discussion about how nanotechnology will be of fundamental use in the US BRAIN initiative and the European Union’s 10 year Human Brain Project (there’s more about that in my Jan. 28, 2013 posting). There’s also a 2013 book (Nanotechnology, the Brain, and the Future) from Springer, which, according to the table of contents, presents an exciting (to me) range of ideas about nanotechnology and brain research,

I. Introduction and key resources

1. Nanotechnology, the brain, and the future: Anticipatory governance via end-to-end real-time technology assessment by Jason Scott Robert, Ira Bennett, and Clark A. Miller
2. The complex cognitive systems manifesto by Richard P. W. Loosemore
3. Analysis of bibliometric data for research at the intersection of nanotechnology and neuroscience by Christina Nulle, Clark A. Miller, Harmeet Singh, and Alan Porter
4. Public attitudes toward nanotechnology-enabled human enhancement in the United States by Sean Hays, Michael Cobb, and Clark A. Miller
5. U.S. news coverage of neuroscience nanotechnology: How U.S. newspapers have covered neuroscience nanotechnology during the last decade by Doo-Hun Choi, Anthony Dudo, and Dietram Scheufele
6. Nanoethics and the brain by Valerye Milleson
7. Nanotechnology and religion: A dialogue by Tobie Milford

II. Brain repair

8. The age of neuroelectronics by Adam Keiper
9. Cochlear implants and Deaf culture by Derrick Anderson
10. Healing the blind: Attitudes of blind people toward technologies to cure blindness by Arielle Silverman
11. Ethical, legal and social aspects of brain-implants using nano-scale materials and techniques by Francois Berger et al.
12. Nanotechnology, the brain, and personal identity by Stephanie Naufel

III. Brain enhancement

13. Narratives of intelligence: the sociotechnical context of cognitive enhancement by Sean Hays
14. Towards responsible use of cognitive-enhancing drugs by the healthy by Henry T. Greeley et al.
15. The opposite of human enhancement: Nanotechnology and the blind chicken debate by Paul B. Thompson
16. Anticipatory governance of human enhancement: The National Citizens’ Technology Forum by Patrick Hamlett, Michael Cobb, and David Guston
a. Arizona site report
b. California site report
c. Colorado site report
d. Georgia site report
e. New Hampshire site report
f. Wisconsin site report

IV. Brain damage

17. A review of nanoparticle functionality and toxicity on the central nervous system by Yang et al.
18. Recommendations for a municipal health and safety policy for nanomaterials: A Report to the City of Cambridge City Manager by Sam Lipson
19. Museum of Science Nanotechnology Forum lets participants be the judge by Mark Griffin
20. Nanotechnology policy and citizen engagement in Cambridge, Massachusetts: Local reflexive governance by Shannon Conley

Thanks to David Bruggeman’s May 13, 2014 posting on his Pasco Phronesis blog, I stumbled across both a future meeting notice and documentation of the Feb. 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (Note: Links have been removed),

Continuing from its last meeting (in February 2014), the Presidential Commission for the Study of Bioethical Issues will continue working on the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative in its June 9-10 meeting in Atlanta, Georgia.  An agenda is still forthcoming, …

In other developments, Commission staff are apparently going to examine some efforts to engage bioethical issues through plays. I’d be very excited to see some of this happen during a Commission meeting, but any little bit is interesting. The authors of these plays, Karen H. Rothenberg and Lynn W. Bush, have published excerpts in their book The Drama of DNA: Narrative Genomics. …

The Commission also has a YouTube channel …

Integrating a theatrical experience into the reams of public engagement exercises that technologies such as stem cells, GMOs (genetically modified organisms), and nanotechnology tend to spawn seems a delightful idea.

Interestingly, the meeting in June 2014 will coincide with the book’s release date. I dug further and found these snippets of information. The book is being published by Oxford University Press and is available in both paperback and e-book formats. The authors are not playwrights, as one might assume. From the Author Information page,

Lynn Bush, PhD, MS, MA is on the faculty of Pediatric Clinical Genetics at Columbia University Medical Center, a faculty associate at their Center for Bioethics, and serves as an ethicist on pediatric and genomic advisory committees for numerous academic medical centers and professional organizations. Dr. Bush has an interdisciplinary graduate background in clinical and developmental psychology, bioethics, genomics, public health, and neuroscience that informs her research, writing, and teaching on the ethical, psychological, and policy challenges of genomic medicine and clinical research with children, and prenatal-newborn screening and sequencing.

Karen H. Rothenberg, JD, MPA serves as Senior Advisor on Genomics and Society to the Director, National Human Genome Research Institute and Visiting Scholar, Department of Bioethics, Clinical Center, National Institutes of Health. She is the Marjorie Cook Professor of Law, Founding Director, Law & Health Care Program and former Dean at the University of Maryland Francis King Carey School of Law and Visiting Professor, Johns Hopkins Berman Institute of Bioethics. Professor Rothenberg has served as Chair of the Maryland Stem Cell Research Commission, President of the American Society of Law, Medicine and Ethics, and has been on many NIH expert committees, including the NIH Recombinant DNA Advisory Committee.

It is possible to get a table of contents for the book, but I notice not a single playwright is mentioned in any of the promotional material. While I like the idea in principle, it seems a bit odd and suggests that these are purpose-written plays. I have not had good experiences with purpose-written plays, which tend to be didactic and dull, especially when they’re not devised by a professional storyteller.

You can find out more about the upcoming ‘bioethics’ June 9 – 10, 2014 meeting here. As for the Feb. 10 – 11, 2014 meeting, the Brain research, ethics, and nanotechnology (part one of five) May 19, 2014 post featured only the participation of Barbara Herr Harthorn (director of the Center for Nanotechnology in Society at the University of California at Santa Barbara).

It turns out there are some Canadian tidbits. From the Meeting Sixteen: Feb. 10-11, 2014 webcasts page (each presenter is featured in their own webcast of approximately 11 mins.),

Timothy Caulfield, LL.M., F.R.S.C., F.C.A.H.S.

Canada Research Chair in Health Law and Policy
Professor in the Faculty of Law
and the School of Public Health
University of Alberta

Eric Racine, Ph.D.

Director, Neuroethics Research Unit
Associate Research Professor
Institut de Recherches Cliniques de Montréal
Associate Research Professor,
Department of Medicine
Université de Montréal
Adjunct Professor, Department of Medicine and Department of Neurology and Neurosurgery,
McGill University

It was a surprise to see a couple of Canucks listed as presenters and I’m grateful that the Presidential Commission for the Study of Bioethical Issues is so generous with information. In addition to the webcasts, there is the Federal Register Notice of the meeting, an agenda, transcripts, and presentation materials. By the way, Caulfield discussed hype and Racine discussed public understanding of science with regard to neuroscience, both fitting into the overall theme of communication. I’ll have to look more thoroughly, but it seems to me there’s no mention of pop culture as a means of communicating about science and technology.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series:

Part one: Brain research, ethics, and nanotechnology (May 19, 2014 post)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

* ‘neursocience’ corrected to ‘neuroscience’ on May 20, 2014.

Brain research, ethics, and nanotechnology (part one of five)

This post kicks off a series titled ‘Brains, prostheses, nanotechnology, and human enhancement’, which brings together a number of developments in the worlds of neuroscience*, prosthetics, and, incidentally, nanotechnology in the field of interest called human enhancement. Parts one through four are an attempt to draw together a number of new developments, mostly in the US and in Europe. Due to my language skills, which extend to English and, more tenuously, French, I can’t provide a more ‘global perspective’. Part five features a summary.

Barbara Herr Harthorn, head of UCSB’s (University of California at Santa Barbara) Center for Nanotechnology in Society (CNS), one of two such centers in the US (the other is at Arizona State University), was featured in a May 12, 2014 article by Lyz Hoffman for the [Santa Barbara] Independent.com,

… Barbara Harthorn has spent the past eight-plus years leading a team of researchers in studying people’s perceptions of the small-scale science with big-scale implications. Sponsored by the National Science Foundation, CNS enjoys national and worldwide recognition for the social science lens it holds up to physical and life sciences.

Earlier this year, Harthorn attended a meeting hosted by the Presidential Commission for the Study of Bioethical Issues. The commission’s chief focus was on the intersection of ethics and brain research, but Harthorn was invited to share her thoughts on the relationship between ethics and nanotechnology.

(You can find Harthorn’s February 2014 presentation to the Presidential Commission for the Study of Bioethical Issues here on their webcasts page.)

I have excerpted part of the Q&A (questions and answers) from Hoffman’s May 12, 2014 article but encourage you to read the piece in its entirety as it provides both a brief beginners’ introduction to nanotechnology and an insight into some of the more complex social impact issues presented by nano and other emerging technologies vis à vis neuroscience and human enhancement,

So there are some environmental concerns with nanomaterials. What are the ethical concerns? What came across at the Presidential Commission meeting?

They’re talking about treatment of Alzheimer’s and neurological brain disorders, where the issue of loss of self is a fairly integral part of the disease. There are complicated issues about patients’ decision-making. Nanomaterials could be used to grow new tissues and potentially new organs in the future.

What could that mean for us?

Human enhancement is very interesting. It provokes really fascinating discussions. In our view, the discussions are not much at all about the technologies but very much about the social implications. People feel enthusiastic initially, but when reflecting, the issues of equitable access and justice immediately rise to the surface. We [at CNS] are talking about imagined futures and trying to get at the moral and ethical sort of citizen ideas about the risks and benefits of such technologies. Before they are in the marketplace, [the goal is to] understand and find a way to integrate the public’s ideas in the development process.

Here again is a link to the article.

Links to other posts in the Brains, prostheses, nanotechnology, and human enhancement five-part series:

Part two: BRAIN and ethics in the US with some Canucks (not the hockey team) participating (May 19, 2014)

Part three: Gray Matters: Integrative Approaches for Neuroscience, Ethics, and Society issued May 2014 by US Presidential Bioethics Commission (May 20, 2014)

Part four: Brazil, the 2014 World Cup kickoff, and a mind-controlled exoskeleton (May 20, 2014)

Part five: Brains, prostheses, nanotechnology, and human enhancement: summary (May 20, 2014)

* ‘neursocience’ corrected to ‘neuroscience’ on May 20, 2014.

LEGO serious play and Arizona State University’s nanotechnology* ethics and society project*

Arizona State University (ASU) is receiving a $200,000 grant for undergraduates to ‘play seriously’ according to an April 10, 2014 news item on Azonano,

ASU undergraduates have the opportunity to enroll in a challenging course this fall, designed to re-introduce the act of play as a problem-solving technique. The course is offered as part of the larger project, Cross-disciplinary Education in Social and Ethical Aspects of Nanotechnology, which received nearly $200,000 from the National Science Foundation’s Nano Undergraduate Education program.

An April 6, 2014 ASU news release, which originated the news item, provides more details (Note: Links have been removed),

The project is the brainchild of Camilla Nørgaard Jensen, a doctoral scholar in the ASU Herberger Institute’s design, environment and the arts doctoral program. Participants will use an approach called LEGO Serious Play to solve what Jensen calls “nano-conundrums” – ethical dilemmas arising in the field of nanotechnology.

“LEGO Serious Play is an engaging vehicle that helps to create a level playing field, fostering shared conversation and exchange of multiple perspectives,” said Jensen, a trained LEGO Serious Play facilitator. “This creates an environment for reflection and critical deliberation of complex decisions and their future impacts.”

LEGO Serious Play methods are often used by businesses to strategize and encourage creative thinking. In ASU’s project, students will use LEGO bricks to build metaphorical models, share and discuss their creations, and then adapt and respond to feedback received by other students. The expectation is that this activity will help students learn to think and communicate “outside the box” – literally and figuratively – about their work and its long-term societal effects.

This project was piloted, from the news release (Note: A link has been removed),

Fifteen engineering students enrolled in the Grand Challenge Scholar Program participated in a Feb. 24 [20??] pilot workshop to test project strategies. Comments from students included, “I experienced my ideas coming to life as I built the model,” and “I gained a perspective as to how ideas cannot take place entirely in the head.” These anecdotal outcomes confirmed the team’s assumptions that play and physical activity can enhance the formation and communication of ideas.

This is a cross-disciplinary effort (from the news release),

“Technology is a creative and collaborative process,” said Seager [Thomas Seager, an associate professor and Lincoln Fellow of Ethics and Sustainability in the School of Sustainable Engineering and the Built Environment], who is principal investigator for the grant. “I want a classroom that will unlock technology creativity, in which students from every discipline can be creative. For me, overcoming obstacles to communication is just the first step.”

Seager’s work teaching ethical reasoning skills to science and engineering graduate students will help inform the project. Selin’s [Cynthia Selin, an assistant professor in the School of Sustainability and the Center for Nanotechnology in Society] research on the social implications of new technologies, and Hannah’s [Mark Hannah, an assistant professor in the rhetoric and composition program in the ASU Department of English] expertise in professional and technical communication will facilitate the dialogue-based approach to understanding the communication responsibilities of transdisciplinary teams working in nanotechnology. A steering committee of 12 senior advisers is helping to guide the project’s progress.

“Being a new scientific field that involves very complex trade-offs and risk when it comes to implementation, the subject of ethics in nanoscience is best addressed in a transdisciplinary setting. When problems are too complex to be solved by one discipline alone, the approach needs to go beyond the disciplinary silos,” said Jensen.

“As we train the next generation of students to understand the opportunities and responsibilities involved in creating and using emerging technologies that have the potential to benefit society, we need to advance our capacity to teach diverse stakeholders how to communicate effectively,” said Jensen.

I last wrote about play and nanotechnology in an Aug. 2, 2013 posting about training teachers to introduce nanotechnology to middle schoolers. As for ASU, they’ve had a rich week with regard to funding; in an April 8, 2014 posting, I described a $5M grant for a multi-university project, the Life Cycle of Nanomaterials Network, headquartered at ASU.

* Added an ‘o’ to ‘nantechnology’ so the word now reads correctly as ‘nanotechnology’ and added a space between the words ‘society’ and ‘project’ in the head for this post.

Surprise: telepresent Ed Snowden at TED 2014’s Session 2: Retrospect

The first session (Retrospect) this morning held a few surprises, i.e., unexpected speakers Brian Greene and Ed Snowden (the whistleblower behind revelations of extensive and [illegal or extralegal?] surveillance by the US National Security Agency [NSA]). I’m not sure how Snowden fits into the session theme of Retrospect, but I think that’s less the point than the sheer breathtaking surprise and his topic’s importance to current public discourse around much of the globe.

Snowden is mostly focused on PRISM (from its Wikipedia entry; Note: Links have been removed),

PRISM is a clandestine mass electronic surveillance data mining program launched in 2007 by the National Security Agency (NSA), with participation from an unknown date by the British equivalent agency, GCHQ.[1][2][3] PRISM is a government code name for a data-collection effort known officially by the SIGAD US-984XN.[4][5] The Prism program collects stored Internet communications based on demands made to Internet companies such as Google Inc. and Apple Inc. under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.[6] The NSA can use these Prism requests to target communications that were encrypted when they traveled across the Internet backbone, to focus on stored data that telecommunication filtering systems discarded earlier,[7][8] and to get data that is easier to handle, among other things.[9]
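The phrase “court-approved search terms” may sound abstract, so here is a toy Python sketch of the general idea of selector-based filtering over stored messages. To be clear, this is my own invented illustration: the records, terms, and logic are made up and have nothing to do with how PRISM actually works internally.

  # Illustrative only: toy selector matching over stored messages.
  # Records and terms are invented; nothing here reflects PRISM's internals.
  stored_messages = [
      {"sender": "alice@example.com", "body": "meeting notes attached"},
      {"sender": "bob@example.com", "body": "shipment arrives Tuesday"},
  ]

  approved_terms = ["shipment"]  # stand-in for court-approved selectors

  def matches(message, terms):
      # Flag a message if any approved term appears in its body.
      return any(term in message["body"].lower() for term in terms)

  flagged = [m for m in stored_messages if matches(m, approved_terms)]
  print(flagged)  # only bob's message is returned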

He also described Boundless Informant in response to a question from the session co-moderator, Chris Anderson (from its Wikipedia entry; Note: Links have been removed),

Boundless Informant or BOUNDLESSINFORMANT is a big data analysis and data visualization tool used by the United States National Security Agency (NSA). It gives NSA managers summaries of the NSA’s world wide data collection activities by counting metadata.[1] The existence of this tool was disclosed by documents leaked by Edward Snowden, who worked at the NSA for the defense contractor Booz Allen Hamilton.[2]
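“Counting metadata” is easier to grasp with a toy example, so here is another invented Python sketch: it aggregates records about communications (not their content) into per-country counts, the kind of high-level summary a data-visualization dashboard could then map. It bears no relation to the real tool’s internals.

  from collections import Counter

  # Illustrative only: invented metadata records, i.e., facts *about*
  # communications (origin, duration), not their content.
  metadata_records = [
      {"country": "DE", "duration_sec": 120},
      {"country": "DE", "duration_sec": 45},
      {"country": "BR", "duration_sec": 300},
  ]

  # Counting records per country yields a summary suitable for a heat map.
  counts = Counter(record["country"] for record in metadata_records)
  print(counts.most_common())  # [('DE', 2), ('BR', 1)]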

Anderson asks Snowden, “Why should we care [about increased surveillance]? After all, we’re not doing anything wrong.” Snowden’s response notes that we have a right to privacy and that our actions can be misinterpreted or used against us at any time, present or future.

Anderson mentions Dick Cheney, and Snowden notes that Cheney in the past made some overblown comments about Assange, comments Cheney now dismisses in the face of what he considers to be Snowden’s greater trespass.

Snowden is now commenting on the NSA’s attempts to undermine internet security by misleading its partners. He again makes a plea for privacy. He also notes that US security efforts have largely been defensive, i.e., protecting against other countries’ attempts to get US secrets. These latest programmes shift US security from a defensive strategy to an offensive one (a football metaphor). These changes have been made without public scrutiny.

Anderson asks Snowden about his personal safety. His response (more or less): “I go to sleep every morning thinking about what I can do to help the American people. … I’m happy to do what I can.”

Anderson asks the audience members whether they think Snowden’s was a reckless act or a heroic act. Some hands go up for reckless, more go up for heroic, and many hands remain still.

Snowden, “We need to keep the internet safe for us and if we don’t act we will lose our freedom.”

Anderson asks Tim Berners-Lee to come up to the stage, and the discussion turns to his (Berners-Lee’s) proposal for a Magna Carta for the internet.

Tim Berners-Lee’s biography (from his Wikipedia entry),

Sir Timothy John “Tim” Berners-Lee, OM, KBE, FRS, FREng, FRSA, DFBCS (born 8 June 1955), also known as “TimBL”, is a British computer scientist, best known as the inventor of the World Wide Web. He made a proposal for an information management system in March 1989,[4] and he implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet sometime around mid November.[5][6][7][8][9]

Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the Web’s continued development. He is also the founder of the World Wide Web Foundation, and is a senior researcher and holder of the Founders Chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).[10] He is a director of the Web Science Research Initiative (WSRI),[11] and a member of the advisory board of the MIT Center for Collective Intelligence.[12][13]
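For readers unfamiliar with the plumbing, the kind of client-server exchange Berners-Lee first demonstrated can be sketched with today’s Python standard library. This is a minimal modern illustration, not his 1989 code; the address, port, and message are my own choices.

  import threading
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.request import urlopen

  class HelloHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # Answer any GET request with a short plain-text body.
          body = b"Hello from a minimal HTTP server"
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  # Run the server in the background, then play the client's part.
  server = HTTPServer(("127.0.0.1", 8000), HelloHandler)
  threading.Thread(target=server.serve_forever, daemon=True).start()

  with urlopen("http://127.0.0.1:8000/") as response:
      print(response.status, response.read().decode())

  server.shutdown()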

The Magna Carta (from its Wikipedia entry; Note: Links have been removed),

Magna Carta (Latin for Great Charter),[1] also called Magna Carta Libertatum or The Great Charter of the Liberties of England, is an Angevin charter originally issued in Latin in June 1215. It was sealed under oath by King John at Runnymede, on the bank of the River Thames near Windsor, England, on June 15, 1215.[2]

Magna Carta was the first document forced onto a King of England by a group of his subjects, the feudal barons, in an attempt to limit his powers by law and protect their rights.

The charter is widely known throughout the English speaking world as an important part of the protracted historical process that led to the rule of constitutional law in England and beyond.

When asked by Anderson if he would return to the US if given amnesty, Snowden says yes, as long as he can continue his work. He’s not willing to trade away his work of bringing these issues to the public forefront just for the chance to go home again.

Ethical nano in Second Life

Isn’t Second Life dead? Apparently not.

While you won’t be able to attend the live event online, there will be free access to the archived nano and ethics discussion, held on July 20, 2012, from 1 pm to 4 pm EDT at the Terasem Island Conference Center in Second Life. The question and speakers were as follows (from the July 20, 2012 event posting on the Kurzweil Accelerating Intelligence website),

What should be the ethical constraints on nanotechnology?

Speakers include:

  • Martine Rothblatt, Ph.D. — “Geoethical Rules for Nanotechnological Advances”
  • Peter Wicks — “Nanotechnology and the Environment: Enemies or Allies?”
  • Alex Wissner-Gross, Ph.D. — “Physically Programmable Surfaces”

The workshop is an exchange of scholarly views on uses of lifesaving nanotechnologies, including the impacts on people, accessibility, and the monitoring of compliance with ethical norms.

I think if you check out the Terasem Island Conference Center in Second Life (SLURL), you will be able to access the archived discussion.

Nano events

The Project on Emerging Nanotechnologies (PEN) has a couple of events coming up later this month. The first one is this coming Thurs., Jan. 8, 2009, ‘Synthetic Biology: Is Ethics A Showstopper?’ from 12:30 pm to 1:30 pm EST. The event features two speakers: Arthur Caplan, an ethicist from the University of Pennsylvania, and Andrew Maynard, the chief science advisor for PEN. They request an RSVP if you are attending in person. Go here for more details and/or to RSVP. Or you can view the webcast, live or later.

Their other event, ‘Nanotech and Your Daily Vitamins’, takes place Wed., Jan. 14, 2009, from 9:30 am to 10:30 am EST. The featured speakers, William B. Schultz and Lisa Barclay, are the authors of a report for PEN about the FDA and how it can address issues surrounding dietary supplements that use nanomaterials. For more details about the event and/or to RSVP, go here. There is also the webcast option. There is a link to the report from the event page, but you have to log in to view it (as of Jan. 6, 2009).

Nanotech BC is cancelling its Jan. 15, 2009 breakfast speaker event. Meanwhile, Nanotech BC organizers are preparing for the second Cascadia Symposium on April 20 – 21, 2009 at the Bayshore. They’ve gone for a larger venue (250 people) than last year’s. No other details are available yet.