Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped, as there was another shooting of police officers, this time in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally: Robots, Dallas, ethics, and killing: In the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American men were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016, and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American man, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of Black Lives Matter, a movement started in the US in 2013.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driving cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot to deliver a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores [the] growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question of ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading of the Dallas Police Department’s actions, as well as asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations, rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remains unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles. If you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

Using light to make gold crystal nanoparticles

Gold crystal nanoparticles? Courtesy: University of Florida

A team from the University of Florida has used gold instead of silver in a process known as plasmon-driven synthesis. From a July 8, 2016 news item on phys.org,

A team of University of Florida researchers has figured out how gold can be used in crystals grown by light to create nanoparticles, a discovery that has major implications for industry and cancer treatment and could improve the function of pharmaceuticals, medical equipment and solar panels.

A July 6, 2016 University of Florida news release, which originated the news item, provides more detail,

Nanoparticles can be “grown” in crystal formations with special use of light, in a process called plasmon-driven synthesis. However, scientists have had limited control unless they used silver, but silver limits the uses for medical technology. The team is the first to successfully use gold, which works well within the human body, with this process.

“How does light actually play a role in the synthesis? [This knowledge] was not well developed,” said David Wei, an associate professor of chemistry who led the research team. “Gold was the model system to demonstrate this.”

Gold is highly desired for nanotechnology because it is malleable, does not react with oxygen and conducts heat well. Those properties make gold an ideal material for nanoparticles, especially those that will be placed in the body.

When polyvinylpyrrolidone, or PVP, a substance commonly found in pharmaceutical tablets, is used in the plasmon-driven synthesis, it enables scientists to better control the growth of crystals. In Wei’s research, PVP surprised the team by showing its potential to relay light-generated “hot” electrons to a gold surface to grow the crystals.

The research describes the first plasmonic synthesis strategy that can make high-yield gold nanoprisms. Even more exciting, the team has demonstrated that visible-range and low-power light can be used in the synthesis. Combined with nanoparticles being used in solar photovoltaic devices, this method can even harness solar energy for chemical synthesis, to make nanomaterials or for general applications in chemistry.

Wei has spent the last decade working in nanotechnology. He is intrigued by its applications in photochemistry and biomedicine, especially in targeted drug delivery and photothermal therapeutics, which is crucial to cancer treatment. His team includes collaborators from Pacific Northwest National Laboratory, where he has worked as a visiting scholar, and Brookhaven National Laboratory. In addition, the project has provided an educational opportunity for chemistry students: one high school student (through UF’s Student Science Training Program), two University scholars who also [sic] funded by the Howard Hughes Medical Institute, five graduate students and two postdocs.

Here’s a link to and a citation for the paper,

Polyvinylpyrrolidone-induced anisotropic growth of gold nanoprisms in plasmon-driven synthesis by Yueming Zhai, Joseph S. DuChene, Yi-Chung Wang, Jingjing Qiu, Aaron C. Johnston-Peck, Bo You, Wenxiao Guo, Benedetto DiCiaccio, Kun Qian, Evan W. Zhao, Frances Ooi, Dehong Hu, Dong Su, Eric A. Stach, Zihua Zhu, & Wei David Wei. Nature Materials (2016) doi:10.1038/nmat4683 Published online 04 July 2016

This paper is behind a paywall.

Nanotechnology-enhanced roads in South Africa and in Kerala, India

It’s all about road infrastructure in these two news bits.

Road building and maintenance in sub-Saharan Africa

A July 7, 2016 news item on mybroadband.co.za describes hopes that nanotechnology-enabled products will make roads easier to build and maintain,

The solution for affordable road infrastructure development could lie in the use of nanotechnology, according to a paper presented at the 35th annual Southern African Transport Conference in Pretoria.

The cost of upgrading, maintaining and rehabilitating road infrastructure with limited funds makes it impossible for sub-Saharan Africa to become competitive in the world market, according to Professor Gerrit Jordaan of the University of Pretoria, a speaker at the conference.

The affordability of road infrastructure depends on the materials used, the environment in which the road will be built and the traffic that will be using the road, explained Professor James Maina of the department of civil engineering at the University of Pretoria.

Hauling materials to a construction site contributes hugely to costs, which planners try to minimise by getting materials closer to the site. But if there aren’t good quality materials near the site, another option is to modify poor quality materials for construction purposes. This is where nanotechnology comes in, he explained.

For example, if the material is clay soil, it has a high affinity to water so when it absorbs water it expands, and when it dries out it contracts. Nanotechnology can make the soil water repellent. “Essentially, nanotechnology changes the properties to work for the construction process,” he said.

These nanotechnology-based products have been used successfully in many parts of the world, including India, the USA and in the West African region.

There have also been concerns about road building and maintenance in Kerala, India.

Nanotechnology for city roads in Kochi

A March 23, 2015 news item in the Times of India describes an upcoming test of a nanotechnology-enabled all-weather road,

Citizens can now look forward to better roads with the local self-government department planning to use nanotechnology to construct all-weather roads.

For the district trial run, the department has selected a 300-metre stretch of a panchayat road in Edakkattuvayal panchayat. The trial would experiment with nanotechnology to build moisture resistant, long-lasting and maintenance-free roads.

“Like the public, the department is also fed up with the poor condition of roads in the state. Crores of rupees are spent every year for repairing and resurfacing the roads. This is because of heavy rains in the state that weakens the soil base of roads, resulting in potholes that affect the ride-quality of the road surface,” said KT Sajan, assistant executive engineer, LSGD, who is supervising the work.

The nanotechnology has been developed by Zydex Technologies, a Gujarat-headquartered firm. The company’s technology has already been used by major private contract firms that build national highways in India and in other major projects in European and African countries.

Oddly, you can’t find out much more about the Zydex products mentioned in the article on its Roads Solution webpage, where you are provided with only a general description of the technology,

Revolutionary nanotechnology for building moisture resistant, long lasting & maintenance free roads through innovative adaptation of Organosilane chemistry.

Zydex Nanotechnology: A Game Changer

Zydex Nanotechnology has a value proposition for all layers of the road

SOIL LAYERS
Zydex Nanotechnology makes the soil moisture resistant, reduces expansiveness and stabilizes the soil to improve its bearing strength manifold. If used with 1% cement, it can stabilize almost any type of soil, by improving the California Bearing Ratio (CBR) to even 100 or above.

Here is the real change in game, as stronger soil bases would now allow optimization of road section thicknesses, potentially saving 10-15% road construction cost.

BOND COATS
Prime & Tack coats become 100 % waterproofed, due to penetration and chemical bonding. This also ensures uniform load transfer. And all this at lower residual bitumen.

ASPHALTIC LAYERS
Chemical bonding between aggregates and asphalt eliminates moisture induced damage of asphaltic layers.

Final comment

I hadn’t meant to wait so long to publish the bit about Kerala’s roads, but serendipity has allowed me to link it to a piece about South Africa’s roads and to note a resemblance between the problems encountered in both regions.

Re-envisioning the laboratory: an art/sci or sci-art (take your pick) symposium

DFA186 Hades. 2012. Unique digital C-print on watercolor paper. Artist: Brandon Ballengée

Artist (work seen above), biologist, and environmental activist Brandon Ballengée will be a keynote speaker at the Re-envisioning the Laboratory: Sci-Art Symposium being held at the University of Wyoming. The evening reception takes place Thursday, Sept. 8, 2016, while the symposium itself runs Friday, Sept. 9 – Saturday, Sept. 10, 2016. You can read more about the symposium (the schedule is not yet complete) in a July 12, 2016 posting by CommNatural (Bethann G. Merkle) on her CommNatural blog,

I’m super excited to invite you to register for a Sci-Art Symposium I’ve been co-planning for the past year. The big idea is to bring together a wide-ranging set of ideas, examples, and thinkers/do-ers to build a powerful foundation for on-going SciArt synergy on the University of Wyoming campus, in Wyoming communities, and beyond. We’re organizing sessions around not just beautiful examples and great ideas, but also challenges and funding opportunities, with the intent to address not just what works, but how it works, what gets in the way, and how to move ahead with the SciArt initiatives you envision.

The rest of this blog post provides essential information about the symposium. If you have any questions, don’t hesitate to contact me or any of the other organizers – there’s a slew of us from art and science disciplines across campus!

Hope to see you there!

SYMPOSIUM INFORMATION

The 2016 Sci-Art Symposium will provide a forum for inspiration, scholarly research, networking and opportunities to get the tools, methods and momentum to take on innovative interdisciplinary work across community, disciplinary, and topical boundaries. Sessions will be organized into five thematic categories: influences and opportunities, processes and methods, outcomes and products, challenges and opportunities, and next steps and future applications. Keynote address will feature artist-biologist Brandon Ballengée, and other sessions will feature presenters from throughout the nation.

Registration Fees:

$75  General Admission
$0    Full-time Student Admission (Only applicable to students enrolled in full-time schedule, may be asked for verification)

Transportation and lodging information is available on the event website.

CONTACT INFORMATION

If you have questions about your registration or if you need to cancel your attendance, please contact Katie Christensen, Curator of Education and Statewide Engagement, at katie.christensen@uwyo.edu or 307-766-3496.

Re-envisioning the Lab: 2016 Sci-Art Symposium is made possible by University of Wyoming Art Museum, in partnership with the Biodiversity Institute, Haub School of Environment and Natural Resources, Department of Art and Art History, Science and Math Teaching Center and MFA in Creative Writing.

I’m a little surprised that the US National Science Foundation is not one of the funders. In fact, most, if not all, of the funders are part of the University of Wyoming.

As to whether there is a correct form: artsci or sciart; art/sci or sci/art; sci-art or art-sci; SciArt or ArtSci, and whether the terms refer to the same thing or two different approaches to bringing together art and science in a project, I have no idea. Perhaps they’ll discuss terminology at the symposium.

One final thought: since they don’t have the final schedule nailed down, perhaps it’s still possible to submit a proposal for a talk or an entry for a sciart piece. Good luck!

Better and greener oil recovery

A June 27, 2016 news item on phys.org describes research on achieving better oil recovery,

As oil producers struggle to adapt to lower prices, getting as much oil as possible out of every well has become even more important, despite concerns from nearby residents that some chemicals used to boost production may pollute underground water resources.

Researchers from the University of Houston have reported the discovery of a nanotechnology-based solution that could address both issues – achieving 15 percent tertiary oil recovery at low cost, without the large volume of chemicals used in most commercial fluids.

A June 27, 2016 University of Houston news release (also on EurekAlert) by Jeannie Kever, which originated the news item, provides more detail,

The solution – graphene-based Janus amphiphilic nanosheets – is effective at a concentration of just 0.01 percent, meeting or exceeding the performance of both conventional and other nanotechnology-based fluids, said Zhifeng Ren, MD Anderson Chair professor of physics. Janus nanoparticles have at least two physical properties, allowing different chemical reactions on the same particle.

The low concentration and the high efficiency in boosting tertiary oil recovery make the nanofluid both more environmentally friendly and less expensive than options now on the market, said Ren, who also is a principal investigator at the Texas Center for Superconductivity at UH. He is lead author on a paper describing the work, published June 27 [2016] in the Proceedings of the National Academy of Sciences.

“Our results provide a novel nanofluid flooding method for tertiary oil recovery that is comparable to the sophisticated chemical methods,” they wrote. “We anticipate that this work will bring simple nanofluid flooding at low concentration to the stage of oilfield practice, which could result in oil being recovered in a more environmentally friendly and cost-effective manner.”

In addition to Ren, researchers involved with the project include Ching-Wu “Paul” Chu, chief scientist at the Texas Center for Superconductivity at UH; graduate students Dan Luo and Yuan Liu; researchers Feng Wang and Feng Cao; Richard C. Willson, professor of chemical and biomolecular engineering; and Jingyi Zhu, Xiaogang Li and Zhaozhong Yang, all of Southwest Petroleum University in Chengdu, China.

The U.S. Department of Energy estimates as much as 75 percent of recoverable reserves may be left after producers capture hydrocarbons that naturally rise to the surface or are pumped out mechanically, followed by a secondary recovery process using water or gas injection.

Traditional “tertiary” recovery involves injecting a chemical mix into the well and can recover between 10 percent and 20 percent, according to the authors.

But the large volume of chemicals used in tertiary oil recovery has raised concerns about potential environmental damage.

“Obviously simple nanofluid flooding (containing only nanoparticles) at low concentration (0.01 wt% or less) shows the greatest potential from the environmental and economic perspective,” the researchers wrote.

Previously developed simple nanofluids recover less than 5 percent of the oil when used at a 0.01 percent concentration, they reported. That forces oil producers to choose between a higher nanoparticle concentration – adding to the cost – or mixing with polymers or surfactants.

In contrast, they describe recovering 15.2 percent of the oil using their new and simple nanofluid at that concentration – comparable to chemical methods and about three times more efficient than other nanofluids.

Dan Luo, a UH graduate student and first author on the paper, said when the graphene-based fluid meets with the brine/oil mixture in the reservoir, the nanosheets in the fluid spontaneously go to the interface, reducing interfacial tension and helping the oil flow toward the production well.

Ren said the solution works in a completely new way.

“When it is injected, the solution helps detach the oil from the rock surface,” he said. Under certain hydrodynamic conditions, the graphene-based fluid forms a strong elastic and recoverable film at the oil and water interface, instead of forming an emulsion, he said.

Researchers said the difference is due to the asymmetric property of the 2-dimensional material. Nanoparticles are usually either hydrophobic – water-repelling, like oil – or hydrophilic, water-like, said Feng Wang, a post-doctoral researcher who shared first-author duties with Luo.

“Ours is both,” he said. “Ours is Janus and also strictly amphiphilic.”

Here’s a link to and a citation for the paper,

Nanofluid of graphene-based amphiphilic Janus nanosheets for tertiary or enhanced oil recovery: High performance at low concentration by Dan Luo, Feng Wang, Jingyi Zhu, Feng Cao, Yuan Liu, Xiaogang Li, Richard C. Willson, Zhaozhong Yang, Ching-Wu Chu, and Zhifeng Ren. PNAS 2016 doi: 10.1073/pnas.1608135113 published ahead of print June 27, 2016.

This paper is behind a paywall.

‘Bionic’ cardiac patch with nanoelectronic scaffolds and living cells

A June 27, 2016 news item on Nanowerk announced that Harvard University researchers may have taken us a step closer to bionic cardiac patches for human hearts (Note: A link has been removed),

Scientists and doctors in recent decades have made vast leaps in the treatment of cardiac problems – particularly with the development in recent years of so-called “cardiac patches,” swaths of engineered heart tissue that can replace heart muscle damaged during a heart attack.

Thanks to the work of Charles Lieber and others, the next leap may be in sight.

Lieber, the Mark Hyman, Jr. Professor of Chemistry and Chair of the Department of Chemistry and Chemical Biology, postdoctoral fellow Xiaochuan Dai and other co-authors have constructed nanoscale electronic scaffolds that can be seeded with cardiac cells to produce a “bionic” cardiac patch. The study is described in a June 27 [2016] paper published in Nature Nanotechnology (“Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues”).

A June 27, 2016 Harvard University press release on EurekAlert, which originated the news item, provides more information,

“I think one of the biggest impacts would ultimately be in the area that involves replacement of damaged cardiac tissue with pre-formed tissue patches,” Lieber said. “Rather than simply implanting an engineered patch built on a passive scaffold, our work suggests it will be possible to surgically implant an innervated patch that would now be able to monitor and subtly adjust its performance.”

Once implanted, Lieber said, the bionic patch could act similarly to a pacemaker – delivering electrical shocks to correct arrhythmia – but the possibilities don’t end there.

“In this study, we’ve shown we can change the frequency and direction of signal propagation,” he continued. “We believe it could be very important for controlling arrhythmia and other cardiac conditions.”

Unlike traditional pacemakers, Lieber said, the bionic patch – because its electronic components are integrated throughout the tissue – can detect arrhythmia far sooner, and operate at far lower voltages.

“Even before a person started to go into large-scale arrhythmia that frequently causes irreversible damage or other heart problems, this could detect the early-stage instabilities and intervene sooner,” he said. “It can also continuously monitor the feedback from the tissue and actively respond.”

“And a normal pacemaker, because it’s on the surface, has to use relatively high voltages,” Lieber added.

The patch might also find use, Lieber said, as a tool to monitor tissue responses to cardiac drugs, or to help pharmaceutical companies screen the effectiveness of drugs under development.

Likewise, he noted, the bionic cardiac patch could be a unique platform for studying how tissue behavior evolves during developmental processes such as aging, ischemia, or the differentiation of stem cells into mature cardiac cells.

Although the bionic cardiac patch has not yet been implanted in animals, “we are interested in identifying collaborators already investigating cardiac patch implantation to treat myocardial infarction in a rodent model,” he said. “I don’t think it would be difficult to build this into a simpler, easily implantable system.”

In the long term, Lieber believes, the development of nanoscale tissue scaffolds represents a new paradigm for integrating biology with electronics in a virtually seamless way.

Referring to the injectable electronics technology he pioneered last year, Lieber even suggested that similar cardiac patches might one day simply be delivered by injection.

“It may actually be that, in the future, this won’t be done with a surgical patch,” he said. “We could simply do a co-injection of cells with the mesh, and it assembles itself inside the body, so it’s less invasive.”

Here’s a link to and a citation for the paper,

Three-dimensional mapping and regulation of action potential propagation in nanoelectronics-innervated tissues by Xiaochuan Dai, Wei Zhou, Teng Gao, Jia Liu & Charles M. Lieber. Nature Nanotechnology (2016)  doi:10.1038/nnano.2016.96 Published online 27 June 2016

This paper is behind a paywall.

Dexter Johnson in a June 27, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides more technical detail (Note: Links have been removed),

In research described in the journal Nature Nanotechnology, Lieber and his team employed a bottom-up approach that started with the fabrication of doped p-type silicon nanowires. Lieber has been spearheading the use of silicon nanowires as a scaffold for growing nerve, heart, and muscle tissue for years now.

In this latest work, Lieber and his team fabricated the nanowires, applied them onto a polymer surface, and arranged them into a field-effect transistor (FET). The researchers avoided an increase in the device’s impedance as its dimensions were reduced by adopting this FET approach as opposed to simply configuring the device as an electrode. Each FET, along with its source-drain interconnects, created a 4-micrometer-by-20-micrometer-by-350-nanometer pad. Each of these pads was, in effect, a single recording device.

I recommend reading Dexter’s posting in its entirety as Charles Lieber shares additional technical information not found in the news release.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1956, the psychologist Frank Rosenblatt of the New York State Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
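Marchand-Maillet’s description translates almost directly into code. Here is a minimal sketch in Python with NumPy – my own illustration with random, untrained placeholder weights, not anyone’s actual software – of the weighted sum, threshold-style activation, and layer-to-layer hand-off described above:

```python
import numpy as np

# A minimal sketch of the layered computation described above: each
# artificial neuron weighs its inputs, sums them, and applies an
# activation before passing the result to the next layer.
# The weights are random placeholders, not a trained network.
rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # Weighted sum of the inputs, then a threshold-like activation (ReLU):
    # the neuron only "fires" (outputs a nonzero value) above zero.
    z = weights @ x + biases
    return np.maximum(z, 0.0)

x = rng.random(4)                              # e.g., four pixel-colour features
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # first layer: low-level features
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # deeper layer: more abstract ones

hidden = layer(x, w1, b1)     # results are weighted, summed and passed on
output = layer(hidden, w2, b2)
print(output)
```

A trained network would adjust the weights from labelled examples (the copious cat photos mentioned earlier); the forward pass itself stays exactly this simple.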

Video games to the rescue

For decades, the frontier of computing held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with an unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
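To make the loop concrete, here is a toy recurrent network in Python with NumPy – a sketch of my own with random placeholder weights, not IDSIA code; a real LSTM adds gated memory cells on top of this plain recurrence. Because the hidden state is fed back at every step, the state after processing ‘oat’ still depends on whether the word began with ‘b’ or ‘fl’:

```python
import numpy as np

rng = np.random.default_rng(1)
chars = "bfloat"  # alphabet: b, f, l, o, a, t
onehot = {c: np.eye(len(chars))[i] for i, c in enumerate(chars)}

w_in = rng.normal(size=(6, len(chars)))  # input-to-hidden weights
w_rec = rng.normal(size=(6, 6))          # hidden-to-hidden weights (the loop)

def encode(word):
    h = np.zeros(6)
    for c in word:
        # The previous state h is fed back in at every step, so the
        # first characters keep influencing everything that follows.
        h = np.tanh(w_in @ onehot[c] + w_rec @ h)
    return h

# Both words end in 'oat', yet their final states differ because the
# recurrence "remembered" the 'b' vs 'fl' prefix.
print(np.allclose(encode("boat"), encode("float")))  # False
```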

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Nanotechnology Molecular Tagging for sniffing out explosives

A nifty technology for sniffing out explosives is described in a June 22, 2016 news item in Government Security News magazine. I do think they might have eased up on the Egypt Air disaster reference and the implication that it might have been avoided with the use of this technology,

The crash of Egypt Air Flight 804 recently again raised concerns over whether a vulnerability in pre-flight security has led to another deadly terrorist attack. Officials haven’t found a cause for the crash yet, but news reports indicate that officials believe either a bomb or fire is what brought the plane down [link included from press release].

Regardless of the cause, the Chief Executive Officer of British-based Ancon Technologies said that the incident shows the compelling need for more versatile and affordable explosive detection technology.

“There are still too many vulnerabilities in transportation systems around the world,” said CEO Dr. Robert Muir. “That’s why our focus has been on developing explosive detection technology that is highly efficient, easily deployable and economically priced.”

A June 21, 2016 Ancon Technologies press release on PR Web, which originated the news item, describes the technology in a little more detail,

Using nanotechnology to scan sensitive vapour readings, Ancon Technologies has developed unique security devices with exceptional sensitivity to detect explosive chemicals and materials. Called Nanotechnology Molecular Tagging, the technology is used to look for specific molecular markers that are emitted from the chemicals used in explosive compounds. An NMT device can then be programmed to look for these compounds and gauge concentrations.

“The result is unprecedented sensitivity for a device that is portable and versatile,” Dr. Muir said. “The technology is also highly selective, meaning it can distinguish the molecules it is testing for against the backdrop of other chemicals and readings in the air.”
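The press release gives no implementation details, but the screening logic it describes – program a watch-list of marker compounds, measure vapour concentrations, flag anything at or above a threshold – can be sketched in a few lines of Python. Everything below (the marker names, threshold values and the screen function) is invented for illustration and bears no relation to Ancon’s actual software:

```python
# Hypothetical watch-list mapping each programmed marker compound to an
# alert threshold in parts per trillion. All values are invented.
TARGETS = {"EGDN": 5.0, "TNT": 2.0, "RDX": 1.0}

def screen(sample):
    """Return the markers whose measured vapour concentration
    meets or exceeds the programmed alert threshold."""
    return [marker for marker, threshold in TARGETS.items()
            if sample.get(marker, 0.0) >= threshold]

# Example reading (also invented): only the TNT marker trips its threshold.
print(screen({"TNT": 3.4, "RDX": 0.2}))  # -> ['TNT']
```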

If terrorism is responsible for the crash of the Egypt Air flight en route to Cairo from Paris’ Charles de Gaulle Airport, the incident further shows the need for heightened screening processes, Muir said. Concerns about air travel’s vulnerabilities to terrorism were further raised in October when a Russian plane flying out of Egypt crashed in what several officials believe was a terrorist bombing.

Both cases show the need for improved security measures in airports around the world, especially those related to early explosive detection, Muir said. CNN reported that the Egypt Air crash would likely generate even more attention to airport security while Egypt has already been investing in new security measures following the October attack.

“An NMT device can bring laboratory-level sensitivity to the airport screening procedure, adding another level of safety in places where it’s needed most,” Muir said. “By being able to detect a compound at concentrations as small as a single molecule, NMT can pinpoint a threat and provide security teams with the early warning they need.”

The NMT device’s sensitivity and accuracy can also help balance another concern with airport security: long waits. Already, the Transportation Security Administration is coming under fire this summer for extended airport security screening lines, reports USA Today.

“An NMT device can produce results from test samples in minutes, meaning screenings can proceed at a reasonable pace without jeopardizing security,” Muir said.

Ancon Technologies has working arrangements with military and security agencies in both the United Kingdom and the United States, Muir said, following a recent round of investments. The company is headquartered in Canterbury, Kent and has an office in the U.S. in Bloomington, Minnesota.

So this is a sensing device and I believe this particular type can also be described as an artificial nose.

International nano news bits: Belarus and Vietnam

I have two nano news bits, one concerning Belarus and the other concerning Vietnam.

Belarus

From a June 21, 2016 news item on Belarus News,

In the current five-year term Belarus will put efforts into developing robot technology, nano and biotechnologies, medical industry and a number of other branches of the national economy that can make innovative products, BelTA learned from Belarusian Economy Minister Vladimir Zinovsky on 21 June [2016].

The Minister underlined that the creation of new kinds of products and the development of conventional industries will produce their own results in the economy and will allow Belarus to secure a GDP growth rate as high as 112-115% in the current five-year term.

The last time Belarus was mentioned here was in a June 24, 2014 posting (scroll down about 25% of the way to see Belarus mentioned) about the European Union’s Graphene Flagship programme and new partners in the project. There was also a March 6, 2013 posting about Belarus and a nanotechnology partnership with Indonesia. (There are other mentions but those are the most recent.)

Vietnam

Vietnam has put into operation its first bio-nano production plant. From a June 21, 2016 news item on vietnamnet,

The Vietlife biological nano-plant was officially put into operation on June 20 [2016] at the North Thang Long Industrial Park in Hanoi.

It is the first plant producing biological nano-products developed entirely by Vietnamese scientists with a successful combination of traditional medicine, nanotechnology and modern drugs.

At the inauguration, Professor, Academician Nguyen Van Hieu, former president of Vietnam Academy of Science and Technology, who is the first to bring nanotechnology to Vietnam, reviewed the milestones of nanotechnology around the world and in the country.

In 2000, former US President Bill Clinton proposed American scientists research and develop nanotechnology for the first time.

Japan and the Republic of Korea then began developing the new technology.

Just two years later, in 2002, Vietnamese scientists also recommended research on nanotechnology and got the approval from the Party and State.

Academician Hieu said that Vietnam does not currently use nanotechnology to manufacture flat-screen TVs or smartphones. However, in Southeast Asia Vietnam has pioneered the research and successful applications of nanotechnology in production of probiotics combined with traditional medicine in health care, opening up a new potential science research in Vietnam.

Cam Ha JSC and scientists at the Vietnam Academy of Science and Technology have co-operated with a number of laboratories in the US, Australia and Japan to study and successfully develop a bio-nano production line in sync with diverse technologies.

Vietlife is the first plant to combine traditional medicine with nanotechnology and modern medicine. It consists of three technological lines: NANO MICELLE No. 1, 2 and 3; a NANO SOL-GEL chain; a packaging line, and a bio-nano research centre.

Nghia [Prof. Dr. Nguyen Duc Nghia, former deputy director of the Chemistry Institute under the Vietnam Academy of Science and Technology] said the factory has successfully produced some typical bio products, including Nanocurcumin NDN22+ from Vietnamese turmeric by nano micelle and Nano Sol-Gel methods. Preclinical experiment results indicate that at a concentration of about 40ppm, NDN22+ solution can kill 100% of rectum cancer tumors and prostate tumor cells within 72 hours. [emphasis mine]

In addition, it also manufactures other bio-nano products like Nanorutin from luscious trees and Nanolycopen from gac (Momordica cochinchinensis) oil.

Unfortunately, this news item does not include links to the research supporting the claims regarding nanocurcumin NDN22+. Hopefully, I will stumble across it soon.