Tag Archives: US

Evolution of literature as seen by a classicist, a biologist and a computer scientist

Studying intertextuality shows how books are related in various ways and are reorganized and recombined over time. Image courtesy of Elena Poiata.

I find the image more instructive when I read it from the bottom up. For those who prefer to read from the top down, there’s this April 5, 2017 University of Texas at Austin news release (also on EurekAlert),

A classicist, biologist and computer scientist all walk into a room — what comes next isn’t the punchline but a new method to analyze relationships among ancient Latin and Greek texts, developed in part by researchers from The University of Texas at Austin.

Their work, referred to as quantitative criticism, is highlighted in a study published in the Proceedings of the National Academy of Sciences. The paper identifies subtle literary patterns in order to map relationships between texts and more broadly to trace the cultural evolution of literature.

“As scholars of the humanities well know, literature is a system within which texts bear a multitude of relationships to one another. Understanding what is distinctive about one text entails knowing how it fits within that system,” said Pramit Chaudhuri, associate professor in the Department of Classics at UT Austin. “Our work seeks to harness the power of quantification and computation to describe those relationships at macro and micro levels not easily achieved by conventional reading alone.”

In the study, the researchers create literary profiles based on stylometric features, such as word usage, punctuation and sentence structure, and use techniques from machine learning to understand these complex datasets. Taking a computational approach enables the discovery of small but important characteristics that distinguish one work from another — a process that could require years using manual counting methods.
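The kind of stylometric profile described above can be sketched in a few lines of Python. This toy example is my own illustration, not the authors’ method: their feature set covers sound, rhythm and syntax, and their machine-learning models are far richer. Here a tiny feature vector (average sentence length, punctuation rate, type-token ratio) is built for two passages and compared with cosine similarity:

```python
import re
from math import sqrt

def stylometric_profile(text):
    """Build a tiny stylometric feature vector: average sentence length
    (in words), punctuation rate, and type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    punct = re.findall(r"[,;:.!?]", text)
    avg_sentence_len = len(words) / max(len(sentences), 1)
    punct_rate = len(punct) / max(len(words), 1)
    type_token = len(set(words)) / max(len(words), 1)
    return [avg_sentence_len, punct_rate, type_token]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Two invented passages with deliberately different styles
a = "The senate met. Caesar spoke; the senate listened. All agreed."
b = ("The senate met, and Caesar, rising slowly, spoke at great length "
     "about the state of the republic, while the senators listened.")
print(cosine(stylometric_profile(a), stylometric_profile(b)))
```

A real pipeline would use hundreds of such features (function-word frequencies, n-grams of syntactic tags) and a trained classifier rather than a single similarity score.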

“One aspect of the technical novelty of our work lies in the unusual types of literary features studied,” Chaudhuri said. “Much computational text analysis focuses on words, but there are many other important hallmarks of style, such as sound, rhythm and syntax.”

Another component of their work builds on Matthew Jockers’ literary “macroanalysis,” which uses machine learning to identify stylistic signatures of particular genres within a large body of English literature. Implementing related approaches, Chaudhuri and his colleagues have begun to trace the evolution of Latin prose style, providing new, quantitative evidence for the sweeping impact of writers such as Caesar and Livy on the subsequent development of Roman prose literature.

“There is a growing appreciation that culture evolves and that language can be studied as a cultural artifact, but there has been less research focused specifically on the cultural evolution of literature,” said the study’s lead author Joseph Dexter, a Ph.D. candidate in systems biology at Harvard University. “Working in the area of classics offers two advantages: the literary tradition is a long and influential one well served by digital resources, and classical scholarship maintains a strong interest in close linguistic study of literature.”

Unusually for a publication in a science journal, the paper contains several examples of the types of more speculative literary reading enabled by the quantitative methods introduced. The authors discuss the poetic use of rhyming sounds for emphasis and of particular vocabulary to evoke mood, among other literary features.

“Computation has long been employed for attribution and dating of literary works, problems that are unambiguous in scope and invite binary or numerical answers,” Dexter said. “The recent explosion of interest in the digital humanities, however, has led to the key insight that similar computational methods can be repurposed to address questions of literary significance and style, which are often more ambiguous and open ended. For our group, this humanist work of criticism is just as important as quantitative methods and data.”

The paper is the work of the Quantitative Criticism Lab (www.qcrit.org), co-directed by Chaudhuri and Dexter in collaboration with researchers from several other institutions. It is funded in part by a 2016 National Endowment for the Humanities grant and the Andrew W. Mellon Foundation New Directions Fellowship, awarded in 2016 to Chaudhuri to further his education in statistics and biology. Chaudhuri was one of 12 scholars selected for the award, which provides humanities researchers the opportunity to train outside of their own area of special interest with a larger goal of bridging the humanities and social sciences.

Here’s another link to the paper along with a citation,

Quantitative criticism of literary relationships by Joseph P. Dexter, Theodore Katz, Nilesh Tripuraneni, Tathagata Dasgupta, Ajay Kannan, James A. Brofos, Jorge A. Bonilla Lopez, Lea A. Schroeder, Adriana Casarez, Maxim Rabinovich, Ayelet Haimson Lushkov, and Pramit Chaudhuri. PNAS, published online before print April 3, 2017. DOI: 10.1073/pnas.1611910114

This paper appears to be open access.

Seaweed supercapacitors

I like munching on seaweed from time to time but it seems that seaweed may be more than just a foodstuff according to an April 5, 2017 news item on Nanowerk,

Seaweed, the edible algae with a long history in some Asian cuisines, and which has also become part of the Western foodie culture, could turn out to be an essential ingredient in another trend: the development of more sustainable ways to power our devices. Researchers have made a seaweed-derived material to help boost the performance of supercapacitors, lithium-ion batteries and fuel cells.

The team will present the work today [April 5, 2017] at the 253rd National Meeting & Exposition of the American Chemical Society (ACS). ACS, the world’s largest scientific society, is holding the meeting here through Thursday. It features more than 14,000 presentations on a wide range of science topics.

An April 5, 2017 American Chemical Society news release (also on EurekAlert), which originated the news item, gives more details about the presentation,

“Carbon-based materials are the most versatile materials used in the field of energy storage and conversion,” Dongjiang Yang, Ph.D., says. “We wanted to produce carbon-based materials via a really ‘green’ pathway. Given the renewability of seaweed, we chose seaweed extract as a precursor and template to synthesize hierarchical porous carbon materials.” He explains that the project opens a new way to use earth-abundant materials to develop future high-performance, multifunctional carbon nanomaterials for energy storage and catalysis on a large scale.

Traditional carbon materials, such as graphite, have been essential to creating the current energy landscape. But to make the leap to the next generation of lithium-ion batteries and other storage devices, an even better material is needed, preferably one that can be sustainably sourced, Yang says.

With these factors in mind, Yang, who is currently at Qingdao University (China), turned to the ocean. Seaweed is an abundant algae that grows easily in salt water. While Yang was at Griffith University in Australia, he worked with colleagues at Qingdao University and at Los Alamos National Laboratory in the U.S. to make porous carbon nanofibers from seaweed extract. Chelating, or binding, metal ions such as cobalt to the alginate molecules resulted in nanofibers with an “egg-box” structure, with alginate units enveloping the metal ions. This architecture is key to the material’s stability and controllable synthesis, Yang says.

Testing showed that the seaweed-derived material had a large reversible capacity of 625 milliampere hours per gram (mAh g-1), which is considerably more than the 372 mAh g-1 capacity of traditional graphite anodes for lithium-ion batteries. This could help double the range of electric cars if the cathode material is of equal quality. The egg-box fibers also performed as well as commercial platinum-based catalysts used in fuel-cell technologies and with much better long-term stability. They also showed high capacitance as a supercapacitor material at 197 farads per gram, which could be applied in zinc-air batteries and supercapacitors. The researchers published their initial results in ACS Central Science in 2015 and have since developed the materials further.
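A quick back-of-the-envelope check on those capacity figures (the numbers come from the release; the comparison is mine):

```python
# Reversible capacity of the seaweed-derived anode vs. conventional graphite,
# figures as reported in the ACS news release
seaweed_mAh_per_g = 625
graphite_mAh_per_g = 372

improvement = seaweed_mAh_per_g / graphite_mAh_per_g
print(f"Capacity ratio: {improvement:.2f}x")  # ~1.68x graphite
```

The anode ratio alone is about 1.68x, not 2x; whole-cell range gains depend on the cathode as well, which is why the release hedges with “if the cathode material is of equal quality.”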

For example, building on the same egg-box structure, the researchers say they have suppressed defects in seaweed-based, lithium-ion battery cathodes that can block the movement of lithium ions and hinder battery performance. And recently, they have developed an approach using red algae-derived carrageenan and iron to make a porous sulfur-doped carbon aerogel with an ultra-high surface area. The structure could be a good candidate to use in lithium-sulfur batteries and supercapacitors.

More work is needed to commercialize the seaweed-based materials, however. Yang says currently more than 20,000 tons of alginate precursor can be extracted from seaweed per year for industrial use. But much more will be required to scale up production.

Here’s an image representing the research,

Scientists have created porous ‘egg-box’ structured nanofibers using seaweed extract. Credit: American Chemical Society

I’m not sure that looks like an egg-box but I’ll take their word for it.

Mathematicians get illustrative

Frank A. Farris, an associate professor of mathematics at Santa Clara University (US), writes about the latest in mathematicians and data visualization in an April 4, 2017 essay on The Conversation (Note: Links have been removed),

Today, digital tools like 3-D printing, animation and virtual reality are more affordable than ever, allowing mathematicians to investigate and illustrate their work at the same time. Instead of drawing a complicated surface on a chalkboard, we can now hand students a physical model to feel or invite them to fly over it in virtual reality.

Last year, a workshop called “Illustrating Mathematics” at the Institute for Computational and Experimental Research in Mathematics (ICERM) brought together an eclectic group of mathematicians and digital art practitioners to celebrate what seems to be a golden age of mathematical visualization. Of course, visualization has been central to mathematics since Pythagoras, but this seems to be the first time it had a workshop of its own.

Visualization plays a growing role in mathematical research. According to John Sullivan at the Technical University of Berlin, mathematical thinking styles can be roughly categorized into three groups: “the philosopher,” who thinks purely in abstract concepts; “the analyst,” who thinks in formulas; and “the geometer,” who thinks in pictures.

Mathematical research is stimulated by collaboration between all three types of thinkers. Many practitioners believe teaching should be calibrated to connect with different thinking styles.

Borromean Rings, the logo of the International Mathematical Union. John Sullivan

Sullivan’s own work has benefited from images. He studies geometric knot theory, which involves finding “best” configurations. For example, consider his Borromean rings, which won the logo contest of the International Mathematical Union several years ago. The rings are linked together, but if one of them is cut, the others fall apart, which makes it a nice symbol of unity.

Apparently this new ability to think mathematics visually has influenced mathematicians in some unexpected ways,

Take mathematician Fabienne Serrière, who raised US$124,306 through Kickstarter in 2015 to buy an industrial knitting machine. Her dream was to make custom-knit scarves that demonstrate cellular automata, mathematical models of cells on a grid. To realize her algorithmic design instructions, Serrière hacked the code that controls the machine. She now works full-time on custom textiles from a Seattle studio.
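Serrière’s scarves are driven by cellular automata, which are easy to sketch. The following is a minimal elementary cellular automaton of my own (rule 110 chosen for illustration; her actual rules and machine code are her own) that “knits” a pattern row by row:

```python
def step(row, rule=110):
    """Advance one row of an elementary cellular automaton.
    Each cell's next state is looked up from the rule number's bits,
    indexed by the (left, self, right) neighbourhood; the row wraps around."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A 'scarf' pattern: start from a single stitch and knit 8 rows
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Each printed row is one course of knitting; a machine like Serrière’s maps the 0s and 1s to two yarn colours.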

In this sculpture by Edmund Harriss, the drill traces are programmed to go perpendicular to the growth rings of the tree. This makes the finished sculpture a depiction of a concept mathematicians know as ‘paths of steepest descent.’ Edmund Harriss, Author provided

Edmund Harriss of the University of Arkansas hacked an architectural drilling machine, which he now uses to make mathematical sculptures from wood. The control process involves some deep ideas from differential geometry. Since his ideas are basically about controlling a robot arm, they have wide application beyond art. According to his website, Harriss is “driven by a passion to communicate the beauty and utility of mathematical thinking.”

Mathematical algorithms power the products made by Nervous System, a studio in Massachusetts that was founded in 2007 by Jessica Rosenkrantz, a biologist and architect, and Jess Louis-Rosenberg, a mathematician. Many of their designs, for things like custom jewelry and lampshades, look like naturally occurring structures from biology or geology.

Farris’ essay is a fascinating look at mathematics and data visualization.

Recycle electronic waste by crushing it into nanodust

Given the issues with e-waste this work seems quite exciting. From a March 21, 2017 Rice University news release (also on EurekAlert), Note: Links have been removed,

Researchers at Rice University and the Indian Institute of Science have an idea to simplify electronic waste recycling: Crush it into nanodust.

Specifically, they want to make the particles so small that separating different components is relatively simple compared with processes used to recycle electronic junk now.

Chandra Sekhar Tiwary, a postdoctoral researcher at Rice and a researcher at the Indian Institute of Science in Bangalore, uses a low-temperature cryo-mill to pulverize electronic waste – primarily the chips, other electronic components and polymers that make up printed circuit boards (PCBs) – into particles so small that they do not contaminate each other.

Then they can be sorted and reused, he said.

Circuit boards from electronics, like computer mice, can be crushed into nanodust by a cryo-mill, according to researchers at Rice and the Indian Institute of Science. The dust can then be easily separated into its component elements for recycling. Courtesy of the Ajayan Research Group

The process is the subject of a Materials Today paper by Tiwary, Rice materials scientist Pulickel Ajayan and Indian Institute professors Kamanio Chattopadhyay and D.P. Mahapatra. 

The researchers intend it to replace current processes that involve dumping outdated electronics into landfills, or burning or treating them with chemicals to recover valuable metals and alloys. None are particularly friendly to the environment, Tiwary said.

“In every case, the cycle is one way, and burning or using chemicals takes a lot of energy while still leaving waste,” he said. “We propose a system that breaks all of the components – metals, oxides and polymers – into homogenous powders and makes them easy to reuse.”

The researchers estimate that so-called e-waste will grow by 33 percent over the next four years, and by 2030 will weigh more than a billion tons. Between 80 and 85 percent of often-toxic e-waste ends up in an incinerator or a landfill, Tiwary said; according to the Environmental Protection Agency, it is the fastest-growing waste stream in the United States.

The answer may be scaled-up versions of a cryo-mill designed by the Indian team that, rather than heating them, keeps materials at ultra-low temperatures during crushing.

Cold materials are more brittle and easier to pulverize, Tiwary said. “We take advantage of the physics. When you heat things, they are more likely to combine: You can put metals into polymer, oxides into polymers. That’s what high-temperature processing is for, and it makes mixing really easy.

A transparent piece of epoxy, left, compared to epoxy with e-waste reinforcement at right. A cryo-milling process developed at Rice University and the Indian Institute of Science simplifies the process of separating and recycling electronic waste. Courtesy of the Ajayan Research Group

“But in low temperatures, they don’t like to mix. The materials’ basic properties – their elastic modulus, thermal conductivity and coefficient of thermal expansion – all change. They allow everything to separate really well,” he said.

The test subjects in this case were computer mice – or at least their PCB innards. The cryo-mill contained argon gas and a single tool-grade steel ball. A steady stream of liquid nitrogen kept the container at 154 kelvins (minus 182 degrees Fahrenheit).
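For reference, the kelvin-to-Fahrenheit conversion behind that figure:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# The cryo-mill runs at 154 K, i.e. roughly -182 °F (about -119 °C)
print(round(kelvin_to_fahrenheit(154)))  # -182
```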

When shaken, the ball smashes the polymer first, then the metals and then the oxides just long enough to separate the materials into a powder, with particles between 20 and 100 nanometers wide. That can take up to three hours, after which the particles are bathed in water to separate them.

“Then they can be reused,” he said. “Nothing is wasted.”

Here’s a link to and a citation for the paper,

Electronic waste recycling via cryo-milling and nanoparticle beneficiation by C.S. Tiwary, S. Kishore, R. Vasireddi, D.R. Mahapatra, P.M. Ajayan, K. Chattopadhyay. Materials Today, available online 20 March 2017. http://dx.doi.org/10.1016/j.mattod.2017.01.015

This paper is behind a paywall.

Nanocar Race winners: The US-Austrian team

Sadly, I didn’t stumble across the news about the US-Austrian team sooner; it wasn’t published until a May 8, 2017 news item on Nanowerk,

Rice University chemist James Tour and his international team have won the first Nanocar Race.

The Rice and University of Graz team finished first in the inaugural Nanocar Race in Toulouse, France, April 28, completing a 150-nanometer course — roughly a thousandth of the width of a human hair — in about 1½ hours. (The race was declared over after 30 hours.)

Interestingly, the Rice University news release announcing the win was issued prior to the ‘winning’ Swiss team’s, and it explains why the Swiss team was declared a co-winner despite taking longer to complete the race (6.5 hours as compared to 1.5 hours [see my May 9, 2017 posting: Nanocar Race winners!, where the Swiss appear to claim they raced 38 hours]). From an April 28, 2017 Rice University news release,

The team led by Tour and Graz physicist Leonhard Grill deployed a two-wheeled, single-molecule vehicle with adamantane tires on its home track in Graz, Austria, achieving an average speed of 95 nanometers per hour. Tour said the speed ranged from more than 300 to less than 1 nanometer per hour, depending upon the location along the course.
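A quick sanity check of the reported average (the figures come from the release; the arithmetic is mine):

```python
# Figures reported in the Rice news release
course_nm = 150   # length of the Graz silver course, in nanometers
elapsed_h = 1.5   # "about 1.5 hours"

print(course_nm / elapsed_h)  # 100.0 nm/h
```

The naive 100 nm/h is consistent with the reported 95 nm/h average once the slightly longer actual elapsed time is accounted for.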

The Swiss Nano Dragster team finished next, five hours later. But organizers at the French National Center for Scientific Research declared them a co-winner of first place as they were tops among teams that raced on a gold track.

Because the scanning tunneling microscope track in Toulouse could only accommodate four cars, two of the six competing international teams — Ohio University and Rice-Graz — ran their vehicles on their home tracks (Ohio on gold) and operated them remotely from the Toulouse headquarters.

The Dipolar Racer designed at Rice.

Five cars were driven across gold surfaces in a vacuum near absolute zero by electrons from the tips of microscopes in Toulouse and Ohio, but the Rice-Graz team got permission to use a silver track at Graz. “Gold was the surface of choice, so we tested it there, but it turns out it’s too fast,” Grill said. “It’s so fast, we can’t even image it.”

The team got permission from organizers in advance of the race to use the slower silver surface, but with an additional handicap. “We had to go 150 nanometers around two pylons instead of 100 nanometers since our car was so much faster,” Tour said.

Tour said the race directors used the Paris-Rouen auto race in 1894, considered by some to be the world’s first auto race, as precedent for their decision April 29. “I am told there will be two first prizes regardless of the time difference and handicap,” he said.

The Rice-Graz car, called the Dipolar Racer, was designed by Tour and former Rice graduate student Victor Garcia-Lopez and raced by the Graz team, which included postdoctoral researcher and pilot Grant Simpson and undergraduate and co-pilot Philipp Petermeier.

The silver track under the microscope. Two Rice nanocars are in the blue circle at top. The lower car was the first to run the race, finishing in 1½ hours. The top car was put through the course later, finishing in 2 hours.

The purpose of the competition, according to organizers, was to push the science of how single molecules can be manipulated as they interact with surfaces.

“We chose our fastest wheels and our strongest dipole so that it could be pulled by the electric field more efficiently,” said Tour, whose lab has been designing nanocars since 1998. “We gave it two (side-by-side) wheels to minimize interaction with the surface and to lower the molecular weight.

“We built in every possible design parameter that we could to optimize speed,” he said.

While details of the Dipolar Racer remained a closely held secret until race time, Tour and Grill said they will be revealed in a forthcoming paper.

“This is the beginning of our ability to demonstrate nanoscale manipulation with control around obstacles and speed and will pave the way for much faster paces and eventually for carrying cargo and doing bottom-up assembly.

“It’s a great day for nanotechnology,” Tour said. “And a great day for Rice University and the University of Graz.”

Clearly all the winners were very excited. Still, there’s a little shade being thrown (one of the scientists is just a tiny bit miffed), as you can see in James Tour’s quote, given after noting that the US-Austrian racer was too fast for the gold surface, so the team used the slower silver surface and was given another handicap. As per the Rice University news release: “I am told [emphasis mine] there will be two first prizes regardless of the time difference and handicap,” he said. Of course, the Swiss team’s news release didn’t mention the US-Austrian team’s speedier finish, nor did it name the US-Austrian racer (the Dipolar Racer). As I noted before, scientists are people too.

Nanocar Race winners!

In fact, there was a tie although it seems the Swiss winners were a little more excited. A May 1, 2017 news item on swissinfo.ch provides fascinating detail,

“Swiss Nano Dragster”, driven by scientists from Basel, has won the first international car race involving molecular machines. The race involved four nano cars zipping round a pure gold racetrack measuring 100 nanometres – or one ten-thousandth of a millimetre.

The two Swiss pilots, Rémy Pawlak and Tobias Meier from the Swiss Nanoscience Institute and the Department of Physics at the University of Basel, had to reach the chequered flag – negotiating two curves en route – within 38 hours. [emphasis mine*]

The winning drivers, who actually shared first place with a US-Austrian team, were not sitting behind a steering wheel but in front of a computer. They used this to propel their single-molecule vehicle with a small electric shock from a scanning tunnelling microscope.

During such a race, a tunnelling current flows between the tip of the microscope and the molecule, with the size of the current depending on the distance between molecule and tip. If the current is high enough, the molecule starts to move and can be steered over the racetrack, a bit like a hovercraft.
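The distance dependence described here is exponential, which is what makes STM steering so sensitive. A schematic sketch of that relationship (the decay constant is a typical order of magnitude I’ve assumed for illustration, not a figure from the article):

```python
from math import exp

def tunneling_current(d_nm, i0=1.0, kappa_per_nm=10.0):
    """Illustrative STM tunneling current: decays exponentially with
    tip-molecule distance d. kappa ~ 10 per nm is an assumed typical
    order of magnitude for a vacuum gap, not a value from the article."""
    return i0 * exp(-2 * kappa_per_nm * d_nm)

# Moving the tip 0.1 nm closer raises the current by a factor of e^2 (~7.4x)
ratio = tunneling_current(0.4) / tunneling_current(0.5)
print(round(ratio, 1))
```

That steep sensitivity is why a small voltage pulse at the right spot can set a single molecule moving while its neighbours stay put.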

….

The race track was maintained at a very low temperature (-268 degrees Celsius) so that the molecules didn’t move without the current.

What’s more, any nudging of the molecule by the microscope tip would have led to disqualification.

Miniature motors

The race, held in Toulouse, France, and organised by the National Centre for Scientific Research (CNRS), was originally going to be held in October 2016, but problems with some cars resulted in a slight delay. In the end, organisers selected four of nine applicants since there were only four racetracks.

The cars measured between one and three nanometres – about 30,000 times smaller than a human hair. The Swiss Nano Dragster is, in technical language, a 4′-(4-Tolyl)-2,2′:6′,2″-terpyridine molecule.

The Swiss and US-Austrian teams outraced rivals from the US and Germany.

The race is not just a bit of fun for scientists. The researchers hope to gain insights into how molecules move.

I believe this Basel University .gif is from the race,

*Emphasis added on May 9, 2017 at 12:26 pm PT. See my May 9, 2017 posting: Nanocar Race winners: The US-Austrian team for the other half of this story.

Using a sponge to remove mercury from lake water

I’ve heard of Lake Como in Italy but Como Lake in Minnesota is a new one for me. The Minnesota lake is featured in a March 22, 2017 news item about water and sponges on phys.org,

Mercury is very toxic and can cause long-term health damage, but removing it from water is challenging. To address this growing problem, University of Minnesota College of Food, Agricultural and Natural Sciences (CFANS) Professor Abdennour Abbas and his lab team created a sponge that can adsorb mercury from a polluted water source within seconds. Thanks to the application of nanotechnology, the team developed a sponge with outstanding mercury adsorption properties: mercury contamination can be removed from tap, lake and industrial wastewater to below detectable limits in less than 5 seconds (or around 5 minutes for industrial wastewater). The sponge converts the contamination into a non-toxic complex so it can be disposed of in a landfill after use. The sponge also kills bacterial and fungal microbes.
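The news item gives removal times but no rate law. As a rough illustration only (first-order kinetics is my assumption, not the paper’s measured model), one can estimate the rate constant implied by near-complete removal in 5 seconds:

```python
from math import log

def time_to_fraction(k_per_s, fraction_remaining):
    """Time for first-order removal C(t) = C0 * exp(-k t) to reach a given
    remaining fraction. First-order kinetics is an illustrative assumption,
    not the paper's measured rate law."""
    return -log(fraction_remaining) / k_per_s

# Rate constant needed to leave only 0.1% of the initial mercury after 5 s
k = -log(0.001) / 5
print(round(k, 2))  # ~1.38 per second
```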

Think of it this way: If Como Lake in St. Paul was contaminated with mercury at the EPA limit, the sponge needed to remove all of the mercury would be the size of a basketball.

A March 16, 2017 University of Minnesota news release, which originated the news item, explains why this discovery is important for water supplies in the state of Minnesota,

This is an important advancement for the state of Minnesota, as more than two thirds of the waters on Minnesota’s 2004 Impaired Waters List are impaired because of mercury contamination that ranges from 0.27 to 12.43 ng/L (the EPA limit is 2 ng/L). Mercury contamination of lake waters results in mercury accumulation in fish, leading the Minnesota Department of Health to establish fish consumption guidelines. A number of fish species store-bought or caught in Minnesota lakes are not advised for consumption more than once a week or even once a month. In Minnesota’s North Shore, 10 percent of tested newborns had mercury concentrations above the EPA reference dose for methylmercury (the form of mercury found in fish). This means that some pregnant women in the Lake Superior region, and in Minnesota, have mercury exposures that need to be reduced. In addition, a reduced deposition of mercury is projected to have economic benefits reflected by an annual state willingness-to-pay of $212 million in Minnesota alone.

According to the US EPA, cutting mercury emissions to the latest established effluent limit standards would result in 130,000 fewer asthma attacks, 4,700 fewer heart attacks, and 11,000 fewer premature deaths each year. That adds up to at least $37 billion to $90 billion in monetized benefits annually.

In addition to improving air and water quality, aquatic life and public health, the new technology could inspire new regulations. Technology shapes regulations, which in turn determine the value of the market. The 2015 EPA Mercury and Air Toxics Standards regulation was estimated to cost the industry around $9.6 billion annually in 2020. The new U of M technology has the potential to bring this cost down and make it easier for industry to meet regulatory requirements.

Research by Abbas and his team was funded by the MnDRIVE Global Food Venture, MnDRIVE Environment, and USDA-NIFA. They currently have three patents on this technology. To learn more, visit www.abbaslab.com.

Here’s a link to and a citation for the paper,

A Nanoselenium Sponge for Instantaneous Mercury Removal to Undetectable Levels by Snober Ahmed, John Brockgreitens, Ke Xu, and Abdennour Abbas. Advanced Functional Materials DOI: 10.1002/adfm.201606572 Version of Record online: 6 MAR 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Predicting how a memristor functions

An April 3, 2017 news item on Nanowerk announces a new memristor development (Note: A link has been removed),

Researchers from the CNRS [Centre national de la recherche scientifique; France], Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have created an artificial synapse capable of learning autonomously. They were also able to model the device, which is essential for developing more complex circuits. The research was published in Nature Communications (“Learning through ferroelectric domain dynamics in solid-state synapses”).

An April 3, 2017 CNRS press release, which originated the news item, provides a nice introduction to the memristor concept before providing a few more details about this latest work (Note: A link has been removed),

One of the goals of biomimetics is to take inspiration from the functioning of the brain [also known as neuromorphic engineering or neuromorphic computing] in order to design increasingly intelligent machines. This principle is already at work in information technology, in the form of the algorithms used for completing certain tasks, such as image recognition; this, for instance, is what Facebook uses to identify photos. However, the procedure consumes a lot of energy. Vincent Garcia (Unité mixte de physique CNRS/Thales) and his colleagues have just taken a step forward in this area by creating directly on a chip an artificial synapse that is capable of learning. They have also developed a physical model that explains this learning capacity. This discovery opens the way to creating a network of synapses and hence intelligent systems requiring less time and energy.

Our brain’s learning process is linked to our synapses, which serve as connections between our neurons. The more the synapse is stimulated, the more the connection is reinforced and learning improved. Researchers took inspiration from this mechanism to design an artificial synapse, called a memristor. This electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, and whose resistance can be tuned using voltage pulses similar to those in neurons. If the resistance is low the synaptic connection will be strong, and if the resistance is high the connection will be weak. This capacity to adapt its resistance enables the synapse to learn.
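The pulse-tunable resistance described here can be sketched as a toy model (a schematic of my own, not the physical model the researchers derived):

```python
class Memristor:
    """Toy memristive synapse: conductance is nudged up or down by voltage
    pulses and clipped to a physical range. This is a schematic illustration,
    not the ferroelectric-domain model derived in the paper."""

    def __init__(self, g=0.5, g_min=0.05, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def pulse(self, polarity):
        """polarity +1 potentiates (strengthens), -1 depresses the synapse;
        the update shrinks as the conductance approaches its bounds."""
        self.g += polarity * self.rate * (self.g_max - self.g if polarity > 0
                                          else self.g - self.g_min)
        self.g = min(max(self.g, self.g_min), self.g_max)
        return self.g

syn = Memristor()
for _ in range(5):
    syn.pulse(+1)   # repeated stimulation strengthens the connection
print(round(syn.g, 3))
```

High conductance (low resistance) then plays the role of a strong synaptic connection, and repeated pulses of one polarity implement the “the more it is stimulated, the stronger it gets” learning rule described above.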

Although research focusing on these artificial synapses is central to the concerns of many laboratories, the functioning of these devices remained largely unknown. The researchers have succeeded, for the first time, in developing a physical model able to predict how they function. This understanding of the process will make it possible to create more complex systems, such as a series of artificial neurons interconnected by these memristors.

As part of the ULPEC H2020 European project, this discovery will be used for real-time shape recognition using an innovative camera: the pixels remain inactive, except when they see a change in the angle of vision. The data processing procedure will require less energy, and will take less time to detect the selected objects. The research involved teams from the CNRS/Thales physics joint research unit, the Laboratoire de l’intégration du matériau au système (CNRS/Université de Bordeaux/Bordeaux INP), the University of Arkansas (US), the Centre de nanosciences et nanotechnologies (CNRS/Université Paris-Sud), the Université d’Evry, and Thales.

 

© Sören Boyn / CNRS/Thales physics joint research unit.

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.


Here’s a link to and a citation for the paper,

Learning through ferroelectric domain dynamics in solid-state synapses by Sören Boyn, Julie Grollier, Gwendal Lecerf, Bin Xu, Nicolas Locatelli, Stéphane Fusil, Stéphanie Girod, Cécile Carrétéro, Karin Garcia, Stéphane Xavier, Jean Tomas, Laurent Bellaiche, Manuel Bibes, Agnès Barthélémy, Sylvain Saïghi, & Vincent Garcia. Nature Communications 8, Article number: 14736 (2017) doi:10.1038/ncomms14736 Published online: 03 April 2017

This paper is open access.

Thales or Thales Group is a French company, from its Wikipedia entry (Note: Links have been removed),

Thales Group (French: [talɛs]) is a French multinational company that designs and builds electrical systems and provides services for the aerospace, defence, transportation and security markets. Its headquarters are in La Défense[2] (the business district of Paris), and its stock is listed on the Euronext Paris.

The company changed its name to Thales (from the Greek philosopher Thales,[3] pronounced [talɛs], reflecting its pronunciation in French) from Thomson-CSF in December 2000 shortly after the £1.3 billion acquisition of Racal Electronics plc, a UK defence electronics group. It is partially state-owned by the French government,[4] and has operations in more than 56 countries. It has 64,000 employees and generated €14.9 billion in revenues in 2016. The Group is ranked as the 475th largest company in the world by the Fortune Global 500.[5] It is also the 10th largest defence contractor in the world[6] and 55% of its total sales are military sales.[4]

The ULPEC (Ultra-Low Power Event-Based Camera) H2020 [Horizon 2020-funded] European project can be found here,

The long term goal of ULPEC is to develop advanced vision applications with ultra-low power requirements and ultra-low latency. The output of the ULPEC project is a demonstrator connecting a neuromorphic event-based camera to a high speed ultra-low power consumption asynchronous visual data processing system (Spiking Neural Network with memristive synapses). Although the ULPEC device aims to reach TRL 4, it is a highly application-oriented project: prospective use cases will b…

Finally, for anyone curious about Thales, the philosopher (from his Wikipedia entry), Note: Links have been removed,

Thales of Miletus (/ˈθeɪliːz/; Greek: Θαλῆς (ὁ Μῑλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek/Phoenician philosopher, mathematician and astronomer from Miletus in Asia Minor (present-day Milet in Turkey). He was one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition,[1][2] and he is otherwise historically recognized as the first individual in Western civilization known to have entertained and engaged in scientific philosophy.[3][4]

Off to the Nanocar Race: April 28, 2017

The Nanocar Race (which at one point was the NanoCar Race) took place on April 28-29, 2017 in Toulouse, France. Presumably the fall 2016 race did not take place (as I had reported in my May 26, 2016 posting). A March 23, 2017 news item on ScienceDaily gave the latest news about the race,

Nanocars will compete for the first time ever during an international molecule-car race on April 28-29, 2017 in Toulouse (south-western France). The vehicles, which consist of a few hundred atoms, will be powered by minute electrical pulses during the 36 hours of the race, in which they must navigate a racecourse made of gold atoms and measuring a maximum of 100 nanometers in length. They will square off beneath the four tips of a unique microscope located at the CNRS’s Centre d’élaboration de matériaux et d’études structurales (CEMES) in Toulouse. The race, which was organized by the CNRS, is first and foremost a scientific and technological challenge, and will be broadcast live on the YouTube Nanocar Race channel. Beyond the competition, the overarching objective is to advance research in the observation and control of molecule-machines.

More than just a competition, the Nanocar Race is an international scientific experiment that will be conducted in real time, with the aim of testing the performance of molecule-machines and the scientific instruments used to control them. The years ahead will probably see the use of such molecular machinery — activated individually or in synchronized fashion — in the manufacture of common machines: atom-by-atom construction of electronic circuits, atom-by-atom deconstruction of industrial waste, capture of energy… The Nanocar Race is therefore a unique opportunity for researchers to implement cutting-edge techniques for the simultaneous observation and independent maneuvering of such nano-machines.

The experiment began in 2013 as part of an overview of nano-machine research for a scientific journal, when the idea for a car race took shape in the minds of CNRS senior researcher Christian Joachim (now the director of the race) and Gwénaël Rapenne, a Professor of chemistry at Université Toulouse III — Paul Sabatier. …

An April 19, 2017 article by Davide Castelvecchi for Nature (magazine) provided more detail about the race (Note: Links have been removed),

The term nanocar is actually a misnomer, because the molecules involved in this race have no motors. (Future races may incorporate them, Joachim says.) And it is not clear whether the molecules will even roll along like wagons: a few designs might, but many lack axles and wheels. Drivers will use electrons from the tip of a scanning tunnelling microscope (STM) to help jolt their molecules along, typically by just 0.3 nanometres each time — making 100 nanometres “a pretty long distance”, notes physicist Leonhard Grill of the University of Graz, Austria, who co-leads a US–Austrian team in the race.

Contestants are not allowed to directly push on their molecules with the STM tip. Some teams have designed their molecules so that the incoming electrons raise their energy states, causing vibrations or changes to molecular structures that jolt the racers along. Others expect electrostatic repulsion from the electrons to be the main driving force. Waka Nakanishi, an organic chemist at the National Institute for Materials Science in Tsukuba, Japan, has designed a nanocar with two sets of ‘flaps’ that are intended to flutter like butterfly wings when the molecule is energized by the STM tip (see ‘Molecular race’). Part of the reason for entering the race, she says, was to gain access to the Toulouse lab’s state-of-the-art STM to better understand the molecule’s behaviour.

Eric Masson, a chemist at Ohio University in Athens, hopes to find out whether the ‘wheels’ (pumpkin-shaped groups of atoms) of his team’s car will roll on the surface or simply slide. “We want to better understand the nature of the interaction between the molecule and the surface,” says Masson.

Adapted from www.nanocar-race.cnrs.fr

Simply watching the race progress is half the battle. After each attempted jolt, teams will take three minutes to scan their race track with the STM, and after each hour they will produce a short animation that will immediately be posted online. That way, says Joachim, everyone will be able to see the race streamed almost live.

Nanoscale races

The Toulouse laboratory has an unusual STM with four scanning tips — most have only one — that will allow four teams to race at the same time, each on a different section of the gold surface. Six teams will compete this week to qualify for one of the four spots; the final race will begin on 28 April at 11 a.m. local time. The competitors will face many obstacles during the contest. Individual molecules in the race will often be lost or get stuck, and the trickiest part may be to negotiate the two turns in the track, Joachim says. He thinks the racers may require multiple restarts to cover the distance.
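As an aside, the arithmetic behind Grill’s “pretty long distance” remark is easy to check: at roughly 0.3 nanometres per jolt, the 100-nanometre course requires at least a few hundred successful jolts, and that assumes every jolt actually moves the car forward, which the article suggests is far from guaranteed.

```python
import math

# Back-of-envelope estimate: minimum number of 0.3 nm jolts needed to
# cover the 100 nm course, assuming every jolt advances the molecule-car.
track_length_nm = 100.0
step_nm = 0.3
min_jolts = math.ceil(track_length_nm / step_nm)
print(min_jolts)  # 334
```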

For anyone who wants more information, go to the Nanocar Race website. There is also a highlights video,

Published on Apr 29, 2017

The best moments of the first-ever international race of molecule-cars.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of deep neural networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
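The training loop the release describes (compare actual outputs to expected ones, correct the predictive error, repeat) can be illustrated with a deliberately tiny sketch: a single sigmoid neuron, not a deep network, learning the logical AND function by gradient descent. Every name and constant here is invented for illustration.

```python
import math
import random

# Minimal sketch of the learning loop: compare actual outputs to
# expected ones, then correct the predictive error through repetition
# and optimization. One sigmoid neuron learning logical AND -- the
# core mechanism of neural-network training, stripped of the layers.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and expected outputs for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate

def loss():
    return sum((sigmoid(w[0]*x1 + w[1]*x2 + b) - y) ** 2
               for (x1, x2), y in data)

loss_before = loss()
for _ in range(2000):                     # repetition...
    for (x1, x2), y in data:
        out = sigmoid(w[0]*x1 + w[1]*x2 + b)
        err = out - y                     # the predictive error
        grad = err * out * (1 - out)      # ...and optimization
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad
loss_after = loss()
```

After training, the loss has fallen and the neuron's output is high only for the (1, 1) input. A deep network stacks many such units in layers, which is where the increasing abstraction described above comes from.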

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his or her work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s) 2017

This paper is open access.