Tag Archives: University of Maryland

Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”
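As an aside for anyone who likes to tinker, here’s a minimal numerical sketch of the behaviour described above, where a device’s resistance drifts with the charge that has recently flowed through it. It is a generic linear-drift toy model of the sort often used to introduce memristors, not the NIST team’s titanium dioxide device physics, and every parameter value is invented purely for illustration.

```python
# Toy memristor sketch: resistance drifts with the charge that has flowed
# through the device (a crude linear-drift model, NOT the NIST team's
# device physics). All parameter values below are invented for illustration.

R_ON, R_OFF = 100.0, 16_000.0   # bounding resistances (ohms), assumed
state = 0.5                     # internal state variable, 0..1
k = 1e4                         # drift rate per coulomb, assumed
dt = 1e-3                       # time step (s)

def step(voltage):
    """Apply a voltage for one time step; return the current."""
    global state
    resistance = R_ON * state + R_OFF * (1.0 - state)
    current = voltage / resistance
    # The state (and hence the resistance) shifts with the charge that
    # flowed, which is the 'memory' in a memristor.
    state = min(1.0, max(0.0, state + k * current * dt))
    return current

# Drive with a positive voltage, then switch the drive off: the resistance
# keeps its new value, i.e. the device remembers the current that flowed.
for v in [1.0] * 50 + [0.0] * 10:
    step(v)
print("final state:", round(state, 3))
```

Drive it with a different pulse train and the final resistance changes accordingly, which is the essence of the ‘short-term memory’ analogy in the news release.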

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
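To make the allergen tracking (item 3) and preference matching (item 4) a little more concrete, here’s a hypothetical sketch of the kind of filtering and ranking such a system might do. Carnival’s actual back end is proprietary, so every field name, drink, and guest profile below is invented.

```python
# Hypothetical sketch of allergen-aware drink suggestions; the Ocean
# Medallion back end is proprietary, so all names and fields are invented.

drinks = [
    {"name": "pina colada", "allergens": {"coconut"}, "tags": {"sweet", "rum"}},
    {"name": "old fashioned", "allergens": set(), "tags": {"whiskey", "bitter"}},
    {"name": "amaretto sour", "allergens": {"tree nut"}, "tags": {"sweet"}},
]

guest = {"allergies": {"tree nut"}, "likes": {"sweet", "rum"}}

def suggestions(guest, drinks):
    # Drop anything containing one of the guest's listed allergens.
    safe = [d for d in drinks if not (d["allergens"] & guest["allergies"])]
    # Rank the remaining drinks by how many of the guest's likes they match.
    return sorted(safe, key=lambda d: len(d["tags"] & guest["likes"]), reverse=True)

for d in suggestions(guest, drinks):
    print(d["name"])
```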

In Kuang’s Oct. 19, 2017 article he notes that the cruise line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a user’s newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”
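For the technically curious, here’s a toy sketch of the ‘marry the humans and the algorithms’ idea Tamar Charney describes, blending an editor-assigned priority with a personal interest score. The field names and weights are invented; none of this reflects NPR One’s, Apple’s, or Google’s actual code.

```python
# Toy ranking sketch of the 'editors plus algorithm' blend described above.
# Field names, weights, and scores are invented purely for illustration.

stories = [
    {"title": "Election results", "editorial_priority": 1.0, "topics": {"politics"}},
    {"title": "New exoplanet found", "editorial_priority": 0.4, "topics": {"science", "space"}},
    {"title": "Local bake sale", "editorial_priority": 0.1, "topics": {"community"}},
]

user_interests = {"science": 0.9, "space": 0.8, "politics": 0.2}

def score(story, interests, editorial_weight=0.5):
    # Personal component: how strongly the story's topics match the user.
    personal = sum(interests.get(t, 0.0) for t in story["topics"])
    # Blend an editor-assigned priority with the personalized score.
    return editorial_weight * story["editorial_priority"] + (1 - editorial_weight) * personal

for s in sorted(stories, key=lambda s: score(s, user_interests), reverse=True):
    print(s["title"])
```

Shift the editorial_weight toward 1.0 and everyone sees the editors’ picks; shift it toward 0.0 and the filter bubble LaFrance describes starts to form.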

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, data firms have married electoral data with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of control has been added to your life, and it is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how brain networks form structures in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.
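For readers who want to see what a ‘visible plus hidden neuron’ representation looks like in practice, here’s a tiny sketch of a restricted-Boltzmann-machine-style amplitude, the kind of network the JQI release is describing, where summing out (‘forgetting’) the hidden neurons collapses into a simple product. The weights here are random and real-valued purely for illustration; the actual ansatz generally uses complex-valued parameters tuned to a specific physical problem.

```python
# Sketch of the 'visible plus hidden neuron' representation described above:
# a restricted-Boltzmann-machine-style wavefunction where summing over the
# hidden units ("forgetting" them) gives a compact amplitude for each spin
# configuration. Weights are random here, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3                              # toy sizes
a = rng.normal(scale=0.1, size=n_visible)               # visible biases
b = rng.normal(scale=0.1, size=n_hidden)                # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))   # couplings

def amplitude(spins):
    """Unnormalized amplitude for a spin configuration (entries +/-1).
    The sum over hidden units factorizes into a product of 2*cosh terms."""
    spins = np.asarray(spins)
    return np.exp(a @ spins) * np.prod(2 * np.cosh(b + W @ spins))

print(amplitude([1, -1, 1, 1, -1, -1]))
```

The point of the paper is how much entanglement such a compact parameterization can carry, not the bookkeeping above, but the bookkeeping shows why the description stays small: the cost grows with the number of connections, not with the exponentially large number of spin configurations.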

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006 although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

###

About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.
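As a small aside, the ‘clique’ idea in the press release can be made concrete in a few lines of code: a group of k neurons that are all connected to one another is treated as a geometric object of dimension k-1. The toy below counts cliques in a tiny undirected graph by brute force; the Blue Brain analysis itself works with directed cliques in vastly larger networks and uses dedicated algebraic-topology software.

```python
# Toy sketch of the clique -> simplex idea: a clique of k all-to-all
# connected nodes is treated as a (k-1)-dimensional object. This counts
# cliques by brute force in a tiny undirected graph; the Blue Brain work
# uses directed cliques and specialized topology tools.
from itertools import combinations

edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (0, 3), (3, 4)}
nodes = {n for e in edges for n in e}

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def cliques_of_size(k):
    return [c for c in combinations(sorted(nodes), k)
            if all(connected(u, v) for u, v in combinations(c, 2))]

for k in range(2, len(nodes) + 1):
    found = cliques_of_size(k)
    if found:
        print(f"dimension {k - 1}: {len(found)} clique(s) of {k} neurons")
```

In this five-node example, nodes 0 through 3 are all mutually connected, so they form a single three-dimensional object; the eleven-dimensional structures in the paper are cliques of twelve neurons.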

Formation of a time (temporal) crystal

It’s a crystal arranged in time, according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert), Note: Links have been removed,

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case, from a diamond.
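To illustrate the ‘half as fast’ signature Potter describes, here’s a cartoon in a few lines of code: if every drive pulse flips the spins, the spin pattern repeats only every two pulses, i.e., at half the drive frequency. This is just the piano-key analogy in code form, not the actual trapped-ion experiment, which involves interactions, disorder and imperfect pulses; the remarkable part of the real result is that the half-frequency response stays rigid despite those imperfections.

```python
# Cartoon of period doubling: an ideal pi-pulse flips the spin on every
# drive pulse, so the spin's state only repeats every TWO pulses, i.e.
# at half the drive frequency. Purely illustrative, not the experiment.
spin = +1
history = []
for pulse in range(8):      # eight drive pulses
    spin = -spin            # each ideal pulse flips the spin
    history.append(spin)

print("spin after each pulse:", history)
# [-1, 1, -1, 1, ...] -- the pattern repeats every 2 pulses, so the
# response oscillates at half the drive rate (period doubling).
```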

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

Watching a nanosized space rocket under a microscope

That is a silent video depicting the research. For anyone who may be puzzled, there’s an Aug. 8, 2016 news item on Nanowerk featuring the research announcement from Michigan Technological University (Note: A link has been removed),

Researchers at the University of Maryland and Michigan Technological University have operated a tiny proposed satellite ion rocket under a microscope to see how it works (Nanotechnology, “Radiation-induced solidification of ionic liquid under extreme electric field”).

The rocket, called an electrospray thruster, is a drop of molten salt. When electricity is applied, it creates a field on the tip of the droplet, until ions begin streaming off the end. The force created by the rocket is less than the weight of a human hair, but in the vacuum of space it is enough to push a small object forward with a constant acceleration. Many of these tiny thrusters packed together could propel a spacecraft over great distances, maybe even to the nearest exoplanet, and they are particularly useful for Earth-orbiting nanosatellites, which can be as small as a shoe box. These thrusters are currently being tested on the European Space Agency’s LISA Pathfinder, which hopes to poise objects in space so precisely that they would only be disturbed by gravitational waves.

An Aug. 8, 2016 Michigan Technological University news release on EurekAlert, which originated the news item, explains further,

these droplet engines have a problem: sometimes they form needle-like spikes that disrupt the way the thruster works – they get in the way of the ions flowing outward and turn the liquid to a gel. Lyon B. King and Kurt Terhune, mechanical engineers at Michigan Tech, wanted to find out how this actually happens.

“The challenge is making measurements of features as small as a few molecules in the presence of a strong electric field, which is why we turned to John Cumings at the University of Maryland,” King says, explaining Cumings is known for his work with challenging materials and that they needed to look for a needle in a haystack. “Getting a close look at these droplets is like looking through a straw to find a penny somewhere on the floor of a room–and if that penny moves out of view, like the tip of the molten salt needles do–then you have to start searching for it all over again.”

At the Advanced Imaging and Microscopy Lab at the University of Maryland, Cumings put the tiny thruster in a transmission electron microscope – an advanced scope that can see things down to millionths of a meter. They watched as the droplet elongated and sharpened to a point, and then started emitting ions. Then the tree-like defects began to appear.

The researchers say that figuring out why these branched structures grow could help prevent them from forming. The problem occurs when high-energy electrons, like those used in the microscope’s imaging beam, impact the fluid causing damage to the molecules that they strike. This damages the molten salt’s molecular structure, so it thickens into a gel and no longer flows properly.

“We were able to watch the dendritic structures accumulate in real time,” says Kurt Terhune, a mechanical engineering graduate student and the study’s lead author. “The specific mechanism still needs to be investigated, but this could have importance for spacecraft in high-radiation environments.”

He adds that the microscope’s electron beam is more powerful than natural settings, but the gelling effect could affect the lifetime of electrospray thrusters in low-Earth and geosynchronous orbit.

Here’s a link to and a citation for the paper,

Radiation-induced solidification of ionic liquid under extreme electric field by Kurt J Terhune, Lyon B King, Kai He, and John Cumings. Nanotechnology, Volume 27, Number 37 DOI: http://dx.doi.org/10.1088/0957-4484/27/37/375701 Published 3 August 2016

© 2016 IOP Publishing Ltd

This paper is behind a paywall.

Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about machine learning and curbing police violence has appeared in New Scientist (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which officers who are stressed behave aggressively, for example, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.

According to Hodson, there are already systems, both human and algorithmic, in place but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”
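For anyone wondering how figures like these are computed, here’s the standard arithmetic from a confusion matrix. The 48-of-83 figure comes from the article above; the false-positive and true-negative counts are invented just to show the calculation, and this is not the Chicago team’s code.

```python
# How figures like those quoted above are typically computed from a
# confusion matrix. The 48-of-83 number comes from the article; the
# false-positive and true-negative counts are invented for illustration.
true_positives = 48        # adverse incidents the model flagged in advance
false_negatives = 83 - 48  # adverse incidents it missed
false_positives = 200      # officers flagged who had no incident (hypothetical)
true_negatives = 1500      # officers neither flagged nor involved (hypothetical)

recall = true_positives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"recall (incidents caught): {recall:.0%}")
print(f"false positive rate: {false_positive_rate:.0%}")
```

The trade-off Ghani describes is exactly the one between these two numbers: flag more officers and you catch more incidents, but the false positive rate climbs until the warnings become useless.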

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

h/t Terkko Navigator

I have written about police and violence here in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper, in a July 25, 2016 posting titled: Robots, Dallas (US), ethics, and killing.

D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit broader in my interpretation of the ‘society’ part of my commentary, I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads, said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, Human Relations Area Files, Washington University in Saint Louis, University of Michigan, American Museum of Natural History, and City University of New York.

The diverse team included: linguists; anthropologists; biogeographers; data scientists; ethnobiologists; and evolutionary ecologists, who employ a variety of research methods including field-based primary data collection; compilation of cross-cultural data sources; and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”
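To give a flavour of the kind of cross-cultural query D-PLACE supports, here’s a sketch using pandas on a made-up table. The column names, values, and rows below are invented for illustration; the real database exposes its own variable codes and lets you download the underlying data for exactly this sort of filtering.

```python
# The kind of cross-cultural filter D-PLACE supports, sketched locally with
# pandas. Column names and rows are invented for illustration; the real
# database uses its own variable codes and downloadable tables.
import pandas as pd

societies = pd.DataFrame([
    {"society": "A", "language_family": "Austronesian", "marriage": "monogamy", "elevation_m": 120},
    {"society": "B", "language_family": "Indo-European", "marriage": "polygyny", "elevation_m": 800},
    {"society": "C", "language_family": "Austronesian", "marriage": "polygyny", "elevation_m": 40},
])

# e.g. Austronesian-speaking societies recorded below 500 m elevation
subset = societies[(societies["language_family"] == "Austronesian")
                   & (societies["elevation_m"] < 500)]
print(subset[["society", "marriage"]])
```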

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin. PLOS ONE, 2016; 11 (7): e0158391 DOI: 10.1371/journal.pone.0158391 Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics in the 19th and early 20th centuries, that information can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public; it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then-current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission.  While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine].  Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas).  Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity.  He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisors on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s backgrounds inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts and possibly others believe that science is the discovery and accumulation of facts. In this worldview science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons

This ‘think’ piece is going to cover a fair bit of ground including science literacy in the general public and in the US Supreme Court, and what that might mean for science advice and UK Members of Parliament (MPs).

Science literacy generally and in the US Supreme Court

A science literacy report for the US National Academy of Sciences (NAS), due sometime from early to mid 2017, is being crafted with an eye to capturing a different perspective, according to a March 24, 2016 University of Wisconsin-Madison news release by Terry Dewitt,

What does it mean to be science literate? How science literate is the American public? How do we stack up against other countries? What are the civic implications of a public with limited knowledge of science and how it works? How is science literacy measured?

These and other questions are under the microscope of a 12-member National Academy of Sciences (NAS) panel — including University of Wisconsin—Madison Life Sciences Communication Professor Dominique Brossard and School of Education Professor Noah Feinstein — charged with sorting through the existing data on American science and health literacy and exploring the association between knowledge of science and public perception of and support for science.

The committee — composed of educators, scientists, physicians and social scientists — will take a hard look at the existing data on the state of U.S. science literacy, the questions asked, and the methods used to measure what Americans know and don’t know about science and how that knowledge has changed over time. Critically for science, the panel will explore whether a lack of science literacy is associated with decreased public support for science or research.

Historically, policymakers and leaders in the scientific community have fretted over a perceived lack of knowledge among Americans about science and how it works. A prevailing fear is that an American public unequipped to come to terms with modern science will ultimately have serious economic, security and civic consequences, especially when it comes to addressing complex and nuanced issues like climate change, antibiotic resistance, emerging diseases, environment and energy choices.

While the prevailing wisdom, inspired by past studies, is that Americans don’t stack up well in terms of understanding science, Brossard is not so convinced. Much depends on what kinds of questions are asked, how they are asked, and how the data is analyzed.

It is very easy, she argues, to do bad social science, and past studies may have measured the wrong things or otherwise created a perception about the state of U.S. science literacy that may or may not be true.

“How do you conceptualize scientific literacy? What do people need to know? Some argue that scientific literacy may be as simple as an understanding of how science works, the nature of science, [emphasis mine]” Brossard explains. “For others it may be a kind of ‘civic science literacy,’ where people have enough knowledge to be informed and make good decisions in a civics context.”

Science literacy may not be just for the public; it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then-current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, a theoretical physicist at the University of Maryland, member of the President’s Council of Advisors on Science and Technology (and commercial actor), has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates draws on his personal experience as a researcher and teacher to explain how people’s backgrounds inform their practice of science, and how two different people may use the same scientific method but think about the problem differently.

I’m guessing that Scalia and Roberts, and possibly others, believe that science is the discovery and accumulation of facts. In this worldview, scientific facts such as gravity are simply waiting to be discovered and formulated into a ‘law’. They do not recognize that much of science is a collection of beliefs that may be shaped by personal perspectives. For example, we believe we’ve proved the existence of the Higgs boson, but no one associated with the research has ever stated unequivocally that it exists.

For judges who are under the impression that scientific facts are out there somewhere waiting to be discovered, diversity must seem irrelevant. It is not. Who you are affects the questions you ask and how you approach science. The easiest example is to look at how women were viewed when they were subjects in medical research. The fact that women’s physiology is significantly different (and not just in child-bearing ways) was never considered relevant when reporting results. Today, researchers consider not only gender but also age (to some extent), ethnicity, and more when examining results. It’s still not perfect, but it was a step forward.

So when Brossard included “… an understanding of how science works, the nature of science …” as an aspect of science literacy, the judges seemed to present a good example of how not understanding science can have a major impact on how others live.

I’d almost forgotten this science literacy piece, as I’d started the draft some months ago, but then I spotted a news item about a science advice/MP ‘dating’ service in the UK.

Science advice and UK MPs

First, the news; then, the speculation (from a June 6, 2016 news item on ScienceDaily),

MPs have expressed an overwhelming willingness to use a proposed new service to swiftly link them with academics in relevant areas to help ensure policy is based on the latest evidence.

A June 6, 2016 University of Exeter press release, which originated the news item, provides more detail about the proposed service and the research providing the supporting evidence (Note: A link has been removed),

The government is pursuing a drive towards evidence-based policy, yet policy makers still struggle to incorporate evidence into their decisions. One reason for this is limited easy access to the latest research findings or to academic experts who can respond to questions about evidence quickly.

Researchers at Cardiff University, the University of Exeter and University College London have today published results of the largest study to date reporting MPs’ attitudes to evidence in policy making and their reactions to a proposed Evidence Information Service (EIS) – a rapid match-making advisory service that would work alongside existing systems to put MPs in touch with relevant academic experts.

Dr Natalia Lawrence, of the University of Exeter, said: “It’s clear from our study that politicians want to ensure their decisions incorporate the most reliable evidence, but it can sometimes be very difficult for them to know how to access the latest research findings. This new matchmaking service could be a quick and easy way for them to seek advice from cutting-edge researchers and to check their understanding and facts. It could provide a useful complement to existing highly-valued information services.”

The research, published today in the journal Evidence and Policy, reports the findings of a national consultation exercise between politicians and the public. The researchers recruited members of the public to interview their local parliamentary representative. In total, 86 politicians were contacted, with 56 interviews completed. The MPs indicated an overwhelming willingness to use a service such as the EIS, with 85% supporting the idea, but noted a number of potential reservations related to the logistics of the EIS, such as response time and familiarity with the service. Yet the MPs indicated that their logistical reservations could be overcome by accessing the EIS via existing highly-valued parliamentary information services such as those provided by the House of Commons and Lords Libraries. Furthermore, prior to rolling out the EIS on a nationwide basis, it would first need to be piloted.

Developing the proposed EIS in line with feedback from this consultation of MPs would offer the potential to provide policy makers with rapid, reliable and confidential evidence from willing volunteers from the research community.

Professor Chris Chambers, of Cardiff University, said: “The government has given a robust steer that MPs need to link in more with academics to ensure decisions shaping the future of the country are evidence-based. It’s heartening to see that there is a will to adopt this system and we now need to move into a phase of developing a service that is both simple and effective to meet this need.”

The next steps for the project are parallel consultations of academics and members of the public and a pilot of the EIS, using funding from the GW4 alliance of universities, made up of Bath, Bristol, Cardiff and Exeter.

What this study shows:
• The consultation shows that politicians recognise the importance of evidence-based policy making and agree on the need for an easier and more direct linkage between academic experts and policy makers.
• Politicians would welcome the creation of the EIS as a provider of rapid, reliable and confidential evidence.

What this study does not show:
• This study does not show how academics would provide evidence. This was a small-scale study which consulted politicians and has not attempted to give voice to the academic community.
• This study does not detail the mechanism of an operational EIS. Instead it indicates the need for a service such as the EIS and suggests ways in which the EIS can be operationalized.
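
As a quick sanity check on the figures quoted above, here is a back-of-the-envelope calculation; it assumes (my assumption, not the paper’s) that the 85% support figure refers to the 56 MPs who completed interviews.

```python
# Back-of-the-envelope check of the consultation figures quoted above.
# Assumption (mine, not the paper's): the 85% support figure refers to
# the 56 MPs who completed interviews.

contacted = 86
completed = 56
support_share = 0.85

response_rate = completed / contacted          # roughly 0.65, i.e. about 65%
supporters = round(completed * support_share)  # roughly 48 MPs

print(f"Response rate: {response_rate:.0%}")
print(f"Approximate number of supportive MPs: {supporters}")
```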

Here’s a link to and a citation for the paper,

Service as a new platform for supporting evidence-based policy: a consultation of UK parliamentarians by Natalia Lawrence, Jemma Chambers, Sinead Morrison, Sven Bestmann, Gerard O’Grady, Christopher Chambers, Andrew Kythreotis. Evidence & Policy: A Journal of Research, Debate and Practice DOI: http://dx.doi.org/10.1332/174426416X14643531912169 Appeared or available online: June 6, 2016

This paper is open access. *Corrected June 17, 2016.*

It’s an interesting idea and I can understand the appeal. However, operationalizing this ‘dating’ or ‘matchmaking’ service could prove quite complex. I appreciate the logistics issues, but I’m a little more concerned about the MPs’ science literacy. Are they going to be like the two US justices who believe that science is the pursuit of immutable facts? What happens if two MPs are matched with different scientists and those scientists don’t agree about what the evidence says? Or, what happens if one scientist is more cautious than the other? There are all kinds of pitfalls. I’m not arguing against the idea, but it’s going to require a lot of careful consideration.
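
For what it’s worth, here is a purely illustrative sketch of the kind of matchmaking such a service might do. Nothing in it comes from the Evidence & Policy paper, which deliberately does not specify a mechanism; the researcher names, topic lists, and keyword-overlap matching rule are my own placeholders. It does illustrate one design choice relevant to the pitfalls above: returning more than one expert, so that differing readings of the evidence remain visible.

```python
# Purely illustrative sketch of keyword-based expert matching.
# The actual EIS mechanism is NOT specified in the paper discussed above;
# the experts, topics and matching rule here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Expert:
    name: str
    topics: set


EXPERTS = [
    Expert("Researcher A", {"antibiotic", "resistance", "microbiology"}),
    Expert("Researcher B", {"climate", "energy", "policy"}),
    Expert("Researcher C", {"climate", "flooding", "infrastructure"}),
]


def match_experts(query, experts, top_n=2):
    """Rank experts by keyword overlap with the MP's question and return
    more than one, so that differing expert views stay visible to the MP."""
    words = set(query.lower().split())
    ranked = sorted(experts, key=lambda e: len(e.topics & words), reverse=True)
    return [e for e in ranked[:top_n] if e.topics & words]


if __name__ == "__main__":
    question = "what does the evidence say about climate and energy choices"
    for expert in match_experts(question, EXPERTS):
        print(expert.name)
```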

University of Maryland looks into transparent wood

Is transparent wood becoming the material du jour? Following on the heels of my April 1, 2016 post about transparent wood and the KTH Royal Institute of Technology (Sweden), there’s a May 6, 2016 news item on ScienceDaily about the material and a team at the University of Maryland,

Researchers at the University of Maryland have made a block of linden wood transparent, which they say will be useful in fancy building materials and in light-based electronics systems.

Materials scientist Liangbing Hu and his team at the University of Maryland, College Park, have removed the molecule in wood, lignin, that makes it rigid and dark in color. They left behind the colorless cellulose cell structures, filled them with epoxy, and came up with a version of the wood that is mostly see-thru.

I wonder if this is the type of material that might be used in structures like the proposed Center of Nanoscience and Nanotechnology building at Tel Aviv University (see my May 9, 2016 posting about a building design that features no doors or windows)?

Regardless, there’s more about this latest transparent wood in a May 5, 2016 Tufts University news release, which originated the news item,

Remember “xylem” and “phloem” from grade-school science class? These structures pass water and nutrients up and down the tree. Hu and his colleagues see these as vertically aligned channels in the wood, a naturally-grown structure that can be used to pass light along, after the wood has been treated.

The resulting three-inch block of wood had both high transparency—the quality of being see-thru—and high haze—the quality of scattering light. This would be useful, said Hu, in making devices comfortable to look at. It would also help solar cells trap light; light could easily enter through the transparent material, but the high haze would keep the light bouncing around near where it would be absorbed by the solar panel.

They compared how the materials performed and how light worked its way through the wood when they sliced it in two ways: with the grain, so that the channels passed through the longest dimension of the block, and against the grain, so that the channels passed through the shortest dimension of the block.

The short-channel wood proved slightly stronger and a little less brittle. But though the natural component making the wood strong had been removed, the addition of the epoxy made the wood four to six times tougher than the untreated version.

Then they investigated how the different directions of the wood affected the way the light passed through it. When laid down on top of a grid, both kinds of wood showed the lines clearly. When lifted just a touch above the grid, the long-channel wood still showed the grid, just a little more blurry. But the short-channel wood, when lifted those same few millimeters, made the grid completely invisible.
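
To make the transparency/haze distinction above a little more concrete: total transmittance is the fraction of incident light that makes it through a sample, and optical haze is the share of that transmitted light which is scattered diffusely rather than passing straight through. The sketch below uses that standard definition with hypothetical numbers; the values are placeholders, not the measurements reported in the paper.

```python
# Minimal sketch of the transparency vs. haze distinction described above.
# The transmittance values are hypothetical placeholders, not the
# measurements reported in the Advanced Materials paper.

def haze_percent(total_transmittance, diffuse_transmittance):
    """Optical haze: the share of transmitted light that is scattered
    diffusely rather than passing straight through."""
    return 100.0 * diffuse_transmittance / total_transmittance


total_t = 0.90    # 90% of incident light gets through: high transparency
diffuse_t = 0.80  # most of the transmitted light is scattered: high haze

print(f"Haze: {haze_percent(total_t, diffuse_t):.0f}%")
# High transparency plus high haze means most light enters but is spread out,
# which is the light-trapping behaviour the news release describes for solar cells.
```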

Here’s a link to and a citation for the paper,

Highly Anisotropic, Highly Transparent Wood Composites by Mingwei Zhu, Jianwei Song, Tian Li, Amy Gong, Yanbin Wang, Jiaqi Dai, Yonggang Yao, Wei Luo, Doug Henderson, and Liangbing Hu. Advanced Materials DOI: 10.1002/adma.201600427 Article first published online: 4 MAY 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

New ABCs of research: seminars and a book

David Bruggeman has featured a new book and mentioned its attendant seminars in an April 19, 2016 post on his Pasco Phronesis blog (Note: A link has been removed),

Ben Shneiderman, Professor of Computer Science at the University of Maryland at College Park, recently published The New ABCs of Research: Achieving Breakthrough Collaborations.  It’s meant to be a guide for students and researchers about the various efforts to better integrate different kinds of research and design to improve research outputs and outcomes. …

David has embedded a video of Shneiderman discussing the principles espoused in his book. There are some upcoming seminars, including one on Thursday, April 21, 2016 (today) at New York University (NYU) at 12:30 pm at 44 West 4th St, Kaufman Management Center, Room 3-50. From the description on the NYU event page,

Solving the immense problems of the 21st century will require ambitious research teams that are skilled at producing practical solutions and foundational theories simultaneously – that is the ABC Principle: Applied & Basic Combined.  Then these research teams can deliver high-impact outcomes by applying the SED Principle: Blend Science, Engineering and Design Thinking, which encourages use of the methods from all three disciplines.  These guiding principles (ABC & SED) are meant to replace Vannevar Bush’s flawed linear model from 1945 that has misled researchers for 70+ years.  These new guiding principles will enable students, researchers, business leaders, and government policy makers to accelerate discovery and innovation.

Oxford University Press:  http://ukcatalogue.oup.com/product/9780198758839.do

Book website:  http://www.cs.umd.edu/hcil/newabcs

There is another seminar on Wednesday, April 27, 2016 at 3:00 pm in the Pepco Room, #1105 Kim Engineering Building at the University of Maryland, which is handy for anyone in the Washington, DC area.