Nanodiamonds, if successfully extracted from oil, could be used for imaging and communications, and the world’s leading program for extracting nanodiamonds (also known as diamondoids) is in California (US). From a May 12, 2016 news item on Nanowerk,
Stanford and SLAC National Accelerator Laboratory jointly run the world’s leading program for isolating and studying diamondoids — the tiniest possible specks of diamond. Found naturally in petroleum fluids, these interlocking carbon cages weigh less than a billionth of a billionth of a carat (a carat weighs about the same as 12 grains of rice); the smallest ones contain just 10 atoms.
Over the past decade, a team led by two Stanford-SLAC faculty members — Nick Melosh, an associate professor of materials science and engineering and of photon science, and Zhi-Xun Shen, a professor of photon science and of physics and applied physics — has found potential roles for diamondoids in improving electron microscope images, assembling materials and printing circuits on computer chips. The team’s work takes place within SIMES, the Stanford Institute for Materials and Energy Sciences, which is run jointly with SLAC.
Close-up of purified diamondoids on a lab bench. Too small to see with the naked eye, diamondoids are visible only when they clump together in fine, sugar-like crystals like these. Photo: Christopher Smith, SLAC National Accelerator Laboratory
Before they can do that [use nanodiamonds in imaging and other applications], though, just getting the diamondoids is a technical feat. It starts at the nearby Chevron refinery in Richmond, California, with a railroad tank car full of crude oil from the Gulf of Mexico. “We analyzed more than a thousand oils from around the world to see which had the highest concentrations of diamondoids,” says Jeremy Dahl, who developed key diamondoid isolation techniques with fellow Chevron researcher Robert Carlson before both came to Stanford — Dahl as a physical science research associate and Carlson as a visiting scientist.
The original isolation steps were carried out at the Chevron refinery, where the selected crudes were boiled in huge pots to concentrate the diamondoids. Some of the residue from that work came to a SLAC lab, where small batches are repeatedly boiled to evaporate and isolate molecules of specific weights. These fluids are then forced at high pressure through sophisticated filtration systems to separate out diamondoids of different sizes and shapes, each of which has different properties.
The diamondoids themselves are invisible to the eye; the only reason we can see them is that they clump together in fine, sugar-like crystals. “If you had a spoonful,” Dahl says, holding a few in his palm, “you could give 100 billion of them to every person on Earth and still have some left over.”
Recently, the team started using diamondoids to seed the growth of flawless, nano-sized diamonds in a lab at Stanford. By introducing other elements, such as silicon or nickel, during the growing process, they hope to make nanodiamonds with precisely tailored flaws that can produce single photons of light for next-generation optical communications and biological imaging.
Early results show that the quality of optical materials grown from diamondoid seeds is consistently high, says Stanford’s Jelena Vuckovic, a professor of electrical engineering who is leading this part of the research with Steven Chu, professor of physics and of molecular and cellular physiology.
“Developing a reliable way of growing the nanodiamonds is critical,” says Vuckovic, who is also a member of Stanford Bio-X. “And it’s really great to have that source and the grower right here at Stanford. Our collaborators grow the material, we characterize it and we give them feedback right away. They can change whatever we want them to change.”
These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages so I don’t have much more to say about it but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,
A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.
There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.
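Out of curiosity, the grouping step the researchers describe (rating excerpts on many attributes, then statistically reducing the attributes to a few composites) can be sketched with off-the-shelf tools. This is a hypothetical illustration, not the paper’s actual procedure; the simulated data, the attribute count, and the choice of scikit-learn’s FactorAnalysis are all my assumptions.

```python
# Hypothetical sketch: reduce many musical-attribute ratings to a few
# composite factors, loosely analogous to the study's three clusters
# (Arousal, Valence, Depth). Data and dimensions are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_excerpts, n_attributes = 200, 12   # e.g. ratings for "loud", "tense", "sad", ...
ratings = rng.normal(size=(n_excerpts, n_attributes))

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(ratings)   # one 3-dimensional score per excerpt

print(scores.shape)          # (200, 3): each excerpt placed on 3 factors
print(fa.components_.shape)  # (3, 12): how each attribute loads on each factor
```

Inspecting the loadings matrix (which attributes load heavily on which factor) is what would let you label the factors, as the authors did with Arousal, Valence, and Depth.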
The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.
The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”
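The trait-preference mapping in that second study comes down to correlating personality scores with preference scores for each attribute cluster. A minimal sketch with simulated data follows; the effect size, sample size, and simulated link are invented, and only the Openness-to-Depth pairing comes from the article.

```python
# Hypothetical sketch of the second study's analysis step: correlating a
# personality trait score with preference for one attribute cluster.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
openness = rng.normal(size=500)                       # simulated trait scores
# simulate a positive link: Depth preference partly driven by Openness
depth_preference = 0.4 * openness + rng.normal(scale=0.9, size=500)

r, p = pearsonr(openness, depth_preference)
print(r > 0 and p < 0.05)   # expect a reliable positive correlation
```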
The researchers hope that this information will not only be helpful to music therapists but also for health care professions and even hospitals. For example, recent evidence has showed that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.
The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.
David M. Greenberg, lead author from Cambridge University and the City University of New York, said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”
Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”
William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,
Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].
This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.
Here’s a link to and a citation for the latest paper,
Here’s a link to and a citation for the October 2015 paper
Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality Volume 58, October 2015, Pages 154–158 doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.
The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.
Here’s a link to and a citation for the July 2015 paper,
Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE] http://dx.doi.org/10.1371/journal.pone.0131151 Published: July 22, 2015
This paper is open access.
I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allot at least 20 minutes.
Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for storing the energy harvested by solar and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,
We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.
The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.
Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says (photo above by Megan Rosenbloom via flickr) Courtesy: University of Toronto
“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”
You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way — too great a proportion of the energy generated is lost in the process of storing it.
This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion from H2O into O2 and H2 more energy-efficient than ever before. The new catalyst material is more than three times as efficient as the best state-of-the-art catalyst.
Details are offered in the news release,
The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …
“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .
“We invented a new way to distribute the catalyst homogeneously in a gel, and as a result built a device that works incredibly efficiently and robustly.”
This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in this joint theoretical-experimental study was a leading team of theorists at Stanford University and SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, the Canadian Light Source, and the Beijing Synchrotron Radiation Facility.
“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”
“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.
“The intensive research by the Sargent group at the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”
Here’s a link to and a citation for the paper,
Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science 24 Mar 2016: DOI: 10.1126/science.aaf1525
Should this technology prove successful once they start testing on people, the stated goal is to use it for the treatment of human neurodegenerative disorders such as Parkinson’s disease. But, I can’t help wondering if they might also consider constructing an artificial brain.
National Institutes of Health-funded scientists have developed a 3D micro-scaffold technology that promotes reprogramming of stem cells into neurons, and supports growth of neuronal connections capable of transmitting electrical signals. The injection of these networks of functioning human neural cells — compared to injecting individual cells — dramatically improved their survival following transplantation into mouse brains. This is a promising new platform that could make transplantation of neurons a viable treatment for a broad range of human neurodegenerative disorders.
Previously, transplantation of neurons to treat neurodegenerative disorders, such as Parkinson’s disease, had very limited success due to poor survival of neurons that were injected as a solution of individual cells. The new research is supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH.
“Working together, the stem cell biologists and the biomaterials experts developed a system capable of shuttling neural cells through the demanding journey of transplantation and engraftment into host brain tissue,” said Rosemarie Hunziker, Ph.D., director of the NIBIB Program in Tissue Engineering and Regenerative Medicine. “This exciting work was made possible by the close collaboration of experts in a wide range of disciplines.”
The research was performed by researchers from Rutgers University, Piscataway, New Jersey, departments of Biomedical Engineering, Neuroscience and Cell Biology, Chemical and Biochemical Engineering, and the Child Health Institute; Stanford University School of Medicine’s Institute of Stem Cell Biology and Regenerative Medicine, Stanford, California; the Human Genetics Institute of New Jersey, Piscataway; and the New Jersey Center for Biomaterials, Piscataway. The results are reported in the March 17, 2016 issue of Nature Communications.
The researchers experimented in creating scaffolds made of different types of polymer fibers, and of varying thickness and density. They ultimately created a web of relatively thick fibers using a polymer that stem cells successfully adhered to. The stem cells used were human induced pluripotent stem cells (iPSCs), which can be readily generated from adult cell types such as skin cells. The iPSCs were induced to differentiate into neural cells by introducing the protein NeuroD1 into the cells.
The space between the polymer fibers turned out to be critical. “If the scaffolds were too dense, the stem cell-derived neurons were unable to integrate into the scaffold, whereas if they are too sparse then the network organization tends to be poor,” explained Prabhas Moghe, Ph.D., distinguished professor of biomedical engineering & chemical engineering at Rutgers University and co-senior author of the paper. “The optimal pore size was one that was large enough for the cells to populate the scaffold but small enough that the differentiating neurons sensed the presence of their neighbors and produced outgrowths resulting in cell-to-cell contact. This contact enhances cell survival and development into functional neurons able to transmit an electrical signal across the developing neural network.”
To test the viability of neuron-seeded scaffolds when transplanted, the researchers created micro-scaffolds that were small enough for injection into mouse brain tissue using a standard hypodermic needle. They injected scaffolds carrying the human neurons into brain slices from mice and compared them to human neurons injected as individual, dissociated cells.
The neurons on the scaffolds showed dramatically increased cell survival compared with the individual cell suspensions. The scaffolds also promoted improved neuronal outgrowth and electrical activity. Neurons injected individually in suspension resulted in very few cells surviving the transplant procedure.
Human neurons on scaffolds compared to neurons in solution were then tested when injected into the brains of live mice. Similar to the results in the brain slices, the survival rate of neurons on the scaffold network was increased nearly 40-fold compared to injected isolated cells. A critical finding was that the neurons on the micro-scaffolds expressed proteins that are involved in the growth and maturation of neural synapses — a good indication that the transplanted neurons were capable of functionally integrating into the host brain tissue.
The success of the study gives this interdisciplinary group reason to believe that their combined areas of expertise have resulted in a system with much promise for eventual treatment of human neurodegenerative disorders. In fact, they are now refining their system for specific use as an eventual transplant therapy for Parkinson’s disease. The plan is to develop methods to differentiate the stem cells into neurons that produce dopamine, the specific neuron type that degenerates in individuals with Parkinson’s disease. The work also will include fine-tuning the scaffold materials, mechanics and dimensions to optimize the survival and function of dopamine-producing neurons, and finding the best mouse models of the disease to test this Parkinson’s-specific therapy.
A team of zoology researchers at Cambridge University (UK) find themselves in the unenviable position of having their peer-reviewed study used as a source of unintentional humour. I gather zoologists (Cambridge) and engineers (Stanford) don’t have much opportunity to share information.
Latest research reveals why geckos are the largest animals able to scale smooth vertical walls — even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.
Dr David Labonte and his colleagues in the University of Cambridge’s Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature’s largest adhesion-based climbers. And humans? We’d need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.
Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.
“If a human, for example, wanted to walk up a wall the way a gecko does, we’d need impractically large sticky feet – our shoes would need to be a European size 145 or a US size 114,” says Walter Federle, senior author also from Cambridge’s Department of Zoology.
The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.
“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and a blue whale is mostly volume with not much surface area” explains Labonte.
“This poses a problem for larger climbing species because, when they are bigger and heavier, they need more sticking power to be able to adhere to vertical or inverted surfaces, but they have comparatively less body surface available to cover with sticky footpads. This implies that there is a size limit to sticky footpads as an evolutionary solution to climbing – and that turns out to be about the size of a gecko.”
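Labonte’s surface-to-volume point can be put in back-of-envelope form: weight (and hence the pad area needed) grows with length cubed, while available body surface grows only with length squared, so the required pad fraction grows roughly linearly with body length. The gecko reference numbers below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope sketch of the scaling argument. Assumed reference point:
# a ~15 cm gecko devoting ~4% of its body surface to adhesive pads.
gecko_length_m = 0.15
gecko_pad_fraction = 0.04

def required_pad_fraction(length_m):
    # pad area needed ∝ weight ∝ L^3; surface available ∝ L^2
    # → required fraction ∝ L, scaled to the gecko reference point
    return gecko_pad_fraction * (length_m / gecko_length_m)

human_length_m = 1.7
print(round(required_pad_fraction(human_length_m), 2))  # 0.45, near the ~40% quoted
```

The exact fraction depends on the assumed gecko numbers; the point of the sketch is only that the fraction scales up with size until it becomes anatomically absurd.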
Larger animals have evolved alternative strategies to help them climb, such as claws and toes to grip with.
The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.
“We compared animals covering more than seven orders of magnitude in weight, which is roughly the same as comparing a cockroach to the weight of Big Ben, for example,” says Labonte.
These investigations also gave the researchers greater insights into how the size of adhesive footpads is influenced and constrained by the animals’ evolutionary history.
“We were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant- but if you look at their feet, they have remarkably similar footpads,” says Labonte.
“Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.”
The researchers believe we can learn from these evolutionary solutions in the development of large-scale manmade adhesives.
“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to do looking into the strategies that animals have developed in order to maintain the ability to scale smooth walls, which would likely also have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.
There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.
“We noticed that within closely related species pad size was not increasing fast enough to match body size, probably a result of evolutionary constraints. Yet these animals can still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast [Australia].
“Within frogs, we found that they have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.
“Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since there is not enough morphological diversity to allow it. Instead, within these closely related groups, pads get stickier. This is a great example of evolutionary constraint and innovation.”
A researcher at Stanford University (US) took strong exception to the Cambridge team’s conclusions, from a Jan. 28, 2016 article by Michael Grothaus for Fast Company (Note: A link has been removed),
It seems the dreams of the web-slinger’s fans were crushed forever—that is until a rival university swooped in and saved the day. A team of engineers working with mechanical engineering graduate student Elliot Hawkes at Stanford University have announced [in 2014] that they’ve invented a device called “gecko gloves” that proves the Cambridge researchers wrong.
Hawkes has created a video outlining the nature of his dispute with Cambridge University and with US TV talk show host Stephen Colbert, who featured the Cambridge University research in one of his monologues,
Each handheld gecko pad is covered with 24 adhesive tiles, and each of these is covered with sawtooth-shaped polymer structures, each 100 micrometers long (about the width of a human hair).
The pads are connected to special degressive springs, which become less stiff the further they are stretched. This characteristic means that when the springs are pulled upon, they apply an identical force to each adhesive tile and cause the sawtooth-like structures to flatten.
“When the pad first touches the surface, only the tips touch, so it’s not sticky,” said co-author Eric Eason, a graduate student in applied physics. “But when the load is applied, and the wedges turn over and come into contact with the surface, that creates the adhesion force.”
As with actual geckos, the adhesives can be “turned” on and off. Simply release the load tension, and the pad loses its stickiness. “It can attach and detach with very little wasted energy,” Eason said.
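The load-sharing idea behind the degressive springs described above can be illustrated with a toy model: if a spring’s force saturates as it stretches, tiles that happen to stretch by different amounts still carry nearly equal loads, whereas linear springs would overload the most-stretched tile. The force law and parameters below are invented for illustration and are not the Stanford design.

```python
# Toy comparison: linear springs vs a saturating ("degressive") force law.
# With unequal tile extensions, the degressive law evens out the loads.
import math

extensions = [1.0, 1.5, 2.0, 3.0]   # unequal stretch, arbitrary units

def linear_force(x, k=1.0):
    return k * x                                # force keeps growing with stretch

def degressive_force(x, f_max=1.0, k=2.0):
    return f_max * (1 - math.exp(-k * x))       # force levels off near f_max

linear = [linear_force(x) for x in extensions]
degressive = [degressive_force(x) for x in extensions]

def spread(forces):                             # max-min gap across tiles
    return max(forces) - min(forces)

print(spread(linear) > spread(degressive))      # True: degressive loads far more even
```

Near-equal tile forces matter because an adhesive patch fails at its most-loaded point; evening the loads is what lets the whole pad area contribute.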
The ability of the device to scale up controllable adhesion to support large loads makes it attractive for several applications beyond human climbing, said Mark Cutkosky, the Fletcher Jones Chair in the School of Engineering and senior author on the paper.
“Some of the applications we’re thinking of involve manufacturing robots that lift large glass panels or liquid-crystal displays,” Cutkosky said. “We’re also working on a project with NASA’s Jet Propulsion Laboratory to apply these to the robotic arms of spacecraft that could gently latch on to orbital space debris, such as fuel tanks and solar panels, and move it to an orbital graveyard or pitch it toward Earth to burn up.”
Previous work on synthetic and gecko adhesives showed that adhesive strength decreased as the size increased. In contrast, the engineers have shown that the special springs in their device make it possible to maintain the same adhesive strength at all sizes from a square millimeter to the size of a human hand.
The current version of the device can support about 200 pounds, Hawkes said, but, theoretically, increasing its size by 10 times would allow it to carry almost 2,000 pounds.
Here’s a link to and a citation for the Stanford paper,
To be fair to the Cambridge researchers, it’s stretching it a bit to say that Hawkes’ gecko gloves allow someone to be like Spiderman. That’s a very careful, slow climb achieved over a relatively short period of time. Can the human body remain suspended that way for more than a few minutes? How big do your sticky pads have to be if you’re going to have the same wall-climbing ease of movement and staying power of either a gecko or Spiderman?
Here’s a link to and a citation for the Cambridge paper,
Simon Fraser University (SFU) in Vancouver, Canada, has a ‘big data’ start to 2016 planned for its President’s Dream Colloquium (the president being Andrew Petter), according to a Jan. 5, 2016 news release,
Big data explained: SFU launches spring 2016 President’s Dream Colloquium
Speaker series tackles history, use and implications of collecting data
Canadians experience and interact with big data on a daily basis. Some interactions are as simple as buying coffee or as complex as filling out the Canadian government’s mandatory long-form census. But while big data may be one of the most important technological and social shifts in the past five years, many experts are still grappling with what to do with the massive amounts of information being gathered every day.
To help understand the implications of collecting, analyzing and using big data, Simon Fraser University is launching the President’s Dream Colloquium on Engaging Big Data on Tuesday, January 5.
“Big data affects all sectors of society from governments to businesses to institutions to everyday people,” says Peter Chow-White, SFU Associate Professor of Communication. “This colloquium brings together people from industry and scholars in computing and social sciences in a dialogue around one of the most important innovations of our time next to the Internet.”
This spring marks the first President’s Dream Colloquium where all faculty and guest lectures will be available to the public. The speaker series will give a historical overview of big data, specific case studies in how big data is used today and discuss what the implications are for this information’s usage in business, health and government in the future.
The series includes notable guest speakers such as managing director of Microsoft Research, Surajit Chaudhuri, and Tableau co-founder Pat Hanrahan.
“Pat Hanrahan is a leader in a number of sectors and Tableau is a leader in accessing big data through visual analytics,” says Chow-White. “Rather than big data being available to only a small amount of professionals, Tableau makes it easier for everyday people to access and understand it in a visual way.”
Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
Tuesday, January 19, 2016
Pat Hanrahan, Professor at the Stanford Computer Graphics Laboratory, Co-founder and Chief Scientist of Tableau, Founding member of Pixar
Wednesday, February 3, 2016
Sheelagh Carpendale, Professor of Computing Science, University of Calgary, Canada Research Chair in Information Visualization
Tuesday, February 23, 2016, 3:30pm
Colin Hill, CEO of GNS Healthcare
Tuesday, March 8, 2016
Chad Skelton, Award-winning Data Journalist and Consultant
Tuesday, March 22, 2016
Not to worry; even though the first talk with Sasha Issenberg and Mark Pickup (strangely, Pickup, an SFU professor of political science, isn’t mentioned in the news release or on the event page) has already taken place, a webcast is being posted to the event page here.
I watched the first event live (via a livestream webcast, which I accessed by clicking on the link on the event’s speaker page) and found it quite interesting, although I’m not sure about asking Issenberg to speak extemporaneously. He rambled and offered more detail about things that don’t matter much to a Canadian audience. Part of the problem may be that his ‘big data’ book (The Victory Lab: The Secret Science of Winning Campaigns) was published a while back; he has since published one on medical tourism and is about to publish one on same-sex marriage and the LGBTQ communities in the US. As someone who also moves from topic to topic, I know it’s an effort to ‘go back in time’, remember the details, and recapture the enthusiasm that made the piece interesting. Also, he has yet to get the latest scoop on big data and politics in the US, since the 2016 campaign trail doesn’t begin until sometime later in January.
So, thanks to Issenberg for managing to dredge up as much as he did. Happily, he did recognize that there are differences between Canada and the US in the type of election data that is gathered and the other data that can be accessed. He provided a capsule version of the data situation in the US, where they can identify individuals and predict how they might vote, while Pickup focused on the Canadian scene. As one expects from Canadian political parties and Canadian agencies in general, no one really wants to share how much information they can actually access (yes, that’s true of the Liberals and the NDP [New Democrats] too). By contrast, political parties and strategists in the US quite openly shared information with Issenberg about where and how they get data.
Pickup made some interesting points about data and how more data does not lead to better predictions. There was one study done on psychologists which Pickup replicated with undergraduate political science students. The psychologists and the political science students in the two separate studies were given data and asked to predict behaviour. They were then given more data about the same individuals and asked again to predict behaviour. In all, there were four sessions in which the subjects were given successively more data and asked to predict behaviour based on that data. You may have already guessed: prediction accuracy decreased each time more information was added. Meanwhile, the people making the predictions became more confident even as their predictive accuracy declined. A little disconcerting, non?
Pickup made another point noting that it may be easier to use big data to predict voting behaviour in a two-party system such as they have in the US but a multi-party system such as we have in Canada offers more challenges.
So, it was a good beginning and I look forward to more in the coming weeks (President’s Dream Colloquium on Engaging Big Data). Remember, if you can’t listen to the live session, just click through to the event’s speaker page, where they have hopefully posted the webcast.
The next dream colloquium takes place Tuesday, Jan. 19, 2016,
Big Data since 1854
Dr. Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
Stanford University, PhD
Tuesday, January 19, 2016, 3:30–5 pm
IRMACS Theatre, ASB 10900, Burnaby campus [or by webcast]
Current lithium-ion batteries present a fire hazard, which is why, last year, a team of researchers at the University of Michigan came up with a plan to prevent fires by wrapping the batteries in kevlar. My Jan. 30, 2015 post describes the research and provides some information about airplane fires caused by the use of lithium-ion batteries.
This year, a team of researchers at Stanford University (US) has invented a lithium-ion (li-ion) battery that shuts itself down when it overheats, according to a Jan. 12, 2016 news item on Nanotechnology Now,
Stanford researchers have developed the first lithium-ion battery that shuts down before overheating, then restarts immediately when the temperature cools.
The new technology could prevent the kind of fires that have prompted recalls and bans on a wide range of battery-powered devices, from recliners and computers to navigation systems and hoverboards [and on airplanes].
“People have tried different strategies to solve the problem of accidental fires in lithium-ion batteries,” said Zhenan Bao, a professor of chemical engineering at Stanford. “We’ve designed the first battery that can be shut down and revived over repeated heating and cooling cycles without compromising performance.”
Stanford has produced a video of Dr. Bao discussing her latest work,
A typical lithium-ion battery consists of two electrodes and a liquid or gel electrolyte that carries charged particles between them. Puncturing, shorting or overcharging the battery generates heat. If the temperature reaches about 300 degrees Fahrenheit (150 degrees Celsius), the electrolyte could catch fire and trigger an explosion.
Several techniques have been used to prevent battery fires, such as adding flame retardants to the electrolyte. In 2014, Stanford engineer Yi Cui created a “smart” battery that provides ample warning before it gets too hot.
“Unfortunately, these techniques are irreversible, so the battery is no longer functional after it overheats,” said study co-author Cui, an associate professor of materials science and engineering and of photon science. “Clearly, in spite of the many efforts made thus far, battery safety remains an important concern and requires a new approach.”
To address the problem, Cui, Bao and postdoctoral scholar Zheng Chen turned to nanotechnology. Bao recently invented a wearable sensor to monitor human body temperature. The sensor is made of a plastic material embedded with tiny particles of nickel with nanoscale spikes protruding from their surface.
For the battery experiment, the researchers coated the spiky nickel particles with graphene, an atom-thick layer of carbon, and embedded the particles in a thin film of elastic polyethylene.
“We attached the polyethylene film to one of the battery electrodes so that an electric current could flow through it,” said Chen, lead author of the study. “To conduct electricity, the spiky particles have to physically touch one another. But during thermal expansion, polyethylene stretches. That causes the particles to spread apart, making the film nonconductive so that electricity can no longer flow through the battery.”
When the researchers heated the battery above 160 F (70 C), the polyethylene film quickly expanded like a balloon, causing the spiky particles to separate and the battery to shut down. But when the temperature dropped back below 160 F (70 C), the polyethylene shrank, the particles came back into contact, and the battery started generating electricity again.
“We can even tune the temperature higher or lower depending on how many particles we put in or what type of polymer materials we choose,” said Bao, who is also a professor, by courtesy, of chemistry and of materials science and engineering. “For example, we might want the battery to shut down at 50 C or 100 C.”
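The shutdown-and-tuning behaviour Bao describes can be sketched as a toy model: the film conducts only while neighbouring spiky particles stay within tunneling distance, and heating the polyethylene matrix pushes them apart. All numbers and relations below are illustrative assumptions for the sketch, not values from the Stanford paper.

```python
# Toy model of the thermoresponsive film: conduction requires the average
# particle gap to stay below a tunneling threshold; thermal expansion of
# the polyethylene widens the gap. Constants are illustrative assumptions.

def film_conducts(temp_c, particle_loading, shutdown_gap_nm=5.0,
                  expansion_per_c=0.02):
    """Return True if the spiky-particle film still conducts at temp_c.

    particle_loading: fraction 0..1; more particles means a smaller
    baseline gap, so the film tolerates more expansion before the
    conductive pathways break.
    """
    baseline_gap_nm = 10.0 * (1.0 - particle_loading)  # assumed relation
    # Heating stretches the polymer, spreading the particles apart.
    gap_nm = baseline_gap_nm * (1.0 + expansion_per_c * max(temp_c - 20, 0))
    return gap_nm < shutdown_gap_nm

# A heavily loaded film stays conductive to a higher temperature than a
# lightly loaded one -- the tuning knob Bao mentions.
for loading in (0.9, 0.7):
    shutdown = next(t for t in range(20, 300) if not film_conducts(t, loading))
    print(f"particle loading {loading:.1f}: shuts down near {shutdown} C")
```

In this sketch, raising the particle loading (or choosing a polymer with a smaller expansion coefficient) moves the shutdown temperature up, mirroring the tunability described in the quote.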
To test the stability of the new material, the researchers repeatedly applied heat to the battery with a hot-air gun. Each time, the battery shut down when it got too hot and quickly resumed operating when the temperature cooled.
“Compared with previous approaches, our design provides a reliable, fast, reversible strategy that can achieve both high battery performance and improved safety,” Cui said. “This strategy holds great promise for practical battery applications.”
There’s been more than one piece here about water desalination and purification and/or remediation efforts, and at least one of them claims to have successfully overcome issues, such as the energy demands of reverse osmosis, which are hampering adoption of various technologies. Now, researchers at the University of Illinois at Urbana-Champaign have developed another new technique for desalinating water while addressing reverse osmosis issues, according to a Nov. 11, 2015 news item on Nanowerk (Note: A link has been removed),
University of Illinois engineers have found an energy-efficient material for removing salt from seawater that could provide a rebuttal to poet Samuel Taylor Coleridge’s lament, “Water, water, every where, nor any drop to drink.”
The material, a nanometer-thick sheet of molybdenum disulfide (MoS2) riddled with tiny holes called nanopores, is specially designed to let high volumes of water through but keep salt and other contaminants out, a process called desalination. In a study published in the journal Nature Communications (“Water desalination with a single-layer MoS2 nanopore”), the Illinois team modeled various thin-film membranes and found that MoS2 showed the greatest efficiency, filtering through up to 70 percent more water than graphene membranes. [emphasis mine]
“Even though we have a lot of water on this planet, there is very little that is drinkable,” said study leader Narayana Aluru, a U. of I. professor of mechanical science and engineering. “If we could find a low-cost, efficient way to purify sea water, we would be making good strides in solving the water crisis.
“Finding materials for efficient desalination has been a big issue, and I think this work lays the foundation for next-generation materials. These materials are efficient in terms of energy usage and fouling, which are issues that have plagued desalination technology for a long time,” said Aluru, who also is affiliated with the Beckman Institute for Advanced Science and Technology at the U. of I.
Most available desalination technologies rely on a process called reverse osmosis to push seawater through a thin plastic membrane to make fresh water. The membrane has holes in it small enough to not let salt or dirt through, but large enough to let water through. They are very good at filtering out salt, but yield only a trickle of fresh water. Although thin to the eye, these membranes are still relatively thick for filtering on the molecular level, so a lot of pressure has to be applied to push the water through.
“Reverse osmosis is a very expensive process,” Aluru said. “It’s very energy intensive. A lot of power is required to do this process, and it’s not very efficient. In addition, the membranes fail because of clogging. So we’d like to make it cheaper and make the membranes more efficient so they don’t fail as often. We also don’t want to have to use a lot of pressure to get a high flow rate of water.”
One way to dramatically increase the water flow is to make the membrane thinner, since the required force is proportional to the membrane thickness. Researchers have been looking at nanometer-thin membranes such as graphene. However, graphene presents its own challenges in the way it interacts with water.
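The inverse relation between membrane thickness and driving pressure described above can be sketched with a simple Darcy-style flux estimate. The permeability constant and thickness figures here are illustrative assumptions (a polyamide reverse-osmosis active layer is on the order of 100 nm; a single MoS2 sheet is under 1 nm), not numbers from the Illinois study.

```python
# Darcy-style sketch: flux = permeability * pressure / thickness, so the
# pressure needed to reach a target flux scales linearly with thickness.

def required_pressure(target_flux, thickness_nm, permeability=1.0):
    """Pressure (arbitrary units) needed to push target_flux through a
    membrane of the given thickness, assuming flux is proportional to
    pressure and inversely proportional to thickness."""
    return target_flux * thickness_nm / permeability

# Compare an assumed ~100 nm conventional RO active layer with an
# atomically thin (~0.7 nm) MoS2 sheet at the same target flux.
p_ro = required_pressure(1.0, 100.0)
p_mos2 = required_pressure(1.0, 0.7)
print(f"pressure ratio (conventional RO / MoS2): {p_ro / p_mos2:.0f}x")
```

The sketch only captures the thickness effect; the paper's 70 percent advantage over graphene comes from pore geometry and chemistry on top of thinness.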
Aluru’s group has previously studied MoS2 nanopores as a platform for DNA sequencing and decided to explore its properties for water desalination. Using the Blue Waters supercomputer at the National Center for Supercomputing Applications at the U. of I., they found that a single-layer sheet of MoS2 outperformed its competitors thanks to a combination of thinness, pore geometry and chemical properties.
A MoS2 molecule has one molybdenum atom sandwiched between two sulfur atoms. A sheet of MoS2, then, has sulfur coating either side with the molybdenum in the center. The researchers found that creating a pore in the sheet that left an exposed ring of molybdenum around the center of the pore created a nozzle-like shape that drew water through the pore.
“MoS2 has inherent advantages in that the molybdenum in the center attracts water, then the sulfur on the other side pushes it away, so we have a much higher rate of water going through the pore,” said graduate student Mohammad Heiranian, the first author of the study. “It’s inherent in the chemistry of MoS2 and the geometry of the pore, so we don’t have to functionalize the pore, which is a very complex process with graphene.”
In addition to the chemical properties, the single-layer sheets of MoS2 have the advantages of thinness, requiring much less energy, which in turn dramatically reduces operating costs. MoS2 also is a robust material, so even such a thin sheet is able to withstand the necessary pressures and water volumes.
The Illinois researchers are establishing collaborations to experimentally test MoS2 for water desalination and to test its rate of fouling, or clogging of the pores, a major problem for plastic membranes. MoS2 is a relatively new material, but the researchers believe that manufacturing techniques will improve as its high performance becomes more sought-after for various applications.
“Nanotechnology could play a great role in reducing the cost of desalination plants and making them energy efficient,” said Amir Barati Farimani, who worked on the study as a graduate student at Illinois and is now a postdoctoral fellow at Stanford University. “I’m in California now, and there’s a lot of talk about the drought and how to tackle it. I’m very hopeful that this work can help the designers of desalination plants. This type of thin membrane can increase return on investment because they are much more energy efficient.”
In a July 13, 2015 essay on Nanotechnology Now, Tim Harper provides an overview of the research into using graphene for water desalination and purification/remediation about which he is quite hopeful. There is no mention of an issue with interactions between water and graphene. It should be noted that Tim Harper is the Chief Executive Officer of G20, a company which produces a graphene-based solution (graphene oxide sheets), which can desalinate water and can purify/remediate it. Tim is a scientist and while you might have some hesitation given his fiscal interests, his essay is worthwhile reading as he supplies context and explanations of the science.
Scientists have been working for years to allow artificial skin to transmit what the brain would recognize as the sense of touch. For anyone who has lost a limb and gotten a prosthetic replacement, the loss of touch is reputedly one of the more difficult losses to accept. The sense of touch is also vital in robotics if the field is to expand and include activities reliant on the sense of touch, e.g., how much pressure do you use to grasp a cup; how much strength do you apply when moving an object from one place to another?
For anyone interested in the ‘electronic skin and pursuit of touch’ story, I have a Nov. 15, 2013 posting which highlights the evolution of the research into e-skin and what was then some of the latest work.
Using flexible organic circuits and specialized pressure sensors, researchers have created an artificial “skin” that can sense the force of static objects. Furthermore, they were able to transfer these sensory signals to the brain cells of mice in vitro using optogenetics. For the many people around the world living with prosthetics, such a system could one day allow them to feel sensation in their artificial limbs.

To create the artificial skin, Benjamin Tee et al. developed a specialized circuit out of flexible, organic materials. It translates static pressure into digital signals that depend on how much mechanical force is applied. A particular challenge was creating sensors that can “feel” the same range of pressure that humans can. Thus, on the sensors, the team used carbon nanotubes molded into pyramidal microstructures, which are particularly effective at tunneling the signals from the electric field of nearby objects to the receiving electrode in a way that maximizes sensitivity.

Transferring the digital signal from the artificial skin system to the cortical neurons of mice proved to be another challenge, since conventional light-sensitive proteins used in optogenetics do not stimulate neural spikes for sufficient durations for these digital signals to be sensed. Tee et al. therefore engineered new optogenetic proteins able to accommodate longer intervals of stimulation. Applying these newly engineered optogenetic proteins to fast-spiking interneurons of the somatosensory cortex of mice in vitro sufficiently prolonged the stimulation interval, allowing the neurons to fire in accordance with the digital stimulation pulse. These results indicate that the system may be compatible with other fast-spiking neurons, including peripheral nerves.
The heart of the technique is a two-ply plastic construct: the top layer creates a sensing mechanism and the bottom layer acts as the circuit to transport electrical signals and translate them into biochemical stimuli compatible with nerve cells. The top layer in the new work featured a sensor that can detect pressure over the same range as human skin, from a light finger tap to a firm handshake.
Five years ago, Bao’s [Zhenan Bao, a professor of chemical engineering at Stanford,] team members first described how to use plastics and rubbers as pressure sensors by measuring the natural springiness of their molecular structures. They then increased this natural pressure sensitivity by indenting a waffle pattern into the thin plastic, which further compresses the plastic’s molecular springs.
To exploit this pressure-sensing capability electronically, the team scattered billions of carbon nanotubes through the waffled plastic. Putting pressure on the plastic squeezes the nanotubes closer together and enables them to conduct electricity.
This allowed the plastic sensor to mimic human skin, which transmits pressure information as short pulses of electricity, similar to Morse code, to the brain. Increasing pressure on the waffled nanotubes squeezes them even closer together, allowing more electricity to flow through the sensor, and those varied impulses are sent as short pulses to the sensing mechanism. Remove pressure, and the flow of pulses relaxes, indicating light touch. Remove all pressure and the pulses cease entirely.
The team then hooked this pressure-sensing mechanism to the second ply of their artificial skin, a flexible electronic circuit that could carry pulses of electricity to nerve cells.
Importing the signal
Bao’s team has been developing flexible electronics that can bend without breaking. For this project, team members worked with researchers from PARC, a Xerox company, which has a technology that uses an inkjet printer to deposit flexible circuits onto plastic. Covering a large surface is important to making artificial skin practical, and the PARC collaboration offered that prospect.
Finally the team had to prove that the electronic signal could be recognized by a biological neuron. It did this by adapting a technique developed by Karl Deisseroth, a fellow professor of bioengineering at Stanford who pioneered a field that combines genetics and optics, called optogenetics. Researchers bioengineer cells to make them sensitive to specific frequencies of light, then use light pulses to switch cells, or the processes being carried on inside them, on and off.
For this experiment the team members engineered a line of neurons to simulate a portion of the human nervous system. They translated the electronic pressure signals from the artificial skin into light pulses, which activated the neurons, proving that the artificial skin could generate a sensory output compatible with nerve cells.
Optogenetics was only used as an experimental proof of concept, Bao said, and other methods of stimulating nerves are likely to be used in real prosthetic devices. Bao’s team has already worked with Bianxiao Cui, an associate professor of chemistry at Stanford, to show that direct stimulation of neurons with electrical pulses is possible.
Bao’s team envisions developing different sensors to replicate, for instance, the ability to distinguish corduroy versus silk, or a cold glass of water from a hot cup of coffee. This will take time. There are six types of biological sensing mechanisms in the human hand, and the experiment described in Science reports success in just one of them.
But the current two-ply approach means the team can add sensations as it develops new mechanisms. And the inkjet printing fabrication process suggests how a network of sensors could be deposited over a flexible layer and folded over a prosthetic hand.
“We have a lot of work to take this from experimental to practical applications,” Bao said. “But after spending many years in this work, I now see a clear path where we can take our artificial skin.”
Here’s a link to and a citation for the paper,
A skin-inspired organic digital mechanoreceptor by Benjamin C.-K. Tee, Alex Chortos, Andre Berndt, Amanda Kim Nguyen, Ariane Tom, Allister McGuire, Ziliang Carter Lin, Kevin Tien, Won-Gyu Bae, Huiliang Wang, Ping Mei, Ho-Hsiu Chou, Bianxiao Cui, Karl Deisseroth, Tse Nga Ng, & Zhenan Bao. Science 16 October 2015 Vol. 350 no. 6258 pp. 313-316 DOI: 10.1126/science.aaa9306
Strictly speaking this story of tricking cellulose into growing on the surface rather than the interior of a cell is not a nanotechnology topic but I imagine that the folks who research nanocellulose materials will find this work of great interest. An Oct. 8, 2015 news item on ScienceDaily describes the research,
Researchers have been able to watch the interior cells of a plant synthesize cellulose for the first time by tricking the cells into growing on the plant’s surface.
“The bulk of the world’s cellulose is produced within the thickened secondary cell walls of tissues hidden inside the plant body,” says University of British Columbia Botany PhD candidate Yoichiro Watanabe, lead author of the paper published this week in Science.
“So we’ve never been able to image the cells in high resolution as they produce this all-important biological material inside living plants.”
Cellulose, the structural component of cell walls that enables plants to stay upright, is the most abundant biopolymer on earth. It’s a critical resource for pulp and paper, textiles, building materials, and renewable biofuels.
“In order to be structurally sound, plants have to lay down their secondary cell walls very quickly once the plant has stopped growing, like a layer of concrete with rebar,” says UBC botanist Lacey Samuels, one of the senior authors on the paper.
“Based on our study, it appears plant cells need both a high density of the enzymes that create cellulose, and their rapid movement across the cell surface, to make this happen so quickly.”
This work, the culmination of years of research by four UBC graduate students supervised by UBC Forestry researcher Shawn Mansfield and Samuels, was facilitated by a collaboration with the Nara Institute of Science and Technology in Japan to create the special plant lines, and researchers at the Carnegie Institution for Science at Stanford University to conduct the live cell imaging.
“This is a major step forward in our understanding of how plants synthesize their walls, specifically cellulose,” says Mansfield. “It could have significant implications for the way plants are bred or selected for improved or altered cellulose ultrastructural traits – which could impact industries ranging from cellulose nanocrystals to toiletries to structural building products.”
The researchers used a modified line of Arabidopsis thaliana, a small flowering plant related to cabbage and mustard, to conduct the experiment. The resulting plants look exactly like their non-modified parents, until they are triggered to make secondary cell walls on their exterior.
One of the other partners in this research, the Carnegie Institution for Science (located at Stanford University), published an Oct. 8, 2015 news release on EurekAlert focusing on other aspects of the research (Note: Some of this is repetitive),
Now scientists, including Carnegie’s David Ehrhardt and Heather Cartwright, have exploited a new way to watch, in real time, the trafficking of the proteins that make cellulose during the formation of cell walls. They found that the organization of this trafficking by structural proteins called microtubules, combined with the high density and rapid rate of these cellulose-producing enzymes, explains how thick, high-strength secondary walls are built. This basic knowledge helps us understand how plants can stand upright, which was essential for the move of plants from the sea to the land, and may be useful for engineering plants with improved mechanical properties to increase yields or to produce novel bio-materials. The research is published in Science.
The live-cell imaging was conducted at Carnegie with colleagues from the University of British Columbia (UBC) using customized high-end instrumentation. For the first time, it directly tracked cellulose production to observe how xylem cells, cells that transport water and some nutrients, make cellulose for their secondary cell walls. Strong walls are based on a high density of enzymes that catalyze the synthesis of cellulose (called cellulose synthase enzymes) and their rapid movement across the xylem cell surface.
Watching xylem cells lay down cellulose in real time has not been possible before, because the vascular tissues of plants are hidden inside the plant body. Lead author Yoichiro Watanabe of UBC applied a system developed by colleagues at the Nara Institute of Science and Technology to trick plants into making xylem cells on their surface. The researchers fluorescently tagged a cellulose synthase enzyme of the experimental plant Arabidopsis to track the activity using high-end microscopes.
“For me, one of the most exciting aspects of this study was being able to observe how the microtubule cytoskeleton was actively directing the synthesis of the new cell walls at the level of individual enzymes. We can guess how a complex cellular process works from static snapshots, which is what we usually have had to work from in biology, but you can’t really understand the process until you can see it in action,” remarked Carnegie’s David Ehrhardt.