US military wants you to remember

While this July 10, 2014 news item on ScienceDaily concerns DARPA, an implantable neural device, and the Lawrence Livermore National Laboratory (LLNL), it describes a new project, not the one featured here in a June 18, 2014 posting titled: ‘DARPA (US Defense Advanced Research Projects Agency) awards funds for implantable neural interface’.

The new project, as per the July 10, 2014 news item on ScienceDaily, concerns memory,

The Department of Defense’s Defense Advanced Research Projects Agency (DARPA) awarded Lawrence Livermore National Laboratory (LLNL) up to $2.5 million to develop an implantable neural device with the ability to record and stimulate neurons within the brain to help restore memory, DARPA officials announced this week.

The research builds on the understanding that memory is a process in which neurons in certain regions of the brain encode information, store it and retrieve it. Certain types of illnesses and injuries, including Traumatic Brain Injury (TBI), Alzheimer’s disease and epilepsy, disrupt this process and cause memory loss. TBI, in particular, has affected 270,000 military service members since 2000.

A July 2, 2014 LLNL news release, which originated the news item, provides more detail,

The goal of LLNL’s work — driven by LLNL’s Neural Technology group and undertaken in collaboration with the University of California, Los Angeles (UCLA) and Medtronic — is to develop a device that uses real-time recording and closed-loop stimulation of neural tissues to bridge gaps in the injured brain and restore individuals’ ability to form new memories and access previously formed ones.

Specifically, the Neural Technology group will seek to develop a neuromodulation system — a sophisticated electronics system to modulate neurons — that will investigate areas of the brain associated with memory to understand how new memories are formed. The device will be developed at LLNL’s Center for Bioengineering.

“Currently, there is no effective treatment for memory loss resulting from conditions like TBI,” said LLNL’s project leader Satinderpall Pannu, director of LLNL’s Center for Bioengineering, a unique facility dedicated to fabricating biocompatible neural interfaces. …

LLNL will develop a miniature, wireless and chronically implantable neural device that will incorporate both single-neuron and local field potential recordings into a closed-loop system implanted in TBI patients’ brains. The device — implanted into the entorhinal cortex and hippocampus — will allow for stimulation and recording from 64 channels located on a pair of high-density electrode arrays. The entorhinal cortex and hippocampus are regions of the brain associated with memory.

The arrays will connect to an implantable electronics package capable of wireless data and power telemetry. An external electronic system worn around the ear will store digital information associated with memory storage and retrieval and provide power telemetry to the implantable package using a custom RF-coil system.

Designed to last throughout the duration of treatment, the device’s electrodes will be integrated with electronics using advanced LLNL integration and 3D packaging technologies. The microelectrodes that are the heart of this device are embedded in a biocompatible, flexible polymer.

Using the Center for Bioengineering’s capabilities, Pannu and his team of engineers have achieved 25 patents and many publications during the last decade. The team’s goal is to build the new prototype device for clinical testing by 2017.

Lawrence Livermore’s collaborators, UCLA and Medtronic, will focus on conducting clinical trials and fabricating parts and components, respectively.

“The RAM [Restoring Active Memory] program poses a formidable challenge reaching across multiple disciplines from basic brain research to medicine, computing and engineering,” said Itzhak Fried, lead investigator for UCLA on this project and professor of neurosurgery and psychiatry and biobehavioral sciences at the David Geffen School of Medicine at UCLA and the Semel Institute for Neuroscience and Human Behavior. “But at the end of the day, it is the suffering individual, whether an injured member of the armed forces or a patient with Alzheimer’s disease, who is at the center of our thoughts and efforts.”

LLNL’s work on the Restoring Active Memory program supports [US] President [Barack] Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative.
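The news release doesn’t go into the signal-processing details, so here’s a purely illustrative Python sketch of what closed-loop logic of this general kind might look like: record a window from the 64 channels, compute a band power, and decide whether to stimulate. The sampling rate, frequency band and trigger rule are my own assumptions, not LLNL’s design,

```python
# Toy sketch of the closed-loop idea described above: record, detect, stimulate.
# The channel count matches the release; everything else is an illustrative
# assumption, not LLNL's actual firmware.
import numpy as np

N_CHANNELS = 64          # channels across the two electrode arrays
FS = 1000                # assumed sampling rate, Hz
THETA = (4.0, 8.0)       # theta band, often linked to memory encoding

def theta_power(window: np.ndarray) -> float:
    """Mean theta-band power across channels for one window of samples."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)
    band = (freqs >= THETA[0]) & (freqs <= THETA[1])
    return float(spectrum[:, band].mean())

def closed_loop_step(window: np.ndarray, threshold: float) -> bool:
    """Return True if the (hypothetical) stimulator should fire."""
    return theta_power(window) < threshold

# A one-second window of noise stands in for real neural data here.
rng = np.random.default_rng(0)
window = rng.normal(size=(N_CHANNELS, FS))
print("stimulate:", closed_loop_step(window, threshold=5.0))
```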

Obama’s BRAIN is picking up speed.

Protein nanomachines at the University of Washington

Scouring pad or protein nanomachine?

Caption: This is a computational model of a successfully designed two-component protein nanocage with tetrahedral symmetry. Credit: Dr. Vikram Mulligan


This illustration of a protein nanocage reminded me of a type of scouring pad, which, come to think of it, I haven’t seen in any stores for some years. Getting back on topic, this nanocage is a first step to building nanomachines, according to a June 5, 2014 news item on Nanowerk,

A route for constructing protein nanomachines engineered for specific applications may be closer to reality.

Biological systems produce an incredible array of self-assembling, functional protein tools. Some examples of these nanoscale protein materials are scaffolds to anchor cellular activities, molecular motors to drive physiological events, and capsules for delivering viruses into host cells.

Scientists inspired by these sophisticated molecular machines want to build their own, with forms and functions customized to tackle modern-day challenges. The ability to design new protein nanostructures could have useful implications in targeted delivery of drugs, in vaccine development and in plasmonics, which is manipulating electromagnetic signals to guide light diffraction for information technologies, energy production or other uses.

A recently developed computational method may be an important step toward that goal. The project was led by the University of Washington’s [Washington state] Neil King, translational investigator; Jacob Bale, graduate student in Molecular and Cellular Biology; and William Sheffler in David Baker’s laboratory at the University of Washington Institute for Protein Design, in collaboration with colleagues at UCLA [University of California at Los Angeles] and Janelia Farm.

The work is based in the Rosetta macromolecular modeling package developed by Baker and his colleagues. The program was originally created to predict natural protein structures from amino acid sequences. Researchers in the Baker lab and around the world are increasingly using Rosetta to design new protein structures and sequences aimed at solving real-world problems.

A June 4 (?), 2014 University of Washington news release by Leila Gray (also on EurekAlert), which originated the news item, provides more detail about the models and what the scientists hope to accomplish,

“Proteins are amazing structures that can do remarkable things,” King said, “they can respond to changes in their environment. Exposure to a particular metabolite or a rise in temperature, for example, can trigger an alteration in a particular protein’s shape and function.” People often call proteins the building blocks of life.

“But unlike, say, a PVC pipe,” King said, “they are not simply construction material.” They are also construction (and demolition) workers — speeding up chemical reactions, breaking down food, carrying messages, interacting with each other, and performing countless other duties vital to life.

With the new software the scientists were able to create five novel, 24-subunit cage-like protein nanomaterials. Importantly, the actual structures, the researchers observed, were in very close agreement with their computer modeling.

Their method depends on encoding pairs of protein amino acid sequences with the information needed to direct molecular assembly through protein-protein interfaces. The interfaces not only provide the energetic forces that drive the assembly process, they also precisely orient the pairs of protein building blocks with the geometry required to yield the desired cage-like symmetric architectures.

Creating this cage-shaped protein, the scientists said, may be a first step towards building nano-scale containers. [emphasis mine] King said he looks forward to a time when cancer-drug molecules will be packaged inside of designed nanocages and delivered directly to tumor cells, sparing healthy cells.

“The problem today with cancer chemotherapy is that it hits every cell and makes the patient feel sick,” King said. Packaging the drugs inside customized nanovehicles with parking options restricted to cancer sites might circumvent the side effects.

The scientists note that combining just two types of symmetry elements, as in this study, can in theory give rise to a range of symmetrical shapes, such as cubic point groups, helices, layers, and crystals.

King explained that the immune system responds to repetitive, symmetric patterns, such as those on the surface of a virus or disease bacteria. Building nano-decoys may be a way to train the immune system to attack certain types of pathogens.

“This concept may become the foundation for vaccines based on engineered nanomaterials,” King said. Further down the road, he and Bale anticipate that these design methods might also be useful for developing new clean energy technologies.

The scientists added in their report, “The precise control over interface geometry offered by our method enables the design of two-component protein nanomaterials with diverse nanoscale features, such as surfaces, pores, and internal volumes, with high accuracy.”

They went on to say that the combinations possible with two-component materials greatly expand the number and variety of potential nanomaterials that could be designed.

It may be possible to produce nanomaterials in a variety of sizes, shapes and arrangements, and also move on to construct increasingly more complex materials from more than two components.

The researchers emphasized that the long-term goal of such structures is not to be static. The hope is that they will mimic or go beyond the dynamic performance of naturally occurring protein assemblies, and that eventually novel molecular protein machines could be manufactured with programmable functions. [emphasis mine]

The researchers pointed out that although designing proteins and protein-based nanomaterials is very challenging due to the relative complexity of protein structures and interactions, there are now more than a handful of laboratories around the world making major strides in this field. Each of the leading contributors has key strengths, they said. The UW team’s strengths are the accuracy with which the designed proteins match the computational models and the predictability of the results.
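As an aside for the geometrically inclined, the 24-subunit count falls naturally out of the symmetry: the tetrahedral point group has 12 rotations, and applying each of them to one copy of each of the two components yields 24 positions. Here’s a toy numpy sketch of that bookkeeping; the centroid coordinates are placeholders, and the hard part of the real work, designing the protein-protein interfaces (in Rosetta), is not modeled here,

```python
# Geometric sketch: applying the 12 rotations of the tetrahedral point group
# to one copy of each of two components yields the 24 symmetry-related
# subunit positions of a two-component tetrahedral cage.
import numpy as np

def rotation(axis, degrees):
    """Rotation matrix about a unit axis, via Rodrigues' formula."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    t = np.radians(degrees)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def tetrahedral_rotations():
    """The 12 proper rotations of the tetrahedral group T."""
    rots = [np.eye(3)]
    for axis in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]:
        rots += [rotation(axis, 120), rotation(axis, 240)]  # 3-fold axes
    for axis in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        rots.append(rotation(axis, 180))                    # 2-fold axes
    return rots

# Placeholder centroids for the two distinct components (A and B).
component_a = np.array([10.0, 0.0, 0.0])
component_b = np.array([0.0, 10.0, 0.0])

positions = [R @ c for R in tetrahedral_rotations()
             for c in (component_a, component_b)]
print(len(positions), "subunit positions")  # -> 24
```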

It seems like it’s going to be several years before we have protein nanomachines. Here’s a link to and a citation for the research paper,

Accurate design of co-assembling multi-component protein nanomaterials by Neil P. King, Jacob B. Bale, William Sheffler, Dan E. McNamara, Shane Gonen, Tamir Gonen, Todd O. Yeates, & David Baker. Nature 510, 103–108 (05 June 2014) doi:10.1038/nature13404 Published online 25 May 2014

This paper is behind a paywall but there is a free preview via ReadCube Access.

For anyone curious about the Rosetta macromolecular modeling package used in this work, you can find out more here at the Rosetta Commons website.  As for Janelia Farm, it is a research center in Virginia and is part of the Howard Hughes Medical Institute.

A complete medical checkup in a stapler-sized laboratory

I find this device strangely attractive,

© 2014 EPFL

A March 4, 2014 news item on Azonano provides more information,

About the size of a stapler, this new handheld device developed at EPFL [École polytechnique fédérale de Lausanne] is able to test a large number of proteins in our body all at once, a subtle combination of optical science and engineering.

Could it be possible one day to do a complete checkup without a doctor’s visit? EPFL’s latest discovery is headed in that direction. Professor Hatice Altug and postdoctoral fellow Arif Cetin, in collaboration with Prof. Aydogan Ozcan from UCLA [University of California at Los Angeles], have developed an “optical lab on a chip.” Compact and inexpensive, it could quickly analyze up to 170,000 different molecules in a blood sample. This method could simultaneously identify insulin levels, cancer and Alzheimer’s markers, or even certain viruses. “We were looking to build an interface similar to a car’s dashboard, which is able to indicate gas and oil levels as well as let you know if your headlights are on or if your engine is working correctly,” explains Altug.

A March 3, 2014 EPFL news release, which originated the news item, describes the technique and the device in detail,

Nanoholes on the gold substrates are compartmentalized into arrays of different sections, where each section functions as an independent sensor. The sensors are coated with special biofilms that specifically attract the targeted proteins. Consequently, multiple different proteins in the biosamples can be captured at different places on the platform and monitored simultaneously.

The diode then allows for detection of the trapped proteins almost immediately. The light shines on the platform, passes through the nano-openings, and its properties are recorded onto the CMOS chip. Since light going through the nanoscale holes changes its properties depending on the presence of biomolecules, it is possible to easily deduce the number of particles trapped on the sensors.

Laboratories normally observe the difference between the original wavelength and the resulting one, but this requires using bulky spectrometers. Hatice Altug’s ingenuity consists in choosing to ignore the light’s wavelength, or spectrum, and focus on changes in the light’s intensity instead. This method works by tuning into “surface plasmon resonance” – the collective oscillation of electrons when in contact with light. This oscillation is very different depending on the presence or absence of a particular protein. The CMOS chip then only needs to record the intensity of the oscillation.

The size, price and efficiency of this new multi-analyte device make it a highly promising invention for a multiplicity of uses. “Recent studies have shown that certain illnesses like cancer or Alzheimer’s are better diagnosed, and false positive results avoided, when several parameters can be analyzed at once,” says Hatice Altug. “Moreover, it is possible to remove the substrate and then replace it with another one, allowing the device to be adapted for a wide range of biomedical and environmental research requiring monitoring of biomolecules, chemicals and bioparticles.” The research team foresees collaborating with local hospitals in the near future to find the best way to use this new technology.
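The release describes the readout purely in terms of intensity changes, so here’s a toy Python calibration showing how, under a small-signal assumption, a fractional intensity drop could be mapped to sensor surface coverage. The sensitivity constant is invented for illustration and is not an EPFL figure,

```python
# Toy illustration of the intensity-only readout described above: in a
# small-signal regime, the fractional drop in transmitted intensity is
# assumed proportional to the fraction of the sensor surface covered by
# captured protein. The calibration constant is an invented assumption.
def surface_coverage(i_reference: float, i_measured: float,
                     sensitivity: float = 0.25) -> float:
    """Estimate fractional surface coverage (0..1) from an intensity change.

    sensitivity: assumed fractional intensity drop at full coverage.
    """
    fractional_drop = (i_reference - i_measured) / i_reference
    return min(max(fractional_drop / sensitivity, 0.0), 1.0)

# Example: a 5% intensity drop, under these assumptions, implies 20% coverage.
print(surface_coverage(i_reference=1.00, i_measured=0.95))  # 0.2
```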

Nanodiamond contact lenses in attempt to improve glaucoma treatment

A School of Dentistry, at the University of California at Los Angeles (UCLA) or elsewhere, is not my first thought as a likely source for work on improving glaucoma treatment—it turns out that I’m a bit shortsighted (pun intended).  A Feb. 14, 2014 news item on Azonano describes the issue with glaucoma treatment and a new delivery system for it developed by a research team at UCLA,

By 2020, nearly 80 million people are expected to have glaucoma, a disorder of the eye that, if left untreated, can damage the optic nerve and eventually lead to blindness.

The disease often causes pressure in the eye due to a buildup of fluid and a breakdown of the tissue that is responsible for regulating fluid drainage. Doctors commonly treat glaucoma using eye drops that can help the eye drain or decrease fluid production.

Unfortunately, patients frequently have a hard time sticking to the dosing schedules prescribed by their doctors, and the medication — when administered through drops — can cause side effects in the eye and other parts of the body.

In what could be a significant step toward improving the management of glaucoma, researchers from the UCLA School of Dentistry have created a drug delivery system that may have less severe side effects than traditional glaucoma medication and improve patients’ ability to comply with their prescribed treatments. The scientists bound together glaucoma-fighting drugs with nanodiamonds and embedded them onto contact lenses. The drugs are released into the eye when they interact with the patient’s tears.

The new technology showed great promise for sustained glaucoma treatment and, as a side benefit, the nanodiamond-drug compound even improved the contact lenses’ durability.

The Feb. 13, 2014 UCLA news release by Brianna Deane, which originated the news item, describes the nanodiamonds and how they were employed in this project,

Nanodiamonds, which are byproducts of conventional mining and refining processes, are approximately five nanometers in diameter and are shaped like tiny soccer balls. They can be used to bind a wide spectrum of drug compounds and enable drugs to be released into the body over a long period of time.

To deliver a steady release of medication into the eye, the UCLA researchers combined nanodiamonds with timolol maleate, which is commonly used in eye drops to manage glaucoma. When applied to the nanodiamond-embedded lenses, timolol is released when it comes into contact with lysozyme, an enzyme that is abundant in tears.

“Delivering timolol through exposure to tears may prevent premature drug release when the contact lenses are in storage and may serve as a smarter route toward drug delivery from a contact lens,” said Kangyi Zhang, co-first author of the study and a graduate student in Ho’s lab.

One of the drawbacks of traditional timolol maleate drops is that as little as 5 percent of the drug actually reaches the intended site. Another disadvantage is burst release, where a majority of the drug is delivered too quickly, which can cause significant amounts of the drug to “leak” or spill out of the eye and, in the most serious cases, can cause complications such as an irregular heartbeat. Drops also can be uncomfortable to administer, which leads many patients to stop using their medication.

But the contact lenses developed by the UCLA team successfully avoided the burst release effect. The activity of the released timolol was verified by a primary human-cell study.

“In addition to nanodiamonds’ promise as triggered drug-delivery agents for eye diseases, they can also make the contact lenses more durable during the course of insertion, use and removal, and more comfortable to wear,” said Ho, who is also a professor of bioengineering and a member of the Jonsson Comprehensive Cancer Center and the California NanoSystems Institute.

Even with the nanodiamonds embedded, the lenses still possessed favorable levels of optical clarity. And, although mechanical testing verified that they were stronger than normal lenses, there were no apparent changes to water content, meaning that the contact lenses’ comfort and permeability to oxygen would likely be preserved.
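To make the burst-release point concrete, here’s a toy first-order kinetics sketch in Python contrasting a fast “burst” with a slower, enzyme-gated release. The rate constants are illustrative assumptions, not numbers from the UCLA study,

```python
# Toy kinetics contrasting burst release with the slower, lysozyme-triggered
# release described above. Rate constants are illustrative assumptions only.
import numpy as np

t = np.linspace(0, 24, 25)                # hours

k_burst = 3.0                             # fast first-order release (per hour)
k_triggered = 0.1                         # slow, enzyme-gated release (per hour)

burst = 1 - np.exp(-k_burst * t)          # most drug out within the first hour
triggered = 1 - np.exp(-k_triggered * t)  # gradual release over the day

print(f"released after 1 h:  burst {burst[1]:.0%}, triggered {triggered[1]:.0%}")
print(f"released after 24 h: burst {burst[-1]:.0%}, triggered {triggered[-1]:.0%}")
```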

By this time, I was madly curious as to what these contact lenses might look like, so I found this image, which accompanies the researchers’ paper and shows what looks like a standard contact lens with an illustration of how the artist imagines the diamonds and medications functioning at the nanoscale,

nanodiamonds

[downloaded from http://pubs.acs.org/doi/abs/10.1021/nn5002968]

Here’s a link to and a citation for the paper,

Diamond Nanogel-Embedded Contact Lenses Mediate Lysozyme-Dependent Therapeutic Release by Ho-Joong Kim, Kangyi Zhang, Laura Moore, and Dean Ho. ACS Nano, Article ASAP DOI: 10.1021/nn5002968 Publication Date (Web): February 8, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

Diamonds in your teeth—for health reasons

Scientists at the University of California at Los Angeles (UCLA) in collaboration with their colleagues at the NanoCarbon Research Institute (Japan) are investigating the possibility of using nanodiamonds to promote bone growth that supports dental implants. From the Sept. 18, 2013 news item on ScienceDaily,

UCLA researchers have discovered that diamonds on a much, much smaller scale than those used in jewelry could be used to promote bone growth and the durability of dental implants.

Nanodiamonds, which are created as byproducts of conventional mining and refining operations, are approximately four to five nanometers in diameter and are shaped like tiny soccer balls. Scientists from the UCLA School of Dentistry, the UCLA Department of Bioengineering and Northwestern University, along with collaborators at the NanoCarbon Research Institute in Japan, may have found a way to use them to improve bone growth and combat osteonecrosis, a potentially debilitating disease in which bones break down due to reduced blood flow.

The Sept. 17, 2013 UCLA news release by Brianna Deane (also on EurekAlert), which originated the news item, describes how osteonecrosis affects bones and the impact that this new technique using nanodiamonds could have on applications for regenerative medicine (Note: A link has been removed),

When osteonecrosis affects the jaw, it can prevent people from eating and speaking; when it occurs near joints, it can restrict or preclude movement. Bone loss also occurs next to implants such as prosthetic joints or teeth, which leads to the implants becoming loose — or failing.
Implant failures necessitate additional procedures, which can be painful and expensive, and can jeopardize the function the patient had gained with an implant. These challenges are exacerbated when the disease occurs in the mouth, where there is a limited supply of local bone that can be used to secure the prosthetic tooth, a key consideration for both functional and aesthetic reasons.
….
During bone repair operations, which are typically costly and time-consuming, doctors insert a sponge through invasive surgery to locally administer proteins that promote bone growth, such as bone morphogenetic protein.
Ho’s team discovered that using nanodiamonds to deliver these proteins has the potential to be more effective than the conventional approaches. The study found that nanodiamonds, which are invisible to the human eye, bind rapidly to both bone morphogenetic protein and fibroblast growth factor, demonstrating that the proteins can be simultaneously delivered using one vehicle. The unique surface of the diamonds allows the proteins to be delivered more slowly, which may allow the affected area to be treated for a longer period of time. Furthermore, the nanodiamonds can be administered non-invasively, such as by an injection or an oral rinse.
“We’ve conducted several comprehensive studies, in both cells and animal models, looking at the safety of the nanodiamond particles,” said Laura Moore, the first author of the study and an M.D.-Ph.D. student at Northwestern University under the mentorship of Dr. Ho. “Initial studies indicate that they are well tolerated, which further increases their potential in dental and bone repair applications.”
“Nanodiamonds are versatile platforms,” said Ho, who is also professor of bioengineering and a member of the Jonsson Comprehensive Cancer Center and the California NanoSystems Institute. “Because they are useful for delivering such a broad range of therapies, nanodiamonds have the potential to impact several other facets of oral, maxillofacial and orthopedic surgery, as well as regenerative medicine.”
Ho’s team previously showed that nanodiamonds in preclinical models were effective at treating multiple forms of cancer. Because osteonecrosis can be a side effect of chemotherapy, the group decided to examine whether nanodiamonds might help treat the bone loss as well. Results from the new study could open the door for this versatile material to be used to address multiple challenges in drug delivery, regenerative medicine and other fields.

Here’s a citation for and a link to the researchers’ published paper,

Multi-protein Delivery by Nanodiamonds Promotes Bone Formation by L. Moore, M. Gatica, H. Kim, E. Osawa, & D. Ho. Journal of Dental Research, published online before print September 17, 2013. doi: 10.1177/0022034513504952

This paper is behind a paywall.

Looking at nanoparticles with your smartphone

Researcher Aydogan Ozcan and his team at the University of California at Los Angeles (UCLA) have developed a device which when attached to a smartphone allows the user to view viruses, bacteria, and/or nanoparticles. (Yikes, I understood nanoparticles were perceptible with haptic devices and that any work on developing optical capabilities was pretty rudimentary). From the UCLA Sept. 16, 2013 news release on EurekAlert,

Aydogan Ozcan, a professor of electrical engineering and bioengineering at the UCLA Henry Samueli School of Engineering and Applied Science, and his team have created a portable smartphone attachment that can be used to perform sophisticated field testing to detect viruses and bacteria without the need for bulky and expensive microscopes and lab equipment. The device weighs less than half a pound.

“This cellphone-based imaging platform could be used for specific and sensitive detection of sub-wavelength objects, including bacteria and viruses and therefore could enable the practice of nanotechnology and biomedical testing in field settings and even in remote and resource-limited environments,” Ozcan said. “These results also constitute the first time that single nanoparticles and viruses have been detected using a cellphone-based, field-portable imaging system.”

In the ACS [American Chemical Society] Nano paper, Ozcan details a fluorescent microscope device fabricated by a 3-D printer that contains a color filter, an external lens and a laser diode. The diode illuminates fluid or solid samples at a steep angle of roughly 75 degrees. This oblique illumination avoids detection of scattered light that would otherwise interfere with the intended fluorescent image.

Using this device, which attaches directly to the camera module on a smartphone, Ozcan’s team was able to detect single human cytomegalovirus (HCMV) particles. HCMV is a common virus that can cause birth defects such as deafness and brain damage and can hasten the death of adults who have received organ transplants, who are infected with the HIV virus or whose immune systems otherwise have been weakened. A single HCMV particle measures about 150–300 nanometers; a human hair is roughly 100,000 nanometers thick.

In a separate experiment, Ozcan’s team also detected nanoparticles — specially marked fluorescent beads made of polystyrene — as small as 90-100 nanometers.

To verify these results, researchers in Ozcan’s lab used other imaging devices, including a scanning electron microscope and a photon-counting confocal microscope. These experiments confirmed the findings made using the new cellphone-based imaging device.
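The oblique-illumination trick is easy to sanity-check with a little geometry: light arriving roughly 75 degrees off the optical axis falls far outside the acceptance cone of typical collection optics, so only the fluorescence gets through. Here’s a back-of-envelope Python check, with an assumed numerical aperture,

```python
# Back-of-envelope check of the oblique-illumination trick described above:
# at ~75 degrees the excitation beam falls far outside the acceptance cone
# of the imaging optics, so only fluorescence reaches the sensor. The
# numerical aperture is an assumed, typical figure, not Ozcan's spec.
import math

illumination_angle = 75.0          # degrees from the optical axis
numerical_aperture = 0.3           # assumed NA of the collection optics

acceptance_half_angle = math.degrees(math.asin(numerical_aperture))
print(f"acceptance half-angle ~ {acceptance_half_angle:.1f} degrees")
print("excitation rejected:", illumination_angle > acceptance_half_angle)
```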

For some reason I’m completely gobsmacked by the notion that I could look at nanoparticles on a smartphone at sometime in the foreseeable future.

Here’s a citation and a link to the paper,

Fluorescent Imaging of Single Nanoparticles and Viruses on a Smart Phone by Qingshan Wei, Hangfei Qi, Wei Luo, Derek Tseng, So Jung Ki, Zhe Wan, Zoltán Göröcs, Laurent A. Bentolila, Ting-Ting Wu, Ren Sun, and Aydogan Ozcan. ACS Nano, Article ASAP DOI: 10.1021/nn4037706 Publication Date (Web): September 9, 2013
Copyright © 2013 American Chemical Society

This paper is behind a paywall. Ozcan’s work was last mentioned here in a Jan. 21, 2013 posting about self-assembling liquid lenses.

 

Ouch, my brain hurts! Information overload in the neurosciences

Alcino Silva, a professor of neurobiology at the David Geffen School of Medicine at UCLA and of psychiatry at the Semel Institute for Neuroscience and Human Behavior, has been working on the information overload problem in neuroscience for almost 30 years. In Silva’s latest effort, he and his team are designing and testing research maps, from the Aug. 8, 2013 news item on ScienceDaily,

Before the digital age, neuroscientists got their information in the library like the rest of us. But the field’s explosion has created nearly 2 million papers — more data than any researcher can read and absorb in a lifetime.

That’s why a UCLA [University of California at Los Angeles] team has invented research maps. Equipped with an online app, the maps help neuroscientists quickly scan what is already known and plan their next study. The Aug. 8 edition of Neuron describes the findings.

The Aug. 8, 2013 UCLA news release written by Elaine Schmidt, which originated the news item, provides details about the team’s strategy for developing and testing this new tool,

Silva collaborated with Anthony Landreth, a former UCLA postdoctoral fellow, to create maps that offer simplified, interactive and unbiased summaries of research findings designed to help neuroscientists in choosing what to study next. As a testing ground for their maps, the team focused on findings in molecular and cellular cognition.

UCLA programmer Darin Gilbert Nee also created a Web-based app to help scientists expand and interact with their field’s map.

“We founded research maps on a crowd-sourcing strategy in which individual scientists add papers that interest them to a growing map of their fields,” said Silva, who started working on the problem nearly 30 years ago as a graduate student and who wrote, along with Landreth, an upcoming Oxford Press book on the subject. “Each map is interactive and searchable; scientists see as much of the map as they query, much like an online search.”

According to Silva, the map allows scientists to zero in on areas that interest them. By tracking published findings, researchers can determine what’s missing and pinpoint worthwhile experiments to pursue.

“Just as a GPS map offers different levels of zoom, a research map would allow a scientist to survey a specific research area at different levels of resolution — from coarse summaries to fine-grained accounts of experimental results,” Silva said. “The map would display no more and no less detail than is necessary for the researcher’s purposes.”

Each map encodes information by classifying it into categories and scoring the weight of its evidence based on key criteria, such as reproducibility and “convergence” — when different experiments point to a single conclusion.

The team’s next step will be to automate the map-creation process. As scientists publish papers, their findings will automatically be added to the research map representing their field.

According to Silva, automation could be achieved by using journals’ existing publication process to divide an article’s findings into smaller chapters and build “nano-publications.” Publishers would use a software plug-in to render future papers machine-readable.
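Since the release describes the map as findings plus evidence weights, here’s a minimal, hypothetical Python sketch of what one map entry might look like. The field names and the scoring rule are my guesses at the general idea, not the UCLA team’s actual schema,

```python
# Hypothetical sketch of a "research map" entry: a finding as a node, with an
# evidence weight combining reproducibility and convergence, as the article
# describes. Field names and the scoring rule are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Finding:
    agent: str                 # e.g., a molecule or intervention
    target: str                # e.g., a cellular or behavioral phenomenon
    relation: str              # e.g., "increases", "is necessary for"
    papers: list = field(default_factory=list)   # supporting citations
    replications: int = 0      # independent reproductions
    convergent: int = 0        # distinct experiment types agreeing

    def evidence_weight(self) -> float:
        """Toy score: more replications and convergent methods, more weight."""
        return 1.0 + 0.5 * self.replications + 0.75 * self.convergent

f = Finding(agent="CREB", target="long-term memory", relation="is required for",
            papers=["doi:..."], replications=3, convergent=2)
print(f.evidence_weight())  # 4.0
```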

Here’s a link to and a citation for the published paper,

The Need for Research Maps to Navigate Published Work and Inform Experiment Planning by Anthony Landreth and Alcino J. Silva.  Neuron, Volume 79, Issue 3, 411-415, 7 August 2013 doi:10.1016/j.neuron.2013.07.024

Copyright © 2013 Elsevier Inc. All rights reserved.

I have provided a link to the HTML with thumbnail images version of the paper, which appears to be open access (at least for now). I found this paper to be quite readable, from the Introduction,

The amount of published research in neuroscience has grown to be massive. The past three decades have accumulated more than 1.6 million articles alone. The rapid expansion of the published record has been accompanied by an unprecedented widening of the range of concepts, approaches, and techniques that individual neuroscientists are expected to be familiar with. The cutting edge of neuroscience is increasingly defined by studies demanding researchers in one area (e.g., molecular and cellular neuroscience) to have more than a passing familiarity with the tools, concepts, and literature of other areas (e.g., systems or behavioral neuroscience). [emphasis mine] As research relevant to a topic expands, it becomes increasingly more likely that researchers will be either overwhelmed or unaware of relevant results (or both).

Interestingly, neither author nor any of the other team members (in addition to Nee, there is John Bickle, who is not mentioned in the news release but has co-written the forthcoming book with Silva and Landreth) seems to have any background in library or archival sciences or in information architecture or records management, all fields where people deal with massive amounts of information and accessibility issues. For example, the US National Archives and Records Administration (NARA) is developing a data visualization tool (Action Science Explorer; my Dec. 9, 2011 posting profiles this project) to address some very similar issues to those faced in the neuroscience community.

Making a graphene micro-supercapacitor with a home DVD burner

Not all science research and breakthroughs require massive investments of money, sometimes all you need is a home DVD burner as this Feb. 19, 2013 news release on EurekAlert notes,

While the demand for ever-smaller electronic devices has spurred the miniaturization of a variety of technologies, one area has lagged behind in this downsizing revolution: energy-storage units, such as batteries and capacitors.

Now, Richard Kaner, a member of the California NanoSystems Institute at UCLA and a professor of chemistry and biochemistry, and Maher El-Kady, a graduate student in Kaner’s laboratory, may have changed the game.

The UCLA researchers have developed a groundbreaking technique that uses a DVD burner to fabricate micro-scale graphene-based supercapacitors — devices that can charge and discharge a hundred to a thousand times faster than standard batteries. These micro-supercapacitors, made from a one-atom–thick layer of graphitic carbon, can be easily manufactured and readily integrated into small devices such as next-generation pacemakers.

The new cost-effective fabrication method, described in a study published this week in the journal Nature Communications, holds promise for the mass production of these supercapacitors, which have the potential to transform electronics and other fields.

“Traditional methods for the fabrication of micro-supercapacitors involve labor-intensive lithographic techniques that have proven difficult for building cost-effective devices, thus limiting their commercial application,” El-Kady said. “Instead, we used a consumer-grade LightScribe DVD burner to produce graphene micro-supercapacitors over large areas at a fraction of the cost of traditional devices. [emphasis mine] Using this technique, we have been able to produce more than 100 micro-supercapacitors on a single disc in less than 30 minutes, using inexpensive materials.”

The University of California at Los Angeles (UCLA) Feb. 19, 2013 news release written by David Malasarn, which originated the EurekAlert news release, features more information about the process,

The process of miniaturization often relies on flattening technology, making devices thinner and more like a geometric plane that has only two dimensions. In developing their new micro-supercapacitor, Kaner and El-Kady used a two-dimensional sheet of carbon, known as graphene, which only has the thickness of a single atom in the third dimension.
Kaner and El-Kady took advantage of a new structural design during the fabrication. For any supercapacitor to be effective, two separated electrodes have to be positioned so that the available surface area between them is maximized. This allows the supercapacitor to store a greater charge. A previous design stacked the layers of graphene serving as electrodes, like the slices of bread on a sandwich. While this design was functional, it was not compatible with integrated circuits.
In their new design, the researchers placed the electrodes side by side using an interdigitated pattern, akin to interwoven fingers. This helped to maximize the accessible surface area available for each of the two electrodes while also reducing the path over which ions in the electrolyte would need to diffuse. As a result, the new supercapacitors have more charge capacity and rate capability than their stacked counterparts.
Interestingly, the researchers found that by placing more electrodes per unit area, they boosted the micro-supercapacitor’s ability to store even more charge.
Kaner and El-Kady were able to fabricate these intricate supercapacitors using an affordable and scalable technique that they had developed earlier. They glued a layer of plastic onto the surface of a DVD and then coated the plastic with a layer of graphite oxide. Then, they simply inserted the coated disc into a commercially available LightScribe optical drive — traditionally used to label DVDs — and took advantage of the drive’s own laser to create the interdigitated pattern. The laser scribing is so precise that none of the “interwoven fingers” touch each other, which would short-circuit the supercapacitor.
“To label discs using LightScribe, the surface of the disc is coated with a reactive dye that changes color on exposure to the laser light. Instead of printing on this specialized coating, our approach is to coat the disc with a film of graphite oxide, which then can be directly printed on,” Kaner said. “We previously found an unusual photo-thermal effect in which graphite oxide absorbs the laser light and is converted into graphene in a similar fashion to the commercial LightScribe process. With the precision of the laser, the drive renders the computer-designed pattern onto the graphite oxide film to produce the desired graphene circuits.”
“The process is straightforward, cost-effective and can be done at home,” El-Kady said. “One only needs a DVD burner and graphite oxide dispersion in water, which is commercially available at a moderate cost.”
The new micro-supercapacitors are also highly bendable and twistable, making them potentially useful as energy-storage devices in flexible electronics like roll-up displays and TVs, e-paper, and even wearable electronics.
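The interdigitated layout is simple enough to sketch in a few lines of Python: alternating fingers from two comb electrodes, with a gap (pitch minus width) that keeps the “interwoven fingers” from touching and short-circuiting. The dimensions here are illustrative, not the UCLA device’s,

```python
# Geometry sketch of the interdigitated design described above: alternating
# fingers from two electrodes, separated by a gap so the fingers never touch.
# All dimensions are illustrative assumptions, not the UCLA device's.
def interdigitated_fingers(n_fingers=8, pitch=1.0, length=10.0, width=0.6):
    """Return (electrode, x, y0, y1) strips for two interleaved comb electrodes.

    pitch: center-to-center spacing of adjacent fingers; the gap between
    neighbouring fingers is pitch - width, which must stay positive.
    """
    assert pitch > width, "fingers would touch and short-circuit"
    fingers = []
    for i in range(n_fingers):
        electrode = "A" if i % 2 == 0 else "B"
        x = i * pitch
        # Each finger is anchored to its own electrode's bus bar (top or
        # bottom) and stops short of the opposite bar, leaving the same gap.
        if electrode == "A":
            y0, y1 = 0.0, length
        else:
            y0, y1 = pitch - width, length + pitch - width
        fingers.append((electrode, x, y0, y1))
    return fingers

for f in interdigitated_fingers(n_fingers=4):
    print(f)
```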

The reference to e-paper and roll-up displays calls to mind work being done at Queen’s University (Kingston, Canada) and Roel Vertegaal’s work on bendable, flexible phones and computers (my Jan. 9, 2013 posting). Could this work on micro-supercapacitors have an impact on that work?

Here’s an image (supplied by UCLA) of the micro-supercapacitors,

Kaner and El-Kady’s micro-supercapacitors

UCLA has also supplied a video of Kaner and El-Kady discussing their work,

Interestingly, this video has been supported by GE (General Electric), a company which seems to be doing a great deal to be seen on the internet these days, as per my Feb. 11, 2013 posting titled, Visualizing nanotechnology data with Seed Media Group and GE (General Electric).

Getting back to the researchers, they are looking for industry partners as per Malasarn’s news release.

Self-assembling liquid lenses used in optical microscopy to reveal nanoscale objects

A Jan. 21, 2013 news item on Azonano highlights some research on optical microscopy with self-assembling lenses done at the University of California at Los Angeles (UCLA),

By using tiny liquid lenses that self-assemble around microscopic objects, a team from UCLA’s Henry Samueli School of Engineering and Applied Science has created an optical microscopy method that allows users to directly see objects more than 1,000 times smaller than the width of a human hair.

Coupled with computational reconstruction techniques, this portable and cost-effective platform, which has a wide field of view, can detect individual viruses and nanoparticles, making it potentially useful in the diagnosis of diseases in point-of-care settings or areas where medical resources are limited.

The UCLA Jan. 20, 2013 news release, written by Matthew Chin and which originated the news item, explains why another microscopy technique is needed for viewing objects at the nanoscale,

Electron microscopy is one of the current gold standards for viewing nanoscale objects. This technology uses a beam of electrons to outline the shape and structure of nanoscale objects. Other optical imaging–based techniques are used as well, but all of them are relatively bulky, require time for the preparation and analysis of samples, and have a limited field of view — typically smaller than 0.2 square millimeters — which can make viewing particles in a sparse population, such as low concentrations of viruses, challenging.

To overcome these issues, the UCLA team, led by Aydogan Ozcan, an associate professor of electrical engineering and bioengineering, developed the new optical microscopy platform by using nanoscale lenses that stick to the objects that need to be imaged. This lets users see single viruses and other objects in a relatively inexpensive way and allows for the processing of a high volume of samples.

At scales smaller than 100 nanometers, optical microscopy becomes a challenge because of its weak light-signal levels. Using a special liquid composition, nanoscale lenses, which are typically thinner than 200 nanometers, self-assemble around objects on a glass substrate.

A simple light source, such as a light-emitting diode (LED), is then used to illuminate the nano-lens object assembly. By utilizing a silicon-based sensor array, which is also found in cell-phone cameras, lens-free holograms of the nanoparticles are detected. The holograms are then rapidly reconstructed with the help of a personal computer to detect single nanoparticles on a glass substrate.

The researchers have used the new technique to create images of single polystyrene nanoparticles, as well as adenoviruses and H1N1 influenza viral particles.

While the technique does not offer the high resolution of electron microscopy, it has a much wider field of view — more than 20 square millimeters — and can be helpful in finding nanoscale objects in samples that are sparsely populated.
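The “rapidly reconstructed” step mentioned above is, in lens-free holography generally, a numerical refocusing of the recorded hologram. Here’s a minimal angular-spectrum back-propagation sketch in Python; it’s the standard textbook operation, not necessarily the authors’ exact pipeline, and the wavelength, pixel size and distance are illustrative,

```python
# Minimal angular-spectrum back-propagation of the kind used in lens-free
# holography to refocus a recorded hologram. This is a standard textbook
# step, not necessarily the authors' exact reconstruction pipeline, and the
# parameters (wavelength, pixel size, distance) are illustrative assumptions.
import numpy as np

def backpropagate(hologram, wavelength=470e-9, pixel=1.1e-6, z=-300e-6):
    """Propagate a hologram field by distance z via the angular spectrum method."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    # Axial spatial frequency; evanescent components are clipped to zero.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(hologram) * transfer)

# A uniform field stands in for a real recorded hologram in this sketch.
field = backpropagate(np.ones((256, 256), dtype=complex))
print(np.abs(field).mean())
```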

Here’s a citation for and a link to the research article,

Wide-field optical detection of nanoparticles using on-chip microscopy and self-assembled nanolenses by Onur Mudanyali, Euan McLeod, Wei Luo, Alon Greenbaum, Ahmet F. Coskun, Yves Hennequin, Cédric P. Allier, & Aydogan Ozcan. Nature Photonics (2013) doi:10.1038/nphoton.2012.337 Published online: 20 January 2013

The article is behind a paywall.