Tag Archives: Stanford University

1st code poetry slam at Stanford University

It’s code as in computer code and slam as in performance competition, a combination that, when joined to the word poetry, takes most of us into uncharted territory. Here’s a video clip featuring the winning entry, Say 23 by Leslie Wu, from the first code poetry slam at Stanford University (located in California),


If you listen closely (this clip does not have the best sound quality), you can hear the words to Psalm 23 (from the Bible).

Thanks to this Dec. 29, 2013 news item on phys.org for bringing this code poetry slam to my attention (Note: Links have been removed),

Leslie Wu, a doctoral student in computer science at Stanford, took an appropriately high-tech approach to presenting her poem “Say 23” at the first Stanford Code Poetry Slam.

Wu wore Google Glass as she typed 16 lines of computer code that were projected onto a screen while she simultaneously recited the code aloud. She then stopped speaking and ran the script, which prompted the computer program to read a stream of words from Psalm 23 out loud three times, each in a different prerecorded computer voice.
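
Judging only from the description above, the script’s behavior might be sketched along these lines in Ruby (the language Wu reportedly used); the voice names and the use of a text-to-speech command are my assumptions, not details from the performance:

```ruby
# A rough sketch of the mechanism described above, not Wu's actual
# 16-line poem. Three (assumed) synthesized voices each read the
# opening of Psalm 23; a real version might shell out to a
# text-to-speech tool such as macOS's `say`.
voices = ["Alex", "Victoria", "Fred"]  # hypothetical voice names
psalm  = "The Lord is my shepherd; I shall not want."

voices.each do |voice|
  # system("say", "-v", voice, psalm)  # speak aloud, where available
  puts "[#{voice}] #{psalm}"           # printed stand-in for speech
end
```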

Wu, whose multimedia presentation earned her first place, was one of eight finalists to present at the Code Poetry Slam. Organized by Melissa Kagen, a graduate student in German studies, and Kurt James Werner, a graduate student in computer-based music theory and acoustics, the event was designed to explore the creative aspects of computer programming.

The Dec. 27, 2013 Stanford University news release by Mariana Lage, which originated the news item, goes on to describe the concept, the competition, and the organizers’ aims,

With presentations that ranged from poems written in a computer language format to those that incorporated digital media, the slam demonstrated the entrants’ broad interpretation of the definition of “code poetry.”

Kagen and Werner developed the code poetry slam as a means of investigating the poetic potentials of computer-programming languages.

“Code poetry has been around a while, at least in programming circles, but the conjunction of oral presentation and performance sounded really interesting to us,” said Werner. Added Kagen, “What we are interested is in the poetic aspect of code used as language to program a computer.”

Sponsored by the Division of Literatures, Cultures, and Languages, the slam drew online submissions from Stanford and beyond.

High school students and professors, graduate students and undergraduates from engineering, computer science, music, language and literature incorporated programming concepts into poem-like forms. Some of the works were written entirely in executable code, such as Ruby and C++ languages, while others were presented in multimedia formats. The works of all eight finalists can be viewed on the Code Poetry Slam website.

Kagen, Werner and Wu agree that code poetry requires some knowledge of programming from the spectators.

“I feel it’s like trying to read a poem in a language with which you are not comfortable. You get the basics, but to really get into the intricacies you really need to know that language,” said Kagen, who studies the traversal of musical space in Wagner and Schoenberg.

Wu noted that when she was typing the code most people didn’t know what she was doing. “They were probably confused and curious. But when I executed the poem, the program interpreted the code and they could hear words,” she said, adding that her presentation “gave voice to the code.”

“The code itself had its own synthesized voice, and its own poetics of computer code and singsong spoken word,” Wu said.

One of the contenders showed a poem that was “misread” by the computer.

“There was a bug in his poem, but more interestingly, there was the notion of a correct interpretation which is somewhat unique to computer code. Compared to human language, code generally has few interpretations or, in most cases, just one,” Wu said.

So what exactly is code poetry? According to Kagen, “Code poetry can mean a lot of different things depending on whom you ask.

“It can be a piece of text that can be read as code and run as program, but also read as poetry. It can mean a human language poetry that has mathematical elements and codes in it, or even code that aims for elegant expression within severe constraints, like a haiku or a sonnet, or code that generates automatic poetry. Poems that are readable to humans and readable to computers perform a kind of cyborg double coding.”
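
As a toy illustration of the first sense Kagen mentions (text that runs as a program and also reads as verse), here is a hypothetical Ruby fragment of my own devising, not an entry from the slam:

```ruby
# Hypothetical illustration, not one of the slam entries.
# Read aloud it is nearly a sentence; run, it is a program.
want = false
the_lord = "is my shepherd"
i = { shall: (:not unless want) }
puts "the lord #{the_lord}" if i[:shall] == :not
```

Run with `ruby`, it prints “the lord is my shepherd”; read aloud, it approximates the psalm’s opening line.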

Werner noted that “Wu’s poem incorporated a lot of different concepts, languages and tools. It had Ruby language, Japanese and English, was short, compact and elegant. It did a lot for a little code.” Werner served as one of the four judges along with Kagen; Caroline Egan, a doctoral student in comparative literature; and Mayank Sanganeria, a master’s student at the Center for Computer Research in Music and Acoustics (CCRMA).

Kagen and Werner got some expert advice on judging from Michael Widner, the academic technology specialist for the Division of Literatures, Cultures and Languages.

Widner, who reviewed all of the submissions, noted that the slam allowed scholars and the public to “probe the connections between the act of writing poetry and the act of writing code, which as anyone who has done both can tell you are oddly similar enterprises.”

A scholar who specializes in the study of both medieval and machine languages, Widner said that “when we realize that coding is a creative act, we not only value that part of the coder’s labor, but we also realize that the technologies in which we swim have assumptions and ideologies behind them that, perhaps, we should challenge.”

I first encountered code poetry in 2006 and I don’t think it was new at that time but this is the first time I’ve encountered a code poetry slam. For the curious, here’s more about code poetry from the Digital poetry essay in Wikipedia (Note: Links have been removed),

… There are many types of ‘digital poetry’ such as hypertext, kinetic poetry, computer generated animation, digital visual poetry, interactive poetry, code poetry, holographic poetry (holopoetry), experimental video poetry, and poetries that take advantage of the programmable nature of the computer to create works that are interactive, or use generative or combinatorial approach to create text (or one of its states), or involve sound poetry, or take advantage of things like listservs, blogs, and other forms of network communication to create communities of collaborative writing and publication (as in poetical wikis).

The Stanford organizers have been sufficiently delighted with the response to their 1st code poetry slam that they are organizing a 2nd slam (from the Code Poetry Slam 1.1 homepage),

Call for Works 1.1

Submissions for the second Slam are now open! Submit your code/poetry to the Stanford Code Poetry Slam, sponsored by the Department of Literatures, Cultures, and Languages! Submissions due February 12th, finalists invited to present their work at a poetry slam (place and time TBA). Cash prizes and free pizza!

Stanford University’s Division of Literatures, Cultures, and Languages (DLCL) sponsors a series of Code Poetry Slams. Code Poetry Slam 1.0 was held on November 20th, 2013, and Code Poetry Slam 1.1 will be held Winter quarter 2014.

According to Lage’s news release you don’t have to be associated with Stanford University to be a competitor but, given that you will be performing your poetry there, you will likely have to live in some proximity to the university.

Do you hear what I hear?

It’s coming up Christmas time and as my thoughts turn to the music, Stanford University (California, US) researchers are focused on hearing and touch (the two are related) according to a Dec. 4, 2013 news item on Nanowerk,

Much of what is known about sensory touch and hearing cells is based on indirect observation. Scientists know that these exceptionally tiny cells are sensitive to changes in force and pressure. But to truly understand how they function, scientists must be able to manipulate them directly. Now, Stanford scientists are developing a set of tools that are small enough to stimulate an individual nerve or group of nerves, but also fast and flexible enough to mimic a realistic range of forces.

The Dec. 3, 2013 Stanford Report article by Cynthia McKelvey, which originated the news item, provides more detail about hearing and the problem the researchers are attempting to solve,

Our ability to interpret sound is largely dependent on bundles of thousands of tiny hair cells that get their name from the hair-like projections on their top surfaces. As sound waves vibrate the bundles, they force proteins in the cells’ surfaces to open and allow electrically charged molecules, called ions, to flow into the cells. The ions stimulate each hair cell, allowing it to transfer information from the sound wave to the brain. Hair bundles are more sensitive to particular frequencies of sound, which allows us to tell the difference between a siren and a subwoofer.

People with damaged or congenital defects in these delicate hair cells suffer from severe, irreversible hearing loss. Scientists remain unsure how to treat this form of hearing loss because they do not know how to repair or replace a damaged hair cell. Physical manipulation of the cells is key to exploring the fine details of how they function. This new probe is the first tool nimble enough to do it.

The article also goes on to describe the ‘nano’ probe,

The new force probe represents several advantages over traditional glass force probes. At 300 nanometers thick, Pruitt’s [Beth Pruitt, an associate professor of mechanical engineering] probe is just three-thousandths the width of a human hair. Made of flexible silicon, the probe can mimic a much wider range of sound wave frequencies than rigid glass probes, making it more practical for studying hearing. The probe also measures the force it exerts on hair cells as it pushes, a new achievement for high-speed force probes at such small sizes.

Manipulating the probe requires a gentle touch, said Pruitt’s collaborator, Anthony Ricci, a professor of otolaryngology at the Stanford School of Medicine. The tissue samples – in this case, hair cells from a rat’s ear – sit under a microscope on a stage floating on a cushion of air that keeps it isolated from vibrations.

The probe is controlled using three dials that function similarly to an Etch-a-Sketch. The first step of the experiment involves connecting a tiny, delicate glass electrode to the body of a single hair cell.

Using a similar manipulator, Ricci and his team then press the force probe on a single hair cell, and the glass electrode records the changes in the cell’s electrical output. Pruitt and Ricci say that understanding how physical changes prompt electrical responses in hair cells can lead to a better understanding of how people lose their hearing following damage to the hair cells.

The force probe has the potential to catalyze future research on sensory science, Ricci said.

Up to now, limits in technology have held scientists back from understanding important functions such as hearing, touch, and balance. Like hair cells in the ear, cells involved in touch and balance react to the flexing and stretching of their cell membranes. The force probe can be used to study those cells in the same manner that Pruitt and Ricci are using it to study hair cells.

Understanding the mechanics of how cells register these sensory inputs could lead to innovative new treatments and prosthetics. For example, Pruitt and Ricci think their research could help bioengineers build a better hair cell for people with impaired hearing from damage to their natural hair cells.

Stanford has produced a video about this work,

I find it fascinating that hearing and touch are related although I haven’t yet seen anything that describes or explains the relationship. As for anyone hoping for a Christmas carol, I think I’m going to hold off until later in the season.

Carbon nanotubes a second way: Cedric, the carbon nanotube computer

On the heels of yesterday’s (Sept. 26, 2013) posting about carbon nanotubes as flexible gas sensors, I have this item about a computer fashioned from carbon nanotubes.

This wafer contains tiny computers using carbon nanotubes, a material that could lead to smaller, more energy-efficient processors. Courtesy Stanford University

To me this looks more like a ping pong bat than a computer wafer. Regardless, here’s more about it from a Sept. 25, 2013 news item by James Morgan for BBC (British Broadcasting Corporation) News online,

The first computer built entirely with carbon nanotubes has been unveiled, opening the door to a new generation of digital devices.

“Cedric” is only a basic prototype but could be developed into a machine which is smaller, faster and more efficient than today’s silicon models.

Nanotubes have long been touted as the heir to silicon’s throne, but building a working computer has proven awkward.

Cedric is the most complex carbon-based electronic system yet realised.

So is it fast? Not at all. It might have been in 1955.

The computer operates on just one bit of information, and can only count to 32.

“In human terms, Cedric can count on his hands and sort the alphabet. But he is, in the full sense of the word, a computer,” says co-author [of the paper published in Nature] Max Shulaker.

Tom Abate’s Sept. 26, 2013 article for Stanford Report provides more detail about carbon nanotubes, their potential for replacing silicon chips and associated problems,

“Carbon nanotubes [CNTs] have long been considered as a potential successor to the silicon transistor,” said Professor Jan Rabaey, a world expert on electronic circuits and systems at the University of California-Berkeley.

Why worry about a successor to silicon?

Such concerns arise from the demands that designers place upon semiconductors and their fundamental workhorse unit, those on-off switches known as transistors.

For decades, progress in electronics has meant shrinking the size of each transistor to pack more transistors on a chip. But as transistors become tinier, they waste more power and generate more heat – all in a smaller and smaller space, as evidenced by the warmth emanating from the bottom of a laptop.

Many researchers believe that this power-wasting phenomenon could spell the end of Moore’s Law, named for Intel Corp. co-founder Gordon Moore, who predicted in 1965 that the density of transistors would double roughly every two years, leading to smaller, faster and, as it turned out, cheaper electronics.
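
Moore’s prediction is simple enough to state as arithmetic; assuming a clean doubling every two years from some starting density, transistor density grows by a factor of 32 per decade:

```ruby
# Moore's law as stated above: density doubles roughly every two years.
def density(d0, year)
  d0 * 2 ** ((year - 1965) / 2.0)  # d0 is the (arbitrary) 1965 density
end

puts density(1, 1975)  # => 32.0, i.e. a 32x increase in one decade
```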

But smaller, faster and cheaper has also meant smaller, faster and hotter.

“CNTs could take us at least an order of magnitude in performance beyond where you can project silicon could take us,” Wong [another co-author of the paper] said.

But inherent imperfections have stood in the way of putting this promising material to practical use.

First, CNTs do not necessarily grow in neat parallel lines, as chipmakers would like.

Over time, researchers have devised tricks to grow 99.5 percent of CNTs in straight lines. But with billions of nanotubes on a chip, even a tiny degree of misaligned tubes could cause errors, so that problem remained.

A second type of imperfection has also stymied CNT technology.

Depending on how the CNTs grow, a fraction of these carbon nanotubes can end up behaving like metallic wires that always conduct electricity, instead of acting like semiconductors that can be switched off.

Since mass production is the eventual goal, researchers had to find ways to deal with misaligned and/or metallic CNTs without having to hunt for them like needles in a haystack.

“We needed a way to design circuits without having to look for imperfections or even know where they were,” Mitra said.

The researchers have dubbed their solution an “imperfection-immune design,” from the Abate article,

To eliminate the wire-like or metallic nanotubes, the Stanford team switched off all the good CNTs. Then they pumped the semiconductor circuit full of electricity. All of that electricity concentrated in the metallic nanotubes, which grew so hot that they burned up and literally vaporized into tiny puffs of carbon dioxide. This sophisticated technique eliminated the metallic CNTs in the circuit.

Bypassing the misaligned nanotubes required even greater subtlety.

The Stanford researchers created a powerful algorithm that maps out a circuit layout that is guaranteed to work no matter whether or where CNTs might be askew.

“This ‘imperfections-immune design’ [technique] makes this discovery truly exemplary,” said Sankar Basu, a program director at the National Science Foundation.

The Stanford team used this imperfection-immune design to assemble a basic computer with 178 transistors, a limit imposed by the fact that they used the university’s chip-making facilities rather than an industrial fabrication process.

Their CNT computer performed tasks such as counting and number sorting. It runs a basic operating system that allows it to swap between these processes. In a demonstration of its potential, the researchers also showed that the CNT computer could run MIPS, a commercial instruction set developed in the early 1980s by then Stanford engineering professor and now university President John Hennessy.

Though it could take years to mature, the Stanford approach points toward the possibility of industrial-scale production of carbon nanotube semiconductors, according to Naresh Shanbhag, a professor at the University of Illinois at Urbana-Champaign and director of SONIC, a consortium of next-generation chip design research.

Here’s a link to and a citation for the paper,

Carbon nanotube computer by Max M. Shulaker, Gage Hills, Nishant Patil, Hai Wei, Hong-Yu Chen, H.-S. Philip Wong, & Subhasish Mitra. Nature 501, 526–530 (26 September 2013) doi:10.1038/nature12502

This article is behind a paywall but you can gain temporary access via ReadCube.

Keeping it together—new glue for lithium-ion batteries

Glue isn’t the first component that comes to my mind when discussing ways to make lithium-ion (Li-ion) batteries more efficient but researchers at SLAC National Accelerator Laboratory at Stanford University have proved that the glue used to bind a Li-ion battery together can make a difference to its efficiency (from the Aug. 20, 2013 news item on phys.org),

When it comes to improving the performance of lithium-ion batteries, no part should be overlooked – not even the glue that binds materials together in the cathode, researchers at SLAC and Stanford have found.

Tweaking that material, which binds lithium sulfide and carbon particles together, created a cathode that lasted five times longer than earlier designs, according to a report published last month in Chemical Science. The research results are some of the earliest supported by the Department of Energy’s Joint Center for Energy Storage Research.

“We were very impressed with how important this binder was in improving the lifetime of our experimental battery,” said Yi Cui, an associate professor at SLAC and Stanford who led the research.

The Aug. 19, 2013 SLAC news release by Mike Ross, which originated the news item, provides context for this accidental finding about glue and Li-ion batteries,

Researchers worldwide have been racing to improve lithium-ion batteries, which are one of the most promising technologies for powering increasingly popular devices such as mobile electronics and electric vehicles. In theory, using silicon and sulfur as the active elements in the batteries’ terminals, called the anode and cathode, could allow lithium-ion batteries to store up to five times more energy than today’s best versions. But finding specific forms and formulations of silicon and sulfur that will last for several thousand charge-discharge cycles during real-life use has been difficult.

Cui’s group was exploring how to create a better cathode by using lithium sulfide rather than sulfur. The lithium atoms it contains can provide the ions that shuttle between anode and cathode during the battery’s charge/discharge cycle; this in turn means the battery’s other electrode can be made from a non-lithium material, such as silicon. Unfortunately, lithium sulfide is also electrically insulating, which greatly reduces any battery’s performance. To overcome this, electrically conducting carbon particles can be mixed with the sulfide; a glue-like material – the binder – holds it all together.

Scientists in Cui’s group devised a new binder that is particularly well-suited for use with a lithium sulfide cathode – and that also binds strongly with intermediate polysulfide molecules that dissolve out of the cathode and diminish the battery’s storage capacity and useful lifetime.

The experimental battery using the new binder, known by the initials PVP, retained 94 percent of its original energy-storage capacity after 100 charge/discharge cycles, compared with 72 percent for cells using a conventionally-used binder, known as PVDF. After 500 cycles, the PVP battery still had 69 percent of its initial capacity.
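
Those retention figures imply very different per-cycle fade rates for the two binders. Here is a back-of-the-envelope calculation (my own, assuming simple geometric fade, which real cells only approximate):

```ruby
# Per-cycle retention implied by 94% (PVP) vs 72% (PVDF) after 100 cycles,
# under the simplifying assumption that capacity fades geometrically.
pvp_per_cycle  = 0.94 ** (1.0 / 100)  # ~0.9994
pvdf_per_cycle = 0.72 ** (1.0 / 100)  # ~0.9967

# Extrapolating the PVP rate out to 500 cycles:
puts((pvp_per_cycle ** 500).round(2))  # => 0.73, vs the reported 69%
```

The extrapolation overshoots the measured 69 percent slightly, which suggests the real fade runs a bit faster than geometric.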

Cui said the improvement was due to PVP’s much stronger affinity for lithium sulfide; together they formed a fine-grained lithium sulfide/carbon composite that made it easier for lithium ions to penetrate and reach all of the active material within the cathode. In contrast, the previous binder, PVDF, caused the composite to grow into large clumps, which hindered the lithium ions’ penetration and ruined the battery within 100 cycles.

Even the best batteries lose some energy-storage capacity with each charge/discharge cycle. Researchers aim to reduce such losses as much as possible. Further enhancements to the PVP/lithium sulfide cathode combination will be needed to extend its lifetime to more than 1,000 cycles, but Cui said he finds it encouraging that improving the usually overlooked binder material produced such dramatic benefits.

Here’s a link to and a citation for the published paper,

Stable cycling of lithium sulfide cathodes through strong affinity with a bifunctional binder by Zhi Wei Seh, Qianfan Zhang, Weiyang Li, Guangyuan Zheng, Hongbin Yao, and Yi Cui. Chem. Sci., 2013, 4, 3673–3677. DOI: 10.1039/C3SC51476E. First published online 11 Jul 2013.

There’s a note on the website stating the article is free, but the instructions for accessing it are confusing, seeming to suggest that you need a subscription of some sort or that you need to register for the site.

I have written about Yi Cui’s work with lithium-ion batteries before including this Jan. 9, 2013 posting, How is an eggshell like a lithium-ion battery?, which also features a news release by Mike Ross.

Teachers play with crayons while learning about nanotechnology at Stanford University

Stanford University’s Center for Probing the Nanoscale runs a weeklong program for middle school science teachers featuring lectures and more (from the Center’s Summer Institute for Middle School Teachers webpage),

Daily sessions focus on content lectures and inquiry-based modules that explicitly address California’s 5-8th grade physical science content standards. Teachers will also receive a hands-on activity kit with many fun activities that bring nanoscience into the classroom.

  • learn about nanoscience and nanotechnology in simple terms
  • develop and receive hands-on activities targeting CA 5-8th grade science content standards
  • interact with scientists at Stanford University
  • tour research labs and see instruments in action
  • receive an $850 stipend and professional development units ($550 after completion of SIMST, $300 after implementing a nano lesson in the classroom)

An Aug. 1, 2013 news item on Azonano provides a description of a “fun activity,”

After a lecture on nanofabrication, Maria Wang, associate director at Stanford’s Center for Probing the Nanoscale, handed out white paper, boxes of colored crayons, thick black crayons and pipette tips. …

Following Wang’s directions, the 13 teachers quickly got to work. Each filled a small square of paper with color, covered the color entirely with several layers of black crayon, then etched a design into the paper by pressing the pipette tip through the black layers to expose a colorful pattern – mimicking the plasma etching they had learned about in the lecture.

“These pipette tips are kind of high tech for this activity, but I think it’s neat to find any opportunity to introduce tools that we use in the lab to you,” Wang told the teachers, who were sitting at small tables in a classroom in the McCullough Building. “You can actually just use a paper clip as a low-cost solution in case you don’t have access to pipette tips.”

Michael Wilson, who teaches sixth-, seventh- and eighth-grade science at Stewart Elementary School in Pinole, Calif., said crayon etching, which is taught in art classes at his school, offered the possibility of injecting a “little science” into an art lesson.

“Now we can say this is why that happens,” said Wilson.

The crayon etching exercise, a demonstration of “top-down fabrication,” was one of a dozen hands-on activities scheduled during the July 22-26 [2013] summer institute.

The July 31, 2013 Stanford Report article by Kathleen J. Sullivan, which originated the news item, explains this program’s raison d’être,

David Goldhaber-Gordon, director of the center [Center for Probing the Nanoscale], said most elementary school students are excited about science, but lose interest or confidence in their ability to do science during the middle school years.

“Middle school science teachers are hungry for both subject area knowledge and for reinvigorating their passion for science,” said Goldhaber-Gordon, a Stanford associate professor of physics. “We select teacher participants primarily from schools with students who are traditionally underrepresented in science. In this way, we hope to impact the lives and decisions of thousands of students each year. More than ever today, as our economy is driven by scientific and technological developments, we need a scientifically literate populace. Middle school is a key time to reach students.”

Interestingly (to me), the center is a joint Stanford University/IBM Corporation project.

PGClear and remediating soil contaminated by chlorinated compounds

The story twists and turns a bit and some of the details are a little indistinct but it seems *a new technology, PGClear, has been developed for cleaning up water and soil. From the Apr. 16, 2013 news item on Nanowerk,

Researchers from Rice University [Texas], DuPont Central Research and Development and Stanford University [California] have announced a full-scale field test of an innovative process that gently but quickly destroys some of the world’s most pervasive and problematic pollutants. The technology, called PGClear, originated from basic scientific research at Rice during a 10-year, federally funded initiative to use nanotechnology to clean the environment.

PGClear uses a combination of palladium and gold metal to break down hazardous compounds like vinyl chloride, trichloroethene (TCE) and chloroform into nontoxic byproducts.

“Chlorinated compounds were widely used as solvents for many decades, and they are common groundwater contaminants the world over,” said Rice’s Michael Wong, professor of chemical and biomolecular engineering and the lead researcher on the PGClear project. “These compounds are also extremely difficult to treat inexpensively with conventional technology. My lab began its work to solve this problem more than a decade ago.”

The Apr. 15, 2013 Rice University news release, which originated the news item, provides more detail about Wong’s work and how it came to be applied to remediation of chlorine-based contaminants (Note: Links have been removed),

Wong began working on the catalytic remediation technology shortly after arriving at Rice in 2001, the same year Rice won a grant from the National Science Foundation for the Center for Biological and Environmental Nanotechnology (CBEN). CBEN, a 10-year, $25 million effort, was the world’s first academic research center dedicated to studying the interaction of nanomaterials with living organisms and ecosystems. CBEN was one of the first six U.S. academic research centers funded by the National Nanotechnology Initiative.

“Prior research had shown that palladium was an effective catalyst for breaking down TCE, but palladium is expensive, so it was thought to be impractical,” Wong said. “At CBEN, we used nanotechnology to design particles in which every atom of palladium was used to catalyze the reaction. We also found that adding a tiny bit of gold enhanced the reaction.”

DuPont contacted Wong about the award-winning research in 2007 and proposed developing a scalable process to use the palladium-gold catalysts to treat other chlorinated pollutants like chloroform and vinyl chloride. With additional support from the World Gold Council in London, researchers from Rice and DuPont worked to refine the catalyst and the process. They also worked with the South African mineral research organization MINTEK, which produced the catalytic pellets for the first PGClear unit. Gold and palladium make up only about 1 percent of material in each of the purple-black pellets.

Rice has supplied a video of the researchers discussing their work with palladium-gold pellets,

Here’s the plan for the unit that will be used by Dupont (from the Rice University news release),

The first large-scale PGClear unit, which is designed to treat groundwater contaminated with chloroform, is scheduled for installation at a DuPont site in Louisville, Ky., in June [2013?]. The 6-by-8-foot unit contains valves and pipes that will carry groundwater to a series of tubes that each contain thousands of pellets of palladium-gold (PG) catalyst. The pellets, which are about the size of a grain of rice, spur a chemical reaction that breaks down chloroform into nontoxic methane and chloride salt. [emphasis mine]

“The palladium-gold catalyst has so far performed well for remediating groundwater samples collected at DuPont,” said Brad Nave, director of the DuPont Remediation Project. “While the project is not yet full-scale, our next step will subject the technology to the rigors of real-world field conditions. Rice, Stanford and DuPont have been working on the details of the field pilot for several years, and we’re looking forward to a successful test.”

While it’s good to note that the pollutants are broken down into nontoxic materials, it would have been interesting to find out what happens to the pellets over time (presumably they become less effective and need to be replaced with new pellets while the old ones are disposed of) and to find out how the groundwater is being captured for purification.

* Correction: ‘there’s’ changed to ‘a’ on Sept. 23, 2013.

Bringing home the chilling effects of outer space

They’ve invented a new type of cooling structure at Stanford University (California) that reflects sunlight back into outer space. From the Apr. 16, 2013 news item on Azonano,

A team of researchers at Stanford has designed an entirely new form of cooling structure that cools even when the sun is shining. Such a structure could vastly improve the daylight cooling of buildings, cars and other structures by reflecting sunlight back into the chilly vacuum of space.

The Apr. 15, 2013 Stanford Report by Andrew Myers, which originated the news item, describes the problem the engineers were solving,

The trick, from an engineering standpoint, is twofold. First, the reflector has to reflect as much of the sunlight as possible. Poor reflectors absorb too much sunlight, heating up in the process and defeating the goal of cooling.

The second challenge is that the structure must efficiently radiate heat (from a building, for example) back into space. Thus, the structure must emit thermal radiation very efficiently within a specific wavelength range in which the atmosphere is nearly transparent. Outside this range, the thermal radiation interacts with Earth’s atmosphere. Most people are familiar with this phenomenon. It’s better known as the greenhouse effect – the cause of global climate change.
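That "nearly transparent" range is the mid-infrared atmospheric window, roughly 8 to 13 micrometres (a commonly cited figure; the news release doesn't give the numbers). A quick Wien's-law calculation shows why a rooftop at roughly room temperature is such a good match for that window,

```python
# Wien's displacement law: an object at temperature T emits thermal
# radiation peaking at wavelength lambda_peak = b / T. For a rooftop
# near room temperature the peak lands inside the assumed 8-13 um
# atmospheric window (the window figure is not from the news release).
WIEN_B = 2.898e-3        # Wien's displacement constant, metre-kelvins
T = 300.0                # roughly room temperature, in kelvins

lambda_peak_um = WIEN_B / T * 1e6   # metres converted to micrometres
print(round(lambda_peak_um, 1))     # about 9.7, inside the 8-13 um window
```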

Here’s the approach they used,

Radiative cooling at nighttime has been studied extensively as a mitigation strategy for climate change, yet peak demand for cooling occurs in the daytime.

“No one had yet been able to surmount the challenges of daytime radiative cooling – of cooling when the sun is shining,” said Eden Rephaeli, a doctoral candidate in Fan’s [Shanhui Fan, a professor of electrical engineering and the paper's senior author] lab and a co-first-author of the paper. “It’s a big hurdle.”

The Stanford team has succeeded where others have come up short by turning to nanostructured photonic materials. These materials can be engineered to enhance or suppress light reflection in certain wavelengths.

“We’ve taken a very different approach compared to previous efforts in this field,” said Aaswath Raman, a doctoral candidate in Fan’s lab and a co-first-author of the paper. “We combine the thermal emitter and solar reflector into one device, making it both higher performance and much more robust and practically relevant. In particular, we’re very excited because this design makes viable both industrial-scale and off-grid applications.”

Using engineered nanophotonic materials, the team was able to strongly suppress how much heat-inducing sunlight the panel absorbs, while it radiates heat very efficiently in the key frequency range necessary to escape Earth’s atmosphere. The material is made of quartz and silicon carbide, both very weak absorbers of sunlight.

This new approach offers both economic and social benefits,

The new device is capable of achieving a net cooling power in excess of 100 watts per square meter. By comparison, today’s standard 10-percent-efficient solar panels generate about the same amount of power. That means Fan’s radiative cooling panels could theoretically be substituted on rooftops where existing solar panels feed electricity to air conditioning systems needed to cool the building.

To put it a different way, a typical one-story, single-family house with just 10 percent of its roof covered by radiative cooling panels could offset 35 percent of its entire air conditioning needs during the hottest hours of the summer.

Radiative cooling has another profound advantage over other cooling equipment, such as air conditioners. It is a passive technology. It requires no energy. It has no moving parts. It is easy to maintain. You put it on the roof or the sides of buildings and it starts working immediately.

Beyond the commercial implications, Fan and his collaborators foresee a broad potential social impact. Much of the human population on Earth lives in sun-drenched regions huddled around the equator. Electrical demand to drive air conditioners is skyrocketing in these places, presenting an economic and environmental challenge. These areas tend to be poor and the power necessary to drive cooling usually means fossil-fuel power plants that compound the greenhouse gas problem.

“In addition to these regions, we can foresee applications for radiative cooling in off-the-grid areas of the developing world where air conditioning is not even possible at this time. There are large numbers of people who could benefit from such systems,” Fan said.
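Those cooling-panel numbers invite a quick back-of-the-envelope check. Only the 100 watts per square meter, the 10 percent coverage, and the 35 percent offset come from the article; the roof area and peak-sunlight figures below are my own assumptions,

```python
# Sanity-checking the cooling-panel arithmetic in the article.
net_cooling_w_per_m2 = 100.0  # net cooling power, from the article
roof_area_m2 = 150.0          # assumed roof of a typical one-story house
coverage = 0.10               # 10% of the roof covered, from the article

panel_cooling_w = net_cooling_w_per_m2 * roof_area_m2 * coverage
print(panel_cooling_w)        # 1500.0 W of cooling from 15 m^2 of panels

# If 1.5 kW offsets 35% of the house's cooling demand, the implied
# peak cooling demand is a plausible ~4.3 kW:
implied_demand_w = panel_cooling_w / 0.35
print(round(implied_demand_w))

# The solar-panel comparison also checks out: a 10%-efficient panel
# under ~1000 W/m^2 of peak sunlight (a standard test value, my
# assumption) delivers the same 100 W/m^2.
print(0.10 * 1000)
```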

Here’s a citation and a link for the paper,

Ultrabroadband Photonic Structures To Achieve High-Performance Daytime Radiative Cooling by Eden Rephaeli, Aaswath Raman, and Shanhui Fan.  Nano Lett. [American Chemical Society Nano Letters], 2013, 13 (4), pp 1457–1461
DOI: 10.1021/nl4004283 Publication Date (Web): March 5, 2013
Copyright © 2013 American Chemical Society

The article is behind a paywall.

For anyone who might be interested in what constitutes hot temperatures, here’s a sampling from the Wikipedia List of weather records (Note: I have removed links and included only countries which experienced temperatures of 43.9 °C or 111 °F or more; I made one exception: Antarctica),

Temperature / Location / Date

North America / On Earth

56.7 °C (134 °F) Furnace Creek Ranch (formerly Greenland Ranch), in Death Valley, California, United States 1913-07-10

Canada

45.0 °C (113 °F) Midale and Yellow Grass, Saskatchewan 1937-07-05

Mexico

52 °C (125.6 °F) San Luis Rio Colorado, Sonora

Africa

55.0 °C (131 °F) Kebili, Tunisia 1931-07-07

Algeria

50.6 °C (123.1 °F) In Salah, Tamanrasset Province 2002-07-12

Benin

44.5 °C (112 °F) Kandi  ?

Burkina Faso

47.2 °C (117 °F) Dori  ?

Cameroon

47.7 °C (117.9 °F) Kousseri  ?

Central African Republic

45 °C (113 °F) Birao  ?

Chad

47.6 °C (117.7 °F) Faya-Largeau 2010-06-22

Djibouti

49.5 °C (121 °F) Tadjourah  ?

Egypt

50.3 °C (122.6 °F) Kharga  ?

Eritrea

48 °C (118.4 °F) Massawa  ?

Ethiopia

48.9 °C (120 °F) Dallol  ?

The Gambia

45.5 °C (114 °F) Basse Santa Su 2008-?-?

Ghana

43.9 °C (111 °F) Navrongo  ?

Libya

50.2 °C (122.4 °F) Zuara 1995-06

Malawi

45 °C (113 °F) Ngabu, Chikwawa  ?

Mali

48.2 °C (118 °F) Gao  ?

Mauritania

50.0 °C (122 °F) Akjoujt  ?

Morocco

49.6 °C (121.3 °F) Marrakech 2012-07-17

Mozambique

47.3 °C (117.2 °F) Chibuto 2009-02-03

Namibia

47.8 °C (118 °F) Noordoewer 2009-02-06

Niger

48.2 °C (118.8 °F) Bilma 2010-06-23

Nigeria

46.4 °C (115.5 °F) Yola 2010-04-03

Somalia

47.8 °C (118 °F) Berbera  ?

South Africa

50.0 °C (122 °F) Dunbrody, Eastern Cape 1918

Sudan

49.7 °C (121.5 °F) Dongola 2010-06-25

Swaziland

46.1 °C (115 °F) Sidvokodvo  ?

Zimbabwe

45.6 °C (114 °F) Beitbridge  ?

Asia

53.6 °C (128.5 °F) Sulaibya, Kuwait 2012-07-31

Bangladesh

45.1 °C (113.2 °F) Rajshahi 1972-04-30

China

49.7 °C (121.5 °F) Ayding Lake, Turpan, Xinjiang, China 2008-08-03

India

50 °C (122 °F) Sri Ganganagar and Dholpur, Rajasthan  ?

Iraq

52.0 °C (125.6 °F) Basra 2010-06-14; Ali Air Base, Nasiriyah 2011-08-02

Israel

53 °C (127.4 °F) Tirat Zvi, Israel 1942-06-21

Myanmar

47.0 °C (116.6 °F) Myinmu 2010-05-12

Pakistan

53.5 °C (128.3 °F) Mohenjo-daro, Sindh 2010-05-26

Qatar

50.4 °C (122.7 °F) Doha 2010-07-14

Saudi Arabia

52.0 °C (125.6 °F) Jeddah 2010-06-22

Thailand

44.5 °C (112.1 °F) Uttaradit 1960-04-27

Turkey

48.8 °C (119.8 °F) Mardin 1993-08-14

Oceania

50.7 °C (123.3 °F) Oodnadatta, South Australia, Australia 1960-01-02

South America

49.1 °C (120.4 °F) Villa de María, Argentina 1920-01-02

Paraguay

45 °C (113 °F) Pratts Gill, Boquerón Department 2009-11-14

Uruguay

44 °C (111.2 °F) Paysandú, Paysandú Department 1943-01-20

Central America and Caribbean Islands

45 °C (113 °F) Estanzuela, Zacapa Guatemala  ?

Europe

48.0 °C or 48.5 °C (118.4 °F or 119.3 °F) Athens, Greece or Catenanuova, Italy (Catenanuova’s record is disputed) 1977-07-10 or 1999-08-10

Bosnia and Herzegovina

46.2 °C (115.2 °F) Mostar (Herzegovina, Federation of Bosnia and Herzegovina) 1900-07-31

Cyprus

46.6 °C (115.9 °F) Lefkoniko, Cyprus 2010-08-01

Italy

47 °C or 48.5 °C (116.6 or 119.3 °F) Foggia, Apulia or Catenanuova, Sicily (Catenanuova’s record is disputed) 2007-06-25 and 1999-08-10

Macedonia

45.7 °C(114.26 °F) Demir Kapija, Demir Kapija Municipality 2007-07-24

Portugal

47.4 °C (117.3 °F) Amareleja, Beja 2003-08-01

Serbia

44.9 °C (112.8 °F) Smederevska Palanka, Podunavlje District 2007-07-24

Spain

47.2 °C (116.9 °F) Murcia 1994-07-04

Antarctica

14.6 °C (58.3 °F) Vanda Station, Scott Coast 1974-01-05
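For anyone wanting to double-check the Celsius/Fahrenheit pairings in the list, the conversion is simple,

```python
# Convert a Celsius reading to Fahrenheit: F = C * 9/5 + 32.
def c_to_f(celsius: float) -> float:
    return celsius * 9.0 / 5.0 + 32.0

print(round(c_to_f(56.7), 1))  # 134.1 -- the Death Valley record
print(round(c_to_f(45.0), 1))  # 113.0 -- e.g. the Canadian record
```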

It seems a disproportionate number of these hot temperatures have been recorded since 2000, eh?

Public domain biotechnology: biological transistors from Stanford University

Andrew Myers’ Mar. 28, 2013 article for the Stanford School of Medicine’s magazine (Inside Stanford Medicine) profiles some research which stands as a bridge between electronics and biology and could lead to biological computing,

… now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”

“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author.

Here’s a description of the transcriptor (biological transistor) and biological computers (from the article),

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.

“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy [Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author].

Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.

They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short.

Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.

The article also offers a description of Boolean logic and the workings of standard computers,

Digital logic is often referred to as “Boolean logic,” after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It’s that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.

“AND” and “OR” are just two of the most basic Boolean logic gates. An “AND” gate, for instance, is “true” when both of its inputs are true — when “a” and “b” are true. An “OR” gate, on the other hand, is true when either or both of its inputs are true.

In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. “You could test whether a given cell had been exposed to any number of external stimuli — the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not,” he said.
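The ideas above are easy to sketch in ordinary software, which makes a nice contrast with building them out of DNA. Below, AND and OR are the two gates named in the article; the `ExposureRecorder` class is my own caricature of Bonnet's glucose-and-caffeine example, in which a "true" result is stored permanently (in the real system an integrase flips a stretch of DNA, which is why the record persists),

```python
# Software sketch of an AND gate plus permanent storage. All names here
# are illustrative inventions, not from the Stanford work.
def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

class ExposureRecorder:
    def __init__(self) -> None:
        self.recorded = False            # like an un-flipped stretch of DNA

    def sense(self, glucose: bool, caffeine: bool) -> None:
        if AND(glucose, caffeine):       # the AND gate
            self.recorded = True         # a one-way flip: it persists

cell = ExposureRecorder()
cell.sense(glucose=True, caffeine=False)
print(cell.recorded)   # False -- only one stimulus present
cell.sense(glucose=True, caffeine=True)
print(cell.recorded)   # True -- exposure recorded
cell.sense(glucose=False, caffeine=False)
print(cell.recorded)   # True -- the record does not reset
```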

Here’s how they created a transcriptor (from the article),

To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes — the integrases mentioned earlier — that control the flow of RNA polymerase along strands of DNA. If this were electronics, DNA is the wire and RNA polymerase is the electron.

“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”

On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.

Refreshingly the team made this decision (from the article),

To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of BIL gates to the public domain so that others can immediately harness and improve upon the tools.

“Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together,” Bonnet said.

Here’s a citation and a link to the researchers’ paper in Science,

Amplifying Genetic Logic Gates by Jerome Bonnet, Peter Yin, Monica E. Ortiz, Pakpoom Subsoontorn, and Drew Endy. Science 1232758 Published online 28 March 2013 [DOI:10.1126/science.1232758]

This paper is behind a paywall. As for Myers’ article, it’s well worth reading for its clear explanations and forays into computing history.

Shake hands with Sacha, a robot controlled by carbon nanotube transistors

Since we use computer chips built from silicon in any number of devices, including robots, the announcement of a robot controlled by the first computer chip built entirely of a material other than silicon bears notice. From the Mar. 15, 2013 news item on Nanowerk (Note: Links have been removed),

A group of Stanford researchers recently debuted the first robot controlled by a computer chip built entirely from carbon nanotube transistors, which many scientists predict may eventually replace silicon.

While scientists have produced simple demonstrations of working carbon nanotube circuit components in the past, the Stanford team, led by Professor of Electrical Engineering Philip Wong and Associate Professor of Electrical Engineering and Computer Science Subhasish Mitra Ph.D. ’00, was able to demonstrate an actual subsystem composed entirely of the material.

The news item was originated by a Mar. 7, 2013 article by Nikhita Obeegadoo for the Stanford Daily, where she noted,

The project was presented in the form of a robot named Sacha at the 2013 International Solid-State Circuits Conference (“Sacha, the Stanford Carbon Nanotube Controlled Handshaking Robot”), which was held in San Francisco. According to Mitra, the robot was created to demonstrate the development of a system that can function despite the errors caused by inherently imperfect nanotubes, which have posed issues for research teams working with carbon nanotubes in the past.

“Through several generations of technology, devices keep getting smaller and denser, and silicon will no longer be the best material for the purpose in about ten years,” Guha [Supratik Guha, director of physical sciences at IBM’s Yorktown Heights Research Center] said. “For needs that are close to atomic dimensions, carbon nanotubes have just the right shape and the right electrical behavior.”

Eric Juma on his eponymous blog offers more insight into the project in his Mar. 16, 2013 posting,

… The robot contained a carbon nanotube capacitor, a device found in many touchscreens, connected to another nanotube circuit, which turned the analog signal from the capacitor into a digital signal, which was transmitted to the microprocessor that contained CNT transistors. The microprocessor then sent a signal to a motor on the hand of the robot, which shook the hand of the person who touched the capacitors embedded in it.

This is not the first example of carbon nanotube circuitry, but it is the first example of CNTs being produced en masse for an integrated microprocessor and circuit. This advancement showed that it is possible to produce large quantities of CNTs and have them integrate successfully into a complex system. Although the size of the CNTs in this system is far from the optimal size of 10nm, it is a good starting point, and the nanotubes can still be refined much further.

Carbon nanotubes, although perfect in theory for microprocessors, present new challenges for engineers. The greatest challenge is the actual integration of CNTs into circuitry. Nanotubes often force themselves into a tangled position, which can cause circuits to fail without warning.
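Juma's description maps onto a familiar sensing pipeline: touch sensor, analog-to-digital conversion, processor, motor command. Here's a toy software imitation of that chain (every name and number below is invented for illustration; the point of the Stanford demo was doing all of this in carbon nanotube hardware),

```python
# A purely software imitation of the robot's signal chain. The sensor
# values and threshold are arbitrary illustrative choices.
def capacitive_sensor(touching: bool) -> float:
    """Analog reading: a touch raises the capacitance (arbitrary units)."""
    return 4.2 if touching else 0.3

def digitize(reading: float, threshold: float = 1.0) -> int:
    """Crude one-bit analog-to-digital conversion."""
    return 1 if reading > threshold else 0

def controller(bit: int) -> str:
    """The 'microprocessor' decides whether to drive the hand motor."""
    return "shake-hand" if bit else "idle"

print(controller(digitize(capacitive_sensor(True))))   # shake-hand
print(controller(digitize(capacitive_sensor(False))))  # idle
```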

Juma gives a good explanation for why there is so much interest in carbon nanotubes in the field of electronics and he provides links to more information about it all. (There’s a video about carbon nanotubes and their various shapes and structures in my Mar. 15, 2013 posting about them.)

Sacha will be seen (or perhaps the work will simply be presented by Max Shulaker?) next in Switzerland at a Mar. 25, 2013 workshop (FED ’13; Functionality-Enhanced Devices Workshop) at the EPFL (École Polytechnique Fédérale de Lausanne).

Germany goes international with SpinNet, its spintronics project

A Feb. 8, 2013 news item on Nanowerk features an announcement of an international spintronics project, SpinNet, being funded by the federal government of Germany,

The German Academic Exchange Service (DAAD) is sponsoring a joint project involving Johannes Gutenberg University Mainz (JGU), Tohoku University in Japan, Stanford University, and IBM Research. The project will focus on spintronics, a key technology that enables the creation of new energy-efficient IT devices. At Mainz, researchers from JGU’s Institute of Physics and the Institute of Inorganic Chemistry and Analytical Chemistry participate, with many of the activities taking place under the Materials Science in Mainz (MAINZ) Graduate School of Excellence. Over the next four years, the SpinNet network will be funded with about EUR 1 million from the German Federal Ministry of Education and Research (BMBF). SpinNet is one of the 21 projects that the German Academic Exchange Service approved from a total of 120 proposals submitted in the first round and the 40 entries that made it to the second round.

The Johannes Gutenberg-Universität Mainz (Mainz University) Feb. 8, 2013 news release, which originated the news item, provides details about the network and about the project itself,

Under the aegis of the MAINZ Graduate School, Johannes Gutenberg University Mainz had submitted a proposal for financial support as a so-called “Thematic Network”. With this program, the German Academic Exchange Service aims to provide support to research-based multilateral and international networks with leading partners from abroad. The inclusion of non-university research facilities, such as IBM Research, was encouraged and the program is intended to help create attractive conditions that will help attract excellent international young researchers from partner universities to Germany. Another purpose is to enable the participating German universities to work at the cutting edge of international research by creating centers of competence. The MAINZ Graduate School has been closely cooperating with the partners for years and SpinNet will help to further this cooperation and fund complementary activities.

SpinNet will concentrate on the development of energy-saving information technology using the potential provided by spintronics. The current semiconductor-based systems will reach their limits in the foreseeable future, meaning that innovative technologies need to be developed if components are to be miniaturized further and energy consumption is reduced. In this context, spintronics is a highly promising approach. While conventional electronic systems in IT components employ only the charge of electrons, spintronics also involves the intrinsic angular momentum or spin of electrons for information processing. Using this technology, it should be possible to develop non-volatile storage and logic systems and these would then reduce energy consumption while also radically simplifying systems architecture. The new research network will be officially launched on April 1, 2013; with the inaugural meeting of the partners taking place at the Newspin3 Conference that is to be held on April 2-4, 2013 in Mainz.

You can find more information and videos about this initiative and/or spintronics by clicking the news item link or news release link. There does not seem to be a SpinNet website. NewSpin3 conference information can be found here, along with details about the NewSpin3 summer school, which takes place immediately following the conference. Spintronics was last mentioned here in a Jan. 31, 2013 posting about a 3-D microchip developed from a spintronics chip.