Tag Archives: University of Southern California

‘Touching’ infrared light, if you’re a rat, followed by announcement of US FDA approval of first commercial artificial retina (bionic eye)

Researcher Miguel Nicolelis and his colleagues at Duke University have implanted a neuroprosthetic device in the portion of the rat brain related to touch, allowing the rats to detect infrared light. From the Feb. 12, 2013 news release on EurekAlert,

Researchers have given rats the ability to “touch” infrared light, normally invisible to them, by fitting them with an infrared detector wired to microscopic electrodes implanted in the part of the mammalian brain that processes tactile information. The achievement represents the first time a brain-machine interface has augmented a sense in adult animals, said Duke University neurobiologist Miguel Nicolelis, who led the research team.

The experiment also demonstrated for the first time that a novel sensory input could be processed by a cortical region specialized in another sense without “hijacking” the function of this brain area, said Nicolelis. This discovery suggests, for example, that a person whose visual cortex was damaged could regain sight through a neuroprosthesis implanted in another cortical region, he said.

Although the initial experiments tested only whether rats could detect infrared light, there seems no reason that these animals in the future could not be given full-fledged infrared vision, said Nicolelis. For that matter, cortical neuroprostheses could be developed to give animals or humans the ability to see in any region of the electromagnetic spectrum, or even magnetic fields. “We could create devices sensitive to any physical energy,” he said. “It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

Interestingly, the research was supported by the US National Institute of Mental Health (as per the news release).

The researchers have more to say about what they’re doing,

“The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system,” said Thomson, [Eric Thomson] first author of the study. “This is the first paper in which a neuroprosthetic device was used to augment function—literally enabling a normal animal to acquire a sixth sense.”

Here’s how they conducted the research,

The mammalian retina is blind to infrared light, and mammals cannot detect any heat generated by the weak infrared light used in the studies. In their experiments, the researchers used a test chamber that contained three light sources that could be switched on randomly. Using visible LED lights, they first taught each rat to choose the active light source by poking its nose into an attached port to receive a reward of a sip of water.

After training the rats, the researchers implanted in their brains an array of stimulating microelectrodes, each roughly a tenth the diameter of a human hair. The microelectrodes were implanted in the cortical region that processes touch information from the animals’ facial whiskers.

Attached to the microelectrodes was an infrared detector affixed to the animals’ foreheads. The system was programmed so that orientation toward an infrared light would trigger an electrical signal to the brain. The signal pulses increased in frequency with the intensity and proximity of the light.

The researchers returned the animals to the test chamber, gradually replacing the visible lights with infrared lights. At first in infrared trials, when a light was switched on the animals would tend to poke randomly at the reward ports and scratch at their faces, said Nicolelis. This indicated that they were initially interpreting the brain signals as touch. However, over about a month, the animals learned to associate the brain signal with the infrared source. They began to actively “forage” for the signal, sweeping their heads back and forth to guide themselves to the active light source. Ultimately, they achieved a near-perfect score in tracking and identifying the correct location of the infrared light source.

To ensure that the animals were really using the infrared detector and not their eyes to sense the infrared light, the researchers conducted trials in which the light switched on, but the detector sent no signal to the brain. In these trials, the rats did not react to the infrared light.
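
For the technically inclined, the coupling described above, where the stimulation pulse rate rises with the intensity and proximity of the infrared source, is simple enough to sketch in code. Here’s a minimal illustration of the general idea only; the value ranges, threshold, and function names are my own assumptions and are not taken from the paper.

import numpy as np

# Hypothetical sketch (not from the paper): map an infrared detector reading
# onto a stimulation pulse frequency, as described in the news release --
# a stronger/closer IR source produces a higher pulse rate at the electrodes.

MIN_RATE_HZ = 0.0     # no stimulation when no IR is detected (assumed)
MAX_RATE_HZ = 400.0   # ceiling pulse rate (assumed, for illustration only)

def ir_to_pulse_rate(detector_reading, detection_threshold=0.05):
    """Convert a normalized detector reading (0.0-1.0) to a pulse rate in Hz."""
    if detector_reading < detection_threshold:
        return MIN_RATE_HZ
    # Linear mapping above threshold; the real transfer function is not
    # specified in the release, so this is purely illustrative.
    scaled = (detector_reading - detection_threshold) / (1.0 - detection_threshold)
    return MIN_RATE_HZ + scaled * (MAX_RATE_HZ - MIN_RATE_HZ)

def pulse_times(detector_reading, duration_s=1.0):
    """Return the times (in seconds) at which stimulation pulses would fire."""
    rate = ir_to_pulse_rate(detector_reading)
    if rate == 0:
        return np.array([])
    return np.arange(0.0, duration_s, 1.0 / rate)

if __name__ == "__main__":
    for reading in (0.0, 0.2, 0.6, 1.0):
        print(reading, round(ir_to_pulse_rate(reading), 1), "Hz,",
              len(pulse_times(reading)), "pulses/s")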

Their finding could have an impact on notions of mammalian brain plasticity,

A key finding, said Nicolelis, was that enlisting the touch cortex for light detection did not reduce its ability to process touch signals. “When we recorded signals from the touch cortex of these animals, we found that although the cells had begun responding to infrared light, they continued to respond to whisker touch. It was almost like the cortex was dividing itself evenly so that the neurons could process both types of information.

This finding of brain plasticity is in contrast with the “optogenetic” approach to brain stimulation, which holds that a particular neuronal cell type should be stimulated to generate a desired neurological function. Rather, said Nicolelis, the experiments demonstrate that a broad electrical stimulation, which recruits many distinct cell types, can drive a cortical region to adapt to a new source of sensory input.

All of this work is part of Nicolelis’ larger project ‘Walk Again’ which is mentioned in my March 16, 2012 posting and includes a reference to some ethical issues raised by the work. Briefly, Nicolelis and an international team of collaborators are developing a brain-machine interface that will enable full mobility for people who are severely paralyzed. From the news release,

The Walk Again Project has recently received a $20 million grant from FINEP, a Brazilian research funding agency, to allow the development of the first brain-controlled whole body exoskeleton aimed at restoring mobility in severely paralyzed patients. A first demonstration of this technology is expected to happen in the opening game of the 2014 Soccer World Cup in Brazil.

Expanding sensory abilities could also enable a new type of feedback loop to improve the speed and accuracy of such exoskeletons, said Nicolelis. For example, while researchers are now seeking to use tactile feedback to allow patients to feel the movements produced by such “robotic vests,” the feedback could also be in the form of a radio signal or infrared light that would give the person information on the exoskeleton limb’s position and encounter with objects.

There’s more information, including videos, about the work with infrared light and rats at the Nicolelis Lab website. Here’s a citation for and link to the team’s research paper,

Perceiving invisible light through a somatosensory cortical prosthesis by Eric E. Thomson, Rafael Carra, & Miguel A.L. Nicolelis. Nature Communications, published 12 Feb. 2013. DOI: 10.1038/ncomms2497

Meanwhile, the US Food and Drug Administration (FDA) has approved the first commercial artificial retina. From the Feb. 14, 2013 news release,

The U.S. Food and Drug Administration (FDA) granted market approval to an artificial retina technology today, the first bionic eye to be approved for patients in the United States. The prosthetic technology was developed in part with support from the National Science Foundation (NSF).

The device, called the Argus® II Retinal Prosthesis System, transmits images from a small, eye-glass-mounted camera wirelessly to a microelectrode array implanted on a patient’s damaged retina. The array sends electrical signals via the optic nerve, and the brain interprets a visual image.

The FDA approval currently applies to individuals who have lost sight as a result of severe to profound retinitis pigmentosa (RP), an ailment that affects one in every 4,000 Americans. The implant allows some individuals with RP, who are completely blind, to locate objects, detect movement, improve orientation and mobility skills and discern shapes such as large letters.

The Argus II is manufactured by, and will be distributed by, Second Sight Medical Products of Sylmar, Calif., which is part of the team of scientists and engineers from the university, federal and private sectors who spent nearly two decades developing the system with public and private investment.

Scientists are often compelled to do research in an area inspired by family,

“Seeing my grandmother go blind motivated me to pursue ophthalmology and biomedical engineering to develop a treatment for patients for whom there was no foreseeable cure,” says the technology’s co-developer, Mark Humayun, associate director of research at the Doheny Eye Institute at the University of Southern California and director of the NSF Engineering Research Center for Biomimetic MicroElectronic Systems (BMES). …”

There’s also been considerable government investment,

The effort by Humayun and his colleagues has received early and continuing support from NSF, the National Institutes of Health and the Department of Energy, with grants totaling more than $100 million. The private sector’s support nearly matched that of the federal government.

“The retinal implant exemplifies how NSF grants for high-risk, fundamental research can directly result in ground-breaking technologies decades later,” said Acting NSF Assistant Director for Engineering Kesh Narayanan. “In collaboration with the Second Sight team and the courageous patients who volunteered to have experimental surgery to implant the first-generation devices, the researchers of NSF’s Biomimetic MicroElectronic Systems Engineering Research Center are developing technologies that may ultimately have as profound an impact on blindness as the cochlear implant has had for hearing loss.”

Leaving aside controversies about cochlear implants and the possibility of such controversies with artificial retinas (bionic eyes), it’s interesting to note that this device is dependent on an external camera,

The researchers’ efforts have bridged cellular biology–necessary for understanding how to stimulate the retinal ganglion cells without permanent damage–with microelectronics, which led to the miniaturized, low-power integrated chip for performing signal conversion, conditioning and stimulation functions. The hardware was paired with software processing and tuning algorithms that convert visual imagery to stimulation signals, and the entire system had to be incorporated within hermetically sealed packaging that allowed the electronics to operate in the vitreous fluid of the eye indefinitely. Finally, the research team had to develop new surgical techniques in order to integrate the device with the body, ensuring accurate placement of the stimulation electrodes on the retina.

“The artificial retina is a great engineering challenge under the interdisciplinary constraint of biology, enabling technology, regulatory compliance, as well as sophisticated design science,” adds Liu.  [Wentai Liu of the University of California, Los Angeles] “The artificial retina provides an interface between biotic and abiotic systems. Its unique design characteristics rely on system-level optimization, rather than the more common practice of component optimization, to achieve miniaturization and integration. Using the most advanced semiconductor technology, the engine for the artificial retina is a ‘system on a chip’ of mixed voltages and mixed analog-digital design, which provides self-contained power and data management and other functionality. This design for the artificial retina facilitates both surgical procedures and regulatory compliance.”

The Argus II design consists of an external video camera system matched to the implanted retinal stimulator, which contains a microelectrode array that spans 20 degrees of visual field. [emphasis mine] …

“The external camera system-built into a pair of glasses-streams video to a belt-worn computer, which converts the video into stimulus commands for the implant,” says Weiland [USC researcher Jim Weiland], “The belt-worn computer encodes the commands into a wireless signal that is transmitted to the implant, which has the necessary electronics to receive and decode both wireless power and data. Based on those data, the implant stimulates the retina with small electrical pulses. The electronics are hermetically packaged and the electrical stimulus is delivered to the retina via a microelectrode array.”
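
Weiland’s description amounts to a fairly simple image-processing pipeline: camera frame in, per-electrode stimulation commands out. Here’s a rough sketch of that idea, assuming a hypothetical 6 x 10 electrode grid and a linear brightness-to-amplitude mapping; these choices, and the function names, are my own simplifications for illustration, not Second Sight’s actual processing.

import numpy as np

# Illustrative sketch only: reduce a camera frame to per-electrode stimulation
# commands, in the spirit of the pipeline Weiland describes (camera -> belt-worn
# processor -> wireless commands -> implant). The grid size and the linear
# brightness-to-amplitude mapping are assumptions.

GRID_ROWS, GRID_COLS = 6, 10      # assumed electrode layout
MAX_AMPLITUDE_UA = 100.0          # assumed maximum stimulation current (microamps)

def frame_to_commands(frame):
    """Downsample a grayscale frame (2-D array, 0-255) to one value per electrode."""
    grid = np.empty((GRID_ROWS, GRID_COLS))
    for i, row_block in enumerate(np.array_split(frame, GRID_ROWS, axis=0)):
        for j, block in enumerate(np.array_split(row_block, GRID_COLS, axis=1)):
            grid[i, j] = block.mean()          # average brightness in each region
    # Map mean brightness linearly onto a stimulation amplitude per electrode.
    return (grid / 255.0) * MAX_AMPLITUDE_UA

if __name__ == "__main__":
    fake_frame = np.random.randint(0, 256, size=(480, 640))  # stand-in camera frame
    commands = frame_to_commands(fake_frame)
    print(commands.shape)    # (6, 10): one amplitude per electrode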

You can see some footage of people using artificial retinas in the context of Grégoire Cosendai’s TEDx Vienna presentation. As I noted in my Aug. 18, 2011 posting where this talk and developments in human enhancement are mentioned, the relevant material can be seen at approximately 13 mins., 25 secs. in Cosendai’s talk.

Second Sight Medical Products can be found here.

Clone your carbon nanotubes

The Nov. 14, 2012 news release on EurekAlert highlights some work on a former nanomaterial superstar, carbon nanotubes,

Scientists and industry experts have long speculated that carbon nanotube transistors would one day replace their silicon predecessors. In 1998, Delft University built the world’s first carbon nanotube transistors – carbon nanotubes have the potential to be far smaller, faster, and consume less power than silicon transistors.

A key reason carbon nanotubes are not in your computer right now is that they are difficult to manufacture in a predictable way. Scientists have had a difficult time controlling the manufacture of nanotubes to the correct diameter, type and ultimately chirality, factors that control nanotubes’ electrical and mechanical properties.

Carbon nanotubes are typically grown using a chemical vapor deposition (CVD) system in which a chemical-laced gas is pumped into a chamber containing substrates with metal catalyst nanoparticles, upon which the nanotubes grow. It is generally believed that the diameters of the nanotubes are determined by the size of the catalytic metal nanoparticles. However, attempts to control the catalysts in hopes of achieving chirality-controlled nanotube growth have not been successful.

The USC [University of Southern California] team’s innovation was to jettison the catalyst and instead plant pieces of carbon nanotubes that have been separated and pre-selected based on chirality, using a nanotube separation technique developed and perfected by Zheng [Ming Zheng] and his coworkers at NIST [US National Institute of Standards and Technology]. Using those pieces as seeds, the team used chemical vapor deposition to extend the seeds to get much longer nanotubes, which were shown to have the same chirality as the seeds.

The process is referred to as “nanotube cloning.” The next steps in the research will be to carefully study the mechanism of the nanotube growth in this system, to scale up the cloning process to get large quantities of chirality-controlled nanotubes, and to use those nanotubes for electronic applications.

H/T to ScienceDaily’s Nov. 14, 2012 news item for the full journal reference,

Jia Liu, Chuan Wang, Xiaomin Tu, Bilu Liu, Liang Chen, Ming Zheng, Chongwu Zhou. Chirality-controlled synthesis of single-wall carbon nanotubes using vapour-phase epitaxy. Nature Communications, 13 Nov. 2012. DOI: 10.1038/ncomms2205

The article is behind a paywall.

Everything becomes part machine

Machine/flesh. That’s what I’ve taken to calling this process of integrating machinery into our own flesh and, as I newly realized, the flesh of other animals. My new realization was courtesy of a television ad for Absolut Greyhound Vodka. First, here’s the very, very long (3 mins. 39 secs.) ad/music video version,

I gather the dogs are mostly or, possibly, all animation. Still, the robotic dogs are very thought-provoking. It’s kind of fascinating to me that I found such an unusual, futuristic idea embedded in advertising, so I dug around online and found a March 2012 article by Rae Ann Fera about the ad campaign, written for the Fast Company Co-Create website,

In the real world, music and cocktails go hand in hand. In an Absolut world, music and cocktails come with racing robotic greyhounds remotely controlled by a trio of DJs, spurred on by a cast of characters that make Lady Gaga look casual.

“Greyhound”–which is the title of the drink, the video, and the actual music track–is a three-minute visual feast created by TBWA\Chiat\Day that sees three groups of couture-sporting racing enthusiasts converge on the Bonneville Salt Flats to watch some robotic greyhounds speed across the parched plains, all while sipping light pink Absolut Greyhounds. While the fabulous people in the desert give each other the “my team’s going to win” stink-eye, the three members of Swedish House Mafia are off in a desolate bunker remotely controlling the robodogs to a photo-finish while ensconced in holographic orbs. …

Given that “Greyhound” is part music video, part ad, it will be distributed across a number of channels. “When it come to our target, music is their number one passion point and they live in the digital space so the campaign is really going to primarily TV and digital,” says Absolut’s Kouchnir [Maxime Kouchnir, Vice President, Vodkas, Pernod Ricard USA].

The advertisers, of course, are trying to sell vodka by digitally creating a greyhound that’s part robot/part flesh and then setting the stage for this race with music, fashion, cocktails, and an open-ended result. But, if one thinks of advertising as a reflection of culture, then these animated robot/flesh greyhounds suggest that something is percolating in the zeitgeist.

I have other examples on this blog, but here are a few recent non-advertising items I’ve come across that support my thesis. First, I found an April 27, 2012 article (MIT Media Lab Hosts The Future) by Neal Ungerleider for Fast Company. From the article,

This week, MIT [Massachusetts Institute of Technology] Media Lab researchers and minds from around the world got together to discuss artificial implantable memories, computers that understand emotion… and Microsoft-funded robotic teddy bears. Will the next Guitar Hero soon be discovered?

….

Then there are the scientists who will be able to plant artificial memories in your head. Ted Berger of the University of Southern California is developing prosthetic brain implants that mimic the mind. Apart from turning recipients into cyborgs, the brain prostheses actually create fake memories, science fiction movie style: In experiments, researchers successfully turned long-term memories on and off in lab rats. Berger hopes in the future, once primate testing is complete, to create brain implants for Alzheimer’s and stroke patients to help restore function.

While erasing and/or creating memories may seem a bit distant from our current experience, a BBC May 3, 2012 news article by Fergus Walsh describes another machine/flesh project at the human clinical trial stage. Retinal implants have been placed in two British men,

The two patients, Chris James and Robin Millar, lost their vision due to a condition known as retinitis pigmentosa, where the photoreceptor cells at the back of the eye gradually cease to function.

The wafer-thin, 3mm square microelectronic chip has 1,500 light-sensitive pixels which take over the function of the photoreceptor rods and cones.

The surgery involves placing it behind the retina from where a fine cable runs to a control unit under the skin behind the ear.

I believe this is the project I described in my Aug. 18, 2011 posting (scroll down 2/3 of the way), which has 30 participants in its clinical trials worldwide.

It sometimes seems that we’re not creating new life through biological means, synthetic or otherwise, but, rather, with our machines, which we are integrating into our own and other animals’ flesh.

What is a diamond worth?

A couple of diamond-related news items have crossed my path lately, causing me to consider diamonds and their social implications. I’ll start with the news items. According to an April 4, 2012 news item on physorg.com, a quantum computer has been built inside a diamond (from the news item),

Diamonds are forever – or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against “decoherence” – noise that prevents the computer from functioning properly.

I last mentioned decoherence in my July 21, 2011 posting about a joint (University of British Columbia, University of California at Santa Barbara and the University of Southern California) project on quantum computing.

According to the April 5, 2012 news item by Robert Perkins for the University of Southern California (USC),

The multinational team included USC professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. The findings were published today in Nature.

The team’s diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, some day will allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities – things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry because it makes the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the “spin” of each of these subatomic particles was used as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but they also fall victim more quickly to decoherence. A qubit based on a nucleus, which is large, is much more stable but slower.

“A nucleus has a long decoherence time – in the milliseconds. You can think of it as very sluggish,” said Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection – using microwave pulses to continually switch the direction of the electron spin rotation.

“It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
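
The “time travel” Lidar describes is essentially a refocusing trick: periodically flipping the spin reverses the phase it has picked up from slowly varying noise, so the errors cancel instead of piling up. Here’s a toy numerical illustration of that general principle (a generic spin-echo-style sketch of my own, not the team’s actual microwave pulse protocol, and with an arbitrary assumed noise spread).

import numpy as np

# Toy illustration of decoherence protection by refocusing pulses.
# An ensemble of spins each picks up phase at a slightly different (random but
# static) rate; without intervention the ensemble signal dephases away.
# Flipping the spins halfway through ("switching the direction of rotation")
# time-reverses the accumulated phase so it cancels by the end.

rng = np.random.default_rng(0)
n_spins = 10_000
detunings = rng.normal(0.0, 2 * np.pi * 1e3, n_spins)  # rad/s, assumed spread
total_time = 2e-3                                       # 2 ms of evolution

# Free evolution: each spin accumulates phase = detuning * t.
phase_free = detunings * total_time
signal_free = np.abs(np.mean(np.exp(1j * phase_free)))

# Echo: evolve for t/2, flip (negate the accumulated phase), evolve for t/2.
phase_echo = -(detunings * total_time / 2) + detunings * total_time / 2
signal_echo = np.abs(np.mean(np.exp(1j * phase_echo)))

print(f"coherence without refocusing: {signal_free:.3f}")   # ~0 (dephased)
print(f"coherence with one echo pulse: {signal_echo:.3f}")  # ~1 (recovered)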

Here’s an image I downloaded from the USC webpage hosting Perkins’s news item,

The diamond in the center measures 1 mm x 1 mm. Photo/Courtesy of Delft University of Technology/UC Santa Barbara

I’m not sure what they were trying to illustrate with the image, but I thought it would provide an interesting contrast to the video which follows, about the world’s first all-diamond ring,

I first came across this ring in Laura Hibberd’s March 22, 2012 piece for Huffington Post. For anyone who feels compelled to find out more about it, here’s the jeweller’s (Shawish) website.

What with the posting about Neal Stephenson and Diamond Age (aka The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future and ubiquitous diamonds), a quantum computer in a diamond, and this ring, I’ve started to wonder about the role diamonds will have in society. Will they be integrated into everyday objects or will they remain objects of desire? My guess is that the diamonds we create by manipulating carbon atoms will be considered everyday items while the ones formed in the bowels of the earth will retain their status.

Environmental decoherence tackled by University of British Columbia and California researchers

The research team at the University of British Columbia (UBC) developed a theory for the prediction and control of environmental decoherence in a complex system (an important step on the way to quantum computing), while researchers at the University of California Santa Barbara (UCSB) performed the experiments that proved it. Here’s an explanation of decoherence and its impact on quantum computing from the July 20, 2011 UBC news release,

Quantum mechanics states that matter can be in more than one physical state at the same time – like a coin simultaneously showing heads and tails. In small objects like electrons, physicists have had success in observing and controlling these simultaneous states, called “state superpositions.”

Larger, more complex physical systems appear to be in one consistent physical state because they interact and “entangle” with other objects in their environment. This entanglement makes these complex objects “decay” into a single state – a process called decoherence.

Quantum computing’s potential to be exponentially faster and more powerful than any conventional computer technology depends on switches that are capable of state superposition – that is, being in the “on” and “off” positions at the same time. Until now, all efforts to achieve such superposition with many molecules at once were blocked by decoherence.
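
For readers who like to see the “coin showing heads and tails” idea in concrete terms, here’s a small numerical sketch of my own, purely illustrative: a single qubit in an equal superposition, written as a density matrix whose off-diagonal terms are exactly the coherence that decoherence destroys. The decay rate is an arbitrary assumed number.

import numpy as np

# Minimal illustration of superposition and decoherence for a single qubit.
# |psi> = (|0> + |1>)/sqrt(2) is the "heads and tails at once" state; the
# off-diagonal elements of its density matrix encode that coherence, and
# decoherence shrinks them toward zero, leaving an ordinary 50/50 mixture.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (ket0 + ket1) / np.sqrt(2)          # equal superposition
rho = np.outer(psi, psi.conj())           # density matrix of the pure state

def decohere(rho, t, rate=1.0):
    """Exponentially damp the off-diagonal (coherence) terms."""
    out = rho.astype(complex).copy()
    out[0, 1] *= np.exp(-rate * t)
    out[1, 0] *= np.exp(-rate * t)
    return out

for t in (0.0, 1.0, 5.0):
    print(f"t={t}:", np.round(decohere(rho, t), 3).tolist())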

The UBC research team, headed by Phil Stamp, developed a theory for predicting and controlling environmental decoherence in the Iron-8 molecule, which is considered a large complex system.

Iron-8 molecule (image provided by UBC)

This next image represents the effect of decoherence, i.e., the molecule ‘occupies’ only one of the two states, spin up or spin down, rather than a superposition of both,

Decoherence: occupying either the spin up or spin down position (image provided by UBC)

Here’s how the team at UCSB proved the theory experimentally,

In their study, Takahashi [Professor Susumu Takahashi is now at the University of Southern California {USC}] and his colleagues investigated single crystals of molecular magnets. Because of their purity, molecular magnets eliminate the extrinsic decoherence, allowing researchers to calculate intrinsic decoherence precisely.

“For the first time, we’ve been able to predict and control all the environmental decoherence mechanisms in a very complex system – in this case a large magnetic molecule,” said Phil Stamp, University of British Columbia professor of physics and astronomy and director of the Pacific Institute of Theoretical Physics.

Using crystalline molecular magnets allowed researchers to build qubits out of an immense quantity of quantum particles rather than a single quantum object – the way most proto-quantum computers are built at the moment.

I did try to find definitions for extrinsic and intrinsic decoherence; unfortunately, the best I could find is the one provided by USC (from the news item on Nanowerk),

Decoherence in qubit systems falls into two general categories. One is an intrinsic decoherence caused by constituents in the qubit system, and the other is an extrinsic decoherence caused by imperfections of the system – impurities and defects, for example.

I have a conceptual framework of sorts for a ‘qubit system’; I just don’t understand what they mean by ‘system’. I performed an internet search and virtually all of the references I found to intrinsic and extrinsic decoherence cite this news release or, in a few cases, papers written by physicists for other physicists. If anyone could help clarify this question for me, I would much appreciate it.

Leaving extrinsic and intrinsic systems aside, the July 20, 2011 news item on Science Daily provides a little more detail about the experiment,

In the experiment, the California researchers prepared a crystalline array of Iron-8 molecules in a quantum superposition, where the net magnetization of each molecule was simultaneously oriented up and down. The decay of this superposition by decoherence was then observed in time — and the decay was spectacularly slow, behaving exactly as the UBC researchers predicted.

“Magnetic molecules now suddenly appear to have serious potential as candidates for quantum computing hardware,” said Susumu Takahashi, assistant professor of chemistry and physics at the University of Southern California.

Congratulations to all of the researchers involved.

ETA July 22, 2011: I changed the title to correct the grammar.

Math, science and the movies; research on the African continent; diabetes and mice in Canada; NANO Magazine and Canada; poetry on Bowen Island, April 17, 2010

About 10 years ago, I got interested in how the arts and sciences can inform each other when I was trying to organize an art/science event which never did get off the ground (although I still harbour hopes for it one day).  It all came back to me when I read Dave Bruggeman’s (Pasco Phronesis blog) recent post about a new Creative Science Studio opening at the School of Cinematic Arts at the University of Southern California (USC). From Dave’s post,

It [Creative Science Studio] will start this fall at USC, where its School of Cinematic Arts makes heavy use of its proximity to Hollywood, and builds on its history of other projects that use science, technology and entertainment in other areas of research.

The studio will not only help studios improve the depiction of science in the products of their students, faculty and alumni (much like the Science and Entertainment Exchange), but help scientists create entertaining outreach products. In addition, science and engineering topics will be incorporated into the School’s curriculum and be supported in faculty research.

This announcement reminds me a little bit of an IBM/USC initiative in 2008 (from the news item on Nanowerk),

For decades Hollywood has looked to science for inspiration, now IBM researchers are looking to Hollywood for new ideas too.

The entertainment industry has portrayed possible future worlds through science fiction movies – many created by USC’s famous alumni – and IBM wants to tap into that creativity.

At a kickoff event at the USC School of Cinematic Arts, five of IBM’s top scientists met with students and alumni of the school, along with other invitees from the entertainment industry, to “Imagine the World in 2050.” The event is the first phase of an expected collaboration between IBM and USC to explore how combining creative vision and insight with science and technology trends might fuel novel solutions to the most pressing problems and opportunities of our time.

It’s interesting to note that the inspiration is two-way if the two announcements are taken together. The creative people can have access to the latest science and technology work for their pieces and scientists can explore how an idea or solution to a problem that exists in a story might be made real.

I’ve also noted that the first collaboration mentioned suggests that the Creative Science Studio will be able to “help scientists create entertaining outreach products.” My only caveat is that scientists too often believe that science communication means that they do all the communicating while we members of the public are to receive their knowledge enthusiastically and uncritically.

Moving on to the math that I mentioned in the head, there’s an announcement of a new paper that discusses the use of mathematics in cinematic special effects. (I believe that the word cinematic is starting to include games and other media in addition to movies.)  From the news item on physorg.com,

The use of mathematics in cinematic special effects is described in the article “Crashing Waves, Awesome Explosions, Turbulent Smoke, and Beyond: Applied Mathematics and Scientific Computing in the Visual Effects Industry”, which will appear in the May 2010 issue of the NOTICES OF THE AMS [American Mathematical Society]. The article was written by three University of California, Los Angeles, mathematicians who have made significant contributions to research in this area: Aleka McAdams, Stanley Osher, and Joseph Teran.

Mathematics provides the language for expressing physical phenomena and their interactions, often in the form of partial differential equations. These equations are usually too complex to be solved exactly, so mathematicians have developed numerical methods and algorithms that can be implemented on computers to obtain approximate solutions. The kinds of approximations needed to, for example, simulate a firestorm, were in the past computationally intractable. With faster computing equipment and more-efficient architectures, such simulations are feasible today—and they drive many of the most spectacular feats in the visual effects industry.
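
As a concrete (and drastically simplified) example of the kind of numerical approximation the article describes, here are a few lines that step a one-dimensional diffusion equation forward with finite differences; the same family of techniques, scaled up enormously, drives smoke and water simulations. This is a generic textbook scheme of my own, not the authors’ code.

import numpy as np

# A deliberately tiny example of solving a PDE numerically: the 1-D diffusion
# (heat) equation u_t = k * u_xx, stepped forward with an explicit finite
# difference scheme. Visual effects solvers for smoke, fire, and water rest on
# the same idea -- discretize space and time, then update approximately --
# just with far more elaborate equations and machinery.

k = 0.1            # diffusion coefficient
nx, nt = 100, 500  # grid points in space, steps in time
dx = 1.0 / nx
dt = 0.4 * dx**2 / k   # time step chosen small enough for stability

u = np.zeros(nx)
u[nx // 2] = 1.0   # start with a spike of "heat" (or smoke density) in the middle

for _ in range(nt):
    # u_xx approximated by the second central difference at each interior point
    u[1:-1] += k * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak after diffusion: {u.max():.4f}, total (approx. conserved): {u.sum():.4f}")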

This news item too brought back memories. There was a Canadian animated film, Ryan, which both won an Academy Award and involved significant collaboration between a mathematician and an animator. From the MITACS (Mathematics of Information Technology and Complex Systems) 2005 newsletter, Student Notes:

Karan Singh is an Associate Professor at the University of Toronto, where he co-directs the graphics and HCI lab, DGP. His research interests are in artist driven interactive graphics encompassing geometric modeling, character animation and non-photorealistic rendering. As a researcher at Alias (1995-1999), he architected facial and character animation tools for Maya (Technical Oscar 2003). He was involved with conceptual design and reverse engineering software at Paraform (Academy award for technical achievement 2001) and currently as Chief Scientist for Geometry Systems Inc. He has worked on numerous film and animation projects and most recently was the R+D Director for the Oscar winning animation Ryan (2005).

Someone at Student Notes (SN) goes on to interview Dr. Singh (here’s an excerpt),

SN: Some materials discussing the film Ryan mention the term “psychorealism”. What does this term mean? What problems does the transition from realism to psychorealism pose for the animator, or the computer graphics designer?

KS: Psychorealism is a term coined by Chris [Landreth, film animator] to refer to the glorious complexity of the human psyche depicted through the visual medium of art and animation. The transition is not a problem, psychorealism is stylistic, just a facet to the look and feel of an animation. The challenge lies in the choice and execution of the metaphorical imagery that the animator makes.

Both the article and Dr. Singh’s page are well worth checking out, if the links between mathematics and visual imagery interest you.

Research on the African continent

Last week I received a copy of the Thomson Reuters Global Research Report Africa. My hat’s off to the authors, Jonathan Adams, Christopher King, and Daniel Hook, for including the fact that Africa is a continent with many countries, many languages, and many cultures. From the report (you may need to register at the site to gain access to it, but the only contact I ever get is a copy of their newsletter alerting me to a new report and other incidental info), p. 3,

More than 50 nations, hundreds of languages, and a welter of ethnic and cultural diversity. A continent possessed of abundant natural resources but also perennially wracked by a now-familiar litany of post-colonial woes: poverty, want, political instability and corruption, disease, and armed conflicts frequently driven by ethnic and tribal divisions but supplied by more mature economies. OECD’s recent African Economic Outlook sets out in stark detail the challenge, and the extent to which current global economic problems may make this worse …

While they cover the usual challenges, the authors go on to add this somewhat contrasting information.

Yet the continent is also home to a rich history of higher education and knowledge creation. The University of Al-Karaouine, at Fez in Morocco, was founded in CE 859 as a madrasa and is identified by many as the oldest degree-awarding institution in the world. It was followed in 970 by Al-Azhar University in Egypt. While it was some centuries before the curriculum expanded from religious instruction into the sciences, this makes a very early marker for learning. Today, the Association of African Universities lists 225 member institutions in 44 countries and, as Thomson Reuters data demonstrate, African research has a network of ties to the international community.

A problem for Africa as a whole, as it has been for China and India, is the hemorrhage of talent. Many of its best students take their higher degrees at universities in Europe, Asia and North America. Too few return.

I can’t speak for the details included in the report, which appears to be a consolidation of information available in various reports from international organizations. Personally, I find these consolidations very helpful as I would never have the time to track all of this down. As well, they have created a graphic which illustrates research relationships. I did have to read the analysis in order to better understand the graphic, but I found the idea itself quite engaging, and I can see (pun!) that as one gets more visually literate with this type of graphic, it could be a very useful tool for grasping complex information very quickly.

Diabetes and mice

Last week, I missed this notice about a Canadian nanotechnology effort at the University of Calgary. From the news item on Nanowerk,

Using a sophisticated nanotechnology-based “vaccine,” researchers were able to successfully cure mice with type 1 diabetes and slow the onset of the disease in mice at risk for the disease. The study, co-funded by the Juvenile Diabetes Research Foundation (JDRF), provides new and important insights into understanding how to stop the immune attack that causes type 1 diabetes, and could even have implications for other autoimmune diseases.

The study, conducted at the University of Calgary in Alberta, Canada, was published today [April 8, 2010?] in the online edition of the scientific journal Immunity.

NANO Magazine

In more recent news, NANO Magazine’s new issue (no. 17) features a country focus on Canada. From the news item on Nanowerk,

In a special bumper issue of NANO Magazine we focus on two topics – textiles and nanomedicine. We feature articles about textiles from Nicholas Kotov and Kay Obendorf, articles about nanomedicine from the London Centre for Nanotechnology and Hans Hofstraat of Philips Healthcare, and an interview with Peter Singer. NANO Magazine Issue 17 is essential reading, www.nanomagazine.co.uk.

The featured country in this issue is Canada [emphasis mine], notable for its well funded facilities and research that is aggressively focused on industrial applications. Although having no unifying national nanotechnology initiative, there are many extremely well-funded organisations with world class facilities that are undertaking important nano-related research.

I hope I get a chance to read this issue.

Poetry on Bowen Island

Heather Haley, a local Vancouver, BC area, poet is hosting a special event this coming Saturday at her home on Bowen Island. From the news release,

VISITING POETS Salon & Reading

Josef & Heather’s Place
Bowen Island, BC
7:30  PM
Saturday, April 17, 2010

PENN KEMP, inimitable sound poet from London, Ontario

The illustrious CATHERINE OWEN from Vancouver, BC

To RSVP and get directions please email hshaley@emspace.com

Free Admission
Snacks & beverages-BYOB

Please come on over to our place on the sunny south slope to welcome these fabulous poets, hear their marvelous work, *see* their voices right here on Bowen Island!

London, ON performer and playwright PENN KEMP has published twenty-five books of poetry and drama, had six plays and ten CDs produced as well as Canada’s first poetry CD-ROM and several videopoems.  She performs in festivals around the world, most recently in Britain, Brazil and India. Penn is the Canada Council Writer-in-Residence at UWO for 2009-10.  She hosts an eclectic literary show, Gathering Voices, on Radio Western, CHRWradio.com/talk/gatheringvoices.  Her own project for the year is a DVD devoted to Ecco Poetry, Luminous Entrance: a Sound Opera for Climate Change Action, which has just been released.
CATHERINE OWEN is a Vancouver writer who will be reading from her latest book Frenzy (Anvil Press 09) which she has just toured across the entirety of Canada. Her work has appeared in international magazines, seen translation into three languages and been nominated for honours such as the BC Book Prize and the CBC Award. She plays bass and sings in a couple of metal bands and runs her own tutoring and editing business.

I have seen one of Penn Kemp’s video poems. It was at least five years ago and it still resonates with me. Guess what? I highly recommend going if you can. If you’re curious about Heather and her work, go here.

Monkey writes baseball story; Feynman symposium at USC; US government releases nanotechnology data sets; World Economic Forum (at Davos) interested in science

To my horror, researchers at Northwestern University in the US have developed software (Stats Monkey) that will let you automatically generate a story about a baseball game by pressing a button. More specifically, the data from the game is input to a database which, when activated, can generate content based on the game’s statistics.

I knew this would happen when I interviewed some expert at Xerox about 4 or 5 years ago. He was happily burbling on about tagging words and being able to call information up into a database and generate text automatically. I noted that as a writer I found the concept disturbing. He claimed that it would never be used for standard writing but just for things which are repetitive. I guess he was thinking it could be used for instructions and such, or perhaps he was just trying to placate me. Back to Stats Monkey: I find it interesting that the researchers don’t display any examples of the ‘writing’. If you are interested, you can check out the project here.
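
For what it’s worth, the core idea behind this kind of system is not mysterious: pick the statistically notable facts from the box score and pour them into canned phrases. Here’s a crude sketch of my own (nothing to do with Stats Monkey’s actual code) that also shows why the output tends to read like boilerplate.

# A crude illustration (my own, not Stats Monkey) of template-driven game
# recaps: select salient facts from the box score, then fill in canned phrases.

def recap(game):
    """Generate a short recap from a dictionary of game statistics."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = ((game["home_team"], game["away_team"])
                     if game["home_score"] > game["away_score"]
                     else (game["away_team"], game["home_team"]))
    verb = "edged" if margin <= 2 else "routed" if margin >= 6 else "beat"
    sentences = [
        f"The {winner} {verb} the {loser} "
        f"{max(game['home_score'], game['away_score'])}-"
        f"{min(game['home_score'], game['away_score'])} on {game['date']}."
    ]
    if game.get("star_player"):
        sentences.append(
            f"{game['star_player']} led the way with "
            f"{game['star_hits']} hits and {game['star_rbi']} RBIs."
        )
    return " ".join(sentences)

print(recap({
    "date": "May 1", "home_team": "Cubs", "away_team": "Mets",
    "home_score": 7, "away_score": 2,
    "star_player": "J. Smith", "star_hits": 3, "star_rbi": 4,
}))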

The discussion about the nanotechnology narrative continues. At the University of Southern California, they will be holding a 50th anniversary symposium about the publication (in 1960) of Feynman’s 1959 talk, There’s plenty of room at the bottom, and its impact on nanotechnology. You can read more about the event here or you can see the programme for the symposium here.

Bravo to the US government as they are releasing information to the public in a bid for transparency. Dave Bruggeman at Pasco Phronesis notes that the major science agencies had not released data sets at the time of his posting. Still, the Office of Science and Technology Policy did make data available including data about the National Nanotechnology Initiative,

The National Nanotechnology Initiative (NNI) coordinates Federal nanotechnology research and development among 25 Federal agencies. The data presented here represent NNI investments by agency and program component area (PCA) from the Initiative’s founding in FY 2001 through FY 2010 (requested). These data have been available as part of the NNI’s annual supplements to the President’s Budget. But compared to earlier releases, the data as presented here are more accessible and readily available for analysis by users wishing to assess trends and examine investment allocations over the 10-year history of the NNI. The cumulative NNI investment of nearly $12 billion is advancing our understanding of the unique phenomena and processes that occur at the nanoscale and is helping leverage that knowledge to speed innovation in high-impact opportunity areas such as energy, security, and medicine.

You can get the data set here in either XLS or PDF formats.
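
If you download the XLS version, a few lines of analysis are all it takes to look at the trends the release mentions. A hypothetical sketch, assuming the file has been saved locally as nni_investments.xls with agencies as rows and fiscal years as columns (the actual layout may well differ):

import pandas as pd

# Hypothetical sketch: explore the NNI investment data after downloading the
# spreadsheet. The filename and column layout below are assumptions -- adjust
# them to match the actual file.

df = pd.read_excel("nni_investments.xls", index_col=0)  # agencies as rows (assumed)

totals_by_year = df.sum(axis=0)          # total NNI investment per fiscal year
totals_by_agency = df.sum(axis=1).sort_values(ascending=False)

print(totals_by_year)                    # how the roughly $12 billion cumulative total accrues
print(totals_by_agency.head())           # which agencies invest the most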

It would be very difficult to get this type of information in Canada as we have no central hub for nanotechnology research funding. We do have the National Institute of Nanotechnology, which is a National Research Council agency jointly funded by the province of Alberta and the federal government, but not all nanotechnology research is done under its auspices. There is more than one government agency funding nanotechnology research, and there is no reporting mechanism that would allow us to easily find out how much funding there is or where it’s going.

The 2010 edition of the World Economic Forum meeting at Davos takes place January 27 – 31. It’s interesting to note that a meeting devoted to economic issues has sessions on science, social media, the arts, etc. which suggests a much broader view of economics than I’m usually exposed to. However, the session on ‘Entrepreneurial Science’ does ring a familiar note. From the session description,

According to the US National Academy of Sciences, only 0.1% of all funded basic science research results in a commercial venture.

How can the commercial viability of scientific research be improved?

I’m not sure how they derived the figure of 0.1%. Was the data international? Were they talking about government-funded research? Over what period of time? (It’s not uncommon for research to lie fallow for decades before conditions shift sufficiently to allow commercialization.) And how do you determine the path from research to commercialization? Perhaps the work that resulted in a commercial application was based on 10 other studies that did not.