Roadmap to neuromorphic engineering (digital and analog) for the creation of artificial brains *from the Georgia (US) Institute of Technology

While I didn’t mention neuromorphic engineering in my April 16, 2014 posting, which focused on the more general aspect of nanotechnology in Transcendence, a movie starring Johnny Depp and opening on April 18, that specialty (neuromorphic engineering) is what makes the events in the movie ‘possible’ (assuming very large stretches of imagination bringing us into the realm of implausibility and beyond). From the IMDB.com plot synopsis for Transcendence,

Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions. His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him. However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed to be a participant in his own transcendence. For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they canbut [sic] if they should. Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown. The only thing that is becoming terrifyingly clear is there may be no way to stop him.

In the film, Caster’s intelligence/consciousness is uploaded to a computer, which suggests the computer has human brainlike qualities and abilities. The effort to make computer or artificial intelligence more humanlike is called neuromorphic engineering and, according to an April 17, 2014 news item on phys.org, researchers at the Georgia Institute of Technology (Georgia Tech) have published a roadmap for this pursuit,

In the field of neuromorphic engineering, researchers study computing techniques that could someday mimic human cognition. Electrical engineers at the Georgia Institute of Technology recently published a “roadmap” that details innovative analog-based techniques that could make it possible to build a practical neuromorphic computer.

A core technological hurdle in this field involves the electrical power requirements of computing hardware. Although a human brain functions on a mere 20 watts of electrical energy, a digital computer that could approximate human cognitive abilities would require tens of thousands of integrated circuits (chips) and a hundred thousand watts of electricity or more – levels that exceed practical limits.

The Georgia Tech roadmap proposes a solution based on analog computing techniques, which require far less electrical power than traditional digital computing. The more efficient analog approach would help solve the daunting cooling and cost problems that presently make digital neuromorphic hardware systems impractical.

“To simulate the human brain, the eventual goal would be large-scale neuromorphic systems that could offer a great deal of computational power, robustness and performance,” said Jennifer Hasler, a professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), who is a pioneer in using analog techniques for neuromorphic computing. “A configurable analog-digital system can be expected to have a power efficiency improvement of up to 10,000 times compared to an all-digital system.”

An April 16, 2014 Georgia Tech news release by Rick Robinson, which originated the news item, describes why Hasler wants to combine analog (based on biological principles) and digital computing approaches to the creation of artificial brains,

Unlike digital computing, in which computers can address many different applications by processing different software programs, analog circuits have traditionally been hard-wired to address a single application. For example, cell phones use energy-efficient analog circuits for a number of specific functions, including capturing the user’s voice, amplifying incoming voice signals, and controlling battery power.

Because analog devices do not have to process binary codes as digital computers do, their performance can be both faster and much less power hungry. Yet traditional analog circuits are limited because they’re built for a specific application, such as processing signals or controlling power. They don’t have the flexibility of digital devices that can process software, and they’re vulnerable to signal disturbance issues, or noise.

In recent years, Hasler has developed a new approach to analog computing, in which silicon-based analog integrated circuits take over many of the functions now performed by familiar digital integrated circuits. These analog chips can be quickly reconfigured to provide a range of processing capabilities, in a manner that resembles conventional digital techniques in some ways.

Over the last several years, Hasler and her research group have developed devices called field programmable analog arrays (FPAA). Like field programmable gate arrays (FPGA), which are digital integrated circuits that are ubiquitous in modern computing, the FPAA can be reconfigured after it’s manufactured – hence the phrase “field-programmable.”

Hasler and Marr’s 29-page paper traces a development process that could lead to the goal of reproducing human-brain complexity. The researchers investigate in detail a number of intermediate steps that would build on one another, helping researchers advance the technology sequentially.

For example, the researchers discuss ways to scale energy efficiency, performance and size in order to eventually achieve large-scale neuromorphic systems. The authors also address how the implementation and the application space of neuromorphic systems can be expected to evolve over time.

“A major concept here is that we have to first build smaller systems capable of a simple representation of one layer of human brain cortex,” Hasler said. “When that system has been successfully demonstrated, we can then replicate it in ways that increase its complexity and performance.”

Among neuromorphic computing’s major hurdles are the communication issues involved in networking integrated circuits in ways that could replicate human cognition. In their paper, Hasler and Marr emphasize local interconnectivity to reduce complexity. Moreover, they argue it’s possible to achieve these capabilities via purely silicon-based techniques, without relying on novel devices that are based on other approaches.

Commenting on the recent publication, Alice C. Parker, a professor of electrical engineering at the University of Southern California, said, “Professor Hasler’s technology roadmap is the first deep analysis of the prospects for large scale neuromorphic intelligent systems, clearly providing practical guidance for such systems, with a nearer-term perspective than our whole-brain emulation predictions. Her expertise in analog circuits, technology and device models positions her to provide this unique perspective on neuromorphic circuits.”

Eugenio Culurciello, an associate professor of biomedical engineering at Purdue University, commented, “I find this paper to be a very accurate description of the field of neuromorphic data processing systems. Hasler’s devices provide some of the best performance per unit power I have ever seen and are surely on the roadmap for one of the major technologies of the future.”

Said Hasler: “In this study, we conclude that useful neural computation machines based on biological principles – and potentially at the size of the human brain — seems technically within our grasp. We think that it’s more a question of gathering the right research teams and finding the funding for research and development than of any insurmountable technical barriers.”

Here’s a link to and a citation for the roadmap,

Finding a roadmap to achieve large neuromorphic hardware systems by Jennifer Hasler and Bo Marr.  Front. Neurosci. (Frontiers in Neuroscience), 10 September 2013 | doi: 10.3389/fnins.2013.00118

This is an open access article (at least, the HTML version is).

I have looked at Hasler’s roadmap and it provides a good and readable overview (even for an amateur like me; note: you do need some tolerance for ‘not knowing’) of the state of neuromorphic engineering’s problems and suggestions for overcoming them. Here’s a description of a human brain and its power requirements as compared to a computer’s (from the roadmap),

One of the amazing thing about the human brain is its ability to perform tasks beyond current supercomputers using roughly 20 W of average power, a level smaller than most individual computer microprocessor chips. A single neuron emulation can tax a high performance processor; given there is 10^12 neurons operating at 20 W, each neuron consumes 20 pW average power. Assuming a neuron is conservatively performing the wordspotting computation (1000 synapses), 100,000 PMAC (PMAC = “Peta” MAC = 10^15 MAC/s) would be required to duplicate the neural structure. A higher computational efficiency due to active dendritic line channels is expected as well as additional computation due to learning. The efficiency of a single neuron would be 5000 PMAC/W (or 5 TMAC/μW). A similar efficiency for 10^11 neurons and 10,000 synapses is expected.

Building neuromorphic hardware requires that technology must scale from current levels given constraints of power, area, and cost: all issues typical in industrial and defense applications; if hardware technology does not scale as other available technologies, as well as takes advantage of the capabilities of IC technology that are currently visible, it will not be successful.
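For readers who like to check the numbers, the power arithmetic in that excerpt is easy to reproduce. Here’s a back-of-envelope sketch of my own (illustrative only; these are not calculations taken from the paper):

```python
# Back-of-envelope check of the power figures quoted above.
# Illustrative arithmetic only, not taken from Hasler and Marr's paper.

brain_power_w = 20.0           # average power of the human brain (watts)
neuron_count = 1e12            # neuron count used in the quoted excerpt
total_mac_per_s = 1e5 * 1e15   # 100,000 PMAC = 1e20 multiply-accumulates per second

power_per_neuron_w = brain_power_w / neuron_count
efficiency_mac_per_w = total_mac_per_s / brain_power_w

print(f"per-neuron power: {power_per_neuron_w:.0e} W")           # 2e-11 W, i.e. 20 pW
print(f"efficiency: {efficiency_mac_per_w / 1e15:.0f} PMAC/W")   # 5000 PMAC/W (= 5 TMAC/uW)
```

The numbers line up with the quote: 20 W spread over 10^12 neurons is 20 pW each, and 10^20 MAC/s delivered for 20 W works out to 5000 PMAC/W.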

One of my main areas of interest is the memristor (a nanoscale ‘device/circuit element’ which emulates synaptic plasticity), which was mentioned in a way that allows me to understand how the device fits (or doesn’t fit) into the overall conceptual framework (from the roadmap),

The density for a 10 nm EEPROM device acting as a synapse begs the question of whether other nanotechnologies can improve on the resulting Si [silicon] synapse density. One transistor per synapse is hard to beat by any approach, particularly in scaled down Si (like 10 nm), when the synapse memory, computation, and update is contained within the EEPROM device. Most nano device technologies [i.e., memristors (Snider et al., 2011)] show considerable difficulties to get to two-dimensional arrays at a similar density level. Recently, a team from U. of Michigan announced the first functioning memristor two-dimensional (30 × 30) array built on a CMOS chip in 2012 (Kim et al., 2012), claiming applications in neuromorphic engineering, the same group has published innovative devices for digital (Jo and Lu, 2009) and analog applications (Jo et al., 2011).

I notice that the reference to the University of Michigan is relatively neutral in tone and that the memristor does not figure substantively in Hasler’s roadmap.

Intriguingly, there is a section on commercialization; I didn’t think the research was at that stage yet (from the roadmap),

Although one can discuss how to build a cortical computer on the size of mammals and humans, the question is how will the technology developed for these large systems impact commercial development. The cost for ICs [integrated circuits or chips] alone for cortex would be approximately $20 M in current prices, which although possible for large users, would not be common to be found in individual households. Throughout the digital processor approach, commercial market opportunities have driven the progress in the field. Getting neuromorphic technology integrated into commercial environment allows us to ride this powerful economic “engine” rather than pull.

In most applications, the important commercial issues include minimization of cost, time to market, just sufficient performance for the application, power consumed, size and weight. The cost of a system built from ICs is, at a macro-level, a function of the area of those ICs, which then affects the number of ICs needed system wide, the number of components used, and the board space used. Efficiency of design tools, testing time and programming time also considerably affect system costs. Time to get an application to market is affected by the ability to reuse or quickly modify existing designs, and is reduced for a new application if existing hardware can be reconfigured, adapting to changing specifications, and a designer can utilize tools that allow rapid modifications to the design. Performance is key for any algorithm, but for a particular product, one only needs a solution to that particular problem; spending time to make the solution elegant is often a losing strategy.

The neuromorphic community has seen some early entries into commercial spaces, but we are just at the very beginning of the process. As the knowledge of neuromorphic engineering has progressed, which have included knowledge of sensor interfaces and analog signal processing, there have been those who have risen to the opportunities to commercialize these technologies. Neuromorphic research led to better understanding of sensory processing, particularly sensory systems interacting with other humans, enabling companies like Synaptics (touch pads), Foveon (CMOS color imagers), and Sonic Innovation (analog–digital hearing aids); Gilder provides a useful history of these two companies elsewhere (Gilder, 2005). From the early progress in analog signal processing we see companies like GTronix (acquired by National Semiconductor, then acquired by Texas Instruments) applying the impact of custom analog signal processing techniques and programmability toward auditory signal processing that improved sound quality requiring ultra-low power levels. Further, we see in companies like Audience there is some success from mapping the computational flow of the early stage auditory system, and implementing part of the event based auditory front-end to achieve useful results for improved voice quality. But the opportunities for the neuromorphic community are just beginning, and directly related to understanding the computational capabilities of these items. The availability of ICs that have these capabilities, whether or not one mentions they have any neuromorphic material, will further drive applications.

One expects that part of a cortex processing system would have significant computational possibilities, as well as cortex structures from smaller animals, and still be able to reach price points for commercial applications. In the following discussion, we will consider the potential of cortical structures at different levels of commercial applications. Figure 24 shows one typical block diagram, algorithms at each stage, resulting power efficiency (say based on current technology), as well as potential applications of the approach. In all cases, we will be considering a single die solution, typical for a commercial product, and will minimize the resulting communication power to I/O off the chip (no power consumed due to external memories or digital processing devices). We will assume a net computational efficiency of 10 TMAC/mW, corresponding to a lower power supply (i.e., mostly 500 mV, but not 180 mV) and slightly larger load capacitances; we make these assumptions as conservative pull back from possible applications, although we expect the more aggressive targets would be reachable. We assume the external power consumed is set by 1 event/second/neuron average event-rate off chip to a nearby IC. Given the input event rate is hard to predict, we don’t include that power requirement but assume it is handled by the input system. In all of these cases, getting the required computation using only digital techniques in a competitive size, weight, and especially power is hard to foresee.

We expect progress in these neuromorphic systems and that should find applications in traditional signal processing and graphics handling approaches. We will continue to have needs in computing that outpace our available computing resources, particularly at a power consumption required for a particular application. For example, the recent emphasis on cloud computing for academic/research problems shows the incredible need for larger computing resources than those directly available, or even projected to be available, for a portable computing platform (i.e., robotics). Of course a server per computing device is not a computing model that scales well. Given scaling limits on computing, both in power, area, and communication, one can expect to see more and more of these issues going forward.

We expect that a range of different ICs and systems will be built, all at different targets in the market. There are options for even larger networks, or integrating these systems with other processing elements on a chip/board. When moving to larger systems, particularly ones with 10–300 chips (3 × 10^7 to 10^9 neurons) or more, one can see utilization of stacking of dies, both decreasing the communication capacitance as well as board complexity. Stacking dies should roughly increase the final chip cost by the number of dies stacked.

In the following subsections, we overview general guidelines to consider when considering using neuromorphic ICs in the commercial market, first for low-cost consumer electronics, and second for a larger neuromorphic processor IC.
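To get a feel for what the assumed 10 TMAC/mW efficiency means in practice, here’s a rough, hypothetical estimate of my own; the die size and per-neuron workload below are my assumptions (loosely consistent with the figures quoted earlier), not numbers from the paper:

```python
# Hypothetical power estimate for a single neuromorphic die at the
# 10 TMAC/mW efficiency assumed in the excerpt above. The die size and
# per-neuron workload are my own assumptions, not figures from the paper.

efficiency_mac_per_mw = 10e12   # 10 TMAC/mW
mac_per_neuron = 1e8            # ~1e20 MAC/s spread over 1e12 neurons (earlier excerpt)
neurons_on_die = 3e6            # hypothetical die, in line with ~3e7 neurons over 10 chips

total_mac_per_s = neurons_on_die * mac_per_neuron          # 3e14 MAC/s
compute_power_mw = total_mac_per_s / efficiency_mac_per_mw

print(f"~{compute_power_mw:.0f} mW of compute for {neurons_on_die:.0e} neurons")   # ~30 mW
```

Scaling the same arithmetic up to 10^12 neurons lands on the order of 10 kW of compute power, well above the brain’s 20 W but well below the hundred-kilowatt-plus digital estimates cited at the top of this post.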

I have a casual observation to make. While the authors of the roadmap came to this conclusion, “This study concludes that useful neural computation machines based on biological principles at the size of the human brain seems technically within our grasp,” they’re also leaving themselves some wiggle room because the truth is no one knows if copying a human brain with circuits and various devices will lead to ‘thinking’ as we understand the concept.

For anyone who’s interested, you can search this blog for neuromorphic engineering, artificial brains, and/or memristors as I have many postings on these topics. One of my most recent on the topic of artificial brains is an April 7, 2014 piece titled: Brain-on-a-chip 2014 survey/overview.

One last observation about the movie ‘Transcendence’: has no one else noticed that it’s the ‘Easter’ story with a resurrected and digitized ‘Jesus’?

* Space inserted between ‘brains’ and ‘from’ in head on April 21, 2014.

Vancouver (Canada) and a city conversation about science that could have been better

Institutional insularity is a problem one finds everywhere. Interestingly, very few people see it that way, due in large part to self-reinforcing loopbacks. Take universities, for example, and more specifically Simon Fraser University’s April 17, 2014 City Conversation (in Vancouver, Canada) featuring Dr. Arvind Gupta (as of July 2014, president of the University of British Columbia) in a presentation titled: Creativity! Connection! Innovation!

Contrary to the hope I expressed in my April 14, 2014 post about the then-upcoming event, this was largely an exercise in self-reference. Predictably, given the flyer they used to advertise the event (the text was reproduced in its entirety in my April 14, 2014 posting), over 90% of the audiences (Vancouver, Burnaby, and Surrey campuses) were associated with one university or another. Adding to the overwhelmingly ‘insider’ feel of this event, the speaker brought with him two students who had benefited from the organization he currently leads, Mitacs, a Canadian not-for-profit organization that offers funding for internships and fellowships at Canadian universities and formerly a mathematics NCE (Networks of Centres of Excellence of Canada, a Canadian federal government program).

Despite the fact that this was billed as a ‘city conversation,’ the talk focused largely on universities, their role in efforts to make Canada more productive, and the wonderfulness of Mitacs. Unfortunately, what I wanted to hear and talk about was how Gupta, the students, and audience members saw the role of universities in cities, with a special reference to science.

It was less ‘city’ conversation and more ‘let’s focus on ourselves and our issues’ conversation. Mitacs, Canada’s productivity, and discussion about universities and innovation are of little inherent interest to anyone outside a select group of policy wonks (i.e., government and academe).

The conversation was self-referential until the very end. In the last minutes Gupta mentioned cities and science in the context of how cities in other parts of the world are actively supporting science. (For more about this interest elsewhere, you might find this Oct. 21, 2010 posting which features an article by Richard Van Noorden titled, Cities: Building the best cities for science; Which urban regions produce the best research — and can their success be replicated? as illuminating as I did.)

I wish Gupta had started with the last topic he introduced because Vancouverites have a lot of interest in science. In the last two years, TRIUMF, Canada’s national laboratory for particle and nuclear physics, has held a number of events at Science World and elsewhere which have been fully booked with waiting lists. The Peter Wall Institute for Advanced Studies has also held numerous science-themed events which routinely have waiting lists despite being held in one of Vancouver’s largest theatre venues.

If universities really want to invite outsiders into their environs and have city conversations, they need to follow through on the promise (e.g. talking about cities and science in a series titled “City Conversations”), as well as do a better job of publicizing their events, encouraging people to enter their sacred portals, and addressing their ‘outsider’ audiences.

By the way, I have a few hints for the student speakers,

  • don’t scold your audience (you may find Canadians’ use of space shocking but please keep your indignation and sense of superiority to yourself)
  • before you start lecturing (at length) about the importance of interdisciplinary work, you might want to assess your audience’s understanding, otherwise you may find yourself preaching to the choir and/or losing your audience’s attention
  • before you start complaining that there’s no longer a mandatory retirement age and suggesting that this is the reason you can’t get a university job you may want to consider a few things: (1) your audience’s average age, in this case, I’d estimate that it was at least 50 and consequently not likely to be as sympathetic as you might like (2) the people who work past mandatory retirement may need the money or are you suggesting your needs are inherently more important? (3) whether or not a few people stay on past their ‘retirement’ age has less to do with your university job prospects than demographics and that’s a numbers game (not sure why I’d have to point that out to someone who’s associated with a mathematics organization such as Mitacs)

I expect no one has spoken or will speak to the organizers, Gupta, or the students other than to give them compliments. In fact, it’s unlikely there will be any real critique of having this presentation as part of a series titled “City Conversations,” and that brings this posting back to institutional insularity. The problem is everywhere, not just in universities, and I’m increasingly interested in approaches to mitigating the tendency. If there’s anyone out there who knows of any examples where insularity has been tackled, please do leave a comment and, if possible, links.

Isis Innovation (University of Oxford, UK) spins out buckyball company, Designer Carbon Materials

Buckyballs are also known as Buckminsterfullerenes. The name is derived from Buckminster Fuller, who designed something he called geodesic domes. From the Wikipedia entry (Note: Links have been removed),

Buckminsterfullerene (or bucky-ball) is a spherical fullerene molecule with the formula C60 [C = carbon; 60 is the number of carbon atoms in the molecule]. It has a cage-like fused-ring structure (truncated icosahedron) which resembles a soccer ball, made of twenty hexagons and twelve pentagons, with a carbon atom at each vertex of each polygon and a bond along each polygon edge.

It was first generated in 1985 by Harold Kroto, James R. Heath, Sean O’Brien, Robert Curl, and Richard Smalley at Rice University.[2] Kroto, Curl and Smalley were awarded the 1996 Nobel Prize in Chemistry for their roles in the discovery of buckminsterfullerene and the related class of molecules, the fullerenes. The name is a reference to Buckminster Fuller, as C60 resembles his trademark geodesic domes. Buckminsterfullerene is the most commonly naturally occurring fullerene molecule, as it can be found in small quantities in soot.[3][4] Solid and gaseous forms of the molecule have been detected in deep space.[5]
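As a quick geometric aside of my own (not from the Wikipedia entry): the ‘60’ falls straight out of the truncated-icosahedron shape described above. With 20 hexagons and 12 pentagons, and every vertex shared by exactly three faces, the vertex count is (20 × 6 + 12 × 5) / 3 = 60, one carbon atom per vertex:

```python
# Vertex count of a truncated icosahedron: 20 hexagons + 12 pentagons,
# every vertex shared by three faces, every edge shared by two faces.
hexagons, pentagons = 20, 12
face_vertex_slots = hexagons * 6 + pentagons * 5   # 180

vertices = face_vertex_slots // 3   # 60 -> one carbon atom per vertex of C60
edges = face_vertex_slots // 2      # 90
faces = hexagons + pentagons        # 32

assert vertices - edges + faces == 2   # Euler's formula for a convex polyhedron
print(vertices)                        # 60
```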

Here’s a model of a buckyball,

Courtesy: Isis Innovation (Oxford University)

An April 15, 2014 University of Oxford (Isis Innovation) news release (h/t phys.org) describes the new research and some technical details while avoiding any mention of how they’ve tackled the production problems (a major issue, which has seriously constrained their commercial use),

The firm, Designer Carbon Materials, has been established by Isis Innovation, the University of Oxford’s technology commercialisation company, and will cost-effectively manufacture commercially useful quantities of the spherical carbon cage structures. Designer Carbon Materials is based on research from Dr Kyriakos Porfyrakis of Oxford University’s Department of Materials.

‘It is possible to insert a variety of useful atoms or atomic clusters into the hollow interior of these ball-like molecules, giving them new and intriguing abilities. Designer Carbon Materials will focus on the production of these value-added materials for a range of applications,’ said Dr Porfyrakis.

‘For instance, fullerenes are currently used as electron acceptors in polymer-based solar cells achieving some of the highest power conversion efficiencies known for these kinds of solar cells. Our endohedral fullerenes are even better electron-acceptors and therefore have the potential to lead to efficiencies exceeding 10 per cent.

‘The materials could also be developed as superior MRI contrast agents for medical imaging and as diagnostics for Alzheimer’s and Parkinson’s, as they are able to detect the presence of superoxide free radical molecules which may cause these conditions. We are receiving fantastic interest from organisations developing these applications, who until now have been unable to access useful quantities of these materials.’

The manufacturing process, patented by Isis Innovation, will continue to be developed by Designer Carbon Materials as it also makes its first sales of these extremely high-value materials.

Tom Hockaday, managing director of Isis Innovation, said: ‘This is a great example of an Isis spin-out which is both looking at exciting future applications for its technology and also answering a real market need. There is already significant demand for these nanomaterials and we expect the first customer orders will be fulfilled over the next few months.’

Investment in the company has been led by Oxford Technology Management and the Oxford Invention Fund. Lucius Carey from Oxford Technology Management said: ‘We are delighted to be investing in Designer Carbon Materials. The purposes of the investment will be to move into commercial premises and to scale up.’

Isis Innovation is a University of Oxford initiative and you can find out more about Isis Innovation here. As for the new spin-out company, Designer Carbon Materials, they have no website that I’ve been able to find but there is this webpage on the Isis Innovation website.

Your plant feeling stressed? Have we got a nanosensor for you!

An April 15, 2014 news item on ScienceDaily features an intriguing application for nanosensors on plants that may have an important impact as we deal with the problems associated with droughts. This work comes from the University of California at San Diego (UCSD),

Biologists have succeeded in visualizing the movement within plants of a key hormone responsible for growth and resistance to drought. The achievement will allow researchers to conduct further studies to determine how the hormone helps plants respond to drought and other environmental stresses driven by the continuing increase in the atmosphere’s carbon dioxide, or CO2, concentration.

The April 15, 2014 UCSD news release by Kim McDonald, which originated the news item, describes the plant hormone being tracked and the tracking tool developed by the researchers,

The plant hormone the biologists directly tracked is abscisic acid, or ABA, which plays a major role in activating drought resistance responses of plants and in regulating plant growth under environmental stress conditions. The ABA stress hormone also controls the closing of stomata, the pores within leaves through which plants lose 95 percent of their water while taking in CO2 for growth.

Scientists already know the general role that ABA plays within plants, but by directly visualizing the hormone they can now better understand the complex interactions involving ABA when a plant is subjected to drought or other stress.

“Understanding the dynamic distribution of ABA in plants in response to environmental stimuli is of particular importance in elucidating the action of this important plant hormone,” says Julian Schroeder, a professor of biology at UC San Diego who headed the research effort. “For example, we can now investigate whether an increase in the leaf CO2 concentration that occurs every night due to respiration in leaves affects the ABA concentration in stomatal cells.”

The researchers developed what they call a “genetically-encoded reporter” in order to directly and instantaneously observe the movements of ABA within the mustard plant Arabidopsis. These reporters, called “ABAleons,” contain two differentially colored fluorescent proteins attached to an ABA-binding sensor protein. Once bound to ABA, the ABAleons change their fluorescence emission, which can be analyzed using a microscope. The researchers showed that ABA concentration changes and waves of ABA movement could be monitored in diverse tissues and individual cells over time and in response to stress.

“Using this reporter, we directly observed long distance ABA movements from the stem of a germinating seedling to the leaves and roots of the growing plant and, for the first time, we were able to determine the rate of ABA movement within the growing plant,” says Schroeder.

“Using this tool, we now can detect ABA in live plants and see how it is distributed,” says Rainer Waadt, a postdoctoral associate in Schroeder’s laboratory and the first author of the paper. “We are also able to directly see that environmental stress causes an increase in the ABA concentration in the stomatal guard cells that surround each stomatal pore. In the future, our sensors can be used to study ABA distribution in response to different stresses, including CO2 elevations, and to identify other molecules and proteins that affect the distribution of this hormone. We can also learn how fast plants respond to stresses and which tissues are important for the response.”

The researchers demonstrated that their new ABA nanosensors also function effectively as isolated proteins. This means that the sensors could be directly employed using state-of-the-art high-throughput screening platforms to screen for chemicals that could activate or enhance a drought resistance response. The scientists say such chemicals could become useful in the future for enhancing a drought resistance response, when crops experience a severe drought, like the one that occurred in the Midwest in the summer of 2012.
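As an aside for readers curious about how a ratiometric reporter like an ABAleon gets turned into numbers: with FRET sensors the usual approach is to divide the acceptor-channel emission by the donor-channel emission, pixel by pixel, after background subtraction. Here’s a minimal sketch of that idea; the channel names, background value, and toy ‘images’ are hypothetical, and this is not the analysis pipeline from the Waadt et al. paper:

```python
import numpy as np

# Minimal sketch of how a ratiometric FRET reporter is commonly quantified:
# acceptor emission divided by donor emission after background subtraction.
# Channel names, background value, and the toy "images" are hypothetical.

def fret_ratio(acceptor_img, donor_img, background=100.0, eps=1e-6):
    """Per-pixel emission ratio (acceptor / donor) after background subtraction."""
    acceptor = np.clip(acceptor_img - background, 0.0, None)
    donor = np.clip(donor_img - background, 0.0, None)
    return acceptor / (donor + eps)

# Toy 2x2 "images"; the direction of the ratio change on ABA binding
# depends on the particular sensor design.
acceptor = np.array([[900.0, 400.0], [350.0, 1200.0]])
donor = np.array([[600.0, 800.0], [700.0, 500.0]])
print(fret_ratio(acceptor, donor))
```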

The scientists have provided a 1 min. 30 sec. (roughly) video where you can watch a vastly speeded up version of the process (Courtesy: UCSD),

Here’s a link to and a citation for the paper,

FRET-based reporters for the direct visualization of abscisic acid concentration changes and distribution in Arabidopsis by Rainer Waadt, Kenichi Hitomi, Noriyuki Nishimura, Chiharu Hitomi, Stephen R Adams, Elizabeth D Getzoff, & Julian I Schroeder. eLife 2014;3:e01739 DOI: http://dx.doi.org/10.7554/eLife.01739 Published April 15, 2014

This paper is open access.

CurTran and its plan to take over the world by replacing copper wire with LiteWire (carbon nanotubes)

This story is about carbon nanotubes and commercialization, if I read Molly Ryan’s April 14, 2014 article for the Upstart Business Journal correctly,

CurTran LLC just signed its first customer contract with oilfield service Weatherford International Ltd. (NYSE: WFT) in a deal valued at more than $350 million per year.

To say the least, this is a pretty big step forward for the Houston-based nanotechnology materials company, especially since Gary Rome, CurTran’s CEO, said the entire length of the contract is valued at more than $7 billion. But when looking at the grand scheme of CurTran’s plans, this $7 billion contract is a baby step.

“We want to replace copper wire,” Rome said. “Globally, copper is used everywhere and it is a huge market. … We (have a product) that is substantially stronger than copper, and our electrical properties are in common.”

Rice University professor Richard Smalley began researching what would eventually become CurTran’s LiteWire product more than nine years ago, and CurTran officially formed in 2011.

CurTran, which is based in Houston, Texas, describes its LiteWire product this way,

Copper is a better conductor than Aluminum and Steel, and silver is too expensive to use in most applications.  So LiteWire is benchmarked against the dominant conductor in the market, copper.

So how does LiteWire match up against copper wire and cable?

Electrically, in established power transmission wiring standards and frequency, LiteWire has the same properties as copper conductors.  Resistivity, impedance, loading, sizing, etc, copper and LiteWire are the same at 60HZ.  This was intentional by our engineering department, ease adoption of LiteWire.  No need to change wire coating, cable winding, or wire processing equipment or processes, just change over to LiteWire and go.  Every electrician can work with LiteWire utilizing the same tools, standards and instruments.

So what is different between Copper Wire and LiteWire?

It’s Carbon.  LiteWire is an aligned structure double wall carbon nano-tube’s in wire form.  It is a 99.9% carbon structure that takes advantage of the free electrons available in carbon, while limiting the ability of the carbon to form new molecules, such as COx.  The outer electrons of carbon are loosely bound and easily conduced to move from atom to atom.

It is light.  LiteWire is 1/5th the weight of copper conductors.  A 40lb spool of 10ga 3-wire copper wire has 200 feet of wire.  A 40lb spool of 10ga 3-wire LiteWire has 800 feet of wire.  Aluminum wire is ½ the weight of copper, yet requires a 50% larger diameter wire for the same conductive properties, LiteWire sizing is exactly the same as copper.

It is strong.  LiteWire is stronger than steel, 20 times stronger than copper, and stronger than 8000 series Aluminum cable.  Span greater distances between towers, pull higher tension, reduce installation costs and maintenance.

It doesn’t creep.  LiteWire expands and contracts 1/3 less than copper and its aluminum equivalents.  Connection points are secure year round and year after year.  Less sagging of power lines in hot temperatures, less opportunity for grounding of power lines and power outages.

More power, less loss.  LiteWire is equal to copper wire at 60Hz, and highly efficient at higher frequencies, voltages and amperes.  More electrical energy can be transmitted with lower losses in the system.  Less wasted energy in the line, means less power needs to be produced.

A longer life.  LiteWire is noncorrosive in all naturally occurring environments, from deep sea to outer space. No issue with dissimilar metals at connection points.  LiteWire is inert and does not degrade over time.

Can you hear me now.  Litewire is the perfect signal conducting wire.  LiteWire is superior at higher frequencies, losses are lower and signal clarity is greater.  Networks can carry more bandwidth and signal separation is cleaner.

Never wet.  LiteWire is hydrophobic by nature.  Water beads up and is shed, even if the water freezes, it does so in bead form and falls away.  No more powerline failures from ice buildup and breaking or shorting due to line sag.

How much does it cost.  LiteWire costs the same as copper wire of equal length and size.  As the price of copper continues to rise and as new LiteWire facilities come on line, the cost of LiteWire will decrease. Projecting out ten years, LiteWire will be half the cost of copper wire and cable.

Never fatigues.  LiteWire has a very long fatigue life, we are still looking for it.  LiteWire is not susceptible to fatigue failure.  LiteWire’s bonds are at the atomic level, when that bond is broken, the failure occurs.  Repeated cycles to near the breaking point do not degrade LiteWire’s integrity.  Metal conductors fatigue under repeated bending, reducing their load carrying capabilities and subsequent failure.

There is a table of specific technical properties on the LiteWire product webpage.
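For what it’s worth, the spool figures in the excerpt can be sanity-checked with a little arithmetic (my own check, using only the numbers quoted above):

```python
# Sanity check on the spool figures quoted above, using only the numbers
# given in the excerpt (10 ga 3-wire, 40 lb spools).
copper_lb_per_ft = 40 / 200     # copper: 40 lb spool holds 200 feet
litewire_lb_per_ft = 40 / 800   # LiteWire: 40 lb spool holds 800 feet

print(copper_lb_per_ft, litewire_lb_per_ft)      # 0.2 vs 0.05 lb per foot
print(litewire_lb_per_ft / copper_lb_per_ft)     # 0.25
```

By the spool numbers, LiteWire comes out at about a quarter of copper’s weight per foot, a little heavier than the ‘one-fifth’ claim elsewhere in the copy; presumably spool tare weight or rounding accounts for the gap.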

CurTran’s CEO has big plans (from the Ryan article),

With a multibillion-dollar contract under its belt only a few years after its founding, Rome intends for CurTran to have blockbuster years for the next five years. According to the company’s website, it plans to hire 3,600 new employees around the world in this time frame.

“We also plan to open a new production facility every six months for the next five years,” Rome said. “We’ve already identified the first four locations.”

For Weatherford’s perspective on this deal, there’s the company’s April 7, 2014 news release,

Weatherford International Ltd. today [April 7, 2014] announced that it has entered into an agreement with CurTran LLC to use, sell, and distribute LiteWire, the first commercial scale production of a carbon nanotube technology in wire and cable form.

“With LiteWire products, we gain exclusivity to a revolutionary technology that will greatly add value to our business,” said Dharmesh Mehta, chief operating officer for Weatherford. “The use of LiteWire products allows us to provide safer, faster, and more economic solutions for our customers.”

In addition to using LiteWire in its global operations, Weatherford will be the exclusive distributor of this product in the oil and gas industry.

Interestingly, Weatherford seems to be in a highly transitional state. From an April 3, 2014 article by Jordan Blum for Houston Business Journal (Note: Links have been removed),

Weatherford International Ltd. (NYSE: WFT) plans to move its corporate headquarters from Switzerland to Ireland largely because of changes to Swiss corporate executive laws and potential uncertainties.

Weatherford, which has its operational headquarters in Houston, is  undergoing a global downsizing as it relocates its corporate offices.

Weatherford President and CEO Bernard Duroc-Danner said the move will help the company “quickly and efficiently execute and move forward on our transformational path.”

The downsizing and move put a different complexion on Weatherford’s deal with CurTran. It seems Weatherford is taking a big gamble on its future. I’m basing that comment on the fact that there is, to my knowledge, no other deployment on a similar scale of a ‘carbon nanotube’ wire such as LiteWire.

It would appear from CurTran’s Overview that LiteWire’s deployment is an inevitability,

CurTran LLC was formed for one purpose.  To industrialize the production of Double Wall Carbon Nanotubes in wire form to be a direct replacement for metallic conductors in wire and cable applications.

That rhetoric is worthy of a 19th century capitalist. Of course, those guys did change the world.

There’s a bit more about the company’s history and activities from the Overview page,

CurTran was formed in 2011 by industrial manufacturing, engineering and research organizations.  An industrialization plan was defined, customer and industry partners engaged, the intellectual property consolidated and operations launched.

Operations are based in the following areas:

  • Corporate Headquarters, located in Houston Texas
  • Test Facility, located in Houston Texas and operated by NanoRidge and Rice University researchers.
  • Pilot Plant located in Eastern Europe
  • Production facilities are to be located in various global markets.  Production facilities will be fully operational in 2014 producing in excess of 50,000 tonnes per facility annually.

CurTran manufactures the LiteWire conductor in many forms.  We do not manufacture insulated products at this time.  We rely on our Joint Venture Partners to deliver a completed wire/cable product to their existing customer base.

CurTran provides engineering services to Partners and Customers that seek to optimize their products to the full capabilities of LiteWire.

CurTran supports ongoing research and development activities in applied material science, chemical/mechanical/thermo/fluid production processes, industrial equipment design, and  application sciences.

Getting back to Weatherford, I imagine there is celebration in Ireland although I can’t help wondering if the Swiss, in a last minute solution, might not find a way to keep Weatherford’s headquarters right where they are. I haven’t been able to find a date for Weatherford’s move to Ireland.

Inhibiting viruses with nanocrystalline cellulose (NCC) in Finland

Research and interest in cellulose nanomaterials of one kind or another seem to be reaching new heights. That’s my experience since this is my third posting on the topic in one week.

The latest research features NCC (nanocrystalline cellulose or, as it’s sometimes known, cellulose nanocrystals [CNC]) as a ‘viral inhibitor’ and is described in an April 15, 2014 news item on Nanowerk,

Researchers from Aalto University [Finland] and the University of Eastern Finland have succeeded in creating a surface on nano-sized cellulose crystals that imitates a biological structure. The surface adsorbs viruses and disables them. The results can prove useful in the development of antiviral ointments and surfaces, for instance.

There are many viral diseases in the world for which no pharmaceutical treatment exists. These include, among others, dengue fever, which is spread by mosquitoes in the tropics, as well as a type of diarrhea, which is more familiar in Finland and is easily spread by the hands and can be dangerous especially for small children and the elderly.

An April 15, 2014 Aalto University news release, which originated the news item, provides more detail,

Researchers at Aalto University and the University of Eastern Finland have now succeeded in preliminary tests to prevent the spread of one type of virus into cells with the help of a new type of nanocrystalline cellulose. Nano-sized cellulose crystals were manufactured out of cotton fibre or filter paper with the help of sulphuric acid, causing sulphate ions with negative charges to attach to their surfaces. The ions then attached to alphaviruses used in the test and neutralised them. When the researchers replaced the sulphate ions with cellulose derivatives that imitate tyrosine sulphates, the activity of the viruses was further reduced. The experiments succeeded in preventing viral infection in 88-100 percent of the time with no noticeable effect on the viability of the cells by the nanoparticles. The research findings were published in the journal Biomacromolecules.

Here’s a diagram illustrating how the new type of NCC works,

Courtesy of Aalto University

The news release includes perspectives from the researchers,

’Certain cellulose derivatives had been seen to have an impact on viruses before. The nano scale increases the proportion of the surface area to that of the number of grams to a very high level, which is an advantage, because viruses specifically attach themselves to surfaces. Making the cellulose crystals biomimetic, which means that they mimic biological structures, was an important step, as we know that in nature viruses often interact specifically with tyrosine structures,’ he [Jukka Seppälä, Professor of Polymer Technology at Aalto University] says.

Both Jukka Seppälä and Ari Hinkkanen, Professor of Gene Transfer Technology at the University of Eastern Finland, emphasise that the research is still in the early stages.

‘Now we know that the attachment of a certain alphavirus can be effectively prevented when we use large amounts of nanocrystalline cellulose.  Next we need to experiment with other alpha viruses and learn to better understand the mechanisms that prevent viral infection. In addition, it is necessary to ascertain if cellulose can also block other viruses and in what conditions, and to investigate whether or not the sulphates have a deleterious effects on an organism,’ Ari Hinkkanen explains.

According to Kristiina Järvinen, Professor of Pharmaceutical Technology at the University of Eastern Finland, there are many routes that can be taken in the commercialisation of the results. The development of an antiviral medicine is the most distant of these; the idea could be sooner applied in disinfectant ointments and coatings, for instance.

‘It would be possible to provide protection against viruses, spread by mosquitoes, by applying ointment containing nanocrystalline cellulose onto the skin. Nanocrystalline cellulose applied on hospital door handles could kill viruses and prevent them from spreading.  However, we first need to ascertain if the compounds will remain effective in a non-liquid form and how they work in animal tests,’ she ponders.

For the curious, here’s a link to and a citation for the paper,

Synthesis of Cellulose Nanocrystals Carrying Tyrosine Sulfate Mimetic Ligands and Inhibition of Alphavirus Infection by Justin O. Zoppe, Ville Ruottinen, Janne Ruotsalainen, Seppo Rönkkö, Leena-Sisko Johansson, Ari Hinkkanen, Kristiina Järvinen, and Jukka Seppälä. Biomacromolecules, 2014, 15 (4), pp 1534–1542 DOI: 10.1021/bm500229d Publication Date (Web): March 14, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

As for my other recent postings on cellulose nanomaterials, there’s this April 14, 2014 piece titled: Preparing nanocellulose for eventual use in dressings for wounds and this from April 10, 2014 titled: US Dept. of Agriculture wants to commercialize cellulose nanomaterials.

Nanotechnology at the movies: Transcendence opens April 18, 2014 in the US & Canada

Screenwriter Jack Paglen has an intriguing interpretation of nanotechnology, one he (along with the director) shares in an April 13, 2014 article by Larry Getlen for the NY Post and in his movie, Transcendence, which is opening in the US and Canada on April 18, 2014. First, here are a few of the more general ideas underlying his screenplay,

In “Transcendence” — out Friday [April 18, 2014] and directed by Oscar-winning cinematographer Wally Pfister (“Inception,” “The Dark Knight”) — Johnny Depp plays Dr. Will Caster, an artificial-intelligence researcher who has spent his career trying to design a sentient computer that can hold, and even exceed, the world’s collective intelligence.

After he’s shot by antitechnology activists, his consciousness is uploaded to a computer network just before his body dies.

“The theories associated with the film say that when a strong artificial intelligence wakes up, it will quickly become more intelligent than a human being,” screenwriter Jack Paglen says, referring to a concept known as “the singularity.”

It should be noted that anti-technology terrorists do exist. I don’t think I’ve covered that topic in a while; the most recent piece is my Aug. 31, 2012 posting titled “In depth and one year later—the nanotechnology bombings in Mexico,” which provides an overview of sorts. For a more up-to-date view, you can read Eric Markowitz’s April 9, 2014 article for Vocative.com. I do have one observation about the article, where Markowitz has linked some recent protests in San Francisco to the bombings in Mexico. Those protests in San Francisco seem more like a ‘poor vs. the rich’ situation where the rich happen to come from the technology sector.

Getting back to “Transcendence” and singularity, there’s a good Wikipedia entry describing the ideas and some of the thinkers behind the notion of a singularity or technological singularity, as it’s sometimes called (Note: Links have been removed),

The technological singularity, or simply the singularity, is a hypothetical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature.[1] Because the capabilities of such an intelligence may be difficult for a human to comprehend, the technological singularity is often seen as an occurrence (akin to a gravitational singularity) beyond which the future course of human history is unpredictable or even unfathomable.

The first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[2] The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity.[3] Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic The Computer and the Brain.

Proponents of the singularity typically postulate an “intelligence explosion”,[4][5] where superintelligences design successive generations of increasingly powerful minds, that might occur very quickly and might not stop until the agent’s cognitive abilities greatly surpass that of any human.

Kurzweil predicts the singularity to occur around 2045[6] whereas Vinge predicts some time before 2030.[7] At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial generalized intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040. His own prediction on reviewing the data is that there is an 80% probability that the singularity will occur between 2017 and 2112.[8]

The ‘technological singularity’ is controversial and contested (from the Wikipedia entry),

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[104] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[105]

By the way, this movie is mentioned briefly in the pop culture portion of the Wikipedia entry.

Getting back to Paglen and his screenplay, here’s more from Getlen’s article,

… as Will’s powers grow, he begins to pull off fantastic achievements, including giving a blind man sight, regenerating his own body and spreading his power to the water and the air.

This conjecture was influenced by nanotechnology, the field of manipulating matter at the scale of a nanometer, or one-billionth of a meter. (By comparison, a human hair is around 70,000-100,000 nanometers wide.)

“In some circles, nanotechnology is the holy grail,” says Paglen, “where we could have microscopic, networked machines [emphasis mine] that would be capable of miracles.”

The potential uses of, and implications for, nanotechnology are vast and widely debated, but many believe the effects could be life-changing.

“When I visited MIT,” says Pfister, “I visited a cancer research institute. They’re talking about the ability of nanotechnology to be injected inside a human body, travel immediately to a cancer cell, and deliver a payload of medicine directly to that cell, eliminating [the need to] poison the whole body with chemo.”

“Nanotechnology could help us live longer, move faster and be stronger. It can possibly cure cancer, and help with all human ailments.”

I find the ‘golly gee whiz-ness’ of Paglen’s and Pfister’s take on nanotechnology disconcerting, but they can’t be dismissed. There are projects where people are testing retinal implants which allow them to see again. There is a lot of work in the field of medicine designed to make therapeutic procedures that are gentler on the body by making their actions specific to diseased tissue while ignoring healthy tissue (sadly, this is still not possible). As for human enhancement, I have so many pieces that it has its own category on this blog. I first wrote about it in a four-part series starting with this one: Nanotechnology enables robots and human enhancement: part 1. (You can read the series by scrolling past the end of the posting and clicking on the next part, or search the category and pick through the more recent pieces.)

I’m not sure if this error is Paglen’s or Getlen’s, but nanotechnology is not “microscopic, networked machines,” as Paglen’s quote strongly suggests. Some nanoscale devices could be described as machines (often called nanobots) but there are also nanoparticles, nanotubes, nanowires, and more that cannot be described as machines or devices, for that matter. More importantly, it seems Paglen’s main concern is this,

“One of [science-fiction author] Arthur C. Clarke’s laws is that any sufficiently advanced technology is indistinguishable from magic. That very quickly would become the case if this happened, because this artificial intelligence would be evolving technologies that we do not understand, and it would be capable of miracles by that definition,” says Paglen. [emphasis mine]

This notion of “evolving technologies that we do not understand” brings to mind a  project that was announced at the University of Cambridge (from my Nov. 26, 2012 posting),

The idea that robots of one kind or another (e.g. nanobots eating up the world and leaving grey goo, Cylons in both versions of Battlestar Galactica trying to exterminate humans, etc.) will take over the world and find humans unnecessary isn’t especially new in works of fiction. It’s not always mentioned directly but the underlying anxiety often has to do with intelligence and concerns over an ‘explosion of intelligence’. The question it raises, ‘what if our machines/creations become more intelligent than humans?’, has been described as existential risk. According to a Nov. 25, 2012 article by Sylvia Hui for Huffington Post, a group of eminent philosophers and scientists at the University of Cambridge are proposing to found a Centre for the Study of Existential Risk,

While I do have some reservations about how Paglen and Pfister describe the science, I appreciate their interest in communicating the scientific ideas, particularly those underlying Paglen’s screenplay.

For anyone who may be concerned about the likelihood of emulating  a human brain and uploading it to a computer, there’s an April 13, 2014 article by Luke Muehlhauser and Stuart Armstrong for Slate discussing that very possibility (Note 1: Links have been removed; Note 2: Armstrong is mentioned in this posting’s excerpt from the Wikipedia entry on Technological Singularity),

Today scientists can’t even emulate the brain of a tiny worm called C. elegans, which has 302 neurons, compared with the human brain’s 86 billion neurons. Using models of expected technological progress on the three key problems, we’d estimate that we wouldn’t be able to emulate human brains until at least 2070 (though this estimate is very uncertain).

But would an emulation of your brain be you, and would it be conscious? Such questions quickly get us into thorny philosophical territory, so we’ll sidestep them for now. For many purposes—estimating the economic impact of brain emulations, for instance—it suffices to know that the brain emulations would have humanlike functionality, regardless of whether the brain emulation would also be conscious.

Paglen/Pfister seem to be equating intelligence (brain power) with consciousness while Muehlhauser/Armstrong simply sidestep the issue. As they (Muehlhauser/Armstrong) note, it’s “thorny.”

If you consider thinkers like David Chalmers, who suggest everything has consciousness, then it follows that computers/robots/etc. may not appreciate having a human brain emulation imposed on them, which takes us back into Battlestar Galactica territory. From my March 19, 2014 posting (one of the postings where I recounted various TED 2014 talks in Vancouver), here’s more about David Chalmers,

Finally, I wasn’t expecting to write about David Chalmers so my notes aren’t very good. Chalmers is a philosopher; here’s an excerpt from his TED biography,

In his work, David Chalmers explores the “hard problem of consciousness” — the idea that science can’t ever explain our subjective experience.

David Chalmers is a philosopher at the Australian National University and New York University. He works in philosophy of mind and in related areas of philosophy and cognitive science. While he’s especially known for his theories on consciousness, he’s also interested (and has extensively published) in all sorts of other issues in the foundations of cognitive science, the philosophy of language, metaphysics and epistemology.

Chalmers provided an interesting bookend to a session that started with a brain researcher (Nancy Kanwisher) who breaks the brain down into various processing regions (a vast oversimplification, but the easiest way to summarize her work in this context). Chalmers reviewed the ‘science of consciousness’ and noted that current work in science tends to be reductionist, i.e., examining parts of things such as brains, and that the same reductionism has been brought to the question of consciousness.

Rather than trying to prove consciousness, Chalmers proposes that we consider it a fundamental, in the same way that we consider time, space, and mass to be fundamental. He noted that there is precedent for such additions and gave the example of James Clerk Maxwell and his proposal to consider electricity and magnetism as fundamental.

Chalmers’ next suggestion is a little more outré and is based on some thinking (sorry, I didn’t catch the theorist’s name) that suggests everything, including photons, has a type of consciousness (but not intelligence).

Have a great time at the movie!

Data transmission at 1.44 terabits per second

It’s not only the amount of data we have that is increasing but also the amount of data we want to transmit from one place to another. An April 14, 2014 news item on ScienceDaily describes a new technique designed to increase data transmission rates,

Miniaturized optical frequency comb sources allow for transmission of data streams of several terabits per second over hundreds of kilometers — this has now been demonstrated by researchers of Karlsruhe Institute of Technology (KIT) and the Swiss École Polytechnique Fédérale de Lausanne (EPFL) in an experiment presented in the journal Nature Photonics. The results may contribute to accelerating data transmission in large computing centers and worldwide communication networks.

In the study presented in Nature Photonics, the scientists of KIT, together with their EPFL colleagues, applied a miniaturized frequency comb as optical source. They reached a data rate of 1.44 terabits per second and the data was transmitted over a distance of 300 km. This corresponds to a data volume of more than 100 million telephone calls or up to 500,000 high-definition (HD) videos. For the first time, the study shows that miniaturized optical frequency comb sources are suited for coherent data transmission in the terabit range.
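As a quick sanity check on those comparisons, here is a back-of-envelope calculation in Python. The per-stream bitrates (roughly 14 kbit/s for a compressed telephone call and 3 Mbit/s for an HD video stream) are my own assumptions; the news item doesn’t say which values were used,

```python
# Back-of-envelope check of the data-volume comparison.
# The per-stream bitrates are assumptions, not figures from the news item.
AGGREGATE_BPS = 1.44e12  # 1.44 Tbit/s demonstrated over 300 km

ASSUMED_RATES_BPS = {
    "telephone calls (compressed voice)": 14.4e3,  # assumption: ~14 kbit/s per call
    "HD video streams": 3.0e6,                     # assumption: ~3 Mbit/s per stream
}

for name, rate_bps in ASSUMED_RATES_BPS.items():
    streams = AGGREGATE_BPS / rate_bps
    print(f"{name}: ~{streams:,.0f} simultaneous streams")
# Prints roughly 100,000,000 calls and 480,000 HD videos, which lines up with
# the "more than 100 million calls or up to 500,000 HD videos" claim.
```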

The April (?) 2014 KIT news release, which originated the news item, describes some of the current transmission technology’s constraints,

The amount of data generated and transmitted worldwide is growing continuously. With the help of light, data can be transmitted rapidly and efficiently. Optical communication is based on glass fibers, through which optical signals can be transmitted over large distances with hardly any losses. So-called wavelength division multiplexing (WDM) techniques allow for the transmission of several data channels independently of each other on a single optical fiber, thereby enabling extremely high data rates. For this purpose, the information is encoded on laser light of different wavelengths, i.e. different colors. However, scalability of such systems is limited, as presently an individual laser is required for each transmission channel. In addition, it is difficult to stabilize the wavelengths of these lasers, which requires additional spectral guard bands between the data channels to prevent crosstalk.
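To get a feel for why needing one laser per channel limits scalability, here is a minimal sketch of the WDM arithmetic. The bandwidth, channel spacing, and per-channel rate below are illustrative assumptions, not figures from the KIT/EPFL experiment,

```python
# Illustrative WDM capacity estimate (assumed numbers, not the KIT/EPFL setup).
USABLE_BAND_HZ = 4.4e12    # assumption: ~4.4 THz of usable fiber bandwidth (C-band)
CHANNEL_SPACING_HZ = 50e9  # assumption: 50 GHz grid, guard bands included
PER_CHANNEL_BPS = 40e9     # assumption: 40 Gbit/s per wavelength channel

channels = int(USABLE_BAND_HZ // CHANNEL_SPACING_HZ)
aggregate_bps = channels * PER_CHANNEL_BPS
print(f"{channels} channels -> {aggregate_bps / 1e12:.2f} Tbit/s aggregate")
# With one discrete laser per channel, 'channels' is also the number of lasers
# that must be bought and wavelength-stabilized, which is the scalability
# problem the news release describes.
```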

The news release goes on to further describe the new technology using ‘combs’,

Optical frequency combs, for the development of which John Hall and Theodor W. Hänsch received the 2005 Nobel Prize in Physics, consist of many densely spaced spectral lines, the distances of which are identical and exactly known. So far, frequency combs have been used mainly for highly precise optical atomic clocks or optical rulers measuring optical frequencies with utmost precision. However, conventional frequency comb sources are bulky and costly devices and hence not very well suited for use in data transmission. Moreover, spacing of the spectral lines in conventional frequency combs often is too small and does not correspond to the channel spacing used in optical communications, which is typically larger than 20 GHz.
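For a sense of what that spacing requirement means in practice, the sketch below estimates how many comb lines (that is, potential WDM carriers) fit in a given optical band and converts a 20 GHz spacing into wavelength terms near 1550 nm. The band value is an illustrative assumption,

```python
# Rough relationship between comb line spacing and a WDM channel grid
# (illustrative band value; not taken from the Nature Photonics paper).
C_M_PER_S = 3.0e8              # speed of light
CENTER_WAVELENGTH_M = 1550e-9  # typical telecom wavelength
BAND_HZ = 4.4e12               # assumption: ~4.4 THz optical band
LINE_SPACING_HZ = 20e9         # the 20 GHz channel spacing mentioned above

lines = int(BAND_HZ // LINE_SPACING_HZ)
# Convert frequency spacing to wavelength spacing: d_lambda = lambda^2 * df / c
d_lambda_nm = CENTER_WAVELENGTH_M**2 * LINE_SPACING_HZ / C_M_PER_S * 1e9
print(f"{lines} comb lines, spaced ~{d_lambda_nm:.2f} nm apart near 1550 nm")
# About 220 potential carriers at ~0.16 nm spacing.
```

The news release continues,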

In their joint experiment, the researchers of KIT and the EPFL have now demonstrated that integrated optical frequency comb sources with large line spacings can be realized on photonic chips and applied for the transmission of large data volumes. For this purpose, they use an optical microresonator made of silicon nitride, into which laser light is coupled via a waveguide and stored for a long time. “Due to the high light intensity in the resonator, the so-called Kerr effect can be exploited to produce a multitude of spectral lines from a single continuous-wave laser beam, hence forming a frequency comb,” explains Jörg Pfeifle, who performed the transmission experiment at KIT. This method to generate these so-called Kerr frequency combs was discovered by Tobias Kippenberg, EPFL, in 2007. Kerr combs are characterized by a large optical bandwidth and can feature line spacings that perfectly meet the requirements of data transmission. The underlying microresonators are produced with the help of complex nanofabrication methods by the EPFL Center of Micronanotechnology. “We are among the few university research groups that are able to produce such samples,” comments Kippenberg. Work at EPFL was funded by the Swiss program “NCCR Nanotera” and the European Space Agency [ESA].

Scientists of KIT’s Institute of Photonics and Quantum Electronics (IPQ) and Institute of Microstructure Technology (IMT) are the first to use such Kerr frequency combs for high-speed data transmission. “The use of Kerr combs might revolutionize communication within data centers, where highly compact transmission systems of high capacity are required most urgently,” Christian Koos says.

Here’s a link to and a citation for the paper,

Coherent terabit communications with microresonator Kerr frequency combs by Joerg Pfeifle, Victor Brasch, Matthias Lauermann, Yimin Yu, Daniel Wegner, Tobias Herr, Klaus Hartinger, Philipp Schindler, Jingshi Li, David Hillerkuss, Rene Schmogrow, Claudius Weimann, Ronald Holzwarth, Wolfgang Freude, Juerg Leuthold, Tobias J. Kippenberg, & Christian Koos. Nature Photonics (2014) doi:10.1038/nphoton.2014.57 Published online 13 April 2014

This paper is behind a paywall.

From the quantum to the cosmos: an event at Vancouver’s (Canada) Science World

ARPICO (Society of Italian Researchers & Professionals in Western Canada) sent out an April 9, 2014 announcement,

FROM THE QUANTUM TO THE COSMOS

May 7 [2014] “Unveiling the Universe” lecture registration now open:

Join Science World and TRIUMF on Wednesday, May 7, at Science World at TELUS World of Science in welcoming Professor Edward “Rocky” Kolb, the Arthur Holly Compton Distinguished Service Professor of Astronomy and Astrophysics at the University of Chicago, for his lecture on how the laws of quantum physics at the tiniest distances relate to structures in the universe at the largest scales. He also will highlight recent spectacular results into the nature of the Big Bang from the orbiting Planck satellite and the South Pole-based BICEP2 telescope.

Doors open at 6:15pm and lecture starts at 7pm. It will be followed by an audience Q&A session.

Tickets are free but registration is required. Details are on the registration page (link below).
See http://www.eventbrite.ca/o/unveiling-the-universe-lecture-series-2882137721?s=23658359 for more information.

You can go to the Science World website for more details and another link for tickets,

Join Science World, TRIUMF and guest speaker Dr Rocky Kolb on Wednesday, May 7 [2014], for another free Unveiling the Universe public lecture about the inner space/outer space connection that may hold the key to understanding the nature of dark matter, dark energy and the mysterious seeds of structure that grew to produce everything we see in the cosmos.

I notice Kolb is associated with Fermilab, which coincidentally is where TRIUMF’s former director, Nigel Lockyer, is currently located. You can find out more about Kolb on his personal webpage, where I found this description from his repertoire of talks,

Mysteries of the Dark Universe
Ninety-five percent of the universe is missing! Astronomical observations suggest that most of the mass of the universe is in a mysterious form called dark matter and most of the energy in the universe is in an even more mysterious form called dark energy. Unlocking the secrets of dark matter and dark energy will illuminate the nature of space and time and connect the quantum with the cosmos.

Perhaps this, along with the next bit, gives you a clearer idea of what Kolb will be discussing. He will also be speaking at TRIUMF, Canada’s national laboratory of particle and nuclear physics. From the TRIUMF events page,

Wed, 2014-05-07, 14:00, Colloquium, Rocky Kolb (Fermilab), Auditorium: The Decade of the WIMP
Abstract: The bulk of the matter in the present universe is dark. The most attractive possibility for the nature of the dark matter is a new species of elementary particle known as a WIMP (a Weakly Interacting Massive Particle). After a discussion of how a WIMP might fit into models of particle physics, I will review the current situation with respect to direct detection, indirect detection, and collider production of WIMPs. Rapid advances in the field should enable us to answer by the end of the decade whether our universe is dominated by WIMPs.

You may want to get your tickets soon as other lectures in the Unveiling the Universe series have gone quickly.