Tag Archives: Diamond Age

What is a diamond worth?

A couple of diamond-related news items have crossed my path lately, causing me to consider diamonds and their social implications. I’ll start with the news items. According to an April 4, 2012 news item on physorg.com, a quantum computer has been built inside a diamond (from the news item),

Diamonds are forever – or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against “decoherence” – noise that prevents the computer from functioning properly.

I last mentioned decoherence in my July 21, 2011 posting about a joint (University of British Columbia, University of California at Santa Barbara and the University of Southern California) project on quantum computing.

According to the April 5, 2012 news item by Robert Perkins for the University of Southern California (USC),

The multinational team included USC professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. The findings were published today in Nature.

The team’s diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, some day will allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities – things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry because it makes the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the “spin” of each of these subatomic particles was used as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but they also fall victim more quickly to decoherence. A qubit based on a nucleus, which is large, is much more stable but slower.

“A nucleus has a long decoherence time – in the milliseconds. You can think of it as very sluggish,” said Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection – using microwave pulses to continually switch the direction of the electron spin rotation.

“It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
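For readers who like to see the idea in something runnable, the ‘time travel’ description matches the standard spin-echo (dynamical decoupling) trick: flip the spin halfway through its evolution and any slowly varying stray phase cancels itself out. Here’s a minimal numpy sketch of my own (not the team’s protocol or code) showing the effect on a toy qubit,

# A toy spin-echo demonstration (my own illustration, not the USC/Delft code).
# A qubit in superposition picks up an unknown phase from environmental noise;
# flipping the spin halfway through (a microwave "pi pulse") makes the phase
# accumulated in the second half cancel the phase from the first half.
import numpy as np

rng = np.random.default_rng(0)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition (|0> + |1>)/sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])     # pi pulse; also the sigma_x observable

def dephase(state, phi):
    """Free evolution: the |1> amplitude picks up an unwanted phase phi."""
    return np.array([state[0], np.exp(1j * phi) * state[1]])

def coherence(states):
    """Ensemble-averaged <sigma_x>: 1 = fully coherent, ~0 = dephased."""
    return float(np.mean([np.real(np.conj(s) @ X @ s) for s in states]))

n = 2000
phis = rng.normal(0.0, 3.0, n)   # a different (but static) stray phase per run

# No protection: every run ends up with a different phase, so the average dies.
free = [dephase(plus, phi) for phi in phis]

# Spin echo: evolve for half the time, flip the spin, evolve the other half.
# The flip reverses the sign of the first half-phase, so the two halves cancel.
echo = [dephase(X @ dephase(plus, phi / 2), phi / 2) for phi in phis]

print("coherence without echo:", round(coherence(free), 3))   # close to 0
print("coherence with echo:   ", round(coherence(echo), 3))   # exactly 1.0

The actual experiment is of course far more sophisticated, since the protection has to keep working while the electron and nuclear qubits interact, but the cancellation trick above is the basic idea.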

Here’s an image I downloaded from the USC webpage hosting Perkins’s news item,

The diamond in the center measures 1 mm X 1 mm. Photo/Courtesy of Delft University of Technology/UC Santa Barbara

I’m not sure what they were trying to illustrate with the image, but I thought it would provide an interesting contrast to the video which follows, about the world’s first ring made purely of diamond,

I first came across this ring in Laura Hibberd’s March 22, 2012 piece for Huffington Post. For anyone who feels compelled to find out more about it, here’s the jeweller’s (Shawish) website.

What with the posting about Neal Stephenson and Diamond Age (aka, The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future and ubiquitous diamonds), a quantum computer in a diamond, and this ring, I’ve started to wonder about the role diamonds will have in society. Will they be integrated into everyday objects or will they remain objects of desire? My guess is that the diamonds we create by manipulating carbon atoms will be considered everyday items while the ones which have been formed in the bowels of the earth will retain their status.

Get your question to Neal Stephenson asked at April 17, 2012 event at MIT

After reading Diamond Age (aka, The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future), I have never been able to steel myself to read another Neal Stephenson book. In the last third of the book, the plot fell to pieces: none of the previously established narrative threads were resolved, and the character development, such as it was, ceased to make sense. However, it seems I am in the minority, as Stephenson and his work are widely and critically lauded.

On April 17, 2012, Stephenson will appear at an event at the Massachusetts Institute of Technology (MIT) featuring a live interview by Technology Review editor-in-chief Jason Pontin. From Stephen Cass’s April 3, 2012 article for Technology Review,

With assistance from the MIT Graduate Program in Science Writing, if you’re in the Boston area, you can see Neal Stephenson in person at MIT on April 17. Technology Review’s editor-in-chief, Jason Pontin, will publicly interview Stephenson for the 2012 issue of TRSF, our annual science fiction anthology. Topics on the table include the state and future of hard science fiction, and how digital publishing is affecting novels.

The event is free and you can get a ticket here. For anyone who can’t get to Boston for the event, you can ask your question here in the comments section.

Human-computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately, so this posting is largely speculative and rambling; I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles – as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
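To make that architecture a little more concrete, here’s a rough hub-and-spoke sketch of the intermediary pattern being described: one central program reads a calm/focus score from the headset and fans commands out to the rigging, light, sound, and video systems. The class names, message format, and control rule below are my own inventions; the real system is a set of C programs driving a Stage Tech NOMAD console, MIDI show control, MAX/MSP, Isadora, and Jitter.

# Hypothetical hub-and-spoke sketch of an "Infinity Simulator"-style
# intermediary. All names, message formats, and control rules are invented
# for illustration; the actual system is three C programs driving a Stage
# Tech NOMAD console, MIDI show control, MAX/MSP, Isadora, and Jitter.
import random
import time

def read_calm_score() -> float:
    """Stand-in for the EEG headset driver: returns a 0..1 'calm' score."""
    return random.random()

class Subsystem:
    """One spoke of the hub: rigging, lights, sound, or video."""
    def __init__(self, name: str):
        self.name = name

    def send(self, message: dict) -> None:
        # A real bridge would speak MIDI show control, OSC, or a serial
        # protocol here; this sketch just logs the outgoing message.
        print(f"[{self.name}] {message}")

class InfinityHub:
    """Central intermediary: maps brain activity to commands for every spoke."""
    def __init__(self, subsystems):
        self.subsystems = subsystems
        self.height_m = 0.0        # rider's current height
        self.max_height_m = 9.0    # roughly thirty feet

    def step(self, calm: float) -> None:
        # Made-up control rule: calmer than average -> rise, otherwise sink.
        self.height_m = min(self.max_height_m,
                            max(0.0, self.height_m + (calm - 0.5) * 0.5))
        state = {"calm": round(calm, 2), "height_m": round(self.height_m, 2)}
        for sub in self.subsystems:
            sub.send(state)

if __name__ == "__main__":
    hub = InfinityHub([Subsystem("rigging"), Subsystem("lights"),
                       Subsystem("sound"), Subsystem("video")])
    for _ in range(5):             # a few ticks of the control loop
        hub.step(read_calm_score())
        time.sleep(0.1)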

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kind of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as for other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kid’s toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two very similar technologies that allow for more traditional human/computer interactions, one of which, the Nokia Morph, I’ve posted about previously (most recently in my Sept. 29, 2010 posting).

The Morph was first introduced as a type of flexible phone with other capabilities. Since then, Nokia seems to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. Users will never have to worry about battery life. It is a device that will help us in our everyday life, to keep ourselves connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. the new structures enabled by novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and in the ecosystem needed to make the gateway and context awareness possible in an energy-efficient way.

Graphene will enable the evolution of current technology, e.g. the continuation of ever-increasing computing power at the point where conventional materials would require sub-nanometre-scale transistors.

For anyone who’s been following news of the Morph for the last few years, this news item doesn’t offer any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and the University of Cambridge have been working on their project, it appears the team at the Human Media Lab in the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)
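Going by the description (bend it to use it as a phone, flip a corner to turn a page), the interaction layer presumably boils down to mapping bend-sensor readings onto commands. Here’s a purely hypothetical sketch of that mapping; the sensor names, threshold, and commands are my own guesses, not the Human Media Lab’s design.

# Hypothetical sketch of a PaperPhone-style bend-gesture mapper: translate
# bend-sensor readings into commands. The sensor positions, threshold, and
# command names are my own guesses, not the Human Media Lab's actual design.

BEND_THRESHOLD = 0.3   # normalized bend magnitude needed to register a gesture

def classify_bend(sensor: str, value: float):
    """Map one (sensor, reading) pair to a command, or None if below threshold.

    `value` is a normalized bend: positive = bent toward the user,
    negative = bent away from the user.
    """
    if abs(value) < BEND_THRESHOLD:
        return None
    if sensor == "top_right_corner":
        return "next_page" if value > 0 else "previous_page"
    if sensor == "full_body":
        return "answer_call" if value > 0 else "end_call"
    return None

# A short simulated stream of sensor events:
events = [
    ("top_right_corner", 0.6),    # firm forward flick -> next page
    ("top_right_corner", -0.4),   # backward flick -> previous page
    ("top_right_corner", 0.1),    # too gentle a bend -> ignored
    ("full_body", 0.8),           # bend the whole phone -> answer a call
]

for sensor, value in events:
    command = classify_bend(sensor, value)
    if command:
        print(f"{sensor} bent {value:+.1f} -> {command}")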

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both represent a more intimate relationship with the computer than we’ve had till now, the latter far more so than the former. If any of you have any thoughts on this, please do leave a comment, as I would be delighted to engage in some discussion about this.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin-off company from the Massachusetts Institute of Technology (MIT).

E-paper technology takes another step forward, spooky magnetic attractions, and the business of nano

If you’ve read Neal Stephenson’s ‘Diamond Age’ then you’ll probably remember a passage near the beginning where a main character unrolls his flexible screen before glancing at the daily news. We’re not there yet with e-paper because there’s a problem with brightness (reflectance of ambient light). Most e-paper screens give 40% reflectance, and that hasn’t been enough, but this week Gamma Dynamics took a step closer to achieving the e-newspaper dream with their electrofluidic display. They are working on an international joint project with the University of Cincinnati, Sun Chemical, and Polymer Visual. They’ve unveiled a prototype and published a paper in the May issue of Nature Photonics (this is behind a paywall). For a consumer-friendly article describing the work, go to Fast Company here. For more technically-minded descriptions go here for a longer version and here for a shorter version.

Thanks to Fast Company, I found this video called Magnetic Attractions. Artists at NASA created a short video illustrating various magnetic forces. What makes it spooky? They even show the forces extending through walls. You can see it here.

You might want to take a boo at Howard Lovy’s March 2009 posting about nano business and nano possibilities on Small Tech Talk. It was written in response to this (from Lovy’s post),

It has gotten to the point now where Scott E. Rickert, chief executive of Nanofilm Ltd., has gone as far as to declare that “the era of endless exploration is over — at least as long as the economy stumbles.” Writing in IndustryWeek, Rickert expresses his impatience now with nanotech information that is not directly related to business.

“Nanobusiness is business. Period. First, last, always,” Rickert declares.

I like Lovy’s response to this. As for me, I think some of the business people hold positions as extreme as those of scientists who believe they should be allowed to research whatever they want and that business is a dirty word.