Tag Archives: haptics

Smartphone as augmented reality system with software from Brown University

You need to see this,

Amazing, eh? The researchers are scheduled to present this work sometime this week at the ACM Symposium on User Interface Software and Technology (UIST) being held in New Orleans, US, from October 20-23, 2019.

Here’s more about ‘Portal-ble’ in an October 16, 2019 news item on ScienceDaily,

A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.

The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.

“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”

An October 16, 2019 Brown University news release (also on EurekAlert), which originated the news item, provides more detail,

Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.

“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”

The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.

Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.

“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”

To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.

“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
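To make the idea of an “accommodation” concrete, here is a minimal sketch of my own (not the Portal-ble source code) of a forgiving grab: instead of demanding physically exact contact, it accepts a pinch that lands within a small tolerance of the object and snaps the object to the hand. The function names and the tolerance values are my assumptions.

```python
import math

# A minimal sketch (not the Portal-ble code) of the kind of "accommodation"
# described above: rather than requiring physically exact contact, a grab is
# accepted when the pinch midpoint comes within a tolerance of the object,
# and the object then snaps to the hand. All names and numbers are invented.

GRAB_TOLERANCE = 0.04  # metres; hypothetical forgiveness radius


def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def try_grab(thumb_tip, index_tip, obj_center):
    """Return the corrected grab point if the pinch is 'close enough'."""
    pinch_point = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    pinch_width = distance(thumb_tip, index_tip)
    is_pinching = pinch_width < 0.03          # fingers nearly closed
    near_object = distance(pinch_point, obj_center) < GRAB_TOLERANCE
    if is_pinching and near_object:
        return obj_center                      # snap: object follows the hand
    return None


# Example: fingers slightly off-target still count as a grab.
print(try_grab((0.00, 0.00, 0.30), (0.02, 0.00, 0.30), (0.01, 0.02, 0.31)))
```

In a real AR engine the same check would run every frame against the hand-tracking data coming from the phone’s sensor.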

The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand they’re using to hold the phone, not in the hand that’s actually grabbing for the virtual object. Still, Huang said, the vibration feedback helped users to interact with objects more successfully.

In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.

Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an infrared sensor and an external compute stick for extra processing power.

Huang hopes people will download the freely available source code and try it for themselves. 
“We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”

Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.

You can find the conference paper here on jeffhuang.com,

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality by Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin, John F. Hughes, and Jeff Huang. Brown University, Providence, RI, USA; Southeast University, Nanjing, China. Presented at the ACM Symposium on User Interface Software and Technology (UIST 2019), New Orleans, US.

This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.

Atomic force microscope (AFM) shrunk down to a dime-sized device?

Before getting to the announcement, here’s a little background from Dexter Johnson’s Feb. 21, 2017 posting on his NanoClast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom, and he later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as $50,000 [or more].

Now, here’s the announcement in a Feb. 15, 2017 news item on Nanowerk,

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size — and, hopefully, the price tag — of a high-tech device commonly used to characterize material properties.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

A Feb. 15, 2017 University of Texas at Dallas news release, which originated the news item, provides more detail,

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — that’s roughly on the scale of individual molecules.

The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. “It can capture features that are very, very small.”

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Dr. Anthony Fowler, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample’s features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two.

“The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft,” Fowler said. “Or, if you are scanning a very hard surface, you could wear down the tip.”

The MEMS-based AFM operates in “tapping mode,” which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image.

“In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with the sample,” said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. “This device creates an image by maintaining the amplitude of oscillation.”
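For readers who like to see the control idea spelled out, here is a toy simulation of my own (not the UT Dallas design): a simple proportional-integral loop holds the cantilever’s oscillation amplitude at a setpoint while scanning a made-up surface, and the record of z-adjustments becomes the topography line. The gains, the amplitude model and the sample are all invented.

```python
# A toy sketch of the tapping-mode feedback idea described above: the z-stage
# is adjusted so the cantilever's oscillation amplitude stays at a setpoint,
# and the record of z-adjustments becomes the topography line. The "sample"
# and the amplitude model here are invented for illustration only.

import math

SETPOINT = 0.8        # target amplitude (fraction of free-air amplitude)
KP, KI = 0.5, 0.2     # hypothetical PI gains


def sample_height(x):
    """Fake surface: a gentle sine plus a step."""
    return 5e-9 * math.sin(x * 2 * math.pi / 50) + (2e-9 if x > 60 else 0.0)


def measured_amplitude(tip_z, surf_z, free_amp=10e-9):
    """Amplitude falls roughly linearly as the tip approaches the surface."""
    gap = tip_z - surf_z
    return max(0.0, min(1.0, gap / free_amp))


def scan_line(n_pixels=100):
    z, integral, topo = 12e-9, 0.0, []
    for x in range(n_pixels):
        amp = measured_amplitude(z, sample_height(x))
        error = SETPOINT - amp
        integral += error
        z += (KP * error + KI * integral) * 1e-9   # feedback moves the z-stage
        topo.append(z)                             # image = z needed per pixel
    return topo


line = scan_line()
print(f"z range over line: {min(line):.2e} to {max(line):.2e} m")
```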

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive.

“An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

“This is one of those technologies where, as they say, ‘If you build it, they will come.’ We anticipate finding many applications as the technology matures,” Moheimani said.

In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert’s doctoral advisor.

So, an AFM that could cost as much as $500,000 for a laboratory has been shrunk to this size and become far less expensive,

A MEMS-based atomic force microscope developed by engineers at UT Dallas is about 1 square centimeter in size (top center). Here it is attached to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Courtesy: University of Texas at Dallas

Of course, there’s still more work to be done as you’ll note when reading Dexter’s Feb. 21, 2017 posting where he features answers to questions he directed to the researchers.

Here’s a link to and a citation for the paper,

On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach by Michael G. Ruppert, Anthony G. Fowler, Mohammad Maroufi, and S. O. Reza Moheimani. IEEE Journal of Microelectromechanical Systems, Volume 26, Issue 1, Feb. 2017. DOI: 10.1109/JMEMS.2016.2628890. Date of publication: 6 December 2016.

This paper is behind a paywall.

Skin as a touchscreen (“smart” hands)

An April 11, 2016 news item on phys.org highlights some research presented at the IEEE (Institute of Electrical and Electronics Engineers) Haptics (touch) Symposium 2016,

Using your skin as a touchscreen has been brought a step closer after UK scientists successfully created tactile sensations on the palm using ultrasound sent through the hand.

The University of Sussex-led study – funded by the Nokia Research Centre and the European Research Council – is the first to find a way for users to feel what they are doing when interacting with displays projected on their hand.

This solves one of the biggest challenges for technology companies who see the human body, particularly the hand, as the ideal display extension for the next generation of smartwatches and other smart devices.

Current ideas rely on vibrations or pins, which both need contact with the palm to work, interrupting the display.

However, this new innovation, called SkinHaptics, sends sensations to the palm from the other side of the hand, leaving the palm free to display the screen.

An April 11, 2016 University of Sussex press release (also on EurekAlert) by James Hakmer, which originated the news item, provides more detail,

The device uses ‘time-reversal’ processing to send ultrasound waves through the hand. This technique is effectively like ripples in water but in reverse – the waves become more targeted as they travel through the hand, ending at a precise point on the palm.
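Here is a small numerical sketch of how time-reversal focusing works in principle; it is my own illustration with invented path delays and emitter count, not the SkinHaptics implementation. Each ultrasound emitter’s path through the hand is treated as a pure delay, and playing the recorded responses back time-reversed makes every contribution arrive at the target spot at the same moment.

```python
# A rough numerical sketch of the time-reversal idea described above (not the
# SkinHaptics implementation): if you know the impulse response from each
# ultrasound emitter to the target spot on the palm, playing those responses
# back time-reversed makes the contributions line up at that spot. The
# delays and emitter layout below are invented.

import numpy as np

N = 512                      # signal length in samples

# Pretend each emitter's path through the hand is just a delay (in samples).
path_delays = [37, 52, 44, 61]

def impulse_response(delay, n=N):
    h = np.zeros(n)
    h[delay] = 1.0           # idealised: a pure delay, no dispersion
    return h

# Step 1: "record" what a pulse from the focal point looks like at each emitter.
recorded = [impulse_response(d) for d in path_delays]

# Step 2: time-reverse each recording and re-emit it.
emissions = [r[::-1] for r in recorded]

# Step 3: what arrives back at the focal point is each emission delayed again
# by the same path; all four align at sample N-1 and add constructively.
arrivals = [np.roll(e, d) for e, d in zip(emissions, path_delays)]
focus = np.sum(arrivals, axis=0)

print("peak sample:", int(np.argmax(np.abs(focus))), "peak amplitude:", focus.max())
```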

It draws on a rapidly growing field of technology called haptics, which is the science of applying touch sensation and control to interaction with computers and technology.

Professor Sriram Subramanian, who leads the research team at the University of Sussex, says that technologies will inevitably need to engage other senses, such as touch, as we enter what designers are calling an ‘eye-free’ age of technology.

He says: “Wearables are already big business and will only get bigger. But as we wear technology more, it gets smaller and we look at it less, and therefore multisensory capabilities become much more important.

“If you imagine you are on your bike and want to change the volume control on your smartwatch, the interaction space on the watch is very small. So companies are looking at how to extend this space to the hand of the user.

“What we offer people is the ability to feel their actions when they are interacting with the hand.”

The findings were presented at the IEEE Haptics Symposium [April 8 – 11] 2016 in Philadelphia, USA, by the study’s co-author Dr Daniel Spelmezan, a research assistant in the Interact Lab.

There is a video of the work (I was not able to activate sound, if there is any, accompanying this video),

The consequence of watching this silent video was that I found the whole thing somewhat mysterious.

UK’s National Physical Laboratory reaches out to ‘BioTouch’ MIT and UCL

This March 27, 2014 news item on Azonano is an announcement for a new project featuring haptics and self-assembly,

NPL (UK’s National Physical Laboratory) has started a new strategic research partnership with UCL (University College London) and MIT (Massachusetts Institute of Technology) focused on haptic-enabled sensing and micromanipulation of biological self-assembly – BioTouch.

The NPL March 27, 2014 news release, which originated the news item, is accompanied by a rather interesting image,

A computer operated dexterous robotic hand holding a microscope slide with a fluorescent human cell (not to scale) embedded into a synthetic extracellular matrix. Courtesy: NPL

The news release goes on to describe the BioTouch project in more detail (Note: A link has been removed),

The project will probe sensing and application of force and related vectors specific to biological self-assembly as a means of synthetic biology and nanoscale construction. The overarching objective is to enable the re-programming of self-assembled patterns and objects by directed micro-to-nano manipulation with compliant robotic haptic control.

This joint venture, funded by the European Research Council, EPSRC and NPL’s Strategic Research Programme, is a rare blend of interdisciplinary research bringing together expertise in robotics, haptics and machine vision with synthetic and cell biology, protein design, and super- and high-resolution microscopy. The research builds on the NPL’s pioneering developments in bioengineering and imaging and world-leading haptics technologies from UCL and MIT.

Haptics is an emerging enabling tool for sensing and manipulation through touch, which holds particular promise for the development of autonomous robots that need to perform human-like functions in unstructured environments. However, the path to all such applications is hampered by the lack of a compliant interface between a predictably assembled biological system and a human user. This research will enable human directed micro-manipulation of experimental biological systems using cutting-edge robotic systems and haptic feedback.

Recently the UK government has announced ‘eight great technologies’ in which Britain is to become a world leader. Robotics, synthetic biology, regenerative medicine and advanced materials are four of these technologies for which this project serves as a merging point providing thus an excellent example of how multidisciplinary collaborative research can shape our future.

If I read this rightly, it means they’re trying to design systems where robots work directly with materials in the lab while humans direct the robots’ actions from a remote location. My best example of this (it’s not a laboratory example) would be a surgery where a robot actually performs the work while a human directs the robot’s actions based on haptic (touch) information the human receives from the robot. Surgeons don’t necessarily see what they’re dealing with; they may be feeling it with their fingers (haptic information). In effect, the robot’s hands become an extension of the surgeon’s hands. I imagine using a robot’s ‘hands’ would allow for less invasive procedures to be performed.

Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT)

Researchers at MIT’s (Massachusetts Institute of Technology) Tangible Media Group are quite literally reaching beyond the screen with inFORM, their Dynamic Shape Display,

John Brownlee’s Nov. 12, 2013 article for Fast Company describes the project this way (Note: A link has been removed),

Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM isn’t that hard to understand. It’s basically a fancy Pinscreen, one of those executive desk toys that allows you to create a rough 3-D model of an object by pressing it into a bed of flattened pins. With inFORM, each of those “pins” is connected to a motor controlled by a nearby laptop, which can not only move the pins to render digital content physically, but can also register real-life objects interacting with its surface thanks to the sensors of a hacked Microsoft Kinect.
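To give a sense of the basic rendering step such a pin display needs, here is a rough sketch of my own (not the Tangible Media Group’s code): downsample a depth frame of the kind a Kinect produces to the pin grid, then convert each cell to a pin height. The grid size, actuator travel and depth data are made up for illustration.

```python
# A minimal sketch of the core rendering step a pin display like inFORM needs:
# take a depth image (e.g. from a Kinect), downsample it to the pin grid, and
# convert each cell to a pin height in the actuator's travel range. Grid size,
# travel range and the fake depth data are all hypothetical.

import numpy as np

PIN_ROWS, PIN_COLS = 30, 30        # hypothetical pin grid
PIN_TRAVEL_MM = 100.0              # hypothetical actuator travel


def depth_to_pin_heights(depth_mm, near=500.0, far=1200.0):
    """Downsample a depth image to the pin grid and map it to pin heights."""
    h, w = depth_mm.shape
    bh, bw = h // PIN_ROWS, w // PIN_COLS
    # Average each block of pixels into one pin cell.
    blocks = depth_mm[:bh * PIN_ROWS, :bw * PIN_COLS]
    blocks = blocks.reshape(PIN_ROWS, bh, PIN_COLS, bw).mean(axis=(1, 3))
    # Nearer objects push pins up higher.
    normalized = np.clip((far - blocks) / (far - near), 0.0, 1.0)
    return normalized * PIN_TRAVEL_MM


# Fake 240x240 depth frame: a "hand" 700 mm away on a 1100 mm background.
frame = np.full((240, 240), 1100.0)
frame[80:160, 80:160] = 700.0
heights = depth_to_pin_heights(frame)
print(heights.shape, heights.min(), heights.max())
```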

To put it in the simplest terms, the inFORM is a self-aware computer monitor that doesn’t just display light, but shape as well. Remotely, two people Skyping could physically interact by playing catch, for example, or manipulating an object together, or even slapping high five from across the planet.

I found this bit in Brownlee’s article particularly interesting,

As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts. The tactile is gone and the Tangible Media Group sees that as a huge problem.

I echo what the researchers suggest about the loss of the tactile. Many years ago, when I worked in libraries, we digitized the card catalogues and it was, for me, the beginning of the end for my career in the world of libraries. To this day, I still miss the cards. (I suspect there’s a subtle relationship between tactile cues and memory.)

Research in libraries was a more physical pursuit then. Now, almost everything can be done with a computer screen; you need never leave your chair to research and retrieve your documents. Of course, there are some advantages to this world of screens; I can access documents in a way that would have been unthinkable in a world dominated by library card catalogues. Still, I am pleased to see work being done to reintegrate the tactile into our digitized world as I agree with the researchers who view this loss as a problem. It’s not just exercise that we’re missing with our current regime.

The researchers have produced a paper for a SIGCHI (the Association for Computing Machinery’s Special Interest Group on Computer–Human Interaction) conference, but it appears to be unpublished and it is undated,

inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation by Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii.

The researchers have made this paper freely available.

Sometimes when we touch: Touché, a sensing project from Disney Research and Carnegie Mellon

Researchers at Carnegie Mellon University and Disney Research, Pittsburgh (Pennsylvania, US) have taken capacitive sensing, used for touchscreens such as those on smartphones, and added new capabilities. From the May 4, 2012 news item on Nanowerk,

A doorknob that knows whether to lock or unlock based on how it is grasped, a smartphone that silences itself if the user holds a finger to her lips and a chair that adjusts room lighting based on recognizing if a user is reclining or leaning forward are among the many possible applications of Touché, a new sensing technique developed by a team at Disney Research, Pittsburgh, and Carnegie Mellon University.

Touché is a form of capacitive touch sensing, the same principle underlying the types of touchscreens used in most smartphones. But instead of sensing electrical signals at a single frequency, like the typical touchscreen, Touché monitors capacitive signals across a broad range of frequencies.

This Swept Frequency Capacitive Sensing (SFCS) makes it possible to not only detect a “touch event,” but to recognize complex configurations of the hand or body that is doing the touching. An object thus could sense how it is being touched, or might sense the body configuration of the person doing the touching.

Disney Research, Pittsburgh made this video describing the technology and speculating on some of the possible applications (this is a research-oriented video, not your standard Disney fare),

Here’s a bit more about the technology (from the May 4, 2012 news item),

Both Touché and smartphone touchscreens are based on the phenomenon known as capacitive coupling. In a capacitive touchscreen, the surface is coated with a transparent conductor that carries an electrical signal. That signal is altered when a person’s finger touches it, providing an alternative path for the electrical charge.

By monitoring the change in the signal, the device can determine if a touch occurs. By monitoring a range of signal frequencies, however, Touché can derive much more information. Different body tissues have different capacitive properties, so monitoring a range of frequencies can detect a number of different paths that the electrical charge takes through the body.

Making sense of all of that SFCS information, however, requires analyzing hundreds of data points. As microprocessors have become steadily faster and less expensive, it now is feasible to use SFCS in touch interfaces, the researchers said.
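Here is a toy sketch of the swept-frequency idea, written by me for illustration rather than taken from Disney’s system: each grip yields a response profile across a sweep of frequencies, and a simple nearest-template classifier can tell the profiles apart. The frequency range, the profile shapes and the grip names are all invented.

```python
# A toy sketch of the swept-frequency idea described above (not Disney's code):
# instead of one reading at one frequency, you get a whole response profile
# across a frequency sweep, and different grips produce differently shaped
# profiles that a simple classifier can tell apart. The profiles below are
# made up; a real system would measure them.

import numpy as np

FREQS = np.linspace(1e3, 3.5e6, 200)   # swept excitation frequencies (Hz)

def fake_profile(center, width, depth, noise=0.01):
    """Invented capacitive response: a dip whose shape depends on the grip."""
    dip = depth * np.exp(-((FREQS - center) / width) ** 2)
    return 1.0 - dip + np.random.normal(0, noise, FREQS.size)

# "Training" profiles for three hypothetical grips.
templates = {
    "one_finger": fake_profile(8e5, 3e5, 0.3, noise=0),
    "two_fingers": fake_profile(1.2e6, 4e5, 0.5, noise=0),
    "full_grasp": fake_profile(2.0e6, 6e5, 0.8, noise=0),
}

def classify(profile):
    """Nearest-template match over the whole swept-frequency profile."""
    return min(templates, key=lambda k: np.linalg.norm(profile - templates[k]))

# A noisy new measurement of a full grasp is still recognised.
print(classify(fake_profile(2.0e6, 6e5, 0.8)))
```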

“Devices keep getting smaller and increasingly are embedded throughout the environment, which has made it necessary for us to find ways to control or interact with them, and that is where Touché could really shine,” Harrison [Chris Harrison, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute] said. Sato [Munehiko Sato, a Disney intern and a Ph.D. student in engineering at the University of Tokyo] said Touché could make computer interfaces as invisible to users as the embedded computers themselves. “This might enable us to one day do away with keyboards, mice and perhaps even conventional touchscreens for many applications,” he said.

We’re seeing more of these automatic responses to a gesture or movement. For example, common spelling errors are corrected as you key in (type) text in word-processing packages and in search engines. In fact, there are times when an application insists on its own correction and I have to insist in turn (I don’t always manage to override the system) when I have something nonstandard. As I watch these videos and read about these new technical possibilities, I keep asking myself, where is the override?

Making sounds with gestures

It’s kind of haptic; it’s kind of gestural; and it’s all about the sound: Mogees. Here’s the video,

Mogees – Gesture recognition with contact-microphones from bruno zamborlin on Vimeo.

In case what you’ve just seen interests you, here are some more details from the Jan. 5, 2012 article by Nancy Owano for physorg.com,

The Mogees is a project that stems from the department of computing at Goldsmiths, University of London, where researcher Bruno Zamborlin collaborates with a team at IRCAM [Institut de Recherche et Coordination Acoustique/Musique] in Paris to experiment with new methods for “gestural interaction” in coming up with novel ways of making sounds. … The video shows the use of a contact microphone and audio processing software to construct a gesture-recognizing touch interface from assorted surfaces—a tree trunk, a glass panel at a bus stop, and an inflated balloon. Also, different gestures control different sounds.

As to how that microphone and audio processing software work, here’s an explanation from Sebastian Anthony’s Jan. 4, 2012 article for ExtremeTech,

First of all, that little silver nugget — which seems to utilize some kind of suction cup — contains multiple microphones to create a stereo image of the sounds it hears. Second, that black cable connects to a PC of some kind; probably a laptop, considering the guy plays music on a tree and a bus shelter. On the PC, the vibrations of your fingers tapping on the surface are analyzed and converted into gestures, and then MaxMSP — a visual programming language for creating music and other multimedia experiences — turns the gestures into sounds.
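Here is a rough sketch of that kind of pipeline, with features, gesture names and synthetic ‘recordings’ that I have invented for illustration (Mogees itself analyzes the audio far more cleverly): reduce a short buffer from the contact microphone to a couple of spectral features, match it to a stored gesture, and trigger a sound for that gesture.

```python
# A rough sketch of the pipeline described above: take a short buffer from
# the contact microphone, reduce it to a few spectral features, match it to a
# stored gesture, and trigger a sound for that gesture. The features, gesture
# names and synthetic "recordings" here are all invented for illustration.

import numpy as np

SR = 44_100  # sample rate


def features(buf):
    """Crude descriptor: spectral centroid and overall energy."""
    spectrum = np.abs(np.fft.rfft(buf))
    freqs = np.fft.rfftfreq(buf.size, 1 / SR)
    centroid = float((spectrum * freqs).sum() / (spectrum.sum() + 1e-12))
    energy = float((buf ** 2).mean())
    return np.array([centroid / 1000.0, energy * 100.0])


def synth_hit(freq, dur=0.05):
    """Stand-in for a real recording: a decaying tone."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t) * np.exp(-t * 60)


gesture_templates = {
    "tap": features(synth_hit(300.0)),      # dull, low-frequency thump
    "scratch": features(synth_hit(2500.0)), # bright, high-frequency scrape
}

gesture_sounds = {"tap": "kick drum sample", "scratch": "hi-hat sample"}


def on_buffer(buf):
    f = features(buf)
    gesture = min(gesture_templates,
                  key=lambda g: np.linalg.norm(f - gesture_templates[g]))
    print(f"detected {gesture}: play {gesture_sounds[gesture]}")


on_buffer(synth_hit(2400.0))   # should be recognised as a scratch
```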

You can get more information about Bruno Zamborlin at his website and you can find more about Mogees here at Goldsmiths. I highly recommend reading the two articles mentioned.

Surround Haptics and Cory Doctorow at Vancouver’s SIGGRAPH 2011

Given that so much nanotechnology research relies on scanning probe microscopes, instruments that ‘feel’ a surface rather than image it optically, I find the latest Disney technology, Surround Haptics, being demonstrated at the 2011 SIGGRAPH conference, quite intriguing. From the August 8, 2011 news item on physorg.com,

A new tactile technology developed at Disney Research, Pittsburgh (DRP), called Surround Haptics, makes it possible for video game players and film viewers to feel a wide variety of sensations, from the smoothness of a finger being drawn against skin to the jolt of a collision.

The technology is based on rigorous psychophysical experiments and new models of tactile perception. Disney will demonstrate Surround Haptics Aug. 7-11 at the Emerging Technology Exhibition at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques in Vancouver.

There have been previous attempts to integrate tactile technologies into entertainment but this latest version from Disney offers a more refined experience. From the news item,

The DRP researchers have accomplished this feat by designing an algorithm for controlling an array of vibrating actuators in such a way as to create “virtual actuators” anywhere within the grid of actuators. A virtual actuator, Poupyrev [Ivan Poupyrev of Disney Research, Pittsburgh] said, can be created between any two physical actuators; the user has the illusion of feeling only the virtual actuator.

As a result, users don’t feel the general buzzing or pulsing typical of most haptic devices today, but can feel discrete, continuous motions such as a finger tracing a pattern on skin.
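For the curious, here is a small sketch of one common amplitude-panning model for this phantom-actuator effect; it is my own illustration and not necessarily Disney’s exact algorithm. Driving two neighbouring actuators with complementary intensities makes the sensation appear to sit somewhere between them, and sweeping that balance traces a continuous stroke.

```python
# A small sketch of the "virtual actuator" idea: driving two neighbouring
# vibration motors with complementary intensities makes the sensation appear
# to sit somewhere between them. The energy-style panning law below is one
# common model for this phantom effect, not necessarily Disney's exact one.

import math


def phantom_intensities(position, target_amplitude=1.0):
    """position: 0.0 = at actuator A, 1.0 = at actuator B (between the two)."""
    a = math.sqrt(1.0 - position) * target_amplitude
    b = math.sqrt(position) * target_amplitude
    return a, b


def trace_line(steps=5):
    """Sweep a phantom actuator from A to B, e.g. a finger drawn across skin."""
    for i in range(steps + 1):
        p = i / steps
        a, b = phantom_intensities(p)
        print(f"phantom at {p:.2f}: drive A at {a:.2f}, B at {b:.2f}")


trace_line()
```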

The 2011 SIGGRAPH conference started Aug. 7 and extends to August 11. The keynote speaker, scheduled for August 8, was Cory Doctorow (from the Aug. 3, 2011 article by Blaine Kyllo for The Georgia Straight),

Cory Doctorow is an author, activist, journalist, and blogger. As a vocal advocate of copyright reform, he’s got clear ideas about how copyright could work to the benefit of creators and publishers.

Doctorow, a Canadian living in London, will deliver the keynote address at the SIGGRAPH 2011 conference on Monday [August 8] at 11 a.m. at the Vancouver Convention Centre.

He spoke with the Georgia Straight about copyright reform and his Twitter argument with Canadian Heritage Minister James Moore, the Conservative MP for Port Moody-Westwood-Port Coquitlam.

There wasn’t much video of Doctorow’s keynote; this 37 second excerpt is all I could find,

Reimagining prosthetic arms; touchable holograms and brief thoughts on multimodal science communication; and nanoscience conference in Seattle

Reimagining the prosthetic arm, an article by Cliff Kuang in Fast Company (here), highlights a student design project at New York’s School of Visual Arts. Students were asked to improve prosthetic arms and were given four categories: decorative, playful, utilitarian, and awareness. This one by Tonya Douraghey and Carli Pierce caught my fancy; after all, who hasn’t thought of growing wings? (From the Fast Company website),

Feathered cuff and wing arm

I suggest reading Kuang’s article before heading off to the project website to see more student projects.

At the end of yesterday’s posting about MICA and multidimensional data visualization in spaces with up to 12 dimensions (here) in virtual worlds such as Second Life, I made a comment about multimodal discourse, which is something I think will become increasingly important. I’m not sure I can imagine 12 dimensions, but I don’t expect that our usual means of visualizing or understanding data are going to be sufficient for the task. Consequently, I’ve been noticing more projects that engage some of our other senses, notably touch. For example, the SIGGRAPH 2009 conference in New Orleans featured a hologram that you can touch, described in another article by Cliff Kuang in Fast Company, Holograms that you can touch and feel. For anyone unfamiliar with SIGGRAPH, the show has introduced a number of important innovations, notably clickable icons. It’s hard to believe, but there was a time when everything was done by keyboard.

My August newsletter from NISE Net (Nanoscale Informal Science Education Network) brings news of a conference in Seattle, WA at the Pacific Science Center, Sept. 8 – 11, 2009. It will feature (from the NISE Net blog),

Members of the NISE Net Program group and faculty and students at the Center for Nanotechnology in Society at Arizona State University are teaming up to demonstrate and discuss potential collaborations between the social science community and the informal science education community at a conference of the Society for the Study of Nanoscience and Emerging Technologies in Seattle in early September.

There’s more at the NISE Net blog here, including a link to the conference site. (I gather the Society for the Study of Nanoscience and Emerging Technologies is in the very early stages of organizing, so this is a fairly informal call for registrants.)

The NISE Net nano haiku this month is,

Nanoparticles
Surface plasmon resonance
Silver looks yellow

by Dr. Katie D. Cadwell of the University of Wisconsin-Madison MRSEC.

Have a nice weekend!