Tag Archives: Mark Wilson

I am a book. I am a portal to the universe.

“I am a book. I am a portal to the universe.”, an interactive data visualization for children who want to learn about the universe in the form of a book, was first published by Penguin Books in 2020. As of April 2021, it has crossed the Atlantic Ocean, occasioning an April 16, 2021 article by Mark Wilson for Fast Company (Note: Links have been removed),

… A collaboration between data-centric designer Stefanie Posavec and data journalist Miriam Quick, …

“The pared-back aesthetic is due to the book’s core concept. The whole book, even the endnotes and acknowledgements, is written in the first person, in the book’s own voice. [emphasis mine] It developed its own rather theatrical character as we worked on it,” says Posavec. “The book speaks directly to the reader using whatever materials it has at its disposal to communicate the wonders of our universe. In the purest sense, that means the book’s paper and binding, its typeface and its CMYK [cyan, magenta, yellow, black] ink, or, as the book would call them, its ‘superpowers.’” [emphases mine]

It’s hard to explain without actually experiencing it. Which is exactly why it’s so much fun. For instance, at one moment, the book asks you to put it on your head [emphasis mine] and take it off. That difference in weight you feel? That’s how much lighter you are on the top of a mountain than at sea level, the book explains, because of the difference in gravity at different altitudes. …
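
For anyone who wants to check the physics behind that comparison, here's a rough back-of-the-envelope sketch in Python. The formula (gravity falling off with the square of the distance from Earth's centre) is standard; the 70 kg body mass and the 8,848 m summit are my assumptions, not figures from the book or the article,

```python
# Rough estimate of how much lighter a person feels at altitude than at sea level.
# The 70 kg body mass and 8,848 m summit below are assumptions, not figures from the book.
G_SEA_LEVEL = 9.81          # m/s^2, standard gravity at sea level
EARTH_RADIUS = 6_371_000    # m, mean radius of the Earth

def apparent_mass_loss(mass_kg: float, altitude_m: float) -> float:
    """Return the apparent mass reduction (kg) from weaker gravity at a given altitude."""
    g_at_altitude = G_SEA_LEVEL * (EARTH_RADIUS / (EARTH_RADIUS + altitude_m)) ** 2
    return mass_kg * (G_SEA_LEVEL - g_at_altitude) / G_SEA_LEVEL

print(f"{apparent_mass_loss(70, 8_848) * 1000:.0f} g lighter")  # roughly 200 g for these inputs
```

For those assumed numbers the effect works out to roughly 200 g; the exact figure depends on the person and the mountain chosen.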

I recommend reading Wilson’s April 16, 2021 article in its entirety if you have the time as it is peppered with images, GIFs, and illustrative stories.

The “I am a book. I am a portal to the universe.” website offers more details,

“Typography and design combine thrillingly to form something that is eye-opening in every sense”

— Financial Times

Hello. I am a book.
But I’m also a portal to the universe.

I have 112 pages, measuring 20cm high and wide. I weigh 450g. And I have the power to show you the wonders of the world.

I’m different to any other book around today. I am not a book of infographics. I’m an informative, interactive experience, in which the data can be touched, felt and understood, with every measurement represented on a 1:1 scale. How long is an anteater’s tongue? How tiny is the DNA in your cells? How fast is gold mined? How loud is the sun? And how many stars have been born and exploded in the time you’ve taken to read this sentence?

… 

There is a September 2020 Conversations with Data podcast: Episode 13 (hosted by Tara Kelly on Spotify) featuring Stefanie Posavec (data-centric designer) and Miriam Quick (data journalist) discussing their book.

You can find Miriam Quick’s website here and Stefanie Posavec’s website here.

MIT Media Lab releases new educational site for kids K-12: it’s all about artificial intelligence (AI)

Mark Wilson announces a timely new online programme from the Massachusetts Institute of Technology (MIT) in his April 9, 2020 article for Fast Company (Note: Links have been removed).

Not every child will grow up to attend MIT, but that doesn’t mean they can’t get a jump start on its curriculum. In response to the COVID-19 pandemic, which has forced millions of students to learn from home, MIT Media Lab associate professor Cynthia Breazeal has released [April 7, 2020] a website for K-12 students to learn about one of the most important topics in STEM [science, technology, engineering, and mathematics]: artificial intelligence.

The site provides 60 activities, lesson plans, and links to interactive AI experiments that MIT and companies like Google have developed in the past. Projects include coding robots to doodle, developing an image classifier (a tool that can identify images), writing speculative fiction to tackle the murky ethics of AI, and developing a chatbot (your grade schooler cannot possibly be worse at that task than I was). Everything is free, but schools are supposed to license lesson plans from MIT before adopting them.

Various associated MIT groups are covering a wide range of topics including the already mentioned AI ethics, as well as cybersecurity and privacy issues, creativity, and more. Here’s a little something from a programme for the Girl Scouts of America, which focused on data privacy and tech policy,

The Girl Scouts awarded the Brownie (7-9) and Junior (9-11) troops with Cybersecurity badges at the end of the full event. 
Credit: Daniella DiPaola [downloaded from https://www.media.mit.edu/posts/data-privacy-policy-to-practice-with-the-girl-scouts/]

You can find MIT’s AI education website here. While the focus is largely on children, it seems they are inviting adults to participate as well. At least that’s what I infer from what the Lifelong Kindergarten group, one of the groups associated with this AI education website, states on its webpage,

The Lifelong Kindergarten group develops new technologies and activities that, in the spirit of the blocks and finger paint of kindergarten, engage people in creative learning experiences. Our ultimate goal is to foster a world full of playfully creative people, who are constantly inventing new possibilities for themselves and their communities.

The website is a little challenging with regard to navigation but perhaps this link to the Research Projects page will help you get started quickly or, for those who like to investigate a little further before jumping in, this News page (which is a blog) might prove helpful.

That’s it for today. I wish everyone a peaceful long weekend while we all observe as joyfully and carefully as possible our various religious and seasonal traditions. From my tradition to yours, Joyeuses Pâques!

Audio map of 24 emotions

Caption: Audio map of vocal bursts across 24 emotions. To visit the online map and hear the sounds, go to https://s3-us-west-1.amazonaws.com/vocs/map.html# and move the cursor across the map. Credit: Courtesy of Alan Cowen

The real map, not the image of the map you see above, offers a disconcerting (for me, anyway) experience. Especially since I’ve just finished reading Lisa Feldman Barrett’s 2017 book, How Emotions are Made, where she presents her theory of ‘constructed emotion’. (There’s more about ‘constructed emotion’ later in this post.)

Moving on to the story about the ‘auditory emotion map’ in the headline, a February 4, 2019 University of California at Berkeley news release by Yasmin Anwar (also on EurekAlert but published on Feb. 5, 2019) describes the work,

Ooh, surprise! Those spontaneous sounds we make to express everything from elation (woohoo) to embarrassment (oops) say a lot more about what we’re feeling than previously understood, according to new research from the University of California, Berkeley.

Proving that a sigh is not just a sigh [a reference to the song, As Time Goes By? The lyric is “a kiss is still a kiss, a sigh is just a sigh …”], UC Berkeley scientists conducted a statistical analysis of listener responses to more than 2,000 nonverbal exclamations known as “vocal bursts” and found they convey at least 24 kinds of emotion. Previous studies of vocal bursts set the number of recognizable emotions closer to 13.

The results, recently published online in the American Psychologist journal, are demonstrated in vivid sound and color on the first-ever interactive audio map of nonverbal vocal communication.

“This study is the most extensive demonstration of our rich emotional vocal repertoire, involving brief signals of upwards of two dozen emotions as intriguing as awe, adoration, interest, sympathy and embarrassment,” said study senior author Dacher Keltner, a psychology professor at UC Berkeley and faculty director of the Greater Good Science Center, which helped support the research.

For millions of years, humans have used wordless vocalizations to communicate feelings that can be decoded in a matter of seconds, as this latest study demonstrates.

“Our findings show that the voice is a much more powerful tool for expressing emotion than previously assumed,” said study lead author Alan Cowen, a Ph.D. student in psychology at UC Berkeley.

On Cowen’s audio map, one can slide one’s cursor across the emotional topography and hover over fear (scream), then surprise (gasp), then awe (woah), realization (ohhh), interest (ah?) and finally confusion (huh?).

Among other applications, the map can be used to help teach voice-controlled digital assistants and other robotic devices to better recognize human emotions based on the sounds we make, he said.

As for clinical uses, the map could theoretically guide medical professionals and researchers working with people with dementia, autism and other emotional processing disorders to zero in on specific emotion-related deficits.

“It lays out the different vocal emotions that someone with a disorder might have difficulty understanding,” Cowen said. “For example, you might want to sample the sounds to see if the patient is recognizing nuanced differences between, say, awe and confusion.”

Though limited to U.S. responses, the study suggests humans are so keenly attuned to nonverbal signals – such as the bonding “coos” between parents and infants – that we can pick up on the subtle differences between surprise and alarm, or an amused laugh versus an embarrassed laugh.

For example, by placing the cursor in the embarrassment region of the map, you might find a vocalization that is recognized as a mix of amusement, embarrassment and positive surprise.

“A tour through amusement reveals the rich vocabulary of laughter and a spin through the sounds of adoration, sympathy, ecstasy and desire may tell you more about romantic life than you might expect,” said Keltner.

Researchers recorded more than 2,000 vocal bursts from 56 male and female professional actors and non-actors from the United States, India, Kenya and Singapore by asking them to respond to emotionally evocative scenarios.

Next, more than 1,000 adults recruited via Amazon’s Mechanical Turk online marketplace listened to the vocal bursts and evaluated them based on the emotions and meaning they conveyed and whether the tone was positive or negative, among several other characteristics.

A statistical analysis of their responses found that the vocal bursts fit into at least two dozen distinct categories, including amusement, anger, awe, confusion, contempt, contentment, desire, disappointment, disgust, distress, ecstasy, elation, embarrassment, fear, interest, pain, realization, relief, sadness, surprise (positive), surprise (negative), sympathy and triumph.

For the second part of the study, researchers sought to present real-world contexts for the vocal bursts. They did this by sampling YouTube video clips that would evoke the 24 emotions established in the first part of the study, such as babies falling, puppies being hugged and spellbinding magic tricks.

This time, 88 adults of all ages judged the vocal bursts extracted from YouTube videos. Again, the researchers were able to categorize their responses into 24 shades of emotion. The full set of data was then organized into a semantic space and mapped onto an interactive map.
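
The news release doesn’t say how the listener ratings become a two-dimensional map. A common recipe in this kind of work is to treat each vocal burst as a vector of averaged ratings across the emotion categories and then project those vectors down to two dimensions. Here’s a minimal sketch of that idea; the use of t-SNE, the random toy data, and the exact dimensions are my assumptions, not details taken from the paper,

```python
# Sketch: project per-burst emotion-rating profiles onto a 2-D "semantic space".
# The random toy data and the choice of t-SNE are illustrative assumptions only.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_bursts, n_emotions = 2000, 24
# Pretend each row holds one vocal burst's mean listener rating for each of 24 emotions.
ratings = rng.random((n_bursts, n_emotions))

# Reduce the 24-dimensional rating profiles to 2-D coordinates suitable for a map.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(ratings)
print(coords.shape)  # (2000, 2): one (x, y) point per vocal burst
```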

“These results show that emotional expressions color our social interactions with spirited declarations of our inner feelings that are difficult to fake, and that our friends, co-workers, and loved ones rely on to decipher our true commitments,” Cowen said.

The writer assumes that emotions are pre-existing. Somewhere, there’s happiness, sadness, anger, etc. It’s the pre-existence that Lisa Feldman Barrett challenges with her theory that we construct our emotions (from her Wikipedia entry),

She highlights differences in emotions between different cultures, and says that emotions “are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment.”

You can find Barrett’s December 6, 2017 TED talk here where she explains her theory in greater detail. One final note about Barrett: she was born and educated in Canada and now works as a Professor of Psychology at Northeastern University in Boston, Massachusetts (US), with appointments at Harvard Medical School and Massachusetts General Hospital.

A February 7, 2019 article by Mark Wilson for Fast Company delves further into the 24 emotion audio map mentioned at the outset of this posting (Note: Links have been removed),

Fear, surprise, awe. Desire, ecstasy, relief.

These emotions are not distinct, but interconnected, across the gradient of human experience. At least that’s what a new paper from researchers at the University of California, Berkeley, Washington University, and Stockholm University proposes. The accompanying interactive map, which charts the sounds we make and how we feel about them, will likely persuade you to agree.

At the end of his article, Wilson also mentions the Dalai Lama and his Atlas of Emotions, a data visualization project, (featured in Mark Wilson’s May 13, 2016 article for Fast Company). It seems humans of all stripes are interested in emotions.

Here’s a link to and a citation for the paper about the audio map,

Mapping 24 emotions conveyed by brief human vocalization by Alan S. Cowen, Hillary Anger Elfenbein, Petri Laukka, & Dacher Keltner. American Psychologist, Dec. 20, 2018, No Pagination Specified. DOI: 10.1037/amp0000399


This paper is behind a paywall.

A watch that conducts sound through your body and into your ear

Apparently, all you have to do is tap your ear to access your telephone calls. A Jan. 8, 2016 article by Mark Wilson for Fast Company describes the technology and the experience of using Samsung’s TipTalk device,

It’s not so helpful to see a call on your smartwatch when you have to pull out your phone to take it anyway. And therein lies the problem with products like the Apple Watch: They’re often not a replacement for your phone, but an intermediary to inevitably using it.

But at this year’s Consumer Electronics Show [held in Las Vegas (Nevada, US) annually (Jan. 6 – 9, 2016)], Samsung’s secret R&D lab … showed off a promising concept to fix one of the biggest problems with smartwatches. Called TipTalk, it’s technology that can send sound from your smartwatch through your arm so when you touch your finger to your ear, you can hear a call or a voicemail—no headphones required.

Engineering breakthroughs like these can be easy to dismiss as a gimmick rather than revolutionary UX [user experience], but I like TipTalk for a few reasons. First, it maps hearing UI [user interface] into a gesture that we already might use to hear something better … . Second, it could be practical in real world use. You see a new voicemail on your watch, and without even a button press, you listen—but crucially, you still opt-in to hear the message rather than just have it play. And third, the gesture conveys to people around you that you’re occupied.

Ulrich Rozier, in his Jan. 8, 2016 article for frandroid.com, also raves, albeit in French,

Samsung a développé un bracelet que l’on peut utiliser sur n’importe quelle montre.

Ce bracelet vibre lorsque l’on reçoit un appel… il est ainsi possible de décrocher. Il faut ensuite positionner son doigt au niveau du pavillon de l’oreille. C’est là que la magie opère. On se retrouve à entendre des sons. Contrairement à ce que je pensais, le son ne se transmet pas par conduction osseuse, mais grâce à des vibrations envoyées à partir de votre poignet à travers votre corps. Vous pouvez l’utiliser pour prendre des appels ou pour lire vos SMS et autres messages. Et ça fonctionne.

Here’s my very rough translation,

Samsung has developed a bracelet that can be worn under any watch’s strap.

It’s the ‘bracelet’ that vibrates when you get a phone call. If you want to answer the call, reach up and tap your ear. That’s when the magic happens and sound is transmitted to your ear. Not through your bones as I thought but with vibrations transmitted from your wrist through your body. This way you can answer your calls or read SMS and other messages [?]. And it works.

I get sound vibration being transmitted to your ear but I don’t understand how you’d be able to read SMS or other messages.

Your smartphone can be an anti-counterfeiting device thanks to the Massachusetts Institute of Technology

MIT (Massachusetts Institute of Technology) has announced an anti-counterfeiting technology, from an April 29, 2014 article by Mark Wilson for Fast Company (Note: Links have been removed),

Most of us [in the United States] know the Secret Service as the black-suited organization employed to protect the President. But in reality, the service was created toward the end of the Civil War, before Lincoln was assassinated, to crack down on counterfeit currency. Because up to a third of all money at the time was counterfeit.

Fast-forward 150 years:  … the Secret Service reports that they expect counterfeiting to increase. And counterfeiting is no longer a problem for money alone. [emphasis mine] Prescription drugs are also counterfeited–with potentially deadly side effects.

As I noted in an April 28, 2014 posting (How do you know that’s extra virgin olive oil?) about a Swiss anti-counterfeiting effort involving nanoscale labels/tags, foodstuffs and petrol can also be counterfeited.

An April 13, 2014 MIT news release describes the project further,

Led by MIT chemical engineering professor Patrick Doyle and Lincoln Laboratory technical staff member Albert Swiston, the researchers have invented a new type of tiny, smartphone-readable particle that they believe could be deployed to help authenticate currency, electronic parts, and luxury goods, among other products. The particles, which are invisible to the naked eye, contain colored stripes of nanocrystals that glow brightly when lit up with near-infrared light.

These particles can easily be manufactured and integrated into a variety of materials, and can withstand extreme temperatures, sun exposure, and heavy wear, says Doyle, the senior author of a paper describing the particles in the April 13 issue of Nature Materials. They could also be equipped with sensors that can “record” their environments — noting, for example, if a refrigerated vaccine has ever been exposed to temperatures too high or low.

The new particles are about 200 microns long and include several stripes of different colored nanocrystals, known as “rare earth upconverting nanocrystals.” [emphasis mine] These crystals are doped with elements such as ytterbium, gadolinium, erbium, and thulium, which emit visible colors when exposed to near-infrared light. By altering the ratios of these elements, the researchers can tune the crystals to emit any color in the visible spectrum.

The researchers have produced a video where they describe the counterfeiting problem and their solution in nontechnical terms,

For anyone who prefers to read their science, there’s this more technically detailed description (than the one in the video), from the MIT news release,

To manufacture the particles, the researchers used stop-flow lithography, a technique developed previously by Doyle. This approach allows shapes to be imprinted onto parallel flowing streams of liquid monomers — chemical building blocks that can form longer chains called polymers. Wherever pulses of ultraviolet light strike the streams, a reaction is set off that forms a solid polymeric particle.

In this case, each polymer stream contains nanocrystals that emit different colors, allowing the researchers to form striped particles. So far, the researchers have created nanocrystals in nine different colors, but it should be possible to create many more, Doyle says.

Using this procedure, the researchers can generate vast quantities of unique tags. With particles that contain six stripes, there are 1 million different possible color combinations; this capacity can be exponentially enhanced by tagging products with more than one particle. For example, if the researchers created a set of 1,000 unique particles and then tagged products with any 10 of those particles, there would be 10³⁰ possible combinations — far more than enough to tag every grain of sand on Earth.
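
The arithmetic behind those figures is easy to reproduce. In this quick check, the ten-colour palette (which yields the “1 million” figure) and the ordered-selection counting (which yields 10³⁰) are my reading of the release rather than anything it states explicitly,

```python
# Sanity check of the encoding-capacity figures quoted in the MIT news release.
colours_per_stripe = 10      # assumption: ~10 distinguishable colours gives the "1 million" figure
stripes_per_particle = 6
print(colours_per_stripe ** stripes_per_particle)    # 1,000,000 possible single-particle codes

unique_particles = 1_000
particles_per_product = 10
# Treating each of the 10 particle positions independently (ordered, repetition allowed):
print(unique_particles ** particles_per_product)     # 10**30 possible product tags
```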

“It’s really a massive encoding capacity,” says Paul Bisso, who started this project while on the technical staff at Lincoln Lab. “You can apply different combinations of 10 particles to products from now until long past our time and you’ll never get the same combination.”

“The use of these upconverting nanocrystals is quite clever and highly enabling,” says Jennifer Lewis, a professor of biologically inspired engineering at Harvard University who was not involved in the research. “There are several striking features of this work, namely the exponentially scaling encoding capacities and the ultralow decoding false-alarm rate.”

Versatile particles

The microparticles could be dispersed within electronic parts or drug packaging during the manufacturing process, incorporated directly into 3-D-printed objects, or printed onto currency, the researchers say. They could also be incorporated into ink that artists could use to authenticate their artwork.

The researchers demonstrated the versatility of their approach by using two polymers with radically different material properties — one hydrophobic and one hydrophilic — to make their particles. The color readouts were the same with each, suggesting that the process could easily be adapted to many types of products that companies might want to tag with these particles, Bisso says.

“The ability to tailor the tag’s material properties without impacting the coding strategy is really powerful,” he says. “What separates our system from other anti-counterfeiting technologies is this ability to rapidly and inexpensively tailor material properties to meet the needs of very different and challenging requirements, without impacting smartphone readout or requiring a complete redesign of the system.”

Another advantage to these particles is that they can be read without an expensive decoder like those required by most other anti-counterfeiting technologies. [emphasis mine] Using a smartphone camera equipped with a lens offering twentyfold magnification, anyone could image the particles after shining near-infrared light on them with a laser pointer. The researchers are also working on a smartphone app that would further process the images and reveal the exact composition of the particles.

Before giving a link to and a citation for the paper, I’m going to make an observation. The nanocrystals are derived from ‘rare earths’, which is concerning since China, the main supplier of rare earths, is limiting the supply made available outside the country and seems intent on continuing to do so. While I appreciate that the amount of rare earth needed in the laboratory is minor, should this technology be commercialized and adopted there may be a problem, given that rare earths are already used extensively in smartphones, computers, etc.

That said, here’s a link to and a citation for the paper,

Universal process-inert encoding architecture for polymer microparticles by Jiseok Lee, Paul W. Bisso, Rathi L. Srinivas, Jae Jung Kim, Albert J. Swiston, & Patrick S. Doyle. Nature Materials 13, 524–529 (2014) doi:10.1038/nmat3938 Published online 13 April 2014

This article is behind a paywall.

E-tattoo without the nanotech

John Rogers and his team at the University of Illinois, along with Yonggang Huang’s team at Northwestern University, have devised an ‘electronic tattoo’ (a soft, stick-on patch) made from materials that anyone can purchase off the shelf. Rogers is known for his work with nanomaterials (my Aug. 10, 2012 posting titled ‘Surgery with fingertip control’ mentioned a silicon nanomembrane that can be fitted onto the fingertips for possible use in surgical procedures) and with electronics (my Aug. 12, 2011 posting titled ‘Electronic tattoos’ mentioned his earlier attempts at developing e-tattoos).

This latest effort from Rogers and his multi-university team is mentioned in an April 4, 2014 article by Mark Wilson for Fast Company,

About a year ago, University of Illinois researcher John Rogers revealed a pretty amazing creation: a circuit that, rather than living on an inflexible board, could stick to and move with someone’s skin just like an ink stamp. But like any early research, it was mostly a proof-of-concept, and it would require relatively expensive, custom-printed electronics to work.

Today, Rogers, in conjunction with Northwestern University’s Yonggang Huang, has published details on version 2.0 in Science, revealing that this once-esoteric project has more immediate, mass market appeal.

… It means that you could create a wearable electronic that’s one-part special sticky circuit board, every other part whatever-the-hell-you-manufactured-in-China. This flexible circuit could accommodate a stock battery, an accelerometer, a Wi-Fi chip, and Bluetooth circuitry, for instance, all living on your skin rather than inside your iPhone. And as an added bonus, it would be relatively cheap.

A University of Illinois April ?, 2014 news release describes Rogers, his multi-university team, and their current (pun intended) e-tattoo,

Engineers at the University of Illinois at Urbana-Champaign and Northwestern University have demonstrated thin, soft stick-on patches that stretch and move with the skin and incorporate commercial, off-the-shelf chip-based electronics for sophisticated wireless health monitoring.

The patches stick to the skin like a temporary tattoo and incorporate a unique microfluidic construction with wires folded like origami to allow the patch to bend and flex without being constrained by the rigid electronics components. The patches could be used for everyday health tracking – wirelessly sending updates to your cellphone or computer – and could revolutionize clinical monitoring such as EKG and EEG testing – no bulky wires, pads or tape needed.

“We designed this device to monitor human health 24/7, but without interfering with a person’s daily activity,” said Yonggang Huang, the Northwestern University professor who co-led the work with Illinois professor John A. Rogers. “It is as soft as human skin and can move with your body, but at the same time it has many different monitoring functions. What is very important about this device is it is wirelessly powered and can send high-quality data about the human body to a computer, in real time.”

The researchers did a side-by-side comparison with traditional EKG and EEG monitors and found the wireless patch performed equally to conventional sensors, while being significantly more comfortable for patients. Such a distinction is crucial for long-term monitoring, situations such as stress tests or sleep studies when the outcome depends on the patient’s ability to move and behave naturally, or for patients with fragile skin such as premature newborns.

Rogers’ group at Illinois previously demonstrated skin electronics made of very tiny, ultrathin, specially designed and printed components. While those also offer high-performance monitoring, the ability to incorporate readily available chip-based components provides many important, complementary capabilities in engineering design, at very low cost.

“Our original epidermal devices exploited specialized device geometries – super thin, structured in certain ways,” Rogers said. “But chip-scale devices, batteries, capacitors and other components must be re-formulated for these platforms. There’s a lot of value in complementing this specialized strategy with our new concepts in microfluidics and origami interconnects to enable compatibility with commercial off-the-shelf parts for accelerated development, reduced costs and expanded options in device types.”

The multi-university team turned to soft microfluidic designs to address the challenge of integrating relatively big, bulky chips with the soft, elastic base of the patch. The patch is constructed of a thin elastic envelope filled with fluid. The chip components are suspended on tiny raised support points, bonding them to the underlying patch but allowing the patch to stretch and move.

One of the biggest engineering feats of the patch is the design of the tiny, squiggly wires connecting the electronics components – radios, power inductors, sensors and more. The serpentine-shaped wires are folded like origami, so that no matter which way the patch bends, twists or stretches, the wires can unfold in any direction to accommodate the motion. Since the wires stretch, the chips don’t have to.

Skin-mounted devices could give those interested in fitness tracking a more complete and accurate picture of their activity level.

“When you measure motion on a wristwatch type device, your body is not very accurately or reliably coupled to the device,” said Rogers, a Swanlund Professor of Materials Science and Engineering at the U. of I. “Relative motion causes a lot of background noise. If you have these skin-mounted devices and an ability to locate them on multiple parts of the body, you can get a much deeper and richer set of information than would be possible with devices that are not well coupled with the skin. And that’s just the beginning of the rich range of accurate measurements relevant to physiological health that are possible when you are softly and intimately integrated onto the skin.”

The researchers hope that their sophisticated, integrated sensing systems could not only monitor health but also could help identify problems before the patient may be aware. For example, according to Rogers, data analysis could detect motions associated with Parkinson’s disease at its onset.

“The application of stretchable electronics to medicine has a lot of potential,” Huang said. “If we can continuously monitor our health with a comfortable, small device that attaches to our skin, it could be possible to catch health conditions before experiencing pain, discomfort and illness.”

Here’s a link to and a citation for the paper,

Soft Microfluidic Assemblies of Sensors, Circuits, and Radios for the Skin by Sheng Xu, Yihui Zhang, Lin Jia, Kyle E. Mathewson, Kyung-In Jang, Jeonghyun Kim, Haoran Fu, Xian Huang, Pranav Chava, Renhan Wang, Sanat Bhole, Lizhe Wang, Yoon Joo Na, Yue Guan, Matthew Flavin, Zheshen Han, Yonggang Huang, & John A. Rogers. Science 4 April 2014: Vol. 344 no. 6179 pp. 70-74 DOI: 10.1126/science.1250169

This paper is behind a paywall.

Visualizing beautiful math

Two artists, Yann Pineill and Nicolas Lefaucheux, associated with Parachutes, a video production and graphic design studio located in Paris, France, have produced a video demonstrating this quote from Bertrand Russell, which is in the opening frame,

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, without the gorgeous trappings of painting or music.” — Bertrand Russell

H/t Mark Wilson’s Nov. 6, 2013 article for Fast Company,

One viewing note: the screen is arranged as a triptych with the mathematical equation on the left, a schematic in the centre, and the real life manifestation on the right. Enjoy!

Cyborgian dance at McGill University (Canada)

As noted in the Council of Canadian Academies report (State of Science and Technology in Canada, 2012), which was mentioned in my Dec. 28, 2012 posting, the field of visual and performing arts is an area of strength and that is due to one province, Québec. Mark Wilson’s Aug. 13, 2013 article for Fast Company and Paul Ridden’s Aug. 7, 2013 article for gizmag.com about McGill University’s Instrumented Bodies: Digital Prostheses for Music and Dance Performance seem to confirm Québec’s leadership.

From Wilson’s Aug. 13, 2013 article (Note: A link has been removed),

One is a glowing exoskeleton spine, while another looks like a pair of cyborg butterfly wings. But these aren’t just costumes; they’re wearable, functional art.

In fact, the team of researchers from the IDMIL (Input Devices and Music Interaction Laboratory [at McGill University]) who are responsible for the designs go so far as to call their creations “prosthetic instruments.”

Ridden’s Aug. 7, 2013 article offers more about the project’s history and technology,

For the last three years, a small research team at McGill University has been working with a choreographer, a composer, dancers and musicians on a project named Instrumented Bodies. Three groups of sensor-packed, internally-lit digital music controllers that attach to a dancer’s costume have been developed, each capable of wirelessly triggering synthesized music as the performer moves around the stage. Sounds are produced by tapping or stroking transparent Ribs or Visors, or by twisting, turning or moving Spines. Though work on the project continues, the instruments have already been used in a performance piece called Les Gestes which toured Canada and Europe during March and April.

Both articles are interesting but Wilson’s is the fast read and Ridden’s gives you information you can’t find by looking up the Instrumented Bodies: Digital Prostheses for Music and Dance Performance project webpage,

These instruments are the culmination of a three-year long project in which the designers worked closely with dancers, musicians, composers and a choreographer. The goal of the project was to develop instruments that are visually striking, utilize advanced sensing technologies, and are rugged enough for extensive use in performance.

The complex, transparent shapes are lit from within, and include articulated spines, curved visors and ribcages. Unlike most computer music control interfaces, they function both as hand-held, manipulable controllers and as wearable, movement-tracking extensions to the body. Further, since the performers can smoothly attach and detach the objects, these new instruments deliberately blur the line between the performers’ bodies and the instrument being played.

The prosthetic instruments were designed and developed by Ph.D. researchers Joseph Malloch and Ian Hattwick [and Marlon Schumacher] under the supervision of IDMIL director Marcelo Wanderley. Starting with sketches and rough foam prototypes for exploring shape and movement, they progressed through many iterations of the design before arriving at the current versions. The researchers made heavy use of digital fabrication technologies such as laser-cutters and 3D printers, which they accessed through the McGill University School of Architecture and the Centre for Interdisciplinary Research in Music Media and Technology, also hosted by McGill.

Each of the nearly thirty working instruments produced for the project has embedded sensors, power supplies and wireless data transceivers, allowing a performer to control the parameters of music synthesis and processing in real time through touch, movement, and orientation. The signals produced by the instruments are routed through an open-source peer-to-peer software system the IDMIL team has developed for designing the connections between sensor signals and sound synthesis parameters.
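
The project page doesn’t spell out what that mapping software’s interface looks like, so the following is only a generic illustration of the idea it describes: scaling a raw sensor reading from one of the instruments into a sound-synthesis parameter. The signal names, ranges, and the simple linear mapping are all illustrative assumptions,

```python
# Generic illustration of routing a sensor signal to a sound-synthesis parameter.
# All names and ranges below are illustrative assumptions, not details from the project.
def scale(value: float, in_low: float, in_high: float, out_low: float, out_high: float) -> float:
    """Linearly map a sensor reading from its input range onto a synthesis-parameter range."""
    t = (value - in_low) / (in_high - in_low)
    return out_low + t * (out_high - out_low)

# Example: map a hypothetical spine-bend reading (0-1023 from an ADC) to a filter cutoff in Hz.
bend_reading = 612
cutoff_hz = scale(bend_reading, 0, 1023, 200.0, 8000.0)
print(round(cutoff_hz), "Hz")
```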

For those who prefer to listen and watch, the researchers have created a video documentary,

I usually don’t include videos that run past 5 mins. but I’ve made an exception for this almost 15-min. documentary.

I was trying to find mention of a dancer and/or choreographer associated with this project and found, in Ridden’s article, the name of choreographer Isabelle Van Grimde along with another early-stage participant, composer Sean Ferguson.

The space-time continuum as a table

Table: The Fourth Dimension from the Potential for Collapse collection by Axel Yberg (downloaded from http://www.akkefunctionalart.com/potentialforcollapse/fourthdimension_2.html)

Thanks to Mark Wilson and his Aug. 6, 2013 article for Fast Company for information about this extraordinary science-themed table,

The first three dimensions of Einstein’s space-time continuum are easy–X, Y, and Z vectors give our world a shape. The fourth dimension is time, but it’s a bit more complicated than just looking at a clock because it’s actually all times happening at once. “The separation between past, present, and future is only an illusion, although a convincing one,” Einstein once said. That’s a nice soundbite, but how do you wrap your brain around it? [emphasis mine]

Yberg’s answer to that question is a table. From the Fourth Dimension webpage on the akke functional art (Yberg’s company) website,

The steel-mesh embedded glass top of this piece represents the space-time continuum and the supporting pipes represent four-vectors. This theory, first proposed by Albert Einstein, states that time — the fourth dimension — is only a direction in space and that “the separation between past, present, and future is only an illusion, although a convincing one.” It’s a challenging concept because we are only able to perceive one path that time takes: the ever-changing present.

I began to think about Einstein’s theory, and how it relates to our life experiences and the time that we have for them, when talking to my brother-in-law, Chris.  He and his wife, Jill, had recently undergone two of the most emotional events that we experience as humans: the birth of a child, and the death of a loved one — their incredible dog, Hazel.  As they joyously welcomed a new member to their family, they grieved for the loss of another. The concept of time — and the importance of cherishing the present — became especially poignant.  I built The Fourth Dimension as a gift for their family, celebrating the new and honoring the old.

The four legs of the table represent the four members of their family and the cables represent how they are all connected to one another. Bound together as a family, they rely on each other for support. If any of the cables were severed, the table would collapse.

There’s also a video that features glimpses of the table as Yberg markets his company and its products,

According to the akke website, the Fourth Dimension table became available in January 2012.