Tag Archives: Google

A dress that lights up according to reactions on Twitter

I don’t usually have an opportunity to write about red carpet events but the recent Met Gala, also known as the Costume Institute Gala and the Met Ball, which took place on the evening of May 2, 2016 in New York, featured a ‘cognitive’ dress. Here’s more from a May 2, 2016 article by Emma Spedding for The Telegraph (UK),

“Tech white tie” was the dress code for last night’s Met Gala, inspired by the theme of this year’s Met fashion exhibition, ‘Manus x Machina: Fashion in the Age of Technology’. While many of the a-list attendees interpreted this to mean ‘silver sequins’, several rose to the challenge with beautiful, future-gazing gowns which give a glimpse of how our clothes might behave in the future.

Supermodel Karolina Kurkova wore a ‘cognitive’ Marchesa gown that was created in collaboration with technology company IBM. The two companies came together following a survey conducted by IBM which found that Marchesa was one of the favourite designers of its employees. The dress is created using a conductive fabric chosen from 40,000 options and embedded with 150 LED lights which change colour in reaction to the sentiments of Kurkova’s Twitter followers.

A May 2, 2016 article by Rose Pastore for Fast Company provides a little more technical detail and some insight into why Marchesa partnered with IBM,

At the Met Gala in Manhattan tonight [May 2, 2016], one model will be wearing a “cognitive dress”: A gown, designed by fashion house Marchesa, that will shift in color based on input from IBM’s Watson supercomputer. The dress features gauzy white roses, each embedded with an LED that will display different colors depending on the general sentiment of tweets about the Met Gala. The algorithm powering the dress relies on Watson Color Theory, which links emotions to colors, and on the Watson Tone Analyzer, a service that can detect emotion in text.

In addition to the color-changing cognitive dress, Marchesa designers are using Watson to get new color palette ideas. The designers choose from a list of emotions and concepts—things like romance, excitement, and power—and Watson recommends a palette of colors it associates with those sentiments.
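
For readers who like to see how a tweet-to-colour pipeline might hang together, here is a rough sketch of my own (not IBM’s): the analyze_tone function is a stand-in for a service such as the Watson Tone Analyzer, and the emotion-to-colour table is invented for illustration, not the Watson Color Theory palette,

    # Illustrative sketch only: pick an LED colour from the dominant emotion
    # detected in a batch of tweets. The tone scores are hard-coded stand-ins
    # for what a tone-analysis service would return; the colour table is a guess.

    EMOTION_TO_RGB = {
        "joy":           (255, 200, 40),    # warm yellow
        "excitement":    (255, 60, 60),     # red
        "passion":       (200, 0, 120),     # magenta
        "curiosity":     (40, 120, 255),    # blue
        "encouragement": (60, 220, 120),    # green
    }

    def analyze_tone(tweets):
        """Placeholder for a call to a tone-analysis service; returns scores 0..1."""
        return {"joy": 0.7, "excitement": 0.5, "passion": 0.2,
                "curiosity": 0.4, "encouragement": 0.3}

    def led_colour_for(tweets):
        scores = analyze_tone(tweets)
        dominant = max(scores, key=scores.get)
        return dominant, EMOTION_TO_RGB[dominant]

    emotion, rgb = led_colour_for(["Loving the #MetGala gowns tonight!"])
    print(emotion, rgb)   # e.g. joy (255, 200, 40)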

An April 29, 2016 posting by Ann Rubin for IBM’s Think blog discusses the history of technology/art partnerships and provides more technical detail (yes!) about this one,

Throughout history, we’ve seen traces of technology enabling humans to create – from Da Vinci’s use of the camera obscura to Caravaggio’s work with mirrors and lenses. Today, cognitive systems like Watson are giving artists, designers and creative minds the tools to make sense of the world in ground-breaking ways, opening up new avenues for humans to approach creative thinking.

The dress’ cognitive creation relies on a mix of Watson APIs, cognitive tools from IBM Research, solutions from Watson developer partner Inno360 and the creative vision from the Marchesa design team. In advance of it making its exciting debut on the red carpet, we’d like to take you on the journey of how man and machine collaborated to create this special dress.

Rooted in the belief that color and images can indicate moods and send messages, Marchesa first selected five key human emotions – joy, passion, excitement, encouragement and curiosity – that they wanted the dress to convey. IBM Research then fed this data into the cognitive color design tool, a groundbreaking project out of IBM Research-Yorktown that understands the psychological effects of colors, the interrelationships between emotions, and image aesthetics.

This process also involved feeding Watson hundreds of images associated with Marchesa dresses in order to understand and learn the brand’s color palette. Ultimately, Watson was able to suggest color palettes that were in line with Marchesa’s brand and the identified emotions, which will come to life on the dress during the Met Gala.

Once the colors were finalized, Marchesa turned to IBM partner Inno360 to source a fabric for their creation. Using Inno360’s R&D platform – powered by a combination of seven Watson services – the team searched more than 40,000 sources for fabric information, narrowing down to 150 sources of the most useful options to consider for the dress.

From this selection, Inno360 worked in partnership with IBM Research-Almaden to identify printed and woven textiles that would respond well to the LED technology needed to execute the final part of the collaboration. Inno360 was then able to deliver 35 unique fabric recommendations based on a variety of criteria important to Marchesa, like weight, luminosity, and flexibility. From there, Marchesa weighed the benefits of different material compositions, weights and qualities to select the final fabric that suited the criteria for their dress and remained true to their brand.
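
As an aside, the palette-learning step described above (feeding Watson hundreds of images of Marchesa dresses) can be approximated with a very ordinary technique: cluster the pixel colours of a set of photos and keep the cluster centres. This is not IBM’s cognitive color design tool, just a generic sketch assuming Pillow, NumPy and scikit-learn are installed; the file names are hypothetical,

    # Generic palette extraction via k-means clustering on pixels, a baseline
    # stand-in for learning a brand's colours from images (not IBM's tool).
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def dominant_palette(image_paths, n_colours=5):
        pixels = []
        for path in image_paths:
            img = Image.open(path).convert("RGB").resize((64, 64))  # downsample for speed
            pixels.append(np.asarray(img).reshape(-1, 3))
        kmeans = KMeans(n_clusters=n_colours, n_init=10).fit(np.vstack(pixels))
        return [tuple(map(int, centre)) for centre in kmeans.cluster_centers_]

    # Example with made-up file names:
    # print(dominant_palette(["marchesa_gown_01.jpg", "marchesa_gown_02.jpg"]))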

Here’s what the dress looks like,

Courtesy of Marchesa Facebook page (https://www.facebook.com/MarchesaFashion/)

Watson is an artificial intelligence program, which I have written about a few times, but I think this Feb. 28, 2011 posting (scroll down about 50% of the way), which mentions Watson, product placement, Jeopardy (tv quiz show), and medical diagnoses, seems the most à propos given IBM’s latest product placement at the Met Gala.

Not the only ‘tech’ dress

There was at least one other ‘tech’ dress at the 2016 Met Gala, this one designed by Zac Posen and worn by Claire Danes. It did not receive a stellar review in a May 3, 2016 posting by Elaine Lui on Laineygossip.com,

People are losing their goddamn minds over this dress, by Zac Posen. Because it lights up.

It’s bullsh-t.

This is a BULLSH-T DRESS.

It’s Cinderella with a lamp shoved underneath her skirt.

Here’s a video of Danes and her dress at the Met Gala,

A Sept. 10, 2015 news item in People magazine indicates that a different version of a Posen ‘tech’ dress was a collaboration with Google (Note: Links have been removed),

Designer Zac Posen lit up his 2015 New York Fashion Week kickoff show on Tuesday by debuting a gorgeous and tech-savvy coded LED dress that blinked in different, dazzling pre-programmed patterns down the runway.

In coordination with Google’s non-profit organization, Made with Code, which inspires girls to pursue careers in tech coding, Posen teamed up with 30 girls (all between the ages of 13 and 18), who attended the show, to introduce the flashy dress — which was designed by Posen and coded by the young women.

“This is the future of the industry: mixing craft, fashion and technology,” the 34-year-old designer told PEOPLE. “There’s a discrepancy in the coding field, hardly any women are at the forefront, and that’s a real shame. If we can entice young women through the allure of fashion, to get them learning this language, why not?”

…

Through a micro controller, the gown displays coded patterns in 500 LED lights that are set to match the blues and yellows of Posen’s new collection. The circuit was designed and physically built into Posen’s dress fabric by 22-year-old up-and-coming fashion designer and computer science enthusiast, Maddy Maxey, who tells PEOPLE she was nervous watching Rocha [model Coco Rocha] make her way down the catwalk.

“It’s exactly as if she was carrying a microwave down the runway,” Maxey said. “It’s an entire circuit on a textile, so if one connection had come loose, the dress wouldn’t have worked. But, it did! And it was so deeply rewarding.”
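
I have not seen the code the Made with Code participants wrote, but the idea of a pre-programmed pattern for a strip of addressable LEDs is easy to sketch. The following is a plain Python illustration of one frame-generating routine; on the actual garment a microcontroller would push each frame out to the 500 LEDs,

    # Minimal sketch (not the actual dress code): frames for ~500 addressable LEDs.
    # Each frame is a list of (R, G, B) tuples; a band of yellow sweeps over blue,
    # echoing the blues and yellows mentioned in the People article.
    NUM_LEDS = 500
    BLUE, YELLOW = (0, 60, 255), (255, 200, 0)

    def chase_frame(step, width=10):
        frame = [BLUE] * NUM_LEDS
        for i in range(width):
            frame[(step + i) % NUM_LEDS] = YELLOW   # the moving band
        return frame

    frame = chase_frame(step=42)
    print(frame[40:46])   # inspect a few pixels around the band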

Other ‘tech’ dresses

Back in 2009 I attended that year’s International Symposium on Electronic Arts and heard Clive van Heerden of Royal Philips Electronics talk about a number of innovative concepts, including a ‘mood’ dress that would reveal the wearer’s emotions to whoever should glance their way. It was not a popular concept, especially not in Japan, where it was first tested.

The symposium also featured Moritz Waldemeyer, who worked with fashion designer Hussein Chalayan on LED dresses and dresses that changed shape as the models went down the runway.

In 2010 there was a flurry of media interest in mood changing ‘smart’ clothes designed by researchers at Concordia University (Barbara Layne, Canada) and Goldsmiths College (Janis Jefferies, UK). Here’s more from a June 4, 2010 BBC news online item,

The clothes are connected to a database that analyses the data to work out a person’s emotional state.

Media, including songs, words and images, are then piped to the display and speakers in the clothes to calm a wearer or offer support.

Created as part of an artistic project called Wearable Absence the clothes are made from textiles woven with different sorts of wireless sensors. These can track a wide variety of tell-tale biological markers including temperature, heart rate, breathing and galvanic skin response.
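
The BBC description boils down to a sense-classify-respond loop. Here is a toy illustration of that loop (emphatically not the Wearable Absence system; the thresholds and media choices are invented),

    # Toy sense-classify-respond loop: biometric readings in, media suggestion out.
    def classify_state(heart_rate, breathing_rate, skin_conductance):
        if heart_rate > 100 and skin_conductance > 8.0:
            return "stressed"
        if heart_rate < 65 and breathing_rate < 10:
            return "calm"
        return "neutral"

    MEDIA = {
        "stressed": ["slow_breathing_audio.mp3", "calming_images/"],
        "calm":     ["ambient_playlist.m3u"],
        "neutral":  [],
    }

    reading = {"heart_rate": 108, "breathing_rate": 18, "skin_conductance": 9.2}
    state = classify_state(**reading)
    print(state, "->", MEDIA[state])   # stressed -> ['slow_breathing_audio.mp3', ...]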

Final comments

I don’t have anything grand to say. It is interesting to see the progression of ‘tech’ dresses from avant garde designers and academics to haute couture.

Wearable tech for Christmas 2015 and into 2016

This is a roundup post of four items to cross my path this morning (Dec. 17, 2015), all of them concerned with wearable technology.

The first, a Dec. 16, 2015 news item on phys.org, is a fluffy little piece concerning the imminent arrival of a new generation of wearable technology,

It’s not every day that there’s a news story about socks. But in November [2015], a pair won the Best New Wearable Technology Device Award at a Silicon Valley conference. The smart socks, which track foot landings and cadence, are at the forefront of a new generation of wearable electronics, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society [ACS].

That news item was originated by a Dec. 16, 2015 ACS news release on EurekAlert which adds this,

Marc S. Reisch, a senior correspondent at C&EN, notes that stiff wristbands like the popular FitBit that measure heart rate and the number of steps people take have become common. But the long-touted technology needed to create more flexible monitoring devices has finally reached the market. Developers have successfully figured out how to incorporate stretchable wiring and conductive inks in clothing fabric, program them to transmit data wirelessly and withstand washing.

In addition to smart socks, fitness shirts and shoe insoles are on the market already or are nearly there. Although athletes are among the first to gain from the technology, the less fitness-oriented among us could also benefit. One fabric concept product — designed not for covering humans but a car steering-wheel — could sense driver alertness and make roads safer.

Reisch’s Dec. 7, 2015 article (C&EN vol. 93, issue 48, pp. 28-90) provides more detailed information and market information such as this,

Materials suppliers, component makers, and apparel developers gathered at a printed-electronics conference in Santa Clara, Calif., within a short drive of tech giants such as Google and Apple, to compare notes on embedding electronics into the routines of daily life. A notable theme was the effort to stealthily [emphasis mine] place sensors on exercise shirts, socks, and shoe soles so that athletes and fitness buffs can wirelessly track their workouts and doctors can monitor the health of their patients.

“Wearable technology is becoming more wearable,” said Raghu Das, chief executive officer of IDTechEx [emphasis mine], the consulting firm that organized the conference. By that he meant the trend is toward thinner and more flexible devices that include not just wrist-worn fitness bands but also textiles printed with stretchable wiring and electronic sensors, thanks to advances in conductive inks.

Interesting use of the word ‘stealthily’, which often suggests something sneaky as opposed to merely secretive. I imagine what’s being suggested is that the technology will not impose itself on the user (i.e., you won’t have to learn how to use it as you did with phones and computers).

Leading into my second item, IDC (International Data Corporation), not to be confused with IDTechEx, is mentioned in a Dec. 17, 2015 news item about wearable technology markets on phys.org,

The global market for wearable technology is seeing a surge, led by watches, smart clothing and other connected gadgets, a research report said Thursday [Dec. 16, 2015].

IDC said its forecast showed the worldwide wearable device market will reach a total of 111.1 million units in 2016, up 44.4 percent from this year.

By 2019, IDC sees some 214.6 million units, or a growth rate averaging 28 percent.

A Dec. 17, 2015 IDC press release, which originated the news item, provides more details about the market forecast,

“The most common type of wearables today are fairly basic, like fitness trackers, but over the next few years we expect a proliferation of form factors and device types,” said Jitesh Ubrani, Senior Research Analyst for IDC Mobile Device Trackers. “Smarter clothing, eyewear, and even hearables (ear-worn devices) are all in their early stages of mass adoption. Though at present these may not be significantly smarter than their analog counterparts, the next generation of wearables are on track to offer vastly improved experiences and perhaps even augment human abilities.”

One of the most popular types of wearables will be smartwatches, reaching a total of 34.3 million units shipped in 2016, up from the 21.3 million units expected to ship in 2015. By 2019, the final year of the forecast, total shipments will reach 88.3 million units, resulting in a five-year CAGR of 42.8%.

“In a short amount of time, smartwatches have evolved from being extensions of the smartphone to wearable computers capable of communications, notifications, applications, and numerous other functionalities,” noted Ramon Llamas, Research Manager for IDC’s Wearables team. “The smartwatch we have today will look nothing like the smartwatch we will see in the future. Cellular connectivity, health sensors, not to mention the explosive third-party application market all stand to change the game and will raise both the appeal and value of the market going forward.

“Smartwatch platforms will lead the evolution,” added Llamas. “As the brains of the smartwatch, platforms manage all the tasks and processes, not the least of which are interacting with the user, running all of the applications, and connecting with the smartphone. Once that third element is replaced with cellular connectivity, the first two elements will take on greater roles to make sense of all the data and connections.”

Top Five Smartwatch Platform Highlights

Apple’s watchOS will lead the smartwatch market throughout our forecast, with a loyal fanbase of Apple product owners and a rapidly growing application selection, including both native apps and Watch-designed apps. Very quickly, watchOS has become the measuring stick against which other smartwatches and platforms are compared. While there is much room for improvement and additional features, there is enough momentum to keep it ahead of the rest of the market.

Android/Android Wear will be a distant second behind watchOS even as its vendor list grows to include technology companies (ASUS, Huawei, LG, Motorola, and Sony) and traditional watchmakers (Fossil and Tag Heuer). The user experience on Android Wear devices has been largely the same from one device to the next, leaving little room for OEMs to develop further and users left to select solely on price and smartwatch design.

Smartwatch pioneer Pebble will cede market share to AndroidWear and watchOS but will not disappear altogether. Its simple user interface and devices make for an easy-to-understand use case, and its price point relative to other platforms makes Pebble one of the most affordable smartwatches on the market.

Samsung’s Tizen stands to be the dark horse of the smartwatch market and poses a threat to Android Wear, including compatibility with most flagship Android smartphones and an application selection rivaling Android Wear. Moreover, with Samsung, Tizen has benefited from technology developments including a QWERTY keyboard on a smartwatch screen, cellular connectivity, and new user interfaces. It’s a combination that helps Tizen stand out, but not enough to keep up with AndroidWear and watchOS.

There will be a small, but nonetheless significant market for smart wristwear running on a Real-Time Operating System (RTOS), which is capable of running third-party applications, but not on any of these listed platforms. These tend to be proprietary operating systems and OEMs will use them when they want to champion their own devices. These will help within specific markets or devices, but will not overtake the majority of the market.

The company has provided a table with five-year CAGR (compound annual growth rate) growth estimates, which can be found with the Dec. 17, 2015 IDC press release.
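
For anyone wanting to check the arithmetic, CAGR is simply the constant year-over-year growth rate that takes you from a starting shipment figure to an ending one. A quick sketch using the smartwatch numbers quoted above (the base year behind IDC’s ‘five-year’ figure isn’t stated here, so the 2015-2019 span is my assumption),

    def cagr(start_units, end_units, years):
        """Compound annual growth rate between two shipment figures."""
        return (end_units / start_units) ** (1.0 / years) - 1.0

    # Smartwatches: 21.3M units expected in 2015 -> 88.3M units forecast for 2019.
    print(f"{cagr(21.3, 88.3, years=4):.1%}")   # ~42.7%, close to the quoted 42.8%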

Disclaimer: I am not endorsing IDC’s claims regarding the market for wearable technology.

For the third and fourth items, it’s back to the science. A Dec. 17, 2015 news item on Nanowerk describes, in general terms, some recent wearable technology research at the University of Manchester (UK) (Note: A link has been removed),

Cheap, flexible, wireless graphene communication devices such as mobile phones and healthcare monitors can be directly printed into clothing and even skin, University of Manchester academics have demonstrated.

In a breakthrough paper in Scientific Reports (“Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications”), the researchers show how graphene could be crucial to wearable electronic applications because it is highly-conductive and ultra-flexible.

The research could pave the way for smart, battery-free healthcare and fitness monitoring, phones, internet-ready devices and chargers to be incorporated into clothing and ‘smart skin’ applications – printed graphene sensors integrated with other 2D materials stuck onto a patient’s skin to monitor temperature, strain and moisture levels.

Detail is provided in a Dec. 17, 2015 University of Manchester press release, which originated the news item, (Note: Links have been removed),

Examples of communication devices include:

• In a hospital, a patient wears a printed graphene RFID tag on his or her arm. The tag, integrated with other 2D materials, can sense the patient’s body temperature and heartbeat and sends them back to the reader. The medical staff can monitor the patient’s conditions wirelessly, greatly simplifying the patient’s care.

• In a care home, battery-free printed graphene sensors can be printed on elderly peoples’ clothes. These sensors could detect and collect elderly people’s health conditions and send them back to the monitoring access points when they are interrogated, enabling remote healthcare and improving quality of life.

Existing materials used in wearable devices are either too expensive, such as silver nanoparticles, or not adequately conductive to have an effect, such as conductive polymers.

Graphene, the world’s thinnest, strongest and most conductive material, is perfect for the wearables market because of its broad range of superlative qualities. Graphene conductive ink can be cheaply mass produced and printed onto various materials, including clothing and paper.

The researchers, led by Dr Zhirun Hu, printed graphene to construct transmission lines and antennas and experimented with these in communication devices, such as mobile and Wifi connectivity.

Using a mannequin, they attached graphene-enabled antennas on each arm. The devices were able to ‘talk’ to each other, effectively creating an on-body communications system.

The results proved that graphene enabled components have the required quality and functionality for wireless wearable devices.

Dr Hu, from the School of Electrical and Electronic Engineering, said: “This is a significant step forward – we can expect to see a truly all graphene enabled wireless wearable communications system in the near future.

“The potential applications for this research are huge – whether it be for health monitoring, mobile communications or applications attached to skin for monitoring or messaging.

“This work demonstrates that this revolutionary scientific material is bringing a real change into our daily lives.”

Co-author Sir Kostya Novoselov, who with his colleague Sir Andre Geim first isolated graphene at the University in 2004, added: “Research into graphene has thrown up significant potential applications, but to see evidence that cheap, scalable wearable communication devices are on the horizon is excellent news for graphene commercial applications.”
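
As a rough sense of scale for why printed antennas suit clothing, the ideal half-wave dipole at Wi-Fi frequency is only a few centimetres long. This back-of-envelope calculation is mine and says nothing about the specific antenna geometries in the Manchester paper,

    # Back-of-envelope antenna sizing: ideal half-wave dipole length at 2.45 GHz.
    C = 3.0e8   # speed of light in free space, m/s

    def half_wave_dipole_cm(freq_hz):
        return 100 * (C / freq_hz) / 2   # ignores the ~5% end-effect shortening

    print(f"{half_wave_dipole_cm(2.45e9):.1f} cm")   # ~6.1 cm, sleeve-sized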

Here’s a link to and a citation for the paper,

Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications by Xianjun Huang, Ting Leng, Mengjian Zhu, Xiao Zhang, JiaCing Chen, KuoHsin Chang, Mohammed Aqeeli, Andre K. Geim, Kostya S. Novoselov, & Zhirun Hu. Scientific Reports 5, Article number: 18298 (2015) doi:10.1038/srep18298 Published online: 17 December 2015

This is an open access paper.

The next and final item concerns supercapacitors for wearable tech, which makes it slightly different from the other items and is why, despite the date, this is the final item. The research comes from Case Western Reserve University (CWRU; US) according to a Dec. 16, 2015 news item on Nanowerk (Note: A link has been removed),

Wearable power sources for wearable electronics are limited by the size of garments.

With that in mind, researchers at Case Western Reserve University have developed flexible wire-shaped microsupercapacitors that can be woven into a jacket, shirt or dress (Energy Storage Materials, “Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes”).

A Dec. 16, 2015 CWRU news release (on EurekAlert), which originated the news item, provides more detail about a device that would make wearable tech more wearable (after all, you don’t want to recharge your clothes the same way you do your phone and other mobile devices),

By their design or by connecting the capacitors in series or parallel, the devices can be tailored to match the charge storage and delivery needs of electronics donned.

While there’s been progress in development of those electronics–body cameras, smart glasses, sensors that monitor health, activity trackers and more–one challenge remaining is providing less obtrusive and cumbersome power sources.

“The area of clothing is fixed, so to generate the power density needed in a small area, we grew radially-aligned titanium oxide nanotubes on a titanium wire used as the main electrode,” said Liming Dai, the Kent Hale Smith Professor of Macromolecular Science and Engineering. “By increasing the surface area of the electrode, you increase the capacitance.”

Dai and Tao Chen, a postdoctoral fellow in molecular science and engineering at Case Western Reserve, published their research on the microsupercapacitor in the journal Energy Storage Materials this week. The study builds on earlier carbon-based supercapacitors.

A capacitor is cousin to the battery, but offers the advantage of charging and releasing energy much faster.

How it works

In this new supercapacitor, the modified titanium wire is coated with a solid electrolyte made of polyvinyl alcohol and phosphoric acid. The wire is then wrapped with either yarn or a sheet made of aligned carbon nanotubes, which serves as the second electrode. The titanium oxide nanotubes, which are semiconducting, separate the two active portions of the electrodes, preventing a short circuit.

In testing, capacitance–the capability to store charge–increased from 0.57 to 0.9 to 1.04 milliFarads per micrometer as the strands of carbon nanotube yarn were increased from 1 to 2 to 3.

When wrapped with a sheet of carbon nanotubes, which increases the effective area of electrode, the microsupercapacitor stored 1.84 milliFarads per micrometer. Energy density was 0.16 x 10^-3 milliwatt-hours per cubic centimeter and power density 0.01 milliwatt per cubic centimeter.

Whether wrapped with yarn or a sheet, the microsupercapacitor retained at least 80 percent of its capacitance after 1,000 charge-discharge cycles. To match various specific power needs of wearable devices, the wire-shaped capacitors can be connected in series or parallel to raise voltage or current, the researchers say.

When bent up to 180 degrees hundreds of times, the capacitors showed no loss of performance. Those wrapped in sheets showed more mechanical strength.

“They’re very flexible, so they can be integrated into fabric or textile materials,” Dai said. “They can be a wearable, flexible power source for wearable electronics and also for self-powered biosensors or other biomedical devices, particularly for applications inside the body.” [emphasis mine]

Dai’s lab is in the process of weaving the wire-like capacitors into fabric and integrating them with a wearable device.
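
The line about connecting capacitors in series or parallel is standard circuit behaviour: series connections add voltage while reducing capacitance, parallel connections add capacitance at the same voltage. A small sketch with illustrative numbers (the per-cell voltage is my assumption, not a figure from the paper),

    # Series/parallel trade-off for identical capacitor cells (illustrative values).
    def series(caps):
        return 1.0 / sum(1.0 / c for c in caps)

    def parallel(caps):
        return sum(caps)

    cell_c = 1.84e-3   # capacitance per cell, loosely echoing the quoted figure
    cell_v = 1.0       # assumed operating voltage per cell

    cells = [cell_c] * 3
    print("series:  ", series(cells), "F at", 3 * cell_v, "V")    # lower C, higher V
    print("parallel:", parallel(cells), "F at", cell_v, "V")      # higher C, same V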

So one day we may be carrying supercapacitors in our bodies? I’m not sure how I feel about that goal. In any event, here’s a link and a citation for the paper,

Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes by Tao Chen, Liming Dai. Energy Storage Materials Volume 2, January 2016, Pages 21–26 doi:10.1016/j.ensm.2015.11.004

This paper appears to be open access.

Google announces research results after testing 1,097-qubit D-Wave 2X™ quantum computers

If you’ve been reading this blog over the last few months, you’ll know that I’ve mentioned D-Wave Systems, a Vancouver (Canada)-based quantum computing company, frequently. The company seems to be signing all kinds of deals lately including one with Google (my Oct. 5, 2015 posting). Well, a Dec. 9, 2015 news item on Nanotechnology Now sheds more light on how Google is using D-Wave’s quantum computers,

Harris & Harris Group, Inc. (NASDAQ: TINY), an investor in transformative companies enabled by disruptive science, notes that yesterday [Dec. 8, 2015] NASA, Google and the Universities Space Research Association (USRA) hosted a tour of the jointly run Quantum Artificial Intelligence Laboratory located at NASA’s Ames Research Center which houses one of D-Wave’s 1,097-qubit D-Wave 2X™ quantum computers. At this event, Google announced that D-Wave’s quantum computer was able to find solutions to complicated problems of nearly 1,000 variables up to 10^8 (100,000,000) times faster than classical computers.

A Dec. 8, 2015 posting by Hartmut Neven for the Google Research blog describes the research and the results (Note: Links have been removed),

During the last two years, the Google Quantum AI [artificial intelligence] team has made progress in understanding the physics governing quantum annealers. We recently applied these new insights to construct proof-of-principle optimization problems and programmed these into the D-Wave 2X quantum annealer that Google operates jointly with NASA. The problems were designed to demonstrate that quantum annealing can offer runtime advantages for hard optimization problems characterized by rugged energy landscapes. We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor sometimes as high as 10^8.

For anyone (like me) who needs an explanation of quantum annealing, there’s this from its Wikipedia entry (Note: Links have been removed),

Quantum annealing (QA) is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima; such as finding the ground state of a spin glass.[1] It was formulated in its present form by T. Kadowaki and H. Nishimori in “Quantum annealing in the transverse Ising model”[2] though a proposal in a different form had been proposed by A. B. Finilla, M. A. Gomez, C. Sebenik and J. D. Doll, in “Quantum annealing: A new method for minimizing multidimensional functions”.[3]

Not as helpful as I’d hoped but sometimes it’s necessary to learn a new vocabulary and a new set of basic principles, which takes time and requires the ability to ‘not know’ and/or ‘not understand’ until one day, you do.
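
One concrete way in is to look at the classical baseline Google benchmarked against. Simulated annealing flips variables at random and accepts uphill moves with a probability that shrinks as a ‘temperature’ is lowered; quantum annealing aims at the same minimization but uses quantum fluctuations (tunneling) rather than thermal ones. Here is a minimal simulated-annealing sketch on a tiny random Ising-style problem, purely for illustration and nothing like the ~1,000-variable instances in the Google/NASA work,

    import math, random

    def ising_energy(spins, J):
        """Energy of a configuration of +/-1 spins with couplings J (upper triangle)."""
        n = len(spins)
        return -sum(J[i][j] * spins[i] * spins[j]
                    for i in range(n) for j in range(i + 1, n))

    def simulated_annealing(n=12, steps=20000, t_start=5.0, t_end=0.01, seed=1):
        rng = random.Random(seed)
        J = [[rng.choice([-1.0, 1.0]) for _ in range(n)] for _ in range(n)]
        spins = [rng.choice([-1, 1]) for _ in range(n)]
        current = ising_energy(spins, J)
        best_e = current
        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
            i = rng.randrange(n)
            spins[i] *= -1                        # propose a single spin flip
            proposed = ising_energy(spins, J)
            delta = proposed - current
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = proposed                # accept (downhill always, uphill sometimes)
                best_e = min(best_e, current)
            else:
                spins[i] *= -1                    # reject: undo the flip
        return best_e

    print("lowest energy found:", simulated_annealing())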

In the meantime, here’s more possibly befuddling information from the researchers in the form of a paper on arXiv.org,

What is the Computational Value of Finite Range Tunneling? by Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John Martinis, Hartmut Neven. http://arxiv.org/abs/1512.02206

This paper is open access.

US Los Alamos National Laboratory catches the D-Wave (buys a 1000+ Qubit quantum computer from D-Wave)

It can be a euphoric experience making a major technical breakthrough (June 2015), selling to a new large customer (Nov. 2015) and impressing your important customers so they upgrade to the new system (Oct. 2015) within a few short months.* D-Wave Systems (a Vancouver-based quantum computer company) certainly has cause to experience it given the events of the last six weeks or so. Yesterday, in a Nov. 11, 2015, D-Wave news release, the company trumpeted its sale of a 1000+ Qubit system (Note: Links have been removed),

D-Wave Systems Inc., the world’s first quantum computing company, announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The National Strategic Computing Initiative, created by executive order of President Obama in late July [2015], is intended “to maximize [the] benefits of high-performance computing (HPC) research, development, and deployment.”

“Los Alamos is a global leader in high performance computing and a pioneer in the application of new architectures to solve critical problems related to national security, energy, the environment, materials, health and earth science,” said Robert “Bo” Ewald, president of D-Wave U.S. “As we work jointly with scientists and engineers at Los Alamos we expect to be able to accelerate the pace of quantum software development to advance the state of algorithms, applications and software tools for quantum computing.”

A Nov. 11, 2015 news item on Nanotechnology Now is written from the company’s venture capitalist’s perspective,

Harris & Harris Group, Inc. (NASDAQ:TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system.

The news about the Los Alamos sale comes only weeks after D-Wave announced renewed agreements with Google, NASA (US National Aeronautics and Space Administration), and the Universities Space Research Association (USRA) in the aftermath of a technical breakthrough. See my Oct. 5, 2015 posting for more details about the agreements, the type of quantum computer D-Wave sells, and news of interesting and related research in Australia. Cracking the 1,000-qubit barrier also occasioned a posting here (June 26, 2015) where I described the breakthrough, the company, and included excerpts from an Economist article which mentioned D-Wave in its review of research in the field of quantum computing.

Congratulations to D-Wave!

*’It can be euphoric selling to your first large and/or important customers and D-Wave Systems (a Vancouver-based quantum computer company) certainly has cause to experience it. ‘ changed to more accurately express my thoughts to ‘It can be euphoric experience making a major technical breakthrough (June 2015), selling to a new large customer (Nov. 2015) and impressing your important customers so they upgrade to the new system (Oct. 2015) within a few short months.’ on Nov. 12, 2015 at 1025 hours PST.

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community was mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.

D-Wave claims to have found a solution to the decoherence problem and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

It takes a lot of innovation before you make big strides forward, and I think D-Wave is to be congratulated on producing what is, to my knowledge, the only commercially available form of quantum computing of any sort in the world.

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”

According to the article, the university is looking for industrial partners to help it exploit this breakthrough. Francis’ article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave, is making quite a splash lately due to a technical breakthrough. h/t’s to Speaking up for Canadian Science for the Business in Vancouver article and to Nanotechnology Now for the Harris & Harris Group press release and Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use two bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allow it to be one and zero at the same time, meaning it can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers. D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1,000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which is substantially larger than the 2^512 possibilities available to the company’s currently available 512 qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
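
The search-space arithmetic in that press release is easy to verify: 2^512 is roughly 10^154 and 2^1000 roughly 10^301, while the commonly cited particle count for the observable universe is only on the order of 10^80 (itself just an order-of-magnitude estimate),

    import math

    for qubits in (512, 1000):
        digits = int(qubits * math.log10(2)) + 1
        print(f"2^{qubits} has {digits} decimal digits")
    # 2^512  ~ 1.3 x 10^154
    # 2^1000 ~ 1.1 x 10^301, dwarfing the ~10^80 particles in the observable universe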

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype (hyperbole), Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

  • Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication:  The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources.  In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes there is now commercial interest as it provides good introductory information about quantum computing. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.

It’s not clear to me if the writers at The Economist were aware of D-Wave’s latest breakthrough at the time of writing but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Activity], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.

I encourage you to read the Economist article.

Two final comments. (1) The latest piece, prior to this one, about D-Wave was in a Feb. 6, 2015 posting about then new investment into the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme, at the University of Waterloo’s Institute of Quantum Computing in the context of an announcement about science media initiative Research2Reality.

More investment money for Canada’s D-Wave Systems (quantum computing)

A Feb. 2, 2015 news item on Nanotechnology Now features D-Wave Systems (located in the Vancouver region, Canada) and its recent funding bonanza of $29 million (CAD),

Harris & Harris Group, Inc. (Nasdaq:TINY), an investor in transformative companies enabled by disruptive science, notes the announcement by portfolio company, D-Wave Systems, Inc., that it has closed $29 million (CAD) in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million (CAD) raised in 2014. Harris & Harris Group’s total investment in D-Wave is approximately $5.8 million (USD). D-Wave’s announcement also includes highlights of 2014, a year of strong growth and advancement for D-Wave.

A Jan. 29, 2015 D-Wave news release provides more details about the new investment and D-Wave’s 2014 triumphs,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has closed $29 million in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million raised in 2014.

“The investment is a testament to the progress D-Wave continues to make as the leader in quantum computing systems,” said Vern Brownell, CEO of D-Wave. “The funding we received in 2014 will advance our quantum hardware and software development, as well as our work on leading edge applications of our systems. By making quantum computing available to more organizations, we’re driving our goal of finding solutions to the most complex optimization and machine learning applications in national defense, computing, research and finance.”

The funding follows a year of strong growth and advancement for D-Wave. Highlights include:

•    Significant progress made towards the release of the next D-Wave quantum system featuring a 1000 qubit processor, which is currently undergoing testing in D-Wave’s labs.
•    The company’s patent portfolio grew to over 150 issued patents worldwide, with 11 new U.S. patents being granted in 2014, covering aspects of D-Wave’s processor technology, systems and techniques for solving computational problems using D-Wave’s technology.
•    D-Wave Professional Services launched, providing quantum computing experts to collaborate directly with customers, and deliver training classes on the usage and programming of the D-Wave system to a number of national laboratories, businesses and universities.
•    Partnerships were established with DNA-SEQ and 1QBit, companies that are developing quantum software applications in the spheres of medicine and finance, respectively.
•    Research throughout the year continued to validate D-Wave’s work, including a study showing further evidence of quantum entanglement by D-Wave and USC [University of Southern California] scientists, published in Physical Review X this past May.

Since 2011, some of the most prestigious organizations in the world, including Lockheed Martin, NASA, Google, USC and the Universities Space Research Association (USRA), have partnered with D-Wave to use their quantum computing systems. In 2015, these partners will continue to work with the D-Wave computer, conducting pioneering research in machine learning, optimization, and space exploration.

D-Wave, which already employs over 120 people, plans to expand hiring with the additional funding. Key areas of growth include research, processor and systems development and software engineering.

Harris & Harris Group offers a description of D-Wave which mentions nanotechnology and hosts a couple of explanatory videos,

D-Wave Systems develops an adiabatic quantum computer (QC).

Status
Privately Held

The Market
Electronics – High Performance Computing

The Problem
Traditional or “classical computers” are constrained by the sequential character of data processing that makes the solving of non-polynomial (NP)-hard problems difficult or potentially impossible in reasonable timeframes. These types of computationally intense problems are commonly observed in software verifications, scheduling and logistics planning, integer programming, bioinformatics and financial portfolio optimization.

D-Wave’s Solution
D-Wave develops quantum computers that are capable of processing data using quantum mechanical properties of matter. This leverage of quantum mechanics enables the identification of solutions to some non-polynomial (NP)-hard problems in a reasonable timeframe, instead of the exponential time needed for any classical digital computer. D-Wave sold and installed its first quantum computing system to a commercial customer in 2011.
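
For context on how such problems are actually posed, quantum annealers are typically fed a QUBO (quadratic unconstrained binary optimization) or equivalent Ising formulation: minimize x^T Q x over binary variables x. The brute-force toy below just shows the formulation; it is not D-Wave’s software, and real instances are far too large to enumerate this way,

    from itertools import product

    def solve_qubo(Q):
        """Brute-force minimizer of x^T Q x over binary vectors x (toy sizes only)."""
        n = len(Q)
        best_x, best_val = None, float("inf")
        for x in product((0, 1), repeat=n):
            val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val

    # Toy QUBO that rewards setting exactly one of three binary variables.
    Q = [[-1, 2, 2],
         [ 0, -1, 2],
         [ 0, 0, -1]]
    print(solve_qubo(Q))   # ((0, 0, 1), -1): any single '1' is optimal here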

Nanotechnology Factor
To function properly, the D-Wave processor requires tight control and manipulation of quantum mechanical phenomena. This control and manipulation is achieved by creating integrated circuits based on Josephson junctions and other superconducting circuitry. By picking superconductors, D-Wave managed to combine quantum mechanical behavior with the macroscopic dimensions needed for high-yield design and manufacturing.

It seems D-Wave has made some research and funding strides since I last wrote about the company in a Jan. 19, 2012 posting, although there is no mention of quantum computer sales.

Robo Brain; a new robot learning project

Having covered the RoboEarth project (a European Union funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and, most recently, in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
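The “nodes and edges” description in the release maps nicely onto a weighted graph of (subject, relation, object) triples. Here is a minimal Python sketch of that idea, covering both the abstraction hierarchy (easy chair, chair, furniture) and the coffee-mug affordances mentioned above; the entities, relations and probability values are my own invention for illustration and are not Robo Brain’s actual code or schema.

```python
# Minimal sketch of a probabilistic knowledge base and a chain query,
# along the lines the news release describes. Entities, relations and
# probabilities are invented for illustration; this is not Robo Brain code.

# Each (subject, relation, object) edge carries a confidence value.
knowledge = {
    ("easy_chair", "is_a", "chair"): 0.95,
    ("chair", "is_a", "furniture"): 0.98,
    ("chair", "used_for", "sitting"): 0.90,
    ("stool", "used_for", "sitting"): 0.80,
    ("coffee_mug", "grasped_by", "handle"): 0.85,
    ("coffee_mug", "carried", "upright_when_full"): 0.75,
}

def match_chain(chain, threshold=0.6):
    """True if every edge in the query chain is present in the knowledge
    base with a confidence at or above the threshold."""
    return all(knowledge.get(edge, 0.0) >= threshold for edge in chain)

# Query: can the robot offer an easy chair to a human who wants to sit?
query = [("easy_chair", "is_a", "chair"), ("chair", "used_for", "sitting")]
print(match_chain(query))         # True at the default threshold
print(match_chain(query, 0.97))   # False: both edges fall below 0.97
```

Presumably the real system learns values like these from the images, videos and how-to documents it ingests rather than having them written in by hand.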

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language, but, if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.
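As an illustration only (the news release doesn’t show the actual format), here is one hypothetical way a natural-language sentence from a how-to document might be stored alongside a machine-readable structure:

```python
# Hypothetical example of mixing natural (human) language with a
# machine-readable structure; invented for illustration, not the
# project's actual representation.
entry = {
    "source_text": "Carry the mug upright when it is full of coffee.",
    "parsed": {
        "action": "carry",
        "object": "coffee_mug",
        "constraint": {"orientation": "upright", "when": "full"},
    },
}
print(entry["parsed"]["constraint"]["when"])  # prints "full"
```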

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Printing food, changing prostheses, and talking with Google (Larry Page) at TED 2014's Session 6: Wired

I’m covering two speakers and an interview from this session. First, Avi Reichental, CEO (Chief Executive Officer) of 3D Systems, from his TED biography (Note: A link has been removed),

At 3D Systems, Avi Reichental is helping to imagine a future where 3D scanning-and-printing is an everyday act, and food, clothing, objects are routinely output at home.

Lately, he’s been demo-ing the Cube, a tabletop 3D printer that can print a basketball-sized object, and the ChefJet, a food-grade machine that prints in sugar and chocolate. His company is also rolling out consumer-grade 3D scanning cameras that clip to a tablet to capture three-dimensional objects for printing out later. He’s an instructor at Singularity University (watch his 4-minute intro to 3D printing).

Reichental started by talking about his grandfather, a cobbler who died in the Holocaust and whom he had never met. Nonetheless, his grandfather inspired him to be a maker of things in a society where craftsmanship and crafting had atrophied until recently, with the rise of ‘maker’ culture and 3D printing.

There were a number of items on the stage, shoes, a cake, a guitar and more, all of which had been 3D printed. Reichental’s shoes had also been produced on a 3D printer. If I understand his dream properly, it is to enable everyone to make what they need more cheaply and better.

Next, Hugh Herr, bionics designer, from his TED biography,

Hugh Herr directs the Biomechatronics research group at the MIT Media Lab, where he is pioneering a new class of biohybrid smart prostheses and exoskeletons to improve the quality of life for thousands of people with physical challenges. A computer-controlled prosthesis called the Rheo Knee, for instance, is outfitted with a microprocessor that continually senses the joint’s position and the loads applied to the limb. A powered ankle-foot prosthesis called the BiOM emulates the action of a biological leg to create a natural gait, allowing amputees to walk with normal levels of speed and metabolism as if their legs were biological.

Herr is the founder and chief technology officer of BiOM Inc., which markets the BiOM as the first in a series of products that will emulate or even augment physiological function through electromechanical replacement. You can call it (as they do) “personal bionics.”

Herr walked onto the TED stage on his two bionic limbs. He not only researches and works in the field of bionics, he lives it. His name was mentioned in a previous presentation by David Sengeh, a 2014 TED Fellow (that presentation can be found in my March 17, 2014 posting).

Herr talked about biomimicry, i.e., following nature’s lead in design, but he also suggested that design is driving (affecting) nature. If I understand him rightly, he was referencing some of the work with proteins, ligands, etc., and the creation of devices that are not what we would consider biological or natural as we have tended to use those terms.

His talk contrasted somewhat with Reichental’s, as Herr wants to remove the artisanal approach to developing prosthetics and replace it with data-driven strategies. Herr covered the mechanical, the dynamic, and the electrical as applied to bionic limbs. I think the term prosthetic is being applied to the older, artisanal limbs as opposed to these mechanical, electrical, dynamic marvels known as bionic limbs.

The mechanical aspect has to do with figuring out how your specific limbs are formed and used and getting precise measurements (with robotic tools) because everyone is a little bit different. The dynamic aspect, also highly individual, is how your muscles work. For example, standing still, walking, etc. all require dynamic responses from your muscles. Finally, there’s the integration with the nervous system so you can feel your limb.

Herr showed a few videos, including one of a woman who lost part of her leg in last year’s Boston Marathon bombing (April 15, 2013). A ballroom dancer, she was invited by Herr to the stage so she could perform in front of the TED 2014 audience. She got a standing ovation.

In the midst of session 6, there was an interview conducted by Charlie Rose (US television presenter) with Larry Page, a co-founder of Google.

Very briefly, I was mildly relieved (although I’m not convinced) to hear that Page is devoted to the notion that search is important. I’ve been concerned about the Google search results I get. Those results seem less rich and interesting than they were a few years ago. I attribute the situation to the chase for advertising dollars and a decreasing interest in ‘search’ as the company expands into initiatives such as Google Glass and artificial intelligence, and pursues other interests distinct from what had been its core focus.

I didn’t find much else of interest. Larry Page wants to help people and he’s interested in artificial intelligence and transportation. His perspective seemed a bit simplistic (technology will solve our problems) but perhaps that was for the benefit of people like me. I suspect one of a speaker’s challenges at TED is finding the right level. Certainly, I’ve experienced difficulties with some of the more technical presentations.

One more observation: there was no mention of a current scandal at Google profiled in the April 2014 issue of Vanity Fair (by Vanessa Grigoriadis),

O.K., Glass: Make Google Eyes

The story behind Google co-founder Sergey Brin’s liaison with Google Glass marketing manager Amanda Rosenberg—and his split from his wife, genetic-testing entrepreneur Anne Wojcicki— has a decidedly futuristic edge. But, as Vanessa Grigoriadis reports, the drama leaves Silicon Valley debating emotional issues, from office romance to fear of mortality.

Given that Page agreed to be on the TED stage only in the last 10 days, this appearance seems like an attempt at damage control, especially with the mention of Brin, who had his picture taken with the telepresent Ed Snowden on Tuesday, March 18, 2014 at TED 2014.

Unintended consequences of reading science news online

University of Wisconsin-Madison researchers Dominique Brossard and Dietram Scheufele have written a cautionary piece for the AAAS’s (American Association for the Advancement of Science) magazine, Science, according to a Jan. 3, 2013 news item on ScienceDaily,

A science-inclined audience and wide array of communications tools make the Internet an excellent opportunity for scientists hoping to share their research with the world. But that opportunity is fraught with unintended consequences, according to a pair of University of Wisconsin-Madison life sciences communication professors.

Dominique Brossard and Dietram Scheufele, writing in a Perspectives piece for the journal Science, encourage scientists to join an effort to make sure the public receives full, accurate and unbiased information on science and technology.

“This is an opportunity to promote interest in science — especially basic research, fundamental science — but, on the other hand, we could be missing the boat,” Brossard says. “Even our most well-intended effort could backfire, because we don’t understand the ways these same tools can work against us.”

The Jan. 3, 2013 University of Wisconsin-Madison news release by Chris Barncard (which originated the news item) notes,

Recent research by Brossard and Scheufele has described the way the Internet may be narrowing public discourse, and new work shows that a staple of online news presentation — the comments section — and other ubiquitous means to provide endorsement or feedback can color the opinions of readers of even the most neutral science stories.

Online news sources pare down discussion or limit visibility of some information in several ways, according to Brossard and Scheufele.

Many news sites use the popularity of stories or subjects (measured by the numbers of clicks they receive, or the rate at which users share that content with others, or other metrics) to guide the presentation of material.

The search engine Google offers users suggested search terms as they make requests, offering up “nanotechnology in medicine,” for example, to those who begin typing “nanotechnology” in a search box. Users often avail themselves of the list of suggestions, making certain searches more popular, which in turn makes those search terms even more likely to appear as suggestions.
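The self-reinforcing loop described above (popular suggestions get chosen, which makes them even more likely to be suggested) is easy to simulate. The toy simulation below is mine, not the researchers’; the terms, starting counts and probabilities are invented purely to show how quickly attention can concentrate on a few suggestions.

```python
# Toy simulation of the search-suggestion feedback loop described above:
# suggestions are shown in proportion to past popularity, users tend to
# pick what is suggested, and popularity concentrates on a few terms.
# Terms, counts and probabilities are invented for illustration.
import random

random.seed(1)
counts = {"nanotechnology in medicine": 100,
          "nanotechnology risks": 90,
          "nanotechnology in food": 80}

for _ in range(5000):
    terms = list(counts)
    # Probability a term is suggested (and clicked) is proportional
    # to how often it has been searched before.
    weights = [counts[t] for t in terms]
    picked = random.choices(terms, weights=weights, k=1)[0]
    counts[picked] += 1

total = sum(counts.values())
for term, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {n / total:.1%} of searches")
```

Because each click makes a term more likely to be suggested again, the final proportions are largely locked in by the earliest clicks, which is exactly the kind of narrowing feedback the researchers describe.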

Brossard and Scheufele have published an earlier study about the ‘narrowing’ effects of search engines such as Google, using the example of the topic ‘nanotechnology’, as per my May 19, 2010 posting. The researchers appear to be building on this earlier work,

The consequences become more daunting for the researchers as Brossard and Scheufele uncover more surprising effects of Web 2.0.

In their newest study, they show that independent of the content of an article about a new technological development, the tone of comments posted by other readers can make a significant difference in the way new readers feel about the article’s subject. The less civil the accompanying comments, the more risk readers attributed to the research described in the news story.

“The day of reading a story and then turning the page to read another is over,” Scheufele says. “Now each story is surrounded by numbers of Facebook likes and tweets and comments that color the way readers interpret even truly unbiased information. This will produce more and more unintended effects on readers, and unless we understand what those are and even capitalize on them, they will just cause more and more problems.”

If even some of the for-profit media world and advocacy organizations are approaching the digital landscape from a marketing perspective, Brossard and Scheufele argue, scientists need to turn to more empirical communications research and engage in active discussions across disciplines of how to most effectively reach large audiences.

“It’s not because there is not decent science writing out there. We know all kinds of excellent writers and sources,” Brossard says. “But can people be certain that those are the sites they will find when they search for information? That is not clear.”

It’s not about preparing for the future. It’s about catching up to the present. And the present, Scheufele says, includes scientific subjects — think fracking, or synthetic biology — that need debate and input from the public.

Here’s a citation and link for the Science article,

Science, New Media, and the Public by Dominique Brossard and Dietram A. Scheufele. Science, 4 January 2013: Vol. 339, no. 6115, pp. 40–41. DOI: 10.1126/science.1232329

This article is behind a paywall.