Tag Archives: Google

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016, and as part of the publicity effort, the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer of the US television series Person of Interest, a programme based on the concept of a supercomputer with intelligence, personality, and the ability to monitor the population 24/7.

Cientifica’s latest smart textiles and wearable electronics report

After publishing a report on wearable technology in May 2016 (see my June 2, 2016 posting), Cientifica has published another wearable technology report, this one titled Smart Textiles and Wearables: Markets, Applications and Technologies. Here’s more about the latest report from the report order page,

“Smart Textiles and Wearables: Markets, Applications and Technologies” examines the markets for textile based wearable technologies, the companies producing them and the enabling technologies. This is creating a 4th industrial revolution for the textiles and fashion industry worth over $130 billion by 2025.

Advances in fields such as nanotechnology, organic electronics (also known as plastic electronics) and conducting polymers are creating a range of textile-based technologies with the ability to sense and react to the world around them. This includes monitoring biometric data such as heart rate, environmental factors such as temperature, and the presence of toxic gases, producing real-time feedback in the form of electrical stimuli, haptic feedback or changes in color.

The report identifies three distinct generations of textile wearable technologies.

First generation is where a sensor is attached to apparel and is the approach currently taken by major sportswear brands such as Adidas, Nike and Under Armour.
Second generation products embed the sensor in the garment, as demonstrated by products from Samsung, Alphabet, Ralph Lauren and Flex.
In third generation wearables the garment is the sensor, and a growing number of companies including AdvanPro, Tamicare and BeBop Sensors are making rapid progress in creating pressure, strain and temperature sensors.

Third generation wearables represent a significant opportunity for new and established textile companies to add significant value without having to directly compete with Apple, Samsung and Intel.

The report predicts that the key growth areas will initially be sports and wellbeing, followed by medical applications for patient monitoring. Technical textiles, fashion and entertainment will also be significant applications, with the total market expected to rise to over $130 billion by 2025 with triple digit compound annual growth rates across many applications.

The rise of textile wearables also represents a significant opportunity for manufacturers of the advanced materials used in their manufacture. Toray, Panasonic, Covestro, DuPont and Toyobo are already supplying the necessary materials, while researchers are creating sensing and energy storage technologies, from flexible batteries to graphene supercapacitors, which will power tomorrow’s wearables. The report details the latest advances and their applications.

This report is based on an extensive research study of the wearables and smart textile markets backed with over a decade of experience in identifying, predicting and sizing markets for nanotechnologies and smart textiles. Detailed market figures are given from 2016-2025, along with an analysis of the key opportunities, and illustrated with 139 figures and 6 tables.

The September 2016 report is organized differently and has a somewhat different focus from the report published in May 2016. Not having read either report, I’m guessing that while there might be a little repetition, they are best considered companion volumes.

Here’s more from the September 2016 report’s table of contents which you can download from the order page (Note: The formatting has been changed),

SMART TEXTILES AND WEARABLES:
MARKETS, APPLICATIONS AND
TECHNOLOGIES

Contents  1
List of Tables  4
List of Figures  4
Introduction  8
How to Use This Report  8
Wearable Technologies and the 4th Industrial Revolution  9
The Evolution of Wearable Technologies  10
Defining Smart Textiles  15
Factors Affecting The Adoption of Smart Textiles for Wearables  18
Cost  18
Accuracy  18
On Shoring  19
Power management  19
Security and Privacy  20
Markets  21
Total Market Growth and CAGR  21
Market Growth By Application  21
Adding Value To Textiles Through Technology  27
How Nanomaterials Add Functionality and Value  31
Business Models  33
Applications  35
Sports and Wellbeing  35
1st Generation Technologies  35
Under Armour Healthbox Wearables  35
Adidas MiCoach  36
Sensoria  36
EMPA’s Long Term Research  39
2nd Generation Technologies  39
Google’s Project Jacquard  39
Samsung Creative Lab  43
Microsoft Collaborations  44
Intel Systems on a Chip  44
Flex (Formerly Flextronics) and MAS Holdings  45
Jiobit  46
Asensei Personal Trainer  47
OmSignal Smart Clothing  48
Ralph Lauren PoloTech  49
Hexoskin Performance Management  50
Jabil Circuit Textile Heart Monitoring  51
Stretch Sense Sensors  52
NTT Data and Toray  54
Goldwin Inc. and DoCoMo  55
SupaSpot Inc Smart Sensors  55
Wearable Experiments and Brand Marketing  56
Wearable Life Sciences Antelope  57
Textronics NuMetrex  59
3rd Generation Technologies  60
AdvanPro Pressure Sensing Shoes  60
Tamicare 3D printed Wearables with Integrated Sensors  62
AiQ Smart Clothing Stainless Steel Yarns  64
Flex Printed Inks And Conductive Yarns  66
Sensing Tech Conductive Inks  67
EHO Textiles Body Motion Monitoring  68
Bebop Sensors Washable E-Ink Sensors  70
Fraunhofer Institute for Silicate Research Piezoelectric Polymer Sensors  71
CLIM8 GEAR Heated Textiles  74
VTT Smart Clothing Human Thermal Model  74
ATTACH (Adaptive Textiles Technology with Active Cooling and Heating) 76
Energy Storage and Generation  78
Intelligent Textiles Military Uniforms  78
BAE Systems Broadsword Spine  79
Stretchable Batteries  80
LG Chem Cable Batteries  81
Supercapacitors  83
Swinburne Graphene Supercapacitors  83
MIT Niobium Nanowire Supercapacitors  83
Energy Harvesting  86
Kinetic  86
StretchSense Energy Harvesting Kit  86
NASA Environmental Sensing Fibers  86
Solar  87
Powertextiles  88
Sphelar Power Corp Solar Textiles  88
Ohmatex and Powerweave  89
Fashion  89
1st Generation Technologies  92
Cute Circuit LED Couture  92
MAKEFASHION LED Couture  94
2nd Generation Technologies  94
Covestro Luminous Clothing  94
3rd Generation Technologies  96
The Unseen Temperature Sensitive Dyes  96
Entertainment  98
Wearable Experiments Marketing  98
Key Technologies 100
Circuitry  100
Conductive Inks for Fabrics  100
Conductive Ink For Printing On Stretchable Fabrics  100
Creative Materials Conductive Inks And Adhesives  100
Dupont Stretchable Electronic Inks  101
Aluminium Inks From Alink Co  101
Conductive Fibres  102
Circuitex Silver Coated Nylon  102
Textronics Yarns and Fibres  102
Novonic Elastic Conductive Yarn  103
Copper Coated Polyacrylonitrile (PAN) Fibres  103
Printed electronics  105
Covestro TPU Films for Flexible Circuits  105
Sensors  107
Electrical  107
Hitoe  107
Cocomi  108
Panasonic Polymer Resin  109
Cardiac Monitoring  110
Mechanical  113
Strain  113
Textile-Based Weft Knitted Strain Sensors  113
Chain Mail Fabric for Smart Textiles  113
Nano-Treatment for Conductive Fiber/Sensors 115
Piezoceramic materials  116
Graphene-Based Woven Fabric  117
Pressure Sensing  117
LG Innotek Flexible Textile Pressure Sensors  117
Hong Kong Polytechnic University Pressure Sensing Fibers  119
Conductive Polymer Composite Coatings  122
Printed Textile Sensors To Track Movement  125
Environment  127
Photochromic Textiles  127
Temperature  127
Sefar PowerSens  127
Gasses & Chemicals  127
Textile Gas Sensors  127
Energy  130
Storage  130
Graphene Supercapacitors  130
Niobium Nanowire Supercapacitors  130
Stretchy supercapacitors  132
Energy Generation  133
StretchSense Energy Harvesting Kit  133
Piezoelectric Or Thermoelectric Coated Fibres  134
Optical  137
Light Emitting  137
University of Manchester Electroluminescent Inks and Yarns 137
Polyera Wove  138
Companies Mentioned  141
List of Tables
Table 1 CAGR by application  22
Table 2 Value of market by application 2016-25 (millions USD)  24
Table 3 % market share by application  26
Table 4 CAGR 2016-25 by application  26
Table 5 Technology-Enabled Market Growth in Textile by Sector (2016-22) 28
Table 6 Value of nanomaterials by sector 2016-22 ($ Millions)  33
List of Figures
Figure 1 The 4th Industrial Revolution (World Economic Forum)  9
Figure 2 Block Diagram of typical MEMS digital output motion sensor: ultra low-power high performance 3-axis “femto” accelerometer used in fitness tracking devices.  11
Figure 3 Interior of Fitbit Flex device (from iFixit)  11
Figure 4 Internal layout of Fitbit Flex. Red is the main CPU, orange is the BTLE chip, blue is a charger, yellow is the accelerometer (from iFixit)  11
Figure 5 Intel’s Curie processor stretches the definition of ‘wearable’  12
Figure 6 Typical Textile Based Wearable System Components  13
Figure 7 The Chromat Aeros Sports Bra “powered by Intel, inspired by wind, air and flight.”  14
Figure 8 The Evolution of Smart textiles  15
Figure 9 Goldwin’s C2fit IN-pulse sportswear using Toray’s Hitoe  16
Figure 10 Sensoglove reads grip pressure for golfers  16
Figure 11 Textile Based Wearables Growth 2016-25(USD Millions)  21
Figure 12 Total market for textile based wearables 2016-25 (USD Millions)  22
Figure 13 Health and Sports Market Size 2016-20 (USD Millions)  23
Figure 14 Health and Sports Market Size 2016-25 (USD Millions)  23
Figure 15 Critical steps for obtaining FDA medical device approval  25
Figure 16 Market split between wellbeing and medical 2016-25  26
Figure 17 Current World Textile Market by Sector (2016)  27
Figure 18 The Global Textile Market By Sector ($ Millions)  27
Figure 19 Compound Annual Growth Rates (CAGR) by Sector (2016-25)  28
Figure 20 The Global Textile Market in 2022  29
Figure 21 The Global Textile Market in 2025  30
Figure 22 Textile Market Evolution (2012-2025)  30
Figure 23 Total Value of Nanomaterials in Textiles 2012-2022 ($ Millions)  31
Figure 24 Value of Nanomaterials in Textiles by Sector 2016-2025 ($ Millions) 32
Figure 25 Adidas miCoach Connect Heart Rate Monitor  36
Figure 26 Sensoria’s Hear[t] Rate Monitoring Garments . 37
Figure 27 Flexible components used in Google’s Project Jacquard  40
Figure 28 Google and Levi’s Smart Jacket  41
Figure 29 Embedded electronics Google’s Project Jacquard  42
Figure 30 Samsung’s WELT ‘smart’ belt  43
Figure 31 Samsung Body Compass at CES16  44
Figure 32 Lumo Run washable motion sensor  45
Figure 33 OMSignal’s Smart Bra  49
Figure 34 PoloTech Shirt from Ralph Lauren  50
Figure 35 Hexoskin Data Acquisition and Processing  51
Figure 36 Peak+™ Hear[t] Rate Monitoring Garment  52
Figure 37 StretchSense CEO Ben O’Brien, with a fabric stretch sensor  53
Figure 38 C3fit Pulse from Goldwin Inc  55
Figure 39 The Antelope Tank-Top  58
Figure 40 Sportswear with integrated sensors from Textronix  60
Figure 41 AdvanPro’s pressure sensing insoles  61
Figure 42 AdvanPro’s pressure sensing textile  62
Figure 43 Tamicare 3D Printing Sensors and Apparel  63
Figure 44 Smart clothing using stainless steel yarns and textile sensors from AiQ  65
Figure 45 EHO Smart Sock  69
Figure 46 BeBop Smart Car Seat Sensor  71
Figure 47 Non-transparent printed sensors from Fraunhofer ISC  73
Figure 48 Clim8 Intelligent Heat Regulating Shirt  74
Figure 49 Temperature regulating smart fabric printed at UC San Diego  76
Figure 50 Intelligent Textiles Ltd smart uniform  79
Figure 51 BAE Systems Broadsword Spine  80
Figure 52 LG Chem cable-shaped lithium-ion battery powers an LED display even when twisted and strained  81
Figure 53 Supercapacitor yarn made of niobium nanowires  84
Figure 54 Sphelar Textile  89
Figure 55 Sphelar Textile Solar Cells  89
Figure 56 Katy Perry wears Cute Circuit in 2010  91
Figure 57 Cute Circuit K Dress  93
Figure 58 MAKEFASHION runway at the Brother’s “Back to Business” conference, Nashville 2016  94
Figure 59 Covestro material with LEDs are positioned on formable films made from thermoplastic polyurethane (TPU).  95
Figure 60 Unseen headpiece, made of 4000 conductive Swarovski stones, changes color to correspond with localized brain activity  96
Figure 61 Eighthsense a coded couture piece.  97
Figure 62 Durex Fundawear  98
Figure 63 Printed fabric sensors from the University of Tokyo  100
Figure 64 Tony Kanaan’s shirt with electrically conductive nano-fibers  107
Figure 65 Panasonic stretchable resin technology  109
Figure 66 Nanoflex monitoring system  111
Figure 67 Knitted strain sensors  113
Figure 68 Chain Mail Fabric for Smart Textiles  114
Figure 69 Electroplated Fabric  115
Figure 70 LG Innotek flexible textile pressure sensors  118
Figure 71 Smart Footwear installed with fabric sensors. (Credit: Image courtesy of The Hong Kong Polytechnic University)  120
Figure 72 SOFTCEPTOR™ textile strain sensors  122
Figure 73 conductive polymer composite coating for pressure sensing  123
Figure 74 Fraunhofer ISC_ printed sensor  125
Figure 75 The graphene-coated yarn sensor. (Image: ETRI)  128
Figure 76 Supercapacitor yarn made of niobium nanowires  131
Figure 77 StretchSense Energy Harvesting Kit  134
Figure 78 Energy harvesting textiles at the University of Southampton  135
Figure 79 Polyera Wove Flexible Screen  139

If you compare that with the table of contents for the May 2016 report in my June 2, 2016 posting, you can see the difference.

Here’s one last tidbit, a Sept. 15, 2016 news item on phys.org highlights another wearable technology report,

Wearable tech, which was seeing sizzling sales growth a year ago [2015], is cooling this year amid consumer hesitation over new devices, a survey showed Thursday [Sept. 15, 2016].

The research firm IDC said it expects global sales of wearables to grow some 29.4 percent to some 103 million units in 2016.

That follows 171 percent growth in 2015, fueled by the launch of the Apple Watch and a variety of fitness bands.

“It is increasingly becoming more obvious that consumers are not willing to deal with technical pain points that have to date been associated with many wearable devices,” said IDC analyst Ryan Reith.

So-called basic wearables—including fitness bands and other devices that do not run third party applications—will make up the lion’s share of the market with some 80.7 million units shipped this year, according to IDC.
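
As a quick sanity check on the quoted IDC figures, here is a back-of-envelope calculation of the prior-year unit volumes those growth rates imply (my own arithmetic, not numbers published by IDC):

```python
# Back-of-envelope check of the quoted IDC forecast (not IDC's own figures)
units_2016 = 103.0      # millions of units, 2016 forecast
growth_2016 = 0.294     # the 29.4% year-over-year growth quoted above

implied_2015 = units_2016 / (1 + growth_2016)
print(round(implied_2015, 1))   # 79.6 -> roughly 80 million units in 2015

growth_2015 = 1.71      # the 171% growth reported for 2015
implied_2014 = implied_2015 / (1 + growth_2015)
print(round(implied_2014, 1))   # 29.4 -> roughly 29 million units in 2014
```

In other words, the market nearly tripled into 2015 and is still growing into 2016, just far more slowly.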

According to IDC, the short term does not promise the explosive growth of the previous year, but new generations of wearable technology, in the view of both IDC and Cientifica, offer considerable promise for the market.

Connecting chaos and entanglement

Researchers seem to have stumbled across a link between classical and quantum physics. A July 12, 2016 University of California at Santa Barbara (UCSB) news release (also on EurekAlert) by Sonia Fernandez provides a description of both classical and quantum physics, as well as the research that connects the two,

Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature.

“It’s kind of surprising because chaos is this totally classical concept — there’s no idea of chaos in a quantum system,” said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. “Similarly, there’s no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related.”

Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years’ worth of study including Newton’s laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory — the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions — and vacation plans — anywhere on the globe.

At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another).

And so began the continuing search for connections between the two fields.

All systems are fundamentally quantum systems, according [to] Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited.

Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them.

“The underlying physics is chaos, essentially,” he said. The molecules coming to rest — at least on the macroscopic level — is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion.

To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer. Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system.

The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics — the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map.

“There’s a very clear connection between entanglement and chaos in these two pictures,” said Neill. “And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization.

“What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it’s going to thermalize,” added Neill, referring to the quantum-level equilibration. “And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle.”

The study’s findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing — such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry — a quantum processor optimized for such calculations will be a very powerful tool.

“It means we can study things that are completely impossible to study right now, once we get to bigger systems,” said Neill.
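
To make the “entanglement entropy” the researchers map a little more concrete, here is a toy two-qubit calculation in NumPy (a textbook sketch of my own, not the paper’s three-qubit experiment): tracing out one qubit of an unentangled product state gives zero entropy, while a maximally entangled Bell state gives one full bit.

```python
import numpy as np

def entanglement_entropy(state, dim_a=2, dim_b=2):
    """Von Neumann entropy (in bits) of subsystem A of a pure state."""
    psi = state.reshape(dim_a, dim_b)        # split the state into A and B
    rho_a = psi @ psi.conj().T               # reduced density matrix of A
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]             # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# Product state |00>: the two qubits are completely independent
product = np.kron([1, 0], [1, 0]).astype(complex)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

print(entanglement_entropy(product))  # 0.0 bits, no entanglement
print(entanglement_entropy(bell))     # 1.0 bit, maximal entanglement
```

The experiment effectively tracks a quantity like this for each qubit over time, which is how the regions of high and low entanglement in the quantum “map” are identified.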

Experimental link between quantum entanglement (left) and classical chaos (right) found using a small quantum computer. Photo Credit: Courtesy Image (Courtesy: UCSB)

Here’s a link to and a citation for the paper,

Ergodic dynamics and thermalization in an isolated quantum system by C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, & J. M. Martinis. Nature Physics (2016). DOI: 10.1038/nphys3830. Published online July 11, 2016.

This paper is behind a paywall.

Google Arts & Culture: an app for culture vultures

In its drive to take over every aspect of our lives in the most charming, helpful, and delightful ways possible, Google has developed its Arts & Culture app.

Here’s more from a July 19, 2016 article by John Brownlee for Fast Company (Note: Links have been removed),

… Google has just unveiled a new app that makes it as easy to find the opening times of your local museum as it is to figure out who painted that bright purple Impressionist masterpiece you saw five years ago at the Louvre.

It’s called Google Arts & Culture, and it’s a tool for discovering art “from more than a thousand museums across 70 countries,” Google writes on its blog. More than just an online display of art, though, it encourages viewers to parse the works and gather insight into the visual culture we rarely encounter outside the rarified world of brick-and-mortar museums.

For instance, you can browse all of Van Gogh’s paintings chronologically to see how much more vibrant his work became over time. Or you can sort Monet’s paintings by color for a glimpse at his nuanced use of gray.

You can also read daily stories about subjects such as stolen Nazi artworks or Bruegel’s Tower of Babel. …

A July 19, 2016 post announcing the Arts & Culture app on the Google blog by Duncan Osborn provides more details,

Just as the world’s precious artworks and monuments need a touch-up to look their best, the home we’ve built to host the world’s cultural treasures online needs a lick of paint every now and then. We’re ready to pull off the dust sheets and introduce the new Google Arts & Culture website and app, by the Google Cultural Institute. The app lets you explore anything from cats in art since 200 BCE to the color red in Abstract Expressionism, and everything in between.

• Search for anything, from shoes to all things gold
• Scroll through art by time—see how Van Gogh’s works went from gloomy to vivid
• Browse by color and learn about Monet’s 50 shades of gray
• Find a new fascinating story to discover every day—today, it’s nine powerful men in heels

You can also use this app when visiting a real-life museum. For the interested, you can download it for iOS and Android.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1956, the psychologist Frank Rosenblatt of the New York State Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.
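Rosenblatt's single-layer unit is simple enough to sketch in a few lines of modern Python. The toy task (logical OR), the learning rate, and the error-correction update below are illustrative textbook choices, not a reconstruction of the original 1950s hardware:

```python
# A minimal sketch of a Rosenblatt-style perceptron: one layer of weights,
# a hard threshold, and the classic error-correction update rule.
# The toy task (logical OR) and learning rate are illustrative choices.

def predict(weights, bias, inputs):
    # Weighted sum followed by a hard threshold: the unit "fires" (1) or not (0).
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge the weights toward the correct answer on each mistake.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical OR is linearly separable, so a single layer suffices.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # -> [0, 1, 1, 1]
```

As the researchers note below, a single layer like this can only separate simple patterns; it took multilayer networks to go further.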

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
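The layer mechanics Marchand-Maillet describes (weighted sum, threshold, outputs fed to the next layer) can be sketched as follows; the network shape and weight values here are arbitrary illustrative choices:

```python
# A sketch of the mechanics described above: each neurone computes a
# weighted sum of its inputs and fires only if that sum exceeds a
# threshold; one layer's outputs become the next layer's inputs.
# Layer sizes and weight values are arbitrary illustrative choices.

def fire(total, threshold=0.0):
    # The neurone fires only if its weighted sum exceeds the threshold.
    return 1.0 if total > threshold else 0.0

def layer(inputs, weight_rows):
    # One output per neurone: weighted sum of all inputs, then the threshold.
    return [fire(sum(w * x for w, x in zip(row, inputs))) for row in weight_rows]

def forward(inputs, network):
    # The results of one layer become the input signal to the following layer.
    for weight_rows in network:
        inputs = layer(inputs, weight_rows)
    return inputs

# Two tiny layers: 3 inputs -> 2 hidden neurones -> 1 output neurone.
network = [
    [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]],  # hidden layer: 2 neurones, 3 weights each
    [[1.0, 1.0]],        # output layer: 1 neurone, 2 weights
]
print(forward([1.0, 0.0, 1.0], network))  # -> [1.0]
```

Real deep learning systems replace the hard threshold with smooth activation functions and learn the weights from data, but the layered structure is the same.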

Video games to the rescue

For decades, the frontier of computing held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with an unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.
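The ‘boat’/‘float’ example gives a feel for why recurrence matters: by the time the network reaches ‘oat’, its state still carries a trace of the letters that came before. A deliberately crude one-neurone sketch (fixed, hand-picked weights rather than the learned gates of a real LSTM) shows the idea:

```python
import math

def final_state(sequence, w_in=0.7, w_rec=0.5):
    # One-neurone recurrent network: each step mixes the current input with
    # the previous state, so earlier characters influence later steps.
    # The weights are fixed, hand-picked values, not learned LSTM gates.
    state = 0.0
    for ch in sequence:
        x = (ord(ch) - ord('a') + 1) / 26.0  # crude encoding of 'a'..'z'
        state = math.tanh(w_in * x + w_rec * state)
    return state

# Same 'oat' suffix, different prefixes -> different final states:
# the network "remembers" whether it heard 'b' or 'fl' first.
print(final_state("boat") != final_state("float"))  # -> True
```

An LSTM adds gating machinery so the network can learn *what* to remember and for how long, but the core trick is the same: the state loops back in at every step.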

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Science-themed scriptwriting competition for Google (call for submissions)

David Bruggeman writes about a Google-sponsored scriptwriting competition in an April 28, 2016 posting on his Pasco Phronesis blog (Note: Links have been removed),

At the Tribeca Film Festival last week [the festival ran from April 13 – 24, 2016] Google announced that its CS Education in Media Program is partnering with the website The Black List for a fellowship competition to support the image of computer science and computer scientists in media (H/T STEMDaily).  The Black List is a screenwriting site known for hosting the best unproduced screenplays in Hollywood.

The fellowship could award up to $15,000 for as many as three scripts (one film script and two episodic television pilots).  The writers would use the money to support their work on new materials for six months.  At the end of that period the writer(s) would present that work to Google along with a summary of how the grant helped advance that work and/or affected their career.

Here’s more about the competition from The Black list website’s The Google Computer Science Education in Media Fellowship Call for Submissions webpage,

The Black List is pleased to partner with Google’s Computer Science Education in Media program to offer financial grants in support of the development of three scripts with a focus on changing the image in popular culture of computer science and computer scientists.

REQUIREMENTS

  • The candidate must host a script on www.blcklst.com for at least one week during the opt-in period.
  • Such script must be original to the candidate.
  • The candidate must be competent to contract.
  • If selected for the fellowship, writers must develop a feature screenplay or episodic pilot that changes the image of computer science or computer scientists, particularly as it applies to women and minorities, in popular culture.
  • Further, selected writers must agree that, six months following receipt of the fellowship, they will provide a designated representative of Google with a sample of his/her new work along with a report addressing how the grant has been used to advance his/her work and/or impacted his/her career.

SELECTION PROCESS

Beginning April 20, 2016, users of the Black List website can opt into consideration for this fellowship.

On July 15 [2016], the Black List will short list ten writers based on all data available on the Black List website about their opted in feature screenplays and teleplays.

These ten short listed candidates will be asked to submit one-page biographies, which will be sent to Google along with the screenplays/teleplays.

Google will review these 10 scripts and choose the Fellowship recipients. Google reserves the right to grant no fellowships if, in Google’s opinion, no entry is of sufficient merit.

DEADLINES OF NOTE (ALL TIMES 11:59 PM PT)

Evaluation purchase deadline* June 15, 2016

Opt in deadline July 15, 2016

* In order for new script evaluations to guarantee consideration for this opportunity, they must be purchased by midnight on the Evaluation deadline.

ADDITIONAL INFORMATION ABOUT GOOGLE’S COMPUTER SCIENCE EDUCATION IN MEDIA PROGRAM

Why is Google working with Hollywood? 

Google aims to inspire young people around the world not just to use technology, but to create it.  To do so, we need more students pursuing an education in CS, particularly girls and minorities, who have historically been underrepresented in the field. Google wants to prepare the next generation for the workplace of the future, and expand access to CS education that engages and retains students from all backgrounds.

  • Moreover, Google’s research shows that perceptions of CS and computer scientists are primary drivers that motivate girls to pursue CS. “If you can’t see it, you can’t be it,” as our friend Geena Davis notes.
  • Google’s hope is that by dispelling stereotypes and identifying positive portrayals of women in tech it can do for CS what CSI did for the field of forensic science, changing its gender make-up and increasing its appeal to a wider audience.
  • Media is part of the ecosystem that needs to change in conjunction with the other areas of work where Google has invested including increasing access to curriculum, non-profit grants, and policy support. If we don’t address the perceptions piece for both young people and adults through mainstream media, we run the risk of undermining our other efforts in CS education.

Background stats on perceptions of CS: 

Google’s research shows that perceptions of careers in computer science really matter.  Girls who feel that television portrays programmers negatively or who don’t see other students like them taking CS are significantly less likely to get into computing. Interestingly, girls who want a career with social impact are also less likely to go into CS.

Google conducted a research study to identify the factors that most influence girls to study computer science, and the second most important category of factors was Career Perceptions.

  • Girls who felt that television portrays programmers in a negative light were less likely to pursue CS.
  • If a girl didn’t see the right social crowd in a class — that is, if there weren’t enough students like her — she was less likely to go into CS.
  • Girls who want careers with social impact are less likely to go into CS. (It’s clear we need to do a better job of showing how CS can be used to develop solutions to some of the world’s most challenging problems.)
  • Perception accounts for 27% of the decision making for girls to pursue CS. The #1 factor is parent/adult encouragement, which is also influenced by media.

Stats on representation in media:

  • Blacks & Hispanics are already underrepresented on-screen at 14.1% and 4.9%, respectively.
  • Combine this with lack of / misrepresentation of STEM/CS characters in family movies and prime TV, you get STEM characters < 18% women; CS characters <13%.

Proven Success with other Fields:

  • Forensic Science – CSI increased the number of forensic science majors in nationally recognized programs by at least 50% in 5 years – a majority being women.
  • Law – UCLA claimed a 16.5% increase in law school applicants 1 year after LA Law premiered.  Justice Sotomayor credits her interest in law from watching Perry Mason at 10 years old.
 …

FREQUENTLY ASKED QUESTIONS

FAQ & Answers

Go here to register (there is a cost associated with registering, but there don’t appear to be any citizenship or residency restrictions, e.g., must be a US citizen or must reside in the US). Good luck!

A dress that lights up according to reactions on Twitter

I don’t usually have an opportunity to write about red carpet events but the recent Met Gala, also known as the Costume Institute Gala and the Met Ball, which took place on the evening of May 2, 2016 in New York, featured a ‘cognitive’ dress. Here’s more from a May 2, 2016 article by Emma Spedding for The Telegraph (UK),

“Tech white tie” was the dress code for last night’s Met Gala, inspired by the theme of this year’s Met fashion exhibition, ‘Manus x Machina: Fashion in the Age of Technology’. While many of the A-list attendees interpreted this to mean ‘silver sequins’, several rose to the challenge with beautiful, future-gazing gowns which give a glimpse of how our clothes might behave in the future.

Supermodel Karolina Kurkova wore a ‘cognitive’ Marchesa gown that was created in collaboration with technology company IBM. The two companies came together following a survey conducted by IBM which found that Marchesa was one of the favourite designers of its employees. The dress is created using a conductive fabric chosen from 40,000 options and embedded with 150 LED lights which change colour in reaction to the sentiments of Kurkova’s Twitter followers.

A May 2, 2016 article by Rose Pastore for Fast Company provides a little more technical detail and some insight into why Marchesa partnered with IBM,

At the Met Gala in Manhattan tonight [May 2, 2016], one model will be wearing a “cognitive dress”: A gown, designed by fashion house Marchesa, that will shift in color based on input from IBM’s Watson supercomputer. The dress features gauzy white roses, each embedded with an LED that will display different colors depending on the general sentiment of tweets about the Met Gala. The algorithm powering the dress relies on Watson Color Theory, which links emotions to colors, and on the Watson Tone Analyzer, a service that can detect emotion in text.

In addition to the color-changing cognitive dress, Marchesa designers are using Watson to get new color palette ideas. The designers choose from a list of emotions and concepts—things like romance, excitement, and power—and Watson recommends a palette of colors it associates with those sentiments.
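IBM has not published the dress’s actual code, so the sketch below is purely illustrative: the emotion labels, colours, and scores are invented stand-ins for the Watson Tone Analyzer’s output, just to show the shape of a tweet-sentiment-to-LED-colour pipeline like the one described:

```python
# Purely illustrative: the emotion labels, colour choices, and scores here
# are invented (IBM's actual mapping is unpublished). The shape of the
# pipeline is the point: tone scores in, one LED colour out.

EMOTION_COLOURS = {
    "joy": (255, 200, 0),        # warm yellow
    "excitement": (255, 0, 80),  # hot pink
    "calm": (0, 120, 255),       # cool blue
}

def dominant_emotion(scores):
    # scores: per-emotion confidences, as a tone analyser might return.
    return max(scores, key=scores.get)

def led_colour(scores):
    # Map the strongest detected emotion to an RGB value for the LEDs.
    return EMOTION_COLOURS[dominant_emotion(scores)]

tweet_scores = {"joy": 0.7, "excitement": 0.2, "calm": 0.1}
print(led_colour(tweet_scores))  # -> (255, 200, 0)
```
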

An April 29, 2016 posting by Ann Rubin for IBM’s Think blog discusses the history of technology/art partnerships and provides more technical detail (yes!) about this one,

Throughout history, we’ve seen traces of technology enabling humans to create – from Da Vinci’s use of the camera obscura to Caravaggio’s work with mirrors and lenses. Today, cognitive systems like Watson are giving artists, designers and creative minds the tools to make sense of the world in ground-breaking ways, opening up new avenues for humans to approach creative thinking.

The dress’ cognitive creation relies on a mix of Watson APIs, cognitive tools from IBM Research, solutions from Watson developer partner Inno360 and the creative vision from the Marchesa design team. In advance of it making its exciting debut on the red carpet, we’d like to take you on the journey of how man and machine collaborated to create this special dress.

Rooted in the belief that color and images can indicate moods and send messages, Marchesa first selected five key human emotions – joy, passion, excitement, encouragement and curiosity – that they wanted the dress to convey. IBM Research then fed this data into the cognitive color design tool, a groundbreaking project out of IBM Research-Yorktown that understands the psychological effects of colors, the interrelationships between emotions, and image aesthetics.

This process also involved feeding Watson hundreds of images associated with Marchesa dresses in order to understand and learn the brand’s color palette. Ultimately, Watson was able to suggest color palettes that were in line with Marchesa’s brand and the identified emotions, which will come to life on the dress during the Met Gala.

Once the colors were finalized, Marchesa turned to IBM partner Inno360 to source a fabric for their creation. Using Inno360’s R&D platform – powered by a combination of seven Watson services – the team searched more than 40,000 sources for fabric information, narrowing down to 150 sources of the most useful options to consider for the dress.

From this selection, Inno360 worked in partnership with IBM Research-Almaden to identify printed and woven textiles that would respond well to the LED technology needed to execute the final part of the collaboration. Inno360 was then able to deliver 35 unique fabric recommendations based on a variety of criteria important to Marchesa, like weight, luminosity, and flexibility. From there, Marchesa weighed the benefits of different material compositions, weights and qualities to select the final fabric that suited the criteria for their dress and remained true to their brand.

Here’s what the dress looks like,

Courtesy of Marchesa Facebook page (https://www.facebook.com/MarchesaFashion/)

Watson is an artificial intelligence program, which I have written about a few times, but I think this Feb. 28, 2011 posting (scroll down about 50% of the way), which mentions Watson, product placement, Jeopardy (tv quiz show), and medical diagnoses, seems the most à propos given IBM’s latest product placement at the Met Gala.

Not the only ‘tech’ dress

There was at least one other ‘tech’ dress at the 2016 Met Gala, this one designed by Zac Posen and worn by Claire Danes. It did not receive a stellar review in a May 3, 2016 posting by Elaine Lui on Laineygossip.com,

People are losing their goddamn minds over this dress, by Zac Posen. Because it lights up.

It’s bullsh-t.

This is a BULLSH-T DRESS.

It’s Cinderella with a lamp shoved underneath her skirt.

Here’s a video of Danes and her dress at the Met Gala,

A Sept. 10, 2015 news item in People magazine indicates that an earlier version of a ‘tech’ dress by Posen was a collaboration with Google (Note: Links have been removed),

Designer Zac Posen lit up his 2015 New York Fashion Week kickoff show on Tuesday by debuting a gorgeous and tech-savvy coded LED dress that blinked in different, dazzling pre-programmed patterns down the runway.

In coordination with Google’s non-profit organization, Made with Code, which inspires girls to pursue careers in tech coding, Posen teamed up with 30 girls (all between the ages of 13 and 18), who attended the show, to introduce the flashy dress — which was designed by Posen and coded by the young women.

“This is the future of the industry: mixing craft, fashion and technology,” the 34-year-old designer told PEOPLE. “There’s a discrepancy in the coding field, hardly any women are at the forefront, and that’s a real shame. If we can entice young women through the allure of fashion, to get them learning this language, why not?”

…

Through a micro controller, the gown displays coded patterns in 500 LED lights that are set to match the blues and yellows of Posen’s new collection. The circuit was designed and physically built into Posen’s dress fabric by 22-year-old up-and-coming fashion designer and computer science enthusiast, Maddy Maxey, who tells PEOPLE she was nervous watching Rocha [model Coco Rocha] make her way down the catwalk.

“It’s exactly as if she was carrying a microwave down the runway,” Maxey said. “It’s an entire circuit on a textile, so if one connection had come loose, the dress wouldn’t have worked. But, it did! And it was so deeply rewarding.”

Other ‘tech’ dresses

Back in 2009 I attended that year’s International Symposium on Electronic Arts and heard Clive van Heerden of Royal Philips Electronics talk about a number of innovative concepts including a ‘mood’ dress that would reveal the wearer’s emotions to whoever should glance their way. It was not a popular concept, especially not in Japan, where it was first tested.

The symposium also featured Maurits Waldemeyer, who worked with fashion designer Hussein Chalayan on LED dresses and dresses that changed shape as the models went down the runway.

In 2010 there was a flurry of media interest in mood changing ‘smart’ clothes designed by researchers at Concordia University (Barbara Layne, Canada) and Goldsmiths College (Janis Jefferies, UK). Here’s more from a June 4, 2010 BBC news online item,

The clothes are connected to a database that analyses the data to work out a person’s emotional state.

Media, including songs, words and images, are then piped to the display and speakers in the clothes to calm a wearer or offer support.

Created as part of an artistic project called Wearable Absence the clothes are made from textiles woven with different sorts of wireless sensors. These can track a wide variety of tell-tale biological markers including temperature, heart rate, breathing and galvanic skin response.

Final comments

I don’t have anything grand to say. It is interesting to see the progression of ‘tech’ dresses from avant garde designers and academics to haute couture.

Wearable tech for Christmas 2015 and into 2016

This is a roundup post of four items to cross my path this morning (Dec. 17, 2015), all of them concerned with wearable technology.

The first, a Dec. 16, 2015 news item on phys.org, is a fluffy little piece concerning the imminent arrival of a new generation of wearable technology,

It’s not every day that there’s a news story about socks. But in November [2015], a pair won the Best New Wearable Technology Device Award at a Silicon Valley conference. The smart socks, which track foot landings and cadence, are at the forefront of a new generation of wearable electronics, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society [ACS].

That news item was originated by a Dec. 16, 2015 ACS news release on EurekAlert which adds this,

Marc S. Reisch, a senior correspondent at C&EN, notes that stiff wristbands like the popular FitBit that measure heart rate and the number of steps people take have become common. But the long-touted technology needed to create more flexible monitoring devices has finally reached the market. Developers have successfully figured out how to incorporate stretchable wiring and conductive inks in clothing fabric, program them to transmit data wirelessly and withstand washing.

In addition to smart socks, fitness shirts and shoe insoles are on the market already or are nearly there. Although athletes are among the first to gain from the technology, the less fitness-oriented among us could also benefit. One fabric concept product — designed not for covering humans but a car steering-wheel — could sense driver alertness and make roads safer.

Reisch’s Dec. 7, 2015 article (C&EN vol. 93, issue 48, pp. 28-90) provides more detailed information and market information such as this,

Materials suppliers, component makers, and apparel developers gathered at a printed-electronics conference in Santa Clara, Calif., within a short drive of tech giants such as Google and Apple, to compare notes on embedding electronics into the routines of daily life. A notable theme was the effort to stealthily [emphasis mine] place sensors on exercise shirts, socks, and shoe soles so that athletes and fitness buffs can wirelessly track their workouts and doctors can monitor the health of their patients.

“Wearable technology is becoming more wearable,” said Raghu Das, chief executive officer of IDTechEx [emphasis mine], the consulting firm that organized the conference. By that he meant the trend is toward thinner and more flexible devices that include not just wrist-worn fitness bands but also textiles printed with stretchable wiring and electronic sensors, thanks to advances in conductive inks.

Interesting use of the word ‘stealthy’, which often suggests something sneaky as opposed to merely secretive. I imagine what’s being suggested is that the technology will not impose itself on the user (i.e., you won’t have to learn how to use it as you did with phones and computers).

Leading into my second item, IDC (International Data Corporation), not to be confused with IDTechEx, is mentioned in a Dec. 17, 2015 news item about wearable technology markets on phys.org,

The global market for wearable technology is seeing a surge, led by watches, smart clothing and other connected gadgets, a research report said Thursday [Dec. 16, 2015].

IDC said its forecast showed the worldwide wearable device market will reach a total of 111.1 million units in 2016, up 44.4 percent from this year.

By 2019, IDC sees some 214.6 million units, or a growth rate averaging 28 percent.

A Dec. 17, 2015 IDC press release, which originated the news item, provides more details about the market forecast,

“The most common type of wearables today are fairly basic, like fitness trackers, but over the next few years we expect a proliferation of form factors and device types,” said Jitesh Ubrani , Senior Research Analyst for IDC Mobile Device Trackers. “Smarter clothing, eyewear, and even hearables (ear-worn devices) are all in their early stages of mass adoption. Though at present these may not be significantly smarter than their analog counterparts, the next generation of wearables are on track to offer vastly improved experiences and perhaps even augment human abilities.”

One of the most popular types of wearables will be smartwatches, reaching a total of 34.3 million units shipped in 2016, up from the 21.3 million units expected to ship in 2015. By 2019, the final year of the forecast, total shipments will reach 88.3 million units, resulting in a five-year CAGR of 42.8%.

“In a short amount of time, smartwatches have evolved from being extensions of the smartphone to wearable computers capable of communications, notifications, applications, and numerous other functionalities,” noted Ramon Llamas , Research Manager for IDC’s Wearables team. “The smartwatch we have today will look nothing like the smartwatch we will see in the future. Cellular connectivity, health sensors, not to mention the explosive third-party application market all stand to change the game and will raise both the appeal and value of the market going forward.

“Smartwatch platforms will lead the evolution,” added Llamas. “As the brains of the smartwatch, platforms manage all the tasks and processes, not the least of which are interacting with the user, running all of the applications, and connecting with the smartphone. Once that third element is replaced with cellular connectivity, the first two elements will take on greater roles to make sense of all the data and connections.”

Top Five Smartwatch Platform Highlights

Apple’s watchOS will lead the smartwatch market throughout our forecast, with a loyal fanbase of Apple product owners and a rapidly growing application selection, including both native apps and Watch-designed apps. Very quickly, watchOS has become the measuring stick against which other smartwatches and platforms are compared. While there is much room for improvement and additional features, there is enough momentum to keep it ahead of the rest of the market.

Android/Android Wear will be a distant second behind watchOS even as its vendor list grows to include technology companies (ASUS, Huawei, LG, Motorola, and Sony) and traditional watchmakers (Fossil and Tag Heuer). The user experience on Android Wear devices has been largely the same from one device to the next, leaving little room for OEMs to develop further and users left to select solely on price and smartwatch design.

Smartwatch pioneer Pebble will cede market share to AndroidWear and watchOS but will not disappear altogether. Its simple user interface and devices make for an easy-to-understand use case, and its price point relative to other platforms makes Pebble one of the most affordable smartwatches on the market.

Samsung’s Tizen stands to be the dark horse of the smartwatch market and poses a threat to Android Wear, including compatibility with most flagship Android smartphones and an application selection rivaling Android Wear. Moreover, with Samsung, Tizen has benefited from technology developments including a QWERTY keyboard on a smartwatch screen, cellular connectivity, and new user interfaces. It’s a combination that helps Tizen stand out, but not enough to keep up with AndroidWear and watchOS.

There will be a small, but nonetheless significant market for smart wristwear running on a Real-Time Operating System (RTOS), which is capable of running third-party applications, but not on any of these listed platforms. These tend to be proprietary operating systems and OEMs will use them when they want to champion their own devices. These will help within specific markets or devices, but will not overtake the majority of the market.

The company has provided a table with five-year CAGR (compound annual growth rate) growth estimates, which can be found with the Dec. 17, 2015 IDC press release.
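For readers unfamiliar with the term, CAGR is simply the constant yearly growth rate that carries a starting value to an ending value over n years. A quick sketch with round illustrative numbers (not IDC’s baseline data):

```python
# Compound annual growth rate: the constant yearly growth that turns the
# starting value into the ending value over n years. The figures below are
# round illustrative numbers, not IDC's baseline data.

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Doubling over five years corresponds to roughly 14.9% a year.
print(round(cagr(100, 200, 5) * 100, 1))  # -> 14.9
```
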

Disclaimer: I am not endorsing IDC’s claims regarding the market for wearable technology.

For the third and fourth items, it’s back to the science. A Dec. 17, 2015 news item on Nanowerk describes, in general terms, some recent wearable technology research at the University of Manchester (UK) (Note: A link has been removed),

Cheap, flexible, wireless graphene communication devices such as mobile phones and healthcare monitors can be directly printed into clothing and even skin, University of Manchester academics have demonstrated.

In a breakthrough paper in Scientific Reports (“Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications”), the researchers show how graphene could be crucial to wearable electronic applications because it is highly-conductive and ultra-flexible.

The research could pave the way for smart, battery-free healthcare and fitness monitoring, phones, internet-ready devices and chargers to be incorporated into clothing and ‘smart skin’ applications – printed graphene sensors integrated with other 2D materials stuck onto a patient’s skin to monitor temperature, strain and moisture levels.

Detail is provided in a Dec. 17, 2015 University of Manchester press release, which originated the news item, (Note: Links have been removed),

Examples of communication devices include:

• In a hospital, a patient wears a printed graphene RFID tag on his or her arm. The tag, integrated with other 2D materials, can sense the patient’s body temperature and heartbeat and sends them back to the reader. The medical staff can monitor the patient’s conditions wirelessly, greatly simplifying the patient’s care.

• In a care home, battery-free printed graphene sensors can be printed on elderly peoples’ clothes. These sensors could detect and collect elderly people’s health conditions and send them back to the monitoring access points when they are interrogated, enabling remote healthcare and improving quality of life.

Existing materials used in wearable devices are either too expensive, such as silver nanoparticles, or not adequately conductive to have an effect, such as conductive polymers.

Graphene, the world’s thinnest, strongest and most conductive material, is perfect for the wearables market because of its broad range of superlative qualities. Graphene conductive ink can be cheaply mass produced and printed onto various materials, including clothing and paper.

The researchers, led by Dr Zhirun Hu, printed graphene to construct transmission lines and antennas and experimented with these in communication devices, such as those providing mobile and Wi-Fi connectivity.

Using a mannequin, they attached graphene-enabled antennas on each arm. The devices were able to ‘talk’ to each other, effectively creating an on-body communications system.

The results proved that graphene enabled components have the required quality and functionality for wireless wearable devices.
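The paper’s antenna designs aren’t reproduced here, but for a rough sense of the physical scale involved, the length of a simple half-wave dipole is set by the operating frequency. This is a generic back-of-envelope sketch, not the researchers’ actual printed-antenna geometry:

```python
# Back-of-envelope sizing for a half-wave dipole, the simplest antenna
# geometry. Illustrative only; not the researchers' printed-antenna design.

C = 299_792_458  # speed of light, m/s

def half_wave_dipole_length(freq_hz):
    """Ideal half-wavelength dipole length in metres at a given frequency."""
    return (C / freq_hz) / 2

# A 2.45 GHz (Wi-Fi band) dipole comes out around 6 cm, small enough
# to print on a sleeve.
print(f"{half_wave_dipole_length(2.45e9) * 100:.1f} cm")  # -> 6.1 cm
```

Real printed antennas are tuned around this starting point; the point of the arithmetic is simply that Wi-Fi-frequency antennas are garment-sized.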

Dr Hu, from the School of Electrical and Electronic Engineering, said: “This is a significant step forward – we can expect to see a truly all graphene enabled wireless wearable communications system in the near future.

“The potential applications for this research are huge – whether it be for health monitoring, mobile communications or applications attached to skin for monitoring or messaging.

“This work demonstrates that this revolutionary scientific material is bringing a real change into our daily lives.”

Co-author Sir Kostya Novoselov, who with his colleague Sir Andre Geim first isolated graphene at the University in 2004, added: “Research into graphene has thrown up significant potential applications, but to see evidence that cheap, scalable wearable communication devices are on the horizon is excellent news for graphene commercial applications.”

Here’s a link to and a citation for the paper,

Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications by Xianjun Huang, Ting Leng, Mengjian Zhu, Xiao Zhang, JiaCing Chen, KuoHsin Chang, Mohammed Aqeeli, Andre K. Geim, Kostya S. Novoselov, & Zhirun Hu. Scientific Reports 5, Article number: 18298 (2015) doi:10.1038/srep18298 Published online: 17 December 2015

This is an open access paper.

The next and final item concerns supercapacitors for wearable tech, which makes it slightly different from the other items and is why, despite the date, this is the final item. The research comes from Case Western Reserve University (CWRU; US) according to a Dec. 16, 2015 news item on Nanowerk (Note: A link has been removed),

Wearable power sources for wearable electronics are limited by the size of garments.

With that in mind, researchers at Case Western Reserve University have developed flexible wire-shaped microsupercapacitors that can be woven into a jacket, shirt or dress (Energy Storage Materials, “Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes”).

A Dec. 16, 2015 CWRU news release (on EurekAlert), which originated the news item, provides more detail about a device that would make wearable tech more wearable (after all, you don’t want to recharge your clothes the same way you do your phone and other mobile devices),

By their design or by connecting the capacitors in series or parallel, the devices can be tailored to match the charge storage and delivery needs of electronics donned.

While there’s been progress in development of those electronics–body cameras, smart glasses, sensors that monitor health, activity trackers and more–one challenge remaining is providing less obtrusive and cumbersome power sources.

“The area of clothing is fixed, so to generate the power density needed in a small area, we grew radially-aligned titanium oxide nanotubes on a titanium wire used as the main electrode,” said Liming Dai, the Kent Hale Smith Professor of Macromolecular Science and Engineering. “By increasing the surface area of the electrode, you increase the capacitance.”
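Dai’s point follows from the idealized parallel-plate capacitance formula, C = εA/d. The sketch below is a generic physics illustration (the numbers are placeholders, not measurements of the titania/nanotube device):

```python
# Idealized parallel-plate capacitance, C = epsilon * A / d. A generic
# physics sketch, not a model of the titania/nanotube device itself,
# but it shows why growing the electrode's surface area raises capacitance.

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, relative_permittivity=1.0):
    return EPSILON_0 * relative_permittivity * area_m2 / gap_m

base = capacitance(1e-4, 1e-6)     # 1 cm^2 electrode, 1 micron gap
doubled = capacitance(2e-4, 1e-6)  # doubling the area...
print(doubled / base)              # -> 2.0 (...doubles the capacitance)
```

Growing radially aligned nanotubes on the wire is a way of packing more area A into the same footprint of clothing.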

Dai and Tao Chen, a postdoctoral fellow in molecular science and engineering at Case Western Reserve, published their research on the microsupercapacitor in the journal Energy Storage Materials this week. The study builds on earlier carbon-based supercapacitors.

A capacitor is cousin to the battery, but offers the advantage of charging and releasing energy much faster.

How it works

In this new supercapacitor, the modified titanium wire is coated with a solid electrolyte made of polyvinyl alcohol and phosphoric acid. The wire is then wrapped with either yarn or a sheet made of aligned carbon nanotubes, which serves as the second electrode. The titanium oxide nanotubes, which are semiconducting, separate the two active portions of the electrodes, preventing a short circuit.

In testing, capacitance–the capability to store charge–increased from 0.57 to 0.9 to 1.04 milliFarads per micrometer as the strands of carbon nanotube yarn were increased from 1 to 2 to 3.

When wrapped with a sheet of carbon nanotubes, which increases the effective area of the electrode, the microsupercapacitor stored 1.84 milliFarads per micrometer. Energy density was 0.16 × 10^-3 milliwatt-hours per cubic centimeter and power density 0.01 milliwatt per cubic centimeter.

Whether wrapped with yarn or a sheet, the microsupercapacitor retained at least 80 percent of its capacitance after 1,000 charge-discharge cycles. To match various specific power needs of wearable devices, the wire-shaped capacitors can be connected in series or parallel to raise voltage or current, the researchers say.
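The series/parallel matching the researchers mention follows the standard capacitor combination rules, sketched below (the 1.84 figure is borrowed from the sheet-wrapped device above purely as a stand-in number):

```python
# Standard combination rules for capacitors. The 1.84 figure is borrowed
# from the sheet-wrapped device above purely as a stand-in number.

def parallel(*caps):
    # Parallel: capacitances add; more charge at the same voltage.
    return sum(caps)

def series(*caps):
    # Series: reciprocals add; less capacitance, higher voltage rating.
    return 1.0 / sum(1.0 / c for c in caps)

print(parallel(1.84, 1.84))           # -> 3.68
print(round(series(1.84, 1.84), 2))   # -> 0.92
```

Two identical units in parallel double the capacitance at the same voltage; in series the capacitance halves but the stack tolerates twice the voltage, which is how wire-shaped capacitors can be matched to a particular device’s power needs.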

When bent up to 180 degrees hundreds of times, the capacitors showed no loss of performance. Those wrapped in sheets showed more mechanical strength.

“They’re very flexible, so they can be integrated into fabric or textile materials,” Dai said. “They can be a wearable, flexible power source for wearable electronics and also for self-powered biosensors or other biomedical devices, particularly for applications inside the body.” [emphasis mine]

Dai’s lab is in the process of weaving the wire-like capacitors into fabric and integrating them with a wearable device.

So one day we may be carrying supercapacitors in our bodies? I’m not sure how I feel about that goal. In any event, here’s a link and a citation for the paper,

Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes by Tao Chen, Liming Dai. Energy Storage Materials Volume 2, January 2016, Pages 21–26 doi:10.1016/j.ensm.2015.11.004

This paper appears to be open access.

Google announces research results after testing 1,097-qubit D-Wave 2X™ quantum computers

If you’ve been reading this blog over the last few months, you’ll know that I’ve mentioned D-Wave Systems, a Vancouver (Canada)-based quantum computing company, frequently. The company seems to be signing all kinds of deals lately, including one with Google (my Oct. 5, 2015 posting). Well, a Dec. 9, 2015 news item on Nanotechnology Now sheds more light on how Google is using D-Wave’s quantum computers,

Harris & Harris Group, Inc. (NASDAQ: TINY), an investor in transformative companies enabled by disruptive science, notes that yesterday [Dec. 8, 2015] NASA, Google and the Universities Space Research Association (USRA) hosted a tour of the jointly run Quantum Artificial Intelligence Laboratory located at NASA’s Ames Research Center, which houses one of D-Wave’s 1,097-qubit D-Wave 2X™ quantum computers. At this event, Google announced that D-Wave’s quantum computer was able to find solutions to complicated problems of nearly 1,000 variables up to 10^8 (100,000,000) times faster than classical computers.

A Dec. 8, 2015 posting by Hartmut Neven for the Google Research blog describes the research and the results (Note: Links have been removed),

During the last two years, the Google Quantum AI [artificial intelligence] team has made progress in understanding the physics governing quantum annealers. We recently applied these new insights to construct proof-of-principle optimization problems and programmed these into the D-Wave 2X quantum annealer that Google operates jointly with NASA. The problems were designed to demonstrate that quantum annealing can offer runtime advantages for hard optimization problems characterized by rugged energy landscapes. We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor sometimes as high as 10^8.

For anyone (like me) who needs an explanation of quantum annealing, there’s this from its Wikipedia entry (Note: Links have been removed),

Quantum annealing (QA) is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima, such as finding the ground state of a spin glass.[1] It was formulated in its present form by T. Kadowaki and H. Nishimori in “Quantum annealing in the transverse Ising model”[2] though a proposal in a different form had been made earlier by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in “Quantum annealing: A new method for minimizing multidimensional functions”.[3]

Not as helpful as I’d hoped, but sometimes it’s necessary to learn a new vocabulary and a new set of basic principles, which takes time and requires the ability to ‘not know’ and/or ‘not understand’ until one day, you do.
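For readers who prefer code to definitions, simulated annealing, the classical counterpart Google benchmarked against, is easy to sketch. The toy problem below (a one-dimensional Ising-style chain, minimized by alternating bits) is my own illustrative example, not the benchmark workload:

```python
# Toy simulated annealing, the classical algorithm Google benchmarked the
# D-Wave machine against. The objective below (an antiferromagnetic Ising
# chain) is my own illustrative example, not the benchmark workload.

import math
import random

random.seed(0)

def energy(bits):
    # +1 for each pair of equal neighbours, -1 for each unequal pair;
    # the ground state alternates 0,1,0,1,... with energy -(len(bits) - 1).
    return sum(1 if bits[i] == bits[i + 1] else -1 for i in range(len(bits) - 1))

def simulated_annealing(n_bits=20, steps=5000, t_start=2.0, t_end=0.01):
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    e = energy(bits)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n_bits)
        bits[i] ^= 1                         # propose a single bit flip
        e_new = energy(bits)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                        # accept the move
        else:
            bits[i] ^= 1                     # reject: undo the flip
    return bits, e

bits, e = simulated_annealing()
print(e)  # the best possible energy for 20 bits is -19
```

On a tiny instance like this, simulated annealing finds the ground state almost instantly; the interesting question in Google’s benchmark is how that runtime scales as the number of variables approaches 1,000 and the energy landscape gets rugged.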

In the meantime, here’s more possibly befuddling information from the researchers in the form of a paper on arXiv.org,

What is the Computational Value of Finite Range Tunneling? by Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John Martinis, Hartmut Neven. http://arxiv.org/abs/1512.02206

This paper is open access.

US Los Alamos National Laboratory catches the D-Wave (buys a 1000+ Qubit quantum computer from D-Wave)

It can be a euphoric experience making a major technical breakthrough (June 2015), selling to a new large customer (Nov. 2015) and impressing your important customers so they upgrade to the new system (Oct. 2015) within a few short months.* D-Wave Systems (a Vancouver-based quantum computer company) certainly has cause to experience it given the events of the last six weeks or so. Yesterday, in a Nov. 11, 2015, D-Wave news release, the company trumpeted its sale of a 1000+ Qubit system (Note: Links have been removed),

D-Wave Systems Inc., the world’s first quantum computing company, announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The National Strategic Computing Initiative, created by executive order of President Obama in late July [2015], is intended “to maximize [the] benefits of high-performance computing (HPC) research, development, and deployment.”

“Los Alamos is a global leader in high performance computing and a pioneer in the application of new architectures to solve critical problems related to national security, energy, the environment, materials, health and earth science,” said Robert “Bo” Ewald, president of D-Wave U.S. “As we work jointly with scientists and engineers at Los Alamos we expect to be able to accelerate the pace of quantum software development to advance the state of algorithms, applications and software tools for quantum computing.”

A Nov. 11, 2015 news item on Nanotechnology Now is written from the company’s venture capitalist’s perspective,

Harris & Harris Group, Inc. (NASDAQ:TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system.

The news about the Los Alamos sale comes only weeks after D-Wave announced renewed agreements with Google, NASA (US National Aeronautics and Space Administration), and the Universities Space Research Association (USRA) in the aftermath of a technical breakthrough. See my Oct. 5, 2015 posting for more details about the agreements, the type of quantum computer D-Wave sells, and news of interesting and related research in Australia. Cracking the 512 qubit barrier also occasioned a posting here (June 26, 2015) where I described the breakthrough, the company, and included excerpts from an Economist article which mentioned D-Wave in its review of research in the field of quantum computing.

Congratulations to D-Wave!

*’It can be euphoric selling to your first large and/or important customers and D-Wave Systems (a Vancouver-based quantum computer company) certainly has cause to experience it. ‘ changed to more accurately express my thoughts to ‘It can be a euphoric experience making a major technical breakthrough (June 2015), selling to a new large customer (Nov. 2015) and impressing your important customers so they upgrade to the new system (Oct. 2015) within a few short months.’ on Nov. 12, 2015 at 1025 hours PST.