Tag Archives: Google

Does understanding your pet mean understanding artificial intelligence better?

Heather Roff’s take on artificial intelligence features an approach I haven’t seen before. From her March 30, 2017 essay for The Conversation (h/t March 31, 2017 news item on phys.org),

It turns out, though, that we already have a concept we can use when we think about AI: It’s how we think about animals. As a former animal trainer (albeit briefly) who now studies how people use AI, I know that animals and animal training can teach us quite a lot about how we ought to think about, approach and interact with artificial intelligence, both now and in the future.

Using animal analogies can help regular people understand many of the complex aspects of artificial intelligence. It can also help us think about how best to teach these systems new skills and, perhaps most importantly, how we can properly conceive of their limitations, even as we celebrate AI’s new possibilities.
Looking at constraints

As AI expert Maggie Boden explains, “Artificial intelligence seeks to make computers do the sorts of things that minds can do.” AI researchers are working on teaching computers to reason, perceive, plan, move and make associations. AI can see patterns in large data sets, predict the likelihood of an event occurring, plan a route, manage a person’s meeting schedule and even play war-game scenarios.

Many of these capabilities are, in themselves, unsurprising: Of course a robot can roll around a space and not collide with anything. But somehow AI seems more magical when the computer starts to put these skills together to accomplish tasks.

Thinking of AI as a trainable animal isn’t just useful for explaining it to the general public. It is also helpful for the researchers and engineers building the technology. If an AI scholar is trying to teach a system a new skill, thinking of the process from the perspective of an animal trainer could help identify potential problems or complications.

For instance, if I try to train my dog to sit, and every time I say “sit” the buzzer to the oven goes off, then my dog will begin to associate sitting not only with my command, but also with the sound of the oven’s buzzer. In essence, the buzzer becomes another signal telling the dog to sit, which is called an “accidental reinforcement.” If we look for accidental reinforcements or signals in AI systems that are not working properly, then we’ll know better not only what’s going wrong, but also what specific retraining will be most effective.

This requires us to understand what messages we are giving during AI training, as well as what the AI might be observing in the surrounding environment. The oven buzzer is a simple example; in the real world it will be far more complicated.

Before we welcome our AI overlords and hand over our lives and jobs to robots, we ought to pause and think about the kind of intelligences we are creating. …
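Roff’s “accidental reinforcement” maps neatly onto what machine learning researchers call spurious correlation. Here’s a minimal sketch of the problem (my own toy example, not from the essay): a model trained while a confounding signal always accompanies the real cue will assign some of the credit to the confound,

```python
import numpy as np

# Toy training data: column 0 is the real cue (the "sit" command),
# column 1 is a confound (the oven buzzer) that happens to fire
# every single time the command is given during training.
rng = np.random.default_rng(0)
command = rng.integers(0, 2, size=(200, 1)).astype(float)
X_train = np.hstack([command, command])   # buzzer == command, always
y_train = command.ravel()                 # sit if and only if commanded

# Fit a linear model; with two identical columns, least squares
# splits the credit equally between command and buzzer (w ~ [0.5, 0.5]).
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print("learned weights:", w)

# At "deployment" the buzzer goes off on its own, with no command...
x_test = np.array([0.0, 1.0])
print("response to buzzer alone:", x_test @ w)   # ~0.5, not 0
```

Retraining on examples where the buzzer fires without the command (breaking the correlation) would push the buzzer’s weight back toward zero, which is exactly the kind of specific retraining Roff describes.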


It was just last year (2016) that an AI system beat a human Go master. Here’s how a March 17, 2016 article by John Russell for TechCrunch described the feat (Note: Links have been removed),

Much was written of an historic moment for artificial intelligence last week when a Google-developed AI beat one of the planet’s most sophisticated players of Go, an East Asia strategy game renowned for its deep thinking and strategy.

Go is viewed as one of the ultimate tests for an AI given the sheer possibilities on hand. “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions [in the game] — that’s more than the number of atoms in the universe, and more than a googol times larger than chess,” Google said earlier this year.

If you missed the series — which AlphaGo, the AI, won 4-1 — or were unsure of exactly why it was so significant, Google summed the general importance up in a post this week.

Far from just being a game, Demis Hassabis, CEO and Co-Founder of DeepMind — the Google-owned company behind AlphaGo — said the AI’s development is proof that it can be used to solve problems in ways that humans may not be accustomed or able to do:

We’ve learned two important things from this experience. First, this test bodes well for AI’s potential in solving other problems. AlphaGo has the ability to look “globally” across a board—and find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.
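As an aside, the scale Google cites is easy to sanity-check: each of the 19×19 = 361 points on a Go board can be empty, black or white, giving 3^361 board configurations as a crude upper bound (the quoted figure counts only legal positions, which is somewhat smaller). A few lines of Python confirm the order of magnitude:

```python
# Crude upper bound on Go board configurations: each of the
# 19 x 19 = 361 points is empty, black, or white (legality ignored).
upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80   # commonly cited rough estimate

print(f"3^361 is about 10^{len(str(upper_bound)) - 1}")  # ~10^172
print(upper_bound > atoms_in_universe)                    # True
```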

I find Roff’s thesis intriguing and likely applicable in the short term, but in the longer term, and in light of neuromorphic engineering’s attempts to create devices that mimic neural plasticity, I don’t find it convincing.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, in a Feb. 28, 2017 posting I wrote at length about the Cascadia Urban Analytics Cooperative, a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), noting that it is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s my earlier posting from today (March 31, 2017): China, US, and the race for artificial intelligence research domination.

New principles for AI (artificial intelligence) research along with some history and a plea for a democratic discussion

For almost a month I’ve been meaning to get to this Feb. 1, 2017 essay by Andrew Maynard (director of the Risk Innovation Lab at Arizona State University) and Jack Stilgoe (science policy lecturer at University College London [UCL]) on the topic of artificial intelligence and principles (Note: Links have been removed). First, a walk down memory lane,

Today [Feb. 1, 2017] in Washington DC, leading US and UK scientists are meeting to share dispatches from the frontiers of machine learning – an area of research that is creating new breakthroughs in artificial intelligence (AI). Their meeting follows the publication of a set of principles for beneficial AI that emerged from a conference earlier this year at a place with an important history.

In February 1975, 140 people – mostly scientists, with a few assorted lawyers, journalists and others – gathered at a conference centre on the California coast. A magazine article from the time by Michael Rogers, one of the few journalists allowed in, reported that most of the four days’ discussion was about the scientific possibilities of genetic modification. Two years earlier, scientists had begun using recombinant DNA to genetically modify viruses. The Promethean nature of this new tool prompted scientists to impose a moratorium on such experiments until they had worked out the risks. By the time of the Asilomar conference, the pent-up excitement was ready to burst. It was only towards the end of the conference when a lawyer stood up to raise the possibility of a multimillion-dollar lawsuit that the scientists focussed on the task at hand – creating a set of principles to govern their experiments.

The 1975 Asilomar meeting is still held up as a beacon of scientific responsibility. However, the story told by Rogers, and subsequently by historians, is of scientists motivated by a desire to head-off top down regulation with a promise of self-governance. Geneticist Stanley Cohen said at the time, ‘If the collected wisdom of this group doesn’t result in recommendations, the recommendations may come from other groups less well qualified’. The mayor of Cambridge, Massachusetts was a prominent critic of the biotechnology experiments then taking place in his city. He said, ‘I don’t think these scientists are thinking about mankind at all. I think that they’re getting the thrills and the excitement and the passion to dig in and keep digging to see what the hell they can do’.

The concern in 1975 was with safety and containment in research, not with the futures that biotechnology might bring about. A year after Asilomar, Cohen’s colleague Herbert Boyer founded Genentech, one of the first biotechnology companies. Corporate interests barely figured in the conversations of the mainly university scientists.

Fast-forward 42 years and it is clear that machine learning, natural language processing and other technologies that come under the AI umbrella are becoming big business. The cast list of the 2017 Asilomar meeting included corporate wunderkinds from Google, Facebook and Tesla as well as researchers, philosophers, and other academics. The group was more intellectually diverse than their 1975 equivalents, but there were some notable absences – no public and their concerns, no journalists, and few experts in the responsible development of new technologies.

Maynard and Stilgoe offer a critique of the latest principles,

The principles that came out of the meeting are, at least at first glance, a comforting affirmation that AI should be ‘for the people’, and not to be developed in ways that could cause harm. They promote the idea of beneficial and secure AI, development for the common good, and the importance of upholding human values and shared prosperity.

This is good stuff. But it’s all rather Motherhood and Apple Pie: comforting and hard to argue against, but lacking substance. The principles are short on accountability, and there are notable absences, including the need to engage with a broader set of stakeholders and the public. At the early stages of developing new technologies, public concerns are often seen as an inconvenience. In a world in which populism appears to be trampling expertise into the dirt, it is easy to understand why scientists may be defensive.

I encourage you to read this thoughtful essay in its entirety, although I do have one nit to pick: Why only US and UK scientists? I imagine the answer may lie in funding and logistics issues, but I find it surprising that the critique makes no mention of the international community as a nod to inclusion.

For anyone interested in the Asilomar AI principles (2017), you can find them here. You can also find videos of the two-day workshop (Jan. 31 – Feb. 1, 2017), titled The Frontiers of Machine Learning (a Raymond and Beverly Sackler USA-UK Scientific Forum [US National Academy of Sciences]), here; videos for each session are available on YouTube.

Using melanin in bioelectronic devices

Brazilian researchers are working with melanin to make biosensors and other bioelectronic devices according to a Dec. 20, 2016 news item on phys.org,

Bioelectronics, sometimes called the next medical frontier, is a research field that combines electronics and biology to develop miniaturized implantable devices capable of altering and controlling electrical signals in the human body. Large corporations are increasingly interested: a joint venture in the field has recently been announced by Alphabet, Google’s parent company, and pharmaceutical giant GlaxoSmithKline (GSK).

One of the challenges that scientists face in developing bioelectronic devices is identifying and finding ways to use materials that conduct not only electrons but also ions, as most communication and other processes in the human organism use ionic biosignals (e.g., neurotransmitters). In addition, the materials must be biocompatible.

Resolving this challenge is one of the motivations for researchers at São Paulo State University’s School of Sciences (FC-UNESP) at Bauru in Brazil. They have succeeded in developing a novel route to more rapidly synthesize and to enable the use of melanin, a polymeric compound that pigments the skin, eyes and hair of mammals and is considered one of the most promising materials for use in miniaturized implantable devices such as biosensors.

A Dec. 14, 2016 FAPESP (São Paulo Research Foundation) press release, which originated the news item, further describes both the research and a recent meeting where the research was shared (Note: A link has been removed),

Some of the group’s research findings were presented at FAPESP Week Montevideo during a round-table session on materials science and engineering.

The symposium was organized by the Montevideo Group Association of Universities (AUGM), Uruguay’s University of the Republic (UdelaR) and FAPESP and took place on November 17-18 at UdelaR’s campus in Montevideo. Its purpose was to strengthen existing collaborations and establish new partnerships among South American scientists in a range of knowledge areas. Researchers and leaders of institutions in Uruguay, Brazil, Argentina, Chile and Paraguay attended the meeting.

“All the materials that have been tested to date for applications in bioelectronics are entirely synthetic,” said Carlos Frederico de Oliveira Graeff, a professor at UNESP Bauru and principal investigator for the project, in an interview given to Agência FAPESP.

“One of the great advantages of melanin is that it’s a totally natural compound and biocompatible with the human body: hence its potential use in electronic devices that interface with brain neurons, for example.”

Application challenges

According to Graeff, the challenges of using melanin as a material for the development of bioelectronic devices include the fact that like other carbon-based materials, such as graphene, melanin is not easily dispersible in an aqueous medium, a characteristic that hinders its application in thin-film production.

Furthermore, the conventional process for synthesizing melanin is complex: several steps are hard to control, it can last up to 56 days, and it can result in disorderly structures.

In a series of studies performed in recent years at the Center for Research and Development of Functional Materials (CDFM), where Graeff is a leading researcher and which is one of the Research, Innovation and Dissemination Centers (RIDCs) funded by FAPESP, he and his collaborators managed to obtain biosynthetic melanin with good dispersion in water and a strong resemblance to natural melanin using a novel synthesis route.

The process developed by the group at CDFM takes only a few hours and is based on changes in parameters such as temperature and the application of oxygen pressure to promote oxidation of the material.

By applying oxygen pressure, the researchers were able to increase the density of carboxyl groups, which are organic functional groups consisting of a carbon atom double bonded to an oxygen atom and single bonded to a hydroxyl group (oxygen + hydrogen). This enhances solubility and facilitates the suspension of biosynthetic melanin in water.
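In structural notation, a carboxyl group attached to the polymer backbone (R) is just the description above written as a formula:

$$\mathrm{R{-}C(=O){-}OH}\ \equiv\ \mathrm{R{-}COOH}$$

Because the group is polar (and ionizes to a carboxylate, $\mathrm{-COO^-}$, in water), packing more of these groups onto melanin makes the polymer easier to disperse in an aqueous medium.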

“The production of thin films of melanin with high homogeneity and quality is made far easier by these characteristics,” Graeff said.

By increasing the density of carboxyl groups, the researchers were also able to make biosynthetic melanin more similar to the biological compound.

In living organisms, an enzyme that participates in the synthesis of melanin facilitates the production of carboxylic acids. The new melanin synthesis route enabled the researchers to mimic the role of this enzyme chemically while increasing carboxyl group density.

“We’ve succeeded in obtaining a material that’s very close to biological melanin by chemical synthesis and in producing high-quality film for use in bioelectronic devices,” Graeff said.

Through collaboration with colleagues at research institutions in Canada [emphasis mine], the Brazilian researchers have begun using the material in a series of applications, including electrical contacts, pH sensors and photovoltaic cells.

More recently, they have embarked on an attempt to develop a transistor, a semiconductor device used to amplify or switch electronic signals and electrical power.

“Above all, we aim to produce transistors precisely in order to enhance this coupling of electronics with biological systems,” Graeff said.

I’m glad to have gotten some information about the work in South America. It’s one of FrogHeart’s shortcomings that I have so little coverage of research in that part of the world; I believe this is largely due to my lack of Spanish language skills. Perhaps one day there’ll be a universal translator that works well. In the meantime, it was a surprise to see Canada mentioned in this piece. I wonder which Canadian research institutions are involved with this research in South America.

Artificial intelligence and industrial applications

This is a take on artificial intelligence that I haven’t encountered before. Sean Captain’s Nov. 15, 2016 article for Fast Company profiles industry giant GE (General Electric) and its foray into that world (Note: Links have been removed),

When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.

The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February [2016?], but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”

Today [Nov. 15, 2016] GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.

The second purchase, Wise.io, is a less obvious one. Founded by astrophysics and AI experts using machine learning to study the heavens, the company reapplied the tech to streamlining a company’s customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.

Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.

One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.

“I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.

I recommend reading the article in its entirety if you have the time. And, for those of us in British Columbia, Canada, it was a surprise to see BC Hydro on the list of customers for one of GE’s new acquisitions. As well, that business about the pipelines hits home hard given the current debates (Enbridge Northern Gateway Pipelines) here. *ETA Dec. 27, 2016: This was originally edited just prior to publication to include information about the announcement by the Trudeau cabinet approving two pipelines for TransMountain and Enbridge respectively while rejecting the Northern Gateway pipeline (Canadian Broadcasting Corporation [CBC] online news Nov. 29, 2016). I trust this second edit will stick.*

It seems GE is splashing out in a big way. There’s a second piece on Fast Company, a Nov. 16, 2016 article by Sean Captain (again), this time featuring a chat between an engineer and a robotic power plant,

We are entering the era of talking machines—and it’s about more than just asking Amazon’s Alexa to turn down the music. General Electric has built a digital assistant into its cloud service for managing power plants, jet engines, locomotives, and the other heavy equipment it builds. Over the internet, an engineer can ask a machine—even one hundreds of miles away—how it’s doing and what it needs. …

Voice controls are built on top of GE’s Digital Twin program, which uses sensor readings from machinery to create virtual models in cyberspace. “That model is constantly getting a stream of data, both operational and environmental,” says Colin Parris, VP at GE Software Research. “So it’s adapting itself to that type of data.” The machines live virtual lives online, allowing engineers to see how efficiently each is running and if they are wearing down.

GE partnered with Microsoft on the interface, using the Bing Speech API (the same tech powering the Cortana digital assistant), with special training on key terms like “rotor.” The twin had little trouble understanding the Mandarin Chinese accent of Bo Yu, one of the researchers who built the system; nor did it stumble on Parris’s Trinidad accent. Digital Twin will also work with Microsoft’s HoloLens mixed reality goggles, allowing someone to step into a 3D image of the equipment.
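To make the “digital twin” idea concrete, here’s a minimal sketch of the pattern as I understand it (an illustrative toy of my own, not GE’s Predix or Digital Twin software; the class and parameter names are hypothetical): a software object consumes a stream of sensor readings and keeps a continuously updated estimate of the machine’s state, which an engineer can then query from anywhere.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy digital twin: mirrors one machine from its sensor stream."""
    machine_id: str
    smoothing: float = 0.1            # weight given to each new reading
    state: dict = field(default_factory=dict)

    def ingest(self, reading: dict) -> None:
        # Exponentially smooth each sensor channel so the virtual model
        # "adapts itself" to the incoming operational/environmental data.
        for channel, value in reading.items():
            old = self.state.get(channel, value)
            self.state[channel] = (1 - self.smoothing) * old + self.smoothing * value

    def status(self, channel: str) -> float:
        # What an engineer would ask the twin, even from miles away.
        return self.state[channel]

twin = DigitalTwin("turbine-42")
for reading in [{"rotor_temp_c": 310.0}, {"rotor_temp_c": 330.0}]:
    twin.ingest(reading)
print(twin.status("rotor_temp_c"))   # smoothed estimate, 312.0
```

A voice interface like the one GE built with Microsoft’s Bing Speech API would then simply translate “how’s the rotor temperature?” into a call like the status query above.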

I can’t help wondering if there are some jobs that were eliminated with this technology.

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

Cientifica’s latest smart textiles and wearable electronics report

After publishing a report on wearable technology in May 2016 (see my June 2, 2016 posting), Cientifica has published another wearable technology report, this one titled Smart Textiles and Wearables: Markets, Applications and Technologies. Here’s more about the latest report from the report order page,

“Smart Textiles and Wearables: Markets, Applications and Technologies” examines the markets for textile based wearable technologies, the companies producing them and the enabling technologies. This is creating a 4th industrial revolution for the textiles and fashion industry worth over $130 billion by 2025.

Advances in fields such as nanotechnology, organic electronics (also known as plastic electronics) and conducting polymers are creating a range of textile-based technologies with the ability to sense and react to the world around them. This includes monitoring biometric data such as heart rate and environmental factors such as temperature or the presence of toxic gases, producing real-time feedback in the form of electrical stimuli, haptic feedback or changes in color.

The report identifies three distinct generations of textile wearable technologies.

First generation is where a sensor is attached to apparel; this is the approach currently taken by major sportswear brands such as Adidas, Nike and Under Armour.
Second generation products embed the sensor in the garment, as demonstrated by products from Samsung, Alphabet, Ralph Lauren and Flex.
In third generation wearables the garment is the sensor, and a growing number of companies including AdvanPro, Tamicare and BeBop Sensors are making rapid progress in creating pressure, strain and temperature sensors.

Third generation wearables represent a significant opportunity for new and established textile companies to add significant value without having to directly compete with Apple, Samsung and Intel.

The report predicts that the key growth areas will initially be sports and wellbeing, followed by medical applications for patient monitoring. Technical textiles, fashion and entertainment will also be significant applications, with the total market expected to rise to over $130 billion by 2025 and triple-digit compound annual growth rates across many applications.
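For reference, the compound annual growth rate (CAGR) figures quoted throughout the report are the constant year-over-year growth rate that would carry a value from its initial to its final level over n years:

$$\mathrm{CAGR} = \left(\frac{V_{\text{final}}}{V_{\text{initial}}}\right)^{1/n} - 1$$

As a worked example (my numbers, not the report’s): a segment that grows tenfold over the report’s nine-year window (2016-2025) has a CAGR of 10^(1/9) − 1 ≈ 29%, so the “triple digit” rates mentioned above imply at least a doubling every year.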

The rise of textile wearables also represents a significant opportunity for manufacturers of the advanced materials used in their manufacture. Toray, Panasonic, Covestro, DuPont and Toyobo are already supplying the necessary materials, while researchers are creating sensing and energy storage technologies, from flexible batteries to graphene supercapacitors, which will power tomorrow’s wearables. The report details the latest advances and their applications.

This report is based on an extensive research study of the wearables and smart textile markets backed with over a decade of experience in identifying, predicting and sizing markets for nanotechnologies and smart textiles. Detailed market figures are given from 2016-2025, along with an analysis of the key opportunities, and illustrated with 139 figures and 6 tables.

The September 2016 report is organized differently and has a somewhat different focus from the report published in May 2016. Not having read either report, I’m guessing that while there might be a little repetition, you might better consider them to be companion volumes.

Here’s more from the September 2016 report’s table of contents which you can download from the order page (Note: The formatting has been changed),

SMART TEXTILES AND WEARABLES:
MARKETS, APPLICATIONS AND
TECHNOLOGIES

Contents  1
List of Tables  4
List of Figures  4
Introduction  8
How to Use This Report  8
Wearable Technologies and the 4Th Industrial Revolution  9
The Evolution of Wearable Technologies  10
Defining Smart Textiles  15
Factors Affecting The Adoption of Smart Textiles for Wearables  18
Cost  18
Accuracy  18
On Shoring  19
Power management  19
Security and Privacy  20
Markets  21
Total Market Growth and CAGR  21
Market Growth By Application  21
Adding Value To Textiles Through Technology  27
How Nanomaterials Add Functionality and Value  31
Business Models  33
Applications  35
Sports and Wellbeing  35
1st Generation Technologies  35
Under Armour Healthbox Wearables  35
Adidas MiCoach  36
Sensoria  36
EMPA’s Long Term Research  39
2nd Generation Technologies  39
Google’s Project Jacquard  39
Samsung Creative Lab  43
Microsoft Collaborations  44
Intel Systems on a Chip  44
Flex (Formerly Flextronics) and MAS Holdings  45
Jiobit  46
Asensei Personal Trainer  47
OmSignal Smart Clothing  48
Ralph Lauren PoloTech  49
Hexoskin Performance Management  50
Jabil Circuit Textile Heart Monitoring  51
Stretch Sense Sensors  52
NTT Data and Toray  54
Goldwin Inc. and DoCoMo  55
SupaSpot Inc Smart Sensors  55
Wearable Experiments and Brand Marketing  56
Wearable Life Sciences Antelope  57
Textronics NuMetrex  59
3rd Generation Technologies  60
AdvanPro Pressure Sensing Shoes  60
Tamicare 3D printed Wearables with Integrated Sensors  62
AiQ Smart Clothing Stainless Steel Yarns  64
Flex Printed Inks And Conductive Yarns  66
Sensing Tech Conductive Inks  67
EHO Textiles Body Motion Monitoring  68
Bebop Sensors Washable E-Ink Sensors  70
Fraunhofer Institute for Silicate Research Piezoelectric Polymer Sensors  71
CLIM8 GEAR Heated Textiles  74
VTT Smart Clothing Human Thermal Model  74
ATTACH (Adaptive Textiles Technology with Active Cooling and Heating) 76
Energy Storage and Generation  78
Intelligent Textiles Military Uniforms  78
BAE Systems Broadsword Spine  79
Stretchable Batteries  80
LG Chem Cable Batteries  81
Supercapacitors  83
Swinburne Graphene Supercapacitors  83
MIT Niobium Nanowire Supercapacitors  83
Energy Harvesting  86
Kinetic  86
StretchSense Energy Harvesting Kit  86
NASA Environmental Sensing Fibers  86
Solar  87
Powertextiles  88
Sphelar Power Corp Solar Textiles  88
Ohmatex and Powerweave  89
Fashion  89
1st Generation Technologies  92
Cute Circuit LED Couture  92
MAKEFASHION LED Couture  94
2nd Generation Technologies  94
Covestro Luminous Clothing  94
3rd Generation Technologies  96
The Unseen Temperature Sensitive Dyes  96
Entertainment  98
Wearable Experiments Marketing  98
Key Technologies 100
Circuitry  100
Conductive Inks for Fabrics  100
Conductive Ink For Printing On Stretchable Fabrics  100
Creative Materials Conductive Inks And Adhesives  100
Dupont Stretchable Electronic Inks  101
Aluminium Inks From Alink Co  101
Conductive Fibres  102
Circuitex Silver Coated Nylon  102
Textronics Yarns and Fibres  102
Novonic Elastic Conductive Yarn  103
Copper Coated Polyacrylonitrile (PAN) Fibres  103
Printed electronics  105
Covestro TPU Films for Flexible Circuits  105
Sensors  107
Electrical  107
Hitoe  107
Cocomi  108
Panasonic Polymer Resin  109
Cardiac Monitoring  110
Mechanical  113
Strain  113
Textile-Based Weft Knitted Strain Sensors  113
Chain Mail Fabric for Smart Textiles  113
Nano-Treatment for Conductive Fiber/Sensors 115
Piezoceramic materials  116
Graphene-Based Woven Fabric  117
Pressure Sensing  117
LG Innotek Flexible Textile Pressure Sensors  117
Hong Kong Polytechnic University Pressure Sensing Fibers  119
Conductive Polymer Composite Coatings  122
Printed Textile Sensors To Track Movement  125
Environment  127
Photochromic Textiles  127
Temperature  127
Sefar PowerSens  127
Gasses & Chemicals  127
Textile Gas Sensors  127
Energy  130
Storage  130
Graphene Supercapacitors  130
Niobium Nanowire Supercapacitors  130
Stretchy supercapacitors  132
Energy Generation  133
StretchSense Energy Harvesting Kit  133
Piezoelectric Or Thermoelectric Coated Fibres  134
Optical  137
Light Emitting  137
University of Manchester Electroluminescent Inks and Yarns 137
Polyera Wove  138
Companies Mentioned  141
List of Tables
Table 1 CAGR by application  22
Table 2 Value of market by application 2016-25 (millions USD)  24
Table 3 % market share by application  26
Table 4 CAGR 2016-25 by application  26
Table 5 Technology-Enabled Market Growth in Textile by Sector (2016-22) 28
Table 6 Value of nanomaterials by sector 2016-22 ($ Millions)  33
List of Figures
Figure 1 The 4th Industrial Revolution (World Economic Forum)  9
Figure 2 Block Diagram of typical MEMS digital output motion sensor: ultra low-power high performance 3-axis “femto” accelerometer used in fitness tracking devices  11
Figure 3 Interior of Fitbit Flex device (from iFixit)  11
Figure 4 Internal layout of Fitbit Flex. Red is the main CPU, orange is the BTLE chip, blue is a charger, yellow is the accelerometer (from iFixit)  11
Figure 5 Intel’s Curie processor stretches the definition of ‘wearable’  12
Figure 6 Typical Textile Based Wearable System Components  13
Figure 7 The Chromat Aeros Sports Bra “powered by Intel, inspired by wind, air and flight.”  14
Figure 8 The Evolution of Smart textiles  15
Figure 9 Goldwin’s C2fit IN-pulse sportswear using Toray’s Hitoe  16
Figure 10 Sensoglove reads grip pressure for golfers  16
Figure 11 Textile Based Wearables Growth 2016-25 (USD Millions)  21
Figure 12 Total market for textile based wearables 2016-25 (USD Millions)  22
Figure 13 Health and Sports Market Size 2016-20 (USD Millions)  23
Figure 14 Health and Sports Market Size 2016-25 (USD Millions)  23
Figure 15 Critical steps for obtaining FDA medical device approval  25
Figure 16 Market split between wellbeing and medical 2016-25  26
Figure 17 Current World Textile Market by Sector (2016)  27
Figure 18 The Global Textile Market By Sector ($ Millions)  27
Figure 19 Compound Annual Growth Rates (CAGR) by Sector (2016-25)  28
Figure 20 The Global Textile Market in 2022  29
Figure 21 The Global Textile Market in 2025  30
Figure 22 Textile Market Evolution (2012-2025)  30
Figure 23 Total Value of Nanomaterials in Textiles 2012-2022 ($ Millions)  31
Figure 24 Value of Nanomaterials in Textiles by Sector 2016-2025 ($ Millions) 32
Figure 25 Adidas miCoach Connect Heart Rate Monitor  36
Figure 26 Sensoria’s Hear[t] Rate Monitoring Garments  37
Figure 27 Flexible components used in Google’s Project Jacquard  40
Figure 28 Google and Levi’s Smart Jacket  41
Figure 29 Embedded electronics Google’s Project Jacquard  42
Figure 30 Samsung’s WELT ‘smart’ belt  43
Figure 31 Samsung Body Compass at CES16  44
Figure 32 Lumo Run washable motion sensor  45
Figure 33 OMSignal’s Smart Bra  49
Figure 34 PoloTech Shirt from Ralph Lauren  50
Figure 35 Hexoskin Data Acquisition and Processing  51
Figure 36 Peak+™ Hear[t] Rate Monitoring Garment  52
Figure 37 StretchSense CEO Ben O’Brien, with a fabric stretch sensor  53
Figure 38 C3fit Pulse from Goldwin Inc  55
Figure 39 The Antelope Tank-Top  58
Figure 40 Sportswear with integrated sensors from Textronix  60
Figure 41 AdvanPro’s pressure sensing insoles  61
Figure 42 AdvanPro’s pressure sensing textile  62
Figure 43 Tamicare 3D Printing Sensors and Apparel  63
Figure 44 Smart clothing using stainless steel yarns and textile sensors from AiQ  65
Figure 45 EHO Smart Sock  69
Figure 46 BeBop Smart Car Seat Sensor  71
Figure 47 Non-transparent printed sensors from Fraunhofer ISC  73
Figure 48 Clim8 Intelligent Heat Regulating Shirt  74
Figure 49 Temperature regulating smart fabric printed at UC San Diego  76
Figure 50 Intelligent Textiles Ltd smart uniform  79
Figure 51 BAE Systems Broadsword Spine  80
Figure 52 LG Chem cable-shaped lithium-ion battery powers an LED display even when twisted and strained  81
Figure 53 Supercapacitor yarn made of niobium nanowires  84
Figure 54 Sphelar Textile  89
Figure 55 Sphelar Textile Solar Cells  89
Figure 56 Katy Perry wears Cute Circuit in 2010  91
Figure 57 Cute Circuit K Dress  93
Figure 58 MAKEFASHION runway at the Brother’s “Back to Business” conference, Nashville 2016  94
Figure 59 Covestro material with LEDs are positioned on formable films made from thermoplastic polyurethane (TPU).  95
Figure 60 Unseen headpiece, made of 4000 conductive Swarovski stones, changes color to correspond with localized brain activity  96
Figure 61 Eighthsense, a coded couture piece.  97
Figure 62 Durex Fundawear  98
Figure 63 Printed fabric sensors from the University of Tokyo  100
Figure 64 Tony Kanaan’s shirt with electrically conductive nano-fibers  107
Figure 65 Panasonic stretchable resin technology  109
Figure 66 Nanoflex monitoring system  111
Figure 67 Knitted strain sensors  113
Figure 68 Chain Mail Fabric for Smart Textiles  114
Figure 69 Electroplated Fabric  115
Figure 70 LG Innotek flexible textile pressure sensors  118
Figure 71 Smart Footwear installed with fabric sensors. (Credit: Image courtesy of The Hong Kong Polytechnic University)  120
Figure 72 SOFTCEPTOR™ textile strain sensors  122
Figure 73 Conductive polymer composite coating for pressure sensing  123
Figure 74 Fraunhofer ISC printed sensor  125
Figure 75 The graphene-coated yarn sensor. (Image: ETRI)  128
Figure 76 Supercapacitor yarn made of niobium nanowires  131
Figure 77 StretchSense Energy Harvesting Kit  134
Figure 78 Energy harvesting textiles at the University of Southampton  135
Figure 79 Polyera Wove Flexible Screen  139

If you compare that with the table of contents for the May 2016 report in my June 2, 2016 posting, you can see the difference.

Here’s one last tidbit, a Sept. 15, 2016 news item on phys.org highlights another wearable technology report,

Wearable tech, which was seeing sizzling sales growth a year ago [2015], is cooling this year amid consumer hesitation over new devices, a survey showed Thursday [Sept. 15, 2016].

The research firm IDC said it expects global sales of wearables to grow some 29.4 percent to some 103 million units in 2016.

That follows 171 percent growth in 2015, fueled by the launch of the Apple Watch and a variety of fitness bands.

“It is increasingly becoming more obvious that consumers are not willing to deal with technical pain points that have to date been associated with many wearable devices,” said IDC analyst Ryan Reith.

So-called basic wearables—including fitness bands and other devices that do not run third party applications—will make up the lion’s share of the market with some 80.7 million units shipped this year, according to IDC.

In short, while the near term does not promise the explosive growth of the previous year, new generations of wearable technology offer considerable promise for the market, according to both IDC and Cientifica.

Connecting chaos and entanglement

Researchers seem to have stumbled across a link between classical and quantum physics. A July 12, 2016 University of California at Santa Barbara (UCSB) news release (also on EurekAlert) by Sonia Fernandez provides a description of both classical and quantum physics, as well as, the research that connects the two,

Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature.

“It’s kind of surprising because chaos is this totally classical concept — there’s no idea of chaos in a quantum system,” said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. “Similarly, there’s no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related.”

Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years’ worth of study including Newton’s laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory — the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions — and vacation plans — anywhere on the globe.

At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another).

And so began the continuing search for connections between the two fields.

All systems are fundamentally quantum systems, according [to] Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited.

Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them.

“The underlying physics is chaos, essentially,” he said. The molecules coming to rest — at least on the macroscopic level — is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion.

To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer. Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system.
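
For the technically minded, the qubit vocabulary can be made concrete in a few lines of Python. This is a minimal sketch of the standard textbook constructions (my own illustration, not the UCSB team's code): it builds a single-qubit superposition and a two-qubit entangled Bell state, then samples joint measurements to show the correlation entanglement produces.

```python
import numpy as np

# Single-qubit basis states |0> and |1> as amplitude vectors.
zero = np.array([1.0, 0.0])
one  = np.array([0.0, 1.0])

# Superposition: equal amplitudes for |0> and |1> in a single state.
plus = (zero + one) / np.sqrt(2)

# Entangled (Bell) state over two qubits: (|00> + |11>) / sqrt(2).
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Joint measurement probabilities: only 00 or 11 ever occurs,
# so measuring one qubit automatically fixes the other.
probs = np.abs(bell) ** 2
samples = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
print(samples)  # e.g. ['11' '00' '00' '11' ...] -- never '01' or '10'
```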

The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics — the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map.
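
The entanglement entropy being mapped is also easy to compute for a two-qubit pure state. Here is another sketch of my own, using the standard recipe: the von Neumann entropy of one qubit follows from the Schmidt coefficients, which a singular value decomposition extracts.

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy of one qubit of a two-qubit pure state.
    `state` holds four amplitudes over |00>, |01>, |10>, |11>.
    0 means unentangled; ln(2) ~ 0.693 is maximal entanglement."""
    # Reshaping into a 2x2 matrix (rows: qubit A, columns: qubit B)
    # lets the SVD return the Schmidt coefficients.
    schmidt = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    p = schmidt ** 2            # probabilities of the Schmidt terms
    p = p[p > 1e-12]            # discard numerical zeros
    return float(-np.sum(p * np.log(p)))

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
product = np.kron(zero, zero)  # unentangled |00>
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

print(entanglement_entropy(product))  # 0.0
print(entanglement_entropy(bell))     # ~0.693, i.e. ln(2)
```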

“There’s a very clear connection between entanglement and chaos in these two pictures,” said Neill. “And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization.

“What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it’s going to thermalize,” added Neill, referring to the quantum-level equilibration. “And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle.”

The study’s findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing — such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry — a quantum processor optimized for such calculations will be a very powerful tool.

“It means we can study things that are completely impossible to study right now, once we get to bigger systems,” said Neill.

Experimental link between quantum entanglement (left) and classical chaos (right) found using a small quantum computer. Photo Credit: Courtesy Image (Courtesy: UCSB)

Here’s a link to and a citation for the paper,

Ergodic dynamics and thermalization in an isolated quantum system by C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, & J. M. Martinis. Nature Physics (2016). doi:10.1038/nphys3830. Published online 11 July 2016.

This paper is behind a paywall.

Google Arts & Culture: an app for culture vultures

In its drive to take over every single aspect of our lives in the most charming, helpful, and delightful ways possible, Google has developed its Arts & Culture app.

Here’s more from a July 19, 2016 article by John Brownlee for Fast Company (Note: Links have been removed),

… Google has just unveiled a new app that makes it as easy to find the opening times of your local museum as it is to figure out who painted that bright purple Impressionist masterpiece you saw five years ago at the Louvre.

It’s called Google Arts & Culture, and it’s a tool for discovering art “from more than a thousand museums across 70 countries,” Google writes on its blog. More than just an online display of art, though, it encourages viewers to parse the works and gather insight into the visual culture we rarely encounter outside the rarified world of brick-and-mortar museums.

For instance, you can browse all of Van Gogh’s paintings chronologically to see how much more vibrant his work became over time. Or you can sort Monet’s paintings by color for a glimpse at his nuanced use of gray.

You can also read daily stories about subjects such as stolen Nazi artworks or Bruegel’s Tower of Babel. …

A July 19, 2016 post announcing the Arts & Culture app on the Google blog by Duncan Osborn provides more details,

Just as the world’s precious artworks and monuments need a touch-up to look their best, the home we’ve built to host the world’s cultural treasures online needs a lick of paint every now and then. We’re ready to pull off the dust sheets and introduce the new Google Arts & Culture website and app, by the Google Cultural Institute. The app lets you explore anything from cats in art since 200 BCE to the color red in Abstract Expressionism, and everything in between.

• Search for anything, from shoes to all things gold
• Scroll through art by time—see how Van Gogh’s works went from gloomy to vivid
• Browse by color and learn about Monet’s 50 shades of gray
• Find a new fascinating story to discover every day—today, it’s nine powerful men in heels

You can also use this app when visiting a real-life museum. For the interested, you can download it for iOS and Android.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.
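
As a stand-in for that cat-tagging workflow, here is a toy sketch in Python with scikit-learn. The two numeric 'features' are invented placeholders for real image data, so treat this as an illustration of the train-then-generalise principle rather than an actual vision pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for "thousands of accurately tagged images": each example
# is two made-up numeric features; label 1 = cat, 0 = not a cat.
cats     = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(500, 2))
not_cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 500 + [0] * 500)

model = LogisticRegression().fit(X, y)   # the "training" step

# The model now labels examples it has never been fed.
unseen = np.array([[1.8, 2.2], [-0.3, 0.1]])
print(model.predict(unseen))             # expected: [1 0]
```

The point is the final line: the model labels examples it was never fed, which is exactly the behaviour described above.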

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
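
That description translates almost line for line into code. A minimal sketch (the weights are invented, and the hard firing threshold follows the article's description; real networks learn their weights and usually use smooth activation functions instead):

```python
import numpy as np

def layer(inputs, weights, threshold=0.0):
    """One layer of artificial neurones: weight the inputs, add them
    up, and fire (output 1) only if the sum exceeds the threshold."""
    summed = weights @ inputs
    return (summed > threshold).astype(float)

x = np.array([0.8, 0.2, 0.5])       # input signal, e.g. pixel values

w1 = np.array([[ 0.4, -0.6,  0.9],  # weights into a 2-neurone layer
               [-0.2,  0.8,  0.3]])
w2 = np.array([[ 0.7,  0.7]])       # weights into a 1-neurone layer

h = layer(x, w1)      # the first layer's firing pattern...
y = layer(h, w2)      # ...becomes the input to the following layer
print(h, y)
```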

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
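
The apples-and-watermelons refinement can be caricatured in a few lines, with the caveat that the thresholds here are invented and hard-coded, whereas a trained network learns such boundaries from data:

```python
def classify_fruit(diameter_cm, skin):
    """Toy cascade: decide by diameter when it is conclusive,
    fall back to a second feature when it is not. All thresholds
    are invented for illustration."""
    if diameter_cm > 25:      # unambiguously watermelon-sized
        return "watermelon"
    if diameter_cm < 12:      # unambiguously apple-sized
        return "apple"
    # Ambiguous size (e.g. a tiny watermelon): a later "layer"
    # refines the call using colour/texture instead.
    return "watermelon" if skin == "green-striped" else "apple"

print(classify_fruit(30, "green-striped"))  # watermelon
print(classify_fruit(15, "green-striped"))  # watermelon, despite its size
print(classify_fruit(15, "red-smooth"))     # apple
```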

Video games to the rescue

For decades, limited computing power held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video-games sector, which eventually began producing graphics chips, or GPUs, with unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short-Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
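
For readers who want to see the 'boat'/'float' idea run, here is a toy PyTorch sketch (my own illustration; the layer sizes, vocabulary and training loop are invented and bear no relation to IDSIA's actual models). The LSTM must carry the opening sound through its internal memory across the shared '...oat' ending in order to label the word:

```python
import torch
import torch.nn as nn

# Map every character we need to an integer index.
chars = sorted(set("boatfl"))
idx = {c: i for i, c in enumerate(chars)}

def encode(word):
    # Shape (seq_len, batch=1), one character index per step.
    return torch.tensor([[idx[c]] for c in word])

class TinyLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 8)
        self.lstm = nn.LSTM(8, hidden)          # reads one char at a time
        self.out = nn.Linear(hidden, 2)         # class 0 = boat, 1 = float

    def forward(self, seq):
        states, _ = self.lstm(self.embed(seq))
        return self.out(states[-1])             # decide after the last char

model = TinyLSTM(len(chars))
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

data = [(encode("boat"), torch.tensor([0])),
        (encode("float"), torch.tensor([1]))]

for _ in range(200):                            # train on the two words
    for seq, label in data:
        optimiser.zero_grad()
        loss_fn(model(seq), label).backward()
        optimiser.step()

# The network answers at the end of '...oat', so it must have carried
# the opening 'b' or 'fl' through its internal memory.
print(model(encode("boat")).argmax().item())    # expected: 0
print(model(encode("float")).argmax().item())   # expected: 1
```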

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.