Tag Archives: IBM

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence, (Note:  A link has been removed)

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article coming up shortly mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience  ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

Atomic force microscope (AFM) shrunk down to a dime-sized device?

Before getting to the announcement, here’s a little background from Dexter Johnson’s Feb. 21, 2017 posting on his NanoClast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom, and he later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as $50,000 [or more].

Now, here’s the announcement in a Feb. 15, 2017 news item on Nanowerk,

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size — and, hopefully, the price tag — of a high-tech device commonly used to characterize material properties.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

A Feb. 15, 2017 University of Texas at Dallas news release, which originated the news item, provides more detail,

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — that’s roughly on the scale of individual molecules.

The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. “It can capture features that are very, very small.”

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“A classic example of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Dr. Anthony Fowler, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample’s features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two.

“The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft,” Fowler said. “Or, if you are scanning a very hard surface, you could wear down the tip.”

The MEMS-based AFM operates in “tapping mode,” which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image.

“In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with sample,” said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. “This device creates an image by maintaining the amplitude of oscillation.”
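
(A brief aside from me: the tapping-mode description above is, at heart, a feedback-control problem. Here is a minimal sketch in Python of how such an amplitude-regulating loop might work. It is purely illustrative — the gains, the crude amplitude model, and the variable names are my own assumptions, not the UT Dallas team’s implementation.)

# Toy tapping-mode feedback: regulate the cantilever's oscillation amplitude at
# a setpoint while scanning one line; the z-corrections needed to hold that
# amplitude trace out the sample's topography.
import numpy as np

free_amplitude = 1.0        # oscillation amplitude far from the surface (a.u.)
setpoint = 0.8              # target amplitude, deliberately below free_amplitude
kp, ki = 0.4, 0.05          # proportional and integral gains (assumed values)

surface = 5.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, 200))   # fake topography (nm)
z = surface[0] + setpoint   # start with the tip already engaged
integral = 0.0
recovered = []

for h in surface:
    gap = z - h                                        # mean tip-sample separation
    amplitude = min(free_amplitude, max(0.0, gap))     # crude model: the gap clips the swing
    error = setpoint - amplitude                       # too little swing -> retract (error > 0)
    integral += error
    z += kp * error + ki * integral                    # PI correction of the z position
    recovered.append(z - setpoint)                     # control signal ~ surface height

print("worst tracking error (nm):",
      float(np.max(np.abs(np.array(recovered) - surface))))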

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive.

“An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

“This is one of those technologies where, as they say, ‘If you build it, they will come.’ We anticipate finding many applications as the technology matures,” Moheimani said.

In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert’s doctoral advisor.

So, an AFM that could cost as much as $500,000 for a laboratory has been shrunk to this size and become far less expensive,

A MEMS-based atomic force microscope developed by engineers at UT Dallas is about 1 square centimeter in size (top center). Here it is attached to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Courtesy: University of Texas at Dallas

Of course, there’s still more work to be done as you’ll note when reading Dexter’s Feb. 21, 2017 posting where he features answers to questions he directed to the researchers.

Here’s a link to and a citation for the paper,

On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach by Michael G. Ruppert, Anthony G. Fowler, Mohammad Maroufi, S. O. Reza Moheimani. IEEE Journal of Microelectromechanical Systems, Volume 26, Issue 1, Feb. 2017. DOI: 10.1109/JMEMS.2016.2628890 Date of Publication: 06 December 2016

This paper is behind a paywall.

Keeping up with science is impossible: ruminations on a nanotechnology talk

I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.

Here’s an example of what can happen. George Tulevski, who gave a talk about nanotechnology in Nov. 2016 for TED@IBM, is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,

When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.

In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,

A Sept. 29, 2016 University of Cambridge press release, which originated the news item, hones in on the peculiarities of the nanoscale,

In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.

Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
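
(A quick aside to put a number on that surface-area-to-volume point: for a sphere, the ratio is 3/r, so it grows a thousandfold when you shrink a particle from 10 micrometres to 10 nanometres. Here is the arithmetic in Python — my own illustration, not part of the press release.)

# Surface-area-to-volume ratio of a sphere is 3/r, so shrinking a particle from
# 10 micrometres to 10 nanometres raises the ratio a thousandfold.
import math

for radius_nm in (10_000, 100, 10):                  # 10 um, 100 nm, 10 nm
    r = radius_nm * 1e-9                             # radius in metres
    area = 4.0 * math.pi * r ** 2
    volume = (4.0 / 3.0) * math.pi * r ** 3
    print(f"r = {radius_nm:>6} nm  ->  A/V = {area / volume:.2e} per metre")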

It is very, very easy to miss new developments no matter how tirelessly you scan for information.

Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to be dating the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative and there are many scientists who would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985), a book by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.

By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.

Getting back to Tulevski, here’s a link to his lively, informative talk:
https://www.ted.com/talks/george_tulevski_the_next_step_in_nanotechnology#t-562570

ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be, read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),

The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred back in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.

Artificial intelligence and industrial applications

This is a take on artificial intelligence that I haven’t encountered before. Sean Captain’s Nov. 15, 2016 article for Fast Company profiles industry giant GE (General Electric) and its foray into that world (Note: Links have been removed),

When you hear the term “artificial intelligence,” you may think of tech giants Amazon, Google, IBM, Microsoft, or Facebook. Industrial powerhouse General Electric is now aiming to be included on that short list. It may not have a chipper digital assistant like Cortana or Alexa. It won’t sort through selfies, but it will look through X-rays. It won’t recommend movies, but it will suggest how to care for a diesel locomotive. Today, GE announced a pair of acquisitions and new services that will bring machine learning AI to the kinds of products it’s known for, including planes, trains, X-ray machines, and power plants.

The effort started in 2015 when GE announced Predix Cloud—an online platform to network and collect data from sensors on industrial machinery such as gas turbines or windmills. At the time, GE touted the benefits of using machine learning to find patterns in sensor data that could lead to energy savings or preventative maintenance before a breakdown. Predix Cloud opened up to customers in February [2016?], but GE is still building up the AI capabilities to fulfill the promise. “We were using machine learning, but I would call it in a custom way,” says Bill Ruh, GE’s chief digital officer and CEO of its GE Digital business (GE calls its division heads CEOs). “And we hadn’t gotten to a general-purpose framework in machine learning.”

Today [Nov. 15, 2016] GE revealed the purchase of two AI companies that Ruh says will get them there. Bit Stew Systems, founded in 2005, was already doing much of what Predix Cloud promises—collecting and analyzing sensor data from power utilities, oil and gas companies, aviation, and factories. (GE Ventures has funded the company.) Customers include BC Hydro, Pacific Gas & Electric, and Scottish & Southern Energy.

The second purchase, Wise.io, is a less obvious purchase. Founded by astrophysics and AI experts using machine learning to study the heavens, the company reapplied the tech to streamlining a company’s customer support systems, picking up clients like Pinterest, Twilio, and TaskRabbit. GE believes the technology will transfer yet again, to managing industrial machines. “I think by the middle of next year we will have a full machine learning stack,” says Ruh.

Though young, Predix is growing fast, with 270 partner companies using the platform, according to GE, which expects revenue on software and services to grow over 25% this year, to more than $7 billion. Ruh calls Predix a “significant part” of that extra money. And he’s ready to brag, taking a jab at IBM Watson for being a “general-purpose” machine-learning provider without the deep knowledge of the industries it serves. “We have domain algorithms, on machine learning, that’ll know what a power plant is and all the depth of that, that a general-purpose machine learning will never really understand,” he says.

One especially dull-sounding new Predix service—Predictive Corrosion Management—touches on a very hot political issue: giant oil and gas pipeline projects. Over 400 people have been arrested in months of protests against the Dakota Access Pipeline, which would carry crude oil from North Dakota to Illinois. The issue is very complicated, but one concern of protestors is that a pipeline rupture would contaminate drinking water for the Standing Rock Sioux reservation.

“I think absolutely this is aimed at that problem. If you look at why pipelines spill, it’s corrosion,” says Ruh. “We believe that 10 years from now, we can detect a leak before it occurs and fix it before you see it happen.” Given how political battles over pipelines drag on, 10 years might not be so long to wait.

I recommend reading the article in its entirety if you have the time. And, for those of us in British Columbia, Canada, it was a surprise to see BC Hydro on the list of customers for one of GE’s new acquisitions. As well, that business about the pipelines hits home hard given the current debates (Enbridge Northern Gateway Pipelines) here. *ETA Dec. 27, 2016: This was originally edited just prior to publication to include information about the announcement by the Trudeau cabinet approving two pipelines, for Trans Mountain and Enbridge respectively, while rejecting the Northern Gateway pipeline (Canadian Broadcasting Corporation [CBC] online news, Nov. 29, 2016). I trust this second edit will stick.*

It seems GE is splashing out in a big way. There’s a second piece on Fast Company, a Nov. 16, 2016 article by Sean Captain (again) this time featuring a chat between an engineer and a robotic power plant,

We are entering the era of talking machines—and it’s about more than just asking Amazon’s Alexa to turn down the music. General Electric has built a digital assistant into its cloud service for managing power plants, jet engines, locomotives, and the other heavy equipment it builds. Over the internet, an engineer can ask a machine—even one hundreds of miles away—how it’s doing and what it needs. …

Voice controls are built on top of GE’s Digital Twin program, which uses sensor readings from machinery to create virtual models in cyberspace. “That model is constantly getting a stream of data, both operational and environmental,” says Colin Parris, VP at GE Software Research. “So it’s adapting itself to that type of data.” The machines live virtual lives online, allowing engineers to see how efficiently each is running and if they are wearing down.

GE partnered with Microsoft on the interface, using the Bing Speech API (the same tech powering the Cortana digital assistant), with special training on key terms like “rotor.” The twin had little trouble understanding the Mandarin Chinese accent of Bo Yu, one of the researchers who built the system; nor did it stumble on Parris’s Trinidad accent. Digital Twin will also work with Microsoft’s HoloLens mixed reality goggles, allowing someone to step into a 3D image of the equipment.

I can’t help wondering if there are some jobs that were eliminated with this technology.

The State of Science and Technology (S&T) and Industrial Research and Development (IR&D) in Canada

Earlier this year I featured (in a July 1, 2016 posting) the announcement of a third assessment of science and technology in Canada by the Council of Canadian Academies. At the time I speculated as to the size of the ‘expert panel’ making the assessment as they had rolled a second assessment (Industrial Research and Development) into this one on the state of science and technology. I now have my answer thanks to an Oct. 17, 2016 Council of Canadian Academies news release announcing the chairperson (received via email; Note: Links have been removed and emphases added for greater readability),

The Council of Canadian Academies (CCA) is pleased to announce Dr. Max Blouw, President and Vice-Chancellor of Wilfrid Laurier University, as Chair of the newly appointed Expert Panel on the State of Science and Technology (S&T) and Industrial Research and Development (IR&D) in Canada.

“Dr. Blouw is a widely respected leader with a strong background in research and academia,” said Eric M. Meslin, PhD, FCAHS, President and CEO of the CCA. “I am delighted he has agreed to serve as Chair for an assessment that will contribute to the current policy discussion in Canada.”

As Chair of the Expert Panel, Dr. Blouw will work with the multidisciplinary, multi-sectoral Expert Panel to address the following assessment question, referred to the CCA by Innovation, Science and Economic Development Canada (ISED):

What is the current state of science and technology and industrial research and development in Canada?

Dr. Blouw will lead the CCA Expert Panel to assess the available evidence and deliver its final report by late 2017. Members of the panel include experts from different fields of academic research, R&D, innovation, and research administration. The depth of the Panel’s experience and expertise, paired with the CCA’s rigorous assessment methodology, will ensure the most authoritative, credible, and independent response to the question.

“I am very pleased to accept the position of Chair for this assessment and I consider myself privileged to be working with such an eminent group of experts,” said Dr. Blouw. “The CCA’s previous reports on S&T and IR&D provided crucial insights into Canada’s strengths and weaknesses in these areas. I look forward to contributing to this important set of reports with new evidence and trends.”

Dr. Blouw was Vice-President Research, Associate Vice-President Research, and Professor of Biology, at the University of Northern British Columbia, before joining Wilfrid Laurier as President. Dr. Blouw served two terms as the chair of the university advisory group to Industry Canada and was a member of the adjudication panel for the Ontario Premier’s Discovery Awards, which recognize the province’s finest senior researchers. He recently chaired the International Review Committee of the NSERC Discovery Grants Program.

For a complete list of Expert Panel members, their biographies, and details on the assessment, please visit the assessment page. The CCA’s Member Academies – the Royal Society of Canada, the Canadian Academy of Engineering, and the Canadian Academy of Health Sciences – are a key source of membership for expert panels. Many experts are also Fellows of the Academies.

The Expert Panel on the State of S&T and IR&D
Max Blouw, (Chair) President and Vice-Chancellor of Wilfrid Laurier University
Luis Barreto, President, Dr. Luis Barreto & Associates and Special Advisor, NEOMED-LABS
Catherine Beaudry, Professor, Department of Mathematical and Industrial Engineering, Polytechnique Montréal
Donald Brooks, FCAHS, Professor, Pathology and Laboratory Medicine, and Chemistry, University of British Columbia
Madeleine Jean, General Manager, Prompt
Philip Jessop, FRSC, Professor, Inorganic Chemistry and Canada Research Chair in Green Chemistry, Department of Chemistry, Queen’s University; Technical Director, GreenCentre Canada
Claude Lajeunesse, FCAE, Corporate Director and Interim Chair of the Board of Directors, Atomic Energy of Canada Ltd.
Steve Liang, Associate Professor, Geomatics Engineering, University of Calgary; Director, GeoSensorWeb Laboratory; CEO, SensorUp Inc.
Robert Luke, Vice-President, Research and Innovation, OCAD University
Douglas Peers, Professor, Dean of Arts, Department of History, University of Waterloo
John M. Thompson, O.C., FCAE, Retired Executive Vice-Chairman, IBM Corporation
Anne Whitelaw, Associate Dean Research, Faculty of Fine Arts and Associate Professor, Department of Art History, Concordia University
David A. Wolfe, Professor, Political Science and Co-Director, Innovation Policy Lab, Munk School of Global Affairs, University of Toronto

You can find more information about the expert panel here and about this assessment and its predecessors here.

A few observations: given the size of the task, this panel is lean. As well, there are three women in a group of 13 (less than 25% representation) in 2016? It’s Ontario- and Québec-dominant; only BC and Alberta rate a representative on the panel. I hope they will find ways to better balance this panel and communicate that ‘balanced story’ to the rest of us. On the plus side, the panel has representatives from the humanities, arts, and industry in addition to the expected representatives from the sciences.

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016, and as part of the publicity effort, the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

Memristive capabilities from IBM (International Business Machines)

Does memristive mean it’s like a memristor but it’s not one? In any event, IBM is claiming some new ground in the world of cognitive computing (also known as neuromorphic computing).

An artistic rendering of a population of stochastic phase-change neurons which appears on the cover of Nature Nanotechnology, 3 August 2016. (Credit: IBM Research)

From an Aug. 3, 2016 news item on phys.org,

IBM scientists have created randomly spiking neurons using phase-change materials to store and process data. This demonstration marks a significant step forward in the development of energy-efficient, ultra-dense integrated neuromorphic technologies for applications in cognitive computing.

Inspired by the way the biological brain functions, scientists have theorized for decades that it should be possible to imitate the versatile computational capabilities of large populations of neurons. However, doing so at densities and with a power budget that would be comparable to those seen in biology has been a significant challenge, until now.

“We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” said IBM Fellow Evangelos Eleftheriou. “In this period, we have discovered and published new memory techniques, including projected memory, stored 3 bits per cell in phase-change memory for the first time, and now are demonstrating the powerful capabilities of phase-change-based artificial neurons, which can perform various computational primitives such as data-correlation detection and unsupervised learning at high speeds using very little energy.”

An Aug. 3, 2016 IBM news release, which originated the news item, expands on the theme,

The artificial neurons designed by IBM scientists in Zurich consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These materials are the basis of re-writable Blu-ray discs. However, the artificial neurons do not store digital information; they are analog, just like the synapses and neurons in our biological brain.

In the published demonstration, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallization of the phase-change material, ultimately causing the neuron to fire. In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This is the foundation for event-based computation and, in principle, is similar to how our brain triggers a response when we touch something hot.

Exploiting this integrate-and-fire property, even a single neuron can be used to detect patterns and discover correlations in real-time streams of event-based data. For example, in the Internet of Things, sensors can collect and analyze volumes of weather data collected at the edge for faster forecasts. The artificial neurons could be used to detect patterns in financial transactions to find discrepancies or use data from social media to discover new cultural trends in real time. Large populations of these high-speed, low-energy nano-scale neurons could also be used in neuromorphic coprocessors with co-located memory and processing units.

IBM scientists have organized hundreds of artificial neurons into populations and used them to represent fast and complex signals. Moreover, the artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoule and the average power less than 120 microwatts — for comparison, 60 million microwatts power a 60 watt lightbulb.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, a co-author of the paper.
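
For readers wondering what ‘integrate-and-fire’ looks like computationally, here is a toy sketch in Python. It is my own simplification, not IBM’s device model: each incoming pulse nudges a state variable (standing in for the degree of crystallization) toward a threshold, the threshold jitters from update to update (the stochastic part), and crossing it makes the neuron fire and reset.

# Toy stochastic integrate-and-fire neuron, loosely inspired by the phase-change
# idea: input pulses accumulate in a state variable (standing in for the degree
# of crystallization), a noisy threshold decides when the neuron fires, and
# firing resets the state back to its "amorphous" starting point.
import random

random.seed(1)

state = 0.0                 # accumulated "crystallization" (arbitrary units)
mean_threshold = 1.0        # average firing threshold
noise = 0.1                 # threshold jitter per update -> stochastic firing
leak = 0.99                 # slow relaxation of the state between pulses
spike_times = []

# A fake event stream: weak background pulses with a burst of stronger ones.
pulses = [0.01] * 200
for i in range(80, 120):
    pulses[i] = 0.15

for step, p in enumerate(pulses):
    state = state * leak + p                           # integrate the incoming pulse
    threshold = mean_threshold + random.gauss(0.0, noise)
    if state >= threshold:                             # ... and fire
        spike_times.append(step)
        state = 0.0                                    # reset ("re-amorphize")

print("spikes at:", spike_times)                       # firing clusters during the burst

Run it and the spikes cluster during the burst of stronger pulses, which is the sense in which even a single noisy neuron can flag a change in an event-based data stream.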

Here’s a link to and a citation for the paper,

Stochastic phase-change neurons by Tomas Tuma, Angeliki Pantazi, Manuel Le Gallo, Abu Sebastian, & Evangelos Eleftheriou. Nature Nanotechnology  11, 693–699 (2016) doi:10.1038/nnano.2016.70 Published online 16 May 2016

I gather IBM waited for the print version of the paper before publicizing the work. The online version is behind a paywall. For those who can’t get past the paywall, there is a video offering a demonstration of sorts,

For the interested, the US government recently issued a white paper on neuromorphic computing (my Aug. 22, 2016 post).

This team has published a paper that has a similar theme to the one in Nature Nanotechnology,

All-memristive neuromorphic computing with level-tuned neurons by Angeliki Pantazi, Stanisław Woźniak, Tomas Tuma, and Evangelos Eleftheriou. Nanotechnology, Volume 27, Number 35  DOI: 10.1088/0957-4484/27/35/355205 Published 26 July 2016

© 2016 IOP Publishing Ltd

This paper is open access.

An Aug. 18, 2016 news piece by Lisa Zyga for phys.org provides a summary of the research in the July 2016 published paper.

Book announcement: Atomistic Simulation of Quantum Transport in Nanoelectronic Devices

For anyone who’s curious about where we go after creating chips at the 7nm size, this may be the book for you. Here’s more from a July 27, 2016 news item on Nanowerk,

In the year 2015, Intel, Samsung and TSMC began to mass-market the 14nm technology called FinFETs. In the same year, IBM, working with Global Foundries, Samsung, SUNY, and various equipment suppliers, announced their success in fabricating 7nm devices. A 7nm silicon channel is about 50 atomic layers and these devices are truly atomic! It is clear that we have entered an era of atomic scale transistors. How do we model the carrier transport in such atomic scale devices?

One way is to improve existing device models by including more and more parameters. This is called the top-down approach. However, as device sizes shrink, the number of parameters grows rapidly, making the top-down approach more and more sophisticated and challenging. Most importantly, to continue Moore’s law, electronic engineers are exploring new electronic materials and new operating mechanisms. These efforts are beyond the scope of well-established device models — hence significant changes are necessary to the top-down approach.

An alternative way is called the bottom-up approach. The idea is to build up nanoelectronic devices atom by atom on a computer, and predict the transport behavior from first principles. By doing so, one is allowed to go inside atomic structures and see what happens from there. The elegance of the approach comes from its unification and generality. Everything comes out naturally from the very basic principles of quantum mechanics and nonequilibrium statistics. The bottom-up approach is complementary to the top-down approach, and is extremely useful for testing innovative ideas of future technologies.
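
To make the ‘atomic coordinates in, electric current out’ idea a little more concrete, here is a heavily simplified sketch in Python of the kind of calculation such tools perform: a short one-dimensional tight-binding chain between two semi-infinite leads, with transmission obtained from Green’s functions and lead self-energies, and the low-bias current from the Landauer formula. This is my own toy illustration of the general approach, not the actual method or code of NanoDsim or NanoDcal.

# Toy "bottom-up" transport calculation: a 5-site 1D tight-binding chain coupled
# to two semi-infinite 1D leads. Transmission T(E) comes from the device Green's
# function and the lead self-energies; the low-bias current from the Landauer formula.
import numpy as np

t = 1.0                                              # hopping energy (eV)
n = 5                                                # number of device sites
onsite = np.array([0.0, 0.2, 0.5, 0.2, 0.0])         # toy on-site energies (a small barrier)
H = np.diag(onsite) + np.diag([-t] * (n - 1), 1) + np.diag([-t] * (n - 1), -1)

def lead_surface_g(E, eta=1e-9):
    # Retarded surface Green's function of a semi-infinite 1D chain: g = 1/(E - t^2 g)
    z = E + 1j * eta
    sq = np.sqrt(z * z - 4 * t * t)
    g_minus = (z - sq) / (2 * t * t)
    g_plus = (z + sq) / (2 * t * t)
    return g_minus if g_minus.imag < 0 else g_plus   # retarded branch has Im(g) < 0

def transmission(E):
    g = lead_surface_g(E)
    sigma_L = np.zeros((n, n), dtype=complex)
    sigma_R = np.zeros((n, n), dtype=complex)
    sigma_L[0, 0] = t * t * g                        # left lead couples to the first site
    sigma_R[-1, -1] = t * t * g                      # right lead couples to the last site
    G = np.linalg.inv((E + 1e-9j) * np.eye(n) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)      # broadening matrices
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))

G0 = 7.748e-5                                        # conductance quantum 2e^2/h (siemens)
bias = 0.01                                          # volts, small enough for linear response
T_fermi = transmission(0.0)                          # Fermi energy taken as E = 0
print("T(E_F) =", round(T_fermi, 3))
print("I ~", G0 * T_fermi * bias, "A")

Real tools replace the toy chain with a density-functional-theory Hamiltonian for the actual atomic structure and iterate the electronic structure and the transport self-consistently, which is where the years of development effort go.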

A July 27, 2016 World Scientific news release on EurekAlert, which originated the news item, delves into the topics covered by the book,

In recent decades, several device simulation tools using the bottom-up approach have been developed in universities and software companies. Some examples are McDcal, Transiesta, Atomistic Tool Kit, Smeagol, NanoDcal, NanoDsim, OpenMX, GPAW and NEMO-5. These software tools are capable of predicting electric current flowing through a nanostructure. Essentially the input is the atomic coordinates and the output is the electric current. These software tools have been applied extensively to study emerging electronic materials and devices.

However, developing such a software tool is extremely difficult. It takes years of experience and requires knowledge and techniques drawn from condensed matter physics, computer science, electronic engineering, and applied mathematics. In a library, one can find books on density functional theory, books on quantum transport, books on computer programming, books on numerical algorithms, and books on device simulation. But one can hardly find a book integrating all these fields for the purpose of nanoelectronic device simulation.

“Atomistic Simulation of Quantum Transport in Nanoelectronic Devices” (With CD-ROM) fills the chasm. Authors Yu Zhu and Lei Liu have experience in both academic research and software development. Yu Zhu is the project manager of NanoDsim, and Lei Liu is the project manager of NanoDcal. The content of the book is based on Zhu and Liu’s combined R&D experience of more than forty years.

In this book, the authors conduct a pedagogical experiment, adopting a “paradigm” approach. Instead of organizing material by field, they focus on the development of one particular software tool, NanoDsim, and introduce the relevant knowledge and techniques as they are needed. The black box of NanoDsim is opened, and the complete procedure, from theoretical derivation through numerical implementation to device simulation, is illustrated. The accompanying NanoDsim source code also provides an open platform for new researchers.

I’m not recommending the book as I haven’t read it but it does seem intriguing. For anyone who wishes to purchase it, you can do that here.

I wrote about IBM and its 7nm chip in a July 15, 2015 post.

Short term exposure to engineered nanoparticles used for semiconductors not too risky?

Short term exposure means anywhere from 30 minutes to 48 hours according to the news release and the concentration is much higher than would be expected in current real life conditions. Still, this research from the University of Arizona and collaborators represents an addition to the data about engineered nanoparticles (ENP) and their possible impact on health and safety. From a Feb. 22, 2016 news item on phys.org,

Short-term exposure to engineered nanoparticles used in semiconductor manufacturing poses little risk to people or the environment, according to a widely read research paper from a University of Arizona-led research team.

Co-authored by 27 researchers from eight U.S. universities, the article, “Physical, chemical and in vitro toxicological characterization of nanoparticles in chemical mechanical planarization suspensions used in the semiconductor industry: towards environmental health and safety assessments,” was published in the Royal Society of Chemistry journal Environmental Science: Nano in May 2015. The paper, which calls for further analysis of potential toxicity for longer exposure periods, was one of the journal’s 10 most downloaded papers in 2015.

A Feb. 17, 2016 University of Arizona news release (also on EurekAlert), which originated the news item, provides more detail,

“This study is extremely relevant both for industry and for the public,” said Reyes Sierra, lead researcher of the study and professor of chemical and environmental engineering at the University of Arizona.

Small Wonder

Engineered nanoparticles are used to make semiconductors, solar panels, satellites, food packaging, food additives, batteries, baseball bats, cosmetics, sunscreen and countless other products. They also hold great promise for biomedical applications, such as cancer drug delivery systems.

Designing and studying nano-scale materials is no small feat. Most university researchers produce them in the laboratory to approximate those used in industry. But for this study, Cabot Microelectronics provided slurries of engineered nanoparticles to the researchers.

“Minus a few proprietary ingredients, our slurries were exactly the same as those used by companies like Intel and IBM,” Sierra said. Both companies collaborated on the study.

The engineers analyzed the physical, chemical and biological attributes of four metal oxide nanomaterials — ceria, alumina, and two forms of silica — commonly used in chemical mechanical planarization slurries for making semiconductors.

Clean Manufacturing

Chemical mechanical planarization is the process used to etch and polish silicon wafers to be smooth and flat so the hundreds of silicon chips attached to their surfaces will produce properly functioning circuits. Even the most infinitesimal scratch on a wafer can wreak havoc on the circuitry.

When their work is done, engineered nanoparticles are released to wastewater treatment facilities. Engineered nanoparticles are not regulated, and their prevalence in the environment is poorly understood [emphasis mine].

Researchers at the UA and around the world are studying the potential effects of these tiny and complex materials on human health and the environment.

“One of the few things we know for sure about engineered nanoparticles is that they behave very differently than other materials,” Sierra said. “For example, they have much greater surface area relative to their volume, which can make them more reactive. We don’t know whether this greater reactivity translates to enhanced toxicity.”

The researchers exposed the four nanoparticles, suspended in separate slurries, to adenocarcinoma human alveolar basal epithelial cells at doses up to 2,000 milligrams per liter for 24 to 38 hours, and to marine bacteria cells, Aliivibrio fischeri, up to 1,300 milligrams per liter for approximately 30 minutes.

These concentrations are much higher than would be expected in the environment, Sierra said.

Using a variety of techniques, including toxicity bioassays, electron microscopy, mass spectrometry and laser scattering, to measure such factors as particle size, surface area and particle composition, the researchers determined that all four nanoparticles posed low risk to the human and bacterial cells.

“These nanoparticles showed no adverse effects on the human cells or the bacteria, even at very high concentrations,” Sierra said. “The cells showed the very same behavior as cells that were not exposed to nanoparticles.”
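For context on how a result like that is typically quantified: bioassays such as the Aliivibrio fischeri test report the response of exposed cells relative to an unexposed control. The sketch below uses invented readings, not data from the paper, simply to show what “no adverse effects even at very high concentrations” looks like numerically.

```python
# A hedged sketch of how bioassay results are commonly quantified: the
# response of exposed cells expressed relative to an unexposed control.
# All readings below are made-up illustration values, not data from the paper.
control_luminescence = 1000.0                                  # unexposed A. fischeri (arbitrary units)
exposed_luminescence = {50: 990.0, 500: 985.0, 1300: 978.0}    # dose (mg/L) -> reading

for dose_mg_per_l, reading in exposed_luminescence.items():
    inhibition = 100.0 * (1.0 - reading / control_luminescence)
    print(f"{dose_mg_per_l:>5} mg/L: {inhibition:.1f}% inhibition")
# Near-zero inhibition across doses is the signature of a low-toxicity result.
```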

The authors recommended further studies to characterize potential adverse effects at longer exposures and higher concentrations.

“Think of a fish in a stream where wastewater containing nanoparticles is discharged,” Sierra said. “Exposure to the nanoparticles could be for much longer.”

Here’s a link to and a citation for the paper,

Physical, chemical, and in vitro toxicological characterization of nanoparticles in chemical mechanical planarization suspensions used in the semiconductor industry: towards environmental health and safety assessments by David Speed, Paul Westerhoff, Reyes Sierra-Alvarez, Rockford Draper, Paul Pantano, Shyam Aravamudhan, Kai Loon Chen, Kiril Hristovski, Pierre Herckes, Xiangyu Bi, Yu Yang, Chao Zeng, Lila Otero-Gonzalez, Carole Mikoryak, Blake A. Wilson, Karshak Kosaraju, Mubin Tarannum, Steven Crawford, Peng Yi, Xitong Liu, S. V. Babu, Mansour Moinpour, James Ranville, Manuel Montano, Charlie Corredor, Jonathan Posner, and Farhang Shadman. Environ. Sci.: Nano, 2015,2, 227-244 DOI: 10.1039/C5EN00046G First published online 14 May 2015

This is open access but you may need to register before reading the paper.

The bit about nanoparticles’ “… prevalence in the environment is poorly understood …” and the focus of this research reminded me of an April 2014 announcement (my April 8, 2014 posting; scroll down about 40% of the way) regarding a new research network being hosted by Arizona State University, the LCnano network, which is part of the Life Cycle of Nanomaterials project being funded by the US National Science Foundation. The network’s (LCnano) director is Paul Westerhoff, who is also one of this paper’s authors.

The sound of moving data

Scientists from the University of Sheffield (UK) and the University of Leeds (UK) have found a way to move data easily and quickly by using sound waves. From a Nov. 3, 2015 news item on ScienceDaily,

Nothing is more frustrating than watching that circle spinning in the centre of your screen while you wait for your computer to load a programme or access the data you need. Now a team from the Universities of Sheffield and Leeds may have found the answer to faster computing: sound.

The research — published in Applied Physics Letters — has shown that certain types of sound waves can move data quickly, using minimal power.

A Nov. 3, 2015 University of Sheffield news release on EurekAlert, which originated the news item, explains some of the issues with data and memory before briefly describing how sound waves could provide a solution,

The world’s 2.7 zettabytes (2.7 followed by 21 zeros) of data are mostly held on hard disk drives: magnetic disks that work like miniaturised record players, with the data read by sensors that scan over the disk’s surface as it spins. But because this involves moving parts, there are limits on how fast it can operate.

For computers to run faster, we need to create “solid-state” drives that eliminate the need for moving parts – essentially making the data move, not the device on which it’s stored. Flash-based solid-state disk drives have achieved this, and store information electrically rather than magnetically. However, while they operate much faster than normal hard disks, they last much less time before becoming unreliable, are much more expensive and still run much slower than other parts of a modern computer – limiting total speed.

Creating a magnetic solid-state drive could overcome all of these problems. One solution being developed is ‘racetrack memory’, which uses tiny magnetic wires, each one hundreds of times thinner than a human hair, down which magnetic “bits” of data run like racing cars around a track. Existing research into racetrack memory has focused on using magnetic fields or electric currents to move the data bits down the wires. However, both these options create heat and reduce power efficiency, which will limit battery life, increase energy bills and CO2 emissions.
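Conceptually, racetrack memory behaves like a shift register: the stored bits stay in the wire, and each drive pulse (a current, a magnetic field or, in the Sheffield/Leeds work, a sound wave) shifts the whole pattern one position past a fixed read/write head. Here is a toy model of that idea; it is purely illustrative and says nothing about the underlying domain-wall physics.

```python
# A toy shift-register picture of racetrack memory: domains (bits) are stored
# along a nanowire and each drive pulse shifts the whole pattern past a fixed
# read head. Purely illustrative; not a physical model of domain-wall motion.
from collections import deque

class Racetrack:
    def __init__(self, bits, head_position=0):
        self.wire = deque(bits)          # magnetic domains along the wire
        self.head = head_position        # fixed read/write head index

    def shift(self, direction=+1):
        """One drive pulse moves every domain one position; the head stays put.
        In the Sheffield/Leeds work the direction depends on the pitch of the
        surface acoustic wave."""
        self.wire.rotate(direction)

    def read(self):
        return self.wire[self.head]

track = Racetrack([1, 0, 1, 1, 0, 0, 1, 0])
out = []
for _ in range(len(track.wire)):
    out.append(track.read())
    track.shift(-1)                      # pulse the wire; the data moves, the head doesn't
print(out)                               # the stored pattern read out in sequence
```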

Dr Tom Hayward from the University of Sheffield and Professor John Cunningham from the University of Leeds have together come up with a completely new solution: passing sound waves across the surface on which the wires are fixed. They also found that the direction of data flow depends on the pitch of the sound generated – in effect they “sang” to the data to move it.

The sound used is in the form of surface acoustic waves – the same as the most destructive wave that can emanate from an earthquake. Although already harnessed for use in electronics and other areas of engineering, this is the first time surface acoustic waves have been applied to a data storage system.

Dr Hayward, from Sheffield’s Faculty of Engineering, said: “The key advantage of surface acoustic waves in this application is their ability to travel up to several centimetres without decaying, which at the nano-scale is a huge distance. Because of this, we think a single sound wave could be used to “sing” to large numbers of nanowires simultaneously, enabling us to move a lot of data using very little power. We’re now aiming to create prototype devices in which this concept can be fully tested.”

Here’s a link to and a citation for the paper,

A sound idea: Manipulating domain walls in magnetic nanowires using surface acoustic waves by J. Dean, M. T. Bryan, J. D. Cooper, A. Virbule, J. E. Cunningham, and T. J. Hayward. Appl. Phys. Lett. 107, 142405 (2015); http://dx.doi.org/10.1063/1.4932057

This is an open access paper.

Dexter Johnson in a Nov. 5, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides a few additional details about the work, such as a brief mention of IBM’s own efforts to develop racetrack memory, a non-volatile memory device.