Tag Archives: Cornell University

Ingenuity Lab (a nanotechnology initiative), the University of Alberta, and Carlo Montemagno—what is happening in Canadian universities? (2 of 2)

You can find Part 1 of the latest installment in this sad story here.

Who says Carlo Montemagno is a star nanotechnology researcher?

Unusually, and despite his eminent stature, Dr. Montemagno does not rate a Wikipedia entry. Luckily, his CV (curriculum vitae) is online (placed there by SIU), so we can get to know a bit more about the man’s accomplishments from the 63 pp. document. (Note: There are some formatting differences. Also, unusually for me, I have put my comments into the excerpted CV using square brackets, i.e., [], to signify my input.)

Carlo Montemagno, PhD
University of Alberta
Department of Chemical and Materials Engineering
and
NRC/CNRC National Institute for Nanotechnology
Edmonton, AB T6G 2V4
Canada


Educational Background

1995, Ph.D., Department of Civil Engineering and Geological Sciences, College of Earth and Mineral Sciences, University of Notre Dame

1990, M.S., Petroleum and Natural Gas Engineering, College of Earth and Mineral Sciences, Pennsylvania State University

1980, B.S., Agricultural and Biological Engineering, College of Engineering, Cornell University

Supplemental Education

1986, Practical Environmental Law, Federal Publications, Washington, DC

1985, Effective Executive Training Program, Wharton Business School, University of Pennsylvania, Philadelphia, PA

1980, Civil Engineer Corp Officer Project, CECOS & General Management School, Port Hueneme, CA

[He doesn’t seem to have taken any courses in the last 30 years.]

Professional Experience

(Select Achievements)

Over three decades of experience in shepherding complex organizations both inside and outside academia. Working as a builder, I have led organizations in government, industry and higher education during periods of change and challenge to achieved [sic] goals that many perceived to be unattainable.

University of Alberta, Edmonton AB 9/12 to present

9/12 to present, Founding Director, Ingenuity Lab [largely defunct as of April 18, 2018], Province of Alberta

8/13 to present, Director Biomaterials Program, NRC/CNRC National Institute for Nanotechnology [It’s not clear if this position still exists.]

10/13 to present, Canada Research Chair in Intelligent Nanosystems, Government of Canada [Canadian universities receive up to $200,000 for an individual Canada Research Chair. The money can be used to fund the chair in its entirety or it can be added to other monies, e.g., faculty salary. There are two tiers, one for established researchers and one for new researchers. Montemagno would have been a Tier 1 Canada Research Chair. At McGill University {a major Canadian educational institution}, for example, total compensation including salary, academic stipend, benefits, and X-coded research funds would be a maximum of $200,000 at Montemagno’s Tier 1 level. See here (scroll down about 90% of the way).]

3/13 to present, AITF iCORE Strategic Chair in BioNanotechnology and Biomimetic Systems, Province of Alberta [I cannot find this position in the current list of the University of Alberta Faculty of Science’s research chairs.]

9/12 to present, Professor, Faculty of Engineering, Chemical and Materials Engineering

Crafted and currently lead an Institute that bridges multiple organizations named Ingenuity Lab (www.ingenuitylab.ca). This Institute is a truly integrated multidisciplinary organization comprised of dedicated researchers from STEM, medicine, and the social sciences. Ingenuity Lab leverages Alberta’s strengths in medicine, engineering, science, and agriculture that are present in multiple academic enterprises across the province to solve grand challenges in the areas of energy, environment, and health and rapidly translate the solutions to the economy.

The exciting and relevant feature of Ingenuity Lab is that support comes from resources outside the normal academic funding streams. Core funding of approximately $8.6M/yr emerged by working and communicating a compelling vision directly with the Provincial Executive and Legislative branches of government. [In the material I’ve read, the money for the research was part of how Dr. Montemagno was wooed by the University of Alberta. My understanding is that he himself did not obtain the funding, which in CAD was $100M over 10 years. Perhaps the university was able to attract the funding based on Dr. Montemagno’s reputation and it was contingent on his acceptance?] I significantly augmented these base resources by developing Federal Government, and Industry partnership agreements with a suite of multinational corporations and SME’s across varied industry sectors.

Collectively, this effort is generating enhanced resource streams that support innovative academic programming, builds new research infrastructure, and enables high risk/high reward research. Just as important, it established new pathways to interact meaningfully with local and global communities.

Strategic Leadership

•Created the Ingenuity Lab organization including a governing board representing multiple academic institutions, government and industry sectors.

•Developed and executed a strategic plan to achieve near and long-term strategic objectives.

•Recruited ~100 researchers representing a wide range disciplnes. [sic] [How hard can it be to attract researchers in this job climate?]

•Built out ~36,000 S.F. of laboratory and administrative space.

•Crafted operational policy and procedures.

•Developed and implemented a unique stakeholder inclusive management strategy focused on the rapid translation of solutions to the economy.

Innovation and Economic Engagement

•Member of the Expert Panel on innovation, commissioned by the Government of Alberta, to assess opportunities, challenges and design and implementation options for Alberta’s multi-billion dollar investment to drive long-term economic growth and diversification. The developed strategy is currently being implemented. [Details?]

•Served as a representive [sic] on multiple Canadian national trade missions to Asia, United States and the Middle East. [Sounds like he got to enjoy some nice trips.]

•Instituted formal development partnerships with several multi-national corporations including Johnson & Johnson, Cenovus and Sabuto Inc. [Details?]

•Launched multiple for-profit joint ventures founded on technologies collaboratively developed with industry with funding from both private and public sources. [Details?]

Branding

•Developed and implement a communication program focused on branding of Ingenuity Lab’s unique mission, both regionally and globally, to the lay public, academia, government, and industry. [Why didn’t the communication specialist do this?]

This effort employs traditional paper, online, and social media outlets to effectively reach different demographics.

•Awarded “Best Nanotechnology Research Organization–2014” by The New Economy. [What is the New Economy? The Economist, yes. New Economy, no.]

Global Development

•Executed formal research and education partnerships with the Yonsei Institute of Convergence Technology and the Yonsei Bio-IT MicroFab Center in Korea, Mahatma Gandhi University in India, and the Italian Institute of Technology. [{1} The Yonsei Institute of Convergence Technology doesn’t have any news items prior to 2015 or after 2016, and the Ingenuity Lab and/or Carlo Montemagno did not feature in them. {2} There are six Mahatma Gandhi Universities in India. {3} The Italian Institute of Technology does not have any news listings on the English language version of its site.]

•Opened Ingenuity Lab, India in May 2015. Focused on translating 21st-century technology to enable solutions appropriate for developing nations in the Energy, Agriculture, and Health economic sectors. [I found this May 9, 2016 notice on the Asia Pacific Foundation of Canada website, noting this: “… opening of the Ingenuity Lab Research Hub at Mahatma Gandhi University in Kottayam, in the Indian state of Kerala.” There’s also this May 6, 2016 news release. I can’t find anything on the Mahatma Gandhi University Kerala website.]

•Established partnership research and development agreements with SME’s in both Israel and India.

•Developed active research collaborations with medical and educational institutions in Nepal, Qatar, India, Israel, India [sic] and the United States.

Community Outreach

•Created Young Innovators research experience program to educate, support and nurture tyro undergraduate researchers and entrepreneurs.

•Developed an educational game, “Scopey’s Nano Adventure” for iOS and Android platforms to educate 6yr to 10yr olds about Nanotechnology. [What did the children learn? Was this really part of the mandate?]

•Delivered educational science programs to the lay public at multiple, high profile events. [Which events? The ones on the trade junkets?]

University of Cincinnati, Cincinnati OH 7/06 to 8/12

7/10 to 8/12 Founding Dean, College of Engineering and Applied Science

7/09 to 6/10 Dean, College of Applied Science

7/06 to 6/10 Dean, College of Engineering

7/06 to 8/12, Geier Professor of Engineering Education, College of Engineering

7/06 to 8/12, Professor of Bioengineering, College of Engineering & College of Medicine

University of California, Los Angeles 7/01 to 6/06

5/03 to 6/06, Associate Director California Nanosystems Institute

7/02 to 6/06, Co-Director NASA Center for Cell Mimetic Space Exploration

7/02 to 6/06, Founding Department Chair, Department of Bioengineering

7/02 to 6/06, Chair Biomedical Engineering IDP

7/01 to 6/02, Chair of Academic Affairs, Biomedical Engineering IDP

7/01 to 6/06, Carol and Roy Doumani Professor of Biomedical Engineering, College of Engineering and Applied Sciences

7/01 to 6/06, Professor Mechanical and Aerospace Engineering

Recommending Montemagno

Presumably the folks at Southern Illinois University asked for recommendations from Montemagno’s previous employers. So, how did he get a recommendation from the folks in Alberta when, according to Spoerre’s April 10, 2018 article, the Ingenuity Lab had been undergoing a review by the province of Alberta’s Alberta Innovates programme since June 2017? I find it hard to believe that the folks at the University of Alberta were unaware of the review.

When you’re trying to get rid of someone, it’s pretty much standard practice that once they’ve gotten the message, you give a good recommendation to their prospective employer. The question has to be asked: how many times have employers done this for Montemagno?

Stars in their eyes

Everyone exaggerates a bit on their résumé or CV. One of my difficulties with this whole affair lies in how Montemagno can be described as a ‘nanotechnology star’. The accomplishments foregrounded on Montemagno’s CV are administrative and, if memory serves, the same holds for his University of Cincinnati entries. Given the situation with the Ingenuity Lab, I’m wondering about these accomplishments.

Was due diligence performed by SIU, the University of Alberta, or anywhere else that Montemagno worked? I realize that you’re not likely to get much information from calling up the universities where he worked previously, especially if there was a problem and they wanted to get rid of him. Still, did someone check out his degrees and his start-ups, or dig a little deeper into some of his claims?

His credentials and stated accomplishments are quite impressive and I, too, would have been dazzled. (He also lists positions at the Argonne National Laboratory and at Cornell University.) I’ve picked at some bits, but one thing that stands out to me is the move from UCLA to the University of Cincinnati. It starts with big names: UCLA, Cornell, NASA, Argonne. And then, not: University of Cincinnati, University of Alberta, Southern Illinois University. What happened?

(If anyone better versed in the world of academe and career has answers, please do add them to the comments.)

It’s tempting to think the Peter Principle was at work here. In brief, this principle states that as you keep getting better jobs based on past performance, you eventually reach a point where you can’t manage the new challenges, having risen to your level of incompetence. In accepting the offer from the University of Alberta, had Dr. Montemagno risen to his level of incompetence? Or perhaps it was just one big failure. Either way, the excuses don’t hold up under the weight of a series of misjudgments and ethical failures. Still, I’m guessing that Dr. Montemagno was hoping for a big win on a project such as this (from an Oct. 19, 2016 news release on MarketWired),

Ingenuity Lab Carbon Solutions announced today that it has been named as one of the 27 teams advancing in the $20M NRG COSIA Carbon XPRIZE. The competition sees scientists develop technologies to convert carbon dioxide emissions into products with high net value.

The Ingenuity Lab Carbon Solutions team – headquartered in Edmonton of Alberta, Canada – has made it to the second round of competition. Its team of 14 has proposed to convert CO2 waste emitted from a natural gas power plant into usable chemical products.

Ingenuity Lab Carbon Solutions is comprised of a multidisciplinary group of scientists and engineers, and was formed in the winter of 2012 to develop new approaches for the chemical industry. Ingenuity Lab Carbon Solutions is sponsored by CCEMC, and has also partnered with Ensovi for access to intellectual property and know how.

I can’t identify CCEMC with any certainty but Ensovi is one of Montemagno’s six start-up companies, as listed in his CV,

Founder and Chief Technical Officer, Ensovi, LLC., Focused on the production of low-cost bioenergy and high-value added products from sunlight using bionanotechnology, Total Funding; ~$10M, November 2010-present.

Sadly, the April 9, 2018 NRG COSIA Carbon XPRIZE news release announcing the finalists in round 3 of the competition includes an Alberta track of five teams from which the Ingenuity Lab is notably absent.

The Montemagno affair seems to be a story of hubris, greed, and good intentions. Finally, the issues associated with Dr. Montemagno give rise to another, broader question.

Is something rotten in Canada’s higher education establishment?

Starting with the University of Alberta:

it would seem pretty obvious that if you’re hiring family member(s) as part of the deal to secure a new member of faculty, you place and follow very stringent rules: no rewriting of job descriptions, no direct role in hiring or supervising, no extra benefits, no inflated salaries. In other words, no special treatment for your family, as they know at the University of Alberta since they have policies for this very situation.

Yes, universities do hire spouses (although a daughter, a nephew, and a son-in-law seems truly excessive) and even when the university follows all of the rules, there’s resentment from staff (I know because I worked in a university). There is a caveat: the resentment fades if the spouse is a ‘star’ in his or her own right or an exceptionally pleasant person. It’s also very helpful if the spouse is both.

I have to say I loved Fraser Forbes, that crazy University of Alberta engineer who thought he’d make things better by telling us that the family’s salaries had been paid out of federal and provincial funds rather than university funds. (Sigh.) Forbes was the new dean of engineering at the time of his interview in the CBC’s April 10, 2018 online article, but that no longer seems to be the case as of April 19, 2018.

Given Montemagno’s misjudgments, it seems cruel that Forbes was removed after one foolish interview. But, perhaps he didn’t want the job after all. Regardless, those people who were afraid to speak out about Dr. Montemagno cannot feel reassured by Forbes’ apparent removal.

Money, money, money

Anyone who has visited a university in Canada (and presumably the US too) has to have noticed the number of ‘sponsored’ buildings and rooms. The hunger for money seems insatiable and any sensible person knows it’s unsupportable over the long term.

The scramble for students

Mel Broitman in a Sept. 22, 2016 article for Higher Education lays out some harsh truths,

Make no mistake. It is a stunning condemnation and a “wakeup call to higher education worldwide”. The recent UNESCO report states that academic institutions are rife with corruption and turning a blind eye to malpractice right under their noses. When UNESCO, a United Nations organization created after the chaos of World War II to focus on moral and intellectual solidarity, makes such an alarming allegation, it’s sobering and not to be dismissed.

So although Canadians typically think of their society and themselves as among the more honest and transparent found anywhere, how many Canadian institutions are engaging in activities that border on dishonest and are not entirely transparent around the world?

It is overwhelmingly evident that in the last two decades we have witnessed first-hand a remarkable and callous disregard for academic ethics and standards in a scramble by Canadian universities and colleges to sign up foreign students, who represent tens of millions of dollars to their bottom lines.

We have been in a school auditorium in China and listened to the school owner tell prospective parents that the Grade 12 marks from the Canadian provincial school board program can be manipulated to secure admission for their children into Canadian universities. This, while the Canadian teachers sat oblivious to the presentation in Chinese.

In hundreds of our own interaction with students who completed the Canadian provincial school board’s curriculum in China and who achieved grades of 70% and higher in their English class have been unable to achieve even a basic level of English literacy in the written tests we have administered.   But when the largest country of origin for incoming international students and revenue is China – the Canadian universities admitting these students salivate over the dollars and focus less on due diligence.

We were once asked by a university on Canada’s west coast to review 200 applications from Saudi Arabia, in order to identify the two or three Saudi students who were actually eligible for conditional admission to that university’s undergraduate engineering program. But the proposal was scuttled by the university’s ESL department that wanted all 200 to enroll in its language courses. It insisted on and managed conditional admissions for all 200. It’s common at Canadian universities for the ESL program “tail” to wag the campus “dog” when it comes to admissions. In fact, recent Canadian government regulations have been proposed to crack down on this practice as it is an affront to academic integrity.

If you have time, do read the rest as it’s eye-opening. As for the report Broitman cites, I was not able to find it. Broitman gives a link to the report in response to one of the later comments and there’s a link in Tony Bates’s July 31, 2016 posting, but you will get a “too bad, so sad” message should you follow either link. The closest I can get to it is this Advisory Statement for Effective International Practice: Combatting Corruption and Enhancing Integrity: A Contemporary Challenge for the Quality and Credibility of Higher Education (PDF). The ‘note’ was jointly published by the (US) Council for Higher Education Accreditation (CHEA) and UNESCO.

What about the professors?

As they scramble for students, the universities appear to be cutting their ‘teaching costs’, from an April 18, 2018 article by Charles Menzies (professor of anthropology and an elected member of the UBC [University of British Columbia] Board) for THE UBYSSEY, UBC’s student newspaper,

For the first time ever at UBC the contributions of student tuition fees exceeded provincial government contributions to UBC’s core budget. This startling fact was the backdrop to a strenuous grilling of UBC’s VP Finance and Provost Peter Smailes by governors at the Friday the 13 meeting of UBC’s Board of Governors’ standing committee for finance.

Given the fact students contribute more to UBC’s budget than the provincial government, governors asked why more wasn’t being done to enhance the student experience. By way of explanation the provost reiterated UBC’s commitment to the student experience. In a back-and-forth with a governor the provost outlined a range of programs that focus on enhancing the student experience. At several points the chair of the Board would intervene and press the provost for more explanations and elaboration. For his part the provost responded in a measured and deliberate tone outlining the programs in play, conceding more could be done, and affirming the importance of students in the overall process.

As a faculty member listening to this, I wondered about the background discourse undergirding the discussion. How is focussing on a student’s experience at UBC related to our core mission: education and research? What is actually being meant by experience? Why is no one questioning the inadequacy of the government’s core contribution? What about our contingent colleagues? Our part-time precarious colleagues pick up a great deal of the teaching responsibilities across our campuses. Is there not something we can do to improve their working conditions? Remember, faculty working conditions are student learning conditions. From my perspective all these questions received short shrift.

I did take the opportunity to ask the provost, given how financially sound our university is, why more funds couldn’t be directed toward improving the living and working conditions of contingent faculty. However, this was never elaborated upon after the fact.

There is much about the university as a total institution that seems driven to cultivate experiences. A lot of Board discussion circles around ideas of reputation and brand. Who pays and how much they pay (be they governments, donors, or students) is also a big deal. Cultivating a good experience for students is central to many of these discussions.

What is this experience that everyone is talking about? I hear about classroom experience, residence experience, and student experience writ large. Very little of it seems to be specifically tied to learning (unless it’s about more engaging, entertaining, learning with technology). While I’m sure some Board colleagues will disagree with this conclusion, it does seem to me that the experience being touted is really the experience of a customer seeking fulfilment through the purchase of a service. What is seen as important is not what is learned, but the grade; not the productive struggle of learning but the validation of self in a great experience as a member of an imagined community. A good student experience very likely leads to a productive alumni relationship — one where the alumni feels good about giving money.

Inside UBC’s Board of Governors

Should anyone be under illusions as to what goes on at the highest levels of university governance, there is the telling description from Professor Jennifer Berdahl of her experience on the search committee for a new university president, following the shameful treatment of the previous president, Arvind Gupta (from Berdahl’s April 25, 2018 posting on her eponymous blog),

If Prof. Chaudhry’s [Canada Research Chair and Professor Ayesha Chaudhry’s resignation was announced in an April 25, 2018 UBYSSEY article by Alex Nguyen and Zak Vescera] experience was anything like mine on the UBC Presidential Search Committee, she quickly realized how alienating it is to be one of only three faculty members on a 21-person corporate-controlled Board. It was likely even worse for Chaudhry as a woman of color. Combining this with the Board’s shenanigans that are designed to manipulate information and process to achieve desired decisions and minimize academic voices, a sense of helpless futility can set in. [emphasis mine]

These shenanigans include [emphasis mine] strategic seating arrangements, sudden breaks during meetings when conversation veers from the desired direction, hand-written notes from the secretary to speaking members, hundreds of pages of documents sent the night before a meeting, private tête-à-têtes arranged between a powerful board member and a junior or more vulnerable one, portals for community input vetted before sharing, and planning op-eds to promote preferred perspectives. These are a few of many tricks employed to sideline unpopular voices, mostly academic ones.

It’s impossible to believe that UBC’s BoG is the only site where these shenanigans take place. The question I have is how many BoGs are doing this and how much damage are they inflicting?

Finally, getting back to my point: simultaneous with the cutbacks to teaching and other associated costs, and the manipulative, childish behaviour at BoG meetings, large amounts of money are being spent to attract ‘stars’ such as Dr. Montemagno. The idea is to attract students (and their money) to the institution where they can network with the ‘stars’. What the student actually learns does not seem to be the primary interest.

So, what kind of deals are the universities making with the ‘stars’?

The Montemagno affair provides a few hints but, in the end, I don’t know and I don’t think anyone outside the ‘sacred circle’ does either. UBC, for example, is quite secretive and, seemingly, quite liberal in its use of nondisclosure agreements (NDAs). There was the scandal a few years ago when president Arvind Gupta abruptly resigned after one year in his position. As far as I know, no one has ever gotten to the bottom of this mystery although there certainly seems to have been a fair degree of skullduggery involved.

After a previous president, Martha Cook Piper, took over the reins in an interim arrangement, Dr. Santa J. Ono (his Wikipedia entry) was hired. Interestingly, he was previously at the University of Cincinnati, one of Montemagno’s previous employers. That university’s apparent eagerness to grant Montemagno’s extras seems to have paved the way for the University of Alberta’s excesses. So, what deal did UBC make with Dr. Ono? I’m pretty sure both he and the university are covered by an NDA but there is this about his tenure as president at the University of Cincinnati (from a June 14, 2016 article by Jack Hauen for THE UBYSSEY),

… in exchange for UC not raising undergraduate tuition, he didn’t accept a salary increase or bonus for two years. And once those two years were up, he kept going: his $200,000 bonus in 2015 went to “14 different organizations and scholarships, including a campus LGBTQ centre, a local science and technology-focused high school and a program for first-generation college students,” according to the Vancouver Sun.

In 2013 he toured around the States promoting UC with a hashtag of his own creation — #HottestCollegeInAmerica — while answering anything and everything asked of him during fireside chats.

He describes himself as a “servant leader,” which is a follower of a philosophy of leadership focused primarily on “the growth and well-being of people and the communities to which they belong.”

“I see my job as working on behalf of the entire UBC community. I am working to serve you, and not vice-versa,” he said in his announcement speech this morning.

Thank goodness it’s possible to end this piece on a more or less upbeat note. Ono seems to be what my father would have called ‘a decent human being’. It’s nice to be able to include a ‘happyish’ note.

Plea

There is huge money at stake where these ‘mega’ science and technology projects are concerned. The Ingenuity Lab was a $100M investment to be paid out over 10 years, and some basic questions don’t seem to have been asked. How does this person manage money? Leaving aside any issues with an individual’s ethics and moral compass, scientists don’t usually take any courses in business and yet they are expected to manage huge budgets. Had Montemagno handled a large budget or any budget? It’s certainly not foregrounded (and I’d like to see dollar amounts) in his CV.

As well, the Ingenuity Lab was funded as a 10 year project. Had Montemagno ever stayed in one job for 10 years? Not according to his CV. His longest stint was approximately eight years when he was in the US Navy in the 1980s. Otherwise, it was five to six years, including the Ingenuity Lab stint.

Meanwhile, our universities don’t appear to be applying the rules and protocols we have in place to ensure fairness. This unseemly rush for money seems to have infected how Canadian universities attract (local, interprovincial, and, especially, international) students to pay for their education. The infection also seems to have spread into the ways ‘star’ researchers and faculty members are recruited to Canadian universities, while the bulk of the teaching staff are ‘starved’ under one pretext or another and a BoG may or may not be indulging in shenanigans designed to drive decision-making to a preordained outcome. And, for the most part, this is occurring under terms of secrecy that our intelligence agencies must envy.

In the end, I can’t be the only person wondering how all this affects our science.

An exoskeleton for a cell-sized robot

A January 3, 2018 news item on phys.org announces work on cell-sized robots,

An electricity-conducting, environment-sensing, shape-changing machine the size of a human cell? Is that even possible?

Cornell physicists Paul McEuen and Itai Cohen not only say yes, but they’ve actually built the “muscle” for one.

With postdoctoral researcher Marc Miskin at the helm, the team has made a robot exoskeleton that can rapidly change its shape upon sensing chemical or thermal changes in its environment. And, they claim, these microscale machines – equipped with electronic, photonic and chemical payloads – could become a powerful platform for robotics at the size scale of biological microorganisms.

“You could put the computational power of the spaceship Voyager onto an object the size of a cell,” Cohen said. “Then, where do you go explore?”

“We are trying to build what you might call an ‘exoskeleton’ for electronics,” said McEuen, the John A. Newman Professor of Physical Science and director of the Kavli Institute at Cornell for Nanoscale Science. “Right now, you can make little computer chips that do a lot of information-processing … but they don’t know how to move or cause something to bend.”

Cornell University has produced a video of the researchers discussing their work (about 3 mins. running time)

For those who prefer text or need it to reinforce their understanding, there’s a January 2, 2018 Cornell University news release (also on EurekAlert but dated Jan. 3, 2018) by Tom Fleischman, which originated the news item,

The machines move using a motor called a bimorph. A bimorph is an assembly of two materials – in this case, graphene and glass – that bends when driven by a stimulus like heat, a chemical reaction or an applied voltage. The shape change happens because, in the case of heat, two materials with different thermal responses expand by different amounts over the same temperature change.

As a consequence, the bimorph bends to relieve some of this strain, allowing one layer to stretch out longer than the other. By adding rigid flat panels that cannot be bent by bimorphs, the researchers localize bending to take place only in specific places, creating folds. With this concept, they are able to make a variety of folding structures ranging from tetrahedra (triangular pyramids) to cubes.

In the case of graphene and glass, the bimorphs also fold in response to chemical stimuli by driving large ions into the glass, causing it to expand. Typically this chemical activity only occurs on the very outer edge of glass when submerged in water or some other ionic fluid. Since their bimorph is only a few nanometers thick, the glass is basically all outer edge and very reactive.

“It’s a neat trick,” Miskin said, “because it’s something you can do only with these nanoscale systems.”

The bimorph is built using atomic layer deposition – chemically “painting” atomically thin layers of silicon dioxide onto aluminum over a cover slip – then wet-transferring a single atomic layer of graphene on top of the stack. The result is the thinnest bimorph ever made. One of their machines was described as being “three times larger than a red blood cell and three times smaller than a large neuron” when folded. Folding scaffolds of this size have been built before, but this group’s version has one clear advantage.

“Our devices are compatible with semiconductor manufacturing,” Cohen said. “That’s what’s making this compatible with our future vision for robotics at this scale.”

And due to graphene’s relative strength, Miskin said, it can handle the types of loads necessary for electronics applications. “If you want to build this electronics exoskeleton,” he said, “you need it to be able to produce enough force to carry the electronics. Ours does that.”

For now, these tiniest of tiny machines have no commercial application in electronics, biological sensing or anything else. But the research pushes the science of nanoscale robots forward, McEuen said.

“Right now, there are no ‘muscles’ for small-scale machines,” he said, “so we’re building the small-scale muscles.”
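Since the news release explains the bimorph principle only in words, here’s a quick numerical sketch of the underlying mechanics for anyone who wants to play with it. It uses the classic Timoshenko bimetal-strip curvature formula, the textbook treatment of a two-layer bending motor; the layer thicknesses, moduli, and strain below are my own illustrative guesses, not values from the Cornell paper.

```python
# Back-of-the-envelope bimorph curvature, after Timoshenko's classic
# bimetal-strip analysis. All numbers are illustrative assumptions,
# NOT values from the Miskin et al. paper.

def bimorph_curvature(t1, t2, E1, E2, d_strain):
    """Curvature (1/m) of a two-layer strip whose layers differ in
    strain by d_strain (e.g., from unequal thermal expansion)."""
    m = t1 / t2          # thickness ratio
    n = E1 / E2          # stiffness (Young's modulus) ratio
    h = t1 + t2          # total thickness
    return (6 * d_strain * (1 + m) ** 2 /
            (h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))))

# Hypothetical nanometre-scale layers: graphene on glass (SiO2).
t_graphene = 0.34e-9    # one atomic layer, m
t_glass = 2e-9          # a few nanometres of SiO2, m (assumed)
E_graphene = 1e12       # ~1 TPa, Pa
E_glass = 70e9          # ~70 GPa, Pa

# Differential strain from an assumed 10 K temperature change and an
# assumed thermal-expansion mismatch of 1e-6 per K.
d_strain = 1e-6 * 10

kappa = bimorph_curvature(t_graphene, t_glass, E_graphene, E_glass, d_strain)
print(f"curvature ~ {kappa:.3g} 1/m; bend radius ~ {1e6 / kappa:.3g} um")
```

The thing to notice is that the total thickness h sits in the denominator: a strip only a few nanometres thick curls tightly even for tiny strains, which is why these machines can fold at the scale of a cell.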

Here’s a link to and a citation for the paper,

Graphene-based bimorphs for micron-sized, autonomous origami machines by Marc Z. Miskin, Kyle J. Dorsey, Baris Bircan, Yimo Han, David A. Muller, Paul L. McEuen, and Itai Cohen. PNAS [Proceedings of the National Academy of Sciences] 2018 doi: 10.1073/pnas.1712889115 published ahead of print January 2, 2018

This paper is behind a paywall.

(Merry Christmas!) Japanese tree frogs inspire hardware for the highest of tech: a swarmalator

First, the frog,

[Japanese Tree Frog] By 池田正樹 (talk) masaki ikeda – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4593224

I wish they had a recording of the mating calls of Japanese tree frogs, since those calls were the inspiration for mathematicians at Cornell University (New York state, US), according to a November 17, 2017 news item on ScienceDaily,

How does the Japanese tree frog figure into the latest work of noted mathematician Steven Strogatz? As it turns out, quite prominently.

“We had read about these funny frogs that hop around and croak,” said Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. “They form patterns in space and time. Usually it’s about reproduction. And based on how the other guy or guys are croaking, they don’t want to be around another one that’s croaking at the same time as they are, because they’ll jam each other.”

A November 15, 2017 Cornell University news release (also on EurekAlert but dated November 17, 2017) by Tom Fleischman, which originated the news item, details how the calls led to ‘swarmalators’ (Note: Links have been removed),

Strogatz and Kevin O’Keeffe, Ph.D. ’17, used the curious mating ritual of male Japanese tree frogs as inspiration for their exploration of “swarmalators” – their term for systems in which both synchronization and swarming occur together.

Specifically, they considered oscillators whose phase dynamics and spatial dynamics are coupled. In the instance of the male tree frogs, they attempt to croak in exact anti-phase (one croaks while the other is silent) while moving away from a rival so as to be heard by females.

This opens up “a new class of math problems,” said Strogatz, a Stephen H. Weiss Presidential Fellow. “The question is, what do we expect to see when people start building systems like this or observing them in biology?”

Their paper, “Oscillators That Sync and Swarm,” was published Nov. 13 [2017] in Nature Communications. Strogatz and O’Keeffe – now a postdoctoral researcher with the Senseable City Lab at the Massachusetts Institute of Technology – collaborated with Hyunsuk Hong from Chonbuk National University in Jeonju, South Korea.

Swarming and synchronization both involve large, self-organizing groups of individuals interacting according to simple rules, but rarely have they been studied together, O’Keeffe said.

“No one had connected these two areas, in spite of the fact that there were all these parallels,” he said. “That was the theoretical idea that sort of seduced us, I suppose. And there were also a couple of concrete examples, which we liked – including the tree frogs.”

Studies of swarms focus on how animals move – think of birds flocking or fish schooling – while neglecting the dynamics of their internal states. Studies of synchronization do the opposite: They focus on oscillators’ internal dynamics. Strogatz long has been fascinated by fireflies’ synchrony and other similar phenomena, giving a TED Talk on the topic in 2004, but not on their motion.

“[Swarming and synchronization] are so similar, and yet they were never connected together, and it seems so obvious,” O’Keeffe said. “It’s a whole new landscape of possible behaviors that hadn’t been explored before.”

Using a pair of governing equations that assume swarmalators are free to move about, along with numerical simulations, the group found that a swarmalator system settles into one of five states:

  • Static synchrony – featuring circular symmetry, crystal-like distribution, fully synchronized in phase;
  • Static asynchrony – featuring uniform distribution, meaning that every phase occurs everywhere;
  • Static phase wave – swarmalators settle near others in a phase similar to their own, and phases are frozen at their initial values;
  • Splintered phase wave – nonstationary, disconnected clusters of distinct phases; and
  • Active phase wave – similar to bidirectional states found in biological swarms, where populations split into counter-rotating subgroups; also similar to vortex arrays formed by groups of sperm.

Through the study of simple models, the group found that the coupling of “sync” and “swarm” leads to rich patterns in both time and space, and could lead to further study of systems that exhibit this dual behavior.

“This opens up a lot of questions for many parts of science – there are a lot of things to try that people hadn’t thought of trying,” Strogatz said. “It’s science that opens doors for science. It’s inaugurating science, rather than culminating science.”
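For readers who’d like to see a swarmalator in action, here’s a minimal simulation sketch. I’m assuming the governing equations take the general form the paper describes: positions are attracted to neighbours (more strongly when phases are similar, with strength J) and repelled at short range, while phases couple with strength K, weighted by inverse distance. The parameter values, time step, and diagnostics are my own illustrative choices, not the authors’.

```python
import numpy as np

# Minimal swarmalator simulation. The model form follows the paper's
# description (spatial attraction modulated by phase similarity, plus
# short-range repulsion; distance-weighted phase coupling), but all
# parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
N = 100                      # number of swarmalators
J, K = 1.0, -0.1             # J: space-phase coupling; K: sync strength
dt, steps = 0.1, 500

x = rng.uniform(-1, 1, (N, 2))        # positions in the plane
theta = rng.uniform(0, 2 * np.pi, N)  # internal phases

for _ in range(steps):
    diff = x[None, :, :] - x[:, None, :]          # x_j - x_i
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)                # drop self-interaction
    phase_diff = theta[None, :] - theta[:, None]  # theta_j - theta_i

    # Spatial dynamics: attraction scaled by phase similarity,
    # plus inverse-square repulsion.
    attract = (1 + J * np.cos(phase_diff))[:, :, None] * diff / dist[:, :, None]
    repel = diff / (dist ** 2)[:, :, None]
    dx = (attract - repel).sum(axis=1) / N

    # Phase dynamics: sine coupling weighted by inverse distance.
    dtheta = (K / N) * (np.sin(phase_diff) / dist).sum(axis=1)

    x += dt * dx
    theta += dt * dtheta

# Crude diagnostics: spatial spread and overall phase coherence.
print("max radius ~", np.linalg.norm(x - x.mean(axis=0), axis=1).max())
print("phase coherence ~", abs(np.exp(1j * theta).mean()))
```

Sweeping J and K should move the system through the five states listed above; if I recall the paper correctly, the splintered and active phase waves show up for small negative K, which is why I chose K = -0.1 here.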

Here’s a link to and a citation for the paper,

Oscillators that sync and swarm by Kevin P. O’Keeffe, Hyunsuk Hong, & Steven H. Strogatz. Nature Communications 8, Article number: 1504 (2017) doi:10.1038/s41467-017-01190-3 Published online: 15 November 2017

This paper is open access.

One last thing: these frogs have also inspired WiFi improvements (from the Japanese tree frog Wikipedia entry; Note: Links have been removed),

Journalist Toyohiro Akiyama carried some Japanese tree frogs with him during his trip to the Mir space station in December 1990.[citation needed] Calling behavior of the species was used to create an algorithm for optimizing Wi-Fi networks.[3]

While it’s not clear in the Wikipedia entry, the frogs were part of an experiment. Here’s a link to and a citation for the paper about the experiment, along with an abstract,

The Frog in Space (FRIS) experiment onboard Space Station Mir: final report and follow-on studies by Yamashita, M.; Izumi-Kurotani, A.; Mogami, Y.; Okuno, M.; Naitoh, T.; Wassersug, R. J. Biol Sci Space. 1997 Dec;11(4):313-20.

Abstract

The “Frog in Space” (FRIS) experiment marked a major step for Japanese space life science, on the occasion of the first space flight of a Japanese cosmonaut. At the core of FRIS were six Japanese tree frogs, Hyla japonica, flown on Space Station Mir for 8 days in 1990. The behavior of these frogs was observed and recorded under microgravity. The frogs took up a “parachuting” posture when drifting in a free volume on Mir. When perched on surfaces, they typically sat with their heads bent backward. Such a peculiar posture, after long exposure to microgravity, is discussed in light of motion sickness in amphibians. Histological examinations and other studies were made on the specimens upon recovery. Some organs, such as the liver and the vertebra, showed changes as a result of space flight; others were unaffected. Studies that followed FRIS have been conducted to prepare for a second FRIS on the International Space Station. Interspecific diversity in the behavioral reactions of anurans to changes in acceleration is the major focus of these investigations. The ultimate goal of this research is to better understand how organisms have adapted to gravity through their evolution on earth.

The paper is open access.

Gecko lets go!

After all these years of writing about geckos and their adhesive properties, it seems that geckos sometimes slip or let go, theoretically. (BTW, there’s a Canadian connection: one of the researchers is at the University of Calgary in the province of Alberta.) From a July 19, 2017 Cornell University news release (also on EurekAlert),

Geckos climb vertically up trees, walls and even windows, thanks to pads on the digits of their feet that employ a huge number of tiny bristles and hooks.

Scientists have long marveled at the gecko’s adhesive capabilities, which have been described as 100 times more than what is needed to support their body weight or run quickly up a surface.

But a new theoretical study examines for the first time the limits of geckos’ gripping ability in natural contexts. The study, recently published in the Journal of the Royal Society Interface, reports there are circumstances – such as when geckos fear for their lives, leap into the air and are forced to grab on to a leaf below – when they need every bit of that fabled adhesive ability, and sometimes it’s not enough.

“Geckos are notoriously described as having incredible ability to adhere to a surface,” said Karl Niklas, professor of plant evolution at Cornell University and a co-author of the paper. The study’s lead authors, Timothy Higham at the University of California, Riverside, and Anthony Russell at the University of Calgary, Canada, both zoologists, brought Niklas into the project for his expertise on plant biomechanics.

“The paper shows that [adhesive capability] might be exaggerated, because geckos experience falls and a necessity to grip a surface like a leaf that requires a much more tenacious adhesion force; the paper shows that in some cases the adhesive ability can be exceeded,” Niklas said.

In the theoretical study, the researchers developed computer models to understand if there are common-place instances when the geckos’ ability to hold on to surfaces might be challenged, such as when canopy-dwelling geckos are being chased by a predator and are forced to leap from a tree, hoping to land on a leaf below. The researchers incorporated ecological observations, adhesive force measurements, and body size and shape measurements of museum specimens to conduct simulations. They also considered the biomechanics of the leaves, the size of the leaves and the angles on the surface that geckos might land on to determine impact forces. Calculations were also based on worst-case scenarios, where a gecko reaches a maximum speed when it is no longer accelerating, called “terminal settling velocity.”

“Leaves are cantilevered like diving boards and they go through harmonic motion [when struck], so you have to calculate the initial deflection and orientation, and then consider how does that leaf rebound and can the gecko still stay attached,” Niklas said.

The final result showed that in some cases geckos don’t have enough adhesion to save themselves, he added.

Higham and Russell are planning to travel to French Guiana to do empirical adhesive force studies on living geckos in native forests.

The basic research helps people better understand how geckos stick to surfaces, and has the potential for future applications that mimic such biological mechanisms.
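Out of curiosity, I tried putting rough numbers to the worst-case scenario the researchers describe: a gecko falling at terminal settling velocity has to be stopped over a short distance by its toe pads, and the required force can be compared against the fabled adhesive capacity of roughly 100 times body weight. This is only a sketch of the logic; every number in it (mass, drag area, stopping distance) is my own guess, not a value from the study.

```python
import math

# Worst-case landing arithmetic, loosely following the study's logic.
# Every number here is an illustrative assumption.

m = 0.005        # gecko mass, kg (5 g, assumed)
g = 9.81         # gravitational acceleration, m/s^2
rho = 1.2        # air density, kg/m^3
Cd = 1.0         # drag coefficient for a sprawled gecko (assumed)
A = 0.0015       # projected area, m^2 (assumed)
d = 0.02         # stopping distance on the leaf, m (assumed)

# Terminal settling velocity: weight balances aerodynamic drag.
v_t = math.sqrt(2 * m * g / (rho * Cd * A))

# Average force to arrest the fall over distance d (work-energy
# theorem), plus the gecko's own weight.
F_stop = m * v_t ** 2 / (2 * d) + m * g

# Fabled adhesive capacity: ~100x body weight.
F_adhesive = 100 * m * g

print(f"terminal velocity ~ {v_t:.1f} m/s")
print(f"force needed to hang on ~ {F_stop:.2f} N")
print(f"adhesive capacity ~ {F_adhesive:.2f} N")
print("gecko holds on" if F_adhesive >= F_stop else "gecko slips")
```

With these made-up numbers the required force comes out larger than the adhesive capacity, which is exactly the kind of shortfall the paper identifies; make the stopping distance longer (a more compliant, diving-board-like leaf) and the balance tips back in the gecko’s favour.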

Here’s a link to and a citation for the paper,

Leaping lizards landing on leaves: escape-induced jumps in the rainforest canopy challenge the adhesive limits of geckos by Timothy E. Higham, Anthony P. Russell, Karl J. Niklas. Journal of the Royal Society Interface June 2017 Volume 14, issue 131 DOI: 10.1098/rsif.2017.0156 Published 28 June 2017

I think the authors had some fun with that title. In any event, the paper is behind a paywall.

US Dept. of Agriculture announces its nanotechnology research grants

I don’t always stumble across the US Department of Agriculture’s nanotechnology research grant announcements but I’m always grateful when I do, as it’s good to find out about nanotechnology research taking place in the agricultural sector. From a July 21, 2017 news item on Nanowerk,

The U.S. Department of Agriculture’s (USDA) National Institute of Food and Agriculture (NIFA) today announced 13 grants totaling $4.6 million for research on the next generation of agricultural technologies and systems to meet the growing demand for food, fuel, and fiber. The grants are funded through NIFA’s Agriculture and Food Research Initiative (AFRI), authorized by the 2014 Farm Bill.

“Nanotechnology is being rapidly implemented in medicine, electronics, energy, and biotechnology, and it has huge potential to enhance the agricultural sector,” said NIFA Director Sonny Ramaswamy. “NIFA research investments can help spur nanotechnology-based improvements to ensure global nutritional security and prosperity in rural communities.”

A July 20, 2017 USDA news release, which originated the news item, lists this year’s grants and provides a brief description of a few of the newly and previously funded projects,

Fiscal year 2016 grants being announced include:

Nanotechnology for Agricultural and Food Systems

  • Kansas State University, Manhattan, Kansas, $450,200
  • Wichita State University, Wichita, Kansas, $340,000
  • University of Massachusetts, Amherst, Massachusetts, $444,550
  • University of Nevada, Las Vegas, Nevada, $150,000
  • North Dakota State University, Fargo, North Dakota, $149,000
  • Cornell University, Ithaca, New York, $455,000
  • Cornell University, Ithaca, New York, $450,200
  • Oregon State University, Corvallis, Oregon, $402,550
  • University of Pennsylvania, Philadelphia, Pennsylvania, $405,055
  • Gordon Research Conferences, West Kingston, Rhode Island, $45,000
  • The University of Tennessee, Knoxville, Tennessee, $450,200
  • Utah State University, Logan, Utah, $450,200
  • The George Washington University, Washington, D.C., $450,200

Project details can be found at the NIFA website.

Among the grants, a University of Pennsylvania project will engineer cellulose nanomaterials [emphasis mine] with high toughness for potential use in building materials, automotive components, and consumer products. A University of Nevada-Las Vegas project will develop a rapid, sensitive test to detect Salmonella typhimurium to enhance food supply safety.

Previously funded grants include an Iowa State University project in which a low-cost and disposable biosensor made out of nanoparticle graphene that can detect pesticides in soil was developed. The biosensor also has the potential for use in the biomedical, environmental, and food safety fields. University of Minnesota researchers created a sponge that uses nanotechnology to quickly absorb mercury, as well as bacterial and fungal microbes from polluted water. The sponge can be used on tap water, industrial wastewater, and in lakes. It converts contaminants into nontoxic waste that can be disposed in a landfill.

NIFA invests in and advances agricultural research, education, and extension and promotes transformative discoveries that solve societal challenges. NIFA support for the best and brightest scientists and extension personnel has resulted in user-inspired, groundbreaking discoveries that combat childhood obesity, improve and sustain rural economic growth, address water availability issues, increase food production, find new sources of energy, mitigate climate variability and ensure food safety. To learn more about NIFA’s impact on agricultural science, visit www.nifa.usda.gov/impacts, sign up for email updates or follow us on Twitter @USDA_NIFA, #NIFAImpacts.

Given my interest in nanocellulose materials (Canada was/is a leader in the production of cellulose nanocrystals [CNC] but there has been little news about Canadian research into CNC applications), I used the NIFA link to access the table listing the grants and clicked on ‘brief’ in the View column in the University of Pennsylvania row to find this description of the project,

ENGINEERING CELLULOSE NANOMATERIALS WITH HIGH TOUGHNESS

NON-TECHNICAL SUMMARY: Cellulose nanofibrils (CNFs) are natural materials with exceptional mechanical properties that can be obtained from renewable plant-based resources. CNFs are stiff, strong, and lightweight, thus they are ideal for use in structural materials. In particular, there is a significant opportunity to use CNFs to realize polymer composites with improved toughness and resistance to fracture. The overall goal of this project is to establish an understanding of fracture toughness enhancement in polymer composites reinforced with CNFs. A key outcome of this work will be process – structure – fracture property relationships for CNF-reinforced composites. The knowledge developed in this project will enable a new class of tough CNF-reinforced composite materials with applications in areas such as building materials, automotive components, and consumer products.The composite materials that will be investigated are at the convergence of nanotechnology and bio-sourced material trends. Emerging nanocellulose technologies have the potential to move biomass materials into high value-added applications and entirely new markets.

It’s not the only nanocellulose material project being funded in this round, there’s this at North Dakota State University, from the NIFA ‘brief’ project description page,

NOVEL NANOCELLULOSE BASED FIRE RETARDANT FOR POLYMER COMPOSITES

NON-TECHNICAL SUMMARY: Synthetic polymers are quite vulnerable to fire. There are 2.4 million reported fires, resulting in 7.8 billion dollars of direct property loss, an estimated 30 billion dollars of indirect loss, 29,000 civilian injuries, 101,000 firefighter injuries and 6000 civilian fatalities annually in the U.S. There is an urgent need for a safe, potent, and reliable fire retardant (FR) system that can be used in commodity polymers to reduce their flammability and protect lives and properties. The goal of this project is to develop a novel, safe and biobased FR system using agricultural and woody biomass. The project is divided into three major tasks. The first is to manufacture zinc oxide (ZnO) coated cellulose nanoparticles and evaluate their morphological, chemical, structural and thermal characteristics. The second task will be to design and manufacture polymer composites containing nano sized zinc oxide and cellulose crystals. Finally the third task will be to test the fire retardancy and mechanical properties of the composites. We believe that the presence of zinc oxide and cellulose nanocrystals in polymers will limit the oxygen supply by charring and shielding the surface, and cellulose nanocrystals will make composites strong. The outcome of this project will help in developing a safe, reliable and biobased fire retardant for consumer goods, automotive, and building products, and will help in saving human lives and reducing property damage due to fire.

One day, I hope to hear about Canadian research into applications for nanocellulose materials. (fingers crossed for good luck)

IBM to build brain-inspired AI supercomputing system equal to 64 million neurons for US Air Force

This is the second IBM computer announcement I’ve stumbled onto within the last 4 weeks or so, which seems like a veritable deluge given that the last time I wrote about IBM’s computing efforts was in an Oct. 8, 2015 posting about carbon nanotubes. I believe that, up until now, that was my most recent posting about IBM and computers.

Moving on to the news, here’s more from a June 23, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today [June 23, 2017] announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.

A June 23, 2017 IBM news release, which originated the news item, describes the proposed collaboration, which is based on IBM’s TrueNorth brain-inspired chip architecture (see my Aug. 8, 2014 posting for more about TrueNorth),

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism” where multiple data sources can be run in parallel against the same neural network and “model parallelism” where independent neural networks form an ensemble that can be run in parallel on the same data.

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

The system fits in a 4U-high (7”) space in a standard server rack and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy, while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per Watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.

The IBM TrueNorth Neurosynaptic System was originally developed under the auspices of Defense Advanced Research Projects Agency’s (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program in collaboration with Cornell University. In 2016, the TrueNorth Team received the inaugural Misha Mahowald Prize for Neuromorphic Engineering and TrueNorth was accepted into the Computer History Museum.  Research with TrueNorth is currently being performed by more than 40 universities, government labs, and industrial partners on five continents.
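The performance and scaling claims in the release are easy to sanity-check with a little arithmetic; here’s a quick sketch that uses only the numbers quoted above.

```python
# Sanity-checking the figures quoted in IBM's news release.

# Energy efficiency: >1,500 frames/s at 200 mW on CIFAR-100.
frames_per_s = 1500
power_w = 0.200
print(frames_per_s / power_w)     # 7500.0 frames/s per watt,
                                  # consistent with ">7,000"

# Scaling: 256 neurons to 64 million neurons over six years.
growth = 64_000_000 / 256         # 250,000x overall
annual = growth ** (1 / 6)        # ~7.9x per year, i.e. roughly the
print(growth, annual)             # eightfold ("800 percent") annual
                                  # increase IBM cites

# One rack: eight 4U systems at 64 million neurons each.
print(8 * 64_000_000)             # 512,000,000 neurons, matching the
                                  # "512 million neurons per rack" claim
```

The numbers hang together: the per-rack figure is just eight systems’ worth of neurons, and the quoted efficiency follows directly from the frame rate and power draw.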

There is an IBM video accompanying this news release, which seems more promotional than informational,

The IBM scientist featured in the video has a Dec. 19, 2016 posting on an IBM research blog which provides context for this collaboration with AFRL,

2016 was a big year for brain-inspired computing. My team and I proved in our paper “Convolutional networks for fast, energy-efficient neuromorphic computing” that the value of this breakthrough is that it can perform neural network inference at unprecedented ultra-low energy consumption. Simply stated, our TrueNorth chip’s non-von Neumann architecture mimics the brain’s neural architecture — giving it unprecedented efficiency and scalability over today’s computers.

The brain-inspired TrueNorth processor [is] a 70 mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4×4 configuration by exploiting TrueNorth’s native tiling.

For the scale-up systems we summarize our approach to the physical placement of neural networks to reduce intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government / corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
TrueNorth Ecosystem for Brain-Inspired Computing: Scalable Systems, Software, and Applications

TrueNorth, once loaded with a neural network model, can be used in real-time as a sensory streaming inference engine, performing rapid and accurate classifications while using minimal energy. TrueNorth’s 1 million neurons consume only 70 mW, which is like having a neurosynaptic supercomputer the size of a postage stamp that can run on a smartphone battery for a week.
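
[That “smartphone battery for a week” claim checks out on the back of an envelope, if you assume a typical battery of roughly 12 watt-hours (my assumption; the posting doesn’t specify):]

```python
chip_power_w = 0.070   # TrueNorth's quoted 70 mW draw
battery_wh = 12.0      # assumed smartphone battery capacity (not from the post)
print(battery_wh / chip_power_w / 24)  # ~7.1 days of continuous operation
```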

Recently, in collaboration with Lawrence Livermore National Laboratory, U.S. Air Force Research Laboratory, and U.S. Army Research Laboratory, we published our fifth paper at IEEE’s prestigious Supercomputing 2016 conference that summarizes the results of the team’s 12.5-year journey (see the associated graphic) to unlock this value proposition. [keep scrolling for the graphic]

Applying the mind of a chip

Three of our partners, U.S. Army Research Lab, U.S. Air Force Research Lab and Lawrence Livermore National Lab, contributed sections to the Supercomputing paper, each showcasing a different TrueNorth system, as summarized by my colleagues Jun Sawada, Brian Taba, Pallab Datta, and Ben Shaw:

U.S. Army Research Lab (ARL) prototyped a computational offloading scheme to illustrate how TrueNorth’s low power profile enables computation at the point of data collection. Using the single-chip NS1e board and an Android tablet, ARL researchers created a demonstration system that allows visitors to their lab to hand write arithmetic expressions on the tablet, with handwriting streamed to the NS1e for character recognition, and recognized characters sent back to the tablet for arithmetic calculation.

Of course, the point here is not to make a handwriting calculator, it is to show how TrueNorth’s low power and real time pattern recognition might be deployed at the point of data collection to reduce latency, complexity and transmission bandwidth, as well as back-end data storage requirements in distributed systems.

U.S. Air Force Research Lab (AFRL) contributed another prototype application utilizing a TrueNorth scale-out system to perform a data-parallel text extraction and recognition task. In this application, an image of a document is segmented into individual characters that are streamed to AFRL’s NS1e16 TrueNorth system for parallel character recognition. Classification results are then sent to an inference-based natural language model to reconstruct words and sentences. This system can process 16,000 characters per second! AFRL plans to implement the word and sentence inference algorithms on TrueNorth, as well.

Lawrence Livermore National Lab (LLNL) has a 16-chip NS16e scale-up system to explore the potential of post-von Neumann computation through larger neural models and more complex algorithms, enabled by the native tiling characteristics of the TrueNorth chip. For the Supercomputing paper, they contributed a single-chip application performing in-situ process monitoring in an additive manufacturing process. LLNL trained a TrueNorth network to recognize seven classes related to track weld quality in welds produced by a selective laser melting machine. Real-time weld quality determination allows for closed-loop process improvement and immediate rejection of defective parts. This is one of several applications LLNL is developing to showcase TrueNorth as a scalable platform for low-power, real-time inference.

[downloaded from https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/] Courtesy: IBM

I gather this 2017 announcement is the latest milestone on the TrueNorth journey.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
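
[In code, the node just described is only a few lines. This sketch is mine, not MIT’s, and real networks typically use smoother activation functions than a hard threshold:]

```python
def node_output(inputs, weights, threshold):
    """One node: weight each incoming value, sum the products, and
    'fire' only if the sum clears the threshold; otherwise pass nothing."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else None

# e.g. node_output([0.5, 0.2], [1.0, -0.4], threshold=0.3) fires with 0.42
```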

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
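
[Here’s that training idea reduced to a toy, single-node example, using a perceptron-style update as a stand-in for the backpropagation that real deep networks use; the learning rate and epoch count are arbitrary choices of mine:]

```python
import random

def train(samples, n_inputs, lr=0.1, epochs=100):
    """Start from random weights and a random threshold (bias), then nudge
    them whenever the node's output disagrees with an example's label."""
    w = [random.uniform(-1, 1) for _ in range(n_inputs)]
    b = random.uniform(-1, 1)  # plays the role of the (negated) threshold
    for _ in range(epochs):
        for x, label in samples:  # label is 0 or 1
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            error = label - pred
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# e.g. train([([0, 0], 0), ([1, 1], 1)], n_inputs=2) learns to separate the two
```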

Minds and machines

The neural nets described by McCulloch and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note: I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Using open-source software for a 3D look at nanomaterials

A 3-D view of a hyperbranched nanoparticle with complex structure, made possible by Tomviz 1.0, a new open-source software platform developed by researchers at the University of Michigan, Cornell University and Kitware Inc. Image credit: Robert Hovden, Michigan Engineering

An April 3, 2017 news item on ScienceDaily describes this new and freely available software,

Now it’s possible for anyone to see and share 3-D nanoscale imagery with a new open-source software platform developed by researchers at the University of Michigan, Cornell University and open-source software company Kitware Inc.

Tomviz 1.0 is the first open-source tool that enables researchers to easily create 3-D images from electron tomography data, then share and manipulate those images in a single platform.

A March 31, 2017 University of Michigan news release, which originated the news item, expands on the theme,

The world of nanoscale materials—things 100 nanometers and smaller—is an important place for scientists and engineers who are designing the stuff of the future: semiconductors, metal alloys and other advanced materials.

Seeing in 3-D how nanoscale flecks of platinum arrange themselves in a car’s catalytic converter, for example, or how spiky dendrites can cause short circuits inside lithium-ion batteries, could spur advances like safer, longer-lasting batteries; lighter, more fuel efficient cars; and more powerful computers.

“3-D nanoscale imagery is useful in a variety of fields, including the auto industry, semiconductors and even geology,” said Robert Hovden, U-M assistant professor of materials science and engineering and one of the creators of the program. “Now you don’t have to be a tomography expert to work with these images in a meaningful way.”

Tomviz solves a key challenge: the difficulty of interpreting data from the electron microscopes that examine nanoscale objects in 3-D. The machines shoot electron beams through nanoparticles from different angles. The beams form projections as they travel through the object, a bit like nanoscale shadow puppets.

Once the machine does its work, it’s up to researchers to piece hundreds of shadows into a single three-dimensional image. It’s as difficult as it sounds—an art as well as a science. Like staining a traditional microscope slide, researchers often add shading or color to 3-D images to highlight certain attributes.
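
[To make the “shadow puppets” idea concrete, here’s a toy 2-D reconstruction using scikit-image’s Radon transform as a stand-in for the microscope. This illustrates the principle only; it is not Tomviz’s actual pipeline:]

```python
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:80, 50:90] = 1.0  # a fake "nanoparticle" to image

angles = np.linspace(0., 180., 90, endpoint=False)
sinogram = radon(phantom, theta=angles)          # "shadows" from many angles
reconstruction = iradon(sinogram, theta=angles)  # piece them back into an image
```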

A 3-D view of a particle used in a hydrogen fuel cell powered vehicle. The gray structure is carbon; the red and blue particles are nanoscale flecks of platinum. The image is made possible by Tomviz 1.0. Image credit: Elliot Padget, Cornell University

Traditionally, researchers have had to rely on a hodgepodge of proprietary software to do the heavy lifting. The work is expensive and time-consuming; so much so that even big companies like automakers struggle with it. And once a 3-D image is created, it’s often impossible for other researchers to reproduce it or to share it with others.

Tomviz dramatically simplifies the process and reduces the amount of time and computing power needed to make it happen, its designers say. It also enables researchers to readily collaborate by sharing all the steps that went into creating a given image and enabling them to make tweaks of their own.

“These images are far different from the 3-D graphics you’d see at a movie theater, which are essentially cleverly lit surfaces,” Hovden said. “Tomviz explores both the surface and the interior of a nanoscale object, with detailed information about its density and structure. In some cases, we can see individual atoms.”

Key to making Tomviz happen was getting tomography experts and software developers together to collaborate, Hovden said. Their first challenge was gaining access to a large volume of high-quality tomography data. The team rallied experts at Cornell, Berkeley Lab and UCLA to contribute their data, and also created their own using U-M’s microscopy center. To turn raw data into code, Hovden’s team worked with open-source software maker Kitware.

With the release of Tomviz 1.0, Hovden is looking toward the next stages of the project, where he hopes to integrate the software directly with microscopes. He believes that U-M’s atom probe tomography facilities and expertise could help him design a version that could ultimately uncover the chemistry of all atoms in 3-D.

“We are unlocking access to see new 3D nanomaterials that will power the next generation of technology,” Hovden said. “I’m very interested in pushing the boundaries of understanding materials in 3-D.”

There is a video about Tomviz,

You can download Tomviz from here and you can find Kitware here. Happy 3D nanomaterial viewing!

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
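
[A toy simulation shows why the leaf matters: without a sugar source the osmotic flow decays as incoming water dilutes the phloem, while a steady leaf-like supply sustains it. This is my own back-of-envelope model with made-up constants, not the paper’s:]

```python
R, T = 8.314, 298.0  # gas constant (J/mol/K) and room temperature (K)
Lp = 1e-12           # assumed membrane permeability (made-up constant)

def phloem_flow(sugar_mol, volume_l, leaf_supply, steps=1000):
    """Toy osmotic pump: inflow tracks the phloem's sugar concentration
    (van 't Hoff: osmotic pressure = concentration * R * T)."""
    flows = []
    for _ in range(steps):
        conc = sugar_mol / volume_l       # mol/L in the phloem channel
        flow = Lp * conc * 1000 * R * T   # osmotic inflow (arbitrary units)
        volume_l += flow * 1e3            # incoming water dilutes the sugar
        sugar_mol += leaf_supply          # zero without a "leaf"
        flows.append(flow)
    return flows

decaying = phloem_flow(0.1, 1.0, leaf_supply=0.0)    # flow falls off over time
sustained = phloem_flow(0.1, 1.0, leaf_supply=1e-4)  # flow keeps going
```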

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017) doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.