Tag Archives: Institute of Electrical and Electronics Engineers

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as the agency likely to host the 2018 AI for Good Global Summit. But it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind devoted to beneficial AI, and has ballooned this year to include 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the Université de Montréal was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes.

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development-oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out in response to my surprise over the ITU’s role with regard to this AI initiative that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members as well as other member entities, which gives them tremendous breadth of reach. As well, the organization, founded originally in 1865 as the International Telegraph Convention, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action​​-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts distant from Geneva, you will want to take the nine-hour difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.
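For anyone who wants to double-check that nine-hour gap for a specific session, a quick Python sketch using only the standard library’s zoneinfo module does the conversion (the 9 AM start time is from the media advisory above; the zone names are standard IANA identifiers):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+ standard library

# A session starting at 9 AM in Geneva on the summit's first day...
geneva_start = datetime(2018, 5, 15, 9, 0, tzinfo=ZoneInfo("Europe/Zurich"))

# ...converted to Pacific time for West Coast viewers.
pacific_start = geneva_start.astimezone(ZoneInfo("America/Vancouver"))
print(pacific_start.strftime("%Y-%m-%d %H:%M %Z"))  # 2018-05-15 00:00 PDT
```

In other words, a 9 AM Geneva start lands at midnight Pacific Daylight Time, which is why catching the recordings later may be the more realistic option.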

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

How small can a carbon nanotube get before it stops being ‘electrical’?

Research that began as an attempt to get reproducible electronic measurements yielded some unexpected results, according to a January 3, 2018 news item on phys.org,

Carbon nanotubes bound for electronics not only need to be as clean as possible to maximize their utility in next-generation nanoscale devices, but contact effects may limit how small a nano device can be, according to researchers at the Energy Safety Research Institute (ESRI) at Swansea University [UK] in collaboration with researchers at Rice University [US].

ESRI Director Andrew Barron, also a professor at Rice University in the USA, and his team have figured out how to get nanotubes clean enough to obtain reproducible electronic measurements and in the process not only explained why the electrical properties of nanotubes have historically been so difficult to measure consistently, but have shown that there may be a limit to how “nano” future electronic devices can be using carbon nanotubes.

A January 3, 2018 Swansea University press release (also on EurekAlert), which originated the news item, explains the work in more detail,

Like any normal wire, semiconducting nanotubes are progressively more resistant to current along their length. But conductivity measurements of nanotubes over the years have been anything but consistent. The ESRI team wanted to know why.

“We are interested in the creation of nanotube based conductors, and while people have been able to make wires their conduction has not met expectations. We were interested in determining the basic science behind the variability observed by other researchers.”

They discovered that hard-to-remove contaminants — leftover iron catalyst, carbon and water — could easily skew the results of conductivity tests. Burning them away, Barron said, creates new possibilities for carbon nanotubes in nanoscale electronics.

The new study appears in the American Chemical Society journal Nano Letters.

The researchers first made multiwalled carbon nanotubes between 40 and 200 nanometers in diameter and up to 30 microns long. They then either heated the nanotubes in a vacuum or bombarded them with argon ions to clean their surfaces.

They tested individual nanotubes the same way one would test any electrical conductor: By touching them with two probes to see how much current passes through the material from one tip to the other. In this case, their tungsten probes were attached to a scanning tunneling microscope.

In clean nanotubes, resistance got progressively stronger as the distance increased, as it should. But the results were skewed when the probes encountered surface contaminants, which increased the electric field strength at the tip. And when measurements were taken within 4 microns of each other, regions of depleted conductivity caused by contaminants overlapped, further scrambling the results.

“We think this is why there’s such inconsistency in the literature,” Barron said.

“If nanotubes are to be the next generation lightweight conductor, then consistent results, batch-to-batch and sample-to-sample, are needed for devices such as motors and generators as well as power systems.”

Annealing the nanotubes in a vacuum above 200 degrees Celsius (392 degrees Fahrenheit) reduced surface contamination, but not enough to eliminate inconsistent results, they found. Argon ion bombardment also cleaned the tubes, but led to an increase in defects that degrade conductivity.

Ultimately, they reported, vacuum annealing the nanotubes at 500 degrees Celsius (932 degrees Fahrenheit) reduced contamination enough to measure resistance accurately.

Until now, Barron said, engineers who use nanotube fibers or films in devices have modified the material through doping or other means to get the conductive properties they require. But if the source nanotubes are sufficiently decontaminated, they should be able to get the right conductivity by simply putting their contacts in the right spot.

“A key result of our work was that if contacts on a nanotube are less than 1 micron apart, the electronic properties of the nanotube change from conductor to semiconductor, due to the presence of overlapping depletion zones,” said Barron. “This is a potential limiting factor on the size of nanotube-based electronic devices; it would limit the application of Moore’s law to nanotube devices.”

Chris Barnett of Swansea is lead author of the paper. Co-authors are Cathren Gowenlock and Kathryn Welsby, and Rice alumnus Alvin Orbaek White of Swansea. Barron is the Sêr Cymru Chair of Low Carbon Energy and Environment at Swansea and the Charles W. Duncan Jr.–Welch Professor of Chemistry and a professor of materials science and nanoengineering at Rice.

The Welsh Government Sêr Cymru National Research Network in Advanced Engineering and Materials, the Sêr Cymru Chair Program, the Office of Naval Research and the Robert A. Welch Foundation supported the research.
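The measurement picture the release describes, linear resistance along a clean tube but skewed, inconsistent readings whenever contaminant-induced depletion zones sit between the probes, can be sketched as a toy model. All numbers below are illustrative assumptions of mine, not values from the paper:

```python
# Toy model (illustrative numbers only, not from the Nano Letters paper):
# a clean nanotube's two-probe resistance grows linearly with probe
# separation, while each contaminant-induced depletion zone between the
# probes adds a large, position-dependent penalty that skews the reading.

def measured_resistance(separation_um, resistance_per_um=1.0,
                        contact_resistance=5.0, depletion_zones=0,
                        zone_penalty=50.0):
    """Linear bulk term plus a fixed contact term, plus a penalty
    for each depletion zone lying between the two probes."""
    return (contact_resistance
            + resistance_per_um * separation_um
            + zone_penalty * depletion_zones)

# Clean tube: resistance rises steadily with distance, as expected.
clean = [measured_resistance(d) for d in (5, 10, 20)]

# Contaminated tube: the same separations give inflated values that no
# longer track distance, mimicking the inconsistency in the literature.
dirty = [measured_resistance(d, depletion_zones=n)
         for d, n in ((5, 2), (10, 1), (20, 3))]

print(clean)  # [10.0, 15.0, 25.0]
print(dirty)  # [110.0, 65.0, 175.0]
```

The point of the sketch is only that a separation-independent contamination term can swamp the linear trend, which is consistent with the release’s explanation of why conductivity measurements varied so much before cleaning.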

Rice University published a January 4, 2018 news release (also on EurekAlert) that is almost (95%) identical to the press release from Swansea. That’s a bit unusual, as collaborating institutions usually like to focus on their unique contributions to the research, hence, multiple news/press releases.

Dexter Johnson, in a January 11, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), adds a detail or two while writing in an accessible style.

Here’s a link to and a citation for the paper,

Spatial and Contamination-Dependent Electrical Properties of Carbon Nanotubes by Chris J. Barnett, Cathren E. Gowenlock, Kathryn Welsby, Alvin Orbaek White, and Andrew R. Barron. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b03390 Publication Date (Web): December 19, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective, there’s been a change taking place in the nano blogosphere over the last few years: there are fewer blogs, and fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. Foresight Institute’s Nanodot blog (like FrogHeart) has expanded its range of topics to include artificial intelligence and more. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it too had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog at timharper.net.

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is any measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed up with Science Writers and Communicators of Canada to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site), and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence won Canada’s Favourite Blog 2017 (Dec. 6, 2017 article by Alina Fisher for Science Borealis), and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the finalists list? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors, and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada appointed its first chief science advisor in many years: Dr. Mona Nemer stepped into her position in Fall 2017, with the official announcement made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine) by Paul Dufour on the Canadian Science Policy Centre website (November 9, 2017).

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, “Council of Canadian Academies and science policy for Alberta.” By the time the report was published, the endeavour had been transformed into Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this, but I imagine scientists will be supportive as it means more money, and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, and I’m not sure it’s still operational, but it was called the Premier’s Technology Council. To my knowledge, there is no ministry or other agency that is focused primarily or even partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, Interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then, there has been a veritable gold-rush mentality with regard to artificial intelligence in Canada, with one announcement after the next about various corporations opening new offices in Toronto or Montréal.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, such as something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back, I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece, but due to bandwidth issues I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long-time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain, but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, have developed new probes that have mechanical compliances approaching that of the brain tissue and are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is a growing interest in developing long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often cause damage around the tissue they encompass. Additionally, while it is possible for the conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances  15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Tempest in a teapot or a sign of things to come? UK’s National Graphene Institute kerfuffle

A scandal-in-the-offing, intellectual property, miffed academics, a chortling businessman, graphene, and much more make this a fascinating story.

Before launching into the main attractions, those unfamiliar with the UK graphene effort might find this background information useful. Graphene was first isolated at the University of Manchester in 2004 by scientists Andre Geim* and Konstantin Novoselov, Russian immigrants, both of whom have since become Nobel laureates and knights of the realm. The excitement in the UK and elsewhere is due to graphene’s extraordinary properties, which could lead to transparent electronics, foldable/bendable electronics, better implants, efficient and inexpensive (they hope) water filters, and more. The UK government has invested a lot of money in graphene, as has the European Union (1B Euros in the Graphene Flagship), in the hope that huge economic benefits will be reaped.

Dexter Johnson’s March 15, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides details about the situation (Note: Links have been removed),

A technology that, a year ago, was being lauded as the “first commercially viable consumer product” using graphene now appears to be caught up in an imbroglio over who owns its intellectual property rights. The resulting controversy has left the research institute behind the technology in a bit of a public relations quagmire.

The venerable UK publication The Sunday Times reported this week on what appeared to be a mutiny occurring at the National Graphene Institute (NGI) located at the University of Manchester. Researchers at the NGI had reportedly stayed away from working at the institute’s gleaming new $71 million research facility over fears that their research was going to end up in the hands of foreign companies, in particular a Taiwan-based company called BGT Materials.

The “first commercially viable consumer product” noted in Dexter’s posting was a graphene-based lightbulb which was announced by the NGI to much loud crowing in March 2015 (see my March 30, 2015 posting). The company producing the lightbulb was announced as “… Graphene Lighting PLC is a spin-out based on a strategic partnership with the National Graphene Institute (NGI) at The University of Manchester to create graphene applications.” There was no mention of BGT.

Dexter describes the situation from the BGT perspective (from his March 15, 2016 posting), Note: Links have been removed,

… BGT did not demur when asked by the Times whether it owned the technology. In fact, Chung Ping Lai, BGT’s CEO, claimed it was his company that had invented the technology for the light bulb and not the NGI. The Times report further stated that Lai controls all the key patents and claims to be delighted with his joint venture with the university. “I believe in luck and I have had luck in Manchester,” Lai told the Times.

With companies outside the UK holding majority stakes in the companies spun out of the NGI—allowing them to claim ownership of the technologies developed at the institute—one is left to wonder what was the purpose of the £50 million (US $79 million) earmarked for graphene research in the UK more than four years ago? Was it to develop a local economy based around graphene—a “Graphene Valley”, if you will? Or was it to prop up the local construction industry through the building of shiny new buildings that reportedly few people occupy? That’s the charge leveled by Andre Geim, Nobel laureate for his discovery of graphene, and NGI’s shining star. Geim reportedly described the new NGI building as: “Money put in the British building industry rather than science.”

Dexter ends his March 15, 2016 posting with an observation that will seem familiar to Canadians,

Now, it seems the government’s eagerness to invest in graphene research—or at least, the facilities for conducting that research—might have ended up bringing it to the same place as its previous lack of investment: the science is done in the UK and the exploitation of the technology is done elsewhere.

The March 13, 2016 Sunday Times article [ETA on April 3, 2016: This article is now behind a paywall] by Tom Harper, Jon Ungoed-Thomas and Michael Sheridan, which seems to be the source of Dexter’s posting, takes a more partisan approach,

ACADEMICS are boycotting a top research facility after a company linked to China was given access to lucrative confidential material from one of Britain’s greatest scientific breakthroughs.

Some scientists at Manchester University working on graphene, a wonder substance 200 times stronger than steel, refuse to work at the new £61m national institution, set up to find ways to exploit the material, amid concerns over a deal struck between senior university management and BGT Materials.

The academics are concerned that the National Graphene Institute (NGI), which was opened last year by George Osborne, the chancellor, and forms one of the key planks of his “northern powerhouse” industrial strategy, does not have the necessary safeguards to protect their confidential research, which could revolutionise the electronics, energy, health and building industries.

BGT, which is controlled by a Taiwanese businessman, subsequently agreed to work with a Chinese manufacturing company and university to develop similar graphene technology.

BGT says its work in Manchester has been successful and it is “offensive” and “untrue” to suggest that it would unfairly use intellectual property. The university say there is no evidence “whatsoever” of unfair use of confidential information. Manchester says it is understandable that some scientists are cautious about the collaborative environment of the new institute. But one senior academic said the arrangement with BGT had caused the university’s graphene research to descend into “complete anarchy”.

The academic said: “The NGI is a national facility, and why should we use it for a company, which is not even an English [owned] company? How much [intellectual property] is staying in England and how much is going to Taiwan?”

The row highlights concerns that the UK has dawdled in developing one of its greatest discoveries. Nearly 50% of graphene-related patents have been filed in China, and just 1% in Britain.

Manchester signed a £5m “research collaboration agreement” with BGT Materials in October 2013. Although the company is controlled by a Taiwanese businessman, Chung-ping Lai, the university does have a 17.5% shareholding.

Manchester claimed that the commercial deal would “attract a significant number of jobs to the city” and “benefit the UK economy”.

However, an investigation by The Sunday Times has established:

Only four jobs have been created as a result of the deal and BGT has not paid the full £5m due under the agreement after two projects were cancelled.

Pictures sent to The Sunday Times by a source at the university last month show that the offices at the NGI [National Graphene Institute], which can accommodate 120 staff, were deserted.

British-based businessmen working with graphene have also told The Sunday Times of their concerns about the institute’s information security. Tim Harper, a Manchester-based graphene entrepreneur, said: “We looked at locating there [at the NGI] but we take intellectual property extremely seriously and it is a problem locating in such a facility.

“If you don’t have control over your computer systems or the keys to your lab, then you’ve got a problem.”

I recommend reading Dexter’s post and the Sunday Times article as they provide some compelling insight into the UK situation vis-à-vis nanotechnology, science, and innovation.

*’Gheim’ corrected to ‘Geim’ on March 30, 2016.

Portable graphene-based supercapacitor comes to market soon

Dexter Johnson’s excitement is palpable in a Feb. 25, 2016 posting (on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) about a graphene-based supercapacitor,

At long last, there is a company that is about to launch a commercially available product based on a graphene-enabled supercapacitor. A UK-based startup called Zap&Go has found a way to exploit the attractive properties of graphene for supercapacitors to fabricate a portable charger and expects to make it available to consumers this year.

While graphene’s theoretical surface area of 2630 square meters per gram is pretty high, and would presumably bode well for increased capacity, this density is only possible with a single, standalone graphene sheet. And therein lies the rub: you can’t actually use a standalone sheet for the electrode of a supercapacitor because it will result in a very low volumetric capacitance. To get to a real-world device, you have to stack the sheets on top of each other. When you do this, the surface area is reduced.

Nonetheless graphene does have two main benefits going for it in supercapacitors: its ability to be structured into smaller sizes and its high conductance.

It is these qualities that Zap&Go have exploited for their portable charger. While there are other rechargers on the market, they are built around Li-ion batteries that take a long time to charge up and still present some small danger when packed up for traveling.
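Dexter’s surface-area argument can be made concrete with a rough back-of-the-envelope estimate. The 2630 m²/g figure comes from his post; the ~21 µF/cm² double-layer capacitance is a commonly quoted textbook value I’m assuming here, not something from the article:

```python
# Rough upper bound on graphene's gravimetric capacitance,
# assuming a single, fully accessible graphene sheet.
SPECIFIC_SURFACE_AREA_M2_PER_G = 2630  # theoretical, single sheet (from the post)
DOUBLE_LAYER_CAP_UF_PER_CM2 = 21       # typical double-layer value (assumed)

area_cm2_per_g = SPECIFIC_SURFACE_AREA_M2_PER_G * 1e4        # m^2 -> cm^2
cap_f_per_g = area_cm2_per_g * DOUBLE_LAYER_CAP_UF_PER_CM2 * 1e-6  # µF -> F

print(f"Theoretical gravimetric capacitance: {cap_f_per_g:.0f} F/g")
# Stacking sheets reduces the accessible surface area, which is why real
# electrodes fall well short of this single-sheet ceiling.
```

That ~550 F/g ceiling is exactly what stacking erodes, per Dexter’s point about volumetric capacitance.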

While your devices will still take just as long to charge, there are some compelling benefits,

You can find out more in Dexter’s posting, or on Zap&Go’s website, or on the company’s IndieGoGo crowdfunding campaign page (it’s closed and they more than reached their goal).

The charger is available for pre-ordering and will be delivered in Summer 2016, according to the company’s website store.

One final comment: I’m not endorsing this product; in other words, caveat emptor (buyer beware).

Replace silicon with black phosphorus instead of graphene?

I have two black phosphorus pieces. The first comes out of ‘La belle province’ or, as it’s more usually called, Québec (Canada).

Foundational research on phosphorene

There’s a lot of interest in replacing silicon for a number of reasons and, increasingly, there’s interest in finding an alternative to graphene.

A July 7, 2015 news item on Nanotechnology Now describes a new material for use as transistors,

As scientists continue to hunt for a material that will make it possible to pack more transistors on a chip, new research from McGill University and Université de Montréal adds to evidence that black phosphorus could emerge as a strong candidate.

In a study published today in Nature Communications, the researchers report that when electrons move in a phosphorus transistor, they do so only in two dimensions. The finding suggests that black phosphorus could help engineers surmount one of the big challenges for future electronics: designing energy-efficient transistors.

A July 7, 2015 McGill University news release on EurekAlert, which originated the news item, describes the field of 2D materials and the research into black phosphorus and its 2D version, phosphorene (analogous to graphite and graphene),

“Transistors work more efficiently when they are thin, with electrons moving in only two dimensions,” says Thomas Szkopek, an associate professor in McGill’s Department of Electrical and Computer Engineering and senior author of the new study. “Nothing gets thinner than a single layer of atoms.”

In 2004, physicists at the University of Manchester in the U.K. first isolated and explored the remarkable properties of graphene — a one-atom-thick layer of carbon. Since then scientists have rushed to investigate a range of other two-dimensional materials. One of those is black phosphorus, a form of phosphorus that is similar to graphite and can be separated easily into single atomic layers, known as phosphorene.

Phosphorene has sparked growing interest because it overcomes many of the challenges of using graphene in electronics. Unlike graphene, which acts like a metal, black phosphorus is a natural semiconductor: it can be readily switched on and off.

“To lower the operating voltage of transistors, and thereby reduce the heat they generate, we have to get closer and closer to designing the transistor at the atomic level,” Szkopek says. “The toolbox of the future for transistor designers will require a variety of atomic-layered materials: an ideal semiconductor, an ideal metal, and an ideal dielectric. All three components must be optimized for a well designed transistor. Black phosphorus fills the semiconducting-material role.”

The work resulted from a multidisciplinary collaboration among Szkopek’s nanoelectronics research group, the nanoscience lab of McGill Physics Prof. Guillaume Gervais, and the nanostructures research group of Prof. Richard Martel in Université de Montréal’s Department of Chemistry.

To examine how the electrons move in a phosphorus transistor, the researchers observed them under the influence of a magnetic field in experiments performed at the National High Magnetic Field Laboratory in Tallahassee, FL, the largest and highest-powered magnet laboratory in the world. This research “provides important insights into the fundamental physics that dictate the behavior of black phosphorus,” says Tim Murphy, DC Field Facility Director at the Florida facility.

“What’s surprising in these results is that the electrons are able to be pulled into a sheet of charge which is two-dimensional, even though they occupy a volume that is several atomic layers in thickness,” Szkopek says. That finding is significant because it could potentially facilitate manufacturing the material — though at this point “no one knows how to manufacture this material on a large scale.”

“There is a great emerging interest around the world in black phosphorus,” Szkopek says. “We are still a long way from seeing atomic layer transistors in a commercial product, but we have now moved one step closer.”

Here’s a link to and a citation for the paper,

Two-dimensional magnetotransport in a black phosphorus naked quantum well by V. Tayari, N. Hemsworth, I. Fakih, A. Favron, E. Gaufrès, G. Gervais, R. Martel & T. Szkopek. Nature Communications 6, Article number: 7702 doi:10.1038/ncomms8702 Published 07 July 2015

This is an open access paper.

The second piece of research into black phosphorus is courtesy of an international collaboration.

A phosphorene transistor

A July 9, 2015 Technical University of Munich (TUM) press release (also on EurekAlert) describes the formation of a phosphorene transistor made possible by the introduction of arsenic,

Chemists at the Technische Universität München (TUM) have now developed a semiconducting material in which individual phosphorus atoms are replaced by arsenic. In a collaborative international effort, American colleagues have built the first field-effect transistors from the new material.

For many decades silicon has formed the basis of modern electronics. To date silicon technology could provide ever tinier transistors for smaller and smaller devices. But the size of silicon transistors is reaching its physical limit. Also, consumers would like to have flexible devices, devices that can be incorporated into clothing and the like. However, silicon is hard and brittle. All this has triggered a race for new materials that might one day replace silicon.

Black arsenic phosphorus might be such a material. Like graphene, which consists of a single layer of carbon atoms, it forms extremely thin layers. The array of possible applications ranges from transistors and sensors to mechanically flexible semiconductor devices. Unlike graphene, whose electronic properties are similar to those of metals, black arsenic phosphorus behaves like a semiconductor.

The press release goes on to provide more detail about the collaboration and the research,

A cooperation between the Technical University of Munich and the University of Regensburg on the German side and the University of Southern California (USC) and Yale University in the United States has now, for the first time, produced a field effect transistor made of black arsenic phosphorus. The compounds were synthesized by Marianne Koepf at the laboratory of the research group for Synthesis and Characterization of Innovative Materials at the TUM. The field effect transistors were built and characterized by a group headed by Professor Zhou and Dr. Liu at the Department of Electrical Engineering at USC.

The new technology developed at TUM allows the synthesis of black arsenic phosphorus without high pressure. This requires less energy and is cheaper. The gap between valence and conduction bands can be precisely controlled by adjusting the arsenic concentration. “This allows us to produce materials with previously unattainable electronic and optical properties in an energy window that was hitherto inaccessible,” says Professor Tom Nilges, head of the research group for Synthesis and Characterization of Innovative Materials.

Detectors for infrared

With an arsenic concentration of 83 percent the material exhibits an extremely small band gap of only 0.15 electron volts, making it predestined for sensors which can detect long wavelength infrared radiation. LiDAR (Light Detection and Ranging) sensors operate in this wavelength range, for example. They are used, among other things, as distance sensors in automobiles. Another application is the measurement of dust particles and trace gases in environmental monitoring.

A further interesting aspect of these new, two-dimensional semiconductors is their anisotropic electronic and optical behavior. The material exhibits different characteristics along the x- and y-axes in the same plane. To produce graphene-like films the material can be peeled off in ultra-thin layers. The thinnest films obtained so far are only two atomic layers thick.
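The 0.15 eV band gap quoted above pins down the detector’s cutoff wavelength via the standard photon-energy relation λ = hc/E — a textbook conversion, not something from the press release itself:

```python
# Convert a semiconductor band gap (in eV) to the longest wavelength (in µm)
# of light it can absorb, using lambda = h*c / E.
H_C_EV_UM = 1.23984  # h*c expressed in eV·µm (CODATA value, rounded)

def cutoff_wavelength_um(band_gap_ev: float) -> float:
    """Longest absorbable wavelength for a given band gap."""
    return H_C_EV_UM / band_gap_ev

print(f"{cutoff_wavelength_um(0.15):.1f} um")  # ~8.3 µm, i.e. long-wave infrared
```

An 8.3 µm cutoff sits squarely in the long-wave infrared band, consistent with the sensing applications the release describes.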

Here’s a link to and a citation for the paper,

Black Arsenic–Phosphorus: Layered Anisotropic Infrared Semiconductors with Highly Tunable Compositions and Properties by Bilu Liu, Marianne Köpf, Ahmad N. Abbas, Xiaomu Wang, Qiushi Guo, Yichen Jia, Fengnian Xia, Richard Weihrich, Frederik Bachhuber, Florian Pielnhofer, Han Wang, Rohan Dhall, Stephen B. Cronin, Mingyuan Ge, Xin Fang, Tom Nilges, and Chongwu Zhou. Advanced Materials. DOI: 10.1002/adma.201501758 Article first published online: 25 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Dexter Johnson, on his Nanoclast blog (on the Institute of Electrical and Electronics Engineers website), adds more information about black phosphorus and its electrical properties in his July 9, 2015 posting about the Germany/US collaboration (Note: Links have been removed),

Black phosphorus has been around for about 100 years, but recently it has been synthesized as a two-dimensional material—dubbed phosphorene in reference to its two-dimensional cousin, graphene. Black phosphorus is quite attractive for electronic applications like field-effect transistors because of its inherent band gap and it is one of the few 2-D materials to be a natively p-type semiconductor.

One final comment: I notice the Germany-US work was published weeks prior to the Canadian research, suggesting that the TUM July 9, 2015 press release is an attempt to capitalize on the interest generated by the Canadian research. That’s a smart move.

A more complex memristor: from two terminals to three for brain-like computing

Researchers have developed a more complex memristor device than previously achieved, according to an April 6, 2015 Northwestern University news release (also on EurekAlert),

Researchers are always searching for improved technologies, but the most efficient computer possible already exists. It can learn and adapt without needing to be programmed or updated. It has nearly limitless memory, is difficult to crash, and works at extremely fast speeds. It’s not a Mac or a PC; it’s the human brain. And scientists around the world want to mimic its abilities.

Both academic and industrial laboratories are working to develop computers that operate more like the human brain. Instead of operating like a conventional, digital system, these new devices could potentially function more like a network of neurons.

“Computers are very impressive in many ways, but they’re not equal to the mind,” said Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence in Northwestern University’s McCormick School of Engineering. “Neurons can achieve very complicated computation with very low power consumption compared to a digital computer.”

A team of Northwestern researchers, including Hersam, has accomplished a new step forward in electronics that could bring brain-like computing closer to reality. The team’s work advances memory resistors, or “memristors,” which are resistors in a circuit that “remember” how much current has flowed through them.

“Memristors could be used as a memory element in an integrated circuit or computer,” Hersam said. “Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if you lose power.”

Current computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable. But there’s a problem: memristors are two-terminal electronic devices, which can only control one voltage channel. Hersam wanted to transform it into a three-terminal device, allowing it to be used in more complex electronic circuits and systems.
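The two-terminal behaviour described above can be sketched with the classic linear ion-drift model — a textbook idealization (with illustrative parameter values of my choosing), not the Northwestern MoS2 device. The device’s resistance depends on an internal state that integrates past current, and that state persists when power is removed:

```python
# Minimal two-terminal memristor sketch (linear ion-drift idealization).
# The internal state w in [0, 1] integrates past current and sets the
# resistance between R_OFF (w = 0) and R_ON (w = 1).

R_ON, R_OFF = 100.0, 16_000.0  # ohms, illustrative values
MU = 1.0                       # state change per coulomb, illustrative

def step(w: float, v: float, dt: float) -> float:
    """Advance the internal state under applied voltage v for a time step dt."""
    r = R_ON * w + R_OFF * (1.0 - w)            # resistance for current state
    i = v / r                                   # Ohm's law
    return min(max(w + MU * i * dt, 0.0), 1.0)  # drift, clipped at the bounds

w = 0.1
for _ in range(1000):          # a sustained positive bias lowers the resistance
    w = step(w, 1.0, dt=1.0)
print(f"state after biasing: {w:.3f}")

# With the bias removed, no current flows and the state is retained --
# the non-volatility Hersam describes.
assert step(w, 0.0, dt=1.0) == w
```

A gate electrode, the Northwestern team’s third terminal, would add a knob this two-terminal model lacks: a way to tune the resistance range without driving current through the device.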

The memristor is of some interest to a number of other parties, prominent amongst them the University of Michigan’s Professor Wei Lu and HP (Hewlett Packard) Labs, both of whom are mentioned in one of my more recent memristor pieces, a June 26, 2014 post.

Getting back to Northwestern,

Hersam and his team met this challenge by using single-layer molybdenum disulfide (MoS2), an atomically thin, two-dimensional nanomaterial semiconductor. Much like the way fibers are arranged in wood, atoms are arranged in a certain direction–called “grains”–within a material. The sheet of MoS2 that Hersam used has a well-defined grain boundary, which is the interface where two different grains come together.

“Because the atoms are not in the same orientation, there are unsatisfied chemical bonds at that interface,” Hersam explained. “These grain boundaries influence the flow of current, so they can serve as a means of tuning resistance.”

When a large electric field is applied, the grain boundary literally moves, causing a change in resistance. By using MoS2 with this grain boundary defect instead of the typical metal-oxide-metal memristor structure, the team presented a novel three-terminal memristive device that is widely tunable with a gate electrode.

“With a memristor that can be tuned with a third electrode, we have the possibility to realize a function you could not previously achieve,” Hersam said. “A three-terminal memristor has been proposed as a means of realizing brain-like computing. We are now actively exploring this possibility in the laboratory.”

Here’s a link to and a citation for the paper,

Gate-tunable memristive phenomena mediated by grain boundaries in single-layer MoS2 by Vinod K. Sangwan, Deep Jariwala, In Soo Kim, Kan-Sheng Chen, Tobin J. Marks, Lincoln J. Lauhon, & Mark C. Hersam. Nature Nanotechnology (2015) doi:10.1038/nnano.2015.56 Published online 06 April 2015

This paper is behind a paywall but there is a free preview available through ReadCube Access.

Dexter Johnson has written about this latest memristor development in an April 9, 2015 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he notes this (Note: A link has been removed),

The memristor seems to generate fairly polarized debate, especially here on this website in the comments on stories covering the technology. The controversy seems to fall along the lines that the device that HP Labs’ Stan Williams and Greg Snider developed back in 2008 doesn’t exactly line up with the original theory of the memristor proposed by Leon Chua back in 1971.

It seems the ‘debate’ has evolved from issues about how the memristor is categorized. I wonder if there’s still discussion about whether or not HP Labs is attempting to develop a patent thicket of sorts.

Graphene light bulb to hit UK stores later in 2015

I gather people at the University of Manchester are quite happy about the graphene light bulb which their spin-off (or spin-out) company, Graphene Lighting PLC, is due to deliver to the market sometime later in 2015. From a March 30, 2015 news item by Nancy Owano on phys.org (Note: A link has been removed),

The BBC reported on Saturday [March 28, 2015] that a graphene bulb is set for shops, to go on sale this year. UK developers said their graphene bulb will be the first commercially viable consumer product using the super-strong carbon; bulb was developed by a Canadian-financed company, Graphene Lighting, one of whose directors is Prof Colin Bailey at the University of Manchester. [emphasis mine]

I have not been able to track down the Canadian connection mentioned (*never in any detail) in some of the stories. A March 30, 2015 University of Manchester press release makes no mention of Canada or any other country in its announcement (Note: Links have been removed),

A graphene lightbulb with lower energy emissions, longer lifetime and lower manufacturing costs has been launched thanks to a University of Manchester research and innovation partnership.

Graphene Lighting PLC is a spin-out based on a strategic partnership with the National Graphene Institute (NGI) at The University of Manchester to create graphene applications.

The UK-registered company will produce the lightbulb, which is expected to perform significantly better and last longer than traditional LED bulbs.

It is expected that the graphene lightbulbs will be on the shelves in a matter of months, at a competitive cost.

The University of Manchester has a stake in Graphene Lighting PLC to ensure that the University benefits from commercial applications coming out of the NGI.

The graphene lightbulb is believed to be the first commercial application of graphene to emerge from the UK, and is the first application from the £61m NGI, which only opened last week.

Graphene was isolated at The University of Manchester in 2004 by Sir Andre Geim and Sir Kostya Novoselov, earning them the Nobel prize for Physics in 2010. The University is the home of graphene, with more than 200 researchers and an unrivalled breadth of graphene and 2D material research projects.

The NGI will see academic and commercial partners working side by side on graphene applications of the future. It is funded by £38m from the Engineering and Physical Sciences Research Council (EPSRC) and £23m from the European Regional Development Fund (ERDF).

There are currently more than 35 companies partnering with the NGI. In 2017, the University will open the Graphene Engineering Innovation Centre (GEIC), which will accelerate the process of bringing products to market.

Professor Colin Bailey, Deputy President and Deputy Vice-Chancellor of The University of Manchester said: “This lightbulb shows that graphene products are becoming a reality, just a little more than a decade after it was first isolated – a very short time in scientific terms.

“This is just the start. Our partners are looking at a range of exciting applications, all of which started right here in Manchester. It is very exciting that the NGI has launched its first product despite barely opening its doors yet.”

James Baker, Graphene Business Director, added: “The graphene lightbulb is proof of how partnering with the NGI can deliver real-life products which could be used by millions of people.

“This shows how The University of Manchester is leading the way not only in world-class graphene research but in commercialisation as well.”

Chancellor George Osborne and Sir Kostya Novoselov with the graphene lightbulb Courtesy: University of Manchester


This graphene light bulb announcement comes on the heels of the university’s official opening of its National Graphene Institute mentioned here in a March 26, 2015 post.

Getting back to graphene and light bulbs, Judy Lin in a March 30, 2015 post on LEDinside.com offers some details such as proposed pricing and more,

These new bulbs will be priced at GBP 15 (US $22.23) each.

The dimmable bulb incorporates a filament-shaped LED coated in graphene, which was designed by Manchester University, where the strong carbon material was first discovered.

$22 seems like an expensive light bulb but my opinion could change depending on how long it lasts. ‘Longer lasting’ (and other variants of the term) seen in the news stories and press release are not meaningful to me. Perhaps someone could specify how many hours and under what conditions?

* ‘but’ removed as it was unnecessary, April 3, 2015.

ETA April 3, 2015: Dexter Johnson has provided a thought-provoking commentary about this graphene light bulb in an April 2, 2015 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: Links have been removed,

The big story this week in graphene, after taking into account the discovery of “grapene,” [Dexter’s April Fool’s Day joke posting] has to be the furor that has surrounded news that a graphene-coated light bulb was to be the “first commercially viable consumer product” using graphene.

Since the product is not expected to be on store shelves until next year, “commercially viable” is both a good hedge and somewhat short on meaning. The list of companies with a commercially viable graphene-based product is substantial, graphene-based conductive inks and graphene-based lithium-ion anodes come immediately to mind. Even that list neglects products that are already commercially available, never mind “viable”, like Head’s graphene-based tennis racquets.

Dexter goes on to ask more pointed questions and shares the answers he got from Daniel Cochlin, the graphene communications and marketing manager at the University of Manchester. I confess I got caught up in the hype. It’s always good to have someone bringing things back down to earth. Thank you Dexter!

Institute of Electrical and Electronics Engineers’ (IEEE) Nano 2015 conference call for papers

The Institute of Electrical and Electronics Engineers is holding its Nano 2015 conference in Rome, Italy from July 27 – 30, 2015. This is the second call for papers (I missed the first call),

We invite you to submit papers, proposals for tutorials, workshops to the International IEEE Conference on Nanotechnology which will be held in Rome, July 27-30, 2015. (See www.ieeenano15.org). The deadline for abstract submission is 15th March 2015.

This conference is the 15th edition of the flagship annual event of the IEEE Nanotechnology Council. IEEE NANO 2015 will provide an international forum for the exchange of technical information in a wide variety of branches of Nanotechnology and Nanoscience, through feature tutorials, workshops, track sessions and special sessions; plenary and invited talks from the most renowned scientists and engineers; exhibition of software, hardware, equipment, materials, services and literature. With its fantastic setting in the centre of the Eternal City, at a walking distance from Colosseum and from the most exciting locations of ancient Rome, IEEE NANO 2015 will provide a perfect forum for inspiration, interactions and exchange of ideas.

All accepted papers will be published by IEEE Press, included in IEEE Xplore and Indexed by EI. Selected conference papers will be considered for publication on IEEE Transactions on Nanotechnology.

Important Dates

March 15, 2015:       Tutorial/Workshop Proposal
March 15, 2015:        Abstract Submission
April 15, 2015:           Acceptance Notification

May 15, 2015:            Full Paper Submission
June 1, 2015:              End of early Registration

Topics for contributing papers include but are not limited to:

Nanosensors, Actuators
Smart systems
Nanomaterials
Graphene-Based Materials
Nano-energy, Energy Harvesting
Nanobiology, Nanobiotechnology
Nanomedicine
Nanoelectronics
Nano-optoelectronics
MEMS/NEMS
Nano-optics, Nano-photonics
Nano-electromagnetics, NanoEMC
Nanofabrication, Nanoassemblies
Nanopackaging
Nanorobotics, Nanomanipulation
Nanometrology
Nanocharacterization
Nanofluidics
Nanomagnetics
Multiscale Modeling and Simulation

PLENARY SPEAKERS (See www.ieeenano15.org/program/plenary-speakers)
George Bourianoff, Intel (USA)
Michael Grätzel, EPFL (Switzerland)
Roberto Cingolani, IIT (Italy)
Rodney Ruoff, UNIST (Korea)
Takao Someya, Tokyo Univ. (Japan)
Theresa Mayer, Pennsylvania State Univ. (USA)
Zhong Lin Wang, Georgia Tech (USA)

Proposed SPECIAL SESSIONS
1) Graphene
2) Nanoelectromagnetics and Nano-EMC
3) Nanometrology and device characterization
4) Nanotechnology for microwave and THz
5) Memristor
Part 1: Resistive switching: from fundamentals to production
Part 2: Memristive nanodevices and nanocircuits
6) Nanophononics
7) Drug Toxicity Mitigation. Nanotechnology-Enabled Strategies
8) Conformable Electronics and E-Skin
9) Organic Neurooptoelectronics

There are more details about the call in this PDF. Good luck!