
A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on from my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), this is a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there's a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there's been a United Nations meeting in Geneva, Switzerland held from August 27 – 31, 2018 about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it's not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety. For anyone who thinks the discussion about ethics and killer 'bots is new or limited to military use, there's my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that's the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August  29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software-only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab's August 22, 2018 article for Fast Company, I don't think this AI designer is better than Balenciaga, but from the pictures I've seen the designs are as good, and the neural network behind them does present some intriguing possibilities (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.
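For readers who want a sense of what "training the pix2pix neural net" involves, here's a minimal, illustrative PyTorch sketch of the pix2pix objective: a conditional adversarial loss combined with an L1 reconstruction term. The tiny networks below are placeholders of my own devising (the published pix2pix model uses a U-Net generator and a PatchGAN discriminator), and this is emphatically not Robbie Barat's actual code, just a sketch of the idea.

```python
# Illustrative sketch of the pix2pix objective (conditional GAN + L1 reconstruction).
# The tiny generator/discriminator are placeholders, not the real U-Net/PatchGAN.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):          # x: input image (e.g., a pose or sketch)
        return self.net(x)         # returns a generated "outfit" image

class TinyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x, y):       # judges (input, output) image pairs jointly
        return self.net(torch.cat([x, y], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lam = 100.0                        # weight on the L1 term, as in the pix2pix paper

def training_step(x, y):
    """One step on a batch of paired images x (input) and y (target)."""
    # update discriminator: real pairs vs. generated pairs
    fake = G(x).detach()
    d_real, d_fake = D(x, y), D(x, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # update generator: fool D while staying close to the target in L1
    fake = G(x)
    d_fake = D(x, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# usage sketch: in practice x and y would be paired crops from lookbook/runway images
x = torch.randn(4, 3, 64, 64)
y = torch.randn(4, 3, 64, 64)
print(training_step(x, y))
```

The L1 term keeps the output close to real garments while the adversarial term pushes it toward plausible (if sometimes creepy) images, which is roughly why the results look fashionable but off-kilter.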

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent is a March 27, 2017 posting about his and his android's participation in the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental setup when the subject (left) talks with ERICA (right). Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it's never a simple back and forward progression of information," states Tatsuya Kawahara of Kyoto University's Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
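Out of curiosity about what 'backchanneling' might look like in software, here's a toy Python sketch of my own; it is not the ERICA system (whose responses are learned from a counseling dialogue corpus), just an illustration of how the timing, lexical form, and prosody cues Kawahara mentions could trigger a listener response.

```python
# Toy 'backchanneling' policy based on the three qualities named above:
# timing (pause length), lexical form (what to say), prosody (falling pitch).
# Purely illustrative; the real system learns responses from dialogue data.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class ListenerCue:
    pause_sec: float     # timing: silence since the speaker stopped talking
    pitch_slope: float   # prosody: negative values = falling pitch at phrase end
    last_word: str       # lexical form: the most recent word recognized

def choose_response(cue: ListenerCue) -> Optional[str]:
    """Return a short listener response, or None to keep listening silently."""
    if cue.pause_sec < 0.3:
        return None                                   # speaker probably isn't finished
    if cue.pitch_slope < -0.5 and cue.pause_sec < 1.0:
        return random.choice(["uh-huh", "mm-hm"])     # backchannel to keep momentum
    if cue.pause_sec >= 1.0:
        # 'attentive listening': partially repeat the last word to invite elaboration
        return f"{cue.last_word}?"
    return None

# usage sketch
print(choose_response(ListenerCue(pause_sec=0.6, pitch_slope=-0.8, last_word="Kyoto")))
print(choose_response(ListenerCue(pause_sec=1.4, pitch_slope=0.1, last_word="robots")))
```

The interesting part of the Kyoto/Osaka/ATR work is that these decisions are learned from data rather than hand-coded rules like the ones above.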

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can't pull off. It's led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren't going to outdo healthcare workers' ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There's so much news about robots, artificial intelligence, androids, and cyborgs that it's hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robot/artificial intelligence are often used interchangeably and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.
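To make Wong's quoted definition of machine learning a little more concrete, here's a minimal example of my own (nothing from Wong's post): a few lines of Python that "train" a straight-line model by repeatedly adjusting its two parameters to reduce the error on some data, which is exactly the self-modification the definition describes.

```python
# Minimal machine learning example: fit y = w*x + b to data by gradient descent.
# The training loop repeatedly adjusts w and b to shrink the prediction error.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 9.0)]   # (x, y) pairs; roughly y = 2x + 1

w, b = 0.0, 0.0                 # start with an uninformed model
lr = 0.01                       # learning rate: how big each adjustment is

for step in range(2000):
    # gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w            # nudge the parameters to reduce the error
    b -= lr * grad_b

print(f"learned model: y = {w:.2f}x + {b:.2f}")   # ends up close to y = 2x + 1
```

Deep learning applies the same idea, but with millions of parameters arranged in layered networks instead of just w and b.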

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed, although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan's April 25, 2018 article for Information Week, which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) the Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Extinction of Experience (EOE)

‘Extinction of experience’ is a bit of an attention getter, isn't it? Well, it worked for me when I first saw it and it seems particularly apt after putting together my August 9, 2018 posting about the 2018 SIGGRAPH conference, in particular the ‘Previews’, where I featured a synthetic sound project. Here's a little more about EOE from a July 3, 2018 news item on phys.org,

Opportunities for people to interact with nature have declined over the past century, as most people now live in urban areas and spend much of their time indoors. And while adults are not only experiencing nature less, they are also less likely to take their children outdoors and shape their attitudes toward nature, creating a negative cycle. In 1978, ecologist Robert Pyle coined the phrase “extinction of experience” (EOE) to describe this alienation from nature, and argued that this process is one of the greatest causes of the biodiversity crisis. Four decades later, the question arises: How can we break the cycle and begin to reverse EOE?

A July 3, 2018 North Carolina Museum of Natural Sciences news release, which originated the news item, delves further,

In citizen science programs, people participate in real research, helping scientists conduct studies on local, regional and even global scales. In a study released today, researchers from the North Carolina Museum of Natural Sciences, North Carolina State University, Rutgers University, and the Technion-Israel Institute of Technology propose nature-based citizen science as a means to reconnect people to nature. For people to take the next step and develop a desire to preserve nature, they need to not only go outdoors or learn about nature, but to develop emotional connections to and empathy for nature. Because citizen science programs usually involve data collection, they encourage participants to search for, observe and investigate natural elements around them. According to co-author Caren Cooper, assistant head of the Biodiversity Lab at the N.C. Museum of Natural Sciences, “Nature-based citizen science provides a structure and purpose that might help people notice nature around them and appreciate it in their daily lives.”

To search for evidence of these patterns across programs and the ability of citizen science to reach non-scientific audiences, the researchers studied the participants of citizen science programs. They reviewed 975 papers, analyzed results from studies that included participants’ motivations and/or outcomes in nature-oriented programs, and found that nature-based citizen science fosters cognitive and emotional aspects of experiences in nature, giving it the potential to reverse EOE.

The eMammal citizen science programs offer children opportunities to use technology to observe nature in new ways. Photo: Matt Zeher.

The N.C. Museum of Natural Sciences’ Stephanie Schuttler, lead author on the study and scientist on the eMammal citizen science camera trapping program, saw anecdotal evidence of this reversal through her work incorporating camera trap research into K-12 classrooms. “Teachers would tell me how excited and surprised students were about the wildlife in their school yards,” Schuttler says. “They had no idea their campus flourished with coyotes, foxes and deer.” The study Schuttler headed shows citizen science increased participants’ knowledge, skills, interest in and curiosity about nature, and even produced positive behavioral changes. For example, one study revealed that participants in the Garden Butterfly Watch program changed gardening practices to make their yards more hospitable to wildlife. Another study found that participants in the Coastal Observation and Seabird Survey Team program started cleaning up beaches during surveys, even though this was never suggested by the facilitators.

While these results are promising, the EOE study also revealed that this work has only just begun and that most programs do not reach audiences who are not already engaged in science or nature. Only 26 of the 975 papers evaluated participants’ motivations and/or outcomes, and only one of these papers studied children, the most important demographic in reversing EOE. “Many studies were full of amazing stories on how citizen science awakened participants to the nature around them, however, most did not study outcomes,” Schuttler notes. “To fully evaluate the ability for nature-based citizen science to affect people, we encourage citizen science programs to formally study their participants and not just study the system in question.”

Additionally, most citizen science programs attracted or even recruited environmentally mindful participants who likely already spend more time outside than the average person. “If we really want to reconnect people to nature, we need to preach beyond the choir, and attract people who are not already interested in science and/or nature,” Schuttler adds. And as co-author Assaf Shwartz of Technion-Israel Institute of Technology asserts, “The best way to avert the extinction of experience is to create meaningful experiences of nature in the places where we all live and work – cities. Participating in citizen science is an excellent way to achieve this goal, as participation can enhance the sense of commitment people have to protect nature.”

Luckily, some other factors appear to influence participants’ involvement in citizen science. Desire for wellbeing, stewardship and community may provide a gateway for people to participate, an important first step in connecting people to nature. Though nature-based citizen science programs provide opportunities for people to interact with nature, further research on the mechanisms that drive this relationship is needed to strengthen our understanding of various outcomes of citizen science.

And, because I love dragonflies,

Nature-based citizen science programs, like Dragonfly Pond Watch, offer participants opportunities to observe nature more closely. Credit: Lea Shell.

Here’s a link to and a citation for the paper,

Bridging the nature gap: can citizen science reverse the extinction of experience? by Stephanie G Schuttler, Amanda E Sorensen, Rebecca C Jordan, Caren Cooper, Assaf Shwartz. Frontiers in Ecology and the Environment. DOI: https://doi.org/10.1002/fee.1826 First published: 03 July 2018

This paper is behind a paywall.

In-home (one day in the future) eyesight correction

It’s easy to become blasé about ‘futuristic’ developments but every once in a while something comes along that shocks you out of your complacency as this March 8, 2018 news item did for me,

A revolutionary, cutting-edge technology, developed by researchers at Bar-Ilan University’s Institute of Nanotechnology and Advanced Materials (BINA), has the potential to provide a new alternative to eyeglasses, contact lenses, and laser correction for refractive errors.

The technology, known as Nano-Drops, was developed by ophthalmologist Dr. David Smadja from Shaare Zedek Medical Center, Prof. Zeev Zalevsky from Bar-Ilan's Kofkin Faculty of Engineering, and Prof. Jean-Paul Moshe Lellouche, head of the Department of Chemistry at Bar-Ilan.

It seems like it would be eye drops, eh? This March 8, 2018 Bar-Ilan University press release, which originated the news item, proceeds to redefine eyedrops,

Nano-Drops achieve their optical effect and correction by locally modifying the corneal refractive index. The magnitude and nature of the optical correction is adjusted by an optical pattern that is stamped onto the superficial layer of the corneal epithelium with a laser source. The shape of the optical pattern can be adjusted for correction of myopia (nearsightedness), hyperopia (farsightedness) or presbyopia (loss of accommodation ability). The laser stamping onto the cornea [emphasis mine] takes a few milliseconds and enables the nanoparticles to enhance and ‘activate’ this optical pattern by locally changing the refractive index and ultimately modifying the trajectory of light passing through the cornea.

The laser stamping source does not relate to the commonly known ‘laser treatment for visual correction’ that ablates corneal tissue. It is rather a small laser device that can connect to a smartphone [emphasis mine] and stamp the optical pattern onto the corneal epithelium by placing numerous adjacent pulses in a very speedy and painless fashion.  Tiny corneal spots created by the laser allow synthetic and biocompatible nanoparticles to enter and locally modify the optical power of the eye [emphasis mine] at the desired correction.

In the future this technology may enable patients to have their vision corrected in the comfort of their own home. [emphasis mine] To accomplish this, they would open an application on their smartphone to measure their vision, connect the laser source device for stamping the optical pattern at the desired correction, and then apply the Nano-Drops to activate the pattern and provide the desired correction.

Upcoming in-vivo experiments in rabbits will allow the researchers to determine how long the effect of the Nano-Drops will last after the initial application. Meanwhile, this promising technology has been shown, through ex-vivo experiments, to efficiently correct nearly 3 diopters of both myopia and presbyopia in pig eyes.
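As a rough, back-of-the-envelope illustration of why a local change in refractive index can correct vision (my own estimate, using textbook eye-model numbers rather than anything published by the Bar-Ilan team): if the front surface of the cornea is treated as a single refracting surface of radius R, its optical power is P = (n − 1)/R, so a small local index change Δn shifts the power by roughly Δn/R.

```latex
% Illustrative estimate only; R and the size of \Delta n are assumed textbook values.
\[
P = \frac{n_{\text{cornea}} - n_{\text{air}}}{R}, \qquad
\Delta P \approx \frac{\Delta n}{R}
\]
\[
\Delta n = 0.01,\quad R = 7.8\,\text{mm} = 0.0078\,\text{m}
\quad\Longrightarrow\quad
\Delta P \approx \frac{0.01}{0.0078\,\text{m}} \approx 1.3\ \text{dioptres}
\]
```

On those assumptions, index changes of a few hundredths would be in the same ballpark as the nearly 3 diopters of correction reported in the pig-eye experiments.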

The researchers do not seem to have published a paper about this work. However, there is a March 19, 2018 article by Shoshanna Solomon for the Times of Israel, which provides greater detail about how you or I would use this technology,

The Israeli researchers came up with a way to reshape the cornea, which accounts for 60 percent of the eye’s optical power. They tried out their system on the eyes of dead pigs, which have an optical system that is very similar to that of humans.

There are three steps to the technology that is now in development.

The first step requires patients to measure their eyesight via their smartphones. There are already a number of apps that do this, said Smadja. The second step requires the patients to use a second app — being developed by the researchers — which would have a laser device clipped onto the smartphone. This device will deliver laser pulses to the eye in less than a second that etch a shallow shape onto the cornea to help correct its refractive error. During the last stage, the Nano-Drops — made up of nontoxic nanoparticles of proteins — are put into the eye and they activate the shape, thus correcting the patients’ vision.

“It’s like when you write something with fuel on the ground and the fuel dries up, and then you throw a flame onto the fuel and the fire takes the shape of the writing,” Smadja explained. “The drops activate the pattern.”

The technology, unlike current laser operations that correct eyesight, does not remove tissue and is thus noninvasive, and it suits most eyes, expanding the scope of patients who can correct their vision, he said.

It's a good article and, if you have the time, it's worth reading in its entirety. Of course, it's a long way from 'being in development' to 'available at the store'.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim's June 14, 2018 posting (I'm putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website expands on the argument (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making it the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Do you want that coffee with some graphene on toast?

These scientists are excited:

For those who prefer text, here's the Rice University Feb. 13, 2018 news release (received via email and available online here and on EurekAlert here; Note: Links have been removed),

Rice University scientists who introduced laser-induced graphene (LIG) have enhanced their technique to produce what may become a new class of edible electronics.

The Rice lab of chemist James Tour, which once turned Girl Scout cookies into graphene, is investigating ways to write graphene patterns onto food and other materials to quickly embed conductive identification tags and sensors into the products themselves.

“This is not ink,” Tour said. “This is taking the material itself and converting it into graphene.”

The process is an extension of the Tour lab’s contention that anything with the proper carbon content can be turned into graphene. In recent years, the lab has developed and expanded upon its method to make graphene foam by using a commercial laser to transform the top layer of an inexpensive polymer film.

The foam consists of microscopic, cross-linked flakes of graphene, the two-dimensional form of carbon. LIG can be written into target materials in patterns and used as a supercapacitor, an electrocatalyst for fuel cells, radio-frequency identification (RFID) antennas and biological sensors, among other potential applications.

The new work reported in the American Chemical Society journal ACS Nano demonstrated that laser-induced graphene can be burned into paper, cardboard, cloth, coal and certain foods, even toast.

“Very often, we don’t see the advantage of something until we make it available,” Tour said. “Perhaps all food will have a tiny RFID tag that gives you information about where it’s been, how long it’s been stored, its country and city of origin and the path it took to get to your table.”

He said LIG tags could also be sensors that detect E. coli or other microorganisms on food. “They could light up and give you a signal that you don’t want to eat this,” Tour said. “All that could be placed not on a separate tag on the food, but on the food itself.”

Multiple laser passes with a defocused beam allowed the researchers to write LIG patterns into cloth, paper, potatoes, coconut shells and cork, as well as toast. (The bread is toasted first to “carbonize” the surface.) The process happens in air at ambient temperatures.

“In some cases, multiple lasing creates a two-step reaction,” Tour said. “First, the laser photothermally converts the target surface into amorphous carbon. Then on subsequent passes of the laser, the selective absorption of infrared light turns the amorphous carbon into LIG. We discovered that the wavelength clearly matters.”

The researchers turned to multiple lasing and defocusing when they discovered that simply turning up the laser’s power didn’t make better graphene on a coconut or other organic materials. But adjusting the process allowed them to make a micro supercapacitor in the shape of a Rice “R” on their twice-lased coconut skin.

Defocusing the laser sped the process for many materials as the wider beam allowed each spot on a target to be lased many times in a single raster scan. That also allowed for fine control over the product, Tour said. Defocusing allowed them to turn previously unsuitable polyetherimide into LIG.

“We also found we could take bread or paper or cloth and add fire retardant to them to promote the formation of amorphous carbon,” said Rice graduate student Yieu Chyan, co-lead author of the paper. “Now we’re able to take all these materials and convert them directly in air without requiring a controlled atmosphere box or more complicated methods.”

The common element of all the targeted materials appears to be lignin, Tour said. An earlier study relied on lignin, a complex organic polymer that forms rigid cell walls, as a carbon precursor to burn LIG in oven-dried wood. Cork, coconut shells and potato skins have even higher lignin content, which made it easier to convert them to graphene.

Tour said flexible, wearable electronics may be an early market for the technique. “This has applications to put conductive traces on clothing, whether you want to heat the clothing or add a sensor or conductive pattern,” he said.

Rice alumnus Ruquan Ye is co-lead author of the study. Co-authors are Rice graduate student Yilun Li and postdoctoral fellow Swatantra Pratap Singh and Professor Christopher Arnusch of Ben-Gurion University of the Negev, Israel. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.

The Air Force Office of Scientific Research supported the research.

Here’s a link to and a citation for the paper,

Laser-Induced Graphene by Multiple Lasing: Toward Electronics on Cloth, Paper, and Food by Yieu Chyan, Ruquan Ye, Yilun Li, Swatantra Pratap Singh, Christopher J. Arnusch, and James M. Tour. ACS Nano DOI: 10.1021/acsnano.7b08539 Publication Date (Web): February 13, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

h/t Feb. 13, 2018 news item on Nanowerk

Implanting a synthetic cornea in your eye

For anyone who needs a refresher, Simon Shapiro in a Nov. 5, 2017 posting on the Sci/Why blog offers a good introduction to how eyes work and, further on in his post, describes CorNeat Vision's corneal implants,

A quick summary of how our eyes work: they refract (bend) light and focus it on the retina. The job of doing the refraction is split between the cornea and the lens. Two thirds of the refraction is done by the cornea, so it’s critical in enabling vision. After light passes through the cornea, it passes through the pupil (in the centre of the iris) to reach the lens. Muscles in the eye (the ciliary muscle) can change the shape of the lens and allow the eye to focus nearer or further. The lens focuses light on the retina, which passes signals to the brain via the optic nerve.

It’s all pretty neat, but some things can go wrong, especially as you get older. Common problems are that the lens and/or the cornea can become cloudy.

CorNeat Vision, the Israeli ophthalmic devices startup, released an Oct. 6, 2017 press release about its corneal implant on BusinessWire (Note: Links have been removed),

The CorNeat KPro implant is a patent-pending synthetic cornea that utilizes advanced cell technology to integrate artificial optics within resident ocular tissue. The CorNeat KPro is produced using nanoscale chemical engineering that stimulates cellular growth. Unlike previous devices, which attempted to integrate optics into the native cornea, the CorNeat KPro leverages a virtual space under the conjunctiva that is rich with fibroblast cells that heals quickly and provides robust long-term integration. Combined with a novel and simple 30-minute surgical procedure, the CorNeat KPro provides an esthetic, efficient, scalable remedy for millions of people with cornea-related visual impairments and is far superior to any available biological and synthetic alternatives.

A short animated movie that demonstrates the implantation and integration of the CorNeat KPro device to the human eye is available in the following link: www.corneat.com/product-animation.

“Corneal pathology is the second leading cause of blindness worldwide with 20-30 million patients in need of a remedy and around 2 million new cases/year, said CorNeat Vision CEO and VP R&D, Mr. Almog Aley-Raz. “Though a profound cause of distress and disability, existing solutions, such as corneal transplantation, are carried out only about 200,000 times/year worldwide. Together, corneal transplantation, and to a much lesser extent artificial implants (KPros), address only 5%-10% of cases, “There exists an urgent need for an efficient, long-lasting and affordable solution to corneal pathology, injury and blindness, which would alleviate the suffering and disability of millions of people. We are very excited to reach this important milestone in the development of our solution and are confident that the CorNeat KPro will enable millions to regain their sight”, he added.

“The groundbreaking results obtained in our proof of concept which is backed by conclusive histopathological evidence, are extremely encouraging. We are entering the next phase with great confidence that CorNeat KPro will address corneal blindness just like IOLs (Intra Ocular Lens) addressed cataract”, commented Dr. Gilad Litvin, CorNeat Vision’s Chief Medical Officer and founder and the CorNeat KPro inventor. “Our novel IP, now cleared by the European Patent Office, ensures long-term retention, robust integration into the eye and an operation that is significantly shorter and simpler than Keratoplasty (Corneal transplantation).

“The innovative approach behind CorNeat KPro coupled by the team's execution ability present a unique opportunity to finally address the global corneal blindness challenge”, added Prof. Ehud Assia, head of the ophthalmic department at the Meir Hospital in Israel, a serial ophthalmic innovator, and a member of CorNeat Vision scientific advisory board. “I welcome our new advisory board members, Prof. David Rootman, a true pioneer in ophthalmic surgery and one of the top corneal specialist surgeons from the University of Toronto, Canada, and Prof. Eric Gabison, who's a leading cornea surgeon at the Rothschild Ophthalmic Foundation research center at Bichat hospital – Paris, France. We are all looking forward to initiating the clinical trial later in 2018.”

About CorNeat Vision

CorNeat Vision is an ophthalmic medical device company with an overarching mission to promote human health, sustainability and equality worldwide. The objective of CorNeat Vision is to produce, test and market an innovative, safe and long-lasting scalable medical solution for corneal blindness, pathology and injury, a bio-artificial organ: The CorNeat KPro. For more information on CorNeat Vision and the CorNeat KPro device, visit us at www.corneat.com.

Unfortunately, I cannot find any more detail. Presumably the company principals are making sure that no competitive advantages are given away.

Gold’s origin in the universe due to cosmic collision

An hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins for gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.

An artist's impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature's Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strew them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis, according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.
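To put some of the quoted figures in perspective, here’s a quick back-of-the-envelope sketch in Python. The solar mass, Earth mass, and year-to-seconds values are my own added constants (they are not in the press release), and I’m taking the “10 kilometres across” figure at face value, so treat the outputs as rough orders of magnitude.

```python
import math

# Approximate constants I'm supplying; they are not from the press release.
M_SUN = 1.99e30       # kg
M_EARTH = 5.97e24     # kg
YEAR_S = 3.156e7      # seconds in one year

# "as heavy as our Sun yet only 10 kilometres across"
radius_m = 10e3 / 2
density = M_SUN / ((4 / 3) * math.pi * radius_m**3)
print(f"Implied density: {density:.1e} kg/m^3")                     # ~4e18 kg/m^3

# "spinning around each other five hundred times per second";
# the dominant gravitational-wave signal is emitted at twice the orbital frequency.
print(f"Gravitational-wave frequency near merger: ~{2 * 500} Hz")   # ~1 kHz, in LIGO's band

# "as much gold as the mass of the Earth" and, per Koren's article,
# ~10,000 Earth-masses of heavy elements overall.
print(f"Gold produced: ~{M_EARTH:.1e} kg")
print(f"All heavy elements: ~{1e4 * M_EARTH:.1e} kg")

# The gamma-ray burst arrived two seconds after gravitational waves that had
# travelled for ~130 million years, so the two arrival times differ by only
# a few parts in 10^16 of the total journey.
travel_time_s = 130e6 * YEAR_S
print(f"Fractional arrival difference: ~{2 / travel_time_s:.0e}")   # ~5e-16
```

Nothing here is new information; it simply restates the numbers above in units that are easier to compare.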

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases, which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
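If you want to check Spencer’s “less than 20 miles wide” figure yourself, here’s a minimal sketch assuming (my assumption, not his) that a solar-mass neutron star sits at roughly the density of an atomic nucleus, about 2.3 × 10^17 kg per cubic metre.

```python
import math

M_SUN = 1.99e30        # kg, approximate solar mass (my value, not Spencer's)
RHO_NUCLEAR = 2.3e17   # kg/m^3, rough nuclear-matter density (also my assumption)

# Radius of a uniform-density ball of one solar mass: M = (4/3) * pi * r^3 * rho
radius_m = (3 * M_SUN / (4 * math.pi * RHO_NUCLEAR)) ** (1 / 3)
diameter_miles = 2 * radius_m / 1609.34

print(f"Radius: ~{radius_m / 1000:.0f} km")         # ~13 km
print(f"Diameter: ~{diameter_miles:.0f} miles")     # ~16 miles, i.e. "less than 20 miles wide"
```

Real neutron stars are not uniform in density, so this is only a consistency check, not a prediction.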

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

Cotton that glows ‘naturally’

Interesting, non? This is causing a bit of excitement but, first, here’s more from the Sept. 14, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

Cotton that’s grown with molecules that endow appealing properties – like fluorescence or magnetism – may one day eliminate the need for applying chemical treatments to fabrics to achieve such qualities, a new study suggests. Applying synthetic polymers to fabrics can result in a range of appealing properties, but anything added to a fabric can get washed or worn away. Furthermore, while many fibers used in fabrics are synthetic (e.g., polyester), some consumers prefer natural fibers to avoid issues related to sensation, skin irritation, smoothness, and weight. Here, Filipe Natalio and colleagues created cotton fibers that incorporate composites with fluorescent and magnetic properties. They synthesized glucose derivatives that deliver the desirable molecules into the growing ovules of the cotton plant, Gossypium hirsutum. Thus, the molecules are embedded into the cotton fibers themselves, rather than added in the form of a chemical treatment. The resulting fibers exhibited fluorescent or magnetic properties, respectively, although they were weaker than raw fibers lacking the embedded composites, the authors report. They propose that similar techniques could be expanded to other biological systems such as bacteria, bamboo, silk, and flax – essentially opening a new era of “material farming.”

Robert Service’s Sept. 14, 2017 article for Science explores the potential of growing cotton with new properties (Note: A link has been removed),

You may have heard about smartphones and smart homes. But scientists are also designing smart clothes, textiles that can harvest energy, light up, detect pollution, and even communicate with the internet. The problem? Even when they work, these often chemically treated fabrics wear out rapidly over time. Now, researchers have figured out a way to “grow” some of these functions directly into cotton fibers. If the work holds, it could lead to stronger, lighter, and brighter textiles that don’t wear out.

Yet, as the new paper went to press today in Science, editors at the journal were made aware of mistakes in a figure in the supplemental material that prompted them to issue an Editorial Expression of Concern, at least until they receive clarification from the authors. Filipe Natalio, lead author and chemist at the Weizmann Institute of Science in Rehovot, Israel, says the mistakes were errors in the names of pigments used in control experiments, which he is working with the editors to fix.

That hasn’t dampened enthusiasm for the work. “I like this paper a lot,” says Michael Strano, a chemical engineer at the Massachusetts Institute of Technology in Cambridge. The study, he says, lays out a new way to add new functions into plants without changing their genes through genetic engineering. Those approaches face steep regulatory hurdles for widespread use. “Assuming the methods claimed are correct, that’s a big advantage,” Strano says.

Sam Lemonick’s Sept. 14, 2017 article for forbes.com describes how the researchers introduced new properties (in this case, glowing colours) into the cotton plants,

His [Filipe Natalio] team of researchers in Israel, Germany, and Austria used sugar molecules to sneak new properties into cotton. Like a Trojan horse, Natalio says. They tested the method by tagging glucose with a fluorescent dye molecule that glows green when hit with the right kind of light.

They bathed cotton ovules—the part of the plant that makes the fibers—in the glucose. And just like flowers suck up dyed water in grade school experiments, the ovules absorbed the sugar solution and piped the tagged glucose molecules to their cells. As the fibers grew, they took on a yellowish tinge—and glowed bright green under ultraviolet light.

Glowing cotton wasn’t enough for Natalio. It took his group about six months to be sure they were actually delivering the fluorescent protein into the cotton cells and not just coating the fibers in it. Once they were certain, they decided to push the envelope with something very unnatural: magnets.

This time, Natalio’s team modified glucose with the rare earth metal dysprosium, making a molecule that acts like a magnet. And just like they did with the dye, the researchers fed it to cotton ovules and ended up with fibers with magnetic properties.

Both Service and Lemonick note that the editor of the journal Science (where the research paper was published), Jeremy Berg, has written an expression of editorial concern as of Sept. 14, 2017,

In the 15 September [2017] issue, Science published the Report “Biological fabrication of cellulose fibers with tailored properties” by F. Natalio et al. (1). After the issue went to press, we became aware of errors in the labeling and/or identification of the pigments used for the control experiments detailed in figs. S1 and S2 of the supplementary materials. Science is publishing this Editorial Expression of Concern to alert our readers to this information as we await full explanation and clarification from the authors.

The problem seems to be one of terminology (from the Lemonick article),

… Filipe Natalio, lead author and chemist at the Weizmann Institute of Science in Rehovot, Israel, says the mistakes were errors in the names of pigments used in control experiments, which he is working with the editors to fix.

These things happen. Terminology and spelling aren’t always the same from one country to the next and it can result in confusion. I’m glad to see the discussion is being held openly.

Here’s a link to and a citation for the paper,

Biological fabrication of cellulose fibers with tailored properties by Filipe Natalio, Regina Fuchs, Sidney R. Cohen, Gregory Leitus, Gerhard Fritz-Popovski, Oskar Paris, Michael Kappl, Hans-Jürgen Butt. Science 15 Sep 2017: Vol. 357, Issue 6356, pp. 1118-1122 DOI: 10.1126/science.aan5830

This paper is behind a paywall.

Announcing Canada’s Chief Science Advisor: Dr. Mona Nemer

Thanks to the Canadian Science Policy Centre’s September 26, 2017 announcement (received via email) a burning question has been answered,

After great anticipation, Prime Minister Trudeau along with Minister Duncan have announced Canada’s Chief Science Advisor, Dr. Mona Nemer, [emphasis mine]  at a ceremony at the House of Commons. The Canadian Science Policy Centre welcomes this exciting news and congratulates Dr. Nemer on her appointment in this role and we wish her the best in carrying out her duties in this esteemed position. CSPC is looking forward to working closely with Dr. Nemer for the Canadian science policy community. Mehrdad Hariri, CEO & President of the CSPC, stated, “Today’s historic announcement is excellent news for science in Canada, for informed policy-making and for all Canadians. We look forward to working closely with the new Chief Science Advisor.”

In fulfilling our commitment to keep the community up to date and informed regarding science, technology, and innovation policy issues, CSPC has been compiling all news, publications, and editorials in recognition of the importance of the Federal Chief Science Officer as it has been developing, as you may see by clicking here.

We invite your opinions regarding the new Chief Science Advisor, to be published on our CSPC Featured Editorial page. We will publish your reactions on our website, sciencepolicy.ca on our Chief Science Advisor page.

Please send your opinion pieces to editorial@sciencepolicy.ca.

Here are a few (very few) details from the Prime Minister’s (Justin Trudeau) Sept. 26, 2017 press release making the official announcement,

The Government of Canada is committed to strengthen science in government decision-making and to support scientists’ vital work.

In keeping with these commitments, the Prime Minister, Justin Trudeau, today announced Dr. Mona Nemer as Canada’s new Chief Science Advisor, following an open, transparent, and merit-based selection process.  

We know Canadians value science. As the new Chief Science Advisor, Dr. Nemer will help promote science and its real benefits for Canadians—new knowledge, novel technologies, and advanced skills for future jobs. These breakthroughs and new opportunities form an essential part of the Government’s strategy to secure a better future for Canadian families and to grow Canada’s middle class.

Dr. Nemer is a distinguished medical researcher whose focus has been on the heart, particularly on the mechanisms of heart failure and congenital heart diseases. In addition to publishing over 200 scholarly articles, her research has led to new diagnostic tests for heart failure and the genetics of cardiac birth defects. Dr. Nemer has spent more than ten years as the Vice-President, Research at the University of Ottawa, has served on many national and international scientific advisory boards, and is a Fellow of the Royal Society of Canada, a Member of the Order of Canada, and a Chevalier de l’Ordre du Québec.

As Canada’s new top scientist, Dr. Nemer will provide impartial scientific advice to the Prime Minister and the Minister of Science. She will also make recommendations to help ensure that government science is fully available and accessible to the public, and that federal scientists remain free to speak about their work. Once a year, she will submit a report about the state of federal government science in Canada to the Prime Minister and the Minister of Science, which will also be made public.

Quotes

“We have taken great strides to fulfill our promise to restore science as a pillar of government decision-making. Today, we took another big step forward by announcing Dr. Mona Nemer as our Chief Science Advisor. Dr. Nemer brings a wealth of expertise to the role. Her advice will be invaluable and inform decisions made at the highest levels. I look forward to working with her to promote a culture of scientific excellence in Canada.”
— The Rt. Hon. Justin Trudeau, Prime Minister of Canada

“A respect for science and for Canada’s remarkable scientists is a core value for our government. I look forward to working with Dr. Nemer, Canada’s new Chief Science Advisor, who will provide us with the evidence we need to make decisions about what matters most to Canadians: their health and safety, their families and communities, their jobs, environment and future prosperity.”
— The Honourable Kirsty Duncan, Minister of Science

“I am honoured and excited to be Canada’s Chief Science Advisor. I am very pleased to be representing Canadian science and research – work that plays a crucial role in protecting and improving the lives of people everywhere. I look forward to advising the Prime Minister and the Minister of Science and working with the science community, policy makers, and the public to make science part of government policy making.”
— Dr. Mona Nemer, Chief Science Advisor, Canada

Quick Facts

  • Dr. Nemer is also a Knight of the Order of Merit of the French Republic, and has been awarded honorary doctorates from universities in France and Finland.
  • The Office of the Chief Science Advisor will be housed at Innovation, Science and Economic Development and supported by a secretariat.

Nemer’s Wikipedia entry does not provide much additional information although you can find out a bit more on her University of Ottawa page. Brian Owens, in a Sept. 26, 2017 article for the American Association for the Advancement of Science’s (AAAS) Science Magazine, provides a bit more detail about this newly created office and its budget,

Nemer’s office will have a $2 million budget, and she will report to both Trudeau and science minister Kirsty Duncan. Her mandate includes providing scientific advice to government ministers, helping keep government-funded science accessible to the public, and protecting government scientists from being muzzled.

Ivan Semeniuk’s Sept. 26, 2017 article for the Globe and Mail newspaper about Nemer’s appointment is the most informative (that I’ve been able to find),

Mona Nemer, a specialist in the genetics of heart disease and a long time vice-president of research at the University of Ottawa, has been named Canada’s new chief science advisor.

The appointment, announced Tuesday [Sept. 26, 2017] by Prime Minister Justin Trudeau, comes two years after the federal Liberals pledged to reinstate the position during the last election campaign and nearly a decade after the previous version of the role was cut by then prime minister Stephen Harper.

Dr. Nemer steps into the job of advising the federal government on science-related policy at a crucial time. Following a landmark review of Canada’s research landscape [Naylor report] released last spring, university-based scientists are lobbying hard for Ottawa to significantly boost science funding, one of the report’s key recommendations. At the same time, scientists and science-advocacy groups are increasingly scrutinizing federal actions on a range of sensitive environment and health-related issues to ensure the Trudeau government is making good on promises to embrace evidence-based decision making.

A key test of the position’s relevance for many observers will be the extent to which Dr. Nemer is able to speak her mind on matters where science may run afoul of political expediency.

Born in 1957, Dr. Nemer grew up in Lebanon and pursued an early passion for chemistry at a time and place where women were typically discouraged from entering scientific fields. With Lebanon’s civil war making it increasingly difficult for her to pursue her studies, her family was able to arrange for her to move to the United States, where she completed an undergraduate degree at Wichita State University in Kansas.

A key turning point came in the summer of 1977 when Dr. Nemer took a trip with friends to Montreal. She quickly fell for the city and, in short order, managed to secure acceptance to McGill University, where she received a PhD in 1982. …

It took a lot of searching to find out that Nemer was born in Lebanon and went to the United States first. A lot of immigrants and their families view Canada as a second choice and Nemer and her family would appear to have followed that pattern. It’s widely believed (amongst Canadians too) that the US is where you go for social mobility. I’m not sure if this is still the case but at one point in the 1980s Israel ranked as having the greatest social mobility in the world. Canada came in second while the US wasn’t even third or fourth ranked.

It’s the second major appointment by Justin Trudeau in the last few months to feature a woman who speaks French. The first was Julie Payette, former astronaut and Québecker, as the upcoming Governor General (there’s more detail and a whiff of sad scandal in this Aug. 21, 2017 Canadian Broadcasting Corporation online news item). Now there’s Dr. Mona Nemer who’s lived both in Québec and Ontario. Trudeau and his feminism, eh? Also, his desire to keep Québeckers happy (more or less).

I’m not surprised by the fact that Nemer has been based in Ottawa for several years. I guess they want someone who’s comfortable with the government apparatus although I for one think a little fresh air might be welcome. After all, the Minister of Science, Kirsty Duncan, is from Toronto which between Nemer and Duncan gives us the age-old Canadian government trifecta (geographically speaking), Ottawa-Montréal-Toronto.

Two final comments: first, I am surprised that Duncan did not make the announcement. After all, it was in her 2015 mandate letter. But perhaps Paul Wells, in his acerbic June 29, 2017 article for Maclean’s, hints at the reason as he discusses the Naylor report (the review of fundamental science mentioned in Semeniuk’s article, and for which Nemer is expected to provide advice),

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

Second, our other science minister, Navdeep Bains, Minister of Innovation, Science and Economic Development, does not appear to have been present at the announcement. That is quite surprising given where the new Chief Science Advisor’s office will be located (from the Quick Facts section of the government’s Sept. 26, 2017 press release): “The Office of the Chief Science Advisor will be housed at Innovation, Science and Economic Development and supported by a secretariat.”

Finally, Wells’ article is well worth reading in its entirety and, for those who are information gluttons, I have a three-part series on the Naylor report, published June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

Carbon nanotubes for water desalination

In discussions about water desalination and carbon nanomaterials, it’s graphene that’s usually mentioned these days. By contrast, scientists from the US Department of Energy’s Lawrence Livermore National Laboratory (LLNL) have turned to carbon nanotubes.

There are two news items about the work at LLNL on ScienceDaily; this first one, originated by the American Association for the Advancement of Science (AAAS), offers a succinct summary of the work (from an August 24, 2017 news item on ScienceDaily),

At just the right size, carbon nanotubes can filter water with better efficiency than biological proteins, a new study reveals. The results could pave the way to new water filtration systems, at a time when demands for fresh water pose a global threat to sustainable development.

A class of biological proteins, called aquaporins, is able to effectively filter water, yet scientists have not been able to manufacture scalable systems that mimic this ability. Aquaporins usually exhibit channels for filtering water molecules at a narrow width of 0.3 nanometers, which forces the water molecules into a single-file chain.

Here, Ramya H. Tunuguntla and colleagues experimented with nanotubes of different widths to see which ones are best for filtering water. Intriguingly, they found that carbon nanotubes with a width of 0.8 nanometers outperformed aquaporins in filtering efficiency by a factor of six.

These narrow carbon nanotube porins (nCNTPs) were still slim enough to force the water molecules into a single-file chain. The researchers attribute the differences between aquaporins and nCNTPS to differences in hydrogen bonding — whereas pore-lining residues in aquaporins can donate or accept H bonds to incoming water molecules, the walls of CNTPs cannot form H bonds, permitting unimpeded water flow.

The nCNTPs in this study maintained permeability exceeding that of typical saltwater, only diminishing at very high salt concentrations. Lastly, the team found that by changing the charges at the mouth of the nanotube, they can alter the ion selectivity. This advancement is highlighted in a Perspective [in Science magazine] by Zuzanna Siwy and Francesco Fornasiero.
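Before moving on to the second item, here’s a rough geometric sketch (my own illustration, not part of the study) of why a 0.8 nanometre tube still forces water into single file. It assumes the 0.8 nm figure is the conventional centre-to-centre nanotube diameter, that the carbon wall has a van der Waals radius of roughly 0.17 nm, and that a water molecule is about 0.28 nm across.

```python
# Only the 0.8 nm tube diameter comes from the research described above;
# the other two values are my assumptions for illustration.
CNT_DIAMETER_NM = 0.8
CARBON_VDW_RADIUS_NM = 0.17
WATER_DIAMETER_NM = 0.28

# Subtract the carbon wall's effective thickness on both sides to get the open channel.
open_channel_nm = CNT_DIAMETER_NM - 2 * CARBON_VDW_RADIUS_NM
waters_across = open_channel_nm / WATER_DIAMETER_NM

print(f"Open channel: ~{open_channel_nm:.2f} nm")                      # ~0.46 nm
print(f"Water molecules that fit side by side: ~{waters_across:.1f}")  # ~1.6, i.e. fewer than two
# With fewer than two molecules fitting across, water has to queue up single file,
# which is the behaviour the summary above describes.
```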

The second Aug. 24, 2017 news item on ScienceDaily offers a more technical perspective,

Lawrence Livermore scientists, in collaboration with researchers at Northeastern University, have developed carbon nanotube pores that can exclude salt from seawater. The team also found that water permeability in carbon nanotubes (CNTs) with diameters smaller than a nanometer (0.8 nm) exceeds that of wider carbon nanotubes by an order of magnitude.

The nanotubes, hollow structures made of carbon atoms in a unique arrangement, are more than 50,000 times thinner than a human hair. The super smooth inner surface of the nanotube is responsible for their remarkably high water permeability, while the tiny pore size blocks larger salt ions.
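As a quick check on the “more than 50,000 times thinner than a human hair” comparison, here’s the arithmetic using the 50 to 70 micrometre hair width quoted in Northeastern’s release further down this post; the exact hair width is, of course, approximate.

```python
CNT_DIAMETER_NM = 0.8        # nanotube diameter from the news release

for hair_um in (50, 70):     # human hair width range quoted by Northeastern University
    hair_nm = hair_um * 1000
    print(f"A {hair_um} micrometre hair is ~{hair_nm / CNT_DIAMETER_NM:,.0f} times wider")
# Roughly 62,500 to 87,500 times wider, comfortably "more than 50,000 times thinner".
```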

There’s a rather lovely illustration for this work,

An artist’s depiction of the promise of carbon nanotube porins for desalination. The image depicts a stylized carbon nanotube pipe that delivers clean desalinated water from the ocean to a kitchen tap. Image by Ryan Chen/LLNL

An Aug. 24, 2017 LLNL news release (also on EurekAlert), which originated the second news item, proceeds

Increasing demands for fresh water pose a global threat to sustainable development, resulting in water scarcity for 4 billion people. Current water purification technologies can benefit from the development of membranes with specialized pores that mimic highly efficient and water selective biological proteins.

“We found that carbon nanotubes with diameters smaller than a nanometer bear a key structural feature that enables enhanced transport. The narrow hydrophobic channel forces water to translocate in a single-file arrangement, a phenomenon similar to that found in the most efficient biological water transporters,” said Ramya Tunuguntla, an LLNL postdoctoral researcher and co-author of the manuscript appearing in the Aug. 24 [2017]edition of Science.

Computer simulations and experimental studies of water transport through CNTs with diameters larger than 1 nm showed enhanced water flow, but did not match the transport efficiency of biological proteins and did not separate salt efficiently, especially at higher salinities. The key breakthrough achieved by the LLNL team was to use smaller-diameter nanotubes that delivered the required boost in performance.

“These studies revealed the details of the water transport mechanism and showed that rational manipulation of these parameters can enhance pore efficiency,” said Meni Wanunu, a physics professor at Northeastern University and co-author on the study.

“Carbon nanotubes are a unique platform for studying molecular transport and nanofluidics,” said Alex Noy, LLNL principal investigator on the CNT project and a senior author on the paper. “Their sub-nanometer size, atomically smooth surfaces and similarity to cellular water transport channels make them exceptionally suited for this purpose, and it is very exciting to make a synthetic water channel that performs better than nature’s own.”

This discovery by the LLNL scientists and their colleagues has clear implications for the next generation of water purification technologies and will spur a renewed interest in development of the next generation of high-flux membranes.

Here’s a link to and a citation for the paper,

Enhanced water permeability and tunable ion selectivity in subnanometer carbon nanotube porins by Ramya H. Tunuguntla, Robert Y. Henley, Yun-Chiao Yao, Tuan Anh Pham, Meni Wanunu, Aleksandr Noy. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 792-796 DOI: 10.1126/science.aan2438

This paper is behind a paywall.

And, Northeastern University issued an August 25, 2017 news release (also on EurekAlert) by Allie Nicodemo,

Earth is 70 percent water, but only a tiny portion—0.007 percent—is available to drink.

As potable water sources dwindle, global population increases every year. One potential solution to quenching the planet’s thirst is through desalinization—the process of removing salt from seawater. While tantalizing, this approach has always been too expensive and energy intensive for large-scale feasibility.

Now, researchers from Northeastern have made a discovery that could change that, making desalinization easier, faster and cheaper than ever before. In a paper published Thursday [August 24, 2017] in Science, the group describes how carbon nanotubes of a certain size act as the perfect filter for salt—the smallest and most abundant water contaminant.

Filtering water is tricky because water molecules want to stick together. The “H” in H2O is hydrogen, and hydrogen bonds are strong, requiring a lot of energy to separate. Water tends to bulk up and resist being filtered. But nanotubes do it rapidly, with ease.

A carbon nanotube is like an impossibly small rolled up sheet of paper, about a nanometer in diameter. For comparison, the diameter of a human hair is 50 to 70 micrometers—50,000 times wider. The tube’s miniscule size, exactly 0.8 nm, only allows one water molecule to pass through at a time. This single-file lineup disrupts the hydrogen bonds, so water can be pushed through the tubes at an accelerated pace, with no bulking.

“You can imagine if you’re a group of people trying to run through the hallway holding hands, it’s going to be a lot slower than running through the hallway single-file,” said co-author Meni Wanunu, associate professor of physics at Northeastern. Wanunu and post doctoral student Robert Henley collaborated with scientists at the Lawrence Livermore National Laboratory in California to conduct the research.

Scientists led by Aleksandr Noy at Lawrence Livermore discovered last year [2016] that carbon nanotubes were an ideal channel for proton transport. For this new study, Henley brought expertise and technology from Wanunu’s Nanoscale Biophysics Lab to Noy’s lab, and together they took the research one step further.

In addition to being precisely the right size for passing single water molecules, carbon nanotubes have a negative electric charge. This causes them to reject anything with the same charge, like the negative ions in salt, as well as other unwanted particles.

“While salt has a hard time passing through because of the charge, water is a neutral molecule and passes through easily,” Wanunu said. Scientists in Noy’s lab had theorized that carbon nanotubes could be designed for specific ion selectivity, but they didn’t have a reliable system of measurement. Luckily, “That’s the bread and butter of what we do in Meni’s lab,” Henley said. “It created a nice symbiotic relationship.”

“Robert brought the cutting-edge measurement and design capabilities of Wanunu’s group to my lab, and he was indispensable in developing a new platform that we used to measure the ion selectivity of the nanotubes,” Noy said.

The result is a novel system that could have major implications for the future of water security. The study showed that carbon nanotubes are better at desalinization than any other existing method— natural or man-made.

To keep their momentum going, the two labs have partnered with a leading water purification organization based in Israel. And the group was recently awarded a National Science Foundation/Binational Science Foundation grant to conduct further studies and develop water filtration platforms based on their new method. As they continue the research, the researchers hope to start programs where students can learn the latest on water filtration technology—with the goal of increasing that 0.007 percent.

As is usual in these cases there’s a fair degree of repetition but there’s always at least one nugget of new information, in this case, a link to Israel. As I’ve noted many times, the Middle East is experiencing serious water issues. My most recent ‘water and the Middle East’ piece is an August 21, 2017 post about rainmaking at the Masdar Institute in the United Arab Emirates. Approximately 50% of the way down the posting, I mention Israel and Palestine’s conflict over water.