
Second order memristor

I think this is my first encounter with a second-order memristor. An August 28, 2019 news item on Nanowerk announces the research (Note: A link has been removed),

Researchers from the Moscow Institute of Physics and Technology [MIPT] have created a device that acts like a synapse in the living brain, storing information and gradually forgetting it when not accessed for a long time. Known as a second-order memristor, the new device is based on hafnium oxide and offers prospects for designing analog neurocomputers imitating the way a biological brain learns.

An August 28, 2019 MIPT press release (also on EurekAlert), which originated the news item, provides an explanation of neuromorphic computing (analog neurocomputers; brainlike computing), the difference between a first-order and a second-order memristor, and an in-depth view of the research,

Neurocomputers, which enable artificial intelligence, emulate the way the brain works. The brain stores data in the form of synapses, a network of connections between the nerve cells, or neurons. Most neurocomputers have a conventional digital architecture and use mathematical models to invoke virtual neurons and synapses.

Alternatively, an actual on-chip electronic component could stand for each neuron and synapse in the network. This so-called analog approach has the potential to drastically speed up computations and reduce energy costs.

The core component of a hypothetical analog neurocomputer is the memristor. The word is a portmanteau of “memory” and “resistor,” which pretty much sums up what it is: a memory cell acting as a resistor. Loosely speaking, a high resistance encodes a zero, and a low resistance encodes a one. This is analogous to how a synapse conducts a signal between two neurons (one), while the absence of a synapse results in no signal, a zero.
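Loosely speaking, that encoding can be sketched in a few lines of Python. The resistance values and read threshold below are illustrative assumptions, not figures from the study:

```python
R_HIGH = 1e6   # ohms; hypothetical high-resistance state, encoding a zero
R_LOW = 1e3    # ohms; hypothetical low-resistance state, encoding a one
READ_V = 0.1   # volts; a small read voltage that leaves the state undisturbed

def read_bit(resistance_ohms, threshold_amps=1e-6):
    """Read a memristor cell via Ohm's law: a large read current means
    the cell is in its low-resistance state, i.e. it stores a one."""
    current = READ_V / resistance_ohms
    return 1 if current > threshold_amps else 0

print(read_bit(R_LOW), read_bit(R_HIGH))
```

The point of the analog approach is that the cell need not be strictly binary; any conductance in between can serve as a synaptic weight.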

But there is a catch: In an actual brain, the active synapses tend to strengthen over time, while the opposite is true for inactive ones. This phenomenon, known as synaptic plasticity, is one of the foundations of natural learning and memory. It explains the biology of cramming for an exam and why our seldom accessed memories fade.

Proposed in 2015, the second-order memristor is an attempt to reproduce natural memory, complete with synaptic plasticity. The first mechanism for implementing this involves forming nanosized conductive bridges across the memristor. While initially decreasing resistance, they naturally decay with time, emulating forgetfulness.
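As a rough illustration of the second-order idea, here is a toy simulation in Python: a conductance (the "synaptic weight") jumps with each write pulse and relaxes back toward a resting value between pulses. All parameters are invented for illustration; this is not a model of the conductive-bridge physics or of the MIPT device:

```python
import math

def simulate_memristor(pulse_times, total_time, dt=0.01,
                       w0=0.1, dw=0.2, tau=5.0):
    """Toy second-order memristor: conductance w jumps by dw on each
    write pulse and relaxes back toward the resting value w0 with time
    constant tau, emulating gradual forgetting between accesses."""
    w = w0
    history = []
    pulses = sorted(pulse_times)
    i = 0
    t = 0.0
    while t <= total_time:
        # apply any write pulse scheduled at or before the current time
        while i < len(pulses) and pulses[i] <= t:
            w = min(1.0, w + dw)   # potentiation, clipped at max conductance
            i += 1
        # exponential decay toward the resting conductance
        w = w0 + (w - w0) * math.exp(-dt / tau)
        history.append((round(t, 2), w))
        t += dt
    return history

# frequent pulses keep the "synapse" strong; silence lets it fade
trace = simulate_memristor(pulse_times=[1, 2, 3], total_time=20)
```

Frequently rehearsed "memories" (closely spaced pulses) stay near maximum conductance, while an untouched cell drifts back toward its resting state, which is the forgetting behavior the press release describes.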

“The problem with this solution is that the device tends to change its behavior over time and breaks down after prolonged operation,” said the study’s lead author Anastasia Chouprik from MIPT’s Neurocomputing Systems Lab. “The mechanism we used to implement synaptic plasticity is more robust. In fact, after switching the state of the system 100 billion times, it was still operating normally, so my colleagues stopped the endurance test.”

Instead of nanobridges, the MIPT team relied on hafnium oxide to imitate natural memory. This material is ferroelectric: Its internal bound charge distribution — electric polarization — changes in response to an external electric field. If the field is then removed, the material retains its acquired polarization, the way a ferromagnet remains magnetized.

The physicists implemented their second-order memristor as a ferroelectric tunnel junction — two electrodes interlaid with a thin hafnium oxide film (fig. 1, right). The device can be switched between its low and high resistance states by means of electric pulses, which change the ferroelectric film’s polarization and thus its resistance.

“The main challenge that we faced was figuring out the right ferroelectric layer thickness,” Chouprik added. “Four nanometers proved to be ideal. Make it just one nanometer thinner, and the ferroelectric properties are gone, while a thicker film is too wide a barrier for the electrons to tunnel through. And it is only the tunneling current that we can modulate by switching polarization.”
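The thickness sensitivity Chouprik describes follows from the exponential dependence of quantum tunneling on barrier width. A back-of-the-envelope WKB estimate in Python makes the point; the 2 eV barrier height is an arbitrary illustrative number, not a measured hafnium oxide parameter:

```python
import math

HBAR = 1.054e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electronvolt

def tunnel_transmission(thickness_nm, barrier_ev=2.0):
    """Order-of-magnitude WKB transmission probability through a
    rectangular barrier: T ~ exp(-2 * kappa * d)."""
    d = thickness_nm * 1e-9
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * d)

# each extra nanometer of film suppresses the tunnel current by several
# orders of magnitude, hence "a thicker film is too wide a barrier"
for d_nm in (3, 4, 5, 6):
    print(f"{d_nm} nm: {tunnel_transmission(d_nm):.1e}")
```

With numbers like these, even one extra nanometer cuts the transmission by many orders of magnitude, which is why the usable window around 4 nm is so narrow.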

What gives hafnium oxide an edge over other ferroelectric materials, such as barium titanate, is that it is already used by current silicon technology. For example, Intel has been manufacturing microchips based on a hafnium compound since 2007. This makes introducing hafnium-based devices like the memristor reported in this story far easier and cheaper than introducing devices based on a brand-new material.

In a feat of ingenuity, the researchers implemented “forgetfulness” by leveraging the defects at the interface between silicon and hafnium oxide. Those very imperfections used to be seen as a detriment to hafnium-based microprocessors, and engineers had to find a way around them by incorporating other elements into the compound. Instead, the MIPT team exploited the defects, which make memristor conductivity die down with time, just like natural memories.

Vitalii Mikheev, the first author of the paper, shared the team’s future plans: “We are going to look into the interplay between the various mechanisms switching the resistance in our memristor. It turns out that the ferroelectric effect may not be the only one involved. To further improve the devices, we will need to distinguish between the mechanisms and learn to combine them.”

According to the physicists, they will move on with the fundamental research on the properties of hafnium oxide to make the nonvolatile random access memory cells more reliable. The team is also investigating the possibility of transferring their devices onto a flexible substrate, for use in flexible electronics.

Last year, the researchers offered a detailed description of how applying an electric field to hafnium oxide films affects their polarization. It is this very process that enables reducing ferroelectric memristor resistance, which emulates synapse strengthening in a biological brain. The team also works on neuromorphic computing systems with a digital architecture.

MIPT has provided this image illustrating the research,

Caption: The left image shows a synapse from a biological brain, the inspiration behind its artificial analogue (right). The latter is a memristor device implemented as a ferroelectric tunnel junction — that is, a thin hafnium oxide film (pink) interlaid between a titanium nitride electrode (blue cable) and a silicon substrate (marine blue), which doubles up as the second electrode. Electric pulses switch the memristor between its high and low resistance states by changing hafnium oxide polarization, and therefore its conductivity. Credit: Elena Khavina/MIPT Press Office

Here’s a link to and a citation for the paper,

Ferroelectric Second-Order Memristor by Vitalii Mikheev, Anastasia Chouprik, Yury Lebedinskii, Sergei Zarubin, Yury Matveyev, Ekaterina Kondratyuk, Maxim G. Kozodaev, Andrey M. Markeev, Andrei Zenkevich, Dmitrii Negrov. ACS Appl. Mater. Interfaces 2019, 11, 35, 32108–32114. DOI: https://doi.org/10.1021/acsami.9b08189 Publication Date: August 12, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

Science and technology, the 2019 Canadian federal government, and the Phoenix Pay System

This posting will focus on science, technology, the tragic consequence of bureaucratic and political bungling (the technology disaster that is the Phoenix payroll system), and the puzzling lack of concern about some of the biggest upcoming technological and scientific changes in government and society in decades or more.

Setting the scene

After getting enough Liberal party members elected to the Canadian Parliament’s House of Commons to form a minority government in October 2019, Prime Minister Justin Trudeau announced a new cabinet and some changes to the ‘science’ portfolios in November 2019. You can read more about the overall cabinet announcement in this November 20, 2019 news item by Peter Zimonjic on the Canadian Broadcasting Corporation (CBC) website; my focus will be the science and technology. (Note: For those who don’t know, there is already much discussion about how long this Liberal minority government will last. All it takes is a ‘loss of confidence’ motion, with a majority of the official opposition and other parties voting ‘no confidence’, and Canada will be back in the throes of an election. Mitigating against a speedy new federal election: the Conservative party [official opposition] needs to choose a new leader, and the other parties may not have the financial resources for another federal election so soon after the last one.)

Getting back to the most recent Cabinet announcements: it seems that this time around there’s significantly less interest in science. Concerns about this were noted in a November 22, 2019 article by Ivan Semeniuk for the Globe and Mail,

Canadian researchers are raising concerns that the loss of a dedicated science minister signals a reduced voice for their agenda around the federal cabinet table.

“People are wondering if the government thinks its science agenda is done,” said Marie Franquin, a doctoral student in neuroscience and co-president of Science and Policy Exchange, a student-led research-advocacy group. “There’s still a lot of work to do.”

While not a powerful player within cabinet, Ms. Duncan [Kirsty Duncan] proved to be an ardent booster of Canada’s research community and engaged with its issues, including the muzzling of federal scientists by the former Harper government and the need to improve gender equity in the research ecosystem.

Among Ms. Duncan’s accomplishments was the appointment of a federal chief science adviser [sic] and the commissioning of a landmark review of Ottawa’s support for fundamental research, chaired by former University of Toronto president David Naylor.

… He [Andre Albinati, managing principal with Earnscliffe Strategy Group] added the role of science in government is now further bolstered by chief science adviser [sic] Mona Nemer and a growing network of departmental science advisers [sic].

Mehrdad Hariri, president of the Canadian Science Policy Centre …, cautioned that the chief science adviser’s [sic] role was best described as “science for policy,” meaning the use of science advice in decision-making. He added that the government still needed a separate role like that filled by Ms. Duncan … to champion “policy for science,” meaning decisions that optimize Canada’s research enterprise.

There’s one other commentary (by Creso Sá) but I’m saving it for later.

The science minister disappears

There is no longer a separate position for Science. Kirsty Duncan was moved from her ‘junior’ position as Minister of Science (and Sport) to Deputy Leader of the government. Duncan’s science portfolio has been moved over to Navdeep Bains whose portfolio evolved from Minister of Innovation, Science and Economic Development (yes, there were two ‘ministers of science’) to Minister of Innovation, Science and Industry. (It doesn’t make a lot of sense to me. Sadly, nobody from the Prime Minister’s team called to ask for my input on the matter.)

Science (and technology) have to be found elsewhere

There’s the Natural Resources (i.e., energy, minerals and metals, forests, earth sciences, mapping, etc.) portfolio, which was led by Catherine McKenna, who’s been moved over to Infrastructure and Communities. There have been mumblings that she was considered ‘too combative’ in her efforts. Her replacement in Natural Resources is Seamus O’Regan. No word yet on whether or not he might also be ‘too combative’. Of course, it’s much easier to gain that label if you’re female. (You can read about the spray-painted slurs found on the windows of McKenna’s campaign offices after she was successfully re-elected. See: Mike Blanchfield’s October 24, 2019 article for Huffington Post and Brigitte Pellerin’s October 31, 2019 article for the Ottawa Citizen.)

There are other portfolios which can also be said to include science, such as Environment and Climate Change, which welcomes a new minister, Jonathan Wilkinson, moving over from his previous science portfolio, Fisheries, Oceans, and the Canadian Coast Guard, where Bernadette Jordan has moved into place. Patty Hajdu takes over at Health Canada (which, despite all of the talk about science muzzles being lifted, still has its muzzle in place). While it’s not typically considered a ‘science’ portfolio in Canada, the military establishment, regardless of country, has long been considered a source of science innovation; Harjit Sajjan has retained his Minister of National Defence portfolio.

Plus, there are at least half a dozen other portfolios that can be described as having significant science and/or technology elements folded into them, e.g., Transport Canada, Agriculture and Agri-Food, Public Safety and Emergency Preparedness, etc.

As I tend to focus on emerging science and technology, most of these portfolios are not ones I follow even on an irregular basis, meaning I have nothing more to add about them in this posting. Mixing science and technology together in this posting is a reflection of how tightly the two are linked. For example, university research into artificial intelligence is taking place on theoretical levels (science) and as applied in business and government (technology). Apologies to the mathematicians, but this explanation is already complicated and I don’t think I can do justice to their importance.

Moving on to technology with a strong science link: this next portfolio received even less attention than the ‘science’ portfolios, and I believe that’s undeserved.

The Minister of Digital Government and a bureaucratic débâcle

These days people tend to take the digital nature of daily life for granted, and that may be why this portfolio has escaped much notice. When the ministerial posting was first introduced, it was an addition to Scott Brison’s responsibilities as head of the Treasury Board. It continued to be linked to the Treasury Board when Joyce Murray inherited Brison’s position after his departure from politics. As of the latest announcement in November 2019, Digital Government and the Treasury Board are no longer tended to by the same cabinet member.

The new head of the Treasury Board is Jean-Yves Duclos while Joyce Murray has held on to the Minister of Digital Government designation. I’m not sure if the separation from the Treasury Board is indicative of the esteem the Prime Minister has for digital government or if this has been done to appease someone or some group, which means the digital government portfolio could well disappear in the future just as the ‘junior’ science portfolio did.

Regardless, here’s some evidence as to why I think ‘digital government’ is unfairly overlooked, from the minister’s December 13, 2019 Mandate Letter from the Prime Minister (Note: All of the emphases are mine),

I will expect you to work with your colleagues and through established legislative, regulatory and Cabinet processes to deliver on your top priorities. In particular, you will:

  • Lead work across government to transition to a more digital government in order to improve citizen service.
  • Oversee the Chief Information Officer and the Canadian Digital Service as they work with departments to develop solutions that will benefit Canadians and enhance the capacity to use modern tools and methodologies across Government.
  • Lead work to analyze and improve the delivery of information technology (IT) within government. This work will include identifying all core and at-risk IT systems and platforms. You will lead the renewal of SSC [Shared Services Canada which provides ‘modern, secure and reliable IT services so federal organizations can deliver digital programs and services to meet Canadians’ needs’] so that it is properly resourced and aligned to deliver common IT infrastructure that is reliable and secure.
  • Lead work to create a centre of expertise that brings together the necessary skills to effectively implement major transformation projects across government, including technical, procurement and legal expertise.
  • Support the Minister of Innovation, Science and Industry in continuing work on the ethical use of data and digital tools like artificial intelligence for better government.
  • With the support of the President of the Treasury Board and the Minister of Families, Children and Social Development, accelerate progress on a new Government of Canada service strategy that aims to create a single online window for all government services with new performance standards.
  • Support the Minister of Families, Children and Social Development in expanding and improving the services provided by Service Canada.
  • Support the Minister of National Revenue on additional steps required to meaningfully improve the satisfaction of Canadians with the quality, timeliness and accuracy of services they receive from the Canada Revenue Agency.
  • Support the Minister of Public Services and Procurement in eliminating the backlog of outstanding pay issues for public servants as a result of the Phoenix Pay System.
  • Lead work on the Next Generation Human Resources and Pay System to replace the Phoenix Pay System and support the President of the Treasury Board as he actively engages Canada’s major public sector unions.
  • Support the Minister of Families, Children and Social Development and the Minister of National Revenue to implement a voluntary, real-time e-payroll system with an initial focus on small businesses.
  • Fully implement lessons learned from previous information technology project challenges and failures [e.g., the Phoenix Payroll System], particularly around sunk costs and major multi-year contracts. Act transparently by sharing identified successes and difficulties within government, with the aim of constantly improving the delivery of projects large and small.
  • Encourage the use and development of open source products and open data, allowing for experimentation within existing policy directives and building an inventory of validated and secure applications that can be used by government to share knowledge and expertise to support innovation.

To be clear, the Minister of Digital Government is responsible (more or less) for helping to clean up a débâcle, i.e., the implementation of the federal government’s Phoenix Payroll System, and for driving even more digitization and modernization of government data and processes.

They’ve been trying to fix the Phoenix problems since the day it was implemented in early 2016. That’s right: come spring 2020, it will have been four years since the Liberal government chose to implement a digital payroll system that had been largely untested, despite its supplier’s concerns.

The Phoenix Pay System and a great sadness

The Public Service Alliance of Canada (the largest union for federal employees; PSAC) has a separate space for Phoenix on its website, which features this video,

That video was posted on September 24, 2018 (on YouTube) and, to my knowledge, the situation has not changed appreciably. A November 8, 2019 article by Tom Spears for the Ottawa Citizen details a very personal story about what can only be described as a failure on just about every level you can imagine,

Linda Deschâtelets’s death by suicide might have been prevented if the flawed Phoenix pay system hadn’t led her to emotional and financial ruin, a Quebec coroner has found.

Deschâtelets died in December of 2017, at age 52. At the time she was struggling with chronic pain and massive mortgage payments.

The fear of losing her home weighed heavily on her. In her final text message to one of her sons she said she had run out of energy and wanted to die before she lost her house in Val des Monts.

But Deschâtelets might have lived, says a report from coroner Pascale Boulay, if her employer, the Canada Revenue Agency, had shown a little empathy.

“During the final months before her death, she experienced serious financial troubles linked to the federal government’s pay system, Phoenix, which cut off her pay in a significant way, making her fear she would lose her house,” said Boulay’s report.

“A thorough analysis of this case strongly suggests that this death could have been avoided if a search for a solution to the current financial, psychological and medical situation had been made.”

Boulay found “there is no indication that management sought to meet Ms. Deschâtelets to offer her options. In addition, the lack of prompt follow-up in the processing of requests for information indicates a distressing lack of empathy for an employee who is experiencing real financial insecurity.”

Pay records “indeed show that she was living through serious financial problems and that she received irregular payments since the beginning of October 2017,” the coroner wrote.

As well, “her numerous online applications using the form for a compensation problem, in which she expresses her fear of not being able to make her mortgage payments and says that she wants a detailed statement of account, remain unanswered.”

On top of that, she had chronic back pain and sciatica and had been missing work. She was scheduled to get an ergonomically designed work area, but this change was never made even though she waited for months.

Money troubles kept getting worse.

She ran out of paid sick leave, and her department sent her an email to explain that she had automatically been docked pay for taking sick days. “In this same email, she was also advised that in the event that she missed additional days, other amounts would be deducted. No further follow-up with her was done,” the coroner wrote.

That email came eight days before her death.

Deschâtelets was also taking cocaine but this did not alter the fact that she genuinely risked losing her home over her financial problems, the coroner wrote.

“Given the circumstances, it is highly likely that Ms. Deschâtelets felt trapped” and ended her life “because of her belief that she would lose the house anyway. It was only a matter of time.”

The situation is “even more sad” because CRA had advisers on site who dealt with Phoenix issues, and could meet with employees, Boulay wrote.

“The federal government does a lot of promotion of workplace wellness. Surprisingly, these wellness measures are silent on the subject of financial insecurity at work,” Boulay wrote.

I feel sad for the family and indignant that there doesn’t seem to have been enough done to mitigate the hardships due to an astoundingly ill-advised decision to implement an untested payroll system for the federal government’s 280,000 or more civil servants.

Canada’s Senate reports back on Phoenix

I’m highlighting the Senate report here although there are also two reports from the Auditor General should you care to chase them down. From an August 1, 2018 article by Brian Jackson for IT World Canada,

In February 2016, in anticipation of the start of the Phoenix system rolling out, the government laid off 2,700 payroll clerks serving 120,000 employees. [I’m guessing the discrepancy in numbers of employees may be due to how the clerks were laid off, i.e., if they were laid off in groups scheduled to be made redundant at different intervals.]

As soon as Phoenix was launched, problems began. By May 2018 there were 60,000 pay requests backlogged. Now the government has dedicated resources to explaining to affected employees the best way to avoid pay-related problems, and to file grievances related to the system.

“The causes of the failure are multiple, including, failing to manage the pay system in an integrated fashion with human resources processes, not conducting a pilot project, removing essential processing functions to stay on budget, laying off experienced compensation advisors, and implementing a pay system that wasn’t ready,” the Senate report states. “We are dismayed that this project proceeded with minimal independent oversight, including from central agencies, and that no one has accepted responsibility for the failure of Phoenix or has been held to account. We believe that there is an underlying cultural problem that needs to be addressed. The government needs to move away from a culture that plays down bad news and avoids responsibility, [emphasis mine] to one that encourages employee engagement, feedback and collaboration.”

There is at least one estimate that the Phoenix failure will cost $2.2 billion but I’m reasonably certain that figure does not include the costs of suicide, substance abuse, counseling, marriage breakdown, etc. (Of course, how do you really estimate the cost of a suicide or a marriage breakdown or the impact that financial woes have on children?)

Also concerning the Senate report, there is a July 31, 2018 news item on CBC (Canadian Broadcasting Corporation) news online,

“We are not confident that this problem has been solved, that the lessons have all been learned,” said Sen. André Pratte, deputy chair of the committee. [emphases mine]

I haven’t seen much coverage about the Phoenix Pay System recently in the mainstream media but according to a December 4, 2019 PSAC update,

The Parliamentary Budget Officer has said the Phoenix situation could continue until 2023, yet government funding commitments so far have fallen significantly short of what is needed to end the Phoenix nightmare. 

PSAC will continue pressing for enough funding and urgent action to:

  • eliminate the over 200,000 cases in the pay issues backlog
  • compensate workers for their many hardships
  • stabilize Phoenix
  • properly develop, test and launch a new pay system

2023 would mean the débâcle had a seven-year lifespan, assuming everything has been made better by then.

Finally, there seems to be one other minister tasked with the Phoenix Pay System ‘fix’ (December 13, 2019 mandate letter) and that is the Minister of Public Services and Procurement, Anita Anand. She is apparently a rookie MP (member of Parliament), which would make her a ‘cabinet rookie’ as well. Interesting choice.

More digital for federal workers and the Canadian public

Despite all that has gone before, the government is continuing in its drive to digitize itself, as can be seen in the Minister of Digital Government’s mandate letter (excerpted above) and on the government’s Digital Government webspace,

Our digital shift to becoming more agile, open, and user-focused. We’re working on tomorrow’s Canada today.

I don’t find that particularly reassuring in light of the Phoenix Payroll System situation. However, on the plus side, Canada has a Digital Charter with 10 principles which include universal access, safety and security, control and consent, etc. Oddly, it looks like it’s the Minister of Justice and Attorney General of Canada, the Minister of Canadian Heritage and the Minister of Innovation, Science and Industry who are tasked with enhancing and advancing the charter. Shouldn’t this group also include the Minister of Digital Government?

The Minister of Digital Government, Joyce Murray, does not oversee a ministry and I think that makes this a ‘junior’ position in much the same way the Minister of Science was a junior position. It suggests a mindset where some of the biggest changes to come for both employees and the Canadian public are being overseen by someone without the resources to do the work effectively or the bureaucratic weight and importance to ensure the changes are done properly.

It’s all very well to have a section on the Responsible use of artificial intelligence (AI) on your Digital Government webspace, but there is no mention of ways and means to fix problems. For example, what happens to people who somehow run into an issue that the AI system can’t fix or even respond to because the algorithm wasn’t designed that way? Ever gotten caught in an automated telephone system? Or, perhaps more saliently, what about the people who died in two different airplane accidents due to the pilots’ poor training and an AI system? (For a more informed view of the Boeing 737 Max, AI, and two fatal plane crashes see: a June 2, 2019 article by Rachel Kraus for Mashable.)

The only other minister whose mandate letter includes AI is the Minister of Innovation, Science and Industry, Navdeep Bains (from his December 13, 2019 mandate letter),

  • With the support of the Minister of Digital Government, continue work on the ethical use of data and digital tools like artificial intelligence for better government.

So, the Minister of Digital Government, Joyce Murray, is supporting the Minister of Innovation, Science and Industry, Navdeep Bains. That would suggest a ‘junior’ position, wouldn’t it? If you look closely at the Minister of Digital Government’s mandate letter, you’ll see the Minister is almost always supporting another minister.

Where the Phoenix Pay System is concerned, the Minister of Digital Government is supporting the Minister of Public Services and Procurement, the previously mentioned rookie MP and rookie Cabinet member, Anita Anand. Interestingly, the employees’ union, PSAC, has decided (as of a November 20, 2019 news release) to ramp up its ad campaign regarding the Phoenix Pay System and its bargaining issues by targeting the Prime Minister and the new President of the Treasury Board, Jean-Yves Duclos. Guess whose mandate letter makes no mention of Phoenix (December 13, 2019 mandate letter for the President of the Treasury Board).

Open government, eh?

Putting a gift bow on a pile of manure doesn’t turn it into a gift (for most people, anyway) and calling your government open and/or transparent doesn’t necessarily make it so even when you amend your Access to Information Act to make it more accessible (August 22, 2019 Digital Government news release by Ruth Naylor).

One of the Liberal government’s most heavily publicized ‘open’ initiatives was the lifting of the muzzles put on federal scientists in the Environment and Natural Resources ministries. Those muzzles were put into place by a Conservative government and the 2015 Liberal government gained a lot of political capital from its actions. No one seemed to remember that Health Canada also had been muzzled. That muzzle had been put into place by one of the Liberal governments preceding the Conservative one. To date there is no word as to whether or not that muzzle has ever been lifted.

However, even in the ministries where the muzzles were lifted, it seems scientists didn’t feel free to speak even many months later (from a Feb 21, 2018 article by Brian Owens for Science),

More than half of government scientists in Canada—53%—do not feel they can speak freely to the media about their work, even after Prime Minister Justin Trudeau’s government eased restrictions on what they can say publicly, according to a survey released today by a union that represents more than 16,000 federal scientists.

That union—the Professional Institute of the Public Service of Canada (PIPSC) based in Ottawa—conducted the survey last summer, a little more than a year and a half into the Trudeau government. It followed up on a similar survey the union released in 2013 at the height of the controversy over the then-Conservative government’s reported muzzling of scientists by preventing media interviews and curtailing travel to scientific conferences. The new survey found the situation much improved—in 2013, 90% of scientists felt unable to speak about their work. But the union says more work needs to be done. “The work needs to be done at the department level,” where civil servants may have been slow to implement political directives, PIPSC President Debi Daviau said. “We need a culture change that promotes what we have heard from ministers.”

I found this a little chilling (from the PIPSC Defrosting Public Science; a 2017 survey of federal scientists webpage),

To better illustrate this concern, in 2013, The Big Chill revealed that 86% of respondents feared censorship or retaliation from their department or agency if they spoke out about a departmental decision or action that, based on their scientific knowledge, could bring harm to the public interest. In 2017, when asked the same question, 73% of respondents said they would not be able to do so without fear of censorship or retaliation – a mere 13% drop.

It’s possible things have improved, but while the 2018 Senate report did not focus on scientists, it did highlight issues with the government’s openness and transparency, or in the senators’ words: “… a culture that plays down bad news and avoids responsibility.” It seems the Senate is not the only group with concerns about government culture; the government’s own employees (the scientists, anyway) have them too.

The other science commentary

I can’t find any commentary or editorials about the latest ministerial changes or the mandate letters on the Canadian Science Policy Centre website, so I was doubly pleased to find this December 6, 2019 commentary by Creso Sá for University Affairs,

The recently announced Liberal cabinet brings what appear to be cosmetic changes to the science file. Former Science Minister Kirsty Duncan is no longer in it, which sparked confusion among casual observers who believed that the elimination of her position signalled the termination of the science ministry or the downgrading of the science agenda. In reality, science was and remains part of the renamed Ministry of Innovation, Science, and (now) Industry (rather than Economic Development), where Minister Navdeep Bains continues at the helm.

Arguably, these reactions show that appearances have been central [emphasis mine] to the modus operandi of this government. Minister Duncan was an active, and generally well-liked, champion for the Trudeau government’s science platform. She carried the torch of team science over the last four years, becoming vividly associated with the launch of initiatives such as the Fundamental Science Review, the creation of the chief science advisor position, and the introduction of equity provisions in the Canada Research Chairs program. She talked a good talk, but her role did not in fact give her much authority to change the course of science policy in the country. From the start, her mandate was mostly defined around building bridges with members of cabinet, which was likely good experience for her new role of deputy house leader.

Upon the announcement of the new cabinet, Minister Bains took to Twitter to thank Dr. Duncan for her dedication to placing science in “its rightful place back at the centre of everything our government does.” He indicated that he will take over her responsibilities, which he was already formally responsible for. Presumably, he will now make time to place science at the centre of everything the government does.

This kind of sloganeering has been common [emphasis mine] since the 2015 campaign, which seems to be the strategic moment the Liberals can’t get out of. Such was the real and perceived hostility of the Harper Conservatives to science that the Liberals embraced the role of enlightened advocates. Perhaps the lowest hanging fruit their predecessors left behind was the sheer absence of any intelligible articulation of where they stood on the science file, which the Liberals seized upon with gusto. Virtue signalling [emphasis mine] became a first line of response.

When asked about her main accomplishments over the past year as chief science advisor at the recent Canadian Science Policy Conference in Ottawa, Mona Nemer started with the creation of a network of science advisors across government departments. Over the past four years, the government has indeed not been shy about increasing the number of appointments with “science” in their job titles. That is not a bad thing. We just do not hear much about how “science is at the centre of everything the government does.” Things get much fuzzier when the conversation turns to the bold promises of promoting evidence-based decision making that this government has been vocal about. Queried on how her role has impacted policy making, Dr. Nemer suggested the question should be asked to politicians. [emphasis mine]

I’m tempted to describe the ‘Digital Government’ existence and portfolio as virtue signalling.

Finally

There doesn’t seem to be all that much government interest in science or, for that matter, technology. We have a ‘junior’ Minister of Science disappearing so that science can become part of all the ministries. Frankly, I wish science were integrated throughout all the ministries, but when you consider the government culture, this move more easily lends itself to even less responsibility being taken by anyone. Take another look at the comment from Canada’s Chief Science Advisor: “Queried on how her role has impacted policy making, Dr. Nemer suggested the question should be asked to politicians.” Meanwhile, we get a ‘junior’ Minister of Digital Government whose portfolio has the potential to affect Canadians of all ages, whether resident in Canada or not.

A ‘junior’ minister is not necessarily evil, as Sá points out, but I would like to see some indication that efforts are being made to shift the civil service culture and the attitude about how the government conducts its business, and that the Minister of Digital Government will receive the resources and the respect she needs to do her job. I’d also like to see some understanding of how catastrophic a wrong move has already been, and could be in the future, along with options for how citizens are going to make their way through this brave new digital government world and for fixing problems, especially the catastrophic ones.

*December 30, 2019 correction: After Scott Brison left his position as President of the Treasury Board and Minister of Digital Government in January 2019, Jane Philpott held the two positions until March 2019 when she left the Liberal Party. Carla Quatrough was acting head from March 4 – March 18, 2019 when Joyce Murray was appointed to the two positions which she held for eight months until November 2019 when, as I’ve noted, the ‘Minister of Digital Government’ was split from the ‘President of the Treasury Board’ appointment.

A deep look at atomic switches

A July 19, 2019 news item on phys.org describes research that may result in a substantive change for information technology,

A team of researchers from Tokyo Institute of Technology has gained unprecedented insight into the inner workings of an atomic switch. By investigating the composition of the tiny metal ‘bridge’ that forms inside the switch, their findings may spur the design of atomic switches with improved performance.

A July 22, 2019 Tokyo Institute of Technology press release (also on EurekAlert but published July 19, 2019), which originated the news item, explains how this research could have such an important impact,

Atomic switches are hailed as the tiniest of electrochemical switches that could change the face of information technology. Due to their nanoscale dimensions and low power consumption, they hold promise for integration into next-generation circuits that could drive the development of artificial intelligence (AI) and Internet of Things (IoT) devices.

Although various designs have emerged, one intriguing question concerns the nature of the metallic filament, or bridge, that is key to the operation of the switch. The bridge forms inside a metal sulfide layer sandwiched between two electrodes [see figure below], and is controlled by applying a voltage that induces an electrochemical reaction. The formation and annihilation of this bridge determines whether the switch is on or off.

Now, a research group including Akira Aiba and Manabu Kiguchi and colleagues at Tokyo Institute of Technology’s Department of Chemistry has found a useful way to examine precisely what the bridge is composed of.

By cooling the atomic switch enough so as to be able to investigate the bridge using a low-temperature measurement technique called point contact spectroscopy (PCS) [2], their study revealed that the bridge is made up of metal atoms from both the electrode and the metal sulfide layer. This surprising finding controverts the prevailing notion that the bridge derives from the electrode only, Kiguchi explains.

The team compared atomic switches with different combinations of electrodes (Pt and Ag, or Pt and Cu) and metal sulfide layers (Cu2S and Ag2S). In both cases, they found that the bridge is mainly composed of Ag.

The reason behind the dominance of Ag in the bridge is likely due to “the higher mobility of Ag ions compared to Cu ions”, the researchers say in their paper published in ACS Applied Materials & Interfaces.

They conclude that “it would be better to use metals with low mobility” for designing atomic switches with higher stability.

Much remains to be explored in the advancement of atomic switch technologies, and the team is continuing to investigate which combination of elements would be the most effective in improving performance.

###

Technical terms
[1] Atomic switch: The idea behind an atomic switch — one that can be controlled by the motion of a single atom — was introduced by Donald Eigler and colleagues at the IBM Almaden Research Center in 1991. Interest has since focused on how to realize and harness the potential of such extremely small switches for use in logic circuits and memory devices. Over the past two decades, researchers in Japan have taken a world-leading role in the development of atomic switch technologies.
[2] Point contact spectroscopy: A method of measuring the properties or excitations of single atoms at low temperature.

Caption: The ‘bridge’ that forms within the metal sulfide layer, connecting two metal electrodes, results in the atomic switch being turned on. Credit: Manabu Kiguchi

Here’s a link to and a citation for the paper,

Investigation of Ag and Cu Filament Formation Inside the Metal Sulfide Layer of an Atomic Switch Based on Point-Contact Spectroscopy by A. Aiba, R. Koizumi, T. Tsuruoka, K. Terabe, K. Tsukagoshi, S. Kaneko, S. Fujii, T. Nishino, M. Kiguchi. ACS Appl. Mater. Interfaces, 2019. DOI: https://doi.org/10.1021/acsami.9b05523 Publication date: July 5, 2019. Copyright © 2019 American Chemical Society

This paper is behind a paywall.

For anyone who might need a bit of a refresher for the chemical elements, Pt is platinum, Ag is silver, and Cu is copper. So, with regard to the metal sulfide layers Cu2S is copper sulfide and Ag2S is silver sulfide.
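For anyone who’d like a mental model of the set/reset behaviour described above, here’s a toy sketch in Python. The thresholds and resistance values are invented purely for illustration; the real device physics is electrochemical and far subtler than a simple threshold rule.

```python
class AtomicSwitch:
    """Toy model of an atomic switch. A voltage pulse above the 'set'
    threshold forms the metal bridge (ON, low resistance); a pulse below
    the 'reset' threshold dissolves it (OFF, high resistance). All the
    numbers here are invented for illustration, not measured values."""

    def __init__(self, v_set=0.3, v_reset=-0.3, r_on=100.0, r_off=1e6):
        self.v_set, self.v_reset = v_set, v_reset
        self.r_on, self.r_off = r_on, r_off
        self.bridge_formed = False  # starts OFF: no filament yet

    def apply_pulse(self, voltage):
        """Apply one voltage pulse and return the resulting resistance."""
        if voltage >= self.v_set:
            self.bridge_formed = True   # filament grows across the sulfide layer
        elif voltage <= self.v_reset:
            self.bridge_formed = False  # filament is annihilated
        # sub-threshold pulses leave the state unchanged (non-volatile memory)
        return self.resistance()

    def resistance(self):
        return self.r_on if self.bridge_formed else self.r_off

switch = AtomicSwitch()
print(switch.apply_pulse(0.5))   # strong positive pulse: switch turns ON (100.0)
print(switch.apply_pulse(0.1))   # weak pulse: ON state is retained (100.0)
print(switch.apply_pulse(-0.5))  # strong negative pulse: switch turns OFF (1000000.0)
```

Notice that the weak pulse reads the state without disturbing it; that retention of state between pulses is what makes such switches candidates for memory devices.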

Using light to manipulate neurons

There are three (or more?) possible applications, including neuromorphic computing, for this new optoelectronic technology, which is based on black phosphorus. A July 16, 2019 news item on Nanowerk announces the research,

Researchers from RMIT University [Australia] drew inspiration from an emerging tool in biotechnology – optogenetics – to develop a device that replicates the way the brain stores and loses information.

Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.

Caption: The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light. Credit: RMIT University

A July 17, 2019 RMIT University press release (also on EurekAlert but published on July 16, 2019), which originated the news item, expands on the theme,

Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain’s full sophisticated functionality.

“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer – the human brain,” Walia said.

“Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.

“We’re able to simulate the brain’s neural approach simply by shining different colours onto our chip.

“This technology takes us further on the path towards fast, efficient and secure light-based computing.

“It also brings us an important step closer to the realisation of a bionic brain – a brain-on-a-chip that can learn from its environment just like humans do.”

Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.

“This technology creates tremendous opportunities for researchers to better understand the brain and how it’s affected by disorders that disrupt neural connections, like Alzheimer’s disease and dementia,” Ahmed said.

The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations – information processing – ticking another box for brain-like functionality.

Developed at RMIT’s MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.

How the chip works:

Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together – and you’ve started creating a memory.

On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.

This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).

This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.

To develop the technology, the researchers used a material called black phosphorus (BP) that can be inherently defective in nature.

This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.

“Defects are usually looked on as something to be avoided, but here we’re using them to create something novel and useful,” Ahmed said.

“It’s a creative approach to finding solutions for the technical challenges we face.”

Here’s a link and a citation for the paper,

Multifunctional Optoelectronics via Harnessing Defects in Layered Black Phosphorus by Taimur Ahmed, Sruthi Kuriakose, Sherif Abbas, Michelle J. S. Spencer, Md. Ataur Rahman, Muhammad Tahir, Yuerui Lu, Prashant Sonar, Vipul Bansal, Madhu Bhaskaran, Sharath Sriram, Sumeet Walia. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201901991 First published (online): 17 July 2019

This paper is behind a paywall.
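To make the ‘different colours strengthen or weaken an artificial synapse’ idea concrete, here’s a toy numerical sketch. The wavelength crossover, learning rate, and clipping are all invented for illustration; they are not taken from the RMIT device.

```python
def photocurrent(wavelength_nm):
    """Toy rule: shorter ('blue') wavelengths give a positive photocurrent,
    longer ('red') ones a negative photocurrent. The 500 nm crossover is
    an arbitrary illustrative choice, not a property of the real chip."""
    return 1.0 if wavelength_nm < 500 else -1.0

def update_weight(weight, wavelength_nm, rate=0.1):
    """Positive current strengthens the artificial synapse (learning),
    negative current weakens it (forgetting); weight is clipped to [0, 1]."""
    w = weight + rate * photocurrent(wavelength_nm)
    return max(0.0, min(1.0, w))

w = 0.5
for _ in range(3):
    w = update_weight(w, 450)   # three blue pulses: potentiation
print(round(w, 2))              # 0.8
for _ in range(5):
    w = update_weight(w, 650)   # five red pulses: depression
print(round(w, 2))              # 0.3
```

The polarity of the photocurrent plays the role of the “direction switch” the press release describes: one colour nudges the synaptic weight up, the other nudges it down.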

Large Interactive Virtual Environment Laboratory (LIVELab) located in McMaster University’s Institute for Music & the Mind (MIMM) and the MetaCreation Lab at Simon Fraser University

Both of these bits have a music focus, but they represent two entirely different science-based approaches to that art form: one is solely about the music, while in the other, music figures as one of several art-making processes being investigated.

Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University

Laurel Trainor and Dan J. Bosnyak, both of McMaster University (Ontario, Canada), have written an October 27, 2019 essay about the LIVELab and their work for The Conversation website (Note: Links have been removed),

The Large Interactive Virtual Environment Laboratory (LIVELab) at McMaster University is a research concert hall. It functions as both a high-tech laboratory and theatre, opening up tremendous opportunities for research and investigation.

As the only facility of its kind in the world, the LIVELab is a 106-seat concert hall equipped with dozens of microphones, speakers and sensors to measure brain responses, physiological responses such as heart rate, breathing rates, perspiration and movements in multiple musicians and audience members at the same time.

Engineers, psychologists and clinician-researchers from many disciplines work alongside musicians, media artists and industry to study performance, perception, neural processing and human interaction.

In the LIVELab, acoustics are digitally controlled so the experience can change instantly from extremely silent with almost no reverberation to a noisy restaurant to a subway platform or to the acoustics of Carnegie Hall.

Real-time physiological data such as heart rate can be synchronized with data from other systems such as motion capture, and monitored and recorded from both performers and audience members. The result is that the reams of data that can now be collected in a few hours in the LIVELab used to take weeks or months to collect in a traditional lab. And having measurements of multiple people simultaneously is pushing forward our understanding of real-time human interactions.

Consider the implications of how music might help people with Parkinson’s disease to walk more smoothly or children with dyslexia to read better.

[…] area of ongoing research is the effectiveness of hearing aids. By the age of 60, nearly 49 per cent of people will suffer from some hearing loss. People who wear hearing aids are often frustrated when listening to music because the hearing aids distort the sound and cannot deal with the dynamic range of the music.

The LIVELab is working with the Hamilton Philharmonic Orchestra to solve this problem. During a recent concert, researchers evaluated new ways of delivering sound directly to participants’ hearing aids to enhance sounds.

Researchers hope new technologies can not only increase live musical enjoyment but alleviate the social isolation caused by hearing loss.

Imagine the possibilities for understanding music and sound: How it might help to improve cognitive decline, manage social performance anxiety, help children with developmental disorders, aid in treatment of depression or keep the mind focused. Every time we conceive and design a study, we think of new possibilities.

The essay also includes an embedded 12 min. video about LIVELab and details about studies conducted on musicians and live audiences. Apparently, audiences experience live performance differently than recorded performances and musicians use body sway to create cohesive performances. You can find the McMaster Institute for Music & the Mind here and McMaster’s LIVELab here.
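Synchronizing streams recorded at different rates, as the LIVELab does with physiological and motion-capture data, comes down to resampling everything onto a common clock. Here’s a minimal sketch with invented sample data; a real lab pipeline would also handle clock drift, dropouts, and filtering.

```python
def resample(timestamps, values, target_times):
    """Linearly interpolate a sampled signal onto new timestamps so that
    two streams recorded at different rates can be compared sample for
    sample. Assumes timestamps are sorted ascending."""
    out = []
    for t in target_times:
        if t <= timestamps[0]:
            out.append(values[0])       # clamp before the first sample
            continue
        if t >= timestamps[-1]:
            out.append(values[-1])      # clamp after the last sample
            continue
        for i in range(1, len(timestamps)):
            if timestamps[i] >= t:      # found the bracketing pair
                t0, t1 = timestamps[i - 1], timestamps[i]
                v0, v1 = values[i - 1], values[i]
                out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
                break
    return out

# heart rate sampled once per second, motion capture at 4 Hz (invented numbers)
hr_times, hr_vals = [0.0, 1.0, 2.0], [60.0, 62.0, 64.0]
mocap_times = [0.0, 0.25, 0.5, 0.75, 1.0]
print(resample(hr_times, hr_vals, mocap_times))
# [60.0, 60.5, 61.0, 61.5, 62.0]
```

Once both streams share timestamps, correlating, say, a performer’s heart rate with audience movement becomes a straightforward array operation.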

Capturing the motions of a string quartet performance. Laurel Trainor, Author provided [McMaster University]

Metacreation Lab at Simon Fraser University (SFU)

I just recently discovered that there’s a Metacreation Lab at Simon Fraser University (Vancouver, Canada), whose homepage offers this description: “Metacreation is the idea of endowing machines with creative behavior.” Here’s more from the homepage,

As the contemporary approach to generative art, Metacreation involves using tools and techniques from artificial intelligence, artificial life, and machine learning to develop software that partially or completely automates creative tasks. Through the collaboration between scientists, experts in artificial intelligence, cognitive sciences, designers and artists, the Metacreation Lab for Creative AI is at the forefront of the development of generative systems, be they embedded in interactive experiences or integrated into current creative software. Scientific research in the Metacreation Lab explores how various creative tasks can be automated and enriched. These tasks include music composition [emphasis mine], sound design, video editing, audio/visual effect generation, 3D animation, choreography, and video game design.

Besides scientific research, the team designs interactive and generative artworks that build upon the algorithms and research developed in the Lab. This work often challenges the social and cultural discourse on AI.
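One of the simplest generative-music techniques in the family the lab describes is a first-order Markov chain trained on an example melody. This is a toy sketch of the general idea, not the lab’s actual software; the seed melody and note names are invented.

```python
import random

def train_markov(melody):
    """Build first-order transition lists from an example melody:
    for each note, record every note that ever followed it."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    """Walk the transition table to produce a new melody in the
    style of the training example."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break  # dead end: the last note never had a successor
        out.append(rng.choice(choices))
    return out

seed_melody = ["C", "D", "E", "C", "E", "G", "E", "D", "C"]
table = train_markov(seed_melody)
rng = random.Random(42)  # fixed seed so the 'creative' output is reproducible
print(generate(table, "C", 8, rng))
```

Real metacreation systems go far beyond this, but the core move is the same: learn statistical structure from examples, then sample new material from it.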

Much to my surprise I received the Metacreation Lab’s inaugural email newsletter (received via email on Friday, November 15, 2019),

Greetings,

We decided to start a mailing list for disseminating news, updates, and announcements regarding generative art, creative AI and New Media. In this newsletter: 

  1. ISEA 2020: The International Symposium on Electronic Art. ISEA returns to Montreal; check the CFP below and contribute!
  2. ISEA 2015: A transcription of Sara Diamond’s keynote address “Action Agenda: Vancouver’s Prescient Media Arts” is now available for download. 
  3. Brain Art, the book: we are happy to announce the release of the first comprehensive volume on Brain Art. Edited by Anton Nijholt, and published by Springer.

Here are more details from the newsletter,

ISEA2020 – 26th International Symposium on Electronic Arts

Montreal, September 24, 2019
Montreal Digital Spring (Printemps numérique) is launching a call for participation as part of ISEA2020 / MTL connect to be held from May 19 to 24, 2020 in Montreal, Canada. Founded in 1990, ISEA is one of the world’s most prominent international arts and technology events, bringing together scholarly, artistic, and scientific domains in an interdisciplinary discussion and showcase of creative productions applying new technologies in art, interactivity, and electronic and digital media. For 2020, ISEA Montreal turns towards the theme of sentience.

ISEA2020 will be fully dedicated to examining the resurgence of sentience—feeling-sensing-making sense—in recent art and design, media studies, science and technology studies, philosophy, anthropology, history of science and the natural scientific realm—notably biology, neuroscience and computing. We ask: why sentience? Why and how does sentience matter? Why have artists and scholars become interested in sensing and feeling beyond, with and around our strictly human bodies and selves? Why has this notion been brought to the fore in an array of disciplines in the 21st century?
CALL FOR PARTICIPATION: WHY SENTIENCE? ISEA2020 invites artists, designers, scholars, researchers, innovators and creators to participate in the various activities deployed from May 19 to 24, 2020. To complete an application, please fill in the forms and follow the instructions.

The final submissions deadline is NOVEMBER 25, 2019. You can submit applications in the following categories: WORKSHOP and TUTORIAL, ARTISTIC WORK, FULL / SHORT PAPER, PANEL, POSTER, ARTIST TALK, and INSTITUTIONAL PRESENTATION.
You can apply for several categories. All profiles are welcome. Notifications of acceptance will be sent around January 13, 2020.

Important: please note that the Call for participation for MTL connect is not yet launched, but you can also apply to participate in the programming of the other Pavilions (4 other themes) when registrations open (coming soon): mtlconnecte.ca/en

Registration to attend ISEA2020 / MTL connect, from May 19 to 24, 2020, is now open. Book your Full Pass today and get the early-bird rate!

More from the newsletter,

ISEA 2015 was in Vancouver, Canada, and the proceedings and art catalog are still online. The news is that Sara Diamond released her 2015 keynote address as a paper: Action Agenda: Vancouver’s Prescient Media Arts. It is never too late so we thought we would let you know about this great read. See The 2015 Proceedings Here

The last item from the inaugural newsletter,

The first book that surveys how brain activity can be monitored and manipulated for artistic purposes, with contributions by interactive media artists, brain-computer interface researchers, and neuroscientists. View the Book Here

As per the Leonardo review from Cristina Albu:

“Another seminal contribution of the volume is the presentation of multiple taxonomies of “brain art,” which can help art critics develop better criteria for assessing this genre. Mirjana Prpa and Philippe Pasquier’s meticulous classification shows how diverse such works have become as artists consider a whole range of variables of neurofeedback.” Read the Review

For anyone not familiar with the ‘Leonardo’ cited in the above, it’s Leonardo; the International Society for the Arts, Sciences and Technology.

Should this kind of information excite you and motivate you to start metacreating, you can get in touch with the lab,

Our mailing address is:
Metacreation Lab for Creative AI
School of Interactive Arts & Technology
Simon Fraser University
250-13450 102 Ave.
Surrey, BC V3T 0A3
Web: http://metacreation.net/
Email: metacreation_admin (at) sfu (dot) ca

Rijksmuseum’s ‘live’ restoration of Rembrandt’s masterpiece, The Night Watch: is it or isn’t it like watching paint dry?

Somewhere in my travels, I saw ‘like watching paint dry’ as a description for the experience of watching researchers examining Rembrandt’s Night Watch. Granted it’s probably not that exciting but there has to be something to be said for being present while experts undertake an extraordinary art restoration effort. The Night Watch is not only a masterpiece—it’s huge.

This posting was written closer to the time the ‘live’ restoration first began. I have an update at the end of this posting.

A July 8, 2019 news item on the British Broadcasting Corporation’s (BBC) news online sketches in some details,

The masterpiece, created in 1642, has been placed inside a specially designed glass chamber so that it can still be viewed while being restored.

Enthusiasts can follow the latest on the restoration work online.

The celebrated painting was last restored more than 40 years ago after it was slashed with a knife.

The Night Watch is considered Rembrandt’s most ambitious work. It was commissioned by the mayor and leader of the civic guard of Amsterdam, Frans Banninck Cocq, who wanted a group portrait of his militia company.

The painting is nearly 4m tall and 4.5m wide (12.5 x 15 ft) and weighs 337kg (743lb) [emphasis mine]. As well as being famous for its size, the painting is acclaimed for its use of dramatic lighting and movement.

But experts at Amsterdam’s Rijksmuseum are concerned that aspects of the masterpiece are changing, pointing as an example to the blanching of the figure of a small dog. The museum said the multi-million euro research and restoration project under way would help staff gain a better understanding of the painting’s condition.

An October 16, 2018 Rijksmuseum press release announced the restoration work months prior to the start (Note: Some of the information is repetitive),

Before the restoration begins, The Night Watch will be the centrepiece of the Rijksmuseum’s display of their entire collection of more than 400 works by Rembrandt in an exhibition to mark the 350th anniversary of the artist’s death opening on 15 February 2019.

Commissioned in 1642 by the mayor and leader of the civic guard of Amsterdam, Frans Banninck Cocq, to create a group portrait of his shooting company, The Night Watch is recognised as one of the most important works of art in the world today and hangs in the specially designed “Gallery of Honour” at the Rijksmuseum. It is more than 40 years since The Night Watch underwent its last major restoration, following an attack on the painting in 1975.

The Night Watch will be encased in a state-of-the-art clear glass chamber designed by the French architect Jean Michel Wilmotte. This will ensure that the painting can remain on display for museum visitors. A digital platform will allow viewers from all over the world to follow the entire process online [emphasis mine] continuing the Rijksmuseum innovation in the digital field.

Taco Dibbits, General Director Rijksmuseum: The Night Watch is one of the most famous paintings in the world. It belongs to us all, and that is why we have decided to conduct the restoration within the museum itself – and everyone, wherever they are, will be able to follow the process online.

The Rijksmuseum continually monitors the condition of The Night Watch, and it has been discovered that changes are occurring, such as the blanching [emphasis mine] on the dog figure at the lower right of the painting. To gain a better understanding of its condition as a whole, the decision has been taken to conduct a thorough examination. This detailed study is necessary to determine the best treatment plan, and will involve imaging techniques, high-resolution photography and highly advanced computer analysis. Using these and other methods, we will be able to form a very detailed picture of the painting – not only of the painted surface, but of each and every layer, from varnish to canvas.

A great deal of experience has been gained in the Rijksmuseum relating to the restoration of Rembrandt’s paintings. Last year saw the completion of the restoration of Rembrandt’s spectacular portraits of Marten Soolmans and Oopjen Coppit. The research team working on The Night Watch is made up of researchers, conservators and restorers from the Rijksmuseum, which will conduct this research in close collaboration with museums and universities in the Netherlands and abroad.

The Night Watch

The group portrait of the officers and other members of the militia company of District II, under the command of Captain Frans Banninck Cocq and Lieutenant Willem van Ruytenburch, now known as The Night Watch, is Rembrandt’s most ambitious painting. This 1642 commission by members of Amsterdam’s civic guard is Rembrandt’s first and only painting of a militia group. It is celebrated particularly for its bold and energetic composition, with the musketeers being depicted ‘in motion’, rather than in static portrait poses. The Night Watch belongs to the city of Amsterdam, and it has been the highlight of the Rijksmuseum collection since 1808. The architect of the Rijksmuseum building Pierre Cuypers (1827-1921) even created a dedicated gallery of honour for The Night Watch, and it is now admired there by more than 2.2 million people annually.

2019, The Year of Rembrandt

The Year of Rembrandt, 2019, marks the 350th anniversary of the artist’s death with two major exhibitions honouring the great master painter. All the Rembrandts of the Rijksmuseum (15 February to 10 June 2019) will bring together the Rijksmuseum’s entire collection of Rembrandt’s paintings, drawings and prints, for the first time in history. The second exhibition, Rembrandt-Velázquez (11 October 2019 to 19 January 2020), will put the master in international context by placing 17th-century Spanish and Dutch masterpieces in dialogue with one another.

First, the restoration work is not being livestreamed; the digital platform Operation Night Watch is a collection of resources that are being updated constantly. For example, the first scan was placed online in Operation Night Watch on July 16, 2019.

Second, ‘blanching’ reminded me of a June 22, 2017 posting where I featured research into why masterpieces were turning into soap (Note: The second paragraph should be indented to indicate that it’s an excerpt from the news release. Unfortunately, the folks at WordPress appear to have removed the tools that would allow me to do that and more),

This piece of research has made a winding trek through the online science world. First it was featured in an April 20, 2017 American Chemical Society news release on EurekAlert,

A good art dealer can really clean up in today’s market, but not when some weird chemistry wreaks havoc on masterpieces. Art conservators started to notice microscopic pockmarks forming on the surfaces of treasured oil paintings that cause the images to look hazy. It turns out the marks are eruptions of paint caused, weirdly, by soap that forms via chemical reactions. Since you have no time to watch paint dry, we explain how paintings from Rembrandts to O’Keefes are threatened by their own compositions — and we don’t mean the imagery.

….

Getting back to the Night Watch, there’s a July 8, 2019 Rijksmuseum press release which provides some technical details,

On 8 July 2019 the Rijksmuseum starts Operation Night Watch. It will be the biggest and most wide-ranging research and conservation project in the history of Rembrandt’s masterpiece. The goal of Operation Night Watch is the long-term preservation of the painting. The entire operation will take place in a specially designed glass chamber so the visiting public can watch.

Never before has such a wide-ranging and thorough investigation been made of the condition of The Night Watch. The latest and most advanced research techniques will be used, ranging from digital imaging and scientific and technical research, to computer science and artificial intelligence. The research will lead to a better understanding of the painting’s original appearance and current state, and provide insight into the many changes that The Night Watch has undergone over the course of the last four centuries. The outcome of the research will be a treatment plan that will form the basis for the restoration of the painting.

Operation Night Watch can also be followed online from 8 July 2019 at rijksmuseum.nl/nightwatch

From art historical research to artificial intelligence

Operation Night Watch will look at questions regarding the original commission, Rembrandt’s materials and painting technique, the impact of previous treatments and later interventions, as well as the ageing, degradation and future of the painting. This will involve the newest and most advanced research methods and technologies, including art historical and archival research, scientific and technical research, computer science and artificial intelligence.

During the research phase The Night Watch will be unframed and placed on a specially designed easel. Two platform lifts will make it possible to study the entire canvas, which measures 379.5 cm in height and 454.5 cm in width.

Advanced imaging techniques

Researchers will make use of high resolution photography, as well as a variety of advanced imaging techniques, such as macro X-ray fluorescence scanning (macro-XRF) and hyperspectral imaging, also called infrared reflectance imaging spectroscopy (RIS), to accurately determine the condition of the painting.

56 macro-XRF scans

The Night Watch will be scanned millimetre by millimetre using a macro X-ray fluorescence scanner (macro-XRF scanner). This instrument uses X-rays to analyse the different chemical elements in the paint, such as calcium, iron, potassium and cobalt. From the resulting distribution maps of the various chemical elements in the paint it is possible to determine which pigments were used. The macro-XRF scans can also reveal underlying changes in the composition, offering insights into Rembrandt’s painting process. To scan the entire surface of The Night Watch it will be necessary to make 56 scans, each one of which will take 24 hours.
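
A quick back-of-the-envelope calculation shows what those figures imply. The canvas dimensions, scan count and scan duration come from the press release; the per-scan area is my own estimate, not a museum figure:

```python
# Rough arithmetic for the macro-XRF campaign described above.
# Canvas dimensions, scan count and scan duration come from the
# press release; everything derived from them is an estimate.
height_cm, width_cm = 379.5, 454.5
num_scans = 56
hours_per_scan = 24

total_hours = num_scans * hours_per_scan         # 1,344 hours of scanning
total_days = total_hours / 24                    # i.e., 56 full days

canvas_area_cm2 = height_cm * width_cm           # ~172,483 cm^2
area_per_scan_cm2 = canvas_area_cm2 / num_scans  # ~3,080 cm^2 per scan

print(f"{total_days:.0f} days of scanning, ~{area_per_scan_cm2:.0f} cm² per scan")
```

In other words, the XRF phase alone represents nearly two months of continuous instrument time, with each scan covering a patch of canvas a little over 3,000 square centimetres.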

12,500 high-resolution photographs

A total of some 12,500 photographs will be taken at extremely high resolution, from 180 micrometres down to 5 micrometres (a micrometre is a thousandth of a millimetre). Never before has such a large painting been photographed at such high resolution. In this way it will be possible to see details such as pigment particles that normally would be invisible to the naked eye. The cameras and lamps will be attached to a dynamic imaging frame designed specifically for this purpose.
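
Assuming, purely for illustration, that the 12,500 photographs tile the canvas edge to edge with no overlap (the press release doesn’t say how much overlap there is), each frame would cover only a few centimetres of canvas:

```python
import math

# Illustrative estimate only: assumes the 12,500 photos tile the
# 379.5 cm x 454.5 cm canvas with no overlap, at the finest
# stated resolution of 5 micrometres per pixel.
canvas_area_cm2 = 379.5 * 454.5       # ~172,483 cm^2
num_photos = 12_500
pixel_size_um = 5

area_per_photo_cm2 = canvas_area_cm2 / num_photos  # ~13.8 cm^2
side_cm = math.sqrt(area_per_photo_cm2)            # ~3.7 cm square per frame
pixels_per_side = side_cm * 1e4 / pixel_size_um    # cm -> micrometres, / 5
megapixels = pixels_per_side ** 2 / 1e6            # ~55 MP per photo

print(f"~{side_cm:.1f} cm per side, ~{megapixels:.0f} MP per photo")
```

On those assumptions, each photograph covers a square less than four centimetres on a side and runs to roughly 55 megapixels, which gives a sense of why the imaging frame had to be purpose-built.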

Glass chamber

Operation Night Watch is for everyone to follow and will take place in full view of the visiting public in an ultra-transparent glass chamber designed by the French architect Jean-Michel Wilmotte.

Research team

The Rijksmuseum has extensive experience and expertise in the investigation and treatment of paintings by Rembrandt. The conservation treatment of Rembrandt’s portraits of Marten Soolmans and Oopjen Coppit was completed in 2018. The research team working on The Night Watch is made up of more than 20 Rijksmuseum scientists, conservators, curators and photographers. For this research, the Rijksmuseum is also collaborating with museums and universities in the Netherlands and abroad, including the Dutch Cultural Heritage Agency (RCE), Delft University of Technology (TU Delft), the University of Amsterdam (UvA), Amsterdam University Medical Centre (AUMC), University of Antwerp (UA) and National Gallery of Art, Washington DC.

The Night Watch

Rembrandt’s Night Watch is one of the world’s most famous works of art. The painting is the property of the City of Amsterdam, and it is the heart of Amsterdam’s Rijksmuseum, where it is admired by more than two million visitors each year. The Night Watch is the Netherlands’ foremost national artistic showpiece, and a must-see for tourists.

Rembrandt’s group portrait of officers and other civic guardsmen of District 2 in Amsterdam under the command of Captain Frans Banninck Cocq and Lieutenant Willem van Ruytenburch has been known since the 18th century as simply The Night Watch. It is the artist’s most ambitious painting. One of Amsterdam’s 20 civic guard companies commissioned the painting for its headquarters, the Kloveniersdoelen, and Rembrandt completed it in 1642. It is Rembrandt’s only civic guard piece, and it is famed for the lively and daring composition that portrays the troop in active poses rather than the traditional static ones.

Donors and partners

AkzoNobel is the main partner of Operation Night Watch.

Operation Night Watch is made possible by The Bennink Foundation, PACCAR Foundation, Piet van der Slikke & Sandra Swelheim, American Express Foundation, Familie De Rooij, Het AutoBinck Fonds, Segula Technologies, Dina & Kjell Johnsen, Familie D. Ermia, Familie M. van Poecke, Henry M. Holterman Fonds, Irma Theodora Fonds, Luca Fonds, Piek-den Hartog Fonds, Stichting Zabawas, Cevat Fonds, Johanna Kast-Michel Fonds, Marjorie & Jeffrey A. Rosen, Stichting Thurkowfonds and the Night Watch Fund.

With the support of the Ministry of Education, Culture and Science, the City of Amsterdam, Founder Philips and main sponsors ING, BankGiro Loterij and KPN, every year more than 2 million people visit the Rijksmuseum and The Night Watch.

Details:
Rembrandt van Rijn (1606-1669)
The Night Watch, 1642
oil on canvas
Rijksmuseum, on loan from the Municipality of Amsterdam

Update as of November 22, 2019

I just clicked on the Operation Night Watch link and found a collection of resources including videos of live updates from October 2019. As noted earlier, they’re not livestreaming the restoration. The October 29, 2019 ‘live update’ features a host speaking in Dutch (with English subtitles in the version I was viewing) and interviews with the scientists conducting the research necessary before they start actually restoring the painting.

Reading (2 of 2): Is zinc-infused underwear healthier for women?

The first part of this Reading ‘series’, Reading (1 of 2): an artificial intelligence story in British Columbia (Canada), was mostly about how one type of story, in this case one based on a survey, is presented and placed in one or more media outlets. The desired outcome is more funding from government and more interest from investors (they tucked in an ad for an upcoming artificial intelligence conference in British Columbia).

This story about zinc-infused underwear for women also uses science to make its case and it, too, is about raising money, in this case through a Kickstarter campaign.

If Huha’s (that’s the company name) claims for ‘zinc-infused mineral undies’ are to be believed, the answer is an unequivocal yes. The reality as per the current research on the topic is not quite as conclusive.

The semiotics (symbolism)

Huha features fruit alongside the pictures of their underwear. You’ll see an orange, papaya, and melon in the Kickstarter campaign images and on the company website. It seems to be one of those attempts at subliminal communication. Fruit is good for you, therefore our underwear is good for you. In fact, our underwear (just like the fruit) has health benefits.

For a deeper dive into the world of semiotics, there’s the ‘be fruitful and multiply’ stricture which is found in more than one religious or cultural orientation and is hard to dismiss once considered.

There is no reason to add fruit to the images other than to suggest benefits from nature and fertility (or fruitfulness). They’re not selling fruit, and the fruits shown are not particularly high in zinc. If all you’re looking for is colour, why not vegetables or puppies?

The claims

I don’t have time to review all of the claims but I’ll highlight a few. My biggest problem with the claims is that there are no citations or links to studies, i.e., the research. So, something like this becomes hard to assess,

Most women’s underwear are made with chemical-based, synthetic fibers that lead to yeast and UTI [urinary tract infection] infections, odor, and discomfort. They’ve also been proven to disrupt human hormones, have been linked to cancer, pollute the planet aggressively, and stay in landfills far too long.

There’s more than one path to a UTI and/or odor and/or discomfort but I can see where fabrics that don’t breathe can exacerbate or cause problems of that nature. I have a little more difficulty with the list that follows. I’d like to see the research on underpants disrupting human hormones. Is this strictly a problem for women or could men also be affected? (If you should know, please leave a comment.)

As for ‘linked to cancer’, I’m coming to the conclusion that everything is linked to cancer. Offhand, I’ve been told peanuts, charcoal broiled items (I think it’s the char), and my negative thoughts are all linked to cancer.

One of the last claims in the excerpted section, ‘pollute the planet aggressively’, raises this question: when did underpants become ‘aggressive’?

The final claim seems unexceptional. Our detritus is staying too long in our landfills. Of course, the next question is: how much faster do the Huha underpants degrade in a landfill? That question is not addressed in the Kickstarter campaign material.

Talking to someone with more expertise

I contacted Dr. Andrew Maynard, Associate Director at the Arizona State University (ASU) School for the Future of Innovation in Society. He has a PhD in physics and longstanding experience in research and evaluation of emerging technologies (for many years he specialized in nanoparticle analysis and aerosol exposure in occupational settings).

Professor Maynard is a widely recognized expert and public commentator on emerging technologies and their safe and responsible development and use, and has testified before [US] congressional committees on a number of occasions. 

None of this makes him infallible but I trust that he always works with integrity and bases his opinions on the best information at hand. I’ve always found him to be a reliable source of information.

Here’s what he had to say (from an October 25, 2019 email),

I suspect that their claims are pushing things too far – from what I can tell, professionals tend to advise against synthetic underwear because of the potential build up of moisture and bacteria and the lack of breathability, and tend to suggest natural materials – which indicates that natural fibers and good practices should be all most people need. I haven’t seen any evidence for an underwear crisis here, and one concern is that the company is manufacturing a problem which they then claim to solve. That said, I can’t see anything totally egregious in what they are doing. And the zinc presence makes sense in that it prevents bacterial growth/activity within the fabric, thus reducing the chances of odor and infection.

Pharmaceutical grade zinc and research into underwear

I was a little curious about ‘pharmaceutical grade’ zinc as my online searches for a description were unsuccessful. Andrew explained that the term likely means ‘high purity’ zinc suitable for use in medications rather than the zinc found in roofing panels.

After the reference to ‘pharmaceutical grade’ zinc there’s a reference to ‘smartcel sensitive Zinc’. Here’s more from the smartcel sensitive webpage,

smartcel™ sensitive is skin friendly thanks to zinc oxide’s soothing and anti-inflammatory capabilities. This is especially useful for people with sensitive skin or skin conditions such as eczema or neurodermitis. Since zinc is a component of skin building enzymes, it operates directly on the skin. An active exchange between the fiber and the skin occurs when the garment is worn.

Zinc oxide also acts as a shield against harmful UVA and UVB radiation [it’s used in sunscreens], which can damage our skin cells. Depending on the percentage of smartcel™ sensitive used in any garment, it can provide up to 50 SPF.

Further to this, zinc oxide possesses strong antibacterial properties, especially against odour causing bacteria, which helps to make garments stay fresh longer. *

I couldn’t see how zinc helps the pH balance in anyone’s vagina, as claimed in the Kickstarter campaign (smartcel, on its ‘sensitive’ webpage, doesn’t make that claim), but I found an answer in an April 4, 2017 Q&A (question and answer) interview by Jocelyn Cavallo for Medium,

What women need to know about their vaginal pH

Q & A with Dr. Joanna Ellington

A woman’s vagina is a pretty amazing body part. Not only can it be a source of pleasure but it also can help create and bring new life into the world. On top of all that, it has the extraordinary ability to keep itself clean by secreting natural fluids and maintaining a healthy pH to encourage the growth of good bacteria and discourage harmful bacteria from moving in. Despite being so important, many women are never taught the vital role that pH plays in their vaginal health or how to keep it in balance.

We recently interviewed renowned Reproductive Physiologist and inventor of IsoFresh Balancing Vaginal Gel, Dr. Joanna Ellington, to give us the low down on what every woman needs to know about their vaginal pH and how to maintain a healthy level.

What is pH?

Dr. Ellington: pH is a scale of acidity and alkalinity. The measurements range from 0 to 14: a pH lower than 7 is acidic, and a pH higher than 7 is considered alkaline.
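
For readers who want the numbers behind that scale: pH is the negative base-10 logarithm of the hydrogen-ion concentration. This short sketch (mine, not Dr. Ellington’s) converts between the two:

```python
import math

def ph_from_concentration(h_plus_molar):
    """pH from hydrogen-ion concentration [H+] in mol/L."""
    return -math.log10(h_plus_molar)

def concentration_from_ph(ph):
    """Hydrogen-ion concentration (mol/L) from pH."""
    return 10 ** (-ph)

# Neutral water sits at pH 7 ([H+] = 1e-7 mol/L); the healthy
# reproductive-age vaginal pH of 4.5 mentioned below corresponds to
# ~3.2e-5 mol/L -- roughly 300 times more acidic than neutral water.
print(concentration_from_ph(4.5) / concentration_from_ph(7.0))
```

Because the scale is logarithmic, each whole-number step is a tenfold change in acidity, which is why a shift of even half a pH unit matters.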

What is the “perfect” pH level for a woman’s vagina?

Dr. E.: For most women of a reproductive age vaginal pH should be 4.5 or less. For post-menopausal women this can go up to about 5. The vagina will naturally be at a high pH right after sex, during your period, after you have a baby or during ovulation (your fertile time).

Are there diet and environmental factors that affect a women’s vaginal pH level?

Dr. E.: Yes, iron, zinc and manganese have been found to be critical for lactobacillus (healthy bacteria) to function. Many women don’t eat well and should supplement these, especially if they are vegetarian. Additionally, many vegetarians have low estrogen because they do not eat the animal fats that help make our sex steroids. Without estrogen, vaginal pH and bacterial imbalance can occur. It is important that women on these diets ensure good fat intake from other sources, and have estrogen and testosterone and iron levels checked each year.

Do clothing and underwear affect vaginal pH?

Dr. E.: Yes, tight clothing and thong underwear [emphasis mine] have been shown in studies to decrease populations of healthy vaginal bacteria and cause pH changes in the vagina. Even if you wear these sometimes, it is important for your vaginal ecosystem that loose clothing or skirts be worn some too.

Yes, Dr. Ellington is the inventor of the IsoFresh Balancing Vaginal Gel, and whether that’s a good product should be researched separately, but all of the information in the excerpt accords with what I’ve heard over the years and fits in nicely with what Andrew said: zinc in underwear could be useful for its antimicrobial properties. Also, note the reference to ‘thong underwear’ as a possible source of difficulty, and note that Huha is offering thong and very high-cut underwear.

Of course, your underwear may already have zinc in it as this research suggests (thank you, Andrew, for the reference),

Exposure of women to trace elements through the skin by direct contact with underwear clothing by Thao Nguyen & Mahmoud A. Saleh. Journal of Environmental Science and Health, Part A: Toxic/Hazardous Substances and Environmental Engineering, Volume 52, Issue 1, 2017, Pages 1-6. DOI: https://doi.org/10.1080/10934529.2016.1221212 Published online: 09 Sep 2016

This paper is behind a paywall but I have access through a membership in the Canadian Academy of Independent Scholars. So, here’s the part I found interesting,

… The main chemical pollutants present in textiles are dyes containing carcinogenic amines, metals, pentachlorophenol, chlorine bleaching, halogen carriers, free formaldehyde, biocides, fire retardants and softeners.[1] Metals found in textile products and clothing are used for many purposes: Co [cobalt], Cu [copper], Cr [chromium] and Pb [lead] are used as metal complex dyes, Cr as pigments mordant, Sn [tin] as catalyst in synthetic fabrics and as synergists of flame retardants, Ag [silver] as antimicrobials and Ti [titanium] and Zn [zinc] as water repellents and odor preventive agents.[2–5] When present in textile materials, the toxic elements mentioned above represent not only a major environmental problem in the textile industry but also they may impose potential danger to human health by absorption through the skin.[6,7] [emphasis mine] Chronic exposure to low levels of toxic elements has been associated with a number of adverse human health effects.[8–11] Also exposure to high concentration of elements which are considered as essential for humans such as Cu, Co, Fe [iron], Mn [manganese] or Zn among others, can also be harmful.[12] [emphasis mine] Co, Cr, Cu and Ni [nickel] are skin sensitizers,[13,14] which may lead to contact dermatitis, also Cr can lead to liver damage, pulmonary congestion and cancer.[15] [emphasis mine] The purpose of the present study was to determine the concentrations of a number of elements in various skin-contact clothes. For risk estimations, the determination of the extractable amounts of heavy metals is of importance, since they reflect their possible impact on human health. [p. 2 PDF]

So, there’s the link to cancer. Maybe.

Are zinc-infused undies a good idea?

It could go either way. (For specifics about the conclusions reached in the study, scroll down to the Ooops! subheading.) I like the idea of using sustainable Eucalyptus-based material (Tencel) for the underwear as I have heard that cotton isn’t sustainably cultivated. As for claims regarding the product’s environmental friendliness, it’s based on wood, specifically, cellulose, which Canadian researchers have been experimenting with at the nanoscale* and they certainly have been touting nanocellulose as environmentally friendly. Tencel’s sustainability page lists a number of environmental certifications from the European Union, Belgium, and the US.

*Somewhere in the Kickstarter campaign material, there’s a reference to nanofibrils and I’m guessing those nanofibrils are Tencel’s wood fibers at the nanoscale. As well, I’m guessing that smartcel’s fabric contains zinc oxide nanoparticles.

Whether or not you need more zinc is something you need to determine for yourself. Finding out if the pH balance in your vagina is within a healthy range might be a good way to start. It would also be nice to know how much zinc is in the underwear and whether it’s being used for its antimicrobial properties and/or as a source of one of the minerals necessary for your health.

How the Kickstarter campaign is going

At the time of this posting, they’ve reached a little over $24,000 with six days left. The goal was $10,000. Sadly, there are no questions in the FAQ (frequently asked questions).

Reading tips

It’s exhausting trying to track down authenticity. In this case, there were health and environmental claims to assess, but I do have a few suggestions.

  1. Look at the imagery critically and try to ignore the hyperbole.
  2. How specific are the claims? e.g., How much zinc is there in the underpants?
  3. Who are their experts and how trustworthy are the agencies/companies mentioned?
  4. If research is cited, are the publishers reputable and is the journal reputable?
  5. Does it make sense given your own experience?
  6. What are the consequences if you make a mistake?

Overblown claims and vague intimations of disease are not usually good signs. Conversely, someone with great credentials may not be trustworthy, which is why I usually try to find more than one source for confirmation. The person behind this campaign and the Huha company is Alexa Suter. She’s based in Vancouver, Canada and seems to have spent most of her time as a writer and social media and video producer with a few forays into sales and real estate. I wonder if she’s modeling herself and her current lifestyle entrepreneurial effort on Gwyneth Paltrow and her lifestyle company, Goop.

Huha underwear may fulfill its claims or it may be just another pair of underwear or it may be unhealthy. As for the environmentally friendly claims, let’s hope they hold up. On a personal level, I’m more hopeful about those.

Regardless, the underwear is not cheap. The smallest pledge that will get your underwear (a three-pack) is $65 CAD.

Ooops! ETA: November 8, 2019:

I forgot to include the conclusion the researchers arrived at and some details on how they arrived at those conclusions. First, they tested 120 pairs of underpants in all sorts of colours and made in different parts of the world.

Second, some underpants showed excessive levels of metals. Cotton was the most likely material to show an excess, although nylon and polyester can also be problematic. To put this into proportion and with reference to zinc, “Zn exceeded the limit in 4% of the tested samples and was found mostly in samples manufactured in China.” [p. 6 PDF] Finally, dark colours tested for higher levels of metals than light colours.

While it doesn’t mention underpants as such, there’s a November 8, 2019 article ‘Five things everyone with a vagina should know’ by Paula McGrath for BBC News online. McGrath’s health expert is Dr. Jen Gunter, a physician whose specialties are obstetrics, gynaecology, and pain.

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a followup to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and the MIT news release includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
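
The basic idea of the mapping can be sketched in a few lines. This is a simplified illustration, not Buehler’s actual method: the real scale is built from each amino acid’s quantum-chemistry vibrational frequencies (overlaid like chords), whereas here each of the 20 residues is simply assigned one step of an equal-tempered 20-tone octave, and the example peptide is made up:

```python
# Hypothetical sketch of a 20-tone amino-acid scale. The real work
# (Yu et al., ACS Nano, 2019) derives each residue's sound from its
# computed vibrational spectrum; here each of the 20 standard amino
# acids simply gets one step of an equal-tempered 20-tone octave
# starting at 220 Hz.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes, 20 residues

BASE_HZ = 220.0
TONE_HZ = {aa: BASE_HZ * 2 ** (i / 20) for i, aa in enumerate(AMINO_ACIDS)}

def sonify(sequence):
    """Translate a protein sequence into a list of tone frequencies (Hz)."""
    return [TONE_HZ[aa] for aa in sequence if aa in TONE_HZ]

# A short, made-up peptide becomes a melody of five tones.
melody = sonify("MKTAY")
print([round(f, 1) for f in melody])
```

Even this toy version conveys the core move: a protein’s amino acid sequence becomes a note sequence, which is what lets an AI system trained on the resulting “music” work backwards from new melodies to new protein designs.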

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

By using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, Markus J. Buehler. ACS Nano, 2019. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication Date: June 26, 2019. Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Oops! I almost forgot the link to the Amino Acid Synthesizer.

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones featured in this posting are the first I’ve stumbled across suggesting the hype is even more exaggerated than the most cynical among us might have thought. (BTW, the 2019 material comes later, as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk and that ‘machine’ was in fact a masterful hoax as The Turk held a hidden compartment from which a human being directed his moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th-century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …
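The “hybrid A.I.” pattern Shane describes can be sketched in a few lines: the bot answers on its own when its confidence is high, and silently hands the conversation to a human agent when it shows signs of struggling. Everything here is an illustrative assumption on my part (the toy bot, the 0.8 threshold, the human work queue), not any company’s actual implementation.

```python
# Hypothetical sketch of a "hybrid A.I." chatbot: answer automatically when
# confident, escalate to a hidden human worker otherwise.

from dataclasses import dataclass
from typing import Callable

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0-1.0, reported by the underlying model

def hybrid_respond(
    message: str,
    bot: Callable[[str], BotReply],
    human_queue: list,
    threshold: float = 0.8,
) -> str:
    """Return the bot's answer, or escalate to a human below the threshold."""
    reply = bot(message)
    if reply.confidence >= threshold:
        return reply.text
    # Below the threshold: a human picks up the task, invisibly to the user.
    human_queue.append(message)
    return "One moment, let me check that for you."

# Toy bot that is only confident about greetings.
def toy_bot(message: str) -> BotReply:
    if "hello" in message.lower():
        return BotReply("Hi there!", 0.95)
    return BotReply("I am not sure.", 0.3)

queue: list = []
print(hybrid_respond("Hello!", toy_bot, queue))             # bot answers itself
print(hybrid_respond("Explain my invoice", toy_bot, queue)) # quietly escalated
```

If it works as planned, the user never learns which replies came from the machine and which came from the queue of human workers—exactly the Turing-test ambiguity Shane describes.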

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th-century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’, her latest book, co-written with Siddharth Suri (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?