Tag Archives: Kevin Dunbar

Memories, science, archiving, and authenticity

This is going to be one of my more freewheeling excursions into archiving and memory. I’ll be starting with a movement afoot in the US government to give citizens open access to science research, moving on to a network dedicated to archiving nanoscience- and nanotechnology-oriented information, examining the notion of authenticity in regard to the Tiananmen Square incident of June 4, 1989, and finishing with the Council of Canadian Academies’ Expert Panel on Memory Institutions and the Digital Revolution.

In his June 4, 2013 posting on the Pasco Phronesis blog, David Bruggeman provides an overview of the US Office of Science and Technology Policy’s efforts to introduce open access to science research for citizens (Note: Links have been removed),

Back in February, the Office of Science and Technology Policy (OSTP) issued a memorandum to federal science agencies on public access for research results.  Federal agencies with over $100 million in research funding have until August 22 to submit their access plans to OSTP.  This access includes research publications, metadata on those publications, and underlying research data (in a digital format).

A collection of academic publishers, including the Association of American Publishers and the organization formerly known as the American Association for the Advancement of Science (publisher of Science), has offered a proposal for a publishing industry repository for public access to federally funded research that they publish.

David provides a somewhat caustic perspective on the publishers’ proposal, while Jocelyn Kaiser’s June 4, 2013 article for ScienceInsider describes the proposal in more detail (Note: Links have been removed),

Organized in part by the Association of American Publishers (AAP), which represents many commercial and nonprofit journals, the group calls its project the Clearinghouse for the Open Research of the United States (CHORUS). In a fact sheet that AAP gave to reporters, the publishers describe CHORUS as a “framework” that would “provide a full solution for agencies to comply with the OSTP memo.”

As a starting point, the publishers have begun to index papers by the federal grant numbers that supported the work. That index, called FundRef, debuted in beta form last week. You can search by agency and get a list of papers linked to the journal’s own websites through digital object identifiers (DOIs), widely used ID codes for individual papers. The pilot project involved just a few agencies and publishers, but many more will soon join FundRef, says Fred Dylla, executive director of the American Institute of Physics. (AAAS, which publishes ScienceInsider, is among them and has also signed on to CHORUS.)

The next step is to make the full-text papers freely available after agencies decide on embargo dates, Dylla says. (The OSTP memo suggests 12 months but says that this may need to be adjusted for some fields and journals.) Eventually, the full CHORUS project will also allow searches of the full-text articles. “We will make the corpus available for anybody’s search tool,” says Dylla, who adds that search agreements will be similar to those that publishers already have with Google Scholar and Microsoft Academic Search.

I couldn’t find any mention in Kaiser’s article as to how long the materials would be available. Is this supposed to be an archive as well as a repository? Regardless, I found the beta project, FundRef, a little confusing. The link from the ScienceInsider article takes you to this May 28, 2013 news release,

FundRef, the funder identification service from CrossRef [crossref.org], is now available for publishers to contribute funding data and for retrieval of that information. FundRef is the result of collaboration between funding agencies and publishers that correlates grants and other funding with the scholarly output of that support.

Publishers participating in FundRef add funding data to the bibliographic metadata they already provide to CrossRef for reference linking. FundRef data includes the name of the funder and a grant or award number. Manuscript tracking systems can incorporate a taxonomy of 4000 global funder names, which includes alternate names, aliases, and abbreviations enabling authors to choose from a standard list of funding names. Then the tagged funding data will travel through publishers’ production systems to be stored at CrossRef.

I was hoping that clicking on the FundRef button would take me to a database that I could test or tour. At this point, I wouldn’t have described the project as being at the beta stage (from a user’s perspective) as they are still building it and gathering data. However, there is lots of information on the FundRef webpage including an Additional Resources section featuring a webinar,

Attend an Introduction to FundRef Webinar – Thursday, June 6, 2013 at 11:00 am EDT

You do need to sign up for the webinar. Happily, it is open to international as well as US participants.
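To make the FundRef idea more concrete, here is a minimal sketch, in Python, of the indexing the news release describes: funder names (including aliases and abbreviations) resolved against a standard list, and papers retrievable by funder and award number via their DOIs. This is purely illustrative and not FundRef’s actual schema or API; every name, DOI, and grant number below is invented.

```python
# Illustrative sketch of FundRef-style funding metadata indexing.
# All funder names, DOIs, and award numbers here are hypothetical.
from collections import defaultdict

# A tiny stand-in for the taxonomy of funder names, which maps
# aliases and abbreviations to one canonical funder name.
FUNDER_ALIASES = {
    "National Science Foundation": {"NSF", "National Science Foundation"},
}

def canonical_funder(name):
    """Resolve an alias or abbreviation to its canonical funder name."""
    for canonical, aliases in FUNDER_ALIASES.items():
        if name == canonical or name in aliases:
            return canonical
    return name  # unknown funders pass through unchanged

def build_index(records):
    """Index paper DOIs by (canonical funder, award number)."""
    index = defaultdict(list)
    for record in records:
        for funding in record["funding"]:
            key = (canonical_funder(funding["funder"]), funding["award"])
            index[key].append(record["doi"])
    return index

# Two papers tagged with the same grant under different funder spellings.
records = [
    {"doi": "10.9999/example.001",
     "funding": [{"funder": "NSF", "award": "ABC-1234567"}]},
    {"doi": "10.9999/example.002",
     "funding": [{"funder": "National Science Foundation",
                  "award": "ABC-1234567"}]},
]

index = build_index(records)
# Both papers are found under the canonical funder name and award number.
print(index[("National Science Foundation", "ABC-1234567")])
```

The point of the alias resolution step is the one the release makes: authors pick from a standard list, so differently spelled funder names still land in the same bucket.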

Getting back to my question on whether or not this effort is also an archive of sorts, there is a project closer to home (nanotechnologywise, anyway) that touches on these issues from an unexpected perspective. From the About webpage for Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS),

The Nanoscience and Emerging Technologies in Society: Sharing Research and Learning Tools (NETS) is an IMLS-funded [Institute of Museum and Library Services] project to investigate the development of a disciplinary repository for the Ethical, Legal and Social Implications (ELSI) of nanoscience and emerging technologies research. NETS partners will explore future integration of digital services for researchers studying ethical, legal, and social implications associated with the development of nanotechnology and other emerging technologies.

NETS will investigate digital resources to advance the collection, dissemination, and preservation of this body of research, addressing the challenge of marshaling resources, academic collaborators, appropriately skilled data managers, and digital repository services for large-scale, multi-institutional and disciplinary research projects. The central activity of this project involves a spring 2013 workshop that will gather key researchers in the field and digital librarians together to plan the development of a disciplinary repository of data, curricula, and methodological tools.

Societal dimensions research investigating the impacts of new and emerging technologies in nanoscience is among the largest research programs of its kind in the United States, with an explicit mission to communicate outcomes and insights to the public. By 2015, scholars across the country affiliated with this program will have spent ten years collecting qualitative and quantitative data and developing analytic and methodological tools for examining the human dimensions of nanotechnology. The sharing of data and research tools in this field will foster a new kind of social science inquiry and ensure that the outcomes of research reach public audiences through multiple pathways.

NETS will be holding a stakeholders workshop June 27 – 28, 2013 (invite only), from the workshop description webpage,

What is the value of creating a dedicated Nano ELSI repository?
The benefits of having these data in a shared infrastructure are: the centralization of research and ease of discovery; uniformity of access; standardization of metadata and the description of projects; and facilitation of compliance with funder requirements for data management going forward. Additional benefits of this project will be the expansion of data curation capabilities for data repositories into the nanotechnology domain, and research into the development of disciplinary repositories, for which very little literature exists.

What would a dedicated Nano ELSI repository contain?
Potential materials that need to be curated are both qualitative and quantitative in nature, including:

  • survey instruments, data, and analyses
  • interview transcriptions and analyses
  • images or multimedia
  • reports
  • research papers, books, and their supplemental data
  • curricular materials

What will the Stakeholder Workshop accomplish?
The Stakeholder Workshop aims to bring together the key researchers and digital librarians to draft a detailed project plan for the implementation of a dedicated Nano ELSI repository. The Workshop will be used as a venue to discuss questions such as:

  • How can a repository extend research in this area?
  • What is the best way to collect all the research in this area?
  • What tools would users envision using with this resource?
  • Who should maintain and staff a repository like this?
  • How much would a repository like this cost?
  • How long will it take to implement?

What is expected of Workshop participants?
The workshop will bring together key researchers and digital librarians to discuss the requirements for a dedicated Nano ELSI repository. To inform that discussion, some participants will be requested to present on their current or past research projects and collaborations. In addition, workshop participants will be enlisted to contribute to the draft of the final project report and make recommendations for the implementation plan.

While my proposal did not get accepted (full disclosure), I do look forward to hearing more about the repository although I notice there’s no mention made of archiving the materials.

The importance of repositories and archives was brought home to me when I came across a June 4, 2013 article by Glyn Moody for Techdirt about the Tiananmen Square incident and subtle and unsubtle ways of censoring access to information,

Today is June 4th, a day pretty much like any other day in most parts of the world. But in China, June 4th has a unique significance because of the events that took place in Tiananmen Square on that day in 1989.

Moody recounts some of the ways in which people have attempted to commemorate the day online while evading the authorities’ censorship efforts. Do check out the article for the inside scoop on why ‘Big Yellow Duck’ is a censored term. One of the more subtle censorship efforts provides some chills (from the Moody article),

… according to this article in the Wall Street Journal, it looks like the Chinese authorities are trying out a new tactic for handling this dangerous topic:

On Friday, a China Real Time search for “Tiananmen Incident” did not return the customary message from Sina informing the user that search results could not be displayed due to “relevant laws, regulations and policies.” Instead the search returned results about a separate Tiananmen incident that occurred on Tomb Sweeping Day in 1976, when Beijing residents flooded the area to protest after they were prevented from mourning the recently deceased Premiere [sic] Zhou Enlai.

This business of eliminating and substituting a traumatic and disturbing historical event with something less contentious reminded me both of the saying ‘history is written by the victors’ and of Luciana Duranti and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; that’s me). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO [United Nations Educational, Scientific, and Cultural Organization], “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.
Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

Question

How might memory institutions embrace the opportunities and challenges posed by the changing ways in which Canadians are communicating and working in the digital age?

Sub-questions

With the use of new communication technologies, what types of records are being created and how are decisions being documented?
How is information being safeguarded for usefulness in the immediate to mid-term across technologies considering the major changes that are occurring?
How are memory institutions addressing issues posed by new technologies regarding their traditional roles in assigning value, respecting rights, and assuring authenticity and reliability?
How can memory institutions remain relevant as a trusted source of continuing information by taking advantage of the collaborative opportunities presented by new social media?

From the Expert Panel webpage (go there for all the links), here’s a complete listing of the experts,

Expert Panel on Memory Institutions and the Digital Revolution

Dr. Doug Owram, FRSC, Chair
Professor and Former Deputy Vice-Chancellor and Principal, University of British Columbia Okanagan Campus (Kelowna, BC)

Sebastian Chan     Director of Digital and Emerging Media, Smithsonian Cooper-Hewitt National Design Museum (New York, NY)

C. Colleen Cook     Trenholme Dean of Libraries, McGill University (Montréal, QC)

Luciana Duranti   Chair and Professor of Archival Studies, the School of Library, Archival and Information Studies at the University of British Columbia (Vancouver, BC)

Lesley Ellen Harris     Copyright Lawyer; Consultant, Author, and Educator; Owner, Copyrightlaws.com (Washington, D.C.)

Kate Hennessy     Assistant Professor, Simon Fraser University, School of Interactive Arts and Technology (Surrey, BC)

Kevin Kee     Associate Vice-President Research (Social Sciences and Humanities) and Canada Research Chair in Digital Humanities, Brock University (St. Catharines, ON)

Slavko Manojlovich     Associate University Librarian (Information Technology), Memorial University of Newfoundland (St. John’s, NL)

David Nostbakken     President/CEO of Nostbakken and Nostbakken, Inc. (N + N); Instructor of Strategic Communication and Social Entrepreneurship at the School of Journalism and Communication, Carleton University (Ottawa, ON)

George Oates     Art Director, Stamen Design (San Francisco, CA)

Seamus Ross     Dean and Professor, iSchool, University of Toronto (Toronto, ON)

Bill Waiser, SOM, FRSC     Professor of History and A.S. Morton Distinguished Research Chair, University of Saskatchewan (Saskatoon, SK)

Barry Wellman, FRSC     S.D. Clark Professor, Department of Sociology, University of Toronto (Toronto, ON)

I notice they have a lawyer whose specialty is copyright, Lesley Ellen Harris. I did check out her website, copyrightlaws.com and could not find anything that hinted at any strong opinions on the topic. She seems to feel that copyright is a good thing but how far she’d like to take this is a mystery to me based on the blog postings I viewed.

I’ve also noticed that this panel has 13 people, four of whom are women, which works out to roughly 31% representation. That’s a surprisingly low percentage given how heavily the fields of library and archival studies are weighted towards women.
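Since percentages like this are easy to fumble (I know from experience), here is the arithmetic spelled out:

```python
# Representation on the expert panel: 4 women out of 13 members.
women, total = 4, 13
share = 100 * women / total
print(round(share, 1))  # 30.8
```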

I have meandered somewhat but my key points are these:

  • How are we going to keep information available? It’s all very well to have a repository, but how long will the data be kept in the repository and where does it go afterwards?
  • There’s certainly a bias, with the NETS workshop and, likely, the CCA Expert Panel on Memory Institutions and the Digital Revolution, toward institutions as the source for information that’s worth keeping for however long or short a time that should be. What about individual efforts? e.g. Don’t Leave Canada Behind; FrogHeart; Techdirt; The Last Word on Nothing, and many other blogs?
  • The online redirection of Tiananmen Square incident queries is chilling, but I’ve often wondered what would happen if someone wanted to remove ‘objectionable material’ from an e-book, e.g. To Kill a Mockingbird. A new reader wouldn’t notice the loss if the material had been excised in a subtle or professional fashion.

As for how this has an impact on science, it’s been claimed that Isaac Newton attempted to excise Robert Hooke from history (my Jan. 19, 2012 posting). Whether it’s true or not, there is remarkably little about Robert Hooke despite his accomplishments, and his relative obscurity is a reminder that we must always take care that we retain our memories.

ETA June 6, 2013: David Bruggeman added some more information links about CHORUS in his June 5, 2013 post (On The Novelty Of Corporate-Government Partnership In STEM Education),

Before I dive into today’s post, a brief word about CHORUS. Thanks to commenter Joe Kraus for pointing me to this Inside Higher Ed post, which includes a link to the fact sheet CHORUS organizers distributed to reporters. While there are additional details, there are still not many details to sink one’s teeth in. And I remain surprised at the relative lack of attention the announcement has received. On a related note, nobody who’s been following open access should be surprised by Michael Eisen’s reaction to CHORUS.

I encourage you to check out David’s post as he provides some information about a new STEM (science, technology, engineering, mathematics) collaboration between the US National Science Foundation and companies such as GE and Intel.

Rainbows, what are we going to do with them?

The title is attention-getting at first but quickly leads to confusion for anyone not familiar with plasmonics: “Trapping a rainbow: Researchers slow broadband light waves with plasmonic structures.” I have to confess to being more interested in the use of the metaphor than I am in the science. However, in deference to any readers who are more taken by the science, here’s more from the March 14, 2011 news item on Nanowerk,

A team of electrical engineers and chemists at Lehigh University have experimentally verified the “rainbow” trapping effect, demonstrating that plasmonic structures can slow down light waves over a broad range of wavelengths.

The idea that a rainbow of broadband light could be slowed down or stopped using plasmonic structures has only recently been predicted in theoretical studies of metamaterials. The Lehigh experiment employed focused ion beams to mill a series of increasingly deeper, nanosized grooves into a thin sheet of silver. By focusing light along this plasmonic structure, this series of grooves or nano-gratings slowed each wavelength of optical light, essentially capturing each individual color of the visible spectrum at different points along the grating. The findings hold promise for improved data storage, optical data processing, solar cells, bio sensors and other technologies.

While the notion of slowing light or trapping a rainbow sounds like ad speak, finding practical ways to control photons—the particles that make up light—could significantly improve the capacity of data storage systems and speed the processing of optical data.

The research required the ability to engineer a metallic surface to produce nanoscale periodic gratings with varying groove depths. This alters the optical properties of the nanopatterned metallic surface, called Surface Dispersion Engineering. The broadband surface light waves are then trapped along this plasmonic metallic surface with each wavelength trapped at a different groove depth, resulting in a trapped rainbow of light.

You can get still more scientific detail in the item, but I found a later April 12, 2011 news item, also on Nanowerk, where the researcher Qiaoquiang Gan gave this description of his work,

An electrical engineer at the University at Buffalo, who previously demonstrated experimentally the “rainbow trapping effect” [emphasis mine] — a phenomenon that could boost optical data storage and communications — is now working to capture all the colors of the rainbow.

In a paper published March 29 in the Proceedings of the National Academy of Sciences, Qiaoquiang Gan (pronounced “Chow-Chung” and “Gone”), PhD, an assistant professor of electrical engineering at the University at Buffalo’s School of Engineering and Applied Sciences, and his colleagues at Lehigh University, where he was a graduate student, described how they slowed broadband light waves using a type of material called nanoplasmonic structures.

Gan explains that the ultimate goal is to achieve a breakthrough in optical communications called multiplexed, multiwavelength communications, where optical data can potentially be tamed at different wavelengths, thus greatly increasing processing and transmission capacity.

“Light is usually very fast, but the structures I created can slow broadband light significantly,” says Gan. “It’s as though I can hold [emphasis mine] the light in my hand.”

I like the notion of ‘holding’ a rainbow better than ‘trapping’ one. (ETA April 18, 2011: The original sentence, now placed at the end of this posting, has been replaced with this: There’s a big difference between the two verbs, trapping and holding, and each implies a different relationship to the object. Which would you prefer, to be trapped or to be held? What does it mean to the one who does the trapping or the holding? Two different relationships to the object and to the role of a scientist are implied.)

It’s believed that the metaphors we use when describing science have a powerful impact on how science is viewed and practiced. One example I have at hand is a study by Kevin Dunbar mentioned in my Jan. 4, 2010 posting (scroll down) where he illustrates how scientists use metaphors to achieve scientific breakthroughs. Logically, if metaphors help us achieve breakthroughs, then they are quite capable of constraining us as well.

Meanwhile, this gives me an excuse to include this video of a Hawaiian singer, Israel Kamakawiwo’ole and his extraordinary version of Somewhere over the Rainbow. Happy Weekend!

The original (April 15, 2011) sentence:
It’s more gentle and implies a more humble attitude and I suspect it would ultimately prove more fruitful.

Quantum kind of day: metaphors, language and nanotechnology

I had a bonanza day on the Nanowerk website yesterday as I picked up three items, all of which featured the word ‘quantum’ in the title and some kind of word play or metaphor.

From the news item, Quantum dots go with the flow,

Quantum dots may be small. But they usually don’t let anyone push them around. Now, however, JQI [Joint Quantum Institute] Fellow Edo Waks and colleagues have devised a self-adjusting remote-control system that can place a dot 6 nanometers long to within 45 nm of any desired location. That’s the equivalent of picking up golf balls around a living room and putting them on a coffee table – automatically, from 100 miles away.

There’s a lot of detail in this item which gives you more insight (although the golf ball analogy does that job very well) into just how difficult it is to move a quantum dot and some of the problems that had to be solved.

Next, A quantum leap for cryptography,

To create random number lists for encryption purposes, cryptographers usually use mathematical algorithms called ‘pseudo random number generators’. But these are never entirely ‘random’ as the creators cannot be certain that any sequence of numbers isn’t predictable in some way.

Now a team of experimental physicists has made a breakthrough in random number generation by applying the principles of quantum mechanics to produce a string of numbers that is truly random.

‘Classical physics simply does not permit genuine randomness in the strict sense,’ explained research team leader Chris Monroe from the Joint Quantum Institute (JQI) at the University of Maryland in the US. ‘That is, the outcome of any classical physical process can ultimately be determined with enough information about initial conditions. Only quantum processes can be truly random — and even then, we must trust the device is indeed quantum and has no remnant of classical physics in it.’
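Monroe’s point about classical determinism is easy to demonstrate with any standard pseudo-random number generator: feed it the same seed (the same “initial conditions”) and it reproduces an identical sequence. A quick Python sketch, using the standard library’s Mersenne Twister generator:

```python
# Pseudo-random number generators are deterministic: given the same seed
# ("initial conditions"), they reproduce exactly the same sequence.
# Only a genuinely quantum process escapes this predictability.
import random

def pseudo_random_sequence(seed, n=5):
    """Return n draws from Python's Mersenne Twister PRNG for a given seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

first = pseudo_random_sequence(42)
second = pseudo_random_sequence(42)

# Identical seeds yield identical "random" numbers.
print(first == second)  # True
```

Anyone who knows the seed can predict every number that follows, which is exactly why cryptographers care about the quantum alternative.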

This is a drier piece (I suspect that’s due to the project itself) so the language or word play is in the headline. I immediately thought of a US TV series titled Quantum Leap where, for five seasons, a scientist’s personality/intellect/spirit leaps into people’s bodies, randomly through time. There are, according to Wikipedia, two other associations, a scientific phenomenon and a 1980s-era computer. You can go here to pursue links for the other two associations. This is very clever in that you don’t need to have any associations to understand the base concept in the headline, but having one or more associations adds a level or more of engagement.

The final item, Scientists climb the quantum ladder,

An EU [European Union]-funded team of scientists from Cardiff University in the UK has successfully fired photons (light particles) into a small tower of semiconducting material. The work could eventually lead to the development of faster computers. …

The scientists, from the university’s School of Physics and Astronomy, said a photon collides with an electron confined in a smaller structure within the tower. Before the light particles re-emerge, they oscillate for a short time between the states of light and matter.

While I find this business of particles oscillating between two different states, light and matter, quite fascinating, this particular language play is the least successful. I think most people will do what I did and miss the relationship between the ‘tower’ in the news item’s first paragraph and the ‘ladder’ in the headline. I cannot find any other attempt to play with either linguistic image elsewhere in the item.

Given that I’m a writer, I’m going to argue that analogies, metaphors, and word play are essential when trying to explain concepts to audiences that may not have your expertise, and that audience can include other scientists. Here’s an earlier posting about some work by a cognitive psychologist, Kevin Dunbar, who investigates how scientists think and communicate.

Carbon nanotubes the natural way; weaving carbon nanotubes into heaters; how designers think; robotic skin

Today I’ll be focusing, in a very mild way, on carbon nanotubes. First, in a paper in Astrophysical Journal Letters (Feb. 2010 issue) titled, The Formation of Graphite Whiskers in the Primitive Solar Nebula, an international team of scientists has shared an intriguing discovery about carbon nanotubes. From the news item on physorg.com,

Space apparently has its own recipe for making carbon nanotubes, one of the most intriguing contributions of nanotechnology here on Earth, and metals are conspicuously missing from the list of ingredients.

[Joseph] Nuth’s team [based at NASA’s Goddard Space Flight Center] describes the modest chemical reaction. Unlike current methods for producing carbon nanotubes—tiny yet strong structures with a range of applications in electronics and, ultimately, perhaps even medicine—the new approach does not need the aid of a metal catalyst. “Instead, nanotubes were produced when graphite dust particles were exposed to a mixture of carbon monoxide and hydrogen gases,” explains Nuth.

The structure of the carbon nanotubes produced in these experiments was determined by Yuki Kimura, a materials scientist at Tohoku University, Japan, who examined the samples under a powerful transmission electron microscope. He saw particles on which the original smooth graphite gradually morphed into an unstructured region and finally to an area rich in tangled hair-like masses. A closer look with an even more powerful microscope showed that these tendrils were in fact cup-stacked carbon nanotubes, which resemble a stack of Styrofoam cups with the bottoms cut out.

Since metals are used as catalysts for creating carbon nanotubes, this discovery hints at the possibility of a ‘greener’ process. In conjunction with the development at McGill (mentioned on this blog here) for making chemical reactions greener by using new nonmetallic catalysts, there may be some positive environmental impacts due to nanotechnology.

Meanwhile here on earth, there’s another new carbon nanotube development and this time it has to do with the material’s conductivity. From the news item on Nanowerk,

An interesting development using multifilament yarns is a new fabric heater made by weaving CNTEC® conductive yarns from Kuraray Living Co., Ltd. This fabric generates heat homogeneously all over the surface because of its outstanding conductivity and is supposed to be the first commercial use of Baytubes® CNTs from Bayer MaterialScience in the Japanese market.

The fabric heater is lightweight and thin, compact and shows a long-lasting bending resistance. It can be used for instance for car seats, household electrical appliances, for heating of clothes and as an anti-freezing material. Tests revealed that it may for example be installed in the water storage tank of JR Hokkaido’s “Ryuhyo-Norokko” train. Inside this train the temperature drops to around -20 °C in wintertime, because so far no heating devices other than potbelly stoves are available. According to JR Hokkaido railway company the fabric heater performed well in preventing the water from freezing. A seat heating application of the fabric heater is still on trial on another JR Hokkaido train line. It is anticipated that the aqueous dispersions might as well be suitable for the compounding of various kinds of materials.

I sometimes suspect that these kinds of nanotechnology-enabled applications are going to change the world in such a fashion that our descendants (assuming we survive disasters) will be able to understand us only dimly. The closest analogy I have is with Chaucer. An English-speaker trying to read The Canterbury Tales in the language Chaucer wrote in, Middle English, needs to learn an unfamiliar language.

On a completely different topic, Cliff Kuang at Fast Company has written an item on designers and the Myers-Briggs personality test (the data are on industrial designer Michael Roller’s website),

Designers love to debate about what personality type makes for the best designer. So Michael Roller took the extra step of getting a bunch of designers to take the Myers Briggs personality test, and published the results …

In other words, designers are less akin to the stereotypical touchy-feely artist, and more like engineers who always keep the big picture in mind.

This reminds me of a piece I wrote up on Kevin Dunbar (here) and his work investigating how scientists think. He came to the conclusion that when they use metaphors and analogies to describe their work to scientists in specialties not identical to their own, new insights and breakthroughs can occur. (Note: he takes a more nuanced approach than I’m able to use in a single, descriptive sentence.) What strikes me is that scientists often need to take a more ‘artistic and intuitive’ [my words] approach to convey information if they are to experience true breakthroughs.

My last bit is an item about more tactile robotic skin. From the news item on the Azonano website,

Peratech Limited, the leader in new materials designed for touch technology solutions, has announced that they have been commissioned by the MIT Media Lab to develop a new type of electronic ‘skin’ that enables robotic devices to detect not only that they have been touched but also where and how hard the touch was.

The key to the sensing technology is Peratech’s patented ‘QTC’ materials. QTC’s, or Quantum Tunnelling Composites, are a unique new material type which provides a measured response to force and/or touch by changing its electrical resistance – much as a dimmer light switch controls a light bulb. This enables a simple electronic circuit within the robot to determine touch. Being easily formed into unique shapes – including being ‘draped’ over an object much like a garment might, QTC’s provide a metaphor [emphasis mine] for how human skin works to detect touch.
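The ‘dimmer switch’ analogy in the quote above can be made concrete with a toy calculation. The sketch below is purely illustrative — the article doesn’t describe Peratech’s actual circuit, so the supply voltage, resistor values, and the QTC response curve here are all my own assumptions — but it shows the general idea: a force-dependent resistance, read through a simple voltage divider, lets a circuit decide both that a touch happened and roughly how hard it was.

```python
# Illustrative model of a force-sensitive resistor read via a voltage
# divider. All constants are invented for the example; a real QTC
# material's response curve would come from the manufacturer's data.

V_SUPPLY = 5.0      # supply voltage in volts (assumed)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (assumed)

def qtc_resistance(force_newtons: float) -> float:
    """Hypothetical QTC response: very high resistance when untouched,
    dropping steeply as force increases."""
    R_UNLOADED = 1e7  # resistance with no touch, in ohms (assumed)
    K = 5e4           # sensitivity constant (assumed)
    return R_UNLOADED / (1.0 + K * force_newtons)

def divider_voltage(force_newtons: float) -> float:
    """Voltage across the fixed resistor -- what an ADC would sample."""
    r_qtc = qtc_resistance(force_newtons)
    return V_SUPPLY * R_FIXED / (R_FIXED + r_qtc)

def touched(force_newtons: float, threshold_volts: float = 1.0) -> bool:
    """Register a touch when the sampled voltage crosses a threshold."""
    return divider_voltage(force_newtons) > threshold_volts

if __name__ == "__main__":
    for f in (0.0, 0.1, 1.0, 10.0):
        print(f"force {f:5.1f} N -> {divider_voltage(f):.3f} V, "
              f"touched={touched(f)}")
```

With no touch the divider output sits near zero volts; even a light press swings it toward the supply voltage, and the in-between values give the “measured response to force” the quote describes. Locating *where* the touch occurred would, presumably, come from reading an array of such sensing points rather than a single one.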

Yes, I found another reference to metaphors although this metaphor is being used to convey information to a nontechnical audience. As for the ‘graphite whiskers’ in the title of the paper that opened this posting, it is another metaphor and here, I suspect, it’s being used to describe something to other scientists whose specialties are not identical to the researchers’ (as per Kevin Dunbar’s work).

More than the “Emperor’s New Clothes” insight

Happy 2010 to all! I’ve taken some time out as I have moved locations and it’s taken longer to settle down than I hoped. (sigh) I still have loads to do but can get back to posting regularly (I hope).

New Year’s Eve I came across a very interesting article about how scientists think thanks to a reference on the Foresight Institute website. The article, Accept Defeat: The Neuroscience of Screwing Up, by Jonah Lehrer for Wired Magazine uses a story about a couple of astronomers and their investigative frustrations to illustrate research on how scientists (and the rest of us, as it turns out) think.

Before going on about the article, I’m going to arbitrarily divide beliefs about scientific thinking/processes into two schools. In the first, there’s the scientific method with its belief in objectivity and incontrovertible truths waiting to be discovered and validated. Later, in university, I was introduced to the second school of thought, with its notion that scientific facts are social creations and that objectivity does not exist. From the outside it appears that scientists tend to belong to the first school and social scientists to the second but, as the Wired article points out, things are a little more amorphous than that when you dig down into the neuroscience of it all.

From the article,

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists [and other social scientists] have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

The DLPFC [dorsolateral prefrontal cortex] is constantly censoring the world, erasing facts from our experience. If the ACC [anterior cingulate cortex, typically associated with errors and contradictions] is the “Oh shit!” circuit, the DLPFC is the Delete key. When the ACC and DLPFC “turn on together, people aren’t just noticing that something doesn’t look right,” [Kevin] Dunbar says. “They’re also inhibiting that information.”

Disregarding evidence is something I’ve noticed (in others more easily than in myself) and have wondered about the implications. As noted in the article, ignoring scientific failure stymies research and ultimately more effective applications for the research. For example, there’s been a lot of interest in a new surgical procedure (still being tested) for patients with multiple sclerosis (MS). The procedure was developed by an Italian surgeon who (after his wife was stricken with the disease) reviewed literature on the disease going back 100 years and found a line of research that wasn’t being pursued actively and was a radical departure from current accepted beliefs about the nature of MS. (You can read more about the MS work here in the Globe and Mail story or here in the CBC story.) Btw, there are a couple of happy endings. The surgeon’s wife is much better and a promising new procedure is being examined.

Innovation and new research can be so difficult to pursue it’s amazing that anyone ever succeeds. Kevin Dunbar, the researcher mentioned previously, arrived at a rather interesting conclusion in his investigation of how scientists think and how they get around the ACC/DLPFC action: other people. He tells a story about two lab groups who each had a meeting,

Dunbar watched how each of these labs dealt with their protein problem. The E. coli group took a brute-force approach, spending several weeks methodically testing various fixes. “It was extremely inefficient,” Dunbar says. “They eventually solved it, but they wasted a lot of valuable time.”

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”

When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies [my emphasis] to express themselves. (That’s because, unlike the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.

As Dunbar notes, we usually need more than an outsider to experience a Eureka moment (the story about the Italian surgeon notwithstanding, and it should be noted that he was an MS outsider); we need metaphors and analogies. (I’ve taken it a bit further than Dunbar likely would but I am a writer, after all.)

If you are interested in Dunbar’s work, he’s at the University of Toronto with more information here.