Category Archives: writing

Margaret Atwood talks about technology and creativity

Joe Berkowitz has written an Oct. 29, 2015 article for Fast Company about Margaret Atwood, creativity, technology, and dystopias (I gather Ms. Atwood is doing publicity in aid of her new book, ‘The Heart Goes Last’; Note: Links have been removed),

In the latest thought-provoking, dystopian parable from noted words-genius, Margaret Atwood, society is experimenting with becoming a prison. The entire population of the unsettling community of Positron, as depicted in The Heart Goes Last, spends half the time as prisoners and half the time as guards. It does not go great. Considering that the story also involves sex-robots and other misfit gadgetry, the central premise serves as an apt metaphor for our occasionally adversarial relationship with technology. …

… One element of this symbiotic relationship Margaret Atwood is especially interested in, though, is the impact new technology has on creativity. The paradigm-shifting author doesn’t merely write about the future, she has also helped bring about changes to how we write in the future. As the creator of the LongPen, she’s made it so that authors can sign books from great distances; and as the first contributor to the Future Library project, she’s become a pioneer of writing novels intended strictly for later generations to read. A master at building future worlds in fiction, Atwood is also doing so in reality.

She has some things to say about the cloud and how the medium shapes the message (thank you, Marshall McLuhan),

… Being a selective early adopter means communicating with the tools one feels comfortable with, and avoiding others.

“I don’t trust the cloud,” she says. “Everybody knows that Moscow has gone back over to typewriters. Anything on the internet potentially leaks like a sieve. So we are currently exchanging scripts by FedEx because we don’t want them to be leaked. Anything you absolutely do not want to be leaked, unless you were a master of hackery and disguise, you should transfer and store some other way, especially since Mr. Snowden and what we know. …

“Any new technology or platform or medium is going to influence to a certain extent the shape of what gets put out there,” Atwood says. “On the other hand, human storytelling is very, very old. To a certain extent, technology shapes the bite-size of how you’re sending it into the world. For instance, people put writing on their phone in short chapters. So Proust would not have done well with that. We develop short forms because we’re limited in characters but we did that with the telegram. ‘6:15 Paddington, bring gun, Sherlock Holmes.’ Or better, ‘Holmes,’ actually.”

The last time I mentioned Margaret Atwood here was in regard to ‘Canadianness’ in my March 6, 2015 posting, where I noted that Atwood is sometimes taken for an American or British author because her status as a Canadian is often omitted from articles about her.

Finally, Marshall McLuhan was a noted Canadian communications theorist who achieved pop-culture prominence during the 1960s and ’70s with, among other phrases, ‘The medium is the message.’

Arts@CERN welcomes a new artist-in-residence (Semiconductor) and opens calls for new artist residencies

It’s exciting to hear that CERN (European Particle Physics Laboratory) has open calls for artists, but the details are a little complicated, so read carefully. From an Oct. 12, 2015 CERN press release,

CERN[1] has today announced three new open calls giving a chance to artists to immerse themselves in the research of particle physics and its community. Two new international partners have joined the Accelerate @ CERN programme: the Abu Dhabi Music & Arts Foundation (ADMAF) from UAE[2] and Rupert, the centre for Art and Education from Vilnius, Lithuania[3]. The Collide @ CERN Geneva award is also now calling for entries, continuing the fruitful collaboration with The Republic and Canton of Geneva and the City of Geneva[4]. Last but not least, the Collide @ CERN Ars Electronica winning artists start their residency at CERN this week.

“Science and the arts are essential parts of a vibrant, healthy culture, and the Arts @ CERN programme is bringing them closer together,” said CERN DG Rolf Heuer. “With CERN’s diverse research programme, including the LHC’s second run getting underway, there’s no better place in the world to do that than here.”

With the support of The Abu Dhabi Music & Arts Foundation (ADMAF), Arts @ CERN gives the chance for an Emirati visual artist to come to CERN for a fully funded immersion in high-energy physics in the Accelerate @ CERN programme. Thanks to the support by Rupert, Centre for Art and Education in Vilnius, the same door opens to Lithuanian artists who wish to deepen their knowledge in science and use it as a source of inspiration for their work. Each of the two open calls begins today for artists to win a one-month research stay at CERN. Applications can be submitted up to 11 January 2016.

Funded by The Republic and Canton of Geneva and The City of Geneva, Collide @ CERN Geneva has operated successfully since 2012. Arts @ CERN announces the fourth open call for artists from Geneva, this time celebrating the city’s strength in digital writing. Today, the competition opens to writers [emphasis mine] who were born, live or work in the Geneva region, and would like to win a three-month residency where scientific and artistic creativity collide.  The winner will also receive a stipend of 15,000CHF. The deadline for applications is 11 January 2016.

“Arts and science have always been interlinked as major cultural forces, and this is the fundamental reason for CERN to continue to proactively pursue this relationship,” said Mónica Bello, Head of Arts @ CERN. “The arts programme here continues to flourish.”

Semiconductor, the artist duo formed by Ruth Jarman and Joe Gerhardt, are the winners of the Collide @ CERN Ars Electronica award[5]. Out of 161 projects from 53 countries, the jury[6] awarded Semiconductor for their broad sense of speculation, complexity and wonder, using strategies of analysis and translation of the phenomena into tangible and beautiful forms. Their two-month Collide @ CERN residency starts on 12 October 2015.


1. CERN, the European Organization for Nuclear Research, is the world’s leading laboratory for particle physics. It has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Israel, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom. Romania is a Candidate for Accession. Serbia is an Associate Member in the pre-stage to Membership. Pakistan and Turkey are Associate Members. India, Japan, the Russian Federation, the United States of America, the European Union, JINR and UNESCO have observer status.

6. Members of the Jury for Collide @ CERN Ars Electronica were: Mónica Bello (ES), Michael Doser (AT), Horst Hörtner (AT), Gerfried Stocker (AT) and Mike Stubbs (UK).

Here are a few more links,

Online submissions for artists

Further information:

Arts@CERN website
Accelerate@CERN website
Collide@CERN Facebook site
Twitter: ArtsAtCern

Good luck!

More about MUSE, a Canadian company and its brain-sensing headband; women and startups; Canadianness

I first wrote about Ariel Garten and her Toronto-based Canadian company, InteraXon, in a Dec. 5, 2012 posting where I featured a product, MUSE (Muse), then described as a brainwave controller. A March 5, 2015 article by Lydia Dishman for Fast Company provides an update on the product, now described as a brainwave-sensing headband, and on the company (Note: Links have been removed),

The technology that had captured the imagination of millions was then incorporated to develop a headband called Muse. It sells at retail stores like BestBuy for about $300 and works in conjunction with an app called Calm as a tool to increase focus and reduce stress.

If you always wanted to learn to meditate without those pesky distracting thoughts commandeering your mind, Muse can help by taking you through a brief exercise that translates brainwaves into the sound of wind. Losing focus or getting antsy brings on the gales. Achieving calm rewards you with a flock of birds across your screen.

The company has grown to 50 employees and has raised close to $10 million from investors including Ashton Kutcher. Garten [Ariel Garten, founder and Chief Executive Officer] says they’re about to close on a Series B round, “which will be significant.”

She says that listening plays an important role at InteraXon. Reflecting back on what you think you heard is an exercise she encourages, especially in meetings. When the development team is building a tool, for example, they use their Muses to meditate and focus, which then allows for listening more attentively and nonjudgmentally.
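Before moving on to what Dishman has to say about gender, a quick technical aside: the feedback loop described in the excerpts above (an EEG-derived measure of calm driving ambient sound) can be sketched in a few lines of code. What follows is purely illustrative pseudo-logic on my part; the scoring formula, thresholds, and function names are invented for the example and are not InteraXon’s actual algorithm.

```python
# Illustrative sketch only: maps an EEG-derived "calm" score to the kind of
# audio feedback Dishman describes (wind when distracted, birds when calm).
# The scoring formula, thresholds, and names are invented for this example;
# they are not InteraXon's actual algorithm.

def calm_score(alpha_power: float, beta_power: float) -> float:
    """Return a 0..1 calm estimate from relative EEG band power (toy formula)."""
    total = alpha_power + beta_power
    if total == 0:
        return 0.5  # no signal; treat as neutral
    return alpha_power / total  # more alpha relative to beta ~ calmer

def feedback_for(score: float) -> str:
    """Choose the ambient feedback for the current calm score."""
    if score > 0.75:
        return "birds"          # reward sustained calm
    if score > 0.45:
        return "light breeze"   # neutral state
    return "strong wind"        # losing focus brings on the gales

if __name__ == "__main__":
    # Simulated readings over a short session
    for alpha, beta in [(0.2, 0.8), (0.5, 0.5), (0.9, 0.1)]:
        s = calm_score(alpha, beta)
        print(f"calm={s:.2f} -> {feedback_for(s)}")
```

The real headband obviously involves far richer signal processing, but the pattern (sensor score in, graded feedback out) is the core of the experience Dishman describes.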

Women and startups

Dishman references gender and high tech financing in her article about Garten,

Garten doesn’t dwell on her status as a woman in a mostly male-dominated sector. That goes for securing funding for the startup too, despite the notorious bias venture-capital investors have against women startup founders.

“I am sure I lost deals because I am a woman, but also because the idea didn’t resonate,” she says, adding, “I’m sure I gained some because I am a woman, so it is unfair to put a blanket statement on it.”

Yet Garten is the only female member of her C-suite, something she says “is just the way it happened.” Casting the net recently to fill the role of chief operating officer [COO], Garten says there weren’t any women in the running, in part because the position required hardware experience as well as knowledge of working with the Chinese.

She did just hire a woman to be senior vice president of sales and marketing, and says, “When we are hiring younger staff, we are gender agnostic.”

I can understand wanting to introduce nuance into the ‘gender bias and tech startup’ discussion by noting that some rejections could have been due to issues with the idea or its implementation. But the comment about being the only female member of her C-suite as “just the way it happened” suggests she is extraordinarily naïve or willfully blind. Given her followup statement about her hiring practices, I’m inclined to go with willfully blind. It’s hard to believe she couldn’t find any woman with both hardware experience and experience working with the Chinese. It seems more likely she needed a male COO to counterbalance a company with a female CEO. As for being gender agnostic where younger staff are concerned, that’s nice but it’s not reassuring: women have long been able to get the more junior positions. It’s the senior positions such as COO which remain out of reach and, troublingly, Garten seems to have blown off the question with a weak explanation and a glib assurance of equality at the lower levels of the company.

For more about gender, high tech companies, and hiring/promoting practices, you can read a March 5, 2015 article titled, Ellen Pao Trial Reveals the Subtle Sexism of Silicon Valley, by Amanda Marcotte for Slate.

Getting back to MUSE, you can find out more about the headband here and more about InteraXon here. Unusually, there doesn’t seem to be any information about the management team on the website.


I thought it was interesting that InteraXon’s status as a Canada-based company was mentioned nowhere in Dishman’s article. This is in stark contrast to Nancy Owano’s Dec. 5, 2012 article,

A Canadian company is talking about having a window, aka computer screen, into your mind. … InteraXon, a Canadian company, is focused on making a business out of mind-control technology via a headband device, and they are planning to launch this as a $199 brainwave computer controller called Muse. … [emphases mine]

This is not the only recent instance I’ve noticed. My Sept. 1, 2014 posting mentions what was then an upcoming Margaret Atwood event at Arizona State University,

… (from the center’s home page [Note: The center is ASU’s Center for Science and the Imagination]),

Internationally renowned novelist and environmental activist Margaret Atwood will visit Arizona State University this November [2014] to discuss the relationship between art and science, and the importance of creative writing and imagination for addressing social and environmental challenges.

Atwood’s visit will mark the launch of the Imagination and Climate Futures Initiative … Atwood, author of the MaddAddam trilogy of novels that have become central to the emerging literary genre of climate fiction, or “CliFi,” will offer the inaugural lecture for the initiative on Nov. 5.

“We are proud to welcome Margaret Atwood, one of the world’s most celebrated living writers, to ASU and engage her in these discussions around climate, science and creative writing,” …  “A poet, novelist, literary critic and essayist, Ms. Atwood epitomizes the creative and professional excellence our students aspire to achieve.”

There’s not a single mention that she is Canadian there or in a recent posting by Martin Robbins about a word purge from the Oxford Junior Dictionary published by the Guardian science blog network (March 3, 2015 posting). In fact, Atwood was initially described by Robbins as one of Britain’s literary giants. I assume there were howls of anguish once Canadians woke up to read the article since the phrase was later amended to “a number of the Anglosphere’s literary giants.”

The omission of InteraXon’s Canadianness in Dishman’s article for an American online magazine, the omission of Atwood’s Canadianness on the Arizona State University website, and Martin Robbins’ initial appropriation and later change to the vague-sounding “Anglosphere” in his post for the British newspaper, The Guardian, mean the bulk of their readers will likely assume InteraXon is American and that Margaret Atwood, depending on where you read about her, is either an American or a Brit.

It’s flattering that others want to grab a little bit of Canada for themselves.

Coda: The Oxford Junior Dictionary and its excision of ‘nature’ words


Robbins’ March 3, 2015 posting focused on a heated literary discussion about the excision of these words from the Oxford Junior Dictionary (Note:  A link has been removed),

“The deletions,” according to Robert Macfarlane in another article on Friday, “included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail.”

I’m surprised the ‘junior’ dictionary didn’t have “attachment,” “celebrity,” and “committee” prior to the 2007 purge. By the way, it seems no one noticed the purge till recently. Robbins has an interesting take on the issue, one with which I do not entirely agree. I understand needing to purge words, but what happens when a child reading a classic such as ‘The Wind in the Willows’ attempts to look up the word ‘willows’? (Thanks to Susan Baxter who, in a private communication, pointed out the problems inherent in reading new and/or classic books and not being able to find basic vocabulary.)

India and a National Seminar on Literature in the Emerging Contexts of Technology and Culture

I recently got a notice about an intriguing national seminar being held at Punjabi University (India). From a Dec. 12, 2014 notice,

The Department of English is pleased to invite you to the National Seminar on Literature in the Emerging Contexts of Technology and Culture being held on February 25 and 26, 2015.

There is an old, almost primal, bond between writing and technology. From the earliest tools of writing—probably a sharp-edged stone—to the stylus pen, from the clay tablet to the capacitive touch screen, this bond has proclaimed itself with all the force of technology’s materiality. However, the relatively rapid emergence and acceptance of the digital writing environment has foregrounded with unprecedented clarity how command and control are always already embedded in communication. Moreover, in the specific sphere of literary production, the opaqueness of creativity stands further complicated with the entry of the programmer, often in the very person of the writer. At the other end, reading struggles to break free from the constraints of both the verbal and the linear as it goes multimedia and hypertextual, making fresh demands upon the human sensorium. The result is that the received narratives of literary history face radical interruptions.

While cultures enfold and shape literatures and technologies, it must be admitted that they are also articulated and shaped by the latter. Technology in particular has advanced and proliferated so much in the last three decades that it has come to be regarded as a culture in its own right. It has come to acquire, particularly since the early decades of the twentieth century, a presence and authority it never really possessed before. With prosthetics, simulation and remote-sensing, for instance, it has brought within the horizon of realization the human aspiration for self-overcoming. Yet in spite of its numerous enabling, even liberating, tools, technology has also often tended to close off several modes of cognition and perception. While most of us would like to believe that we use technology, it is no less true that technology also uses us. Heidegger correctly warned of the potential, inherent in modern technology, to reduce the human beings to its resources and reserves. He also alerted us to its elusive ways, particularly the way it resists being thought and pre-empts any attempts to think beyond itself, thereby instituting itself as the exclusive horizon of thinking. Paradoxically, like a literary text or like thought itself, technology may have some chinks, certain gaps or spaces, through which it may be glimpsed against its larger, imposing tendencies.

The ostensible self-sufficiency and plenitude of the technological, as of the cultural, can be questioned and their nature examined probably most productively from a space which is structured self-reflexively, that is from the space of the literary. At the same time, the implications of the technological turn, especially in its digital avatar, for literature, as also for culture, demand thinking.

The proposed seminar will be an opportunity to reflect on these and related issues, with which a whole galaxy of thinkers have engaged — from Walter Benjamin, Martin Heidegger, Raymond Williams and Jean Baudrillard to Donna Haraway, George Landow, Lev Manovich, Bernard Stiegler, Katherine Hayles, Henry Jenkins, Hubert Dreyfus, Marie-Laure Ryan, the Krokers, Manuel Castells, Friedrich Kittler, Jay David Bolter, Manuel De Landa, Nick Montfort, Noah Wardrip-Fruin and others. Among the areas on which papers/presentations for the seminar are expected are:

  • The Work of Literature/Art in the Digital Age
  • Cultures of Technology and Technologies of Culture
  • Resistance and Appropriation Online: Strategies and Subterfuges
  • Global Capitalism and Cyberspace
  • Posthumanist Culture and Its Literatures
  • Digital Humanities and the Literary Text
  • Reconsidering Literature: Between Technology and Theory
  • Virtuality and/as Fiction
  • Plotting the Mutating Networks: The Logics of Contingency
  • Writing Technologies and Literature
  • Reading Literature in the Digital Age
  • Literature and Gaming
  • After the Death of the Author: The Posthuman Authority
  • Cyberpunk Writing
  • Teaching Literature in the Post-Gutenberg Classroom

Submission of abstracts: By 20 January 2015
Submission of papers: By 10 February 2015
Registration Fee: Rs. 1000/- (Rs. 500 for Research Scholars/Students)

All submissions must be made through email to and/or

Lodging and hospitality shall be provided by the University to all outstation resource persons and, subject to availability, to paper presenters. In view of financial constraints, it may not be possible to reimburse travel expenses to all paper presenters.

Rajesh Sharma
Seminar Director
Professor and Head
Department of English
783 796 0942
0175-304 6246

Jaspreet Mander
Associate Professor of English
Seminar Coordinator
941 792 3373

I couldn’t agree with the sentiments more, applaud the organizers’ ambitious scope, and wish them the best!

PS: There is a Canada/India/Southeast Asia project, Cosmopolitanism and the Local in Science and Nature: Creating an East/West Partnership, that’s starting up soon as per my Dec. 12, 2014 post and this seminar would seem like an opportunity for those academics to reach out. Finally, you can get more information about Punjabi University here.

Live webcast about data journalism on July 30, 2014 and a webinar featuring the 2014 NNI (US National Nanotechnology Initiative) EHS (Environment, Health and Safety) Progress Review on July 31, 2014

The Woodrow Wilson International Center for Scholars is hosting a live webcast on data journalism scheduled for July 30, 2014. For those us who are a little fuzzy as to what the term ‘data journalism’ means, this is probably a good opportunity to find out as per the description in the Wilson Center’s July 23, 2014 email announcement,

What is data journalism? Why does it matter? How has the maturing field of data science changed the direction of journalism and global investigative reporting? Our speakers will discuss the implications for policymakers and institutional accountability, and how the balance of power in information gathering is shifting worldwide, with implications for decision-making and open government.

This event will be live webcast and you may follow it on twitter @STIPcommonslab and #DataJournalism

Wednesday, July 30th, 2014
10am – 12pm EST
5th Floor Conference Room
[Woodrow Wilson International Center for Scholars
Ronald Reagan Building and International Trade Center
One Woodrow Wilson Plaza – 1300 Pennsylvania Ave., NW, Washington, DC 20004-3027
T 1-202-691-4000]


Alexander B. Howard
Writer and Editor, TechRepublic and founder of the blog “E Pluribus Unum.” Previously, he was a fellow at the Tow Center for Digital Journalism at Columbia University, the Ash Center at Harvard University and the Washington Correspondent for O’Reilly Media.

Kalev H. Leetaru
Yahoo! Fellow at Georgetown University, a Council Member of the World Economic Forum’s Global Agenda Council on the Future of Government, and a Foreign Policy Magazine Top 100 Global Thinker of 2013. For nearly 20 years he has been studying the web and building systems to interact with and understand the way it is reshaping our global society.

Louise Lief (Moderator)
Public Policy Scholar at the Wilson Center. Her project, “Science and the Media” explores innovative ways to make environmental science more accessible and useful to all journalists. She is investigating how new technologies and civic innovation tools can benefit both the media and science.

I believe you need to RSVP if you are attending in person but it’s not necessary for the livestream.

The other announcement comes via a July 23, 2014 news item on Nanowerk,

The National Nanotechnology Coordination Office (NNCO) will hold a public webinar on Thursday, July 31, 2014, to provide a forum to answer questions related to the “Progress Review on the Coordinated Implementation of the National Nanotechnology Initiative (NNI) 2011 Environmental, Health, and Safety Research Strategy.”

The full notice can be found on the US website,

When: The webinar will be live on Thursday, July 31, 2014 from 12:00 pm-1 pm.
Where: Click here to register for the online webcast

While it’s open to the public, I suspect this is an event designed largely for highly interested parties such as the agencies involved in EHS activities, nongovernmental organizations that act as watchdogs, and various government policy wonks. Here’s how they describe their proposed discussions (from the event notice page),

Discussion during the webinar will focus on the research activities undertaken by NNI agencies to advance the current state of the science as highlighted in the Progress Review. Representative research activities as provided in the Progress Review will be discussed in the context of the 2011 NNI EHS Research Strategy’s six core research areas: Nanomaterial Measurement Infrastructure, Human Exposure Assessment, Human Health, the Environment, Risk Assessment and Risk Management Methods, and Informatics and Modeling.

How: During the question-and-answer segment of the webinar, submitted questions will be considered in the order received. A moderator will identify relevant questions and pose them to the panel of NNI agency representatives. Due to time constraints, not all questions may be addressed.  The moderator reserves the right to group similar questions and to skip questions, as appropriate. The NNCO will begin accepting questions and comments via email ( at 1 pm on Thursday, July 24th (EDT) until the close of the webinar at 1 pm (EDT) on July 31st.

The Panelists:  The panelists for the webinar are subject matter experts from the Federal Government.

Additional Information: A public copy of the “Progress Review on the Coordinated Implementation of the National Nanotechnology Initiative 2011 Environmental, Health, and Safety Research Strategy” can be accessed at The 2011 NNI EHS Research Strategy can be accessed at

Writing and AI or is a robot writing this blog?

In an interview almost 10 years ago for an article I was writing for a digital publishing magazine, I had a conversation with a very technically oriented individual that went roughly this way,

Him: (enthused and excited) We’re developing algorithms that will let us automatically create brochures, written reports, that will always have the right data and can be instantly updated.

Me: (pause)

Him: (no reaction)

Me: (breaking long pause) You realize you’re talking to a writer, eh? You’ve just told me that at some point in the future nobody will need writers.

Him: (pause) No. (then with more certainty) No. You don’t understand. We’re making things better for you. In the future, you won’t need to do the boring stuff.

It seems the future is now and in the hands of a company known as Automated Insights. You can find this description at the base of one of the company’s news releases,


Automated Insights (Ai) transforms Big Data into written reports with the depth of analysis, personality and variability of a human writer. In 2014, Ai and its patented Wordsmith platform will produce over 1 billion personalized reports for clients like Yahoo!, The Associated Press, the NFL, and [emphasis mine] The Wordsmith platform uses artificial intelligence to dynamically spot patterns and trends in raw data and then describe those findings in plain English. Wordsmith authors insightful, personalized reports around individual user data at unprecedented scale and in real-time. Automated Insights also offers applications that run on its Wordsmith platform, including the recently launched Wordsmith for Marketing, which enables marketing agencies to automate reporting for clients. Learn more at

In the wake of the June 30, 2014 deal with the Associated Press, there has been a flurry of media interest, especially from writers who seem to have largely concluded that the robots will do the boring stuff and free human writers to do creative, innovative work. A July 2, 2014 news item provides more details about the deal,

The Associated Press, the largest American-based news agency in the world, will now use story-writing software to produce U.S. corporate earnings stories.

In a recent blog post, AP Managing Editor Lou Ferrara explained that the software is capable of producing these stories, which are largely technical financial reports that range from 150 to 300 words, in “roughly the same time that it takes our reporters.” [emphasis mine]

AP staff members will initially edit the software-produced reports, but the agency hopes the process will soon be fully automated.

The Wordsmith software constructs narratives in plain English by using algorithms to analyze trends and patterns in a set of data and place them in an appropriate context depending on the nature of the story.

Representatives for the Associated Press have assured anyone who fears robots are making journalists obsolete that Wordsmith will not be taking the jobs of staffers. “We are going to use our brains and time in more enterprising ways during earnings season,” Ferrara wrote in the blog post. “This is about using technology to free journalists to do more journalism and less data processing, not about eliminating jobs.” [emphasis mine]

Russell Brandom’s July 11, 2014 article for The Verge provides more technical detail and context for this emerging field,

Last week, the Associated Press announced it would be automating its articles on quarterly earnings reports. Instead of 300 articles written by humans, the company’s new software will write 4,400 of them, each formatted for AP style, in mere seconds. It’s not the first time a company has tried out automatic writing: last year, a reporter at The LA Times wrote an automated earthquake-reporting program that combined prewritten sentences with automatic seismograph reports to report quakes just seconds after they happen. The natural language-generation company Narrative Science has been churning out automated sports reporting for years.

It appears that AP Managing Editor Lou Ferrara doesn’t know how long it takes to write 150 to 300 words (“roughly the same time that it takes our reporters”) or perhaps he or she wanted to ‘soften’ the news’s possible impact. Getting back to the technical aspects in Brandom’s article,

… So how do you make a robot that writes sentences?

In the case of AP style, a lot of the work has already been done. Every Associated Press article already comes with a clear, direct opening and a structure that spirals out from there. All the algorithm needs to do is code in the same reasoning a reporter might employ. Algorithms detect the most volatile or newsworthy shift in a given earnings report and slot that in as the lede. Circling outward, the program might sense that a certain topic has already been covered recently and decide it’s better to talk about something else. …

The staffers who keep the copy fresh are scribes and coders in equal measure. (Allen [Automated Insights CEO Robbie Allen] says he looks for “stats majors who worked on the school paper.”) They’re not writers in the traditional sense — most of the language work is done beforehand, long before the data is available — but each job requires close attention. For sports articles, the Automated Insights team does all its work during the off-season and then watches the articles write themselves from the sidelines, as soon as each game’s results are available. “I’m often quite surprised by the result,” says Joe Procopio, the company’s head of product engineering. “There might be four or five variables that determine what that lead sentence looks like.” …
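To make the general idea concrete, here is a toy sketch of the kind of data-to-text generation Brandom describes: pick the most newsworthy change in the numbers, slot it in as the lede, then fill in templated sentences from the rest of the data. This is my own simplified illustration, not Automated Insights’ Wordsmith; the field names, thresholds, and wording templates are invented for the example.

```python
# Toy data-to-text sketch: generate a short earnings blurb from structured data.
# It illustrates the general technique (lede selection plus templated sentences);
# it is NOT Automated Insights' Wordsmith, and the field names are invented.

def pick_lede(company: str, data: dict) -> str:
    """Lead with the larger relative change versus the prior year (crude 'newsworthiness')."""
    eps_change = (data["eps"] - data["eps_prior"]) / abs(data["eps_prior"])
    rev_change = (data["revenue"] - data["revenue_prior"]) / data["revenue_prior"]
    if abs(eps_change) >= abs(rev_change):
        direction = "rose" if eps_change > 0 else "fell"
        return (f"{company} said earnings per share {direction} to "
                f"${data['eps']:.2f} from ${data['eps_prior']:.2f} a year earlier.")
    direction = "climbed" if rev_change > 0 else "slipped"
    return (f"{company} reported revenue that {direction} {abs(rev_change):.0%} "
            f"to ${data['revenue'] / 1e9:.1f} billion.")

def earnings_story(company: str, data: dict) -> str:
    """Assemble the lede and a couple of templated follow-on sentences."""
    body = (f"Revenue for the quarter was ${data['revenue'] / 1e9:.1f} billion, "
            f"compared with ${data['revenue_prior'] / 1e9:.1f} billion a year ago. "
            f"Analysts had expected earnings of ${data['eps_consensus']:.2f} per share.")
    return pick_lede(company, data) + " " + body

if __name__ == "__main__":
    sample = {"eps": 1.42, "eps_prior": 1.18, "eps_consensus": 1.35,
              "revenue": 9.3e9, "revenue_prior": 8.7e9}
    print(earnings_story("Example Corp", sample))
```

Even a toy like this makes Allen’s preference for “stats majors who worked on the school paper” understandable: most of the writing happens in the templates, long before any data arrives.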

A July 11, 2014 article by Catherine Taibi for Huffington Post offers a summary of the current ‘robot/writer’ situation (Automated Insights is not the only company offering this service) along with many links including one to this July 11, 2014 article by Kevin Roose for New York Magazine where he shares what appears to be a widely held opinion and which echoes my interviewee of 10 years ago (Note: A link has been removed),

By this point, we’re no longer surprised when machines replace human workers in auto factories or electronics-manufacturing plants. That’s the norm. But we hoity-toity journalists had long assumed that our jobs were safe from automation. (We’re knowledge workers, after all.) So when the AP announced its new automated workforce, you could hear the panic spread to old-line news desks across the nation. Unplug the printers, Bob! The robots are coming!

I’m not an alarmist, though. In fact, I welcome our new robot colleagues. Not only am I not scared of losing my job to a piece of software, I think the introduction of automated reporting is the best thing to happen to journalists in a long time.

For one thing, humans still have the talent edge. At the moment, the software created by Automated Insights is only capable of generating certain types of news stories — namely, short stories that use structured data as an input, and whose output follows a regular pattern. …

Robot-generated stories aren’t all fill-in-the-blank jobs; the more advanced algorithms use things like perspective, tone, and humor to tailor a story to its audience. …

But these robots, as sophisticated as they are, can’t approach the full creativity of a human writer. They can’t contextualize Emmy snubs like Matt Zoller Seitz, assail opponents of Obamacare like Jonathan Chait, or collect summer-camp sex stories like Maureen O’Connor. My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence to handle; they require human skills like picking up the phone, piecing together data points from multiple sources, and drawing original, evidence-based conclusions. [emphasis mine]

The stories that today’s robots can write are, frankly, the kinds of stories that humans hate writing anyway. … [emphasis mine]

Despite his blithe assurances, there is a little anxiety expressed in this piece: “My colleagues’ jobs (and mine, knock wood) are too complex for today’s artificial intelligence … .”

I too am feeling a little uncertain. For example, there’s this April 29, 2014 posting by Adam Long on the Automated Insights blog, and I can’t help wondering how much was actually written by Long and how much by the company’s robots. After all, the company proudly proclaims the blog is powered by Wordsmith Marketing. For that matter, I’m not that sure about the piece, which has no byline.

For anyone interested in still more links and information, Automated Insights offers a listing of their press coverage here. Although it’s a bit dated now, there is an exhaustive May 22, 2013 posting by Tony Hirst on the blog which, despite the title: ‘Notes on Narrative Science and Automated Insights’, provides additional context for the work being done to automate the writing process since 2009.

For the record, this blog is not written by a robot. As for getting rid of the boring stuff, I can’t help but remember that part of how one learns any craft is by doing the boring, repetitive work needed to build skills.

One final and unrelated note, Automated Insights has done a nice piece of marketing with its name which abbreviates to Ai. One can’t help but be reminded of AI, a term connoting the field of artificial intelligence.

Competition, collaboration, and a smaller budget: the US nano community responds

Before getting to the competition, collaboration, and budget mentioned in the head for this posting, I’m supplying some background information.

Within the context of a May 20, 2014 ‘National Nanotechnology Initiative’ hearing before the U.S. House of Representatives Subcommittee on Research and Technology, Committee on Science, Space, and Technology, the US Government Accountability Office (GAO) presented a 22 pp. précis (PDF; titled: Nanomanufacturing and U.S. Competitiveness: Challenges and Opportunities) of its 125 pp. report (PDF; titled: Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health).

Having already commented on the full report itself in a Feb. 10, 2014 posting, I’m pointing you to Dexter Johnson’s May 21, 2014 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) where he discusses the précis from the perspective of someone who was consulted by the US GAO when they were writing the full report (Note: Links have been removed),

I was interviewed extensively by two GAO economists for the accompanying [full] report “Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health,” where I shared background information on research I helped compile and write on global government funding of nanotechnology.

While I acknowledge that the experts who were consulted for this report are more likely the source for its views than I am, I was pleased to see the report reflect many of my own opinions. Most notable among these is bridging the funding gap in the middle stages of the manufacturing-innovation process, which is placed at the top of the report’s list of challenges.

While I am in agreement with much of the report’s findings, it suffers from a fundamental misconception in seeing nanotechnology’s development as a kind of race between countries. [emphases mine]

(I encourage you to read the full text of Dexter’s comments as he offers more than a simple comment about competition.)

Carrying on from this notion of a ‘nanotechnology race’, at least one publication focused on that aspect. From the May 20, 2014 article by Ryan Abbott,

Nanotech Could Keep U.S. Ahead of China

WASHINGTON (CN) – Four of the nation’s leading nanotechnology scientists told a U.S. House of Representatives panel Tuesday that a little tweaking could go a long way in keeping the United States ahead of China and others in the industry.

The hearing focused on the status of the National Nanotechnology Initiative, a federal program launched in 2001 for the advancement of nanotechnology.

As I noted earlier, the hearing was focused on the National Nanotechnology Initiative (NNI) and all of its efforts. It’s quite intriguing to see what gets emphasized in media reports and, in this case, the dearth of media reports.

I have one more tidbit: the testimony from Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology. The testimony is excerpted in a May 21, 2014 news item,

Testimony by Lloyd Whitman, Interim Director of the National Nanotechnology Coordination Office and Deputy Director of the Center for Nanoscale Science and Technology, National Institute of Standards and Technology

Chairman Bucshon, Ranking Member Lipinski, and Members of the Committee, it is my distinct privilege to be here with you today to discuss nanotechnology and the role of the National Nanotechnology Initiative in promoting its development for the benefit of the United States.

Highlights of the National Nanotechnology Initiative

Our current Federal research and development program in nanotechnology is strong. The NNI agencies continue to further the NNI’s goals of (1) advancing nanotechnology R&D, (2) fostering nanotechnology commercialization, (3) developing and maintaining the U.S. workforce and infrastructure, and (4) supporting the responsible and safe development of nanotechnology. …


The sustained, strategic Federal investment in nanotechnology R&D combined with strong private sector investments in the commercialization of nanotechnology-enabled products has made the United States the global leader in nanotechnology. The most recent (2012) NNAP report analyzed a wide variety of sources and metrics and concluded that “… in large part as a result of the NNI the United States is today… the global leader in this exciting and economically promising field of research and technological development.” n10 A recent report on nanomanufacturing by Congress’s own Government Accountability Office (GAO) arrived at a similar conclusion, again drawing on a wide variety of sources and stakeholder inputs. n11 As discussed in the GAO report, nanomanufacturing and commercialization are key to capturing the value of Federal R&D investments for the benefit of the U.S. economy. The United States leads the world by one important measure of commercial activity in nanotechnology: According to one estimate, n12 U.S. companies invested $4.1 billion in nanotechnology R&D in 2012, far more than investments by companies in any other country.  …

There’s cognitive dissonance at work here as Dexter notes in his own way,

… somewhat ironically, the [GAO] report suggests that one of the ways forward is more international cooperation, at least in the development of international standards. And in fact, one of the report’s key sources of information, Mihail Roco, has made it clear that international cooperation in nanotechnology research is the way forward.

It seems to me that much of the testimony and at least some of the anxiety about being left behind can be traced to a decreased 2015 budget allotment for nanotechnology (mentioned here in a March 31, 2014 posting [US National Nanotechnology Initiative’s 2015 budget request shows a decrease of $200M]).

One can also infer a certain anxiety from a recent presentation by Barbara Herr Harthorn, head of UCSB’s (University of California at Santa Barbara) Center for Nanotechnology in Society (CNS). She was at a February 2014 meeting of the Presidential Commission for the Study of Bioethical Issues (mentioned in parts one and two [the more substantive description of the meeting, which also features a Canadian academic from the genomics community] of my recent series on “Brains, prostheses, nanotechnology, and human enhancement”). I noted in part five of the series what seems to be a shift towards brain research as a likely beneficiary of the public engagement work accomplished under NNI auspices and, in the case of the Canadian academic, the genomics effort.

The Americans are not the only ones feeling competitive as this tweet from Richard Jones, Pro-Vice Chancellor for Research and Innovation at Sheffield University (UK), physicist, and author of Soft Machines, suggests,

May 18

The UK has fewer than 1% of world patents on graphene, despite it being discovered here, according to the FT –

I recall reading a report a few years back which noted that experts in China were concerned about falling behind internationally in their research efforts. These anxieties are not new; C.P. Snow’s book and lecture The Two Cultures (1959) also referenced concerns in the UK about scientific progress and being left behind.

Competition/collaboration is an age-old conundrum and about as ancient as anxieties of being left behind. The question now is how are we all going to resolve these issues this time?

ETA May 28, 2014: The American Institute of Physics (AIP) has produced a summary of the May 20, 2014 hearing as part of their FYI: The AIP Bulletin of Science Policy News, May 27, 2014 (no. 93).

ETA Sept. 12, 2014: My first posting about the diminished budget allocation for the US NNI was this March 31, 2014 posting.

Apply for six month internship at Nature (journal) sponsored by Canada’s International Development Research Centre (IDRC)

The deadline is Feb. 26, 2014, Canadians and people resident in Canada are eligible, and this does involve some travel. Here are the details (from a Feb. 12, 2014 posting on the Nature blogs),

Canada’s International Development Research Centre (IDRC) is offering a six-month, full-time science journalism award worth up to CAD$60,000 to an English-speaking Canadian citizen or permanent resident of Canada. The successful applicant will receive training and work as an intern in the London news room of the leading international science journal Nature before spending up to four months reporting science stories from developing countries. He or she will be at an early stage of his or her career, but with at least three years’ experience as a journalist.

Candidates must have a keen interest in science and technology, particularly relating to development, as well as outstanding reporting and writing skills, and strong ideas for news and features suitable for publication in Nature. The internship is expected to begin in April or May 2014.

To apply, please e-mail the following to

  • A covering letter explaining your suitability for the award
  • A resume
  • Three recent story clips, ideally a mix of news and feature pieces
  • Three brief pitches for stories you think would appeal to Nature’s audience.

Deadline: Wednesday 26 February 2014

About the IDRC

The IDRC is a Canadian Crown corporation that works closely with researchers from the developing world in their search to build healthier, more equitable and more prosperous societies (see

About Nature

Nature is a weekly international journal publishing the finest peer-reviewed research in all fields of science and technology, and is the world’s most highly cited interdisciplinary science journal. It also has an international news team covering the latest science, policy and funding news in both online and print formats (see

About the award

Nature will manage the selection process and the IDRC will award up to CAD$60,000 to the successful applicant. This will cover travel costs, living expenses, research expenses, visa or other related costs, in London and in other countries visited during the six-month period. The award will also cover the cost of participating in a conference relevant to the award winner’s professional development as a journalist. For more information click here.

Good luck!

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication‘ has created a new category of book: sensory fiction. John Brownlee in his Feb. 10, 2014 article for Fast Company describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state triggers discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian, where she explains how vibration, temperature, and other cues are used to convey and stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game.

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantalising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion as so far they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people are becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book seems a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch and researchers at MIT have also been investigating new touch-oriented media.  You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought, I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

Two bits about the brain: fiction affects your brain and the US’s BRAIN Initiative is soliciting grant submissions

As a writer I love to believe my words have a lasting impact, and while this research is focused on fiction, something I write more rarely than nonfiction, hope springs eternal that one day nonfiction too will be shown to have an impact (in a good way) on the brain. From a Jan. 3, 2014 news release on EurekAlert (or you can read the Dec. 17, 2013 Emory University news release by Carol Clark),

Many people can recall reading at least one cherished story that they say changed their life. Now researchers at Emory University have detected what may be biological traces related to this feeling: Actual changes in the brain that linger, at least for a few days, after reading a novel.

“Stories shape our lives and in some cases help define a person,” says neuroscientist Gregory Berns, lead author of the study and the director of Emory’s Center for Neuropolicy. “We want to understand how stories get into your brain, and what they do to it.”

His co-authors included Kristina Blaine and Brandon Pye from the Center for Neuropolicy, and Michael Prietula from Emory’s Goizueta Business School.

Neurobiological research using functional magnetic resonance imaging (fMRI) has begun to identify brain networks associated with reading stories. Most previous studies have focused on the cognitive processes involved in short stories, while subjects are actually reading them while they are in the fMRI scanner.

All of the study subjects read the same novel, “Pompeii,” a 2003 thriller by Robert Harris that is based on the real-life eruption of Mount Vesuvius in ancient Italy.

“The story follows a protagonist, who is outside the city of Pompeii and notices steam and strange things happening around the volcano,” Berns says. “He tries to get back to Pompeii in time to save the woman he loves. Meanwhile, the volcano continues to bubble and nobody in the city recognizes the signs.”

The researchers chose the book due to its page-turning plot. “It depicts true events in a fictional and dramatic way,” Berns says. “It was important to us that the book had a strong narrative line.”

For the first five days, the participants came in each morning for a base-line fMRI scan of their brains in a resting state. Then they were fed nine sections of the novel, about 30 pages each, over a nine-day period. They were asked to read the assigned section in the evening, and come in the following morning. After taking a quiz to ensure they had finished the assigned reading, the participants underwent an fMRI scan of their brain in a non-reading, resting state. After completing all nine sections of the novel, the participants returned for five more mornings to undergo additional scans in a resting state.

The results showed heightened connectivity in the left temporal cortex, an area of the brain associated with receptivity for language, on the mornings following the reading assignments. “Even though the participants were not actually reading the novel while they were in the scanner, they retained this heightened connectivity,” Berns says. “We call that a ‘shadow activity,’ almost like a muscle memory.”

Heightened connectivity was also seen in the central sulcus of the brain, the primary sensory motor region of the brain. Neurons of this region have been associated with making representations of sensation for the body, a phenomenon known as grounded cognition. Just thinking about running, for instance, can activate the neurons associated with the physical act of running.

“The neural changes that we found associated with physical sensation and movement systems suggest that reading a novel can transport you into the body of the protagonist,” Berns says. “We already knew that good stories can put you in someone else’s shoes in a figurative sense. Now we’re seeing that something may also be happening biologically.”

The neural changes were not just immediate reactions, Berns says, since they persisted the morning after the readings, and for the five days after the participants completed the novel.

“It remains an open question how long these neural changes might last,” Berns says. “But the fact that we’re detecting them over a few days for a randomly assigned novel suggests that your favorite novels could certainly have a bigger and longer-lasting effect on the biology of your brain.”

Here’s a link to and a citation for the paper,

Short- and Long-Term Effects of a Novel on Connectivity in the Brain by Gregory S. Berns, Kristina Blaine, Michael J. Prietula, and Brandon E. Pye. Brain Connectivity. 2013, 3(6): 590-600. doi:10.1089/brain.2013.0166.

This is an open access paper, and you’ll notice the participants cover a narrow range of ages (from the Materials and Methods section of the paper),

A total of 21 participants were studied. Two were excluded from the fMRI analyses: one for insufficient attendance, and the other for image abnormalities. Before the experiment, participants were screened for the presence of medical and psychiatric diagnoses, and none were taking medications. There were 12 female and 9 male participants between the ages of 19 and 27 (mean 21.5). Emory University’s Institutional Review Board approved all procedures, and all participants gave written informed consent.

It’s always good to remember that university research often draws from student populations, and the question one might want to ask is whether or not those results would hold, more or less, throughout someone’s life span. In any event, I find this research intriguing and hope the researchers follow it up.

Currently known as the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies), this project is one I first wrote about under its old name, BAM (Brain Activity Map), in two postings: first in a March 4, 2013 posting featuring brain-to-brain communication and other brain-related tidbits, then again in an April 2, 2013 posting featuring an announcement about its federal funding. Today (Jan. 6, 2014), I stumbled across some BRAIN Initiative funding opportunities for researchers, from the BRAIN Initiative funding opportunities webpage,

NIH released six funding opportunity announcements in support of the President’s BRAIN Initiative. Collectively, these opportunities focus on building a new arsenal of tools and technologies for helping scientists unlock the mysteries of the brain. NIH [US National Institutes of Health] plans to invest $40 million in Fiscal Year 2014 through these opportunities, contingent upon the submission of a sufficient number of scientifically meritorious applications.

The opportunities currently available are as follows:

  • Transformative Approaches for Cell-Type Classification in the Brain (U01) (RFA-MH-14-215) – aims to pilot classification strategies to generate a systematic inventory/cell census of cell types in the brain, integrating molecular identity of cell types with connectivity, morphology, and location. These pilot projects and methodologies should be designed to demonstrate their utility and scalability to ultimately complete a comprehensive cell census of the human brain.
    Contact Email:
    Application Receipt: March 13, 2014
  • Development and Validation of Novel Tools to Analyze Cell-Specific and Circuit-Specific Processes in the Brain (U01) (RFA-MH-14-216) – aims to develop and validate novel tools that possess a high degree of cell-type and/or circuit-level specificity to facilitate the detailed analysis of complex circuits and provide insights into cellular interactions that underlie brain function. A particular emphasis is the development of new genetic and non-genetic tools for delivering genes, proteins and chemicals to cells of interest; new approaches are also expected to target specific cell types and/or circuits in the nervous system with greater precision and sensitivity than currently established methods.
    Contact Email:
    Application Receipt: March 13, 2014
  • New Technologies and Novel Approaches for Large-Scale Recording and Modulation in the Nervous System (U01) (RFA-NS-14-007) – focuses on development and proof-of-concept testing of new technologies and novel approaches for large scale recording and manipulation of neural activity, with cellular resolution, at multiple spatial and/or temporal scales, in any region and throughout the entire depth of the brain. The proposed research may be high risk, but if successful could profoundly change the course of neuroscience research.
    Contact Email:
    Application Receipt: March 24, 2014
  • Optimization of Transformative Technologies for Large Scale Recording and Modulation in the Nervous System (U01) (RFA-NS-14-008) – aims to optimize existing and emerging technologies and approaches that have the potential to address major challenges associated with recording and manipulating neural activity. This FOA is intended for the iterative refinement of emergent technologies and approaches that have already demonstrated their transformative potential through initial proof-of-concept testing, and are appropriate for accelerated engineering development with an end-goal of broad dissemination and incorporation into regular neuroscience research.
    Contact Email:
    Application Receipt: March 24, 2014
  • Integrated Approaches to Understanding Circuit Function in the Nervous System (U01) (RFA-NS-14-009) – focuses on exploratory studies that use new and emerging methods for large scale recording and manipulation to elucidate the contributions of dynamic circuit activity to a specific behavioral or neural system. Applications should propose teams of investigators that seek to cross boundaries of interdisciplinary collaboration, for integrated development of experimental, analytic and theoretical capabilities in preparation for a future competition for large-scale awards.
    Contact Email:
    Application Receipt: March 24, 2014
  • Planning for Next Generation Human Brain Imaging (R24) (RFA-MH-14-217) – aims to create teams of imaging scientists together with other experts from a range of disciplines such as engineering, material sciences, nanotechnology and computer science, to plan for a new generation of non-invasive imaging techniques that would be used to understand human brain function. Incremental improvements to existing technologies will not be funded under this announcement.
    Contact Email:
    Application Receipt: March 13, 2014

For the interested, in the near future there will be some informational conference calls regarding these opportunities,

Informational Conference Calls for Prospective Applicants

NIH will be hosting a series of informational conference calls to address technical questions regarding applications to each of the RFAs released under the BRAIN Initiative. Information on dates and contacts for each of the conference calls is as follows:

January 10, 2014, 2:00-3:00 PM EST
RFA-MH-14-215, Transformative Approaches for Cell-Type Classification in the Brain

For call-in information, contact Andrea Beckel-Mitchener at

January 13, 2014, 2:00-3:00 PM EST
RFA-MH-14-216, Development and Validation of Novel Tools to Analyze Cell-Specific and Circuit-Specific Processes in the Brain

For call-in information, contact Michelle Freund at

January 15, 2014, 1:00-2:00 PM EST
RFA-MH-14-217, Planning for Next Generation Human Brain Imaging

For call-in information, contact Greg Farber at

February 4, 2014, 1:00-2:30 PM EST
RFA-NS-14-007, New Technologies and Novel Approaches for Large-Scale Recording and Modulation in the Nervous System
RFA-NS-14-008, Optimization of Transformative Technologies for Large Scale Recording and Modulation in the Nervous System
RFA-NS-14-009, Integrated Approaches to Understanding Circuit Function in the Nervous System

For call-in information, contact Karen David at

In addition to accessing the information provided in the upcoming conference calls, applicants are strongly encouraged to consult with the Scientific/Research Contacts listed in each of the RFAs to discuss the alignment of their proposed work with the goals of the RFA to which they intend to apply.

Good luck!

It’s kind of fascinating to see this much emphasis on brains, what with the BRAIN Initiative in the US and the Human Brain Project in Europe (my Jan. 28, 2013 posting announced the European Union’s winning Future and Emerging Technologies [FET] research projects; the prizes, 1B Euros to be paid out over 10 years to each winner, went to the Human Brain FET project and the Graphene FET project, respectively).