Tag Archives: Mike Roberts

Nanoscale imaging of a mouse brain

Researchers have developed a new brain imaging tool they would like to use as a founding element for a national brain observatory. From a July 30, 2015 news item on Azonano,

A new imaging tool developed by Boston scientists could do for the brain what the telescope did for space exploration.

In the first demonstration of how the technology works, published July 30 in the journal Cell, the researchers look inside the brain of an adult mouse at a scale previously unachievable, generating images at a nanoscale resolution. The inventors’ long-term goal is to make the resource available to the scientific community in the form of a national brain observatory.

A July 30, 2015 Cell Press news release on EurekAlert, which originated the news item, expands on the theme,

“I’m a strong believer in bottom-up science, which is a way of saying that I would prefer to generate a hypothesis from the data and test it,” says senior study author Jeff Lichtman, of Harvard University. “For people who are imagers, being able to see all of these details is wonderful and we’re getting an opportunity to peer into something that has remained somewhat intractable for so long. It’s about time we did this, and it is what people should be doing about things we don’t understand.”

The researchers have begun the process of mining their imaging data by looking first at an area of the brain that receives sensory information from mouse whiskers, which help the animals orient themselves and are even more sensitive than human fingertips. The scientists used a program called VAST, developed by co-author Daniel Berger of Harvard and the Massachusetts Institute of Technology, to assign different colors and piece apart each individual “object” (e.g., neuron, glial cell, blood vessel cell, etc.).

“The complexity of the brain is much more than what we had ever imagined,” says study first author Narayanan “Bobby” Kasthuri, of the Boston University School of Medicine. “We had this clean idea of how there’s a really nice order to how neurons connect with each other, but if you actually look at the material it’s not like that. The connections are so messy that it’s hard to imagine a plan to it, but we checked and there’s clearly a pattern that cannot be explained by randomness.”

The researchers see great potential in the tool’s ability to answer questions about what a neurological disorder actually looks like in the brain, as well as what makes the human brain different from other animals and different between individuals. Who we become is very much a product of the connections our neurons make in response to various life experiences. To be able to compare the physical neuron-to-neuron connections in an infant, a mathematical genius, and someone with schizophrenia would be a leap in our understanding of how our brains shape who we are (or vice versa).

The cost and data storage demands for this type of research are still high, but the researchers expect expenses to drop over time (as has been the case with genome sequencing). To facilitate data sharing, the scientists are now partnering with Argonne National Laboratory with the hopes of creating a national brain laboratory that neuroscientists around the world can access within the next few years.

“It’s bittersweet that there are many scientists who think this is a total waste of time as well as a big investment in money and effort that could be better spent answering questions that are more proximal,” Lichtman says. “As long as data is showing you things that are unexpected, then you’re definitely doing the right thing. And we are certainly far from being out of the surprise element. There’s never a time when we look at this data that we don’t see something that we’ve never seen before.”

Here’s a link to and a citation for the paper,

Saturated Reconstruction of a Volume of Neocortex by Narayanan Kasthuri, Kenneth Jeffrey Hayworth, Daniel Raimund Berger, Richard Lee Schalek, José Angel Conchello, Seymour Knowles-Barley, Dongil Lee, Amelio Vázquez-Reina, Verena Kaynig, Thouis Raymond Jones, Mike Roberts, Josh Lyskowski Morgan, Juan Carlos Tapia, H. Sebastian Seung, William Gray Roncal, Joshua Tzvi Vogelstein, Randal Burns, Daniel Lewis Sussman, Carey Eldin Priebe, Hanspeter Pfister, Jeff William Lichtman. Cell, Volume 162, Issue 3, pp. 648–661, 30 July 2015. DOI: http://dx.doi.org/10.1016/j.cell.2015.06.054

This appears to be an open access paper.

Digital disasters

What would happen if we had a digital disaster? Try to imagine a situation where all or most of our information has been destroyed on all global networks. It may seem unlikely but it’s not entirely impossible, as Luciana Duranti, then a professor at the University of British Columbia School of Library, Archival and Information Studies, suggested to reporter Mike Roberts in a 2006 interview. She cited a few examples of what we had already lost (excerpted from my March 9, 2010 posting),

… she commented on the memories we had already lost. From the article,

Alas, she says, every day something else is irretrievably lost.

The research records of the U.S. Marines for the past 25 years? Gone.

East German land-survey records vital to the reunification of Germany? Toast.

A piece of digital interactive music recorded by Canadian composer Keith Hamel just eight years ago?

“Inaccessible, over, finito,” says Duranti, educated in her native Italy and a UBC prof since 1987.

Duranti, director of InterPARES (International Research on Permanent Authentic Records in Electronic Systems), an international cyber-preservation project comprising 20 countries and 60 global archivists, says original documentation is a thing of the past.

Glyn Moody’s March 5, 2012 posting on Techdirt notes a recent attempt to address the possible loss of ‘memory’ along with other issues specific to the digitization of information (I have removed links),

But there’s a problem: as more people turn to digital books as their preferred way of consuming text, libraries are starting to throw out their physical copies. Some, because nobody reads them much these days; some, because they take up too much space, and cost too much to keep; some, even on the grounds that Google has already scanned the book, and so the physical copy isn’t needed. Whatever the underlying reason, the natural assumption that we can always go back to traditional libraries to digitize or re-scan works is looking increasingly dubious.

Fortunately, Brewster Kahle, the man behind the Alexa Web traffic and ranking company (named after the Library of Alexandria, and sold to Amazon), and the Internet Archive — itself a kind of digital Library of Alexandria — has spotted the danger, and is now creating yet another ambitious library, this time of physical books …

For some reason this all reminded me of A Canticle for Leibowitz, a book I read many years ago and remember chiefly as a warning that information can be lost. There’s more about the book here. As for Kahle’s plan, I wish him the best of luck.

Dem bones at McGill; innovation from the Canadian business community?; the archiving frontier; linking and copyright

I have a number of bits today, amongst them Canadian nanotechnology, Canadian business innovation, digital archiving, and copyright and linking.

A Quebec biotech company, Enobia Pharma, is working with Dr. Marc McKee on treatments for genetic bone diseases. From the news item on Nanowerk,

The field is known as biomineralization and it involves cutting-edge, nanotech investigation into the proteins, enzymes and other molecules that control the coupling of mineral ions (calcium and phosphate) to form nano-crystals within the bone structure. The treatment, enzyme replacement therapy to treat hypophosphatasia, is currently undergoing clinical testing in several countries including Canada. Hypophosphatasia is a rare and severe disorder resulting in poor bone mineralization. In infants, symptoms include respiratory insufficiency, failure to thrive and rickets.

This research in biomineralization (coupling of mineral ions to form nano-crystals) could lead to better treatments for other conditions such as cardiovascular diseases, arthritis, and kidney stones.

McKee’s research is being funded in part by the Canadian Institutes of Health Research. From the Nanowerk news item,

McKee’s research program is a concrete example of how university researchers are working with private sector partners as an integral part of Canada’s innovative knowledge economy, and the positive outcomes their collaborations can offer.

I don’t think that businesses partnering with academic institutions in research collaborations is precisely what they mean when they talk about business innovation (research and development). From a March 2, 2010 article about innovation by Preston Manning in the Globe & Mail,

Government competition policy and support for science, technology, and innovation (STI) can complement business leadership on the innovation front, but it is not a substitute for such leadership. Action to increase innovation in the economy is first and foremost a business responsibility.

Manning goes on to describe what he’s done on this matter and asks for suggestions on how to encourage Canadian business to be more innovative. (Thanks to Pasco Phronesis for pointing me to Manning’s article.) I guess the problem is that what we’ve been doing has worked well enough and so there’s no great incentive to change.

I’ve been on an archiving kick lately, so here’s some more. The British Library recently (Feb. 25, 2010) announced public access to its UK Web Archive, a project in which it has been saving online materials. From the news release,

British Library Chief Executive, Dame Lynne Brindley said:

“Since 2004 the British Library has led the UK Web Archive in its mission to archive a record of the major cultural and social issues being discussed online. Throughout the project the Library has worked directly with copyright holders to capture and preserve over 6,000 carefully selected websites, helping to avoid the creation of a ‘digital black hole’ in the nation’s memory.

“Limited by the existing legal position, at the current rate it will be feasible to collect just 1% of all free UK websites by 2011. We hope the current DCMS consultation will enact the 2003 Legal Deposit Libraries Act and extend the provision of legal deposit through regulation to cover freely available UK websites, providing regular snapshots of the free UK web domain for the benefit of future research.”

Mike Masnick at Techdirt notes (here) that the British Library has to get permission (the legal position Dame Brindley refers to) to archive these materials and this would seem to be an instance where ‘fair use’ should be made to apply.

On the subject of losing data, I read an article by Mike Roberts for the Vancouver Province, January 22, 2006, p. B5 (digital copy here) that posed this question: What if the world lost its memory? It was essentially an interview with Luciana Duranti (chair of the Master of Archival Studies programme and professor at the School of Library, Archival and Information Studies at the University of British Columbia, Canada) in which she commented on the memories we had already lost. From the article,

Alas, she says, every day something else is irretrievably lost.

The research records of the U.S. Marines for the past 25 years? Gone.

East German land-survey records vital to the reunification of Germany? Toast.

A piece of digital interactive music recorded by Canadian composer Keith Hamel just eight years ago?

“Inaccessible, over, finito,” says Duranti, educated in her native Italy and a UBC prof since 1987.

Duranti, director of InterPARES (International Research on Permanent Authentic Records in Electronic Systems), an international cyber-preservation project comprising 20 countries and 60 global archivists, says original documentation is a thing of the past.

I was shocked by how much ‘important’ information had been lost and, I assume, still is being lost. (Getting back to the UK Web Archive, if they can only save 1% of the UK’s online material, then a lot has got to be missing.)

For anyone curious about InterPARES, I got my link for the Roberts article from this page on the InterPARES 1 website.

Back to Techdirt and Mike Masnick, who has educated me about a practice I had noticed but not realized is ‘the way things are done amongst journalists’. If you spend enough time on the web, you’ll notice stories that make their way into newspapers without any acknowledgment of their web or writerly origins, and I’m not talking about news releases, which are designed for immediate placement in the media or for being rewritten/reworked before placement. From the post on Techdirt,

We recently wrote about how the NY Post was caught taking a blogger’s story and rewriting it for itself — noting the hypocrisy of a News Corp. newspaper copying from someone else, after Rupert Murdoch and his top execs have been going around decrying various news aggregators (and Google especially) for “stealing” from News Corp. newspapers. It’s even more ridiculous when you think about it — because the “stealing” that Rupert is upset about is Google linking to the original story — a step that his NY Post writer couldn’t even be bothered to do.

Of course, as a few people pointed out in the comments, this sort of “re-reporting” is quite common in the traditional news business. You see it all the time in newspapers, magazines and broadcast TV. They take a story that was found somewhere else and just “re-report” it, so that they have their own version of it.

That’s right, it’s ‘re-reporting’ without attributions or links. Masnick’s post (he’s bringing in Felix Salmon’s comments) attributes this to a ‘print’ mentality, where reporters are accustomed to claiming first place and see acknowledgments and links as failure, while ‘digital natives’ acknowledge and link regularly since they view these as signs of respect. I’m not going to disagree, but I would like to point out that citing sources is pretty standard for academics or anyone trained in that field. I imagine most reporters have at least one university or college degree; surely they learned the importance of citing one’s sources. So does training as a journalist erode that understanding?

And, getting back to this morning’s archival subtheme, at the end of his commentary about the plagiarism, Clark Hoyt (a blogger for the NY Times) had this to say,

Finally, The Times owes readers a full accounting. I asked [Philip] Corbett [standards editor] for the examples of Kouwe’s plagiarism and suggested that editors’ notes be appended to those articles on the Web site and in The Times’s electronic archives. Corbett would not provide the examples and said the paper was not inclined to flag them, partly because there were some clear-cut cases and others that were less clear. “Where do you draw the line?” he asked.

I’d draw it at those he regards as clear. To do otherwise is to leave a corrupted record within the archives of The Times. It is not the way to close the case.

One last thing: Heather Haley is one of the guests appearing tonight at Rock Against Prisons.

Tuesday, March 9, 2010

7:00pm – 11:55pm

Little Mountain Gallery

195 East 26th Ave [Vancouver, Canada]

More details from my previous announcement about this event here.