Tag Archives: Boston University

Purpose in nature (and the universe): even scientists believe

An intriguing research article titled, Professional Physical Scientists Display Tenacious Teleological Tendencies: Purpose-Based Reasoning as a Cognitive Default, is behind a paywall, making it difficult to do much more than comment on the Oct. 17, 2012 news item (on ScienceDaily),

A team of researchers in Boston University’s Psychology Department has found that, despite years of scientific training, even professional chemists, geologists, and physicists from major universities such as Harvard, MIT, and Yale cannot escape a deep-seated belief that natural phenomena exist for a purpose.

Although purpose-based “teleological” explanations are often found in religion, such as in creationist accounts of Earth’s origins, they are generally discredited in science. When physical scientists have time to ruminate about the reasons why natural objects and events occur, they explicitly reject teleological accounts, instead favoring causal, more mechanical explanations. However, the study by lead author Deborah Kelemen, associate professor of psychology, and collaborators Joshua Rottman and Rebecca Seston finds that when scientists are required to think under time pressure, an underlying tendency to find purpose in nature is revealed.

“It is quite surprising what these studies show,” says Kelemen. “Even though advanced scientific training can reduce acceptance of scientifically inaccurate teleological explanations, it cannot erase a tenacious early-emerging human tendency to find purpose in nature. It seems that our minds may be naturally more geared to religion than science.”

I did find the abstract for the paper,

… In Study 2, we explored this further and found that the teleological tendencies of professional scientists did not differ from those of humanities scholars. Thus, although extended education appears to produce an overall reduction in inaccurate teleological explanation, specialization as a scientist does not, in itself, additionally ameliorate scientifically inaccurate purpose-based theories about the natural world. A religion-consistent default cognitive bias toward teleological explanation tenaciously persists and may have subtle but profound consequences for scientific progress.

Here’s the full citation for the paper if you want to examine it yourself,

Professional Physical Scientists Display Tenacious Teleological Tendencies: Purpose-Based Reasoning as a Cognitive Default. By Kelemen, Deborah; Rottman, Joshua; Seston, Rebecca

Journal of Experimental Psychology: General, Oct. 15, 2012.

What I find particularly intriguing about this work is that it helps to explain a phenomenon I’ve observed at science conferences, at science talks, and in science books. The phenomenon is a tendency to ignore a particular set of questions (How did it start? Where did it come from?) when discussing nature or, indeed, the universe.

I noticed the tendency again last night (Oct. 16, 2012) at the CBC (Canadian Broadcasting Corporation) Massey Lecture given by Neil Turok, director of Canada’s Perimeter Institute for Theoretical Physics, and held in Vancouver (Canada). The event was mentioned in my Oct. 12, 2012 posting (scroll down 2/3 of the way).

During this third lecture (What Banged?) in a series of five Massey lectures, Turok asked the audience (roughly 800 people by my count) to imagine a millimetre-sized ball of light as the starting point for the universe. He never did tell us where this ball of light came from. The entire issue of how it all started (What Banged?) was avoided. Turok’s avoidance is not unusual. Somehow the question is always set aside while the scientist jumps into the part of the story she or he can, or wants to, explain.


Interestingly, Turok gave a What Banged? talk previously, in 2008 in Waterloo, Ontario. Judging from this description of the 2008 What Banged? talk, he did modify the presentation for last night,

The evidence that the universe emerged 14 billion years ago from an event called ‘the big bang’ is overwhelming. Yet the cause of this event remains deeply mysterious. In the conventional picture, the ‘initial singularity’ is unexplained. It is simply assumed that the universe somehow sprang into existence full of ‘inflationary’ energy, blowing up the universe into the large, smooth state we observe today. While this picture is in excellent agreement with current observations, it is both contrived and incomplete, leading us to suspect that it is not the final word. In this lecture, the standard inflationary picture will be contrasted with a new view of the initial singularity suggested by string and M-theory, in which the bang is a far more normal, albeit violent, event which occurred in a pre-existing universe. [emphasis mine] According to the new picture, a cyclical model of the universe becomes feasible in which one bang is followed by another, in a potentially endless series of cosmic cycles. The presentation will also review exciting recent theoretical developments and forthcoming observational tests which could distinguish between the rival inflationary and cyclical hypotheses.

Even this explanation doesn’t really answer the question. If there is, as suggested, a pre-existing universe, where did that come from? At the end of last night’s lecture, Turok seemed to be suggesting some kind of endless loop where past, present, and future are linked, which still raises the question: where did it all come from?

I can certainly understand how scientists who are trained to avoid teleological explanations (with their religious overtones) would want to avoid or rush over any question that might occasion just such an explanation.

Last night, the whole talk was a physics and history-of-physics lesson for ‘dummies’ that didn’t quite manage to be ‘dumb’ enough for me and didn’t really deliver on the promise in this description from the Oct. 16, 2012 posting by Brian Lynch on the Georgia Straight website,

Don’t worry if your grasp of relativistic wave equations isn’t what it once was. The Waterloo, Ontario–based physicist is speaking the language of the general public here. Even though his subject dwarfs pretty much everything else, the focus of the series as a whole is human in scale. Turok sees our species as standing on the brink of a scientific revolution, where we can understand “how our ideas regarding our place in the universe may develop, and how our very nature may change.” [emphasis mine]

Perhaps Turok is building up to a discussion about “our place in the universe” and “how our very nature may change” sometime in the next two lectures.

Organ chips for DARPA (Defense Advanced Research Projects Agency)

The Wyss Institute will receive up to $37M US for a project that integrates ten different organ-on-a-chip projects into one system. From the July 24, 2012 news release on EurekAlert,

With this new DARPA funding, Institute researchers and a multidisciplinary team of collaborators seek to build 10 different human organs-on-chips, to link them together to more closely mimic whole body physiology, and to engineer an automated instrument that will control fluid flow and cell viability while permitting real-time analysis of complex biochemical functions. As an accurate alternative to traditional animal testing models that often fail to predict human responses, this instrumented “human-on-a-chip” will be used to rapidly assess responses to new drug candidates, providing critical information on their safety and efficacy.

This unique platform could help ensure that safe and effective therapeutics are identified sooner, and ineffective or toxic ones are rejected early in the development process. As a result, the quality and quantity of new drugs moving successfully through the pipeline and into the clinic may be increased, regulatory decision-making could be better informed, and patient outcomes could be improved.

Jesse Goodman, FDA Chief Scientist and Deputy Commissioner for Science and Public Health, commented that the automated human-on-chip instrument being developed “has the potential to be a better model for determining human adverse responses. FDA looks forward to working with the Wyss Institute in its development of this model that may ultimately be used in therapeutic development.”

Wyss Founding Director, Donald Ingber, M.D., Ph.D., and Wyss Core Faculty member, Kevin Kit Parker, Ph.D., will co-lead this five-year project.

I note that Kevin Kit Parker was mentioned in an earlier posting today (July 26, 2012) titled, Medusa, jellyfish, and tissue engineering, and Donald Ingber in my Dec. 2011 posting about Shrilk and insect skeletons.

As for the Wyss Institute, here’s a description from the news release,

The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Working as an alliance among Harvard’s Schools of Medicine, Engineering, and Arts & Sciences, and in partnership with Beth Israel Deaconess Medical Center, Boston Children’s Hospital, Brigham and Women’s Hospital, Dana Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Tufts University, and Boston University, the Institute crosses disciplinary and institutional barriers to engage in high-risk research that leads to transformative technological breakthroughs. By emulating Nature’s principles for self-organizing and self-regulating, Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and new start-ups.

I hadn’t thought of an organ-on-a-chip as particularly bioinspired so I’ll have to think about that one for a while.

Billions lost to patent trolls; US White House asks for comments on intellectual property (IP) enforcement; and more on IP

It becomes clear after a time that science, intellectual property (patents, copyright, and trademarks), and business interests are intimately linked, which is why I include items on the topic of intellectual property (where I am developing some strong opinions). As for business topics, I am more neutral as my understanding of business is quite limited.

All of this is to explain why I’m taking ‘another kick at the IP (intellectual property) can’. I’m going to start with patents and move on to copyright.

A June 26, 2012 news item from BBC News online highlights the costs associated with patent trolls,

The direct cost of actions taken by so-called “patent trolls” totalled $29bn (£18.5bn) in the US in 2011, according to a study by Boston University.

It analysed the effect of intellectual rights claims made by organisations that own and license patents without producing related goods of their own.

Such bodies say they help spur on innovation by ensuring inventors are compensated for their creations.

But the study’s authors said society lost more than it gained.

A June 27, 2012 commentary by Mike Masnick for Techdirt provides more detail,

The report then goes further to try to figure out whether the trolls are actually benefiting innovation and getting more money to inventors, as the trolls and their supporters like to claim. Unfortunately, the research shows quite a different story — with very little of the money actually flowing back to either inventors or actual innovation. In other words, we’re talking about a pretty massive economic dead-weight loss here. Money flowing from actual innovators and creators… to lawyers, basically. Innovators grow the economy. Lawyers do not.

Masnick’s commentary includes a table from the report showing how the costs have increased from 2005 (approximately $6B) to 2011 (approximately $29B).

The researchers are James E. Bessen and Michael J. Meurer at Boston University, and the open access report, The Direct Costs from NPE [non-practicing entities] Disputes, is available from the Social Science Research Network.

Interestingly, the Boston University study was released the same day that the US White House’s Intellectual Property Enforcement Coordinator, Victoria Espinel, announced she wanted comments about US IP enforcement efforts (from Espinel’s June 25, 2012 blog posting),

Today my office is starting the process of gathering input for the Administration’s new strategy for intellectual property enforcement. The overarching objective of the Strategy is to improve the effectiveness of the U.S. Government’s efforts to protect our intellectual property here and overseas. I want to make sure as many people as possible are aware that we are working on this so we can get the very best thoughts and recommendations possible. Part of the process of gathering public input is to publish a “Federal Register Notice” where we formally ask the public to give us their ideas. We will read all of your submissions – and we will make them publicly available so everyone can see them.

You can do so by following this link to Regulations.gov where you will find more details for submitting your strategy recommendations beginning today.

I believe that essential to the development of an effective enforcement strategy, is ensuring that any approaches that are considered to be particularly effective as well as any concerns with the present approach to intellectual property enforcement are understood by policymakers. [emphasis Mike Masnick of Techdirt] Recommendations may include, but need not be limited to: legislation, regulation, guidance, executive order, Presidential memoranda, or other executive action, including, but not limited to, changes to agency policies, practices or methods.

Beyond recommendations for government action as part of the next Strategy, we are looking for information on and recommendations for combating emerging or future threats to American innovation and economic competitiveness posed by violations of intellectual property rights. Additionally, it would be useful to the development of the Strategy to receive submissions from the public identifying threats to public health and safety posed by intellectual property infringement, [emphasis mine] in the U.S. and internationally as well as information relating to the costs to the U.S. economy resulting from infringement of intellectual property rights.

Aside: That bit about public health and safety being endangered by infringement is going to have to be explained to me. Moving along, Mike Masnick’s June 26, 2012 commentary about this matter on Techdirt includes an exhortation to participate,

I will be submitting my own thoughts, which I will also publish here, but for those thinking about what to say, I would focus on this sentence above [emphasized in the previous excerpt from the Espinel posting “I believe that essential …”]. Historically, many of the government’s approaches have not been at all effective, and have created a number of significant problems — most of which have been ignored by the government (either willfully or through ignorance). This really is a chance to provide examples of why the current policy is not effective (and will never be effective if it keeps on the current path) as well as the “concerns” with the current approach, such as the criminalization of expressive behavior and the outright censorship of media publications.

Meanwhile, we here in Canada are focused on copyright.

Michael Geist (the Canadian copyright guru) notes in his June 26, 2012 posting (Note: I have removed some links.),

Brian Brett, the former Chair of the Writers’ Union of Canada and an award winning author, has issued an explosive public letter that “breaks the ‘cone of silence’ that has obscured for too long some of the ugly practices of Access Copyright.”

You can get an idea of why Geist described the letter as “explosive” from this excerpt (from the June 26, 2012 commentary in the Georgia Straight),

As a former Chair of the Writers’ Union of Canada (I’ve been a member more than thirty years), I have been asked to sign a letter to educational institutions supporting Access Copyright’s efforts to obtain collective licensing agreements with those institutions. I will not sign. I believe the time has come for action, not words. …

For the first time in history it has become too complex and expensive to quote the music of our era for many young writers. Writers are being charged exorbitantly for quoting other writers in their poems, fictions, and essays; yet are losing their own rights and income. Meanwhile, the Canadian Government has made legislation favouring educational institutions and media empires (at the expense of creators) in the name of supporting our nation’s culture.

As we earnestly discuss these issues, but do nothing to protect ourselves, we are seeing the rights of creators to fair compensation eroded to the point where many are at risk of receiving nothing for their work.

Access Copyright, created specifically to collect fair compensation for creators, is central to this discussion. While I believe that educational institutions must pay writers, and will eventually pay them, it’s also necessary to call out the ugly regime of Access Copyright, which is collecting our copyright income. …

6. Access Copyright rewards textbook companies who demand that authors relinquish their copyright to their work by paying them both the publisher and creator copyright payment. Academic authors often consider textbook authorship crucial to tenure. Thus academic authors are open to being pressured by publishers out of their copyright. In effect Access Copyright is encouraging textbook publishers to undermine copyright by demanding a creators’ total copyright, and doubling the publisher’s payment for this ugly practice.

So, the academics who write those science and math (and other subject) texts are being pressured by financially motivated publishers to give up copyright while also being pressured to publish for the well-being of their careers. Nicely done, Access Copyright! (sarcasm)

While I suspect that I don’t agree with Brett on some issues, I do believe that content creators should receive some financial benefit from their work.

On a more hopeful note, the recent passage of Bill C-11 (Copyright) has some very good things indeed (from the June 21, 2012 commentary by Leigh Beadon on Techdirt [Note: I have removed a link.]),

Michael Geist has an excellent summary of C-11 with a comparison to previous phases of copyright law in Canada. The victories for smarter copyright law in C-11 sound almost like fantasy when compared to the American copyright debate. They include:

  • New fair dealing provisions (our version of fair use) to cover educational uses, plus parody and satire
  • New backup, format-shifting and time-shifting allowances that remove previous restrictions on networked DVRs and internet TV services (similar to those that have suffered in American courts)
  • Explicit copyright exceptions for “user-generated content”, aimed at protecting non-commercial fan-art and remixes
  • A bunch of explicit exceptions for schools, such as the right to stage public performances
  • A notice-and-notice system, not a notice-and-takedown system
  • A $5,000 cap on statutory damages for all non-commercial infringement

Sadly, there is the issue of the ‘digital lock’ provisions, which were rammed through Parliament despite almost universal condemnation from Canadians of all walks of life. Geist provides much more detail about this issue than I can. In fact, he offers two postings outlining Canada’s Justice Dept. discussion of the digital lock provisions (June 25, 2012 posting) and the Competition Bureau’s (June 26, 2012 posting), as well as possible issues with constitutional rights.

On a much happier note for me personally, there is a recent Federal Court of Canada ruling about linking and posting. From the June 25, 2012 posting on the Michael Geist blog (Note: I have removed links.),

The Federal Court of Canada has issued an important decision involving copyright and posting content online. The case involves a lawsuit launched by Richard Warman and the National Post against Mark and Constance Fournier, who run the FreeDominion website. Warman and the National Post sued the site over the appearance of two articles and an inline link to a photograph that appeared on the forum. The court dismissed all three claims.

While the first claim (Warman’s article) was dismissed on the basis that it took too long to file the lawsuit, the legal analysis on the National Post claim involving an article by Jonathan Kay assesses the copyright implications of posting several paragraphs from an article online. In this case, the article was 11 paragraphs long.  The reproduction on the Free Dominion site included the headline, three complete paragraphs and part of a fourth. The court ruled that this amount of copying did not constitute a “substantial part” of the work and therefore there was no infringement. The court added that in the alternative, the reproduction of the work was covered by fair dealing, concluding that a large and liberal interpretation of news reporting would include posts to the discussion forum.  The decision then includes an analysis of the six factor test and concludes that the use was fair.

So I can link to and quote from Canadian publications in peace, for now. (Great news!)

There is some additional analysis of the ruling in a June 26, 2012 posting by Leigh Beadon on the Techdirt website (h/t).

No grand thoughts here. I just find this very fluid situation with regard to intellectual property important as I believe the outcomes will affect us all in many ways, including how we practice science.

US soldiers get batteries woven into their clothes

Last time I wrote about soldiers, equipment, and energy-efficiency (April 5, 2012 posting) the soldiers in question were British. Today’s posting focuses on US soldiers. From the May 7, 2012 news item on Nanowerk,

U.S. soldiers are increasingly weighed down by batteries to power weapons, detection devices and communications equipment. So the Army Research Laboratory has awarded a University of Utah-led consortium almost $15 million to use computer simulations to help design materials for lighter-weight, energy efficient devices and batteries.

“We want to help the Army make advances in fundamental research that will lead to better materials to help our soldiers in the field,” says computing Professor Martin Berzins, principal investigator among five University of Utah faculty members who will work on the project. “One of Utah’s main contributions will be the batteries.”

Of the five-year Army grant of $14,898,000, the University of Utah will retain $4.2 million for research plus additional administrative costs. The remainder will go to members of the consortium led by the University of Utah, including Boston University, Rensselaer Polytechnic Institute, Pennsylvania State University, Harvard University, Brown University, the University of California, Davis, and the Polytechnic University of Turin, Italy.

The new research effort is based on the idea that by using powerful computers to simulate the behavior of materials on multiple scales – from the atomic and molecular nanoscale to the large or “bulk” scale – new, lighter, more energy efficient power supplies and materials can be designed and developed. Improving existing materials also is a goal.

“We want to model everything from the nanoscale to the soldier scale,” Berzins says. “It’s virtual design, in some sense.”

“Today’s soldier enters the battle space with an amazing array of advanced electronic materials devices and systems,” the University of Utah said in its grant proposal. “The soldier of the future will rely even more heavily on electronic weaponry, detection devices, advanced communications systems and protection systems. Currently, a typical infantry soldier might carry up to 35 pounds of batteries in order to power these systems, and it is clear that the energy and power requirements for future soldiers will be much greater.” [emphasis mine]

“These requirements have a dramatic adverse effect on the survivability and lethality of the soldier by reducing mobility as well as the amount of weaponry, sensors, communication equipment and armor that the soldier can carry. Hence, the Army’s desire for greater lethality and survivability of its men and women in the field is fundamentally tied to the development of devices and systems with increased energy efficiency as well as dramatic improvement in the energy and power density of [battery] storage and delivery systems.”

Up to 35 lbs. of batteries? I’m trying to imagine what the rest of the equipment would weigh. In any event, they seem to be more interested in adding to the weaponry than reducing weight. At least, that’s how I understand “greater *lethality.” Nice of them to mention greater survivability too.

The British project is more modest: they are weaving e-textiles that harvest energy, allowing British soldiers to carry fewer batteries. I believe field trials were scheduled for May 2012.

* Correction: leathility changed to lethality on July 31, 2013.

Talking nano

I’ve come across a couple of interesting blog postings and a podcast about the journalistic, marketing, and communication problems posed by nanotechnology. First, here’s my take, as informed by reading the postings and listening to the podcast. The journalistic issue is that nanotechnology is one of those science stories that are tough to sell: if people don’t understand at least some of the underlying scientific principles, nanotechnology is very hard to discuss without a lot of ‘educational detail’, and that kind of detail can limit your potential audience. You can find another perspective on this by Howard Lovy here.

From a marketing communications or public relations perspective, there’s a lot of promising research that suggests beneficial applications and/or potentially serious risks. It’s hard to tell if the word nano will be perceived as good, bad, or descriptive (e.g. electronic is a neutral description whereas atomic and nuclear have accrued negative connotations). Here’s another take on the issue.

Making the whole writing/journalism/marketing communication/activism (aside: activists also want to stake out nanotechnology territory) thing even harder is the fact that the (generally accepted but not official) definition of nanotechnology is a measurement. That definition is still debated within the scientific community (some don’t accept it) and it doesn’t mean much to most people outside the scientific community. As for why it matters? We need ways to discuss things that affect us, and it seems that if scientists have their way, nanotechnology will. For more about why it’s important to find ways to talk about nanotechnology, go here for a podcast interview with Stine Grodal, a professor at Boston University.