Tag Archives: NPR

American Association for the Advancement of Science 2017 Mass Media Fellows program is open for submissions

Before getting to the latest information for applying: Matt Miller has written an exuberant and enticing description of his experiences as a 2016 American Association for the Advancement of Science (AAAS) Mass Media Fellow at Slate.com in his Oct. 17, 2016 article for them (Note: Links have been removed),

If you’ve ever wanted to write for Slate (or other major media organizations), now is your chance—provided you’re a graduate student or postdoc in science, math, engineering, or medicine [enrolled in a university and with a US citizenship or visa that allows you to receive payment for work].* The American Association for the Advancement of Science will soon be opening applications for its 2017 Mass Media Fellowship. Along with Slate, publications like Wired, Scientific American, NPR [National Public Radio], and the Los Angeles Times will be hosting fellows who will work as science writers for 10 weeks starting in June of next year.

…

While many of my classmates were drawing blood and administering vaccines [Miller is a student in a School of Veterinary Medicine], I flew up to New York and started learning how to be a journalist. In Slate’s Brooklyn office, I read the abstracts of newly released journal articles and pitched countless story ideas. I drank lots of coffee, sat in on editorial meetings, and interviewed scientists from almost every field imaginable (entomologists are the best). Perhaps the highlight of the whole summer was being among the first to cover the rising cost of EpiPens, a scandal that has recently led to a congressional hearing.

A large part of what I did this summer involved explaining the scientific fundamentals behind the research and making the findings more accessible and exciting to a general audience. Science writing involves a great deal of translation; scientists often get so tied up in the particulars of their research—exactly how an enzyme cleaves this protein, or whether a newly discovered bird is technically a new species—that they forget to talk about the wider societal implications their research might have on culture and civilization. But science writing also matters for the same reason all journalism matters. Science journalism can play the important role of watchdog, holding the powerful accountable and airing out things that don’t quite seem right.

You can find the application here. Don’t forget to read the eligibility rules (no students enrolled in English, journalism, science journalism, or other non-technical fields need apply).

Good luck!

*ETA Oct. 18, 2016 9:52 am PDT: The deadline for applications is midnight EST Jan. 15, 2017.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.
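The rules quoted above boil down to a simple contract: software takes a common noun phrase as a “seed” and should be able to emit an effectively unbounded number of distinct works from it. As a purely illustrative sketch of that contract (this is my own toy, not one of the actual PoeTix/DigiLit entries, and the templates are invented),

```python
import random

# Toy illustration of the competition contract: a noun-phrase "seed" in,
# an unbounded stream of distinct short "works" out. Real entries would
# use far richer generation than filling fixed templates.
TEMPLATES = [
    "The {noun} waits alone beneath the winter sky.",
    "Who speaks for the {noun} when the lights go out?",
    "I dreamed last night of a {noun} made of glass.",
    "Every {noun} remembers the hands that shaped it.",
]

def generate_work(noun_phrase, n_lines=4, rng=None):
    """Return one short 'work' built around the seed noun phrase."""
    rng = rng or random.Random()
    lines = [rng.choice(TEMPLATES).format(noun=noun_phrase) for _ in range(n_lines)]
    return "\n".join(lines)
```

Because the line choices are random, a single prompt like “cheese grater” yields a different work on every call, which is the property the organizers ask for (if not, of course, anything a judge would mistake for a human poet).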

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition.

AI assistant makes scientific discovery at Tufts University (US)

In light of this latest research from Tufts University, I thought it might be interesting to review the “algorithms, artificial intelligence (AI), robots, and world of work” situation before moving on to Tufts’ latest science discovery. My Feb. 5, 2015 post provides a roundup of sorts regarding work and automation. For those who’d like the latest, there’s a May 29, 2015 article by Sophie Weiner for Fast Company, featuring a predictive interactive tool designed by NPR (US National Public Radio), based on data from Oxford University researchers, which estimates how likely it is that your job will be automated (though no one knows for sure) (Note: A link has been removed),

Paralegals and food service workers: the robots are coming.

So suggests this interactive visualization by NPR. The bare-bones graphic lets you select a profession, from tellers and lawyers to psychologists and authors, to determine who is most at risk of losing their jobs in the coming robot revolution. From there, it spits out a percentage. …

You can find the interactive NPR tool here. I checked out the scientist category (in descending order of danger: Historians [43.9%], Economists, Geographers, Survey Researchers, Epidemiologists, Chemists, Animal Scientists, Sociologists, Astronomers, Social Scientists, Political Scientists, Materials Scientists, Conservation Scientists, and Microbiologists [1.2%]), none of whom seem to be in imminent danger when you consider that bookkeepers are rated at 97.6%.

Here at last is the news from Tufts (from a June 4, 2015 Tufts University news release, also on EurekAlert),

An artificial intelligence system has for the first time reverse-engineered the regeneration mechanism of planaria–the small worms whose extraordinary power to regrow body parts has made them a research model in human regenerative medicine.

The discovery by Tufts University biologists presents the first model of regeneration discovered by a non-human intelligence and the first comprehensive model of planarian regeneration, which had eluded human scientists for over 100 years. The work, published in PLOS Computational Biology, demonstrates how “robot science” can help human scientists in the future.

To mine the fast-growing mountain of published experimental data in regeneration and developmental biology, Lobo and Levin developed an algorithm that would use evolutionary computation to produce regulatory networks able to “evolve” to accurately predict the results of published laboratory experiments that the researchers entered into a database.

“Our goal was to identify a regulatory network that could be executed in every cell in a virtual worm so that the head-tail patterning outcomes of simulated experiments would match the published data,” Lobo said.

The paper represents a successful application of the growing field of “robot science” – which Levin says can help human researchers by doing much more than crunch enormous datasets quickly.

“While the artificial intelligence in this project did have to do a whole lot of computations, the outcome is a theory of what the worm is doing, and coming up with theories of what’s going on in nature is pretty much the most creative, intuitive aspect of the scientist’s job,” Levin said. “One of the most remarkable aspects of the project was that the model it found was not a hopelessly-tangled network that no human could actually understand, but a reasonably simple model that people can readily comprehend. All this suggests to me that artificial intelligence can help with every aspect of science, not only data mining but also inference of meaning of the data.”
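The evolutionary-computation loop described above (propose candidate networks, score them against the database of published experiments, keep and mutate the best) can be sketched in miniature. This is only the search skeleton under invented data; Lobo and Levin’s actual system evolves far richer regulatory-network models, and the “network” here is just a vector of ±1 gene weights I made up for the demo.

```python
import random

# Hypothetical dataset of (experiment inputs, observed outcome) pairs,
# standing in for the published planarian experiments in the database.
EXPERIMENTS = [((0, 1, 1), 1), ((1, 0, 0), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

def predict(model, inputs):
    # Toy "network": outcome is 1 if the weighted vote of active genes is positive.
    score = sum(w for w, x in zip(model, inputs) if x)
    return 1 if score > 0 else 0

def fitness(model):
    # How many recorded experimental outcomes does this candidate predict?
    return sum(predict(model, inp) == out for inp, out in EXPERIMENTS)

def evolve(pop_size=20, generations=50, rng=None):
    rng = rng or random.Random()
    pop = [[rng.choice([-1, 1]) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # keep the best half (elitism)
        children = []
        for parent in survivors:
            child = parent[:]
            idx = rng.randrange(len(child))
            child[idx] = -child[idx]       # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

The point Levin makes holds even in this toy: the winning candidate is a small, inspectable object, not an opaque tangle, because the search is over explicit model structures rather than raw parameters.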

Here’s a link to and a citation for the paper,

Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration by Daniel Lobo and Michael Levin. PLOS Computational Biology DOI: 10.1371/journal.pcbi.1004295 Published: June 4, 2015

This paper is open access.

It will be interesting to see if attributing the discovery to an algorithm sets off criticism suggesting that the researchers overstated the role the AI assistant played.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication‘ has created a new category of book: sensory fiction. John Brownlee, in his Feb. 10, 2014 article for Fast Company, describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
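One way to picture how the outputs listed above might be driven: annotate passages of the book with actuator settings, and look up the settings for whatever page the reader is on. This is my own hypothetical sketch, not the MIT team’s code; the page numbers and values are invented for illustration.

```python
# Hypothetical page annotations mapping story moments to actuator settings
# for the wearable described above: ambient LED colour, simulated heart
# rate, Peltier temperature offset, and airbag compression level (0-1).
ANNOTATIONS = {
    12: {"led_color": "warm_orange", "heartbeat_bpm": 70,
         "skin_temp_delta_c": 1.5, "compression": 0.1},
    47: {"led_color": "dim_blue", "heartbeat_bpm": 110,
         "skin_temp_delta_c": -2.0, "compression": 0.8},
}

def feedback_for_page(page):
    """Return actuator settings for the page the reader is on (None if unannotated)."""
    return ANNOTATIONS.get(page)
```

In the real prototype the book senses which page is open; hardware drivers would then consume values like these to set the LEDs, the Peltier junction, and the airbags.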

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian where she explains how vibration, etc. are used to convey/stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game:

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion as so far they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book is a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch, and researchers at MIT have also been investigating new touch-oriented media. You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought: I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

Plagiarism and cheating in the science community

In late January 2012 there seemed to be a bit of a flutter over scientific plagiarism. There was the Jan. 24, 2012 news item on physorg.com about Harold (Skip) Garner’s work detecting signs of scientific (specifically, medical science) plagiarism,

Garner, creator of eTBLAST plagiarism detection software, identified numerous instances of wholesale plagiarism among citations in MEDLINE [online database of medical science articles]. “When my colleagues and I introduced an automated process to spot similar citations in MEDLINE, we uncovered more than 150 suspected cases of plagiarism in March, 2009.

“Subsequent ethics investigations resulted in 56 retractions within a few months. However, as of November 2011, 12 (20 percent) of those “retracted” papers are still not so tagged in PubMed [clone sister to MEDLINE database]. Another two were labeled with errata that point to a website warning the papers are “duplicate” — but more than 95 percent of the text was identical, with no similar co-authors.”
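Garner’s numbers come from automatically flagging pairs of highly similar citations, and the sentence above (“more than 95 percent of the text was identical”) gives a feel for the thresholds involved. As a purely illustrative stand-in for that kind of duplicate detection (this is not eTBLAST’s actual algorithm; the shingle size and threshold are arbitrary choices of mine), here is a word-shingle Jaccard comparison:

```python
# Illustrative duplicate detection by word-shingle overlap (Jaccard
# similarity). eTBLAST's real matching against MEDLINE is more
# sophisticated; the k=3 shingles and 0.5 threshold are demo choices.

def shingles(text, k=3):
    """Set of k-word sequences from the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity between the shingle sets of two abstracts."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def flag_duplicates(abstracts, threshold=0.5):
    """Return index pairs whose similarity exceeds the threshold."""
    pairs = []
    for i in range(len(abstracts)):
        for j in range(i + 1, len(abstracts)):
            if similarity(abstracts[i], abstracts[j]) > threshold:
                pairs.append((i, j))
    return pairs
```

Two abstracts that share nearly all their phrasing score close to 1.0, unrelated ones score near 0, which is why near-verbatim duplicates of the kind Garner describes are easy for software to surface at scale.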

Garner and Mounir Errami published a commentary in the Jan. 24, 2012 online edition of Nature magazine about their joint study of plagiarism,

Are scientists publishing more duplicate papers? An automated search of seven million biomedical abstracts suggests that they are, report Mounir Errami and Harold Garner.

Given the pressure to publish, it is important to be aware of the ways in which community standards can be subverted. Our concern here is with the three major sins of modern publishing: duplication, co-submission and plagiarism.

I was quite interested to see the definition of these ‘sins’,

 The most unethical practices involve substantial reproduction of another study (bringing no novelty to the scientific community) without proper acknowledgement. If such duplicates have different authors, then they may be guilty of plagiarism, whereas papers with overlapping authors may represent self-plagiarism. Simultaneous submission of duplicate articles by the same authors to different journals also violates journal policies.

That last one about simultaneous submissions of the same article has never made sense to me. As long as you’re not pretending it’s different from the pieces being published elsewhere, I don’t see a problem other than the journal wanting exclusive rights to your work. (I’m talking about scholarly publishing only.) If it’s yours, I think you should be able to publish it in as many places as you can.

After all, no one has time to read every single journal that might apply to their own specialty or look at journals that don’t apply but might have useful or applicable materials. In the interests of scholarship and sharing information, there’s a much better chance of stumbling across something if it’s published in a number of places.

Apparently, I’m not the first to think of this, although they are primarily considering the situation from the perspective of language (from the Nature Commentary),

One argument for duplicate publication is to make significant works available to a wider audience, especially in other languages. However, only 20% of manually verified duplicates in Déjà vu are translations into another language. What of the examples of text directly translated with no reference or credit to the original article? Is this justified or acceptable? And is such behaviour more widespread for review-type articles for which greater dissemination may be justified? We do not yet have answers to these questions.

The authors don’t seem to have considered this issue: the problem of finding relevant material in a very ‘information-noisy’ environment.

As for self-plagiarizing, I’m a little muzzier about that. It’s not like you’re taking credit for someone else’s work (which is how I’ve always defined plagiarism). However, presenting your own work as if it’s new when it’s not is unacceptable to me.

Leonard Lopate interviewed Garner and Professor Melissa Anderson about plagiarism in scholarly and medical journals for his NPR (National Public Radio) show on Jan. 19, 2012. I haven’t listened to it all, since Anderson begins by discussing the downloading of music from various archives. It seems she’s confused file sharing with plagiarism. She did go on to discuss plagiarism but had lost credibility with me, and this is an almost 30-minute interview (a significant investment of my time).

I do think that plagiarism and cheating have a negative effect on the practice of science, and I agree with the observers who all note the tremendous pressure placed on scientists to produce in a very competitive environment. I just wish they had communicated a little more clearly.

Here’s an example of my problem with their discussion of duplicates (from the Nature Commentary),

In general, duplicates are often published in journals with lower impact factors (undoubtedly at least in part to minimize the odds of detection) but this does not prevent negative consequences — especially in clinical research. Duplication, particularly of the results of patient trials, can negatively affect the practice of medicine, as it can instill a false sense of confidence regarding the efficacy and safety of new drugs and procedures. There are very good reasons why multiple independent studies are required before a new medical practice makes it into the clinic, and duplicate publication subverts that crucial quality control (not to mention defrauding the original authors and journals).

If the duplicate lists someone other than the original author(s), wouldn’t it be plagiarism? This is my problem, there is a lack of clarity in this commentary.

Around the same time this commentary was published, Dennis Normile wrote an article, Whistleblower Uses YouTube to Assert Claims of Scientific Misconduct, for Science Insider about a Japanese whistleblower (I’ve removed links; please go to the original article to find them and more information),

ScienceInsider tracked down the whistleblower using an e-mail address connected to a blog linked to the Japanese version of the video. A man who said he posted the video agreed to a phone interview and later answered additional questions by e-mail. He asked to be identified by his online handle, “Juuichi Jigen.”

Juuichi Jigen means “11 dimensions” in Japanese. The phrase is taken from a case of misconduct (English, Japanese) the whistleblower had written about on his blog that involved a researcher who claimed to have developed an “11-dimensional theory of the universe.” According to University of Tokyo press releases, that scientist, Serkan Anilir, plagiarized numerous publications and falsified his resume. He resigned from an assistant professorship at the university in March 2010.

Jigen, who claims to be a life science researcher in the private sector, says his interest in scientific misconduct began in late 2010 when he couldn’t reproduce results reported by a researcher at Dokkyo Medical University in Mibu, Tochigi Prefecture. “This wasted time and money,” he says. After documenting problems with the papers, Jigen notified the university and posted all the evidence on a Web site. According to local press reports gathered on Jigen’s Web site, the researcher resigned his position. Many of his papers have been retracted, according to the Retraction Watch Web site.

Jigen has created separate Web sites for half a dozen cases in Japan in which he alleges scientific misconduct has occurred, and last week he posted details of what he believes is a case of image manipulation by researchers at a U.S. institution.

Not being able to reproduce the results once could mean the data were an anomaly. However, if researchers repeatedly fail to reproduce results from various research projects, falsification becomes the likelier explanation.

In reading about ‘Juuichi Jigen’s’ work, it would seem that if you find someone who’s plagiarizing work, you might want to check the research data. I think that’s a much more compelling way to discuss plagiarism than worrying over copying and duplication. Ultimately, it’s about the practice of science.

Patents as weapons and obstacles

I’m going to start with the phones and finish with the genes. The news article titled Patents emerge as significant tech strategy, by Janet I. Tu, featured Oct. 27, 2011 on physorg.com, provides some insight into problems with phones and patents,

It seems not a week goes by these days without news of another patent battle or announcement: Microsoft reaching licensing agreements with various device manufacturers. Apple and various handset manufacturers filing suits and countersuits. Oracle suing Google over the use of Java in Android.

After Microsoft and Samsung announced a patent-licensing agreement last month involving Google’s Android operating system, Google issued a statement saying, in part: “This is the same tactic we’ve seen time and again from Microsoft. Failing to succeed in the smartphone market, they are resorting to legal measures to extort profit from others’ achievements and hinder the pace of innovation.”

Microsoft’s PR chief Frank Shaw shot back via Twitter: “Let me boil down the Google statement … from 48 words to 1: Waaaah.”

This was Microsoft’s PR chief??? I do find this to be impressive, but not in a good way. Note: Tu’s article was originally published in The Seattle Times. [Dec. 17, 2011: I’ve edited my original sentence to make the meaning clearer, i.e., I changed it from ‘I don’t find this to be impressive …’]

My Sept. 27, 2011 posting focused on the OECD (Organization for Economic Cooperation and Development) and their Science Technology and Industry 2011 Scorecard where they specifically name patenting practices as a worldwide problem for innovation. As both the scorecard and Tu note (from the Tu article),

… technology companies’ patent practices have evolved from using them to defend their own inventions to deploying them as a significant part of competitive strategies …

Tu notes,

Microsoft says it’s trying to protect its investment in research and development – an investment resulting in some 32,000 current and 36,500 pending patents. [emphasis mine] It consistently ranks among the top three computer-software patent holders in the U.S.

One reason these patent issues are being negotiated now is because smartphones are computing devices with features that “are generally in the sweet spot of the innovations investments Microsoft has made in the past 20 years,” said Microsoft Deputy General Counsel Horacio Gutierrez.

There’s no arguing Microsoft is gaining a lot strategically from its patents: financially, legally and competitively.

Royalties from Android phones have become a fairly significant revenue stream.

Investment firm Goldman Sachs has estimated that, based on royalties of $3 to $6 per device, Microsoft will get about $444 million in fiscal year 2012 from Android-based device makers with whom it has negotiated agreements.

Some think that estimate may be low.

Microsoft is not disclosing how much it gets in royalties, but Smith, the company’s attorney, has said $5 per device “seems like a fair price.”

Various tech companies wield patents also to slow down competitors or to frustrate, and sometimes stop, a rival from entering a market. [emphases mine]

It’s not just one industry sector either. Another major player in this ‘patenting innovation to death game’ is the health care industry. Mike Masnick in his Oct. 28, 2011 Techdirt posting (Deadly Monopolies: New Book Explores How Patenting Genes Has Made Us Less Healthy) notes,

A few years ago, David Koepsell came out with the excellent book, Who Owns You?, with the subtitle, “The corporate gold rush to patent your genes.” It looks like there’s now a new book [Deadly Monopolies] out exploring the same subject, by medical ethicist Harriet Washington.

NPR (National Public Radio) highlights this story in their feature on Washington’s book,

Restrictive patents on genes prevent competition that can keep the medical cost of treatment down, says Washington. In addition to genes, she also points to tissue samples, which are also being patented — sometimes without patients’ detailed knowledge and consent. Washington details one landmark case in California in which medically valuable tissue samples from a patient’s spleen were patented by a physician overseeing his treatment for hairy-cell leukemia. The physician then established a laboratory to determine whether tissue samples could be used to create various drugs without informing the patient.

“[The patient] was told that he had to come to [the physician’s] lab for tests … in the name of vigilance to treat his cancer and keep him healthy,” says Washington.

The patient, a man named John Moore, was never told that his discarded body parts could be used in other ways. He sued his doctor and the University of California, where the procedure took place, for lying to him about his tissue — and because he did not want to be the subject of a patent. The case went all the way to the California Supreme Court, where Moore lost. In the decision, the court noted that Moore had no right to any share of the profits obtained from anything developed from his discarded body parts.

According to the webpage featuring Deadly Monopolies on the NPR website, this state of affairs is due to a US Supreme Court ruling made in 1980 where the court ruled,

… living, human-made microorganisms could be patented by their developers. The ruling opened the gateway for cells, tissues, genetically modified plants and animals, and genes to be patented.

I gather the US Supreme Court is currently reconsidering its stance on patents and genes. (As for Canada, we didn’t take that route, with the consequence that it is not possible to patent a gene or tissue culture here. Of course, things could change.)

Canadians and ‘smart’ Christmas trees

This isn’t my usual kind of thing but since it does involve Christmas trees, some science, and Canadians, why not? David Zax in his article, Scientists Build “Smart” Christmas Tree With Long-Lasting Needles and Fragrance (on the Fast Company website) writes,

We live in the era of smart grids, smart phones, smart entrepreneurs, and all other manners of smartness. It may be no surprise to learn, then, that we’re on our way towards having a “smart” Christmas tree–one capable of retaining its needles for twice the normal length of time.

That’s according to Dr. Raj Lada [Dr. Rajasekaran Lada], a plant physiologist at the Christmas Tree Research Centre at Nova Scotia Agricultural College in Truro. “The cutting edge is that we should have to have a tree,” Dr. Lada said on NPR, “which I call a smart tree.”

The idea came a few years ago when a devastated small-business owner called on Lada. The man was ruined: his entire crop of Christmas trees had already lost their needles. As Lada began to investigate, he learned that it wasn’t a blight or a disease that was likely to have caused this crop’s loss. Rather, it was a disorder common to many Christmas tree producers: trees often shed their needles quickly, and there was no consensus over how to fix the problem.

You can find the original interview (audio and transcript) on US National Public Radio here. From the transcript of the interview,

FLATOW [Ira Flatow, Science Friday, radio program host]: Raj, you have a new study that’s out now in the journal Trees, where you were able to make trees keep their needles twice as long as usual. How did you do that?

Dr. LADA: That’s true. We started with – I think the problem itself is widespread, basically. Some people talk about it, some people don’t. And it started with the producer, who sent a shipment of trees to Vancouver, B.C., and turned out to be all the needless dropped, and he has not even paid the check. So that is a severe problem.

And we looked at it as a scientific approach. And any of these physiological things now, any of these abscission or flowering, everything is regulated by hormonal changes in plants or trees, basically. And this is one of it, basically. But nobody knows about it. We didn’t even know that there is such a regulatory process.

FLATOW: Right.

Dr. LADA: We from our other herbaceous plants, like cotton and cut flowers and banana ripening, we know that there is a hormone that triggers and – that ages the cell and triggers the hormone level. And once the hormone level reaches to a certain point, that induces the organ shed, basically, the leaves or the fruits or flower petals or whatever it is that can abscise from their tree or plant.

FLATOW: So this is a natural hormone in the tree that sort of signals the tree to shed its needles.

Dr. LADA: Exactly. This is a natural hormone. We just call it the gaseous hormone. It’s (unintelligible) natural gaseous hormone that is produced by the plant cells, basically, in response to various factors. It could be environment. It could be physical, mechanical manipulations, or any abuse, basically.

FLATOW: What’s the name of the hormone?

Dr. LADA: It’s called ethylene.

The interview is quite interesting but the work has yet to move from the laboratory into the field, i.e., you can’t get a ‘smart’ Christmas tree this year. Still, Dr. Lada does have a tip for this year’s Christmas trees,

FLATOW: I see. And you have studied the effect of Christmas lights on trees?

Dr. LADA: Exactly. And that’s another very interesting story to tell about, especially in the Christmas time. The lights, what we used, you know, people think – sometimes, we turn off the lights, and we put on all kinds of lights, sometimes incandescent lights and sometimes fluorescent lights just on top, sometimes halogen lights beaming on the trees. It looks great, but they – each one of those light spectrum is so different physiologically, and they could alter these metabolic functions critically.

So what we identified was we tried to use the recent technology, which is the LED technology, which people use it on Christmas trees all the time. We tested different spectrums – white, blue, red spectrums. And also, we had a control, which were sitting in dark, and also one other control, which were sitting in the gentle, fluorescent light and incandescent light situations.

FLATOW: Mm-hmm.

Dr. LADA: And we found that the white light has got nearly 30, 35 days better needle retention capacity compared to the dark-retained ones, or the controls with the normal lighting.

FLATOW: Wow. So did you get a whole extra month?

Dr. LADA: Oh, we have a whole extra month, basically. Significant…

FLATOW: With the white – with white – would that be like a full-spectrum light?

Dr. LADA: It is a full-spectrum LED, I would say

FLATOW: Wow. And that’s the is that part of the lights you would string on the trees?

Dr. LADA: That’s important to spring, keep that white light in there, basically, especially from the LEDs. You should put more of the white lights in there, basically, rather than the other spectrum.

FLATOW: And so…

Dr. LADA: In fact, the worst performer in our experiment was the blue.

FLATOW: Wow. And so that would seem to say to me that you don’t want to turn your lights off at night. You want to keep them…

Dr. LADA: Absolutely. You should not turn your lights off at night, basically. Because the reason why I’m suggesting is, as you keep them in dark, it started respiring more. And then it’ll use all its carbohydrates that are in the trees, basically. And then it’s – it can be starved to death, (unintelligible).

There you have it.

Oil in the Gulf of Mexico, science, and not taking sides

Linda Hooper-Bui is a professor in Louisiana who studies insects. She’s also one of the scientists who have been denied access to usually freely accessible areas in the Gulf of Mexico wetlands. She and her students want to gather data to examine the impact the oil spill has had on insect populations. BP Oil and the US federal government are going to court over the oil spill, and both sides want scientific evidence to buttress their respective cases. Scientists wanting access to areas controlled by either party are required to sign nondisclosure agreements (NDAs) with either BP Oil or the Natural Resource Damage Assessment federal agency. The NDAs extend not just to the publication of data but also to informal sharing.

From the article by Hooper-Bui in The Scientist,

The ants, crickets, flies, bees, dragon flies, and spiders I study are important components of the coastal food web. They function as soil aerators, seed dispersers, pollinators, and food sources in complex ecosystems of the Gulf.

Insects were not a primary concern when oil was gushing into the Gulf, but now they may be the best indicator of stressor effects on the coastal northern Gulf of Mexico. Those stressors include oil, dispersants, and cleanup activities. If insect populations survive, then frogs, fish, and birds will survive. If frogs, fish, and birds are there, the fishermen and the birdwatchers will be there. The Gulf’s coastal communities will survive. But if the bugs suffer, so too will the people of the Gulf Coast.

This is why my continued research is important: to give us an idea of just how badly the health of the Gulf Coast ecosystems has been damaged and what, if anything, we can do to stave off a full-blown ecological collapse. But I am having trouble conducting my research without signing confidentiality agreements or agreeing to other conditions that restrict my ability to tell a robust and truthful scientific story.

I want to collect data to answer scientific questions absent a corporate or governmental agenda. I won’t collect data specifically to support the government’s lawsuit against BP nor will I collect data only to be used in BP’s defense. Whereas I think damage assessment is important, it’s my job to be independent — to tell an accurate, unbiased story. But because I choose not to work for BP’s consultants or NRDA, my job is difficult and access to study sites is limited.

Hooper-Bui goes on to describe a situation where she and her students had to surrender samples to a US Fish and Wildlife officer because their project (on public lands, which should therefore have been freely accessible) had not been approved. Do read the article before it disappears behind a paywall but, if you prefer, you can listen to a panel discussion with her and colleagues Christopher D’Elia and Cary Nelson on the US National Public Radio (NPR) website, here. One of the people who calls in to the show is another professor, this one from Texas, who has the same problem collecting data. He too refused to sign any NDAs. One group of nonaligned scientists has been able to get access, largely because they acted before the bureaucracy snapped into place: they got permission (without having to sign NDAs) while the federal bureaucracy was still organizing itself in the early days of the spill.

These practices are antithetical to the practice of science. Meanwhile, the contrast between this situation and the move to increase access and make peer review a more open process (discussed in my August 20, 2010 posting) could not be more glaring. Very simply, the institutions want more control while grassroots science practitioners want a more open environment in which to work.

Hooper-Bui comments on NPR that she views her work as public service. It’s all that and more; it’s global public service.

What happens in the Gulf over the next decades will have a global impact. For example, there’s a huge colony of birds that makes its way from the Gulf of Mexico to the Gaspé Peninsula in Québec for the summer, returning to the Gulf in the winter. They should start making their way back in the next few months. Who knows what’s going to happen to that colony and the impact this will have on other ecosystems?

We need policies that protect scientists and ensure, as much as possible, that their work is conducted in the public interest.