Tag Archives: National Public Radio

American Association for the Advancement of Science 2017 Mass Media Fellows program is open for submissions

Before getting to the latest information on applying, note that Matt Miller has written an exuberant and enticing description of his experiences as a 2016 American Association for the Advancement of Science (AAAS) Mass Media Fellow in his Oct. 17, 2016 article for Slate.com (Note: Links have been removed),

If you’ve ever wanted to write for Slate (or other major media organizations), now is your chance—provided you’re a graduate student or postdoc in science, math, engineering, or medicine [enrolled in a university and with a US citizenship or visa that allows you to receive payment for work].* The American Association for the Advancement of Science will soon be opening applications for its 2017 Mass Media Fellowship. Along with Slate, publications like Wired, Scientific American, NPR [National Public Radio], and the Los Angeles Times will be hosting fellows who will work as science writers for 10 weeks starting in June of next year.

…

While many of my classmates were drawing blood and administering vaccines [Miller is a student in a School of Veterinary Medicine], I flew up to New York and started learning how to be a journalist. In Slate’s Brooklyn office, I read the abstracts of newly released journal articles and pitched countless story ideas. I drank lots of coffee, sat in on editorial meetings, and interviewed scientists from almost every field imaginable (entomologists are the best). Perhaps the highlight of the whole summer was being among the first to cover the rising cost of EpiPens, a scandal that has recently led to a congressional hearing.

A large part of what I did this summer involved explaining the scientific fundamentals behind the research and making the findings more accessible and exciting to a general audience. Science writing involves a great deal of translation; scientists often get so tied up in the particulars of their research—exactly how an enzyme cleaves this protein, or whether a newly discovered bird is technically a new species—that they forget to talk about the wider societal implications their research might have on culture and civilization. But science writing also matters for the same reason all journalism matters. Science journalism can play the important role of watchdog, holding the powerful accountable and airing out things that don’t quite seem right.

You can find the application here. Don’t forget to read the eligibility rules (no students enrolled in English, journalism, science journalism, or other non-technical fields need apply).

Good luck!

*ETA Oct. 18, 2016 9:52 am PDT: The deadline for applications is midnight EST Jan. 15, 2017.

Will AI ‘artists’ be able to fool a panel judging entries to the Neukom Institute Prizes in Computational Arts?

There’s an intriguing competition taking place at Dartmouth College (US) according to a May 2, 2016 piece on phys.org (Note: Links have been removed),

Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

On May 18 [2016] at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.
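Purely as an illustration of the protocol (this sketch is mine, not anything from Turing’s paper or the Dartmouth rules), here is a toy version of the game in which the machine mimics the human perfectly, so the interrogator can do no better than chance:

```python
import random

def human(question):
    """Stand-in for the human interlocutor."""
    return "I'd rather talk about the weather."

def machine(question):
    """Stand-in for the machine; for this demo it mimics the human perfectly."""
    return "I'd rather talk about the weather."

def imitation_game(rounds=3):
    """One run of the game: the respondents sit behind shuffled labels,
    the interrogator questions both, then guesses which is the machine.
    Returns True if the guess is correct."""
    respondents = [human, machine]
    random.shuffle(respondents)
    hidden = dict(zip(("X", "Y"), respondents))
    transcripts = {
        label: [respond(f"question {i}") for i in range(rounds)]
        for label, respond in hidden.items()
    }
    # The transcripts here are identical, so the interrogator is reduced
    # to a coin flip -- which is exactly the machine "winning" the game.
    assert transcripts["X"] == transcripts["Y"]
    guess = random.choice(("X", "Y"))
    return hidden[guess] is machine

wins = sum(imitation_game() for _ in range(1000))
print(f"machine identified in {wins}/1000 runs (expect roughly 500: chance)")
```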

The piece on phys.org is a crossposting of a May 2, 2016 article by Michael Casey and Daniel N. Rockmore for The Conversation. The article goes on to describe the competitions,

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that upon the “seed” or input of a (common) noun phrase (such as “dog” or “cheese grater”) are able to generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.

The competitions are open to any and all comers [competition is now closed; the deadline was April 15, 2016]. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.
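To make the shape of a dance-set entry concrete, here is a minimal sketch; the library format, the field names (`bpm`, `brightness`, `minutes`) and the greedy nearest-neighbour selection are my assumptions, not the competition’s actual specification (which, as quoted above, included richer annotations such as beat locations and chroma).

```python
import random

# Hypothetical library entries; the real competition library annotated
# more than 20 features per track (genre, beat locations, chroma, etc.).
LIBRARY = [
    {"title": "Track A", "bpm": 124, "brightness": 0.62, "minutes": 3.2},
    {"title": "Track B", "bpm": 126, "brightness": 0.58, "minutes": 2.8},
    {"title": "Track C", "bpm": 128, "brightness": 0.70, "minutes": 3.5},
    {"title": "Track D", "bpm": 122, "brightness": 0.55, "minutes": 3.0},
    {"title": "Track E", "bpm": 130, "brightness": 0.66, "minutes": 2.9},
]

def mix_distance(a, b):
    """Crude 'mixability' score: tracks close in tempo and timbre are
    assumed to flow into each other on the dance floor."""
    return abs(a["bpm"] - b["bpm"]) + 10 * abs(a["brightness"] - b["brightness"])

def build_set(seed, library, target_minutes=15):
    """Greedily chain tracks from the seed, always choosing the unused
    track nearest to the last one, until the set is long enough."""
    playlist, total = [seed], seed["minutes"]
    remaining = [t for t in library if t is not seed]
    while remaining and total < target_minutes:
        nxt = min(remaining, key=lambda t: mix_distance(playlist[-1], t))
        playlist.append(nxt)
        total += nxt["minutes"]
        remaining.remove(nxt)
    return playlist

seed = random.choice(LIBRARY)  # the competition supplies a random seed track
for track in build_set(seed, LIBRARY):
    print(track["title"], track["bpm"])
```

The literary competitions have an even simpler contract: a noun-phrase seed in, arbitrarily many texts out. A toy version of that contract, with placeholder templates standing in for whatever generative model an entrant would actually use:

```python
import random

# Placeholder line templates; a real entry would need a grammar or
# language model, plus a sonnet's fourteen lines, meter and rhyme.
TEMPLATES = [
    "Shall I compare my {noun} to a summer's day?",
    "When I consider how my {noun} is spent,",
    "The {noun} that gently hums at close of day,",
    "O {noun}, thou art more lovely and more strange,",
]

def generate_poem(noun_phrase, lines=4, rng=None):
    """Return one short 'poem' for the given seed noun phrase. A fresh
    RNG per call gives an effectively unbounded stream of distinct
    outputs from a single prompt, as the competition rules require."""
    rng = rng or random.Random()
    return "\n".join(
        rng.choice(TEMPLATES).format(noun=noun_phrase) for _ in range(lines)
    )

print(generate_poem("cheese grater"))
print()
print(generate_poem("cheese grater"))  # almost certainly a different poem
```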

The authors discuss issues with judging the entries,

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man [Alan Turing].) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.
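Neither the article nor the quoted rules spell out what “statistically indistinguishable” means in practice. One plausible reading (my assumption, with invented numbers) is a binomial test on the judges’ verdicts: if the rate at which judges correctly label an entry as machine-made cannot be told apart from coin-flipping, the entry passes.

```python
from math import comb

def binom_p_value(correct, n, p=0.5):
    """One-sided probability of at least `correct` right calls out of
    n judgments if every judge were guessing at chance level p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(correct, n + 1))

# Invented example: 30 judgments of one machine-made sonnet,
# 19 of which labelled it "machine".
p = binom_p_value(19, 30)
# If chance-level guessing can't be rejected (alpha = 0.05 here),
# the entry counts as statistically indistinguishable from human work.
print(f"p = {p:.3f} ->", "indistinguishable" if p > 0.05 else "detected as machine")
```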

The authors also pose the question: Who is the artist?

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

That’s an interesting question and one I asked in the context of two ‘mashup’ art exhibitions in Vancouver (Canada) in my March 8, 2016 posting.

Getting back to Dartmouth College and its Neukom Institute Prizes in Computational Arts, here’s a list of the competition judges from the competition homepage,

David Cope (Composer, Algorithmic Music Pioneer, UCSC Music Professor)
David Krakauer (President, the Santa Fe Institute)
Louis Menand (Pulitzer Prize winning author and Professor at Harvard University)
Ray Monk (Author, Biographer, Professor of Philosophy)
Lynn Neary (NPR: Correspondent, Arts Desk and Guest Host)
Joe Palca (NPR: Correspondent, Science Desk)
Robert Siegel (NPR: Senior Host, All Things Considered)

The announcements will be made Wednesday, May 18, 2016. I can hardly wait!

Addendum

Martin Robbins has written a rather amusing May 6, 2016 post for the Guardian science blogs on AI and art critics where he also notes that the question: What is art? is unanswerable (Note: Links have been removed),

Jonathan Jones is unhappy about artificial intelligence. It might be hard to tell from a casual glance at the art critic’s recent column, “The digital Rembrandt: a new way to mock art, made by fools,” but if you look carefully the subtle clues are there. His use of the adjectives “horrible, tasteless, insensitive and soulless” in a single sentence, for example.

The source of Jones’s ire is a new piece of software that puts… I’m so sorry… the ‘art’ into ‘artificial intelligence’. By analyzing a subset of Rembrandt paintings that featured ‘bearded white men in their 40s looking to the right’, its algorithms were able to extract the key features that defined the Dutchman’s style. …

Of course an artificial intelligence is the worst possible enemy of a critic, because it has no ego and literally does not give a crap what you think. An arts critic trying to deal with an AI is like an old school mechanic trying to replace the battery in an iPhone – lost, possessing all the wrong tools and ultimately irrelevant. I’m not surprised Jones is angry. If I were in his shoes, a computer painting a Rembrandt would bring me out in hives.

Can a computer really produce art? We can’t answer that without dealing with another question: what exactly is art? …

I wonder what either Robbins or Jones will make of the Dartmouth competition?

Plagiarism and cheating in the science community

In late January 2012 there was a bit of a flutter over scientific plagiarism. There was the Jan. 24, 2012 news item on physorg.com about Harold (Skip) Garner’s work detecting signs of scientific (specifically, medical science) plagiarism,

Garner, creator of eTBLAST plagiarism detection software, identified numerous instances of wholesale plagiarism among citations in MEDLINE [online database of medical science articles]. “When my colleagues and I introduced an automated process to spot similar citations in MEDLINE, we uncovered more than 150 suspected cases of plagiarism in March, 2009.

“Subsequent ethics investigations resulted in 56 retractions within a few months. However, as of November 2011, 12 (20 percent) of those “retracted” papers are still not so tagged in PubMed [a sister database to MEDLINE]. Another two were labeled with errata that point to a website warning the papers are “duplicate” — but more than 95 percent of the text was identical, with no similar co-authors.”
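eTBLAST’s actual matching engine is its own; purely to illustrate the kind of comparison involved, here is a toy duplicate detector that scores two abstracts by the overlap of their word “shingles” (the 95 percent figure quoted above refers to identical text, not to this particular measure):

```python
def shingles(text, k=3):
    """Set of overlapping k-word windows ('shingles') from a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b, k=3):
    """Jaccard similarity of two texts' shingle sets: 1.0 means every
    k-word window is shared, 0.0 means none are."""
    sa, sb = shingles(a, k), shingles(b, k)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

abstract_1 = ("we report a novel enzyme that cleaves the target protein "
              "under oxidative stress in vitro")
abstract_2 = ("we report a novel enzyme that cleaves the target protein "
              "under oxidative stress in vivo")

score = jaccard(abstract_1, abstract_2)
if score > 0.8:  # threshold is arbitrary; a real pipeline would tune it
    print(f"possible duplicate (similarity {score:.2f})")
```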

Garner and Mounir Errami published a commentary in the Jan. 24, 2012 online edition of Nature magazine about their joint study of plagiarism,

Are scientists publishing more duplicate papers? An automated search of seven million biomedical abstracts suggests that they are, report Mounir Errami and Harold Garner.

Given the pressure to publish, it is important to be aware of the ways in which community standards can be subverted. Our concern here is with the three major sins of modern publishing: duplication, co-submission and plagiarism.

I was quite interested to see the definition of these ‘sins’,

 The most unethical practices involve substantial reproduction of another study (bringing no novelty to the scientific community) without proper acknowledgement. If such duplicates have different authors, then they may be guilty of plagiarism, whereas papers with overlapping authors may represent self-plagiarism. Simultaneous submission of duplicate articles by the same authors to different journals also violates journal policies.

That last one, about simultaneous submission of the same article, has never made sense to me. As long as you’re not pretending it’s different from the piece being published elsewhere, I don’t see a problem other than that the journal wants exclusive rights to your work. (I’m talking about scholarly publishing only.) If it’s yours, I think you should be able to publish it in as many places as you can.

After all, no one has time to read every single journal that might apply to their own specialty or look at journals that don’t apply but might have useful or applicable materials. In the interests of scholarship and sharing information, there’s a much better chance of stumbling across something if it’s published in a number of places.

Apparently, I’m not the first to think of this, although the authors consider the situation primarily from the perspective of language (from the Nature commentary),

One argument for duplicate publication is to make significant works available to a wider audience, especially in other languages. However, only 20% of manually verified duplicates in Déjà vu are translations into another language. What of the examples of text directly translated with no reference or credit to the original article? Is this justified or acceptable? And is such behaviour more widespread for review-type articles for which greater dissemination may be justified? We do not yet have answers to these questions.

The authors don’t seem to have considered this issue: the problem of finding relevant material in a very ‘information-noisy’ environment.

As for self-plagiarism, I’m a little fuzzier on that. It’s not like you’re taking credit for someone else’s work (which is how I’ve always defined plagiarism). However, presenting your own work as if it’s new when it’s not is unacceptable to me.

Leonard Lopate interviewed Garner and Professor Melissa Anderson about plagiarism in scholarly and medical journals for his NPR (National Public Radio) show on Jan. 19, 2012. I haven’t listened to it all, since Anderson begins by discussing the downloading of music from various archives; it seems she’s confused file sharing with plagiarism. She did go on to discuss plagiarism but had lost credibility with me, and this is an almost 30-minute interview (a substantial investment of my time).

I do think that plagiarism and cheating have a negative effect on the practice of science, and I agree with the observers who note the tremendous pressure placed on scientists to produce in a very competitive environment. I just wish they had communicated a little more clearly.

Here’s an example of my problem with their discussion of duplicates (from the Nature Commentary),

In general, duplicates are often published in journals with lower impact factors (undoubtedly at least in part to minimize the odds of detection) but this does not prevent negative consequences — especially in clinical research. Duplication, particularly of the results of patient trials, can negatively affect the practice of medicine, as it can instill a false sense of confidence regarding the efficacy and safety of new drugs and procedures. There are very good reasons why multiple independent studies are required before a new medical practice makes it into the clinic, and duplicate publication subverts that crucial quality control (not to mention defrauding the original authors and journals).

If the duplicate lists someone other than the original author(s), wouldn’t it be plagiarism? This is my problem: there is a lack of clarity in this commentary.

Around the same time this commentary was published, Dennis Normile wrote an article, Whistleblower Uses YouTube to Assert Claims of Scientific Misconduct, for Science Insider about a Japanese whistleblower (I’ve removed the links; please go to the original article to find them and more information),

ScienceInsider tracked down the whistleblower using an e-mail address connected to a blog linked to the Japanese version of the video. A man who said he posted the video agreed to a phone interview and later answered additional questions by e-mail. He asked to be identified by his online handle, “Juuichi Jigen.”

Juuichi Jigen means “11 dimensions” in Japanese. The phrase is taken from a case of misconduct (English, Japanese) the whistleblower had written about on his blog that involved a researcher who claimed to have developed an “11-dimensional theory of the universe.” According to University of Tokyo press releases, that scientist, Serkan Anilir, plagiarized numerous publications and falsified his resume. He resigned from an assistant professorship at the university in March 2010.

Jigen, who claims to be a life science researcher in the private sector, says his interest in scientific misconduct began in late 2010 when he couldn’t reproduce results reported by a researcher at Dokkyo Medical University in Mibu, Tochigi Prefecture. “This wasted time and money,” he says. After documenting problems with the papers, Jigen notified the university and posted all the evidence on a Web site. According to local press reports gathered on Jigen’s Web site, the researcher resigned his position. Many of his papers have been retracted, according to the Retraction Watch Web site.

Jigen has created separate Web sites for half a dozen cases in Japan in which he alleges scientific misconduct has occurred, and last week he posted details of what he believes is a case of image manipulation by researchers at a U.S. institution.

A single failure to reproduce results could mean the data were an anomaly. However, if researchers repeatedly fail to duplicate results across various research projects, it suggests the data may have been falsified.

In reading about ‘Juuichi Jigen’s’ work, it would seem that if you find someone who’s plagiarizing work, you might want to check the research data. I think that’s a much more compelling way to discuss plagiarism than worrying over copying and duplication. Ultimately, it’s about the practice of science.

Patents as weapons and obstacles

I’m going to start with the phones and finish with the genes. The news article Patents emerge as significant tech strategy by Janet I. Tu, featured Oct. 27, 2011 on physorg.com, provides some insight into problems with phones and patents,

It seems not a week goes by these days without news of another patent battle or announcement: Microsoft reaching licensing agreements with various device manufacturers. Apple and various handset manufacturers filing suits and countersuits. Oracle suing Google over the use of Java in Android.

After Microsoft and Samsung announced a patent-licensing agreement last month involving Google’s Android operating system, Google issued a statement saying, in part: “This is the same tactic we’ve seen time and again from Microsoft. Failing to succeed in the smartphone market, they are resorting to legal measures to extort profit from others’ achievements and hinder the pace of innovation.”

Microsoft’s PR chief Frank Shaw shot back via Twitter: “Let me boil down the Google statement … from 48 words to 1: Waaaah.”

This was Microsoft’s PR chief??? I do find this to be impressive, but not in a good way. Note: Tu’s article was originally published in The Seattle Times. [Dec. 17, 2011: I’ve edited my original sentence to make the meaning clearer, i.e., I changed it from ‘I don’t find this to be impressive …’]

My Sept. 27, 2011 posting focused on the OECD (Organization for Economic Cooperation and Development) and their Science, Technology and Industry Scoreboard 2011, where they specifically name patenting practices as a worldwide problem for innovation. As both the scoreboard and Tu note (from the Tu article),

… technology companies’ patent practices have evolved from using them to defend their own inventions to deploying them as a significant part of competitive strategies …

Tu notes,

Microsoft says it’s trying to protect its investment in research and development – an investment resulting in some 32,000 current and 36,500 pending patents. [emphasis mine] It consistently ranks among the top three computer-software patent holders in the U.S.

One reason these patent issues are being negotiated now is because smartphones are computing devices with features that “are generally in the sweet spot of the innovations investments Microsoft has made in the past 20 years,” said Microsoft Deputy General Counsel Horacio Gutierrez.

There’s no arguing Microsoft is gaining a lot strategically from its patents: financially, legally and competitively.

Royalties from Android phones have become a fairly significant revenue stream.

Investment firm Goldman Sachs has estimated that, based on royalties of $3 to $6 per device, Microsoft will get about $444 million in fiscal year 2012 from Android-based device makers with whom it has negotiated agreements.

Some think that estimate may be low.

Microsoft is not disclosing how much it gets in royalties, but Smith, the company’s attorney, has said $5 per device “seems like a fair price.”

Various tech companies wield patents also to slow down competitors or to frustrate, and sometimes stop, a rival from entering a market. [emphases mine]
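As a rough sanity check on the Goldman Sachs estimate quoted above, a back-of-envelope calculation shows how many licensed devices $444 million would imply at the quoted per-device royalties:

```python
# Back-of-envelope check on the Goldman Sachs estimate quoted above.
royalty_total = 444_000_000  # estimated FY2012 Android royalties, USD
for per_device in (3, 5, 6):  # the $3-$6 range, plus Microsoft's "fair" $5
    devices = royalty_total / per_device
    print(f"${per_device}/device -> roughly {devices / 1e6:.0f} million devices")
```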

It’s not just one industry sector either. Another major player in this ‘patenting innovation to death game’ is the health care industry. Mike Masnick in his Oct. 28, 2011 Techdirt posting (Deadly Monopolies: New Book Explores How Patenting Genes Has Made Us Less Healthy) notes,

A few years ago, David Koepsell came out with the excellent book, Who Owns You?, with the subtitle, “The corporate gold rush to patent your genes.” It looks like there’s now a new book [Deadly Monopolies] out exploring the same subject, by medical ethicist Harriet Washington.

NPR (National Public Radio) highlights this story in their feature on Washington’s book,

Restrictive patents on genes prevent competition that can keep the medical cost of treatment down, says Washington. In addition to genes, she also points to tissue samples, which are also being patented — sometimes without patients’ detailed knowledge and consent. Washington details one landmark case in California in which medically valuable tissue samples from a patient’s spleen were patented by a physician overseeing his treatment for hairy-cell leukemia. The physician then established a laboratory to determine whether tissue samples could be used to create various drugs without informing the patient.

“[The patient] was told that he had to come to [the physician’s] lab for tests … in the name of vigilance to treat his cancer and keep him healthy,” says Washington.

The patient, a man named John Moore, was never told that his discarded body parts could be used in other ways. He sued his doctor and the University of California, where the procedure took place, for lying to him about his tissue — and because he did not want to be the subject of a patent. The case went all the way to the California Supreme Court, where Moore lost. In the decision, the court noted that Moore had no right to any share of the profits obtained from anything developed from his discarded body parts.

According to the webpage featuring Deadly Monopolies on the NPR website, this state of affairs is due to a US Supreme Court ruling made in 1980 where the court ruled,

… living, human-made microorganisms could be patented by their developers. The ruling opened the gateway for cells, tissues, genetically modified plants and animals, and genes to be patented.

I gather the US Supreme Court is currently reconsidering their stance on patents and genes. (As for Canada, we didn’t take that route with the consequence that it is not possible to patent a gene or tissue culture here. Of course, things could change.)

Oil in the Gulf of Mexico, science, and not taking sides

Linda Hooper-Bui is a professor in Louisiana who studies insects. She’s also one of the scientists who have been denied access to usually freely accessible areas of the Gulf of Mexico wetlands. She and her students want to gather data on the impact the oil spill has had on insect populations. BP Oil and the US federal government are going to court over the oil spill, and both sides want scientific evidence to buttress their respective cases. Scientists wanting access to areas controlled by either party are required to sign nondisclosure agreements (NDAs) with either BP Oil or the Natural Resource Damage Assessment federal agency. The NDAs extend not just to the publication of data but also to informal sharing.

From the article by Hooper-Bui in The Scientist,

The ants, crickets, flies, bees, dragon flies, and spiders I study are important components of the coastal food web. They function as soil aerators, seed dispersers, pollinators, and food sources in complex ecosystems of the Gulf.

Insects were not a primary concern when oil was gushing into the Gulf, but now they may be the best indicator of stressor effects on the coastal northern Gulf of Mexico. Those stressors include oil, dispersants, and cleanup activities. If insect populations survive, then frogs, fish, and birds will survive. If frogs, fish, and birds are there, the fishermen and the birdwatchers will be there. The Gulf’s coastal communities will survive. But if the bugs suffer, so too will the people of the Gulf Coast.

This is why my continued research is important: to give us an idea of just how badly the health of the Gulf Coast ecosystems has been damaged and what, if anything, we can do to stave off a full-blown ecological collapse. But I am having trouble conducting my research without signing confidentiality agreements or agreeing to other conditions that restrict my ability to tell a robust and truthful scientific story.

I want to collect data to answer scientific questions absent a corporate or governmental agenda. I won’t collect data specifically to support the government’s lawsuit against BP nor will I collect data only to be used in BP’s defense. Whereas I think damage assessment is important, it’s my job to be independent — to tell an accurate, unbiased story. But because I choose not to work for BP’s consultants or NRDA, my job is difficult and access to study sites is limited.

Hooper-Bui goes on to describe a situation where she and her students had to surrender samples to a US Fish and Wildlife officer because their project (on public lands, which therefore should have been freely accessible) had not been approved. Do read the article before it disappears behind a paywall; alternatively, you can listen to a panel discussion with her and colleagues Christopher D’Elia and Cary Nelson on the US National Public Radio (NPR) website, here. One of the people who calls in to the show is another professor, this one from Texas, who has had the same problem collecting data. He too refused to sign any NDAs. One group of nonaligned scientists has been able to get access, largely because they acted before the bureaucracy snapped into place: they got permission (without having to sign NDAs) while the federal bureaucracy was still organizing itself in the early days of the spill.

These practices are antithetical to the practice of science. Meanwhile, the contrast between this situation and the move to increase access and make peer review a more open process (in my August 20, 2010 posting) could not be more glaring. Very simply, the institutions want more control while the grassroots science practitioners want a more open environment in which to work.

Hooper-Bui comments on NPR that she views her work as public service. It’s all that and more; it’s global public service.

What happens in the Gulf over the next decades will have a global impact. For example, there’s a huge colony of birds that makes its way from the Gulf of Mexico to the Gaspé Peninsula in Québec for the summer, returning to the Gulf in the winter. They should start making their way back in the next few months. Who knows what’s going to happen to that colony and what impact this will have on other ecosystems?

We need policies that protect scientists and ensure, as much as possible, that their work be conducted in the public interest.