Tag Archives: crowdsourcing

The role of empathy in science communication

Phys.org has a Dec. 12, 2016 essay by Nicole Miller-Struttmann on the topic of empathy and science communication,

Science communication remains as challenging as it is necessary in the era of big data. Scientists are encouraged to reach out to non-experts through social media, collaborations with citizen scientists, and non-technical abstracts. As a science enthusiast (and extrovert), I truly enjoy making these connections and having conversations that span expertise, interests and geographic barriers. However, recent divisive and impassioned responses to the surprising election results in the U.S. made me question how effective these approaches are for connecting with the public.

Are we all just stuck in our own echo chambers, ignoring those that disagree with us?

How do we break out of these silos to reach those that disengage from science or stop listening when we focus on evidence? Particularly evidence that is increasingly large in volume and in scale? Recent research suggests that a few key approaches might help: (1) managing our social media use with purpose, (2) tailoring outreach efforts to a distinct public, and (3) empathizing with our audience(s) in a deep, meaningful way.

The essay, which originally appeared on the PLOS Ecology Community blog in a Dec. 9, 2016 posting, goes on to discuss social media, citizen science/crowdsourcing, design thinking, and next gen data visualization (Note: Links have been removed),

Many of us attempt to broaden our impact by sharing interesting studies with friends, family, colleagues, and the broader public on social media. While the potential to interact directly with non-experts through social media is immense, confirmation bias (the tendency to interpret and share information that supports one’s existing beliefs) provides a significant barrier to reaching non-traditional and contrarian publics. Insights from network analyses suggest that these barriers can be overcome by managing our connections and crafting our messages carefully. …

Technology has revolutionized how the public engages in science, particularly data acquisition, interpretation and dissemination. The potential benefits of citizen science and crowd sourcing projects are immense, but there are significant challenges as well. Paramount among them is the reliance on “near-experts” and amateur scientists. Domroese and Johnson (2016) suggest that understanding what motivates citizen scientists to get involved – not what we think motivates them – is the first step to deepening their involvement and attracting diverse participants.

Design Thinking may provide a framework for reaching diverse and under-represented publics. While similar to scientific thinking in several ways,

design thinking includes a crucial step that scientific thinking does not: empathizing with your audience.

It requires that the designer put themselves in the shoes of their audience, understand what motivates them (as Domroese and Johnson suggest), consider how they will interact with and perceive the ‘product’, and appeal to the perspective. Yajima (2015) summarizes how design thinking can “catalyze scientific innovation” but also why it might be a strange fit for scientists. …

Connecting the public to big data is particularly challenging, as the data are often complex with multifaceted stories to tell. Recent work suggests that art-based, interactive displays are more effective at fostering understanding of complex issues, such as climate change.

Thomsen (2015) explains that by eliciting visceral responses and stimulating the imagination, interactive displays can deepen understanding and may elicit behavioral changes.

I recommend reading this piece in its entirety as Miller-Struttmann presents a more cohesive description of current science communication practices and ideas than is sometimes the case.

Final comment: I would like to add one suggestion, and that's the adoption of an attitude of 'muscular' empathy. People are going to disagree with you, sometimes quite strongly (even aggressively), and it can be very difficult to maintain communication with people who don't want (i.e., reject) the communication. Maintaining empathy in the face of failure and rejection, which can extend for decades or longer, requires a certain muscularity.

The Future of Federal Citizen Science and Crowdsourcing: a Nov. 15, 2016 event at the Wilson Center (Washington, DC)

I received this Oct. 25, 2016 notice from the Wilson Center in Washington, DC (US) via email,

Citizen Science and Crowdsourcing, a form of open innovation that engages the public in authentic scientific research, has many documented benefits like advancing research, STEM education and addressing societal needs. This method has gained significant momentum in the U.S. Federal government in the past four years. In September 2015 the White House issued a memorandum asking federal agencies to report on their citizen science and crowdsourcing projects and appoint coordinators within each agency. In 2016 we witnessed the launch of www.citizenscience.gov, a platform with an extensive toolkit on how to conduct these projects as well as a catalog and community hub. In addition to these Executive Branch initiatives, a grassroots Federal Community of Practice for Crowdsourcing and Citizen Science (CCS) has emerged with 300 members across 59 agencies. The Science and Technology Program (STIP) at the Wilson Center has played a role in encouraging this momentum, providing support through building a cartographic catalog of federally supported citizen science and crowdsourcing projects and through extensive research into some of the legal, administrative and intellectual property concerns for conducting projects within the Federal government.

However, a new Administration often brings new priorities, and it’s vital to preserve this momentum and history for new leadership. STIP conducted interviews with twelve representatives of the Federal Community of practice and Agency Coordinators and conducted desk research to compile 10 strategic recommendations for advancing federal policies and programs in citizen science and crowdsourcing to facilitate the transfer of knowledge on this incredible momentum.

Please join us for a discussion of these recommendations, a celebration of the history of the movement and a dialogue on the future of citizen science and crowdsourcing in the Federal government.

The speakers are:

Elizabeth Tyson

Co-Director, Commons Lab/Program Associate, Science and Technology Innovation Program

Anne Bowser

Co-Director, Commons Lab/ Senior Program Associate, Science and Technology Innovation Program

David Rejeski

Global Fellow

The logistics:

Tuesday, November 15th, 2016
1:30pm – 3:00pm

5th floor conference room

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania, Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000

You can register here and you can find the Wilson Center Federal Crowdsourcing and Citizen Science Catalog here.

In the past, these events have been livestreamed, but I didn't see a notice to that effect on the event webpage.

Legal Issues and Intellectual Property Rights in Citizen Science (Dec. 10, 2015 event in Washington, DC)

Surprisingly (to me anyway), two of the speakers are Canadian.

Here’s more about the event from a Nov. 30, 2015 email notice,

Legal Issues and Intellectual Property Rights in Citizen Science

Capitalizing on the momentum from the recent White House event — which appointed citizen science coordinators in Federal agencies, highlighted legislation introduced in Congress concerning funding mechanisms and clarifying legal and administrative issues to using citizen science, and launched a new Federal toolkit on citizen science and crowdsourcing —  the Commons Lab is hosting a panel examining the legal issues affecting federal citizen science and the potential intellectual property rights that could arise from using citizen science.

This panel corresponds with the launch of two new Commons Lab Publications:
•    Managing Intellectual Property Rights in Citizen Science, by Teresa Scassa and Haewon Chung
•    Crowdsourcing, Citizen Science, and the Law: Legal Issues Affecting Federal Agencies, by Robert Gellman

As a project manager or researcher conducting citizen science, either at the federal level or in partnership with governmental agencies, there are certain issues like the Information Quality Act that will impact citizen science and crowdsourcing project design. Being aware of these issues prior to initiating projects will save time and provide avenues for complying with or “lawfully evading” potential barriers. The Commons Lab web-enabled policy tool will also be demonstrated at the event. This tool helps users navigate the complicated laws discussed in Robert Gellman’s report on legal issues affecting citizen science.
Intellectual property rights in the age of open source, open data, open science and also, citizen science, are complicated and require significant forethought before embarking on a citizen science project.  Please join us to hear from two experts on the legal barriers and intellectual property rights issues in citizen science and collect a hard copy of the reports.

Speakers

Teresa Scassa, Canada Research Chair in Information Law and Professor in the Faculty of Law, University of Ottawa
Haewon Chung, Doctoral Candidate in Law, University of Ottawa
Robert Gellman, Privacy and Information Policy Consultant in Washington, DC

Moderator

Jay Benforado, Office of Research and Development, U.S. Environmental Protection Agency

Here are the logistics, from the email,

Thursday, December 10th, 2015
11:00am – 12:30pm

6th Floor Auditorium

Directions

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania, Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000

You can register here for the event should you be attending or check this page for the webcast.

DARPA (US Defense Advanced Research Projects Agency) wants to crowdsource cheap brain-computer interfaces

The US Defense Advanced Research Projects Agency wants the DIY (or Maker) community to develop inexpensive brain-computer interfaces, according to a Sept. 27, 2013 news item by John Hewitt on phys.org,

This past Saturday [Sept. 21, 2013], at the Maker Faire in New York, a new low-cost EEG recording front end was debuted at DARPA’s booth. Known as OpenBCI, the device can process eight channels of high quality EEG data, and interface it to popular platforms like Arduino. …

DARPA program manager William Casebeer said that his goal was to return next year to the Maker meeting with a device that costs under $30.

Adrianne Jeffries’ Sept. 22, 2013 article for The Verge provides more information (Note: Links have been removed),

A working prototype of a low-cost electroencephalography device funded by the US Defense Advanced Research Projects Agency (DARPA) made its debut in New York this weekend [Sept. 21 – 22, 2013], the first step in the agency’s effort to jumpstart a do-it-yourself revolution in neuroscience.
There are some products like those in the Neurosky lineup, which range from $99 to $130. But most neural monitoring tools are relatively expensive and proprietary, the OpenBCI [OpenBCI, an open source device built to capture signals from eight electrodes at a time] team explained, which makes it tough for the casual scientist, hacker, or artist to play with EEG. If neural monitoring were cheap and open, we’d start to see more science experiments, art projects, mind-controlled video games, and even serious research using brainwaves. You could use an at-home EEG to create a brain-powered keyboard, for example, Dr. Allen [Lindsey Allen, engineer for Creare;  OpenBCI was built by Creare and biofeedback scientist Joel Murphy, and the prototype was finished only two weeks ago] said, and learn how to type with your mind.
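To make the hardware side a little more concrete: an eight-channel EEG front end like the one described streams fixed-size packets of raw ADC counts over a serial link, which the host then converts to microvolts. Here is a minimal parsing sketch in Python; the packet layout, start/stop bytes, and scale factor are all assumptions for illustration, not the actual OpenBCI protocol.

```python
# Hypothetical packet layout (illustrative only, not OpenBCI's real format):
# 1 start byte (0xA0), 1 sample counter, then 8 channels x 3 bytes of
# big-endian signed 24-bit ADC counts, then 1 stop byte (0xC0) = 27 bytes.
START, STOP = 0xA0, 0xC0
SCALE_UV = 0.02235  # assumed microvolts per ADC count

def parse_packet(packet: bytes):
    """Unpack one 8-channel EEG sample into microvolt values."""
    if len(packet) != 27 or packet[0] != START or packet[-1] != STOP:
        raise ValueError("malformed packet")
    counter = packet[1]
    channels = []
    for i in range(8):
        chunk = packet[2 + 3 * i : 5 + 3 * i]
        # int.from_bytes sign-extends the 24-bit two's-complement value.
        raw = int.from_bytes(chunk, "big", signed=True)
        channels.append(raw * SCALE_UV)
    return counter, channels
```

Something in this vein is what "interface it to popular platforms like Arduino" amounts to in practice: the microcontroller frames the samples, and a few lines of host-side code turn them into numbers you can plot or feed to a brain-powered keyboard.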

I have written about various brain-computer interfaces previously, the most recent being a Dec. 5, 2012 posting about Muse, a $199 brainwave computer controller.

A chance to game the future Sept. 25 and 26, 2013 starting 9 am PDT

Thanks to David Bruggeman at his Pasco Phronesis blog (his Sept. 20, 2013 posting) for featuring a 36-hour conversation/game that is recruiting players/participants. It is called Innovate 2038, and you do have to pre-register for this game. For anyone who likes a little more information before jumping in to join, here's what the Innovate 2038 About page has to offer,

About Innovate2038

The traditional paths to research and technology innovation will no longer work for the critical challenges and new opportunities of 2038.

Increasingly constrained resources, the rise of mega-cities, and rapidly shifting developments in business processes, regulations, and consumer sentiment will present epic challenges to business as usual.

At the same time, opportunities will abound. Emerging fields like 3D-printing and additive manufacturing, synthetic biology, and data modeling will catalyze the next generation of products, services, and markets—if research and innovation can lead the way.

But managing all of these research and innovation efforts will require new tools and technologies, new skills in project and talent management, new players and collaborations, and ultimately a collective re-imagining of the value proposition of research to society.

Innovate2038 is a 36-hour global conversation based on IRI’s extensive IRI2038 research project to uncover new ideas and new strategies that can reinvent the very concept of R&D and technology innovation management for the 21st century.

On Sept 25 & 26, 2013, Innovate2038 will take place in corporations, labs, classrooms, but also hacker-spaces, online innovation communities, and networks of researchers and makers, in countries around the world.

Innovate2038 will bring current leading voices together with emerging and below-the-radar new players that will be increasingly important to the practice of research and innovation.

The platform to support the conversation is itself a signal of the future—a cutting-edge crowdsourced game called Foresight Engine, developed and facilitated by the Institute for the Future. It’s designed to spark new ideas and inspire collaborations among hundreds of people around the world. [emphasis mine]

In as little as five minutes, you can log on to share rapid-fire micro-contributions that will help shape the future of research and innovation heading out to 2038.
For a day and half, you can compete to win points, achieve awards, and gain the recognition of the leading thinkers in technology management today.

Pre-register right now as a ‘game insider’ to be the first to know about the game leading up to the Sept 25 launch.

David notes that this ’36-hour conversation/game is part of a larger project, from his Sept. 20, 2013 Pasco Phronesis posting (Note: Links have been removed),

… This is part of the Industrial Research Institute’s project on 2038 Future, which focuses on the art and science of research and development management.  That project has involved possible future scenarios, retrospective examinations of research management, and scanning the current environment.  The game engine was developed by the Institute for the Future, and is called the Foresight Engine.  The basics of the engine encourage participants to contribute short ideas, with points going to those ideas that get approved and/or built on by other participants.

Here’s more about the  Industrial Research Institute (IRI) from their History webpage,

Fourteen companies comprised the original membership of the Institute when it was formed in 1938, under the auspices of the [US] National Research Council (NRC). Four of these companies retain membership today: Colgate-Palmolive Company, Procter & Gamble Company, Hercules Powder Company (now Ashland, Inc.), and UOP, LLC, formerly known as Universal Oil Products (acquired by Honeywell). Four of the first five presidents were from the six charter-member-company category.

Maurice Holland, then Director of the NRC Division of Engineering, was largely responsible for bringing together about 50 representatives from industry, government, and universities to an initial organizational meeting in February 1938 in New York City. The Institute was an integral part of the National Research Council until 1945, when it separated to become a non-profit membership corporation in the State of New York. However, association with the Council continues unbroken.

At the founding meeting, several speakers stressed the need for an association of research directors–something different from the usual technical society–and that the benefits to be derived would depend on the extent of cooperation by its members. The greatest advantage, they said, would come through personal contacts with members of the group–still a major characteristic of IRI.

In more recent years, the activities of the Association have broadened considerably. IRI now offers services to the full range of R&D and innovation professionals in the United States and abroad.

I went exploring and found this about the game developer, Institute for the Future  (IFTF) on the website’s Who We Are page (Note: Links have been removed),

IFTF is an independent, non-profit research organization with a 45-year track record of helping all kinds of organizations make the futures they want. Our core research staff and creative design studio work together to provide practical foresight for a world undergoing rapid change.

….

Here’s more about the Foresight Engine, the “cutting-edge crowdsourced game,” from the IFTF website’s Collaborative Forecasting Games webpage,

Collaborative Forecasting Games: a crowd’s view of the future

Collaborative forecasting games engage a large and diverse group of people—potentially from around the world—to imagine futures that might go unnoticed by a team of experts. These crowds may include the general public, a targeted sector of the public, or the entire staff of a private organization. And the games themselves can range from futures brainstorming to virtual innovation gameboards and even rich narrative platforms for telling important stories about the future.

Foresight Engine

IFTF has a collaborative forecasting platform called Foresight Engine that makes it easy to set up games without a lot of investment in game design. In the tradition of brainstorming, the platform invites people to play positive or critical ideas about the future and then to build on these ideas to form chains of discussion—complete with points, awards, and achievements for winning ideas. While the focus of the platform is on Twitter-length ideas of 140 characters or less, a Foresight Engine game does much more than harvest innovative ideas. It builds a literacy among players about the future issues addressed by the game, and it also provides a window on the crowd’s level of understanding of complex futures—laying the foundation for future literacy building. It shows who inspires the greatest following and often surfaces potential thought leaders.
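The mechanics described above — Twitter-length ideas, chains of builds, points for ideas that others build on — can be sketched as a toy data structure. The scoring rule here (one point for playing, plus one point to each author up the build chain) is an invented approximation for illustration, not IFTF's actual algorithm.

```python
from collections import defaultdict

MAX_LEN = 140  # the Twitter-length limit mentioned above

class ForesightGame:
    """Toy model of a build-on brainstorming game (scoring rule assumed)."""

    def __init__(self):
        self.ideas = {}                 # idea_id -> (player, text, parent_id)
        self.scores = defaultdict(int)  # player -> points
        self._next_id = 0

    def play(self, player, text, builds_on=None):
        """Submit a short idea, optionally building on an earlier one."""
        if len(text) > MAX_LEN:
            raise ValueError("idea exceeds the 140-character limit")
        idea_id = self._next_id
        self._next_id += 1
        self.ideas[idea_id] = (player, text, builds_on)
        self.scores[player] += 1  # one point for contributing
        # Assumed rule: every author up the build chain also earns a point,
        # so ideas that seed long discussion chains accumulate the most.
        parent = builds_on
        while parent is not None:
            author, _, grandparent = self.ideas[parent]
            self.scores[author] += 1
            parent = grandparent
        return idea_id
```

Under a rule like this, an idea that sparks a long chain keeps earning its author points, which is roughly the incentive the platform description implies for "winning ideas."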

It’s always interesting to dig into an organization’s history (from the IFTF’s History page),

The Institute for the Future has 45 years of forecasts on which to reflect. We’re based in California’s Silicon Valley—a community at the crossroads of technological innovation, social experimentation, and global interchange. Founded in 1968 by a group of former RAND Corporation researchers with a grant from the Ford Foundation to take leading-edge research methodologies into the public and business sectors, IFTF is committed to building the future by understanding it deeply.

I wonder if the Innovate 2038 game/conversation will take place in any language other than English. In any event, I just tried to register and couldn’t. I hope this is a problem on my end rather than the organizers’ as I know how devastating it can be to have your project encounter this kind of roadblock just before launching.

Sporty data science

Sarah Kessler’s July 20, 2012 article for Fast Company about big data and the latest Pew Research Center survey notes some of the concerns and hopes,

Despite the usefulness of all of this information [big data], however, the idea of collecting more and more from consumers strikes a creepy chord for many survey respondents. Some argued that the benefits of big data would go to companies, not individuals.

“The world is too complicated to be usefully encompassed in such an undifferentiated Big Idea,” wrote John Pike, the director of GlobalSecurity.org. “Whose ‘Big Data’ are we talking about? Wall Street, Google, the NSA? I am small, so generally I do not like Big.”

“There is value to be found in this data, value in our newfound publicness,” argued Jeff Jarvis, the author of What Would Google Do?. “Google’s founders have urged government regulators not to require them to quickly delete searches because, in their patterns and anomalies, they have found the ability to track the outbreak of the flu before health officials could and they believe that by similarly tracking a pandemic, millions of lives could be saved.”

The July 20, 2012 press release from the Pew Research Center provides more detail about the study,

A new Pew Internet/Elon University survey of 1,021 Internet experts, observers and stakeholders measured current opinions about the potential impact of human and machine analysis of newly emerging large data sets in the years ahead. The survey is an opt-in, online canvassing. Some 53% of those surveyed predicted that the rise of Big Data is likely to be “a huge positive for society in nearly all respects” by the year 2020. Some 39% of survey participants said it is likely to be “a big negative.”

“The analysts who expect we will see a mostly positive future say collection and analysis of Big Data will improve our understanding of ourselves and the world,” said researcher Lee Rainie, director of the Pew Research Center’s Internet & American Life Project. “They predict that the continuing development of real-time data analysis and enhanced pattern recognition could bring revolutionary change to personal life, to the business world and to government.”

As with all technological evolution, the experts also anticipate some negative outcomes. “The experts responding to this survey noted that the people controlling the resources to collect, manage and sort large data sets are generally governments or corporations with their own agendas to meet,” said Janna Anderson, director of Elon’s Imagining the Internet Center and a co-author of the study. “They also say there’s a glut of data and a shortage of human curators with the tools to sort it well, there are too many variables to be considered, the data can be manipulated or misread, and much of it is proprietary and unlikely to be shared.”

Here’s how these stakeholders, critics, and experts responded to two of the questions (from the news release),

53% agreed with the statement:

Thanks to many changes, including the building of “the Internet of Things,” human and machine analysis of large data sets will improve social, political, and economic intelligence by 2020. The rise of what is known as “Big Data” will facilitate things like  “nowcasting” (real-time “forecasting” of events); the development of “inferential software” that assesses data patterns to project outcomes; and the creation of algorithms for advanced correlations that enable new understanding of the world. Overall, the rise of Big Data is a huge positive for society in nearly all respects.

39% agreed with the alternate statement, which posited:

Thanks to many changes, including the building of “the Internet of Things,” human and machine analysis of Big Data will cause more problems than it solves by 2020. The existence of huge data sets for analysis will engender false confidence in our predictive powers and will lead many to make significant and hurtful mistakes. Moreover, analysis of Big Data will be misused by powerful people and institutions with selfish agendas who manipulate findings to make the case for what they want. And the advent of Big Data has a harmful impact because it serves the majority (at times inaccurately) while diminishing the minority and ignoring important outliers. Overall, the rise of Big Data is a big negative for society in nearly all respects.

Note: A total of 8% did not respond.
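The "nowcasting" mentioned in the first statement — estimating what is happening right now from a noisy data stream, as with the flu-tracking example Jarvis cites — can be illustrated with simple exponential smoothing. This is a generic sketch, not any specific nowcasting system, and the smoothing weight is an arbitrary illustrative choice.

```python
def nowcast(observations, alpha=0.3):
    """Exponentially weighted running estimate of a streaming count.

    alpha (between 0 and 1) controls how quickly the estimate tracks
    new data; 0.3 is an arbitrary illustration, not a tuned parameter.
    """
    estimate = None
    history = []
    for x in observations:
        estimate = x if estimate is None else alpha * x + (1 - alpha) * estimate
        history.append(estimate)
    return history
```

Fed, say, weekly flu-related search counts, an estimate like this reacts to a spike within a step or two rather than waiting for official reporting, which is the "real-time" part of nowcasting.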

Kessler did mention Kaggle, a website which hosts data science competitions or, as they prefer, sporting events. From the About Kaggle page,

Kaggle is an innovative solution for statistical/analytics outsourcing. We are the leading platform for predictive modeling competitions. Companies, governments and researchers present datasets and problems – the world’s best data scientists then compete to produce the best solutions. At the end of a competition, the competition host pays prize money in exchange for the intellectual property behind the winning model.
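Mechanically, a competition of this kind reduces to: the host publishes a training set, keeps a test set hidden, and ranks each submitted model by its error on the held-out data. Here is a toy sketch in pure Python with invented numbers; real competition models are of course far more elaborate than a one-variable line fit.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b (one feature)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    return a, mean_y - a * mean_x

def rmse(model, xs, ys):
    """Root-mean-square error of a submission on held-out data."""
    a, b = model
    return (sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Invented data: the host would publish (train_x, train_y) and rank each
# competitor's model by its score on the hidden (test_x, test_y).
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
test_x, test_y = [5, 6], [10.1, 11.9]
model = fit_line(train_x, train_y)
leaderboard_score = rmse(model, test_x, test_y)
```

The held-out scoring is the important design choice: because competitors never see the test answers, the leaderboard rewards models that generalize rather than ones that memorize the training data.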

Interesting, oui? Give your work over for a prize of how much? And to whom? The music company EMI? Facebook? These companies have had, or currently have, competitions on Kaggle.

I can understand research scientists taking this route since they don’t usually have the financial wherewithal to pay for crunching giant data sets and, presumably, their work is meant to benefit the planet rather than a few executives and major stockholders. Maybe it’s worth it? EMI will pay $10,000 for the winning model.