Caption: Western tanagers are migratory birds that are present in Northern California in the spring and again in late summer. A new study shows that observations by ‘citizen scientists’ using apps such as iNaturalist and eBird accurately reflect bird migrations and therefore can be used in scientific studies. Credit: Jonathan Eisen, UC Davis
Having heard a scientist express doubts about the accuracy of citizen science data during an online UNESCO (United Nations Educational, Scientific and Cultural Organization) press briefing about its 2024 Water Report, I took a special interest in this study.
Platforms such as iNaturalist and eBird encourage people to observe and document nature, but how accurate is the ecological data that they collect?
In a new study published March 28, 2025, in Citizen Science: Theory and Practice, researchers from the University of California, Davis, show that citizen science data from iNaturalist and eBird can reliably capture known seasonal patterns of bird migration in Northern California and Nevada — from year-round residents such as California Scrub-Jays, to transient migrants such as the Western Tanager and the Pectoral Sandpiper.
“This project shows that data from participatory science projects with different goals, observers and structure can be combined into reliable and robust datasets to address broad scientific questions,” said senior author Laci Gerhart, associate professor of teaching in the UC Davis Department of Evolution and Ecology. “Contributors to multiple, smaller projects can help make real discoveries about bigger issues.”
Wild Davis research
The study began as a student capstone project in Gerhart’s Wild Davis field course, which teaches students about urban ecology and California ecosystems. First author Cody Carroll, now an assistant professor at the University of San Francisco, took the course in 2020 while completing his doctorate in statistics at UC Davis.
Most Wild Davis capstone projects are focused on community service at the Stebbins Cold Canyon Nature Reserve, but students were restricted to computer-based projects during the COVID-19 shutdown, so Carroll decided to use his statistical expertise to analyze data from iNaturalist.
After Carroll graduated and began working at USF, the team regrouped and took the project a step further by combining the iNaturalist data with data from eBird, a different citizen science platform that is preferred by bird enthusiasts with significant birding experience.
Merging iNaturalist and eBird
Since iNaturalist and eBird differ substantially in the type of data they collect and the type of user they appeal to, the team wanted to investigate whether their data could be integrated.
“eBird is more geared toward trained and very active birders who are doing complete record keeping of the birds that they’re seeing in particular areas,” said Gerhart. “iNaturalist is intentionally geared toward more casual observers who are there as much to learn about the organisms as they are to document them scientifically.”
To merge the data, Carroll considered the relative frequency of observations rather than the overall number of observations and also took into account the cyclic, seasonal nature of bird migrations.
Overall, the researchers compared data for 254 different bird species that were observed in Northern California and Nevada in 2019 and 2022. They found that the two platforms showed similar seasonal patterns for over 97% of bird species.
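Carroll's core move, comparing relative rather than absolute observation frequencies, can be sketched in a few lines. This is a minimal illustration with made-up weekly counts; the function names and the simple correlation measure are mine, and the published analysis used more sophisticated methods that also account for the cyclic nature of the data:

```python
import numpy as np

def relative_frequency(counts):
    """Convert raw weekly observation counts for one species into
    relative frequencies, so platforms with very different user bases
    can be compared on the same scale."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def seasonal_agreement(inat_counts, ebird_counts):
    """Correlation between the two platforms' normalized seasonal
    curves; values near 1 indicate the same seasonal pattern."""
    a = relative_frequency(inat_counts)
    b = relative_frequency(ebird_counts)
    return np.corrcoef(a, b)[0, 1]

# Toy example: a transient migrant with spring and late-summer peaks.
weeks = np.arange(52)
spring = np.exp(-0.5 * ((weeks - 18) / 2.0) ** 2)
summer = np.exp(-0.5 * ((weeks - 34) / 2.0) ** 2)
inat = 10 * (spring + summer) + 1      # casual observers, few records
ebird = 200 * (spring + summer) + 20   # active birders, many records

print(round(seasonal_agreement(inat, ebird), 3))  # prints 1.0: same shape despite very different totals
```

Normalizing first is what makes the comparison fair: eBird's much larger raw counts would otherwise swamp iNaturalist's, even when both show the same migration timing.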
An assortment of seasonal bird patterns
To “ground truth” their findings, Gerhart and Carroll teamed up with Rob Furrow, an assistant professor of teaching in the Department of Wildlife, Fish and Conservation Biology, who is an avid bird watcher and eBird user.
“We wanted to test whether we were seeing actual migratory patterns or whether these were just due to biases in the observations, so we reached out to Rob, who is an expert about birds,” said Gerhart.
With Furrow’s expertise, the team showed that the combined iNaturalist and eBird data recapitulated a variety of known bird seasonality patterns within the region — meaning that the patterns were representative of actual bird presence, not due to biases in the observations.
For example, their data showed that California Scrub-Jays are present in the region year-round, whereas Bufflehead ducks arrive in mid-fall and depart in early spring. Western Tanagers pass through in late spring as they journey north to breed, and again in late summer as they fly back south for the winter.
“We were really pleasantly surprised that we could still get reliable data, despite the differences between eBird and iNaturalist,” said Furrow. “Even when you’re relying on casual hobbyists who are taking photos of what they like, when they like, you’re still getting a reliable representation of the birds in that area at that time.”
The power of publicly generated data
The study shows that in addition to inspiring people to connect with nature, platforms such as iNaturalist and eBird can help answer important biological questions.
“This is a good example of why interdisciplinarity is important — we each brought different knowledge to this project, and it pushed each of us intellectually,” said Gerhart. “It was a really fun experience for us to combine our skill sets, and I hope that Cody, Rob and I have a chance to work together again.”
To give back to the people who helped collect the data they used, the team made a point to publish their results in an open access journal. Carroll also created a dashboard, in collaboration with a student at USF, that allows people to explore and visualize the seasonality patterns for all 254 bird species.
“It’s important for scientists who are relying on publicly generated data to make sure that their results are also publicly available,” said Gerhart.
Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),
Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.
…
Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.
Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.
Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.
Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.
Musa has offered a compelling argument with lots of links to supporting evidence.
[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]
And the evidence keeps mounting. I just stumbled across this June 30, 2018 news item on Xinhuanet.com,
An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.
The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.
The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.
The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.
To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.
All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.
Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.
“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.
Dr. Lin Yi, who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]
AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.
Bian Xiuwu, an academician with the Chinese Academy of Sciences and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]
Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]
Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.
Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.
China has introduced a series of plans in developing AI applications in recent years.
In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”
The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.
I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,
To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.
Radiology isn’t the only area where experts might find themselves displaced.
Eye experts
It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),
An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.
Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.
An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,
More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.
Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”
“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”
The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.
Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.
To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.
The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.
Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.
The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.
If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.
The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.
Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.
Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”
Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”
Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
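The architecture described in the press release, one network to turn a raw scan into an intermediate tissue map and a second to turn that map into a referral recommendation with a confidence percentage, lends itself to a simple sketch. Everything below is a hypothetical stand-in: the function names, urgency labels and random "networks" are mine, not DeepMind's actual models.

```python
import numpy as np

URGENCY_LEVELS = ["urgent", "semi-urgent", "routine", "observation only"]

def segment(oct_scan):
    """Stage 1 stand-in: map a raw OCT scan to a device-independent
    tissue map (a real system uses a trained segmentation network)."""
    return (oct_scan > oct_scan.mean()).astype(float)

def classify(tissue_map):
    """Stage 2 stand-in: map the tissue map to a probability over
    urgency levels (a real system uses a trained classifier)."""
    rng = np.random.default_rng(int(tissue_map.sum()))
    logits = rng.normal(size=len(URGENCY_LEVELS))
    return np.exp(logits) / np.exp(logits).sum()

def recommend(oct_scan):
    """Report both the decision and its confidence, as the press
    release notes the system does, so clinicians can scrutinize it."""
    probs = classify(segment(oct_scan))
    i = int(np.argmax(probs))
    return URGENCY_LEVELS[i], float(probs[i])

decision, confidence = recommend(np.random.default_rng(0).random((64, 64)))
print(decision, f"{confidence:.0%}")
```

The split also maps onto the device-independence claim: only the first stage sees raw scanner output, so supporting a new OCT device would mean adapting that stage alone while the referral stage stays fixed.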
Here’s a link to and a citation for the study,
Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018
This paper is behind a paywall.
And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),
In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.
The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!
I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.
Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article although there are parts that could be read that way. Before getting to what I consider the juicy bits, here’s how the piece opens (Note: Links have been removed),
Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
The part (juicy bits) that satisfied some of my long held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),
IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.
But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.
The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …
It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),
Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.
Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.
Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.
Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation’ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.
Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.
Research interests
Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption
Positions held at the OII
DPhil student, October 2013 –
MSc Student, October 2012 – August 2013
Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.
If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),
Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.
Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”
Guess what happened? (Note: Links have been removed),
But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …
Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.
Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.
…
Doctors are just as invested in their opinions and professional judgments as lawyers (just like the prosecutor and the judges on the Michigan Supreme Court) are.
There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he/she need to be there? At best, it’s as if AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),
…
Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
…
Having input into the AI decision-making process somewhat addresses one of the problems but the commitment to one’s own judgment even when there is overwhelming evidence to the contrary is a perennially thorny problem. The legal case mentioned here earlier is clearly one where the contrarian is wrong but it’s not always that obvious. As well, sometimes, people who hold out against the majority are right.
US Army
Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),
U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.
“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.
The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.
In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
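The three SAT levels read naturally as a layered information model. Purely as an illustrative sketch (the class and field names below are my own invention, not ARL's actual software), here is how an agent's transparency report to its human operator might be structured:

```python
# A hypothetical, minimal data model for the three SAT transparency levels
# described above. This is a reader's sketch, not ARL's implementation.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SATLevel1:
    """Level 1: the agent's basic state, goals, intentions, and plans."""
    current_state: str
    goals: List[str]
    plans: List[str]

@dataclass
class SATLevel2:
    """Level 2: the reasoning process and the constraints/affordances considered."""
    reasoning: str
    constraints: List[str] = field(default_factory=list)
    affordances: List[str] = field(default_factory=list)

@dataclass
class SATLevel3:
    """Level 3: projected future states, predicted consequences, and uncertainty."""
    predicted_outcome: str
    success_likelihood: float  # 0.0 to 1.0
    uncertainty: float         # 0.0 to 1.0

@dataclass
class TransparencyReport:
    """What the agent conveys; higher levels are optional add-ons to level 1."""
    level1: SATLevel1
    level2: Optional[SATLevel2] = None
    level3: Optional[SATLevel3] = None

    def detail(self) -> int:
        """Return the highest SAT level included in this report."""
        if self.level3 is not None:
            return 3
        if self.level2 is not None:
            return 2
        return 1
```

The layering mirrors the experiments: an operator in a low-transparency condition would see only `level1`, while a high-transparency condition would populate all three.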
In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated — accepting the agent’s plan when it is correct and rejecting it when it is incorrect — when the agent had a higher level of transparency.
The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they perceived the ASM as more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.
Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.
“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.
The challenge is to design the user interfaces, which can include visual, auditory, and other modalities, that can support bidirectional transparency dynamically, in real time, while not overwhelming the human with too much information and burden.
Interesting, yes? Here’s a link and a citation for the paper,
In no particular order, here are some Frankenstein bits and bobs in celebration of the 200th anniversary of the publication of Mary Shelley’s book.
The Frankenstein Bicentennial Project
This project at Arizona State University has been featured here a few times, most recently in an October 26, 2016 posting about an artist using a Roomba (robotic vacuum cleaner) in an artistic query and about the Frankenstein at 200 online exhibition.
On the two hundredth anniversary of Mary Shelley’s Frankenstein, Arizona State University launches new educational products and publications for audiences of all ages.
A free, interactive, multiplatform experience for kids designed to inspire deeper engagement with STEM topics and promote the development of 21st century skills related to creative collaboration and critical thinking.
A collaborative, multimedia reading experiment with Mary Shelley’s timeless tale examining the scientific, technological, political, and ethical dimensions of the novel, its historical context, and its enduring legacy.
A set of hands-on STEM making activities that use the Frankenstein story to inspire deeper conversations about scientific and technological creativity and social responsibility.
How to Make a Monster
Kathryn Harkup, in a February 22, 2018 article for the Guardian about her recent book, delves into the science behind Mary Shelley’s Frankenstein (Note: Links have been removed),
The bicentenary of the publication of Mary Shelley’s Frankenstein: or the Modern Prometheus has meant a lot of people are re-examining this brilliant work of science fiction. My particular interest is the science fact behind the science fiction. How much real science influenced Mary Shelley? Could a real-life Victor Frankenstein have constructed a creature?
In terms of the technical aspects of building a creature from scraps, many people focus on the collecting of the raw materials and reanimation stages. It’s understandable, as there are many great stories about grave-robbers and dissection rooms as well as electrical experiments that were performed on recently executed murderers. But there are quite a few stages between digging up dead bodies and reanimating a creature.
The months of tedious and fiddly surgery to bring everything together are often glossed over, but what virtually no one mentions is how difficult it would have been to keep the bits and pieces in a suitable state of preservation while Victor worked on his creation. Making a monster takes time, and bodies rot very quickly.
Preservation of anatomical material was of huge interest when Frankenstein was written, as it is now, though for very different reasons. Today the interest is in preserving organs and tissues suitable for transplant. Some individuals even want to cryogenically freeze their entire body in case future scientists are able to revive them and cure whatever disease caused their original death. In that respect the aims are not so different from what the fictional Victor Frankenstein was attempting two hundred years ago.
At the time Frankenstein is set, the late 18th century, few people were really thinking about organ transplant. Instead, tissue preservation was of concern for anatomy professors who wanted to maintain collections of interesting, unusual or instructive specimens to use as teaching aids for future students.
She provides fascinating insight into preservation techniques of the 18th century and their dangers,
To preserve soft tissues, various substances were injected into or used to coat or soak the dissected specimen. The substance in question had to be toxic enough to destroy mould and bacteria that could decompose the sample, but not corrosive or damaging to the tissues of the specimen itself.
Substances such as turpentine, mercury metal and mercury salts (which are even more toxic than the pure element) were all employed to stop the decay process in its tracks. Killing off bacteria and mould means that some vital process within them has been stopped; however, many processes that are critical to mould and bacteria are also necessary for humans, making these substances toxic to us.
Working in cramped, poorly ventilated conditions with minimal regard for health and safety, the substances anatomical curators were using day in and day out took a serious toll on their health. Anatomical curators were described as emaciated, prematurely aged and with a hacking cough. …
One of the most successful techniques for tissue preservation was bottling in alcohol. …
…
In the 18th century the University of Edinburgh handed over twelve gallons of whisky annually to the anatomy museum for the preservation of specimens. Possibly not all of those twelve gallons made it into the specimen jars. The nature of the curator’s work – the smell, the problems with vermin and toxic fumes – must have made the odd sip of whisky very tempting. Indeed, more than one curator was dismissed for being drunk on the job.
Shelley described Frankenstein working in a small attic room using candlelight to illuminate his work. Small rooms, toxic vapours, alcohol fumes and naked flames are not a healthy combination. No wonder Shelley wrote the work took such a toll on Frankenstein’s health.
The year 1818 saw the publication of one of the most influential science-fiction stories of all time. Frankenstein: Or, Modern Prometheus by Mary Shelley had a huge impact on gothic horror and science-fiction genres, and her creation has become part of our everyday culture, from cartoons to Hallowe’en costumes. Even the name ‘Frankenstein’ has become a by-word for evil scientists and dangerous experiments. How did a teenager with no formal education come up with the idea for an extraordinary novel such as Frankenstein?
Clues are dotted throughout Georgian science and popular culture. The years before the book’s publication saw huge advances in our understanding of the natural sciences, in areas such as electricity and physiology, for example. Sensational science demonstrations caught the imagination of the general public, while the newspapers were full of lurid tales of murderers and resurrectionists.
Making the Monster explores the scientific background behind Mary Shelley’s book. Is there any science fact behind the science fiction? And how might a real-life Victor Frankenstein have gone about creating his monster? From tales of volcanic eruptions, artificial life and chemical revolutions, to experimental surgery, ‘monsters’ and electrical experiments on human cadavers, Kathryn Harkup examines the science and scientists that influenced Shelley, and inspired her most famous creation.
Frankenstein 2018
Frankenstein 2018 is an online site designed to celebrate the 200th anniversary of the book, from the About page,
The Frankenstein 2018 project is based at Volda University College in Norway, but aims to engage and include people from elsewhere in Norway and around the world.
The project is led by Timothy Saunders, an Associate Professor of English Literature and Culture at Volda University College.
If you would like to get in touch, either to offer comments on the website, to provide information about related projects or activities taking place around the world, or even to offer relevant material of your own, please write to me at timothy.saunders@hivolda.no.
What a great idea and I wish the folks at Volda University College all the best.
The Monster Challenge
Washington University in St. Louis (WUSL; Missouri, US) is hosting a competition to create a ‘new Frankenstein’, from WUSL’s The Monster Challenge webpage,
On June 16, 1816, a 19-year-old woman sat quietly listening as her lover (the poet Percy Bysshe Shelley) and a small group of friends — including celebrated poet Lord Byron — discussed conducting a ghost-story contest. The couple was spending their holiday in a beautiful mansion on the banks of scenic Lake Geneva in Switzerland. As the conversation about ghost stories heated up, a discussion arose about the principle of life. Not surprisingly, the ensuing talk of graves and corpses led to a sleepless night filled with horrific nightmares for Mary Shelley. Later, she recalled her own contest entry began with eight words: “It was on a dreary night in November…” Just two years later, in 1818, that young woman, Mary Shelley, published her expanded submission as the novel Frankenstein, not only a classic of 19th-century fiction, but a work that has enjoyed immense influence on popular culture, science, medicine, philosophy and the arts all the way up to the present day.
THE MONSTER CHALLENGE
Commemorating the 200th anniversary of the novel’s publication in 1818, Washington University is hosting a competition open to WU students (full time and registered in fall 2018), both undergraduate and graduate. The submission deadline is October 15, 2018.
The prompt for our own WU “Monster Challenge” is “The New Frankenstein”:
If you learned of a contest today, similar to the one that inspired the publication of Mary Shelley’s Frankenstein in 1818, what new Frankenstein would you create? Winning entries will be those best exemplifying the spirit, tone and feeling of Frankenstein for our age.
Submissions are eligible in two categories: written (including poetry, fiction, nonfiction and theater; 5000 word limit) and visual (including new media, experimental media, sound art, performance art, and design). Only one submission is allowed per student or student collaboration group. The winners will be determined by a jury of faculty members and announced in the fall 2018 semester. Winning entries will also be featured on the Frankenstein Bicentennial website (frankenstein200.wustl.edu).
Through the generosity of Provost Holden Thorpe’s office, winners will receive a cash prize as well as the opportunity to have their submission read, exhibited, and/or performed during the fall 2018 semester. Prizes are as follows:
WRITTEN CATEGORY VISUAL CATEGORY
Grand Prize: $1000 Grand Prize: $1000
2nd Prize: $500 2nd Prize: $500
3rd Prize: $250 3rd Prize: $250
HOW TO SUBMIT
Please review the guidelines below and download the appropriate submission form … for your project.
All submissions are due by 3 pm on October 15, 2018.
Only one submission is allowed per student or student collaboration group.
Electronic submissions should be emailed to iph@wustl.edu along with the appropriate submission form (right).
Non-electronic submissions should be dropped off at the Performing Arts Department in Mallinckrodt Center, Room 312 (specific dates and times to be determined). All applicants submitting work here must also send an email to iph@wustl.edu with a digital image of the work and the appropriate submission form (right). Entries should fit into a case 74″ w x 87″ h x 23″ d. For exceptions, please contact Professor Patricia Olynyk (olynyk@wustl.edu).
FURTHER INFORMATION
For additional information about the contest, please contact the Interdisciplinary Project in the Humanities: iph@wustl.edu.
One of the most famous literary works of the last two centuries, Mary Shelley’s Frankenstein (1818) permeates our cultural imagination. A man of science makes dead matter live yet abandons his own creation. A creature is composed of human body parts yet denied a place in human society. The epic struggle that ensues between creator and creature poses enduring questions to all of us. What do we owe our non-human creations? How might the pursuit of scientific knowledge endanger or empower humanity? How do we combine social responsibility with our technological power to alter living matter? These moral quandaries drive the novel as well as our own hopes and fears about modernity.
Over the last 200 years, Frankenstein has also become one of our most culturally productive myths. The Black Frankenstein became a potent metaphor for racial otherness in the 19th century and remains so to this day. From Boris Karloff as the iconic Monster of 1931 to the transvestite Dr. Frank-N-Furter in The Rocky Horror Picture Show of 1975, the novel has inspired dozens of films and dramatizations. Female poets from Margaret Atwood to Liz Lochhead and Laurie Sheck continue to wrestle with the novel’s imaginative possibilities. And Frankenstein, of course, permeates our material culture. Think no further than Franken Berry cereal, Frankenstein action figures, and Frankenstein bed pillows.
Please join us at Washington University in St. Louis as we celebrate Mary Shelley’s iconic novel and its afterlives with a series of events organized by faculty, students and staff from across the arts, humanities and life sciences. Highlights include the conference Frankenstein at 200, sponsored by the Center for the Humanities; a special Frankenstein issue of The Common Reader; a staging of Nick Dear’s play Frankenstein; the symposium The Curren(t)cy of Frankenstein, sponsored by the Medical School; a film series; several lectures; and exhibits designed to showcase the university’s museum and library collections.
This site aggregates all events related to the celebration. Please visit again for updates!
They do have a page for Global Celebrations, and while the listing isn’t really global at this point (I’m sure they’re hoping that will change), it does open up a number of possibilities for Frankenstein aficionados, experts, and enthusiasts,
Technologies of Frankenstein
Stevens Institute of Technology, College of Arts and Letters and IEEE History Center
The 200th anniversary year of the first edition of Mary Shelley’s Frankenstein: Or, The Modern Prometheus has drawn worldwide interest in revisiting the novel’s themes. What were those themes and what is their value to us in the early twenty-first century? In what ways might our tools of science and communication serve as an “elixir of life” since the age of Frankenstein?
Frankenstein@200 is a year-long series of academic courses and programs including a film festival, a play, a lecture series and an international Health Humanities Conference that will examine the numerous moral, scientific, sociological, ethical and spiritual dimensions of the work, and why Dr. Frankenstein and his monster still capture the moral imagination today.
San Jose State University, Santa Clara University, and University of San Francisco
During 2018, the San Francisco Bay area partners will host The Frankenstein Bicentennial. The novel brings together STEM fields with humanities & the arts in such a way to engage almost every discipline and major. The project’s events will address timely issues of our world in Silicon Valley and the advent of technology – a critical topic with questions important to our academic, regional and world communities. The novel, because it has been so popular for 200 years, lives on in discussions about what it means to be human in a digital world.
Next performance: Monday Feb. 26, 2018; 7 PM
Extended through 2018!
BroadwayWorld review!
“…it is a success of a show that should be considered something great in the realm of musical theater.”
“A musical love letter”
– Local Theatre NY
“…infused with enough emotion to send chills down the spine…”
– Local Theatre NY
“…an ambitious theater piece that is refreshingly buoyed up by its music”
– Theater Scene
FRANKENSTEIN
a new Off-Broadway musical by Eric B. Sirota
based on Mary Shelley’s classic novel
Presented by John Lant, Tamra Pica & Write Act Repertory
at St. Luke’s Theater in the heart of the theatre district
…
. . . a sweeping romantic musical, about the human need for love and companionship,
which honors its source material.
Performances Monday nights at 7 PM
tickets to performances into March currently on sale
(scroll down for performance schedule)
…
Contact us for Special Group Sales and Buyouts at: info@TheFrankensteinMusical.com
St. Luke’s Theatre
an Off-Broadway venue in the heart of the theatre district on “Restaurant Row”
308 West 46th Street (btwn. 8th and 9th Ave.)
contact: info@TheFrankensteinMusical.com
– Book, Music & Lyrics: Eric B. Sirota
– Additional lyrics: Julia Sirota
– Director: Clint Hromsco
– Music Director: Austin Nuckols
(original music direction by Anessa Marie)
– Producer: John Lant, Tamra Pica and Write Act Repertory
– CAST: Jon Rose, Erick Sanchez-Canahuate, Gabriella Marzetta, Stephan Amenta, Cait Kiley, Adam Kee, Samantha Collette, Amy Londyn, Stephanie Lourenco Viegas, Bryan S. Walton
Eric Sirota developed Frankenstein under the working title of “Day of Wrath”, an Official Selection of the 2015 New York Musical Theatre Festival’s Reading Series
**********
Next performances
Feb 26, Mon; 7 PM
Mar 5, Mon; 7 PM
Tickets to later dates on sale soon. . .
March 12, 19, 24
April 2, 9, 16, 23, 30
May . . .
Jun . . .
running through 2018
2018 – Frankenstein bicentennial year!
The Purgatory Press*
John Culbert, an author and lecturer at the University of British Columbia, wrote a January 1, 2018 essay on The Purgatory Press blog* celebrating and examining Mary Shelley’s classic,
She was born in 1797, toward the end of the Little Ice Age. Wolves had been extirpated from the country, but not so long ago that one could forget. Man’s only predator in the British Isles was now a mental throwback. Does the shadow of extinction fall on the children of perpetrators? What strange gap is left in the mind of men suddenly raised from the humble status of prey?
In the winter of her sixteenth year, the river Thames froze in London for the last time. The final “Frost Fair,” a tradition dating back centuries, was held February 1814 on the river’s hard surface.
The following year, a volcano in present-day Indonesia erupted. It was the most powerful and destructive event of its kind in recorded history. Fallout caused a “volcanic winter” across the Northern Hemisphere. In 1816 – “the year without a summer” – she was in Switzerland, where she began writing her first novel, Frankenstein, published 200 years ago today — on January 1st, 1818.
…
Fascinating, yes? I encourage you to read the whole piece.
Frankenstein Festival
The Science Museum in London, UK, is splashing out with a Frankenstein Festival according to a February 13, 2018 press release,
Frankenstein Festival
3–8 April (with special events on 28 March and 27–28 April)
The Science Museum is celebrating the 200th anniversary of Mary Shelley’s Frankenstein or the Modern Prometheus with a free festival exploring the science behind this cultural phenomenon.
Through immersive theatre, experimental storytelling and hands-on activities visitors can examine the ethical and scientific questions surrounding the artificial creation of life. Families can step into Doctor Frankenstein’s shoes, creating a creature and bringing it to life using stop motion animation at our drop-in workshops.
In the Mystery at Frankenstein’s Lab visitors can solve puzzles and conduct experiments in an escape room-like interactive experience. Visitors are also invited to explore the Science Museum as you’ve never heard it before in It’s Alive, an immersive Frankenstein-themed audio tour. Both these activities have limited availability so pre-booking is advised.
In Pandemic, you decide how far Dr Victor should go to tackle a virus sweeping the world. Is it right to create new life to save others? You decide where to draw the line in this choose-your-own-adventure experience. Visitors can also see Humanity 2.0, a play created and performed by actor Emily Carding. Set in a post-apocalyptic future, the play examines what could happen if a benevolent AI recreated humanity.
As part of the festival, visitors will meet researchers at the cutting edge of science—from biochemists who manipulate DNA to engineers creating artificial intelligence—and discover with our curators fascinating scientific objects that could have influenced Shelley.
The Frankenstein Festival will run daily from 3–8 April at the Science Museum and is supported by players of People’s Postcode Lottery. Tickets for activities with limited availability are available from sciencemuseum.org.uk/Frankenstein.
Our free adult-only Frankenstein Lates on 28 March will focus on the darker themes of Shelley’s iconic novel, with the Promethean Tales Weekend on 27–28 April, featuring panel discussions and special screenings of Terminator 2: Judgement Day and The Curse of Frankenstein in our IMAX cinema.
…
Frankenstein Festival activities include:
It’s Alive!
An immersive audio tour created by Cmd+Shift in collaboration with the Science Museum. The tour takes 45 minutes and is limited to 15 people per session. Recommended for ages 8+. Tickets cost £3 and are available here.
Mystery at Frankenstein’s Lab
This interactive, theatrical puzzle experience has been created by Atomic Force Productions, in collaboration with the Science Museum. Each session lasts 45 minutes and is limited to 10 people per session. Recommended for ages 12+, under 16s must be accompanied by an adult. Tickets cost £10 and are available here.
Create Your Own Creature
Get hands on at our drop-in workshops and create your very own creature. Then bring your creature to life with stop motion animation. This activity takes approximately 20 minutes and is suitable for all ages.
Humanity 2.0 (3–5 April)
Step into a dystopian future and help shape the future of humanity in this unique interactive play created and performed by Emily Carding. Her full body make-up was created by award winning body painter Victoria Gugenheim in collaboration with the Science Museum. The play has a run time of 45 minutes and is recommended for ages 12+.
Pandemic (5–8 April)
This choose-your-own-adventure film puts you in control of a psychological thriller. Your decisions will guide Dr Victor on their quest to create artificial life.
Pandemic was created by John Bradburn in collaboration with the Science Museum. The film contains moderate psychological threat and horror sequences that some people may find disturbing. The experience lasts 45 minutes and is recommended for ages 14+. Tickets are free and are available here.
Frankenstein Festival events include:
Frankenstein Lates
Wednesday 28 March, 18.45–22.00
Join us for a fun free evening of events, workshops and screenings as we ask the question ‘should we create life?’.
Lates is a free themed-event for adults at the Science Museum on the last Wednesday of each month. Find out more about Lates at sciencemuseum.org.uk/Lates.
Artificial Life: Should We, Could We, Will We?
Wednesday 28 March as part of the Frankenstein Lates
Tickets: £5
A panel of expert scientists and researchers will discuss artificial life. Just how close are we to creating fully synthetic life and will this be achieved by biological or digital means?
Discussing those questions will be Professor of Cognitive Robotics at Imperial College and scientific advisor for the hit movie Ex Machina Murray Shanahan, Vice President of the International Society for Artificial Life Susan Stepney and Lead Curator of the Science Museum’s acclaimed 2017 exhibition Robots Ben Russell. Further speakers to be announced.
Promethean Tales Weekend
Terminator 2: Judgement Day + Panel Discussion
Friday 27 April, 19.30–22.35 (Doors open 19.00)
Tickets: £8, £6 Concessions
Age 15 and above
In part one of our Promethean Tales Weekend celebrating the 200th anniversary of Mary Shelley’s Frankenstein, we will be joined by a panel of experts in science, film and literature to discuss the topic of ‘Promethean Tales through the ages’ ahead of a screening of Terminator 2: Judgement Day.
The Curse of Frankenstein and Q&A with Sir Christopher Frayling
Saturday 28 April, 18.00–20.30 (Doors open 17.30)
Tickets: £8, £6 Concessions
In part two of our Promethean Tales Weekend, we are joined by Sir Christopher Frayling, author of Frankenstein: The First Two Hundred Years, to discuss the life and work of Shelley, the origins of her seminal story and its cultural impact.
The screening of The Curse of Frankenstein will be followed by a book signing with copies of Sir Christopher’s book available to purchase on the night.
You can find out more about the festival and get tickets to events, here.
Frankenreads
This initiative seems like a lot of fun, from the Frankenreads homepage,
Frankenreads is an NEH [US National Endowment for the Humanities]-funded initiative of the Keats-Shelley Association of America and partners to hold a series of events and initiatives in honor of the 200th anniversary of Mary Shelley’s Frankenstein, featuring especially an international series of readings of the full text of the novel on Halloween 2018.
They have a very open approach as their FAQs webpage attests to,
Why host a Frankenreads event?
Frankenstein, or, The Modern Prometheus appeals to both novice and expert readers alike and is a work that remains highly relevant to contemporary issues. Thus it is perhaps no surprise that (according to the Open Syllabus project) Frankenstein is the most frequently taught work of literature in college English courses and the fifth most frequently taught book in college courses in all disciplines. It is certainly one of the most read British novels in the world. Hosting a Frankenreads event is an easy way both to celebrate the 200th anniversary of this important work and to foster discussion about issues such as ethics in science and the human tendency to demonize the unfamiliar. By participating in Frankenreads, you can make sure that your thoughts about Frankenstein are part of a global conversation.
What kind of event can I host?
You can host any kind of event you like! Below are some suggestions. Click on the event type for further guidance.
Complete Reading — A live, all-day reading (about 9 hours) of the full text of Frankenstein
Partial Reading — A live reading of selected passages from Frankenstein
Discussion — An informal discussion of some or all of the novel
Lesson — A class session, discussion, or exercise on the novel
Lecture — A lecture on the novel by a relevant expert
Viewing — A community viewing on Halloween 2018 of the livestream of the NEH reading or other online events
Other — Whatever other kind of in-person or online event you can think of!
Should I hold in-person events or online events?
Either or both! We encourage you to record in-person events and upload video to our YouTube channel. We will also be providing advice on holding events via Google Hangouts.
When should I hold the event?
You can hold a Frankenreads event any time you like, but we encourage you to schedule an event during Frankenweek: October 24-31, 2018.
Why post my event on the Frankenreads website?
Posting your event on the Frankenreads website enables the Frankenreads team to publicize your event widely, to give you help with your event, and to connect you with others who are holding nearby or similar events.
How do I post my event on the Frankenreads website?
To post your event on the Frankenreads website, first register an account, log in, and then submit your event. You should have the following information:
An event title (required)
An event description (required)
The event time and date
A square image no bigger than 128 Mb to represent the event
Venue information (e.g., name, address, phone number, website)
Organizer(s) information (e.g., name, email address, phone number)
Event website
Event cost
How can I get help?
Lots of ways! You can contact us via this site, message us on social media, or join our Frankenreads discussion group to ask and answer questions of like-minded people.
There you have it from the academic to the informal and more. There is one more thing,
Have a nice weekend!
*’Purgatory Press’ head changed to ‘The Purgatory Press’ and ‘The Purgatory blog’ changed to ‘The Purgatory Press blog’ on February 26, 2018