Tag Archives: University of Chicago

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical strength of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g., “gift” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment using a program called GloVe that essentially functioned as a machine learning version of the Implicit Association Test. Developed by Stanford University researchers, the popular, open-source program is of the sort a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
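GloVe itself learns its word vectors by factorizing a weighted version of these statistics, but the raw ingredient it starts from is easy to illustrate. Here is a minimal sketch (not the actual GloVe code, and with a toy sentence and window size chosen only for illustration) of counting word co-occurrences within a fixed window:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each unordered pair of words appears within
    `window` tokens of each other -- the raw statistic that
    GloVe-style models are built from."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look only ahead, so each pair is counted once.
        for j in range(i + 1, min(i + window, len(tokens))):
            counts[tuple(sorted((word, tokens[j])))] += 1
    return counts

tokens = "the quick brown fox jumps over the lazy dog".split()
counts = cooccurrence_counts(tokens, window=4)
print(counts[("brown", "quick")])  # → 1 (adjacent words co-occur)
```

Words that land inside each other's window accumulate counts; distant words (here, “quick” and “dog”) accumulate none, which is what makes frequently co-occurring words end up with stronger associations.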

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
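The bias measure itself compares how close, in the learned vector space, each target word sits to the two attribute sets. A toy sketch of that differential-association idea, using made-up three-dimensional vectors in place of real GloVe embeddings (the words and all vector values here are hypothetical, chosen only to make the arithmetic visible):

```python
from math import sqrt

# Made-up 3-d vectors standing in for pretrained GloVe embeddings
# (hypothetical values, for illustration only).
vectors = {
    "programmer": (0.9, 0.1, 0.0),
    "nurse":      (0.1, 0.9, 0.0),
    "man":        (0.8, 0.2, 0.1),
    "male":       (0.7, 0.3, 0.1),
    "woman":      (0.2, 0.8, 0.1),
    "female":     (0.3, 0.7, 0.1),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word, attrs_a, attrs_b):
    # Mean cosine similarity of `word` to attribute set A, minus its
    # mean similarity to set B; positive means it sits closer to A.
    w = vectors[word]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([cosine(w, vectors[a]) for a in attrs_a])
            - mean([cosine(w, vectors[b]) for b in attrs_b]))

male, female = ["man", "male"], ["woman", "female"]
print(association("programmer", male, female))  # positive: leans male
print(association("nurse", male, female))       # negative: leans female
```

With real embeddings trained on web text, the same calculation over the study's target and attribute word sets is what surfaces the human-like associations described below.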

In the results, innocent, inoffensive biases, like a preference for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad findings of bias documented over the years in select Implicit Association Test studies that relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet even this accurate reflection of occupational distributions can end up having pernicious, sexist effects. An example: machine learning programs that naively process foreign languages can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harks back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had stronger associations with unpleasantness than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Science, 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current uses of AI programs and big data quite accessible: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Regrowing bone

The ability to grow bone or bone-like material could change life substantially for people with certain kinds of injuries. Scientists at Northwestern University and the University of Chicago have been able to regrow bone in a skull, according to a March 8, 2017 Northwestern University news release (also on EurekAlert),

A team of researchers repaired a hole in a mouse’s skull by regrowing “quality bone,” a breakthrough that could drastically improve the care of people who suffer severe trauma to the skull or face.

The work by a joint team of Northwestern Engineering and University of Chicago researchers was a resounding success, showing that a potent combination of technologies was able to regenerate the skull bone with supporting blood vessels in just the discrete area needed without developing scar tissue — and more rapidly than with previous methods.

“The results are very exciting,” said Guillermo Ameer, professor of biomedical engineering at Northwestern’s McCormick School of Engineering, and professor of surgery at Feinberg School of Medicine.

Supported by the China Scholarship Council, National Institute of Dental and Craniofacial Research, Chicago Community Trust, and National Center for Advancing Translational Sciences, the research was published last week in the journal PLOS One. Russell Reid, associate professor of surgery at the University of Chicago Medical Center, is the article’s corresponding author. Reid, his long-time collaborator Dr. Tong-Chuan He, and colleagues in Hyde Park brought the surgical and biological knowledge and skills. Zari P. Dumanian, affiliated with the medical center’s surgery department, was the paper’s first author.

“This project was a true collaborative team effort in which our Regenerative Engineering Laboratory provided the biomaterials expertise,” Ameer said.

Injuries or defects in the skull or facial bones are very challenging to treat, often requiring the surgeon to graft bone from the patient’s pelvis, ribs, or elsewhere, a painful procedure in itself. Difficulties increase if the injury area is large or if the graft needs to be contoured to the angle of the jaw or the cranial curve.

But if all goes well with this new approach, it may make painful bone grafting obsolete.

In the experiment, the researchers harvested skull cells from the mouse and engineered them to produce a potent protein to promote bone growth. They then used Ameer’s hydrogel, which acted like a temporary scaffolding, to deliver and contain these cells to the affected area. It was the combination of all three technologies that proved so successful, Ameer said.

Using calvaria or skull cells from the subject meant the body didn’t reject those cells.

The protein, BMP9, has been shown to promote bone cell growth more rapidly than other types of BMPs. Importantly, BMP9 also appeared to improve the creation of blood vessels in the area. Being able to safely deliver skull cells that can rapidly regrow bone at the affected site in vivo, as opposed to using them to grow bone in the laboratory (which would take a very long time), promises a therapy that might be more “surgeon friendly, if you will, and not too complicated to scale up for the patients,” Ameer said.

The scaffolding developed in Ameer’s laboratory, which is a material based on citric acid and called PPCN-g, is a liquid that when warmed to body temperature becomes a gel-like elastic material. “When applied, the liquid, which contains cells capable of producing bone, will conform to the shape of the bone defect to make a perfect fit,” Ameer said. “It then stays in place as a gel, localizing the cells to the site for the duration of the repair.” As the bone regrows, the PPCN-g is reabsorbed by the body.

“What we found is that these cells make natural-looking bone in the presence of the PPCN-g,” Ameer said. “The new bone is very similar to normal bone in that location.”

In fact, the three-part method was successful on a number of fronts: The regenerated bone was better quality, the bone growth was contained to the area defined by the scaffolding, the area healed much more quickly, and the new and old bone were continuous with no scar tissue.

The potential, if the procedure can be adapted to treat people who have suffered trauma from car accidents or aggressive cancers affecting the skull or face, would be huge, giving surgeons a much-sought-after option.

“The reconstruction procedure is a lot easier when you can harvest a few cells, make them produce the BMP9 protein, mix them in the PPCN-g solution, and apply it to the bone defect site to jump-start the new bone growth process where you want it,” Ameer said.

Ameer cautioned that the technology is years away from being used in humans, but added, “We did show proof of concept that we can heal large defects in the skull that would normally not heal on their own using a protein, cells and a new material that come together in a completely new way. Our team is very excited about these findings and the future of reconstructive surgery.”

Here’s a link and a citation for the paper,

Repair of critical sized cranial defects with BMP9-transduced calvarial cells delivered in a thermoresponsive scaffold by Zari P. Dumanian, Viktor Tollemar, Jixing Ye, Minpeng Lu, Yunxiao Zhu, Junyi Liao, Guillermo A. Ameer, Tong-Chuan He, and Russell R. Reid. PLOS ONE, http://dx.doi.org/10.1371/journal.pone.0172327. Published: March 1, 2017

This is an open access paper.

Nanotechnology and Pakistan

I don’t often get information about nanotechnology in Pakistan, so this March 6, 2017 news article by Myra Imran on the TheNews.com website provides some welcome insight,

Pakistan has the right level of expert human resource and scientific activity in the field of nanotechnology. A focused national strategy and sustainable funding can make Pakistan one of the leaders in this sector.

These views were expressed by Dr Munir H. Nayfeh, professor of physics at the University of Illinois and founder and president of NanoSi Advanced Technology, Inc. Dr Nayfeh, along with Dr. Irfan Ahmad, executive director of the Centre for Nanoscale Science and Technology and research faculty in the Department of Agricultural and Biological Engineering, University of Illinois, and Dr. Bulent Aydogan, associate professor and director of the Medical Physics Programme at the Pritzker School of Medicine, University of Chicago, was invited by the COMSATS Institute of Information Technology (CIIT) to deliver lectures on nanotechnology research and entrepreneurship with a special focus on cancer nanomedicine.

The objective of the visit was to motivate and mentor faculty and students at COMSATS and also to provide feedback to campus administration and the Federal Ministry of Science and Technology on strategic initiatives to help develop the next generation of science and engineering workforce in Pakistan.

A story of success for the Muslim youth from areas affected by conflict and war, Dr Nayfeh, a Palestinian by origin, was brought up in a conflict area by a mother who did not know how to read and write. For him, the environment was actually a motivator to work hard and study. “My mother was uneducated but she always wanted her children to get the highest degree possible and both my parents supported us in whatever way possible to achieve our dreams,” he recalled.

Comparing Pakistan with other developing countries in the scientific research enterprise, he said that despite a lack of resources, he has observed a decent amount of research output from the existing setups. About their visits to different labs, he said they found faculty members and researchers in need of more and more funds. “I don’t blame them as I am also looking for more and more funds, even in America. This is a positive sign which shows that these setups are alive and want to do more.”

Dr. Nayfeh is greatly impressed with the number of women researchers and students in Pakistan. “In Tunisia and Algeria, there were a decent number of women in this field, but Pakistan has the most, and there are more publications coming out of Pakistan as compared to other developing countries.”

If you have the time, I suggest you read the article in its entirety.

Would you like to invest in the Argonne National Laboratory’s reusable oil spill sponge?

A March 7, 2017 news item on phys.org describes some of the US Argonne National Laboratory’s research into oil spill cleanup technology,

When the Deepwater Horizon drilling pipe blew out seven years ago, beginning the worst oil spill [BP oil spill in the Gulf of Mexico] in U.S. history, those in charge of the recovery discovered a new wrinkle: the millions of gallons of oil bubbling from the sea floor weren’t all collecting on the surface, where they could be skimmed or burned. Some of the oil was forming a plume and drifting through the ocean under the surface.

Now, scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have invented a new foam, called Oleo Sponge, that addresses this problem. The material not only easily adsorbs oil from water, but is also reusable and can pull dispersed oil from the entire water column—not just the surface.

A March 6, 2017 Argonne National Laboratory news release (also on EurekAlert) by Louise Lerner, which originated the news item, provides more information about the work,

“The Oleo Sponge offers a set of possibilities that, as far as we know, are unprecedented,” said co-inventor Seth Darling, a scientist with Argonne’s Center for Nanoscale Materials and a fellow of the University of Chicago’s Institute for Molecular Engineering.

We already have a library of molecules that can grab oil, but the problem is how to get them into a useful structure and bind them there permanently.

The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.

Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.

After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.

The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.


In tests at Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, a giant seawater tank in New Jersey, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface.

“The material is extremely sturdy. We’ve run dozens to hundreds of tests, wringing it out each time, and we have yet to see it break down at all,” Darling said.

Oleo Sponge could potentially also be used routinely to clean harbors and ports, where diesel and oil tend to accumulate from ship traffic, said John Harvey, a business development executive with Argonne’s Technology Development and Commercialization division.

Elam, Darling and the rest of the team are continuing to develop the technology.

“The technique offers enormous flexibility, and can be adapted to other types of cleanup besides oil in seawater. You could attach a different molecule to grab any specific substance you need,” Elam said.

The team is actively looking to commercialize [emphasis mine] the material, Harvey said; those interested in licensing the technology or collaborating with the laboratory on further development may contact partners@anl.gov.

Here’s a link to and a citation for the paper,

Advanced oil sorbents using sequential infiltration synthesis by Edward Barry, Anil U. Mane, Joseph A. Libera, Jeffrey W. Elam, and Seth B. Darling. J. Mater. Chem. A, 2017, 5, 2929-2935. DOI: 10.1039/C6TA09014A. First published online 11 Jan 2017

This paper is behind a paywall.

The two most recent posts here featuring oil spill technology are my Nov. 3, 2016 piece titled: Oil spill cleanup nanotechnology-enabled solution from A*STAR and my Sept. 15, 2016 piece titled: Canada’s Ingenuity Lab receives a $1.7M grant to develop oil recovery system for oil spills. I hope that one of these days someone manages to commercialize at least one of the new oil spill technologies. It seems that there hasn’t been much progress since the BP (Deepwater Horizon) oil spill. If someone has better information than I do about the current state of oil spill cleanup technologies, please do leave a comment.

Phagocytosis for a bioelectronic future

The process by which a cell engulfs matter is known as phagocytosis. One of the best known examples of failed phagocytosis is that of asbestos fibres in the lungs, where a lung cell attempts to engulf a fibre that is just too big, and the fibre ends up piercing the cell. When enough cells are pierced, the person is diagnosed with mesothelioma.

This particular example of phagocytosis is a happier one according to a Dec. 16, 2016 article by Meghan Rosen for ScienceNews,

Human cells can snack on silicon.

Cells grown in the lab devour nano-sized wires of silicon through an engulfing process known as phagocytosis, scientists report December 16 in Science Advances.

Silicon-infused cells could merge electronics with biology, says John Zimmerman, a biophysicist now at Harvard University. “It’s still very early days,” he adds, but “the idea is to get traditional electronic devices working inside of cells.” Such hybrid devices could one day help control cellular behavior, or even replace electronics used for deep brain stimulation, he says.

Scientists have been trying to load electronic parts inside cells for years. One way is to zap holes in cells with electricity, which lets big stuff, like silicon nanowires linked to bulky materials, slip in. Zimmerman, then at the University of Chicago, and colleagues were looking for a simpler technique, something that would let tiny nanowires in easily and could potentially allow them to travel through a person’s bloodstream — like a drug.

A Dec. 22, 2016 University of Chicago news release by Matt Wood provides more detail,

“You can treat it as a non-genetic, synthetic biology platform,” said Bozhi Tian, PhD, assistant professor of chemistry and senior author of the new study. “Traditionally in biology we use genetic engineering and modify genetic parts. Now we can use silicon parts, and silicon can be internalized. You can target those silicon parts to specific parts of the cell and modulate that behavior with light.”

In the new study, Tian and his team show how cells consume or internalize the nanowires through phagocytosis, the same process they use to engulf and ingest nutrients and other particles in their environment. The nanowires are simply added to cell media, the liquid solution the cells live in, the same way you might administer a drug, and the cells take it from there. Eventually, the goal would be to inject them into the bloodstream or package them into a pill.

Once inside, the nanowires can interact directly with individual parts of the cell, organelles like the mitochondria, nucleus and cytoskeletal filaments. Researchers can then stimulate the nanowires with light to see how individual components of the cell respond, or even change the behavior of the cell. They can last up to two weeks inside the cell before biodegrading.

Seeing how individual parts of a cell respond to stimulation could give researchers insight into how medical treatments that use electrical stimulation work at a more detailed level. For instance, deep brain stimulation helps treat tremors from movement disorders like Parkinson’s disease by sending electrical signals to areas of the brain. Doctors know it works at the level of tissues and brain structures, but seeing how individual components of nerve cells react to these signals could help fine tune and improve the treatment.

The experiments in the study used umbilical vascular endothelial cells, which make up blood vessel linings in the umbilical cord. These cells readily took up the nanowires, but others, like cardiac muscle cells, did not. Knowing that some cells consume the wires and some don’t could also prove useful in experimental settings and give researchers more ways to target specific cell types.

Tian and his team manufacture the nanowires in their lab with a chemical vapor deposition system that grows the silicon structures to different specifications. They can adjust size, shape, and electrical properties as needed, or even add defects on purpose for testing. They can also make wires with porous surfaces that could deliver drugs or genetic material to the cells. The process gives them a variety of ways to manipulate the properties of the nanowires for research.

Here’s a link to and a citation for the paper,

Cellular uptake and dynamics of unlabeled freestanding silicon nanowires by John F. Zimmerman, Ramya Parameswaran, Graeme Murray, Yucai Wang, Michael Burke, and Bozhi Tian. Science Advances, 16 Dec 2016: Vol. 2, no. 12, e1601039. DOI: 10.1126/sciadv.1601039

This paper appears to be open access.

Innovation and two Canadian universities

I have two news bits and both concern the Canadian universities, the University of British Columbia (UBC) and the University of Toronto (UofT).

Creative Destruction Lab – West

First, the Creative Destruction Lab, a technology commercialization effort based at UofT’s Rotman School of Management, is opening an office in the west, according to a Sept. 28, 2016 UBC media release (received via email; Note: Links have been removed; this is a long media release which, interestingly, does not mention Joseph Schumpeter, the economist who developed the theory he called creative destruction),

The UBC Sauder School of Business is launching the Western Canadian version of the Creative Destruction Lab, a successful seed-stage program based at UofT’s Rotman School of Management, to help high-technology ventures driven by university research maximize their commercial impact and benefit to society.

“Creative Destruction Lab – West will provide a much-needed support system to ensure innovations formulated on British Columbia campuses can access the funding they need to scale up and grow in-province,” said Robert Helsley, Dean of the UBC Sauder School of Business. “The success our partners at Rotman have had in helping commercialize the scientific breakthroughs of Canadian talent is remarkable and is exactly what we plan to replicate at UBC Sauder.”

Between 2012 and 2016, companies from CDL’s first four years generated over $800 million in equity value. It has supported a long line of emerging startups, including computer-human interface company Thalmic Labs, which announced nearly USD $120 million in funding on September 19, one of the largest Series B financings in Canadian history.

Focusing on massively scalable high-tech startups, CDL-West will provide coaching from world-leading entrepreneurs, support from dedicated business and science faculty, and access to venture capital. While some of the ventures will originate at UBC, CDL-West will also serve the entire province and extended western region by welcoming ventures from other universities. The program will closely align with existing entrepreneurship programs across UBC, including e@UBC and HATCH, and actively work with the BC Tech Association [also known as the BC Technology Industry Association] and other partners to offer a critical next step in the venture creation process.

“We created a model for tech venture creation that keeps startups focused on their essential business challenges and dedicated to solving them with world-class support,” said CDL Founder Ajay Agrawal, a professor at the Rotman School of Management and UBC PhD alumnus.

“By partnering with UBC Sauder, we will magnify the impact of CDL by drawing in ventures from one of the country’s other leading research universities and B.C.’s burgeoning startup scene to further build the country’s tech sector and the opportunities for job creation it provides,” said CDL Director, Rachel Harris.

CDL uses a goal-setting model to push ventures along a path toward success. Over nine months, a collective of leading entrepreneurs with experience building and scaling technology companies – called the G7 – sets targets for ventures to hit every eight weeks, with the goal of maximizing their equity value. Along the way, ventures turn to business and technology experts for strategic guidance on how to reach goals, and draw on dedicated UBC Sauder students who apply state-of-the-art business skills to help companies decide which market to enter first and how.

Ventures that fail to achieve milestones – approximately 50 per cent in past cohorts – are cut from the process. Those that reach their objectives and graduate from the program attract investment from the G7, as well as other leading venture-capital firms.

Currently being assembled, the CDL-West G7 will comprise entrepreneurial luminaries, including Jeff Mallett, the founding President, COO and Director of Yahoo! Inc. from 1995-2002 – a company he led to $4 billion in revenues and grew from a startup to a publicly traded company whose value reached $135 billion. He is now Managing Director of Iconica Partners and Managing Partner of Mallett Sports & Entertainment, with ventures including the San Francisco Giants, AT&T Park and Mission Rock Development, Comcast Bay Area Sports Network, the San Jose Giants, Major League Soccer, Vancouver Whitecaps FC, and a variety of other sports and online ventures.

Already bearing fruit, the Creative Destruction Lab partnership will see several UBC ventures accepted into a Machine Learning Specialist Track run by Rotman’s CDL this fall. This track is designed to create a support network for enterprises focused on artificial intelligence, a research strength at UofT and Canada more generally, which has traditionally migrated to the United States for funding and commercialization. In its second year, CDL-West will launch its own specialist track in an area of strength at UBC that will draw eastern ventures west.

“This new partnership creates the kind of high impact innovation network the Government of Canada wants to encourage,” said Brandon Lee, Canada’s Consul General in San Francisco, who works to connect Canadian innovation to customers and growth capital opportunities in Silicon Valley. “By collaborating across our universities to enhance our capacity to turn the scientific discoveries into businesses in Canada, we can further advance our nation’s global competitiveness in the knowledge-based industries.”

The Creative Destruction Lab is guided by an Advisory Board, co-chaired by Vancouver-based Haig Farris, a pioneer of the Canadian venture capitalist industry, and Bill Graham, Chancellor of Trinity College at UofT and former Canadian cabinet minister.

“By partnering with Rotman, UBC Sauder will be able to scale up its support for high-tech ventures extremely quickly and with tremendous impact,” said Paul Cubbon, Leader of CDL-West and a faculty member at UBC Sauder. “CDL-West will act as a turbo booster for ventures with great ideas, but which lack the strategic roadmap and funding to make them a reality.”

CDL-West launched its competitive application process for the first round of ventures that will begin in January 2017. Interested ventures are encouraged to submit applications via the CDL website at: www.creativedestructionlab.com

Background

UBC Technology ventures represented at media availability

Awake Labs is a wearable technology startup whose products measure and track anxiety in people with Autism Spectrum Disorder to better understand behaviour. Their first device, Reveal, monitors a wearer’s heart-rate, body temperature and sweat levels using high-tech sensors to provide insight into care and promote long term independence.

Acuva Technologies is a Vancouver-based clean technology venture focused on commercializing breakthrough UltraViolet Light Emitting Diode technology for water purification systems. Initially focused on point-of-use systems for boats, RVs and off-grid homes in the North American market, where they already have early sales, the company’s goal is to enable water purification in households in developing countries by 2018 and deploy large scale systems by 2021.

Other members of the CDL-West G7 include:

Boris Wertz: One of the top tech early-stage investors in North America and the founding partner of Version One, Wertz is also a board partner with Andreessen Horowitz. Before becoming an investor, Wertz was the Chief Operating Officer of AbeBooks.com, which sold to Amazon in 2008. He was responsible for marketing, business development, product, customer service and international operations. His deep operational experience helps him guide other entrepreneurs to start, build and scale companies.

Lisa Shields: Founder of Hyperwallet Systems Inc., Shields guided Hyperwallet from a technology startup to the leading international payments processor for business to consumer mass payouts. Prior to founding Hyperwallet, Lisa managed payments acceptance and risk management technology teams for high-volume online merchants. She was the founding director of the Wireless Innovation Society of British Columbia and is driven by the social and economic imperatives that shape global payment technologies.

Jeff Booth: Co-founder, President and CEO of BuildDirect, a rapidly growing online supplier of home improvement products. Through custom and proprietary web analytics and forecasting tools, BuildDirect is reinventing and redefining how consumers can receive the best prices. BuildDirect has 12 warehouse locations across North America and is headquartered in Vancouver, BC. In 2015, Booth was awarded the BC Technology ‘Person of the Year’ Award by the BC Technology Industry Association.

Education:

CDL-West will provide a transformational experience for MBA and senior undergraduate students at UBC Sauder who will act as venture advisors. Replacing traditional classes, students learn by doing during the process of rapid equity-value creation.

Supporting venture development at UBC:

CDL-West will work closely with venture creation programs across UBC to complete the continuum of support aimed at maximizing venture value and investment. It will draw in ventures that are being or have been supported and developed in programs that span campus, including:

University Industry Liaison Office which works to enable research and innovation partnerships with industry, entrepreneurs, government and non-profit organizations.

e@UBC which provides a combination of mentorship, education, venture creation, and seed funding to support UBC students, alumni, faculty and staff.

HATCH, a UBC technology incubator which leverages the expertise of the UBC Sauder School of Business and entrepreneurship@UBC and a seasoned team of domain-specific experts to provide real-world, hands-on guidance in moving from innovative concept to successful venture.

Coast Capital Savings Innovation Hub, a program based at the UBC Sauder Centre for Social Innovation & Impact Investing focused on developing ventures with the goal of creating positive social and environmental impact.

About the Creative Destruction Lab in Toronto:

The Creative Destruction Lab leverages the Rotman School’s leading faculty and industry network as well as its location in the heart of Canada’s business capital to accelerate massively scalable, technology-based ventures that have the potential to transform our social, industrial, and economic landscape. The Lab has had a material impact on many nascent startups, including Deep Genomics, Greenlid, Atomwise, Bridgit, Kepler Communications, Nymi, NVBots, OTI Lumionics, PUSH, Thalmic Labs, Vertical.ai, Revlo, Validere, Growsumo, and VoteCompass, among others. For more information, visit www.creativedestructionlab.com

About the UBC Sauder School of Business

The UBC Sauder School of Business is committed to developing transformational and responsible business leaders for British Columbia and the world. Located in Vancouver, Canada’s gateway to the Pacific Rim, the school is distinguished for its long history of partnership and engagement in Asia, the excellence of its graduates, and the impact of its research which ranks in the top 20 globally. For more information, visit www.sauder.ubc.ca

About the Rotman School of Management

The Rotman School of Management is located in the heart of Canada’s commercial and cultural capital and is part of the University of Toronto, one of the world’s top 20 research universities. The Rotman School fosters a new way to think that enables graduates to tackle today’s global business and societal challenges. For more information, visit www.rotman.utoronto.ca.

It’s good to see a couple of successful (according to the news release) local entrepreneurs on the board although I’m somewhat puzzled by Mallett’s presence since, if memory serves, Yahoo! was not doing that well when he left in 2002. The company was an early success but utterly dwarfed by Google at some point in the early 2000s and these days, its stock (both financial and social) has continued to drift downwards. As for Mallett’s current successes, there is no mention of them.

Reuters Top 100 of the world’s most innovative universities

After reading or skimming through the CDL-West news you might think that the University of Toronto ranked higher than UBC on the Reuters list of the world’s most innovative universities. Before breaking the news about the Canadian rankings, here’s more about the list from a Sept. 28, 2016 Reuters news release (received via email),

Stanford University, the Massachusetts Institute of Technology and Harvard University top the second annual Reuters Top 100 ranking of the world’s most innovative universities. The Reuters Top 100 ranking aims to identify the institutions doing the most to advance science, invent new technologies and help drive the global economy. Unlike other rankings that often rely entirely or in part on subjective surveys, the ranking uses proprietary data and analysis tools from the Intellectual Property & Science division of Thomson Reuters to examine a series of patent and research-related metrics, and get to the essence of what it means to be truly innovative.

In the fast-changing world of science and technology, if you’re not innovating, you’re falling behind. That’s one of the key findings of this year’s Reuters 100. The 2016 results show that big breakthroughs – even just one highly influential paper or patent – can drive a university way up the list, but when that discovery fades into the past, so does its ranking. Consistency is key, with truly innovative institutions putting out groundbreaking work year after year.

Stanford held fast to its first place ranking by consistently producing new patents and papers that influence researchers elsewhere in academia and in private industry. Researchers at the Massachusetts Institute of Technology (ranked #2) were behind some of the most important innovations of the past century, including the development of digital computers and the completion of the Human Genome Project. Harvard University (ranked #3) is the oldest institution of higher education in the United States, and has produced 47 Nobel laureates over the course of its 380-year history.

Some universities saw significant movement up the list, including, most notably, the University of Chicago, which jumped from #71 last year to #47 in 2016. Other list-climbers include the Netherlands’ Delft University of Technology (#73 to #44) and South Korea’s Sungkyunkwan University (#66 to #46).

The United States continues to dominate the list, with 46 universities in the top 100; Japan is once again the second best performing country, with nine universities. France and South Korea are tied in third, each with eight. Germany has seven ranked universities; the United Kingdom has five; Switzerland, Belgium and Israel have three; Denmark, China and Canada have two; and the Netherlands and Singapore each have one.

You can find the rankings here (scroll down about 75% of the way) and for the impatient, the University of British Columbia ranked 50th and the University of Toronto 57th.

The biggest surprise for me was that China, like Canada, had only two universities on the list. I imagine that will change as China continues its quest for science and innovation dominance. My other surprise, given how loudly it touts its innovation prowess, was the University of Waterloo’s absence.

Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about machine learning and curbing police violence has appeared in New Scientist (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which stressed officers behave aggressively, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.

According to Hodson, there are already systems, both human and algorithmic, in place but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”
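To make the quoted numbers concrete (the 48-of-83 incidents caught, and the lower false positive rate), here is a minimal sketch of how such an early-intervention flagging system might be scored. The data and the `evaluate` function are my own invention for illustration; the real Charlotte-Mecklenburg system works from confidential officer records and a more elaborate model:

```python
# Toy scoring of an early-intervention classifier, illustrating the two
# metrics discussed above: recall (fraction of adverse incidents caught)
# and false positive rate (fraction of unflagged-worthy officers flagged).
# All data below is invented for illustration.

def evaluate(flags, outcomes):
    """flags[i]: did the system flag officer i as high risk?
    outcomes[i]: was officer i actually involved in an adverse incident?"""
    tp = sum(f and o for f, o in zip(flags, outcomes))          # correctly flagged
    fp = sum(f and not o for f, o in zip(flags, outcomes))      # wrongly flagged
    fn = sum(not f and o for f, o in zip(flags, outcomes))      # missed incidents
    tn = sum(not f and not o for f, o in zip(flags, outcomes))  # correctly unflagged
    recall = tp / (tp + fn)  # like "caught 48 out of 83"
    fpr = fp / (fp + tn)     # what the article reports as 32% lower
    return recall, fpr

# Invented example: 10 officers, 4 of whom had adverse incidents.
flags    = [True, True, False, True, False, False, True, False, True, False]
outcomes = [True, True, True, False, False, False, True, False, False, False]
recall, fpr = evaluate(flags, outcomes)
print(f"recall={recall:.2f}, false positive rate={fpr:.2f}")
```

The trade-off Ghani describes follows directly from these two numbers: a system that flags nearly everyone achieves high recall but a false positive rate so high that no targeted intervention is possible.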

h/t Terkko Navigator

I have written about police and violence here in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper, July 25, 2016 posting titled: Robots, Dallas (US), ethics, and killing.

Small, soft, and electrically functional: an injectable biomaterial

This development could be looked at as a form of synthetic biology without the genetic engineering. From a July 1, 2016 news item on ScienceDaily,

Ideally, injectable or implantable medical devices should not only be small and electrically functional, they should be soft, like the body tissues with which they interact. Scientists from two UChicago labs set out to see if they could design a material with all three of those properties.

The material they came up with, published online June 27, 2016, in Nature Materials, forms the basis of an ingenious light-activated injectable device that could eventually be used to stimulate nerve cells and manipulate the behavior of muscles and organs.

“Most traditional materials for implants are very rigid and bulky, especially if you want to do electrical stimulation,” said Bozhi Tian, an assistant professor in chemistry whose lab collaborated with that of neuroscientist Francisco Bezanilla on the research.

The new material, in contrast, is soft and tiny — particles just a few micrometers in diameter (far less than the width of a human hair) that disperse easily in a saline solution so they can be injected. The particles also degrade naturally inside the body after a few months, so no surgery would be needed to remove them.

A July 1, 2016 University of Chicago news release (also on EurekAlert), which originated the news item, provides more detail,

Each particle is built of two types of silicon that together form a structure full of nano-scale pores, like a tiny sponge. And like a sponge, it is squishy — a hundred to a thousand times less rigid than the familiar crystalline silicon used in transistors and solar cells. “It is comparable to the rigidity of the collagen fibers in our bodies,” said Yuanwen Jiang, Tian’s graduate student. “So we’re creating a material that matches the rigidity of real tissue.”

The material constitutes half of an electrical device that creates itself spontaneously when one of the silicon particles is injected into a cell culture, or, eventually, a human body. The particle attaches to a cell, making an interface with the cell’s plasma membrane. Those two elements together — cell membrane plus particle — form a unit that generates current when light is shined on the silicon particle.

“You don’t need to inject the entire device; you just need to inject one component,” said João L. Carvalho-de-Souza, Bezanilla’s postdoc. “This single particle connection with the cell membrane allows sufficient generation of current that could be used to stimulate the cell and change its activity. After you achieve your therapeutic goal, the material degrades naturally. And if you want to do therapy again, you do another injection.”

The scientists built the particles using a process they call nano-casting. They fabricate a silicon dioxide mold composed of tiny channels — “nano-wires” — about seven nanometers in diameter (roughly 10,000 times thinner than a human hair) connected by much smaller “micro-bridges.” Into the mold they inject silane gas, which fills the pores and channels and decomposes into silicon.

And this is where things get particularly cunning. The scientists exploit the fact the smaller an object is, the more the atoms on its surface dominate its reactions to what is around it. The micro-bridges are minute, so most of their atoms are on the surface. These interact with oxygen that is present in the silicon dioxide mold, creating micro-bridges made of oxidized silicon gleaned from materials at hand. The much larger nano-wires have proportionately fewer surface atoms, are much less interactive, and remain mostly pure silicon. [I have a note regarding ‘micro’ and ‘nano’ later in this posting.]

“This is the beauty of nanoscience,” Jiang said. “It allows you to engineer chemical compositions just by manipulating the size of things.”

Web-like nanostructure

Finally, the mold is dissolved. What remains is a web-like structure of silicon nano-wires connected by micro-bridges of oxidized silicon that can absorb water and help increase the structure’s softness. The pure silicon retains its ability to absorb light.

Transmission electron microscopy image shows an ordered nanowire array. The 100-nanometer scale bar is 1,000 times narrower than a hair. Courtesy of Tian Lab

The scientists have added the particles onto neurons in culture in the lab, shone light on the particles, and seen current flow into the neurons, activating the cells. The next step is to see what happens in living animals. They are particularly interested in stimulating nerves in the peripheral nervous system that connect to organs. These nerves are relatively close to the surface of the body, so near-infra-red wavelength light can reach them through the skin.

Tian imagines using the light-activated devices to engineer human tissue and create artificial organs to replace damaged ones. Currently, scientists can make engineered organs with the correct form but not the ideal function.

To get a lab-built organ to function properly, they will need to be able to manipulate individual cells in the engineered tissue. The injectable device would allow a scientist to do that, tweaking an individual cell using a tightly focused beam of light like a mechanic reaching into an engine and turning a single bolt. The possibility of doing this kind of synthetic biology without genetic engineering [emphasis mine] is enticing.

“No one wants their genetics to be altered,” Tian said. “It can be risky. There’s a need for a non-genetic system that can still manipulate cell behavior. This could be that kind of system.”

Tian’s graduate student Yuanwen Jiang did the material development and characterization on the project. The biological part of the collaboration was done in the lab of Francisco Bezanilla, the Lillian Eichelberger Cannon Professor of Biochemistry and Molecular Biology, by postdoc João L. Carvalho-de-Souza. They were, said Tian, the “heroes” of the work.

I was a little puzzled about the use of the word ‘micro’ in a context suggesting it was smaller than something measured at the nanoscale. Dr. Tian very kindly cleared up my confusion with this response in a July 4, 2016 email,

In fact, the definition of ‘micro’ and ‘nano’ have been quite ambiguous in literature. For example, microporous materials (e.g., zeolite) usually refer to materials with pore sizes of less than 2 nm — this is defined based on IUPAC [International Union of Pure and Applied Chemistry] definition (http://goldbook.iupac.org/M03853.html). We used ‘micro-bridges’ because they come from the ‘micropores’ in the original template.

Thank you Dr. Tian for that very clear reply and Steve Koppes for forwarding my request to Dr. Tian!

Here’s a link to and a citation for the paper,

Heterogeneous silicon mesostructures for lipid-supported bioelectric interfaces by Yuanwen Jiang, João L. Carvalho-de-Souza, Raymond C. S. Wong, Zhiqiang Luo, Dieter Isheim, Xiaobing Zuo, Alan W. Nicholls, Il Woong Jung, Jiping Yue, Di-Jia Liu, Yucai Wang, Vincent De Andrade, Xianghui Xiao, Luizetta Navrazhnykh, Dara E. Weiss, Xiaoyang Wu, David N. Seidman, Francisco Bezanilla, & Bozhi Tian. Nature Materials (2016). doi:10.1038/nmat4673. Published online 27 June 2016

This paper is behind a paywall.

I gather animal testing will be the next step as they continue to develop this exciting technology. Good luck!

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are up to five years and range from $500,000 to $1.6 million each per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.

Science in the 21st Century: how short should your abstracts be and what about litigation?

Writing tips for abstracts

A May 1, 2015 news item on phys.org profiles research that contradicts every writing tip you’ve ever gotten about abstracts for your science research,

When writing the abstracts for journal articles, most scientists receive similar advice: keep it short, dry, and simple. But a new analysis by University of Chicago researchers of over one million abstracts finds that many of these tips backfire, producing abstracts cited less than their long, flowery, and jargon-filled peers.

“What I think is funny is there’s this disconnect between what you’d like to read, and what scientists actually cite,” said Stefano Allesina, professor of evolution and ecology at the University of Chicago, Computation Institute fellow and faculty, and senior author of the study. “It’s very suggestive that we should not trust writing tips we take for granted.”

During a seminar for incoming graduate students on how to write effective abstracts, Allesina wondered whether there was hard evidence for the “rules” that were taught. So Allesina and Cody Weinberger, a University of Chicago undergraduate, gathered hundreds of writing suggestions from scientific literature and condensed them into “Ten Simple Rules,” including “Keep it short,” “Keep it simple,” “Signal novelty and importance,” and “Show confidence.”

Here’s a link to and a citation for the paper,

Ten Simple (Empirical) Rules for Writing Science by Cody J. Weinberger, James A. Evans, & Stefano Allesina. PLOS Computational Biology, published April 30, 2015. DOI: 10.1371/journal.pcbi.1004205

This is an open access journal.

From the paper (Note: Links have been removed),

Scientists receive (and offer) much advice on how to write an effective paper that their colleagues will read, cite, and celebrate [2–15]. Fundamentally, the advice is similar to that given to journalists: keep the text short, simple, bold, and easy to understand. Many resources recommend the parsimonious use of adjectives and adverbs, the use of present tense, and a consistent style. Here we put this advice to the test, and measure the impact of certain features of academic writing on success, as proxied by citations.

The abstract epitomizes the scientific writing style, and many journals force their authors to follow a formula—including a very strict word-limit, a specific organization into paragraphs, and even the articulation of particular sentences and claims (e.g., “Here we show that…”).

For our analysis, we collected more than one million abstracts from eight disciplines, spanning 17 years. The disciplines were chosen so that biology was represented by three allied fields (Ecology, Evolution, and Genetics). We drew upon a wide range of comparison disciplines, namely Analytical Chemistry, Condensed Matter Physics, Geology, Mathematics, and Psychology (see table in S1 Text). We measured whether certain features of the abstract consistently led to more (or fewer) citations than expected, after accounting for other factors that certainly influence citations, such as article age (S1 Fig), number of authors and references, and the journal in which it was published.
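The paper's statistical model is more involved than this, but the kind of surface features it scores for each abstract (word count, sentence count, average sentence length) can be approximated with a rough sketch like the following. This is illustrative only; the function name and splitting heuristics are mine, not the authors'.

```python
import re

def abstract_features(text):
    """Return simple surface features of an abstract (rough heuristic)."""
    # Split into sentences on terminal punctuation -- a crude approximation.
    sentences = [s for s in re.split(r"[.!?]+\s*", text.strip()) if s]
    # Count alphabetic word tokens.
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
    }

demo = "We study citation impact. Longer abstracts are cited more often."
print(abstract_features(demo))
```

Features like these, computed across a million abstracts, can then be regressed against citation counts while controlling for article age, author count, and journal.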

Here are some of the results (from the paper),

We find that shorter abstracts (fewer words [R1a] and fewer sentences [R1b]) consistently lead to fewer citations, with short sentences (R2) being beneficial only in Mathematics and Physics. Similarly, using more (rather than fewer) adjectives and adverbs is beneficial (R5). Also, writing an abstract with fewer common (R3a) or easy (R3b) words results in more citations.

The use of the present tense (R4) is beneficial in Biology and Psychology, while it has a negative impact in Chemistry and Physics, possibly reflecting differences in disciplinary culture.

While matching the keywords (R6) leads to universally negative outcomes, signaling the novelty and importance of the work (R7) has positive effects. The use of superlatives (R8) is also positive, while avoiding “hedge” words is negative in Biology and Physics, but positive in Chemistry.

Finally, choosing “pleasant,” “active,” and “easy to imagine” words (R10) has positive effects across the board.
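One of the findings above, that abstracts with fewer common words (R3a) draw more citations, rests on measuring how much of an abstract is everyday vocabulary. A minimal sketch of such a measure might look like the following; the tiny word list and function name are my own illustration, not the paper's actual lexicon or code.

```python
# A toy stand-in for a frequency-ranked list of common English words.
COMMON = {"the", "of", "and", "to", "in", "a", "is", "that", "we", "for"}

def common_word_fraction(text):
    """Fraction of word tokens that appear in the common-word list."""
    words = [w.lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(w in COMMON for w in words) / len(words)

print(common_word_fraction("We show that the method works"))
```

A lower score on a measure like this marks a more technical, jargon-heavy abstract, which is exactly the style the study found was cited more, plausibly because rarer terms make the abstract easier for search engines to match.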

The issue the researchers particularized from the results may not be what you expect (from the paper),

… Despite the fact that anybody in their right mind would prefer to read short, simple, and well-written prose with few abstruse terms, when building an argument and writing a paper, the limiting step is the ability to find the right article. For this, scientists rely heavily on search techniques, especially search engines, where longer and more specific abstracts are favored. Longer, more detailed, prolix prose is simply more available for search. This likely explains our results, and suggests the new landscape of linguistic fitness in 21st century science. …

It seems to me that prolix prose’s popularity, which predates search engines and the internet, is now being reinforced by our digital media. In short, while there are many complaints about digital media and shortened attention spans, it seems that in some cases digital media encourage wordiness.

Litigation and research

A May 1, 2015 posting by Michael Halpern for the Guardian science blogs sheds light on some legal tactics that lend themselves quite well to intimidating science researchers (Note: Links have been removed),

In 2009, a law firm representing Philip Morris submitted freedom of information requests to the University of Stirling for the work of three scientists – Gerard Hastings, Anne Marie Mackintosh and Linda Bauld – who were studying the impact of tobacco marketing on adolescents. They sought all primary data, questionnaires, handbooks and documents related to the researchers’ work, much of which was confidential.

Although the requests were eventually dropped due to negative publicity, responding to and challenging them cost the scientists and the university’s lawyers many weeks of work. “The stress of all this is considerable,” the scientists involved wrote afterwards. “We are not lawyers and, like most civilians, find the law abstruse and the overt threat of serious punishment extremely disconcerting.”

This was no isolated incident. Activists and corporations of all political stripes in a growing number of countries are increasingly harassing and intimidating university scientists, using public information laws which were originally designed for citizens to understand the workings of government.

In an editorial in this week’s Science magazine, climate scientist Michael Mann and I explore this problem and ask a pressing question: how do we balance public accountability with the privacy essential for scientific inquiry?

The post is well worth reading in its entirety as Halpern goes on to describe the situation in more detail.