Tag Archives: UK

See Nobel prize winner’s (Kostya Novoselov) collaborative art/science video project on August 17, 2018 (Manchester, UK)

Dr. Konstantin (Kostya) Novoselov, one of the two scientists at the University of Manchester (UK) who shared the 2010 Nobel Prize in Physics for their work with graphene, has embarked on an artistic career of sorts. From an August 8, 2018 news item on Nanowerk,

Nobel prize-winning physicist Sir Kostya Novoselov worked with artist Mary Griffiths to create Prospect Planes – a video artwork resulting from months of scientific and artistic research and experimentation using graphene.

Prospect Planes will be unveiled as part of The Hexagon Experiment series of events at the Great Exhibition of the North 2018, Newcastle, on August 17 [2018].

An August 9, 2018 University of Manchester press release, which originated the news item (differences in the dates are likely due to timezones), describes the art/science project in some detail,

The fascinating video art project aims to shed light on graphene’s unique qualities and potential.

Providing a fascinating insight into scientific research into graphene, Prospect Planes began with a graphite drawing by Griffiths, symbolising the chemical element carbon.

This was replicated in graphene by Sir Kostya Novoselov, creating a microscopic 2D graphene version of Griffiths’ drawing just one atom thick and invisible to the naked eye.

They then used Raman spectroscopy to record a molecular fingerprint of the graphene image, using that fingerprint to map a digital visual representation of graphene’s unique qualities.

The six-part Hexagon Experiment series was inspired by the creativity of the Friday evening sessions that led to the isolation of graphene at The University of Manchester by Novoselov and Sir Andre Geim.

Mary Griffiths has previously worked on other graphene artworks, including From Seathwaite, an installation in the National Graphene Institute that depicts the story of graphite and graphene – its geography, geology and development in the North West of England.

Mary Griffiths, who is also Senior Curator at The Whitworth, said: “Having previously worked alongside Kostya on other projects, I was aware of his passion for art. This has been a tremendously exciting and rewarding project, which will help people to better understand the unique qualities of graphene, while bringing Manchester’s passion for collaboration and creativity across the arts, industry and science to life.

“In many ways, the story of the scientific research which led to the creation of Prospect Planes is as exciting as the artwork itself. By taking my pencil drawing and patterning it in 2D with a single layer of graphene atoms, then creating an animated digital work of art from the graphene data, we hope to provoke further conversations about the nature of the first 2D material and the potential benefits and purposes of graphene.”

Sir Kostya Novoselov said: “In this particular collaboration with Mary, we merged two existing concepts to develop a new platform, which can result in multiple art projects. I really hope that we will continue working together to develop this platform even further.”

The Hexagon Experiment is taking place just a few months before the official launch of the £60m Graphene Engineering Innovation Centre, part of a major investment in 2D materials infrastructure across Manchester, cementing its reputation as Graphene City.

Prospect Planes was commissioned by Manchester-based creative music charity Brighter Sound.

The Hexagon Experiment is part of Both Sides Now – a three-year initiative to support, inspire and showcase women in music across the North of England, supported through Arts Council England’s Ambition for Excellence fund.

It took some searching but I’ve found the specific Hexagon event featuring Novoselov’s and Mary Griffiths’ work. From ‘The Hexagon Experiment #3: Adventures in Flatland’ webpage,

Lauren Laverne is joined by composer Sara Lowes and visual artist Mary Griffiths to discuss their experiments with music, art and science. Followed by a performance of Sara Lowes’ graphene-inspired composition Graphene Suite, and the unveiling of new graphene art by Mary Griffiths and Professor Kostya Novoselov. Alongside Andre Geim, Novoselov was awarded the Nobel Prize in Physics in 2010 for his groundbreaking experiments with graphene.

About The Hexagon Experiment

Music, art and science collide in an explosive celebration of women’s creativity

A six-part series of ‘Friday night experiments’ featuring live music, conversations and original commissions from pioneering women at the forefront of music, art and science.

Inspired by the creativity that led to the discovery of the Nobel-Prize winning ‘wonder material’ graphene, The Hexagon Experiment brings together the North’s most exciting musicians and scientists for six free events – from music made by robots to a spectacular tribute to an unsung heroine.

Presented by Brighter Sound and the National Graphene Institute at The University of Manchester, as part of the Great Exhibition of the North.

Buy tickets here.

One final comment, the title for the evening appears to have been inspired by a novella, from the Flatland Wikipedia entry (Note: Links have been removed),

Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London.

Written pseudonymously by “A Square”,[1] the book used the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture, but the novella’s more enduring contribution is its examination of dimensions.[2]

That’s all folks.

ETA August 14, 2018: Not quite all. Hopefully this attempt to add a few details for people not familiar with graphene won’t lead to increased confusion. The Hexagon event ‘Adventures in Flatland’, which includes Novoselov’s and Griffiths’ video project, features some wordplay based on graphene’s two-dimensional nature.

SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) and their art gallery from Aug. 12 – 16, 2018 (making the world synthetic) in Vancouver (Canada)

While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.


For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),

Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.

Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.[1] The recent paper acceptance rate for SIGGRAPH has been less than 26%.[2] The submitted papers are peer-reviewed in a single-blind process.[3] There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress.[4][5] …

This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014.  The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,

While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.

“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”

SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”

That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.


I have four items about work that will be presented at SIGGRAPH 2018. First up, something about ‘redirected walking’ from a June 18, 2018 Association for Computing Machinery news release on EurekAlert,

CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.

All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” which indicates the inability for humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.

“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”

Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.

The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users noticing. They tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a curved path in VR. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could also be triggered consciously. Since users can blink deliberately without much effort, eye blinks offer great potential as an intentional trigger in their approach.
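The mechanism can be sketched in a few lines. This is a hypothetical helper, not the team’s implementation: redirection is injected only while a blink is detected, and clamped to the imperceptibility thresholds reported above (roughly 5 degrees of rotation and 9 cm of translation per blink).

```python
# Approximate imperceptibility thresholds reported in the study
MAX_ROT_DEG = 5.0    # camera rotation per blink (degrees)
MAX_TRANS_CM = 9.0   # viewpoint translation per blink (cm)

def redirect_on_blink(desired_rot_deg, desired_trans_cm, blinking):
    """Return the (rotation, translation) to inject this frame.
    Redirection happens only while the eyes are closed, and never
    exceeds the thresholds users cannot detect."""
    if not blinking:
        return 0.0, 0.0
    rot = max(-MAX_ROT_DEG, min(MAX_ROT_DEG, desired_rot_deg))
    trans = max(-MAX_TRANS_CM, min(MAX_TRANS_CM, desired_trans_cm))
    return rot, trans
```

A steering algorithm would call this each frame with the rotation it would *like* to apply, accumulating the clamped amounts across many blinks.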

The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”

The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.

Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.



ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.

They have provided an image illustrating what they mean (I don’t find it especially informative),

Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn

Next up (2), there’s Disney Corporation’s first virtual reality (VR) short, from a July 19, 2018  Association for Computing Machinery news release on EurekAlert,

Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.

Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.

“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”

For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.

SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.

“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”

This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”

Apparently this is a still from the ‘short’,

Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios

There’s also something (3) from Google as described in a July 26, 2018 Association for Computing Machinery news release on EurekAlert,

Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.

Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.

“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”

To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.

Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real time.
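As a rough illustration of that kind of blending (a toy sketch, not Google’s actual renderer), one can weight the capture images whose camera directions lie closest to the desired view direction by inverse angular distance:

```python
import math

def blend_nearest(view_dir, cameras, k=3):
    """Toy light-field blend: mix the colours of the k capture images
    whose camera directions are closest to `view_dir`, weighted by
    inverse angular distance. `cameras` is a list of
    (unit_direction, (r, g, b)) pairs; directions are unit 3-vectors."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety
    nearest = sorted(cameras, key=lambda c: angle(view_dir, c[0]))[:k]
    weights = [1.0 / (angle(view_dir, d) + 1e-6) for d, _ in nearest]
    total = sum(weights)
    return tuple(
        sum(w * colour[i] for w, (_, colour) in zip(weights, nearest)) / total
        for i in range(3)
    )
```

A real renderer also uses the depth maps mentioned above to reproject each image before blending; this sketch omits that step entirely.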

The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)

Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with a level of unmatched realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.

Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.

“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”

I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release from a science-based distribution outlet,

Caption: A team of leading researchers at Google, will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck

Finally (4), ‘synthesizing realistic sounds’ is announced in an Aug. 6, 2018 Stanford University (US) news release (also on EurekAlert) by Taylor Kubota,

Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.

“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”

The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.

“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”

Predicting sound

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.

Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.

“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.

The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
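To give a flavour of what a pressure-wave simulation involves (a textbook one-dimensional toy, nothing like the paper’s 3D solver), here is one explicit finite-difference time step of the acoustic wave equation p_tt = c² p_xx with fixed ends:

```python
def wave_step(p_prev, p_curr, c, dx, dt):
    """One explicit finite-difference step of the 1-D wave equation.
    `p_prev` and `p_curr` are pressure samples at the two previous
    time levels; returns the next time level. Ends are held fixed.
    Stability requires the Courant number c*dt/dx <= 1."""
    r2 = (c * dt / dx) ** 2  # Courant number squared
    p_next = list(p_curr)
    for i in range(1, len(p_curr) - 1):
        p_next[i] = (2 * p_curr[i] - p_prev[i]
                     + r2 * (p_curr[i + 1] - 2 * p_curr[i] + p_curr[i - 1]))
    return p_next
```

With the Courant number at its stability limit, an initial pressure spike splits into two pulses travelling in opposite directions – exactly the kind of outgoing wave behaviour the Stanford system resolves, in 3D and at far greater scale.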

Challenges ahead

In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.

And, even in its current state, the results are worth the wait.

“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”

Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.

Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.

Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.

The researchers have also provided this image,

By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)

It does seem like we’re synthesizing the world around us, eh?

The SIGGRAPH 2018 art gallery

Here’s what SIGGRAPH had to say about its 2018 art gallery in Vancouver and the themes for the conference and the gallery, from a May 18, 2018 Association for Computing Machinery news release on globalnewswire.com (also on this 2018 SIGGRAPH webpage),

SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.

The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated Ludic practices.

Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”

He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”

Highlights from the 2018 Art Gallery include:

Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver

TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.

Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara

Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”

Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University

Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.

In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Huillier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.

The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.

To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.

“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.

Art Papers highlights include:

Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth

This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.

Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong

The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.

Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University

“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.

What’s the what?

My father used to say that, and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.

The CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 gene-editing technique may cause new genetic damage kerfuffle

Setting the stage

Not unexpectedly, CRISPR-Cas9, or clustered regularly interspaced short palindromic repeats-CRISPR-associated protein 9, can be dangerous, as these scientists note in a July 16, 2018 news item on phys.org,

Scientists at the Wellcome Sanger Institute have discovered that CRISPR/Cas9 gene editing can cause greater genetic damage in cells than was previously thought. These results create safety implications for gene therapies using CRISPR/Cas9 in the future as the unexpected damage could lead to dangerous changes in some cells.

Reported today (16 July 2018) in the journal Nature Biotechnology, the study also revealed that standard tests for detecting DNA changes miss finding this genetic damage, and that caution and specific testing will be required for any potential gene therapies.

This CRISPR-Cas9 image reminds me of popcorn,

CRISPR-associated protein Cas9 (white) from Staphylococcus aureus based on Protein Database ID 5AXW. Credit: Thomas Splettstoesser (Wikipedia, CC BY-SA 4.0) [downloaded from https://phys.org/news/2018-07-genome-crisprcas9-gene-higher-thought.html]

A July 16, 2018 Wellcome Sanger Institute press release (also on EurekAlert), which originated the news item, offers a little more explanation,

CRISPR/Cas9 is one of the newest genome editing tools. It can alter sections of DNA in cells by cutting at specific points and introducing changes at that location. Already extensively used in scientific research, CRISPR/Cas9 has also been seen as a promising way to create potential genome editing treatments for diseases such as HIV, cancer or sickle cell disease. Such therapeutics could inactivate a disease-causing gene, or correct a genetic mutation. However, any potential treatments would have to prove that they were safe.

Previous research had not shown many unforeseen mutations from CRISPR/Cas9 in the DNA at the genome editing target site. To investigate this further the researchers carried out a full systematic study in both mouse and human cells and discovered that CRISPR/Cas9 frequently caused extensive mutations, but at a greater distance from the target site.

The researchers found many of the cells had large genetic rearrangements such as DNA deletions and insertions. These could lead to important genes being switched on or off, which could have major implications for CRISPR/Cas9 use in therapies. In addition, some of these changes were too far away from the target site to be seen with standard genotyping methods.

Prof Allan Bradley, corresponding author on the study from the Wellcome Sanger Institute, said: “This is the first systematic assessment of unexpected events resulting from CRISPR/Cas9 editing in therapeutically relevant cells, and we found that changes in the DNA have been seriously underestimated before now. It is important that anyone thinking of using this technology for gene therapy proceeds with caution, and looks very carefully to check for possible harmful effects.”

Michael Kosicki, the first author from the Wellcome Sanger Institute, said: “My initial experiment used CRISPR/Cas9 as a tool to study gene activity, however it became clear that something unexpected was happening. Once we realised the extent of the genetic rearrangements we studied it systematically, looking at different genes and different therapeutically relevant cell lines, and showed that the CRISPR/Cas9 effects held true.”

The work has implications for how CRISPR/Cas9 is used therapeutically and is likely to re-spark researchers’ interest in finding alternatives to the standard CRISPR/Cas9 method for gene editing.

Prof Maria Jasin, an independent researcher from Memorial Sloan Kettering Cancer Center, New York, who was not involved in the study said: “This study is the first to assess the repertoire of genomic damage arising at a CRISPR/Cas9 cleavage site. While it is not known if genomic sites in other cell lines will be affected in the same way, this study shows that further research and specific testing is needed before CRISPR/Cas9 is used clinically.”

For anyone who’d like to better understand the terms gene editing and CRISPR-Cas9, the Wellcome Sanger Institute provides these explanatory webpages, What is genome editing? and What is CRISPR-Cas9?
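For readers who like to see the idea in code: Cas9 finds its cut site by matching a roughly 20-nucleotide guide sequence that must sit immediately next to an “NGG” PAM motif in the DNA. Here is a minimal, hypothetical Python sketch of that targeting rule (the sequences are invented, and real off-target behaviour, as the study above shows, is far messier than this),

```python
# Toy illustration of SpCas9 targeting, NOT the Sanger study's method:
# the enzyme binds where a ~20-nt guide matches the DNA and the match
# is immediately followed by an "NGG" PAM (any base, then two guanines).

def find_cas9_sites(dna, guide):
    """Return start indices where `guide` matches and is followed by an NGG PAM."""
    sites = []
    # stop early enough that 3 PAM characters remain after the guide
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide):i + len(guide) + 3]
            if pam[1:] == "GG":  # "NGG" rule
                sites.append(i)
    return sites

guide = "GATTACAGATTACAGATTAC"           # hypothetical 20-nt guide sequence
dna = "CCCC" + guide + "TGG" + "AAAA"    # match followed by TGG (a valid NGG PAM)
print(find_cas9_sites(dna, guide))       # → [4]
```

A real genome scan would also check the reverse strand and tolerate mismatches, which is exactly where off-target cutting comes from.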

For the more advanced, here’s a link and a citation for the paper,

Repair of double-strand breaks induced by CRISPR–Cas9 leads to large deletions and complex rearrangements by Michael Kosicki, Kärt Tomberg, & Allan Bradley. Nature Biotechnology DOI: https://doi.org/10.1038/nbt.4192 Published 16 July 2018

This paper appears to be open access.

The kerfuffle

It seems this news has affected the CRISPR market. From a July 16, 2018 article by Cale Guthrie Weissman for Fast Company,

… CRISPR could unknowingly delete or alter non-targeted genes, which could lead to myriad unintended consequences. This is especially frightening, since the technology is going to be used in human clinical trials.

Meanwhile, other scientists working with CRISPR are trying to downplay the findings, telling STAT [a life sciences and business journalism website] that there have been no reported adverse effects similar to what the study describes. The news, however, has brought about a market reaction–at least three publicly traded companies that focus on CRISPR-based therapies are in stock nosedive. Crispr Therapeutics is down by over 6%; Editas fell by over 3%; and Intellia Therapeutics dropped by over 5%. [emphasis mine]

Damage control

Gaetan Burgio (geneticist, Australian National University) in a July 16, 2018 essay on phys.org (originating from The Conversation) suggests some calm (Note: Links have been removed),

But a new study has called into question the precision of the technique [CRISPR gene editing technology].

The hope for gene editing is that it will be able to cure and correct diseases. To date, many successes have been reported, including curing deafness in mice, and in altering cells to cure cancer.

Some 17 clinical trials in human patients are registered [emphasis mine] testing gene editing on leukaemias, brain cancers and sickle cell anaemia (where red blood cells are misshaped, causing them to die). Before implementing CRISPR technology in clinics to treat cancer or congenital disorders, we must address whether the technique is safe and accurate.

There are a few options for getting around this problem. One option is to isolate the cells we wish to edit from the body and reinject only the ones we know have been correctly edited.

For example, lymphocytes (white blood cells) that are crucial to killing cancer cells could be taken out of the body, then modified using CRISPR to heighten their cancer-killing properties. The DNA of these cells could be sequenced in detail, and only the cells accurately and specifically gene-modified would be selected and delivered back into the body to kill the cancer cells.

While this strategy is valid for cells we can isolate from the body, some cells, such as neurons and muscles, cannot be removed from the body. These types of cells might not be suitable for gene editing using Cas9 scissors.

Fortunately, researchers have discovered other forms of CRISPR systems that don’t require the DNA to be cut. Some CRISPR systems only cut the RNA, not the DNA (DNA contains genetic instructions, RNA conveys the instructions on how to synthesise proteins).

As RNA [ribonucleic acid] remains in our cells only for a specific period of time before being degraded, this would allow us to control the timing and duration of the CRISPR system delivery and reverse it (so the scissors are only functional for a short period of time).

This was found to be successful for dementia in mice. Similarly, some CRISPR systems simply change the letters of the DNA, rather than cutting them. This was successful for specific mutations causing diseases such as hereditary deafness in mice.

I agree with Burgio’s conclusion (not included here) that we have a lot more to learn and I can’t help wondering why there are 17 registered human clinical trials at this point.

My name is Steve and I’m a sub auroral ion drift

Photo: The Aurora Named STEVE. Courtesy: NASA/Goddard

That stunning image is one of a series, many of which were taken by amateur photographers as noted in a March 14, 2018 US National Aeronautics and Space Agency (NASA)/Goddard Space Flight Center news release (also on EurekAlert) by Kasha Patel about how STEVE was discovered,

Notanee Bourassa knew that what he was seeing in the night sky was not normal. Bourassa, an IT technician in Regina, Canada, trekked outside of his home on July 25, 2016, around midnight with his two younger children to show them a beautiful moving light display in the sky — an aurora borealis. He often sky gazes until the early hours of the morning to photograph the aurora with his Nikon camera, but this was his first expedition with his children. When a thin purple ribbon of light appeared and started glowing, Bourassa immediately snapped pictures until the light particles disappeared 20 minutes later. Having watched the northern lights for almost 30 years since he was a teenager, he knew this wasn’t an aurora. It was something else.

From 2015 to 2016, citizen scientists — people like Bourassa who are excited about a science field but don’t necessarily have a formal educational background — shared 30 reports of these mysterious lights in online forums and with a team of scientists that run a project called Aurorasaurus. The citizen science project, funded by NASA and the National Science Foundation, tracks the aurora borealis through user-submitted reports and tweets.

The Aurorasaurus team, led by Liz MacDonald, a space scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, conferred to determine the identity of this mysterious phenomenon. MacDonald and her colleague Eric Donovan at the University of Calgary in Canada talked with the main contributors of these images, amateur photographers in a Facebook group called Alberta Aurora Chasers, which included Bourassa and lead administrator Chris Ratzlaff. Ratzlaff gave the phenomenon a fun, new name, Steve, and it stuck.

But people still didn’t know what it was.

Scientists’ understanding of Steve changed that night Bourassa snapped his pictures. Bourassa wasn’t the only one observing Steve. Ground-based cameras called all-sky cameras, run by the University of Calgary and University of California, Berkeley, took pictures of large areas of the sky and captured Steve and the auroral display far to the north. From space, ESA’s (the European Space Agency) Swarm satellite just happened to be passing over the exact area at the same time and documented Steve.

For the first time, scientists had ground and satellite views of Steve. Scientists have now learned, despite its ordinary name, that Steve may be an extraordinary puzzle piece in painting a better picture of how Earth’s magnetic fields function and interact with charged particles in space. The findings are published in a study released today in Science Advances.

“This is a light display that we can observe over thousands of kilometers from the ground,” said MacDonald. “It corresponds to something happening way out in space. Gathering more data points on STEVE will help us understand more about its behavior and its influence on space weather.”

The study highlights one key quality of Steve: Steve is not a normal aurora. Auroras occur globally in an oval shape, last hours and appear primarily in greens, blues and reds. Citizen science reports showed Steve is purple with a green picket fence structure that waves. It is a line with a beginning and end. People have observed Steve for 20 minutes to 1 hour before it disappears.

If anything, auroras and Steve are different flavors of an ice cream, said MacDonald. They are both created in generally the same way: Charged particles from the Sun interact with Earth’s magnetic field lines.

The uniqueness of Steve is in the details. While Steve goes through the same large-scale creation process as an aurora, it travels along different magnetic field lines than the aurora. All-sky cameras showed that Steve appears at much lower latitudes. That means the charged particles that create Steve connect to magnetic field lines that are closer to Earth’s equator, hence why Steve is often seen in southern Canada.

Perhaps the biggest surprise about Steve appeared in the satellite data. The data showed that Steve comprises a fast moving stream of extremely hot particles called a sub auroral ion drift, or SAID. Scientists have studied SAIDs since the 1970s but never knew there was an accompanying visual effect. The Swarm satellite recorded information on the charged particles’ speeds and temperatures, but does not have an imager aboard.

“People have studied a lot of SAIDs, but we never knew it had a visible light. Now our cameras are sensitive enough to pick it up and people’s eyes and intellect were critical in noticing its importance,” said Donovan, a co-author of the study. Donovan led the all-sky camera network and his Calgary colleagues lead the electric field instruments on the Swarm satellite.

Steve is an important discovery because of its location in the sub auroral zone, an area of lower latitude than where most auroras appear that is not well researched. For one, with this discovery, scientists now know there are unknown chemical processes taking place in the sub auroral zone that can lead to this light emission.

Second, Steve consistently appears in the presence of auroras, which usually occur at a higher latitude area called the auroral zone. That means there is something happening in near-Earth space that leads to both an aurora and Steve. Steve might be the only visual clue that exists to show a chemical or physical connection between the higher latitude auroral zone and lower latitude sub auroral zone, said MacDonald.

“Steve can help us understand how the chemical and physical processes in Earth’s upper atmosphere can sometimes have local noticeable effects in lower parts of Earth’s atmosphere,” said MacDonald. “This provides good insight on how Earth’s system works as a whole.”

The team can learn a lot about Steve with additional ground and satellite reports, but recording Steve from the ground and space simultaneously is a rare occurrence. Each Swarm satellite orbits Earth every 90 minutes and Steve only lasts up to an hour in a specific area. If the satellite misses Steve as it circles Earth, Steve will probably be gone by the time that same satellite crosses the spot again.

In the end, capturing Steve becomes a game of perseverance and probability.
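That game of probability can be sketched with back-of-envelope arithmetic. Assuming (my numbers, not the study’s) that a Swarm satellite’s single overhead pass occurs at a uniformly random phase of its roughly 90-minute orbit while a STEVE event lasts at most about 60 minutes in one area, the chance that a given pass coincides with the event is roughly the ratio of the two durations, ignoring ground-track geometry entirely,

```python
# Rough, hedged estimate of catching STEVE from orbit; these numbers come
# from the press release (90-minute orbit, up-to-1-hour event), the
# uniform-random-phase assumption is mine.

orbit_min = 90.0   # one Swarm orbit, in minutes
event_min = 60.0   # upper bound on a STEVE event's duration, in minutes

# probability the pass falls inside the event window, capped at 1
p_single_pass = min(event_min / orbit_min, 1.0)
print(f"chance a given pass overlaps the event: ~{p_single_pass:.0%}")  # → ~67%
```

Even at roughly two chances in three per orbit under these generous assumptions, the satellite also has to cross the right patch of sky, which is why simultaneous ground and space captures remain rare.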

“It is my hope that with our timely reporting of sightings, researchers can study the data so we can together unravel the mystery of Steve’s origin, creation, physics and sporadic nature,” said Bourassa. “This is exciting because the more I learn about it, the more questions I have.”

As for the name “Steve” given by the citizen scientists? The team is keeping it as an homage to its initial name and discoverers. But now it is STEVE, short for Strong Thermal Emission Velocity Enhancement.

Other collaborators on this work are: the University of Calgary, New Mexico Consortium, Boston University, Lancaster University, Athabasca University, Los Alamos National Laboratory and the Alberta Aurora Chasers Facebook group.

If you live in an area where you may see STEVE or an aurora, submit your pictures and reports to Aurorasaurus through aurorasaurus.org or the free iOS and Android mobile apps. To learn how to spot STEVE, click here.

There is a video with MacDonald describing the work and featuring more images,

Katherine Kornei’s March 14, 2018 article for sciencemag.org adds more detail about the work,

Citizen scientists first began posting about Steve on social media several years ago. Across New Zealand, Canada, the United States, and the United Kingdom, they reported an unusual sight in the night sky: a purplish line that arced across the heavens for about an hour at a time, visible at lower latitudes than classical aurorae, mostly in the spring and fall. … “It’s similar to a contrail but doesn’t disperse,” says Notanee Bourassa, an aurora photographer in Saskatchewan province in Canada [Regina, as mentioned in the news release, is the capital of the province of Saskatchewan].

Traditional aurorae are often green, because oxygen atoms present in Earth’s atmosphere emit that color light when they’re bombarded by charged particles trapped in Earth’s magnetic field. They also appear as a diffuse glow—rather than a distinct line—on the northern or southern horizon. Without a scientific theory to explain the new sight, a group of citizen scientists led by aurora enthusiast Chris Ratzlaff of Canada’s Alberta province [usually referred to as Canada’s province of Alberta or simply, the province of Alberta] playfully dubbed it Steve, after a line in the 2006 children’s movie Over the Hedge.

Aurorae have been studied for decades, but people may have missed Steve because their cameras weren’t sensitive enough, says Elizabeth MacDonald, a space physicist at NASA Goddard Space Flight Center in Greenbelt, Maryland, and leader of the new research. MacDonald and her team have used data from a European satellite called Swarm-A to study Steve in its native environment, about 200 kilometers up in the atmosphere. Swarm-A’s instruments revealed that the charged particles in Steve had a temperature of about 6000°C, “impressively hot” compared with the nearby atmosphere, MacDonald says. And those ions were flowing from east to west at nearly 6 kilometers per second, …
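As a rough physics check on those numbers (my arithmetic, not the paper’s): the thermal speed of atomic oxygen ions, which dominate at the roughly 200-kilometer altitudes mentioned, at about 6000°C comes out well below the reported ~6 km/s drift, which helps convey just how fast that east-to-west flow is,

```python
# Back-of-envelope thermal speed of O+ ions at ~6000 degrees C.
# Assumes singly ionized atomic oxygen; the comparison with the ~6 km/s
# drift speed is my own illustration, not a result from the paper.
import math

k_B = 1.380649e-23             # Boltzmann constant, J/K
m_O = 16 * 1.66053906660e-27   # mass of an oxygen atom, kg
T = 6000 + 273.15              # ~6000 degrees C in kelvin

v_thermal = math.sqrt(2 * k_B * T / m_O)  # most probable thermal speed, m/s
print(f"thermal speed ≈ {v_thermal / 1000:.1f} km/s, vs ~6 km/s reported drift")
```

Under these assumptions the bulk drift is more than twice the ions’ thermal speed, i.e. a genuinely supersonic flow relative to the surrounding plasma.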

Here’s a link to and a citation for the paper,

New science in plain sight: Citizen scientists lead to the discovery of optical structure in the upper atmosphere by Elizabeth A. MacDonald, Eric Donovan, Yukitoshi Nishimura, Nathan A. Case, D. Megan Gillies, Bea Gallardo-Lacourt, William E. Archer, Emma L. Spanswick, Notanee Bourassa, Martin Connors, Matthew Heavner, Brian Jackel, Burcu Kosar, David J. Knudsen, Chris Ratzlaff, and Ian Schofield. Science Advances 14 Mar 2018:
Vol. 4, no. 3, eaaq0030 DOI: 10.1126/sciadv.aaq0030

This paper is open access. You’ll note that Notanee Bourassa is listed as an author. For more about Bourassa, there’s his Twitter feed (@DJHardwired) and his YouTube Channel. BTW, his Twitter bio notes that he’s “Recently heartbroken,” as well as, “Seasoned human male. Expert storm chaser, aurora photographer, drone flyer and on-air FM radio DJ.” Make of that what you will.

Body-on-a-chip (10 organs)

Also known as human-on-a-chip, the 10-organ body-on-a-chip was being discussed at the 9th World Congress on Alternatives to Animal Testing in the Life Sciences in 2014 in Prague, Czech Republic (see this July 1, 2015 posting for more). At the time, scientists were predicting success at achieving their goal of 10 organs on-a-chip in 2017 (the best at the time was four organs). Only a few months past that deadline, scientists from the Massachusetts Institute of Technology (MIT) seem to have announced a ’10 organ chip’ in a March 14, 2018 news item on ScienceDaily,

MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body.

Such a system could reveal, for example, whether a drug that is intended to treat one organ will have adverse effects on another.

A March 14, 2018 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

“Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and one of the senior authors of the study. “With our chip, you can distribute a drug and then look for the effects on other tissues, and measure the exposure and how it is metabolized.”

These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because they are designed to interact with the human immune system.

David Trumper, an MIT professor of mechanical engineering, and Murat Cirit, a research scientist in the Department of Biological Engineering, are also senior authors of the paper, which appears in the journal Scientific Reports. The paper’s lead authors are former MIT postdocs Collin Edington and Wen Li Kelly Chen.

Modeling organs

When developing a new drug, researchers identify drug targets based on what they know about the biology of the disease, and then create compounds that affect those targets. Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects, Griffith says. Furthermore, drugs that work in animals often fail in human trials.

“Animals do not represent people in all the facets that you need to develop drugs and understand disease,” Griffith says. “That is becoming more and more apparent as we look across all kinds of drugs.”

Complications can also arise due to variability among individual patients, including their genetic background, environmental influences, lifestyles, and other drugs they may be taking. “A lot of the time you don’t see problems with a drug, particularly something that might be widely prescribed, until it goes on the market,” Griffith says.

As part of a project spearheaded by the Defense Advanced Research Projects Agency (DARPA), Griffith and her colleagues decided to pursue a technology that they call a “physiome on a chip,” which they believe could offer a way to model potential drug effects more accurately and rapidly. To achieve this, the researchers needed new equipment — a platform that would allow tissues to grow and interact with each other — as well as engineered tissue that would accurately mimic the functions of human organs.

Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. Furthermore, most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require external pumps.

The MIT team decided to create an open system, which essentially removes the lid and makes it easier to manipulate the system and remove samples for analysis. Their system, adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations, also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger engineered tissues, for example tumors within an organ, to be evaluated.

Complex interactions

The researchers created several versions of their chip, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells. These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These so-called “primary cells” are more difficult to work with but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down. In a related publication, the researchers modeled how drugs can cause unexpected stress on the liver by making the gastrointestinal tract “leaky,” allowing bacteria to enter the bloodstream and produce inflammation in the liver.
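Conceptually, that drug-distribution experiment resembles a multi-compartment pharmacokinetic model. The sketch below is not the MIT team’s actual model; it is a toy three-compartment (gut to blood to liver) simulation with invented first-order rate constants, just to illustrate the kind of transport-and-metabolism behaviour the chip is designed to measure,

```python
# Hypothetical three-compartment drug model, forward-Euler integration.
# All rate constants (per hour) and the dose are invented for illustration;
# the real platform measures these quantities rather than assuming them.

def simulate(dose=1.0, k_absorb=0.5, k_liver=0.3, k_metab=0.2,
             dt=0.01, t_end=24.0):
    """Return (gut, blood, liver) drug amounts after t_end hours."""
    gut, blood, liver = dose, 0.0, 0.0
    t = 0.0
    while t < t_end:
        d_gut = -k_absorb * gut                    # absorption out of the gut
        d_blood = k_absorb * gut - k_liver * blood  # circulation to the liver
        d_liver = k_liver * blood - k_metab * liver  # hepatic metabolism
        gut += d_gut * dt
        blood += d_blood * dt
        liver += d_liver * dt
        t += dt
    return gut, blood, liver

gut, blood, liver = simulate()
print(f"after 24 h: gut={gut:.4f}, blood={blood:.4f}, liver={liver:.4f}")
```

With these made-up rates, essentially all of the dose has left the gut after 24 hours and most has been metabolized, the sort of time-course a linked-organ chip lets researchers observe directly rather than infer.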

Kevin Healy, a professor of bioengineering and materials science and engineering at the University of California at Berkeley, says that this kind of system holds great potential for accurate prediction of complex adverse drug reactions.

“While microphysiological systems (MPS) featuring single organs can be of great use for both pharmaceutical testing and basic organ-level studies, the huge potential of MPS technology is revealed by connecting multiple organ chips in an integrated system for in vitro pharmacology. This study beautifully illustrates that multi-MPS “physiome-on-a-chip” approaches, which combine the genetic background of human cells with physiologically relevant tissue-to-media volumes, allow accurate prediction of drug pharmacokinetics and drug absorption, distribution, metabolism, and excretion,” says Healy, who was not involved in the research.

Griffith believes that the most immediate applications for this technology involve modeling two to four organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease.

Other applications include modeling tumors that metastasize to other parts of the body, she says.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

Caption: A microfluidic platform that connects engineered tissues from up to 10 organs, allowing researchers to replicate human organ interactions for weeks at a time. Credit: Felice Frankel

Here’s a link to and a citation for the paper,

Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies by Collin D. Edington, Wen Li Kelly Chen, Emily Geishecker, Timothy Kassis, Luis R. Soenksen, Brij M. Bhushan, Duncan Freake, Jared Kirschner, Christian Maass, Nikolaos Tsamandouras, Jorge Valdez, Christi D. Cook, Tom Parent, Stephen Snyder, Jiajie Yu, Emily Suter, Michael Shockley, Jason Velazquez, Jeremy J. Velazquez, Linda Stockdale, Julia P. Papps, Iris Lee, Nicholas Vann, Mario Gamboa, Matthew E. LaBarge, Zhe Zhong, Xin Wang, Laurie A. Boyer, Douglas A. Lauffenburger, Rebecca L. Carrier, Catherine Communal, Steven R. Tannenbaum, Cynthia L. Stokes, David J. Hughes, Gaurav Rohatgi, David L. Trumper, Murat Cirit, Linda G. Griffith. Scientific Reports, 2018; 8 (1) DOI: 10.1038/s41598-018-22749-0

This paper, which describes testing for four-, seven-, and ten-organs-on-a-chip, is open access. From the paper’s Discussion,

In summary, we have demonstrated a generalizable approach to linking MPSs [microphysiological systems] within a fluidic platform to create a physiome-on-a-chip approach capable of generating complex molecular distribution profiles for advanced drug discovery applications. This adaptable, reusable system has unique and complementary advantages to existing microfluidic and PDMS-based approaches, especially for applications involving high logD substances (drugs and hormones), those requiring precise and flexible control over inter-MPS flow partitioning and drug distribution, and those requiring long-term (weeks) culture with reliable fluidic and sampling operation. We anticipate this platform can be applied to a wide range of problems in disease modeling and pre-clinical drug development, especially for tractable lower-order (2–4) interactions.
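As a back-of-the-envelope illustration of what an inter-MPS “molecular distribution profile” looks like, here is a toy two-compartment exchange model. It is my own sketch, not the paper’s pharmacokinetic model; the compartment labels, flow rate, and volumes are invented for illustration:

```python
# Toy model (not from the paper): a drug equilibrating between two
# fluidically linked organ compartments via a shared recirculating flow.
def simulate(c_a=1.0, c_b=0.0, q=0.2, v_a=1.0, v_b=0.5,
             dt=0.01, steps=1000):
    """c_*: concentrations, q: inter-compartment flow, v_*: volumes.
    Forward-Euler integration of simple two-compartment exchange."""
    for _ in range(steps):
        flux = q * (c_a - c_b)      # net drug transfer, A -> B
        c_a -= flux * dt / v_a
        c_b += flux * dt / v_b
    return c_a, c_b

c_a, c_b = simulate()
# Total drug mass (c_a*v_a + c_b*v_b) is conserved, and both
# concentrations converge toward the shared equilibrium value of 2/3.
print(round(c_a, 3), round(c_b, 3))
```

In the actual platform the analogous quantities are set by pump rates and MPS chamber volumes; the point here is only that controlled flow partitioning determines how a compound distributes across linked tissues over time.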

Congratulations to the researchers!

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International, in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed to recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and, after high-profile employee dissent over the project, has said it won’t continue with it once its current contract expires next year. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

From Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making Vancouver the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It comprises five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

World’s first ever graphene-enhanced sports shoes/sneakers/running shoes/runners/trainers

Regardless of what these shoes are called, they contain, apparently, some graphene. As to why you as a consumer might find that important, here’s more from a June 20, 2018 news item on Nanowerk,

The world’s first-ever sports shoes to utilise graphene – the strongest material on the planet – have been unveiled by The University of Manchester and British brand inov-8.

Collaborating with graphene experts at the National Graphene Institute, the brand has been able to develop a graphene-enhanced rubber. They have developed rubber outsoles for running and fitness shoes that in testing have outlasted 1,000 miles and are scientifically proven to be 50% harder wearing.

The National Graphene Institute’s (located at the UK’s University of Manchester) June 20, 2018 press release, which originated the news item, provides a few details, though none of them are particularly technical or scientific and there is no mention of studies (Note: Links have been removed),

Graphene is 200 times stronger than steel and at only a single atom thick it is the thinnest possible material, meaning it has many unique properties. inov-8 is the first brand in the world to use the superlative material in sports footwear, with its G-SERIES shoes available to pre-order from June 22nd [2018] ahead of going on sale from July 12th [2018].

The company first announced its intent to revolutionise the sports footwear industry in December last year. Six months of frenzied anticipation later, inov-8 has now removed all secrecy and let the world see these game-changing shoes.

Michael Price, inov-8 product and marketing director, said: “Over the last 18 months we have worked with the National Graphene Institute at The University of Manchester to bring the world’s toughest grip to the sports footwear market.

“Prior to this innovation, off-road runners and fitness athletes had to choose between a sticky rubber that works well in wet or sweaty conditions but wears down quicker and a harder rubber that is more durable but not quite as grippy. Through intensive research, hundreds of prototypes and thousands of hours of testing in both the field and laboratory, athletes now no longer need to compromise.”

Dr Aravind Vijayaraghavan, Reader in Nanomaterials at The University of Manchester, said: “Using graphene we have developed G-SERIES outsole rubbers that are scientifically tested to be 50% stronger, 50% more elastic and 50% harder wearing.

“We are delighted to put graphene on the shelves of 250 retail stores all over the world and make it accessible to everyone. Graphene is a versatile material with limitless potential and in coming years we expect to deliver graphene technologies in composites, coatings and sensors, many of which will further revolutionise sports products.”

The G-SERIES range is made up of three different shoes, each meticulously designed to meet the needs of athletes. THE MUDCLAW G 260 is for running over muddy mountains and obstacle courses, the TERRAULTRA G 260 for running long distances on hard-packed trails and the F-LITE G 290 for crossfitters working out in gyms. Each includes graphene-enhanced rubber outsoles and Kevlar – a material used in bulletproof vests – on the uppers.

Commenting on the patent-pending technology and the collaboration with The University of Manchester, inov-8 CEO Ian Bailey said: “This powerhouse forged in Northern England is going to take the world of sports footwear by storm. We’re combining science and innovation together with entrepreneurial speed and agility to go up against the major sports brands – and we’re going to win.

“We are at the forefront of a graphene sports footwear revolution and we’re not stopping at just rubber outsoles. This is a four-year innovation project which will see us incorporate graphene into 50% of our range and give us the potential to halve the weight of running/fitness shoes without compromising on performance or durability.”

Graphene is produced from graphite, which was first mined in the Lake District fells of Northern England more than 450 years ago. inov-8 too was forged in the same fells, albeit much more recently in 2003. The brand now trades in 68 countries worldwide.

The scientists who first isolated graphene from graphite were awarded the Nobel Prize in 2010. Building on their revolutionary work, a team of over 300 staff at The University of Manchester has pioneered projects into graphene-enhanced prototypes, from sports cars and medical devices to aeroplanes. Now the University can add graphene-enhanced sports footwear to its list of world-firsts.

A picture of the ‘shoes’ has been provided,

Courtesy: National Graphene Institute at University of Manchester

You can find the company inov-8 here. As for more information about their graphene-enhanced shoes, there’s this, from the company’s ‘graphene webpage‘,

1555: Graphite was first mined in the Lake District fells of Northern England.

2004: Scientists at The University of Manchester isolate graphene from graphite.

2010: The Nobel Prize is awarded to the scientists for their ground-breaking experiments with graphene.

2018: inov-8 launch the first-ever sports footwear to utilise graphene, delivering the world’s toughest grip.

Ground-breaking technology

One atom thick carbon sheet

200 x stronger than steel

Thin, light, flexible, with limitless potential


Previously athletes had to choose between a sticky rubber that works well in wet or sweaty conditions but wears down quicker, and a harder rubber that is more durable but not quite as grippy. Through intensive research, hundreds of prototypes and thousands of hours of testing in both the field and laboratory, athletes now no longer need to compromise. The new rubber we have developed with the National Graphene Institute at The University of Manchester allows us to smash the limits of grip [sic]

The G-SERIES range is made up of three different shoes, each meticulously designed to meet the needs of athletes. Each includes graphene-enhanced rubber outsoles that deliver the world’s toughest grip and Kevlar – a material used in bulletproof vests – on the uppers.

Bulletproof material for running shoes?

As for Canadians eager to try out these shoes, you will likely have to go online or go to the US. Given how recently (June 19, 2018) this occurred, I’m mentioning US president Donald Trump’s comments that Canadians are notorious for buying shoes in the US and smuggling them back across the border into Canada. (Revelatory information for Canadians everywhere.) His bizarre comments occasioned this explanatory June 19, 2018 article by Jordan Weissmann for Slate.com,

During a characteristically rambling address before the National Federation of Independent Businesses on Tuesday [June 19, 2018], Donald Trump darted off into an odd tangent in which he suggested that Canadians were smuggling shoes across the U.S. border in order to avoid their country’s high tariffs.

There was a story two days ago in a major newspaper talking about people living in Canada coming into the United States and smuggling things back into Canada because the tariffs are so massive. The tariffs to get common items back into Canada are so high that they have to smuggle ‘em in. They buy shoes, then they wear ‘em. They scuff ‘em up. They make ‘em sound old or look old. No, we’re treated horribly. [emphasis mine]

Anyone engaged in this alleged practice would be avoiding payment to the Canadian government. How this constitutes poor treatment of the US government and/or US retailers is a bit of a puzzler.

Getting back to Weissmann and his article, he focuses on the source of the US president’s ‘information’.

As for graphene-enhanced ‘shoes’, I hope they are as advertised.

How does sticky tape make graphene?

As I understand it, Andre Geim, one of the two men (the other was Konstantin Novoselov) who first isolated graphene from a block of graphite using sticky tape, is not thrilled that it’s known in some quarters as the graphene sticky tape method. Still, the technique caught the imagination, as Steve Connor’s March 18, 2013 article for the Independent made clear.

It seems scientists are still just as fascinated as anyone else as a February 27, 2018 news item for Nanowerk describes,

Scientists at UCL [University College London] have explained for the first time the mystery of why adhesive tape is so useful for graphene production.

The study, published in Advanced Materials (“Graphene–Graphene Interactions: Friction, Superlubricity, and Exfoliation”), used supercomputers to model the process through which graphene sheets are exfoliated from graphite, the material in pencils.

A February 26, 2018 UCL press release, which originated the news item, provides more detail,

There are various methods for exfoliating graphene, including the famous adhesive tape method developed by Nobel Prize winner Andre Geim. However little has been known until now about how the process of exfoliating graphene using sticky tape works.

Academics at UCL are now able to demonstrate how individual flakes of graphite can be exfoliated to make layers one atom thick. They also reveal that the process of peeling a layer of graphene demands 40% less energy than another common method called shearing. This is expected to have far-reaching impacts for the commercial production of graphene.

“The sticky tape method works rather like peeling egg boxes apart with a vertical motion, it is easier than pulling one horizontally across another when they are neatly stacked,” explained Professor Peter Coveney, Director of the Centre for Computational Science (UCL Chemistry).

“If shearing, then you get held up by this egg carton configuration. But if you peel, you can get them apart much more easily. The polymethyl methacrylate adhesive on traditional sticky tape is ideal for picking up the edge of the graphene sheet so it can be lifted and peeled,” added Professor Coveney.
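Professor Coveney’s egg-box analogy can be caricatured in a few lines of arithmetic. This is purely illustrative (the numbers are invented; the real result comes from large-scale molecular simulation): when peeling, each atomic row is detached from the surface exactly once, whereas when shearing, rows must also repeatedly surmount the corrugation barriers as they slide past one another.

```python
# Toy caricature of peeling vs shearing (invented numbers, not the
# UCL simulation). Units are arbitrary.
n_rows = 100          # hypothetical atomic rows in the flake
adhesion = 1.0        # cost to detach one row from the surface
corrugation = 2 / 3   # extra "egg-box" barrier paid per sliding row

peel_energy = n_rows * adhesion                   # each row detached once
shear_energy = n_rows * (adhesion + corrugation)  # detach + ride the bumps

saving = 1 - peel_energy / shear_energy
print(f"peeling costs {saving:.0%} less energy than shearing")  # → 40%
```

The corrugation value here is chosen so the toy model reproduces the 40% figure reported in the study; it carries no physical meaning.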

Graphite occurs naturally; its basic crystalline structure is stacks of flat sheets of strongly bonded carbon atoms in a honeycomb pattern. Graphite’s many layers are bound together by weak interactions and can easily slide large distances over one another with little friction, due to their superlubricity.

The scientists at UCL simulated an experiment conducted in 2015 at Lawrence Berkeley Laboratory in Berkeley, California, which used a special microscope with atomic resolution to see how graphene flakes move around on a graphite surface.

The supercomputer’s results matched Berkeley’s observations showing that there is less movement when the graphene atoms neatly line up with the atoms below.

“Despite the vast amount of research carried out on graphene since its discovery, it is clear that until now our understanding of its behaviour on an atomic length scale was very poor,” explains PhD student Robert Sinclair (UCL Chemistry).

“The one reason above all others why the material is difficult to use is because it is hard to make. Even now, a dozen years after its discovery, companies have to apply sticky tape methods to pull it apart, as the Laureates did to uncover it; hardly a hi-tech and industrially simple process to implement. We’re now in a position to assist experimentalists to figure out how to prise it apart, or make it to order. That could have big cost implications for the emerging graphene industry,” said Professor Coveney.

Here’s a link to and a citation for the paper,

Graphene–Graphene Interactions: Friction, Superlubricity, and Exfoliation by Robert C. Sinclair, James L. Suter, and Peter V. Coveney. Advanced Materials DOI: 10.1002/adma.201705791 First published: 13 February 2018

This paper is open access.

Equality doesn’t necessarily lead to greater women’s STEM (science, technology, engineering, and mathematics) participation?

It seems counter-intuitive but societies where women have achieved greater equality see less participation by women in STEM (science, technology, engineering and mathematics) than countries where women are treated differently. This rather stunning research was released on February 14, 2018 (yes, Valentine’s Day).

Women, equality, STEM

Both universities involved in this research have made news/press releases available. First, there’s the February 14, 2018 Leeds Beckett University (UK) press release,

Countries with greater gender equality see a smaller proportion of women taking degrees in science, technology, engineering and mathematics (STEM), a new study by Leeds Beckett has found.

Dubbed the ‘gender equality paradox’, the research found that countries such as Albania and Algeria have a greater percentage of women amongst their STEM graduates than countries lauded for their high levels of gender equality, such as Finland, Norway or Sweden.

The researchers, from Leeds Beckett’s School of Social Sciences and the University of Missouri, believe this might be because countries with less gender equality often have little welfare support, making the choice of a relatively highly-paid STEM career more attractive.

The study, published in Psychological Science, also looked at what might motivate girls and boys to choose to study STEM subjects, including overall ability, interest or enjoyment in the subject and whether science subjects were a personal academic strength.

Using data on 475,000 adolescents across 67 countries or regions, the researchers found that while boys’ and girls’ achievement in STEM subjects was broadly similar, science was more likely to be boys’ best subject.

Girls, even when their ability in science equalled or excelled that of boys, were often likely to be better overall in reading comprehension, which relates to higher ability in non-STEM subjects.

Girls also tended to register a lower interest in science subjects. These differences were near-universal across all the countries and regions studied.

This could explain some of the gender disparity in STEM participation, according to Leeds Beckett Professor in Psychology Gijsbert Stoet.

“The further you get in secondary and then higher education, the more subjects you need to drop until you end with just one.

“We are inclined to choose what we are best at and also enjoy. This makes sense and matches common school advice.

“So, even though girls can match boys in terms of how well they do at science and mathematics in school, if those aren’t their best subjects and they are less interested in them, then they’re likely to choose to study something else.”

The researchers also looked at how many girls might be expected to choose further study in STEM based on these criteria.

They took the number of girls in each country who had the necessary ability in STEM and for whom it was also their best subject and compared this to the number of women graduating in STEM.

They found there was a disparity in all countries, but with the gap once again larger in more gender equal countries.

In the UK, 29 per cent of STEM graduates are female, whereas 48 per cent of UK girls might be expected to take those subjects based on science ability alone. This drops to 39 per cent when both science ability and interest in the subject are taken into account.
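Using the UK figures quoted above, the size of the “lost” group is simple to compute. The arithmetic below is mine, just restating the article’s numbers:

```python
# UK figures from the study, as quoted above.
expected_by_ability = 0.48     # girls with sufficient science ability
expected_with_interest = 0.39  # ...with both ability and interest
actual_graduates = 0.29        # actual share of women among STEM grads

lost = expected_with_interest - actual_graduates
print(f"girls 'lost' from the STEM pathway: {lost:.0%}")  # → 10%
```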

Countries with higher gender equality tend also to be welfare states, providing a high level of social security for their citizens.

Professor Stoet said: “STEM careers are generally secure and well-paid but the risks of not following such a path can vary.

“In more affluent countries where any choice of career feels relatively safe, women may feel able to make choices based on non-economic factors.

“Conversely, in countries with fewer economic opportunities, or where employment might be precarious, a well-paid and relatively secure STEM career can be more attractive to women.”

Despite extensive efforts to increase participation of women in STEM, levels have remained broadly stable for decades, but these findings could help target interventions to make them more effective, say the researchers.

“It’s important to take into account that girls are choosing not to study STEM for what they feel are valid reasons, so campaigns that target all girls may be a waste of energy and resources,” said Professor Stoet.

“If governments want to increase women’s participation in STEM, a more effective strategy might be to target the girls who are clearly being ‘lost’ from the STEM pathway: those for whom science and maths are their best subjects and who enjoy it but still don’t choose it.

“If we can understand their motivations, then interventions can be designed to help them change their minds.”

Then, there’s the February 14, 2018 University of Missouri news release, some of which will be repetitive,

The underrepresentation of girls and women in science, technology, engineering and mathematics (STEM) fields occurs globally. Although women currently are well represented in life sciences, they continue to be underrepresented in inorganic sciences, such as computer science and physics. Now, researchers from the University of Missouri and Leeds Beckett University in the United Kingdom have found that as societies become wealthier and more gender equal, women are less likely to obtain degrees in STEM. The researchers call this a “gender-equality paradox.” Researchers also discovered a near-universal sex difference in academic strengths and weaknesses that contributes to the STEM gap. Findings from the study could help refine education efforts and policies geared toward encouraging girls and women with strengths in science or math to participate in STEM fields.

The researchers found that, throughout the world, boys’ academic strengths tend to be in science or mathematics, while girls’ strengths are in reading. Students who have personal strengths in science or math are more likely to enter STEM fields, whereas students with reading as a personal strength are more likely to enter non-STEM fields, according to David Geary, Curators Professor of Psychological Sciences in the MU College of Arts and Science. These sex differences in academic strengths, as well as interest in science, may explain why the sex differences in STEM fields have been stable for decades, and why current approaches to address them have failed.

“We analyzed data on 475,000 adolescents across 67 countries or regions and found that while boys’ and girls’ achievements in STEM subjects were broadly similar in all countries, science was more likely to be boys’ best subject,” Geary said. “Girls, even when their abilities in science equaled or excelled that of boys, often were likely to be better overall in reading comprehension, which relates to higher ability in non-STEM subjects. As a result, these girls tended to seek out other professions unrelated to STEM fields.”

Surprisingly, this trend was larger for girls and women living in countries with greater gender equality. The authors call this a “gender-equality paradox,” because countries lauded for their high levels of gender equality, such as Finland, Norway or Sweden, have relatively few women among their STEM graduates. In contrast, more socially conservative countries such as Turkey or Algeria have a much larger percentage of women among their STEM graduates.

“In countries with greater gender equality, women are actively encouraged to participate in STEM; yet, they lose more girls because of personal academic strengths,” Geary said. “In more liberal and wealthy countries, personal preferences are more strongly expressed. One consequence is that sex differences in academic strengths and interests become larger and have a stronger influence on college and career choices than in more conservative and less wealthy countries, creating the gender-equality paradox.”

The combination of personal academic strengths in reading, lower interest in science, and broader financial security explains why so few women choose a STEM career in highly developed nations.

“STEM careers are generally secure and well-paid but the risks of not following such a path can vary,” said Gijsbert Stoet, Professor in Psychology at Leeds Beckett University. “In more affluent countries where any choice of career feels relatively safe, women may feel able to make choices based on non-economic factors. Conversely, in countries with fewer economic opportunities, or where employment might be precarious, a well-paid and relatively secure STEM career can be more attractive to women.”

Findings from this study could help target interventions to make them more effective, say the researchers. Policymakers should reconsider failing national policies focusing on decreasing the gender imbalance in STEM, the researchers add.

The University of Missouri also produced a brief video featuring Professor David Geary discussing the work,

Here’s a link to and a citation for the paper,

The Gender-Equality Paradox in Science, Technology, Engineering, and Mathematics Education by Gijsbert Stoet, David C. Geary. Psychological Science https://doi.org/10.1177/0956797617741719 First Published February 14, 2018 Research Article

This paper is behind a paywall.

Gender equality and STEM: a deeper dive

Olga Khazan in a February 18, 2018 article for The Atlantic provides additional insight (Note: Links have been removed),

Though their numbers are growing, only 27 percent of all students taking the AP Computer Science exam in the United States are female. The gender gap only grows worse from there: Just 18 percent of American computer-science college degrees go to women. This is in the United States, where many college men proudly describe themselves as “male feminists” and girls are taught they can be anything they want to be.

Meanwhile, in Algeria, 41 percent of college graduates in the fields of science, technology, engineering, and math—or “STEM,” as it’s known—are female. There, employment discrimination against women is rife and women are often pressured to make amends with their abusive husbands.

According to a report I covered a few years ago, Jordan, Qatar, and the United Arab Emirates were the only three countries in which boys are significantly less likely to feel comfortable working on math problems than girls are. In all of the other nations surveyed, girls were more likely to say they feel “helpless while performing a math problem.”

… this line of research, if it’s replicated, might hold useful takeaways for people who do want to see more Western women entering STEM fields. In this study, the percentage of girls who did excel in science or math was still larger than the number of women who were graduating with STEM degrees. That means there’s something in even the most liberal societies that’s nudging women away from math and science, even when those are their best subjects. The women-in-STEM advocates could, for starters, focus their efforts on those would-be STEM stars.

Final thoughts

This work upends notions (mine anyway) about equality and STEM with regard to women’s participation in countries usually described as ‘developed’ as opposed to ‘developing’. I am thankful to have my ideas shaken up and being forced to review my assumptions about STEM participation and equality of opportunity.

John Timmer in a February 19, 2018 posting on the Ars Technica blog offers a critique of the research and its conclusions,

… The countries where the science-degree gender gap is smaller tend to be less socially secure. The researchers suggest that the economic security provided by fields like engineering may have a stronger draw in these countries, pulling more women into the field.

They attempt to use a statistical pathway analysis to see if the data is consistent with this being the case, but the results are inconclusive. It may be right, but there would be at least one other strong factor involved that they have not identified.

Timmer’s piece is well worth reading.

For some reason the discussion about a lack of social safety nets and precarious conditions leading women to greater STEM participation reminds me of a truism about the arts: constraints can force you into greater creativity. Although balance is necessary, as you don’t want to destroy what you’re trying to encourage. In this case, it seems that comfortable lifestyles can lead women to pursue that which comes more easily, whereas women trying to make a better life in difficult circumstances will pursue a more challenging path.

A 3D printed eye cornea and a 3D printed copy of your brain (also: a Brad Pitt connection)

Sometimes it’s hard to keep up with 3D tissue printing news. I have two news bits, one concerning eyes and another concerning brains.

3D printed human corneas

A May 29, 2018 news item on ScienceDaily trumpets the news,

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today [May 29, 2018] in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Here are the proud researchers with their cornea,

Caption: Dr. Steve Swioklo and Professor Che Connon with a dyed cornea. Credit: Newcastle University, UK

A May 30, 2018 Newcastle University press release (also on EurekAlert but published on May 29, 2018), which originated the news item, adds more details,

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.

The stem cells were then shown to culture – or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel – a combination of alginate and collagen – keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Here’s a link to and a citation for the paper,

3D bioprinting of a corneal stroma equivalent by Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research, Volume 173, August 2018, Pages 188–193. DOI: 10.1016/j.exer.2018.05.010 First published online: May 14, 2018.

This paper is behind a paywall.

A 3D printed copy of your brain

I love the title for this May 30, 2018 Wyss Institute for Biologically Inspired Engineering news release: Creating piece of mind by Lindsay Brownell (also on EurekAlert),

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI [magnetic resonance imaging] and CT [computed tomography] scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration – which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, [emphasis mine] Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany – is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation” where a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often over- or underestimates the size of a feature of interest and washes out critical detail.
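To make the thresholding idea concrete, here is a tiny sketch of my own (purely illustrative; it is not the study’s actual pipeline, and the pixel values are invented): every pixel at or above a chosen gray level becomes white, everything below becomes black, and all the gradations in between are lost.

```python
# Hypothetical illustration of auto-thresholding one grayscale image slice.
# Pixels run from 0.0 (black) to 1.0 (white); anything at or above the
# chosen gray level becomes white (1), the rest black (0). Notice that the
# subtle differences between shades disappear entirely.
slice_gray = [
    [0.10, 0.40, 0.60],
    [0.30, 0.50, 0.90],
    [0.20, 0.70, 0.80],
]

THRESHOLD = 0.5  # the shade of gray chosen as the black/white cut-off

binary = [[1 if px >= THRESHOLD else 0 for px in row] for row in slice_gray]
print(binary)  # [[0, 0, 1], [0, 1, 1], [0, 1, 1]]
```

The 0.40 and 0.60 pixels end up on opposite sides of the cut-off even though they differ only slightly, which is exactly the washing-out of detail the researchers describe.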

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
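The dithering idea can be sketched with the classic Floyd–Steinberg error-diffusion algorithm. This is my own illustrative sketch, not the bitmap-generation method the paper’s authors actually used: each pixel is rounded to pure black or white, and the rounding error is pushed onto neighbouring pixels so that the local density of white pixels preserves the original shade of gray.

```python
def dither_floyd_steinberg(gray):
    """Convert a grayscale image (list of lists of 0.0-1.0 values) into a
    dithered bitmap of 0s (black) and 1s (white). Rounding error is spread
    to not-yet-visited neighbours, so local pixel density encodes shading."""
    img = [row[:] for row in gray]  # working copy that accumulates error
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Classic Floyd-Steinberg error-diffusion weights.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1][x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 25%-gray patch comes out with roughly one white pixel in four,
# mimicking how newsprint dot density conveys a mid-gray tone.
patch = [[0.25] * 32 for _ in range(32)]
bitmap = dither_floyd_steinberg(patch)
white_fraction = sum(map(sum, bitmap)) / (32 * 32)
print(round(white_fraction, 2))
```

A two-material 3D printer can then deposit one material for the black pixels and another for the white, layer by layer, which is the basic mechanism the press release is describing.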

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

Here’s an image illustrating the work,

Caption: This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors. Credit: Wyss Institute at Harvard University

Here’s a link to and a citation for the paper,

From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets by Ahmed Hosny, Steven J. Keating, Joshua D. Dilley, Beth Ripley, Tatiana Kelil, Steve Pieper, Dominik Kolb, Christoph Bader, Anne-Marie Pobloth, Molly Griffin, Reza Nezafat, Georg Duda, Ennio A. Chiocca, James R. Stone, James S. Michaelson, Mason N. Dean, Neri Oxman, and James C. Weaver. 3D Printing and Additive Manufacturing. DOI: http://doi.org/10.1089/3dp.2017.0140 Online ahead of print: May 29, 2018.

This paper appears to be open access.

A tangential Brad Pitt connection

It’s a bit of Hollywood gossip. There was some speculation in April 2018 that Brad Pitt was dating Dr. Neri Oxman, who was highlighted in the Wyss Institute news release above. Here’s a sample of an April 13, 2018 posting on Laineygossip (Note: A link has been removed),

“It took him a long time to date, but he is now,” the insider tells PEOPLE. “He likes women who challenge him in every way, especially in the intellect department. Brad has seen how happy and different Amal has made his friend (George Clooney). It has given him something to think about.”

While a Pitt source has maintained he and Oxman are “just friends,” they’ve met up a few times since the fall and the insider notes Pitt has been flying frequently to the East Coast. He dropped by one of Oxman’s classes last fall and was spotted at MIT again a few weeks ago.

Pitt and Oxman got to know each other through an architecture project at MIT, where she works as a professor of media arts and sciences at the school’s Media Lab. Pitt has always been interested in architecture and founded the Make It Right Foundation, which builds affordable and environmentally friendly homes in New Orleans for people in need.

“One of the things Brad has said all along is that he wants to do more architecture and design work,” another source says. “He loves this, has found the furniture design and New Orleans developing work fulfilling, and knows he has a talent for it.”

It’s only been a week since Page Six first broke the news that Brad and Dr Oxman have been spending time together.

I’m fascinated by Oxman’s (and her colleagues’) furniture. Rose Brook writes about one particular Oxman piece in her March 27, 2014 posting for TCT magazine (Note: Links have been removed),

MIT Professor and 3D printing forerunner Neri Oxman has unveiled her striking acoustic chaise longue, which was made using Stratasys 3D printing technology.

Oxman collaborated with Professor W Craig Carter and Composer and fellow MIT Professor Tod Machover to explore material properties and their spatial arrangement to form the acoustic piece.

Christened Gemini, the two-part chaise was produced using a Stratasys Objet500 Connex3 multi-colour, multi-material 3D printer as well as traditional furniture-making techniques and it will be on display at the Vocal Vibrations exhibition at Le Laboratoire in Paris from March 28th 2014.

Oxman, an Architect, Designer and Professor of Media, Arts and Science at MIT, aims with this creation to convey the relationship of twins in the womb through material properties and their arrangement. It was made using both subtractive and additive manufacturing and is part of Oxman’s ongoing exploration of what Stratasys’ ground-breaking multi-colour, multi-material 3D printer can do.

Brook goes on to explain how the chaise was made and the inspiration that led to it. Finally, it’s interesting to note that Oxman was working with Stratasys in 2014 and that this 2018 brain project was developed in a joint collaboration with Stratasys.

That’s it for 3D printing today.