3D printed all liquid electronics

Even after watching the video, I still don’t quite believe it. A March 28, 2018 news item on ScienceDaily announces the work,

Scientists from the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab [or LBNL]) have developed a way to print 3-D structures composed entirely of liquids. Using a modified 3-D printer, they injected threads of water into silicone oil — sculpting tubes made of one liquid within another liquid.

They envision their all-liquid material could be used to construct liquid electronics that power flexible, stretchable devices. The scientists also foresee chemically tuning the tubes and flowing molecules through them, leading to new ways to separate molecules or precisely deliver nanoscale building blocks to under-construction compounds.

A March 26, 2018 Berkeley Lab news release (also on EurekAlert), which originated the news item, describes the work in more detail,

The researchers have printed threads of water between 10 microns and 1 millimeter in diameter, and in a variety of spiraling and branching shapes up to several meters in length. What’s more, the material can conform to its surroundings and repeatedly change shape.

“It’s a new class of material that can reconfigure itself, and it has the potential to be customized into liquid reaction vessels for many uses, from chemical synthesis to ion transport to catalysis,” said Tom Russell, a visiting faculty scientist in Berkeley Lab’s Materials Sciences Division. He developed the material with Joe Forth, a postdoctoral researcher in the Materials Sciences Division, as well as other scientists from Berkeley Lab and several other institutions. They report their research March 24 [2018] in the journal Advanced Materials.

The material owes its origins to two advances: learning how to create liquid tubes inside another liquid, and then automating the process.

For the first step, the scientists developed a way to sheathe tubes of water in a special nanoparticle-derived surfactant that locks the water in place. The surfactant, essentially soap, prevents the tubes from breaking up into droplets. Their surfactant is so good at its job, the scientists call it a nanoparticle supersoap.

The supersoap was achieved by dispersing gold nanoparticles into water and polymer ligands into oil. The gold nanoparticles and polymer ligands want to attach to each other, but they also want to remain in their respective water and oil mediums. The ligands were developed with help from Brett Helms at the Molecular Foundry, a DOE Office of Science User Facility located at Berkeley Lab.

In practice, soon after the water is injected into the oil, dozens of ligands in the oil attach to individual nanoparticles in the water, forming a nanoparticle supersoap. These supersoaps jam together and vitrify, like glass, which stabilizes the interface between oil and water and locks the liquid structures in position.

“This stability means we can stretch water into a tube, and it remains a tube. Or we can shape water into an ellipsoid, and it remains an ellipsoid,” said Russell. “We’ve used these nanoparticle supersoaps to print tubes of water that last for several months.”

Next came automation. Forth modified an off-the-shelf 3-D printer by removing the components designed to print plastic and replacing them with a syringe pump and needle that extrudes liquid. He then programmed the printer to insert the needle into the oil substrate and inject water in a predetermined pattern.

“We can squeeze liquid from a needle, and place threads of water anywhere we want in three dimensions,” said Forth. “We can also ping the material with an external force, which momentarily breaks the supersoap’s stability and changes the shape of the water threads. The structures are endlessly reconfigurable.”
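As a rough illustration of what injecting water "in a predetermined pattern" can mean in practice, here is a minimal sketch that generates waypoints for a helical thread; the radius, pitch, and point spacing are invented for the example and are not the authors' actual toolpath.

```python
import math

def helix_path(radius_mm=5.0, pitch_mm=1.0, turns=3, points_per_turn=100):
    """Generate (x, y, z) waypoints for a helical injection path.

    All parameters are illustrative; a real system would convert these
    waypoints into motion commands for the modified printer's stages.
    """
    path = []
    n = turns * points_per_turn
    for i in range(n + 1):
        theta = 2 * math.pi * i / points_per_turn
        x = radius_mm * math.cos(theta)
        y = radius_mm * math.sin(theta)
        z = pitch_mm * theta / (2 * math.pi)
        path.append((x, y, z))
    return path

waypoints = helix_path()
print(len(waypoints))   # 301 waypoints for 3 turns
print(waypoints[0])     # starts at (radius, 0, 0)
```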

This image illustrates how the water is printed,

These schematics show the printing of water in oil using a nanoparticle supersoap. Gold nanoparticles in the water combine with polymer ligands in the oil to form an elastic film (nanoparticle supersoap) at the interface, locking the structure in place. (Credit: Berkeley Lab)

Here’s a link to and a citation for the paper,

Reconfigurable Printed Liquids by Joe Forth, Xubo Liu, Jaffar Hasnain, Anju Toor, Karol Miszta, Shaowei Shi, Phillip L. Geissler, Todd Emrick, Brett A. Helms, Thomas P. Russell. Advanced Materials https://doi.org/10.1002/adma.201707603 First published: 24 March 2018

This paper is behind a paywall.

A 3D printed eye cornea and a 3D printed copy of your brain (also: a Brad Pitt connection)

Sometimes it’s hard to keep up with 3D tissue printing news. I have two news bits, one concerning eyes and another concerning brains.

3D printed human corneas

A May 29, 2018 news item on ScienceDaily trumpets the news,

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today [May 29, 2018] in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Here are the proud researchers with their cornea,

Caption: Dr. Steve Swioklo and Professor Che Connon with a dyed cornea. Credit: Newcastle University, UK

A May 30, 2018 Newcastle University press release (also on EurekAlert but published on May 29, 2018), which originated the news item, adds more details,

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.
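As a sketch of the concentric-circle idea (not the Newcastle team's actual code), the following generates ring radii for a cornea-sized disc plus a rough print-time estimate; the 11 mm diameter is a typical corneal dimension, while the line width and extrusion speed are illustrative assumptions.

```python
import math

def concentric_rings(outer_d_mm=11.0, line_width_mm=0.5):
    """Radii for concentric extruded circles filling a cornea-sized disc."""
    radii = []
    r = outer_d_mm / 2
    while r > 0:
        radii.append(round(r, 3))
        r -= line_width_mm
    return radii

def print_time_s(radii, feed_mm_s=10.0):
    """Rough print time: total circumference divided by extrusion speed."""
    total_len = sum(2 * math.pi * r for r in radii)
    return total_len / feed_mm_s

rings = concentric_rings()
print(len(rings))                     # 11 rings at 0.5 mm spacing
print(round(print_time_s(rings), 1))  # well under the 10 minutes reported
```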

The stem cells were then shown to culture – or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel – a combination of alginate and collagen – keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Here’s a link to and a citation for the paper,

3D bioprinting of a corneal stroma equivalent by Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research, Volume 173, August 2018, Pages 188–193. DOI: 10.1016/j.exer.2018.05.010 (Epub ahead of print May 14, 2018)

This paper is behind a paywall.

A 3D printed copy of your brain

I love the title for this May 30, 2018 Wyss Institute for Biologically Inspired Engineering news release: Creating piece of mind by Lindsay Brownell (also on EurekAlert),

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI [magnetic resonance imaging] and CT [computed tomography] scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration – which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, [emphasis mine] Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany – is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation” where a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often overstates or understates the size of a feature of interest and washes out critical detail.

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
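To make the threshold-versus-dither contrast concrete, here is a minimal generic sketch of both operations on a small grayscale patch, using standard Floyd–Steinberg error diffusion rather than the paper's actual pipeline; the patch values and cutoff are arbitrary.

```python
def threshold(img, t=128):
    """Hard thresholding: every pixel becomes 0 or 255 based on one cutoff."""
    return [[255 if p >= t else 0 for p in row] for row in img]

def dither(img):
    """Floyd-Steinberg error diffusion: each pixel's quantization error is
    pushed onto its unprocessed neighbours, so the *density* of black pixels
    tracks the original grayscale values."""
    h, w = len(img), len(img[0])
    work = [[float(p) for p in row] for row in img]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 255 if old >= 128 else 0
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                work[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:
                work[y + 1][x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                work[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch: thresholding wipes it to solid white, losing the
# shading, while dithering keeps roughly half the pixels black.
gray = [[128] * 8 for _ in range(8)]
print(threshold(gray)[0])  # all 255: the shading information is lost
```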

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

Here’s an image illustrating the work,

Caption: This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors. Credit: Wyss Institute at Harvard University

Here’s a link to and a citation for the paper,

From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets by Ahmed Hosny, Steven J. Keating, Joshua D. Dilley, Beth Ripley, Tatiana Kelil, Steve Pieper, Dominik Kolb, Christoph Bader, Anne-Marie Pobloth, Molly Griffin, Reza Nezafat, Georg Duda, Ennio A. Chiocca, James R. Stone, James S. Michaelson, Mason N. Dean, Neri Oxman, and James C. Weaver. 3D Printing and Additive Manufacturing http://doi.org/10.1089/3dp.2017.0140 Online Ahead of Print: May 29, 2018

This paper appears to be open access.

A tangential Brad Pitt connection

It’s a bit of Hollywood gossip. There was some speculation in April 2018 that Brad Pitt was dating Dr. Neri Oxman, who was highlighted in the Wyss Institute news release. Here’s a sample from an April 13, 2018 posting on Laineygossip (Note: A link has been removed),

“It took him a long time to date, but he is now,” the insider tells PEOPLE. “He likes women who challenge him in every way, especially in the intellect department. Brad has seen how happy and different Amal has made his friend (George Clooney). It has given him something to think about.”

While a Pitt source has maintained he and Oxman are “just friends,” they’ve met up a few times since the fall and the insider notes Pitt has been flying frequently to the East Coast. He dropped by one of Oxman’s classes last fall and was spotted at MIT again a few weeks ago.

Pitt and Oxman got to know each other through an architecture project at MIT, where she works as a professor of media arts and sciences at the school’s Media Lab. Pitt has always been interested in architecture and founded the Make It Right Foundation, which builds affordable and environmentally friendly homes in New Orleans for people in need.

“One of the things Brad has said all along is that he wants to do more architecture and design work,” another source says. “He loves this, has found the furniture design and New Orleans developing work fulfilling, and knows he has a talent for it.”

It’s only been a week since Page Six first broke the news that Brad and Dr Oxman have been spending time together.

I’m fascinated by Oxman’s (and her colleagues’) furniture. Rose Brook writes about one particular Oxman piece in her March 27, 2014 posting for TCT magazine (Note: Links have been removed),

MIT Professor and 3D printing forerunner Neri Oxman has unveiled her striking acoustic chaise longue, which was made using Stratasys 3D printing technology.

Oxman collaborated with Professor W Craig Carter and Composer and fellow MIT Professor Tod Machover to explore material properties and their spatial arrangement to form the acoustic piece.

Christened Gemini, the two-part chaise was produced using a Stratasys Objet500 Connex3 multi-colour, multi-material 3D printer as well as traditional furniture-making techniques and it will be on display at the Vocal Vibrations exhibition at Le Laboratoire in Paris from March 28th 2014.

An architect, designer and Professor of Media Arts and Sciences at MIT, Oxman aims with her creation to convey the relationship of twins in the womb through material properties and their arrangement. It was made using both subtractive and additive manufacturing and is part of Oxman’s ongoing exploration of what Stratasys’ ground-breaking multi-colour, multi-material 3D printer can do.

Brook goes on to explain how the chaise was made and the inspiration that led to it. Finally, it’s interesting to note that Oxman was working with Stratasys in 2014 and that this 2018 brain project is being developed in a joint collaboration with Stratasys.

That’s it for 3D printing today.

A cheaper way to make artificial organs

In the quest to develop artificial organs, the University of British Columbia (UBC) is not the first research institution that comes to my mind. It seems I may need to reevaluate now that UBC (Okanagan) has announced some work on bio-inks and artificial organs in a Sept. 12, 2017 news release (also on EurekAlert) by Patty Wellborn,

A new bio-ink that may support a more efficient and inexpensive fabrication of human tissues and organs has been created by researchers at UBC’s Okanagan campus.

Keekyoung Kim, an assistant professor at UBC Okanagan’s School of Engineering, says this development can accelerate advances in regenerative medicine.

Using techniques like 3D printing, scientists are creating bio-material products that function alongside living cells. These products are made using a number of biomaterials including gelatin methacrylate (GelMA), a hydrogel that can serve as a building block in bio-printing. This type of bio-material—called bio-ink—is made of living cells but can be printed and molded into specific organ or tissue shapes.

The UBC team analyzed the physical and biological properties of three different GelMA hydrogels—porcine skin, cold-water fish skin and cold-soluble gelatin. They found that hydrogel made from cold-soluble gelatin (gelatin which dissolves without heat) was by far the best performer and a strong candidate for future 3D organ printing.

“A big drawback of conventional hydrogel is its thermal instability. Even small changes in temperature cause significant changes in its viscosity or thickness,” says Kim. “This makes it problematic for many room temperature bio-fabrication systems, which are compatible with only a narrow range of hydrogel viscosities and which must generate products that are as uniform as possible if they are to function properly.”

Kim’s team created two new hydrogels—one from fish skin, and one from cold-soluble gelatin—and compared their properties to those of porcine skin GelMA. Although fish skin GelMA had some benefits, cold-soluble GelMA was the top overall performer. Not only could it form healthy tissue scaffolds, allowing cells to successfully grow and adhere to it, but it was also thermally stable at room temperature.

The UBC team also demonstrated that cold-soluble GelMA produces consistently uniform droplets at room temperature, thus making it an excellent choice for use in 3D bio-printing.

“We hope this new bio-ink will help researchers create improved artificial organs and lead to the development of better drugs, tissue engineering and regenerative therapies,” Kim says. “The next step is to investigate whether or not cold-soluble GelMA-based tissue scaffolds can be used long-term both in the laboratory and in real-world transplants.”

Three times cheaper than porcine skin gelatin, cold-soluble gelatin is used primarily in culinary applications.

Here’s a link to and a citation for the paper,

Comparative study of gelatin methacrylate hydrogels from different sources for biofabrication applications by Zongjie Wang, Zhenlin Tian, Fredric Menard, and Keekyoung Kim. Biofabrication, Volume 9, Number 4 Special issue on Bioinks https://doi.org/10.1088/1758-5090/aa83cf Published 21 August 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

4D printing, what is that?

According to an April 12, 2017 news item on ScienceDaily, shapeshifting in response to environmental stimuli is the fourth dimension (I have a link to a posting about 4D printing with another fourth dimension),

A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.

The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.

“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”

The research was reported April 12 [2017] in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.

An April 12, 2017 Singapore University of Technology and Design (SUTD) press release on EurekAlert provides more detail,

4D printing is an emerging technology that allows a 3D-printed component to transform its structure by exposing it to heat, light, humidity, or other environmental stimuli. This technology extends the shape creation process beyond 3D printing, resulting in additional design flexibility that can lead to new types of products that can adjust their functionality in response to the environment, in a pre-programmed manner. However, 4D printing generally involves complex and time-consuming post-processing steps to mechanically programme the component. Furthermore, the materials are often limited to soft polymers, which limits their applicability in structural scenarios.

A group of researchers from the SUTD, Georgia Institute of Technology, Xi’an Jiaotong University and Zhejiang University has introduced an approach that significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process. This allows high-resolution 3D-printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by using heat. This approach can help save printing time and materials used by up to 90%, while completely eliminating the time-consuming mechanical programming process from the design and manufacturing workflow.

“Our approach involves printing composite materials where at room temperature one material is soft but can be programmed to contain internal stress, and the other material is stiff,” said Dr. Zhen Ding of SUTD. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress. This results in a change – often dramatic – in the product shape.” This new shape is fixed when the product is cooled, with good mechanical stiffness. The research demonstrated many interesting shape changing parts, including a lattice that can expand by almost 8 times when heated.

This new shape becomes permanent and the composite material will not return to its original 3D-printed shape, upon further heating or cooling. “This is because of the shape memory effect,” said Prof. H. Jerry Qi of Georgia Tech. “In the two-material composite design, the stiff material exhibits shape memory, which helps lock the transformed shape into a permanent one. Additionally, the printed structure also exhibits the shape memory effect, i.e. it can then be programmed into further arbitrary shapes that can always be recovered to its new permanent shape, but not its 3D-printed shape.”

Said SUTD’s Prof. Martin Dunn, “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution complex 3D reprogrammable products; it promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the onset to inhabit multiple configurations during service.”

Here’s a video,


Uploaded on Apr 17, 2017

A research team led by the Singapore University of Technology and Design’s (SUTD) Associate Provost of Research, Professor Martin Dunn, has come up with a new and simplified 4D printing method that uses a 3D printer to rapidly create 3D objects, which can permanently transform into a range of different shapes in response to heat.

Here’s a link to and a citation for the paper,

Direct 4D printing via active composite materials by Zhen Ding, Chao Yuan, Xirui Peng, Tiejun Wang, H. Jerry Qi, and Martin L. Dunn. Science Advances  12 Apr 2017: Vol. 3, no. 4, e1602890 DOI: 10.1126/sciadv.1602890

This paper is open access.

Here is a link to a post about another 4th dimension, time,

4D printing: a hydrogel orchid (Jan. 28, 2016)

Hollywood and neurosurgery

Usually a story about Hollywood and science (in this case, neurosurgery) is focused on how scientifically accurate the portrayal is. This time the situation has been reversed and science has borrowed from Hollywood. From an April 25, 2017 Johns Hopkins University School of Medicine news release (also on EurekAlert), Note: A link has been removed,

A team of computer engineers and neurosurgeons, with an assist from Hollywood special effects experts, reports successful early tests of a novel, lifelike 3D simulator designed to teach surgeons to perform a delicate, minimally invasive brain operation.

A report on the simulator that guides trainees through an endoscopic third ventriculostomy (ETV) was published in the Journal of Neurosurgery: Pediatrics on April 25 [2017]. The procedure uses endoscopes, which are small, computer-guided tubes and instruments, to treat certain forms of hydrocephalus, a condition marked by an excessive accumulation of cerebrospinal fluid and pressure on the brain. ETV is a minimally invasive procedure that short-circuits the fluid back into normal channels in the brain, eliminating the need for implantation of a shunt, a lifelong device with the associated complications of a foreign body.

“For surgeons, the ability to practice a procedure is essential for accurate and safe performance of the procedure. Surgical simulation is akin to a golfer taking a practice swing,” says Alan R. Cohen, M.D., professor of neurosurgery at the Johns Hopkins University School of Medicine and a senior author of the report. “With surgical simulation, we can practice the operation before performing it live.”

While cadavers are the traditional choice for such surgical training, Cohen says they are scarce, expensive, nonreusable, and most importantly, unable to precisely simulate the experience of operating on the problem at hand, which Cohen says requires a special type of hand-eye coordination he dubs “Nintendo Neurosurgery.”

In an effort to create a more reliable, realistic and cost-effective way for surgeons to practice ETV, the research team worked with 3D printing and special effects professionals to create a lifelike, anatomically correct, full-size head and brain with the touch and feel of human skull and brain tissue.

The fusion of 3D printing and special effects resulted in a full-scale reproduction of a 14-year-old child’s head, modeled after a real patient with hydrocephalus, one of the most common problems seen in the field of pediatric neurosurgery. Special features include an electronic pump to reproduce flowing cerebrospinal fluid and brain pulsations. One version of the simulator is so realistic that it has facial features, hair, eyelashes and eyebrows.

To test the model, Cohen and his team randomly paired four neurosurgery fellows and 13 medical residents to perform ETV on either the ultra-realistic simulator or a lower-resolution simulator, which had no hair, lashes or brows.

After completing the simulation, fellows and residents each rated the simulator using a five-point scale. On average, both the surgical fellows and the residents rated the simulator more highly (4.88 out of 5) on its effectiveness for ETV training than on its aesthetic features (4.69). The procedures performed by the trainees were also recorded and later graded by two fully trained neurosurgeons who were blinded to the trainees’ identities and stage of training.

The neurosurgeons assessed the trainees’ performance using criteria such as “flow of operation,” “instrument handling” and “time and motion.”

Neurosurgeons consistently rated the fellows higher than the residents on all criteria measured, which accurately reflected the fellows’ more advanced training and knowledge and demonstrated the simulator’s ability to distinguish between novice and expert surgeons.

Cohen says that further tests are needed to determine whether the simulator will actually improve performance in the operating room. “With this unique assortment of investigators, we were able to develop a high-fidelity simulator for minimally invasive neurosurgery that is realistic, reliable, reusable and cost-effective. The models can be designed to be patient-specific, enabling the surgeon to practice the operation before going into the operating room,” says Cohen.

Other authors on this paper include Roberta Rehder from the Johns Hopkins School of Medicine, and Peter Weinstock, Sanjay P. Parbhu, Peter W. Forbes and Christopher Roussin from Boston Children’s Hospital.

Funding for the study was provided by a grant from the Boston Investment Conference. The research team acknowledges the contribution of FracturedFX, an Emmy Award-winning special effects group from Hollywood, California, in the development of the surgical models.

The investigators report no financial stake or interests in the success of the simulator.

Here’s what the model looks like,

Caption: A. Low-fidelity simulated surgical model for ETV. B. High-fidelity model with hair, eyelashes and eyebrows. Credit: Copyright AANS. Used with permission.

An April 25, 2017 Journal of Neurosurgery news release on EurekAlert details the refinements applied to this replica (Note: There is some repetitive material),

….

A neurosurgery residency training program generally lasts seven years–longer than any other medical specialty. Trainees log countless hours observing surgeries performed by experienced neurosurgeons and developing operative skills in practice labs before touching patients. It is challenging to create a realistic surgical experience outside an operating room. Cadaveric specimens and virtual reality programs have been used, but they are costly and do not provide as realistic an experience as desired.

The new training simulation model described in this paper is a full-scale reproduction of the head of an adolescent patient with hydrocephalus. The external appearance of the head is uncannily accurate, as is the internal neuroanatomy.

One failing of 3D models is the stiffness of most sculpting material. This problem was overcome by addition of special-effects materials that reproduce the textures of external skin and internal brain structures. In addition, the operative environment in this training model is amazingly alive, with pulsations of a simulated basilar artery and ventricles as well as movement of cerebrospinal fluid. These advances provide visual and tactile feedback to the trainee that closely resembles that of the surgical experience.

The procedure selected to test the new training model was endoscopic third ventriculostomy (ETV), a minimally invasive surgical procedure increasingly used to treat hydrocephalus. The goal of ETV is to create a hole in the floor of the third ventricle. This provides a new pathway by which excess cerebrospinal fluid can circulate.

During ETV, the surgeon drills a small hole in the skull of the patient and inserts an endoscope into the ventricular system. The endoscope accommodates a lighted miniature video camera to visualize the operative site and specialized surgical instruments suited to perform operative tasks through the endoscope. The video camera sends a direct feed to external monitors in the operating room so that surgeons can clearly see what they are doing.

To evaluate the usefulness of the training simulation model of ETV, the researchers solicited feedback from users (neurosurgical residents and fellows) and their teachers. Trainees were asked to respond to a 14-item questionnaire focused on the external and internal appearances of the model and its tactile feel during simulated surgery (face validity) as well as on how closely the simulated procedure reproduced an actual ETV (content validity). The usefulness of the model in assessing trainees’ performances was then evaluated by two attending neurosurgeons blinded to the identity and training status (post-graduate year of training) of the residents and fellows (construct validity).

The neurosurgical residents and fellows gave high scores to the training model for both face and content validity (mean scores of 4.69 and 4.88, respectively; 5 would be a perfect score). The performance scores given to individual trainees by the attending neurosurgeons clearly distinguished novice surgeons from more experienced surgeons, accurately reflecting the trainees’ post-graduate years of training.

The training model described in this paper is not limited to hydrocephalus or treatment with ETV. The simulated head accommodates replaceable plug-and-play components to provide a fresh operative field for each training session. A variety of diseased or injured brain scenarios could be tested using different plug-and-play components. In addition, the ability to pop in new components between practice sessions greatly reduces training costs compared to other models.

When asked about the paper, the senior author, Alan R. Cohen, MD, at Johns Hopkins Hospital, said, “This unique collaboration of interdisciplinary experts resulted in the creation of an ultra-realistic 3D surgical training model. Simulation has become increasingly important for training in minimally invasive neurosurgery. It also has the potential to revolutionize training for all surgical procedures.”

Here’s a link to and a citation for the paper,

Creation of a novel simulator for minimally invasive neurosurgery: fusion of 3D printing and special effects by Peter Weinstock, Roberta Rehder, Sanjay P. Parbhu, Peter W. Forbes, Christopher J. Roussin, and Alan R. Cohen. Journal of Neurosurgery: Pediatrics, published online, ahead of print, April 25, 2017; DOI: 10.3171/2017.1.PEDS16568

This paper appears to be open access.

Saving modern art with 3D-printed artwork

I first wrote about the NanoRestART project in an April 4, 2016 post highlighting work which focuses on a problem unique to modern and contemporary art, the rapid deterioration of the plastics and synthetic materials used to create the art and the lack of conservation techniques for preserving those materials. A Dec. 22, 2016 news item on phys.org provides an update on the project,

Many contemporary artworks are endangered due to their extremely fast degradation processes. NANORESTART—a project developing nanomaterials to protect and restore this cultural heritage—has created a 3-D printed artwork with a view to testing restoration methods.

The 3D printed sculpture was designed by engineer-artist Tom Lomax – a UK-based sculptor and painter specialised in 3D-printed colour sculpture. Drawing inspiration from the aesthetic of early 20th century artworks, the sculpture was made using state-of-the-art 3D printing processes and can be downloaded for free. [I believe the downloadable files are available at the end of the paper in Heritage Science in the section titled Additional files, just prior to the References {see below for citation and link to the paper}.]

Fig. 1
Images of the RP artwork “Out of the Cauldron” designed by Tom Lomax produced with the most common RP Technologies: (1) stereolithography (SLA®) (2) polyjet (3) 3D printing (3DP) (4) selective laser sintering (SLS). Before (above) and after (below) photodegradation
Courtesy: Heritage Science

A Dec. 21, 2016 Cordis press release, which originated the news item, provides more information about the artist and his 3D printed sculpture,

‘As an artist I previously had little idea of the conservation threat facing contemporary art – preferring to leave these issues for conservators and focus on the creative process. But while working on this project with UCL [University College London] I began to realise that artists themselves have a crucial role to play,’ Lomax explains.

The structure has been printed using the most common rapid prototyping (RP) technologies, which are gaining popularity among designers and artists. It will be a key tool for the project team to test how these structures degrade and come up with solutions to better preserve them.

As Carolien Coon, researcher at the UCL Institute for Sustainable Heritage, notes, ‘Art is being transformed by fast-changing new technologies and it is therefore vital to preempt conservation issues, rather than react to them, if we are to preserve our best contemporary works for future generations. This research project will benefit both artists and academics alike – but ultimately it is in the best interests of the public that art and science combine to preserve works.’

The NANORESTART team subjected the artwork to accelerated testing, discovering that many 3D-printing technologies use materials that degrade particularly rapidly. This is particularly true of polymers, whose cultural heritage status is so recent that conservation experience with them is almost nonexistent.

Preserving or not: an intricate question for artists

The experiments were part of a UCL paper entitled ‘Preserving Rapid Prototypes: A Review’, published in late November in Heritage Science. In this review, Carolien Coon and her team critically assessed the technologies most commonly used to tackle the degradation of materials, noting that ‘to conserve RP artworks it is necessary to have an understanding of the process of creation, the different technologies involved, the materials used as well as their chemical and mechanical properties.’

Besides technical concerns, the paper also voices those of artists, in particular the importance of the original artefact and the debate around the appropriateness of preventing the degradation process of artworks. Whilst digital conservation of these artworks would prevent degradation and allow designs to be printed on-demand, some artists argue that the original artefact is actually the one with artistic value as it references a specific time and place. On the other hand, some artists actually embrace and accept the natural degradation of their art as part of its charm.

With two more years to go before its completion, NANORESTART will undoubtedly bring valuable results, resources and reflections to both conservators and artists. The nanomaterials it aims to develop will put the EU at the forefront of a conservation market estimated at some EUR 5 billion per year.

Here’s a link to and a citation for the paper,

Preserving rapid prototypes: a review by Carolien Coon, Boris Pretzel, Tom Lomax, and Matija Strlič. Heritage Science 2016 4:40 DOI: 10.1186/s40494-016-0097-y Published: 22 November 2016

©  The Author(s) 2016

This paper is open access.

Korea Advanced Institute of Science and Technology (KAIST) at summer 2016 World Economic Forum in China

From the Ideas Lab at the 2016 World Economic Forum in Davos to offering expertise at the 2016 World Economic Forum in Tianjin, China, taking place June 26 – 28, 2016.

Here’s more from a June 24, 2016 KAIST news release on EurekAlert,

Scientific and technological breakthroughs are more important than ever as a key agent to drive social, economic, and political changes and advancements in today’s world. The World Economic Forum (WEF), an international organization that provides one of the broadest engagement platforms to address issues of major concern to the global community, will discuss the effects of these breakthroughs at its 10th Annual Meeting of the New Champions, a.k.a., the Summer Davos Forum, in Tianjin, China, June 26-28, 2016.

Three professors from the Korea Advanced Institute of Science and Technology (KAIST) will join the Annual Meeting and offer their expertise in the fields of biotechnology, artificial intelligence, and robotics to explore the conference theme, “The Fourth Industrial Revolution and Its Transformational Impact.” The Fourth Industrial Revolution, a term coined by WEF founder, Klaus Schwab, is characterized by a range of new technologies that fuse the physical, digital, and biological worlds, such as the Internet of Things, cloud computing, and automation.

Distinguished Professor Sang Yup Lee of the Chemical and Biomolecular Engineering Department will speak at the Experts Reception to be held on June 25, 2016 on the topic of “The Summer Davos Forum and Science and Technology in Asia.” On June 27, 2016, he will participate in two separate discussion sessions.

In the first session, entitled “What If Drugs Are Printed from the Internet?,” Professor Lee will discuss the future of medicine as it is shaped by advances in biotechnology and 3D printing technology with Nita A. Farahany, a Duke University professor, under the moderation of Clare Matterson, the Director of Strategy at Wellcome Trust in the United Kingdom. The discussants will note recent developments in the way patients receive their medicine, for example, downloading drugs directly from the internet and producing yeast strains that make opioids for pain treatment through systems metabolic engineering, and predict how these emerging technologies will transform the landscape of the pharmaceutical industry in the years to come.

In the second session, “Lessons for Life,” Professor Lee will talk about how to nurture life-long learning and creativity to support personal and professional growth necessary in an era of the new industrial revolution.

During the Annual Meeting, Professors Jong-Hwan Kim of the Electrical Engineering School and David Hyunchul Shim of the Aerospace Department will host, together with researchers from Carnegie Mellon University and AnthroTronix, an engineering research and development company, a technological exhibition on robotics. Professor Kim, the founder of the internationally renowned Robot World Cup, will showcase his humanoid micro-robots that play soccer, displaying various cutting-edge technologies such as image processing, artificial intelligence, walking, and balancing. Professor Shim will present a human-like robotic piloting system, PIBOT, which autonomously operates a simulated flight program, grabbing control sticks and guiding an airplane from takeoff to landing.

In addition, the two professors will join Professor Lee, who is also a moderator, to host a KAIST-led session on June 26, 2016, entitled “Science in Depth: From Deep Learning to Autonomous Machines.” Professors Kim and Shim will explore new opportunities and challenges in their fields from machine learning to autonomous robotics including unmanned vehicles and drones.

Since 2011, KAIST has been participating in the World Economic Forum’s two flagship conferences, the January and June Davos Forums, to introduce outstanding talents, share their latest research achievements, and interact with global leaders.

KAIST President Steve Kang said, “It is important for KAIST to be involved in global talks that identify issues critical to humanity and seek answers to solve them, where our skills and knowledge in science and technology could play a meaningful role. The Annual Meeting in China will become another venue to accomplish this.”

I mentioned KAIST and the Ideas Lab at the 2016 Davos meeting in this Nov. 20, 2015 posting and was able to clear up my (and possible other people’s) confusion as to what the Fourth Industrial revolution might be in my Dec. 3, 2015 posting.

Printing in midair

Dexter Johnson’s May 16, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) was my first introduction to something wonder-inducing (Note: Links have been removed),

While the growth of 3-D printing has led us to believe we can produce just about any structure with it, the truth is that it still falls somewhat short.

Researchers at Harvard University are looking to realize a more complete range of capabilities for 3-D printing in fabricating both planar and freestanding 3-D structures and do it relatively quickly and on low-cost plastic substrates.

In research published in the journal Proceedings of the National Academy of Sciences (PNAS), the researchers extruded a silver-nanoparticle ink and annealed it with a laser so quickly that the system let them easily “write” free-standing 3-D structures.

While this may sound humdrum, what really takes one’s breath away with this technique is that it can create 3-D structures seemingly suspended in air without any signs of support as though they were drawn there with a pen.

Laser-assisted direct ink writing allowed this delicate 3D butterfly to be printed without any auxiliary support structure (Image courtesy of the Lewis Lab/Harvard University)

A May 16, 2016 Harvard University press release (also on EurekAlert) provides more detail about the work,

“Flat” and “rigid” are terms typically used to describe electronic devices. But the increasing demand for flexible, wearable electronics, sensors, antennas and biomedical devices has led a team at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS) and Wyss Institute for Biologically Inspired Engineering to innovate an eye-popping new way of printing complex metallic architectures – as though they are seemingly suspended in midair.

“I am truly excited by this latest advance from our lab, which allows one to 3D print and anneal flexible metal electrodes and complex architectures ‘on-the-fly,’ ” said Lewis [Jennifer Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and Wyss Core Faculty member].

Lewis’ team used an ink composed of silver nanoparticles, sending it through a printing nozzle and then annealing it using a precisely programmed laser that applies just the right amount of energy to drive the ink’s solidification. The printing nozzle moves along x, y, and z axes and is combined with a rotary print stage to enable freeform curvature. In this way, tiny hemispherical shapes, spiral motifs, even a butterfly made of silver wires less than the width of a hair can be printed in free space within seconds. The printed wires exhibit excellent electrical conductivity, almost matching that of bulk silver.

When compared to conventional 3D printing techniques used to fabricate conductive metallic features, laser-assisted direct ink writing is not only superior in its ability to produce curvilinear, complex wire patterns in one step, but also in the sense that localized laser heating enables electrically conductive silver wires to be printed directly on low-cost plastic substrates.

According to the study’s first author, Wyss Institute Postdoctoral Fellow Mark Skylar-Scott, Ph.D., the most challenging aspect of honing the technique was optimizing the nozzle-to-laser separation distance.

“If the laser gets too close to the nozzle during printing, heat is conducted upstream which clogs the nozzle with solidified ink,” said Skylar-Scott. “To address this, we devised a heat transfer model to account for temperature distribution along a given silver wire pattern, allowing us to modulate the printing speed and distance between the nozzle and laser to elegantly control the laser annealing process ‘on the fly.’ ”
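Skylar-Scott doesn’t spell out the heat transfer model, but the clogging tradeoff he describes can be illustrated with a textbook fin-conduction estimate: temperature along a thin printed wire falls off roughly exponentially with distance from the laser spot, so there is a minimum nozzle-to-laser gap that keeps the nozzle end below the temperature at which the ink solidifies. The exponential form and every number below are my own illustrative assumptions, not the Lewis lab’s actual model,

```python
import math

def min_nozzle_laser_gap(t_laser, t_ambient, t_clog, decay_length_mm):
    """Toy 1-D conduction estimate: along a thin wire, temperature decays as
    T(x) = T_amb + (T_laser - T_amb) * exp(-x / L) with distance x from the
    laser spot (L is a thermal decay length). Returns the smallest
    nozzle-to-laser separation (mm) keeping the nozzle below t_clog."""
    return decay_length_mm * math.log((t_laser - t_ambient) / (t_clog - t_ambient))

# Hypothetical numbers: laser spot at 600 C, room at 25 C,
# ink solidifies (clogs the nozzle) above 80 C, decay length 0.5 mm.
gap = min_nozzle_laser_gap(600.0, 25.0, 80.0, 0.5)
```

Under these made-up numbers the minimum gap works out to roughly 1.2 mm; the real system additionally modulates print speed on the fly, which this static estimate ignores.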

The result is that the method can produce not only sweeping curves and spirals but also sharp angular turns and directional changes written into thin air with silver inks, opening up near limitless new potential applications in electronic and biomedical devices that rely on customized metallic architectures.

Seeing is believing, eh?

Here’s a link to and a citation for the paper,

Laser-assisted direct ink writing of planar and 3D metal architectures by Mark A. Skylar-Scott, Suman Gunasekaran, and Jennifer A. Lewis. PNAS [Proceedings of the National Academy of Sciences] 2016 doi: 10.1073/pnas.1525131113

I believe this paper is open access.

A question: I wonder what conditions are necessary before you can 3D print something in midair? Much as I’m dying to try this at home, I’m pretty sure that’s not possible.

YBC 7289: a 3,800-year-old mathematical text and 3D printing at Yale University

1,300 years before Pythagoras came up with the theorem associated with his name, a school kid in Babylon formed a disc out of clay and scratched out the theorem while the surface was drying. According to an April 12, 2016 news item on phys.org, the Babylonians got to the theorem first (Note: A link has been removed),

Thirty-eight hundred years ago, on the hot river plains of what is now southern Iraq, a Babylonian student did a bit of schoolwork that changed our understanding of ancient mathematics. The student scooped up a palm-sized clump of wet clay, formed a disc about the size and shape of a hamburger, and let it dry down a bit in the sun. On the surface of the moist clay the student drew a diagram that showed the people of the Old Babylonian Period (1,900–1,700 B.C.E.) fully understood the principles of the “Pythagorean Theorem” 1300 years before Greek geometer Pythagoras was born, and were also capable of calculating the square root of two to six decimal places.

Today, thanks to the Internet and new digital scanning methods being employed at Yale, this ancient geometry lesson continues to be used in modern classrooms around the world.
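The sexagesimal (base-60) digits inscribed on YBC 7289 are conventionally read as 1;24,51,10. Taking that standard reading as given, a few lines of arithmetic show just how close the Babylonian value comes to the true square root of two:

```python
def sexagesimal_to_decimal(digits):
    """Convert base-60 digits (integer part first) to a float,
    e.g. [1, 24, 51, 10] -> 1 + 24/60 + 51/60**2 + 10/60**3."""
    return sum(d / 60**i for i, d in enumerate(digits))

babylonian = sexagesimal_to_decimal([1, 24, 51, 10])  # ~1.41421296
error = abs(babylonian - 2 ** 0.5)                    # ~6e-7
```

The error is about 6 × 10⁻⁷, i.e. the tablet’s value matches √2 to within one part in a million, roughly six decimal digits of accuracy, in line with the claim in the news item.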

Just when you think it’s all about the theorem, the story which originated in an April 11, 2016 Yale University news release by Patrick Lynch takes a turn,

“This geometry tablet is one of the most-reproduced cultural objects that Yale owns — it’s published in mathematics textbooks the world over,” says Professor Benjamin Foster, curator of the Babylonian Collection, which includes the tablet. It’s also a popular teaching tool in Yale classes. “At the Babylonian Collection we have a very active teaching and learning function, and we regard education as one of the core parts of our mission,” says Foster. “We have graduate and undergraduate groups in our collection classroom every week.”

The tablet, formally known as YBC 7289, “Old Babylonian Period Mathematical Text,” came to Yale in 1909 as part of a much larger collection of cuneiform tablets assembled by J. Pierpont Morgan and donated to Yale. In the ancient Mideast, cuneiform writing was created by pressing a sharp stylus into the surface of a soft clay tablet to produce wedge-like impressions representing pictographic words and numbers. Morgan’s donation of tablets and other artifacts formed the nucleus of the Yale Babylonian Collection, which now incorporates 45,000 items from the ancient Mesopotamian kingdoms.

Discoverying [sic] the tablet’s mathematical significance

The importance of the geometry tablet was first recognized by science historians Otto Neugebauer and Abraham Sachs in their 1945 book “Mathematical Cuneiform Texts.”

“Ironically, mathematicians today are much more fascinated with the Babylonians’ ability to accurately calculate irrational numbers like the square root of two than they are with the geometry demonstrations,” notes associate Babylonian Collection curator Agnete Lassen.

“The Old Babylonian Period produced many tablets that show complex mathematics, but it also produced things you might not expect from a culture this old, such as grammars, dictionaries, and word lists,” says Lassen. “One of the two main languages spoken in early Babylonia was dying out, and people were careful to document and save what they could on cuneiform tablets. It’s ironic that almost 4,000 years ago people were thinking about cultural preservation, [emphasis mine] and actively preserving their learning for future generations.”

This business about ancient peoples trying to preserve culture and learning for future generations suggests that the efforts in Palmyra, Syria (my April 6, 2016 post about 3D printing parts of Palmyra) are born of an age-old impulse. And then the story takes another turn and becomes a 3D printing story (from the Yale University news release),

Today, however, the tablet is a fragile lump of clay that would not survive routine handling in a classroom. In looking for alternatives that might bring the highlights of the Babylonian Collection to a wider audience, the collection’s curators partnered with Yale’s Institute for the Preservation of Cultural Heritage (IPCH) to bring the objects into the digital world.

Scanning at the IPCH

The IPCH Digitization Lab’s first step was to do reflectance transformation imaging (RTI) on each of fourteen Babylonian Collection objects. RTI is a photographic technique that enables a student or researcher to look at a subject with many different lighting angles. That’s particularly important for something like a cuneiform tablet, where there are complex 3D marks incised into the surface. With RTI you can freely manipulate the lighting, and see subtle surface variations that no ordinary photograph would reveal.
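The news release doesn’t say which RTI variant Yale used, but a common implementation, polynomial texture maps, fits a small per-pixel polynomial in the light direction to the stack of photographs; the pixel can then be “relit” from any angle by evaluating that polynomial. A minimal single-pixel sketch, with made-up light directions and coefficients:

```python
import numpy as np

def design(lights):
    """Biquadratic basis in the light direction's (lu, lv) components,
    as used in polynomial texture maps: 6 terms per observation."""
    lu, lv = lights[:, 0], lights[:, 1]
    return np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)

# Eight captures of one pixel under different (made-up) light directions.
lights = np.array([[0.0, 0.0], [0.5, 0.0], [-0.5, 0.0], [0.0, 0.5],
                   [0.0, -0.5], [0.3, 0.3], [-0.3, 0.3], [0.3, -0.3]])
true_coeffs = np.array([-0.2, -0.1, 0.05, 0.4, 0.3, 0.6])  # synthetic "pixel"
intensities = design(lights) @ true_coeffs

# Fit the per-pixel polynomial by least squares (one pixel shown;
# a real RTI capture does this independently for every pixel).
coeffs, *_ = np.linalg.lstsq(design(lights), intensities, rcond=None)

# Relight the pixel from a light direction that was never photographed.
relit = design(np.array([[0.2, -0.4]])) @ coeffs
```

With noiseless synthetic data the fit recovers the coefficients exactly; with real photographs the same least-squares machinery smooths over capture noise.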

Chelsea Graham of the IPCH Digitization Lab and her colleague Yang Ying Yang of the Yale Computer Graphics Group then did laser scanning of the tablet to create a three-dimensional geometric model that can be freely rotated onscreen. The resulting 3D models can be combined with many other types of digital imaging to give researchers and students a virtual tablet onscreen, and the same data can be used to create a 3D printed facsimile that can be freely used in the classroom without risk to the delicate original.

3D printing digital materials

While virtual models on the computer screen have proved to be a valuable teaching and research resource, even the most accurate 3D model on a computer screen doesn’t convey the tactile impact and physicality of the real object. Yale’s Center for Engineering Innovation and Design has collaborated with the IPCH on a number of cultural heritage projects, and the center’s assistant director, Joseph Zinter, has used its 3D printing expertise on a wide range of engineering, basic science, and cultural heritage projects.

“Whether it’s a sculpture, a rare skull, or a microscopic neuron or molecule highly magnified, you can pick up a 3D printed model and hold it, and it’s a very different and important way to understand the data. Holding something in your hand is a distinctive learning experience,” notes Zinter.

Sharing cultural heritage projects in the digital world

Once a cultural artifact has entered the digital world there are practical problems with how to share the information with students and scholars. IPCH postdoctoral fellows Goze Akoglu and Eleni Kotoula are working with Yale computer science faculty member Holly Rushmeier to create an integrated collaborative software platform to support the research and sharing of cultural heritage artifacts like the Babylonian tablet.

“Right now cultural heritage professionals must juggle many kinds of software, running several types of specialized 2D and 3D media viewers as well as conventional word processing and graphics programs. Our vision is to create a single virtual environment that accommodates many kinds of media, as well as supporting communication and annotation within the project,” says Kotoula.

The wide sharing and disseminating of cultural artifacts is one advantage of digitizing objects, notes professor Rushmeier, “but the key thing about digital is the power to study large virtual collections. It’s not about scanning and modeling the individual object. When the scanned object becomes part of a large collection of digital data, then machine learning and search analysis tools can be run over the collection, allowing scholars to ask questions and make comparisons that aren’t possible by other means,” says Rushmeier.

Reflecting on the process that brings state-of-the-art digital tools to one of humanity’s oldest forms of writing, Graham said, “It strikes me that this tablet has made a very long journey from classroom to classroom. People sometimes think the digital or 3D-printed models are just a novelty, or just for exhibitions, but you can engage and interact much more with the 3D printed object, or 3D model on the screen. I think the creators of this tablet would have appreciated the efforts to bring this fragile object back to the classroom.”

There is also a video highlighting the work,