Category Archives: science

James Clerk Maxwell and his science mashup unified theories of magnetism, electricity, and optics

It’s the 150th anniversary of a series of equations relating electric charges and electric and magnetic fields that are still being explored. Jon Butterworth in a Nov. 22, 2015 posting on the Guardian science blog network explains (Note: A link has been removed),

The chances are that you are reading this article on some kind of electronic technology. You are definitely seeing it via visible light, unless you have a braille or audio converter. And it probably got to you via wifi or a mobile phone signal. All of those things are understood in terms of the relationships between electric charges and electric and magnetic fields summarised in Maxwell’s [James Clerk Maxwell] equations, published by the Royal Society in 1865, 150 years ago.

Verbally, the equations can be summarised as something like:

Electric and magnetic fields make electric charges move. Electric charges cause electric fields, but there are no magnetic charges. Changes in magnetic fields cause electric fields, and vice versa.

The equations specify precisely how it all happens, but that is the gist of it.
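For readers who want to see the symbols behind that summary, here are the four equations in the modern differential (SI) form found in most textbooks (Maxwell’s own 1865 paper wrote them out as a longer set of component equations):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad
\nabla \cdot \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

The two divergence equations are the “electric charges cause electric fields, but there are no magnetic charges” part of the summary; the two curl equations are the “changes in magnetic fields cause electric fields, and vice versa” part.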

Butterworth got a rare opportunity to see the original manuscript,

Original manuscript of Maxwell’s seminal paper. Photograph: Jon Butterworth/Royal Society

I love this description from Butterworth,

It was submitted in 1864 but, in a situation familiar to scientists everywhere, was held up in peer review. There’s a letter, dated March 1865, from William Thomson (later Lord Kelvin) saying he was sorry for being slow, that he’d read most of it and it seemed pretty good (“decidedly suitable for publication”).

Then, there’s this,

The equations seem to have been very much a bottom-up affair, in that Maxwell collected together a number of known laws which were used to describe various experimental results, and (with a little extra ingredient of his own) fitted them into a unified framework. What is amazing is how much that framework then reveals, both in terms of deep physical principles, and rich physical phenomena.

I’m not excerpting any part of Butterworth’s description of how Maxwell fit these equations together for his unification theory as I think it should be read in its totality.

The section on quantum mechanics is surprising,

Now, one thing Maxwell’s equations don’t contain is quantum mechanics [emphasis mine]. They are classical equations. But if you take the quantum mechanical description of an electron, and you enforce the same charge conservation law/voltage symmetry that was contained in the classical Maxwell’s equations, something marvellous happens [emphasis mine]. The symmetry is denoted “U(1)”, and if you enforce it locally – that is, you say that you have to be allowed to make different U(1) type changes to electrons at different points in space – you actually generate the quantum mechanical version of Maxwell’s equations out of nowhere [emphasis mine]. You produce the equations that describe the photon, and the whole of quantum electrodynamics.
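For the mathematically inclined, the local U(1) change Butterworth describes is the standard textbook gauge transformation of quantum electrodynamics. Roughly, with ψ the electron field, A_μ the electromagnetic potential, e the electron charge, and α(x) an arbitrary position-dependent phase:

```latex
\psi(x) \;\to\; e^{\,i\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \;\to\; A_\mu(x) - \frac{1}{e}\,\partial_\mu \alpha(x)
```

Insisting that the electron’s equations keep the same form under this position-dependent phase change only works if a field A_μ exists to absorb the extra terms, and that field turns out to be the photon.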

I encourage you to read Butterworth’s Nov. 22, 2015 posting where he also mentions two related art/science projects and has embedded a video animation of the principles discussed in his posting.

For anyone unfamiliar with Butterworth, there’s this description at the Guardian,

Jon Butterworth is a physics professor at University College London. He is a member of the UCL High Energy Physics group and works on the Atlas experiment at Cern’s Large Hadron Collider. His book Smashing Physics: The Inside Story of the Hunt for the Higgs was published in May 2014

A view to controversies about nanoparticle drug delivery, sticky-flares, and a PNAS surprise

Despite all the excitement and claims for nanoparticles as vehicles for drug delivery to ‘sick’ cells, there is at least one substantive problem: the drug-laden nanoparticles don’t actually enter the interior of the cell. They are held in a kind of cellular ‘waiting room’.

Leonid Schneider in a Nov. 20, 2015 posting on his For Better Science blog describes the process in more detail,

A large body of scientific nanotechnology literature is dedicated to the biomedical aspect of nanoparticle delivery into cells and tissues. The functionalization of the nanoparticle surface is designed to ensure their specificity at targeting only a certain type of cells, such as cancer cells. Other technological approaches aim at the cargo design, in order to ensure the targeted release of various biologically active agents: small pharmacological substances, peptides or entire enzymes, or nucleotides such as regulatory small RNAs or even genes. There is however a main limitation to this approach: though cells do readily take up nanoparticles through specific membrane-bound receptor interaction (endocytosis) or randomly (pinocytosis), these nanoparticles hardly ever truly reach the inside of the cell, namely its nucleocytoplasmic space. Solid nanoparticles are in fact continuously surrounded by the very same membrane barrier they first interacted with when entering the cell. These outer-cell membrane compartments mature into endosomal and then lysosomal vesicles, where their cargo is subjected to low pH and enzymatic digestion. The nanoparticles, though seemingly inside the cell, remain actually outside. …

What follows is a stellar piece featuring counterclaims, along with Schneider’s own journalistic research, regarding scientific claims that the problem of gaining entry to a cell’s true interior has been addressed by technologies developed in two different labs.

Having featured one of the technologies here in a July 24, 2015 posting titled Sticky-flares nanotechnology to track and observe RNA (ribonucleic acid) regulation, and having been contacted a couple of times by one of the scientists, Raphaël Lévy from the University of Liverpool (UK), challenging the claims made (Lévy’s responses can be found in the comments section of the July 2015 posting), I thought a follow-up of sorts was in order.

Scientific debates (then and now)

Scientific debates and controversies are part and parcel of the scientific process and what most outsiders, such as myself, don’t realize is how fraught the process can be. For a good example from the past, there’s Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (from its Wikipedia entry), Note: Links have been removed,

Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (published 1985) is a book by Steven Shapin and Simon Schaffer. It examines the debate between Robert Boyle and Thomas Hobbes over Boyle’s air-pump experiments in the 1660s.

The style seems more genteel than what a contemporary Canadian or US audience is accustomed to but Hobbes and Boyle (and proponents of both sides) engaged in bruising communication.

There was a lot at stake then and now. It’s not just the power, prestige, and money, as powerfully motivating as they are; it’s the research itself. Scientists work for years to achieve breakthroughs or to add more to our common store of knowledge. It’s painstaking work and, if you work at something for a long time, you tend to be invested in it. Saying you’ve wasted ten years of your life looking at the problem the wrong way or have misunderstood your data is not easy.

As for the current debate, Schneider’s description gives no indication that there is rancour between any of the parties but it does provide a fascinating view of two scientists challenging one of the US’s nanomedicine rockstars, Chad Mirkin. The following excerpt traces the claimed breakthrough into the interior of the cell through three generations of naming conventions (Nano-Flares, also known by the trade name SmartFlares, and the successor technology Sticky-Flares), Note: Links have been removed,

The next family of allegedly nucleocytoplasmic nanoparticles which Lévy turned his attention to, was that of the so called “spherical nucleic acids”, developed in the lab of Chad Mirkin, multiple professor and director of the International Institute for Nanotechnology at the Northwestern University, USA. These so called “Nano-Flares” are gold nanoparticles, functionalized with fluorophore-coupled oligonucleotides matching the messenger RNA (mRNA) of interest (Prigodich et al., ACS Nano 3:2147-2152, 2009; Seferos et al., J Am. Chem.Soc. 129:15477-15479, 2007). The mRNA detection method is such that the fluorescence is initially quenched by the gold nanoparticle proximity. Yet when the oligonucleotide is displaced by the specific binding of the mRNA molecules present inside the cell, the fluorescence becomes detectable and serves thus as quantitative read-out for the intracellular mRNA abundance. Exactly this is where concerns arise. To find and bind mRNA, spherical nucleic acids must leave the endosomal compartments. Is there any evidence that Nano-Flares ever achieve this and reach intact the nucleocytoplasmatic space, where their target mRNA is?

Lévy’s lab has focused its research on the commercially available analogue of the Nano-Flares, based on the patent to Mirkin and Northwestern University and sold by Merck Millipore under the trade name of SmartFlares. These were described by Mirkin as “a powerful and prolific tool in biology and medical diagnostics, with ∼ 1,600 unique forms commercially available today”. The work, led by Lévy’s postdoctoral scientist David Mason, now available in post-publication process at ScienceOpen and on Figshare, found no experimental evidence for SmartFlares to be ever found outside the endosomal membrane vesicles. On the contrary, the analysis by several complementary approaches, i.e., electron, fluorescence and photothermal microscopy, revealed that the probes are retained exclusively within the endosomal compartments.

In fact, even Merck Millipore was apparently well aware of this problem when the product was developed for the market. As I learned, Merck performed a number of assays to address the specificity issue. Multiple hundred-fold induction of mRNA by biological cell stimulation (confirmed by quantitative RT-PCR) led to no significant changes in the corresponding SmartFlare signal. Similarly, biological gene downregulation or experimental siRNA knock-down had no effect on the corresponding SmartFlare fluorescence. Cell lines confirmed as negative for a certain biomarker proved highly positive in a SmartFlare assay. Live cell imaging showed the SmartFlare signal to be almost entirely mitochondrial, inconsistent with reported patterns of the respective mRNA distributions. Elsewhere however, cyanine dye-labelled oligonucleotides were found to unspecifically localise to mitochondria (Orio et al., J. RNAi Gene Silencing 9:479-485, 2013), which might account for the often observed punctate SmartFlare signal.

More recently, Mirkin lab has developed a novel version of spherical nucleic acids, named Sticky-Flares (Briley et al., PNAS 112:9591-9595, 2015), which has also been patented for commercial use. The claim is that “the Sticky-flare is capable of entering live cells without the need for transfection agents and recognizing target RNA transcripts in a sequence-specific manner”. To confirm this, Lévy used the same approach as for the striped nanoparticles [not excerpted here]: he approached Mirkin by email and in person, requesting the original microscopy data from this publication. As Mirkin appeared reluctant, Lévy invoked the rules for data sharing by the journal PNAS, the funder NSF as well as the Northwestern University. After finally receiving Mirkin’s thin-optical microscopy data by air mail, Lévy and Mason re-analyzed it and determined the absence of any evidence for endosomal escape, while all Sticky-Flare particles appeared to be localized exclusively inside vesicular membrane compartments, i.e., endosomes (Mason & Levy, bioRxiv 2015).

I encourage you to read Schneider’s Nov. 20, 2015 posting in its entirety as these excerpts can’t do justice to it.

The PNAS surprise

PNAS (Proceedings of the National Academy of Sciences) published one of Mirkin’s papers on ‘Sticky-flares’ and is where scientists Raphaël Lévy and David Mason submitted a letter outlining their concerns with the ‘Sticky-flares’ research. Here’s the response as reproduced in Lévy’s Nov. 16, 2015 posting on his Rapha-Z-Lab blog,

Dear Dr. Levy,

I regret to inform you that the PNAS Editorial Board has declined to publish your Letter to the Editor. After careful consideration, the Board has decided that your letter does not contribute significantly to the discussion of this paper.

Thank you for submitting your comments to PNAS.

Sincerely yours,
Inder Verma

Judge for yourself; Lévy and Mason’s letter can be found here (pdf) and here.


My primary interest in this story is the view it provides of the scientific process and of the importance of, and difficulty associated with, these debates.

I can’t venture an opinion about the research or the counterarguments other than to say that Lévy and Mason’s thoughtful challenge bears more examination than PNAS is inclined to accord it. If their conclusions or Chad Mirkin’s are wrong, let that be determined in an open process.

I’ll leave the very last comment to Schneider, who is both writer and cartoonist, from his Nov. 20, 2015 posting,


Café Scientifique (Vancouver, Canada) on climate change and rise of complex life on Nov. 24, 2015 and Member of Parliament Joyce Murray’s Paris Climate Conference breakfast meeting

On Tuesday, November 24, 2015 at 7:30 pm in the back room of The Railway Club (2nd floor of 579 Dunsmuir St. [at Seymour St.]), Café Scientifique will be hosting a talk about climate change and the rise of complex life (from the Nov. 12, 2015 announcement),

Our speaker for the evening will be Dr. Mark Jellinek.  The title of his talk is:

The Formation and Breakup of Earth’s Supercontinents and the Remarkable Link to Earth’s Climate and the Rise of Complex Life

Earth history is marked by the intermittent formation and breakup of “supercontinents”, where all the land mass is organized much like a completed jigsaw puzzle centered at the equator or pole of the planet. Such events disrupt the mantle convective motions that cool our planet, affecting the volcanic and weathering processes that maintain Earth’s remarkably hospitable climate, in turn. In this talk I will explore how the last two supercontinental cycles impelled Earth into profoundly different climate extremes: a ~150 million year long cold period involving protracted global glaciations beginning about 800 million years ago and a ~100 million year long period of extreme warming beginning about 170 million years ago. One of the most provocative features of the last period of global glaciation is the rapid emergence of complex, multicellular animals about 650 million years ago. Why global glaciation might stimulate such an evolutionary bifurcation is, however, unclear. Predictable environmental stresses related to effects of the formation and breakup of the supercontinent Rodinia on ocean chemistry and Earth’s surface climate may play a crucial and unexpected role that I will discuss.

Dr. Jellinek is a professor in the Dept. of Earth, Ocean and Atmospheric Sciences at the University of British Columbia; his research interests include volcanology, geodynamics, planetary science, and geological fluid mechanics. You can find out more about Dr. Jellinek and his work here.

Joyce Murray and the Paris Climate Conference (sold out)

Joyce Murray is a Canadian Member of Parliament (Liberal) for the riding of Vancouver Quadra who hosts a regular breakfast meeting where topics of interest (child care, seniors, transportation, the arts, big data, etc.) are discussed. From a Nov. 13, 2015 email announcement,

You are invited to our first post-election Vancouver Quadra MP Breakfast Connections on November 27th at Enigma Restaurant, for a discussion with Dr. Mark Jaccard on why the heat will be on world leaders in Paris, in the days leading to December 12th,  at the Paris Climate Conference (COP 21).

After 20 years of UN negotiations, the world expects a legally binding universal agreement on climate to keep temperature increases below 2°C! The climate heat will especially be on laggards like Canada and Australia’s new Prime Ministers. What might be expected of the Right Honorable Justin Trudeau and his provincial premiers? What are the possible outcomes of COP21?

Dr. Jaccard has worked with leadership in countries like China and the United States, and helped develop British Columbia’s innovative Climate Action Plan and Carbon Tax.

Join us for this unique opportunity to engage with a climate policy expert who has participated in this critical global journey. From the occasion of the 1992 Rio Earth Summit resulting in the UN Framework Convention on Climate Change (UNFCCC), through the third Conference of Parties’ (COP3) Kyoto Protocol, to COP21 today, the building blocks for a binding international solution have been assembled. What’s still missing?

Mark has been a professor in the School of Resource and Environmental Management at Simon Fraser University since 1986 and is a global leader and consultant on structuring climate mitigation solutions. Former Chair and CEO of the British Columbia Utilities Commission, he has published over 100 academic papers, most of these related to his principal research focus: the design and application of energy-economy models that assess the effectiveness of sustainable energy and climate policies.

When: Friday November 27th 7:30 to 9:00AM

Where: Enigma Restaurant 4397 west 10th Avenue (at Trimble)

Cost: $20 includes a hot buffet breakfast; $10 for students (cash only please)

RSVP by emailing or call 604-664-9220


They’re not even taking names for a waiting list. You can find out more about Dr. Jaccard’s work here.

Lockheed Martin upgrades to 1000+ Qubit D-Wave system

D-Wave Systems, a Canadian quantum computing company, seems to be making new business announcements on a weekly basis. After last week’s US Los Alamos National Laboratory announcement (Nov. 12, 2015 posting), there’s a Nov. 16, 2015 news item on Nanotechnology Now,

Harris & Harris Group, Inc. (NASDAQ:TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has entered into a multi-year agreement with Lockheed Martin to upgrade the company’s 512-qubit D-Wave Two™ quantum computer to the new D-Wave 2X™ system with 1,000+ qubits.

A Nov. 16, 2015 D-Wave Systems news release provides more details about the deal,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has entered into a multi-year agreement with Lockheed Martin (NYSE: LMT) to upgrade the company’s 512-qubit D-Wave Two™ quantum computer to the new D-Wave 2X™ system with 1,000+ qubits. This represents the second system upgrade since Lockheed Martin became D-Wave’s first customer in 2011 with the purchase of a 128 qubit D-Wave One™ system. The agreement includes the system, maintenance and associated professional services.

“Our mission is to solve complex challenges, advance scientific discovery and deliver innovative solutions to our customers, which requires expertise in the most advanced technologies,” said Greg Tallant, Lockheed Martin fellow and lead for the University of Southern California-Lockheed Martin Quantum Computation Center (QCC). “Through our continued investment in D-Wave technology, we are able to push the boundaries of quantum computing and apply the latest technologies to address the real-world problems being faced by our customers.”

For quantum computing, the performance gain over traditional computing is most evident in exceedingly complex computational problems. This could be in areas such as validating the performance of software or vehicle planning and scheduling. With the new D-Wave system, Lockheed Martin researchers will be able to explore solutions for significantly larger computational problems with improved accuracy and execution time.

The new system will be hosted at the University of Southern California-Lockheed Martin Quantum Computation Center, which first began exploring the power of quantum computing with the D-Wave One, the world’s first quantum computer.

The installation of the D-Wave 2X system will be completed in January 2016.
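As an aside for readers wondering what a quantum annealer like the D-Wave 2X actually computes: it searches for low-energy settings of many binary variables in optimization problems of the QUBO (quadratic unconstrained binary optimization) form. The toy below is purely illustrative and solved by classical brute force; the coefficients are invented and the code has nothing to do with D-Wave’s own software or API.

```python
from itertools import product

# Toy QUBO: minimize the sum of Q[i, j] * x[i] * x[j] over binary vectors x.
# Quantum annealers target this class of problem, but with ~1,000 variables,
# where brute force (2**1000 assignments) is hopeless.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms on the diagonal
    (0, 1): 2.0, (1, 2): 2.0,                  # pairwise couplings
}

def energy(x):
    """Energy of a binary assignment x under the coefficient dictionary Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustively check all 2**3 assignments and keep the lowest-energy one.
best = min(product([0, 1], repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```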

Who knows what next week will bring for D-Wave, which by the way is located in Vancouver, Canada or, more accurately, Burnaby?

STEM for refugees and disaster relief

Just hours prior to the terrorist bombings in Paris (Friday, Nov. 13, 2015), Tash Reith-Banks published a Nov. 13, 2015 essay (one of a series) in the Guardian about science, technology, engineering, and mathematics (STEM) as those specialties apply to humanitarian aid, with a special emphasis on the Syrian refugee crisis.

This first essay focuses on how engineering and mathematics are essential when dealing with crises (from Reith-Banks’s Nov. 13, 2015 essay), Note: Links have been removed,

Engineering is a clear starting point: sanitation, shelter and supply lines are all essential in any crisis. As Martin McCann, CEO at RedR, which trains humanitarian NGO workers says: “There is the obvious work in providing water and sanitation and shelter. By shelter, we mean not only shelter or housing for disaster-affected people or refugees, but also structures to store both food and non-food items. Access is always critical, so once again engineers are needed to build roads or in some cases temporary landing strips.”

Emergency structures need to be light and fast to transport and erect, but tend not to be durable. One recent development comes from engineers Peter Brewin and Will Crawford of Concrete Canvas. The pair have developed a rapid-setting concrete-impregnated fabric that requires only air and water to harden into a water-proof, fire-resistant construction. This has been used to create rapidly deployable concrete shelters that can be carried in a bag and set up in an hour.

Here’s what one of the concrete shelters looks like,

A Concrete Canvas shelter. Once erected the structure takes 24 hours to harden, and then can be further insulated with earth or snow if necessary. Photograph: Gareth Phillips for the Guardian

There are many kinds of crises which can lead to a loss of shelter, access to water and food, and diminished safety and health as Reith-Banks also notes in a passage featuring mathematics (Note: A link has been removed),

Maths might seem a far cry from the sort of practical innovation described above, but of course it’s the root of great logistics. Alistair Clark from the University of the West of England is using advanced mathematical modelling to improve humanitarian supply chains to ensure aid is sent exactly where it is needed. Part of the Newton Mobility scheme, Clark’s project will partner with Brazilian disaster relief agencies and develop ways of modelling everything from landslides to torrential downpours in order to create sophisticated humanitarian supply chains that can rapidly adapt to a range of possible disaster scenarios and changing circumstances.

In a similar vein, Professor Amr Elnashai, founder and co-editor of the Journal of Earthquake Engineering, works in earthquake-hit areas to plan humanitarian relief for future earthquakes. He recently headed a large research and development effort funded by the Federal Emergency Management Agency in the USA (FEMA), to develop a computer model of the impact of earthquakes on the central eight states in the USA. This included social impact, temporary housing allocation, disaster relief, medical and educational care, as well as engineering damage and its economic impact.

Reith-Banks also references nanotechnology (Note: A link has been removed),

… Up to 115 people die every hour in Africa from diseases linked to contaminated drinking water and poor sanitation, particularly in the wake of conflicts and environmental disasters. Dr Askwar Hilonga recently won the Royal Academy of Engineering Africa Prize, which is dedicated to African inventions with the potential to bring major social and economic benefits to the continent. Hilonga has invented a low cost, sand-based water filter. The filter combines nanotechnology with traditional sand-filtering methods to provide safe drinking water without expensive treatment facilities.  …

Dr. Hilonga, who is based in Tanzania, was featured here in a June 16, 2015 posting about the Royal Academy of Engineering Prize, his research, and his entrepreneurial efforts.

Reith-Banks’s essay provides a valuable and unexpected perspective on the humanitarian crises which afflict this planet, and I’m looking forward to the rest of the series.


US Los Alamos National Laboratory catches the D-Wave (buys a 1000+ Qubit quantum computer from D-Wave)

It can be a euphoric experience making a major technical breakthrough (June 2015), selling to a new large customer (Nov. 2015), and impressing your important customers so they upgrade to the new system (Oct. 2015), all within a few short months. D-Wave Systems (a Vancouver-based quantum computer company) certainly has cause to experience it given the events of the last six weeks or so. Yesterday, in a Nov. 11, 2015 D-Wave news release, the company trumpeted its sale of a 1000+ Qubit system (Note: Links have been removed),

D-Wave Systems Inc., the world’s first quantum computing company, announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system. Los Alamos, a multidisciplinary research institution engaged in strategic science on behalf of national security, will lead a collaboration within the Department of Energy and with select university partners to explore the capabilities and applications of quantum annealing technology, consistent with the goals of the government-wide National Strategic Computing Initiative. The National Strategic Computing Initiative, created by executive order of President Obama in late July [2015], is intended “to maximize [the] benefits of high-performance computing (HPC) research, development, and deployment.”

“Los Alamos is a global leader in high performance computing and a pioneer in the application of new architectures to solve critical problems related to national security, energy, the environment, materials, health and earth science,” said Robert “Bo” Ewald, president of D-Wave U.S. “As we work jointly with scientists and engineers at Los Alamos we expect to be able to accelerate the pace of quantum software development to advance the state of algorithms, applications and software tools for quantum computing.”

A Nov. 11, 2015 news item on Nanotechnology Now is written from the company’s venture capitalist’s perspective,

Harris & Harris Group, Inc. (NASDAQ:TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that Los Alamos National Laboratory will acquire and install the latest D-Wave quantum computer, the 1000+ qubit D-Wave 2X™ system.

The news about the Los Alamos sale comes only weeks after D-Wave announced renewed agreements with Google, NASA (US National Aeronautics and Space Administration), and the Universities Space Research Association (USRA) in the aftermath of a technical breakthrough. See my Oct. 5, 2015 posting for more details about the agreements, the type of quantum computer D-Wave sells, and news of interesting and related research in Australia. Cracking the 512 qubit barrier also occasioned a posting here (June 26, 2015) where I described the breakthrough, the company, and included excerpts from an Economist article which mentioned D-Wave in its review of research in the field of quantum computing.

Congratulations to D-Wave!


Science and the movies (Bond’s Spectre and The Martian)

There’s some nanotechnology in the new James Bond movie, Spectre, according to Johnny Brayson in his Nov. 5, 2015 (?) article for Bustle (Note: A link has been removed),

James Bond has always been known for his gadgets, and although Daniel Craig’s version of the character has been considerably less doohickey-heavy than past iterations, he’s still managed to make use of a few over the years, from his in-car defibrillator in Casino Royale to his biometric-coded gun in Skyfall. But Spectre, the newest Bond film, changes up the formula and brings more gadgets than fans have seen in years. There are returning favorites like a tricked out Aston Martin and an exploding watch, but there’s also a new twist on an old gadget that allows Bond to be tracked by his bosses, an injected microchip that records his every move. …

To Bond fans, though, the technology isn’t totally new. In Casino Royale, Bond is injected with a microchip that tracks his location and monitors his vital signs. However, when he’s captured by the bad guys, the device is cut out of his arm, rendering it useless. MI6 seems to have learned their lesson in Spectre, because this time around Bond is injected with Smart Blood, consisting of nanotechnology that does the same thing while flowing microscopically through his veins. As for whether it could really happen, the answer is not yet, but someday it could be.

Brayson provides an introduction to some of the exciting developments taking place scientifically in an intriguing way by relating those developments to a James Bond movie. Unfortunately, some of his details are wrong. For example, he describes a single microchip introduced subcutaneously (under the skin) as if it were synonymous with ‘smart blood’, which would actually consist of many, many microchips prowling your bloodstream.

So, enjoy the article but exercise some caution. For example, this part in his article is mostly right (Note: Links have been removed),

However, there does actually exist nanotechnology that has been safely inserted into a human body — just not for the purposes of tracking.  Some “nanobots”, microscopic robots, have been used within the human eye to deliver drugs directly to the area that needs them [emphasis mine], and the idea is that one day similar nanobots will be able to be injected into one’s bloodstream to administer medication or even perform surgery. Some scientists even believe that a swarm of nanobots in the bloodstream could eventually make humans immune to disease, as the bots would simply destroy or fix any issues as soon as they arrive.

According to a Jan. 30, 2015 article by Jacopo Prisco for CNN, scientists at ETH Zurich were planning to start human clinical trials to test ‘micro or nanobots’ in the human eye. I cannot find any additional information about the proposed trials. Similarly, Israeli researcher Ido Bachelet announced a clinical trial of DNA nanobots on one patient to cure their leukemia (my Jan. 7, 2015 posting). An unsuccessful attempt to get updated information can be found in a May 2015 Reddit Futurology posting.

The Martian

That film has been doing very well and, for the most part, seems to have gotten kudos for its science. However, for those who like to dig down for more information, Jeffrey Kluger’s Sept. 30, 2015 article for Time magazine expresses some reservations about the science while enthusing over its quality as a film,

… Go see The Martian. But still: Don’t expect all of the science to be what it should be. The hard part about good science fiction has always been the fiction part. How many liberties can you take and how big should they be before you lose credibility? In the case of The Martian, the answer is mixed.

The story’s least honest device is also its most important one: the massive windstorm that sweeps astronaut Mark Watney (Matt Damon) away, causing his crew mates to abandon him on the planet, assuming he has been killed. That sets the entire castaway tale into motion, but on a false note, because while Mars does have winds, its atmosphere is barely 1% of the density of Earth’s, meaning it could never whip up anything like the fury it does in the story.

“I needed a way to force the astronauts off the planet, so I allowed myself some leeway,” Weir conceded in a statement accompanying the movie’s release. …

It was exceedingly cool actually, and for that reason Weir’s liberty could almost be forgiven, but then the story tries to have it both ways with the same bit of science. When a pressure leak causes an entire pod on Watney’s habitat to blow up, he patches a yawning opening in what’s left of the dwelling with plastic tarp and duct tape. That might actually be enough to do the job in the tenuous atmosphere that does exist on Mars. But in the violent one Weir invents for his story, the fix wouldn’t last a day.
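To put that ‘barely 1% of the density’ point into numbers, here’s a back-of-the-envelope comparison of how hard a Martian wind actually pushes. It’s a minimal sketch using rough surface densities (about 0.02 kg/m³ for Mars and 1.2 kg/m³ at sea level on Earth), and the storm speed is simply a movie-scale figure I picked for illustration.

```python
# Wind "push" is dynamic pressure: 0.5 * density * speed**2.
# Equating pressures converts a Mars wind speed into the Earth wind
# speed that would feel equally forceful.
RHO_MARS = 0.020   # kg/m^3, approximate Martian surface density
RHO_EARTH = 1.2    # kg/m^3, approximate sea-level density on Earth

def earth_equivalent(mars_speed_ms):
    """Earth wind speed exerting the same dynamic pressure as a Mars wind."""
    return mars_speed_ms * (RHO_MARS / RHO_EARTH) ** 0.5

storm = 45.0  # m/s, roughly 160 km/h
print(f"A {storm:.0f} m/s storm on Mars pushes like a "
      f"{earth_equivalent(storm):.1f} m/s breeze on Earth")
# Roughly 5.8 m/s: a gentle breeze, nowhere near enough to topple a spacecraft.
```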

There’s more to this entertaining and educational article including embedded images and a video.

Industry Standard vodka: a project that blurs the lines between art, science, and liquor distillery

“Industry City Distillery has been a beautiful accident from the start,” so begins Robb Todd’s Oct. 23, 2015 article for Fast Company about a remarkable vodka distillery situated in New York City,

Cofounders David Kyrejko and Zachary Bruner didn’t decide to make vodka because they love vodka. The distillery came about as the byproduct of a byproduct, faced challenges most distilleries don’t face, and had a goal very different from others in the drinking game.

“We make booze to pay for art and science,” Kyrejko says. [emphasis mine]

It all started with experiments focused on aquatic ecosystems and carbon dioxide production,

He [Kyrejko]  used fermentation to create CO2 [carbon dioxide] and the byproduct was alcohol. That byproduct made Kyrejko think about its applications and implications. Now, that thinking has manifested as a liquid that more and more people in New York City are coveting in the form of Industry Standard vodka.

At least part of the reason this vodka is so coveted (Note: A link has been removed),

“Vodka is one of the easiest things to make if you don’t care,” Kyrejko says, “and one of the hardest if you do.”

Vodka is difficult because there’s no way to mask the imperfections as with other liquors. To make a spirit there are usually three “cuts” made during distillation: heads, hearts, and tails. What most people drink comes from the hearts. But Kyrejko and Bruner cut theirs 30 times.

“The art is knowing how to blend cuts,” Kyrejko says, adding that other makers do not blend their vodka. “It’s a giant pain in the ass.”

Thought has been put into reducing the company’s footprint,

They say they’ve considered the waste they produce from business and environmental standpoints, as well as the energy they use to create their burning water. So they lean on beet sugar instead of grain, and sacrifice the aesthetics of their stills by insulating them rather than polishing the copper to impress tour groups. And even with about 10,000 square feet of space, they use very little of it for equipment.

“The truth is, running a distillery in an urban setting using ‘traditional’ technology just doesn’t make any sense at all,” Kyrejko says.

This is why their initial goal was to build machines that were three times more efficient than what is commercially available, he says. Now, though, he says their machines and processes are up to six times more efficient, and take up a fraction of the space and resources as traditional methods.

It’s an interesting story although I do have one quibble; I would have liked to have learned more about their art and science, or art/science, efforts. Maybe next story, eh?

You can find the Industry City Distillery website here.

US White House’s grand computing challenge could mean a boost for research into artificial intelligence and brains

An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),

On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Here’s where the Oct. 20, 2015 posting, which originated the news item, by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog gets interesting,

 While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.

There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]

Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.

Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …

This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.