Tag Archives: Bob Yirka

D-Wave’s new Advantage quantum computer

Thanks to Bob Yirka’s September 30, 2020 article for phys.org there’s an announcement about D-Wave Systems’ latest quantum computer and an explanation of how D-Wave’s quantum computer differs from other quantum computers. Here’s the explanation (Note: Links have been removed),

Over the past several years, several companies have dedicated resources to the development of a true quantum computer that can tackle problems conventional computers cannot handle. Progress on developing such computers has been slow, however, especially when compared with the early development of the conventional computer. As part of the research effort, companies have taken different approaches. Google and IBM, for example, are working on gate-model quantum computer technology, in which qubits are modified as an algorithm is executed. D-Wave, in sharp contrast, has been focused on developing so-called annealer technology, in which qubits are cooled during execution of an algorithm, which allows for passively changing their value.
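Annealing is easier to picture via its classical cousin, simulated annealing: start a set of spins hot and noisy, cool gradually, and let them settle into a low-energy configuration. The sketch below is purely classical and illustrative (D-Wave's hardware anneals quantum-mechanically; this is not its algorithm), but it shows the kind of energy-minimization problem an annealer is built around.

```python
import math
import random

def anneal(h, J, steps=5000, t_start=5.0, t_end=0.01, seed=1):
    """Classical simulated annealing for a tiny Ising model.

    h: {spin: bias}, J: {(spin_a, spin_b): coupling}.
    Returns the lowest-energy assignment of +/-1 spins found.
    (A classical sketch only; quantum annealers exploit tunneling instead.)
    """
    rng = random.Random(seed)
    spins = {s: rng.choice((-1, 1)) for s in h}

    def energy(state):
        e = sum(h[s] * state[s] for s in state)
        e += sum(c * state[a] * state[b] for (a, b), c in J.items())
        return e

    best = dict(spins)
    for step in range(steps):
        # Geometric cooling schedule, analogous to lowering the annealer's temperature.
        t = t_start * (t_end / t_start) ** (step / steps)
        s = rng.choice(list(spins))
        spins[s] *= -1  # propose a single spin flip
        delta = energy(spins) - energy({**spins, s: -spins[s]})
        if delta > 0 and rng.random() >= math.exp(-delta / t):
            spins[s] *= -1  # reject the uphill move
        if energy(spins) < energy(best):
            best = dict(spins)
    return best

# Two ferromagnetically coupled spins want to align; a field on spin 0 picks the direction.
h = {0: -1.0, 1: 0.0}
J = {(0, 1): -1.0}
print(anneal(h, J))  # settles with both spins +1 (energy -2)
```

The cooling schedule is the key design choice: early high temperatures let the system escape poor local minima, late low temperatures lock in a good solution.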

Comparing the two is next to impossible because of their functional differences. Thus, using 5,000 qubits in the Advantage system does not necessarily mean that it is any more useful than the 100-qubit systems currently being tested by IBM or Google. Still, the announcement suggests that businesses are ready to start taking advantage of the increased capabilities of quantum systems. D-Wave notes that several customers are already using their system for a wide range of applications. Menten AI, for example, has used the system to design new proteins; grocery chain Save-On-Foods has been using it to optimize business operations; Accenture has been using it to develop business applications; Volkswagen has used the system to develop a more efficient car painting system.

Here’s the company’s Sept. 29, 2020 video announcement,

For those who might like some text, there’s a Sept. 29, 2020 D-Wave Systems press release (Note: Links have been removed; this is long),

D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today [Sept. 29, 2020] announced the general availability of its next-generation quantum computing platform, incorporating new hardware, software, and tools to enable and accelerate the delivery of in-production quantum computing applications. Available today in the Leap™ quantum cloud service, the platform includes the Advantage™ quantum system, with more than 5000 qubits and 15-way qubit connectivity, in addition to an expanded hybrid solver service that can run problems with up to one million variables. The combination of the computing power of Advantage and the scale to address real-world problems with the hybrid solver service in Leap enables businesses to run performant, real-time, hybrid quantum applications for the first time.

As part of its commitment to enabling businesses to build in-production quantum applications, the company announced D-Wave Launch™, a jump-start program for businesses who want to get started building hybrid quantum applications today but may need additional support. Bringing together a team of applications experts and a robust partner community, the D-Wave Launch program provides support to help identify the best applications and to translate businesses’ problems into hybrid quantum applications. The extra support helps customers accelerate designing, building, and running their most important and complex applications, while delivering quantum acceleration and performance.

The company also announced a new hybrid solver. The discrete quadratic model (DQM) solver gives developers and businesses the ability to apply the benefits of hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (e.g. integers from 1 to 500, or red, yellow, and blue), expanding the types of problems that can run on the quantum computer. The DQM solver will be generally available on October 8 [2020].
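To make the DQM problem class concrete, here is a toy map-coloring model whose variables take discrete cases (red/yellow/blue) rather than 0/1. A brute-force loop stands in for the solver in this sketch; with D-Wave's Ocean SDK the same model would be handed to the hybrid DQM solver instead. The province names and penalty scheme are invented for the example.

```python
from itertools import product

# A discrete quadratic model in miniature: each variable takes one of several
# cases (here colors), and quadratic terms penalize bad case combinations.
COLORS = ["red", "yellow", "blue"]
provinces = ["BC", "AB", "SK", "MB"]
edges = [("BC", "AB"), ("AB", "SK"), ("SK", "MB")]  # neighboring provinces

def cost(assignment):
    # One unit of penalty per edge whose endpoints share a color.
    return sum(assignment[a] == assignment[b] for a, b in edges)

# Brute force stands in for the hybrid DQM solver at this toy scale.
best = min(
    (dict(zip(provinces, combo)) for combo in product(COLORS, repeat=len(provinces))),
    key=cost,
)
print(cost(best))  # 0: a proper coloring exists
```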

With support for new solvers and larger problem sizes backed by the Advantage system, customers and partners like Menten AI, Save-On-Foods, Accenture, and Volkswagen are building and running hybrid quantum applications that create solutions with business value today.

  • Protein design pioneer Menten AI has developed the first process using hybrid quantum programs to determine protein structure for de novo protein design with very encouraging results often outperforming classical solvers. Menten AI’s unique protein designs have been computationally validated, chemically synthesized, and are being advanced to live-virus testing against COVID-19.
  • Western Canadian grocery retailer Save-On-Foods is using hybrid quantum algorithms to bring grocery optimization solutions to their business, with pilot tests underway in-store. The company has been able to reduce the time an important optimization task takes from 25 hours to a mere 2 minutes of calculations each week. Even more important than the reduction in time is the ability to optimize performance across and between a significant number of business parameters in a way that is challenging using traditional methods.
  • Accenture, a leading global professional services company, is exploring quantum, quantum-inspired, and hybrid solutions to develop applications across industries. Accenture recently conducted a series of business experiments with a banking client to pilot quantum applications for currency arbitrage, credit scoring, and trading optimization, successfully mapping computationally challenging business problems to quantum formulations, enabling quantum readiness.
  • Volkswagen, an early adopter of D-Wave’s annealing quantum computer, has expanded its quantum use cases with the hybrid solver service to build a paint shop scheduling application. The algorithm is designed to optimize the order in which cars are being painted. By using the hybrid solver service, the number of color switches will be reduced significantly, leading to performance improvements.

The Advantage quantum computer and the Leap quantum cloud service include:

  • New Topology: The topology in Advantage makes it the most connected of any commercial quantum system in the world. In the D-Wave 2000Q™ system, qubits may connect to 6 other qubits. In the new Advantage system, each qubit may connect to 15 other qubits. With two-and-a-half times more connectivity, Advantage enables the embedding of larger problems with fewer physical qubits compared to using the D-Wave 2000Q system. The D-Wave Ocean™ software development kit (SDK) includes tools for using the new topology. Information on the topology in Advantage can be found in this white paper, and a getting started video on how to use the new topology can be found here.
  • Increased Qubit Count: With more than 5000 qubits, Advantage more than doubles the qubit count of the D-Wave 2000Q system. More qubits and richer connectivity provide quantum programmers access to a larger, denser, and more powerful graph for building commercial quantum applications.
  • Greater Performance & Problem Size: With up to one million variables, the hybrid solver service in Leap allows businesses to run large-scale, business-critical problems. This, coupled with the new topology and more than 5000 qubits in the Advantage system, expands the complexity and more than doubles the size of problems that can run directly on the quantum processing unit (QPU). In fact, the hybrid solver outperformed or matched the best of 27 classical optimization solvers on 87% of 45 application-relevant inputs tested in MQLib. Additionally, greater connectivity of the QPU allows for more compact embeddings of complex problems. Advantage can find optimal solutions 10 to 30 times faster in some cases, and can find better quality solutions up to 64% of the time, when compared to the D-Wave 2000Q LN QPU.

  • Expansion of Hybrid Software & Tools in Leap: Further investments in the hybrid solver service, new solver classes, ease-of-use, automation, and new tools provide an even more powerful hybrid rapid development environment in Python for business-scale problems.
  • Flexible Access: Advantage, the expanded hybrid solver service, and the upcoming DQM solver are available in the Leap quantum cloud service. All current Leap customers get immediate access with no additional charge, and new customers will benefit from all the new and existing capabilities in Leap. This means that developers and businesses can get started today building in-production hybrid quantum applications. Flexible purchase plans allow developers and forward-thinking businesses to access the D-Wave quantum system in the way that works for them and their business. 
  • Ongoing Releases: D-Wave continues to bring innovations to market with additional hybrid solvers, QPUs, and software updates through the cloud. Interested users and customers can get started today with Advantage and the hybrid solver service, and will benefit from new components of the platform through Leap as they become available.
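The qubit and connectivity figures above lend themselves to a quick back-of-envelope comparison. Assuming every qubit had the full advertised degree (real Chimera and Pegasus graphs are irregular at their edges, so these are estimates only), the coupler counts work out roughly as follows:

```python
def coupler_estimate(qubits, degree):
    """Rough coupler count for a hardware graph where each qubit
    connects to `degree` others (each coupler is shared by two qubits)."""
    return qubits * degree // 2

# Figures from the press release; real topologies are irregular, so estimates only.
print(coupler_estimate(2048, 6))   # D-Wave 2000Q (6-way connectivity): 6144
print(coupler_estimate(5000, 15))  # Advantage (15-way connectivity): 37500
```

Roughly six times the couplers, which is why larger problems embed with fewer physical qubits per logical variable.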

“Today’s general availability of Advantage delivers the first quantum system built specifically for business, and marks the expansion into production scale commercial applications and new problem types with our hybrid solver services. In combination with our new jump-start program to get customers started, this launch continues what we’ve known at D-Wave for a long time: it’s not about hype, it’s about scaling, and delivering systems that provide real business value on real business applications,” said Alan Baratz, CEO, D-Wave. “We also continue to invest in the science of building quantum systems. Advantage was completely re-engineered from the ground up. We’ll take what we’ve learned about connectivity and scale and continue to push the limits of innovation for the next generations of our quantum computers. I’m incredibly proud of the team that has brought us here and the customers and partners who have collaborated with us to build hundreds of early applications and who now are putting applications into production.”

“We are using quantum to design proteins today. Using hybrid quantum applications, we’re able to solve astronomical protein design problems that help us create new protein structures,” said Hans Melo, Co-founder and CEO, Menten AI. “We’ve seen extremely encouraging results with hybrid quantum procedures often finding better solutions than competing classical solvers for de novo protein design. This means we can create better proteins and ultimately enable new drug discoveries.”

“At Save-On-Foods, we have been committed to bringing innovation to our customers for more than 105 years. To that end, we are always looking for new and creative ways to solve problems, especially in an environment that has gotten increasingly complex,” said Andrew Donaher, Vice President, Digital & Analytics at Save-On-Foods. “We’re new to quantum computing, and in a short period of time, we have seen excellent early results. In fact, the early results we see with Advantage and the hybrid solver service from D-Wave are encouraging enough that our goal is to turn our pilot into an in-production business application. Quantum is emerging as a potential competitive edge for our business.”

“Accenture is committed to helping our clients prepare for the arrival of mainstream quantum computing by exploring relevant use cases and conducting business experiments now,” said Marc Carrel-Billiard, Senior Managing Director and Technology Innovation Lead at Accenture. “We’ve been collaborating with D-Wave for several years and with early access to the Advantage system and hybrid solver service we’ve seen performance improvements and advancements in the platform that are important steps for helping to make quantum a reality for clients across industries, creating new sources of competitive advantage.”

“Embracing quantum computing is nothing new for Volkswagen. We were the first to run a hybrid quantum application in production in Lisbon last November with our bus routing application,” said Florian Neukart, Director of Advanced Technologies at Volkswagen Group of America. “At Volkswagen, we are focusing on building up a deep understanding of meaningful applications of quantum computing in a corporate context. The D-Wave system gives us the opportunity to address optimization tasks with a large number of variables at an impressive speed. With this we are taking a step further towards quantum applications that will be suitable for everyday business use.”

I found the description of D-Wave’s customers and how they’re using quantum computing to be quite interesting. For anyone curious about D-Wave Systems, you can find out more here. BTW, the company is located in metro Vancouver (Canada).

Graphene and smart textiles

Here’s one of the more recent efforts to create fibres that are electronic and capable of being woven into a smart textile. (Details about a previous effort can be found at the end of this post.) Now for this one, from a Dec. 3, 2018 news item on ScienceDaily,

The quest to create affordable, durable and mass-produced ‘smart textiles’ has been given fresh impetus through the use of the wonder material graphene.

An international team of scientists, led by Professor Monica Craciun from the University of Exeter Engineering department, has pioneered a new technique to create fully electronic fibres that can be incorporated into the production of everyday clothing.

A Dec. 3, 2018 University of Exeter press release (also on EurekAlert), provides more detail about the problems associated with wearable electronics and the solution being offered (Note: A link has been removed),

Currently, wearable electronics are achieved by essentially gluing devices to fabrics, which can mean they are too rigid and susceptible to malfunctioning.

The new research instead integrates the electronic devices into the fabric of the material, by coating electronic fibres with light-weight, durable components that will allow images to be shown directly on the fabric.

The research team believe that the discovery could revolutionise the creation of wearable electronic devices for use in a range of everyday applications, as well as health monitoring, such as heart rates and blood pressure, and medical diagnostics.

The international collaborative research, which includes experts from the Centre for Graphene Science at the University of Exeter, the Universities of Aveiro and Lisbon in Portugal, and CenTexBel in Belgium, is published in the scientific journal Flexible Electronics.

Professor Craciun, co-author of the research said: “For truly wearable electronic devices to be achieved, it is vital that the components are able to be incorporated within the material, and not simply added to it.

Dr Elias Torres Alonso, Research Scientist at Graphenea and former PhD student in Professor Craciun’s team at Exeter, added: “This new research opens up the gateway for smart textiles to play a pivotal role in so many fields in the not-too-distant future. By weaving the graphene fibres into the fabric, we have created a new technique to allow the full integration of electronics into textiles. The only limits from now are really within our own imagination.”

At just one atom thick, graphene is the thinnest substance capable of conducting electricity. It is very flexible and is one of the strongest known materials. In recent years, the race has been on for scientists and engineers to adapt graphene for use in wearable electronic devices.

This new research used existing polypropylene fibres – typically used in a host of commercial applications in the textile industry – to attach the new, graphene-based electronic fibres to create touch-sensor and light-emitting devices.

The new technique means that the fabrics can incorporate truly wearable displays without the need for electrodes, wires or additional materials.

Professor Saverio Russo, co-author and from the University of Exeter Physics department, added: “The incorporation of electronic devices on fabrics is something that scientists have tried to produce for a number of years, and is a truly game-changing advancement for modern technology.”

Dr Ana Neves, co-author and also from Exeter’s Engineering department added “The key to this new technique is that the textile fibres are flexible, comfortable and light, while being durable enough to cope with the demands of modern life.”

In 2015, an international team of scientists, including Professor Craciun, Professor Russo and Dr Ana Neves from the University of Exeter, pioneered a new technique to embed transparent, flexible graphene electrodes into fibres commonly associated with the textile industry.

Here’s a link to and a citation for the paper,

Graphene electronic fibres with touch-sensing and light-emitting functionalities for smart textiles by Elias Torres Alonso, Daniela P. Rodrigues, Mukond Khetani, Dong-Wook Shin, Adolfo De Sanctis, Hugo Joulie, Isabel de Schrijver, Anna Baldycheva, Helena Alves, Ana I. S. Neves, Saverio Russo & Monica F. Craciun. npj Flexible Electronics, volume 2, article number 25 (2018) DOI: https://doi.org/10.1038/s41528-018-0040-2 Published 25 September 2018

This paper is open access.

I have an earlier post about an effort to weave electronics into textiles for soldiers, from an April 5, 2012 posting,

I gather that today’s soldier (aka, warfighter)  is carrying as many batteries as weapons. Apparently, the average soldier carries a couple of kilos worth of batteries and cables to keep their various pieces of equipment operational. The UK’s Centre for Defence Enterprise (part of the Ministry of Defence) has announced that this situation is about to change as a consequence of a recently funded research project with a company called Intelligent Textiles. From Bob Yirka’s April 3, 2012 news item for physorg.com,

To get rid of the cables, a company called Intelligent Textiles has come up with a type of yarn that can conduct electricity, which can be woven directly into the fabric of the uniform. And because they allow the uniform itself to become one large conductive unit, the need for multiple batteries can be eliminated as well.

I dug down to find more information about this UK initiative and the Intelligent Textiles company but the trail seems to end in 2015. Still, I did find a Canadian connection (for those who don’t know, I’m a Canuck) and more about Intelligent Textiles’ work with the British military in this Sept. 21, 2015 article by Barry Collins for alphr.com (Note: Links have been removed),

A two-person firm operating from a small workshop in Staines-upon-Thames, Intelligent Textiles has recently landed a multimillion-pound deal with the US Department of Defense, and is working with the Ministry of Defence (MoD) to bring its potentially life-saving technology to British soldiers. Not bad for a company that only a few years ago was selling novelty cushions.

Intelligent Textiles was born in 2002, almost by accident. Asha Peta Thompson, an arts student at Central Saint Martins, had been using textiles to teach children with special needs. That work led to a research grant from Brunel University, where she was part of a team tasked with creating a “talking jacket” for the disabled. The garment was designed to help cerebral palsy sufferers to communicate, by pressing a button on the jacket to say “my name is Peter”, for example, instead of having a Stephen Hawking-like communicator in front of them.

Another member of that Brunel team was engineering lecturer Dr Stan Swallow, who was providing the electronics expertise for the project. Pretty soon, the pair realised the prototype waistcoat they were working on wasn’t going to work: it was cumbersome, stuffed with wires, and difficult to manufacture. “That’s when we had the idea that we could weave tiny mechanical switches into the surface of the fabric,” said Thompson.

The conductive weave had several advantages over packing electronics into garments. “It reduces the amount of cables,” said Thompson. “It can be worn and it’s also washable, so it’s more durable. It doesn’t break; it can be worn next to the skin; it’s soft. It has all the qualities of a piece of fabric, so it’s a way of repackaging the electronics in a way that’s more user-friendly and more comfortable.” The key to Intelligent Textiles’ product isn’t so much the nature of the raw materials used, but the way they’re woven together. “All our patents are in how we weave the fabric,” Thompson explained. “We weave two conductive yarns to make a tiny mechanical switch that is perfectly separated or perfectly connected. We can weave an electronic circuit board into the fabric itself.”
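The woven-switch idea maps naturally onto a classic keyboard-matrix scan: drive the conductive warp yarns one at a time and sense the weft yarns to see which crossings conduct. The sketch below illustrates only that scan logic (ITL's patents cover the weave itself; the function and data here are invented for the example):

```python
# Illustrative only: a woven grid of conductive yarns behaves like a keyboard
# matrix, where each warp/weft crossing is a tiny mechanical switch.

def scan_matrix(pressed, rows, cols):
    """Return the (row, col) crossings that conduct, i.e. closed switches.

    `pressed` is the set of closed crossings; in hardware this would be
    read from GPIO pins, one drive line per row and one sense line per column.
    """
    closed = []
    for r in range(rows):          # energize one warp row at a time
        for c in range(cols):      # sense every weft column
            if (r, c) in pressed:
                closed.append((r, c))
    return closed

print(scan_matrix({(0, 2), (1, 1)}, rows=2, cols=3))  # [(0, 2), (1, 1)]
```

Scanning one row at a time is what lets a single piece of fabric replace a keyboard's worth of discrete switches and wiring.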

Intelligent Textiles’ big break into the military market came when they met a British textiles firm that was supplying camouflage gear to the Canadian armed forces. [emphasis mine] The firm was attending an exhibition in Canada and invited the Intelligent Textiles duo to join them. “We showed a heated glove and an iPod controller,” said Thompson. “The Canadians said ‘that’s really fantastic, but all we need is power. Do you think you could weave a piece of fabric that distributes power?’ We said, ‘we’re already doing it’.” Before long it wasn’t only power that the Canadians wanted transmitted through the fabric, but data.

“The problem a soldier faces at the moment is that he’s carrying 60 AA batteries [to power all the equipment he carries],” said Thompson. “He doesn’t know what state of charge those batteries are at, and they’re incredibly heavy. He also has wires and cables running around the system. He has snag hazards – when he’s going into a firefight, he can get caught on door handles and branches, so cables are a real no-no.”

The Canadians invited the pair to speak at a NATO conference, where they were approached by military brass with more familiar accents. “It was there that we were spotted by the British MoD, who said ‘wow, this is a British technology but you’re being funded by Canada’,” said Thompson. That led to £235,000 of funding from the Centre for Defence Enterprise (CDE) – the money they needed to develop a fabric wiring system that runs all the way through the soldier’s vest, helmet and backpack.

There are more details about the 2015 state of affairs, textiles-wise, in a March 11, 2015 article by Richard Trenholm for CNET.com (Note: A link has been removed),

Speaking at the Wearable Technology Show here, Swallow describes ITL [Intelligent Textiles] as a textile company that “pretends to be a military company…it’s funny how you slip into these domains.”

One domain where this high-tech fabric has seen frontline action is in the Canadian military’s IAV Stryker armoured personnel carrier. ITL developed a full QWERTY keyboard in a single piece of fabric for use in the Stryker, replacing a traditional hardware keyboard that involved 100 components. Multiple components allow for repair, but ITL knits in redundancy so the fabric can “degrade gracefully”. The keyboard works the same as the traditional hardware, with the bonus that it’s less likely to fall on a soldier’s head, and with just one glaring downside: troops can no longer use it as a step for getting in and out of the vehicle.

An armoured car with knitted controls is one thing, but where the technology comes into its own is when used about the person. ITL has worked on vests like the JTAC, a system “for the guys who call down airstrikes” and need “extra computing oomph.” Then there’s SWIPES, a part of the US military’s Nett Warrior system — which uses a chest-mounted Samsung Galaxy Note 2 smartphone — and British military company BAE’s Broadsword system.

ITL is currently working on Spirit, a “truly wearable system” for the US Army and United States Marine Corps. It’s designed to be modular, scalable, intuitive and invisible.

While this isn’t an ITL product, this video about Broadsword technology from BAE does give you some idea of what wearable technology for soldiers is like,

baesystemsinc

Uploaded on Jul 8, 2014

Broadsword™ delivers groundbreaking technology to the 21st Century warfighter through interconnecting components that inductively transfer power and data via The Spine™, a revolutionary e-textile that can be inserted into any garment. This next-generation soldier system offers enhanced situational awareness when used with the BAE Systems’ Q-Warrior® see-through display.

If anyone should have the latest news about Intelligent Textiles’ efforts, please do share in the comments section.

I do have one other posting about textiles and the military, which is dated May 9, 2012, but while it does reference US efforts it is not directly related to weaving electronics into a soldier’s (warfighter’s) gear.

You can find CenTexBel (Belgian Textile Research Centre) here and Graphenea here. Both are mentioned in the University of Exeter press release.

CRISPR-Cas9 and gold

As so often happens in the sciences, now that the initial euphoria has expended itself, problems (and solutions) with CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 are being disclosed to those of us who are not experts. From an Oct. 3, 2017 article by Bob Yirka for phys.org,

A team of researchers from the University of California and the University of Tokyo has found a way to use the CRISPR gene editing technique that does not rely on a virus for delivery. In their paper published in the journal Nature Biomedical Engineering, the group describes the new technique, how well it works and improvements that need to be made to make it a viable gene editing tool.

CRISPR-Cas9 has been in the news a lot lately because it allows researchers to directly edit genes—either disabling unwanted parts or replacing them altogether. But despite many success stories, the technique still suffers from a major deficit that prevents it from being used as a true medical tool—it sometimes makes mistakes. Those mistakes can cause small or big problems for a host depending on what goes wrong. Prior research has suggested that the majority of mistakes are due to delivery problems, which means that a replacement for the virus part of the technique is required. In this new effort, the researchers report that they have discovered just such a replacement, and it worked so well that it was able to repair a gene mutation in a Duchenne muscular dystrophy mouse model. The team has named the new technique CRISPR-Gold, because a gold nanoparticle was used to deliver the gene editing molecules instead of a virus.

An Oct. 2, 2017 article by Abby Olena for The Scientist lays out the CRISPR-Cas9 problems the scientists are trying to solve (Note: Links have been removed),

While promising, applications of CRISPR-Cas9 gene editing have so far been limited by the challenges of delivery—namely, how to get all the CRISPR parts to every cell that needs them. In a study published today (October 2) in Nature Biomedical Engineering, researchers have successfully repaired a mutation in the gene for dystrophin in a mouse model of Duchenne muscular dystrophy by injecting a vehicle they call CRISPR-Gold, which contains the Cas9 protein, guide RNA, and donor DNA, all wrapped around a tiny gold ball.

The authors have made “great progress in the gene editing area,” says Tufts University biomedical engineer Qiaobing Xu, who did not participate in the work but penned an accompanying commentary. Because their approach is nonviral, Xu explains, it will minimize the potential off-target effects that result from constant Cas9 activity, which occurs when users deliver the Cas9 template with a viral vector.

Duchenne muscular dystrophy is a degenerative disease of the muscles caused by a lack of the protein dystrophin. In about a third of patients, the gene for dystrophin has small deletions or single base mutations that render it nonfunctional, which makes this gene an excellent candidate for gene editing. Researchers have previously used viral delivery of CRISPR-Cas9 components to delete the mutated exon and achieve clinical improvements in mouse models of the disease.

“In this paper, we were actually able to correct [the gene for] dystrophin back to the wild-type sequence” via homology-directed repair (HDR), coauthor Niren Murthy, a drug delivery researcher at the University of California, Berkeley, tells The Scientist. “The other way of treating this is to do something called exon skipping, which is where you delete some of the exons and you can get dystrophin to be produced, but it’s not [as functional as] the wild-type protein.”

The research team created CRISPR-Gold by covering a central gold nanoparticle with DNA that they modified so it would stick to the particle. This gold-conjugated DNA bound the donor DNA needed for HDR, which the Cas9 protein and guide RNA bound to in turn. They coated the entire complex with a polymer that seems to trigger endocytosis and then facilitate escape of the Cas9 protein, guide RNA, and template DNA from endosomes within cells.

In order to do HDR, “you have to provide the cell [with] the Cas9 enzyme, guide RNA by which you target Cas9 to a particular part of the genome, and a big chunk of DNA, which will be used as a template to edit the mutant sequence to wild-type,” explains coauthor Irina Conboy, who studies tissue repair at the University of California, Berkeley. “They all have to be present at the same time and at the same place, so in our system you have a nanoparticle which simultaneously delivers all of those three key components in their active state.”

Olena’s article carries on to describe how the team created CRISPR-Gold and more.

Additional technical details are available in an Oct. 3, 2017 University of California at Berkeley news release by Brett Israel (also on EurekAlert), which originated the news item (Note: A link has been removed),

Scientists at the University of California, Berkeley, have engineered a new way to deliver CRISPR-Cas9 gene-editing technology inside cells and have demonstrated in mice that the technology can repair the mutation that causes Duchenne muscular dystrophy, a severe muscle-wasting disease. A new study shows that a single injection of CRISPR-Gold, as the new delivery system is called, into mice with Duchenne muscular dystrophy led to an 18-times-higher correction rate and a two-fold increase in a strength and agility test compared to control groups.

Diagram of CRISPR-Gold

CRISPR–Gold is composed of 15 nanometer gold nanoparticles that are conjugated to thiol-modified oligonucleotides (DNA-Thiol), which are hybridized with single-stranded donor DNA and subsequently complexed with Cas9 and encapsulated by a polymer that disrupts the endosome of the cell.

Since 2012, when study co-author Jennifer Doudna, a professor of molecular and cell biology and of chemistry at UC Berkeley, and colleague Emmanuelle Charpentier, of the Max Planck Institute for Infection Biology, repurposed the Cas9 protein to create a cheap, precise and easy-to-use gene editor, researchers have hoped that therapies based on CRISPR-Cas9 would one day revolutionize the treatment of genetic diseases. Yet developing treatments for genetic diseases remains a big challenge in medicine. This is because most genetic diseases can be cured only if the disease-causing gene mutation is corrected back to the normal sequence, and this is impossible to do with conventional therapeutics.

CRISPR/Cas9, however, can correct gene mutations by cutting the mutated DNA and triggering homology-directed DNA repair. However, strategies for safely delivering the necessary components (Cas9, guide RNA that directs Cas9 to a specific gene, and donor DNA) into cells need to be developed before the potential of CRISPR-Cas9-based therapeutics can be realized. A common technique to deliver CRISPR-Cas9 into cells employs viruses, but that technique has a number of complications. CRISPR-Gold does not need viruses.

In the new study, research led by the laboratories of Berkeley bioengineering professors Niren Murthy and Irina Conboy demonstrated that their novel approach, called CRISPR-Gold because gold nanoparticles are a key component, can deliver Cas9 – the protein that binds and cuts DNA – along with guide RNA and donor DNA into the cells of a living organism to fix a gene mutation.

“CRISPR-Gold is the first example of a delivery vehicle that can deliver all of the CRISPR components needed to correct gene mutations, without the use of viruses,” Murthy said.

The study was published October 2 [2017] in the journal Nature Biomedical Engineering.

CRISPR-Gold repairs DNA mutations through a process called homology-directed repair. Scientists have struggled to develop homology-directed repair-based therapeutics because the Cas9 protein, the RNA guide that recognizes the mutation, and the donor DNA that corrects it must all be active at the same place and at the same time.

To overcome these challenges, the Berkeley scientists invented a delivery vessel that binds all of these components together, and then releases them when the vessel is inside a wide variety of cell types, triggering homology-directed repair. CRISPR-Gold’s gold nanoparticles coat the donor DNA and also bind Cas9. When injected into mice, their cells recognize a marker in CRISPR-Gold and then import the delivery vessel. Then, through a series of cellular mechanisms, CRISPR-Gold is released into the cells’ cytoplasm and breaks apart, rapidly releasing Cas9 and donor DNA.

Schematic of CRISPR-Gold's method of action

CRISPR-Gold’s method of action (Click to enlarge).

A single injection of CRISPR-Gold into muscle tissue of mice that model Duchenne muscular dystrophy restored 5.4 percent of the dystrophin gene, which causes the disease when mutated, to the wild-type, or normal, sequence. This correction rate was approximately 18 times higher than in mice treated with Cas9 and donor DNA by themselves, which experienced only a 0.3 percent correction rate.
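As an arithmetic check, the 18-fold figure follows directly from the two correction rates quoted in the press release:

```python
# Correction rates reported in the study, as percentages of the
# dystrophin gene restored to the wild-type sequence.
crispr_gold_rate = 5.4  # single CRISPR-Gold injection
control_rate = 0.3      # Cas9 and donor DNA delivered by themselves

fold_improvement = crispr_gold_rate / control_rate
print(f"{fold_improvement:.0f}-fold higher correction rate")  # prints "18-fold higher correction rate"
```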

Importantly, the study authors note, CRISPR-Gold faithfully restored the normal sequence of dystrophin, which is a significant improvement over previously published approaches that only removed the faulty part of the gene, making it shorter and converting one disease into another, milder disease.

CRISPR-Gold was also able to reduce tissue fibrosis – the hallmark of diseases where muscles do not function properly – and enhanced strength and agility in mice with Duchenne muscular dystrophy. CRISPR-Gold-treated mice showed a two-fold increase in hanging time in a common test for mouse strength and agility, compared to mice injected with a control.

“These experiments suggest that it will be possible to develop non-viral CRISPR therapeutics that can safely correct gene mutations, via the process of homology-directed repair, by simply developing nanoparticles that can simultaneously encapsulate all of the CRISPR components,” Murthy said.

CRISPR-Cas9

CRISPR in action: A model of the Cas9 protein cutting a double-stranded piece of DNA

The study found that CRISPR-Gold’s approach to Cas9 protein delivery is safer than viral delivery of CRISPR, which, in addition to toxicity, amplifies the side effects of Cas9 through continuous expression of this DNA-cutting enzyme. When the research team tested CRISPR-Gold’s gene-editing capability in mice, they found that CRISPR-Gold efficiently corrected the DNA mutation that causes Duchenne muscular dystrophy, with minimal collateral DNA damage.

The researchers quantified CRISPR-Gold’s off-target DNA damage and found damage levels similar to that of typical DNA sequencing errors in cells that were not exposed to CRISPR (0.005–0.2 percent). To test for possible immunogenicity, the bloodstream cytokine profiles of mice were analyzed at 24 hours and two weeks after the CRISPR-Gold injection. CRISPR-Gold did not cause an acute up-regulation of inflammatory cytokines in plasma, even after multiple injections, nor weight loss, suggesting that CRISPR-Gold can be used multiple times safely and that it has a high therapeutic window for gene editing in muscle tissue.

“CRISPR-Gold and, more broadly, CRISPR-nanoparticles open a new way for safer, accurately controlled delivery of gene-editing tools,” Conboy said. “Ultimately, these techniques could be developed into a new medicine for Duchenne muscular dystrophy and a number of other genetic diseases.”

A clinical trial will be needed to discern whether CRISPR-Gold is an effective treatment for genetic diseases in humans. Study co-authors Kunwoo Lee and Hyo Min Park have formed a start-up company, GenEdit (Murthy has an ownership stake in GenEdit), which is focused on translating the CRISPR-Gold technology into humans. The labs of Murthy and Conboy are also working on the next generation of particles that can deliver CRISPR into tissues from the blood stream and would preferentially target adult stem cells, which are considered the best targets for gene correction because stem and progenitor cells are capable of gene editing, self-renewal and differentiation.

“Genetic diseases cause devastating levels of mortality and morbidity, and new strategies for treating them are greatly needed,” Murthy said. “CRISPR-Gold was able to correct disease-causing gene mutations in vivo, via the non-viral delivery of Cas9 protein, guide RNA and donor DNA, and therefore has the potential to develop into a therapeutic for treating genetic diseases.”

The study was funded by the National Institutes of Health, the W.M. Keck Foundation, the Moore Foundation, the Li Ka Shing Foundation, Calico, Packer, Roger’s and SENS, and the Center of Innovation (COI) Program of the Japan Science and Technology Agency.

Here’s a link to and a citation for the paper,

Nanoparticle delivery of Cas9 ribonucleoprotein and donor DNA in vivo induces homology-directed DNA repair by Kunwoo Lee, Michael Conboy, Hyo Min Park, Fuguo Jiang, Hyun Jin Kim, Mark A. Dewitt, Vanessa A. Mackley, Kevin Chang, Anirudh Rao, Colin Skinner, Tamanna Shobha, Melod Mehdipour, Hui Liu, Wen-chin Huang, Freeman Lan, Nicolas L. Bray, Song Li, Jacob E. Corn, Kazunori Kataoka, Jennifer A. Doudna, Irina Conboy, & Niren Murthy. Nature Biomedical Engineering (2017) doi:10.1038/s41551-017-0137-2 Published online: 02 October 2017

This paper is behind a paywall.

Drive to operationalize transistors that outperform silicon gets a boost

Dexter Johnson has written a Jan. 19, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers]) about work which could lead to supplanting silicon-based transistors with carbon nanotube-based transistors in the future (Note: Links have been removed),

The end appears nigh for scaling down silicon-based complementary metal-oxide semiconductor (CMOS) transistors, with some experts seeing the cutoff date as early as 2020.

While carbon nanotubes (CNTs) have long been among the nanomaterials investigated to serve as replacement for silicon in CMOS field-effect transistors (FETs) in a post-silicon future, they have always been bogged down by some frustrating technical problems. But, with some of the main technical showstoppers having been largely addressed—like sorting between metallic and semiconducting carbon nanotubes—the stage has been set for CNTs to start making their presence felt a bit more urgently in the chip industry.

Peking University scientists in China have now developed carbon nanotube field-effect transistors (CNT FETs) having a critical dimension—the gate length—of just five nanometers that would outperform silicon-based CMOS FETs at the same scale. The researchers claim in the journal Science that this marks the first time that sub-10 nanometer CNT CMOS FETs have been reported.

More importantly than just being the first, the Peking group showed that their CNT-based FETs can operate faster and at a lower supply voltage than their silicon-based counterparts.

A Jan. 20, 2017 article by Bob Yirka for phys.org provides more insight into the work at Peking University,

One of the most promising candidates is carbon nanotubes—due to their unique properties, transistors based on them could be smaller, faster and more efficient. Unfortunately, the difficulty in growing carbon nanotubes and their sometimes persnickety nature means that a way to make them and mass produce them has not been found. In this new effort, the researchers report on a method of creating carbon nanotube transistors that are suitable for testing, but not mass production.

To create the transistors, the researchers took a novel approach: instead of growing carbon nanotubes that had certain desired properties, they grew some, placed them randomly on a silicon surface, and then added electronics that would work with the properties the tubes happened to have. That is clearly not a strategy that would work for mass production, but it allowed for building a carbon nanotube transistor that could be tested to see if it would verify theories about its performance. Realizing there would still be scaling problems using traditional electrodes, the researchers built a new kind by etching very tiny sheets of graphene. The result was a very tiny transistor, the team reports, capable of moving more current than a standard CMOS transistor using just half of the normal amount of voltage. It was also faster, courtesy of a much shorter switch delay: an intrinsic gate delay of just 70 femtoseconds.
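The speed claim rests on the standard CV/I benchmark used to compare transistors, in which the intrinsic gate delay is the gate capacitance times the supply voltage divided by the on-current. A minimal sketch with illustrative placeholder numbers (not values from the Science paper) shows why halving the supply voltage at comparable capacitance and on-current halves the delay:

```python
# Standard CV/I transistor benchmark: intrinsic gate delay
# tau = C_gate * V_DD / I_on. All numbers below are hypothetical
# placeholders, NOT values reported in the paper; they only illustrate
# the proportionality between supply voltage and delay.
def intrinsic_delay(c_gate: float, v_dd: float, i_on: float) -> float:
    """CV/I intrinsic gate delay in seconds (farads * volts / amps)."""
    return c_gate * v_dd / i_on

si_delay  = intrinsic_delay(c_gate=1e-16, v_dd=0.70, i_on=5e-5)
cnt_delay = intrinsic_delay(c_gate=1e-16, v_dd=0.35, i_on=5e-5)

print(f"hypothetical Si FET delay:  {si_delay * 1e15:.0f} fs")
print(f"hypothetical CNT FET delay: {cnt_delay * 1e15:.0f} fs")
```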

Peking University has published an edited and more comprehensive version of an earlier phys.org article, first reported by Lisa Zyga and edited by Arthars,

Now in a new paper published in Nano Letters, researchers Tian Pei, et al., at Peking University in Beijing, China, have developed a modular method for constructing complicated integrated circuits (ICs) made from many FETs on individual CNTs. To demonstrate, they constructed an 8-bit BUS system–a circuit that is widely used for transferring data in computers–that contains 46 FETs on six CNTs. This is the most complicated CNT IC fabricated to date, and the fabrication process is expected to lead to even more complex circuits.

SEM image of an eight-transistor (8-T) unit that was fabricated on two CNTs (marked with two white dotted lines). The scale bar is 100 μm. (Copyright: 2014 American Chemical Society)

Ever since the first CNT FET was fabricated in 1998, researchers have been working to improve CNT-based electronics. As the scientists explain in their paper, semiconducting CNTs are promising candidates for replacing silicon wires because they are thinner, which offers better scaling-down potential, and also because they have a higher carrier mobility, resulting in higher operating speeds.

Yet CNT-based electronics still face challenges. One of the most significant challenges is obtaining arrays of semiconducting CNTs while removing the less-suitable metallic CNTs. Although scientists have devised a variety of ways to separate semiconducting and metallic CNTs, these methods almost always result in damaged semiconducting CNTs with degraded performance.

To get around this problem, researchers usually build ICs on single CNTs, which can be individually selected based on their condition. It’s difficult to use more than one CNT because no two are alike: they each have slightly different diameters and properties that affect performance. However, using just one CNT limits the complexity of these devices to simple logic and arithmetical gates.

The 8-T unit can be used as the basic building block of a variety of ICs other than BUS systems, making this modular method a universal and efficient way to construct large-scale CNT ICs. Building on their previous research, the scientists hope to explore these possibilities in the future.

“In our earlier work, we showed that a carbon nanotube based field-effect transistor is about five (n-type FET) to ten (p-type FET) times faster than its silicon counterparts, but uses much less energy, about a few percent of that of similar sized silicon transistors,” Peng said.

“In the future, we plan to construct large-scale integrated circuits that outperform silicon-based systems. These circuits are faster, smaller, and consume much less power. They can also work at extremely low temperatures (e.g., in space) and moderately high temperatures (potentially no cooling system required), on flexible and transparent substrates, and potentially be bio-compatible.”

Here’s a link to and a citation for the paper,

Scaling carbon nanotube complementary transistors to 5-nm gate lengths by Chenguang Qiu, Zhiyong Zhang, Mengmeng Xiao, Yingjun Yang, Donglai Zhong, Lian-Mao Peng. Science  20 Jan 2017: Vol. 355, Issue 6322, pp. 271-276 DOI: 10.1126/science.aaj1628

This paper is behind a paywall.

Feed your silkworms graphene or carbon nanotubes for stronger silk

This Oct. 11, 2016 news item on Nanowerk may make you wonder about a silkworm’s standard diet,

Researchers at Tsinghua University in Beijing, China, have demonstrated that mechanically enhanced silk fibers could be naturally produced by feeding silkworms with diets containing single-walled carbon nanotubes (SW[C]NTs) or graphene.

The as-spun silk fibers containing nanofillers showed evidently increased fracture strength and elongation-at-break, demonstrating the validity of SWNT or graphene incorporation into silkworm silk as reinforcement through an in situ functionalization approach.

The researchers conclude that “by analyzing the silk fibers and the excrement of silkworms, … parts of the fed carbon nanomaterials were incorporated into the as-spun silk fibers, while others went into excrement.”

Bob Yirka in an Oct. 11, 2016 article for phys.org provides a little information about silkworms and their eating habits,

In this new effort, the researchers sought to add new properties to silk by adding carbon nanotubes and graphene to their diet.

To add the materials, the researchers sprayed a water solution containing 0.2 percent carbon nanotubes or graphene onto mulberry leaves and then fed the leaves to the silkworms. They then allowed the silkworms to make their silk in the normal way. Testing of the silks that were produced showed they could withstand approximately 50 percent more stress than traditional silk. A closer look showed that the new silk had a more orderly crystal structure than normal silk. And taking their experiments one step further, the researchers cooked the new silk at 1,050 °C, carbonizing it, which caused the silk to conduct electricity.

Here’s a link to and a citation for the paper,

Feeding Single-Walled Carbon Nanotubes or Graphene to Silkworms for Reinforced Silk Fibers by Qi Wang, Chunya Wang, Mingchao Zhang, Muqiang Jian, and Yingying Zhang. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.6b03597 Publication Date (Web): September 13, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Science (magazine) investigates Sci-Hub (a pirate site for scientific papers)

Sci-Hub, a pirate website for scientific papers, and its progenitor, Alexandra Elbakyan, have generated a couple of articles and an editorial in Science magazine’s latest issue (April 28, 2016?). An April 29, 2016 article by Bob Yirka for phys.org describes one of the articles (Note: Links have been removed),

A correspondent for the Science family of journals has published an investigative piece in Science on Sci-Hub, a website that illegally publishes scholarly literature, i.e. research papers. In his article, John Bohannon describes how he made contact with Alexandra Elbakyan, the founder of what is now the world’s largest site for pirated scholarly articles, data she gave him, and commentary on what was revealed. Bohannon has also published another piece focused exclusively on Elbakyan, describing her as a frustrated science student. Marcia McNutt, Editor-in-Chief of the Science Family also weighs in on her “love-hate” relationship with Sci-Hub, and explains in detail why she believes the site is likely to cause problems for scholarly publishing heading into the future.

An April 28, 2016 American Association for the Advancement of Science (AAAS) news release provides some detail about the number of downloads from the Sci-Hub site,

In this investigative news piece from Science, contributing correspondent John Bohannon dives into data from Sci-Hub, the world’s largest pirate website for scholarly literature. For the first time, basic questions about Sci-Hub’s millions of users can be answered: Where are they and what are they reading? Bohannon’s statistical analysis is based on server log data supplied by Alexandra Elbakyan herself, the neuroscientist who created Sci-Hub in 2011. After establishing contact with her through an encrypted chat system, Bohannon and Elbakyan worked together to create a data set for public release: 28 million Sci-Hub download requests going back to 1 September 2015, including the digital object identifier (DOI) for every paper and the clustered locations of users based on their Internet Protocol address. In his story, Bohannon reveals that Sci-Hub usage is highest in China with 4.4 million download requests over the 6-month period, followed by India and Iran. But Sci-Hub users are not limited to the developing world, he reports; the U.S. is the fifth largest downloader and some of the most intense Sci-Hub activity seems to be happening on US and European university campuses, supporting the claim that many users could be accessing the papers through their libraries, but turn to Sci-Hub for convenience.
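The per-country ranking Bohannon reports is, at bottom, a straightforward aggregation over the released log. A sketch, assuming a hypothetical simplified row format of (timestamp, DOI, clustered location); the actual released file layout may differ:

```python
from collections import Counter

# Hypothetical rows standing in for the released Sci-Hub dataset: each
# download request carries a timestamp, a DOI, and a clustered location.
# The field layout and sample values here are illustrative only.
requests = [
    ("2015-09-01T00:12:03", "10.1000/example.001", "China"),
    ("2015-09-01T00:12:09", "10.1000/example.002", "India"),
    ("2015-09-01T00:13:44", "10.1000/example.001", "China"),
    ("2015-09-01T00:15:10", "10.1000/example.003", "Iran"),
]

# Count download requests per clustered location, most active first.
downloads_by_country = Counter(country for _, _, country in requests)
for country, n in downloads_by_country.most_common():
    print(country, n)
```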

Bohannon’s piece appears to be open access. Here’s a link and a citation,

Who’s downloading pirated papers? Everyone by John Bohannon. Science (2016). DOI: 10.1126/science.aaf5664 Published April 28, 2016.

Comments

The analysis of the data is fascinating but I’m not sure why this is being billed as an ‘investigative’ piece. Generally speaking I would expect an investigative piece to unearth new information which has likely been hidden. At the very least, I would expect some juicy inside information (i.e., gossip).

Bohannon certainly had no difficulty getting information (from the April 28, 2016 Science article),

For someone denounced as a criminal by powerful corporations and scholarly societies, Elbakyan was surprisingly forthcoming and transparent. After establishing contact through an encrypted chat system, she worked with me over the course of several weeks to create a data set for public release: every download event over the 6-month period starting 1 September 2015, including the digital object identifier (DOI) for every paper. To protect the privacy of Sci-Hub users, we agreed that she would first aggregate users’ geographic locations to the nearest city using data from Google Maps; no identifying internet protocol (IP) addresses were given to me. (The data set and details on how it was analyzed are freely accessible)

Why would it be surprising that someone who has made a point of freeing scientific research and making it accessible also makes the data from her Sci-Hub site freely available? The action certainly seems consistent with her raison d’être.

Bohannon steers away from making any serious criticisms of the current publishing régimes although he does mention a few bones of contention while laying them to rest, more or less. This is no great surprise since he’s writing for one of the ‘big three’, a journal that could be described as having a vested interest in maintaining the status quo. (For those who are unaware, there are three journals considered the most prestigious or high impact for scientific studies: Nature, Cell, and Science.)

Characterizing Elbakyan as a ‘frustrated’ student in an April 28, 2016 profile by John Bohannon (The frustrated science student behind Sci-Hub) seems a bit dismissive. Sci-Hub may have been borne of frustration but it is an extraordinary accomplishment.

The piece has resulted in at least one very irate librarian, John Dupuis, from an April 29, 2016 posting on his Confessions of a Science Librarian blog,

Overall, the articles are pretty good descriptions of the Sci-Hub phenomenon and relatively even-handed [emphasis mine], especially coming from one of the big society publishers like AAAS.

There was one bit in the main article, Who’s downloading pirated papers? Everyone, that really stuck in my craw. Basically, Sci-Hub — and all that article piracy — is librarians’ fault.

And for all the researchers at Western universities who use Sci-Hub instead, the anonymous publisher lays the blame on librarians for not making their online systems easier to use and educating their researchers. “I don’t think the issue is access—it’s the perception that access is difficult,” he says.

Fortunately it was countered, in the true “give both sides of the story” style of mainstream journalism, by another quote, this time from a librarian.

“I don’t agree,” says Ivy Anderson, the director of collections for the California Digital Library in Oakland, which provides journal access to the 240,000 researchers of the University of California system. The authentication systems that university researchers must use to read subscription journals from off campus, and even sometimes on campus with personal computers, “are there to enforce publisher restrictions,” she says.

But of course, I couldn’t let it go. Anderson’s response is perfectly fine but somehow there just wasn’t enough rage and exasperation in it. So I stewed about it over night and tweeted up a tweetstorm of rage this morning, with the idea that if the rant was well-received I would capture the text as part of a blog post.

As you may have guessed by my previous comments, I didn’t find the article quite as even-handed as Dupuis did. As for the offence to librarians, I did notice but it seems in line with the rest of the piece which dismisses, downplays, and offloads a few serious criticisms while ignoring how significant issues (problematic peer review process, charging exorbitant rates for access to publicly funded research, failure to adequately tag published papers that are under review after serious concerns are raised, failure to respond in a timely fashion when serious concerns are raised about a published paper, positive publication bias, …) have spawned the open access movement and also Sci-Hub. When you consider that governments rely on bibliometric data such as number of papers published and number of papers published in high impact journals (such as one of the ‘big three’), it’s clear there’s a great deal at stake.

Other Sci-Hub pieces here

My last piece about Sci-Hub was a February 25, 2016 posting titled ‘Using copyright to shut down easy access to scientific research’, featuring some of the discussion around Elsevier and its legal suit against Sci-Hub.

Viewing quantum entanglement with the naked eye

A Feb. 18, 2016 article by Bob Yirka for phys.org suggests there may be a way to see quantum entanglement with the naked eye,

A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others.
Entanglement is, of course, where two quantum particles are intrinsically linked to the extent that they actually share the same existence, even though they can be separated and moved apart. The idea was first proposed nearly a century ago, and it has not only been proven, but researchers routinely cause it to occur; to date, however, not one single person has ever actually seen it happen. Its occurrence is known only by conducting a series of experiments. It is not clear if anyone has ever actually tried to see it happen, but in this new effort, the research trio claim to have found a way to make it possible, if only someone else will carry out the experiment on a willing volunteer.

A Feb. 17, 2016 article for the MIT (Massachusetts Institute of Technology) Technology Review describes this proposed project in detail,

Finding a way for a human eye to detect entangled photons sounds straightforward. After all, the eye is a photon detector, so it ought to be possible for an eye to replace a photo detector in any standard entanglement detecting experiment.

Such an experiment might consist of a source of entangled pairs of photons, each of which is sent to a photo detector via an appropriate experimental setup.

By comparing the arrival of photons at each detector and by repeating the detecting process many times, it is possible to determine statistically whether entanglement is occurring.
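The statistical test alluded to here is typically a Bell-type (CHSH) measurement: each detector is read out under two settings, (a, a′) on one side and (b, b′) on the other, and the measured correlations E are combined as

```latex
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad \lvert S \rvert \le 2
```

for any local classical model, whereas entangled photons can reach \(\lvert S \rvert = 2\sqrt{2} \approx 2.83\). Accumulating enough detection events to show \(\lvert S \rvert > 2\) with statistical confidence is what determining entanglement “statistically” amounts to. (This is the standard form of the test; the specific witness used in the paper may differ in detail.)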

It’s easy to imagine that this experiment can be easily repeated by replacing one of the photodetectors with an eye. But that turns out not to be the case.

The main problem is that the eye cannot detect single photons. Instead, each light-detecting rod at the back of the eye must be stimulated by a good handful of photons to trigger a detection. The lowest number of photons that can do the trick is thought to be about seven, but in practice, people usually see photons only when they arrive in the hundreds or thousands.

Even then, the eye is not a particularly efficient photodetector. A good optics lab will have photodetectors that are well over 90 percent efficient. By contrast, at the very lowest light levels, the eye is about 8 percent efficient. That means it misses lots of photons.

That creates a significant problem. If a human eye is ever to “see” entanglement in this way, then physicists will have to entangle not just two photons but at least seven, and ideally many hundreds or thousands of them.

And that simply isn’t possible with today’s technology. At best, physicists are capable of entangling half a dozen photons but even this is a difficult task.
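The two numbers quoted above (a detection threshold of roughly seven photons and an efficiency of roughly 8 percent) can be combined into a back-of-the-envelope binomial estimate. Assuming each arriving photon is absorbed independently, the sketch below shows why seven photons are almost never enough and why hundreds are needed:

```python
from math import comb

# Back-of-the-envelope model: each arriving photon is absorbed
# independently with probability `efficiency` (~8 percent at the lowest
# light levels), and roughly `threshold` absorptions (~7) are needed
# before the eye registers anything at all.
def p_detect(n_photons: int, efficiency: float = 0.08, threshold: int = 7) -> float:
    """Probability that at least `threshold` of `n_photons` are absorbed."""
    p_below = sum(
        comb(n_photons, k) * efficiency**k * (1 - efficiency) ** (n_photons - k)
        for k in range(threshold)
    )
    return 1.0 - p_below

# Seven photons essentially never trigger a detection; hundreds do.
for n in (7, 100, 1000):
    print(n, round(p_detect(n), 4))
```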

But the researchers have come up with a solution to the problem,

Vivoli and co say they have devised a trick that effectively amplifies a single entangled photon into many photons that the eye can see. Their trick depends on a technique called a displacement operation, in which two quantum objects interfere so that one changes the phase of another.

One way to do this with photons is with a beam splitter. Imagine a beam of coherent photons from a laser that is aimed at a beam splitter. The beam is transmitted through the splitter but a change of phase can cause it to be reflected instead.

Now imagine another beam of coherent photons that interferes with the first. This changes the phase of the first beam so that it is reflected rather than transmitted. In other words, the second beam can switch the reflection on and off.

Crucially, the switching beam needn’t be as intense as the main beam; it only needs to be coherent. Indeed, a single photon can do this trick of switching a more intense beam, at least in theory.

That’s the basis of the new approach. The idea is to use a single entangled photon to switch the passage of a more powerful beam through a beam splitter. And it is this more powerful beam that the eye detects and which still preserves the quantum nature of the original entanglement.
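In standard quantum-optics notation, the displacement operation described above is the operator

```latex
\hat{D}(\alpha) = \exp\!\left(\alpha\,\hat{a}^{\dagger} - \alpha^{*}\hat{a}\right),
\qquad \hat{D}(\alpha)\,\lvert 0 \rangle = \lvert \alpha \rangle ,
```

which turns the vacuum into a coherent state. The textbook way to realize it is exactly the beam-splitter trick described here: mixing a weak signal with a strong coherent beam \(\lvert \beta \rangle\) on a nearly transparent beam splitter (transmissivity \(T \to 1\)) implements \(\hat{D}(\alpha)\) approximately, with \(\alpha \approx \sqrt{1-T}\,\beta\). The amplification scheme in the paper builds on this same interference; the details of how it preserves entanglement are the paper’s contribution.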

… this experiment will be hard to do. Ensuring that the optical amplifier works as they claim will be hard, for example.

And even if it does, reliably recording each detection in the eye will be even harder. The test for entanglement is a statistical one that requires many counts from both detectors. That means an individual would have to sit in the experiment registering a yes or no answer for each run, repeated thousands or tens of thousands of times. Volunteers will need to have plenty of time on their hands.

Of course, experiments like this will quickly take the glamor and romance out of the popular perception of entanglement. Indeed, it’s hard to see why anybody would want to be entangled with a photodetector over the time it takes to do this experiment.

There is a suggestion as to how to make this a more attractive proposition for volunteers,

One way to increase this motivation would be to modify the experiment so that it entangles two humans. It’s not hard to imagine people wanting to take part in such an experiment, perhaps even eagerly.

That will require a modified set up in which both detectors are human eyes, with their high triggering level and their low efficiency. Whether this will be possible with Vivoli and co’s setup isn’t yet clear.

Only then will volunteers be able to answer the question that sits uncomfortably with most physicists. What does it feel like to be entangled with another human?

Given the nature of this experiment, the answer will be “mind-numbingly boring.” But as Vivoli and co point out in their conclusion: “It is safe to say that probing human vision with quantum light is terra incognita. This makes it an attractive challenge on its own.”

You can read the arXiv paper,

What Does It Take to See Entanglement? by Valentina Caprara Vivoli, Pavel Sekatski, Nicolas Sangouard arxiv.org/abs/1602.01907 Submitted Feb. 5, 2016

This is an open access paper and this site encourages comments and peer review.

One final comment: the articles reminded me of a March 1, 2012 posting, which posed the question ‘Can we see entangled images?’ to physicists in its headline, about a physicist’s (Geraldo Barbosa’s) challenge and his arXiv paper. Coincidentally, the source article was by Bob Yirka and was published on phys.org.

Antikythera: ancient computer and a 100-year adventure

This post has been almost two years in the making, which seems laughable when considering that the story starts in 100 BCE (before the common era).

Picture ancient Greece and a Roman sailing ship holding an object we know as the Antikythera mechanism, named after the Greek island near where the ship was wrecked and where it lay undiscovered until 1900. From the Dec. 10, 2010 posting by GrrlScientist on the Guardian science blogs,

Two years ago [2008], a paper was published in Nature describing the function of the oldest known scientific computer, a device built in Greece around 100 BCE. Recovered in 1901 from a shipwreck near the island of Antikythera, this mechanism had been lost and unknown for 2000 years. It took one century for scientists to understand its purpose: it is an astronomical clock that determines the positions of celestial bodies with extraordinary precision. In 2010, a fully-functional replica was constructed out of Lego.

Here’s the video mentioned by Grrl Scientist,

As noted in the video, it is a replica that requires twice as many gears as the original to make the same calculations. It seems we still haven’t quite caught up with the past.

Bob Yirka’s April 4, 2011 article for phys.org describes some of the research involved in decoding the mechanism,

If modern research is correct, the device worked by hand cranking a main dial to display a chosen date, causing the wheels and gears inside to display (via tabs on separate dials) the position of the sun, moon, and the five known planets at that time, for that date; a mechanical and technical feat that would not be seen again until the fourteenth century in Europe with precision clocks.

Now James Evans and his colleagues at the University of Puget Sound in Washington State have shown that, instead of using the same kind of gear mechanism to account for the elliptical path the Earth takes around the sun (and the resulting apparent changes in speed), the inventor of the device may have taken a different tack: stretching or distorting the zodiac on the dial face, changing the width of the spaces on the face to make up for the slightly different amount of time represented as the hand moves around the face.

In a paper published in the Journal for the History of Astronomy, Evans describes how he and his team were able to examine x-rays taken of the corroded machine (69 then later 88 degrees of the circle) and discovered that the two circles that were used to represent the Zodiac and Egyptian calendar respectively, did indeed differ just enough to account for what appeared to be the irregular movement during different parts of the year.

Though not all experts agree on the findings, this new evidence does appear to suggest that an attempt was made by the early inventor to take into account the elliptical nature of the Earth orbiting the sun, no small thing.
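Evans’s distorted-zodiac idea lends itself to a quick numerical sketch. The snippet below uses the first-order “equation of centre” (true longitude ≈ mean longitude + 2e·sin(mean)) with the modern eccentricity value; the ancient Greek solar model differed in detail, so treat the numbers as illustrative rather than as the paper’s own figures:

```python
import math

E = 0.0167  # orbital eccentricity (modern value; an assumption for this sketch)

def mean_from_true(lam_deg):
    """Invert true = mean + 2e*sin(mean) by fixed-point iteration (degrees)."""
    m = lam_deg
    for _ in range(50):
        m = lam_deg - math.degrees(2 * E * math.sin(math.radians(m)))
    return m

# If the scale mark for each true longitude is engraved at its "mean" position,
# a pointer turning at constant speed still reads the Sun's nonuniform motion
# correctly -- so the 30-degree zodiac signs end up with unequal engraved widths.
widths = []
for start in range(0, 360, 30):
    w = mean_from_true(start + 30) - mean_from_true(start)
    widths.append(w)
    print(f"sign starting {start:3d} deg: engraved width {w:6.3f} deg")
```

The widths come out a bit under 30° on one side of the dial and a bit over on the other, which is the kind of deliberate irregularity Evans and his team report measuring.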

Jenny Winder’s June 11, 2012 article for Universe Today and republished on phys.org provides more details about the gears and the theories behind the device,

The device is made of bronze and contains 30 gears though it may have had as many as 72 originally. Each gear was meticulously hand cut with between 15 and 223 triangular teeth, which were the key to discovering the mechanism’s various functions. It was based on theories of astronomy and mathematics developed by Greek astronomers who may have drawn from earlier Babylonian astronomical theories and its construction could be attributed to the astronomer Hipparchus or, more likely, Archimedes the famous Greek mathematician, physicist, engineer, inventor and astronomer. … [emphases mine]

I’ve highlighted the verbs which suggest they’re still conjecturing about where the theories and knowledge needed to develop this ancient computer came from. Yirka’s article mentions that some folks believe the Antikythera mechanism may be the result of alien visitations, along with the more academic guesses about the Babylonians and the Greeks.

I strongly recommend reading the articles and chasing down more videos about the Antikythera mechanism on YouTube, as the story is fascinating. Given the plethora of material (including a book and website by Jo Marchant, Decoding the Heavens), I don’t seem to be alone in my fascination.
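As a playful footnote, the arithmetic behind the gearing Winder describes is easy to check in code. The tooth counts below are the ones reported in the published reconstructions for the mechanism’s Moon train (the Nature paper mentioned above); the helper function itself is just an illustration:

```python
from fractions import Fraction
from functools import reduce

def train_ratio(pairs):
    """Overall ratio of a gear train given (driver_teeth, driven_teeth) pairs."""
    return reduce(lambda r, p: r * Fraction(p[0], p[1]), pairs, Fraction(1))

# Tooth counts reported for the Moon train in the published reconstructions;
# treat them as illustrative here.
moon_train = [(64, 38), (48, 24), (127, 32)]
ratio = train_ratio(moon_train)
print(ratio)  # 254/19: the Moon pointer turns 254 times per 19 turns of the year wheel
```

That 254:19 ratio encodes the Metonic relation — 19 solar years equal 254 sidereal months — directly in bronze, which is part of what makes the device so remarkable.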

Ukrainians ease communication with $50 gloves that convert sign language to speech

Strictly speaking or otherwise, this is not a ‘nano’ story but it does speak (wordplay intended) to some longstanding interests of mine. Christina Chaey in her July 10, 2012 article for Fast Company notes,

More than 275 million hearing-impaired people are unable to use speech to communicate. Sign language is one solution, but it’s only as helpful as the number of people who know the language. That problem is what drove three Ukrainian students to develop EnableTalk, a pair of sensory gloves that help bridge that communication gap by turning sign language into speech.

The three-programmer team behind EnableTalk, who were inspired by interactions with hearing-impaired athletes at their school, took the $25,000 top prize in software design at Microsoft’s 10th annual Imagine Cup. The decade-old tech competition challenges students to design innovative technology across various categories including game design, Kinect, the Windows Phone, and Windows 8.

Bob Yirka, in his July 11, 2012 article about EnableTalk for physorg.com, provides some insight into why the team chose their project,

The team said the idea for their system came from the frustration they experienced when trying to communicate with hearing impaired athletes at their school. … The problem with sign language they point out, is that most people who can hear never learn it, thus those with hearing impairments are only able to communicate with a small part of the general population which generally includes those who cannot hear and those in their immediate circle.

The quadSquad team receiving their US$25,000 prize,

downloaded from http://www.microsoft.com/en-us/news/events/imaginecup/

Yirka offers the best description of the technology that I was able to find (Note: I have removed links),

The gloves work through the use of five hardware components: flex sensors in the gloves record finger movements and a main controller coordinates information from an accelerometer/compass, an accelerometer/gyroscope, a microcontroller and a Bluetooth module. Windows mobile software was used to convert the gesture commands to sound signals for broadcast by the Bluetooth module. The sound waves are converted to voice using Microsoft Speech and Bing APIs running on a Smartphone, which ultimately serves as the voice for the person using the system.
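Purely to make that pipeline concrete, here is a toy sketch of the classify-then-speak loop Yirka describes. Everything in it — the threshold, the gesture table, the speak() stub — is hypothetical; EnableTalk’s actual recognition also draws on accelerometer and gyroscope data and hands the text to Microsoft’s speech APIs, none of which are modelled here:

```python
# Hypothetical gesture table: a tuple of per-finger flags (1 = extended,
# 0 = bent) mapped to a word. Real sign-language recognition is far richer.
GESTURES = {
    (1, 1, 1, 1, 1): "hello",   # all five fingers extended
    (0, 0, 0, 0, 0): "yes",     # closed fist
    (0, 1, 1, 0, 0): "peace",   # index and middle extended
}

FLEX_THRESHOLD = 512  # midpoint of a hypothetical 10-bit ADC reading

def classify(readings):
    """Map raw flex-sensor readings (one per finger) to a word, or None."""
    key = tuple(1 if r > FLEX_THRESHOLD else 0 for r in readings)
    return GESTURES.get(key)

def speak(word):
    # Stand-in for sending text over Bluetooth to a speech service.
    print(f"speaking: {word}")

word = classify([900, 880, 910, 870, 850])  # all fingers read as extended
if word:
    speak(word)
```

The real system would stream readings continuously and segment gestures over time; this sketch only shows the single-frame lookup at the heart of the idea.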

For even more technical details, you can go to the Documentation page on the Enable Talk website.

The quadSquad’s Imagine Cup presentation video is pretty glitzy, from the Enable Talk Gallery page,


I was surprised that everyone in those ‘street scenes’ seems to be about the same age and social class, that the streets are so clean, and, coming from the West Coast of Canada, that everyone is the same colour.

ETA July 12, 2012: The article by Christina Chaey indicated the gloves would cost $50, but the video shows a $200 price tag. Perhaps the $50 price is what they’re hoping to charge after widespread commercialization?

British soldiers conduct field trials of uniforms made from e-textiles

I gather that today’s soldier (aka warfighter) is carrying as many batteries as weapons. Apparently, the average soldier carries a couple of kilos’ worth of batteries and cables to keep their various pieces of equipment operational. The UK’s Centre for Defence Enterprise (part of the Ministry of Defence) has announced that this situation is about to change as a consequence of a recently funded research project with a company called Intelligent Textiles. From Bob Yirka’s April 3, 2012 news item for physorg.com,

To get rid of the cables, a company called Intelligent Textiles has come up with a type of yarn that can conduct electricity, which can be woven directly into the fabric of the uniform. And because they allow the uniform itself to become one large conductive unit, the need for multiple batteries can be eliminated as well.

The company says it has found a way to weave the conductive yarn into virtually all parts of the uniform: vest, shirt, backpack, helmet, even gloves or the interactive parts of weapons. Different pieces of the uniform can then be connected via plug-and-play connections when the soldier dresses for battle, … They say they are currently also working on a keyboard that can also be integrated into a uniform to allow for interaction with a small computer that will also be carried as part of the uniform.

Field trials are scheduled for next month and uniforms made with e-textiles are expected to begin being worn by actual soldiers over the next two years.

You can find the Centre for Defence Enterprise (CDE) here, from the CDE’s home page,

The Centre for Defence Enterprise (CDE) is the first point of contact for anyone with a disruptive technology, new process or innovation that has a potential defence application. CDE funds research into novel high-risk, high-potential-benefit innovations sourced from the broadest possible range of science and technology providers, including academia and small companies, to enable development of cost-effective capability advantage for UK Armed Forces.

CDE is the entry point for new science and technology providers to defence, bringing together innovation and investment for the defence and security markets.

Here’s a link to a video featuring an employee from Intelligent Textiles discussing their new product and the joys of applying for funds from the CDE.

I did try to find out more about Intelligent Textiles. While they do have a website, it is currently under construction; here’s an excerpt from its home (and only) page,

Welcome to this very special first glimpse of a new 21st century world. A wonderful world of soft, safe, stylish, comfortable, colourful fabrics which not only do all the traditional fabric things but which discreetly and unobtrusively include a host of additional attributes.

The new world of Intelligent Textiles is limited only by your vision and needs, and the enthusiasm by innovative manufacturers to embrace a new world.

Building on the best of the past, see an amazing high tech future using traditional techniques and materials with the addition of the Intelligent Textiles globally patented technology.

Even after reading the news item, watching the video clip, and reading the information on Intelligent Textiles’ home page, I don’t really understand the benefit of the technology. It’s nice that cables are being eliminated, but it sounds as if at least one battery is still needed (and probably one backup in case something goes wrong), and they have plans to include a computer in the future. Are they eliminating five pounds of equipment and replacing it with one pound’s worth? If they include a computer in the future, how much weight will that add?