Category Archives: Technology

Intel’s 14nm chip: architecture revealed and scientist discusses the limits to computers

Anxieties about how much longer we can design and manufacture smaller, faster computer chips are commonplace even as companies continue to announce new, faster, smaller chips. Just before the US National Science Foundation (NSF) issued a press release concerning an essay in the journal Nature on the limits of computation, Intel announced a new microarchitecture for its 14nm chips.

First, there’s Intel. In an Aug. 12, 2014 news item on Azonano, Intel announced its newest microarchitecture optimization,

Intel today disclosed details of its newest microarchitecture that is optimized with Intel’s industry-leading 14nm manufacturing process. Together these technologies will provide high-performance and low-power capabilities to serve a broad array of computing needs and products from the infrastructure of cloud computing and the Internet of Things to personal and mobile computing.

An Aug. 11, 2014 Intel news release, which originated the news item, lists key points,

  • Intel disclosed details of the microarchitecture of the Intel® Core™ M processor, the first product to be manufactured using 14nm.
  • The combination of the new microarchitecture and manufacturing process will usher in a wave of innovation in new form factors, experiences and systems that are thinner and run silent and cool.
  • Intel architects and chip designers have achieved greater than two times reduction in the thermal design point when compared to a previous generation of processor while providing similar performance and improved battery life.
  • The new microarchitecture was optimized to take advantage of the new capabilities of the 14nm manufacturing process.
  • Intel has delivered the world’s first 14nm technology in volume production. It uses second-generation Tri-gate (FinFET) transistors with industry-leading performance, power, density and cost per transistor.
  • Intel’s 14nm technology will be used to manufacture a wide range of high-performance to low-power products including servers, personal computing devices and Internet of Things.
  • The first systems based on the Intel® Core™ M processor will be on shelves for the holiday selling season followed by broader OEM availability in the first half of 2015.
  • Additional products based on the Broadwell microarchitecture and 14nm process technology will be introduced in the coming months.

The company has made available supporting materials including videos titled, ‘Advancing Moore’s Law in 2014’, ‘Microscopic Mark Bohr: 14nm Explained’, and ‘Intel 14nm Manufacturing Process’, which can be found here. An earlier mention of Intel and its 14nm manufacturing process can be found in my July 9, 2014 posting.

Meanwhile, in a more contemplative mood, Igor Markov of the University of Michigan has written an essay for Nature questioning the limits of computation as per an Aug. 14, 2014 news item on Azonano,

From their origins in the 1940s as sequestered, room-sized machines designed for military and scientific use, computers have made a rapid march into the mainstream, radically transforming industry, commerce, entertainment and governance while shrinking to become ubiquitous handheld portals to the world.

This progress has been driven by the industry’s ability to continually innovate techniques for packing increasing amounts of computational circuitry into smaller and denser microchips. But with miniature computer processors now containing millions of closely-packed transistor components of near atomic size, chip designers are facing both engineering and fundamental limits that have become barriers to the continued improvement of computer performance.

Have we reached the limits to computation?

In a review article in this week’s issue of the journal Nature, Igor Markov of the University of Michigan reviews limiting factors in the development of computing systems to help determine what is achievable, identifying “loose” limits and viable opportunities for advancements through the use of emerging technologies. His research for this project was funded in part by the National Science Foundation (NSF).

An Aug. 13, 2014 NSF news release, which originated the news item, describes Markov’s Nature essay in greater detail,

“Just as the second law of thermodynamics was inspired by the discovery of heat engines during the industrial revolution, we are poised to identify fundamental laws that could enunciate the limits of computation in the present information age,” says Sankar Basu, a program director in NSF’s Computer and Information Science and Engineering Directorate. “Markov’s paper revolves around this important intellectual question of our time and briefly touches upon most threads of scientific work leading up to it.”

The article summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.

“What are these limits, and are some of them negotiable? On which assumptions are they based? How can they be overcome?” asks Markov. “Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”

Limits related to materials and manufacturing are immediately perceptible. In a material layer ten atoms thick, missing one atom due to imprecise manufacturing changes electrical parameters by ten percent or more. Shrinking designs of this scale further inevitably leads to quantum physics and associated limits.

Limits related to engineering are dependent upon design decisions, technical abilities and the ability to validate designs. While very real, these limits are difficult to quantify. However, once the premises of a limit are understood, obstacles to improvement can potentially be eliminated. One such breakthrough has been in writing software to automatically find, diagnose and fix bugs in hardware designs.

Limits related to power and energy have been studied for many years, but only recently have chip designers found ways to improve the energy consumption of processors by temporarily turning off parts of the chip. There are many other clever tricks for saving energy during computation. But moving forward, silicon chips will not maintain the pace of improvement without radical changes. Atomic physics suggests intriguing possibilities but these are far beyond modern engineering capabilities.

Limits relating to time and space can be felt in practice. The speed of light, while a very large number, limits how fast data can travel. Traveling through copper wires and silicon transistors, a signal can no longer traverse a chip in one clock cycle today. A formula limiting parallel computation in terms of device size, communication speed and the number of available dimensions has been known for more than 20 years, but only recently has it become important now that transistors are faster than interconnections. This is why alternatives to conventional wires are being developed, but in the meantime mathematical optimization can be used to reduce the length of wires by rearranging transistors and other components.
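To put rough numbers on that speed-of-light point, here’s a quick back-of-the-envelope calculation in Python (the arithmetic is mine, not the NSF’s; the 3 GHz clock and the 20% propagation fraction are illustrative assumptions),

# How far can a signal travel during one clock cycle?
C = 3.0e8  # speed of light in vacuum, in metres per second

def distance_per_cycle(clock_hz, fraction_of_c):
    # Distance covered in one clock period at some fraction of c.
    return fraction_of_c * C / clock_hz

clock = 3.0e9  # an assumed 3 GHz processor clock
# Light in vacuum covers about 10 cm per cycle:
print(distance_per_cycle(clock, 1.0) * 100, "cm per cycle in vacuum")
# An on-chip signal through copper and silicon is much slower; at an
# assumed 20% of c it covers only about 2 cm per cycle:
print(distance_per_cycle(clock, 0.2) * 100, "cm per cycle on chip")

At 20 per cent of c, a signal covers roughly 2 cm per cycle, about the width of a large die, so any realistic detour through wiring means a signal can no longer cross the chip in a single clock cycle, just as Markov notes.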

Several key limits related to information and computational complexity have been reached by modern computers. Some categories of computational tasks are conjectured to be so difficult to solve that no proposed technology, not even quantum computing, promises consistent advantage. But studying each task individually often helps reformulate it for more efficient computation.

When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it. Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.

What about breakthrough technologies? New techniques and materials can be helpful in several ways and can potentially be “game changers” with respect to traditional limits. For example, carbon nanotube transistors provide greater drive strength and can potentially reduce delay, decrease energy consumption and shrink the footprint of an overall circuit. On the other hand, fundamental limits–sometimes not initially anticipated–tend to obstruct new and emerging technologies, so it is important to understand them before promising a new revolution in power, performance and other factors.

“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”

Here’s a link to and a citation for Markov’s article,

Limits on fundamental limits to computation by Igor L. Markov. Nature 512, 147–154 (14 August 2014) doi:10.1038/nature13570 Published online 13 August 2014

This paper is behind a paywall but a free preview is available via ReadCube Access.

It’s a fascinating question, what are the limits? It’s one being asked not only with regard to computation but also to medicine, human enhancement, and artificial intelligence for just a few areas of endeavour.

TED Global would like to see you in Rio—USD $6,000 + application required

TED (technology, entertainment, design) Global is being held in Rio de Janeiro, Brazil in October 2014 and there are still a few spots left for participants according to a July 23, 2014 notice (I checked here, there are still openings as of Aug. 1, 2014),

In early October, Rio de Janeiro will host our first TEDGlobal in South America. The conference theme is “South” and you can meet here. Held in the historic Copacabana Palace Hotel on the eponymous beach, TEDGlobal 2014 promises speakers with amazing new ideas to stimulate your mind, while the rest of you takes in the beauty that is Rio: the ocean, the beach, the volcanic mountains, and the energetic Cariocas. It is simply one of the most beautiful cities on Earth.

We hope you will join us at this more intimately scaled event (half the size of TED in Vancouver), and celebrate ideas from across the Global South.

The conference takes place October 5-10, 2014. During five immersive days of talks, music, performances, tech demos, exhibits and wonderful parties, the conference will focus on the Global South’s rise in influence and power — plus relevant stories from the rest of the world.

A small number of passes remain for $6,000 and $12,000. …

Questions? Email [email protected].

Vê-lo no Rio (See you in Rio)

There is a list of their currently confirmed speakers here. It includes:

Grimanesa Amoros, Peruvian interdisciplinary artist
Séverine Autesserre, Congo scholar
Tasso Azevedo, Brazilian forest conservationist
Rodrigo Baggio, Brazilian digital inclusionist
Khalida Brohi, Pakistani equality activist
Wendy Freedman, Astronomer
Syed Karim, Satellite datacaster
...
Miguel Nicolelis, Brain interface pioneer
Mark Plotkin, Amazonian ethnobotanist
Matthieu Ricard, Buddhist monk
Steve Song, Africa connectivity tinkerer
Jorge Soto, Cancer detection technologist
Zeynep Tufekci, Technosociologist
Tashka Yawanawa, Amazonian chief

I recognized two names on the full list: Miguel Nicolelis (featured here many times and most recently in a May 20, 2014 posting) and Matthieu Ricard (mentioned here once in an April 11, 2013 posting). Both of them were mentioned in regard to the field of neuroscience.

On that note, Happy Weekend on what is a long weekend for many Canadians including me!

Better RRAM memory devices in the short term

Given my recent spate of posts about computing and the future of the chip (list to follow at the end of this post), this Rice University [Texas, US] research suggests that some improvements to current memory devices might be coming to the market in the near future. From a July 12, 2014 news item on Azonano,

Rice University’s breakthrough silicon oxide technology for high-density, next-generation computer memory is one step closer to mass production, thanks to a refinement that will allow manufacturers to fabricate devices at room temperature with conventional production methods.

A July 10, 2014 Rice University news release, which originated the news item, provides more detail,

Tour and colleagues began work on their breakthrough RRAM technology more than five years ago. The basic concept behind resistive memory devices is the insertion of a dielectric material — one that won’t normally conduct electricity — between two wires. When a sufficiently high voltage is applied across the wires, a narrow conduction path can be formed through the dielectric material.

The presence or absence of these conduction pathways can be used to represent the binary 1s and 0s of digital data. Research with a number of dielectric materials over the past decade has shown that such conduction pathways can be formed, broken and reformed thousands of times, which means RRAM can be used as the basis of rewritable random-access memory.
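For readers who find code easier to follow than prose, here’s a minimal sketch of the idea in Python (mine, not Rice’s; the voltage thresholds are invented for illustration),

# Toy model of a resistive memory (RRAM) cell: a conduction path through
# the dielectric either exists (low resistance, read as 1) or does not
# (high resistance, read as 0).
class RRAMCell:
    SET_V = 2.0     # assumed voltage that forms the conduction path
    RESET_V = -2.0  # assumed voltage that breaks it

    def __init__(self):
        self.conductive = False  # no conduction path yet: stores 0

    def apply(self, volts):
        if volts >= self.SET_V:
            self.conductive = True   # path forms: cell now stores 1
        elif volts <= self.RESET_V:
            self.conductive = False  # path breaks: back to 0

    def read(self):
        # Reading only senses resistance and doesn't change the state,
        # which is why the memory is nonvolatile and rewritable.
        return 1 if self.conductive else 0

cell = RRAMCell()
cell.apply(2.5)   # write a 1
assert cell.read() == 1
cell.apply(-2.5)  # erase back to 0
assert cell.read() == 0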

RRAM is under development worldwide and expected to supplant flash memory technology in the marketplace within a few years because it is faster than flash and can pack far more information into less space. For example, manufacturers have announced plans for RRAM prototype chips that will be capable of storing about one terabyte of data on a device the size of a postage stamp — more than 50 times the data density of current flash memory technology.
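The arithmetic behind that claim is easy to check; in the Python snippet below, the postage stamp dimensions are my assumption and only the one-terabyte figure and the 50-times factor come from the news release,

# Rough density check for the 'one terabyte on a postage stamp' claim.
stamp_cm2 = 2.0 * 2.5                  # assumed stamp: 2 cm x 2.5 cm
rram_bytes_per_cm2 = 1e12 / stamp_cm2  # 1 TB spread over the stamp
flash_bytes_per_cm2 = rram_bytes_per_cm2 / 50
print(rram_bytes_per_cm2 / 1e9, "GB per square cm for RRAM")   # 200 GB
print(flash_bytes_per_cm2 / 1e9, "GB per square cm for flash") # 4 GB

The implied few gigabytes per square centimetre for flash is roughly the right order of magnitude for 2014-era chips, so the comparison holds together.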

The key ingredient of Rice’s RRAM is its dielectric component, silicon oxide. Silicon is the most abundant element on Earth and the basic ingredient in conventional microchips. Microelectronics fabrication technologies based on silicon are widespread and easily understood, but until the 2010 discovery of conductive filament pathways in silicon oxide in Tour’s lab, the material wasn’t considered an option for RRAM.

Since then, Tour’s team has raced to further develop its RRAM and even used it for exotic new devices like transparent flexible memory chips. At the same time, the researchers also conducted countless tests to compare the performance of silicon oxide memories with competing dielectric RRAM technologies.

“Our technology is the only one that satisfies every market requirement, both from a production and a performance standpoint, for nonvolatile memory,” Tour said. “It can be manufactured at room temperature, has an extremely low forming voltage, high on-off ratio, low power consumption, nine-bit capacity per cell, exceptional switching speeds and excellent cycling endurance.”

In the latest study, a team headed by lead author and Rice postdoctoral researcher Gunuk Wang showed that using a porous version of silicon oxide could dramatically improve Rice’s RRAM in several ways. First, the porous material reduced the forming voltage — the power needed to form conduction pathways — to less than two volts, a 13-fold improvement over the team’s previous best and a number that stacks up against competing RRAM technologies. In addition, the porous silicon oxide also allowed Tour’s team to eliminate the need for a “device edge structure.”

“That means we can take a sheet of porous silicon oxide and just drop down electrodes without having to fabricate edges,” Tour said. “When we made our initial announcement about silicon oxide in 2010, one of the first questions I got from industry was whether we could do this without fabricating edges. At the time we could not, but the change to porous silicon oxide finally allows us to do that.”

Wang said, “We also demonstrated that the porous silicon oxide material increased the endurance cycles more than 100 times as compared with previous nonporous silicon oxide memories. Finally, the porous silicon oxide material has a capacity of up to nine bits per cell, which is the highest number among oxide-based memories, and the multiple capacity is unaffected by high temperatures.”

Tour said the latest developments with porous silicon oxide — reduced forming voltage, elimination of need for edge fabrication, excellent endurance cycling and multi-bit capacity — are extremely appealing to memory companies.

“This is a major accomplishment, and we’ve already been approached by companies interested in licensing this new technology,” he said.

Here’s a link to and a citation for the paper,

Nanoporous Silicon Oxide Memory by Gunuk Wang, Yang Yang, Jae-Hwang Lee, Vera Abramova, Huilong Fei, Gedeng Ruan, Edwin L. Thomas, and James M. Tour. Nano Lett., Article ASAP DOI: 10.1021/nl501803s Publication Date (Web): July 3, 2014

Copyright © 2014 American Chemical Society

This paper is behind a paywall.

As for my recent spate of posts on computers and chips, there’s a July 11, 2014 posting about IBM, a 7nm chip, and much more; a July 9, 2014 posting about Intel and its 14nm low-power chip processing and plans for a 10nm chip; and, finally, a June 26, 2014 posting about HP Labs and its plans for memristive-based computing and their project dubbed ‘The Machine’.

The Space, a new digital museum opens with an international splash

Erica Berger in a June 14, 2014 article for Fast Company provides a fascinating account of a project where Arts Council England, the BBC, the Open Data Institute, and other cultural groups partnered to create The Space (Note: Links have been removed),

This Space is no final frontier. Rather, it’s just begun as a new place for digital and experimental art.

A free and public website aimed at discovering the best emerging digital artistic talent around the world, The Space opened yesterday and is launching with a weekend [June 14 - 15, 2014] hackathon hosted by the Tate Modern in London, a first for the formidable institution. Born from a partnership between Arts Council England, the BBC, Open Data Institute, and other cultural groups, it’s “a gallery without walls,” says Alex Graham, chair of The Space. The Space is putting out an international open call for projects, the first round of which is due July 11. The projects will be funded by the partnering groups with amounts ranging from £20,000 (about $34,000) to £60,000 ($101,000) for an individual commission, and up to 50% of the total cost. Each Friday, new collaborations will launch.

Among the first installations are pieces from high-profile artists, including Marina Abramovic, who broadcasted live on the site at midnight last night, and Ai Weiwei, who has an interactive piece on The Space. There will also be a live, Google hangout theater project with actors in London, Barcelona, and Lagos and directed by Erin Gilley.

The Space can be found here,

About The Space

The Space is a free website for artists and audiences to create and explore exciting new art, commissioned by us and shared around the Whole Wide World.

We commission new talent and great artists from all art forms, creative industries, technical and digital backgrounds, through Open Calls and partnerships. The Space is one of the most exciting places on the internet to find new art to explore and enjoy.

An open call was launched on June 12, 2014,

The Space launches first Open Call
Posted … on 12 June 2014

The Space Open Call is looking for original, groundbreaking ideas for digital art. We are encouraging artists to take risks and do crazy things with technology!

This is a great opportunity for artists to be bold, ambitious and experimental, creating a work which can communicate with people round the World via mobile, tablets and desktops.

We are seeking artists working across a range of art forms and industries including, creative and digital, technology and coding, art and culture sectors, to pitch the very best original ideas to the Open Call.

If you have an idea for The Space, please go to thespace.org/opencall and complete the online form before the closing date: 12 noon (GMT) 11 July 2014.

Organizers have produced an inspirational video for this call,

I don’t know if this offer is still available (from Erica Berger’s Fast Company article about The Space) but here it is,

Sign up to be one of the first 10,000 newsletter subscribers to The Space and receive a free digital work of art from Turner Prize winner Jeremy Deller.

I availed myself of the offer at approximately 1000 hours PDT, June 16, 2014.

Damage-free art authentication and spatially offset Raman spectroscopy (SORS)

In a world where people will shell out millions of dollars for a single painting, art authentication of some kind is mandatory from a buyer’s perspective while sellers might be a little more reluctant. Reliance on experts who have an intimate familiarity with an artist’s body of work, personal and professional history, as well as the historical period in which the work was created is the norm. Technological means are not necessarily as heavily employed as one might expect. Given that most technical analyses require damage of some kind, no matter how minuscule, some reluctance is understandable.

A May 29, 2014 news item on phys.org describes a new, damage-free, art conservation and restoration process (which could easily be used for authentication purposes),

UK scientists, working on an international project to conserve precious works of art, have found a new way to analyse paintings without having to remove even a tiny speck of the paint to inspect the layers below.

Using laser spectroscopy, a method that uses light to probe under the surface of an object, the international team has developed a new, non-invasive way to identify the chemical content of the paint layers present.

This new technique will reduce the risk of damage to precious paintings, often worth thousands or even millions of pounds, when conservation and restoration work is being carried out.

As noted in a March 24, 2014 posting about using surface-enhanced Raman spectroscopy (SERS) to determine the characteristics of red pigment in a Renoir painting, restoration, authentication, and conservation are all linked once researchers start a technical examination,

This next item is about forgery detection. A March 5, 2014 news release on EurekAlert describes the latest developments,

Gallery owners, private collectors, conservators, museums and art dealers face many problems in protecting and evaluating their collections such as determining origin, authenticity and discovery of forgery, as well as conservation issues. Today these problems are more accurately addressed through the application of modern, non-destructive, “hi-tech” techniques.

Getting back to this new technique, a May 28, 2014 Science and Technology Facilities Council news release, which originated the news item, provides information about the various agencies involved with this work and offers some technical detail about the new technique,

The new approach is derived from a technique called Spatially Offset Raman Spectroscopy (SORS). It was originally developed by UK researchers at the Science and Technology Facilities Council’s (STFC) Central Laser Facility within the Research Complex at Harwell. Now they have joined forces with researchers from the Institute for the Conservation and Promotion of Cultural Heritage (ICVBC), part of Italy’s National Research Council (CNR) to adapt this technology to test paintings without having to destroy any part of them.

The SORS technique involves shining the laser light onto an opaque object. A small number of photons (light ‘particles’) will scatter back, changing colour according to the different paint components they represent, and allowing the scientists to analyse the chemical composition in depth.

Professor Pavel Matousek, from STFC’s Central Laser Facility, explained. “Building on our earlier SORS research, we’ve transformed the method to allow us to probe the painted layers for the first time,” he said. “We’ve called it Micro-SORS because we can analyse the layers at the micrometer scale, rather than the usual millimetre scale”.

For comparison of scale, a human hair is about 100 micrometers wide.

Dr Claudia Conti, a scientist at the ICVBC in Italy, said, “When I heard about the potential of SORS and how it could be applied, I realised the huge contribution this method of analysis could bring to the conservation of artworks.”

The research team tested the Micro-SORS method by collecting data from the light scattered across a surface of painted layers, artificially prepared to mimic a real painting. They isolated the light signals of the individual paint layers, enabling them to assess the chemical make-up of each layer.

The next step in the team’s research is to optimise the sensitivity and depth of penetration, and apply the technique to real artwork.

SORS has been used in other applications, from the news release,

The original SORS technique has already been applied to a number of problems, including non-invasive breast cancer diagnosis and bone disease diagnosis. The Science and Technology Facilities Council (STFC) has also launched a spin-out company, Cobalt Light Systems, which uses the SORS technology and has recently developed products for scanning liquids in unopened bottles for airport security, and in pharmaceutical quality control.

Here’s a link to and a citation for the research paper,

Subsurface Raman Analysis of Thin Painted Layers by Claudia Conti, Chiara Colombo, Marco Realini, Giuseppe Zerbi, and Pavel Matousek. Applied Spectroscopy, Volume 68, Number 6, June 2014, pp. 686-691(6) doi.org/10.1366/13-07376 Available online via Ingentaconnect

This article is open access.

Agency of Science Communication, Technology and Innovation of Argentina (ACCTINA)

In a May 9, 2014 posting for SciDev.Net, Cecilia Rosen mentions an announcement about a new science communication agency for Argentina (Note: A link has been removed),

For a while now, Argentina has seemed serious about science as a means for development. This week, at the 13th International Public Communication of Science and Technology Conference (PCST2014), there was fresh evidence of this.

I learned that President Cristina Kirchner’s government is setting up a specialised agency within the science ministry to boost science communication in the country. This is part of the government’s strategic goals for 2014.

It will be called the Agency of Science Communication, Technology and Innovation of Argentina (ACCTINA), and should be formally launched by the end of this year, if things go smoothly, according to Vera Brudny, head of the project at the ministry.

On the sidelines of PCST2014, she told me that ACCTINA will replace the National Program for Science Popularisation.

That’s an interesting move and unfortunately following up on this at some future date is going to be tricky since I don’t have any Spanish language skills.

For anyone interested in more about SciDev.Net, there’s this from the What we do page,

SciDev.Net is committed to putting science at the heart of global development.

Our website is the world’s leading source of reliable and authoritative news, views and analysis on information about science and technology for global development.

We engage primarily with development professionals, policymakers, researchers, the media and the informed public.

Our main office is based in London but we have seven editions: Sub-Saharan Africa English, Sub-Saharan Africa French, South Asia, Latin America & Caribbean, South-East Asia & Pacific, Middle-East & North Africa and Global. Between us we manage a worldwide network of registered users, advisors, consultants and freelance journalists who drive our activities and vision.

The 13th International Public Communication of Science and Technology Conference (PCST2014) is produced by the Network for the Public Communication of Science and Technology (PCST). Here’s more from the About PCST page,

PCST is a network of individuals from around the world who are active in producing and studying PCST. It sponsors international conferences, an electronic discussion list, and symposiums. The aim is to encourage discussion and debate across professional, cultural, international, and disciplinary boundaries.

Members of the PCST Network come from a range of backgrounds:

  • Researchers working on the theory and practice of science communication
  • Communication staff working for research organisations
  • Staff at science centres and museums
  • Science journalists
  • Students on the ethics and philosophy of science and the public
  • Writers and editors of scientific material
  • Web designers
  • Scientists who communicate with the public
  • Visual and performing artists working on science themes.

The PCST international conference takes place every two years. The 2014 PCST conference took place in Salvador, Brazil. Conferences like this would seem to confirm the comments I made in a May 20, 2014 posting,

Returning to 2014, the [World Cup {soccer}] kickoff in Brazil (if successful) symbolizes more than an international athletic competition or a technical/medical achievement, this kick-off symbolizes a technological future for Brazil and its place on the world stage (despite the protests and social unrest).

Perhaps Argentina is getting ready to give Brazil a run for its money (slang for ‘provide some competition’).

The Pantheon and technology, history of the world from Big Bang to the end, and architecture evolving into a dynamic, interactive process at TED 2014’s Session 2: Retrospect

Now to Retrospect, session two of TED 2014. As the first scheduled speaker, Bran Ferren kicked off the session. From Ferren’s TED biography,

After dropping out of MIT in 1970, Bran Ferren became a designer and engineer for theater, touring rock bands, and dozens of movies, including Altered States and Little Shop of Horrors, before joining Disney as a lead Imagineer, then becoming president of R&D for the Walt Disney Company.

In 2000, Ferren and partner Danny Hillis left Disney to found Applied Minds, a playful design and invention firm dedicated to distilling game-changing inventions from an eclectic stew of the brightest creative minds culled from every imaginable discipline.

Ferren used a standard storytelling technique as do many of the TED speakers. (Note: Techniques become standard because they work.) He started with personal stories of his childhood which apparently included exposure to art and engineering. His family of origin was heavily involved in the visual arts while other family members were engineers. His moment of truth came during childhood when he was taken to view the Pantheon and its oculus (from its Wikipedia entry; Note: Links have been removed),

The Pantheon (/ˈpænθiən/ or US /ˈpænθiɒn/;[1] Latin: Pantheon,[nb 1] [pantʰewn] from Greek: Πάνθεον [ἱερόν], an adjective understood as “[temple consecrated] to all gods”) is a building in Rome, Italy, commissioned by Marcus Agrippa during the reign of Augustus (27 BC – 14 AD) as a temple to all the gods of ancient Rome, and rebuilt by the emperor Hadrian about 126 AD.[2]

The building is circular with a portico of large granite Corinthian columns (eight in the first rank and two groups of four behind) under a pediment. A rectangular vestibule links the porch to the rotunda, which is under a coffered concrete dome, with a central opening (oculus) to the sky. Almost two thousand years after it was built, the Pantheon’s dome is still the world’s largest unreinforced concrete dome.[3] The height to the oculus and the diameter of the interior circle are the same, 43.3 metres (142 ft).[4]

It is one of the best-preserved of all Roman buildings. It has been in continuous use throughout its history, and since the 7th century, the Pantheon has been used as a Roman Catholic church dedicated to “St. Mary and the Martyrs” but informally known as “Santa Maria Rotonda.”[5] The square in front of the Pantheon is called Piazza della Rotonda.

I cannot adequately convey Ferren’s appreciation and moment of inspiration when, all in a moment, he understood how engineering and art could be one, and he also understood something new about light: it can have ‘weight’. He then described the engineering feat in more detail and noted that we are barely able to achieve a structure like the Pantheon with today’s battery of technological innovations and understanding. He talked about the ‘miracles’ needed to achieve similar feats today and then he segued into autonomous cars and that’s where he lost me. Call me a peasant and an ignoramus (perhaps once these talks are made public it will be obvious I misunderstood his point) but I am never going to view an autonomous car as an engineering feat similar to the Pantheon. As I see it, Ferren left out the emotional/spiritual (not religious) aspect that great work can inspire in someone. While the light bulb was an extraordinary achievement in its own right, as is electricity for that matter, neither is likely to take your breath away in an inspirational fashion.

Brian Greene (not listed on the programme) was introduced next. Greene’s Wikipedia entry (Note: Links have been removed),

Brian Randolph Greene [1] (born February 9, 1963) is an American theoretical physicist and string theorist. He has been a professor at Columbia University since 1996 and chairman of the World Science Festival since co-founding it in 2008. Greene has worked on mirror symmetry, relating two different Calabi–Yau manifolds (concretely, relating the conifold to one of its orbifolds). He also described the flop transition, a mild form of topology change, showing that topology in string theory can change at the conifold point. He has become known to a wider audience through his books for the general public, The Elegant Universe, Icarus at the Edge of Time, The Fabric of the Cosmos, The Hidden Reality, and related PBS television specials. Greene also appeared on The Big Bang Theory episode “The Herb Garden Germination”, as well as the films Frequency and The Last Mimzy.

He also recently launched World Science U (free science classes online) as per a Feb. 26, 2014 post by David Bruggeman on his Pasco Phronesis blog.

The presentation was a history of the world from the Big Bang to the end of the world. It was the fastest 18 minutes I’ve experienced so far and it provided a cosmic view of history. Briefly, everything disintegrates: the sun, the galaxy and, eventually, photons.

The last speaker I’m mentioning is Marc Kushner, architect. From his TED biography (Note: Links have been removed),

Marc Kushner is a practicing architect who splits his time between designing buildings at HWKN, the architecture firm he cofounded, and amassing the world’s architecture on the website he runs, Architizer.com. Both have the same mission: to reconnect the public with architecture.

Kushner’s core belief is that architecture touches everyone — and everyone is a fan of architecture, even if they don’t know it yet. New forms of media empower people to shape the built environment, and that means better buildings, which make better cities, which make a better world.

Kushner, too, started with a childhood story where he confessed he didn’t like the architecture of the home where he and his family lived. This loathing inspired him to pursue architecture and he then segued into a history of architecture from the 1970s to the present day. Apparently the 1970s spawned something called ‘brutalism’ which is very much about concrete. (Arthur Erickson, a local Vancouver (Canada) architect who was internationally lauded for his work, loved concrete; I do not.) According to Kushner, I’m not the only one who doesn’t like ‘brutalism’ and so by the 1980s architects fell back on tried and true structures and symbols. Kushner noted a back and forth movement between architects attempting to push the limits of technology and alienating the populace, and then attempting to please the populace and going overboard in their efforts with exaggerated and ornate forms which eventually become off-putting. Kushner then pointed to the Guggenheim Bilbao as an architecture game-changer (from the Guggenheim Museum Bilbao Wikipedia entry; Note: Links have been removed),

The Guggenheim Museum Bilbao is a museum of modern and contemporary art, designed by Canadian-American architect Frank Gehry, and located in Bilbao, Basque Country, Spain. The museum was inaugurated on 18 October 1997 by King Juan Carlos I of Spain.

One of the most admired works of contemporary architecture, the building has been hailed as a “signal moment in the architectural culture”, because it represents “one of those rare moments when critics, academics, and the general public were all completely united about something.”[3] The museum was the building most frequently named as one of the most important works completed since 1980 in the 2010 World Architecture Survey among architecture experts.[3]

Kushner’s own work has clearly been influenced by Gehry and others who changed architecture in the 1990s but his approach is focused on attempting to integrate the community into the process and he described how he and his team have released architectural illustrations onto the internet years before a building is constructed to make the process more accessible.

O’Reilly Media’s Solid Conference in San Francisco, California from May 21-22, 2014

Given that O’Reilly Media is best known (by me, anyway) for its publishing/writing conferences, the notice about their Solid Conference about the ‘internet of things’, etc. was unexpected. From the O’Reilly Media Feb. 26, 2014 news release,

The “punctuated equilibrium” theory asserts that rapid bursts of change upend the leisurely pace of species stasis, creating events that result in new species and leave few fossils behind.

Technology has reached the cusp of such an event. Call it the Internet of Things, the Age of Intelligent Devices, the Industrial Internet, the Programmable World, a neologism of your own choosing—it amounts to the same thing—the intersection of software, the Internet, big data, and physical objects. Ultimately, our entire environment will be connected and intelligent.

To mark this sea-change moment, O’Reilly Media introduces Solid Conference, scheduled for May 21-22 at Fort Mason in San Francisco.

“As big data moves from the Web into the physical world, it’s more important than ever that people who deal with software and people who deal with hardware and machinery understand each other,” says Jon Bruner, who chairs Solid with MIT Media Lab’s Joi Ito. “Solid is about creating an interdisciplinary mix of the sort that everyone—designers, engineers, investors, researchers, entrepreneurs—will need to tap in the coming year.”

Chairs Ito and Bruner have drafted a stellar lineup of innovators, funders, and visionaries for the conference, including:

  • Astro Teller, Captain of Moonshots at Google[x]
  • Rodney Brooks, CTO and Chairman of Rethink Robotics
  • Tim O’Reilly, Founder and CEO of O’Reilly Media
  • Andra Keay, Managing Director at Silicon Valley Robotics
  • Carl Bass, CEO of Autodesk, Inc.
  • Moe Tanabian, Director of Mobile Technology at Samsung Mobile
  • Aurora Thornhill, Head of the Project Specialist Team at Kickstarter
  • Ayah Bdeir, Founder and CEO of littleBits
  • Matthew Gardiner, Artist and Senior Lead Researcher at Ars Electronica Futurelab
  • Neil Gershenfeld, Director of the MIT Center for Bits and Atoms
  • Brian Gerkey, CEO of Open Source Robotics Foundation
  • Renee DiResta, Principal at OATV
  • Timothy Prestero, Founder and CEO of Design that Matters
  • Janos Veres, Manager of the Printed Electronics Team at PARC

Solid is more show than tell. “This isn’t about sitting in a conference room and getting your brain freeze-dried by PowerPoint presentations,” Bruner says. “You’ll see demonstrations of real networked products and participate in intensive colloquies with those leading us into this new era. People who come to Solid won’t just be attending a conference. They’ll be walking through a portal to a new world.”

Early registration discounts apply until March 20.

As expected, this is not a cheap conference; an early bird all access pass for the two-day conference is $1095.00 USD.

Here’s my recounting of the March 12, 2014 ‘Solid’ web presentation by Tim O’Reilly & Jim Stogdill.

11:01 am O’Reilly: Longstanding interest in ‘maker’ movement since the early 2000s.

11:03 am O’Reilly: everything is connected ‘internet of things’, big data, robotics, maker movement, etc.

11:05 Stogdill: not sure the name Solid is big enough to describe this upcoming conference

11:05 Stogdill: says hardware is malleable (?) … more accessible, i.e., parts are easier to access and it’s easier to customize

11:08 O’Reilly: moves to subject of design … massive dislocation due to computers, e.g. graphic design … we need process designers (?) .. collisions between specialties

11:09 O’Reilly: collective intelligence and man/machine symbiosis important ideas for our age

11:11 O’Reilly: how do we change the interaction with a thermostat … remove need for human input

11:14 Stodgill: business models not taking advantage of open source options

11:15 O’Reilly: different options for future such as Google/Apple/… Internet of things (proprietary model) or a freely interoperable system of things

11:17 Stodgill: shifting to robotics … integrate virtual/digital/macro worlds in their work and thinking

11:18 O’Reilly: our notion of robots is of autonomous (intelligent) devices but we are surrounded by robots, e.g., washing machines, that aren’t autonomous

11:20 Stogdill: shifting to manufacturing … talking about frictionless manufacturing  … new relationship for Silicon Valley and China

11:23 O’Reilly: it doesn’t have to be China  .. all the relationships are changing

11:24 O’Reilly: replacing matter with mathematics

11:25 O’Reilly: how you remake an industry, e.g., Square which started as a hardware company which turns a phone into a point-of-sale system

11:29 Stogdill: change topic to surveillance and privacy .. digital thermostats recently put in Stogdill’s home .. he had them taken offline while he was on vacation as he didn’t want the info. on the internet while he was gone (?)

11:32 O’Reilly: not good to be afraid of the future .. Stogdill agrees

11:33 O’Reilly: solid is already big in agriculture .. sensors, robotics, etc.

11:42 O’Reilly: answer to my question (Will UK PM David Cameron’s latest ‘internet of things’ funding announcement have an impact on gov’t funding in US?) .. there’s already lots of government funding here [in US] e.g. Google purchases of DARPA-funded companies … didn’t see much impact other than it’s good when governments invest … [see March 10, 2014 article by Jessica Bland for the Guardian about Cameron's announcement]

11:45 off my Twitter feed, a tweet that seems synchronous in a Carl Jung kind of way:

claireoconnell @claireoconnell

High-tech maker space TechShop planned for Ireland at DCU Innovation Campus #TechShop siliconrepublic.com/innovation/ite… via @siliconrepublic et moi

11:46 O’Reilly: sees big ‘Solid’ innovation in industrial space rather than consumer space

11:48 Stogdill: love the idea of generativity, i.e., innovation from unexpected quarters

11:49 Question: What is the stuff that matters?

11:49 Stogdill: health care

11:50 O’Reilly: yes, health care and the environment .. e.g., keeping track of an elderly parent and talks about his mother-in-law, many years ago, having a stroke and lying on the floor for days because family was not in town

11:51 Question: How do we manage hacking?

11:52: O’Reilly: you have to be considering security but thoughtfully … not trying to anticipate everything that can go wrong and creating rules to avoid the problem .. but putting some thought into what might go wrong and responding appropriately when something does happen …

11:54 Stogdill: there’s an asymmetry problem when things go digital .. e.g. if you want to throw a rock through his [Stogdill's] windows you have to be there physically … digitally, anyone from anywhere has access

11:55 Question: What do we need to know to get started (paraphrase)

11:55 O’Reilly: there are some great programmes at university but right now you can get at least as much by playing around

11:57 Question: Are you optimistic?

11:57 O’Reilly: Yes, I am optimistic… and we do have possibilities both positive and negative … most concerned about anti-science movement … worst-case scenario: anti-science and anti-technology backlash hits just when water, climate change, and other issues become pressing …

11:59 Stogdill: James Watt thought he was building a steam engine but he also created modernism and many other isms

12 pm O’Reilly: Lots to be optimistic about and lots to care about

I don’t know if they’ll be making this video available but you can try looking here.

ETA March 17, 2014: You can find the video for the O’Reilly/Stogdill presentation on the Solid YouTube playlist or you can go directly to the video here.

Water desalination by graphene and water purification by sapwood

I have two items about water. The first concerns a new technique from MIT (Massachusetts Institute of Technology) for desalination using graphene. From a Feb. 25, 2014 news release by David Chandler on EurekAlert,

Researchers have devised a way of making tiny holes of controllable size in sheets of graphene, a development that could lead to ultrathin filters for improved desalination or water purification.

The team of researchers at MIT, Oak Ridge National Laboratory, and in Saudi Arabia succeeded in creating subnanoscale pores in a sheet of the one-atom-thick material, which is one of the strongest materials known. …

The concept of using graphene, perforated by nanoscale pores, as a filter in desalination has been proposed and analyzed by other MIT researchers. The new work, led by graduate student Sean O’Hern and associate professor of mechanical engineering Rohit Karnik, is the first step toward actual production of such a graphene filter.

Making these minuscule holes in graphene — a hexagonal array of carbon atoms, like atomic-scale chicken wire — occurs in a two-stage process. First, the graphene is bombarded with gallium ions, which disrupt the carbon bonds. Then, the graphene is etched with an oxidizing solution that reacts strongly with the disrupted bonds — producing a hole at each spot where the gallium ions struck. By controlling how long the graphene sheet is left in the oxidizing solution, the MIT researchers can control the average size of the pores.
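As an aside from me, here’s a minimal sketch of that two-stage process in Python; the defect density and the linear pore-growth rate are my simplifications, not the paper’s measured kinetics,

# Sketch of the two-stage pore-making process: (1) gallium ion
# bombardment creates reactive defect sites in the graphene, then
# (2) an oxidizing etch opens a pore at each site, with longer etch
# times giving larger average pores (growth assumed linear here).
import random

def bombard(n_sites):
    # Stage 1: scatter defect sites at random over a unit sheet.
    return [(random.random(), random.random()) for _ in range(n_sites)]

def etch(defects, minutes, growth_nm_per_min=0.2):
    # Stage 2: every defect becomes a pore whose diameter grows with
    # time spent in the oxidizing solution.
    diameter_nm = minutes * growth_nm_per_min
    return [(x, y, diameter_nm) for (x, y) in defects]

sites = bombard(5000)
for minutes in (1, 3, 5):
    pores = etch(sites, minutes)
    print(minutes, "min etch -> pores of about", pores[0][2], "nm")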

A big limitation in existing nanofiltration and reverse-osmosis desalination plants, which use filters to separate salt from seawater, is their low permeability: Water flows very slowly through them. The graphene filters, being much thinner, yet very strong, can sustain a much higher flow. “We’ve developed the first membrane that consists of a high density of subnanometer-scale pores in an atomically thin, single sheet of graphene,” O’Hern says.

For efficient desalination, a membrane must demonstrate “a high rejection rate of salt, yet a high flow rate of water,” he adds. One way of doing that is decreasing the membrane’s thickness, but this quickly renders conventional polymer-based membranes too weak to sustain the water pressure, or too ineffective at rejecting salt, he explains.

With graphene membranes, it becomes simply a matter of controlling the size of the pores, making them “larger than water molecules, but smaller than everything else,” O’Hern says — whether salt, impurities, or particular kinds of biochemical molecules.

The permeability of such graphene filters, according to computer simulations, could be 50 times greater than that of conventional membranes, as demonstrated earlier by a team of MIT researchers led by graduate student David Cohen-Tanugi of the Department of Materials Science and Engineering. But producing such filters with controlled pore sizes has remained a challenge. The new work, O’Hern says, demonstrates a method for actually producing such material with dense concentrations of nanometer-scale holes over large areas.

“We bombard the graphene with gallium ions at high energy,” O’Hern says. “That creates defects in the graphene structure, and these defects are more chemically reactive.” When the material is bathed in a reactive oxidant solution, the oxidant “preferentially attacks the defects,” and etches away many holes of roughly similar size. O’Hern and his co-authors were able to produce a membrane with 5 trillion pores per square centimeter, well suited to use for filtration. “To better understand how small and dense these graphene pores are, if our graphene membrane were to be magnified about a million times, the pores would be less than 1 millimeter in size, spaced about 4 millimeters apart, and span over 38 square miles, an area roughly half the size of Boston,” O’Hern says.
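O’Hern’s magnification analogy checks out if you run the numbers; in this Python snippet only the 0.5 nm pore diameter is my assumption (based on “subnanometer-scale”), the rest comes from the quote,

# Blow up a 1 square centimetre sheet carrying 5 trillion pores a
# million times and see what the analogy gives.
mag = 1e6
pores_per_cm2 = 5e12
pore_cm = 0.5e-7                           # 0.5 nm, in centimetres
spacing_cm = (1.0 / pores_per_cm2) ** 0.5  # average centre-to-centre gap

print(pore_cm * mag * 10, "mm pores")      # 0.5 mm, i.e. less than 1 mm
print(spacing_cm * mag * 10, "mm apart")   # about 4.5 mm
side_miles = 1.0 * mag / 2.54 / 12 / 5280  # the magnified 1 cm side, in miles
print(side_miles ** 2, "square miles")     # about 39 square miles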

With this technique, the researchers were able to control the filtration properties of a single, centimeter-sized sheet of graphene: Without etching, no salt flowed through the defects formed by gallium ions. With just a little etching, the membranes started allowing positive salt ions to flow through. With further etching, the membranes allowed both positive and negative salt ions to flow through, but blocked the flow of larger organic molecules. With even more etching, the pores were large enough to allow everything to go through.

Scaling up the process to produce useful sheets of the permeable graphene, while maintaining control over the pore sizes, will require further research, O’Hern says.

Karnik says that such membranes, depending on their pore size, could find various applications. Desalination and nanofiltration may be the most demanding, since the membranes required for these plants would be very large. But for other purposes, such as selective filtration of molecules — for example, removal of unreacted reagents from DNA — even the very small filters produced so far might be useful.

“For biofiltration, size or cost are not as critical,” Karnik says. “For those applications, the current scale is suitable.”

Dexter Johnson, in a Feb. 26, 2014 posting on his Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website, provides some context for and insight into the work (Note: Links have been removed),

About 18 months ago, I wrote about an MIT project in which computer models demonstrated that graphene could act as a filter in the desalination of water through the reverse osmosis (RO) method. RO is slightly less energy intensive than the predominantly used multi-stage-flash process. The hope was that the nanopores of the graphene material would make the RO method even less energy intensive than current versions by making it easier to push the water through the filter membrane.

The models were promising, but other researchers in the field said at the time it was going to be a long road to translate a computer model to a real product.

It would seem that the MIT researchers agreed it was worth the effort and accepted the challenge to go from computer model to a real device as they announced this week that they had developed a method for creating selective pores in graphene that make it suitable for water desalination.

Here’s a link to and a citation for the paper,

Selective Ionic Transport through Tunable Subnanometer Pores in Single-Layer Graphene Membranes by Sean C. O’Hern, Michael S. H. Boutilier, Juan-Carlos Idrobo, Yi Song, Jing Kong, Tahar Laoui, Muataz Atieh, and Rohit Karnik. Nano Lett., Article ASAP DOI: 10.1021/nl404118f Publication Date (Web): February 3, 2014

Copyright © 2014 American Chemical Society

This article is behind a paywall.

The second item is also from MIT and concerns a low-tech means of purifying water. From a Feb. 27, 2014 news item on Azonano,

If you’ve run out of drinking water during a lakeside camping trip, there’s a simple solution: Break off a branch from the nearest pine tree, peel away the bark, and slowly pour lake water through the stick. The improvised filter should trap any bacteria, producing fresh, uncontaminated water.

In fact, an MIT team has discovered that this low-tech filtration system can produce up to four liters of drinking water a day — enough to quench the thirst of a typical person.

In a paper published this week in the journal PLoS ONE, the researchers demonstrate that a small piece of sapwood can filter out more than 99 percent of the bacteria E. coli from water. They say the size of the pores in sapwood — which contains xylem tissue evolved to transport sap up the length of a tree — also allows water through while blocking most types of bacteria.

Co-author Rohit Karnik, an associate professor of mechanical engineering at MIT, says sapwood is a promising, low-cost, and efficient material for water filtration, particularly for rural communities where more advanced filtration systems are not readily accessible.

“Today’s filtration membranes have nanoscale pores that are not something you can manufacture in a garage very easily,” Karnik says. “The idea here is that we don’t need to fabricate a membrane, because it’s easily available. You can just take a piece of wood and make a filter out of it.”

The Feb. 26, 2014 news release on EurekAlert, which originated the news item, describes current filtration techniques and the advantages associated with this new low-tech approach,

There are a number of water-purification technologies on the market today, although many come with drawbacks: Systems that rely on chlorine treatment work well at large scales, but are expensive. Boiling water to remove contaminants requires a great deal of fuel to heat the water. Membrane-based filters, while able to remove microbes, are expensive, require a pump, and can become easily clogged.

Sapwood may offer a low-cost, small-scale alternative. The wood is comprised of xylem, porous tissue that conducts sap from a tree’s roots to its crown through a system of vessels and pores. Each vessel wall is pockmarked with tiny pores called pit membranes, through which sap can essentially hopscotch, flowing from one vessel to another as it feeds structures along a tree’s length. The pores also limit cavitation, a process by which air bubbles can grow and spread in xylem, eventually killing a tree. The xylem’s tiny pores can trap bubbles, preventing them from spreading in the wood.

“Plants have had to figure out how to filter out bubbles but allow easy flow of sap,” Karnik observes. “It’s the same problem with water filtration where we want to filter out microbes but maintain a high flow rate. So it’s a nice coincidence that the problems are similar.”

The news release also describes the experimental procedure the scientists followed (from the news release),

To study sapwood’s water-filtering potential, the researchers collected branches of white pine and stripped off the outer bark. They cut small sections of sapwood measuring about an inch long and half an inch wide, and mounted each in plastic tubing, sealed with epoxy and secured with clamps.

Before experimenting with contaminated water, the group used water mixed with red ink particles ranging from 70 to 500 nanometers in size. After all the liquid passed through, the researchers sliced the sapwood in half lengthwise, and observed that much of the red dye was contained within the very top layers of the wood, while the filtrate, or filtered water, was clear. This experiment showed that sapwood is naturally able to filter out particles bigger than about 70 nanometers.

However, in another experiment, the team found that sapwood was unable to separate out 20-nanometer particles from water, suggesting that there is a limit to the size of particles coniferous sapwood can filter.

Finally, the team flowed inactivated, E. coli-contaminated water through the wood filter. When they examined the xylem under a fluorescent microscope, they saw that bacteria had accumulated around pit membranes in the first few millimeters of the wood. Counting the bacterial cells in the filtered water, the researchers found that the sapwood was able to filter out more than 99 percent of E. coli from water.

Karnik says sapwood likely can filter most types of bacteria, the smallest of which measure about 200 nanometers. However, the filter probably cannot trap most viruses, which are much smaller in size.
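Putting the reported numbers together, the filter behaves like a simple size cutoff, which is easy to express in code; in this Python sketch the exact cutoff is only bracketed by the experiments (somewhere above 20 nm and at or below 70 nm), and the particle sizes are rough figures of mine,

# Size-exclusion logic implied by the sapwood experiments: particles
# larger than the pit-membrane cutoff are trapped, smaller ones pass.
CUTOFF_NM = 70  # conservative value; experiments bracket it at 20-70 nm

contaminants_nm = {
    "red ink particles (70-500 nm)": 70,  # trapped in the dye test
    "20 nm particles": 20,                # passed through
    "E. coli (about 2000 nm)": 2000,      # trapped, per the experiment
    "smallest bacteria": 200,             # should be trapped
    "typical virus (rough figure)": 50,   # would likely slip through
}

for name, size_nm in contaminants_nm.items():
    verdict = "trapped" if size_nm >= CUTOFF_NM else "passes"
    print(name, "->", verdict)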

The researchers have future plans (from the news release),

Karnik says his group now plans to evaluate the filtering potential of other types of sapwood. In general, flowering trees have smaller pores than coniferous trees, suggesting that they may be able to filter out even smaller particles. However, vessels in flowering trees tend to be much longer, which may be less practical for designing a compact water filter.

Designers interested in using sapwood as a filtering material will also have to find ways to keep the wood damp, or to dry it while retaining the xylem function. In other experiments with dried sapwood, Karnik found that water either did not flow through well, or flowed through cracks, but did not filter out contaminants.

“There’s huge variation between plants,” Karnik says. “There could be much better plants out there that are suitable for this process. Ideally, a filter would be a thin slice of wood you could use for a few days, then throw it away and replace at almost no cost. It’s orders of magnitude cheaper than the high-end membranes on the market today.”

Here’s a link to and a citation for the paper,

Water Filtration Using Plant Xylem by Michael S. H. Boutilier, Jongho Lee, Valerie Chambers, Varsha Venkatesh, & Rohit Karnik. PLOS One Published: February 26, 2014 DOI: 10.1371/journal.pone.0089934

This paper is open access.

One final observation: two of the researchers listed as authors on the graphene/water desalination paper are also listed on the low-tech sapwood paper (Michael S. H. Boutilier and Rohit Karnik).