Tag Archives: augmented reality

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’ (covering both an art/science event in Toronto and a Canadian government initiative) without mentioning the new technology needed to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial governments (Québec and Ontario) are prepared to invest in one of those necessary ‘new’ technologies: 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports on Canada’s 5G plans in suitably breathless tones (even in text only) in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part for its potential to help hurdle some of the walls we are already seeing with current networks.

Virtual Reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make the devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal-looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more, instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …
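The speed claims in that excerpt are easy to sanity-check. Here is a minimal back-of-the-envelope calculation; the movie size (roughly 5 GB) and the blink duration (roughly 150 milliseconds) are my own assumed figures, not The Economist’s.

```python
# Rough check of the 5G figures quoted above (movie size and blink duration
# are assumptions, not numbers from The Economist).
peak_rate_gbps = 20        # claimed peak 5G download speed, gigabits/second
movie_size_gb = 5          # assumed size of a high-resolution movie, gigabytes

transfer_time_s = (movie_size_gb * 8) / peak_rate_gbps   # gigabytes -> gigabits
print(f"Movie transfer at {peak_rate_gbps} Gbps: {transfer_time_s:.1f} s")  # ~2 s

latency_ms = 1             # claimed 5G response time
blink_ms = 150             # assumed duration of an eye blink
print(f"Latency is roughly 1/{blink_ms // latency_ms} of a blink")
```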

Economics may not be the only brake, though. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings for these long promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview with Frank Koppens, the leader of the quantum nano-optoelectronic group at Institute of Photonic Sciences (ICFO) just outside of Barcelona, last year.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research,” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power and the like, it seems the South Koreans have found answers of some kind, but they are hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the comments. Thank you.

Humans can distinguish molecular differences by touch

Yesterday, in my December 18, 2017 post about medieval textiles, I posed the question, “How did medieval artisans create nanoscale and microscale gilding when they couldn’t see it?” I realized afterwards that an answer to that question might be in this December 13, 2017 news item on ScienceDaily,

How sensitive is the human sense of touch? Sensitive enough to feel the difference between surfaces that differ by just a single layer of molecules, a team of researchers at the University of California San Diego has shown.

“This is the greatest tactile sensitivity that has ever been shown in humans,” said Darren Lipomi, a professor of nanoengineering and member of the Center for Wearable Sensors at the UC San Diego Jacobs School of Engineering, who led the interdisciplinary project with V. S. Ramachandran, director of the Center for Brain and Cognition and distinguished professor in the Department of Psychology at UC San Diego.

So perhaps those medieval artisans were able to feel the difference before it could be seen in the textiles they were producing?

Getting back to the matter at hand, a December 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Liezel Labios offers more detail about the work,

Humans can easily feel the difference between many everyday surfaces such as glass, metal, wood and plastic. That’s because these surfaces have different textures or draw heat away from the finger at different rates. But UC San Diego researchers wondered, if they kept all these large-scale effects equal and changed only the topmost layer of molecules, could humans still detect the difference using their sense of touch? And if so, how?

Researchers say this fundamental knowledge will be useful for developing electronic skin, prosthetics that can feel, advanced haptic technology for virtual and augmented reality and more.

Unsophisticated haptic technologies exist in the form of rumble packs in video game controllers or smartphones that shake, Lipomi added. “But reproducing realistic tactile sensations is difficult because we don’t yet fully understand the basic ways in which materials interact with the sense of touch.”

“Today’s technologies allow us to see and hear what’s happening, but we can’t feel it,” said Cody Carpenter, a nanoengineering Ph.D. student at UC San Diego and co-first author of the study. “We have state-of-the-art speakers, phones and high-resolution screens that are visually and aurally engaging, but what’s missing is the sense of touch. Adding that ingredient is a driving force behind this work.”

This study is the first to combine materials science and psychophysics to understand how humans perceive touch. “Receptors processing sensations from our skin are phylogenetically the most ancient, but far from being primitive they have had time to evolve extraordinarily subtle strategies for discerning surfaces—whether a lover’s caress or a tickle or the raw tactile feel of metal, wood, paper, etc. This study is one of the first to demonstrate the range of sophistication and exquisite sensitivity of tactile sensations. It paves the way, perhaps, for a whole new approach to tactile psychophysics,” Ramachandran said.

Super-Sensitive Touch

In a paper published in Materials Horizons, UC San Diego researchers tested whether human subjects could distinguish—by dragging or tapping a finger across the surface—between smooth silicon wafers that differed only in their single topmost layer of molecules. One surface was a single oxidized layer made mostly of oxygen atoms. The other was a single Teflon-like layer made of fluorine and carbon atoms. Both surfaces looked identical and felt similar enough that some subjects could not differentiate between them at all.

According to the researchers, human subjects can feel these differences because of a phenomenon known as stick-slip friction, which is the jerking motion that occurs when two objects at rest start to slide against each other. This phenomenon is responsible for the musical notes played by running a wet finger along the rim of a wine glass, the sound of a squeaky door hinge or the noise of a stopping train. In this case, each surface has a different stick-slip frequency due to the identity of the molecules in the topmost layer.
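For readers who like to tinker, here is a minimal stick-slip sketch: a single block (standing in for a fingertip pad) is dragged across a surface through a spring whose far end moves at constant speed, and the block alternately sticks and slips. Every parameter value, including the two sets of friction coefficients, is an illustrative assumption rather than a figure from the paper.

```python
# Minimal stick-slip friction sketch (illustrative parameters only).
import numpy as np

def stick_slip(mu_s, mu_k, k=100.0, m=0.01, normal=1.0, v_drive=0.01,
               dt=1e-4, t_end=2.0):
    """Simulate a block pulled through a spring; return its position over time."""
    n = int(t_end / dt)
    x = np.zeros(n)
    vel = 0.0
    sticking = True
    for i in range(1, n):
        drive = v_drive * i * dt                 # position of the moving spring end
        spring_force = k * (drive - x[i - 1])
        if sticking:
            if abs(spring_force) <= mu_s * normal:
                x[i] = x[i - 1]                  # static friction holds the block
                continue
            sticking = False                     # static friction gives way
        friction = -mu_k * normal * np.sign(vel) if vel != 0 else 0.0
        vel += (spring_force + friction) / m * dt
        if vel <= 0:                             # block comes to rest and re-sticks
            vel = 0.0
            sticking = True
        x[i] = x[i - 1] + vel * dt
    return x

def count_slips(x):
    moving = np.diff(x) > 1e-12
    return int(np.sum(moving[1:] & ~moving[:-1]))

# Two surfaces that differ only in friction produce different slip rhythms.
print("slips, surface A:", count_slips(stick_slip(mu_s=0.5, mu_k=0.3)))
print("slips, surface B:", count_slips(stick_slip(mu_s=0.9, mu_k=0.5)))
```

Surfaces with different friction settle into different slip rhythms, which is the kind of cue the researchers suggest a sliding finger picks up on.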

In one test, 15 subjects were tasked with feeling three surfaces and identifying the one surface that differed from the other two. Subjects correctly identified the differences 71 percent of the time.

In another test, subjects were given three different strips of silicon wafer, each strip containing a different sequence of 8 patches of oxidized and Teflon-like surfaces. Each sequence represented an 8-digit string of 0s and 1s, which encoded for a particular letter in the ASCII alphabet. Subjects were asked to “read” these sequences by dragging a finger from one end of the strip to the other and noting which patches in the sequence were the oxidized surfaces and which were the Teflon-like surfaces. In this experiment, 10 out of 11 subjects decoded the bits needed to spell the word “Lab” (with the correct upper and lowercase letters) more than 50 percent of the time. Subjects spent an average of 4.5 minutes to decode each letter.
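As a concrete illustration of that encoding, here is how three 8-bit patch readings decode into ASCII letters. The convention that an oxidized patch reads as a 0 and a Teflon-like patch as a 1 is my assumption; the news release does not say which surface stood for which bit.

```python
# Decode 8-bit patch readings into ASCII characters (bit assignment assumed).
patch_readings = ["01001100", "01100001", "01100010"]   # one strip per letter

word = "".join(chr(int(bits, 2)) for bits in patch_readings)
print(word)   # -> "Lab"
```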

“A human may be slower than a nanobit per second in terms of reading digital information, but this experiment shows a potentially neat way to do chemical communications using our sense of touch instead of sight,” Lipomi said.

Basic Model of Touch

The researchers also found that these surfaces can be differentiated depending on how fast the finger drags and how much force it applies across the surface. The researchers modeled the touch experiments using a “mock finger,” a finger-like device made of an organic polymer that’s connected by a spring to a force sensor. The mock finger was dragged across the different surfaces using multiple combinations of force and swiping velocity. The researchers plotted the data and found that the surfaces could be distinguished given certain combinations of velocity and force. Meanwhile, other combinations made the surfaces indistinguishable from each other.
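To give a sense of what “certain combinations of velocity and force” might look like, here is a rough sketch that sweeps a grid of normal forces and swipe speeds and marks where two surfaces would be told apart. It uses a crude textbook approximation for stick-slip frequency rather than the paper’s mock-finger measurements, and the friction coefficients and the 5 Hz detection threshold are purely illustrative assumptions.

```python
# Sketch of a force/velocity sweep using an approximate stick-slip frequency,
# f ~ k*v / (2*(mu_s - mu_k)*N). All numbers are illustrative assumptions.
import numpy as np

def slip_frequency(mu_s, mu_k, normal, velocity, k=100.0):
    """Approximate stick-slip repetition rate of a spring-driven slider."""
    return k * velocity / (2.0 * (mu_s - mu_k) * normal)

surface_a = (0.5, 0.3)        # (mu_s, mu_k) for one surface
surface_b = (0.9, 0.5)        # (mu_s, mu_k) for the other surface
threshold_hz = 5.0            # assumed smallest frequency gap a finger can feel

forces = np.linspace(0.2, 2.0, 5)          # normal force, newtons
velocities = np.linspace(0.005, 0.05, 5)   # swipe speed, metres per second

print("rows = increasing force, columns = increasing speed, X = distinguishable")
for n in forces:
    row = ["X" if abs(slip_frequency(*surface_a, n, v) -
                      slip_frequency(*surface_b, n, v)) > threshold_hz else "."
           for v in velocities]
    print(" ".join(row))
```

Some force/speed combinations land above the threshold and others below it, which mirrors the finding that only certain combinations make the two surfaces feel different.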

“Our results reveal a remarkable human ability to quickly home in on the right combinations of forces and swiping velocities required to feel the difference between these surfaces. They don’t need to reconstruct an entire matrix of data points one by one as we did in our experiments,” Lipomi said.

“It’s also interesting that the mock finger device, which doesn’t have anything resembling the hundreds of nerves in our skin, has just one force sensor and is still able to get the information needed to feel the difference in these surfaces. This tells us it’s not just the mechanoreceptors in the skin, but receptors in the ligaments, knuckles, wrist, elbow and shoulder that could be enabling humans to sense minute differences using touch,” he added.

This work was supported by member companies of the Center for Wearable Sensors at UC San Diego: Samsung, Dexcom, Sabic, Cubic, Qualcomm and Honda.

For those who prefer their news by video,

Here’s a link to and a citation for the paper,

Human ability to discriminate surface chemistry by touch by Cody W. Carpenter, Charles Dhong, Nicholas B. Root, Daniel Rodriquez, Emily E. Abdo, Kyle Skelil, Mohammad A. Alkhadra, Julian Ramírez, Vilayanur S. Ramachandran and Darren J. Lipomi. Mater. Horiz., 2018, Advance Article DOI: 10.1039/C7MH00800G

This paper is open access, but you do need to open a free account on the website.

Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism

Companies are finding more ways to publicize and promote themselves and their products. For example, there’s Intel, which seems to have been especially active lately with its Tomorrow Project (my August 22, 2011 posting) and its sponsorship (as one of only four companies to do so) of the Discovery Channel’s Curiosity television programme (my July 15, 2011 posting). What I find interesting in these efforts is their range and the mix of old and new techniques.

Today I found (via an August 30, 2011 article by Nancy Owano) a documentary made by Robert Spence, a Canadian filmmaker and eyeborg, for the recently released Deus Ex: Human Revolution game from Eidos Montréal (both the game and Spence are mentioned in my August 18, 2011 posting). If you’re squeamish (a medical operation is featured), you might want to skip the first few minutes,

I found it quite informative but curiously US-centric. How could they discuss prostheses for the legs and not mention Oscar Pistorius, the history-making South African double-amputee runner who successfully petitioned the Court of Arbitration for Sport for the right to compete with able-bodied athletes? (In July this year, Pistorius qualified for the 2012 Olympics.) By the way, they do mention the Icelandic company, Össur, which created Pistorius’ “cheetah” legs. (There’s more about Pistorius and human enhancement in my Feb. 2, 2010 posting. [scroll down about 1/3 of the way])

There’s some very interesting material about augmented reality masks for firefighters in this documentary. Once functional and commercially available, the masks would give firefighters information about toxic gases, temperature, etc. as they move through a burning building. There’s a lot of interest in making augmented reality commercially available via smartphones as Kit Eaton notes in an August 29, 2011 article for Fast Company,

Junaio’s 3.0 release is a big transformation for the software–it included limited object recognition powers for about a year, but the new system is far more sophisticated. As well as relying on the usual AR sensor suite of GPS (to tell the software where the smartphone is on the planet), compass, and gyros to work out what angle the phone’s camera is looking, it also uses feature tracking to give it a better idea of the objects in its field of view. As long as one of Junaio’s channels or databases or the platforms of its developer partners has information on the object, it’ll pop up on screen.

When it recognizes a barcode, for example, the software “combines and displays data sources from various partner platforms to provide useful consumer information on a given product,” which can be a “website, a shopping micro-site or other related information” such as finding recipes based on the ingredients. It’s sophisticated enough so you can scan numerous barcoded items from your fridge and add in extras like “onions” and then get it to find a recipe that uses them.

Eaton notes that people might object to holding up their smartphones for long periods of time. That’s a problem that could be solved, of course, if we added a prosthetic to the eye or replaced an organic eye with a bionic one, as they do in the game and as the documentary suggests.

Not everyone is quite so sanguine about this bright new future. I featured a documentary, Fixed, about some of the discussion regarding disability, ability, and human enhancement in my August 3, 2010 posting. One of the featured academics is Gregor Wolbring, assistant professor, Department of Community Health Sciences, Program in Community Rehabilitation and Disability Studies, University of Calgary, and president of the Canadian Disability Studies Association. From Gregor’s June 17, 2011 posting on the FedCan blog,

The term ableism evolved from the disabled people rights movements in the United States and Britain during the 1960s and 1970s.  It questions and highlights the prejudice and discrimination experienced by persons whose body structure and ability functioning were labelled as ‘impaired’ as sub species-typical. Ableism of this flavor is a set of beliefs, processes and practices, which favors species-typical normative body structure based abilities. It labels ‘sub-normative’ species-typical biological structures as ‘deficient’, as not able to perform as expected.

The disabled people rights discourse and disability studies scholars question the assumption of deficiency intrinsic to ‘below the norm’ labeled body abilities and the favoritism for normative species-typical body abilities. The discourse around deafness and Deaf Culture would be one example where many hearing people expect the ability to hear. This expectation leads them to see deafness as a deficiency to be treated through medical means. In contrast, many Deaf people see hearing as an irrelevant ability and do not perceive themselves as ill and in need of gaining the ability to hear. Within the disabled people rights framework ableism was set up as a term to be used like sexism and racism to highlight unjust and inequitable treatment.

Ableism is, however, much more pervasive.

Ableism based on biological structure is not limited to the species-typical/ sub species-typical dichotomy. With recent science and technology advances, and envisioned advances to come, we will see the dichotomy of people exhibiting species-typical and the so-called sub species-typical abilities labeled as impaired, and in ill health. On the other side we will see people exhibiting beyond species-typical abilities as the new expectation norm. An ableism that favours beyond species-typical abilities over species-typical and sub species-typical abilities will enable a change in meaning and scope of concepts such as health, illness, rehabilitation, disability adjusted life years, medicine, health care, and health insurance. For example, one will only be labeled as healthy if one has received the newest upgrade to one’s body – meaning one would by default be ill until one receives the upgrade.

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman’s Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You’ll Envy for Fast Company]

I don’t really know how to take the fact that the documentary is, at bottom, product placement for the game Deus Ex: Human Revolution. On the upside, it opens up a philosophical discussion in a very engaging way. On the downside, it closes down that discussion because drawbacks are not seriously addressed.

Augmented reality: Star Trek’s holodeck or Bradbury’s Fahrenheit 451?

I sometimes take a walk on the wild side and simply post about something that interests me, so today I have two items about augmented reality projects from the Fast Company website. The first article, by Cliff Kuang, highlights a project at McGill University in Montréal, Canada, where researchers have created a floor that can simulate different textures. From the article,

What happens when display screen technology gets so cheap you can lay it down like carpeting? Researchers at Canada’s McGill University have an idea: floor tiles which use precisely calibrated vibrations to simulate snow, grass, sand, and myriad other surfaces–and can even be programmed to become virtual buttons and sliders.

It sounds like a promising start to Star Trek’s holodeck suite, doesn’t it? You can read more about it in Kuang’s article, and if you’re interested in additional detail, you can go to Kristina Grifantini’s article in Technology Review, where she notes that the project was presented at the IEEE (Institute of Electrical and Electronics Engineers) 2010 Haptics Symposium in Waltham, Massachusetts this past March. From Grifantini’s article,

Yon Visell, a researcher at McGill’s Center for Intelligent Machines and first author of the paper, says the tiles could be used “either for human computer interaction or immersive virtual reality applications.”

The next augmented reality project is written up in a Fast Company article by Ariel Schwartz and focuses on changing social behaviour. Ever been somewhere and observed someone getting beaten, not knowing how to intervene and put a halt to the situation? This project in the Netherlands features a giant billboard where such a scene plays out, but if you look up, you’ll see yourself incorporated (in real time) into the scene as a bystander. Here’s the video from YouTube,

Live interactive mega billboard against agression

The experience of watching this piece (watching the watchers become part of the drama they watch on the big billboard) reminded me of the movie version of Ray Bradbury’s story Fahrenheit 451, in which, at some future time, firemen are called in to burn books, which are illegal to read or own. We meet one of the lead characters, a fireman played by Oskar Werner, as he and his team are called in to destroy a library in someone’s home. He later returns to his own home, where his wife demands that he purchase a fourth video wall for the room where she watches her soap opera. She needs the fourth wall because it will give her an immersive experience in which she enters and becomes part of the soap opera.

This project is light years from Fahrenheit 451’s dystopic scenario in terms of how, and to what uses, these technologies can be put. The billboard offers you a reflection of your own behaviour as a bystander (in what is thankfully a drama this time) while also offering practical options for dealing with the real-life situation should it arise.

Nano augments reality; PEN’s consumer nano products inventory goes mobile and interactive; Two Cultures; Michael Geller’s ‘Look at Vancouver’ event

There was a nanotechnology mention hidden in a recent article (Augmented Reality is Both a Fad and the Future — Here’s Why by Farhad Manjoo in Fast Company) about a new iPhone application by Yelp, Monocle. From the article,

Babak Parviz, a bio-nanotechnologist at the University of Washington, has been working on augmented-reality contact lenses that would layer computer graphics on everything around us — in other words, we’d have Terminator eyes. “We have a vast amount of data on the Web, but today we see it on a flat screen,” says Michael Zöllner, an augmented-reality researcher at Germany’s Fraunhofer Institute for Computer Graphics Research. “It’s only a small step to see all of it superimposed on our lives.” Much of this sounds like a comic-book version of technology, and indeed, all of this buzz led the research firm Gartner to put AR on its “hype cycle” for emerging technologies — well on its way to the “peak of inflated expectations.”

Manjoo goes on to note that augmented reality is not new, although he’s not able to go back to the 1890s as I did in yesterday’s (Nov. 11, 2009) posting about using clouds to display data.

The Project on Emerging Nanotechnologies (PEN) has produced an exciting new iPhone application, findNano, which allows users to access PEN’s consumer products inventory via their mobile phones. From the news item on Azonano,

findNano allows users to browse an inventory of more than 1,000 nanotechnology-enabled consumer products, from sporting goods to food products and electronics to toys, using the iPhone and iPod Touch. Using the built-in camera, iPhone users can even submit new nanotech products to be included in future inventory updates.

That bit about users submitting information for the database reminds me of a news item about scientists in the UK setting up a database that can be accessed by mobile phones, allowing ordinary citizens to participate in gathering scientific information (I posted about it here). I wonder how PEN will track participation and whether it will produce a report on the results (good and/or bad).

One thing I did notice is that PEN’s consumer products inventory has over 1000 items while the new European inventory I mentioned in my Nov. 10, 2009 posting has 151 items.

I finally finished reading The Two Cultures: and A Second Look (a publication of the text of the original talk along with an updated view) by C. P. Snow. This year marks the talk’s 50th anniversary. My interest in Snow’s talk was reanimated by Andrew Maynard’s postings about the anniversary and the talk in his 2020 Science blog. He has three commentaries, starting here with a poll, followed by his May 5, 2009 and May 6, 2009 postings on the topic.

I had heard of The Two Cultures but understood it to be about the culture gap between the sciences and the arts/humanities. That is a profound misunderstanding of Snow’s talk/publication, which was more concerned with raising standards of living and health globally. Snow’s second look was a failed attempt to redress the misunderstanding.

From a writer’s perspective, his problem started with the title, which sets the frame for the whole talk. He then opened with a discussion of literary intellectuals and scientists (bringing us back to the number two), their differences, and the culture gap that ensues. Finally, more than half of his talk was over by the time he started the serious discussion about extending the benefits of what he termed ‘the scientific revolution’ globally.

It’s an interesting read and some of it (the discussion about education) is still quite timely.

Michael Geller, a Vancouver (Canada) architect, planner, real estate consultant, and developer, has organized an event to review what has happened in the city since the last election in 2008. From the news release (on Frances Bula’s blog),

SATURDAY, NOVEMBER 14, 2009 marks the one-year anniversary of the last election day in Vancouver; a day that resulted in a significant change in the political landscape and leadership of our city. The purpose of this event is to mark this anniversary with a review of the highlights of the past year in Vancouver municipal politics, particularly in terms of the accomplishments of Council and staff in the areas of housing, planning and development; fiscal management and economic development; and leadership.

The event will be held at the Morris J. Wosk Centre for Dialogue (lower level) at 515 West Hastings from 8:00 am to 12:30 pm. Admission is by donation. Geller has arranged a pretty interesting lineup for his three panel discussions, although one of the commenters on Bula’s blog is highly unimpressed with both the speakers and anyone who might foolishly attend.