This supremacy refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces that the milestone has been reached,
Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.
“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”
The feat is outlined in a paper in the journal Nature.
The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of the both awe-inspiring and bizarre properties of quantum mechanics.
“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.
“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”
‘A complex superposition state’
“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”
“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”
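Foxen’s point about exponential scaling can be made concrete with a toy state-vector simulator. This is an illustrative sketch in NumPy, not the team’s code: every added qubit doubles the number of complex amplitudes that must be stored and multiplied, so while 10 qubits fit in a few kilobytes, a 53-qubit state would need 2^53 (roughly 9 × 10^15) amplitudes — over a hundred petabytes at 16 bytes each.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate matrix to the `target` qubit of an n-qubit state vector."""
    # Reshape the 2^n amplitudes so the target qubit gets its own axis,
    # contract that axis with the gate, then restore the original layout.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

n = 10                         # 10 qubits -> 2^10 = 1024 amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                 # start in |00...0>
for q in range(n):             # uniform superposition over all 2^n bitstrings
    state = apply_single_qubit_gate(state, H, q, n)

probs = np.abs(state) ** 2     # measurement probabilities for each bitstring
print(len(state))              # 1024; at 53 qubits this would be 2^53
```

Doubling `n` a few times makes the memory wall obvious long before 53 qubits.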
According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.
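The linear cross-entropy benchmarking fidelity reported in the paper, F = 2^n · mean(P_ideal(x_i)) − 1, can be illustrated with a toy example. In the sketch below, a small Dirichlet-random distribution stands in for the classically simulated circuit distribution (an assumption for illustration, not the real Sycamore output): a device sampling from the correct distribution scores well above zero, while uniform noise scores near zero.

```python
import numpy as np

def linear_xeb_fidelity(sampled_bitstrings, ideal_probs, n_qubits):
    """Linear cross-entropy benchmarking fidelity.

    F = 2^n * mean(P_ideal(x_i)) - 1, where x_i are the measured bitstrings
    and P_ideal is the classically simulated output distribution.
    High for a device sampling the correct distribution; ~0 for uniform noise.
    """
    p = np.array([ideal_probs[x] for x in sampled_bitstrings])
    return (2 ** n_qubits) * p.mean() - 1

rng = np.random.default_rng(0)
n = 3                                       # toy 3-qubit example
ideal = rng.dirichlet(np.ones(2 ** n))      # stand-in for a simulated distribution

good = rng.choice(2 ** n, size=100_000, p=ideal)   # "working device" samples
noise = rng.choice(2 ** n, size=100_000)           # "broken device": uniform noise

print(linear_xeb_fidelity(good, ideal, n))   # clearly positive
print(linear_xeb_fidelity(noise, ideal, n))  # close to zero
```

The same comparison, run against a full classical simulation of the circuit, is how the team certified that Sycamore was sampling from the right distribution.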
“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.
While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.
“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”
With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.
“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.
In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.
“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”
Here’s a link to and a citation for the paper,
Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages 505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date 24 October 2019
There’s been a lot of talk about wearable electronics, specifically e-textiles, but nothing seems to have entered the marketplace. Scaling up your lab discoveries for industrial production can be quite problematic. From an October 10, 2019 news item on ScienceDaily,
Producing functional fabrics that perform all the functions we want, while retaining the characteristics of fabric we’re accustomed to is no easy task.
Two groups of researchers at Drexel University — one, who is leading the development of industrial functional fabric production techniques, and the other, a pioneer in the study and application of one of the strongest, most electrically conductive super materials in use today — believe they have a solution.
They’ve improved a basic element of textiles: yarn. By adding technical capabilities to the fibers that give textiles their character, fit and feel, the team has shown that it can knit new functionality into fabrics without limiting their wearability.
In a paper recently published in the journal Advanced Functional Materials, the researchers, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, and Genevieve Dion, an associate professor in Westphal College of Media Arts & Design and director of Drexel’s Center for Functional Fabrics, showed that they can create a highly conductive, durable yarn by coating standard cellulose-based yarns with a type of conductive two-dimensional material called MXene.
“Current wearables utilize conventional batteries, which are bulky and uncomfortable, and can impose design limitations to the final product,” they write. “Therefore, the development of flexible, electrochemically and electromechanically active yarns, which can be engineered and knitted into full fabrics provide new and practical insights for the scalable production of textile-based devices.”
The team reported that its conductive yarn packs more conductive material into the fibers and can be knitted by a standard industrial knitting machine to produce a textile with top-notch electrical performance capabilities. This combination of ability and durability stands apart from the rest of the functional fabric field today.
Most attempts to turn textiles into wearable technology use stiff metallic fibers that alter the texture and physical behavior of the fabric. Other attempts to make conductive textiles using silver nanoparticles and graphene and other carbon materials raise environmental concerns and come up short on performance requirements. And the coating methods that are successfully able to apply enough material to a textile substrate to make it highly conductive also tend to make the yarns and fabrics too brittle to withstand normal wear and tear.
“Some of the biggest challenges in our field are developing innovative functional yarns at scale that are robust enough to be integrated into the textile manufacturing process and withstand washing,” Dion said. “We believe that demonstrating the manufacturability of any new conductive yarn during experimental stages is crucial. High electrical conductivity and electrochemical performance are important, but so are conductive yarns that can be produced by a simple and scalable process with suitable mechanical properties for textile integration. All must be taken into consideration for the successful development of the next-generation devices that can be worn like everyday garments.”
The winning combination
Dion has been a pioneer in the field of wearable technology, drawing on her background in fashion and industrial design to produce new processes for creating fabrics with new technological capabilities. Her work has been recognized by the Department of Defense, which included Drexel, and Dion, in its Advanced Functional Fabrics of America effort to make the country a leader in the field.
She teamed with Gogotsi, who is a leading researcher in the area of two-dimensional conductive materials, to approach the challenge of making a conductive yarn that would hold up to knitting, wearing and washing.
Gogotsi’s group was part of the Drexel team that discovered highly conductive two-dimensional materials, called MXenes, in 2011 and have been exploring their exceptional properties and applications for them ever since. His group has shown that it can synthesize MXenes that mix with water to create inks and spray coatings without any additives or surfactants – a revelation that made them a natural candidate for making conductive yarn that could be used in functional fabrics. [Gogotsi’s work was featured here in a May 6, 2019 posting]
“Researchers have explored adding graphene and carbon nanotube coatings to yarn, our group has also looked at a number of carbon coatings in the past,” Gogotsi said. “But achieving the level of conductivity that we demonstrate with MXenes has not been possible until now. It is approaching the conductivity of silver nanowire-coated yarns, but the use of silver in the textile industry is severely limited due to its dissolution and harmful effect on the environment. Moreover, MXenes could be used to add electrical energy storage capability, sensing, electromagnetic interference shielding and many other useful properties to textiles.”
In its basic form, titanium carbide MXene looks like a black powder. But it is actually composed of flakes that are just a few atoms thick, which can be produced at various sizes. Larger flakes mean more surface area and greater conductivity, so the team found that it was possible to boost the performance of the yarn by infiltrating the individual fibers with smaller flakes and then coating the yarn itself with a layer of larger-flake MXene.
Putting it to the test
The team created the conductive yarns from three common, cellulose-based yarns: cotton, bamboo and linen. They applied the MXene material via dip-coating, which is a standard dyeing method, before testing them by knitting full fabrics on an industrial knitting machine – the kind used to make most of the sweaters and scarves you’ll see this fall.
Each type of yarn was knit into three different fabric swatches using three different stitch patterns – single jersey, half gauge and interlock – to ensure that they are durable enough to hold up in any textile from a tightly knit sweater to a loose-knit scarf.
“The ability to knit MXene-coated cellulose-based yarns with different stitch patterns allowed us to control the fabric properties, such as porosity and thickness for various applications,” the researchers write.
To put the new threads to the test in a technological application, the team knitted some touch-sensitive textiles – the sort that are being explored by Levi’s and Yves Saint Laurent as part of Google’s Project Jacquard.
Not only did the MXene-based conductive yarns hold up against the wear and tear of the industrial knitting machines, but the fabrics produced survived a battery of tests to prove their durability. Tugging, twisting, bending and — most importantly — washing did not diminish the touch-sensing abilities of the yarn, the team reported — even after dozens of trips through the spin cycle.
But the researchers suggest that the ultimate advantage of using MXene-coated conductive yarns to produce these special textiles is that all of the functionality can be seamlessly integrated into the textiles. So instead of having to add an external battery to power the wearable device, or wirelessly connect it to your smartphone, these energy storage devices and antennas would be made of fabric as well – an integration that, though literally seamed, is a much smoother way to incorporate the technology.
“Electrically conducting yarns are quintessential for wearable applications because they can be engineered to perform specific functions in a wide array of technologies,” they write.
Using conductive yarns also means that a wider variety of technological customization and innovations are possible via the knitting process. For example, “the performance of the knitted pressure sensor can be further improved in the future by changing the yarn type, stitch pattern, active material loading and the dielectric layer to result in higher capacitance changes,” according to the authors.
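For a rough sense of why a knitted pressure sensor’s capacitance changes, here is a back-of-envelope parallel-plate model. This is an illustrative assumption, not the authors’ actual sensor geometry or numbers: compressing the dielectric layer shrinks the gap d in C = ε₀εᵣA/d, so capacitance rises with pressure.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r):
    """Parallel-plate model: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical numbers: a 1 cm^2 textile sensor with a 1 mm dielectric layer
c_rest = plate_capacitance(1e-4, 1.0e-3, eps_r=3.0)
c_pressed = plate_capacitance(1e-4, 0.7e-3, eps_r=3.0)  # 30% compression

print(c_pressed / c_rest)  # gap down 30% -> capacitance up by 1/0.7, about 1.43x
```

In this simple model, the levers the authors list — yarn type, stitch pattern, active material loading, dielectric layer — all map onto εᵣ, A, and d, which is why changing them changes the achievable capacitance swing.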
Dion’s team at the Center for Functional Fabrics is already putting this development to the test in a number of projects, including a collaboration with textile manufacturer Apex Mills – one of the leading producers of material for car seats and interiors. And Gogotsi suggests the next step for this work will be tuning the coating process to add just the right amount of conductive MXene material to the yarn for specific uses.
“With this MXene yarn, so many applications are possible,” Gogotsi said. “You can think about making car seats with it so the car knows the size and weight of the passenger to optimize safety settings; textile pressure sensors could be in sports apparel to monitor performance, or woven into carpets to help connected houses discern how many people are home – your imagination is the limit.”
Researchers have produced a video about their work,
Here’s a link to and a citation for the paper,
Knittable and Washable Multifunctional MXene‐Coated Cellulose Yarns by Simge Uzun, Shayan Seyedin, Amy L. Stoltzfus, Ariana S. Levitt, Mohamed Alhabeb, Mark Anayee, Christina J. Strobel, Joselito M. Razal, Genevieve Dion, Yury Gogotsi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201905015 First published: 05 September 2019
The ‘smart city’ initiatives continue to fascinate. During the summer, Toronto’s efforts were described in a June 24, 2019 article by Katharine Schwab for Fast Company (Note: Links have been removed),
Today, Google sister company Sidewalk Labs released a draft of its master plan to transform 12 acres on the Toronto waterfront into a smart city. The document details the neighborhood’s buildings, street design, transportation, and digital infrastructure—as well as how the company plans to construct it.
When a leaked copy of the plan popped up online earlier this year, we learned that Sidewalk Labs plans to build the entire development, called Quayside, out of mass timber. But today’s release of the official plan reveals the key to doing so: Sidewalk proposes investing $80 million to build a timber factory and supply chain that would support its fully timber neighborhood. The company says the factory, which would be focused on manufacturing prefabricated building pieces that could then be assembled into fully modular buildings on site, could reduce building time by 35% compared to more traditional building methods.
“We would fund the creation of [a factory] somewhere in the greater Toronto area that we think could play a role in catalyzing a new industry around mass timber,” says Sidewalk Labs CEO and chairman Dan Doctoroff.
However, the funding of the factory is dependent on Sidewalk Labs being able to expand its development plan to the entire riverfront district. … [emphasis mine].
Here’s where I think it gets very interesting,
Sidewalk proposes sourcing spruce and fir trees from the forests in Ontario, Quebec, and British Columbia. While Canada has 40% of the world’s sustainable forests, Sidewalk claims, the country has few factories that can turn these trees into the building material. That’s why the company proposes starting a factory to process two kinds of mass timber: Cross-laminated timber (CLT) and glulam beams. The latter is meant specifically to bear the weight of the 30-story buildings Sidewalk hopes to build. While Sidewalk says that 84% of the larger district would be handed over for development by local companies, the plan requires that these companies uphold the same sustainability standards when it comes to performance.
Sidewalk says companies wouldn’t be required to build with CLT and glulam, but since the company’s reason for building the mass timber factory is that there aren’t many existing manufacturers to meet the needs for a full-scale development, the company’s plan might ultimately push any third-party developers toward using its [Google] factory to source materials. … [emphasis mine]
If I understand this rightly, Google wants to expand its plan to Toronto’s entire waterfront to make building a factory to produce the type of wood products Google wants to use in its Quayside development financially feasible (profitable). And somehow, local developers will not be forced to build the same kinds of structures although Google will be managing the entire waterfront development. Hmmm.
Let’s take a look at one of Google’s other ‘city ventures’.
First, Alphabet is the name of Google’s parent company and it was Alphabet that offered the city of Louisville an opportunity for cheap, abundant internet service known as Google Fiber. From a May 6, 2019 article by Alex Correa for The Edge (Note: Links have been removed),
In 2015, Alphabet chose several cities in Kentucky to host its Google Fiber project. Google Fiber is a service providing broadband internet and IPTV directly to a number of locations, and the initiative in Kentucky … . The tech giant dug up city streets to bury fibre optic cables of their own, touting a new technique that would only require the cables to be a few inches beneath the surface. However, after two years of delays and negotiations after the announcement, Google abandoned the project in Louisville, Kentucky.
Like an unwanted pest in a garden, signs of Google’s presence can be seen and felt in the city streets. Metro Councilman Brandon Coan criticized the state of the city’s infrastructure, pointing out that strands of errant, tar-like sealant, used to cover up the cables, are “everywhere.” Speaking outside of a Louisville coffee shop that ran Google Fiber lines before the departure, he said, “I’m confident that Google and the city are going to negotiate a deal… to restore the roads to as good a condition as they were when they got here. Frankly, I think they owe us more than that.”
Google’s disappearance did more than just damage roads [emphasis mine] in Louisville. Plans for promising projects were abandoned, including transformative economic development that could have provided the population with new jobs and vastly different career opportunities than what was available. Add to that the fact that media coverage of the aborted initiative cast Louisville as the site of a failed experiment, creating an impression of the city as an embarrassment. (Google has since announced plans to reimburse the city $3.84 million over 20 months to help repair the damage to the city’s streets and infrastructure.)
A February 22, 2019 article on CBC (Canadian Broadcasting Corporation) Radio news online offers images of the damaged roadways and a partial transcript of a Day 6 radio show hosted by Brent Bambury,
Google’s Sidewalk Labs is facing increased pushback to its proposal to build a futuristic neighbourhood in Toronto, after leaked documents revealed the company’s plans are more ambitious than the public had realized.
One particular proposal — which would see Sidewalk Labs taking a cut of property taxes in exchange for building a light rail transit line along Toronto’s waterfront — is especially controversial.
The company has developed an impressive list of promises for its proposed neighbourhood, including mobile pre-built buildings and office towers that tailor themselves to occupants’ behaviour.
But Louisville, Kentucky-based business reporter Chris Otts says that when Google companies come to town, it doesn’t always end well.
What was the promise Google Fiber made to Louisville back in 2015?
Well, it was just to be included as one of their Fiber cities, which was a pretty serious deal for Louisville at the time. A big coup for the mayor, and his administration had been working for years to get Google to consider adding Louisville to that list.
So if the city was eager, what sorts of accommodations were made for Google to entice them to come to Louisville?
Basically, the city did everything it could from a streamlining red tape perspective to get Google here … in terms of, you know, awarding them a franchise, and allowing them to be in the rights of way with this innovative technique they had for burying their cables here. And then also, they [the city] passed a policy, which, to be sure, they say is just good policy regardless of Google’s support for it. But it had to do with how new Internet companies like Google can access utility poles to install their networks.
And Louisville ended up spending hundreds of thousands of dollars to defend that new policy in court in lawsuits by AT&T and by the traditional cable company here.
When Google Fiber starts doing business, they’re offering cheaper high speed Internet access, and they start burying these cables in the ground.
When did things start to go sideways for this project?
I don’t know if I would say ‘almost immediately,’ but certainly the problems were evident fairly quickly.
So they started their work in 2017. If you picture it, [in] the streets you can see on either side there are these seams. They look like little strings … near the end of the streets on both sides. And there are cuts in the street where they buried the cable and they topped it off with this sealant.
And fairly early on — within months, I would say, of them doing that — you could see the sealant popping out. The conduit in there [was] visible or exposed. And so it was fairly evident that there were problems with it pretty quickly.
Was this the first time that they had used this system and the sealant that you’re describing?
It was the first time, according to them, that they had used such shallow trenches in the streets.
So these are as shallow as two inches below the pavement surface that they’d bury these cables. It’s the ultra-shallow version of this technique.
And what explanation did Google Fiber offer for their decision to leave Louisville?
That it was basically a business decision; that they were trying this construction method to see if it was sustainable and they just had too many problems with it.
And as they said directly in their … written statement about this, they decided that instead of doing things right and starting over, which they would have to do essentially to keep providing service in Louisville, that it was the better business decision for them to just pick up and leave.
Toronto’s Sidewalk Labs isn’t Google Fiber — but they’re both owned by Google’s parent company, Alphabet.
If Louisville could give Toronto a piece of advice about welcoming a Google infrastructure project to town, what do you think that advice would be?
The biggest lesson from this is that one day they can be next to you at the press conference saying what a great city you are and how happy they are to … provide new service in your market, and then the next day, with almost no notice, they can say, “You know what? This doesn’t make sense for us anymore. And by the way, see ya. Thanks for having us. Sorry it didn’t work out.”
The factory is also key to another of Sidewalk’s promises: Jobs. According to Sidewalk, the factory itself would create 2,500 jobs [emphasis mine] along the entire supply chain over a 20-year period. But even if the Canadian government approves Sidewalk’s plan and commits to building out the entire waterfront district to take advantage of the mass timber factory’s economies of scale, there are other regulatory hurdles to overcome. Right now, the building code in Toronto doesn’t allow for timber buildings over six stories tall. All of Sidewalk’s proposed buildings are over six stories, and many of them go up to 30 stories. Doctoroff said he was optimistic that the company will be able to get regulations changed if the city decides to adopt the plan. There are several examples of timber buildings that are already under construction, with a planned skyscraper in Japan that will be 70 stories.
Sidewalk’s proposal is the result of 18 months of planning, which involved getting feedback from community members and prototyping elements like a building raincoat that the company hopes to include in the final development. It has come under fire from privacy advocates in particular, and the Canadian government is currently facing a lawsuit from a civil liberties group over its decision to allow a corporation to propose public privacy governance standards.
Now that the company has released the plan, it will be up to the Canadian government to decide whether to move forward. And the mass timber factory, in particular, will be dependent on the government adopting Sidewalk’s plan wholesale, far beyond the Quayside development—a reminder that Sidewalk is a corporation that’s here to make money, dangling investment dollars in front of the government to incentivize it to embrace Sidewalk as the developer for the entire area.
A few thoughts
Those folks in Louisville made a lot of accommodations for Google only to have the company abandon them. They will get some money in compensation, finally, but it doesn’t make up for the lost jobs and the national, if not international, loss of face.
… Together with local partners, Sidewalk proposes to invest up to $80 million in a mass timber factory in Ontario to jumpstart this emerging industry.
So, Alphabet/Google/Sidewalk has proposed up to an $80M investment—with local partners. I wonder how much this factory is supposed to cost and what kinds of accommodations Alphabet/Google/Sidewalk will demand. Possibilities include policy changes, changes in municipal bylaws, and government money. In other words, Canadian taxpayers could end up footing part of the bill and/or local developers could be required to cover an outsize percentage of the costs for the factory as they jockey for the opportunity to develop part of Toronto’s waterfront.
Other than Louisville, what’s the company’s track record with regard to its partnerships with cities and municipalities? I haven’t found any success stories in my admittedly brief search. Unusually, the company doesn’t seem to be promoting any of its successful city partnerships.
While my focus has been on the company’s failure with Louisville and the possible dangers inherent to Toronto in a partnership with this company, it shouldn’t be forgotten that all of this development is in the name of a ‘smart’ city and that means data-driven. My March 28, 2018 posting features some of the issues with the technology, 5G, that will be needed to make cities ‘smart’. There’s also my March 20, 2018 posting (scroll down about 30% of the way) which looks at ‘smart’ cities in Canada with a special emphasis on Vancouver.
Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) submitted a preliminary report in August 2019, which was subsequently published on September 10, 2019. To sum it up, the panel was not impressed with Google’s June 2019 draft master plan. From a September 11, 2019 news item in the Guardian (Note: Links have been removed),
A controversial smart city development in Canada has hit another roadblock after an oversight panel called key aspects of the proposal “irrelevant”, “unnecessary” and “frustratingly abstract” in a new report.
The project on Toronto’s waterfront, dubbed Quayside, is a partnership between the city and Google’s sister company Sidewalk Labs. It promises “raincoats” for buildings, autonomous vehicles and cutting-edge wood-frame towers, but has faced numerous criticisms in recent months.
A September 11, 2019 article by Ian Bick of Canadian Press published on the CBC (Canadian Broadcasting Corporation) website offers more detail,
Preliminary commentary from Waterfront Toronto’s digital strategy advisory panel (DSAP) released Tuesday said the plan from Google’s sister company Sidewalk is “frustratingly abstract” and that some of the innovations proposed were “irrelevant or unnecessary.”
“The document is somewhat unwieldy and repetitive, spreads discussions of topics across multiple volumes, and is overly focused on the ‘what’ rather than the ‘how,’ ” said the report on the panel’s comments.
Some on the 15-member panel, an arm’s-length body that gives expert advice to Waterfront Toronto, have also found the scope of the proposal to be unclear or “concerning.”
The report says that some members also felt the official Sidewalk plan did not appear to put the citizen at the centre of the design process for digital innovations, and raised issues with the way Sidewalk has proposed to manage data that is generated from the neighbourhood.
The panel’s early report is not official commentary from Waterfront Toronto, the multi-government body that is overseeing the Quayside development, but is meant to indicate areas that need improvement.
The panel, chaired by University of Ottawa law professor Michael Geist, includes executives, professors, and other experts on technology, privacy, and innovation.
Sidewalk Labs spokeswoman Keerthana Rang said the company appreciates the feedback and already intends to release more details in October on the digital innovations it hopes to implement at Quayside.
Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponents’ movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out what angle to throw a ball at, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.
That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster, or make sense of the precision needed to perfect our athletic form.
Recently, though, researchers from MIT’s [Massachusetts Institute of Technology] Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion.
The new system uses an algorithm that can take 2-D videos and turn them into 3-D printed “motion sculptures” that show how a human body moves through space. In addition to being an intriguing aesthetic visualization of shape and time, the team envisions that their “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.
“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”
Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.
Zhang wrote the paper alongside MIT professors William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as U.C. Berkeley postdoc and former CSAIL PhD Andrew Owens.
How it works
Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lens and what it could provide.
Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, you wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.
What’s more, these photographs also require laborious pre-shoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.
Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”
After stitching these skeletons together, the system generates a motion sculpture that can be 3-D printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.
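The pipeline described above (2-D keypoints per frame, lifted to 3-D skeletons, then stitched into a swept path) can be sketched in a few lines. This is a toy illustration, not the MoSculp code: the function names are mine, and the “lifting” step just attaches a supplied depth estimate, whereas the real system infers depth from learned pose priors.

```python
import numpy as np

def lift_keypoints(frames_2d, depth_guess):
    """Hypothetical lifting step: attach an estimated depth to each
    2-D keypoint to form a 3-D 'skeleton' per frame."""
    return [np.column_stack([kp, np.full(len(kp), z)])
            for kp, z in zip(frames_2d, depth_guess)]

def sweep_sculpture(skeletons, joint_index):
    """Stitch one joint's 3-D positions across frames into the
    continuous path that would be thickened into a printable mesh."""
    return np.array([s[joint_index] for s in skeletons])

# Two toy frames, three keypoints each (say hip, knee, ankle), pixel coords.
frames = [np.array([[10., 20.], [12., 35.], [14., 50.]]),
          np.array([[11., 21.], [14., 36.], [18., 52.]])]
skels = lift_keypoints(frames, depth_guess=[0.0, 0.5])
path = sweep_sculpture(skels, joint_index=2)   # ankle trajectory
print(path.shape)   # (2, 3): two frames, xyz per frame
```

A real system would of course run a pose detector on the video to get the keypoints and then mesh the path for 3-D printing; the sketch only shows the data flow.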
In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.
“Dance and highly-skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Courtney Brigham, communications lead at Adobe. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”
The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as people wearing loose clothing or carrying objects.
Currently, the system only uses single-person scenarios, but the team soon hopes to expand to multiple people. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.
As for anyone wondering about the Muybridge comment, here’s an image the MIT researchers have made available,
A new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space. Image courtesy of MIT CSAIL
Contrast that MIT image with some of the images in this video capturing parts of a theatre production, Studies in Motion: The Hauntings of Eadweard Muybridge,
Getting back to MIT, here’s their MoSculp video,
There are some startling similarities, eh? I suppose there are only so many ways one can capture movement, be it in studies by Eadweard Muybridge, a theatre production about his work, or an MIT video of the latest in motion-capture technology.
This is strictly for folks who have media accreditation. First, the news about the summit and then some detail about how you might gain accreditation should you be interested in going to Switzerland. Warning: The International Telecommunication Union, which is holding this summit, is a United Nations agency and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.
Geneva, 21 March 2019 Artificial Intelligence (AI) has taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet. The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.
Media are recommended to register in advance to receive key announcements in the run-up to the summit.
WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.
The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.
The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.
A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes, and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.
WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland
WHO: Over 100 speakers have been confirmed to date, including:
Jim Hagemann Snabe – Chairman, Siemens
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace
Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburgh, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.
Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.
The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.
The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.
To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status. Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations, 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at email@example.com.
Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. Media accreditation decisions rest with ITU and all decisions are final.
Accreditation eligibility & credentials 1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to firstname.lastname@example.org along with the required supporting credentials, based on the type of media organization you work for:
Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising; please submit 2 copies or links to recent byline articles published within the last 4 months.
News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks; please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.
Broadcast media should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment; please submit broadcasting material published within the last 4 months.
Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division. If possible, please submit a valid assignment letter from the news organization or publication.
2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:
Special guidelines for bloggers and community media accreditation
ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:
ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.
Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to email@example.com.
Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.
UN-accredited media
Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to firstname.lastname@example.org to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.
I’m not a big fan of DNA (deoxyribonucleic acid) companies that promise to tell you about your ancestors and, depending on the kit, predisposition to certain health issues as per their reports about your genetic code. (I regularly pray no one in my family has decided to pay one of these companies to analyze their spit.)
During Christmas season 2018, the DNA companies (23andMe and Ancestry) advertised special prices so you could gift someone in your family with a kit. All this corporate largesse may not be wholly in service of the Christmas spirit. After all, there’s money to be made once they’ve gotten your sample.
Monetizing your DNA in 2016
I don’t know when 23andMe started selling DNA information or if any similar company predated their efforts, but this June 21, 2016 article by Antonio Regalado for MIT (Massachusetts Institute of Technology) Technology Review offers the earliest information I found,
“Welcome to You.” So says the genetic test kit that 23andMe will send to your home. Pay $199, spit in a tube, and several weeks later you’ll get a peek into your DNA. Have you got the gene for blond hair? Which of 36 disease risks could you pass to a child?
Run by entrepreneur Anne Wojcicki, the ex-wife of Google founder Sergey Brin, and until last year housed alongside the Googleplex, the company created a test that has been attacked by regulators and embraced by a curious public. It remains, nine years after its introduction, the only one of its kind sold directly to consumers. 23andMe has managed to amass a collection of DNA information about 1.2 million people, which last year began to prove its value when the company revealed it had sold access to the data to more than 13 drug companies. One, Genentech, anted up $10 million for a look at the genes of people with Parkinson’s disease.
That means 23andMe is monetizing DNA rather the way Facebook makes money from our “likes.” What’s more, it gets its customers to pay for the privilege. That idea so appeals to investors that they have valued the still-unprofitable company at over $1 billion. “Money follows data,” says Barbara Evans, a legal scholar at the University of Houston, who studies personal genetics. “It takes a lot of labor and capital to get that information in a form that is useful.”
When 23andMe made a $300 million deal with GlaxoSmithKline [GSK] in July–so the pharmaceutical giant could access a vast store of genetic data as it works on new drugs–the consumers who actually provided that data didn’t get a cut of the proceeds. A new health platform is taking a different approach: If you choose to share your own DNA data or other health records, you’ll get company shares that will later pay you dividends if that data is sold.
Before getting to the start-up that would allow you rather than a company to profit or at least somewhat monetize your DNA, I’m including a general overview of the July 2018 GSK/23andMe deal in Jamie Ducharme’s July 26, 2018 article for TIME (Note: Links have been removed),
Consumer genetic testing company 23andMe announced on Wednesday [July 25, 2018] that GlaxoSmithKline purchased a $300 million stake in the company, allowing the pharmaceutical giant to use 23andMe’s trove of genetic data to develop new drugs — and raising new privacy concerns for consumers
The “collaboration” is a way to make “novel treatments and cures a reality,” 23andMe CEO Anne Wojcicki said in a company blog post. But, though it isn’t 23andMe’s first foray into drug discovery, the deal doesn’t seem quite so simple to some medical experts — or some of the roughly 5 million 23andMe customers who have sent off tubes of their spit in exchange for ancestry and health insights
Perhaps the most obvious issue is privacy, says Peter Pitts, president of the Center for Medicine in the Public Interest, a non-partisan non-profit that aims to promote patient-centered health care.
“If people are concerned about their social security numbers being stolen, they should be concerned about their genetic information being misused,” Pitts says. “This information is never 100% safe. The risk is magnified when one organization shares it with a second organization. When information moves from one place to another, there’s always a chance for it to be intercepted by unintended third parties.
That risk is real, agrees Dr. Arthur Caplan, head of the division of medical ethics at the New York University School of Medicine. Caplan says that any genetic privacy concerns also extend to your blood relatives, who likely did not consent to having their DNA tested — echoing some of the questions that arose after law enforcement officials used a genealogy website to find and arrest the suspected Golden State Killer in April.
“A lot of people paid money to 23andMe to get their ancestry determined — fun, recreational stuff,” Caplan says. “Even though they may have signed a thing saying, ‘I’m okay if you use this information for medical research,’ I’m not sure they understood what that really meant. I’m not sure they understood that it meant, ‘Yes, we’ll go to Glaxo, and that’s where we’re really going to make a lot of money off of you.’”
A 23andMe spokesperson told TIME that data privacy is a “top priority” for the company, emphasizing that customer data isn’t used in research without consent, and that GlaxoSmithKline will only receive “summary statistics from analyses 23andMe conducts so that no single individual can be identified.”
Yes, the data is supposed to be stripped of identifying information but given how many times similar claims about geolocation data have been disproved, I am skeptical. DJ Pangburn’s September 26, 2017 article (Even This Data Guru Is Creeped Out By What Anonymous Location Data Reveals About Us) for Fast Company illustrates the fragility of ‘anonymized data’,
… as a number of studies have shown, even when it’s “anonymous,” stripped of so-called personally identifiable information, geographic data can help create a detailed portrait of a person and, with enough ancillary data, identify them by name
Curious to see this kind of data mining in action, I emailed Gilad Lotan, now vice president of BuzzFeed’s data science team. He agreed to look at a month’s worth of two different users’ anonymized location data, and to come up with individual profiles that were as accurate as possible
The results, produced in just a few days’ time, range from the expected to the surprisingly revealing, and demonstrate just how “anonymous” data can identify individuals.
Last fall Lotan taught a class at New York University on surveillance that kicked off with an assignment like the one I’d given him: link anonymous location data with other data sets–from LinkedIn, Facebook, home registration and mortgage records, and other online data. “It’s not hard to figure out who this [unnamed] person is,” says Lotan. In class, students found that tracking location data around holidays proved to be the easiest way to determine who, exactly, the data belonged to. “Basically,” he says, “visits to private homes that are owned and publicly registered.”
In 2013, researchers at MIT and the Université Catholique de Louvain in Belgium published a paper reporting on 15 months of study of human mobility data for over 1.5 million individuals. What they found is that only four spatio-temporal points are required to “uniquely identify 95% of the individuals.” The researchers concluded that there was very little privacy even in raw location data. Four years later, their calls for policies rectifying concerns about location tracking have fallen largely on deaf ears.
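The “four spatio-temporal points” result is a statement about unicity: how often a handful of observed points pins down exactly one trace in a whole dataset. Here is a toy version of that measurement on synthetic cell-tower traces; the trace format, sizes, and numbers are mine, not the study’s, and serve only to show why a few points go such a long way.

```python
import random

random.seed(0)

def unicity(traces, k):
    """Fraction of traces uniquely pinned down by k randomly chosen
    spatio-temporal points -- the re-identification measure from the
    2013 mobility study, here applied to toy data."""
    unique = 0
    for trace in traces:
        points = set(random.sample(trace, k))
        matches = [t for t in traces if points <= set(t)]
        if len(matches) == 1:
            unique += 1
    return unique / len(traces)

# Toy traces: each is a list of (hour, cell-tower id) observations.
towers, hours = range(50), range(24)
traces = [[(h, random.choice(towers)) for h in hours] for _ in range(200)]
u = unicity(traces, k=4)
print(u)   # close to 1.0 even on this small synthetic dataset
```

With 50 possible towers, the chance that a second trace matches four specific (hour, tower) observations is tiny, which is the intuition behind the study’s 95% figure for real mobility data.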
Getting back to DNA, there was also some concern at Fox News,
Other than warnings, I haven’t seen much about any possible legislation regarding DNA and privacy in either Canada or the US.
Now, let’s get to how you can monetize yourself.
Me making money off me
I’ve found two possibilities for an individual who wants to consider monetizing their own DNA.
Adele Peters’ December 13, 2018 article describes a start-up company and the model they’re proposing to allow you to profit from your own DNA (Note: Links have been removed),
“You can’t say data is valuable and then take that data away from everybody,” says Dawn Barry, president and cofounder of LunaPBC, the public benefit corporation that manages the community-owned platform, called LunaDNA, which recently got SEC approval to recognize health data as currency. “What we’re finding is that [our early adopters are] very excited about the transparency of this model–that when we all come together and create value, that value flows down to the individuals who shared their data.
The platform shares some anonymized data with nonprofits, such as foundations that study rare diseases. In that case, money wouldn’t initially change hands, but “there could be intellectual property that at some point in time is monetized, and the community would share in that,” says Bob Kain, CEO and cofounder of LunaPBC. “When we have enough data in the near future, then we’ll work with pharmaceutical companies, for instance, to drive discovery for those companies. And they will pay market rates.
The company doesn’t offer DNA analysis itself, but chose to focus on data management. If you’ve sent a tube of spit to 23andMe, AncestryDNA, MyHeritage, or FamilyTree DNA, you can contribute that data to LunaDNA and get shares. (If you’d rather not let the original testing company keep your data, you can also separately take the steps to delete it.)
“We looked at a number of different models to enable people to have ownership, including cryptocurrency, which is a proxy for ownership, too,” says Kain. “Cryptocurrency is hard to understand for most people, and right now, the regulatory landscape is blurry. So we thought, to move forward, we’d go with something much more traditional and easy to understand, and that is stock shares, basically.
For sharing targeted genes, you get 10 shares. For sharing your whole genome, you get 300 shares. At the moment, that’s not worth very much–the valuation takes into account the risk that the data might not be monetized, and the fact that the startup isn’t the exclusive owner of your data. The SEC filing says that the estimated fair market value of a whole genome is only $21. Some other health information is worth far less; 20 days of data from a fitness tracker garners two shares, valued at 14¢. But as more people contribute data, the research value of the whole database (and dividends) will increase. If the shareholders ever decided to sell the company itself, they would also make money that way. …
At least one effort to introduce blockchain/cryptocurrency technology to the process for monetizing your DNA garnered a lot of attention in February 2018.
A February 8, 2018 article by Eric Rosenbaum for CNBC (a US cable tv channel) explores an effort by George Church (Note: Links have been removed),
It’s probably wise to be skeptical of anyone who says they have a new idea for a blockchain-based company, or worse still, a company changing its business model to focus on the crypto world. That iced tea company that shifted its model to the blockchain, or Kodak saying its road back to riches was managing photo rights using a blockchain system. Raise eyebrow, or move directly onto outright shake of head
However, when a world-renowned Harvard geneticist announces he’s launching a blockchain-based start-up, it merits some attention. And it’s not the crypto-angle itself that might make you do a double-take, but the assets that will be managed, and exchanged, using digital currency: your DNA.
Harvard University genetics guru George Church — one of the scientists at the forefront of the CRISPR genetic engineering revolution — announced on Wednesday a start-up, Nebula Genomics, that will use the blockchain to not only allow individuals to share their personal genome for research purposes, but retain ownership and monetize their DNA through trading of a custom digital currency.
The genomics revolution has been exponentially advanced by drastic reductions in cost. As Nebula noted in a white paper explaining its business model, the first human genome was sequenced in 2001 at a cost of $3 billion. Today, human genome sequencing costs less than $1,000, and in a few years the price will drop below $100.
In fact, some big Silicon Valley start-ups, led by 23andMe, have capitalized on this rapid advance and already offer personal DNA testing kits for around $100 (sometimes with discounts even less)
Nebula took direct aim at 23andMe in its white paper, which also explains one reason why it can offer genetic testing for less,
“Today, 23andMe (23andme.com) and Ancestry (ancestry.com) are the two leading personal genomics companies. Both use DNA microarray-based genotyping for their genetic tests. It is an outdated and significantly less powerful alternative to DNA sequencing. Instead of sequencing continuous stretches of DNA, genotyping identifies single letters spaced at approximately regular intervals across the genome. …
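The distinction the white paper draws can be shown with a toy example: sequencing reads the whole continuous string, while array-based genotyping samples single letters at spaced positions and sees nothing in between. The genome string and spacing here are invented purely for illustration.

```python
# Toy contrast: sequencing reads every base; array genotyping samples
# single letters at (roughly) regular intervals across the genome.
genome = "ATGCGTACCTAGGATCCATTGCAATCGGAT"

def sequence(dna):
    return dna                          # the full, continuous read

def genotype(dna, spacing):
    # one letter every `spacing` bases; positions in between go unseen
    return {i: dna[i] for i in range(0, len(dna), spacing)}

full = sequence(genome)
spots = genotype(genome, spacing=10)
print(len(full))    # 30 -- every base recovered
print(spots)        # {0: 'A', 10: 'A', 20: 'G'} -- only 3 of 30 bases
```

Real microarrays probe hundreds of thousands of known variant sites, but the principle is the same: a sparse sample of the genome rather than the whole sequence.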
Outdated genetic tests? Interesting, eh? Zoë Corbyn provides more information about Church’s plans in her February 18, 2018 article for the Guardian,
“Under the current system, personal genomics companies effectively own your personal genomics data, and you don’t see any benefit at all,” says Grishin [Dennis Grishin, Nebula co-founder]. “We want to eliminate the middleman.
Although the aim isn’t to provide a get-rich-quick scheme, the company believes there is potential for substantial returns. Though speculative, its modelling suggests that someone in the US could earn up to 50 times the cost of sequencing their genome – about $50,000 at current rates – taking into account both what could be made from a lifetime of renting out their genetic data, and reductions in medical bills if the results throw up a potentially preventable disease
The startup also thinks it can solve the problem of the dearth of genetic data researchers have to draw on, due to individuals – put off by cost or privacy concerns – not getting sequenced.
Payouts when you grant access to your genome would come in the form of Nebula tokens, the company’s cryptocurrency, and companies would need to buy tokens from the startup to pay people whose data they wanted to access. Though the value of a token is yet to be set and the number of tokens defined, it might, for example, take one Nebula token to get your genome sequenced. An individual new to the system could begin to earn fractions of a token by taking part in surveys about their health posted by prospective data buyers. When someone had earned enough, they could get sequenced and begin renting out their data and amassing tokens. Alternatively, if an individual wasn’t yet sequenced, they might find data buyers willing to pay for or subsidise their genome sequencing in exchange for access to it. “Potentially you wouldn’t have to pay out of pocket for the sequencing of your genome,” says Grishin.
In all cases, stress Grishin and Obbad [Kamal Obbad, Nebula co-founder], the sequence would belong to the individual, so they could rent it out over and over, including to multiple companies simultaneously. And the data buyer would never take ownership or possession of it – rather, it would be stored by the individual (for example in their computer or on their Dropbox account) with Nebula then providing a secure computation platform on which the data buyer could compute on the data. “You stay in control of your data and you can share it securely with who you want to,” explains Obbad. Nebula makes money not by taking any transaction fee but by being a participant providing computing and storage services. The cryptocurrency would be able to be cashed out for real money via existing cryptocurrency exchanges.
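The token flow Grishin describes can be mocked up as a simple ledger: a buyer purchases tokens from the startup, an individual earns fractions of a token through surveys, spends one token to get sequenced, then rents out the resulting data. This is purely a sketch of the described mechanics, not Nebula’s actual system; every name and amount below is invented.

```python
# Hypothetical ledger sketch of the token flow described above.
balances = {"nebula": 0.0, "buyer": 0.0, "alice": 0.0}

def transfer(ledger, src, dst, amount):
    assert ledger[src] >= amount, "insufficient tokens"
    ledger[src] -= amount
    ledger[dst] += amount

balances["buyer"] = 5.0                     # buyer purchases tokens from the startup
transfer(balances, "buyer", "alice", 0.25)  # survey reward (a fraction of a token)
transfer(balances, "buyer", "alice", 0.75)  # more surveys
transfer(balances, "alice", "nebula", 1.0)  # one token to get sequenced
transfer(balances, "buyer", "alice", 2.0)   # renting out the sequenced data
print(balances)   # {'nebula': 1.0, 'buyer': 2.0, 'alice': 2.0}
```

Note that in the described model Nebula itself earns by providing compute and storage rather than taking a cut of each transfer, which is why the sketch routes no transaction fee to the company.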
Hopefully, Luna and Nebula, as well as any competitors in this race to allow individuals to monetize their own DNA, will have excellent security.
For the curious, you can find Luna here and Nebula here. Note: I am not endorsing either company or any others mentioned here. This posting is strictly informational.
The dates are November 7-9, 2018, and as the opening draws closer I’m getting more ‘breathlessly enthusiastic’ announcements. Here are a few highlights from an October 23, 2018 announcement received via email,
CSPC 2018 is honoured to announce that the Honourable Kirsty Duncan, Minister of Science and Sport, will be delivering the keynote speech of the Gala Dinner on Thursday, November 8 at 7:00 PM. Minister Duncan will also hand out the 4th Science Policy Award of Excellence to the winner of this year’s competition.
CSPC 2018 features 250 speakers, a record number, and above is the breakdown of the positions they hold, over 43% of them being at the executive level and 57% of our speakers being women.
*All information as of October 15, 2018
If you think that you will not meet any new people at CSPC and all of the registrants are the same as last year, think again!
Over 57% of registrants are attending the conference for the FIRST TIME!
Secure your spot today!
Here’s more from an October 31, 2018 announcement received via email,
One year after her appointment as Canada’s Chief Science Advisor, Dr. Mona Nemer will discuss her experience with the community. Don’t miss this opportunity.
[Canadian Science Policy Centre editorials in advance of conference]
Role Title: Director of Communications
Deadline: November 5, 2018
Salary: $115,000 to $165,000
About the Council of Canadian Academies
The Council of Canadian Academies (CCA) is a not-for-profit organization that conducts assessments of evidence on scientific topics of public interest to inform decision-making in Canada.
The CCA is seeking an experienced communications professional to join its senior management team as Director of Communications. Reporting to the President and CEO, the Director is responsible for developing and implementing a communications plan for the organization that promotes and highlights the CCA’s work, brand, and overall mission to a variety of potential users and stakeholders; overseeing the publication and dissemination of high-quality hard copy and online products; and providing strategic advice to the President and CCA’s Board, Committees, and Panels. In fulfilling these responsibilities, the Director of Communications is expected to work with a variety of interested groups including the media, the broad policy community, government, and non-governmental organizations.
Key Responsibilities and Accountabilities
Under the direction of the President and CEO, the Director leads a small team of communications and publishing professionals to meet the responsibilities and accountabilities outlined below.
Strategy Development and External Communications
• Develop and execute an overall strategic communications plan for the organization that promotes and highlights the CCA’s work, brand, and overall mission.
• Oversee the CCA’s presence and influence on digital and social platforms including the development and execution of a comprehensive content strategy for linking CCA’s work with the broader science and policy ecosystem with a focus on promoting and disseminating the findings of the CCA’s expert panel reports.
• Provide support, as needed for relevant government relations activities including liaising with communications counterparts, preparing briefing materials, responding to requests to share CCA information, and coordinating any appearances before Parliamentary committees or other bodies.
• Harness opportunities for advancing the uptake and use of CCA assessments, including leveraging the strengths of key partners particularly the founding Academies.
Publication and Creative Services
• Oversee the creative services, quality control, and publication of all CCA’s expert panel reports including translation, layout, quality assurance, graphic design, proofreading, and printing processes.
• Oversee the creative development and publication of all CCA’s corporate materials including the Annual Report and Corporate Plan through content development, editing, layout, translation, graphic design, proofreading, and printing processes.
Advice and Issues Management
• Provide strategic advice and support to the President’s Office, Board of Directors, Committees, and CCA staff about increasing the overall impact of CCA expert panel reports, brand awareness, outreach opportunities, and effective science communication.
• Provide support to the President by anticipating project-based or organizational issues, understanding potential implications, and suggesting strategic management solutions.
• Ensure consistent messages, style, and approaches in the delivery of all internal and external communications across the organization.
• Mentor, train, and advise up to five communications and publishing staff on a day-to-day basis and complete annual performance reviews and planning.
• Lead the development and implementation of all CCA-wide policy and procedures relating to all aspects of communications and publishing.
• Represent the issues, needs, and ongoing requirements for the communications and publishing staff as a member of the CCA senior management team.
The Director of Communications requires:
• Superior knowledge of communications and public relations principles – preferably as they apply in a non-profit or academic setting;
• Extensive experience in communications planning and issues management;
• Knowledge of current research, editorial, and publication production standards and procedures including but not limited to: translation, copy-editing, layout/design, proofreading and publishing;
• Knowledge of how to evaluate the impact of reports and assessments;
• Knowledge in developing content strategy, knowledge mobilization techniques, and creative services and design;
• Knowledge of human resource management techniques and experience managing a team;
• Experience in coordinating, organizing and implementing communications activities including those involving sensitive topics;
• Knowledge of the relationships and major players in Canada’s intramural and extramural science and public policy ecosystem, including awareness of federal science departments and Parliamentary committees, funding bodies, and related research groups;
• Knowledge of Microsoft Office Suite, Adobe Creative Suite, WordPress and other related programs;
• Knowledge of a variety of social media platforms and measurement tools.
The Director of Communications must have:
• Superior time and project management skills
• Superior writing skills
• Superior ability to think strategically regarding how best to raise the CCA’s profile and ensure impact of the CCA’s expert panel reports
• Ability to be flexible and adaptable; able to respond quickly to unanticipated demands
• Strong advisory, negotiation, and problem-solving skills
• Strong skills in risk mitigation
• Superior ability to communicate in both written and oral forms, effectively and diplomatically
• Ability to mentor, train, and provide constructive feedback to direct reports
Education and Experience
This knowledge and skill set is typically obtained through the completion of a post-secondary degree in Journalism, Communications, Public Affairs, or a related field, and/or a minimum of 10 years of progressive and related experience. Experience in an organization that has addressed topics in public policy would be valuable.
Language Requirements: This position is English Essential. Fluency in French is a strong asset.
To apply to this position please send your CV and cover letter to email@example.com before November 5, 2018. The cover letter should answer the following questions in 1,000 words or less:
1. How does your background and work experience make you well-suited for the position of Director of Communications at CCA?
2. What trends do you see emerging in the communications field generally, and in science and policy communications more specifically? How might CCA take advantage of these trends and developments?
3. Knowing that CCA is in the business of conducting assessments of evidence on important policy topics, how do you feel communicating this type of science differs from communicating other types of information and knowledge?
While research is world-class and technology start-ups are thriving, few companies grow and mature in Canada. This cycle — invent and sell, invent and sell — allows other countries to capture much of the economic and social benefits of Canadian-invented products, processes, marketing methods, and business models. …
So, the problem is ‘invent and sell’. Leaving aside the questionable conclusion that other countries are reaping the benefits of Canadian innovation (I’ll get back to that shortly), what questions could you ask about how to break the ‘invent and sell, invent and sell’ cycle? Hmm, maybe we should ask, How do we break the ‘invent and sell’ cycle in Canada?
… Escaping this cycle may be aided through education and training of innovation managers who can systematically manage ideas for commercial success and motivate others to reimagine innovation in Canada.
To understand how to better support innovation management in Canada, Innovation, Science and Economic Development Canada (ISED) asked the CCA two critical questions: What are the key skills required to manage innovation? And, what are the leading practices for teaching these skills in business schools, other academic departments, colleges/polytechnics, and industry?
As lawyers, journalists, scientists, doctors, librarians, and anyone who’s ever received misinformation can tell you, asking the right questions can make a big difference.
As for the conclusion that other countries are reaping the benefits of Canadian innovation, is there any supporting data? We enjoy a very high standard of living and have done so for at least a couple of generations. The Organization for Economic Cooperation and Development (OECD) has a Better Life Index, which ranks well-being on these 11 dimensions (from the OECD Better Life Index entry on Wikipedia), Note: Links have been removed,
Housing: housing conditions and spendings (e.g. real estate pricing)
Income: household income and financial wealth
Jobs: earnings, job security and unemployment
Community: quality of social support network
Education: education and what you get out of it
Environment: quality of environment (e.g. environmental health)
This notion that other countries are profiting from Canadian innovation while we lag behind has been repeated so often that it has become an article of faith, and I never questioned it until someone else challenged me. This article of faith is repeated internationally; it sometimes seems that every country in the world is worried that some other country will benefit from its national innovation.
Getting back to the Canadian situation, we’ve decided to approach the problem by not asking questions about our article of faith or how to break the ‘invent and sell’ cycle. Instead of questioning an assumption and producing an open-ended question, we have these questions (1) What are the key skills required to manage innovation? (2) And, what are the leading practices for teaching these skills in business schools, other academic departments, colleges/polytechnics, and industry?
In my world, that first question would be a second-tier question, at best. The second question presupposes the answer: more training in universities and colleges. I took a look at the report’s Expert Panel webpage and found it populated by five individuals who are either academics or have strong ties to academe. They did have a workshop, and the list of participants does include people who run businesses, from the ‘Improving Innovation Through Better Management’ report (Note: Formatting has not been preserved),
Former President and Vice-Chancellor of Wilfrid Laurier University (Waterloo, ON)
Richard Boudreault, FCAE, Chairman, Sigma Energy Storage (Montréal, QC)
Judy Fairburn, FCAE, Past Board Chair, Alberta Innovates; retired EVP Business Innovation & Chief Digital Officer, Cenovus Energy Inc. (Calgary, AB)
Tom Jenkins, O.C., FCAE, Chair of the Board, OpenText
Director of the Institute for Gender and the Economy and Distinguished Professor, Rotman School of Management, University of Toronto (Toronto, ON)
Senior Vice President of Engineering, Shopify Inc. (Ottawa, ON)
Academic Director and Professor, i2I, Beedie School of Business, Simon Fraser University (Vancouver, BC)
John L. Mann, FCAE, Owner, Mann Consulting
CEO, Volta Labs (Halifax, NS)
Professor of Higher Education and Director of the Centre for the Study of Canadian and International Higher Education, Ontario Institute for Studies in Education, University of Toronto (Toronto, ON)
Professor and Chair, J. Herbert Smith Centre for Technology Management & Entrepreneurship, Faculty of Engineering, University of New Brunswick
Senior Executive, Innovation, IBM Canada
J. Mark Weber, Eyton Director, Conrad School of Entrepreneurship & Business, University of Waterloo
I am a little puzzled by the IBM executive’s presence (Dan Sinai) on this list. Wouldn’t Canadians holding onto their companies be counterproductive to IBM’s interests? As for John L. Mann, I’ve not been able to find him or his consulting company online. It’s unusual not to find any trace of an individual or company online these days.
In all, there were nine individuals representing academic or government institutions in this list. The gender balance is ten males and five females for the workshop participants and three males and two females for the expert panel. There is no representation from the North or from Manitoba, Saskatchewan, Prince Edward Island, or Newfoundland.
If they’re serious about looking at how to use innovation to drive higher standards of living, why aren’t there any people from Asian countries where they have been succeeding at that very project? South Korea and China come to mind.
I’m sure there are some excellent ideas in the report, I just wish they’d taken their topic to heart and actually tried to approach innovation in Canada in an innovative fashion.
Meanwhile, Vancouver gets another technology hub, from an October 30, 2018 article by Kenneth Chan for the Daily Hive (Vancouver [Canada]), Note: Links have been removed,
Vancouver’s rapidly growing virtual reality (VR) and augmented reality (AR) tech sectors will greatly benefit from a new VR and AR hub created by Launch Academy.
The technology incubator has opened a VR and AR hub at its existing office at 300-128 West Hastings Street in downtown, in partnership with VR/AR Association Vancouver. Immersive tech companies have access to desk space, mentorship programs, VR/AR equipment rentals, investor relations connected to Silicon Valley [emphasis mine], advisory services, and community events and workshops.
Within the Vancouver tech industry, the immersive sector has grown from 15 companies working in VR and AR in 2015 to 220 organizations today.
Globally, the VR and AR market is expected to hit a value of $108 billion by 2021, with tech giants like Amazon, Apple, Facebook, Google, and Microsoft [emphasis mine] investing billions into product development.
In the Vancouver region, the ‘invent and sell’ cycle can be traced back to the 19th century.
One more thing, as I was writing this piece I tripped across this news: ‘$7.7-billion pact makes Encana more American than Canadian’ by Geoffrey Morgan. It’s in the Nov. 2, 2018 print edition of the Vancouver Sun’s front page for business. “Encana Corp., the storied Canadian company that had been slowly transitioning away from Canada and natural gas over the past few years under CEO [Chief Executive Officer] Doug Suttles, has pivoted aggressively to US shale basins. … Suttles, formerly a BP Plc. executive, moved from Calgary [Alberta, Canada] to Denver [Colorado, US], though the company said that was for personal reasons and not a precursor to relocation of Encana’s headquarters.” Yes, that’s quite believable. By the way, Suttles has spent* most of his life in the US (Wikipedia entry).
In any event, it’s not just Canadian emerging technology companies that get sold or somehow shifted out of Canada.
So, should we break the cycle and, if so, how are we going to do it?
*’spend’ corrected to ‘spent’ on November 6, 2018.
Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eyeopening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),
Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.
Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.
Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.
Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.
Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.
Musa has offered a compelling argument with lots of links to supporting evidence.
[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]
An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.
The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.
The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.
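Translating the reported percentages above into case counts gives a sense of the margin; this is simple arithmetic on the figures in the news item, nothing more.

```python
# Back-of-the-envelope conversion of the reported accuracy figures
# into approximate case counts for the 225-case diagnosis round.
total_cases = 225
ai_correct = 0.87 * total_cases       # roughly 196 correct diagnoses
doctors_correct = 0.66 * total_cases  # roughly 148 for the 15-doctor team
margin = ai_correct - doctors_correct # roughly 47 additional cases
```

So the 21-point accuracy gap corresponds to roughly 47 cases out of 225, and the AI reached it in about 15 minutes.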
The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.
To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.
All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.
Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.
“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.
Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]
AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.
Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]
Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]
Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.
Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.
China has introduced a series of plans in developing AI applications in recent years.
In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”
The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.
I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,
To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.
Radiology isn’t the only area where experts might find themselves displaced.
It seems inroads have been made by artificial intelligence systems (AI) into the diagnosis of eye diseases. It got the ‘Fast Company’ treatment (exciting new tech, learn all about it) as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),
An artificial intelligence (AI) system that can recommend the correct referral decision for more than 50 eye diseases as accurately as experts has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].
The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.
Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.
More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.
Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”
“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”
The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.
Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.
To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.
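The two-stage design described above (one network maps the raw OCT scan to features of eye disease, a second maps those features to a referral decision with a confidence percentage) can be outlined in a toy sketch. This is a loose structural illustration under my own assumptions, not DeepMind’s actual architecture: the random “networks,” shapes, and referral categories are placeholders.

```python
# Loose structural sketch of a two-stage OCT referral pipeline:
# stage 1 turns a raw scan into disease-feature scores, stage 2
# turns those scores into a referral decision plus a confidence.
# All shapes, labels, and the random weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)

REFERRALS = ["urgent", "semi-urgent", "routine", "observation"]
N_FEATURES = 10  # the system learned ~10 features of eye disease

def segmentation_net(scan: np.ndarray) -> np.ndarray:
    """Stage 1: map a raw OCT volume to per-feature evidence scores."""
    w = rng.normal(size=(scan.size, N_FEATURES))
    return scan.ravel() @ w

def classification_net(features: np.ndarray) -> tuple[str, float]:
    """Stage 2: map feature scores to a referral and a confidence."""
    w = rng.normal(size=(N_FEATURES, len(REFERRALS)))
    logits = features @ w
    probs = np.exp(logits - logits.max())  # softmax over referral options
    probs /= probs.sum()
    i = int(probs.argmax())
    return REFERRALS[i], float(probs[i])

scan = rng.normal(size=(8, 8, 8))  # stand-in for a 3D OCT volume
features = segmentation_net(scan)
decision, confidence = classification_net(features)
```

One design point the split illustrates: because stage 2 only ever sees feature scores, the first stage can in principle be retrained for a different scanner model while the referral stage stays fixed, which is consistent with the device-independence claim made below.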
The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.
Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.
The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.
If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.
The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.
Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.
Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”
Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research than can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”
Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”
Here’s a link to and a citation for the study,
Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018
In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human.
The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!
I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, a rapid movement of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” the inability of humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks with an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could also be triggered consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
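The blink-triggered redirection described above can be sketched as a small per-frame controller. The 5-degree cap is taken from the reported results; the class, the `eye_openness` signal, and the method names are all hypothetical illustration, not the researchers’ actual implementation:

```python
# Hypothetical sketch of blink-triggered redirected walking (RDW).
# Values below reflect the reported findings: rotations of up to ~5 degrees
# can pass unnoticed during a single eye blink.
MAX_ROTATION_DEG = 5.0

class BlinkRedirector:
    """Accumulates the scene rotation a steering algorithm requests and
    injects a capped slice of it at each blink onset.

    `eye_openness` is assumed to come from an eye tracker in the headset,
    ranging from 0.0 (closed) to 1.0 (open).
    """

    def __init__(self, blink_threshold=0.1):
        self.blink_threshold = blink_threshold
        self.pending_rotation_deg = 0.0   # rotation still wanted by steering
        self.applied_rotation_deg = 0.0   # total rotation injected so far
        self._in_blink = False

    def request_rotation(self, degrees):
        # A path-steering algorithm asks for extra, unnoticed scene rotation.
        self.pending_rotation_deg += degrees

    def update(self, eye_openness):
        """Call once per rendered frame; returns degrees to rotate this frame."""
        blinking = eye_openness < self.blink_threshold
        applied = 0.0
        if blinking and not self._in_blink:
            # Blink onset: apply at most MAX_ROTATION_DEG of the pending rotation.
            applied = max(-MAX_ROTATION_DEG,
                          min(MAX_ROTATION_DEG, self.pending_rotation_deg))
            self.pending_rotation_deg -= applied
            self.applied_rotation_deg += applied
        self._in_blink = blinking
        return applied
```

With blinks every 4 to 19 seconds, a controller like this could drip-feed a large total rotation into the scene a few unnoticeable degrees at a time.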
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first ever virtual reality short film at SIGGRAPH 2018, and the hope is viewers will walk away feeling connected to the characters as equally as they will with the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real-time.
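The real-time blending step can be illustrated with a toy version. This is not Google’s renderer (which also uses depth maps and compressed data); it is just a minimal sketch of the general idea of answering a view query by blending the angularly closest captured images, with made-up inputs:

```python
import math

def blend_nearest_views(query_dir, views, k=3):
    """Toy light field view blending (illustrative only).

    `views` is a list of (direction, color) pairs: `direction` is a unit
    vector toward a captured camera position on the sphere, `color` an RGB
    triple for the sample along that direction. The query ray is answered
    by blending the k angularly closest views, weighted by inverse
    angular distance, so an exact match dominates the result.
    """
    def angle(a, b):
        dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
        return math.acos(dot)

    nearest = sorted(views, key=lambda v: angle(query_dir, v[0]))[:k]
    weights = [1.0 / (angle(query_dir, d) + 1e-6) for d, _ in nearest]
    total = sum(weights)
    return tuple(
        sum(w * c[i] for w, (_, c) in zip(weights, nearest)) / total
        for i in range(3)
    )
```

A real renderer does this per pixel at headset frame rates, which is why the compression and GPU-friendly data layout described above matter so much.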
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing the technology’s ability to provide a truly immersive experience with an unmatched level of realism. Though light fields technology has been researched and explored in computer graphics for more than 30 years, practical systems for actually delivering high-quality light field experiences have not been possible until now.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google, will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
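The Stanford system solves the acoustic wave equation directly, which is well beyond a short sketch, but the underlying idea the paragraph describes (a vibrating surface acting like a loudspeaker) can be illustrated with the simpler modal model that earlier systems relied on. Everything below is a hypothetical toy, not the researchers’ solver:

```python
import math

def modal_impact_sound(modes, impact_strength, duration, sample_rate=44100):
    """Toy modal sound synthesis (illustrative, not the Stanford method).

    Each mode is (frequency_hz, damping, gain): the struck object's
    vibration is modelled as a sum of exponentially decaying sinusoids,
    the classic starting point for rigid-body impact sounds. The Stanford
    system instead runs one acoustic wave simulation over all objects,
    avoiding per-object mode precomputation.
    """
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Sum the decaying sinusoidal contribution of every vibration mode.
        s = sum(gain * impact_strength *
                math.exp(-damping * t) * math.sin(2 * math.pi * f * t)
                for f, damping, gain in modes)
        samples.append(s)
    return samples
```

The contrast is the point: a modal model like this must be precomputed per object and says nothing about how the pressure waves bend or deaden around other geometry, which is exactly what the wave-based approach captures.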
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residence at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that and I always assumed it meant summarize the high points, if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.
Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),
If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International, in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.
Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …
“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”
Here’s more from Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),
Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.
The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.
As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.
As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.
Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote-controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.
For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.
Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.
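The mechanism Rahim describes – a model faithfully reproducing the discrimination already baked into its training data – can be shown with a minimal, hypothetical sketch. Everything here (the groups, the rates, the naive base-rate “model”) is invented for illustration and is not drawn from any real justice system or product:

```python
# Hypothetical illustration: a risk score trained on biased historical records
# reproduces that bias, even when two groups behave identically.
import random

random.seed(0)

def make_records(group, n, true_rate=0.20, record_prob=1.0):
    """Simulate n people: each truly reoffends with the same probability,
    but the offence only enters the historical record with record_prob."""
    records = []
    for _ in range(n):
        reoffended = random.random() < true_rate
        recorded = reoffended and (random.random() < record_prob)
        records.append((group, recorded))
    return records

# Groups A and B have IDENTICAL true reoffence rates (20%), but group B
# was historically policed twice as heavily, so twice as many of its
# offences were recorded.
data = make_records("A", 5000, record_prob=0.5) + \
       make_records("B", 5000, record_prob=1.0)

def risk_score(group, data):
    """A naive 'model' that scores risk by a group's recorded base rate."""
    rows = [recorded for g, recorded in data if g == group]
    return sum(rows) / len(rows)

print(f"risk(A) = {risk_score('A', data):.2f}")  # ~0.10
print(f"risk(B) = {risk_score('B', data):.2f}")  # ~0.20
```

Despite identical underlying behaviour, the model assigns group B roughly double the risk – the uneven policing in the historical data, not any real difference, drives the disparity.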
In July 2017, the Vancouver Police Department announced its use of predictive policing software, making it the first jurisdiction in Canada to adopt the technology. My Nov. 23, 2017 posting featured the announcement.
Formed by a group of non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.
The Steering Committee is the campaign’s principal leadership and decision-making body. It is composed of five international NGOs, a regional NGO network, and four national NGOs that work internationally.
For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.
For anyone who may be interested in joining Amnesty International, go here.