
Is China the world leader in nanotechnology and other fields too?

State of Chinese nanoscience/nanotechnology

China claims to be the world leader in the field in a white paper announced in an August 29, 2017 Springer Nature press release,

Springer Nature, the National Center for Nanoscience and Technology, China and the National Science Library of the Chinese Academy of Sciences (CAS) released in both Chinese and English a white paper entitled “Small Science in Big China: An overview of the state of Chinese nanoscience and technology” at NanoChina 2017, an international conference on nanoscience and technology held August 28 and 29 in Beijing. The white paper looks at the rapid growth of China’s nanoscience research into its current role as the world’s leader [emphasis mine], examines China’s strengths and challenges, and makes some suggestions for how its contribution to the field can continue to thrive.

The white paper points out that China has become a strong contributor to nanoscience research in the world, and is a powerhouse of nanotechnology R&D. Some of China’s basic research is leading the world. China’s applied nanoscience research and the industrialization of nanotechnologies have also begun to take shape. These achievements are largely due to China’s strong investment in nanoscience and technology. China’s nanoscience research is also moving from quantitative increase to quality improvement and innovation, with greater emphasis on the applications of nanotechnologies.

“China took an initial step into nanoscience research some twenty years ago, and has since grown its commitment at an unprecedented rate, as it has for scientific research as a whole. Such a growth is reflected both in research quantity and, importantly, in quality. Therefore, I regard nanoscience as a window through which to observe the development of Chinese science, and through which we could analyze how that rapid growth has happened. Further, the experience China has gained in developing nanoscience and related technologies is a valuable resource for the other countries and other fields of research to dig deep into and draw on,” said Arnout Jacobs, President, Greater China, Springer Nature.

The white paper examines China’s research output relative to the rest of the world in terms of research paper output, research contributions contained in the Nano database, and finally patents, providing insight into China’s strengths and expertise in nano research. It also presents the results of a survey of experts from the community discussing the outlook for, and challenges to, the future of China’s nanoscience research.

China nano research output: strong rise in quantity and quality

In 1997, around 13,000 nanoscience-related papers were published globally. By 2016, this number had risen to more than 154,000 nano-related research papers, a compound annual growth rate of 14%, almost four times the 3.7% growth in publications across all areas of research. Over the same period, the nano-related output from China grew from 820 papers in 1997 to over 52,000 papers in 2016, a compound annual growth rate of 24%.
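Those growth rates are easy to sanity-check. A minimal Python sketch, using only the 1997 and 2016 paper counts quoted above (the 19-year span and counts are as given in the press release; any rounding is mine), reproduces both figures:

```python
# Sanity check of the compound annual growth rates (CAGR) quoted above.
# CAGR = (end / start) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Global nano papers: 13,000 (1997) -> 154,000 (2016), a 19-year span
print(f"Global: {cagr(13_000, 154_000, 19):.1%}")  # ~13.9%, matching the ~14% claim
# China: 820 (1997) -> 52,000 (2016)
print(f"China:  {cagr(820, 52_000, 19):.1%}")      # ~24.4%, matching the ~24% claim
```

Both quoted rates check out against the raw paper counts.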

China’s contribution to the global total has been growing steadily. In 1997, Chinese researchers co-authored just 6% of the nano-related papers contained in the Science Citation Index (SCI). By 2010, this grew to match the output of the United States. They now contribute over a third of the world’s total nanoscience output — almost twice that of the United States.

Additionally, China’s share of the most cited nanoscience papers has kept increasing year on year, with a compound annual growth rate of 22% — more than three times the global rate. It overtook the United States in 2014 and its contribution is now many times greater than that of any other country in the world, manifesting an impressive progression in both quantity and quality.

The rapid growth of nanoscience in China has been enabled by consistent and strong financial support from the Chinese government. As early as 1990, the State Science and Technology Committee, the predecessor of the Ministry of Science and Technology (MOST), launched the Climbing Up project on nanomaterial science. During the 1990s, the National Natural Science Foundation of China (NSFC) also funded nearly 1,000 small-scale projects in nanoscience. In the National Guideline on Medium- and Long-Term Program for Science and Technology Development (for 2006−2020) issued in early 2006 by the Chinese central government, nanoscience was identified as one of four areas of basic research and received the largest proportion of research budget out of the four areas. The brain boomerang, with more and more foreign-trained Chinese researchers returning from overseas, is another contributor to China’s rapid rise in nanoscience.

The white paper clarifies the role of Chinese institutions, including CAS, in driving China’s rise to become the world’s leader in nanoscience. Currently, CAS is the world’s largest producer of high-impact nano research, contributing more than twice as many papers in the 1% most-cited nanoscience literature as its closest competitor. In addition to CAS, five other Chinese institutions rank among the global top 20 in output of top-cited 1% nanoscience papers — Tsinghua University, Fudan University, Zhejiang University, University of Science and Technology of China and Peking University.

Nano database reveals advantages and focus of China’s nano research

The Nano database (http://nano.nature.com) is a comprehensive platform that has been recently developed by Nature Research – part of Springer Nature – which contains nanoscience-related papers published in 167 peer-reviewed journals including Advanced Materials, Nano Letters, Nature, Science and more. Analysis of the Nano database of nanomaterial-containing articles published in top 30 journals during 2014–2016 shows that Chinese scientists explore a wide range of nanomaterials, the five most common of which are nanostructured materials, nanoparticles, nanosheets, nanodevices and nanoporous materials.

In terms of applications research, China has a clear leading edge in catalysis, the most popular area of the country’s quality nanoscience papers. Chinese nano researchers have also contributed significantly to nanomedicine and energy-related applications. China is relatively weaker in nanomaterials for electronics applications compared to other research powerhouses, but robotics and lasers are emerging application areas of nanoscience in China, and nanoscience papers addressing photonics and data storage applications are also seeing strong growth. Over 80% of research from China listed in the database explicitly mentions applications of the nanostructures and nanomaterials described, notably higher than most other leading nations such as the United States, Germany, the UK, Japan and France.

Nano also reveals the extent of China’s international collaborations in nano research. The percentage of China’s papers involving international collaboration increased from 36% in 2014 to 44% in 2016. This level of international collaboration, similar to that of South Korea, is still much lower than that of Western countries, and it is not growing as fast as in the United States, France and Germany.

The United States is China’s biggest international collaborator, contributing to 55% of China’s internationally collaborated papers on nanoscience that are included in the top 30 journals in the Nano database. Germany, Australia and Japan follow in descending order as China’s collaborators on nano-related quality papers.

China’s patent output: topping the world, mostly applied domestically

Analysis of the Derwent Innovation Index (DII) database of Clarivate Analytics shows that China’s cumulative total of nano-related patent applications over the past 20 years (209,344 applications, or 45% of the global total) is more than twice that of the United States, the second-largest contributor to nano-related patents. China surpassed the United States in 2008 and has ranked first in the world since.

Five Chinese institutions, including the CAS, Zhejiang University, Tsinghua University, Hon Hai Precision Industry Co., Ltd. and Tianjin University, can be found among the global top 10 institutional contributors to nano-related patent applications. CAS has topped the global rankings since 2008, with a total of 11,218 patent applications over the past 20 years. Interestingly, outside of China most of the other big institutional contributors in the top 10 are commercial enterprises, while in China research and academic institutions lead in patent applications.

However, the number of nano-related patents China has filed overseas is still very low, accounting for only 2.61% of its total patent applications over the last 20 years, whereas the proportion in the United States is nearly 50%. In some European countries, including the UK and France, more than 70% of patent applications are filed overseas.

China has high numbers of patent applications in several popular technical areas for nanotechnology use, and is strongest in patents for polymer compositions and macromolecular compounds. In comparison, nano-related patent applications in the United States, South Korea and Japan are mainly for electronics or semiconductor devices, with the United States leading the world in the cumulative number of patents for semiconductor devices.

Outlook, opportunities and challenges

The white paper highlights that the rapid rise of China’s research output and patent applications has painted a rosy picture for the development of Chinese nanoscience, and in both the traditionally strong subjects and newly emerging areas, Chinese nanoscience shows great potential.

Several interviewed experts in the survey identify catalysis and catalytic nanomaterials as the most promising nanoscience area for China. The use of nanotechnology in the energy and medical sectors was also considered very promising.

Some of the interviewed experts commented that the industrial impact of China’s nanotechnology is limited and there is still a gap between nanoscience research and the industrialization of nanotechnologies. Therefore, they recommended that the government invest more in applied research to drive the translation of nanoscience research and find ways to encourage enterprises to invest more in R&D.

As more and more young scientists enter the field, the competition for research funding is becoming more intense. However, this increasing competition did not concern most of the young scientists interviewed; rather, they emphasized that the soft environment is more important. They recommended establishing channels that allow the suggestions and creative ideas of the young to be heard. Some interviewed young researchers also commented that the current evaluation system seemed geared towards past achievements or favoured overseas experience, and recommended developing an improved talent selection mechanism to ensure the sustainable growth of China’s nanoscience.

I have taken a look at the white paper and found it to be well written. It also provides a brief but thorough history of nanotechnology/nanoscience, even adding a bit of historical information that was new to me. As for the rest of the white paper, it relies on bibliometrics (number of published papers and number of citations) and number of patents filed to lay the groundwork for claiming Chinese leadership in nanotechnology. As I’ve stated many times before, these are problematic measures, but as far as I can determine they are almost the only ones we have. Frankly, as a Canadian, it doesn’t much matter to me since Canada, no matter how you slice or dice it, is always in a lower tier relative to science leadership in major fields. It’s the Americans who might feel inclined to debate leadership with regard to nanotechnology and other major fields, and I leave it to US commentators to take up the cudgels should they be so inclined. The big bonuses here are the history, the glimpse into the Chinese perspective on the field of nanotechnology/nanoscience, and the analysis of weaknesses and strengths.

Coming up fast on Google and Amazon

A November 16, 2017 article by Christina Bonnington for Slate explores the possibility that a Chinese tech giant, Baidu, will provide Google and Amazon with serious competition in their quests to dominate world markets (Note: Links have been removed),

[Image caption: The company took a playful approach to the form—but it has functional reasons for the design, too. Credit: Baidu]

One of the most interesting companies in tech right now isn’t based in Palo Alto, or San Francisco, or Seattle. Baidu, a Chinese company with headquarters in Beijing, is taking on America’s biggest and most innovative tech titans—with style.

Baidu, a titan in its own right, leapt onto the scene as a competitor to Google in the search engine space. Since then, the company, largely underappreciated here in the U.S., has focused on beefing up its artificial intelligence efforts. Former AI chief Andrew Ng, upon leaving the company in March, credited Baidu CEO Robin Li with being one of the first technology leaders to fully appreciate the value of deep learning. Baidu now has a 1,300-person AI group, and that investment in AI has helped the company catch up to older, more established companies like Google and Amazon—both in emerging spaces, such as autonomous vehicles, and in consumer tech, as its latest announcement shows.

On Thursday [November 16, 2017], Baidu debuted its entrants to the popular virtual assistant space: a connected speaker and two robots. Baidu aims for the speaker to compete against options such as Amazon’s Echo line, Google Home, and Apple HomePod. Inside, the $256 device will utilize Baidu’s DuerOS conversational artificial intelligence platform, which is already used in more than 100 different smart home brands’ products. DuerOS will let you use your voice to do things like ask the speaker for information, play music, or hail a cab. Called the Raven H, the speaker includes high-end audio components from Tymphany and a unique design jointly created by acquired startup Raven Tech and Swedish consumer electronics company Teenage Engineering.

While the focus is on exciting new technology products from Baidu, the subtext, such as it is, suggests US companies had best keep an eye on their Chinese competitors.

Dutch/Chinese partnership to produce nanoparticles at the touch of a button

Now back to China and nanotechnology leadership and the production of nanoparticles. This announcement was made in a November 17, 2017 news item on Azonano,

Delft University of Technology [Netherlands] spin-off VSPARTICLE enters the booming Chinese market with a radical technology that allows researchers to produce nanoparticles at the push of a button. VSPARTICLE’s nanoparticle generator uses atoms, the world’s smallest building blocks, to provide a controllable source of nanoparticles. The start-up from Delft signed a distribution agreement with Bio-Sun to make their VSP-G1 nanoparticle generator available in China.

A November 16, 2017 VSPARTICLE press release, which originated the news item,

“We are honoured to cooperate with VSPARTICLE and bring the innovative VSP-G1 nanoparticle generator into the Chinese market. The VSP-G1 will create new possibilities for researchers in catalysis, aerosol, healthcare and electronics,” says Yinghui Cai, CEO of Bio-Sun.

With an exponential growth in nanoparticle research in the last decade, China is one of the leading countries in the field of nanotechnology and its applications. Vincent Laban, CFO of VSPARTICLE, explains: “Due to its immense investments in IOT, sensors, semiconductor technology, renewable energy and healthcare applications, China will eventually become one of our biggest markets. The collaboration with Bio-Sun offers a valuable opportunity to enter the Chinese market at exactly the right time.”

NANOPARTICLES ARE THE BUILDING BLOCKS OF THE FUTURE

Increasingly, scientists are focusing on nanoparticles as a key technology in enabling the transition to a sustainable future. Nanoparticles are used to make new types of sensors and smart electronics; provide new imaging and treatment possibilities in healthcare; and reduce harmful waste in chemical processes.

CURRENT RESEARCH TOOLKIT LACKS A FAST WAY FOR MAKING SPECIFIC BUILDING BLOCKS

With the latest tools in nanotechnology, researchers are exploring the possibilities of building novel materials. This is, however, a trial-and-error method. Getting the right nanoparticles often is a slow struggle, as most production methods take a substantial amount of effort and time to develop.

VSPARTICLE’S VSP-G1 NANOPARTICLE GENERATOR

With the VSP-G1 nanoparticle generator, VSPARTICLE makes the production of nanoparticles as easy as pushing a button. Easy and fast iterations enable researchers to fast-forward their research cycles and verify their hypotheses.

VSPARTICLE

Born out of the research labs of Delft University of Technology, with over 20 years of experience in the synthesis of aerosol, VSPARTICLE believes there is a whole new world of possibilities and materials at the nanoscale. The company was founded in 2014 and has an international sales network in Europe, Japan and China.

BIO-SUN

Bio-Sun was founded in Beijing in 2010 and is a leader in promoting nanotechnology and biotechnology instruments in China. It serves many renowned customers in life science, drug discovery and material science. Bio-Sun has four branch offices in Qingdao, Shanghai, Guangzhou and Wuhan City, and a nationwide sales network.

That’s all folks!

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
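Kuang’s description of the semantic database suggests a fairly standard content-based recommender. Carnival has not published how Ocean Medallion actually scores activities, so the following is purely a hypothetical sketch (the tags and activity names are invented for illustration) of how tagged activities might be matched to a passenger’s tastes:

```python
# Hypothetical sketch of tag-based activity recommendation; Carnival's real
# semantic database and scoring are not public, so tags and names are invented.
ACTIVITIES = {
    "violin concerto":   {"refined", "quiet", "performing-arts"},
    "limbo competition": {"party", "loud", "active"},
    "wine tasting":      {"refined", "quiet", "culinary"},
}

def recommend(profile_tags, activities):
    """Rank activities by how many tags they share with the passenger profile."""
    scores = {name: len(tags & profile_tags) for name, tags in activities.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A passenger whose onboard choices (pricey reds, the spa) suggest refined, quiet tastes:
print(recommend({"refined", "quiet"}, ACTIVITIES))
# the violin concerto outranks the limbo competition
```

In a real system the profile would presumably be updated continuously from the onboard sensors and booking history described above, and the scoring would be far richer than simple tag overlap.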

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a user’s newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, they have married electoral data with consumer data as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more than I’ve included in that excerpt, including embedded videos, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we never have complete personal agency in our lives, but AI, combined with the data gathered online and, increasingly, from wearable and smart technology, adds another largely invisible layer of control. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.