There has been a lot of talk about Tim Cook (Chief Executive Officer of Apple Inc.), his data privacy policy at Apple, and his push for better consumer data privacy. For example, there’s this from a June 10, 2022 article by Kif Leswing for CNBC,
Key Points
Apple CEO Tim Cook said in a letter to Congress that lawmakers should advance privacy legislation that’s currently being debated “as soon as possible.”
The bill would give consumers protections and rights dealing with how their data is used online, and would require that companies minimize the amount of data they collect on their users.
Apple has long positioned itself as the most privacy-focused company among its tech peers.
…
Apple has long positioned itself as the most privacy-focused company among its tech peers, and Cook regularly addresses the issue in speeches and meetings. Apple says that its commitment to privacy is a deeply held value by its employees, and often invokes the phrase “privacy is a fundamental human right.”
It’s also strategic for Apple’s hardware business. Legislation that regulates how much data companies collect or how it’s processed plays into Apple’s current privacy features, and could even give Apple a head start against competitors that would need to rebuild their systems to comply with the law.
…
More recently, with rising concerns regarding artificial intelligence (AI), Apple has rushed to assure customers that their data is still private. From a June 10, 2024 article by Kyle Orland for Ars Technica (Note: Links have been removed),
Apple’s AI promise: “Your data is never stored or made accessible to Apple”
And publicly reviewable server code means experts can “verify this privacy promise.”
With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC [Apple’s World Wide Developers Conference] keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.
“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.
…
Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.
When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.
…
But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”
While the keynote speech was light on details [emphasis mine] for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging [emphasis mine] as it wades into the generative AI space for the first time. We’ll see what security experts have to say [emphasis mine] when these servers and their code are made publicly available in the near future.
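For the technically minded, the mechanism Federighi describes, with devices refusing to talk to servers whose software hasn’t been publicly logged, is essentially remote attestation checked against a transparency log. Here’s a minimal sketch of that general idea in Python; the helper functions and log format are hypothetical illustrations of the concept, not Apple’s actual protocol,

```python
import hashlib

# A minimal sketch of "refuse to talk to a server unless its software has
# been publicly logged": the client checks the server's software
# 'measurement' (a cryptographic hash) against a public, append-only log.
# Hypothetical helpers and log format; not Apple's actual protocol.

def measurement(software_image: bytes) -> str:
    """Hash of the server's software image, i.e., its 'measurement'."""
    return hashlib.sha256(software_image).hexdigest()

def client_will_send_data(server_measurement: str, public_log: set[str]) -> bool:
    """Only send data if the server's measurement appears in the public log."""
    return server_measurement in public_log

# Toy example: one logged (inspectable) build, one unlogged build.
logged_build = b"pcc-server-release-2024"
public_log = {measurement(logged_build)}  # published for expert inspection

print(client_will_send_data(measurement(logged_build), public_log))        # True
print(client_will_send_data(measurement(b"unreviewed-build"), public_log)) # False
```

The point of the design, as described, is that the client enforces the check cryptographically, so a server running unlogged code never receives user data in the first place.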
Not everyone is convinced. ‘Privacy. That’s Apple,’ the slogan proclaims. New research from Aalto University begs to differ.
Study after study has shown how voluntary third-party apps erode people’s privacy. Now, for the first time, researchers at Aalto University have investigated the privacy settings of Apple’s default apps; the ones that are pretty much unavoidable on a new device, be it a computer, tablet or mobile phone. The researchers will present their findings in mid-May at the prestigious CHI conference [ACM CHI Conference on Human Factors in Computing Systems, May 11, 2024 – May 16, 2024 in Honolulu, Hawaii], and the peer-reviewed research paper is already available online.
‘We focused on apps that are an integral part of the platform and ecosystem. These apps are glued to the platform, and getting rid of them is virtually impossible,’ says Associate Professor Janne Lindqvist, head of the computer science department at Aalto.
The researchers studied eight apps: Safari, Siri, Family Sharing, iMessage, FaceTime, Location Services, Find My and Touch ID. They collected all publicly available privacy-related information on these apps, from technical documentation to privacy policies and user manuals.
The fragility of the privacy protections surprised even the researchers. [emphasis mine]
‘Due to the way the user interface is designed, users don’t know what is going on. For example, the user is given the option to enable or not enable Siri, Apple’s virtual assistant. But enabling only refers to whether you use Siri’s voice control. Siri collects data in the background from other apps you use, regardless of your choice, unless you understand how to go into the settings and specifically change that,’ says Lindqvist.
Participants weren’t able to stop data sharing in any of the apps
In practice, protecting privacy on an Apple device requires persistent and expert clicking on each app individually. Apple’s help falls short.
‘The online instructions for restricting data access are very complex and confusing, and the steps required are scattered in different places. There’s no clear direction on whether to go to the app settings, the central settings – or even both,’ says Amel Bourdoucen, a doctoral researcher at Aalto.
In addition, the instructions didn’t list all the necessary steps or explain how collected data is processed.
The researchers also demonstrated these problems experimentally. They interviewed users and asked them to try changing the settings.
‘It turned out that the participants weren’t able to prevent any of the apps from sharing their data with other applications or the service provider,’ Bourdoucen says.
Finding and adjusting privacy settings also took a lot of time. ‘When making adjustments, users don’t get feedback on whether they’ve succeeded. They then get lost along the way, go backwards in the process and scroll randomly, not knowing if they’ve done enough,’ Bourdoucen says.
In the end, Bourdoucen explains, the participants were able to take one or two steps in the right direction, but none succeeded in following the whole procedure to protect their privacy.
Running out of options
If preventing data sharing is difficult, what does Apple do with all that data? [emphasis mine]
It’s not possible to be sure based on public documents, but Lindqvist says it’s possible to conclude that the data will be used to train the artificial intelligence system behind Siri and to provide personalised user experiences, among other things. [emphasis mine]
Many users are used to seamless multi-device interaction, which makes it difficult to move back to a time of more limited data sharing. However, Apple could inform users much more clearly than it does today, says Lindqvist. The study lists a number of detailed suggestions to clarify privacy settings and improve guidelines.
For individual apps, Lindqvist says that the problem can be solved to some extent by opting for a third-party service. For example, some participants in the study had switched from Safari to Firefox.
Lindqvist can’t comment directly on how Google’s Android works in similar respects [emphasis mine], as no one has yet done a similar mapping of its apps. But past research on third-party apps does not suggest that Google is any more privacy-conscious than Apple [emphasis mine].
So what can be learned from all this – are users ultimately facing an almost impossible task?
‘Unfortunately, that’s one lesson,’ says Lindqvist.
I have found two copies of the researchers’ paper. There’s a PDF version on Aalto University’s website that bears this caution,
This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.
Here’s a link to and a citation for the official version of the paper,
Privacy of Default Apps in Apple’s Mobile Ecosystem by Amel Bourdoucen and Janne Lindqvist. CHI ’24: Proceedings of the CHI Conference on Human Factors in Computing Systems, May 2024, Article No.: 786, Pages 1–32. DOI: https://doi.org/10.1145/3613904.3642831 Published: 11 May 2024
The Canadian Science Policy Conference (CSPC) 2018 dates are November 7-9, 2018 and, as the opening draws closer, I’m getting more ‘breathlessly enthusiastic’ announcements. Here are a few highlights from an October 23, 2018 announcement received via email,
CSPC 2018 is honoured to announce that the Honourable Kirsty Duncan, Minister of Science and Sport, will be delivering the keynote speech of the Gala Dinner on Thursday, November 8 at 7:00 PM. Minister Duncan will also hand out the 4th Science Policy Award of Excellence to the winner of this year’s competition.
…
CSPC 2018 features 250 speakers, a record number, and above is the breakdown of the positions they hold, over 43% of them being at the executive level and 57% of our speakers being women.
*All information as of October 15, 2018
…
If you think that you will not meet any new people at CSPC and all of the registrants are the same as last year, think again!
Over 57% of registrants are attending the conference for the FIRST TIME!
Secure your spot today!
*All information as of October 15, 2018
Here’s more from an October 31, 2018 announcement received via email,
One year after her appointment as Canada’s Chief Science Advisor, Dr. Mona Nemer will discuss her experience with the community. Don’t miss this opportunity.
…
[Canadian Science Policy Centre editorials in advance of conference]
Meanwhile, the Council of Canadian Academies (CCA) has a job opening. From the posting,

Role Title: Director of Communications
Deadline: November 5, 2018
Salary: $115,000 to $165,000
About the Council of Canadian Academies
The Council of Canadian Academies (CCA) is a not-for-profit organization that conducts assessments of evidence on scientific topics of public interest to inform decision-making in Canada.
Role Summary
The CCA is seeking an experienced communications professional to join its senior management team as Director of Communications. Reporting to the President and CEO, the Director is responsible for developing and implementing a communications plan for the organization that promotes and highlights the CCA’s work, brand, and overall mission to a variety of potential users and stakeholders; overseeing the publication and dissemination of high-quality hard copy and online products; and providing strategic advice to the President and CCA’s Board, Committees, and Panels. In fulfilling these responsibilities, the Director of Communications is expected to work with a variety of interested groups including the media, the broad policy community, government, and non-governmental organizations.
Key Responsibilities and Accountabilities
Under the direction of the President and CEO, the Director leads a small team of communications and publishing professionals to meet the responsibilities and accountabilities outlined below.
Strategy Development and External Communications
• Develop and execute an overall strategic communications plan for the organization that promotes and highlights the CCA’s work, brand, and overall mission.
• Oversee the CCA’s presence and influence on digital and social platforms including the development and execution of a comprehensive content strategy for linking CCA’s work with the broader science and policy ecosystem with a focus on promoting and disseminating the findings of the CCA’s expert panel reports.
• Provide support, as needed for relevant government relations activities including liaising with communications counterparts, preparing briefing materials, responding to requests to share CCA information, and coordinating any appearances before Parliamentary committees or other bodies.
• Harness opportunities for advancing the uptake and use of CCA assessments, including leveraging the strengths of key partners particularly the founding Academies.
Publication and Creative Services
• Oversee the creative services, quality control, and publication of all CCA’s expert panel reports including translation, layout, quality assurance, graphic design, proofreading, and printing processes.
• Oversee the creative development and publication of all CCA’s corporate materials including the Annual Report and Corporate Plan through content development, editing, layout, translation, graphic design, proofreading, and printing processes.
Advice and Issues Management
• Provide strategic advice and support to the President’s Office, Board of Directors, Committees, and CCA staff about increasing the overall impact of CCA expert panel reports, brand awareness, outreach opportunities, and effective science communication.
• Provide support to the President by anticipating project-based or organizational issues, understanding potential implications, and suggesting strategic management solutions.
• Ensure consistent messages, style, and approaches in the delivery of all internal and external communications across the organization.
Leadership
• Mentor, train, and advise up to five communications and publishing staff on a day-to-day basis and complete annual performance reviews and planning.
• Lead the development and implementation of all CCA-wide policy and procedures relating to all aspects of communications and publishing.
• Represent the issues, needs, and ongoing requirements for the communications and publishing staff as a member of the CCA senior management team.
Knowledge Requirements
The Director of Communications requires:
• Superior knowledge of communications and public relations principles – preferably as they apply in a non-profit or academic setting;
• Extensive experience in communications planning and issues management;
• Knowledge of current research, editorial, and publication production standards and procedures including but not limited to: translation, copy-editing, layout/design, proofreading and publishing;
• Knowledge of evaluating impact of reports and assessments;
• Knowledge in developing content strategy, knowledge mobilization techniques, and creative services and design;
• Knowledge of human resource management techniques and experience managing a team;
• Experience in coordinating, organizing and implementing communications activities including those involving sensitive topics;
• Knowledge of the relationships and major players in Canada’s intramural and extramural science and public policy ecosystem, including awareness of federal science departments and Parliamentary committees, funding bodies, and related research groups;
• Knowledge of Microsoft Office Suite, Adobe Creative Suite, WordPress and other related programs;
• Knowledge of a variety of social media platforms and measurement tools.
Skills Requirements
The Director of Communications must have:
• Superior time and project management skills
• Superior writing skills
• Superior ability to think strategically regarding how best to raise the CCA’s profile and ensure impact of the CCA’s expert panel reports
• Ability to be flexible and adaptable; able to respond quickly to unanticipated demands
• Strong advisory, negotiation, and problem-solving skills
• Strong skills in risk mitigation
• Superior ability to communicate in both written and oral forms, effectively and diplomatically
• Ability to mentor, train, and provide constructive feedback to direct reports
Education and Experience
This knowledge and skillset is typically obtained through the completion of a post-secondary degree in Journalism, Communications, Public Affairs or a related field, and/or a minimum of 10 years of progressive and related experience. Experience in an organization that has addressed topics in public policy would be valuable.
Language Requirements: This position is English Essential. Fluency in French is a strong asset.
To apply to this position please send your CV and cover letter to careers@scienceadvice.ca before November 5, 2018. The cover letter should answer the following questions in 1,000 words or less:
1. How does your background and work experience make you well-suited for the position of Director of Communications at CCA?
2. What trends do you see emerging in the communications field generally, and in science and policy communications more specifically? How might CCA take advantage of these trends and developments?
3. Knowing that CCA is in the business of conducting assessments of evidence on important policy topics, how do you feel communicating this type of science differs from communicating other types of information and knowledge?
Improving Innovation Through Better Management
The Council of Canadian Academies released their ‘Improving Innovation Through Better Management’ report on October 18, 2018. As some of my regular readers (assuming there are some) might have predicted, I have issues.
While research is world-class and technology start-ups are thriving, few companies grow and mature in Canada. This cycle — invent and sell, invent and sell — allows other countries to capture much of the economic and social benefits of Canadian-invented products, processes, marketing methods, and business models. …
So, the problem is ‘invent and sell’. Leaving aside the questionable conclusion that other countries are reaping the benefits of Canadian innovation (I’ll get back to that shortly), what questions could you ask about how to break the ‘invent and sell, invent and sell’ cycle? Hmm, maybe we should ask, How do we break the ‘invent and sell’ cycle in Canada?
The government presented two questions to deal with the problem and no, how to break the cycle is not one of the questions. From the ‘Improving Innovation Through Better Management‘ summary webpage,
… Escaping this cycle may be aided through education and training of innovation managers who can systematically manage ideas for commercial success and motivate others to reimagine innovation in Canada.
To understand how to better support innovation management in Canada, Innovation, Science and Economic Development Canada (ISED) asked the CCA two critical questions: What are the key skills required to manage innovation? And, what are the leading practices for teaching these skills in business schools, other academic departments, colleges/polytechnics, and industry?
As lawyers, journalists, scientists, doctors, librarians, and anyone who’s ever received misinformation can tell you, asking the right questions can make a big difference.
As for the conclusion that other countries are reaping the benefits of Canadian innovation, is there any supporting data? We enjoy a very high standard of living and have done so for at least a couple of generations. The Organization for Economic Cooperation and Development (OECD) has a Better Life Index, which ranks well-being on these 11 dimensions (from the OECD Better Life Index entry on Wikipedia), Note: Links have been removed,
Housing: housing conditions and spendings (e.g. real estate pricing)
Income: household income and financial wealth
Jobs: earnings, job security and unemployment
Community: quality of social support network
Education: education and what you get out of it
Environment: quality of environment (e.g. environmental health)
Civic engagement: involvement in democracy
Health: how healthy you are (e.g. life expectancy)
Life satisfaction: level of happiness
Safety: murder and assault rates
Work–life balance: balance between work and leisure
This notion that other countries are profiting from Canadian innovation while we lag behind has been repeated so often that it’s become an article of faith, and I never questioned it until someone else challenged me. This article of faith is repeated internationally, and sometimes it seems that every country in the world is worried that someone else will benefit from its national innovation.
Getting back to the Canadian situation, we’ve decided to approach the problem by not asking questions about our article of faith or how to break the ‘invent and sell’ cycle. Instead of questioning an assumption and producing an open-ended question, we have these questions (1) What are the key skills required to manage innovation? (2) And, what are the leading practices for teaching these skills in business schools, other academic departments, colleges/polytechnics, and industry?
In my world, that first question would be a second-tier question, at best. The second question presupposes the answer: more training in universities and colleges. I took a look at the report’s Expert Panel webpage and found it populated by five individuals who are either academics or have strong ties to academe. They did have a workshop, and the list of participants does include people who run businesses. From the ‘Improving Innovation Through Better Management’ report (Note: Formatting has not been preserved),
Workshop Participants
Max Blouw, Former President and Vice-Chancellor of Wilfrid Laurier University (Waterloo, ON)
Richard Boudreault, FCAE, Chairman, Sigma Energy Storage (Montréal, QC)
Judy Fairburn, FCAE, Past Board Chair, Alberta Innovates; retired EVP Business Innovation & Chief Digital Officer, Cenovus Energy Inc. (Calgary, AB)
Tom Jenkins, O.C., FCAE, Chair of the Board, OpenText (Waterloo, ON)
Sarah Kaplan, Director of the Institute for Gender and the Economy and Distinguished Professor, Rotman School of Management, University of Toronto (Toronto, ON)
Jean-Michel Lemieux, Senior Vice President of Engineering, Shopify Inc. (Ottawa, ON)
Elicia Maine, Academic Director and Professor, i2I, Beedie School of Business, Simon Fraser University (Vancouver, BC)
John L. Mann, FCAE, Owner, Mann Consulting (Blenheim, ON)
Jesse Rodgers, CEO, Volta Labs (Halifax, NS)
Creso Sá, Professor of Higher Education and Director of the Centre for the Study of Canadian and International Higher Education, Ontario Institute for Studies in Education, University of Toronto (Toronto, ON)
Dhirendra Shukla, Professor and Chair, J. Herbert Smith Centre for Technology Management & Entrepreneurship, Faculty of Engineering, University of New Brunswick (Fredericton, NB)
Dan Sinai, Senior Executive, Innovation, IBM Canada (Toronto, ON)
J. Mark Weber, Eyton Director, Conrad School of Entrepreneurship & Business, University of Waterloo (Waterloo, ON)
I am a little puzzled by the IBM executive’s presence (Dan Sinai) on this list. Wouldn’t Canadians holding onto their companies be counterproductive to IBM’s interests? As for John L. Mann, I’ve not been able to find him or his consulting company online. It’s unusual not to find any trace of an individual or company online these days.
In all there were nine individuals representing academic or government institutions in this list. The gender balance is 10 males and five females for the workshop participants and three males and two females for the expert panel. There is no representation from the North or from Manitoba, Saskatchewan, Prince Edward Island, or Newfoundland.
If they’re serious about looking at how to use innovation to drive higher standards of living, why aren’t there any people from Asian countries where they have been succeeding at that very project? South Korea and China come to mind.
I’m sure there are some excellent ideas in the report, I just wish they’d taken their topic to heart and actually tried to approach innovation in Canada in an innovative fashion.
Meanwhile, Vancouver gets another technology hub, from an October 30, 2018 article by Kenneth Chan for the Daily Hive (Vancouver [Canada]), Note: Links have been removed,
Vancouver’s rapidly growing virtual reality (VR) and augmented reality (AR) tech sectors will greatly benefit from a new VR and AR hub created by Launch Academy.
The technology incubator has opened a VR and AR hub at its existing office at 300-128 West Hastings Street in downtown, in partnership with VR/AR Association Vancouver. Immersive tech companies have access to desk space, mentorship programs, VR/AR equipment rentals, investor relations connected to Silicon Valley [emphasis mine], advisory services, and community events and workshops.
…
Within the Vancouver tech industry, the immersive sector has grown from 15 companies working in VR and AR in 2015 to 220 organizations today.
…
Globally, the VR and AR market is expected to hit a value of $108 billion by 2021, with tech giants like Amazon, Apple, Facebook, Google, and Microsoft [emphasis mine] investing billions into product development.
…
In the Vancouver region, the ‘invent and sell’ cycle can be traced back to the 19th century.
One more thing: as I was writing this piece I tripped across this news, “$7.7-billion pact makes Encana more American than Canadian” by Geoffrey Morgan, on the business front page of the Nov. 2, 2018 print edition of the Vancouver Sun. “Encana Corp., the storied Canadian company that had been slowly transitioning away from Canada and natural gas over the past few years under CEO [Chief Executive Officer] Doug Suttles, has pivoted aggressively to US shale basins. … Suttles, formerly a BP Plc. executive, moved from Calgary [Alberta, Canada] to Denver [Colorado, US], though the company said that was for personal reasons and not a precursor to relocation of Encana’s headquarters.” Yes, that’s quite believable. By the way, Suttles has spent* most of his life in the US (Wikipedia entry).
In any event, it’s not just Canadian emerging technology companies that get sold or somehow shifted out of Canada.
So, should we break the cycle and, if so, how are we going to do it?
*’spend’ corrected to ‘spent’ on November 6, 2018.
From time to time I check out the latest on attempts to shrink computer chips. In my July 11, 2014 posting I noted IBM’s announcement about developing a 7nm computer chip, and later, in my July 15, 2015 posting, I noted IBM’s announcement of a working 7nm chip. From a July 9, 2015 IBM news release: “The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.”
I’m not sure what happened to the IBM/GlobalFoundries/Samsung partnership, but GlobalFoundries recently announced that it will no longer be working on 7nm chips. From an August 27, 2018 GlobalFoundries news release,
GLOBALFOUNDRIES [GF] today announced an important step in its transformation, continuing the trajectory launched with the appointment of Tom Caulfield as CEO earlier this year. In line with the strategic direction Caulfield has articulated, GF is reshaping its technology portfolio to intensify its focus on delivering truly differentiated offerings for clients in high-growth markets.
GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7nm FinFET program on hold indefinitely [emphasis mine] and restructuring its research and development teams to support its enhanced portfolio initiatives. This will require a workforce reduction, however a significant number of top technologists will be redeployed on 14/12nm FinFET derivatives and other differentiated offerings.
I tried to find a definition for FinFET, but the reference to a MOSFET and multigate transistors was too much incomprehensible information packed into a tight space; see the FinFET Wikipedia entry for more, if you dare.
Getting back to the 7nm chip issue, Samuel K. Moore (I don’t think he’s related to the Moore of Moore’s law) wrote an Aug. 28, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electronics and Electrical Engineers] website) which provides some insight (Note: Links have been removed),
In a major shift in strategy, GlobalFoundries is halting its development of next-generation chipmaking processes. It had planned to move to the so-called 7-nm node, then begin to use extreme-ultraviolet lithography (EUV) to make that process cheaper. From there, it planned to develop even more advanced lithography that would allow for 5- and 3-nanometer nodes. Despite having installed at least one EUV machine at its Fab 8 facility in Malta, N.Y., all those plans are now on indefinite hold, the company announced Monday.
The move leaves only three companies reaching for the highest rungs of the Moore’s Law ladder: Intel, Samsung, and TSMC.
It’s a huge turnabout for GlobalFoundries. …
GlobalFoundries’ rationale for the move is that there are not enough customers that need bleeding-edge 7-nm processes to make it profitable. “While the leading edge gets most of the headlines, fewer customers can afford the transition to 7 nm and finer geometries,” said Samuel Wang, research vice president at Gartner, in a GlobalFoundries press release.
“The vast majority of today’s fabless [emphasis mine] customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node,” explained GlobalFoundries CEO Tom Caulfield in a press release. “Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law. We are shifting our resources and focus by doubling down on our investments in differentiated technologies across our entire portfolio that are most relevant to our clients in growing market segments.”
(The dynamic Caulfield describes is something the U.S. Defense Advanced Research Projects Agency is working to disrupt with its $1.5-billion Electronics Resurgence Initiative. Darpa’s [DARPA] partners are trying to collapse the cost of design and allow older process nodes to keep improving by using 3D technology.)
…
Fabless manufacturing is where the fabrication is outsourced and the manufacturing company of record is focused on other matters according to the Fabless manufacturing Wikipedia entry.
Roland Moore-Colyer (I don’t think he’s related to Moore of Moore’s law either) has written an August 28, 2018 article for theinquirer.net, which also explores this latest news from GlobalFoundries (Note: Links have been removed),
EVER PREPPED A SPREAD for a party to then have less than half the people you were expecting show up? That’s probably how GlobalFoundries [sic] feels at the moment.
The chip manufacturer, which was once part of AMD, had a fabrication process geared up for 7-nanometre chips which its customers – including AMD and Qualcomm – were expected to adopt.
But AMD has confirmed that it’s decided to move its 7nm GPU production to TSMC, and Intel is still stuck trying to make chips based on 10nm fabrication.
…
Arguably, this could mark a stymieing of innovation and cutting-edge designs for chips in the near future. But with processors like AMD’s Threadripper 2990WX overclocked to run at 6GHz across all its 32 cores, in the real-world PC fans have no need to worry about consumer chips running out of puff anytime soon. µ
That’s all folks.
Maybe that’s not all
Steve Blank in a Sept. 10, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some provocative commentary on the Global Foundries announcement (Note: A link has been removed),
For most of our lives, the idea that computers and technology would get better, faster, and cheaper every year was as assured as the sun rising every morning. The story “GlobalFoundries Halts 7-nm Chip Development” doesn’t sound like the end of that era, but for you and anyone who uses an electronic device, it most certainly is.
Technology innovation is going to take a different direction.
…
This story just goes on and on
There was a new development, according to a Sept. 12, 2018 posting on the Nanoclast blog by, again, Samuel K. Moore (Note: Links have been removed),
At an event today [Sept. 12, 2018], Apple executives said that the new iPhone Xs and Xs Max will contain the first smartphone processor to be made using 7 nm manufacturing technology, the most advanced process node. Huawei made the same claim, to less fanfare, late last month and it’s unclear who really deserves the accolades. If anybody does, it’s TSMC, which manufactures both chips.
TSMC went into volume production with 7-nm tech in April, and rival Samsung is moving toward commercial 7-nm production later this year or in early 2019. GlobalFoundries recently abandoned its attempts to develop a 7 nm process, reasoning that the multibillion-dollar investment would never pay for itself. And Intel announced delays in its move to its next manufacturing technology, which it calls a 10-nm node but which may be equivalent to others’ 7-nm technology.
…
There’s a certain ‘soap opera’ quality to this with all the twists and turns.
Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R&D in Canada or, as I prefer to call it, the third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)
Is it possible to get past Hedy?
Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still find, that work was/is hard to come by after that age. You may be able to get past Hedy but I’m not sure I can.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.
Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.
I appreciate that the panel was constrained by the questions given by the government but, given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO: garbage in, garbage out) or, as is said where science is concerned, it’s all about the quality of your questions.
On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically and by area of specialization) they got most of their answers. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),
3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.
When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-79 PDF]
Before I forget, there was mention of the international research scene,
Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]
Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
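Based on that description, the Growth Index appears to be a simple ratio: a country’s growth in publication output divided by world growth over the same period. Here’s a quick sketch using the report’s example GI scores; the formula is my reconstruction from the text above, and the world growth factor is a made-up number for illustration,

```python
def growth_index(country_growth_factor: float, world_growth_factor: float) -> float:
    """Rate of change in research output, normalized by the world rate.
    GI > 1.0 means output grew faster than the world average."""
    return country_growth_factor / world_growth_factor

# Illustrative only: if world output had grown by a factor of 2.0 over
# 2003-2014 (hypothetical number), the report's GI scores would imply:
world = 2.0
for country, gi in [("China", 1.50), ("Canada", 0.88), ("United States", 0.80)]:
    print(f"{country}: GI {gi:.2f} -> implied growth factor {gi * world:.2f}")
```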
As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.
Patents and intellectual property
As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history) and, later, to GPS, WiFi, Bluetooth, and more.
Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period, and when it’s over, other people get access and can innovate further. A patent is not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner so, of course, it didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.
The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),
Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),
Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.
What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.
Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:
To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.
The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.
Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.
To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.
The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.
Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.
It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents as they are the only intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.
Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.
This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
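The report doesn’t spell out the formula, but from the description above, patent flow reads like the difference between patents owned by a country’s assignees and patents created by its inventors, expressed as a fraction of the latter. A sketch under that assumption; the input numbers below are made up purely to reproduce the quoted −0.26 figure,

```python
def patent_flow(owned_by_canadian_assignees: int, invented_in_canada: int) -> float:
    """Negative flow = fewer patents owned domestically than invented
    domestically (my reading of the report's description, not its method)."""
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Made-up illustrative numbers producing the report's 2014 figure:
print(patent_flow(owned_by_canadian_assignees=7400, invented_in_canada=10000))
# -0.26, i.e., a 26% patent ownership deficit
```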
Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)
One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.
Boxed text
While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes including this (from the report released April 10, 2018),
Box 4.2
The FinTech Revolution
Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.
AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).
I’m glad to see the mention of blockchain. AI (artificial intelligence), meanwhile, is an area where we have innovated (from the report released April 10, 2018),
AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).
The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.
…
As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000–2005 to 2% in 2010–2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]
The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa with robot being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.
Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),
The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]
It’s interesting to me that they make reference to many of the same issues that I mention but they seem to forget and don’t use that information in their conclusions.
Box 6.3
Open Science: An Emerging Approach to Create New Linkages
Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).
This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)
More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress. Open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.
The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.
The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.
Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.
What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”
Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)
The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.
Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….
For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in-depth look at future possibilities.
Subregional R&D
I didn’t find this section quite as compelling as the last time around, and that may be because there’s less information; I believe the 2012 report was the first to examine the Canadian R&D scene through a subregional (in that case, provincial) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.
Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.
Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.
Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp. 132-133 Print; pp. 170-171 PDF]
Intriguingly, there’s no mention here that British Columbia (BC) has leading areas of research of its own: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print; p. 153 PDF).
As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.
Final comments
My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).
While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. We also have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, the day’s inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)
Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.
***ETA April 12, 2018 at 1500 PDT: Talk about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts, philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point by noting that Antheil was a musician and Lamarr was an actress, and that together they laid the foundation for later work by electrical engineers (or people with that specialty) leading to WiFi, etc.***
There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.
More stuff
For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.
For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,
For the curious, I did find out more about the Hedy Lamarr and Corel Draw lawsuit. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,
Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation, …
There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,
6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.
There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.
The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,
This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.
The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.
Kuang goes on to explain the reasoning behind this innovation,
The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …
Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,
1. Pre-trip
On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.
2. Stateroom
When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.
3. Food
When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.
4. Activities
The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
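Out of curiosity, here’s a toy sketch (in Python, my choice of language) of the kind of preference-weighted matching Kuang describes, where a guest’s recorded tastes are scored against tagged activities. Every name, tag, and weight below is my own invention for illustration; Carnival has not published how its semantic database actually works.

```python
# Toy content-based recommender in the spirit of the Ocean Medallion
# description. All tags, activities, and weights are hypothetical.
guest_profile = {"wine": 0.9, "classical_music": 0.7, "gambling": 0.1}

activities = {
    "violin concerto": {"classical_music": 1.0},
    "limbo competition": {"party": 1.0},
    "wine tasting": {"wine": 1.0, "party": 0.2},
    "casino night": {"gambling": 1.0, "party": 0.5},
}

def score(profile: dict, tags: dict) -> float:
    """Dot product of guest preferences and activity tags."""
    return sum(profile.get(tag, 0.0) * weight for tag, weight in tags.items())

# Rank activities by how well they match the guest's recorded tastes.
ranked = sorted(activities,
                key=lambda name: score(guest_profile, activities[name]),
                reverse=True)
print(ranked)
# ['wine tasting', 'violin concerto', 'casino night', 'limbo competition']
```

If you like pricey reds, the concerto outranks the limbo competition, just as Kuang suggests; the real system presumably layers much richer signals on top of this basic idea.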
In his Oct. 19, 2017 article, Kuang notes that the cruise line is putting a lot of effort into retraining its staff to emphasize the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in future be incorporated into this technological marvel.
Personalization/customization is increasingly everywhere
How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),
…
Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.
We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.
Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].
While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”
…
Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]
…
The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.
Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.
Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]
…
Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].
Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.
The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.
The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]
This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”
On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”
…
LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.
Loss of personal agency
I have been concerned for some time about the increasingly dull results I get from a Google search. While I realize the company has been gathering information about me via my searches, supposedly in service of giving me better ones, I had no idea how deeply it can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.
More cogently, I rather resent the search engines and artificial intelligence agents (e.g., Facebook bots) that have usurped my role as the arbiter of what interests me; in short, I resent my increasing loss of personal agency.
I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, electoral data has been married with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,
How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.
A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.
During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.
…
David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.
On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.
“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]
…
The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.
“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.
But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.
…
Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.
In the world of data, Mercer’s credentials are impeccable.
“He is an important contributor to the field of artificial intelligence,” says David Carroll.
“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …
Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to add some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,
“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”
But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.
Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.
“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …
Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of control has been added to our lives, and it is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.
This is a roundup post of four items that crossed my path this morning (Dec. 17, 2015), all of them concerned with wearable technology.
The first, a Dec. 16, 2015 news item on phys.org, is a fluffy little piece concerning the imminent arrival of a new generation of wearable technology,
It’s not every day that there’s a news story about socks. But in November [2015], a pair won the Best New Wearable Technology Device Award at a Silicon Valley conference. The smart socks, which track foot landings and cadence, are at the forefront of a new generation of wearable electronics, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society [ACS].
Marc S. Reisch, a senior correspondent at C&EN, notes that stiff wristbands like the popular FitBit that measure heart rate and the number of steps people take have become common. But the long-touted technology needed to create more flexible monitoring devices has finally reached the market. Developers have successfully figured out how to incorporate stretchable wiring and conductive inks in clothing fabric, program them to transmit data wirelessly and withstand washing.
In addition to smart socks, fitness shirts and shoe insoles are on the market already or are nearly there. Although athletes are among the first to gain from the technology, the less fitness-oriented among us could also benefit. One fabric concept product — designed not for covering humans but a car steering-wheel — could sense driver alertness and make roads safer.
Reisch’s Dec. 7, 2015 article (C&EN vol. 93, issue 48, pp. 28-90) provides more detailed information, including market information such as this,
Materials suppliers, component makers, and apparel developers gathered at a printed-electronics conference in Santa Clara, Calif., within a short drive of tech giants such as Google and Apple, to compare notes on embedding electronics into the routines of daily life. A notable theme was the effort to stealthily [emphasis mine] place sensors on exercise shirts, socks, and shoe soles so that athletes and fitness buffs can wirelessly track their workouts and doctors can monitor the health of their patients.
“Wearable technology is becoming more wearable,” said Raghu Das, chief executive officer of IDTechEx [emphasis mine], the consulting firm that organized the conference. By that he meant the trend is toward thinner and more flexible devices that include not just wrist-worn fitness bands but also textiles printed with stretchable wiring and electronic sensors, thanks to advances in conductive inks.
Interesting use of the word ‘stealthily’, which often suggests something sneaky as opposed to merely secretive. I imagine what’s being suggested is that the technology will not impose itself on the user (i.e., you won’t have to learn how to use it as you did with phones and computers).
Leading into my second item, IDC (International Data Corporation), not to be confused with IDTechEx, is mentioned in a Dec. 17, 2015 news item about wearable technology markets on phys.org,
The global market for wearable technology is seeing a surge, led by watches, smart clothing and other connected gadgets, a research report said Thursday [Dec. 16, 2015].
IDC said its forecast showed the worldwide wearable device market will reach a total of 111.1 million units in 2016, up 44.4 percent from this year.
By 2019, IDC sees some 214.6 million units, or a growth rate averaging 28 percent.
“The most common type of wearables today are fairly basic, like fitness trackers, but over the next few years we expect a proliferation of form factors and device types,” said Jitesh Ubrani, Senior Research Analyst for IDC Mobile Device Trackers. “Smarter clothing, eyewear, and even hearables (ear-worn devices) are all in their early stages of mass adoption. Though at present these may not be significantly smarter than their analog counterparts, the next generation of wearables are on track to offer vastly improved experiences and perhaps even augment human abilities.”
One of the most popular types of wearables will be smartwatches, reaching a total of 34.3 million units shipped in 2016, up from the 21.3 million units expected to ship in 2015. By 2019, the final year of the forecast, total shipments will reach 88.3 million units, resulting in a five-year CAGR of 42.8%.
“In a short amount of time, smartwatches have evolved from being extensions of the smartphone to wearable computers capable of communications, notifications, applications, and numerous other functionalities,” noted Ramon Llamas, Research Manager for IDC’s Wearables team. “The smartwatch we have today will look nothing like the smartwatch we will see in the future. Cellular connectivity, health sensors, not to mention the explosive third-party application market all stand to change the game and will raise both the appeal and value of the market going forward.
“Smartwatch platforms will lead the evolution,” added Llamas. “As the brains of the smartwatch, platforms manage all the tasks and processes, not the least of which are interacting with the user, running all of the applications, and connecting with the smartphone. Once that third element is replaced with cellular connectivity, the first two elements will take on greater roles to make sense of all the data and connections.”
Top Five Smartwatch Platform Highlights
Apple’s watchOS will lead the smartwatch market throughout our forecast, with a loyal fanbase of Apple product owners and a rapidly growing application selection, including both native apps and Watch-designed apps. Very quickly, watchOS has become the measuring stick against which other smartwatches and platforms are compared. While there is much room for improvement and additional features, there is enough momentum to keep it ahead of the rest of the market.
Android/Android Wear will be a distant second behind watchOS even as its vendor list grows to include technology companies (ASUS, Huawei, LG, Motorola, and Sony) and traditional watchmakers (Fossil and Tag Heuer). The user experience on Android Wear devices has been largely the same from one device to the next, leaving little room for OEMs to develop further and users left to select solely on price and smartwatch design.
Smartwatch pioneer Pebble will cede market share to AndroidWear and watchOS but will not disappear altogether. Its simple user interface and devices make for an easy-to-understand use case, and its price point relative to other platforms makes Pebble one of the most affordable smartwatches on the market.
Samsung’s Tizen stands to be the dark horse of the smartwatch market and poses a threat to Android Wear, including compatibility with most flagship Android smartphones and an application selection rivaling Android Wear. Moreover, with Samsung, Tizen has benefited from technology developments including a QWERTY keyboard on a smartwatch screen, cellular connectivity, and new user interfaces. It’s a combination that helps Tizen stand out, but not enough to keep up with AndroidWear and watchOS.
There will be a small, but nonetheless significant market for smart wristwear running on a Real-Time Operating System (RTOS), which is capable of running third-party applications, but not on any of these listed platforms. These tend to be proprietary operating systems and OEMs will use them when they want to champion their own devices. These will help within specific markets or devices, but will not overtake the majority of the market.
The company has provided a table with five-year CAGR (compound annual growth rate) growth estimates, which can be found with the Dec. 17, 2015 IDC press release.
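For anyone who wants to sanity-check growth figures like these, CAGR is simply the geometric mean of annual growth: (end/start)^(1/years) − 1. Here’s a minimal sketch in Python; note that the roughly 14.9-million-unit 2014 base is my own back-calculation from IDC’s published numbers, not a figure from the press release.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` annual periods."""
    return (end_value / start_value) ** (1 / years) - 1

# IDC forecasts 88.3 million smartwatch shipments in 2019 and cites a
# five-year CAGR of 42.8%. Working backwards implies a 2014 base of
# roughly 14.9 million units (my inference, not an IDC figure).
implied_2014_base = 88.3 / (1 + 0.428) ** 5
print(f"implied 2014 base: {implied_2014_base:.1f} million units")
print(f"check: {cagr(implied_2014_base, 88.3, 5):.1%}")  # ~42.8%
```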
Disclaimer: I am not endorsing IDC’s claims regarding the market for wearable technology.
For the third and fourth items, it’s back to the science. A Dec. 17, 2015 news item on Nanowerk describes, in general terms, some recent wearable technology research at the University of Manchester (UK) (Note: A link has been removed),
Cheap, flexible, wireless graphene communication devices such as mobile phones and healthcare monitors can be directly printed into clothing and even skin, University of Manchester academics have demonstrated.
In a breakthrough paper in Scientific Reports (“Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications”), the researchers show how graphene could be crucial to wearable electronic applications because it is highly-conductive and ultra-flexible.
The research could pave the way for smart, battery-free healthcare and fitness monitoring, phones, internet-ready devices and chargers to be incorporated into clothing and ‘smart skin’ applications – printed graphene sensors integrated with other 2D materials stuck onto a patient’s skin to monitor temperature, strain and moisture levels.
• In a hospital, a patient wears a printed graphene RFID tag on his or her arm. The tag, integrated with other 2D materials, can sense the patient’s body temperature and heartbeat and sends them back to the reader. The medical staff can monitor the patient’s conditions wirelessly, greatly simplifying the patient’s care.
• In a care home, battery-free printed graphene sensors can be printed on elderly peoples’ clothes. These sensors could detect and collect elderly people’s health conditions and send them back to the monitoring access points when they are interrogated, enabling remote healthcare and improving quality of life.
Existing materials used in wearable devices are either too expensive, such as silver nanoparticles, or not adequately conductive to have an effect, such as conductive polymers.
Graphene, the world’s thinnest, strongest and most conductive material, is perfect for the wearables market because of its broad range of superlative qualities. Graphene conductive ink can be cheaply mass produced and printed onto various materials, including clothing and paper.
The researchers, led by Dr Zhirun Hu, printed graphene to construct transmission lines and antennas and experimented with these in communication devices, such as mobile and Wifi connectivity.
Using a mannequin, they attached graphene-enabled antennas on each arm. The devices were able to ‘talk’ to each other, effectively creating an on-body communications system.
The results proved that graphene enabled components have the required quality and functionality for wireless wearable devices.
Dr Hu, from the School of Electrical and Electronic Engineering, said: “This is a significant step forward – we can expect to see a truly all graphene enabled wireless wearable communications system in the near future.
“The potential applications for this research are huge – whether it be for health monitoring, mobile communications or applications attached to skin for monitoring or messaging.
“This work demonstrates that this revolutionary scientific material is bringing a real change into our daily lives.”
Co-author Sir Kostya Novoselov, who with his colleague Sir Andre Geim first isolated graphene at the University in 2004, added: “Research into graphene has thrown up significant potential applications, but to see evidence that cheap, scalable wearable communication devices are on the horizon is excellent news for graphene commercial applications.”
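As a rough sense of scale for why printed antennas on sleeves are plausible: at WiFi frequencies, a half-wave dipole is only a few centimetres long. The excerpt above doesn’t give the Manchester team’s actual antenna dimensions, so this Python sketch is just the generic textbook formula, not their design.

```python
# Generic half-wave dipole sizing; not the Manchester group's design,
# whose dimensions aren't given in the excerpt above.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def half_wave_dipole_length_m(frequency_hz: float) -> float:
    """Ideal (free-space) half-wavelength for a dipole antenna."""
    return SPEED_OF_LIGHT / frequency_hz / 2

print(f"{half_wave_dipole_length_m(2.45e9) * 100:.1f} cm")
# ~6.1 cm at 2.45 GHz (the WiFi band), easily within a sleeve's area
```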
The next and final item concerns supercapacitors for wearable tech, which makes it slightly different from the other items and is why, despite the date, I’ve placed it last. The research comes from Case Western Reserve University (CWRU; US), according to a Dec. 16, 2015 news item on Nanowerk (Note: A link has been removed),
Wearable power sources for wearable electronics are limited by the size of garments.
With that in mind, researchers at Case Western Reserve University have developed flexible wire-shaped microsupercapacitors that can be woven into a jacket, shirt or dress (Energy Storage Materials, “Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes”).
A Dec. 16, 2015 CWRU news release (on EurekAlert), which originated the news item, provides more detail about a device that would make wearable tech more wearable (after all, you don’t want to recharge your clothes the same way you do your phone and other mobile devices),
By their design or by connecting the capacitors in series or parallel, the devices can be tailored to match the charge storage and delivery needs of electronics donned.
While there’s been progress in development of those electronics–body cameras, smart glasses, sensors that monitor health, activity trackers and more–one challenge remaining is providing less obtrusive and cumbersome power sources.
“The area of clothing is fixed, so to generate the power density needed in a small area, we grew radially-aligned titanium oxide nanotubes on a titanium wire used as the main electrode,” said Liming Dai, the Kent Hale Smith Professor of Macromolecular Science and Engineering. “By increasing the surface area of the electrode, you increase the capacitance.”
Dai and Tao Chen, a postdoctoral fellow in molecular science and engineering at Case Western Reserve, published their research on the microsupercapacitor in the journal Energy Storage Materials this week. The study builds on earlier carbon-based supercapacitors.
A capacitor is cousin to the battery, but offers the advantage of charging and releasing energy much faster.
How it works
In this new supercapacitor, the modified titanium wire is coated with a solid electrolyte made of polyvinyl alcohol and phosphoric acid. The wire is then wrapped with either yarn or a sheet made of aligned carbon nanotubes, which serves as the second electrode. The titanium oxide nanotubes, which are semiconducting, separate the two active portions of the electrodes, preventing a short circuit.
In testing, capacitance–the capability to store charge–increased from 0.57 to 0.9 to 1.04 milliFarads per micrometer as the strands of carbon nanotube yarn were increased from 1 to 2 to 3.
When wrapped with a sheet of carbon nanotubes, which increases the effective area of the electrode, the microsupercapacitor stored 1.84 milliFarads per micrometer. Energy density was 0.16 × 10⁻³ milliwatt-hours per cubic centimeter and power density was 0.01 milliwatts per cubic centimeter.
Whether wrapped with yarn or a sheet, the microsupercapacitor retained at least 80 percent of its capacitance after 1,000 charge-discharge cycles. To match various specific power needs of wearable devices, the wire-shaped capacitors can be connected in series or parallel to raise voltage or current, the researchers say.
When bent up to 180 degrees hundreds of times, the capacitors showed no loss of performance. Those wrapped in sheets showed more mechanical strength.
“They’re very flexible, so they can be integrated into fabric or textile materials,” Dai said. “They can be a wearable, flexible power source for wearable electronics and also for self-powered biosensors or other biomedical devices, particularly for applications inside the body.” [emphasis mine]
Dai’s lab is in the process of weaving the wire-like capacitors into fabric and integrating them with a wearable device.
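The series/parallel point in the excerpt is ordinary circuit math: parallel capacitances add (more stored charge and deliverable current at the same voltage), while series stacks divide the capacitance but add the voltage ratings. A minimal sketch, using illustrative cell values rather than anything from the paper:

```python
# Equivalent capacitance of supercapacitor cells; the values are
# illustrative, not taken from the CWRU paper.
def parallel_capacitance(cells: list) -> float:
    """In parallel, capacitances simply add."""
    return sum(cells)

def series_capacitance(cells: list) -> float:
    """In series, reciprocals add: 1/C_eq = sum(1/C_i)."""
    return 1 / sum(1 / c for c in cells)

cells_farads = [1.8e-3, 1.8e-3, 1.8e-3]  # three hypothetical 1.8 mF cells
print(parallel_capacitance(cells_farads))  # 5.4e-3 F, same voltage rating
print(series_capacitance(cells_farads))    # 0.6e-3 F, but triple the voltage
```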
So one day we may be carrying supercapacitors in our bodies? I’m not sure how I feel about that goal.
The International Telecommunication Union (ITU) patent summit being held today (Oct. 10, 2012) in Geneva, Switzerland was announced in July 2012, as noted in this July 6, 2012 news item on the BBC News website,
A rash of patent lawsuits has prompted the UN to call smartphone makers and other mobile industry bodies together.
It said the parties needed to address the “innovation-stifling use of intellectual property” which had led to several devices being banned from sale.
It said innovations deemed essential to industry standards, such as 3G or Jpeg photos, would be the meeting’s focus.
It noted that if just one patent holder demanded unreasonable compensation the cost of a device could “skyrocket”.
Microsoft and Apple are among firms that have called on others not to enforce sales bans on the basis of such standards-essential patents.
However, lawyers have noted that doing so would deprive other companies of a way of counter-attacking other types of patent lawsuits pursued by the two companies.
Here’s a sample of the activity that has led to convening this summit (excerpted from the BBC news item),
“We are seeing an unwelcome trend in today’s marketplace to use standards-essential patents to block markets,” said the ITU secretary general Dr Hamadoun Toure.
…
Motorola Mobility – now owned by Google – managed to impose a brief sales ban of iPhone and iPads in Germany last year after Apple refused to pay it a licence fee. The dispute centred on a patent deemed crucial to the GPRS data transmission standard used by GSM cellular networks.
…
Samsung has also attempted to use its 3G patents to bar Apple from selling products in Europe, Japan and the US.
…
However, industry watchers note that Apple has used lawsuits to ban Samsung products in both the US and Australia and attempted to restrict sales of other companies’ devices powered by Android.
Mike Masnick commented briefly about the summit in his July 12, 2012 posting on Techdirt,
The UN’s International Telecommunication Union (ITU) — the same unit looking at very questionable plans concerning taxing the internet — has apparently decided that it also needs to step in over the massive patent thicket around smartphones. It’s convening a summit … it looks like they’re only inviting the big companies who make products, and leaving the many trolls out of it. Also, it’s unclear from the description if the ITU really grasps the root causes of the problem: the system itself. …
This Roundtable will assess the effectiveness of RAND (reasonable and non-discriminatory) – based patent policies. The purpose of this initiative is to provide a neutral venue for industry, standards bodies and regulators to exchange innovative ideas that can guide future discussions on whether current patent policies and existing industry practices adequately respond to the needs of the various stakeholders.
Segment 1 (Part II: Specific perspectives of certain key stakeholders in “360 view” format):
Moderator: Mr. Knut Blind, Rotterdam School of Management
Perspectives from certain key stakeholders:
Standard Development Organizations:
Mr. Antoine Dore, ITU
Mr. Dirk Weiler, ETSI
Industry players:
Mr. BJ Watrous, Apple
Mr. Ray Warren, Motorola Mobility
Mr. Michael Fröhlich, RIM [emphasis mine]
Patent offices:
Mr. Michel Goudelis, European Patent Office
Mr. Stuart Graham, United States Patent and Trademark Office
Academic Institution:
Mr. Tim Pohlmann, Technical University of Berlin
I was surprised to note the presence of a Canadian company at the summit.
In general, hopes do not seem high that anything will be resolved putting me in mind of Middle Eastern peace talks, which have stretched on for decades with no immediate end in sight. We’ll see.
The Organisation for Economic Co-operation and Development (OECD) has published its Science, Technology and Industry Scoreboard for 2011, and one section shows that patent quality over the past 20 years has declined dramatically, mainly, the authors say, due to excessive litigation by so-called non-practicing entities that seek to exploit patent laws. The result, they say, is a glut of minor or incremental patent applications that add little to scientific progress.
Of course, the real way to fix this problem is to make the bar to get a patent much, much higher. If you do that, you get less [sic] bogus patent apps being submitted, and it makes it easier to reject such bogus patents.
What Masnick means by bogus is clarified in this quote from the Sept. 23, 2011 news item,
The problem it appears has come about due to the rise of non-practicing entities [patent trolls]; groups that form for the sole purpose of applying for patents in the hopes of suing someone else who happens to use the same ideas, rather than as a means for building an actual product; though not all of the rise can be attributed to such entities as large corporations have apparently become much more litigious as well.
Canada’s Research in Motion (RIM), maker of Blackberry mobile devices, was sued by a non-practicing entity, NTP, Inc. Here’s a little more about the situation (from a Wikipedia essay on NTP),
NTP has been characterized as a patent troll because it is a non-practicing entity that aggressively enforces its patent portfolio against larger, well-established companies. The most notable case was against Research in Motion, makers of the BlackBerry mobile email system.
…
In 2000, NTP sent notice of their wireless email patents to a number of companies and offered to license the patents to them. None of the companies took a license. NTP brought a patent infringement lawsuit against one of the companies, Research in Motion, in the United States District Court for the Eastern District of Virginia. …
During the trial, RIM tried to show that a functional wireless email system was already in the public domain at the time the NTP inventions had been made. This would have invalidated the NTP patents. The prior system was called System for Automated Messages (SAM). RIM demonstrated SAM in court and it appeared to work. But the NTP attorneys discovered that RIM was not using vintage SAM software, but a more modern version that came after NTP’s inventions were made. Therefore the judge instructed the jury to disregard the demonstration as invalid.
The jury eventually found that the NTP patents were valid, that RIM had infringed them, that the infringement had been “willful”, and that the infringement had cost NTP $33 million in damages (the greater of a reasonable royalty or lost profits). The judge, James R. Spencer increased the damages to $53 million as a punitive measure because the infringement had been willful. He also instructed RIM to pay NTP’s legal fees of $4.5 million and issued an injunction ordering RIM to cease and desist infringing the patents. This would have shut down the BlackBerry systems in the US.
RIM reached a settlement with NTP in 2006. Simultaneously, however, RIM continued to request patent reexaminations, and so the patents are still being fought over.
All this makes one wonder just how much innovation and invention could have been stimulated with the funds used to fight and settle this court case.
Intriguingly, RIM was part of a consortium of six companies that in July 2011 successfully purchased former communications giant Nortel Networks’ patent portfolio. From the July 1, 2011 article by Charles Arthur for the Guardian,
Apple, Microsoft, Sony and BlackBerry maker Research in Motion are part of a winning consortium of six companies which have bought a valuable tranche of patents from the bankrupt Nortel Networks patent portfolio for $4.5bn (£2.8bn), in a hotly contested auction that saw Google and Intel lose out.
Early signs had suggested that Google might be the winning bidder for the patents, which will provide valuable armoury for expected disputes in the communications – and especially smartphone – field.
The result could give Apple and Microsoft the upper hand in any forthcoming patents rows. [emphasis mine] Microsoft is already extracting payments from a number of companies that use Google’s Android mobile operating system on the basis that it owns patents that they were infringing. Oracle has big court case against Google alleging that Android infringes a number of Java patents, and claiming $6.1bn in damages.
The other two companies partnering in the consortium are EMC, a storage company, and Ericsson, a communications company.
As Arthur’s article makes clear, this deal is designed to facilitate cash grabs based on Nortel’s patent portfolio and/or to constrain innovation. It’s fascinating to note that RIM is both a target, vis à vis its NTP experience, and a possible aggressor as part of this consortium. Again, imagine how those billions of dollars could have been used for greater innovation and invention.
Other topics were covered as well. The page hosting the OECD scoreboard information boasts a couple of animations, one of particular interest to me (sadly, I cannot embed it here). The item of interest is the animation featuring 30 years of R&D investments in OECD and non-OECD countries. It’s a very lively 16 seconds, and you may need to view it a few times. You’ll see some countries rocket out of nowhere to make their appearance on the chart (Finland and Korea come to mind), and you’ll see some countries progress steadily while others fall back. The Canadian trajectory shows slow and steady growth until approximately 2000, when we fall back for a year or two, after which we remain stagnant.
There’ve been a lot of online articles about e-readers in the last few weeks, in particular as debate rages over whether or not this technology will be viable. It got me to thinking about e-literature, e-readers, e-books, e-paper, e-ink, e-publishing, literacy, and on and on. I’ve divided my musings (or attempts to distinguish some sort of pattern within all these contradictory developments) into three parts. This first part is more concerned with the technology/business end of things.
Samsung just announced that it was moving out of the e-reader business. From an article (Aug. 25, 2010) by Kit Eaton in Fast Company,
Need any evidence that the dedicated e-reader is destined to become a mere niche-appeal device? Here you go: Tech giant Samsung is ditching its clever e-paper business after years of clever successes and a ton of research into what may be the future for the technology.
Back in 2009 at CES Samsung teased its good-looking Kindle-challenging e-reader, the Papyrus, which used Samsung’s own proprietary electronic ink system for the display. At CES this year it followed up with its “E6” device, with a rumored cost of $400. Samsung had been shaking the e-paper world since late in 2008 with numerous e-paper announcements, including revealing a color 14-inch flexible e-paper display as long ago as October 2008, which used carbon nanotube tech to achieve its sharp image quality.
Now it seems that revolutions in the e-reader market (namely that odd race-to-the-bottom in pricing over quality of service) combined with revolutions in the tablet PC market (which means the iPad, which can do a million more things than the Papyrus or E6 could) and pricing that neatly undercuts Samsung’s planned price points has resulted in Samsung killing its e-paper research and development.
According to Eaton, Samsung hasn’t entirely withdrawn from the e-reader business; the company will be concentrating on its LCD-based systems instead. Samsung is also releasing its own tablet, the Galaxy Tab, as competition to Apple’s iPad in mid-September 2010 (Sept. 2, 2010 news item at the Financial Post website).
Dan Nosowitz, also writing for Fast Company, presents an opinion (Aug. 12, 2010 posting) which sheds light on why Samsung is focusing on LCD-based readers over e-ink-based readers such as the Kindle and Nook,
E-ink is one of the more unusual technologies to spring up in recent years. It’s both more expensive and less versatile than LCD, a long-established product seen in everything from iPods to TVs. It’s incredibly specific, but also incredibly good at its one job: reading text.
E-ink e-book readers like the Amazon Kindle and Barnes & Noble Nook offer, in the opinion of myself and many others, the best digital book-reading experience available. …
…
E-ink will die mostly because it fundamentally can’t compete with tablets. That’s why announcements like today’s, in which E-Ink (it’s a company as well as that company’s main–or only?–product) claimed it will release both a color and a touchscreen version by early 2011, is so confusing. But color and interface are hardly the only obstacles e-ink has to overcome to compete with tablets: Its refresh rates make video largely impossible, it can’t cram in enough pixels to make still photos look any more crisp than a day-old McDonald’s french fry, and, most damnably, it’s still extremely expensive.
…
Amazon showed that the way to make e-book readers sell like blazes is to lower the price to near-impulse-item territory. Its new $140 Kindle sold out of pre-orders almost immediately, and there’s been more buzz around the next version than can be explained through hardware upgrades alone. It’s a great reader, don’t get me wrong, but its incredible sales numbers are due in large part to the price cut.
That comment about the price cut being key to the e-reader’s current success is certainly borne out by this article, E-reader faceoff: Kindle or Nook? Here’s a comparison, by Mark W. Smith on physorg.com,
There’s a titanic battle brewing in the e-reader market. The Amazon Kindle and Barnes & Noble Nook are leaving competitors in the dust this summer and are locked in a war that has dropped prices by more than half in just a year.
The Wall Street Journal and Tech News Daily have a few things you should consider before wading into the increasingly crowded e-book market, as well as new research that reveals folks with an e-reader tend to read a whole lot more than ever before. The Barnes and Noble Nook is trying to wrestle some market share away from the big boys, and Sharper Image just announced a new e-reader called the Literati that hopes to, maybe, nail down more male readers? It’s got a color screen, in any event.
Borders has slashed the prices of E-Readers Kobo and Aluratek by $20, illustrating just how meh they’ve become in the tech world. The price drop is nothing new–both the Kindle and Nook, Amazon and Barnes & Noble’s market leaders, have seen their prices slashed recently, and they’re thought to be the most exciting brands in the sector. But who does the news bode worst for?
…
But most of all, this news proves that, as my colleague Kit Eaton pointed out a few months back, this is about as good as it gets for the e-Reader. It’s not quite dead, but it’s looking a bit peaky, like. The reason is, of course, the tablet.
There are efforts that may revive e-readers/e-books/e-paper. For example, a new development in the e-paper/e-reader market was announced in a news item on Azonano (Aug. 27, 2010),
The FlexTech Alliance, focused on developing the electronic display and the flexible, printed electronics industry supply chain, today announced a contract award to Nyx Illuminated Clothing Company to develop a foldable display constructed from a panel of multiple e-paper screens.
Applications for this type of product are numerous. For consumer electronics, a foldable display can increase the size of e-reader screens without increasing the device foot-print. In military applications, maps may be read and stored more easily in the field. Medical devices can be enhanced with more accessible and convenient patient charts.
“To enable this unique technology to work, our engineers will develop circuitry to simultaneously drive six separate e-paper screens as one single display,” described John Bell, project manager for Nyx. “The screen panels will be able to be folded up into the area of a single panel or unfolded to the full six panel area on demand.”
Convenience is always important, and a flexible screen that I could fold up and fit easily into a purse or a pocket offers a big advantage over an e-book reader or an iPad (or other tablet device). I’d be especially interested if there were a sizing option, e.g., being able to view in 1-screen, 2-screen, 3-screen, and up to 6-screen configurations.
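For the curious, here’s how that “six screens as one display” idea might look in software terms: the device keeps one logical framebuffer and simply partitions it into per-panel sub-frames, driving only the panels that are unfolded. This is a minimal, purely illustrative sketch in Python; the 2×3 panel grid, the per-panel resolution, and all the names are my own assumptions, not details from Nyx or the FlexTech Alliance.

# Illustrative sketch only (my assumptions, not Nyx's actual design):
# one logical framebuffer is sliced into per-panel sub-frames, one per
# physical e-paper screen in an assumed 2x3 folding grid.

from typing import List

PANEL_W, PANEL_H = 200, 300  # assumed per-panel resolution in pixels

def split_framebuffer(frame: List[List[int]], cols: int, rows: int) -> List[List[List[int]]]:
    """Slice one logical frame (rows*PANEL_H tall, cols*PANEL_W wide)
    into per-panel sub-frames."""
    panels = []
    for r in range(rows):
        for c in range(cols):
            panel = [row[c * PANEL_W:(c + 1) * PANEL_W]
                     for row in frame[r * PANEL_H:(r + 1) * PANEL_H]]
            panels.append(panel)
    return panels

if __name__ == "__main__":
    cols, rows = 3, 2  # fully unfolded: six panels acting as one display
    frame = [[0] * (cols * PANEL_W) for _ in range(rows * PANEL_H)]
    panels = split_framebuffer(frame, cols, rows)
    print(len(panels), "panels,", len(panels[0][0]), "x", len(panels[0]), "pixels each")

Under these assumptions, a 1-screen view would drive a single sub-frame while the fully unfolded device would drive all six, which is roughly what the sizing option I’m wishing for would require.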
As for the debate about tablets vs. e-readers such as the Kindle, Nook, and their brethren, I really don’t know. E-readers apparently offer a superior reading experience, but that presupposes interest in reading will be maintained. Something like Mongoliad (as described in my Sept. 7, 2010 posting), for example, would seem ideally suited to a tablet environment, where the reader becomes a listener and/or a participant in the story environment.
Tomorrow: Part 2, where I look at the reading and writing experience in this digital world.