
Is your smart TV or your car spying on you?

Simple answer: Yes.

Smart television sets (TVs)

A December 10, 2024 Universidad Carlos III de Madrid press release (also on EurekAlert) offers details about the data collected by smart TVs,

A scientific team from Universidad Carlos III de Madrid (UC3M), in collaboration with University College London (England) and the University of California, Davis (USA), has found that smart TVs send viewing data to their servers. This allows brands to generate detailed profiles of consumers’ habits and tailor advertisements based on their behaviour.

The research revealed that this technology captures screenshots or audio to identify the content displayed on the screen using Automatic Content Recognition (ACR) technology. This data is then periodically sent to specific servers, even when the TV is used as an external screen or connected to a laptop.

“Automatic Content Recognition works like a kind of visual Shazam, taking screenshots or audio to create a viewer profile based on their content consumption habits. This technology enables manufacturers’ platforms to profile users accurately, much like the internet does,” explains one of the study’s authors, Patricia Callejo, a professor in UC3M’s Department of Telematics Engineering and a fellow at the UC3M-Santander Big Data Institute. “In any case, this tracking—regardless of the usage mode—raises serious privacy concerns, especially when the TV is used solely as a monitor.”

The findings, presented in November [2024] at the Internet Measurement Conference (IMC) 2024, highlight the frequency with which these screenshots are transmitted to the servers of the brands analysed: Samsung and LG. Specifically, the research showed that Samsung TVs sent this information every minute, while LG devices did so every 15 seconds. “This gives us an idea of the intensity of the monitoring and shows that smart TV platforms collect large volumes of data on users, regardless of how they consume content—whether through traditional TV viewing or devices connected via HDMI, like laptops or gaming consoles,” Callejo emphasises.

To test the ability of TVs to block ACR tracking, the research team experimented with various privacy settings on smart TVs. The results demonstrated that, while users can voluntarily block the transmission of this data to servers, the default setting is for TVs to perform ACR. “The problem is that not all users are aware of this,” adds Callejo, who considers this lack of transparency in initial settings concerning. “Moreover, many users don’t know how to change the settings, meaning these devices function by default as tracking mechanisms for their activity.”

This research opens up new avenues for studying the tracking capabilities of cloud-connected devices that communicate with each other (commonly known as the Internet of Things, or IoT). It also suggests that manufacturers and regulators must urgently address the challenges that these new devices will present in the near future.

Here’s a link to and a citation for the paper,

Watching TV with the Second-Party: A First Look at Automatic Content Recognition Tracking in Smart TVs by Gianluca Anselmi, Yash Vekaria, Alexander D’Souza, Patricia Callejo, Anna Maria Mandalari, Zubair Shafiq. IMC ’24: Proceedings of the 2024 ACM on Internet Measurement Conference Pages 622 – 634 DOI: https://doi.org/10.1145/3646547.3689013 Published: 04 November 2024

This paper is open access.
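For anyone who wants a more concrete picture of what ACR-style tracking involves, here is a minimal Python sketch of the general pattern the press release describes: periodically capture what is on the screen, reduce it to a compact fingerprint, and ship that fingerprint to a vendor server. This is an illustration only, not the researchers' code; the `capture_frame` helper, the server URL, and the payload fields are all hypothetical, while the reporting intervals are the ones the study measured for Samsung (every minute) and LG (every 15 seconds).

```python
import hashlib
import time

import requests  # assumed available; stands in for the TV's own upload stack

# Hypothetical endpoint; real ACR services report to vendor-specific servers.
ACR_SERVER_URL = "https://acr.example-tv-vendor.com/ingest"

# Reporting intervals measured in the IMC 2024 study, in seconds.
REPORT_INTERVAL = {"samsung": 60, "lg": 15}


def capture_frame() -> bytes:
    """Hypothetical stand-in for grabbing the pixels currently on screen.
    On a real smart TV this happens in firmware, regardless of the source:
    built-in apps, HDMI input, or a laptop used as an external monitor."""
    raise NotImplementedError("illustrative placeholder")


def fingerprint(frame: bytes) -> str:
    """Reduce a captured frame to a short identifier. Real ACR systems use
    perceptual fingerprints that survive scaling and compression; a plain
    cryptographic hash is used here only to keep the sketch short."""
    return hashlib.sha256(frame).hexdigest()


def report_loop(vendor: str) -> None:
    """Capture, fingerprint, and upload on the vendor's schedule."""
    interval = REPORT_INTERVAL[vendor]
    while True:
        payload = {"device_id": "tv-1234", "fp": fingerprint(capture_frame())}
        requests.post(ACR_SERVER_URL, json=payload, timeout=5)
        time.sleep(interval)
```

The content matching happens server-side: once the fingerprints leave the TV, the viewer has no visibility into how they are matched, retained, or shared, which is why the default-on setting the researchers flag is the crux of the privacy concern.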

Cars

This was on the Canadian Broadcasting Corporation’s (CBC) Day Six radio programme and the segment is embedded in a January 19, 2025 article by Philip Drost, Note: A link has been removed,

When a Tesla Cybertruck exploded outside Trump International Hotel in Las Vegas on New Year’s Day [2025], authorities were quickly able to gather information, crediting Elon Musk and Tesla for sending them info about the car and its driver. 

But for some, it’s alarming to discover that kind of information is so readily available.

“Most carmakers are selling drivers’ personal information. That’s something that we know based on their privacy policies,” Zoë MacDonald, a writer and researcher focussing on online privacy and digital rights, told Day 6 host Brent Bambury.

The Las Vegas Metropolitan Police Department said the Tesla CEO was able to provide key details about the truck’s driver, who authorities believe died by a self-inflicted gunshot wound at the scene, and its movement leading up to the destination. 

With that data, they were able to determine that the explosives came from a device in the truck, not the vehicle itself.  

“We have now confirmed that the explosion was caused by very large fireworks and/or a bomb carried in the bed of the rented Cybertruck and is unrelated to the vehicle itself,” Musk wrote on X following the explosion.

To privacy experts, it’s another example of how your personal information can be used in ways you may not be aware of. And while this kind of data can be useful in an investigation, it’s by no means the only way companies use the information.  

“This is unfortunately not surprising that they have this data,” said David Choffnes, executive director of the Cybersecurity and Privacy Institute at Northeastern University in Boston.

“When you see it all together and know that a company has that information and continues at any point in time to hand it over to law enforcement, then you start to be a little uncomfortable, even if — in this case — it was a good thing for society.”

CBC News reached out to Tesla for comment but did not hear back before publication. 

I found this to be eye-opening, Note: A link has been removed,

MacDonald says the privacy concerns are a byproduct of all the technology new cars come with these days, including microphones, cameras, and sensors. The app that often accompanies a new car is collecting your information, too, she says.

The former writer for the Mozilla Foundation worked on a report in 2023 that examined vehicle privacy policies. For that study, MacDonald sifted through privacy policies from auto manufacturers. And she says the findings were staggering.

Most shocking of all is the information the car can learn from you, MacDonald says. It’s not just when you gas up or start your engine. Your vehicle can learn your sexual activity, disability status, and even your religious beliefs [emphasis mine].

MacDonald says it’s unclear how the car companies do this, because the information in the policies is so vague.

It can also collect biometric data, such as facial geometric features, iris scans, and fingerprints [emphasis mine].

This extends far past the driver. MacDonald says she read one privacy policy that required drivers to read out a statement every time someone entered the vehicle, to make them aware of the data the car collects, something that seems unlikely to go down before your Uber ride.

If that doesn’t bother you, then this might, Note: A link has been removed,

And car companies aren’t just keeping that information to themselves.

Confronted with these types of privacy concerns, many people simply say they have nothing to hide, Choffnes says. But when money is involved, they change their tune. 

According to an investigation from the New York Times in March of 2024, General Motors shared information on how people drive their cars with data brokers that create risk profiles for the insurance industry, which resulted in people’s insurance premiums going up [emphases mine]. General Motors has since said it has stopped sharing those details [emphasis mine].

“The issue with these kinds of services is that it’s not clear that it is being done in a correct or fair way, and that those costs are actually unfair to consumers,” said Choffnes. 

For example, if you make a hard stop to avoid an accident because of something the car in front of you did, the vehicle could register it as poor driving.
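To see why this kind of scoring can be unfair, here is a small, purely illustrative Python sketch of the event-count approach the article is gesturing at. The event types, thresholds, and weights below are invented for the example; real telematics and insurance scoring models are proprietary and are not described in the article. The point is simply that a hard-braking event feeds into a "risk" number with no record of why the braking happened.

```python
from dataclasses import dataclass

# Hypothetical threshold and weights; real scoring models are proprietary.
HARD_BRAKE_THRESHOLD_MS2 = 3.5  # peak deceleration treated as a "hard stop"
WEIGHTS = {"hard_brake": 5.0, "rapid_accel": 3.0, "late_night_trip": 1.0}


@dataclass
class TripEvent:
    kind: str          # e.g. "hard_brake"
    magnitude: float   # e.g. peak deceleration in m/s^2


def risk_score(events: list[TripEvent]) -> float:
    """Naive event-count score: every flagged event raises the total, with
    no notion of context. An emergency stop made to avoid a collision caused
    by another driver scores exactly the same as habitual tailgating."""
    score = 0.0
    for event in events:
        if event.kind == "hard_brake":
            if event.magnitude >= HARD_BRAKE_THRESHOLD_MS2:
                score += WEIGHTS["hard_brake"]
        elif event.kind in WEIGHTS:
            score += WEIGHTS[event.kind]
    return score


# A defensive emergency stop still raises the "risk" number.
print(risk_score([TripEvent("hard_brake", 4.2)]))  # -> 5.0
```

Whether a broker or insurer actually scores events this crudely is not something the article establishes; the sketch only shows how context gets lost once driving data is reduced to counts and weights.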

Drost’s January 19, 2025 article notes that the US Federal Trade Commission has proposed a five-year moratorium to prevent General Motors from selling geolocation and driver behavior data to consumer reporting agencies. In the meantime,

“Cars are a privacy nightmare. And that is not a problem that Canadian consumers can solve or should solve or should have the burden to try to solve for themselves,” said MacDonald.

If you have the time, read Drost’s January 19, 2025 article and/or listen to the embedded radio segment.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy in the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics does seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the ‘AI scene in Canada’, Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president and engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

Corporate influence, nanotechnology regulation, and Friends of the Earth (FoE) Australia

The latest issue of the newsletter Chain Reaction (#121, July 2014), published by Friends of the Earth (FoE) Australia, features an article by Louise Sales, ‘Corporate influence over nanotechnology regulation’, that has given me pause. From the Sales article,

I recently attended an Organisation for Economic Co-operation and Development (OECD) seminar on the risk assessment and risk management of nanomaterials. This was an eye-opening experience that graphically illustrated the extent of corporate influence over nanotechnology regulation globally. Representatives of the chemical companies DuPont and Evonik; the Nanotechnology Industries Association; and the Business and Industry Advisory Committee to the OECD (BIAC) sat alongside representatives of countries such as Australia, the US and Canada and were given equal speaking time.

BIAC gave a presentation on their work with the Canadian and United States Governments to harmonise nanotechnology regulation between the two countries. [US-Canada Regulatory Cooperative Council] [emphasis mine] Repeated reference to the involvement of ‘stakeholders’ prompted me to ask if any NGOs [nongovernmental organizations] were involved in the process. Only in the earlier stages apparently − ‘stakeholders’ basically meant industry.

A representative of the Nanotechnology Industries Association told us about the European NANoREG project they are leading in collaboration with regulators, industry and scientists. This is intended to ‘develop … new testing strategies adapted to innovation requirements’ and to ‘establish a close collaboration among authorities, industry and science leading to efficient and practically applicable risk management approaches’. In other words industry will be helping write the rules.

Interestingly, when I raised concerns about this profound intertwining of government and industry with one of the other NGO representatives they seemed almost dismissive of my concerns. I got the impression that most of the parties concerned thought that this was just the ‘way things were’. As under-resourced regulators struggle with the regulatory challenges posed by nanotechnology − the offer of industry assistance is probably very appealing. And from the rhetoric at the meeting one could be forgiven for thinking that their objectives are very similar − to ensure that their products are safe. Right? Wrong.

I just published an update about the US-Canada Regulatory Cooperation Council (RCC) in my July 14, 2014 posting, where I noted that the RCC has completed its work and final reports are due later this summer. Nowhere in any of the notices is there mention of BIAC’s contribution (whatever it might have been) to this endeavour.

Interestingly, BIAC is not an OECD committee but a separate organization, as per its About us page,

BIAC is an independent international business association devoted to advising government policymakers at OECD and related fora on the many diversified issues of globalisation and the world economy.

Officially recognised since its founding in 1962 as being representative of the OECD business community, BIAC promotes the interests of business by engaging, understanding and advising policy makers on a broad range of issues with the overarching objectives of:

  • Positively influencing the direction of OECD policy initiatives;

  • Ensuring business and industry needs are adequately addressed in OECD policy decision instruments (policy advocacy), which influence national legislation;

  • Providing members with timely information on OECD policies and their implications for business and industry.

Through its 38 policy groups, which cover the major aspects of OECD work most relevant to business, BIAC members participate in meetings, global forums and consultations with OECD leadership, government delegates, committees and working groups.

I don’t see any mention of safety either in the excerpt or elsewhere on their About us page.

As Sales notes in her article,

Ultimately corporations have one primary driver and that’s increasing their bottom line.

I do wonder why there doesn’t seem to have been any transparency regarding BIAC’s involvement with the RCC and why no NGOs (according to Sales) were included as stakeholders.

While I sometimes find FoE and its fellow civil society groups a bit shrill and over-vehement, it never does to get too complacent. For example, who would have thought that General Motors would ignore safety issues (there were car crashes and fatalities as a consequence) over the apparently minuscule cost of changing an ignition switch? From ‘What is the timeline of the GM recall scandal?’ on Vox.com,

March 2005: A GM project engineering manager closed the investigation into the faulty switches, noting that they were too costly to fix. In his words: “lead time for all solutions is too long” and “the tooling cost and piece price are too high.” Later emails unearthed by Reuters suggested that the fix would have cost GM 90 cents per car. [emphasis mine]

March 2007: Safety regulators inform GM of the death of Amber Rose, who crashed her Chevrolet Cobalt in 2005 after the ignition switch shut down the car’s electrical system and air bags failed to deploy. Neither the company nor regulators open an investigation.

End of 2013: GM determines that the faulty ignition switch is to blame for at least 31 crashes and 13 deaths.

According to a July 17, 2014 news item on CBC (Canadian Broadcasting Corporation) news online, Mary Barra, CEO of General Motors, has testified on the matter before the US Senate for a second time this year,

A U.S. Senate panel posed questions to a new set of key players Thursday [July 17, 2014] as it delves deeper into General Motors’ delayed recall of millions of small cars.

An internal report found GM attorneys signed settlements with the families of crash victims but didn’t tell engineers or top executives about mounting problems with ignition switches. It also found that GM’s legal staff acted without urgency.

GM says faulty ignition switches were responsible for at least 13 deaths. It took the company 11 years to recall the cars.

Barra will certainly be asked about how she’s changing a corporate culture that allowed a defect with ignition switches to remain hidden from the car-buying public for 11 years. It will be Barra’s second time testifying before the panel.

H/T ICON (International Council on Nanotechnology) July 16, 2014 news item. Following on the topic of transparency, ICON, based at Rice University in Texas (US), has a Sponsors webpage.