Tag Archives: Zoe Kleinman

UK AI Summit (November 1 – 2, 2023) at Bletchley Park finishes

This is the closest I’ve ever gotten to writing a gossip column (for the first half, see my October 18, 2023 posting and scroll down to the “Insight into political jockeying [i.e., some juicy news bits]” subhead).

Given the role that Canadian researchers (for more about that, see my May 25, 2023 posting and scroll down to “The Panic” subhead) have played in the development of artificial intelligence (AI), it’s been surprising that the Canadian Broadcasting Corporation (CBC) has given very little coverage to the event in the UK. However, there is an October 31, 2023 article by Kelvin Chan and Jill Lawless for the Associated Press posted on the CBC website,

Digital officials, tech company bosses and researchers are converging Wednesday [November 1, 2023] at a former codebreaking spy base [Bletchley Park] near London [UK] to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence.

The two-day summit focusing on so-called frontier AI notched up an early achievement with officials from 28 nations and the European Union signing an agreement on safe and responsible development of the technology.

Frontier AI is shorthand for the latest and most powerful general purpose systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They’re underpinned by foundation models, which power chatbots like OpenAI’s ChatGPT and Google’s Bard and are trained on vast pools of information scraped from the internet.

The AI Safety Summit is a labour of love for British Prime Minister Rishi Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. [emphasis mine]

But U.S. Vice President Kamala Harris may divert attention Wednesday [November 1, 2023] with a separate speech in London setting out the Biden administration’s more hands-on approach.

Canada’s Minister of Innovation, Science and Industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

As the meeting began, U.K. Technology Secretary Michelle Donelan announced that the 28 countries and the European Union had signed the Bletchley Declaration on AI Safety. It outlines the “urgent need to understand and collectively manage potential risks through a new joint global effort.”

South Korea has agreed to host a mini virtual AI summit in six months, followed by an in-person one in France in a year’s time, the U.K. government said.

Chris Stokel-Walker’s October 31, 2023 article for Fast Company presents a critique of the summit prior to the opening, Note: Links have been removed,

… one problem, critics say: The summit, which begins on November 1, is too insular and its participants are homogeneous—an especially damning critique for something that’s trying to tackle the huge, possibly intractable questions around AI. The guest list is made up of 100 of the great and good of governments, including representatives from China, Europe, and Vice President Kamala Harris. And it also includes luminaries within the tech sector. But precious few others—which means a lack of diversity in discussions about the impact of AI.

“Self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI,” says Carsten Jung, a senior economist at the Institute for Public Policy Research, a progressive think tank that recently published a report advising on key policy pillars it believes should be discussed at the summit. (Jung isn’t on the guest list.) “We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.”

Kriti Sharma, chief product officer for legal tech at Thomson Reuters, who will be watching from the wings, not receiving an invite, is similarly circumspect about the goals of the summit. “I hope to see leaders moving past the doom to take practical steps to address known issues and concerns in AI, giving businesses the clarity they urgently need,” she says. “Ideally, I’d like to see movement towards putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework.”

But it’s uncertain whether the summit will indeed discuss the more practical elements of AI. Already it seems as if the gathering is designed to quell public fears around AI while convincing those developing AI products that the U.K. will not take too strong an approach in regulating the technology, perhaps in contrast to near neighbors in the European Union, who have been open about their plans to ensure the technology is properly fenced in to ensure user safety.

Already, there are suggestions that the summit has been drastically downscaled in its ambitions, with others moving ahead of it, including the United States, where President Biden just announced a sweeping executive order on AI, and the United Nations, which announced its AI advisory board last week.

Ingrid Lunden in her October 31, 2023 article for TechCrunch is more blunt,

As we wrote yesterday, the U.K. is partly using this event — the first of its kind, as it has pointed out — to stake out a territory for itself on the AI map — both as a place to build AI businesses, but also as an authority in the overall field.

That, coupled with the fact that the topics and approach are focused on potential issues, makes the affair feel like one very grand photo opportunity and PR exercise, a way for the government to show itself off in the most positive way at the same time that it slides down in the polls and faces a disastrous, bad-look inquiry into how it handled the COVID-19 pandemic. On the other hand, the U.K. does have the credentials for a seat at the table, so if the government is playing a hand here, it’s able to do it because its cards are strong.

The subsequent guest list, predictably, leans more toward organizations and attendees from the U.K. It’s also almost as revealing to see who is not participating.

Lunden’s October 30, 2023 article “Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK” includes a little ‘inside’ information,

That high-level aspiration is also reflected in who is taking part: top-level government officials, captains of industry, and notable thinkers in the space are among those expected to attend. (Latest late entry: Elon Musk; latest no’s reportedly include President Biden, Justin Trudeau and Olaf Scholz.) [Scholz’s no was mentioned in my October 18, 2023 posting]

It sounds exclusive, and it is: “Golden tickets” (as Azeem Azhar, a London-based tech founder and writer, describes them) to the Summit are in scarce supply. Conversations will be small and mostly closed. So because nature abhors a vacuum, a whole raft of other events and news developments have sprung up around the Summit, looping in the many other issues and stakeholders at play. These have included talks at the Royal Society (the U.K.’s national academy of sciences); a big “AI Fringe” conference that’s being held across multiple cities all week; many announcements of task forces; and more.

Earlier today, a group of 100 trade unions and rights campaigners sent a letter to the prime minister saying that the government is “squeezing out” their voices in the conversation by not having them be a part of the Bletchley Park event. (They may not have gotten their golden tickets, but they were definitely canny how they objected: The group publicized its letter by sharing it with no less than the Financial Times, the most elite of economic publications in the country.)

And normal people are not the only ones who have been snubbed. “None of the people I know have been invited,” Carissa Véliz, a tutor in philosophy at the University of Oxford, said during one of the AI Fringe events today [October 30, 2023].

More broadly, the summit has become an anchor and only one part of the bigger conversation going on right now. Last week, U.K. prime minister Rishi Sunak outlined an intention to launch a new AI safety institute and a research network in the U.K. to put more time and thought into AI implications; a group of prominent academics, led by Yoshua Bengio [University of Montreal, Canada] and Geoffrey Hinton [University of Toronto, Canada], published a paper called “Managing AI Risks in an Era of Rapid Progress” to put their collective oar into the waters; and the UN announced its own task force to explore the implications of AI. Today [October 30, 2023], U.S. president Joe Biden issued the country’s own executive order to set standards for AI security and safety.

There are a couple more articles* from the BBC (British Broadcasting Corporation) covering the start of the summit: a November 1, 2023 article by Zoe Kleinman & Tom Gerken, “King Charles: Tackle AI risks with urgency and unity,” and another November 1, 2023 article, this time by Tom Gerken & Imran Rahman-Jones, “Rishi Sunak: AI firms cannot ‘mark their own homework’.”

Politico offers more US-centric coverage of the event with a November 1, 2023 article by Mark Scott, Tom Bristow and Gian Volpicelli, “US and China join global leaders to lay out need for AI rulemaking,” a November 1, 2023 article by Vincent Manancourt and Eugene Daniels, “Kamala Harris seizes agenda as Rishi Sunak’s AI summit kicks off,” and a November 1, 2023 article by Vincent Manancourt, Eugene Daniels and Brendan Bordelon, “‘Existential to who[m]?’ US VP Kamala Harris urges focus on near-term AI risks.”

I want to draw special attention to the second Politico article,

Kamala just showed Rishi who’s boss.

As British Prime Minister Rishi Sunak’s showpiece artificial intelligence event kicked off in Bletchley Park on Wednesday, 50 miles south in the futuristic environs of the American Embassy in London, U.S. Vice President Kamala Harris laid out her vision for how the world should govern artificial intelligence.

It was a raw show of U.S. power on the emerging technology.

Did she or was this an aggressive interpretation of events?

*’article’ changed to ‘articles’ on January 17, 2024.

AI safety talks at Bletchley Park in November 2023

There’s a very good article about the upcoming AI (artificial intelligence) safety talks on the British Broadcasting Corporation (BBC) news website (plus some juicy, perhaps even gossipy, news about who may not be attending the event), but first, here’s the August 24, 2023 UK government press release making the announcement,

Iconic Bletchley Park to host UK AI Safety Summit in early November [2023]

Major global event to take place on the 1st and 2nd of November [2023].

– UK to host world first summit on artificial intelligence safety in November

– Talks will explore and build consensus on rapid, international action to advance safety at the frontier of AI technology

– Bletchley Park, one of the birthplaces of computer science, to host the summit

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Preparations for the summit are already in full flow, with Matt Clifford and Jonathan Black recently appointed as the Prime Minister’s Representatives. Together they’ll spearhead talks and negotiations, as they rally leading AI nations and experts over the next three months to ensure the summit provides a platform for countries to work together on further developing a shared approach to agree the safety measures needed to mitigate the risks of AI.

Prime Minister Rishi Sunak said:

“The UK has long been home to the transformative technologies of the future, so there is no better place to host the first ever global AI safety summit than at Bletchley Park this November.

To fully embrace the extraordinary opportunities of artificial intelligence, we must grip and tackle the risks to ensure it develops safely in the years ahead.

With the combined strength of our international partners, thriving AI industry and expert academic community, we can secure the rapid international action we need for the safe and responsible development of AI around the world.”

Technology Secretary Michelle Donelan said:

“International collaboration is the cornerstone of our approach to AI regulation, and we want the summit to result in leading nations and experts agreeing on a shared approach to its safe use.

The UK is consistently recognised as a world leader in AI and we are well placed to lead these discussions. The location of Bletchley Park as the backdrop will reaffirm our historic leadership in overseeing the development of new technologies.

AI is already improving lives from new innovations in healthcare to supporting efforts to tackle climate change, and November’s summit will make sure we can all realise the technology’s huge benefits safely and securely for decades to come.”

The summit will also build on ongoing work at international forums including the OECD, Global Partnership on AI, Council of Europe, and the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process.

The UK boasts strong credentials as a world leader in AI. The technology employs over 50,000 people, directly supports one of the Prime Minister’s five priorities by contributing £3.7 billion to the economy, and is the birthplace of leading AI companies such as Google DeepMind. It has also invested more on AI safety research than any other nation, backing the creation of the Foundation Model Taskforce with an initial £100 million.

Foreign Secretary James Cleverly said:

“No country will be untouched by AI, and no country alone will solve the challenges posed by this technology. In our interconnected world, we must have an international approach.

The origins of modern AI can be traced back to Bletchley Park. Now, it will also be home to the global effort to shape the responsible use of AI.”

Bletchley Park’s role in hosting the summit reflects the UK’s proud tradition of being at the frontier of new technology advancements. Since Alan Turing’s celebrated work some eight decades ago, computing and computer science have become fundamental pillars of life both in the UK and across the globe.

Iain Standen, CEO of the Bletchley Park Trust, said:

“Bletchley Park Trust is immensely privileged to have been chosen as the venue for the first major international summit on AI safety this November, and we look forward to welcoming the world to our historic site.

It is fitting that the very spot where leading minds harnessed emerging technologies to influence the successful outcome of World War 2 will, once again, be the crucible for international co-ordinated action.

We are incredibly excited to be providing the stage for discussions on global safety standards, which will help everyone manage and monitor the risks of artificial intelligence.”

The roots of AI can be traced back to the leading minds who worked at Bletchley during World War 2, with codebreakers Jack Good and Donald Michie among those who went on to write extensive works on the technology. In November [2023], it will once again take centre stage as the international community comes together to agree on important guardrails which ensure the opportunities of AI can be realised, and its risks safely managed.

The announcement follows the UK government allocating £13 million to revolutionise healthcare research through AI, unveiled last week. The funding supports a raft of new projects including transformations to brain tumour surgeries, new approaches to treating chronic nerve pain, and a system to predict a patient’s risk of developing future health problems based on existing conditions.

Tom Gerken’s August 24, 2023 BBC news article (an analysis by Zoe Kleinman follows as part of the article) fills in a few blanks, Note: Links have been removed,

World leaders will meet with AI companies and experts on 1 and 2 November for the discussions.

The global talks aim to build an international consensus on the future of AI.

The summit will take place at Bletchley Park, where Alan Turing, one of the pioneers of modern computing, worked during World War Two.

It is unknown which world leaders will be invited to the event, with a particular question mark over whether the Chinese government or tech giant Baidu will be in attendance.

The BBC has approached the government for comment.

The summit will address how the technology can be safely developed through “internationally co-ordinated action” but there has been no confirmation of more detailed topics.

It comes after US tech firm Palantir rejected calls to pause the development of AI in June, with its boss Alex Karp saying it was only those with “no products” who wanted a pause.

And in July [2023], children’s charity the Internet Watch Foundation called on Mr Sunak to tackle AI-generated child sexual abuse imagery, which it says is on the rise.

Kleinman’s analysis includes this, Note: A link has been removed,

Will China be represented? Currently there is a distinct east/west divide in the AI world but several experts argue this is a tech that transcends geopolitics. Some say a UN-style regulator would be a better alternative to individual territories coming up with their own rules.

If the government can get enough of the right people around the table in early November [2023], this is perhaps a good subject for debate.

Three US AI giants – OpenAI, Anthropic and Palantir – have all committed to opening London headquarters.

But there are others going in the opposite direction – British DeepMind co-founder Mustafa Suleyman chose to locate his new AI company InflectionAI in California. He told the BBC the UK needed to cultivate a more risk-taking culture in order to truly become an AI superpower.

Many of those who worked at Bletchley Park decoding messages during WW2 went on to write and speak about AI in later years, including codebreakers Irving John “Jack” Good and Donald Michie.

Soon after the War, [Alan] Turing proposed the imitation game – later dubbed the “Turing test” – which seeks to identify whether a machine can behave in a way indistinguishable from a human.

There is a Bletchley Park website, which sells tickets for tours.

Insight into political jockeying (i.e., some juicy news bits)

This has recently been reported by the BBC, in an October 17 (?), 2023 news article by Jessica Parker & Zoe Kleinman on BBC news online,

German Chancellor Olaf Scholz may turn down his invitation to a major UK summit on artificial intelligence, the BBC understands.

While no guest list has been published of an expected 100 participants, some within the sector say it’s unclear if the event will attract top leaders.

A government source insisted the summit is garnering “a lot of attention” at home and overseas.

The two-day meeting is due to bring together leading politicians as well as independent experts and senior execs from the tech giants, who are mainly US based.

The first day will bring together tech companies and academics for a discussion chaired by the Secretary of State for Science, Innovation and Technology, Michelle Donelan.

The second day is set to see a “small group” of people, including international government figures, in meetings run by PM Rishi Sunak.

Though no final decision has been made, it is now seen as unlikely that the German Chancellor will attend.

That could spark concerns of a “domino effect” with other world leaders, such as the French President Emmanuel Macron, also unconfirmed.

Government sources say there are heads of state who have signalled a clear intention to turn up, and the BBC understands that high-level representatives from many US-based tech giants are going.

The foreign secretary confirmed in September [2023] that a Chinese representative has been invited, despite controversy.

Some MPs within the UK’s ruling Conservative Party believe China should be cut out of the conference after a series of security rows.

It is not known whether there has been a response to the invitation.

China is home to a huge AI sector and has already created its own set of rules to govern responsible use of the tech within the country.

The US, a major player in the sector and the world’s largest economy, will be represented by Vice-President Kamala Harris.

Britain is hoping to position itself as a key broker as the world wrestles with the potential pitfalls and risks of AI.

However, Berlin is thought to want to avoid any messy overlap with G7 efforts, after the group of leading democratic countries agreed to create an international code of conduct.

Germany is also the biggest economy in the EU – which is itself aiming to finalise its own landmark AI Act by the end of this year.

It includes grading AI tools depending on how significant they are, so for example an email filter would be less tightly regulated than a medical diagnosis system.

The European Commission President Ursula von der Leyen is expected at next month’s summit, while it is possible Berlin could send a senior government figure such as its vice chancellor, Robert Habeck.

A source from the Department for Science, Innovation and Technology said: “This is the first time an international summit has focused on frontier AI risks and it is garnering a lot of attention at home and overseas.

“It is usual not to confirm senior attendance at major international events until nearer the time, for security reasons.”

Fascinating, eh?

Belated posting for Ada Lovelace Day (it was on Tuesday, Oct. 13, 2020)

For anyone who doesn’t know who Ada Lovelace was (from my Oct. 13, 2015 posting, ‘Ada Lovelace “… manipulative, aggressive, a drug addict …” and a genius but was she likable?‘)

Ada Lovelace was the daughter of the poet Lord Byron and mathematician Annabella Milbanke.

Her [Ada Lovelace’s] foresight was so extraordinary that it would take another hundred years and Alan Turing to recognise the significance of her work. But it was an achievement that was probably as much a product of her artistic heritage as her scientific training.

You can take the title of that October 13, 2015 post as a hint that I was using ‘Ada Lovelace “… manipulative, aggressive, a drug addict …” and a genius but was she likable?‘ to comment on the requirement that women be likable in a way that men never have to consider.

Hard to believe that 2015 was the last time I stumbled across information about the day. ’nuff said. This year I was lucky enough to see this Oct. 13, 2020 article by Zoe Kleinman for British Broadcasting Corporation (BBC) news online,

From caravans [campers] to kitchen tables, and podcast production to pregnancy, I’ve been speaking to many women in and around the technology sector about how they have adapted to the challenges of working during the coronavirus pandemic.

Research suggests women across the world have shouldered more family and household responsibilities than men as the coronavirus pandemic continues, alongside their working lives.

And they share their inspirations, frustrations but also their optimism.

“I have a new business and a new life,” says Clare Muscutt, who lost work, her relationship and her flatmate as lockdown hit.

This Tuesday [Oct. 13, 2020] is Ada Lovelace Day – an annual celebration of women working in the male-dominated science, technology, engineering and maths (Stem) sectors.

And, this year, it has a very different vibe.

Claire Broadley, technical writer, Leeds

Before lockdown, my husband and I ran our own company, producing user guides and written content for websites.

Business income dropped by about two-thirds during lockdown.

We weren’t eligible for any government grants. And because we still had a small amount of work, we couldn’t furlough ourselves.

It felt like we were slowly marching our family towards a cliff edge.

In May [2020], to my astonishment and relief, I was offered my dream job, remote writing about the internet and technology.

Working from home with the children has been the most difficult thing we’ve ever done.

My son is seven. He is very scared.

Sometimes, we can’t spend the time with him that we would like to. And most screen-time rules have gone completely out of the window.

The real issue for us now is testing.

My young daughter caught Covid in July [2020]. And she recently had a temperature again. But it took six days to get a test result, so my son was off school again. And my husband was working until midnight to fit everything in.

There are many other stories in Kleinman’s Oct. 13, 2020 article.

Nancy Doyle’s October 13, 2020 article for Forbes tends to an expected narrative about women in science, technology, engineering, and mathematics (STEM),

“21st century science has a problem. It is short of scientists. Technological innovations mean that the world needs many more specialists in the STEM (Science, Technology, Engineering and Maths) subjects than it is currently training. And this problem is compounded by the fact that women, despite clear evidence of aptitude and ability for science subjects, are not choosing to study STEM subjects, are not being recruited into the STEM workforce, are not staying in the STEM workplace.”

Why Don’t Women Do Science?

Professor Rippon [Gina Rippon, Professor of Neuroscience at Aston University in the UK] walked me through the main “neurotrash” arguments about the female brain and its feebleness.

“There is a long and fairly well-rehearsed ‘blame the brain’ story, with essentialist or biology-is-destiny type arguments historically asserting that women’s brains were basically inferior (thanks, Gustave le Bon and Charles Darwin!) or too vulnerable to withstand the rigours of higher education. A newer spin on this is that female brains do not endow their owners with the appropriate cognitive skills for science. Specifically, they are poor at the kind of spatial thinking that is core to success in science or, more generally, are not ‘hard-wired’ for the necessary understanding of systems fundamental to the theory and practice of science.

The former ‘spatial deficit’ description has been widely touted as one of the most robust of sex differences, quite possibly present from birth. But updated and more nuanced research has not been able to uphold this claim; spatial ability appears to be more a function of spatial experience (think toys, videogames, hobbies, sports, occupations) than sex. And it is very clearly trainable (in both sexes), resulting in clearly measurable brain changes as well as improvements in skill.”

You can find out more about women in STEM, Ada Lovelace, and events (year round) to celebrate her at the Ada Lovelace Day website.

Plus, I found this tweet from Science Friday (a US National Public Radio podcast) about a new series of films about women in science,

Science Friday @scifri

Celebrate #WomenInScience with a brand new season of #BreakthroughFilms, dropping today [October 14, 2020]! Discover the innovative research & deeply personal stories of six women working at the forefront of their STEM fields.

Get inspired at BreakthroughFilms.org

Here’s the Breakthrough Films trailer,

Enjoy!

L’Oréal introduces wearable cosmetic electronic patch (my UV patch)

You don’t (well, I don’t) expect a cosmetics company such as L’Oréal to introduce products at the Consumer Electronics Show (CES), held annually in Las Vegas (Nevada, US); this year’s show ran Jan. 6 – 9, 2016.

A Jan. 6, 2016 article by Zoe Kleinman for BBC (British Broadcasting Corporation) news online explains,

Beauty giant L’Oreal has unveiled a smart skin patch that can track the skin’s exposure to harmful UV rays at the technology show CES in Las Vegas.

The product will be launched in 16 countries including the UK this summer, and will be available for free [emphasis mine].

It contains a photosensitive blue dye, which changes colour when exposed to ultraviolet light.

But the wearer must take a photo of it and then upload it to an app to see the results.

It’s a free app, eh? A cynic might suggest that the company will be getting free data in return.

A Jan. 6, 2016 L’Oréal press release, also on PR Newswire, provides more details (Note: Links have been removed),

Today [Jan. 6, 2016] at the Consumer Electronics Show, L’Oréal unveiled My UV Patch, the first-ever stretchable skin sensor designed to monitor UV exposure and help consumers educate themselves about sun protection. The new technology arrives at a time when sun exposure has become a major health issue, with 90% of nonmelanoma skin cancers being associated with exposure to ultraviolet (UV) radiation from the sun* in addition to contributing to skin pigmentation and photoaging.

To address these growing concerns, L’Oréal Group’s leading dermatological skincare brand, La Roche-Posay, is introducing a first-of-its kind stretchable electronic, My UV Patch. The patch is a transparent adhesive that, unlike the rigid wearables currently on the market, stretches and adheres directly to any area of skin that consumers want to monitor. Measuring approximately one square inch in area and 50 micrometers thick – half the thickness of an average strand of hair – the patch contains photosensitive dyes that factor in the baseline skin tone and change colors when exposed to UV rays to indicate varying levels of sun exposure.

Consumers will be able to take a photo of the patch and upload it to the La Roche-Posay My UV Patch mobile app, which analyzes the varying photosensitive dye squares to determine the amount of UV exposure the wearer has received. The My UV Patch mobile app will be available on both iOS and Android, incorporating Near Field Communications (NFC)-enabled technology into the patch-scanning process for Android. My UV Patch is expected to be made available to consumers later this year.

“Connected technologies have the potential to completely disrupt how we monitor the skin’s exposure to various external factors, including UV,” says Guive Balooch, Global Vice President of L’Oréal’s Technology Incubator. “Previous technologies could only tell users the amount of potential sun exposure they were receiving per hour while wearing a rigid, non-stretchable device. The key was to design a sensor that was thin, comfortable and virtually weightless so people would actually want to wear it. We’re excited to be the first beauty company entering the stretchable electronics field and to explore the many potential applications for this technology within our industry and beyond.”

My UV Patch was developed by L’Oréal’s U.S.-based Technology Incubator, a business division dedicated entirely to technological innovation, alongside MC10, Inc., a leading stretchable electronics company using cutting-edge innovation to create the most intelligent, stretchable systems for biometric healthcare analytics. L’Oréal also worked with PCH who design engineered the sensor. The stretchable, peel-and-stick wearable unites L’Oréal Group’s extensive scientific research on the skin and expertise with UV protection with MC10’s strong technological capabilities in physiological sensing and pattern recognition algorithms to measure skin changes over time, and PCH’s 20-year experience in product development, manufacturing and supply chain.

“With My UV Patch, L’Oréal is taking the lead in developing the next generation of smart skincare technology powered by MC10’s unique, stretchable electronics platform, that truly addresses a consumer need,” said Scott Pomerantz, CEO of MC10. “This partnership with L’Oréal marks an exciting new milestone for MC10 and underscores the intersection of tech and beauty and the boundless potential of connected devices within the beauty market.”

*Source: Skin Cancer Foundation 2015

“Together with La Roche-Posay dermatologists like myself, we share a mission to help increase sun safe behavior,” added Alysa Herman, MD.  “La Roche-Posay recently commissioned a global study in 23 countries, which surveyed 19,000 women and men and found a huge gap in consumer behavior: even though 92% were aware that unprotected sun exposure can cause health problems, only 26% of Americans protect themselves all year round, whatever the season. With the new My UV Patch, for the first time, we are leveraging technology to help incite a true behavioral change through real-time knowledge. ”

About L’Oréal

L’Oréal has devoted itself to beauty for over 105 years. With its unique international portfolio of 32 diverse and complementary brands, the Group generated sales amounting to 22.5 billion euros in 2014 and employs 78,600 people worldwide. As the world’s leading beauty company, L’Oréal is present across all distribution networks: mass market, department stores, pharmacies and drugstores, hair salons, travel retail and branded retail.

Research and innovation, and a dedicated research team of 3,700 people, are at the core of L’Oréal’s strategy, working to meet beauty aspirations all over the world and attract one billion new consumers in the years to come. L’Oréal’s new sustainability commitment for 2020 “Sharing Beauty With All” sets out ambitious sustainable development objectives across the Group’s value chain. www.loreal.com

About LA ROCHE-POSAY and ANTHELIOS

Recommended by more than 25,000 dermatologists worldwide, La Roche-Posay offers a unique range of daily skincare developed with dermatologists to meet their standards in efficacy, tolerance and elegant textures for increased compliance. The products, which are developed using a strict formulation charter, include a minimal number of ingredients to reduce side effects and reactivity and are formulated with effective ingredients at optimal concentrations for increased efficacy. Additionally, La Roche-Posay products undergo stringent clinical testing to guarantee efficacy and safety, even on sensitive skin.

About MC10

MC10’s mission is to improve human health through digital healthcare solutions. The company combines its proprietary ultra-thin, stretchable body-worn sensors with advanced analytics to unlock health insights from physiological data. MC10 partners with healthcare organizations and researchers to advance medical knowledge and create monitoring and diagnostic solutions for patients and physicians. Backed by a strong syndicate of financial and strategic investors, MC10 has received widespread recognition for its innovative technology, including being named a 2014 CES Innovation in Design Honoree. MC10 is headquartered in Lexington, MA.  Visit MC10 online at www.mc10inc.com.

About PCH

PCH designs custom product solutions for startups and Fortune 500 companies. Whether design engineering and development, manufacturing and fulfilment, distribution or retail, PCH takes on the toughest challenges. If it can be imagined, it can be made. At PCH, we make. www.pchintl.com. Twitter: @PCH_Intl

Ryan O’Hare’s Jan. 6, 2016 article for the UK’s DailyMailOnline provides some additional technology details and offers images of the proposed patch, not reproduced here (Note: A link has been removed),

The patch and free app, which will be launched in the summer, have been welcomed by experts.

Dr Christopher Rowland Payne, consultant dermatologist to The London Clinic, said: ‘This is an exciting device that will motivate people in a positive way to take control of their sun exposure and will encourage them to know when it is time to leave the sun or to reapply their sunscreen.

‘It is an ingenious way of giving people the information they need. I hope it will also get people talking to each other about safe sun exposure.’

The technology used in the UV patches is based on ‘biostamps’ designed by tech firm MC10.

They were originally designed to help medical teams measure the health of their patients either remotely, or without the need for large expensive machinery.

Motorola were exploring the patches as an alternative to using traditional passwords for security and access to devices.
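None of the coverage spells out how the app’s analysis works, but the basic idea described in the press release, photographing the patch and reading the colour of each photosensitive dye square, is easy enough to sketch. Here’s a minimal illustrative example in Python with OpenCV; the square layout, the use of the saturation channel, and the calibration thresholds are all invented for illustration, and the real app presumably also corrects for baseline skin tone and lighting.

```python
# Illustrative sketch only, not L'Oreal's app. Assumes a photo of the patch
# with dye squares at known pixel positions and a made-up calibration table.
import cv2
import numpy as np

# Hypothetical layout: (x, y, side length) of each dye square in the photo, in pixels.
DYE_SQUARES = [(40, 40, 30), (120, 40, 30), (40, 120, 30), (120, 120, 30)]

def dose_from_saturation(mean_sat: float) -> str:
    """Map a square's mean colour saturation to a rough UV-dose label (invented thresholds)."""
    if mean_sat < 60:
        return "low"
    if mean_sat < 140:
        return "moderate"
    return "high"

def estimate_exposure(photo_path: str) -> list:
    img = cv2.imread(photo_path)
    if img is None:
        raise FileNotFoundError(photo_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    readings = []
    for x, y, s in DYE_SQUARES:
        square = hsv[y:y + s, x:x + s]
        mean_sat = float(np.mean(square[:, :, 1]))  # saturation channel only
        readings.append(dose_from_saturation(mean_sat))
    return readings

if __name__ == "__main__":
    print(estimate_exposure("patch_photo.jpg"))
```

A real version would also need to locate the patch in the photo automatically and correct for camera white balance, which is a harder problem than the colour read-out itself.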

Getting back to this ‘free app’ business, the data gathered could be used to help the company create future skincare products. If they are planning to harvest your data, there’s nothing inherently wrong with the practice but the company isn’t being as straightforward as it could be. In any event, you may want to take a good look at the user agreement and decide for yourself.

Finally, I think it’s time to acknowledge medical writer Dr. Susan Baxter (not for the first time and not the last either), as without her I likely wouldn’t have thought past my general cynicism about data harvesting to a reason, additional to any humanitarian motivations L’Oréal might have, for offering a free mobile app. She doesn’t post on her blog that frequently but it’s always worth taking a look (http://www.susanbaxter.ca/blog-page/), and I recommend her July 30, 2014 post titled ‘Civil Scientific Discourse RIP’, which focuses on vaccination and anti-vaccination positions. Do not expect a comfortable read.

Drawing pictures with your eyes at FutureEverything digital celebration

The title is meant literally, i.e., drawing pictures using your eyes only. What makes the feat even more extraordinary is that the designers hacked a PlayStation 3 webcam to create the Eyewriter and (from the BBC article by Zoe Kleinman) “You could put it together at home without a soldering iron for about £30.”

The project won first prize at the FutureEverything festival in Manchester, England. From the BBC article,

Artists, musicians, engineers and hackers from around the world recently descended on Manchester for a three day celebration of digital culture.

Now in its 15th year, FutureEverything (previously called Futuresonic) has quietly established itself as an annual gathering for the technology avant garde.

With a £10,000 prize up for grabs for the best innovation, the stakes were high for the exhibitors at a local pop-up art gallery called The Hive.

The first prize went to Eyewriter, a team who developed a pair of glasses designed to track and record eye movement, enabling people to draw pictures using their eyes.

It was designed for Californian graffiti artist Tony Quan, who has ALS, a form of motor neurone disease. His eyes are the only part of his body that he can move.

Kleinman’s article includes details about other projects shown at the festival, as well as a video of the artist, Tony Quan, putting the Eyewriter to the test and an interview with the festival’s founder and organizer.
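For anyone curious about the principle at work, finding the pupil in each webcam frame and turning its movement into strokes, here’s a rough sketch in Python with OpenCV. It is not the Eyewriter’s actual software; the fixed threshold, the assumption of a close-up, evenly lit view of one eye, and the absence of any calibration or blink handling are all simplifications for illustration.

```python
# Illustrative sketch only, not the Eyewriter's software. Assumes a webcam
# pointed closely at one eye under even lighting; real systems add IR
# illumination, calibration, and blink/dwell handling.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # any webcam; the Eyewriter used a hacked PlayStation 3 camera
canvas = None
prev_point = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros_like(frame)  # blank drawing layer, same size as the video

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    # In a close-up of the eye, the pupil is (roughly) the darkest region.
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        pupil = max(contours, key=cv2.contourArea)  # largest dark blob
        m = cv2.moments(pupil)
        if m["m00"] > 0:
            point = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
            if prev_point is not None:
                cv2.line(canvas, prev_point, point, (0, 255, 0), 2)  # extend the drawing
            prev_point = point

    cv2.imshow("eye drawing", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Even this toy version shows why the idea is so appealing: the hardware is just a cheap camera, and everything else is software.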

Something like the Eyewriter points to exciting possibilities for leveling the playground so everyone (no matter what physical limitations they may have) can participate. It also points to the benefits of hacking.