Tag Archives: Yann LeCun

World Science Festival May 29 – June 3, 2018 in New York City

I haven’t featured the festival since 2014, having forgotten all about it, but I received an April 30, 2018 news release (via email) announcing the latest iteration,

ANNOUNCING WORLD SCIENCE FESTIVAL NEW YORK CITY

MAY 29 THROUGH JUNE 3, 2018

OVER 70 INSPIRING SCIENCE-THEMED EVENTS EXPLORE THE VERY EDGE OF
KNOWLEDGE

Over six extraordinary days in New York City, from May 29 through June
3, 2018, the world’s leading scientists will explore the very edge of
knowledge and share their insights with the public.  Festival goers of
all ages can experience vibrant discussions and debates, evocative
performances and films, world-changing research updates,
thought-provoking town hall gatherings and fireside chats, hands-on
experiments and interactive outdoor explorations.  It’s an action
adventure for your mind!

See the full list of programs here:
https://www.worldsciencefestival.com/festival/world-science-festival-2018/

This year will highlight some of the incredible achievements of Women in
Science, celebrating and exploring their impact on the history and
future of scientific discovery. Perennial favorites will also return in
full force, including WSF main stage Big Ideas programs, the Flame
Challenge, Cool Jobs, and FREE outdoor events.

The World Science Festival makes the esoteric understandable and the
familiar fascinating. It has drawn more than 2.5 million participants
since its launch in 2008, with millions more experiencing the programs
online.

THE 2018 WORLD SCIENCE FESTIVAL IS NOT TO BE MISSED, SO MARK YOUR
CALENDAR AND SAVE THE DATES!

Here are a few items from the 2018 Festival’s program page,

Thursday, May 31, 2018

6:00 pm – 9:00 pm

American Museum of Natural History

Host: Faith Salie

How deep is the ocean? Why do whales sing? How far is 20,000 leagues—and what is a league anyway? Raise a glass and take a deep dive into the foamy waters of oceanic arcana under the blue whale in the Museum’s Hall of Ocean Life. Comedian and journalist Faith Salie will regale you with a pub-style night of trivia questions, physical challenges, and hilarity to celebrate the Museum’s newest temporary exhibition, Unseen Oceans. Don’t worry. When the going gets tough, we won’t let you drown. Teams of top scientists—and even a surprise guest or two—will be standing by to assist you. Program includes one free drink and private access to the special exhibition Unseen Oceans. Special exhibition access is available to ticket holders beginning one hour before the program, from 6–7pm.

Learn More

Buy Tickets

Thursday, May 31, 2018

8:00 pm – 9:30 pm

Gerald W. Lynch Theater at John Jay College

Participants: Alvaro Pascual-Leone, Nim Tottenham, Carla Shatz, and others

What if your brain at 77 were as plastic as it was at 7? What if you could learn Mandarin with the ease of a toddler or play Rachmaninoff without breaking a sweat? A growing understanding of neuroplasticity suggests these fantasies could one day become reality. Neuroplasticity may also be the key to solving diseases like Alzheimer’s, depression, and autism. This program will guide you through the intricate neural pathways inside our skulls, as leading neuroscientists discuss their most recent findings and both the tantalizing possibilities and pitfalls for our future cognitive selves.

The Big Ideas Series is supported in part by the John Templeton Foundation. 

Learn More

Buy Tickets

Friday, June 1, 2018

8:00 pm – 9:30 pm

NYU Skirball Center for the Performing Arts

Participants: Yann LeCun, Susan Schneider, Max Tegmark, and others

“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Elon Musk called A.I. “a fundamental risk to the existence of civilization.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, to wondrous results—where do we draw moral and computational lines? Leading specialists in A.I., neuroscience, and philosophy will tackle the very questions that may define the future of humanity.

The Big Ideas Series is supported in part by the John Templeton Foundation. 

Learn More

Buy Tickets

Friday, June 1, 2018

8:00 pm – 9:30 pm

Gerald W. Lynch Theater at John Jay College

Participants: Marcela Carena, Janet Conrad, Michael Doser, Hitoshi Murayama, Neil Turok

“If I had a world of my own,” said the Mad Hatter, “nothing would be what it is, because everything would be what it isn’t. And contrary wise, what is, it wouldn’t be.” Nonsensical as this may sound, it comes close to describing an interesting paradox: You exist. You shouldn’t. Stars and galaxies and planets exist. They shouldn’t. The nascent universe contained equal parts matter and antimatter that should have instantly obliterated each other, turning the Big Bang into the Big Fizzle. And yet, here we are: flesh, blood, stars, moons, sky. Why? Come join us as we dive deep down the rabbit hole of solving the mystery of the missing antimatter.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

10:00 am – 11:00 am

Museum of the City of New York

Participants: Kubi Ackerman

What makes a city a city? How do you build buildings, plan streets, and design parks with humans and their needs in mind? Join architect and Future Lab Project Director, Kubi Ackerman, on an exploration in which you’ll venture outside to examine New York City anew, seeing it through the eyes of a visionary museum architect, and then head to the Future City Lab’s awesome interactive space where you will design your own park. This is a student-only program for kids currently enrolled in the 4th grade – 8th grade. Parents/Guardians should drop off their children for this event.

Supported by the Bezos Family Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

11:00 am – 12:30 pm

NYU Global Center, Grand Hall

Kerouac called it “the only truth.” Shakespeare called it “the food of love.” Maya Angelou called it “my refuge.” And now scientists are finally discovering what these thinkers, musicians, or even any of us with a Spotify account and a set of headphones could have told you on instinct: music lights up multiple corners of the brain, strengthening our neural networks, firing up memory and emotion, and showing us what it means to be human. In fact, music is as essential to being human as language and may even predate it. Can music also repair broken networks, restore memory, and strengthen the brain? Join us as we speak with neuroscientists and other experts in the fields of music and the brain as we pluck the notes of these fascinating phenomena.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

3:00 pm – 4:00 pm

NYU Skirball Center for the Performing Arts

Moderator: “Science Bob” Pflugfelder

Participants: William Clark, Matt Lanier, Michael Meacham, Casie Parish Fisher, Mike Ressler

Most people think of scientists as people who work in funny-smelling labs filled with strange equipment. But there are lots of scientists whose jobs often take them out of the lab, into the world, and beyond. Come join some of the coolest of them in Cool Jobs. You’ll get to meet a forensic scientist, a herpetologist who loves venomous snakes, a NASA engineer who lands spacecraft on Mars, and inventors who are changing the future of sports.

Learn More

Buy Tickets

Saturday, June 2, 2018

4:00 pm – 5:30 pm

NYU Global Center, Grand Hall

“We can rebuild him. We have the technology,” began the opening sequence of the hugely popular 1970s TV show, “The Six Million Dollar Man.” Forty-five years later, how close are we, in reality, to that sci-fi fantasy? More thornily, now that artificial intelligence may soon pass human intelligence, and the merging of human with machine is potentially on the table, what will it then mean to “be human”? Join us for an important discussion with scientists, technologists and ethicists about the path toward superhumanism and the quest for immortality.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

Saturday, June 2, 2018

4:00 pm – 5:30 pm

Gerald W. Lynch Theater at John Jay College

Participants: Brett Frischmann, Tim Hwang, Aviv Ovadya, Meredith Whittaker

“Move fast and break things,” went the Silicon Valley rallying cry, and for a long time we cheered along. Born in dorm rooms and garages, implemented by iconoclasts in hoodies, Big Tech, in its infancy, spouted noble goals of bringing us closer. But now, in its adolescence, it threatens to tear us apart. Some worry about an “Infocalypse”: a dystopian disruption so deep and dire we will no longer trust anything we see, hear, or read. Is this pessimistic vision of the future real or hyperbole? Is it time for tech to slow down, grow up, and stop breaking things? Big names in Big Tech will offer big thoughts on this massive societal shift, its terrifying pitfalls, and practical solutions both for ourselves and for future generations.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Learn More

Buy Tickets

This looks like an exciting lineup, and there’s a lot more for you to see on the 2018 Festival’s program page. You may also want to take a look at the list of participants, which features some expected specialty speakers, an architect, a mathematician, a neuroscientist, and some unexpected names such as Kareem Abdul-Jabbar, whom I know as a basketball player and, currently, a contestant on Dancing with the Stars. It brings to mind that Walt Whitman quote, “I am large, I contain multitudes.” (from Whitman’s Song of Myself Wikipedia entry).

If you’re going, note that there are free events and that a few of the events are already sold out.

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is structural recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.
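
The “train on copious tagged examples” principle can be sketched with the simplest possible learner, a single perceptron fitted to hand-labelled points. This is a toy illustration of my own, with an invented two-feature dataset standing in for the tagged cat photos; it is not code from the researchers quoted here. Each time the prediction is wrong, the weights are nudged toward the correct answer:

```python
# Toy supervised-learning loop: a single perceptron fits hand-labelled examples.
# The data are invented stand-ins for "tagged images" (two numeric features, label 0 or 1).

examples = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.1, 0.3], 0), ([0.8, 0.9], 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

for _ in range(20):                       # sweep repeatedly over the tagged examples
    for x, label in examples:
        error = label - predict(x)        # 0 if correct, +1 or -1 if wrong
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in examples])  # now matches the labels: [0, 1, 0, 1]
```

After a few sweeps the predictions match the labels, and the same learned weights then classify similar points the learner has never seen, which is the “including those it hasn’t been fed” part of the description above.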

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a calculator, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.
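
To make that description concrete, here is a minimal sketch in plain Python (my own illustration rather than anything from the researchers quoted here; the weights, thresholds and inputs are arbitrary values chosen for the example): each neurone takes a weighted sum of its inputs and fires only if that sum exceeds its threshold, and the outputs of one layer become the input signal of the next.

```python
# Minimal sketch of the weighted-sum-and-threshold behaviour described above.
# All weights, thresholds and inputs are arbitrary illustrative values.

def neuron(inputs, weights, threshold):
    """Fire (return 1.0) only if the weighted sum of the inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0

def layer(inputs, weight_matrix, thresholds):
    """Every neurone in the layer sees the same inputs but has its own weights."""
    return [neuron(inputs, w, t) for w, t in zip(weight_matrix, thresholds)]

# Two layers: the outputs of the first become the inputs of the second.
pixels = [0.9, 0.1, 0.4]                 # stand-in for raw input (e.g. pixel colours)
hidden = layer(pixels,
               weight_matrix=[[0.5, -0.2, 0.8], [0.3, 0.9, -0.5]],
               thresholds=[0.2, 0.4])
output = layer(hidden,
               weight_matrix=[[1.0, -1.0]],
               thresholds=[0.5])
print(hidden, output)
```

In a trained network these weights and thresholds are not hand-picked; they are adjusted automatically from the tagged examples described earlier in the article.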

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.

Video games to the rescue

For decades, the frontier of computing held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with an unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
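
As a rough illustration of that looping idea, the sketch below is a deliberately tiny recurrence of my own, not an actual LSTM (a real LSTM adds learned input, forget and output gates that decide what to keep in or drop from the running state). It shows how a hidden state fed back at every step lets the end of a sequence still carry a trace of how it began:

```python
# Toy recurrence: the hidden state loops back at every step, so the state after
# reading "...oat" still depends on whether the word began with "b" or "fl".
# The weights are arbitrary; a trained recurrent network would learn them.

def step(char, hidden, w_in=0.7, w_rec=0.5):
    # Encode the character crudely as a number and mix it with the carried state.
    return w_in * (ord(char) / 100.0) + w_rec * hidden

def read(word):
    hidden = 0.0
    for ch in word:
        hidden = step(ch, hidden)      # processed data is fed back into the next step
    return hidden

print(read("boat"), read("float"))     # different beginnings leave different final states
```

Because the carried state still reflects the earliest characters, “boat” and “float” end in different final values, which is the kind of memory that lets an LSTM tell the two words apart.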

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.