
How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who has seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers, or how natural language processing will allow computerized systems to grasp not simply the literal definitions of words, but the connotations and intent behind them.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first report, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Artificial intelligence used for wildlife protection

PAWS (Protection Assistant for Wildlife Security), an artificial intelligence (AI) program, has been tested in Uganda and Malaysia, according to an April 22, 2016 US National Science Foundation (NSF) news release (also on EurekAlert but dated April 21, 2016) (Note: Links have been removed),

A century ago, more than 60,000 tigers roamed the wild. Today, the worldwide estimate has dwindled to around 3,200. Poaching is one of the main drivers of this precipitous drop. Whether killed for skins, medicine or trophy hunting, humans have pushed tigers to near-extinction. The same applies to other large animal species like elephants and rhinoceros that play unique and crucial roles in the ecosystems where they live.

Human patrols serve as the most direct form of protection of endangered animals, especially in large national parks. However, protection agencies have limited resources for patrols.

With support from the National Science Foundation (NSF) and the Army Research Office, researchers are using artificial intelligence (AI) and game theory to combat poaching, illegal logging and other problems worldwide, in collaboration with researchers and conservationists in the U.S., Singapore, the Netherlands and Malaysia.

“In most parks, ranger patrols are poorly planned, reactive rather than pro-active, and habitual,” according to Fei Fang, a Ph.D. candidate in the computer science department at the University of Southern California (USC).

Fang is part of an NSF-funded team at USC led by Milind Tambe, professor of computer science and industrial and systems engineering and director of the Teamcore Research Group on Agents and Multiagent Systems.

Their research builds on the idea of “green security games” — the application of game theory to wildlife protection. Game theory uses mathematical and computer models of conflict and cooperation between rational decision-makers to predict the behavior of adversaries and plan optimal approaches for containment. The Coast Guard and Transportation Security Administration have used similar methods developed by Tambe and others to protect airports and waterways.
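To make the game-theoretic idea concrete, here is a minimal sketch of a two-site Stackelberg security game, the basic structure underlying green security games: the defender commits to randomized coverage, the poacher observes that coverage and attacks the most attractive site, and the defender searches for the commitment that maximizes her own payoff. The sites, payoff numbers and search granularity below are invented for illustration; this is not the team’s actual model.

```python
# Toy Stackelberg security game: one ranger patrol, two poaching sites.
# The defender commits to coverage probabilities; the attacker observes
# them and best-responds. Payoffs are illustrative, not from PAWS.

def attacker_payoff(site, coverage):
    # Attacker gains at an unpatrolled site, loses if caught.
    reward = {"site_A": 5.0, "site_B": 3.0}[site]
    penalty = -1.0
    p = coverage[site]
    return p * penalty + (1 - p) * reward

def defender_payoff(site, coverage):
    # Defender loses when a poacher succeeds at an uncovered site.
    loss = {"site_A": -5.0, "site_B": -3.0}[site]
    gain = 2.0
    p = coverage[site]
    return p * gain + (1 - p) * loss

def best_coverage(steps=100):
    # One patrol unit to split: coverage on A is x, on B is 1 - x.
    # Grid-search the defender's commitment; the attacker then
    # picks whichever site has the highest expected payoff.
    best = None
    for i in range(steps + 1):
        x = i / steps
        coverage = {"site_A": x, "site_B": 1 - x}
        target = max(coverage, key=lambda s: attacker_payoff(s, coverage))
        value = defender_payoff(target, coverage)
        if best is None or value > best[0]:
            best = (value, coverage, target)
    return best

value, coverage, attacked = best_coverage()
print(f"coverage={coverage} attacked={attacked} defender_value={value:.2f}")
```

In this toy instance the defender ends up covering the higher-value site more often, but not exclusively: committing to a randomized split keeps the attacker’s best response from being too damaging, which is the core intuition the patrol-planning work builds on.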

“This research is a step in demonstrating that AI can have a really significant positive impact on society and allow us to assist humanity in solving some of the major challenges we face,” Tambe said.

PAWS puts the claws in anti-poaching

The team presented papers describing how they use their methods to improve the success of human patrols around the world at the AAAI Conference on Artificial Intelligence in February [2016].

The researchers first created an AI-driven application called PAWS (Protection Assistant for Wildlife Security) in 2013 and tested the application in Uganda and Malaysia in 2014. Pilot implementations of PAWS revealed some limitations, but also led to significant improvements.

Here’s a video describing the issues and PAWS,

For those who prefer to read the details rather than listen, there’s more from the news release,

PAWS uses data on past patrols and evidence of poaching. As it receives more data, the system “learns” and improves its patrol planning. Already, the system has led to more observations of poacher activities per kilometer.

Its key technical advance lies in its ability to incorporate complex terrain information, including the topography of protected areas. That results in practical patrol routes that minimize elevation changes, saving time and energy. The system can also take into account the natural transit paths that have the most animal traffic – and thus the most poaching – creating a “street map” for patrols.
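To illustrate the terrain idea in a deliberately simplified form, the sketch below plans a route across a small elevation grid with Dijkstra’s algorithm, penalizing each step by its elevation change. The grid values and the cost formula are invented for this example; PAWS itself works with real topography and richer costs.

```python
import heapq

# Illustrative terrain-aware routing: cross a small elevation grid
# while minimizing total climb, the kind of cost a patrol planner
# would fold into practical routes. Grid and weights are invented.

ELEVATION = [
    [10, 12, 15, 20],
    [11, 14, 18, 22],
    [ 9, 10, 12, 16],
    [ 8,  9, 11, 13],
]

def least_climb_cost(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return cost
        if cost > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # Each step costs 1 plus the elevation change.
                step = 1 + abs(grid[nr][nc] - grid[r][c])
                if cost + step < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = cost + step
                    heapq.heappush(queue, (cost + step, (nr, nc)))
    return None

print(least_climb_cost(ELEVATION, (0, 0), (3, 3)))
```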

“We need to provide actual patrol routes that can be practically followed,” Fang said. “These routes need to go back to a base camp and the patrols can’t be too long. We list all possible patrol routes and then determine which is most effective.”

The application also randomizes patrols to avoid falling into predictable patterns.

“If the poachers observe that patrols go to some areas more often than others, then the poachers place their snares elsewhere,” Fang said.
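A toy version of those two ideas together – enumerating short round-trip routes from a base camp, scoring them, and sampling a different route each day so the pattern stays unpredictable – might look like the following. The waypoints, scores and length cap are all hypothetical.

```python
import itertools
import random

# Illustrative patrol randomization: enumerate short round-trip
# routes from a base camp, score them, and draw a different route
# each day in proportion to its score so poachers cannot predict
# the pattern. All names and numbers are invented for this sketch.

WAYPOINTS = ["ridge", "river", "salt_lick", "old_road"]
SCORE = {"ridge": 2.0, "river": 3.5, "salt_lick": 4.0, "old_road": 1.5}
MAX_STOPS = 2  # routes must be short enough to return to base camp

def feasible_routes():
    routes = []
    for k in range(1, MAX_STOPS + 1):
        for stops in itertools.permutations(WAYPOINTS, k):
            routes.append(("base",) + stops + ("base",))
    return routes

def sample_patrol(routes):
    # Weight each route by the total score of the sites it visits.
    weights = [sum(SCORE[s] for s in r if s != "base") for r in routes]
    return random.choices(routes, weights=weights, k=1)[0]

routes = feasible_routes()
for day in range(3):
    print(f"day {day}: {' -> '.join(sample_patrol(routes))}")
```

High-value sites get visited most often, but no route is ever certain on a given day, which is exactly the property Fang describes.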

Since 2015, two non-governmental organizations, Panthera and Rimbat, have used PAWS to protect forests in Malaysia. The research won the Innovative Applications of Artificial Intelligence award for deployed application, as one of the best AI applications with measurable benefits.

The team recently combined PAWS with a new tool called CAPTURE (Comprehensive Anti-Poaching Tool with Temporal and Observation Uncertainty Reasoning) that predicts the probability of attacks even more accurately.

In addition to helping patrols find poachers, the tools may assist them with intercepting trafficked wildlife products and other high-risk cargo, adding another layer to wildlife protection. The researchers are in conversations with wildlife authorities in Uganda to deploy the system later this year. They will present their findings at the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016) in May.

“There is an urgent need to protect the natural resources and wildlife on our beautiful planet, and we computer scientists can help in various ways,” Fang said. “Our work on PAWS addresses one facet of the problem, improving the efficiency of patrols to combat poaching.”

There is yet another potential use for PAWS: the prevention of illegal logging,

While Fang and her colleagues work to develop effective anti-poaching patrol planning systems, other members of the USC team are developing complementary methods to prevent illegal logging, a major economic and environmental problem for many developing countries.

The World Wildlife Fund estimates trade in illegally harvested timber to be worth between $30 billion and $100 billion annually. The practice also threatens ancient forests and critical habitats for wildlife.

Researchers at USC, the University of Texas at El Paso and Michigan State University recently partnered with the non-profit organization Alliance Vohoary Gasy to limit the illegal logging of rosewood and ebony trees in Madagascar, which has caused a loss of forest cover on the island nation.

Forest protection agencies also face limited budgets and must cover large areas, making sound investments in security resources critical.

The research team worked to determine the balance of security resources in which Madagascar should invest to maximize protection, and to figure out how to best deploy those resources.

Past work in game theory-based security typically involved specified teams — the security workers assigned to airport checkpoints, for example, or the air marshals deployed on flight tours. Finding optimal security solutions for those scenarios is difficult; a solution involving an open-ended team had not previously been feasible.

To solve this problem, the researchers developed a new method called SORT (Simultaneous Optimization of Resource Teams) that they have been experimentally validating using real data from Madagascar.

The research team created maps of the national parks, modeled the costs of all possible security resources using local salaries and budgets, and computed the best combination of resources given these conditions.

“We compared the value of using an optimal team determined by our algorithm versus a randomly chosen team and the algorithm did significantly better,” said Sara Mc Carthy, a Ph.D. student in computer science at USC.
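The underlying optimization can be pictured with a deliberately tiny example: exhaustively search every affordable mix of resources and compare the best one with a random affordable team, echoing the comparison Mc Carthy describes. The resource names, costs, protection values and budget below are invented; SORT itself handles far larger, real-world inputs.

```python
import itertools
import random

# Toy version of the team-composition idea behind SORT: pick the mix
# of security resources that maximizes protection under a budget, and
# compare it with a randomly chosen affordable team. Resource names,
# costs and protection values are invented for this sketch.

RESOURCES = {          # unit cost, protection value per unit
    "ranger":     (10, 7.0),
    "vehicle":    (25, 12.0),
    "watchtower": (40, 15.0),
}
BUDGET = 100
MAX_UNITS = 5

def affordable_teams():
    names = list(RESOURCES)
    for counts in itertools.product(range(MAX_UNITS + 1), repeat=len(names)):
        team = dict(zip(names, counts))
        cost = sum(RESOURCES[n][0] * c for n, c in team.items())
        if cost <= BUDGET:
            yield team

def protection(team):
    return sum(RESOURCES[n][1] * c for n, c in team.items())

teams = list(affordable_teams())
best = max(teams, key=protection)
rand = random.choice(teams)
print(f"optimal team {best} -> value {protection(best):.1f}")
print(f"random team  {rand} -> value {protection(rand):.1f}")
```

Brute force works here only because the example is tiny; the point of an algorithm like SORT is to make this kind of search tractable on realistic park data.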

The algorithm is simple and fast, and can be generalized to other national parks with different characteristics. The team is working to deploy it in Madagascar in association with the Alliance Vohoary Gasy.

“I am very proud of what my PhD students Fei Fang and Sara Mc Carthy have accomplished in this research on AI for wildlife security and forest protection,” said Tambe, the team lead. “Interdisciplinary collaboration with practitioners in the field was key in this research and allowed us to improve our research in artificial intelligence.”

Moreover, the project shows other computer science researchers the potential impact of applying their research to the world’s problems.

“This work is not only important because of the direct beneficial impact that it has on the environment, protecting wildlife and forests, but because of the way it can inspire others to dedicate their efforts to making the world a better place,” Mc Carthy said.

The curious can find out more about Panthera here and about Alliance Vohoary Gasy here (be prepared to use your French language skills). Unfortunately, I could not find more information about Rimbat.