Schedule
Saturday, June 15th

8:45 - 9:00 Welcoming and Poster set-up

9:00 - 9:05 Opening remarks

Yoshua Bengio

Mila

Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand the principles of learning that yield intelligence. He supervises a large group of graduate students and postdocs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an h-index over 120, and rising fast).

9:05 - 9:45 Keynote - Solving societal challenges with AI through partnerships

Wadhwani AI was inaugurated a little more than a year ago with the mission of bringing the power of AI to address societal challenges, especially among underserved communities throughout the world. We aim to address problems in all major domains, including health, agriculture, education, infrastructure, and financial inclusion. We are currently working on three solutions (two in health and one in agriculture) and are exploring more areas where we can apply AI for social good. The most important lesson that we have learned during our short stint is the importance of working in close partnership with other stakeholders and players in the social sectors, especially NGOs and government organizations. In this talk, I will use one case, namely that of developing an AI-based approach to Integrated Pest Management (IPM) in cotton farming, to describe how this partnership-based approach has evolved and been critical to our solution development and implementation.

Padmanabhan Anandan

Wadhwani Institute of Artificial Intelligence

Dr. P. Anandan is the CEO of the Wadhwani Institute of Artificial Intelligence. His prior experience includes Adobe Research Lab India (2016-2017), where he was VP for Research, and Microsoft Research (1997-2014), where he was a Distinguished Scientist and Managing Director. He was also the founding director of Microsoft Research India, which he ran from 2005 to 2014. Earlier, he was a researcher at Sarnoff Corporation (1991-1997) and an Assistant Professor of Computer Science at Yale University (1987-1991). His primary research area is computer vision, where he is well known for his fundamental and lasting contributions to the problem of visual motion analysis. He received his PhD in Computer Science from the University of Massachusetts, Amherst in 1987, a master's in Computer Science from the University of Nebraska, Lincoln in 1979, and his B.Tech in Electrical Engineering from IIT Madras, India in 1977. He is a distinguished alumnus of IIT Madras and UMass Amherst, and is in the Nebraska Hall of Computing. His hobbies include playing African drums, writing poems (in Tamil), and travel, which makes his work-related travel interesting.

9:45 - 9:50 AI Commons

AI Commons is a collective project whose goal is to make the benefits of AI available to all. Since AI research can benefit from the input of a large range of talents across the world, the project seeks to develop ways for developers and organizations to collaborate more easily and effectively. As a community operating in an environment of trust and problem-solving, AI Commons can empower researchers to tackle the world's important problems using all the possibilities of cutting-edge AI.

Yoshua Bengio

Mila

Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand the principles of learning that yield intelligence. He supervises a large group of graduate students and postdocs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an h-index over 120, and rising fast).

9:50 - 10:00 Problem proposal - Detecting Waterborne Debris with Sim2Real and Randomization

Marine debris pollution is one of the most ubiquitous and pressing environmental issues affecting our oceans today. Clean up efforts such as the Great Pacific Garbage Patch project have been implemented across the planet to combat this problem. However, resources to accomplish this goal are limited, and the afflicted area is vast. To this end, unmanned vehicles that are capable of automatically detecting and removing small-sized debris would be a great complementary approach to existing large-scale garbage collectors. Due to the complexity of fully functioning unmanned vehicles for both detecting and removing debris, in this project, we focus on the detection task as a first step. From the perspective of machine learning, there is an unfortunate lack of sufficient labeled data for training a specialized detector, e.g., a classifier that can distinguish debris from other objects like wild animals. Moreover, pre-trained detectors on other domains would be ineffective while creating such datasets manually would be very costly. Due to the recent progress of training deep models with synthetic data and domain randomization, we propose to train a debris detector based on a mixture of real and synthetic images.

Kris Sankaran

Mila

Kris is a postdoc at Mila working with Yoshua Bengio on problems related to humanitarian AI. He is generally interested in ways to broaden the scope of problems studied by the machine learning community and is curious about ways to bridge statistical and computational thinking.

10:00 - 10:10 Problem proposal - Conversational agents to address abusive online behaviors

Technologies to address cyberbullying are limited to detecting and hiding abusive messages. We propose to investigate the potential of conversational technologies for addressing abusers. We will outline directions for studying the effectiveness of dialog strategies (e.g., to educate or deter abusers, or keep them busy with chatbots rather than their victims) and for initiating new research on chatbot-mediated mitigation of online abuse.

Emma Beauxis-Aussalet

Amsterdam University of Applied Sciences

Emma Beauxis-Aussalet is a Senior Track Associate at the Digital Society School of Amsterdam University of Applied Sciences, where she investigates how data-driven technologies can be applied in the best interests of society. She holds a PhD on classification errors and biases from Utrecht University. Her interests include ethical and explainable AI, data literacy in the general public, and the synergy between human and artificial intelligence to tackle job automation.

10:10 - 10:20 Problem proposal - Computer Vision For Food Quality: The Case of Injera

The use of Teff as the exclusive crop for making Injera, Ethiopia's national staple, has changed over time. Driven by the ever-increasing price of Teff, producers have added other ingredients, some of which are good (maize and rice), while others are not. Hence, households opting for industrially produced Injera are troubled by the fact that they cannot tell exactly what their Injera contains. Thousands of local producers and local shopkeepers work together to make fresh Injera available to millions around the country. However, consumers are finding it more and more difficult to find safe Injera for purchase. This Injera is usually sold unpacked, unlabeled, and in an unsafe way through local shops. Consumers therefore face growing health risks, all the more so because it is impossible to evaluate the ingredients contained in the Injera they are buying. There are two kinds of risk: (a) local producers might try to reduce costs by using cheap ingredients, including risky additives, and (b) shops might sell expired Injera that has been rewarmed. We discuss here the growing food safety problem faced by millions of Injera consumers in Ethiopia, and the possibility of using AI to solve this problem.

Wondimagegnhue Tsegaye

Addis Ababa University

Wondimagegnehu is a master’s student in Information Science at Addis Ababa University. His master's thesis is on learning optimal representations of word structure for morphologically complex languages under constrained settings: limited training data and limited human supervision. He is interested in exploring research challenges in applying AI in social settings.

10:20 - 10:30 Problem proposal - AI for Mitigating Effects of Climate and Weather Changes in Agriculture

In recent years, floods, landslides, and droughts have become an annual occurrence in Sri Lanka. Despite the efforts of the government and other entities, these natural disasters remain challenging, mainly for the people who live in high-risk areas. It is also crucial to predict such disasters early on to facilitate the evacuation of people living in these areas. Furthermore, the Sri Lankan economy depends largely on agriculture, yet this sector remains untouched by recent advances in AI and other predictive analytics techniques. The proposed solution is an AI-based platform that generates insights from emerging data sources. It will be modular, extensible, and open source. Like any other real-world AI system, the end solution will consist of multiple data pipelines to extract data, analyze it, and present results through APIs. The presentation layer will be a public API that can be consumed through a portal such as that of the Disaster Management Centre of Sri Lanka.

Narmada Balasooriya

ConscientAI Labs

Narmada is a research engineer at ConscientAI Labs, based in Sri Lanka. She is also a visiting research student at the Memorial University of Newfoundland, Canada. Her research interests include climate change and its effects on human lifestyles, and deep learning for computer vision.

10:30 - 11:00 Break / Poster Session

11:00 - 11:40 Keynote - AI for Ecology and Conservation

AI can help solve big data and decision-making problems to understand and protect the environment. I’ll survey several projects in the area and discuss how to approach environmental problems using AI. The Dark Ecology project uses weather radar and machine learning to unravel mysteries of bird migration. A surprising probabilistic inference problem arises when analyzing animal survey data to monitor populations. Novel optimization algorithms can help reason about dams, hydropower, and the ecology of river networks.

Daniel Sheldon

University of Massachusetts Amherst & Mount Holyoke College

Daniel Sheldon is an Assistant Professor of Computer Science at the University of Massachusetts Amherst and Mount Holyoke College. His research investigates fundamental problems in machine learning and AI motivated by large-scale environmental data, dynamic ecological processes, and real-world network phenomena.

11:40 - 11:50 Contributed talk - Using AI for Economic Upliftment of Handicraft Industry

The handicraft industry is a strong pillar of the Indian economy, providing large-scale employment opportunities to artisans in rural and underprivileged communities. However, in this era of globalization, diverse modern designs have rendered traditional designs old and monotonous, causing an alarming decline in handicraft sales. In this talk, we will discuss our approach, leveraging techniques such as GANs, color transfer, and pattern generation, to generate contemporary designs for two popular Indian handicrafts - Ikat and Block Print. The resulting designs are evaluated to be significantly more likeable and marketable than the current designs used by artisans.

Sonam Damani

Microsoft

Sonam Damani is an Applied Scientist at Microsoft, India, where she has worked on several projects in the field of AI and deep learning, including Microsoft's human-like chatbot Ruuh, Cortana's personality, novel art generation using AI, and Bing search relevance, among others. In the past year, she has co-authored several publications in the field of conversational AI and AI creativity that were presented at NeurIPS, WWW, and CODS-COMAD.

11:50 - 12:00 Contributed talk - Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening

We present a deep CNN for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting whether there is a cancer in the breast, when tested on the screening population. We attribute the high accuracy of our model to a two-stage training procedure, which allows us to use a very high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and found our model to be as accurate as experienced radiologists when presented with the same data. Finally, we show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with the prediction of our neural network, is more accurate than either of the two separately.

Krzysztof Jerzy Geras

New York University

Krzysztof is an assistant professor at NYU School of Medicine and an affiliated faculty at NYU Center for Data Science. His main interests are in unsupervised learning with neural networks, model compression, transfer learning, evaluation of machine learning models and applications of these techniques to medical imaging. He previously did a postdoc at NYU with Kyunghyun Cho, a PhD at the University of Edinburgh with Charles Sutton and an MSc as a visiting student at the University of Edinburgh with Amos Storkey. His BSc is from the University of Warsaw.

Nan Wu

New York University

Nan is a PhD student at the NYU Center for Data Science. She is interested in data science with applications to healthcare and is currently working on medical image analysis. Before joining NYU, she graduated from the School for Gifted Young, University of Science and Technology of China, receiving a B.S. in Statistics and a B.A. in Business Administration.

12:00 - 14:00 Lunch - on your own

14:00 - 14:30 Keynote: AI for Whose Social Good?

Luke Stark will discuss two recent papers (Greene, Hoffmann & Stark 2019; Stark & Hoffmann 2019) that use discursive analysis to examine a) recent high-profile value statements endorsing ethical design for artificial intelligence and machine learning and b) professional ethics codes in computer science, statistics, and other fields. Guided by insights from Science and Technology Studies, values in design, and the sociology of business ethics, he will discuss the grounding assumptions and terms of debate that shape current conversations about ethical design in data science and AI. He will also advocate for an expanded view of expertise in understanding what ethical AI/ML/AI for Social Good should mean.

Luke Stark

Microsoft Research

Luke Stark is a Postdoctoral Researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research Montreal, and an Affiliate of the Berkman Klein Center for Internet & Society at Harvard University. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA in History from the University of Toronto. Trained as a media historian, his scholarship centers on the interconnected histories of artificial intelligence (AI) and behavioral science, and on the ways the social and ethical contexts of AI are changing how we work, communicate, and participate in civic life.

14:30 - 15:00 Keynote: How to Volunteer as an AI Researcher

Over the past six years, Will High has volunteered his expertise as a data scientist to various nonprofits and civic causes. He's contributed to work on homelessness, improving charter schools and optimizing water distribution. Will will talk about his experience doing pro-bono work with DataKind, a global nonprofit based in New York that connects leading social change organizations with data science talent to collaborate on cutting-edge analytics and advanced algorithms developed to maximize social impact. He'll comment on DataKind's mission, how to structure effective pro-bono engagements, and broader principles of the pro bono model applied to machine learning, analytics and engineering.

William High

Joymode

Will is a data science executive at Joymode in Los Angeles and works with DataKind as Data Ambassador, consultant and facilitator. Will was previously a Senior Data Scientist at Netflix. He holds a PhD in physics from Harvard.

15:00 - 15:30 Break / Poster Session

15:30 - 15:50 Poster Session

15:50 - 16:20 Keynote: Creating constructive change and avoiding unintended consequences from machine learning

This talk will give an overview of some of the known failure modes that are leading to unintended consequences in AI development, as well as research agendas and initiatives to mitigate them, including a number that are underway at the Partnership on AI (PAI). Important case studies include the use of algorithmic risk assessment tools in the US criminal justice system, and the side effects that are caused by using deliberate or unintended optimization processes to design high-stakes technical and bureaucratic systems. These are important in their own right, but they are also important contributors to conversations about social good applications of AI, which are themselves subject to significant potential for unintended consequences.

Peter Eckersley

Partnership on AI

Peter Eckersley is Director of Research at the Partnership on AI, a collaboration between the major technology companies, civil society, and academia to ensure that AI is designed and used to benefit humanity. He leads PAI's research on machine learning policy and ethics, including projects within PAI itself and projects in collaboration with the Partnership's extensive membership. Peter's AI research interests are broad, including measuring progress in the field, figuring out how to translate ethical and safety concerns into mathematical constraints, finding the right metaphors and ways of thinking about AI development, and setting sound policies around high-stakes applications such as self-driving vehicles, recidivism prediction, cybersecurity, and military applications of AI. Prior to joining PAI, Peter was Chief Computer Scientist for the Electronic Frontier Foundation. At EFF he led a team of technologists that launched numerous computer security and privacy projects, including Let's Encrypt and Certbot, Panopticlick, HTTPS Everywhere, the SSL Observatory, and Privacy Badger; they also worked on diverse Internet policy issues, including campaigning to preserve open wireless networks, fighting to keep modern computing platforms open, helping to start the campaign against the SOPA/PIPA Internet blacklist legislation, and running the first controlled tests to confirm that Comcast was using forged reset packets to interfere with P2P protocols. Peter holds a PhD in computer science and law from the University of Melbourne; his research focused on the practicality and desirability of using alternative compensation systems to legalize P2P file sharing and similar distribution tools while still paying authors and artists for their work.
He currently serves on the board of the Internet Security Research Group and the Advisory Council of the Open Technology Fund; he is an Affiliate of the Center for International Security and Cooperation at Stanford University and a Distinguished Technology Fellow at EFF.

16:20 - 16:30 Contributed talk - Learning Global Variations in Outdoor PM2.5 Concentrations with Satellite Images

The World Health Organization identifies outdoor fine particulate air pollution (PM2.5) as a leading risk factor for premature mortality globally. As such, understanding the global distribution of PM2.5 is an essential precursor towards implementing pollution mitigation strategies and modelling global public health. Here, we present a convolutional neural network based approach for estimating annual average outdoor PM2.5 concentrations using only satellite images. The resulting model achieves comparable performance to current state-of-the-art statistical models.

Kris Y Hong

McGill University

Kris is a research assistant and prospective PhD student in the Weichenthal Lab at McGill University, in Montreal, Canada. His interests lie in applying current statistical and machine learning techniques towards solving humanitarian and environmental challenges. Prior to joining McGill, he was a data analyst at the British Columbia Centre for Disease Control while receiving his B.Sc. in Statistics from the University of British Columbia. 

16:30 - 16:40 Contributed talk - Pareto Efficient Fairness for Skewed Subgroup Data

As awareness of the potential for learned models to amplify existing societal biases increases, the field of ML fairness has developed mitigation techniques. A prevalent method applies constraints, including equality of performance, with respect to subgroups defined over the intersection of sensitive attributes such as race and gender. Enforcing such constraints when the subgroup populations are considerably skewed with respect to a target can lead to unintentional degradation in performance, without benefiting any individual subgroup, counter to the United Nations Sustainable Development Goals of reducing inequalities and promoting growth. In order to avoid such performance degradation while ensuring equitable treatment of all groups, we propose Pareto-Efficient Fairness (PEF), which identifies the operating point on the Pareto curve of subgroup performances closest to the fairness hyperplane. Specifically, PEF finds a Pareto-optimal point which maximizes multiple subgroup accuracy measures. The algorithm scalarizes using the adaptive weighted metric norm by iteratively searching the Pareto region of all models enforcing the fairness constraint. PEF is backed by strong theoretical results on discoverability and provides domain practitioners finer control in navigating both convex and non-convex accuracy-fairness trade-offs. Empirically, we show that PEF increases the performance of all subgroups on skewed synthetic data and UCI datasets.

Ananth Balashankar

NYU, Google AI

Ananth Balashankar is a second-year Ph.D. student in Computer Science advised by Prof. Lakshminarayanan Subramanian at NYU's Courant Institute of Mathematical Sciences. He is currently interested in interpretable machine learning and the challenges involved in applying machine perception in the domains of policy, privacy, economics, and healthcare.

16:40 - 16:50 Contributed talk - Crisis Sub-Events on Social Media: A Case Study of Wildfires

Social media has been extensively used for crisis management. Recent work examines possible sub-events as a major crisis unfolds. In this project, we first propose a framework to identify sub-events from tweets. Then, using four California wildfires in 2018-2019 as a case study, we investigate how sub-events cascade based on existing hypotheses drawn from the disaster management literature, and find that most hypotheses are supported on social media, e.g., fire induces smoke, which causes air pollution, which later harms health and eventually affects the healthcare system. In addition, we discuss other unexpected sub-events that emerge from social media.

Alejandro (Alex) Jaimes

Dataminr

Alex is Chief Scientist and SVP of AI at Dataminr. Alex has 15+ years of international experience in research and product impact at scale. He has published 100+ technical papers in top-tier conferences and journals on diverse topics in AI and has been featured widely in the press (MIT Tech Review, CNBC, Vice, TechCrunch, Yahoo! Finance, etc.). He has given 80+ invited talks (AI for Good Global Summit (UN, Geneva), the Future of Technology Summit, O’Reilly (AI, Strata, Velocity), Deep Learning Summit, etc.). Alex is also an Endeavor Network mentor (which leads the high-impact entrepreneurship movement around the world), and was an early voice in Human-Centered AI (Computing). He holds a Ph.D. from Columbia University.

16:50 - 17:00 Contributed talk : Towards Detecting Dyslexia in Children's Handwriting Using Neural Networks

Dyslexia is a learning disability that hinders a person's ability to read. Dyslexia needs to be caught early; however, teachers are not trained to detect dyslexia and screening tests are used inconsistently. We propose (1) two new datasets of handwriting collected from children with and without dyslexia, amounting to close to 500 handwriting samples, and (2) an automated early screening technique to be used in conjunction with current approaches, to accelerate the detection process. Preliminary results suggest our system outperforms teachers.

Katie Spoon

Indiana University

Katie recently completed her B.S./M.S. in computer science at Indiana University, with minors in math and statistics, and with research interests in anomaly detection, computer vision, data visualization, and applications of computer vision to health and education, such as her senior thesis on detecting dyslexia with neural networks. She worked at IBM Research in the summer of 2018 on neuromorphic computing and will be returning there full-time. She hopes to pursue a PhD and become a corporate research scientist.

17:00 - 17:30 Keynote : Assisting Vulnerable Communities through AI and OR: from Data to Deployed Decisions

Phebe Vayanos

University of Southern California

Phebe Vayanos is Assistant Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California, and Associate Director of the CAIS Center for Artificial Intelligence in Society. Her research addresses fundamental questions in data-driven optimization (also known as prescriptive analytics), with the aim of tackling real-world decision- and policy-making problems in uncertain and adversarial environments.

17:30 - 17:40 Open announcement and Best Paper Award