Schedule
Saturday December 14th

8:00AM-8:05AM Opening Remarks

Yoshua Bengio

Mila

Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand the principles of learning that yield intelligence. He supervises a large group of graduate students and postdocs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an H-index over 120, and rising fast).

8:05AM-8:25AM Invited Talk

Carla P. Gomes

Cornell University

Carla Gomes is a Professor of Computer Science and the Director of the Institute for Computational Sustainability at Cornell University. Her research area is artificial intelligence with a focus on large-scale constraint-based reasoning, optimization and machine learning. She is noted for her pioneering work in developing computational methods to address challenges in sustainability.

8:25AM-8:45AM Invited Talk

Miguel Luengo-Oroz

UN Global Pulse

Dr. Miguel Luengo-Oroz is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. He is the head of the data science teams across the network of Pulse labs in New York, Jakarta & Kampala. Over the last decade, Miguel has built and directed teams bringing data and AI to operations and policy through innovation projects with international organizations, governments, the private sector & academia. He has worked in multiple domains including poverty, food security, refugees & migrants, conflict prevention, human rights, economic indicators, gender, hate speech and climate change.

8:45AM-9:05AM Invited Talk

Thomas G. Dietterich

Oregon State University

Dr. Dietterich is Distinguished Emeritus Professor of computer science at Oregon State University and currently pursues interdisciplinary research at the boundary of computer science, ecology, and sustainability policy.

9:05AM-9:10AM Contributed talk - Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data

From financial loans and humanitarian aid, to medical diagnosis and criminal justice, consequential decisions in society increasingly rely on machine learning. In most cases, the machine learning algorithms used in these contexts are trained to optimize a single metric of performance; however, most real-world decisions exist in a multi-objective setting that requires the balance of multiple incentives and outcomes. To this end, we develop a methodology for optimizing multi-objective decisions. Building on the traditional notion of Pareto optimality, we focus on understanding how to balance multiple objectives when those objectives are measured noisily or not directly observed. We believe this regime of imperfect information is far more common in real-world decisions, where one cannot easily measure the social consequences of an algorithmic decision. To show how the multi-objective framework can be used in practice, we present results using data from roughly 40,000 videos promoted by YouTube’s recommendation algorithm. This illustrates the empirical trade-off between maximizing user engagement and promoting high-quality videos. We show that multi-objective optimization could produce substantial increases in average video quality at the expense of almost negligible reductions in user engagement.
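
A minimal sketch of the kind of trade-off analysis the abstract describes: scalarizing two objectives, one of which is observed only through a noisy proxy, and sweeping the weight between them to trace an empirical engagement/quality frontier. The data, noise model, and weighting scheme below are illustrative assumptions, not the authors' method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-video scores: true engagement and true quality.
n = 10_000
engagement = rng.beta(2, 5, size=n)
quality = rng.beta(5, 2, size=n)

# Quality is only observed through a noisy proxy (the "imperfect data" regime).
quality_proxy = quality + rng.normal(0.0, 0.1, size=n)

k = 1_000  # how many videos the platform can promote

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Scalarized objective: alpha trades engagement off against (proxy) quality.
    score = (1 - alpha) * engagement + alpha * quality_proxy
    promoted = np.argsort(score)[-k:]
    print(f"alpha={alpha:.2f}  "
          f"mean engagement={engagement[promoted].mean():.3f}  "
          f"mean true quality={quality[promoted].mean():.3f}")
```

Sweeping alpha plays the role of moving along the Pareto frontier: modest increases in the quality weight raise the average true quality of the promoted set with only a small drop in engagement, mirroring the trade-off the talk reports.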

Esther Rolf

University of California, Berkeley

Esther Rolf is a 4th year Ph.D. student in the Computer Science department at the University of California, Berkeley, advised by Benjamin Recht and Michael I. Jordan. She is an NSF Graduate Research Fellow and is a fellow in the Global Policy Lab in the Goldman School of Public Policy at UC Berkeley. Esther’s research targets machine learning algorithms that interact with society. Her current focus lies in two main domains: the field of algorithmic fairness, which aims to design and audit black-box decision algorithms to ensure equity and benefit for all individuals, and in machine learning for environmental monitoring, where abundant sources of temporally recurrent data provide an exciting opportunity to make inferences and predictions about our planet.

9:10AM-9:15AM Contributed talk - Dilated LSTM with ranked units for Classification of suicide note

Presented by Annika Marie Schoene

9:15AM-9:20AM Contributed talk - Speech in Pixels: Automatic Detection of Offensive Memes for Moderation

This work addresses the challenge of hate speech detection in Internet memes and, unlike previous works that have focused on language, attempts to use visual information to detect hate speech automatically.

Xavier Giro-i-Nieto

Universitat Politecnica de Catalunya

Xavier Giro-i-Nieto is an associate professor at the Universitat Politecnica de Catalunya (UPC) in Barcelona and visiting researcher at Barcelona Supercomputing Center (BSC). He obtained his doctoral degree from UPC in 2012 under the supervision of Prof. Ferran Marques (UPC) and Prof. Shih-Fu Chang (Columbia University). His research interests focus on deep learning applied to multimedia and reinforcement learning.

9:20AM-9:25AM Contributed talk - Towards better healthcare: What could and should be automated?

Presented by Paul Duckworth

9:25AM-9:45AM All Tracks Poster Session

9:45AM-10:30AM Morning Coffee/Tea Break

10:30AM-11:15AM Panel discussion on AI and Sustainable Development

Carla P. Gomes

Cornell University

Carla Gomes is a Professor of Computer Science and the Director of the Institute for Computational Sustainability at Cornell University. Her research area is artificial intelligence with a focus on large-scale constraint-based reasoning, optimization and machine learning. She is noted for her pioneering work in developing computational methods to address challenges in sustainability.

Miguel Luengo-Oroz

UN Global Pulse

Dr. Miguel Luengo-Oroz is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. He is the head of the data science teams across the network of Pulse labs in New York, Jakarta & Kampala. Over the last decade, Miguel has built and directed teams bringing data and AI to operations and policy through innovation projects with international organizations, governments, the private sector & academia. He has worked in multiple domains including poverty, food security, refugees & migrants, conflict prevention, human rights, economic indicators, gender, hate speech and climate change.

Thomas G. Dietterich

Oregon State University

Dr. Dietterich is Distinguished Emeritus Professor of computer science at Oregon State University and currently pursues interdisciplinary research at the boundary of computer science, ecology, and sustainability policy.

Julien Cornebise

11:15AM-11:40AM Invited talk - Untangling AI Ethics: Working Toward a Root Issue

Given myriad issues in AI ethics as well as many competing frameworks/declarations, it may be useful to step back to see if we can find a root or common issue, which may help to suggest a broad solution to the complex problem. This involves returning to first principles: what is the nature of AI? I will suggest that AI is the power of increasing omniscience, which is not only generally disruptive to society but also a threat to our autonomy. A broad solution, then, is to aim at restoring that autonomy.

Patrick Lin

California Polytechnic State University

Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics—including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017)—human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas.

11:40AM-12:00PM All Tracks Poster Session

12:00PM-2:00PM Lunch - on your own

2:00PM-2:20PM Invited talk

Artificial intelligence (AI) applications in healthcare hold great promise, aiming to empower clinicians to diagnose and treat medical conditions earlier and more effectively. To ensure that AI solutions deliver on this promise, it is important to approach the design of prototype solutions with clinical applicability in mind, envisioning how they might fit within existing clinical workflows. Here we provide a brief overview of how we are incorporating this thinking in our research projects, while highlighting challenges that lie ahead.

Nenad Tomasev

Google DeepMind

2:20PM-2:40PM Invited talk

One large-scale multistakeholder effort to implement the values of the Montreal Declaration, as well as other AI ethical principles, is ABOUT ML, a recently launched project led by the Partnership on AI. It aims to synthesize and advance existing research by bringing PAI's Partner community and beyond into a public conversation, and to catalyze the building of a set of resources that allow more organizations to experiment with pilots. Eventually, ABOUT ML aims to surface research-driven best practices and help translate them into new industry norms. This talk will be an overview of the work to date and ways to get involved moving forward.

Jingying Yang

Partnership on AI

Jingying Yang is a Program Lead on the Research team at the Partnership on AI, where she leads a portfolio of collaborative multistakeholder projects on the topics of safety, fairness, transparency, and accountability, including the ABOUT ML project to set new industry norms on ML documentation. Previously, she worked in Product Operations at Lyft, for the state of Massachusetts on health care policy, and in management consulting at Bain & Company.

2:40PM-2:45PM Contributed talk - Hard Choices in AI Safety

As AI systems become prevalent in high stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms caused by systems to stakeholders in complex social contexts and how to address these remains unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development.

Thomas Krendl Gilbert

UC Berkeley

Thomas Krendl Gilbert is an interdisciplinary Ph.D. candidate in Machine Ethics and Epistemology at UC Berkeley. With prior training in philosophy, sociology, and political theory, Tom researches the various technical and organizational predicaments that emerge when machine learning alters the context of expert decision-making. In particular, he is interested in how different algorithmic learning procedures (e.g. reinforcement learning) reframe classic ethical questions, such as the problem of aggregating human values and interests. In his free time he enjoys sailing and creative writing.

Roel Dobbe

AI Now Institute, New York University

Roel Dobbe’s research addresses the development, analysis, integration and governance of data-driven systems. His PhD work combined optimization, machine learning and control theory to enable monitoring and control of safety-critical systems, including energy & power systems and cancer diagnosis and treatment. In addition to research, Roel has experience in industry and public institutions, where he has served as a management consultant for AT Kearney, a data scientist for C3 IoT, and a researcher for the National ThinkTank in The Netherlands. His diverse experiences led him to examine the ways in which values and stakeholder perspectives are represented in the process of designing and deploying AI and algorithmic decision-making and control systems. Roel founded Graduates for Engaged and Extended Scholarship around Computing & Engineering (GEESE); a student organization stimulating graduate students across all disciplines studying or developing technologies to take a broader lens at their field of study and engage across disciplines. Roel has published his work in various journals and conferences, including Automatica, the IEEE Conference on Decision and Control, the IEEE Power & Energy Society General Meeting, IEEE/ACM Transactions on Computational Biology and Bioinformatics and NeurIPS.

Yonatan Mintz

Georgia Tech

Yonatan Mintz is a Postdoctoral Research Fellow at the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology; previously he completed his PhD in the Department of Industrial Engineering and Operations Research at the University of California, Berkeley. His research interests focus on human-sensitive decision making, and in particular the application of machine learning and optimization methodology to personalized healthcare and fair and accountable decision making. Yonatan's work has been published in many journals and conferences across the machine learning, operations research, and medical fields.

2:45PM-2:50PM Contributed talk - The Effects of Competition and Regulation on Error Inequality in Data-driven Markets

Much work has documented instances of unfairness in deployed machine learning models, and significant effort has been dedicated to creating algorithms that take into account issues of fairness. Our work highlights an important but understudied source of unfairness: market forces that drive differing amounts of firm investment in data across populations. We develop a high-level framework, based on insights from learning theory and industrial organization, to study this phenomenon. In a simple model of this type of data-driven market, we first show that a monopoly will invest unequally in the groups. There are two natural avenues for preventing this disparate impact: promoting competition and regulating firm behavior. We show first that competition, under many natural models, does not eliminate incentives to invest unequally, and can even exacerbate them. We then consider two avenues for regulating the monopoly - requiring the monopoly to ensure that each group's error rates are low, or forcing each group's error rates to be similar to each other - and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, and not only technological ones.
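
To make the monopoly result concrete, here is a toy numerical illustration (our own construction, not the paper's formal model): if each group's error falls as 1/sqrt(samples) and revenue is proportional to group size times accuracy, the profit-maximizing data allocation leaves the smaller group with a strictly higher error rate.

```python
import numpy as np

# Toy model: a group's error falls as 1/sqrt(n) in the samples collected on it,
# and revenue is proportional to group size times (1 - error).
sizes = np.array([0.8, 0.2])  # majority/minority population shares (assumed)
budget = 1_000                # total samples the monopoly can collect

def revenue(n_major):
    n = np.array([n_major, budget - n_major], dtype=float)
    error = 1.0 / np.sqrt(n)
    return float(np.sum(sizes * (1.0 - error)))

# Monopoly allocation: grid-search the split that maximizes revenue.
splits = np.arange(1, budget)
best = splits[np.argmax([revenue(s) for s in splits])]
n_opt = np.array([best, budget - best], dtype=float)

print("samples per group:", n_opt)                 # heavily skewed to the majority
print("error per group:  ", 1.0 / np.sqrt(n_opt))  # minority bears higher error
```

In this toy setup, an equal-error regulation corresponds to forcing the even split, and the revenue lost relative to the unconstrained optimum is a simple analogue of the price of fairness the abstract quantifies.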

Hadi Elzayn

University of Pennsylvania

Hadi Elzayn is a 4th year PhD Candidate in Applied Math and Computational Science at the University of Pennsylvania, advised by Michael Kearns. He is interested in the intersection of computer science and economics, and in the particular topics of how algorithmic learning interacts with social concerns like fairness, privacy, and markets (and how to design algorithms respecting those concerns). He received his BA from Columbia University in Mathematics and Economics. He has interned at Microsoft Research, and previously worked at the consulting firm TGG.

2:50PM-2:55PM Contributed talk - Learning Fair Classifiers in Online Stochastic Setting

One thing that differentiates policy-driven machine learning is that new public policies are often implemented in a trial-and-error fashion, as data might not be available upfront. In this work, we try to accomplish approximate group fairness in an online decision-making process where examples are sampled i.i.d. from an underlying distribution. Our work follows from the classical learning-from-experts scheme, extending the multiplicative weights algorithm by keeping separate weights for label classes as well as groups. Although accuracy and fairness are often conflicting goals, we try to mitigate the trade-offs using an optimization step and demonstrate the performance on a real data set.
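
For reference, the classical learning-from-experts loop the abstract extends is the multiplicative weights (Hedge) update sketched below. The paper's contribution of keeping separate weights per group and label class is not shown, and the per-round losses here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_experts, rounds, eta = 5, 1_000, 0.1

weights = np.ones(n_experts)
learner_loss = 0.0
expert_loss = np.zeros(n_experts)

for t in range(rounds):
    p = weights / weights.sum()                # current distribution over experts
    losses = rng.uniform(0.0, 1.0, n_experts)  # placeholder i.i.d. losses
    learner_loss += p @ losses                 # learner pays the expected loss
    expert_loss += losses
    weights *= np.exp(-eta * losses)           # multiplicative (Hedge) update

print("learner loss:    ", round(learner_loss, 1))
print("best expert loss:", round(expert_loss.min(), 1))  # gap = regret
```

Roughly, the fairness-aware extension described in the talk would maintain a copy of `weights` per (group, label-class) pair and add an optimization step to trade accuracy against the group-fairness constraint.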

Yi Sun

MIT

Yi (Alicia) Sun is a PhD Candidate in the Institute for Data, Systems, and Society at MIT. Her research interests are in designing algorithms that are robust and reliable, as well as aligned with societal values.

2:55PM-3:00PM Contributed talk - Fraud detection in telephone conversations for financial services using linguistic features

In collaboration with linguistics experts and expert interrogators, we present an approach for fraud detection in transcribed telephone conversations. The proposed approach exploits the syntactic and semantic information of the transcription to extract both the linguistic markers and the sentiment of the customer's response. The results of the proposed approach are demonstrated on real-world financial services data using efficient, robust and explainable classifiers such as Naive Bayes, Decision Tree, Nearest Neighbours, and Support Vector Machines.
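
As a rough sketch of the classification stage only (the transcripts, labels, and TF-IDF features below are placeholders; the paper's actual features are linguistic markers and sentiment extracted from the transcription), one of the named interpretable classifiers could be wired up with scikit-learn as follows:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder transcribed utterances with fraud (1) / genuine (0) labels.
texts = ["I never authorised that transfer", "yes, that was my purchase"] * 50
labels = [1, 0] * 50

# TF-IDF stands in here for the paper's syntactic/semantic linguistic features.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
scores = cross_val_score(model, texts, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```

Swapping `MultinomialNB` for `DecisionTreeClassifier`, `KNeighborsClassifier`, or `LinearSVC` reproduces the other explainable baselines the abstract lists.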

Nikesh Bajaj

University of East London

Nikesh Bajaj is a Postdoctoral Research Fellow at the University of East London, working on the Innovate UK funded project - Automation and Transparency across Financial and Legal Services, in collaboration with Intelligent Voice Ltd. and Strenuus Ltd. The project includes working with machine learning researchers, data scientists, linguistics experts and expert interrogators to model human behaviour for deception detection. He completed his PhD at Queen Mary University of London in a joint program with the University of Genova. His PhD work focused on predictive analysis of auditory attention using physiological signals (e.g. EEG, PPG, GSR). In addition to research, Nikesh has 5+ years of teaching experience. His research interests focus on signal processing, machine learning, deep learning, and optimization.

3:00PM-3:05PM Contributed talk - A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices

What tools are available to guide the ethically aligned research, development and deployment of intelligent systems? We construct a typology to help practically-minded developers ‘apply ethics’ at each stage of the AI development pipeline, and to signal to researchers where further work is needed.

Libby Kinsey

Digital Catapult

Libby is lead technologist for AI at Digital Catapult, the UK's advanced digital technology innovation centre, where she works with a multi-disciplinary team to support organisations in building their AI capabilities responsibly. She spent her early career in technology venture capital before returning to university to study machine learning in 2014.

3:05PM-3:10PM Contributed talk - AI Ethics for Systemic Issues: A Structural Approach presented by Agnes E Schim van der Loeff

3:10PM-3:30PM All Tracks Poster Session

3:30PM-4:15PM Afternoon Coffee/Tea Break

4:15PM-4:20PM Contributed talk - "Good" isn't good enough

Despite widespread enthusiasm among computer scientists to contribute to “social good,” the field's efforts to promote good lack a rigorous foundation in politics or social change. There is limited discourse regarding what “good” actually entails, and instead a reliance on vague notions of what aspects of society are good or bad. Moreover, the field rarely considers the types of social change that result from algorithmic interventions, instead following a “greedy algorithm” approach of pursuing technology-centric incremental reform at all points. In order to reason well about doing good, computer scientists must adopt language and practices to reason reflexively about political commitments, a praxis that considers the long-term impacts of technological interventions, and an interdisciplinary focus that no longer prioritizes technical considerations as superior to other forms of knowledge.

Ben Green

Harvard

Ben Green is a PhD Candidate in Applied Math at Harvard, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard, and a Research Fellow at the AI Now Institute at NYU. He studies the social and policy impacts of data science, with a focus on algorithmic fairness, municipal governments, and the criminal justice system. His book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019 by MIT Press.

4:20PM-4:50PM Invited talk - Indigenous Language Revitalization and AI

Michael Running Wolf

Michael Running Wolf was raised in a rural village in Montana with intermittent water and electricity; naturally, he now has a Master of Science in Computer Science. He has professional experience with IBM, AT&T Wireless and Lawrence Livermore National Lab in database theory and distributed cloud computing. He recently gave up a stable career as a full-stack web developer to focus on his true passion: endangered indigenous language revitalization using Augmented Reality and Virtual Reality (AR/VR) technology. He grew up with a grandmother who spoke only his tribal language, Cheyenne, which like many other indigenous languages is near extinction. By leveraging his advanced degree and technical skills, Michael hopes to strengthen the ecology of thought represented by indigenous languages through Virtual and Augmented Reality.

4:50PM-5:50PM Panel discussion - Towards a Social Good? Theories of Change in AI

Considerable hope and energy is put into AI on the assumption that the field will make the world a 'better' place. Will it? For whom? This panel provides leaders in industry, law and AI-related activism with the opportunity to share their competing theories of change for the field.

Prem Natarajan

VP of Natural Understanding – Alexa AI

Dr. Prem Natarajan is a Vice President in Amazon’s Alexa unit where he leads the Natural Understanding (NU) organization within Alexa AI.  NU is a multidisciplinary science and engineering organization that develops, deploys, and maintains state-of-the-art conversational AI technologies including natural language understanding, intelligent dialog systems, entity linking and resolution, and associated worldwide runtime operations.  Dr. Natarajan joined Amazon from the University of Southern California (USC) where he was Senior Vice Dean of Engineering in the Viterbi School of Engineering, Executive Director of the Information Sciences Institute (a 300-person R&D organization), and Research Professor of computer science with distinction.  Prior to that, as Executive VP at Raytheon BBN Technologies, he led the speech, language, and multimedia business unit, which included research and development operations, and commercial products for real-time multimedia monitoring, document analysis, and information extraction. During his tenure at USC and at BBN, Dr. Natarajan directed R&D efforts in speech recognition, natural language processing, computer vision, and other applications of machine learning. While at USC, he directly led nationally influential DARPA and IARPA sponsored research efforts in biometrics/face recognition, OCR, NLP, media forensics, and forecasting.  Most recently, he helped to launch the Fairness in AI (FAI) program – a collaborative effort between NSF and Amazon for funding fairness focused research efforts in US Universities.

Rashida Richardson

AI Now

Sarah Hamid

Community Organizer

6:00PM End