Schedule
Saturday December 14th
See the videos here

8:00AM-8:05AM Opening Remarks

Yoshua Bengio

Mila

Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research, scientific director of Mila, co-director of the CIFAR Learning in Machines and Brains program (formerly Neural Computation and Adaptive Perception), scientific director of IVADO, and Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand principles of learning that yield intelligence. He supervises a large group of graduate students and post-docs. His research is widely cited (over 130,000 citations found by Google Scholar in August 2018, with an H-index over 120, and rising fast).

8:05AM-8:25AM Invited Talk Track 1 - Computational Sustainability: Computing for a Better World and a Sustainable Future

Computational sustainability is a new interdisciplinary research field with the overarching goal of developing computational models, methods, and tools to help manage the balance of environmental, economic, and societal needs for a sustainable future. I will provide a short overview of computational sustainability, with examples ranging from wildlife conservation and biodiversity to evaluating the impacts of hydropower dam proliferation in the Amazon basin. Our research leverages the recent artificial intelligence (AI) advances in deep learning, reasoning, and decision making. I will highlight cross-cutting computational themes, how AI enriches sustainability sciences and, conversely, how sustainability questions enrich AI and computer science.

Carla P. Gomes

Cornell University

Carla Gomes is a Professor of Computer Science and the Director of the Institute for Computational Sustainability at Cornell University. Her research area is artificial intelligence with a focus on large-scale constraint-based reasoning, optimization and machine learning. She is noted for her pioneering work in developing computational methods to address challenges in sustainability.

8:25AM-8:45AM Invited Talk Track 1 - Translating AI Research into Operational Impact to Achieve the Sustainable Development Goals

In September 2015, Member States of the United Nations adopted the Sustainable Development Goals: a set of goals to end poverty, protect the planet and ensure prosperity for all as part of a new global agenda. To achieve the SDGs by 2030, governments, the private sector, civil society and academia must work together. In this talk, I will present my journey working almost a decade at UN Global Pulse, an innovation initiative of the UN Secretary-General, researching and developing real applications of data innovation and AI for sustainable development, humanitarian action and peace. The work of the UN includes providing food and assistance to 80 million people, supplying vaccines to 45% of the world's children and assisting 65 million people fleeing war, famine and persecution. Examples of innovation projects include understanding perceptions of refugees from social data; mapping population movements in the aftermath of natural disasters; understanding recovery from shocks with financial transaction data; using satellite data to inform humanitarian operations in conflict zones; and monitoring public radio to give voice to citizens in unconnected areas. Based on these examples, the session will discuss operational realities and the global policy environment, as well as challenges and opportunities for the research community to ensure that its groundbreaking discoveries are used responsibly and can be translated into social impact for all.

Miguel Luengo-Oroz

UN Global Pulse

Dr. Miguel Luengo-Oroz is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. He is the head of the data science teams across the network of Pulse labs in New York, Jakarta & Kampala. Over the last decade, Miguel has built and directed teams bringing data and AI to operations and policy through innovation projects with international organizations, governments, the private sector & academia. He has worked in multiple domains including poverty, food security, refugees & migrants, conflict prevention, human rights, economic indicators, gender, hate speech and climate change.

8:45AM-9:15AM Invited Talk Track 3 - Sacred Waveforms: An Indigenous Perspective on the Ethics of Collecting and Usage of Spiritual Data for Machine Learning

This talk is an introduction to the intersection of revitalizing sacred knowledge and the exploitation of this data. For centuries, Indigenous Peoples of the Americas have resisted the loss of their land, technology, and cultural knowledge. This resistance has been enabled by vibrant cultural protocols, unique to each tribal nation, which control the sharing of, and limit access to, sacred knowledge. Technology has made preserving cultural data easy, but there is a natural tension between reigniting ancient knowledge and mediums that allow uncontrollable exploitation of this data. Easy-to-access ML opens a new path toward creating new Indigenous technology, such as ASR, but creating AI using Indigenous heritage requires care.

Michael Running Wolf

Buffalo Tongue

Michael Running Wolf was raised in a rural village in Montana with intermittent water and electricity; naturally, he now has a Master of Science in Computer Science. Though he is a published poet, he is a computer nerd at heart. His lifelong goal is to pursue endangered indigenous language revitalization using Augmented Reality and Virtual Reality (AR/VR) technology. He was raised with a grandmother who only spoke his tribal language, Cheyenne, which like many other indigenous languages is near extinction. By leveraging his Master's degree in Computer Science and his technical skills, Michael hopes to strengthen the ecology of thought represented by indigenous languages through immersive technology.

Caroline Running Wolf

Buffalo Tongue

Caroline Running Wolf, née Old Coyote, is an enrolled member of the Apsáalooke Nation (Crow) in Montana, with a Swabian (German) mother and also Pikuni, Oglala, and Ho-Chunk heritage. Thanks to her genuine interest in people and their stories, she is a multilingual Cultural Acclimation Artist dedicated to supporting Indigenous language and culture vitality. Together with her husband, Michael Running Wolf, she creates virtual and augmented reality experiences to advocate for Native American voices, languages and cultures. Caroline has a Master's degree in Native American Studies from Montana State University in Bozeman, Montana. She is currently pursuing her PhD in anthropology at the University of British Columbia in Vancouver, Canada.

9:15AM-9:20AM Contributed Talk Track 1 - Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data

From financial loans and humanitarian aid, to medical diagnosis and criminal justice, consequential decisions in society increasingly rely on machine learning. In most cases, the machine learning algorithms used in these contexts are trained to optimize a single metric of performance; however, most real-world decisions exist in a multi-objective setting that requires the balance of multiple incentives and outcomes. To this end, we develop a methodology for optimizing multi-objective decisions. Building on the traditional notion of Pareto optimality, we focus on understanding how to balance multiple objectives when those objectives are measured noisily or not directly observed. We believe this regime of imperfect information is far more common in real-world decisions, where one cannot easily measure the social consequences of an algorithmic decision. To show how the multi-objective framework can be used in practice, we present results using data from roughly 40,000 videos promoted by YouTube’s recommendation algorithm. This illustrates the empirical trade-off between maximizing user engagement and promoting high-quality videos. We show that multi-objective optimization could produce substantial increases in average video quality at the expense of almost negligible reductions in user engagement.
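
To make the notion of Pareto optimality concrete, below is a minimal Python sketch that flags which candidate policies are Pareto-optimal given scores on two objectives; the scores are fabricated for illustration, and the talk's actual method further handles objectives that are measured noisily or not directly observed.

```python
import numpy as np

# Hypothetical (policy x objective) scores: column 0 = user engagement,
# column 1 = video quality; both are to be maximized.
scores = np.array([
    [0.90, 0.20],
    [0.85, 0.60],
    [0.80, 0.75],
    [0.40, 0.90],
    [0.50, 0.50],  # dominated by [0.80, 0.75]
])

def pareto_optimal(points):
    """A point is Pareto-optimal if no other point is at least as good on
    every objective and strictly better on at least one."""
    flags = []
    for i, p in enumerate(points):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(points) if j != i)
        flags.append(not dominated)
    return np.array(flags)

print(pareto_optimal(scores))  # -> [ True  True  True  True False]
```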

Esther Rolf

University of California, Berkeley

Esther Rolf is a 4th year Ph.D. student in the Computer Science department at the University of California, Berkeley, advised by Benjamin Recht and Michael I. Jordan. She is an NSF Graduate Research Fellow and is a fellow in the Global Policy Lab in the Goldman School of Public Policy at UC Berkeley. Esther’s research targets machine learning algorithms that interact with society. Her current focus lies in two main domains: the field of algorithmic fairness, which aims to design and audit black-box decision algorithms to ensure equity and benefit for all individuals, and in machine learning for environmental monitoring, where abundant sources of temporally recurrent data provide an exciting opportunity to make inferences and predictions about our planet.

9:20AM-9:25AM Contributed Talk Track 1 - Dilated LSTM with ranked units for Classification of suicide note

Recent statistics in suicide prevention show that people are increasingly posting their last words online, and with the unprecedented availability of textual data from social media platforms, researchers have the opportunity to analyse such data. This work focuses on distinguishing suicide notes from other types of text in a document-level classification task, using a hierarchical recurrent neural network to uncover linguistic patterns in the data.
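
As a rough illustration of the architecture family, below is a minimal PyTorch sketch in which the LSTM reads every second embedded token, a crude stand-in for dilation (true dilated RNNs maintain recurrent connections at multiple rates); the vocabulary size, dimensions, and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedLSTMClassifier(nn.Module):
    """Document classifier with a crude dilation: the LSTM sees every
    `dilation`-th embedded token, widening its effective context window."""
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=64,
                 dilation=2, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.dilation = dilation
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens)[:, ::self.dilation, :]
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])                # logits from final state

model = DilatedLSTMClassifier()
logits = model(torch.randint(0, 10_000, (8, 200)))   # 8 fake documents
print(logits.shape)  # torch.Size([8, 2])
```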

Annika Marie Schoene

University of Hull & IBM Research UK

Annika Marie Schoene is a third-year PhD candidate in Natural Language Processing at the University of Hull and is affiliated with IBM Research UK. The main focus of her work lies in investigating recurrent neural networks for fine-grained emotion detection in social media data. She also has an interest in mental health issues on social media, where she looks at how to identify suicidal ideation in textual data.

9:25AM-9:30AM Contributed Talk Track 1 - Hate Speech in Pixels: Automatic Detection of Offensive Memes for Moderation

This work addresses the challenge of hate speech detection in Internet memes, attempting to use visual information to detect hate speech automatically, unlike previous works that have focused on language.
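
Below is a minimal late-fusion sketch of a multimodal meme classifier; it assumes image and caption features have already been extracted (e.g. by a pretrained CNN and word embeddings), which is one common design rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    """Late fusion: concatenate image and text features, then score."""
    def __init__(self, img_dim=512, txt_dim=300, hidden=128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),   # single logit: hateful vs. not
        )

    def forward(self, img_feats, txt_feats):
        return self.head(torch.cat([img_feats, txt_feats], dim=-1))

model = MemeClassifier()
logit = model(torch.randn(4, 512), torch.randn(4, 300))  # 4 fake memes
print(torch.sigmoid(logit).squeeze(-1))  # probabilities of "hateful"
```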

Xavier Giro-i-Nieto

Universitat Politecnica de Catalunya

Xavier Giro-i-Nieto is an associate professor at the Universitat Politecnica de Catalunya (UPC) in Barcelona and a visiting researcher at the Barcelona Supercomputing Center (BSC). He obtained his doctoral degree from UPC in 2012 under the supervision of Prof. Ferran Marques (UPC) and Prof. Shih-Fu Chang (Columbia University). His research interests focus on deep learning applied to multimedia and reinforcement learning.

9:30AM-9:35AM Contributed Talk Track 1 - Towards better healthcare: What could and should be automated?

While artificial intelligence (AI) and other automation technologies might lead to enormous progress in healthcare, they may also have undesired consequences for people working in the field. In this interdisciplinary study, we capture empirical evidence of not only what healthcare work could be automated, but also what should be automated. We quantitatively investigate these research questions by utilizing probabilistic machine learning models trained on thousands of ratings, provided by both healthcare practitioners and automation experts. Based on our findings, we present an analytical tool (Automatability-Desirability Matrix) to support policymakers and organizational leaders in developing practical strategies on how to harness the positive power of automation technologies, while accompanying change and empowering stakeholders in a participatory fashion.

Wolfgang Frühwirt

University of Oxford

Wolfgang Frühwirt is an Associate Member of the Oxford-Man Institute (University of Oxford, Engineering Department), where he works with the Machine Learning Research Group.

9:35AM-9:45AM All Tracks Poster Session

Increasing small holder farmer income by providing localized price forecasts. Anita Yadav (Credible); Arjun Verma (Credible); Dr. Anitha Govindaraj (Atal Bihari Vajpayee Institute of Good Governance and Policy Analysis); Vikram Sarbajna (Credible).
Finding Social Media Trolls: Dynamic Keyword Selection Methods for Rapidly-Evolving Online Debate. Anqi Liu (California Institute of Technology); Maya S Srikanth (California Institute of Technology); Nicholas Adams-Cohen (Stanford University); R. Michael Alvarez (Caltech); Animashree Anandkumar (Caltech).

9:45AM-10:30AM Break / All Tracks Poster Session

10:30AM-11:30AM Panel Discussion Track 3 - Towards a Social Good? Theories of Change in AI

Considerable hope and energy are put into AI, and into its critique, under the assumption that the field will make the world a “better” place by maximizing social good. Will it? For whom? At what time scale? Most importantly, who defines "social good"?
This panel invites dissent. Leading voices will pick apart these questions by sharing their own competing theories of social change in relation to AI. In addition to answering audience questions, they will share how they decide on trade-offs between pragmatism and principles and how they resist elements of AI research that are known to be dangerous and/or socially degenerative, particularly in relation to surveillance, criminal justice and privacy.
We undertake a probing and genuine conversation around these questions.

Dhaval Adjodah

MIT

Facilitator Dhaval Adjodah is a research scientist at the MIT Media Lab. His research investigates the current limitations of generalization in machine learning as well as how to move beyond them by leveraging the social cognitive adaptations humans evolved to collaborate effectively at scale. Beyond pushing the limits of modern machine learning, he is also interested in improving institutions by using online human experiments to better understand the cognitive limits and biases that affect everyday individual economic, political, and social decisions. During his PhD, Dhaval was an intern in Prof. Yoshua Bengio's group at MILA, a member of the Harvard Berkman Assembly on Ethics and Governance in Artificial Intelligence, and a fellow at the Dalai Lama Center For Ethics And Transformative Values. He has a B.S. in Physics from MIT, and an M.S. in Technology Policy from the MIT Institute for Data, Systems, and Society.

Natalie Saltiel

MIT

Facilitator Natalie Saltiel is the Program Manager for the Ethics and Governance of Artificial Intelligence Initiative at the MIT Media Lab, a collaboration between the MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University. She has an MSc from the Oxford Internet Institute at Oxford University and was previously the Coordinating Editor of the Journal of Design and Science. Her work focuses on bridging the gap between the humanities & social sciences and computing, and on the ethical implications of technology.

Rashida Richardson

AI Now Institute

As Director of Policy Research, Rashida designs, implements, and coordinates AI Now's research strategy and initiatives on the topics of law, policy, and civil rights. Rashida joins AI Now after working as Legislative Counsel at the New York Civil Liberties Union (NYCLU), where she led the organization's work on privacy, technology, surveillance, and education issues. Prior to the NYCLU, she was a staff attorney at the Center for HIV Law and Policy, where she worked on a wide range of HIV-related legal and policy issues nationally, and she previously worked at Facebook Inc. and HIP Investor in San Francisco. Rashida currently serves on the Board of Trustees of Wesleyan University, the Advisory Board of the Civil Rights and Restorative Justice Project, the Board of Directors of the College & Community Fellowship, and she is an affiliate and Advisory Board member of the Center for Critical Race + Digital Studies. She received her BA with honors in the College of Social Studies at Wesleyan University and her JD from Northeastern University School of Law.

Sarah T. Hamid

Carceral Tech Resistance Network

Sarah T. Hamid is an abolitionist and organizer in Southern California, working to build community defense against carceral technologies. She's built and worked on campaigns against predictive policing, risk assessment technologies, public/private surveillance partnerships, electronic monitoring, and automated border screening. In March 2019, she co-founded the Prison Tech Research Group (PTR-Grp), a coalition of abolitionists working on the intersection of technology/innovation and the prison-industrial complex. PTR-Grp focuses on private-public research partnerships deployed under the guise of prison reform, which stage the prison as a site for technological innovation and low-cost testing. The project centers the needs and safety of incarcerated and directly impacted people who face the violently expropriative data science industry with few safety nets. Sarah also facilitates the monthly convening of the Community Defense Syllabus, during which activists of color from all over the country work to theorize the intersection of race and carceral computing. In 2020, she will lead the launch and roll-out of the Carceral Tech Resistance Network, a community archive and knowledge-sharing project that seeks to amplify the capacity of community organizations to resist the encroachment and experimentation of harmful technologies.

11:30AM-11:35AM Contributed Talk Track 2 - Hard Choices in AI Safety

As AI systems become prevalent in high-stakes domains such as surveillance and healthcare, researchers now examine how to design and implement them in a safe manner. However, the potential harms caused by systems to stakeholders in complex social contexts, and how to address these, remain unclear. In this paper, we explain the inherent normative uncertainty in debates about the safety of AI systems. We then address this as a problem of vagueness by examining its place in the design, training, and deployment stages of AI system development. We adopt Ruth Chang's theory of intuitive comparability to illustrate the dilemmas that manifest at each stage. We then discuss how stakeholders can navigate these dilemmas by incorporating distinct forms of dissent into the development pipeline, drawing on Elizabeth Anderson's work on the epistemic powers of democratic institutions. We outline a framework of sociotechnical commitments to formal, substantive and discursive challenges that address normative uncertainty across stakeholders, and propose the cultivation of related virtues by those responsible for development.

Roel Dobbe

AI Now Institute, New York University

Roel Dobbe’s research addresses the development, analysis, integration and governance of data-driven systems. His PhD work combined optimization, machine learning and control theory to enable monitoring and control of safety-critical systems, including energy & power systems and cancer diagnosis and treatment. In addition to research, Roel has experience in industry and public institutions, where he has served as a management consultant for A.T. Kearney, a data scientist for C3 IoT, and a researcher for the National ThinkTank in The Netherlands. His diverse experiences led him to examine the ways in which values and stakeholder perspectives are represented in the process of designing and deploying AI and algorithmic decision-making and control systems. Roel founded Graduates for Engaged and Extended Scholarship around Computing & Engineering (GEESE), a student organization stimulating graduate students across all disciplines studying or developing technologies to take a broader lens at their field of study and engage across disciplines. Roel has published his work in various journals and conferences, including Automatica, the IEEE Conference on Decision and Control, the IEEE Power & Energy Society General Meeting, IEEE/ACM Transactions on Computational Biology and Bioinformatics, and NeurIPS.

Thomas Krendl Gilbert

UC Berkeley

Thomas Krendl Gilbert is an interdisciplinary Ph.D. candidate in Machine Ethics and Epistemology at UC Berkeley. With prior training in philosophy, sociology, and political theory, Tom researches the various technical and organizational predicaments that emerge when machine learning alters the context of expert decision-making. In particular, he is interested in how different algorithmic learning procedures (e.g. reinforcement learning) reframe classic ethical questions, such as the problem of aggregating human values and interests. In his free time he enjoys sailing and creative writing.

Yonatan Mintz

Georgia Tech

Yonatan Mintz is a Postdoctoral Research Fellow at the H. Milton Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology; previously, he completed his PhD in the Department of Industrial Engineering and Operations Research at the University of California, Berkeley. His research interests focus on human-sensitive decision making, and in particular the application of machine learning and optimization methodology to personalized healthcare and fair and accountable decision making. Yonatan's work has been published in many journals and conferences across the machine learning, operations research, and medical fields.

11:35AM-11:40AM Contributed Talk Track 3 - The Effects of Competition and Regulation on Error Inequality in Data-driven Markets

Much work has documented instances of unfairness in deployed machine learning models, and significant effort has been dedicated to creating algorithms that take into account issues of fairness. Our work highlights an important but understudied source of unfairness: market forces that drive differing amounts of firm investment in data across populations. We develop a high-level framework, based on insights from learning theory and industrial organization, to study this phenomenon. In a simple model of this type of data-driven market, we first show that a monopoly will invest unequally in the groups. There are two natural avenues for preventing this disparate impact: promoting competition and regulating firm behavior. We show first that competition, under many natural models, does not eliminate incentives to invest unequally, and can even exacerbate them. We then consider two avenues for regulating the monopoly, requiring it to ensure that each group's error rates are low, or forcing each group's error rates to be similar to each other, and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, and not only technological ones.

Hadi Elzayn

University of Pennsylvania

Hadi Elzayn is a 4th year PhD Candidate in Applied Math and Computational Science at the University of Pennsylvania, advised by Michael Kearns. He is interested in the intersection of computer science and economics, and in the particular topics of how algorithmic learning interacts with social concerns like fairness, privacy, and markets (and how to design algorithms respecting those concerns). He received his BA from Columbia University in Mathematics and Economics. He has interned at Microsoft Research, and previously worked at the consulting firm TGG.

11:40AM-11:45AM Contributed Talk Track 3 - Learning Fair Classifiers in Online Stochastic Setting

One thing that differentiates policy-driven machine learning is that new public policies are often implemented in a trial-and-error fashion, as data might not be available upfront. In this work, we try to accomplish approximate group fairness in an online decision-making process where examples are sampled i.i.d. from an underlying distribution. Our work follows the classical learning-from-experts scheme, extending the multiplicative weights algorithm by keeping separate weights for label classes as well as groups. Although accuracy and fairness are often conflicting goals, we try to mitigate the trade-offs using an optimization step and demonstrate the performance on a real data set.
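
For readers unfamiliar with the learning-from-experts scheme the work builds on, below is a minimal sketch of the classic multiplicative weights update; the paper's extension, separate weight vectors per label class and group plus a fairness-aware optimization step, is only noted in the comments.

```python
import numpy as np

def multiplicative_weights(loss_rounds, n_experts, eta=0.1):
    """Classic multiplicative weights: maintain one weight per expert and
    exponentially downweight experts that incur loss. The paper extends
    this by keeping a separate weight vector for each (label, group) pair."""
    w = np.ones(n_experts)
    for losses in loss_rounds:          # losses: per-expert loss in [0, 1]
        yield w / w.sum()               # current distribution over experts
        w *= np.exp(-eta * losses)

# Illustrative run on random losses (hypothetical data)
rng = np.random.default_rng(0)
rounds = [rng.random(5) for _ in range(200)]
final_dist = list(multiplicative_weights(rounds, n_experts=5))[-1]
print(final_dist)  # highest mass on the expert with the lowest cumulative loss
```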

Yi Sun

MIT

Yi (Alicia) Sun is a PhD Candidate in the Institute for Data, Systems and Society at MIT. Her research interests are in designing algorithms that are robust and reliable, and that align with societal values.

11:45AM-11:50AM Contributed Talk Track 2 - Fraud detection in telephone conversations for financial services using linguistic features

In collaboration with linguists and expert interrogators, we present an approach for fraud detection in transcribed telephone conversations. The proposed approach exploits the syntactic and semantic information of the transcription to extract both the linguistic markers and the sentiment of the customer's response. The results of the proposed approach are demonstrated on real-world financial services data using efficient, robust and explainable classifiers such as Naive Bayes, Decision Trees, Nearest Neighbours, and Support Vector Machines.
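
As a toy illustration of one of the classifier families mentioned, below is a sketch that fits a Naive Bayes model on TF-IDF features; the transcripts and labels are fabricated placeholders, and the actual system uses richer linguistic markers and sentiment features.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Fabricated toy transcripts; the real data are transcribed customer calls.
texts = [
    "to be honest I never authorised that transfer",
    "yes I made the payment myself on Monday",
    "I don't recall, honestly, it wasn't me at all",
    "I confirmed the transaction with my own card",
]
labels = [1, 0, 1, 0]  # 1 = suspected fraud, 0 = genuine

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["honestly I never made that call"]))
```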

Nikesh Bajaj

University of East London

Nikesh Bajaj is a Postdoctoral Research Fellow at the University of East London, working on the Innovate UK funded project Automation and Transparency across Financial and Legal Services, in collaboration with Intelligent Voice Ltd. and Strenuus Ltd. The project involves working with machine learning researchers, data scientists, linguistics experts and expert interrogators to model human behaviour for deception detection. He completed his PhD at Queen Mary University of London in a joint program with the University of Genova. His PhD work focused on predictive analysis of auditory attention using physiological signals (e.g. EEG, PPG, GSR). In addition to research, Nikesh has 5+ years of teaching experience. His research interests focus on signal processing, machine learning, deep learning, and optimization.

11:50AM-11:55AM Contributed Talk Track 2 - A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices

What tools are available to guide the ethically aligned research, development and deployment of intelligent systems? We construct a typology to help practically-minded developers ‘apply ethics’ at each stage of the AI development pipeline, and to signal to researchers where further work is needed.

Libby Kinsey

Digital Catapult

Libby is lead technologist for AI at Digital Catapult, the UK's advanced digital technology innovation centre, where she works with a multi-disciplinary team to support organisations in building their AI capabilities responsibly. She spent her early career in technology venture capital before returning to university to study machine learning in 2014.

11:55AM-12:00PM Contributed Talk Track 3 - AI Ethics for Systemic Issues: A Structural Approach

Much of the discourse on AI ethics has focused on technical improvements and holding individuals accountable to prevent accidents and malicious use of AI. While this is useful and necessary, such an “agency-focused” approach does not cover all the harmful outcomes caused by AI. In particular, it ignores the more indirect and complex risks resulting from AI’s interaction with the socio-economic and political context. A “structural” approach is needed to account for such broader negative impacts where no individual can be held accountable. This is particularly relevant for AI applied to systemic issues such as climate change. This talk explains why a structural approach is needed in addition to the existing agency approach to AI ethics, and offers some preliminary suggestions for putting this into practice.

Agnes Schim van der Loeff

Cervest

Hi, my name is Agnes and I do ethics and policy research at Cervest, which is developing Earth Science AI to quantify climate uncertainty and inform decisions on more sustainable land use. As part of Cervest’s research residency programme earlier this year, I started exploring the ethical implications of such use of AI, which resulted in this NeurIPS paper! Now I am developing a framework to ensure all steps in the development, distribution and use of Cervest’s AI-driven platform are ethical and prevent any harmful outcomes. I hold a first-class Honours degree in Arabic and Development Studies from SOAS University of London. Having studied the intersection of social, economic and political aspects of development, I am interested in how dilemmas around AI reflect wider debates on power relations in society, and I want to explore how AI could be a vehicle for transformative social change. I am particularly passionate about climate justice, which I have engaged with academically and through campaigning.

12:00PM-2:00PM Lunch - on your own

2:00PM-2:05PM Invited Talk Track 2 - ML system documentation for transparency and responsible AI development - a process and an artifact

One large-scale multistakeholder effort to implement the values of the Montreal Declaration, as well as other AI ethical principles, is ABOUT ML, a recently launched project led by the Partnership on AI to synthesize and advance existing research by bringing PAI's Partner community and beyond into a public conversation, and to catalyze the building of a set of resources that allows more organizations to experiment with pilots. Eventually, ABOUT ML aims to surface research-driven best practices and to aid in translating those into new industry norms. This talk will give an overview of the work to date and ways to get involved moving forward.
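
To make ‘ML documentation’ concrete, below is a minimal, hypothetical documentation record in the spirit of ABOUT ML; the field names are illustrative only and are not a schema endorsed by the Partnership on AI.

```python
# Hypothetical documentation record; field names are illustrative only.
model_documentation = {
    "model_name": "comment-toxicity-classifier-v2",
    "intended_use": "Flag comments for human review, not automated removal",
    "out_of_scope_uses": ["employment screening", "non-English forums"],
    "training_data": "Public forum comments, 2017-2019, English only",
    "evaluation": {
        "overall_accuracy": 0.91,
        "disaggregated_metrics": "error rates reported per dialect group",
    },
    "ethical_considerations": "Higher false-positive rates on minority dialects",
    "maintenance": "Retrained quarterly; contact: ml-governance@example.org",
}
```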

Jingying Yang

Partnership on AI

Jingying Yang is a Program Lead on the Research team at the Partnership on AI, where she leads a portfolio of collaborative multistakeholder projects on the topics of safety, fairness, transparency, and accountability, including the ABOUT ML project to set new industry norms on ML documentation. Previously, she worked in Product Operations at Lyft, for the state of Massachusetts on health care policy, and in management consulting at Bain & Company.

2:05PM-2:30PM Invited Talk Track 2 - Beyond Principles and Policy Proposals: A framework for the agile governance of AI

The mismatch between the speed at which innovative technologies are deployed and the slow traditional implementation of ethical/legal oversight requires creative, agile, multi-stakeholder, and cooperative approaches to governance. Agile governance must go beyond hard law and regulations to accommodate soft law, corporate self-governance, and technological solutions to challenges. This presentation will summarize the concepts, insights, and creative approaches to AI oversight that have led to the 1st International Congress for the Governance of AI, which will convene in Prague on April 16-18, 2020.

Wendell Wallach

The Hastings Center and Yale University, Interdisciplinary Center for Bioethics

Wendell Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. He is a consultant, an ethicist, and a scholar at Yale University’s Interdisciplinary Center for Bioethics, where he chairs the working research group on technology and ethics. He is co-author (with Colin Allen) of Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called machine ethics, machine morality, computational morality, and friendly AI. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Wallach is the principal investigator of a Hastings Center project on the control and responsible innovation in the development of autonomous machines.

2:30PM-2:45PM Invited Talk Track 2 - Untangling AI Ethics: Working Toward a Root Issue

Given myriad issues in AI ethics as well as many competing frameworks/declarations, it may be useful to step back to see if we can find a root or common issue, which may help to suggest a broad solution to the complex problem. This involves returning to first principles: what is the nature of AI? I will suggest that AI is the power of increasing omniscience, which is not only generally disruptive to society but also a threat to our autonomy. A broad solution, then, is to aim at restoring that autonomy.

Patrick Lin

California Polytechnic State University

Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics—including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017)—human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas.

2:45PM-3:00PM Invited Talk Track 2 - AI in Healthcare: Working Towards Positive Clinical Impact

Artificial intelligence (AI) applications in healthcare hold great promise, aiming to empower clinicians to diagnose and treat medical conditions earlier and more effectively. To ensure that AI solutions deliver on this promise, it is important to approach the design of prototype solutions with clinical applicability in mind, envisioning how they might fit within existing clinical workflows. Here we provide a brief overview of how we are incorporating this thinking in our research projects, while highlighting challenges that lie ahead.

Nenad Tomasev

Senior Research Engineer at DeepMind

My research interests lie at the intersection of theory and impactful real-world AI applications, with a particular focus on AI in healthcare, which I have been pursuing at DeepMind since early 2016. In our most recent work, published in Nature in July 2019, we demonstrate how deep learning can be used for accurate early predictions of patient deterioration from electronic health records and alerting that opens possibilities for timely interventions and preventative care. Prior to moving to London, I had been involved with other applied projects at Google, such as Email Intelligence and the Chrome Data team. I obtained my PhD in 2013 from the Artificial Intelligence Laboratory at JSI in Slovenia, where I was working on better understanding the consequences of the curse of dimensionality in instance-based learning in many dimensions.

3:00PM-3:30PM Panel Discussion Track 2 - Implementing Responsible AI

This panel will discuss practical solutions for encouraging and implementing responsible AI. There will be time for audience Q&A.

Brian Patrick Green

Markkula Center for Applied Ethics, Santa Clara University

Facilitator Brian Patrick Green is Director of Technology Ethics at the Markkula Center for Applied Ethics at Santa Clara University. His interests include AI and ethics, the ethics of space exploration and use, the ethics of technological manipulation of humans, the ethics of catastrophic risk, and the intersection of human society and technology, including religion and technology. Green teaches AI ethics in the Graduate School of Engineering and is co-author of the Ethics in Technology Practice corporate technology ethics resources.

Wendell Wallach

The Hastings Center and Yale University, Interdisciplinary Center for Bioethics

Wendell Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. He is a consultant, an ethicist, and a scholar at Yale University’s Interdisciplinary Center for Bioethics, where he chairs the working research group on technology and ethics. He is co-author (with Colin Allen) of Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called machine ethics, machine morality, computational morality, and friendly AI. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Wallach is the principal investigator of a Hastings Center project on the control and responsible innovation in the development of autonomous machines.

Patrick Lin

California Polytechnic State University

Patrick Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is also a philosophy professor. He has published several books and papers in the field of technology ethics, especially with respect to robotics—including Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, 2017)—human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas.

Nenad Tomasev

Senior Research Engineer at DeepMind

My research interests lie at the intersection of theory and impactful real-world AI applications, with a particular focus on AI in healthcare, which I have been pursuing at DeepMind since early 2016. In our most recent work, published in Nature in July 2019, we demonstrate how deep learning can be used for accurate early predictions of patient deterioration from electronic health records and alerting that opens possibilities for timely interventions and preventative care. Prior to moving to London, I had been involved with other applied projects at Google, such as Email Intelligence and the Chrome Data team. I obtained my PhD in 2013 from the Artificial Intelligence Laboratory at JSI in Slovenia, where I was working on better understanding the consequences of the curse of dimensionality in instance-based learning in many dimensions.

Jingying Yang

Partnership on AI

Jingying Yang is a Program Lead on the Research team at the Partnership on AI, where she leads a portfolio of collaborative multistakeholder projects on the topics of safety, fairness, transparency, and accountability, including the ABOUT ML project to set new industry norms on ML documentation. Previously, she worked in Product Operations at Lyft, for the state of Massachusetts on health care policy, and in management consulting at Bain & Company.

Libby Kinsey

Digital Catapult

Libby is lead technologist for AI at Digital Catapult, the UK's advanced digital technology innovation centre, where she works with a multi-disciplinary team to support organisations in building their AI capabilities responsibly. She spent her early career in technology venture capital before returning to university to study machine learning in 2014.

3:30PM-4:15PM Break / All Tracks Poster Session

4:15PM-4:20PM Contributed Talk Track 3 - "Good" isn't good enough

Despite widespread enthusiasm among computer scientists to contribute to “social good,” the field's efforts to promote good lack a rigorous foundation in politics or social change. There is limited discourse regarding what “good” actually entails, and instead a reliance on vague notions of what aspects of society are good or bad. Moreover, the field rarely considers the types of social change that result from algorithmic interventions, instead following a “greedy algorithm” approach of pursuing technology-centric incremental reform at all points. In order to reason well about doing good, computer scientists must adopt language and practices to reason reflexively about political commitments, a praxis that considers the long-term impacts of technological interventions, and an interdisciplinary focus that no longer prioritizes technical considerations as superior to other forms of knowledge.

Ben Green

Harvard

Ben Green is a PhD Candidate in Applied Math at Harvard, an Affiliate at the Berkman Klein Center for Internet & Society at Harvard, and a Research Fellow at the AI Now Institute at NYU. He studies the social and policy impacts of data science, with a focus on algorithmic fairness, municipal governments, and the criminal justice system. His book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019 by MIT Press.

4:20PM-4:50PM Invited Talk Track 1 - Automated Quality Control for a Weather Sensor Network

TAHMO (the Trans-African Hydro-Meteorological Observatory) is a growing network of more than 500 automated weather stations. The eventual goal is to operate 20,000 stations covering all of sub-Saharan Africa and providing ground truth for weather and climate models. Because sensors fail and go out of calibration, some form of quality control is needed to detect bad values and determine when a technician needs to visit a station. We are deploying a three-layer architecture that consists of (a) fitted anomaly detection models, (b) probabilistic diagnosis of broken sensors, and (c) spatial statistics to detect extreme weather events (that may exonerate flagged sensors).
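
Below is a minimal sketch of layer (a), under the simplifying assumption that a single sensor's readings are roughly Gaussian; layers (b) and (c) would add probabilistic diagnosis and spatial cross-checks against neighbouring stations.

```python
import numpy as np

def fit_anomaly_model(history):
    """Layer (a), simplified: fit a Gaussian to one sensor's past readings."""
    return history.mean(), history.std()

def flag_anomalies(readings, mean, std, z_thresh=4.0):
    """Flag readings whose z-score is extreme. Layer (b) would diagnose the
    sensor probabilistically; layer (c) would exonerate flags shared by
    neighbouring stations during genuine extreme weather."""
    return np.abs(readings - mean) / std > z_thresh

# Hypothetical temperature history for one station
rng = np.random.default_rng(1)
history = rng.normal(25.0, 3.0, size=365)
mu, sigma = fit_anomaly_model(history)
print(flag_anomalies(np.array([24.1, 26.3, 55.0]), mu, sigma))
# -> [False False  True]  (55.0 suggests a broken sensor)
```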

Thomas G. Dietterich

Oregon State University

Dr. Dietterich is Distinguished Emeritus Professor of computer science at Oregon State University and currently pursues interdisciplinary research at the boundary of computer science, ecology, and sustainability policy.

4:50PM-5:50PM Panel Discussion Track 1 - AI and Sustainable Development

The focus of this panel is the use of AI for Sustainable Development; it will explore the many opportunities this technology presents to improve lives around the world, as well as address the challenges and barriers to its application. While there is much outstanding work being done to apply AI to such situations, too often this research is not deployed, and there is a disconnect between the research and industry communities and public sector actors. With leading researchers and practitioners from across the academic, public, UN and private sectors, this panel brings a diversity of experience to bear on these important issues.

Fei Fang

Carnegie Mellon University

Facilitator Fei Fang is an Assistant Professor at the Institute for Software Research in the School of Computer Science at Carnegie Mellon University. Before joining CMU, she was a Postdoctoral Fellow at the Center for Research on Computation and Society (CRCS) at Harvard University. She received her Ph.D. from the Department of Computer Science at the University of Southern California in June 2016. Her research lies in the field of artificial intelligence and multi-agent systems, focusing on integrating machine learning with game theory. Her work has been motivated by and applied to security, sustainability, and mobility domains, contributing to the theme of AI for Social Good.

Carla P. Gomes

Cornell University

Carla Gomes is a Professor of Computer Science and the Director of the Institute for Computational Sustainability at Cornell University. Her research area is artificial intelligence with a focus on large-scale constraint-based reasoning, optimization and machine learning. She is noted for her pioneering work in developing computational methods to address challenges in sustainability.

Miguel Luengo-Oroz

UN Global Pulse

Dr. Miguel Luengo-Oroz is the Chief Data Scientist at UN Global Pulse, an innovation initiative of the United Nations Secretary-General. He is the head of the data science teams across the network of Pulse labs in New York, Jakarta & Kampala. Over the last decade, Miguel has built and directed teams bringing data and AI to operations and policy through innovation projects with international organizations, governments, the private sector & academia. He has worked in multiple domains including poverty, food security, refugees & migrants, conflict prevention, human rights, economic indicators, gender, hate speech and climate change.

Thomas G. Dietterich

Oregon State University

Dr. Dietterich is Distinguished Emeritus Professor of computer science at Oregon State University and currently pursues interdisciplinary research at the boundary of computer science, ecology, and sustainability policy.

Julien Cornebise

University College London

Julien Cornebise is an Honorary Associate Professor at University College London. He focuses on putting Machine Learning firmly into the hands of nonprofits, governments, NGOs, and UN agencies: those who actually work on tackling our societies' biggest problems. He built, and until recently was Director of Research of, Element AI's AI for Good team, and head of its London office. Prior to this, Julien was at DeepMind (later acquired by Google) as an early employee, where he led several fundamental research projects used in early demos and fundraising, then co-created its Health Research team. Since leaving DeepMind in 2016, he has been working with Amnesty International, Human Rights Watch, and other actors. Julien holds an MSc in Computer Engineering, an MSc in Mathematical Statistics, and a PhD in Mathematics, specialized in Computational Statistics, from University Paris VI Pierre and Marie Curie and Telecom ParisTech. He received the 2010 Savage Award in Theory and Methods from the International Society for Bayesian Analysis for his PhD work.

5:50PM-6:00PM Open announcement and Best Papers/Posters Award

Best Poster: Track 1 ML and precision public health: Saving mothers and babies from dying in rural India. Kasey Morris (Surgo Foundation); Vincent S. Huang (Surgo Foundation); Mokshada Jain (Surgo Foundation); B.M. Ramesh (University of Manitoba); Hannah Kemp (Surgo Foundation); James Blanchard (University of Manitoba); Shajy Isac (University of Manitoba); Bidyut Sarkar (University of Manitoba); Vikas Gothalwal (University of Manitoba); Vasanthakumar Namasivayam (University of Manitoba); Sema Sgaier (Surgo Foundation).
Best Paper: Track 1 Balancing Competing Objectives for Welfare-Aware Machine Learning with Imperfect Data. Esther Rolf (UC Berkeley); Max Simchowitz (UC Berkeley); Sarah Dean (UC Berkeley); Lydia T. Liu (UC Berkeley); Daniel Bjorkegren (Brown University); Moritz Hardt (UC Berkeley); Joshua Blumenstock (UC Berkeley).
Best Poster: Track 2 Hard Choices in Artificial Intelligence: Addressing Normative Uncertainty through Sociotechnical Commitments. Roel Dobbe (AI Now Institute, New York University); Thomas Krendl Gilbert (UC Berkeley); Yonatan Mintz (Georgia Tech)
Best Paper: Track 2 A Typology of AI Ethics Tools, Methods and Research to Translate Principles into Practices. Jessica Morley (Oxford Internet Institute); Luciano Floridi (Oxford Internet Institute); Libby Kinsey (Digital Catapult); Anat Elhalal (Digital Catapult).
Best Poster: Track 3 The Effects of Competition and Regulation on Error Inequality in Data-driven Markets. Hadi Elzayn (University of Pennsylvania); Benjamin Fish (MSR).
Best Paper: Track 3 "Good" isn't good enough. Ben Green (Harvard University).