What part of current AI research can have an impact on social good?
AI is hailed with ever increasing frequency in the media as a potential panacea for the ills of the world. Administrators and executives in both the public and private sectors are likewise sanguine about the societal benefits that the technology could bring in its wake. In a January 2018 interview, Google CEO Sundar Pichai captured the prevailing mood by declaring that AI’s impact on humanity could end up being “more profound than […] electricity or fire”.[1] Readers will of course also be familiar with the mirror image of this optimism: the apocalyptic visions and warnings issued by commentators who see in AI the destabilizing threat long portrayed in fiction. Caught between these competing narratives, the reality is far more nuanced, as the field is still in its early stages. While AI has yet to become a silver bullet, the last few years have demonstrated its worth as an avenue of research for tackling the most dire and pressing problems we face today. The current state of the technology already presents clear and immediate applications for challenges traditionally considered beyond the reach of commercial or governmental solutions: a December 2018 McKinsey report identified as many as 160 social impact use cases for AI, spread across multiple domains related to the U.N. Sustainable Development Goals.[2] In exploring these issues, the presentations and discussions at the AI for Social Good workshop shed light on some of the social impact research being conducted today and on the diversity of perspectives joining together with the goal of using AI to improve lives. The event was organized as part of the 2018 Conference on Neural Information Processing Systems (NeurIPS), the largest gathering of machine-learning enthusiasts and specialists in the world and arguably the clearest annual snapshot of AI’s overall progress.
Even with the advent of modern production techniques, subsistence farmers across the world still find themselves at the mercy of the environment. Given that a single infection can wipe out a community’s livelihood at a stroke, Dr. Ernest Mwebaze showed what a boon AI can be for agriculture through his efforts to automate the detection of viral diseases in cassava crops using widespread low-cost mobile technology. Anirudh Koul, the creator of the Seeing AI app, demonstrated how smartphones can act as virtual assistants that help the visually impaired navigate their environment and interact with the world more effectively. Though unrelated at first glance, these two projects illustrate a fundamental aspect of AI’s potential for social impact: it can empower people, giving them the information they need to face the challenges in their lives and to regain their autonomy and dignity in the face of obstacles. Human evolution has been shaped by tool use, but with AI the definition of what a tool can be takes on multiple dimensions. Human-computer interaction can not only multiply the range of possible actions but also give individuals a way to contribute to their community by sharing crowd-sourced information.
If individuals can benefit, so can institutions on a larger scale. As the breadth and timeliness of information increases, planners and administrators can respond more easily to daily needs and crises. Girmaw Abebe Tadesse’s research showed how AI could bypass the constraints of operating essential infrastructure in remote locations by predicting water pump failures in rural Kenya without relying on expensive sensors or straining local government budgets. Assistant professor Fei Fang’s proposed algorithms addressed the opposite end of the planner’s woes: how to translate already available information into usable solutions, in this case combining human input and sparse data to formulate a coherent and efficient response to organized poachers. In both cases, AI can ensure not only that officials take the greatest possible advantage of available resources, but also that they get the information they need to make decisions with clarity and speed.
Much as AlphaGo revealed patterns that expert human players never suspected, AI can find otherwise hidden correlations in data and put them to work for social good. As Priya Donti’s research on consumption patterns showed, a surprising amount of information about private electricity use can be gleaned from publicly available data. Public policy can thus be informed by ever more precise fine-tuning that is but a few algorithms away. Information that lies dormant, whether for lack of resources to process it or because of the sheer magnitude of the work involved, can be put to use by governments at the local and national level. As Aniket Kesari and Raesetje Sefala showed, AI can build databases out of thousands of hours of raw traffic footage, giving urban planners an opening to tailor their solutions to issues as they arise over time. Communities can also use AI to direct their efforts in real time: Arbel Vigodny’s Zzapp initiative, for instance, offers anti-malaria campaigners a tool to monitor the progress of field workers and continually adjust the overall pattern of action based on actual data from the ground.
While AI can assist people in their day-to-day lives, it can also help specialists make life-saving decisions. As evidenced by Chen Zhang’s work on fetal alcohol spectrum disorder and Arijit Patra’s work on fetal echocardiography, workers in the medical field can rely on AI to perform diagnoses with fewer resources at their disposal; in doing so, they reduce the chance of vulnerable individuals slipping through the cracks and failing to receive treatment at a crucial period. At-risk patients can also find in AI an ally for tackling long-term health problems. Ellie Gordon’s startup, Behaivior, for instance, uses pattern recognition and wearable technology to provide real-time assistance against addiction, shifting the strategy from retrospective responses to a relapse toward preventing the problem in the first place. AI can thus not only provide medical professionals with better tools, lower error rates, and higher-quality information, but also change the relationship between patients and doctors without compromising the latter’s autonomy.
The integration of AI into the skilled labor pool can of course invite uncertainty about the prospect of replacement by machines. Paradoxically, the use of AI itself can allay those fears, as evidenced by Logan Graham’s work on a framework for determining which tasks can be automated and which cannot easily be performed by an algorithm. Just as administrators can put data to multiple uses through machine learning, AI can potentially inform individuals about which skills are in demand and how they should tailor their own educational path.
Roadmaps for the near future
The potential of AI to solve major societal issues presents a natural, almost obvious conundrum: if the technology can change lives and communities in dramatic ways, how do we make sure that it does so for the greatest benefit of all? Workshop panelists gathered to discuss the impact of AI in the near future and the problems it could leave in its wake, both on its own and in conjunction with the usual methods of governance. The current iteration of AI remains, in a sense, a direct reflection of its creators, since its functioning is, at least initially, based on the data fed to it. The discussion on bias and fairness in AI highlighted the dangers of introducing human bias into datasets. A programmer can influence the way an algorithm perceives the world and add his or her own slant to the data by omission, error, or even deliberate action. The datasets themselves, such as facial-recognition databases, may lack richness and variety in the first place. On the panel on economic inequality, participants debated the consequences of using AI to drive or inform public policy, or as a decision-making tool in the private sector. Should machine-learning results become the main support determining how the structure of society evolves at the macro or micro level, we could be left in a situation where marginalized groups suffer old forms of prejudice in magnified form, as the AI passes judgments based on incomplete or biased premises. Ethical considerations increasingly look like a necessity for AI design if the goal of increased social good is to be achieved, lest the algorithms ironically end up amplifying the worst aspects of human decisions.
The use of AI for widespread social good raises the question of how much access to the technology, and how much leeway to act, will be available to various stakeholders. Panelists debating the impact of AI on civil society addressed the difficulty of keeping institutions at all levels on the same page, from community organizers to the largest think tanks. Citizens must be educated on digital matters so that they can engage effectively in an AI-powered world, but they must also be provided with affordable means of telecommunication that make AI useful on the ground. Legal frameworks must be built around the emerging complexity of algorithms and, at the same time, explained to citizens so that they can safeguard their rights, especially if they come from historically marginalized communities. Diplomats and high-level officials must grapple with the consequences of AI research competition between nation-states, as new tenets of international relations are drafted in response to the ripple effects that one country’s policy can have on the whole. Should AI become a near-universal influence on the complex flows that characterize a modern society, non-technical professions will have to master its use cases and develop an intuitive understanding of the technology’s parameters in order to meet their goals and responsibilities.
By its very nature, AI research depends heavily on the interplay between corporate and academic interests, as discussed by the last panel, on the organizational practice of “good AI”. It remains unclear which group holds the most cards. On one hand, academics are responsible for breakthroughs, and their research decisions determine what is feasible; governmental funding can grant them autonomy. On the other, private companies can set the pace and goals of research through financial incentives of their own. Executives recognize the direct and indirect advantages of promoting sustainable AI practices, fostering an unconstrained research atmosphere, and addressing the social issues their companies’ practices can bring about. Conversely, academics understand that the private sector is often a necessary conduit through which innovation springs forth and their recommendations are actually implemented at scale. The development of AI systems that benefit society will require cooperation between both spheres.
Finally, AI does not concern only economics, problem solving, and civic responsibility; it also has a role to play in how the culture of the near future will thrive. We do not have to choose between a human touch and new forms of algorithmic creation: the two are not mutually exclusive and can, on the contrary, enrich each other. It is impossible to know exactly what members of the audience felt as they listened to the artificial melody developed by the Google Brain team and to Yo-Yo Ma’s cello demonstration, but judging from the collective mood they caught a convincing glimpse of what AI could mean for art and the human experience. The technology can help introduce new concepts, for instance by bringing artists and collectives together with AI researchers to find fresh approaches to their work, or by making entirely new forms of art possible. And despite the novelty it introduces, AI can also help preserve indigenous culture and underrepresented forms of cultural expression.
Given the range of experiences, there is no single overarching lesson to take away from the AI for Social Good workshop, other than perhaps a realization of AI’s potential to affect human life in all its forms. Instead, we are made aware of our responsibility to stay informed and to consider unfamiliar avenues of thought. Recognizing the benefits and pitfalls that characterize the technology is the first step, followed perhaps by active participation in the debates that are likely to intensify as we come to grips with the new methods and dangers awaiting us in the pursuit of a better society.
Virgile Sylvain, Jan. 2019
[1] Tony Romm, Drew Harwell, and Craig Timberg, ‘Google CEO Sundar Pichai: Fears about Artificial Intelligence Are “Very Legitimate,” He Says in Post Interview’, Washington Post, 12 December 2018 <https://www.washingtonpost.com/technology/2018/12/12/google-ceo-sundar-pichai-fears-about-artificial-intelligence-are-very-legitimate-he-says-post-interview/>.
[2] McKinsey Global Institute, ‘Notes from the AI Frontier: Applying AI for Social Good’, discussion paper, December 2018.