As any undergraduate textbook will tell you, “economics is the study of how society allocates scarce resources.” Yet this age-old problem is increasingly framed less as a question of markets, state plans, and economic institutions, and more as one of the availability of data, the development of algorithms, and people’s access to technology. We have known since at least the time of Keynes that technology could bring about an age of abundance in technologically advanced societies, but most people live far from such technological utopias, in both geographical and economic terms. What impact is this shift having on them, and how can we ensure that the development of technology helps solve ‘the problem of scarcity’ for them too?

One increasingly important technology for just about everyone is artificial intelligence (AI), which can improve the allocation of resources by considering vastly more information than human beings can, greatly enhancing the quality of our decision making. The use of data collection and processing in agriculture offers a useful illustration. In one case, the telecommunications company Vodafone launched an initiative offering guidance to farmers in exchange for data about their crops. The Vodafone Farmers’ Club aims to improve the lives of farmers in Ghana who register for the service by providing weather updates, market prices, and the ability to make calls. Using data on farmers and their locations, Vodafone can offer advice that supports better-informed farming: higher yields, lower water usage, and other benefits. This indicates that even in very poor countries, Big Data and algorithms can create significant opportunities to improve the efficiency and security of agriculture and food systems.

In another example, Big Data has been processed with AI and machine learning algorithms to provide useful agricultural insight without needing to collect data from, or rely on, farmers. The start-up Descartes Labs uses satellite images of corn fields, including spectral information that indicates chlorophyll levels, to produce corn yield estimates and offer greater understanding to farmers and the agricultural industry. Machine learning algorithms analyze inputs including frequently captured satellite images and advanced weather predictions, allowing near real-time prediction of crop yield. Ultimately, the ability to deliver such insight without requiring farmers to have access to advanced infrastructure, education, or technology could greatly increase agricultural efficiency.
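
To make the shape of such a pipeline concrete, here is a minimal sketch in Python. It uses NDVI, a standard spectral proxy for chlorophyll, as the yield-predicting feature; the data is synthetic, and the ridge-regression model, feature set, and units are illustrative assumptions rather than a description of Descartes Labs’ actual system.

```python
# Minimal sketch of a satellite-based yield pipeline. All data here is
# synthetic; a real system would ingest frequently captured satellite
# imagery and weather data for actual fields.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: a standard spectral proxy
    for chlorophyll density, computed from red and near-infrared bands."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic training set: per-field mean NDVI at three points in the
# growing season, plus cumulative rainfall, against known past yields.
n_fields = 200
red = rng.uniform(0.05, 0.25, size=(n_fields, 3))
nir = rng.uniform(0.30, 0.70, size=(n_fields, 3))
rainfall = rng.uniform(200, 600, size=(n_fields, 1))

X = np.hstack([ndvi(red, nir), rainfall])
true_weights = np.array([2.0, 3.5, 5.0, 0.004])   # invented ground truth
yield_t_per_ha = X @ true_weights + rng.normal(0, 0.2, n_fields)

model = Ridge(alpha=1.0).fit(X, yield_t_per_ha)

# At inference time, the same features computed from fresh imagery give a
# near real-time yield estimate for a field the model has not seen.
new_field = np.array([[0.45, 0.62, 0.71, 410.0]])
print(f"Estimated yield: {model.predict(new_field)[0]:.2f} t/ha")
```

The essential point is that everything the model needs comes from remote sensing: the farmer contributes no data and needs no equipment beyond whatever channel delivers the resulting advice.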

Alongside such practical advances, AI could also shed light on theoretical issues such as how best to manage common resources like grazing land and fresh water. A DeepMind project models the appropriation of such ‘common-pool resources’ and offers a new perspective that goes beyond existing economic and political solutions, indicating how management solutions relate to inequality. Instead of relying on non-cooperative game theory, in which agents often fail to allocate resources efficiently, DeepMind used deep reinforcement learning to show that trial-and-error learning in common-pool resource management can produce novel, socially positive cooperative solutions.
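
The sketch below gives a toy version of that setup: several agents repeatedly harvest from a shared, regenerating stock and learn by trial and error. Tabular Q-learning stands in for the deep reinforcement learning DeepMind actually used, and all parameters are invented for illustration.

```python
# Toy common-pool resource game with trial-and-error learners: a shared
# stock regenerates logistically, and each agent repeatedly chooses to
# harvest lightly or heavily. Over-harvesting depletes the stock, which
# hurts everyone; the question is whether restraint is learned.
import random

N_AGENTS, EPISODES, STEPS = 4, 500, 50
ACTIONS = [1, 3]                      # light vs. heavy harvest
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1    # learning rate, discount, exploration

def regenerate(stock, cap=100.0, rate=0.25):
    """Logistic regrowth of the shared resource after each round."""
    return min(cap, stock + rate * stock * (1 - stock / cap))

# One Q-table per agent; state = coarse stock level (0-9).
Q = [[[0.0, 0.0] for _ in range(10)] for _ in range(N_AGENTS)]

for _ in range(EPISODES):
    stock = 50.0
    for _ in range(STEPS):
        state = min(9, int(stock / 10))
        # Epsilon-greedy action choice for each agent.
        choices = []
        for q in Q:
            if random.random() < EPS:
                choices.append(random.randrange(2))
            else:
                choices.append(0 if q[state][0] >= q[state][1] else 1)
        for agent, a in enumerate(choices):
            take = min(ACTIONS[a], stock)
            stock -= take
            reward = take if stock > 0 else take - 10  # depletion penalty
            nxt = min(9, int(stock / 10))
            q = Q[agent][state]
            q[a] += ALPHA * (reward + GAMMA * max(Q[agent][nxt]) - q[a])
        stock = regenerate(stock)

# After training, inspect whether agents learned restraint at low stock.
for agent in range(N_AGENTS):
    prefers_light = Q[agent][1][0] > Q[agent][1][1]
    print(f"agent {agent} prefers light harvest at low stock: {prefers_light}")
```

Even in this stripped-down form, the design choice matters: no rule forbids over-harvesting, yet learners can discover that restraint near depletion pays off, which is the kind of emergent cooperative behavior the DeepMind study examined at scale.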

However, while AI presents promising opportunities to improve the distribution of and access to resources, it is a dual-use technology that carries serious risks, and AI systems could in fact make the problem of scarcity worse: the technology could be used to manipulate the very resources it is intended to manage. For example, developers could use data and machine learning algorithms to justify the exploitation of a region’s land. If a system indicates that an ‘optimal’ distribution is achieved by growing economically lucrative crops rather than sustainable, nutritious foods that could help mitigate hunger, disparities in wealth and well-being could dramatically increase. The development of AI for resource management must therefore be carefully considered.

One crucial issue is bias in data and algorithms. Because algorithms learn from historical data, their outputs reflect past injustices and mistaken beliefs, and may be unable to overcome them. While AI systems have the potential to produce better outcomes, there is also the risk that they will benefit some interests more than others, and hence fail to improve the conditions of the worst off. Machine learning algorithms can be trained on data that carries social and political bias, and in resource management and distribution they could be steered, by data and by design, toward patterns that suggest allocations or guidance that are not broadly beneficial, even if this was never the intention of their designers. For instance, historical data may reflect a reallocation of agricultural resources that benefited the most powerful individuals in a community, leading to guidance that perpetuates a disproportionate distribution. If these community leaders can influence how machine learning algorithms develop over time, the systems could become prejudiced toward individual interests: data aligned with those interests could be supplied in place of data that would lead to more equitable outcomes.
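
A small, fully synthetic example shows how such bias can leak into a model even when the sensitive attribute is never given to it directly; the feature names, numbers, and scenario below are invented for illustration.

```python
# Illustration of bias propagation: a model trained on past allocation
# records in which well-connected farmers received more water learns to
# recommend the same skew, even though 'connected' is never an input.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000

farm_size_ha = rng.uniform(1, 20, n)
connected = rng.random(n) < 0.2        # politically connected minority
# Historical allocations: driven partly by need, partly by power.
water_alloc = 100 * farm_size_ha + 800 * connected + rng.normal(0, 50, n)

# 'connected' is omitted from the features, but it correlates with a
# proxy variable (say, proximity to the district office), so the bias
# leaks through anyway.
proximity = connected * 0.8 + rng.random(n) * 0.2
X = np.column_stack([farm_size_ha, proximity])

model = LinearRegression().fit(X, water_alloc)

# Two identical farms, differing only in the proxy for connectedness.
same_size = np.array([[10.0, 0.9],    # connected-looking farm
                      [10.0, 0.1]])   # identical farm, unconnected
rec = model.predict(same_size)
print(f"Recommended allocation, connected vs. not: {rec[0]:.0f} vs {rec[1]:.0f}")
```

The model’s recommendations reproduce the historical skew without anyone having intended it, which is exactly why removing an obviously sensitive field from the data is not enough to guarantee fair guidance.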

What can be done to minimize bias? Development must be collaborative and cooperative, involving a broad range of perspectives. Local stakeholders must be involved from the earliest stages, and development must incorporate insight from those who will use these systems, or merely be affected by them, as well as guidance from NGOs, international organizations, academics, and policy- and law-makers. While data inputs and machine learning algorithms will offer evolving, and presumably improved, guidance over time, people must continually evaluate systems for effectiveness and for operation in the interests of all relevant parties. Further, there must be mechanisms in place for AI systems to be modified in the future, allowing people to train these systems to accurately reflect and respect the existing social and economic structures, values, and practices of their cultures, as well as the future objectives of increased equality and sustainability.

However, even if AI systems can avoid bias and offer solutions that genuinely benefit the worst off, they must also be trustworthy if they are to function effectively: people must be willing to engage with them. It is difficult to accept guidance unless we believe it was derived legitimately and credibly, so suggestions a system makes about resource management and distribution will likely be ignored if the basis of its output is not made explicit, as is often the case with AI. Neural networks, for example, operate implicitly and opaquely, offering no clear reasons for their outputs. Further, as AI systems are being developed in societies historically associated with colonialism, there is the potential for past exploitation to continue. Ultimately, if people cannot see the reasoning behind the suggestions an AI system makes, they will not find it trustworthy, and will not accept the guidance it offers.

How can we make AI systems more trustworthy? Onora O’Neill suggests that trustworthiness rests on three features: reliability, competency, and honesty. As should be evident from the above, such systems can be competent and reliable. Honesty, however, is much more challenging. If a system is to be honest, or even held to account, it must be meaningfully transparent: its guidance must be presented in a way that emphasizes explainability. Researchers are beginning to explore algorithms that explain how systems make decisions, requiring evidence-based justification and offering natural-language explanations.
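
As a minimal illustration of what evidence-based, natural-language justification might look like, the sketch below decomposes a simple linear advisory model’s recommendation into per-feature contributions and renders them in plain language. The model, features, and weights are assumptions made for the example; research systems apply similar attribution ideas (SHAP-style methods, for instance) to far more opaque models.

```python
# Sketch of an 'explanation layer': each recommendation is broken into
# per-feature contributions and reported as plain-language evidence.
import numpy as np

FEATURES = ["soil moisture (%)", "seasonal rainfall (mm)", "market price (USD/t)"]
weights = np.array([0.8, 0.05, 1.2])   # assumed trained coefficients
bias = 2.0

def explain(x):
    """Return a recommendation plus its ranked, evidence-based justification."""
    contributions = weights * x
    score = bias + contributions.sum()
    lines = [f"Recommended planting score: {score:.1f}"]
    # Rank evidence by absolute contribution, largest first.
    for i in np.argsort(-np.abs(contributions)):
        lines.append(
            f"  - {FEATURES[i]} = {x[i]:g} contributed {contributions[i]:+.1f}"
        )
    return "\n".join(lines)

print(explain(np.array([35.0, 420.0, 1.8])))
```

A farmer reading such output can see which evidence drove the advice and challenge it if, say, the rainfall figure does not match local experience, which is precisely the accountability that a bare numeric score cannot provide.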

It should be noted, however, that while explainability is necessary for transparency, it is not sufficient for honesty. Honesty requires determining what people want to know and what it is important that they know, not merely stating the truth. For AI systems to be ‘honest,’ developers must seek to understand how the systems will be perceived by the communities using them, and those communities must be educated not only to use the systems but also to continue developing them, shaping the guidance offered to reflect current values as well as the future objective of greater abundance and well-being.

Ultimately, while AI has enormous positive potential to address the problem of resource scarcity, we cannot be led by utopianism. Just as transparency is necessary for developing trustworthy systems, we need an honest and thoughtful weighing of the benefits of these systems against their risks. Even though it may in some ways slow development, it is important to engage with what people actually want and find useful, even when that is not what we think is best. By considering risks and benefits transparently and honestly, we can foster trust throughout the development of AI for resource management, improving outcomes and ensuring that the technology contributes to a better future.
