From SDGs to GPTs: Trading Solidarity for AI Automation?

By Ron Salaj

In this historical moment of a deepening global polycrisis, artificial intelligence (AI) is emerging as a new knowledge regime that promises to transform the world. This promise was amplified by the arrival of ChatGPT and other generative AI systems in 2022, which introduced a shift in how AI interacts with people, society, and institutions.

However, as Dan McQuillan argues in his book “Resisting AI”, AI is more than a set of machine learning (ML) techniques: “AI is never separate from the assembly of institutional arrangements that need to be in place for it to make an impact in society.” AI is a layered, interdependent arrangement of technology, institutions, and ideology—what McQuillan terms an ‘apparatus.’ This apparatus does not exist in isolation from the social world—it intersects with, and often reinforces, the very contradictions and crises that define our post-normal times.

Therefore, when speaking about AI we should acknowledge that AI, like other technologies, is not neutral: it embodies social relations, values, biases, politics, and power dynamics. AI is a techno-political artefact, a socio-technical assemblage of techniques, motivations, and ideologies. This is an important point for international development and humanitarian actors and practitioners to consider. Nor do AI systems emerge in a vacuum—their development and deployment are deeply embedded in, and shaped by, historical, cultural, and political-ideological contexts and beliefs. For instance, as Matteo Pasquinelli historicizes in his book “The Eye of the Master”, Friedrich Hayek’s ideas on spontaneous order and market self-organization influenced the early development of connectionist AI. Frank Rosenblatt, who in 1958 developed the Perceptron—the first operative artificial neural network for pattern recognition—explicitly cited the work of Donald Hebb and Hayek as foundational.

The Incompatibility of AI and Humanitarian Principles

It is in this context that Langdon Winner’s seminal question, “Do artifacts have politics?”, finds renewed relevance. Winner argues that technical arrangements can perpetuate or challenge social orders, often producing results that serve some interests while disadvantaging others. In development and humanitarian efforts, where AI is frequently hailed as a solution to complex problems, such critiques take on heightened urgency. The politics embedded in AI collide directly with the foundational humanitarian principles of humanity, impartiality, neutrality, and independence. In practice, AI systems are predominantly owned and governed by a handful of private Big Tech companies, whose business models often rely on surveillance, data extraction, and market-driven logic—structures fundamentally incompatible with humanitarian ethics. Moreover, AI systems frequently dehumanize, obscure accountability, and perpetuate systemic biases. These are not unintended glitches but symptoms of broader dynamics—corporate capture of digital development, algorithmic opacity, and digital colonialism—as recent research documents. Such outcomes are structural features of how AI is conceptualized, financed, and implemented. These dynamics therefore demand urgent, sustained, and critical interrogation by humanitarian and development actors—not as a technical audit, but as a political and ethical reckoning over the direction of technological futures.

Despite being portrayed as revolutionary, AI developments frequently fail to translate into measurable progress towards the SDGs. The Sustainable Development Goals—arguably the only global consensus on what ‘social good’ could look like—are, for the first time, stalling and even regressing. According to the SDG Reports from 2022, 2023, and 2024, cascading and interlinked crises such as COVID-19, climate change, and conflict have reversed years of progress. The 2022 report highlighted setbacks in eradicating poverty and hunger, improving health and education, and ensuring basic services. By 2023, approximately half of the 140 measurable targets showed moderate or severe deviation from their intended trajectories; worse still, over 30 percent of them showed no progress or had regressed below their 2015 baselines. All this despite the acceleration of AI and other emerging technologies. Is this not the ultimate paradox of paradoxes: the promise of innovation grows exponentially, yet the metrics of human and planetary well-being falter or decline?

AI as the Paradox of Paradoxes

The perils—and paradoxes—of AI are starkly illustrated by its simultaneous deployment across vastly different spheres of power and vulnerability. Take OpenAI, a company whose technology is increasingly used by development and humanitarian agencies, yet is also increasingly intertwined with military interests. In 2024, OpenAI, in partnership with Microsoft, was contracted by the United States Africa Command (AFRICOM) to provide AI capabilities for defense purposes—reinforcing the military-technological nexus. At the same time, OpenAI’s models are being used by the International Rescue Committee to develop AI-powered education tools in humanitarian settings. This duality exemplifies the ‘paradox of paradoxes’: the same AI infrastructure supports both war-making and humanitarian aid. Such contradictions are not peripheral—they lie at the heart of AI’s techno-political apparatus. They challenge our assumptions about neutrality, ethics, and solidarity in technological interventions, and compel development actors to interrogate who controls AI, who benefits, and who is left behind.

Another critical risk today is the seductive—even hypersuasive—power of retrieval-augmented generation (RAG) systems, which pair a generative model with a search step over a curated document collection. These tools promise NGOs and development and humanitarian agencies rapid results by swiftly synthesizing large volumes of data and generating polished outputs. However, their ease of use masks deeper problems: data inaccuracies, biases in training materials, and an underlying disregard for contextual nuance and the lived experiences of people. Organizations risk falling into a “RAG trap,” seduced by convenience and cost-efficiency while ignoring the critical role of NGOs in fostering democratic processes, solidarity, and social justice. One such example is Save the Children’s generative AI-powered chatbot, “Ask Save the Children,” designed to provide child protection guidance by retrieving and generating information from curated corpora.
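To make the pattern concrete, here is a minimal sketch of how such a RAG pipeline typically works: score the passages in a curated corpus against a user’s question, keep the best matches, and hand them to a generative model as context. The toy corpus, the bag-of-words scoring, and the generate_answer() stub are illustrative assumptions for this sketch, not a description of “Ask Save the Children” or any other deployed system.

```python
# Minimal, self-contained sketch of a retrieval-augmented generation (RAG)
# pipeline. All names and data here are illustrative, not any real deployment.

import math
import re
from collections import Counter

# A tiny curated corpus stands in for an organization's vetted guidance documents.
CORPUS = [
    "Children separated from caregivers should be registered and referred to family tracing services.",
    "Psychological first aid focuses on listening, comfort, and connecting people with basic support.",
    "Report suspected child abuse to the designated child protection focal point immediately.",
]

def tokenize(text: str) -> Counter:
    """Bag-of-words term counts; production systems use dense vector embeddings instead."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus passages by similarity to the query and keep the top k."""
    q = tokenize(query)
    return sorted(CORPUS, key=lambda doc: cosine(q, tokenize(doc)), reverse=True)[:k]

def generate_answer(query: str, passages: list[str]) -> str:
    """Stub for the generation step: a real system would send the retrieved
    passages plus the query to a large language model here."""
    return f"Q: {query}\nGuidance found: " + " ".join(passages)

if __name__ == "__main__":
    question = "What should I do if I suspect a child is being abused?"
    print(generate_answer(question, retrieve(question)))
```

Even this toy version makes the trap visible: the system can only recombine whatever sits in its fixed corpus, and the retrieval step silently decides which guidance a user ever sees, which is exactly the rigidity discussed below.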

While such tools may offer support to frontline workers, parents, and children, they also embody a dilemma: decisions and guidance are increasingly delegated to algorithmic agents trained on fixed datasets. Moreover, like many RAG-based systems, the tool lacks transparent mechanisms for handling nuance, evolving knowledge, and the power differentials between users and system designers. The problem intensifies when such chatbots are envisioned for high-risk, high-vulnerability contexts such as natural disasters or conflict zones. In these settings, where the stakes are highest and miscommunication can have grave consequences, the risks of depending on rigid, pre-trained AI systems are amplified. Such deployments demand even greater scrutiny of AI’s appropriateness, its adaptability to volatile and diverse environments, and its alignment with humanitarian principles.

Collective Human Intelligence, Not Artificial Intelligence

Still, there are cases where AI, particularly machine learning (ML), has been used in a supportive and responsible way—when it is positioned not as a panacea but as one tool among many, embedded in broader social efforts led by people on the ground. The work of the Distributed AI Research Institute (DAIR) in South Africa is a compelling example. Its study of spatial apartheid combined high-resolution satellite imagery and demographic data with ML tools to expose patterns of entrenched racial and economic segregation. Crucially, DAIR worked with local graduate students who brought vital local knowledge and lived experience to the task of identifying townships, ensuring the accuracy and relevance of the analysis. Even here, it is important to note that technology alone—no matter how sophisticated or carefully applied—cannot solve structural injustice. What it can do, at best, is expose hidden patterns and amplify the evidence needed to demand change. Turning that evidence into transformation requires political will, sustained advocacy, and collective action. ML might reveal inequality, but only people—through politics—can address and resolve it.

To move toward that future, we must resist the gravitational pull of AI hype. The development and humanitarian sectors should not surrender their mandates to narrow metrics of innovation, but reclaim the language of democracy and solidarity—values rooted in collective human intelligence, not artificial intelligence. We need autonomy, not automation; agency, not alienation; collaboration, not isolation; dignity, not domination; and ethics, not efficiency. These principles are not just poetic alternatives—they are necessary anchors in an age of runaway techno-utopianism. Above all, we need technologies of humility—disciplined methods that recognize the partiality of scientific knowledge and help us act under conditions of uncertainty. Such ‘technologies’ encourage reflection on ambiguity and complexity, prompt us to ask better questions, and direct us toward neglected epistemologies: ways of knowing grounded in lived experience, diverse cultural contexts, and ethical frameworks that do not default to dominant or exclusionary norms but remain open to historically marginalized worldviews.

Ron Salaj currently serves as the Director of Research and Strategy at Impactskills. He is also a member of several expert groups and committees at the Council of Europe, providing policy and educational advice on issues related to artificial intelligence, education, anti-discrimination, and youth. Additionally, he is a member of the Board of Ethics on New Emerging Technologies for the Municipality of Turin. Previously, he served as a Research Fellow at the University of Turin, co-founded UNICEF’s first Innovations Lab Kosovo, and co-instigated a youth-led citizen science movement investigating air pollution in Kosovo. His most recent publication is the research report “Artificial Intelligence in Development and Humanitarian Work: Promises, Paradoxes, and Perils”.

Image: Michael Dziedzic on Unsplash
