AI Futures? A Manifesto for Social Justice

AI is not neutral. It reflects and reinforces systems of power, control, and inequality. The harms it produces, and the unevenly distributed benefits it delivers, call for an alternative framework, rooted in social justice, to guide how AI is developed and deployed. This manifesto arises from collaborative research projects involving groups critically exploring the impacts of AI.[1] It will continue to evolve through dialogue and action, and welcomes feedback and revision. Building a just AI, if at all possible, demands collective engagement with its opaque operations and unequal effects. The manifesto sets out values and commitments toward reclaiming AI to imagine alternative futures.

1. Reframe AI’s Purpose & Values

Why does AI exist? What are its aims and values, and whose interests are served? AI must not concentrate power in the hands of corporations and governments. It should not reduce social problems to technical fixes, or be driven by maximizing profit and efficiency. Instead, AI’s purpose must be redefined, guided by decolonial values of justice, care, and solidarity.

2. Knowledge & Power – Epistemic Justice

AI development relies on extractive practices, taking data, labour, knowledge, and creativity without proper recognition. Inclusion is not enough if it merely feeds knowledge and data into unjust systems. AI must stop reinforcing colonial knowledge hierarchies and instead acknowledge multiple ways of being and knowing. Diverse knowledge systems, including Indigenous, Black, and Global South epistemologies, must be respected and protected.

3. Redesign AI

AI needs to move away from large-scale, all-encompassing ‘frontier’ models, trained through indiscriminate mass data harvesting, that reproduce colonial practices of domination. Instead, curated datasets should be used to develop ‘local’ AI systems rooted in non-extractive inclusivity and safety. This means co-designing AI from the ground up, respecting the experiences, cultures, and histories of minoritized groups.

4. AI for ‘the Commons’

AI systems must ensure that data and infrastructure are part of the commons, managed collectively on behalf of society rather than corporate interests. This requires creating consent-based data practices that resist extraction and misuse. Trust in AI requires practices grounded in community principles, so that systems remain accessible and equitable.

5. Accountability

Institutions must explain why an AI system is being used, who is responsible for it, and how it makes decisions. AI needs to be regulated and evaluated for its systemic biases and limitations, with accountability for the harms and inequities it reinforces. AI should be assessed by its outcomes – the changes it creates in advancing social justice.

6. Radical Safety and Refusal

Safety goes beyond placing “guardrails” on AI systems or auditing them. It can mean refusing to build AI that entrenches surveillance, structural discrimination, or punitive control. Radical safety applies to every element of the AI ecosystem, from development to implementation. To ensure that unforeseen consequences and structural injustices are addressed, AI systems should be developed under community oversight.

7. Environmental Responsibility

AI development is intensifying climate injustice and environmental degradation. Its infrastructures (e.g. data centres) must be made sustainable, with energy use and resource extraction made clearly accountable. Community-led practices can inform how AI engages with environmental governance, including the refusal of systems that offload ecological harms onto others.

8. Liberatory Futures

Technology is not merely a tool but shapes social relations, futures, and possibilities. AI technologies are not inevitable and must be challenged by creating spaces and practices where marginalized groups can shape their own futures. These spaces become sites of resistance, refusal, and reimagination – where technology is claimed as a practice of liberation.


  1. The manifesto has been developed through the funded projects Inclusive Futures? Radical Ethics & Transformative Justice for Responsible AI (AHRC BRAID) and Re-Imagining AI with Afrofuturist Speculative Design (ESRC Digital Good Network), led by Sanjay Sharma (University of Warwick) in collaboration with BLAST Fest (Anita Shervington) and Diverse AI (Toju Duke). It has been shaped by the contributions of community participants, and inspired by scholars and activists working towards just technological futures. 
