All Models are wrong, but some are useful: this famous dictum has long encapsulated the limits of statistics, and today it points to those of machine learning. Mainstream AI discourse stresses the need for unbiased data and algorithms to ensure fair representation, but overlooks the intrinsic limits of any statistical technique. Machine learning is a statistical model of the world, and the way it operates in world-making should be questioned, also on statistical grounds.

All Models are political. The statistical models of machine learning have silently become a new, ubiquitous Kulturtechnik through which the perception of the world is increasingly mediated and jobs are automated. From face recognition and self-driving cars to automated decision-making, AI constructs, fosters and controls statistical models of society. These statistical models automate labour and amplify traditional forms of power, while also furthering gender, race, and class discrimination.

All Models are generative. AI is not bringing a new dark age but an age of shallow rationality, in which the traditional episteme of causation and explanation is replaced by one of automated correlations. These correlations have dramatic implications when they establish the illusion of predicting crime, credit scores and abnormal behaviours. The algorithms of machine learning also project their own forms of statistical hallucination, for instance when they add imaginary cancer cells to synthetic data, conflating scientific discovery and fabrication.

All Models evolve and decay. Gigantic online ‘model zoos’ show how machine learning models continuously breed new variations of themselves, improving the state of the art while at the same time accumulating technical and political debt. As in the ImageNet case, bugs, biases and misrepresentations that seem long fixed are passed on from ancient models and datasets and coagulate, becoming structural and difficult for new models to unlearn.

All Models break. During the COVID crisis, AI models trained on normal social behaviours started to perform badly, unable to predict as reliably as before. The social norm changed and the models misfired. AI models are technically conceived to adapt, but they are often fragile when facing new situations and historical anomalies. “Breaking the model” is the experimental method of critical AI studies.
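As a small aside to make this mechanism concrete, the sketch below (not part of the original text, using purely synthetic data and hypothetical names) shows one way a model “breaks”: a classifier trained on a past regime of behaviour keeps its old decision boundary and misfires once the underlying distribution shifts.

```python
# A minimal, hypothetical sketch of "breaking the model" via distribution shift.
# Synthetic data only: a classifier fitted on one regime of "behaviour" is
# re-scored after the underlying statistics move, and its accuracy collapses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Two classes separated along a synthetic 'behaviour' feature, offset by `shift`."""
    class_0 = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    class_1 = rng.normal(loc=shift + 2.0, scale=1.0, size=(n, 2))
    X = np.vstack([class_0, class_1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on "normal" social behaviour (the historical regime).
X_train, y_train = make_data(1000, shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Test data drawn from the same regime: the model behaves as expected.
X_same, y_same = make_data(1000, shift=0.0)
print("in-distribution accuracy:", accuracy_score(y_same, model.predict(X_same)))

# The norm changes (the features drift): the same model misfires,
# dropping to roughly chance-level accuracy.
X_drift, y_drift = make_data(1000, shift=4.0)
print("post-shift accuracy:", accuracy_score(y_drift, model.predict(X_drift)))
```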

All Models are human. AI models still rely on a massive amount of hidden human labour to be free of biases and failures. Search engines and social media platforms, for instance, outsource the censoring of adult content that escapes their filters to ghost workers from the Global South. The interpretability of AI black boxes is not a technical issue: what is black-boxed is actually a vast army of workers who protect the AI’s reputation and credibility.

All Models require interpretation. The interpretability of AI black boxes has so far been perceived exclusively as a technical challenge. Critical AI studies redefines this technical challenge as a humanist and political endeavour. Interpretability is a critical rather than a purely technical task: it implies disentangling artificial neurons as much as it implies disentangling historical practices of technological invention, social control and labour automation.

All Models is a research environment to question knowledge models in the broader sense. Statistical models did not emerge with machine learning and have long influenced culture and politics. For instance, climate science (and our perception of global warming) is mediated by mathematical models. Climate models are historical artefacts that are tested and debated within the scientific community and, today, beyond it. Machine learning models, by contrast, are opaque and often inaccessible to community debate.

All Models is an international mailing list for critical AI studies, created to share independent research from disciplines such as computer science, media studies, history of science and technology, political economy, philosophy, and digital humanities. It initiates a process of community building that extends previous decades of network cultures and critical theory, bringing together scholars, artists, coders and activists to disseminate and discuss the newest developments in AI.

All Models questions all models!

All Models was initiated in July 2020 by an international community of scholars, activists and coders as an alternative format to a series of workshops on AI models that did not take place due to the coronavirus pandemic. It is hosted by the research group KIM at the Karlsruhe University of Arts and Design.

© 2020 allmodels.ai | KIM HfG Karlsruhe