Sociology of AI

An Introduction to a Very New Field: "Neuland" (Uncharted Territory) for All of Us

What would Michel Foucault say about AI & Society?


Read through Foucault, AI is power/knowledge in motion: classification machines that help produce the very subjects they claim merely to detect. Governmentality shifts as optimization logics quietly govern bodies, cities, and selves.

Foucault’s question is not “who has power?” but “how does power work through practices, discourses, and apparatuses?” AI systems—datasets, labels, benchmarks, dashboards—are precisely such apparatuses. They sort, rank, and normalize; they make certain behaviors visible and actionable; and they knit together authorities (companies, agencies, experts) who act on that knowledge. A Foucauldian reading asks how AI’s classifications become true for us—how they enter routines, institutions, and self-understandings.

Six Foucauldian lenses for AI

1) Power/knowledge. Models don’t just reflect the world; they inscribe a world where categories like “risk,” “toxicity,” or “fit” become actionable truths. The test is not “is it biased?” alone, but “what regime of truth is this system building—and who must live inside it?”

2) From panopticon to dataplex. Supervision once centered on enclosed institutions (prison, school, factory). AI extends surveillance via platforms and sensors into open spaces—less a single tower than a mesh of logs, metrics, and traces that induce self-regulation.

3) Governmentality. Governance shifts from commanding citizens to steering them through indicators, rankings, nudging defaults, and risk scores. AI operationalizes this art of governing at scale: optimization replaces explicit prohibition, yet shapes conduct just as effectively.

4) Biopolitics. Public health, mobility, and resource management rely on population-level prediction. AI intensifies this by modeling flows (epidemics, traffic, carbon) while also codifying whose life is counted, prioritized, or delayed—raising questions of whose flourishing is optimized.

5) Subjectivation and confession. Interfaces invite us to disclose needs, moods, location, and networks; prompts ask us to narrate ourselves in the system’s grammar. Over time, we become subjects who interpret our lives through scores and dashboards, confessing ever more data for guidance.

6) The archive & classification. Datasets are living archives with rules about inclusion, labeling, and retention. A Foucauldian audit examines who defines the classes, what exceptions are made, and how appeals rewrite the archive—or leave it intact.

Three applications

Content moderation. Policy taxonomies (hate speech, harassment, adult content) and automated filters normalize speech through escalating sanctions. A Foucauldian study maps not just takedowns, but how users learn the boundary and police one another in comment threads.

Risk scoring in welfare and justice. Predictive models distribute scrutiny and opportunity unevenly. Beyond accuracy, the question is which “problem populations” get produced—and what counter-knowledges and remedies are available to those classified.

Smart cities. Mobility pricing, sensor grids, and predictive maintenance steer behavior without explicit commands. Governmentality appears as a choreography of incentives—effective, yet often illegible to the governed.

Toolkit for students

  • Dispositif map: Diagram the assemblage—data sources, labels, models, policies, dashboards, human roles—and note points where conduct is shaped.
  • Category genealogy: Trace one label (e.g., “toxicity,” “priority user”) across documents and releases; record moments when contestation changed its definition.
  • Interface ethnography: Observe how users pre-empt sanctions (avoiding flagged words, self-censoring); collect repair tactics and folk theories of the system.
  • Counter-conduct log: Identify collective practices that resist or rework optimization (appeal campaigns, alternative datasets, community norms).
  • Visibility audit: For one group, list what the system sees, what it infers, and what it ignores; propose changes to the “field of visibility.”

Guiding questions

  • Which truths about people does this AI make actionable, and who authorizes them?
  • Where does optimization govern conduct more effectively than explicit rules—and with what accountability?
  • How are subjects invited to confess data, and what do they receive in exchange?
  • What forms of counter-knowledge (community audits, alt-labels, participatory review) can rebalance power?

Literature (APA)

Foucault, M. (1970). The order of things: An archaeology of the human sciences. Pantheon/Random House.

Foucault, M. (1977/1995). Discipline and punish: The birth of the prison (A. Sheridan, Trans.). Vintage.

Foucault, M. (1978/1990). The history of sexuality, Vol. 1: An introduction (R. Hurley, Trans.). Vintage.

Foucault, M. (1980). Power/knowledge: Selected interviews and other writings, 1972–1977 (C. Gordon, Ed.). Pantheon.

Foucault, M. (2007). Security, territory, population: Lectures at the Collège de France, 1977–78 (M. Senellart, Ed.; G. Burchell, Trans.). Palgrave Macmillan.

Foucault, M. (2008). The birth of biopolitics: Lectures at the Collège de France, 1978–79 (M. Senellart, Ed.; G. Burchell, Trans.). Palgrave Macmillan.
