Sociology of AI

An Introduction to a Very New Field: "Neuland" for All of Us

What would Émile Durkheim say about AI & Society?

Symbolic image: Émile Durkheim

Let’s ask with Durkheim how AI becomes a social fact—a way of acting and judging that is external to individuals yet exerts constraint. Gains in coordination can travel with anomie when innovation outruns collective rule-making (Durkheim 1895/1982; 1897/1951).

Introduction

Durkheim taught me to look past individual “users” toward the moral order that organizes them. When schools, firms, and public agencies adopt AI policies, defaults, and rituals, these arrangements harden into facts that press upon us—acceptable prompts, disclosure norms, redress paths, even new taboos. My goal in this article is to make those facts visible and evaluable using a Durkheimian toolkit.

Five Durkheimian lenses for AI

1) Social facts & moral regulation. Platform rules, content standards, audit trails, and “acceptable use” codes are not mere settings; they stabilize collective expectations and carry sanctions. Studying AI means cataloguing these rules, their coercive force, and how they are learned and contested (Durkheim 1895/1982).

2) Solidarity after automation. AI reorganizes the division of labor—some tasks fragment, others recombine—shifting us from craft groupings to highly interdependent roles. The question is whether organic solidarity grows (cooperation, trust across difference) or whether coordination gains come with norm erosion (Durkheim 1893/1997).

3) Anomie signals. Rapid change without clear norms breeds aimlessness, rule-evasion, and burnout. In AI projects, watch for mismatched expectations (what counts as “cheating,” “consent,” “explanation”), escalating metrics with vague purpose, and sanction regimes that feel arbitrary (Durkheim 1897/1951).

4) Sacred/profane boundaries. Safety policies draw lines between permissible and taboo content; model “guardrails” become a secular sacred. Outrage cycles around violations perform a collective function—reaffirming boundaries—even as they may over-punish edge cases (Durkheim 1912/1995).

5) Education as moral socialization. If AI is reshaping work and citizenship, then moral education must teach obligations that travel with new capabilities: disclosure, attribution, consent, and fair dealing. Schools and professional bodies are the sites where such duties are taught and ritualized (Durkheim 1925/1961).

Three applications

Workplaces. Algorithmic management can coordinate complex teams, but legitimacy depends on intelligible rules and fair sanctions. A Durkheimian audit would track whether incident responses repair norms (learning and remedy) or only deter through fear.

Education. Generative tools alter the moral economy of effort, authorship, and help. Institutions need clear rituals—attribution statements, assignment types designed for AI-aware practice, and proportionate remedies—to prevent anomie among students and staff.

Public life. Deepfakes and synthetic media test the collective conscience. Durable responses look less like one-off takedowns and more like shared procedures: provenance standards, civic labeling, and trusted venues for contestation.

A Durkheimian scorecard (practical)

  • Norm clarity: Can ordinary members state the key rules in their own words?
  • Sanction proportionality: Are remedies graduated and educative rather than purely punitive?
  • Solidarity indicators: Cross-team help rates, retention across role boundaries, quality of hand-offs.
  • Anomie indicators: Quiet quitting, rule circumvention, contradiction between KPIs and stated mission.
  • Ritual health: Frequency and credibility of practices that express shared purpose (post-incident reviews, acknowledgment of contributors, public disclosures).
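
For teams that want to track these dimensions over time, here is a minimal sketch of how the scorecard might be recorded and summarized. The field names, the 1–5 rating scale, and the averaging rule are illustrative assumptions for this post, not part of Durkheim's vocabulary or any established instrument.

```python
from dataclasses import dataclass, asdict
from statistics import mean

@dataclass
class DurkheimianScorecard:
    """Illustrative ratings (1 = weak, 5 = strong) for one unit and one period.
    Dimension names follow the scorecard above; the scale is an assumption."""
    norm_clarity: int               # can members state the key rules in their own words?
    sanction_proportionality: int   # are remedies graduated and educative?
    solidarity: int                 # cross-team help, retention, hand-off quality
    anomie_pressure: int            # rule circumvention, KPI/mission contradiction (higher = worse)
    ritual_health: int              # credible post-incident reviews, acknowledgments, disclosures

    def summary(self) -> float:
        """Average the dimensions, inverting anomie so that higher is always better."""
        scores = asdict(self)
        scores["anomie_pressure"] = 6 - scores["anomie_pressure"]  # invert the 1-5 scale
        return round(mean(scores.values()), 2)

# Example: one quarter's self-assessment for a hypothetical AI product team.
q3 = DurkheimianScorecard(norm_clarity=4, sanction_proportionality=3,
                          solidarity=4, anomie_pressure=2, ritual_health=3)
print(q3.summary())  # 3.6
```

The point of such a sketch is not measurement precision but comparability: the same small set of questions, asked repeatedly, makes drift in norm clarity or ritual health visible before it shows up as conflict.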

Methods toolkit

  • Rule mapping: Collect and code policies, prompts, escalation paths, and sanctions as a catalog of social facts.
  • Ritual tracing: Observe recurring practices (onboarding, reviews, disclosure steps) and test whether they build solidarity.
  • Anomie diary: Log breakdowns where members cannot tell what is expected; analyze how the organization repairs them.
  • Distributional check: Compare how rules and sanctions fall across groups to keep “moral regulation” from masking inequality.
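
To make the rule-mapping and distributional-check steps concrete, here is a minimal coding sketch of a catalog of social facts. The record fields (source, rule, sanction, coercive force, groups affected) are illustrative assumptions drawn from the toolkit above, not a standard coding scheme.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class SocialFactRecord:
    """One coded rule: where it lives, what it demands, and how it is enforced."""
    source: str          # e.g. "acceptable-use policy", "onboarding slide deck"
    rule: str            # the rule, paraphrased in members' own words where possible
    sanction: str        # what happens on violation ("warning", "suspension", ...)
    coercive_force: str  # coder judgment: "low", "medium", "high"
    groups_affected: list[str] = field(default_factory=list)

catalog = [
    SocialFactRecord("acceptable-use policy", "disclose AI assistance in submitted work",
                     "grade penalty", "high", ["students"]),
    SocialFactRecord("team wiki", "log every prompt used in production incidents",
                     "informal reminder", "low", ["engineers"]),
]

# Distributional check: how do sanctions and coercive force fall across groups?
force_by_group = Counter((group, record.coercive_force)
                         for record in catalog for group in record.groups_affected)
print(force_by_group)
```

A catalog like this keeps the Durkheimian question empirical: which rules actually carry coercive force, on whom, and with what remedies, rather than which rules appear in the policy handbook.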

Guiding questions

  • Which AI rules have real coercive force, and how are they legitimated?
  • Does automation thicken or thin organic solidarity among interdependent roles?
  • Where do we see anomie—and what institutional repairs actually work?
  • Which boundaries have become “sacred,” and who gets to redraw them?

Literature

Durkheim, E. (1893/1997). The division of labor in society (W. D. Halls, Trans.). New York, NY: Free Press.

Durkheim, E. (1895/1982). The rules of sociological method (S. Lukes, Ed.; W. D. Halls, Trans.). New York, NY: Free Press.

Durkheim, E. (1897/1951). Suicide: A study in sociology (J. A. Spaulding & G. Simpson, Trans.). Glencoe, IL: Free Press.

Durkheim, E. (1912/1995). The elementary forms of religious life (K. E. Fields, Trans.). New York, NY: Free Press.

Durkheim, E. (1925/1961). Moral education (E. K. Wilson & H. Schnurer, Trans.). New York, NY: Free Press.
