Sociology of AI

An Introduction to a Very New Field: "Neuland" for All of Us

What would Harold Garfinkel say about AI & Society?


Don’t ask what AI is; watch what people do to make AI outputs accountable and reasonable in situ. The action lives in the micro-work—how users format prompts, gloss odd answers, repair breakdowns, and achieve “that’ll do” as a local social fact (Garfinkel, 1967; Garfinkel, 2002).

Thesis (Garfinkel-ish)

AI is not a freestanding mind but a setting for practical reasoning. Treat every prompt → response → repair as a bit of orderly work through which members produce sense, sanction, and next steps (Garfinkel & Sacks, 1970/1972).

Core moves for an ethnomethodology of AI

  • Accountability. People supply reasons for an AI’s turn (“it missed my constraint”), transforming weirdness into acceptability. Design implication: support user accounts and compact system accounts (“why this answer”) that fit actual practice (Garfinkel, 1967).
  • Indexicality. Prompts and outputs depend on setting (“here,” “this style,” prior turns). Context windows are organized indexicality; study how participants point to prior turns/files as resources (Garfinkel, 1967).
  • Reflexivity. Explanations both describe and shape the next action (“since it’s cautious, I’ll ask bolder”). Guardrails and disclaimers aren’t metadata; they’re part of the moral order of the exchange (Garfinkel, 1967; Garfinkel, 2002).
  • Documentary method. Users read answers as evidence of a hidden “model persona” (“it knows APA but not German publishers”) and keep interpreting through that lens (Garfinkel & Sacks, 1970/1972).
  • Trouble & repair. Breakdowns (hallucinations, refusals, over-compliance) are analytic gold; watch how people escalate precision, reframe tasks, or switch tools (Garfinkel, 1967).
  • Membership categorization. Labels like “expert,” “assistant,” “non-compliant request,” “sensitive content” organize expectations and sanctions—who may ask what, and how (Garfinkel & Sacks, 1970/1972).

What Garfinkel would actually do (methods kit)

  • Naturalistic captures. Screen/audio recordings of real tasks (coding copilots, customer support chats, student writing aids). Keep keystrokes, cursor paths, and turn-by-turn logs; pair with trace ethnography to follow tickets, commits, and dashboards (Geiger & Ribes, 2011).
  • Sequential analysis. Treat prompt → response → repair as turn-taking. Code each exchange for request type, candidate understandings, accounts, repairs, and acceptance/rejection (Garfinkel, 1967); a minimal capture-and-coding sketch follows this list.
  • Ethical breach probes. Introduce small, consented perturbations—underspecified prompts, conflicting constraints, role shifts—to surface tacit methods; debrief participants (Garfinkel, 1967).
  • Artifact walkthroughs. Replay sessions and have users narrate “what I treated as relevant here” (Suchman, 1987/2006).
  • Comparative settings. Same task across teams (legal, HR, teaching) to see how different moral orders produce different “reasonable” AIs (Lynch, 1993).
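
One possible way to structure such turn-by-turn captures for later sequential analysis is sketched below. The field names, code labels, and example turns are illustrative assumptions, not a fixed standard; any capture format that keeps speaker, sequence, and the analyst's codes side by side would serve the same purpose.

```python
from dataclasses import dataclass, field
from typing import Optional

# One turn in a captured session: who produced it, what was said or shown,
# and the analyst's sequential codes added after the fact.
@dataclass
class Turn:
    index: int                                       # position in the session
    speaker: str                                     # "user" or "system"
    text: str                                        # prompt or response as captured
    codes: list[str] = field(default_factory=list)   # e.g. ["request", "account"]
    note: Optional[str] = None                       # analyst memo ("treated prior file as relevant")

@dataclass
class Session:
    task: str                                        # e.g. "draft customer-support reply"
    setting: str                                     # e.g. "legal team, shared screen"
    turns: list[Turn] = field(default_factory=list)

# Example: a three-turn fragment, coded after the fact.
session = Session(
    task="summarise a contract clause",
    setting="legal team walkthrough",
    turns=[
        Turn(0, "user", "Summarise clause 4 in plain English.", ["request"]),
        Turn(1, "system", "Clause 4 limits liability to direct damages.", ["candidate_understanding"]),
        Turn(2, "user", "You missed the carve-out for negligence; add it.", ["trouble", "repair"]),
    ],
)
```

Keeping the raw turns and the analyst's codes in one record makes each coding decision auditable session by session.
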

Starter study (1-week sprint)

  1. Sample 8–10 real tasks; capture 30–45-minute sessions.
  2. Build a micro-annotation scheme: T (trouble), R (repair), A (account), C (categorization), E (etcetera-clause); a minimal sketch of the scheme follows this list.
  3. Memo recurring “work practices”: template prompts, checklists, escalation paths.
  4. Deliver three short “practice patterns” for product/design and one risk memo for policy.
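
One way the T/R/A/C/E codes from step 2 might be pinned down so annotations stay consistent across coders. This is a minimal sketch: the label wordings and the helper function are assumptions for illustration, not a prescribed tool.

```python
# Minimal sketch of the T/R/A/C/E micro-annotation scheme from step 2.
# The five labels come from the text above; the dictionary form is an assumption.
CODES = {
    "T": "trouble: an output treated as problematic by a participant",
    "R": "repair: reformulation, constraint-tightening, or tool switch",
    "A": "account: a reason given for the system's or the user's move",
    "C": "categorization: membership label invoked (expert, assistant, sensitive)",
    "E": "etcetera-clause: unstated-but-assumed context filled in by a member",
}

def annotate(turn_index: int, code: str, evidence: str) -> dict:
    """Attach one code to one turn, keeping the evidence quote for audit."""
    if code not in CODES:
        raise ValueError(f"unknown code {code!r}; expected one of {sorted(CODES)}")
    return {"turn": turn_index, "code": code, "evidence": evidence}

# Example annotations for a short captured exchange.
annotations = [
    annotate(2, "T", "'that is not what I asked for'"),
    annotate(3, "R", "user restates the constraint as a numbered list"),
    annotate(3, "A", "'it probably ignored the attachment'"),
]
```
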

Design takeaways

  • Treat explanations as interactional—brief, referenceable, situated—rather than static model cards alone (Suchman, 1987/2006).
  • Make repair affordances first-class (“reframe,” “tighten constraints,” “cite source”), because users already do this work (Garfinkel, 1967).
  • Log account chains (what counted as reasonable, by whom, when) for audit and pedagogy (Geiger & Ribes, 2011); a sketch of one possible log entry follows this list.
  • Align “safety” with ordinary sanctioning practices users already deploy (soft refusals, conditional permissions, alternatives) (Garfinkel, 1967).
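
A minimal sketch of what logging an account chain could look like in practice. The schema and field names are assumptions for illustration; the only point carried over from the list above is that each entry records who treated which output as reasonable, on what grounds, and what that licensed next.

```python
import json
from datetime import datetime, timezone

# One entry in an "account chain": who treated which output as reasonable,
# on what grounds, and what move that account licensed. Illustrative schema only.
def account_entry(actor: str, output_id: str, account: str, next_move: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # "user", "reviewer", "system"
        "output_id": output_id,  # pointer to the logged response
        "account": account,      # the reason given ("matches the style guide")
        "next_move": next_move,  # what the account licensed ("shipped", "escalated")
    }

chain = [
    account_entry("user", "resp-017", "covers all three constraints", "edited lightly"),
    account_entry("reviewer", "resp-017", "tone fine, citation missing", "sent back for repair"),
]
print(json.dumps(chain, indent=2))
```
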

Field prompts you can use today

  • “What work did the user do to make this answer ‘good enough’?”
  • “Which parts of the prompt were treated as indexical to this setting?”
  • “Where did an account (by user or system) license the next move?”
  • “What exactly counted as ‘trouble,’ and who declared it?”
  • “How was acceptance of the output displayed (edit, cite, ship)?”

Literature

Garfinkel, H. (1967). Studies in Ethnomethodology. Prentice-Hall/Polity.

Garfinkel, H. (2002). Ethnomethodology’s Program: Working Out Durkheim’s Aphorism (A. W. Rawls, Ed.). Rowman & Littlefield/Bloomsbury.

Garfinkel, H., & Sacks, H. (1970/1972). On formal structures of practical actions. In D. Sudnow (Ed.), Studies in Social Interaction. Free Press.

Suchman, L. (1987/2006). Plans and Situated Actions / Human–Machine Reconfigurations (2nd ed.). Cambridge University Press.

Lynch, M. (1993). Scientific Practice and Ordinary Action: Ethnomethodology and Social Studies of Science. Cambridge University Press.

Geiger, R. S., & Ribes, D. (2011). Trace ethnography: Following coordination through documentary practices. Proceedings of the 2011 Hawaii International Conference on System Sciences (HICSS).
