AI is moving fast, but social life moves with it. Grounded Theory (GT) is a good fit when the object is changing while we study it. Instead of starting with a fixed grand theory, GT lets us build mid-range concepts from what we observe: practices, conflicts, workarounds, failures, and hopes around AI. It is rigorous without being rigid—ideal for a living blog that will grow over time.
What makes GT useful here
- Empirical and theoretical openness. We begin with sensitizing concepts (e.g., “automation of judgment”) and let categories earn their place through evidence. That keeps us responsive to new AI use cases, controversies, and publics.
- Iteration over perfection. Posts become analytic memos; later entries revisit earlier claims as new material arrives. Knowledge accumulates through cycles, not one-off pronouncements.
- Comparative logic. We constantly compare cases—across sectors (health, education, creative work), across regions, and across stakeholder positions—to see what varies and what travels.
- Co-production made explicit. This blog is written by a human researcher in collaboration with an AI assistant. GT helps us keep that reflexive: we memo how prompts, models, and tools shape the analysis.
How we’ll work (our GT workflow)
- Sampling that follows the phenomenon. We start where AI visibly reorganizes routines (e.g., admissions, hiring, grading, content moderation) and expand sampling when comparisons promise learning—theoretical sampling.
- Open → focused coding. We code fresh fieldnotes, platform policies, interviews, artifacts (prompts, model outputs, logs), and public debates. Codes become tighter as patterns stabilize.
- Constant comparison. Each new fragment is compared with earlier ones: similarities, differences, boundary cases.
- Memoing as public writing. Short posts are memos; longer pieces consolidate categories. We keep an audit trail of coding decisions and category revisions.
- Conceptual integration. When categories link together (conditions → actions → consequences), we sketch explanatory models suited to sociotechnical life—never just lists of themes.
- Ethics and care. We minimize traceability of individuals, disclose when text or images are AI-assisted, and foreground the situatedness of our claims.
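Since later "methods corners" will show coding moves and audit trails, here is a minimal sketch of how the open-coding, constant-comparison, and audit-trail steps above could be modeled in code. Everything in it (the `Fragment` and `Codebook` names, the fields, the example sources) is illustrative, not a tool this project actually uses:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Fragment:
    """One piece of data: a fieldnote, prompt, policy excerpt, log line."""
    text: str
    source: str
    codes: list[str] = field(default_factory=list)

@dataclass
class Codebook:
    """Codes plus the fragments supporting them, with a dated audit trail."""
    entries: dict[str, list[Fragment]] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)

    def code(self, fragment: Fragment, code: str, when: date) -> None:
        # Open coding: attach a code to a fragment and record the decision.
        fragment.codes.append(code)
        self.entries.setdefault(code, []).append(fragment)
        self.audit_log.append(f"{when.isoformat()}: '{code}' <- {fragment.source}")

    def compare(self, code: str) -> list[str]:
        # Constant comparison: list every source gathered under one code,
        # so similarities, differences, and boundary cases can be inspected.
        return [f.source for f in self.entries.get(code, [])]

cb = Codebook()
f1 = Fragment("Teachers re-grade AI-flagged essays", "fieldnote-03")
f2 = Fragment("Moderators override model labels", "interview-07")
cb.code(f1, "automation of judgment", date(2024, 5, 1))
cb.code(f2, "automation of judgment", date(2024, 5, 8))
print(cb.compare("automation of judgment"))
```

The point of the sketch is the shape of the records, not the code itself: each coding decision is dated and logged, which is what makes the analysis auditable and revisable later.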
What counts as “data” in this project?
- Everyday use of AI: classroom practices, workplace policies, creative workflows.
- Artifacts: prompts, system messages, model settings, red-team notes, commit histories.
- Governance texts: standards, the EU AI Act roll-out, institutional guidelines.
- Public talk: press, documentation, forums—how people explain and contest AI.
- Our own process: prompts, revisions, and failures—method as data.
Quality criteria (beyond buzzwords)
- Credibility: triangulation across actors and sites; member reflection when possible.
- Transferability: thick description so others can judge what travels to their setting.
- Dependability: versioned notes, dated analytic decisions, reproducible searches.
- Reflexivity: we log how AI tools nudge our categories (and when we push back).
What to expect on the blog
- Short memos that map emerging patterns (e.g., “from assistance to supervision”).
- Comparative case notes across sectors and countries.
- “Methods corners” showing coding moves, memo excerpts, and prompt design.
- Periodic syntheses that integrate categories into mid-range sociological claims.
Join the conversation
If you are using, governing, teaching, critiquing, or building AI, we welcome fieldnotes, documents, and counter-examples. This is an open, cumulative inquiry—exactly what Grounded Theory is for.
Related projects (in German & English)
- Sociology of Soccer — www.sociology-of-soccer.com
- Grounded Theory — www.grounded-theory.de
- KI-Karrierekompass für Studierende — www.ki-karrierekompass.de
- Kompass-Reihe — www.kompass-reihe.de
Co-production note: Drafting, synthesis, and some visuals are AI-assisted; responsibility for interpretation remains human.