I’m launching a running series where I stage short encounters between classic sociology and today’s AI world. Below are two-sentence teasers—sketches of how each thinker pushes me to see power, practice, and possibility differently.
Thirty-eight thinkers, two sentences each
Norbert Elias. If I ask with Elias how interdependent people weave themselves into figurations, how power balances shift, and how “we–I” relations change, AI looks less like a sudden disruption and more like a re-patterning of manners, knowledge, and institutions over generations.
Karl Marx & Friedrich Engels. They'd read AI as a force reorganizing the relations of production—intensifying extraction, surveillance, and the devaluation of labor while consolidating data-capital. Class struggle won't disappear; it will be refactored around ownership of platforms, models, and compute.
Auguste Comte. He’d treat AI as the latest step in a “positive” science of society, insisting on systematic measurement, comparison, and prediction. But he’d also warn: without a moral science to steer it, technical order risks hollowing out social purpose.
Ferdinand Tönnies. He’d see AI deepening Gesellschaft—impersonal, calculative ties—even as platforms simulate Gemeinschaft through personalization and affective branding. The question is whether synthetic intimacy can repair the social fabric it simultaneously thins.
Émile Durkheim. He’d ask how AI becomes a “social fact,” stabilizing new forms of moral regulation in work, education, and public life. Gains in coordination may coexist with anomie if innovation outruns collective rule-making.
Max Weber. He’d diagnose algorithmic authority as the next chapter in rational-legal domination: efficient, calculable, and legitimated by expertise. The iron cage gets a software update—unless counter-institutions keep discretion, ethics, and vocation alive.
Harold Garfinkel. He’d follow the micro-work by which people make AI outputs accountable—asking “according to which methods do we treat this as reasonable?” Breakdown moments would be goldmines for seeing the tacit practices that hold human–AI interaction together.
Michel Foucault. He’d map AI as power/knowledge in motion—classification machines that produce the subjects they claim merely to detect. Governmentality shifts as optimization logics quietly govern bodies, cities, and selves.
Judith Butler. She’d show how AI’s labels and benchmarks are performative—iterating norms that make some identities legible and others precarious. The task is not only to de-bias but to trouble the grids that script who can appear at all.
W. E. B. Du Bois. He’d connect double consciousness to platform life: seeing oneself through the gaze of scoring systems and publics. His data-driven moral sociology would demand countersurveillance and emancipatory statistics.
Pierre Bourdieu. He’d chart platform “fields,” where capitals (data, code, cultural clout) and habitus meet algorithmic classification. Symbolic power works when sorting feels natural—until hysteresis exposes the game and its rules.
Richard Sennett. He’d contrast craft with frictionless UX, worrying that convenience corrodes character and collaboration. AI could augment skill and public life—or deskill workers and privatize civic competence.
bell hooks. She’d center love, liberation, and pedagogy, asking whose voices AI amplifies and whose are erased. Intersectional harms aren’t bugs but products of imperialist, white-supremacist, capitalist patriarchy coded into data and design.
Georg Simmel. He’d trace how money, number, and speed shape the “blasé” stance of the platform metropolis. Secrets, the stranger, and sociability all change when interaction is endlessly mediated and archived.
Niklas Luhmann. He’d model AI as communication that reduces complexity for functionally differentiated systems. The point isn’t “intelligence” but how programs couple with law, economy, science, and media through coded expectations.
Robert Putnam. He’d measure whether AI erodes bonding and bridging social capital—or can rebuild civic life through new associational forms. The curve of community may hinge on design choices that invite reciprocity rather than passivity.
Anthony Giddens. He’d stress reflexivity: people continuously remake structures through their use of AI while being constrained by them. Risk and ontological security become everyday management problems in datafied modernity.
Boaventura de Sousa Santos. He’d call for cognitive justice, warning against epistemicide when Northern datasets define the world. AI futures must grow from ecologies of knowledges, not a single universal vantage point.
Manuel Castells. He’d frame AI inside the network society, where power flows through programmable networks and communication codes. Identities resist and reconfigure, but infrastructure steers whose signals travel furthest.
James S. Coleman. He’d link micro incentives to macro patterns, asking how rules and reputations stabilize cooperative AI ecosystems. Social capital and principal–agent dynamics will decide whether alignment holds outside the lab.
Ulrich Beck. He’d describe AI as a “manufactured risk” that escapes industrial containment and demands reflexive modernization. The politics is precaution without paralysis, learning under uncertainty while distributing risk fairly.
John Urry. He’d widen the lens to mobilities—flows of code, carbon, people, and packages orchestrated by algorithmic logistics. Smart mobility isn’t only greener movement; it’s power over who can move, when, and on what terms.
Saskia Sassen. She’d show how global city infrastructures host AI’s command nodes while producing expulsions at the edges. Data extraction reorganizes territory, property, and labor beyond the visible platform interface.
George H. Mead. He’d ask how selves form when the “generalized other” includes machine interlocutors. Taking the role of the other now means learning to read and be read by models.
Margaret Mead. She’d compare cultures to reveal how communities domesticate AI differently across generations. Education, not inevitability, shapes whether tools expand or constrict the possible.
Antonio Gramsci. He’d analyze platform hegemony: how common sense gets coded into defaults, feeds, and moderation. Counter-hegemony grows when organic intellectuals build alternative institutions and narratives.
Erich Fromm. He’d contrast “having” AI (ownership, control, metrics) with “being” (care, creativity, presence). The marketing character thrives on datafication—unless a humanist ethic redirects desire.
Erving Goffman. He’d detail impression management with and through machines, from synthetic frontstages to datafied backstages. Face-work now includes negotiating with recommendation engines and risk scores.
Benedict Anderson. He’d track how algorithmic media assemble imagined communities—and fracture them into micro-publics. Platform nationalism and transnational fandoms coexist in the same feed.
Daniel Bell. He’d see AI as emblematic of post-industrial society’s knowledge class and technocratic dilemmas. Efficiency grows, but value disputes don’t vanish—they migrate into code.
Robert K. Merton. He’d warn about unintended consequences and self-fulfilling prophecies in predictive systems. Matthew effects flourish when visibility and credit are algorithmically allocated.
Howard S. Becker. He’d ask “whose rules?” as AI labels deviance and taste, and he’d follow the collaborative “art worlds” behind products shipped as genius. Creativity is collective; so are biases.
Jürgen Habermas. He’d test whether an algorithmic public sphere supports communicative action—or slides into strategic manipulation. Design for discourse ethics means transparency, contestability, and inclusive participation.
Talcott Parsons. He’d judge AI institutions by how well they meet systemic needs—adapting, goal-attaining, integrating, maintaining cultural patterns. Alignment is as much social integration as technical calibration.
Thomas Piketty. He’d track how data and IP concentrate wealth, proposing fiscal and institutional reforms to curb rentier power. Without countervailing policies, AI could widen the 21st-century inequality curve.
Henri de Saint-Simon. He’d welcome expert coordination and productive planning, imagining AI as an engine of industrial improvement. The open question: technocracy for whom—and answerable to which publics?
Jean-Jacques Rousseau. He’d worry that algorithmic comparison inflames amour-propre and corrodes civic virtue. A democratic “general will” today would require digital institutions that cultivate autonomy over vanity.
What’s next
In the series, I’ll expand selected teasers into short essays that pair each thinker with a concrete AI case—policing, education, workplace tools, health, or urban tech—and a small toolkit for student research.