For Planetary Governance

Another Kind of Technocracy

Authors: Carl Olsson, Dana Molzhigit, Dmitrii Aparin, M.C. Abbott

Conventional approaches to ethical dilemmas make it difficult to see the machinic unconscious that governs socio-technical systems—a calcified autonomy that can always be called into question. Our response to this “alien will” will determine our ability to construct a more desirable future—and a politics beyond fatalism and paranoia.

Technocracy is typically understood as governance by experts. But perhaps the word could acquire another, more literal meaning, one that questions where governance and decision-making are located in socio-technical systems?

Specific varieties of technical systems are particularly important from this point of view, namely those referred to as infrastructural. Here we include the usual suspects—roads, railways, and containers—but again we can take a more literal view. From the perspective of decision-making, infrastructure is the aspect of a structure which lies beneath and between, and props up the alternatives from which we are able to choose.

The word technocracy suggests that technical configurations lie beneath decisions, constrain social practices, future technological development, and even thinking itself. Infrastructure is technocratic—and “always has been,” replies the cosmonaut with its gun pointed toward the ostensibly autonomous humans who dwell below.

Technocracy is the hidden rule of and by technical systems through entrenchment and cumulative path dependency. Revealing the constraints imposed by infrastructure on decision-making matters in particular for understanding where politics happens. It is more traffic light than Terminator in that it has been delegated the power to govern—regardless of how passive, mundane, or invisible its rule may seem to be.

 

Before The Problem There Was a Problem

In order to understand what technocracy means for the location of politics, we can think of two related notions: determinism and moral luck. Before we elaborate on these, let’s consider an example. The so-called trolley problem is a notorious ethical dilemma in which a speeding railway car is racing toward five people who find themselves bound to the track. Fortunately, there is a lever, and if that lever is pulled, it will shift the car onto another track. Unfortunately, another person is bound to this second track. The trolley problem is usually focused on the moment of decision: should you pull the lever or not?

Here we should step back and consider the conditions that make this a problem to begin with. Had the track been laid in a different fashion, without a lever present or anyone bound to the track, or alternatively had there been a third or even fourth track and a car equipped with adequate brakes, the problem would have been quite different. Yet, there you are, faced with the lever and a decision: a technical setup that must be considered for the seemingly intractable conditions that govern its decision space and the entities that figure within it.

Another reason to reframe the trolley problem as one of infrastructure and the unintended calcification of past decisions is that it has ramifications for how we construct the future. In the same way that past decisions created the constraints of the trolley problem, the moment of decision at the lever creates new decision spaces farther down the track. To illustrate, one can imagine a fan-like structure of branching tracks and a very large number of junctures. The trolley problem transpires at one of these junctures, and we must understand that our decision will have much larger repercussions than the moral dilemma leads us to believe. By making a decision we contribute to the configuration of future decision spaces.

“Decision” and its derivatives are words reserved for events in which a rational will makes a decision for which it takes itself to be morally responsible. Two relevant properties of a rational will—agency and subjective autonomy (including a sense of agency)—differentiate genuine decisions from mere processes. By shifting attention away from the moment of decision itself to its setup, we become sensitive to the technical governance of decision-making.

Regardless of how open or closed decision spaces may seem, their autonomy can always be questioned. Railways, traffic lights, and construction materials govern the decision spaces available to subjectively autonomous agents. All around us there are apparent decisions that turn out to have been processes and apparent processes that turn out to have been potential decisions. Technical governance is at work in both cases. In one variation of the trolley problem we make the decision not to pull the lever, but in the aftermath it turns out that the lever had been disconnected on the previous day, retrospectively making the “decision” one without agency.

This is where moral luck comes in. The notion was first developed in two philosophy papers by Bernard Williams (1981) and Thomas Nagel (1979) and functions as an undermining of the Kantian insistence that autonomy is an indispensable component of rational will, which by subjecting itself to rules can also be a good will (Kant, 1998). There are a few different variants of moral luck used to describe different situations. Here we are interested in the framing of autonomy that results from a certain combination, which Nagel referred to as “causal moral luck.” Causal moral luck refers to how actions—even actions of the will—are causally conditioned by antecedent events in a way that effectively eliminates the possibility of autonomous will and, arguably, moral responsibility. We might call this a version of determinism with a twist.

Determinism is the idea that all events are determined by their causes. According to the proposition, this eliminates the autonomy required for a rational will to make genuine decisions. If determinism is true in any given scenario (never mind tout court), the relevant decisions would be mere processes. Kant held that even if determinism were true, a good will would act under the “Idea of Freedom,” holding itself accountable for its decisions and constraining its actions as if it were acting freely toward a given end. The twist that causal moral luck brings is that recognizing it imperils the “Idea of Freedom.” If, in the trolley problem or a similar technocratic setup, we recognize that we are in an unlucky situation, we might realize that we are partaking in a process that is not under our own rule.

 

Excavating the Machinic Unconscious

Technocratic governance isn’t limited to the way that traffic lights or railway tracks constrain our everyday lives. The proposition relocates the site of meaningful decision-making and political deliberation, with far-reaching implications for questions such as the extent to which materials and energy sources influence the climate. By thinking about how infrastructure governs—enables and constrains—the functioning of, for example, planetary-scale carbon-extraction mechanisms, we might be able to reframe the manifest decision spaces available in a collective agenda of infrastructural decarbonization. If infrastructural configurations constrain current decision spaces, the takeaway should be that an important site of politics is the intended and unintended design of these configurations.

Under a technocratic governance on the track towards geochemical breakdown, the challenge isn’t whether or not to pull the lever (is it even connected?) but to substitute the tracks, engines, and wheels that give momentum to a speeding train which must shift its current trajectory. This requires us to look beyond the instrumentality of the instruments into the machinic unconscious that operates behind the veneer of subjectively autonomous ontological feints including governments, corporations, and individuals.

The organizing principle of this unconscious decision space is not necessarily identical or even similar to that of its instrumental superstrate. Nor are such principles necessarily unified and coherent, and they do not clearly constitute an all-pervasive totality that cuts through all decisions. In fact, they probably aren’t and they probably don’t. We recognize that it is no longer sufficient to talk about steam engines or serpentine coils, but also that it’s not obvious what we ought to talk about instead. Asking what hinders access to this machinic unconscious seems to be a good place to begin.

 

Technicity and Beauty

There’s something monstrous about the way that technical systems constitute decision spaces beyond those that were intended. If the psychoanalytic analogy is at risk of foundering, we can support it by thinking in terms of a covert mode of technicity, rendered invisible in tandem with the myth of “autonomous man”: designer of his own fate. By describing infrastructure as a covert mode of technicity, its ability to govern is contrasted with its more overt counterpart. Let us use French philosopher Gilbert Simondon’s (2017) speculative history of alienation to help with this distinction.

For Simondon, mediation between human and world finds its objective form in technical objects that are mobilized to fill a growing gap: “man’s” alienation from “nature.” In this opening, technical objects attain human significance—even beauty—particularly when their overt functions are revealed as in the rhythmic flashes of traffic lights guiding the pace of the city before the human gaze. As Simondon has it:

The telephone call center is beautiful in action, because at every instant it is the expression and realization of an aspect of the life of a city and of a region; a light is someone waiting, an intention, a desire, imminent news, a ringing telephone that one won’t hear but that will resound far away in another house. (Simondon, 2017)

It is in this discourse on beauty and technicity that Simondon refers to the beauty of technical objects in their ability to engender and encompass significance for humans, effectively disclosing possible worlds before our eyes. Simondon proceeds:

Hearing a nearby powerful transmitter is not technically beautiful, because its value is not transformed by this power to reveal man, to manifest an existence. And it is not only the overcoming of difficulty that makes the reception of a signal emanating from a different continent beautiful; it is the power that this signal has for making a human reality emerge for us, which it extends and manifests in actual existence, by rendering it perceptible for us, when it would have otherwise remained unknown despite being contemporary with ours. (Ibid.)

Below the surface of Simondon’s account of an early call center there is something more disquieting, which can be understood in terms of a sequestration of the human. If technical objects attain a sort of beauty through their disclosure of significance for human beings, this attainment has as its corollary an obfuscation of all other realities than the one disclosed. We can say that beauty is a relational property whose actuality is entirely dependent on its apprehension by a human being.

Yet attention to such beauty forecloses the possibility of seeing what lies beneath. In addition to the transmitter’s hum, we can think of the roads and trucks required to connect a call center to its customers, and about the consequences that these may have for ecosystems that are out of sight. The iPhone entails extraction. All technical objects have material implications, amenable to techno-psycho-analysis. The infrastructural reality and downstream consequences of technocratic governance unfold in the shadows of radiant technicity, and who can know what the reverse side of a polished coin will look like?

 

What Would Heidegger Say?

The beautiful technical object is part of a way of thinking that focuses on the political salience of genuine decision-making. This deliberative political imagination is suffused with overt technicity in its hope that some new instrument will arrive and reveal something about the world for the sake and deliverance of “autonomous man.” Rockets to Mars! Bigger drills! Self-driving cars! Robot care! It’s little wonder that within this imaginary, we see a perennial recurrence of the same problems: should a speeding autonomous vehicle be programmed to veer from its path to crash into one person instead of five?

Deliberative politics supposes that if only we can figure out how to design our way toward a more desirable future, all that remains is to pull the appropriate lever. While doing so might seem difficult in its own right, the covert mode of technicity presents another problem.

We can compare this position with that of another philosopher of technology, Martin Heidegger, who was famously concerned about the instrumentalization of the world and its human inhabitants (e.g. Heidegger, 1977). To some extent, the technocratic hypothesis aligns with Heidegger’s worries in that it suggests the effective dislocation of human autonomy by technology. Yet, as per Simondon, there is a sense in which technologies make genuine disclosures, and the technocratic hypothesis exists to question a deliberative mode of politics wherein an autonomous will is something more than pure process. The point is to navigate towards a world in which the disclosed reality to which subjective autonomy ultimately belongs is seen as determined by its infrastructural makeup.

 

The Teachings of an Outsider

How may we enter into an area of politics that is obfuscated per definition and where subjectively autonomous agents must recognize the severity of their own determination? In a talk at Duke University in 2018, philosopher and The Terraforming faculty member Reza Negarestani excavated the Hungarian psychoanalyst Sándor Ferenczi’s ideas about an alien will. In Ferenczi’s Clinical Diaries (1988), the concept figures intermittently and is used to describe the sense of having one’s conscious will replaced by an outside agent’s—as occasionally happens to victims of severe abuse—possibly with further destructive consequences. Negarestani applies the concept to wider structures of social domination and their internalization by the dominated.

Given what we’ve said about technocracy, humans have unwittingly fabricated their exploitation by constructing an infrastructural unconscious that is central to their continued sustenance as agents. From the perspective of any given decision space, then, its infrastructure is the alien will which controls it.

Compared to the moment of decision highlighted by the trolley problem, there is a different characteristic to the subjective autonomy that emerges from Negarestani’s analysis of the alien will. Because we possess a meta-cognitive or self-conscious ability to entertain that we may be consigned to being carrion, we are capable of positing another, less dire world and present to ourselves its consequences. Much as in Kant, the unique affordance of self-consciousness is its ability to extend a very long arm through which to gain leverage upon itself and the world. By such means, self-consciousness brings itself into its own kingdom. Already alienated from its world, a self-conscious subject who discovers its domination by an alien will also discover a transversal line by which it is alienated from its own alienation.

Instead of the auspicious moment of decision being the characteristic feature of subjective autonomy, it is the ability to infer consequences from still potential decisions that is primary. The possibility of thinking is always the possibility of rethinking and looking beneath that which appears given. It was important to rethink the trolley problem because it highlighted the possibility of excavating the covert technicity of a given decision space. Furthermore, it highlighted that there will be future infrastructural ramifications stemming from the moment of decision.

However, the problem of the alien will seems to go even further, because it introduces a degree of distrust within the re-cognitive ability that made in-practice emancipation possible. For if our will is really another’s, our ability to respond to reasons is too. Our autonomy turns out to be the autonomy of infrastructure. The technocratic hypothesis thus recurs, not as a version of determinism, but as that dissimulating agent which really decided when we, faced with a lever, took ourselves to be making a decision at all.

 

Fatalism and Paranoia

How can the recognition of our re-cognitive capacity shape the locus of political deliberation if, as a consequence of this recognition, we posit that we are in fact carrion on a technocratic plate? Recognition may be necessary to this end, but it’s unlikely to be sufficient. And so the question of agency returns in another guise.

Let us end by considering two variants of a “leverless” trolley problem. What if:

  1. We are so entrenched in a particular technological regime that its continued existence is what provides the conditions for the possibility of sustaining sapient life, even as the same regime actively works to ablate these conditions.

  2. The same technocratic unconscious is recognized to govern the space of counterfactual positing itself.

To give a real-life example of the first situation, we can think of certain self-sustaining features of carbon-induced global warming. If anthropogenic carbon emissions ceased tomorrow, the process of climate change would not halt immediately due to positive feedback loops in the carbon cycle. Even if a cataclysmic tipping point has not yet been passed, the full impact of past human activity on the atmosphere has yet to be felt. Moreover, the adaptive capacity of current societies is dependent on many of the extractive mechanisms that create the need for adaptation in the first place. Preparing for floods to come requires energy and materials produced by the combustion of carbon-rich fuels. We are faced with a railway that leads the trolley toward the five unfortunates in its path with no lever in sight. Here the autonomous subject awakens to the unlucky fact of its own automation. In the absence of an evident decision space, recognition amounts to little more than fatalist theodicy.

The second situation is one of all-consuming paranoia since it amounts to the freedom to ponder one’s own bondage with the recognition that this freedom is also bound. This version of the “leverless” trolley problem positions the autonomous subject, awake to its subsumption by an alien will, unable to recognize itself as a morally responsible agent at all. In this case, an Archimedean point may well be present, but in-practice recognition of its autonomy isn’t.

Remembering Kant’s idea that in-practice adherence to autonomy is a prerequisite for moral responsibility, in the paranoiac case, the “Idea of Freedom” is broken such that the “agent” is no longer capable of applying it for practical ends. It’s a failure of the subject to justify its own rule to itself not because there’s nothing it can do, but because it mistrusts its own motivation—“I” becomes a parasite upon “myself.”

The hypothesis of a kind of technocratic governance that rests within the infrastructure of any given decision space threatens autonomy with automation. To awaken to entrenched infrastructural governance is to accede to the inevitable or else to see one’s own exploitation everywhere, whether it is there or not. Faced with a double bind—be it railway car or climate change—the ability of the fatalist and the paranoiac to imagine their own chains and posit counterfactual worlds amounts to either mute automatism or the idle fancy of the alien will. Any action on the part of these figures would remain within the confines of the world that is, or so they would believe.

The political task of infrastructural change for planet-wide systems cannot be construed as political in the colloquial sense of making a claim to govern based on a belief in the efficacy of ends. Contrary to such a model of politics, the propositional ability to posit a new world might be reconceived as an invitation to another infrastructural regime—to feed on the carrion which we, by virtue of our inability to deliberately enact a world other than the one that we have, must see that we are. Such a politics will navigate between fatalism and paranoia. It amounts to tending an always unseen parasite of our own making, whose functioning may have different ramifications for the planet. If navigation fails, paranoia is both more terrible and more productive than fatalism could ever be. While the fatalist watches the train run amok, the paranoiac whose sense of agency has been abolished by automation simply refuses to accept the future that inevitably shall be.

The final version of this article has benefited greatly from comments on earlier drafts by Reza Negarestani, Thomas Moynihan, Ryan Bishop, Philip Maughan, and Nicolay Boyadjiev. All the views expressed are the authors' own.

Carl Olsson is a writer and philosopher of geographical thought.

M.C. Abbott is a design strategist based in Oakland, California.

Dmitrii Aparin is a scientific researcher with a background in experimental physics and microbiology. Not long ago, his area of interest shifted towards speculative design and complex systems.

Dana Molzhigit is a design researcher at the intersection of technology, biology, and culture.

References

Ferenczi, S. (1988). The clinical diary of Sándor Ferenczi. Edited by Dupont, J. Translated by Balint, M. and Jackson, N. Z. Cambridge, MA: Harvard University Press.

Heidegger, M. (1977). “The question concerning technology”, Translated by Lovitt, W., In The question concerning technology and other essays, pp. 3-35. New York: Garland Publishing.

Kant, I. (1998). Groundwork of the metaphysics of morals. Translated by Gregor, M. Cambridge, U.K: Cambridge University Press.

Nagel, T. (1979). Mortal questions. New York: Cambridge University Press.

Negarestani, R. (2018). The psyche and the carrion: A note on Ferenczi's concept of the alien will. Draft published as “The Psyche and the Carrion” on https://toyphilosophy.com/2018/10/19/the-psyche-and-the-carrion/ (accessed June 27, 2021).

Simondon, G. (2017). On the mode of existence of technical objects. Translated by Malaspina, C. and Rogove, J. Minneapolis: Univocal Publishing.

Williams, B. (1981). Moral luck. Cambridge: Cambridge University Press.
