Say you believe humans are rational and perfectible. How do you account for impulse, chaos, error, and - worst of all - evil? For Christians, original sin had this covered. But we live in a secular world, in which - especially among tech elites - there exists a strong belief that we need only become smarter and better at reasoning, and evil will go away.
The news headlines this week have been full of bewilderingly evil deeds, which I will not re-circulate. If you are haunted by stories, images, or victims, I encourage you to step away from the internet and pray for the deceased, and for their families. But since you’re still here on the internet, I wanted to share something somewhat related, on tech subcultures, reason, and evil.
Reading a very long essay reviewing a book about AI apocalypse, I was struck by how these stories tell us more about our own incapacity to grapple with evil than they do about the capabilities of artificial intelligence.
I personally am not very concerned about being obliterated by “superintelligent AI”. Superintelligent AI can only kill us if superintelligent AI is possible, and this assumes it’s possible to make a machine conscious. I’ve set out elsewhere my view that this is a category error, which rests on a basic misunderstanding of what consciousness is. But even if computers are never going to attain consciousness, AI is obviously still hugely significant. It’s also symbolically powerful. The media theorist Marshall McLuhan called technologies “extensions of man”, and technology capable of extending human thought is obviously transformative, even if such a technology can’t itself think.
And if it’s an extension of thought, it follows from this that we can see ourselves in that extension, as if in a kind of mirror, a bit like the way people see themselves in their cars. “Person who drives a little Japanese shopping car” means something quite different to “person who drives a huge roaring muscle car”. But if cars are proverbially a phallic symbol (and I would argue that they’re much more than this too), what does it mean to extend thought?

This of course depends on what you believe “thought” is, which in turn depends on what you believe our nature to be, as humans. This will still be true even if, intellectually, you dismiss the idea of “human nature” as superstition or epiphenomenon. So I’ve been watching this whole “Is superintelligent AI a blessing or a curse?” discussion roll on and thinking that even if it’s never literally going to happen, AI apocalypse is so compelling because it’s a way of talking about human nature at scale, and about large-scale moral questions, in circles where the loss of any religious framework makes this exceptionally difficult to do the old-fashioned way. And the most interesting and telling strand of this, to my eye, is the apocalyptic role played by “machine intelligence” within the subculture most committed of all to reason, logic, and secular materialism: the rationalists.
Who wouldn’t support rationalism? The classical definition of humans is as rational animals. But the ancients and medievals considered “rational” in this context to encompass both ratio, analytic thought, and intellectus, the capacity to perceive truths directly. (I discussed this distinction recently.)
There isn’t space here to track the arc by which that definition narrowed to just ratio; but that’s where we are now. To call yourself a ‘rationalist’ today is thus to say you wish to develop humans’ capacity for ratio above all else. And the rationalist subculture is one version of what this looks like in practice.
This resists easy definition, even by its own members. But it centres on West Coast America, draws heavily from the tech community, and foregrounds the importance of reason in avoiding all the ways that emotion and illogical thought colour a clear and accurate understanding of the world. Morally speaking, rationalists lean technocratic and utilitarian, and dismiss overt spiritual or religious frameworks as obstacles to rational truth. Behaviourally, they’re often very keen on self-optimisation (the “grindset” discussed in that linked post on the nature of chickens). In terms of the Thomist distinction I offered there, between ratio and intellectus, it’s probably fair to say these guys set great store by effortful, sequential, analytic ratio, and consider the contemplative, immediate, holistic perceptions of intellectus at best unreliable.
So it’s curious to me that this group should also have produced a subset of hyper-rationalistic individuals convinced that if we produce a machine superintelligence, it will kill us all. What does it mean, that the mirror AI holds up to this group should be an extension of thought that is hostile and murderous? To me it seems poignant evidence of how difficult it becomes, when you collapse all of human potential into ratio, to deal with the shadow.
This Jungian concept comprises what Jung calls “the thing a person has no wish to be”. It’s those aspects of ourselves that we deliberately avoid looking at too directly, and which exert influence over our behaviour in direct proportion to our refusing to confront them. But if you assume everything can be surfaced and analysed, effortfully, through ratio, you are at a grave disadvantage when it comes to accounting for these more hidden aspects of human personality - or even acknowledging that such aspects might exist and resist rationalisation.
When we bear in mind Jung’s claim that the shadow becomes more potent the more determinedly it is repressed, we can perhaps begin to make sense of the sexual chaos, cultic phenomena, and even murders that have to date spun out of rationalist subcultures. It stands to reason that such a group would find the question of evil difficult to frame, let alone understand. This is a community capable of working itself into moral paroxysms over the welfare of shrimp; how are such people to account for murder, let alone the kind that seems motivated by madness, or ideology, or even nihilism?
And when we add to this Jung’s insight that the shadow is most commonly encountered through projection, for example onto a family member or other associate, we can perhaps read “AI safety” discourse within rationalist communities as a picture of the collective cultural shadow produced by repressing everything about ourselves except ratio. What that looks like, fittingly, is a god-like entity that is both hyper-rationalistic and yet, despite this (or perhaps because of it), so morally ungovernable that its coming into being signals human catastrophe. It’s an eloquent acknowledgement of how inadequate ratio is, on its own, as a vector for good moral judgement.
The tragedy is that instead of recognising this for what it is, namely a demon conjured from the shadow of our truncated modern understanding of human reason, this community has projected it onto “AI superintelligence”. And because AI is technological, it means the collective shadow produced by overemphasis on ratio is in theory amenable to the further application of ratio. But really this is like being chronically late for work and insisting you just need more or better alarm clocks. The proposed solution just prevents you from looking for the real source of the problem. (Hint: if you’re always late, that’s a you thing.)
I’m not mocking here. As a culture we’ve been lying to ourselves about what “reason” is, and what humans are, for so long that I don’t blame anyone who accepts those truncated stories at face value - especially those trying, as obviously and sincerely as the rationalists are, to live well and do good within this framework. But I do hope that eventually those currently trying to make an externalised image of the contemporary cultural shadow “safe” through technological development realise how futile this project is, and return to the age-old, never-ending one of wrestling it in their own souls.