Originally delivered on 2 April at the Family Formation and the Future conference, at the Danube Institute, Budapest
Our session theme is “Tech and Human Relationships”. I want to talk about truth, in relation to those two themes. You might think this is a digression; it’s not. For the relation between technology, human relationships, and truth is a critical current battleground.
The most destructive progressive policies today have their grounding in the idea that there’s no truth, and no normative nature to anything – even, or especially, people. Meanwhile those who can still see that some things are more true are systematically marginalised in modern institutions. So today I want to offer a short history of how, and why, we buried truth in favour of power, and what that’s doing to our civilisation. I’ll talk about how truth-seekers are fighting back, in a way some of you might find counter-intuitive. And I’ll look at what this all implies for how we should be thinking about technology in order for it to nourish human relationships, rather than dissolve them.
Does anyone here remember James Damore? He was fired from Google in 2017 for circulating a memo arguing, with extensive reference to the scientific evidence, that not all sex differences in employment choice are down to discrimination. He was pilloried and punished, in essence, for telling the truth. Now, just recently I read a Free Press interview with Damore, who lives in Europe now. It was a sympathetic piece; in the course of it the writer suggested Damore may have an autism spectrum disorder.
First: a necessary disclaimer. Lots of people find it helpful to have a label and diagnosis for those ways they feel different. What follows is in no way intended to dispute or invalidate that experience. But it’s also widely accepted that there’s a cultural component to what reads as “normal” or “different” in people’s psychological makeup. So what if at least some of the individuals who get lumped in with these supposed “disorders” are better seen not as “disordered” at all, but as outlier personalities: more oriented toward truth than toward social consensus?
And what if one reason we’ve got less good at building things is that the post-truth managerial moral framework has reframed truth-seeking as a pathology?
You may be wondering: what does all this have to do with human relationships? Isn’t Mary here to do a bit about the contraceptive pill or whatever? I’ll get to that, don’t worry; but since I wrote Feminism Against Progress I’ve thought some way further into the Pill as inflection point in the arrival of cyborg culture.
More plainly: the Pill was the first mass-market transhumanist technology. It said: we are entitled to break normative health in the name of human desire. A great deal has followed downstream of the Pill that is germane to this conference, including family formation, gender ideology, hostility between the sexes, epidemic loneliness and the second-order political consequences of demographic decline. As we explore these topics I want to zoom out and think about what was going on in that moment after the two world wars, as transhumanism came to seem possible and believable. And I want to read it as, in some respects, a trauma response to the triumph of engineering in what Peter Thiel calls “the world of atoms” – that is, physical machines.
The two World Wars were the climactic frenzy of Europe’s industrial civilisation – and the second of those wars was ended by truth-seekers, who split the atom just to see if it could be done. Robert Oppenheimer, one of the physicists who developed the atom bomb, perfectly expressed the engineering, truth-seeking mindset when he said in 1954: “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”
This is, at its core, the engineer mindset. Engineers want to know: is it technically sweet? And: does it work? The “why” – the “what to do about it”, as Oppenheimer puts it – is for many a secondary consideration to those two questions.
In the case of the bomb, it did work. The consequences were apocalyptic for the people of Hiroshima and Nagasaki; the cultural ripple effects are still with us. Something of the initial response can be gleaned from God and the Atom, published in 1945, in which the theologian Ronald Knox reflected on the cultural and religious impact of the bomb.
Not just the loss of faith, but also what these explosions implied about the telos of the scientific revolution. As Knox saw it, Nagasaki was one possible culmination of a centuries-long arc of scientific and philosophical development, beginning with the nominalist William of Ockham, who rejected the notion of abstract universals for an account of reality beginning with what we can see and touch empirically.
After Ockham, thinkers first bracketed and later discarded the classical and medieval account of the world – including humans – as possessing a nature, and a purpose, in favour of an account of reality focusing on cause, effect, and material substance. This mindset enabled empiricist study of the world. It produced engineering marvels, powerful empires, and a vast corpus of philosophy.
But as Thiel has observed, particularly since the two World Wars we’ve taken a different path: turning away from “the world of atoms”, toward “the world of bits”. Drugs, inner exploration, radical relativism, therapy culture, and innovation only in digital realms. No flying cars! Why? I think this has something to do with our friend Oppenheimer.
What if our turn away from the world of atoms, to the world of bits, was a civilisation recoiling in terror from the cataclysmic achievements of these truth-seeking engineers? My hypothesis is that in response, we turned our technical skills inward and set about re-engineering ourselves. And this is how, in the 1960s, we arrived at the twin engines of the transhumanist revolution: computing and biotech. But as a consequence, it was also the point at which the engineering mindset turned on itself.
That decade saw the very first precursor of the internet, and the first transhumanist technology: the contraceptive pill. Since then we’ve directed these researches toward re-engineering human bodies, and human souls – and the positive intended outcome was, I submit, to ensure nothing like Hiroshima or Nagasaki could happen again.
But to embark on this project we have had to apply the nominalist mindset to humans. That is: discarding meaning and purpose, focusing only on materiality, and cause and effect. It amounts to an effort to liberate us from the givens of our own nature - or even to do away with having a nature. Elsewhere I’ve characterised this stance as “normophobia”: the assertion that nothing about us is natural, and therefore everything is open to remodelling according to whim, or desire, or ideology.
This has, again, led to some potent discoveries. But if you apply nominalism to people, you end up focusing on everything except what people actually care about: meaning and purpose – which in almost all cases means human relationships. And when you scale this kind of applied technicity up to the level of societies, and the governments that order them, this blind spot in the shape of meaning and relationships begins to look like many of the challenges we are discussing at this conference.
The collapse in family formation; the bleeding away of rightly ordered sexuality into hedonism or apathy; the inability to grasp why mass migration is widely unpopular and highly volatile at scale; the disintegration of religious faith. All of these are downstream of an anthropology that rejects human meaning and human purpose, for one that sees only cause, and effect, and materiality to be engineered.
This brings us to the central postmodern claim: “There is no truth, only power”. This is the moral assertion at the normophobic core of our efforts to re-engineer ourselves. This mindset says: I can change sex by fiat, provided I have the support of law and moral consensus. It says: men and women are interchangeable; whole peoples, even, are interchangeable. It says: every family structure is equal, any combination of parents or carers is fine, what matters is love.
One of the more jarring and extreme cases in point is of course gender ideology. But this is easy to point to not because it succeeds, but because it fails: even a toddler can see that this man in a dress is still a man. There are many vastly subtler and more effective examples, including the Pill – which actually does go some considerable distance toward making men and women interchangeable.
But even the Pill fails to make us completely interchangeable. And in practice every application of nominalism to ourselves ends up as a power-grab by administrators. Because when something isn’t true, you need state power and social pressure to make up the shortfall. What happened to James Damore is a case in point: he told the truth that sex differences, while small, are persistent. So he had to be punished, pour encourager les autres.
The quintessential character of the long, post-Hiroshima twentieth century has been the application of nominalist science to ourselves, while multiplying institutional power and managerial bureaucracies to cover the resulting concatenating falsehoods. The kind of people who succeed in this managerial culture are those who prioritise social consensus over truth.
But “No truth, only power” is not just a cultural, moral, and metaphysical dead end. It’s also a technological dead end. And at this point it has turned on the truth-seekers – which means it has begun to destroy the technological achievements upon which its bureaucracies depend.
Think of the HR edict: “Bring your whole self to work”. Anyone who thinks about this for a moment will realise that it isn’t actually an invitation to bring your whole self to work. It’s a trap for truth-seekers.
Most people have enough sense not even to bring their whole self to Christmas dinner with the family, let alone work. The edict is designed, consciously or not, to surface people like James Damore, so they can be offloaded in favour of people who are better at calibrating for social consensus. Over time, then, the aggregate effect of policies like this is to increase the number of consensus-seekers, which is to say those adapted to managerialism, and to decrease the number of truth-seekers.
But the problem with filtering out truth-seekers is that you end up firing your best engineers.
A good engineer has to be interested in truth, otherwise their priority will not be what’s technically sweet, but what’s socially approved, and they won’t be interested in answers to the question: does it work?
We all know what the result looks like. It’s industries where leaders’ incentives no longer align with those of end users – what the writer Cory Doctorow calls “enshittification” – in which you adopt any way at all of making growth metrics go up, including ruining your product. In the world of atoms it’s when windows start falling out of planes. Then finally the managers asset-strip and sell what the engineers built, chalk that up as growth, and move on. This is what business writer Ed Zitron calls “the rot economy”.
But I see truth-seekers fighting back - and what’s driving this is specifically the engineering requirements of AI, and also the affordances of AI as a set of tools.
Some of the most cutting-edge tech firms, especially in AI, have realised the structural problem with giving social consensus too much power. Just recently Elon Musk has made much of how he wants his Grok AI tool to be “maximally truth-seeking”. That phrase struck me forcefully because it echoes what Alexander Karp, CEO of defence tech firm Palantir, says about the profile of their workforce. There is, of course, a range of views on the moral meaning of both Musk’s endeavours, and those of Palantir. But no one can dispute that, technically speaking, if your aim is building things then centring truth-seeking is effective. And it must necessarily follow from this that those individuals who are both very clever and also primarily oriented not toward social consensus but toward truth read, in this context, not as mildly disabled by their lack of managerial “soft skills”, but as possessing a different kind of superpower.
It’s also significant that both Musk’s and Palantir’s re-orientation toward truth seems strongly linked to AI development. To understand why this is, we need to bracket all the speculations about “AGI” and think about what “AI” is as things stand. The term “intelligence” is misleading: generally what we’re referring to with “AI” is powerful modelling and prediction tools based on pattern recognition across very large datasets. That’s a much less catchy phrase but makes it easier to grasp why you can’t make these tools work unless you re-orient toward truth.
That is: unless your dataset is broadly accurate, your modelling and prediction will have no value when you try and apply it. It will just produce gobbledygook. In other words: AI is fundamentally tied to truth, or it’s useless.
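To make that concrete, here is a minimal sketch – entirely a toy of my own invention, not anything from a real AI lab; the dataset, model, and numbers exist only for illustration. The same simple model is fitted twice: once on broadly accurate labels, and once on labels scrambled into pure noise.

```python
# Toy demonstration (illustrative only): a model's predictive value
# tracks the truthfulness of the data it learns from.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic "world": 2,000 observations with a real underlying pattern.
X = rng.normal(size=(2000, 10))
true_weights = rng.normal(size=10)
y = (X @ true_weights > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit on broadly accurate labels: the pattern is recoverable.
honest = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accurate data:", honest.score(X_test, y_test))   # close to 1.0

# Now flip each training label with probability 0.5 -- the labels
# become pure noise, carrying no truth about the world at all.
flip = rng.random(len(y_train)) < 0.5
y_noise = np.where(flip, 1 - y_train, y_train)

fooled = LogisticRegression(max_iter=1000).fit(X_train, y_noise)
print("scrambled data:", fooled.score(X_test, y_test))  # roughly chance
```

However sophisticated the machinery, it cannot conjure predictive value out of data that bears no relation to reality: the second score collapses to a coin flip.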
(As an aside, I suspect this is also an under-appreciated reason why Silicon Valley broke for Trump last year. It wasn’t just the Democrats turning on Big Tech; it was also the demand to make the AI woke, even when this made it work less well. This produced, in essence, a revolt of engineers against being forced to disregard not just what’s technically sweet and what works, but also what’s true: an unholy trifecta that will reliably drive even the most peace-loving individual of this personality type from sullen compliance to furious mutiny.)
With all this in mind, I want to suggest that conservatives in particular need a more fine-grained assessment of AI than just “this is probably scary and bad, or maybe it’s demons” or whatever. There’s much about these tools that I find unsettling. But if the reaction to their development has been apocalyptic in some quarters this is surely due to a general intuition that their emergence signals the end of the inward turn in tech, and a re-orientation to the world of atoms.
Because what we should be watching out for isn’t the moment when the machine becomes conscious (see here for an exhaustive explanation of why I think this has the metaphysics backwards) but the way complex modelling and prediction tools are being applied to machines in the physical world. This is already shaping up a lot more like the sci-fi world we were always promised: SpaceX, robot dogs, armed drones, unmanned factories, and so on. It’s at least potentially a more frightening picture than what Oppenheimer did. And it’s coming.
But while I don’t precisely welcome this, what I think we should welcome is the way AI calls us, as a byproduct of being able to work at all, to re-orient from social consensus to truth - and in the process shifts the balance of innovation back toward truth-seekers, and perhaps also (though really it’s too early to tell) the balance of power in governance. (More on this in future essays.)
And, more broadly, the foundation of these tools in pattern recognition invites pattern back into the metaphysical picture as well. I’ve written elsewhere about how digital reading is already contributing to this profound shift in consciousness; add truth-seekers into the picture, and the nominalist anthropology soon begins to fray. Because – as the martyrdom of James Damore foretold – those who prioritise truth over social consensus have been the first to see, and to insist, that humans still have a nature.
By this I mean that Gestalt of normative features of the human organism and experience which we used to group under the term “human nature”, and which we discarded when we turned the nominalist scalpel on ourselves. Damore noticed that sex difference was still real, despite half a century of transhumanist experiments in making the sexes interchangeable. But it’s not just him. Many young people have also seen that abolishing sex didn’t work. Men and women are still different. Our attempt to re-engineer our own sexed nature is failing – more subtly than “gender affirming surgery” fails, but failing nonetheless.
I expect us to have to learn the hard way, across the board, that our own nature can’t be abolished. The endpoint of experiments in areas such as surrogacy will be much the same: realising that we are the way we are for a reason. I’ll go as far as to predict that failing to create AGI, or to upgrade our own “wetware”, will be what finally forces the metaphysical question of what consciousness is – and that this, in turn, will be what breaks the nominalist consensus. And my bet is that it’ll be the truth-seekers who get there first.
There may well be some Catholics in the room mentally rolling their eyes now and thinking “congratulations, you just invented Thomist metaphysics”. Which: okay. But the point is that there’s a great deal more scope for common ground, and for cooperation, between secular and religious truth-seekers than you might think.
There’s a great deal more to say on the scope AI (maybe) affords to reconcile empiricism and metaphysical idealism - but that’s for another day. For now, we should stop treating these new technologies of pattern recognition as “intelligence” in the human sense, and think of them instead as an escape hatch from the dead-end postwar experiment in trying to re-engineer human nature while leaving the material world to managerialism and stagnation.
Instead we should turn our inventiveness outwards again. And we should order our technologies to the picture of human nature so clearly revealed by the failure of our efforts to re-engineer ourselves.
I’ll wrap up by citing a handful of examples that are indicative of the approach I’m describing:
Fertility: startups using AI and advances in wearable tech to help women who do not wish to use hormonal contraception manage their fertility.
What’s important about this is the shift in stance: away from instrumental mastery in the name of making us interchangeable, toward acknowledging and working constructively with women’s sex-specific cycles and physiology.
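To give a flavour of the kind of signal such tools work with, here is a minimal sketch of my own – not any startup’s actual algorithm, and the readings are invented: the classic “three over six” rule from fertility awareness methods, which flags the sustained rise in basal body temperature that follows ovulation.

```python
# Toy sketch: detect the post-ovulatory temperature shift in daily
# basal body temperature readings using the "three over six" rule.
from typing import Optional

def detect_temperature_shift(temps_c: list[float]) -> Optional[int]:
    """Return the index of the first day where three consecutive
    readings all exceed the highest of the previous six days --
    a common heuristic for a confirmed post-ovulatory shift."""
    for i in range(6, len(temps_c) - 2):
        baseline = max(temps_c[i - 6:i])
        if all(t > baseline for t in temps_c[i:i + 3]):
            return i
    return None

# A flat follicular phase followed by a sustained ~0.3 C rise:
readings = [36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.8, 36.8, 36.9, 36.8]
print(detect_temperature_shift(readings))  # -> 6 (shift begins day 7)
```

Real products layer machine learning, heart rate, and sleep data on top of this; but the underlying posture is the same – read the body’s actual pattern, rather than overwrite it.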
Dating: In the US there’s a startup that uses detailed sex-specific questionnaires and AI matching technology to power an introduction service ordered toward long-term relationships. Again, the framework is pragmatic acceptance of human nature, and the ordering of technology toward human meaning, purpose, and relationship.
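Again, a toy sketch of my own rather than that firm’s actual method: the simplest way to see how questionnaire answers can drive matching is to score pairs by the similarity of their numeric answer vectors.

```python
# Toy sketch: cosine similarity over hypothetical questionnaire answers.
import numpy as np

def match_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two answer vectors
    (e.g. answers on a 1-5 agreement scale)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical answers to five questions, on a 1-5 scale:
alice = np.array([5, 1, 4, 4, 2])
bob   = np.array([4, 2, 5, 4, 1])
carol = np.array([1, 5, 2, 1, 5])

print(match_score(alice, bob))    # ~0.97: similar outlooks
print(match_score(alice, carol))  # ~0.54: very different outlooks
```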
Language learning: This might seem banal, but millions use the language learning app Duolingo, which employs AI components, including voice recognition, as part of its “gamified” teaching. Importantly, it’s not presented as an alternative to human interlocutors: rather, it’s ordered to the ultimate end of communication offline, with other humans, in the acquired language.
And, finally, defence: War is as much part of human nature as family formation. I may not love this fact, but it’s true. The deepest hope of the postwar transhumanist experiment was that we might make war obsolete; you only have to open a newspaper to see this has now comprehensively failed, and we are back in an era of great-power conflict. In the light of this it would be irresponsible of any state with the capacity to improve defence through AI innovation not to do so.
The age now opening out in front of us needn’t contain artificial general intelligence to be both wondrous and frightening. But there’s lots to be optimistic about, not least that the age of relativism is drawing to an end. Actively elevating truth-seekers is still a cutting-edge trend, but I expect it to spread.
But if we want to survive and flourish we need not just the empiricist truth of “does it work” but also the metaphysical one of “is this ordered to human ends”. I think this can be done. But that means, before we begin, the first thing we must do is restore truth to its rightful place, at the heart of how we meet the world.
Thank you.