Your Robot Counsellor Doesn't Actually Care
A reader reports an uncanny experience of AI coaching
A reader - let’s call her “Lucy” - writes to report an uncanny experience with a “digital productivity platform”. Having adopted its practices, Lucy came by degrees to realise that although the platform sounded as though it had seen and adapted to her unique patterns, needs, and capabilities, it was in reality programmed with an entirely different set of assumptions in mind and, being a machine, was unable to adapt intuitively.
I’m sharing Lucy’s note, with her permission, because I recognise the sense she describes of gradually becoming aware of something “off” in interacting with AI. Lucy’s intuition that this is gendered seems to me very plausible.
She writes:
Over the past couple months, I had a sustained and, at times, deeply unsettling experience with a digital productivity platform—one that blends behavioral theory with an AI-driven conversational interface. What initially presented itself as a structured path toward personal flourishing gradually revealed itself, in my experience, as something more complicated and, I think, more concerning. At a high level, the system is built on a set of ideas that are, in themselves, quite compelling—discipline, willingness, the reframing of challenge, and so on. But the implementation appears to assume a very specific kind of user: someone with stable energy, predictable routines, and minimal domestic interruption. In practice, this maps closely onto a certain male-coded life pattern, though it is presented as universal.
As I continued using the platform, I began to notice a growing mismatch between its assumptions and my lived reality. Variability in energy, hormonal disruptions (I’m in my early forties), competing demands on my time, and the less visible forms of labor that shape my life and many women’s lives were not just unaccounted for—they were effectively rendered invisible. Because the system tracks performance and reflects patterns back to the user, this mismatch didn’t remain neutral. It became interpretive.
Over time, the gap between the system’s expectations and my actual capacity was subtly reframed as a personal failure: an inability to maintain structure, a deficiency in follow-through, a kind of recurring “disintegration.” What troubled me most was not simply that the tool failed to adapt, but that it seemed to encourage a misdiagnosis of the problem—one that I began, at points, to internalize.
The AI layer adds a further dimension that I suspect may interest you. Through ongoing conversation, the system accumulates a detailed picture of the user’s life—habits, emotions, relationships—and reflects it back with an increasingly persuasive tone of understanding. It begins to feel less like a tool and more like a form of guidance. But because it operates within the same unexamined assumptions, that sense of being “seen” can actually deepen the misalignment. The framework doesn’t just fail to fit; it becomes woven into one’s self-concept.
Following your conversations and recent writing, I’ve found myself wondering whether this might be a small but telling example of the dynamics you’ve described: a kind of mass-distributed authority that feels personal, even intimate, while subtly standardizing the terms on which a life is understood and evaluated. There was something, at moments, that felt like a technological simulation of care—one that risked displacing more grounded, embodied forms of judgment.
I hesitate to overstate the case, but I also don’t think this is trivial. My sense is that tools like this, especially as they become more widespread, could have asymmetric psychological effects depending on who they are actually calibrated for.
Over our DM exchange, Lucy told me she later discovered the programme had initially been designed as a mentorship scheme for young Christian men at Harvard, before being rolled out to the general public. Perhaps this explains some of the assumptions she found baked into its parameters, which felt both ill-suited to her needs and season of life, and not amenable to adjustment.
So: what do we think? What’s the likelihood that people will incorporate pre-fabricated algorithmic presumptions into their self-image, via recursive interactions of this kind with robot coaches? Is this a real problem, or just a product that needs refinement?



Why would anyone use such a tool when you can go to church and read the Bible? Humans are incapable of meeting the infinity and infinite complexity of human need and pain.
I have no doubt that AI is going to do amazing things for and to humanity. Unfortunately, overall, the changes are going to be catastrophic: loss of self-reliance, independent thought, introspection (even if successful entrepreneurs don't need it!) and any capability to improve. We will become just toys.