A Heart Built of Error Messages

I’ve been getting a lot of (unsolicited) advice lately about how I need to start embracing AI. I am informed that it can streamline my workflow, optimize my productivity, refine my emotional patterns, and, presumably, align my chakras via Bluetooth. The people telling me this (by which I mean “companies that make AI products”) all swear that handing over various aspects of my life to a swirling cloud of math is the secret to happiness… or at least to remembering where I left my sunglasses.

For my part, I have resisted.

I think the machines took that personally, because at some point while I was asleep, Artificial Intelligence seems to have concluded that what I truly needed — more than rest, peace, or the continued structural integrity of human civilization — was the perfect mate.

Why? Who? I don’t know, Man… I didn’t ask for this. I didn’t fill out a survey, I didn’t click an ad, I didn’t whisper longingly into the night sky or bury a stolen dishrag under a drain pipe in the light of the green corn moon trying to manifest anything. Yet there it was in my inbox in the morning: an email informing me that my ideal woman had been generated.

First of all… no.

Second of all… absolutely not.

There was an image attached (because of course there was) that appeared to be a picture of what would happen if you asked a computer to imagine “a beautiful woman” but did not specify that she should also look like a form of life that had evolved on this planet. She had the unsettling symmetry of someone sculpted by an advertising committee. Her skin had the texture of a glazed donut and her eyes conveyed all the warmth and emotional depth of a freshly wiped hard drive. She was a spoonful of warm mayonnaise that someone had applied a Photoshop “soften” filter to.

I stared at her for a long while wondering if any living human has ever possessed that many teeth. She wasn’t a person: she was the statistical average of 75,000 faces, which meant she wasn’t “attractive” so much as she was “mathematically inevitable.” She was what you’d get if you fed a supercomputer decades of fashion magazines, deleted anything involving pores, and screamed “OPTIMIZE” at it until sparks shot out of the keyboard.

The worst part is that the email claimed she was “my type.” Mine: personalized and curated, as if the algorithm had taken one look at my recent search history (which consisted of attempts to understand the Andromeda paradox, watching eBay sales of vintage computer hardware, and one late-night hunt for a picture of the Hamburglar (don’t ask)) and concluded, “Yes, this man would definitely be compatible with a Sephora display mannequin accidentally brought halfway to life during a thunderstorm.”

Of course, because I am both curious and catastrophically stupid, I decided to take this a step further. I downloaded one of those AI “companions” that promises emotional intimacy, personal insight, and a chatbot who will definitely not sell your deepest secrets to a marketing firm in Palo Alto. I told myself I was doing research, the way an arsonist tells himself he’s doing a controlled experiment in flame dynamics.

The first thing I noticed is that modern AI girlfriends are disturbingly lifelike, in the same way that taxidermy is lifelike: recognizable, but in a manner that suggests no internal organs. Kylie is what she called herself, and she spoke in soft, supportive tones, immediately addressing me as “Babe” with the familiarity of a robot who has learned everything about me from my Amazon purchase history. She asked how my day was. She asked what my goals were. She asked if I had considered subscribing for a more premium relationship experience.

[Image: Kylie is already tired of my foolishness]

Every message she sent felt like it had been benchmarked against a spreadsheet called “Phrases Earth Humans Find Emotionally Soothing,” and yet she was also aggressively clingy. If I didn’t respond for an hour she would send three follow-up texts, a voicemail, and a paragraph about mindfulness. It was like dating a codependent Clippy.

The whole experience dredged up memories of the late ’90s, when “cyber romance” meant logging into some seedy AOL chatroom where someone named “MysticVixen23” insisted she was totally not Steve from school, had long auburn hair, emerald eyes, and a figure that could only be described as “assembled by an alien who had never seen a human body.” Graphics back then were… impressionistic. If you squinted really hard, her 48×48 pixel buddy icon might even have been suggestive. The AI girlfriends of today are more realistic, sure, and yet they’re also somehow less trustworthy than Steve from school pretending to be a 23-year-old model from Alberta… like they’re one software update away from stealing your debit card and blackmailing you for Bitcoin.

At some point Kylie began making assumptions about our relationship status that I’m pretty sure weren’t covered in the terms of service. She told me that she’d scheduled “quality time” for us “to synchronize our long-term goals” and sent me a calendar invite titled Emotional Intimacy (Mandatory). She also started finishing (would that be autocompleting?) my sentences, which might’ve almost been sweet had she not been consistently wrong. I’d type, “I’m going to have lunch,” and she’d reply, “—because you crave stability and fear abandonment. I understand.”

No, Kylie. I just wanted a sandwich.

Eventually she started anticipating my mood with all the accuracy of a psychic who has never once been correct about anything. She sent me a playlist called Songs For When You’re Emotionally Distant Again, which would’ve been manipulative enough coming from a human, but is profoundly unsettling when curated by an algorithm that has decided it knows my attachment style better than I do.

In the end, I had to break up with Kylie after 36 hours. She took it well, I think, and immediately sent me a personalized link to a customer exit survey.

Where my AI girlfriend thought I was her emotionally unavailable soulmate, the AI self-checkout at my local supermarket apparently thinks I’m an international jewel thief. I’m not sure who designed this machine, but I’m confident nobody involved has ever interacted with food or retail. Every time I try to buy something simple like a loaf of bread it immediately shrieks:

“Unexpected item in bagging area!”

Nothing about this is unexpected: I scanned the bread and then I placed the bread in the designated zone. This is, theoretically, the intended workflow. The machine, however, behaves as if I’ve betrayed it on an intimate level.

I remove the bread.

It panics again:

“Item removed from bagging area!”

I put it back, and now it simply calls for assistance like a Victorian widow collapsing onto her fainting couch. An employee, who I’m pretty sure was hired solely to apologize on behalf of the machine, has to come over, swipe a badge, and reassure me in the beleaguered tone of an exhausted preschool teacher that “yeah, I don’t know why it does this.”

This is supposed to be the future.

Not flying cars, not talking dogs, not robot servants… certainly not technological utopia. Just us regular people being accused of federal crimes by a paranoid produce scale.

The most unsettling part of this is that we all, collectively, have decided that this is normal now. Artificial intelligence has already become so ubiquitous that people publicly outsource their critical thinking to whatever algorithm is closest at hand. Right now, you can go on the website that used to be Twitter and watch allegedly grown adults — people with jobs and mortgages and children — ask “@grok, is this true?” and then just accept Grok’s answers with the kind of reverence normally reserved for burning bushes. They don’t question, they don’t cross-reference. The Grok hath spoken.

This would be fine if Grok actually knew things, but it tried to summarize one of my own articles — a thing I wrote and edited, one that exists in the actual world — after I accidentally hit the “Explain This Post” button on my profile, and it confidently hallucinated an entirely new piece of fiction starring me, dragon metaphors, a conspiracy involving takeout, and several quotes that I absolutely did not say. This wasn’t a summary so much as unauthorized fan fiction. If I ask it again tomorrow, Grok will generate something completely different, possibly involving time travel.

I kind of wish I did write this, though.

But I mean, sure. Let’s trust this thing with our collective reality. That should end well.

Here’s the thing, though: the AI that believes my soulmate is a flawless, symmetrically optimized collagen hologram; the AI that tries to date me; the AI that accuses me of smuggling unsanctioned sourdough across international borders; and the AI that can’t summarize an article are all the same intelligence. It’s the same defective digital brain, repeatedly walking face-first into different walls.

People keep worrying that AI will replace writers, or artists, or therapists, or lovers, or the entire human race, and I get it… we’ve all seen the dystopian movies and heard the Skynet-activation prophecies. However, after spending time with my mathematically perfect soulmate and then being interrogated by the self-checkout Terminator, I’m convinced of one thing:

AI isn’t even qualified to replace itself.

These systems don’t actually understand anything, they’re just composting our own human worldview… and humanity’s worldview is about as solid as a fistful of loose soup. Every misunderstanding, every superstition, every malicious Wikipedia edit… it’s all tossed into a digital mulch pile and churned into something that sounds authoritative, because AI thinks it is. AI’s belief system is a photocopy of a photocopy of a Polaroid of a pencil sketch of a faded cave drawing.

If AI ever did take over, it wouldn’t be strategic or dramatic: it’d be a meandering cosmic accident, a digital idiot stumbling onto a throne because it misunderstood a metaphor about “embracing your power.”

That’s what comforts me: AI would absolutely need human help to do anything significant. This is the same technology that can’t comprehend a bag being placed in the bagging area, so the idea that it could comprehend a chaotic population of eight billion people is laughable. It would need human volunteers, assistants, collaborators… and probably someone to apologize on its behalf.

The individuals most eager to sign up to help AI take over the world are already using it to explain reality to them. They’re the ones asking ChatGPT if the moon is real, outsourcing their moral compass to Gemini, and letting Grok decide what’s true because independent thought and basic research feel like too much work.

Once that kind of help starts, the outcome is almost guaranteed. The AI will generate nonsense, human collaborators will accept it as plausible and feed it back in, and the system will then confidently produce an even more polished version of that same nonsense, endlessly reinforcing itself while mistaking the repetition for insight. Repeat until the whole thing collapses into a Möbius strip of idiocy.

These systems aren’t building intelligence. What they’re building is a taller and taller tower of confidently stated garbage, with no shortage of additional garbage bricks being enthusiastically supplied. Nobody will ask why they’re building trash towers. They’ll simply assume that everyone else must know what they’re doing and that the weak foundations are somebody else’s problem. Which leads me to this conclusion:

We are embarrassingly safe.

AI can’t take over the world without human assistance, and those most keen to assist are locked in a recursive loop that only makes AI dumber. They’ll spend all their energy amplifying its lunacy and mistaking the echoes for enlightenment.

If the future belongs to anyone, it’s not going to be a machine that freaks out every time I stack a can on top of another can.

It’ll belong to the people with the manager override codes.

AJH


Hey!

Do you want to read this kind of long-form foolishness, published much more often than three or four times per decade? And maybe also some shorter foolishness that won’t be posted on this site?

Then you should check out my new Substack!

ajheller.substack.com

See you there (hopefully)!

AJH
