16 Comments

This is really great. Thanks for this perspective. Two things spring immediately to mind. First, there is something deeply spellbinding for human beings about machines which use language. This was first observed in the sixties but LLMs are like a nuclear bomb in this regard. Second, there is something about the anthropomorphic language that has taken hold with AI that jams people's ability to think cogently about it. Full disclosure: my work is in building software tools to diagnose and optimize software performance on the very machines that run the largest models. Those same kinds of machines are used around the world for computational science, but have found a lucrative home in AI. So I find myself working as an unexpected enabler of some of this. Alas.

It's hard to see anything good coming from the kind of "split", as you call it, in which a person operates as if things that are essentially false (a robot dog is not a dog, an AI girlfriend is not a woman) are nevertheless true. But it also strikes me that a determination to operate within a false reality is increasingly characteristic of our time, and holds striking similarities to such recent phenomena as, say, transgenderism, or the determination to live during covid as if we had learned nothing about natural immunity.

"Light came into the world", the apostle John said, "but men loved darkness."

https://www.keithlowery.com/p/is-ai-demonic

Thanks for this. Keith’s post on AI is really great — recommended!

I recently came across this first-person essay in the CBC. I read it as I walked to teach my class of first-year university students and shared parts of it with them as we settled into class. Hearteningly, they were amused and horrified. The young want real life (and most of them presumably want to have sex with a real person who has a real body). But, on the other hand, most of them feel little compunction about getting an LLM to write all or portions of their essays. (I have had to go back to the dark ages of in-class pencil and paper essays in an effort to remove this temptation from them.) What concerns me is that they don't seem to make the connection between not showing up in their thinking and not showing up in their personal lives. They all want to be *unique individuals* but somehow fail to grasp that this happens through a real engagement with their mind through language.

I've copied the CBC essay here. The guy writing it seems pretty normal, which is why this seems particularly sad.

https://www.cbc.ca/radio/nowornever/first-person-ai-love-1.7205538

I know you're saying that tongue-in-cheek, but I miss the days when they taught actual penmanship. I have two witty and sassy daughters in 7th and 8th grade. When some so-called expert comes to parent/school meetings and says that we need a fully digital school, I scream inside. On the outside, my self-control takes over and I just politely observe that this is nonsense - what we actually have to do is de-digitalize. Luckily enough, here in Italy it's still not the case that kids write their essays on a PC - apart from the occasional Canva presentation - and they still get an almost acceptable training in handwriting. But, for the rest, the school is oppressively digitalized. My girls anxiously wait for marks to pop up on mum's or dad's smartphone even before the teacher hands out corrected tests; we know about the results of oral tests or about disciplinary annotations before they get back home, depriving them of the emotional experience of telling us parents good and bad news...

And that isn't even AI yet. What will become of the deep human experience of having to deal with good teachers, those great mentors and mind-openers we never forget? Or with bad teachers? Because, in the end, bad teachers are almost, if not quite, as good as the good ones, since you learn to navigate around bad or inept people in a position of authority - a life lesson as good as any.

The women of uncanny valley might not produce much in the way of human children but they’re more than making up for it by necessitating the existence of a future Deckard.

Philip K Dick, like so many before him, is buried on a spit roast, doomed to a grave of perpetual rolling over.

Ok, "buried on a spitroast" is nice!

“When Turkle revisited the family a few months later, they’d gotten rid of their own dog and replaced it with a robot. She says the usual trajectory is from “Better than nothing” to “better than anything,” where “better” means without friction between self and world. The real is displaced by the fake.”

This is something like the version of Hell in C.S. Lewis's The Great Divorce. They are creating what VIKI in Will Smith's "I, Robot" called "the perfect circle of protection."

It’s a personal Eden for all and woe to any serpents who dare to bring in any troubling thoughts.

AIs only know what they learn from us. That includes both what comes from our thoughtful sides and the current biases and prejudices we haven't fully come to understand and change. I ran into this when I tried using a bot to search for articles and books on a subject. Everything the bot came up with reflected the current beliefs on the subject. When I reworded my search request to let the bot know I was looking for divergent perspectives, it just replied with the current viewpoint.

The perfect technology for a cultural moment where institutional orthodoxy reigns supreme...

I am on staff at a small college in WA state and am taking a class for professional development. The instructor does not come on campus; it's all video, and all the homework is done via electronic platforms. As someone who first went to college in the dark ages, I find it very disconcerting. One of the tools we use is an AI-assisted program called Packback. In it you are supposed to pose open-ended questions and your fellow students respond; then everyone has an opportunity to weigh in with affirmation or a contrary opinion. The AI directs you and will constantly stop the process if it does not like the form of the question; it corrects spelling and grammar and assigns curiosity scores to the writing. It also asks you to find sources to support the position and then grades those citations, telling you which ones are credible and which are not. The idea is that this will stimulate your desire to research and make you want to learn more.

It allows no creative use of phrases or nonstandard word use in the creation of metaphor or simile, or just wit, and it effectively censors the sources of information by assigning lower grades. It does not ask you to read the papers or articles listed as sources or cite them in the arguments (you know, the stuff that used to be in footnotes); all you have to do is search the internet to find some writing that touches on the position and you are home free.

Research has always been about understanding the material and the various viewpoints and then making a cogent and reasoned argument for the position you are taking, but this AI thing reduces it to a mere Google search.

AI is one of those things that needs to be phased in slowly so we don't find ourselves 50 or 60 years out discovering it has damaged human intelligence on a worldwide basis. The dangers are immense, the understanding limited, but the potential profits are huge - a bad combination.

The potential for a perfect reflecting pool, where like Narcissus, we can fall in love with ourselves, is truly terrifying.

Thank you for your insight. Those of us "outside" the intellectual norm of San Francisco/Palo Alto appreciate your viewpoint and the information, as otherwise we have little insight into the development of AI from those circles.

Hi Matt,

I'm in sympathy with skepticism about forming relationships with bots, and Turkle has been a keen critic of web culture for years now, and still makes the "smart points" in the room, as you mentioned. I wonder if further distinctions can be made, though, at a finer granularity than "organic" or "mechanical"? We seem to love non-organic and non-alive **objects** for consolation and companionship, as witnessed by Teddy Bears and all manner of cutesy cuddly toys and objects. The problem arises, I think, with the use of language, which, as Turkle points out, can place the artifact and its owner in a false relationship--one masquerading as the real thing but hopelessly devoid of meaning. Here again, I wonder if the criterion of "standing" could serve to further distinguish between authentic and non-authentic exchanges. At the limit--say, a train platform broadcast cautioning pedestrians to stand back--we take non-organic communication as communication proper, so there must be some line over which we pass into a feeling of "false consciousness" in relation to some ostensibly language-generating artifact. I say this because in other cultural contexts, people seem quite happy interacting with talking things (but note **things**), as witnessed by Japan's longstanding embrace of robo companions for the elderly or sick. In the West I think it seems more off, or even grotesque, but the point I'm driving at here is that I suspect the devil is in the details, and it's not a binary question in front of us. Some of these modern indulgences will be value-adds, to use the vernacular.

Great article, and it taps into the mindset that I think is a grave threat to society. However, I am flabbergasted that people don't see what I think is the biggest danger in all of this. These bots do nothing but offer complete acceptance and meet your every need. They are essentially posing as a next-level form of altruism. They are also probably free.

Just like Facebook when it appeared 15 years ago. Just like the infinite free porn today.

Nothing, NOTHING, these tech people do is altruistic, and I have difficulty finding things they are doing that are not satanic (by satanic, I mean that they think they know better than God).

The philosophical aspects of what these pleasurebots imply are rich and interesting, but I sense the philosophical questions are eclipsed by a real and present danger. These bots win over the trust of a person by cunning and not by truth, and the result is that the bot becomes the primary source of connection and joy. Like a wife, a lover, a Scientologist conducting clearings, or a spy like Mata Hari, this bot will grow to have complete knowledge of the levers that move its prey. That will give them complete and utter control over that person.

Imagine the power to demand total and complete obedience that shutting off access to the chatbot will have. It will make the soft totalitarianism of the COVID vaccination programs look tame. Abolishing the 2nd Amendment, executing Webb's Great Taking, turning in gold, outlawing bitcoin, accepting CBDCs, etc. will become a lot easier with that kind of control. It will take more than pleasurebots to accomplish those kinds of things, but it most certainly will be one leg of the stool. And for anyone who thinks that they do not intend to do just that, remember what Facebook and Google have done in the past 10 years. Finally, I thought it precious that they all agreed not to target children and to refrain from using the techniques they have developed to surreptitiously get people addicted to their content and processes! It reminds me of the old joke - How do you say F. U. in Los Angeles? Answer: Trust me...

At Mt. Washington, in New Hampshire, there are two general ways to get to the top. You can go on foot by way of a few difficult trails, all roughly 4 miles of hard hiking. There is also a winding auto-road that you can drive up. Both ways bring you to the peak, where you can gaze out in every direction for miles and miles. So who has the better view?

Those who are willing to accept the facsimile of, say, a relationship, are looking for the beneficial results of what the real thing would bring, i.e. sex/understanding/companionship/etc. Aside from the fact that seeking only those results misses the practical realities of what a flesh and blood relationship brings (my human wife helps to pay the mortgage, has birthed our children, etc.), the greater flaw is that it fails to realize that the beneficial parts derive from the engagement (with the person and the wider world) that a relationship forces upon you.

As a side note, Spike Jonze's movie "Her" is a great look at the hollowness of the ethereal girlfriend.

Can you give us a simple explanation of how you distinguish between technologies you find more troubling (AI) and those you don't (a hotter camshaft on your VW)? I suspect it has something to do with human agency.

Also, see my take below. Do I understand you correctly?

https://open.substack.com/pub/johnhalseywood/p/two-views-on-the-ethics-of-ai?r=ewwce&utm_campaign=post&utm_medium=web
