As I live in the Netherlands, chances are you’ll find me on a bicycle every now and then. And so it was that on a weekday afternoon, I found myself pedaling away into an insistent western wind trying to persuade me to go in the opposite direction.

On this bike ride, which would turn out to be a memorable one, I was listening to a podcast interview. At one point, the interviewee’s keen insights made me smile, and I did what I often do: use Siri to take a quick hands-free note for future reference. So far, so good.

Swinging Sensually on the Song

The interesting idea I wanted to hold on to was this: what we call “rationalizations” (post hoc, often involuntary) and “framing” (proactive, deliberate) are essentially the same: an attempt by our brain to convince us that we’re okay, that we made the right choice, that what we have is better than what we’ve missed out on. Fascinating, but otherwise incidental to this story. Read on!

The routine I went through may be familiar to iPhone-savvy readers: I double-tapped my AirPods, waited for the Siri chime, said “Take a note,” heard “What do you want it to say?”, and then dictated a rough, improvised version of my thoughts.

As a side note, I love these high-tech tricks and hacks because they’re convenient time-savers—but mostly, if I’m being honest, because they give me the feeling that I’m actually living in a world that was pure science fiction when I was a kid. It’s the ultimate “Look, Mom, no hands!” And usually, it works like a charm: Siri takes your note and reads it back to you.

In this case, however, the system’s finely tuned voice recognition was thwarted as the microphone also picked up the rumbling noise of the wind all around me. I cannot remember the exact words I spoke, but this is what Siri jotted down in my new note:

Rationalizations, what time are you swinging sensually on the song: 0792 entrances Lindsay better music reflects. 

Say what? But if at first you don’t succeed…

Take two

That clearly wasn’t what I wanted to say, so I tried again. Already suspecting that the wind noise was the culprit, I decided to go for a pithy, key-word approach. Again, I do not recall my exact words, but this is what my phone turned them into:

Rationalizations, Cocaine, and Cream

The first word was spot-on: score one for so-called AI. But then… cocaine and cream?! Siri, what gives? This sounded like a new Häagen-Dazs ice-cream flavor straight out of some demented alternative universe.

I gave up. By now I’d reprocessed my thoughts often enough that I didn’t need the reminder note anymore. Cocaine and cream it is, indeed. Rationalize that!

A Hallowed Domain

In the moment, I was a bit frustrated at my inability to dictate my thoughts successfully to Siri. But later, reading those two botched attempts to record my speech, I marveled at the complexity of it all.

It is amazing that computer dictation gets so many things right most of the time. Artificial intelligence is nowhere near capable of having actual conversations with us, but the translation of sounds into spelling has come a long way.

Maybe it’s the software’s lack of an actual general intelligence that frees it up to simply brute-force its way into the hallowed domain of human language. Read those sentences again.

  • Rationalizations, what time are you swinging sensually on the song: 0792 entrances Lindsay better music reflects.
  • Rationalizations, cocaine, and cream.

No human being in their right mind would ever write these words. Not while sober, anyway. But they are also not some random lucky draw from a dictionary database. 

This is, at best, a somewhat structured but ultimately miscarried attempt to do what ordinary people do non-stop from the time they are toddlers: convert an ongoing series of sound vibrations into meaningful symbolic representations of coherent ideas about physical reality. It’s as if a blind person were trying to describe the image in a painting simply by feeling the texture of the paint strokes with their fingertips.

Epic Fail?

This little adventure in the land of voice recognition was humbling, in a way. Such “malfunctions” throw into relief the fundamental lack of humanity of the tech that surrounds us.

It is the purpose of technology to extend the reach of human capabilities. The invention of the wheel let us transport heavier objects over greater distances. The advent of the bow and arrow let us hunt prey—and humans—with greater accuracy and effectiveness.

But at its best, technology does more than enhance our potential: it can be a transformative force that opens up new perspectives on the world. The microscope and the telescope have let us peer into worlds well beyond the range of the human eye. And once we had unveiled the realm of the micro-organism, or the existence of faraway galaxies, there was no going back. The mental landscape in which we situate ourselves had changed irrevocably.

Rationalization Redux 

As things stand, Siri et al. are not quite there yet. The arresting juxtaposition of words it distilled from my windblown dictation is strangely evocative, but it makes no sense. It couldn’t have—algorithms don’t do “sense”; they simply process inputs. Even when the voice recognition perfectly transcribes every word I say, it still doesn’t know what it means.

Conversely, it is our own attempts to rescue some semblance of relevance from the jumbled word soup that make reading it so jarring. We want it to mean something; we need it to make sense. And despite our best attempts at rationalization, possibly with some cocaine and cream, it doesn’t.

Now please excuse me while I go swing sensually on the song.

• • •

Top image credit: They listen to the gramophone by Vladimir Makovsky (1910)

Father, son, husband, friend and writer by day; asleep by night. Happily pondering the immortality of the crab wherever words are shared.
