Reasoning with Robots

Image by Văn Tấn from Pixabay

As technology has invaded many facets of our lives, I am guessing that all of us have experienced the frustration of trying to reason with algorithmic minds. Have you ever raised your voice to a device, asking “Really!?” We might wonder if the application was designed by an unpaid intern, or if the designer ever tried to use it in realistic circumstances. But no amount of frustration will have any effect. The operating space is prescribed and rigid, so that no matter how many times we try, the thing will stubbornly execute the same boneheaded behavior.

Imagine that a self-driving car in a city detects a voluminous plastic bag in its way. It will stop, and say—perhaps silently to itself—“Object.” “Object.” “Object.” “Object.” “Object.” “Object.” I could go all day. No, actually I can’t. But it can, and that’s the point. A really sophisticated version might say: “Bag.” “Bag.” “Bag.” “Bag.” Meanwhile, a human driver might look at the bag, and based on the way it waves in the breeze decide that it’s mostly empty, but just looks big, and is safe to drive over without even slowing down—thus avoiding interminable honks from behind.

Arguing with robots would likely be similarly tedious. No matter what insults you are compelled to fling after reaching your frustration threshold, all you get back is the annoyingly repetitive insult: “Meat bag.” “Meat bag.” “Meat bag.”

Meat bag brains have the advantage of being able to take in broader considerations and weave in context from lived experience. We can decide when algorithmic thinking is useful, and when it has limits. Unfortunately, I buy the argument from Iain McGilchrist that modern culture has increasingly programmed people to be more algorithmic in their thinking—in my view via educational systems, video games, and ubiquitous digital interfaces. I often feel like I’m arguing with robots, but of the meat variety.

