The Myth Of Cybernetic Regret

Over at Fast Company, Ariel Schwartz describes the work of Dr. Sophie Wang on adaptive robotic manufacturing systems. Dr. Wang’s robotic system can assess the efficiency of its own process and change it to reduce waste. It can also respond when humans indicate that it may take over new aspects of the manufacturing process.

Schwartz’s catchy headline, however, is “Robots Can Now Understand Trust And Regret: Is Your Roomba Looking At You Funny?” Where does regret come into the story?

Schwartz writes that, “Wang is also teaching robots to understand regret using mathematical formulas. In the real world, that might mean that a robot needs to calculate how much regret it would feel picking up the wrongly shaped object (presumably because the shape wasn’t clear using the robot’s onboard camera)—and if the risk is worth it, it’ll pick it up. Seems similar to how human brains work, when you think about it.”
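To make that arithmetic concrete, here is a minimal sketch of the kind of calculation Schwartz seems to be describing. Every name and number in it, the functions, the penalty values, the camera confidence, is an illustrative assumption, not Dr. Wang’s actual model:

```python
# A minimal sketch of an expected-regret decision for an ambiguous
# object. All names and numbers here are illustrative assumptions,
# not Dr. Wang's actual model.

def expected_regret(p_right_shape, cost_wrong_pick, cost_missed_pick):
    """Return (regret if we pick, regret if we skip) for an ambiguous object.

    p_right_shape: the camera's confidence the object is the right shape
    cost_wrong_pick: penalty for picking up the wrong object
    cost_missed_pick: penalty for passing up the right object
    """
    regret_if_pick = (1 - p_right_shape) * cost_wrong_pick   # grabbed the wrong thing
    regret_if_skip = p_right_shape * cost_missed_pick        # passed up the right thing
    return regret_if_pick, regret_if_skip

def should_pick(p_right_shape, cost_wrong_pick=5.0, cost_missed_pick=1.0):
    """Act only when acting carries less expected regret than holding back."""
    pick, skip = expected_regret(p_right_shape, cost_wrong_pick, cost_missed_pick)
    return pick <= skip

# A blurry camera reading leaves the robot only 60% sure of the shape:
print(should_pick(0.60))  # False: the "risk" is not worth it
print(should_pick(0.95))  # True: confident enough to grab it
```

Note what the sketch actually contains: probabilities and penalties, nothing more. The robot is not feeling anything; it is doing bookkeeping, and that distinction is what the rest of this piece turns on.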

Trust is a fairly simple thing to measure, because it centers on externally communicated interaction. Trust is about rules established between two entities. It’s a series of if-then statements, something that computer programmers know how to work with.
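A toy version of that claim might look like the following. The rule set and its weights are entirely hypothetical; the point is only that trust, framed as rules, is something a programmer can encode directly:

```python
# A toy encoding of trust as explicit if-then rules over a history of
# interactions between two parties. The rules and weights are
# hypothetical, chosen only to show that this framing is programmable.

def trust_score(interactions):
    """Score trust from a history of (promised, delivered) pairs."""
    score = 0
    for promised, delivered in interactions:
        if promised and delivered:
            score += 1   # a kept commitment builds trust
        elif promised and not delivered:
            score -= 2   # a broken commitment erodes it faster
    return score

history = [(True, True), (True, True), (True, False)]
print(trust_score(history))  # 0: one broken promise undoes two kept ones
```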

Regret, though, is different. Regret is not the opposite of trust. It’s an emotion, not a set of rules.

Dr. Wang’s robot may be able to work with programmed models of regret, “to calculate how much regret it would feel” if it could feel. Robots, however, cannot feel. Robots can operate according to complex rules that simulate regret, but they cannot “understand trust and regret”, because understanding trust and regret requires the ability to understand states of mind, and only conscious beings can understand states of mind.

Schwartz’s mistake is worth noting because it is, in reverse, the error of corporations that have decided to depend almost exclusively upon the quantitative models of reality that Big Data makes possible. Big Data is impressive and useful, just as Dr. Wang’s adaptive robots are. Big Data is not going to replace the work of humans, however, any more than Dr. Wang’s robots will. The reason is the same: both can measure and assess what humans do, but neither can reach the actual experience of being human.

As much as economists like to reduce our marketplace activities to rational algorithms, at the heart of our humanity is the fact that we are motivated by complex psychosomatic systems that lead us away from the straight lines of maximum efficiency. Often, our emotions are their own reward.

Marketing is for human beings. It requires more than mere human activity. It requires human engagement.

The mistake of the more zealous architects of Big Data is to conclude that, if we have enough information, insight becomes irrelevant. They seek to build intricate models of human behavior, and to provide regimens of reward to turn those models to their own uses.

However, they forget that in order to have money to spend, human customers need to work. In their work, human beings need more than the set of algorithmic parameters that Dr. Wang’s robots could work with. People need to feel that their work has meaning. They need to live within cultures, complete with myths and rituals. Likewise, as consumers, we seek to purchase meanings, to engage in acts of connection, and even in acts of significant sacrifice.

A world in which a future generation of Dr. Wang’s robots does all the work, and in which Big Data systems direct our spending, would be like a playground where all the actual playing is outsourced to automatons while the children simply sit and watch.

Such a world would break our trust, and fuel regret beyond calculation.


