Photo by Arseny Togulev on Unsplash

Reply hazy, try again.

Henner Hinze
Jan 17, 2020


September 6, 2029

When does paranoia become a reasonable doubt? When does trust serve only comfort? The answer used to seem easy.


Many years ago, when my son was about four years old, he suffered from chronic fatigue and was sick quite often. He kept us constantly worried. Still, the doctors couldn’t find anything and dismissed all his symptoms as mere growing pains. They prescribed vitamins, diets, physical activity, and waiting. Nothing helped.

Eventually, at yet another routine check-up, we were finally told there had been new developments. The doctor received us in his office, staring at his notepad. “The hospital has recently acquired an advanced automated diagnostic support system. I ran it on your son’s record, and it has proposed a set of tests we haven’t tried before. To be clear, this procedure is outside the accepted standard … and, to be frank, I can’t even tell how it relates to your son’s symptoms. But the system can connect a vast amount of diagnostic knowledge to make its recommendation — so it might just be smarter than me.” He smiled, apparently not quite sure whether he was joking. “In any case, the proposed tests are minimally invasive, and with your permission, I’d like to give it a try.”

More tests were ordered. It turned out that my son had a very rare but treatable hormonal condition. He still has to take medication daily, but otherwise he’s doing great.

This kind of diagnostic system wasn’t universally available in hospitals back then. Not like now, when it is mandatory to have every diagnosis double-checked by one. Certainly, computer-aided diagnostics had been around for quite a while, but this one employed the Gödel Algorithm — the ultimate breakthrough in automated decision-making.

The algorithm had been introduced just a few years earlier, in a completely unrelated field:

The groundwork for the Arbitration and Decision Advisor (ADA, in allusion to the great logician, I believe) was laid when the G7 decided to establish the International Arbitration Court for Commerce. This required judges to decide highly complex cases in trade, environmental and intellectual property law spanning multiple jurisdictions. Without automated support at scale, the sheer amount of material to assess and all the possibly contradictory local regulations to be considered seemed to make the enterprise futile even for large teams of experts. Surely, governments’ concern about systematically receiving unfavorable decisions helped make the case for machine objectivity.

The development was planned as a three-year multinational collaboration of universities and some of the tech giants. Finally, the ADA, with the Gödel Algorithm at its heart, was proudly presented to the public. Reportedly, it represented a breakthrough in AI research — an algorithm whose decision-making capabilities surpassed any human’s.

From the first day of service, the ADA was under the scrutiny of human judges and experts eager to point out any flaw or trace of bias in the system.

However, it soon became clear that most judges had come to accept the algorithm’s proposals without further questioning or effort of their own. Regardless, any concerns brought forth had to compete with the economics of the legal industry. Thus, the ADA quickly saw adoption in many national courts. Legal procedures became faster, more affordable and arguably more just.

Although worries about fully automating human judgment and empathy were voiced from progressive and conservative camps alike, all efforts to reveal biases of any kind in the Gödel Algorithm’s decisions remained fruitless. By now, human judges only symbolically supervise the court systems. Anything from traffic violations to high crimes is decided purely on the facts, unfazed by human prejudice.

Soon after, the ADA was made commercially available. Banks and investors have been using her as an advisor for large-scale economic and financial decisions. Doctors have been employing her for diagnostic supervision. But ADA also became an invaluable helper in everyday questions: Should I do the laundry or just buy new underwear? Coke or Pepsi? Should I start dating Karen from finance? I did — dated and married, still happy. ADA knew better than any horoscope.

Everybody embraced her enthusiastically, earning her the endearing nickname “Magic 8-Ball”.

In the early days, she was of course not always right. She wasn’t, after all, made for the variety of applications she came to find herself in. But more data and more training — the experts said — would let her make ever more precise decisions in the long run. She proved them right.

The impact of her superior decision-making capabilities on our everyday lives and on society as a whole is hard to overstate. Finally, we no longer had to choose between relying on our gut feeling and tedious research — no more sleepless nights over the unforeseeable consequences of our own decisions. ADA could plan further ahead than any other system, and certainly further than any human.

Sometimes her decisions might seem irrational to us, maybe even outright wrong. But sooner or later, all of them have led to beneficial outcomes that few saw coming.

In one prominent incident, ADA, by then already in charge of military operations, decided to send a squadron of drones to bomb a civilian hospital during a conflict somewhere in the Middle East. Videos went viral showing the area laid to waste, hospital beds buried in the rubble, a few dead, many injured. Nobody was willing to take responsibility, and nobody was able to explain. The public outcry almost forced ADA into an untimely retirement. That was until the clean-up teams discovered that the hospital had been used as a front by local separatists to store an arsenal of long-range missiles. Nobody could tell what had tipped her off. But the naysayers finally fell silent.

We had to accept that she knew better and could handle greater complexity than any human institution. Sometimes short-term sacrifices have to be accepted for long-term benefits — the machine works in mysterious ways. I’ve always taken comfort in knowing that, in the long run, everything will pan out for the better. I’ve been sleeping more peacefully, and I’m sure so have many others.

Now, eight years after her introduction, ADA could take the next big leap: the UN has started negotiations on a proposal to put ADA completely in charge of international diplomacy. For too long, the fate of whole populations had been decided by the whims and shortsightedness of a powerful few. ADA would make sure that international diplomacy and relations would be conducted rationally and objectively, for the benefit of all equally — for a future of global wealth and prosperity.

That is, if it weren’t for the incident that might bring all proceedings to a screeching halt.

Three days ago, one of the former leading developers of the original Gödel Algorithm, the brain of ADA, dropped a bomb on public TV. This is a transcript of what he stated:

“… we just couldn’t pull it off. It looked so promising. But after almost three years, we still couldn’t make it work reliably. We began to run out of money, and the investors started to get nervous. As the day of the public launch drew closer, we made a judgment call. It was just for the demo. Nobody would notice, we thought, and nobody would expect it to be perfect. So, we replaced the core decision engine with a random number generator … it just throws out a 0 or a 1 — based on the time of day, on the weather, maybe on some cosmic ray hitting an electronic component just the right way. Who knows? The demo went down great. Everybody was impressed. The investors were hyped. We thought that would buy us some time, some more investment, to fix it for real. But as it was, it seemed to work so well that nobody could be persuaded to delay the deployment, to delay their return on investment. Everybody was so eager to take the credit.

At some point, we were simply not allowed to talk about it anymore … and we were compensated well for that. But finally, I had to speak out … with the discussions going on in the UN … we can’t allow it to engage in wars, at random … with us in the middle … can we?”

He was immediately discredited as a liar seeking attention. According to public speculation, he either has mental issues or connections to the North Korean government or to Anonymous, maybe all of the above.
I guess we will never find out. He was found dead just a day later. As for the cause, the investigation is still ongoing.

Maybe everything he said is a lie; maybe it’s not. Could we go back now? Do we still know how to make decisions on our own? Could we just collectively ignore what he revealed? So far, it has all been working so well. We’ve been doing so great.

Maybe we’ll be fine. I just don’t sleep as easily anymore.
