4 Comments

The Cyborg and Centaur analogies hit the nail on the head – it's all about striking a balance between what machines can do and our own creative mojo. Giddyup.


Hopefully, we'll use a tamed horse to build our centaur… This, I think, is the main danger, with respect to the halting problem. Does the horse know when to stop? Why would it stop? Jewish culture defines an ideal point of equilibrium between chutzpah and shame. What if we build a machine with super-chutzpah and zero shame? It seems we are already on this path. Even if the pure logic of a situation may produce some “ideal” solution, like the infamous “final solution” enforced by the Nazi mob during WW2, non-psychopathic humans can at any moment ask themselves: am I not going too far? Such a feeling is rooted in non-logical ethical values and may override pure reasoning, relying simply on our “common sense”. In all the examples I have heard of blatant errors committed by an AI, what always seems to be lacking is the most basic “common sense”. How can we code that? Is it even possible to code that?
