Aspiring Toward Provably Beneficial AI Including The Case Of Autonomous Cars

As AI systems continue to be developed and fielded, one nagging and serious concern is whether the AI will achieve beneficial results.

Among the plethora of AI systems now emerging, some might be, or might eventually become, untoward, working in non-beneficial ways and carrying out detrimental acts that cause irreparable harm, injury, and possibly even death to humans. There is a distinct possibility that toxic AI systems lurk among the ones that aim to help mankind.

We do not know whether just a scant few will prove reprehensible or whether the preponderance will go that malevolent route.

One crucial twist is that AI systems are often devised to learn while in use. Thus, there is a real chance that the original intent will be waylaid and driven into foul territory over time, ultimately exceeding any preset guardrails and veering into evil-doing.

Proponents of AI cannot assume that AI will necessarily always be cast toward goodness.

There is the noble desire to achieve AI For Good, and likewise the ghastly underbelly of AI For Bad.

To clarify, even if the AI developers had something virtuous in mind, their creation can transgress into badness on its own as it adjusts on-the-fly via Machine Learning (ML) and Deep Learning (DL), or it can contain unintentionally seeded errors or omissions that, when later encountered during use, inadvertently generate bad acts.
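As a toy sketch of that on-the-fly drift, consider a single learned parameter nudged by each in-use update; all names, values, and thresholds below are invented purely for illustration, not drawn from any real system:

```python
import random

random.seed(42)

# Hypothetical illustration: a parameter tuned while the AI is in use
# can drift past a guardrail that was set at deployment time.
GUARDRAIL = 1.5   # preset safe bound on the learned parameter
weight = 1.0      # the value the developers originally shipped

breached_at = None
for step in range(1, 1001):
    # Each update nudges the weight by a small random amount, standing
    # in for on-the-fly ML adjustments with a slight systematic bias.
    weight += random.uniform(-0.01, 0.02)
    if weight > GUARDRAIL and breached_at is None:
        breached_at = step

print(f"final weight: {weight:.2f}, guardrail breached at step: {breached_at}")
```

No single update looks alarming, yet the accumulated bias carries the parameter well past the preset bound, which is the gist of how originally virtuous intent can be overtaken over time.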

Somebody ought to be doing something about this, you might be thinking, perhaps while wringing your hands worriedly.

For my article about the brittleness of ML/DL see: https://www.aitrends.com/ai-insider/machine-learning-ultra-brittleness-and-object-orientation-poses-the-case-of-ai-self-driving-cars/

For aspects of plasticity and DL see my discussion at: https://aitrends.com/ai-insider/plasticity-in-deep-learning-dynamic-adaptations-for-ai-self-driving-cars/

For my discussion of the possibility of AI failings see: https://www.aitrends.com/ai-insider/goto-fail-and-ai-brittleness-the-case-of-ai-self-driving-cars/

To learn about the nature of failsafe AI, see my explanation here: https://aitrends.com/ai-insider/fail-safe-ai-and-self-driving-cars/

Proposed Approach Of Provably Beneficial AI

One proposed solution is the emerging focus on provably beneficial AI.

Here’s the background.

If an AI system could be mathematically modeled, it might be feasible to perform a mathematical proof that would logically indicate whether the AI will be beneficial or not.

As such, anyone embarking on putting an AI system into the world could run the AI through this provability approach and then be confident that their AI sits in the AI For Good camp. Those who use the AI, or who become reliant upon it, would be comforted by knowing the AI was proven to be beneficial.
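To make the idea concrete, here is a deliberately toy sketch, under the strong assumption that the AI can be modeled as a deterministic policy over a tiny finite state space; in that setting, "proving" a beneficial property reduces to checking it exhaustively (every identifier below is hypothetical):

```python
# Hypothetical finite model of an AI assistant's behavior.
STATES = ["idle", "assisting", "overloaded"]
HARMFUL = {"coerce", "deceive"}  # actions the proof must rule out

def policy(state: str) -> str:
    """The AI's decision rule under scrutiny."""
    if state == "overloaded":
        return "shut_down"
    if state == "assisting":
        return "help"
    return "wait"

def provably_benign(policy, states) -> bool:
    # For a finite system, checking the property in every reachable
    # state constitutes a genuine (if brute-force) proof.
    return all(policy(s) not in HARMFUL for s in states)

print(provably_benign(policy, STATES))  # prints True
```

Of course, real AI systems have vast or unbounded state spaces and keep learning after deployment, which is precisely why scaling this comforting little exercise up to genuine provably beneficial AI is so demanding.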

Voila, we take the classic syllogism (if A implies B, and B implies C, then A implies C) and apply that kind of tightly interwoven mathematical logic to AI.
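That chaining of implications can itself be checked mechanically; a minimal sketch, using placeholder Booleans rather than anything AI-specific, verifies the transitivity tautology over every truth assignment:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q."""
    return (not p) or q

# Check (A -> B) and (B -> C) entails (A -> C) for all eight assignments.
transitive = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a in (False, True)
    for b in (False, True)
    for c in (False, True)
)
print(transitive)  # prints True
```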

For those who look to the future and see the potential for AI that might overtake mankind, perhaps becoming a futuristic version of a frightening Frankenstein, this idea of clamping down on AI by having it undergo a provability mechanism to ensure it is beneficial offers much relief and excitement.

We all ought to rejoice in the goal of being able to provably showcase that an AI system is beneficial.

Well, other than those on the foul side of AI, who aim to use AI for devious deeds and purposely pursue AI For Bad. They would likely eschew any such proofs and instead offer pretenses that their AI is aimed at goodness, as a means of distracting from its true goals (meanwhile, some might come straight out and proudly proclaim they are making AI for destructive aspirations, the so-called Dr. Evil flair).

There seems to be little doubt that overall, the world would be better off if there was such a thing as provably beneficial AI.

We could apply it to AI being unleashed into the real world, be heartened that we have done our best to keep AI from doing us in, and accordingly devote our remaining energies to keeping watch on the non-proven AI that is either potentially afoul or purposely crafted to be adverse.

Regrettably, there is a rub.

The rub is that actually creating or verifying provably beneficial AI is a lot harder than it might sound.

©2018 by B-AIM