 

Why do safety requirements like to discourage use of AI?

Safety requirements seem to disfavor systems that use AI for safety-related functions (particularly where large potential risks of destruction or death are involved). Can anyone suggest why? I always thought that, provided you program your logic properly, the more intelligence you put into an algorithm, the more capable it is of preventing a dangerous situation. Are things different in practice?

asked Nov 26 '22 by Dmitri Nesteruk

2 Answers

Most AI algorithms are fuzzy -- typically learning as they go along. For items of critical safety importance, what you want is deterministic behavior. Deterministic algorithms are easier to prove correct, which is essential for many safety-critical applications.
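To make the contrast concrete, here is a minimal sketch (the function name and pressure limit are invented for illustration) of the kind of deterministic rule safety standards favor: the same input always produces the same output, so the check can be exhaustively tested at its boundaries and reasoned about formally.

```python
PRESSURE_LIMIT_KPA = 350.0  # hypothetical limit, chosen for illustration only

def emergency_shutdown_required(pressure_kpa: float) -> bool:
    """Deterministic interlock: identical inputs always yield identical
    outputs, so every behavior can be enumerated, tested, and proven."""
    return pressure_kpa >= PRESSURE_LIMIT_KPA

# The rule's entire behavior is covered by a couple of boundary tests:
assert emergency_shutdown_required(350.0) is True
assert emergency_shutdown_required(349.9) is False
```

A learned model standing in for that comparison offers no such exhaustively checkable contract.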

answered Dec 10 '22 by tvanfosson


I would think that the reason is twofold.

First, it is possible that the AI will make unpredictable decisions. Granted, those decisions can be beneficial, but where safety concerns are involved you can't take risks like that, especially when people's lives are on the line.

The second is that the "reasoning" behind the decisions can't always be traced (sometimes a random element is used in generating an AI's results), and when something goes wrong, not being able to determine "why" (in a very precise manner) becomes a liability.
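As a toy illustration of that traceability problem (the policy below is invented, loosely modeled on the epsilon-greedy exploration used by some learning agents), a decision that depends on an unrecorded random draw cannot be reconstructed after the fact:

```python
import random

def valve_action(pressure_kpa: float) -> str:
    # Stochastic policy: 5% of the time it "explores" with a random action.
    # Unless the RNG state is logged, a post-incident investigation cannot
    # reproduce why a particular run chose the action it did.
    if random.random() < 0.05:
        return random.choice(["open_valve", "close_valve"])
    return "close_valve" if pressure_kpa > 350.0 else "open_valve"
```

Two runs with identical sensor inputs can disagree, which is exactly the property an accident investigator cannot work with.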

In the end, it comes down to accountability and reliability.

answered Dec 10 '22 by casperOne