The decision paths within artificially intelligent systems are unfathomable. Their decisions are difficult to comprehend, and errors can lead to serious consequences and ethical questions. In this article, Orange Business Services shows why explainable AI is needed for the responsible use of artificial intelligence and why blockchain could be part of the solution.
Autonomous cars (Figure 1) train their Artificial Intelligence (AI), based on neural networks, over the course of their runs in test environments. These tests have shown that the vehicles are prone to errors - errors that were sometimes difficult to understand and trace, and that are extremely problematic because wrong decisions by self-driving cars can ultimately put human lives at risk. The AI issues surrounding driverless cars, however, also raise further ethical questions.