Who is liable for accidents involving self-driving cars?
The increasing use of artificial intelligence in everyday life raises new questions. A central one is: Who is liable when something goes wrong? If someone is hit by a car these days, the matter is usually clear: as long as the car was delivered free of defects, the owner or driver is liable for the damage. But what about self-driving cars controlled by AI? Who is liable for misdiagnoses when doctors use an AI? Or for an AI's incorrect evaluation of documents?

In theory, the answer is the same as for any product: if a product is defective, the manufacturer must compensate the injured party for the damage. However, the "victim" must prove that the damage was caused by the product. This is exactly where the problem lies: why and how an AI "decides" is often incomprehensible to the user. Experts speak of a "black box problem".

Some voices in the European Parliament therefore call for simply reversing the burden of proof across the board. In the event of damage, providers would then have to prove that their AI worked correctly in order to escape liability. That goes too far for the European Commission. In the middle of next week, however, it wants to present a proposal that reverses the burden of proof in individual cases. It also wants to oblige providers to disclose exactly how their AI works. A draft of the proposal is available to the FAZ.

The EU Parliament and the Council of Ministers still have to agree

Specifically, the injured party should be able to request training or test data sets, data from the technical documentation, and logs or information about quality management systems from the providers. If necessary, they can sue. However, the court must ensure that only strictly necessary data is disclosed in order to protect trade secrets. If the provider does not comply, the burden of proof is reversed: the provider must then prove that its AI was not "to blame".


