The Ethics of Autonomous Vehicle OS Programming During Unavoidable Accidents: Who Gets to Survive?
About this content
Scenario: An autonomous EV is traveling at 70 mph in the center lane of a freeway when a car slides off a tow truck into its path, making a collision unavoidable. There is no time to brake to a stop. The vehicle's only options are to continue straight or swerve into one of the adjacent lanes:
- Option 1 (Straight): Collide with the released car (potential severe harm to the 2 EV occupants).
- Option 2 (Left Swerve): Strike an SUV carrying a family of 4 (potential severe harm to 4 occupants).
- Option 3 (Right Swerve): Strike a motorcyclist (high likelihood of fatality for the rider).
Because there is no clear legal consensus on such choices, any deliberate action that harms a third party (Option 2 or 3) would expose the manufacturer and the programmer to potential criminal negligence or manslaughter charges. Conversely, failure to implement harm-minimization (staying with Option 1) could invite massive civil liability for not using available technology to protect the occupants.
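To make the dilemma concrete, here is a minimal, purely illustrative Python sketch of what a naive harm-minimization routine would compute. The risk numbers are assumptions invented to echo the scenario above, not real crash data, and nothing here represents how any actual AV stack is programmed. Under these assumed numbers the minimizer selects Option 3, the deliberate swerve toward the motorcyclist, which is exactly the kind of third-party harm the liability analysis turns on.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    people_exposed: int   # how many people the maneuver puts at risk
    severity: float       # assumed probability-weighted severity per person (0.0 to 1.0)

def expected_harm(m: Maneuver) -> float:
    # Naive score: exposed people multiplied by per-person severity.
    return m.people_exposed * m.severity

# Hypothetical numbers that merely echo the scenario; they are not real risk data.
options = [
    Maneuver("straight (Option 1)",     people_exposed=2, severity=0.70),  # 2 EV occupants, severe harm
    Maneuver("swerve left (Option 2)",  people_exposed=4, severity=0.70),  # family of 4 in the SUV
    Maneuver("swerve right (Option 3)", people_exposed=1, severity=0.95),  # motorcyclist, likely fatal
]

choice = min(options, key=expected_harm)
print(f"lowest expected harm: {choice.name} -> {expected_harm(choice):.2f}")
# With these assumed numbers the minimizer picks Option 3, the deliberate
# swerve toward the motorcyclist -- exactly the choice that raises the
# criminal-liability problem described above.
```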
The programmer, operating in the current U.S. regulatory vacuum concerning explicit "trolley problem" rules, is therefore left in a practically and legally impossible position: every option that can be coded in advance carries severe exposure.