Self-Driving Cars: A Strategy to Address Accountability

A few years ago, futurists were telling us that we were only a few years away from self-driving cars. Well, it’s a few years later and we’re still a few years away from self-driving cars. From what I can tell, there are two primary challenges to overcome with self-driving cars: safety and accountability.

First, artificial intelligence (AI)/robot drivers must be significantly safer than human drivers, probably a few orders of magnitude safer. We’re accustomed to people being hurt or even killed when the person behind the wheel makes a mistake. In 2021 there were almost 1,800 automobile deaths in Canada and almost 43,000 in the US. This is the level of loss from car accidents that our respective societies have come to accept. But when an accident involves a self-driving car it’s headline news, even without any fatalities. It’s a strange psychological issue: our expectations of AIs and robots are much higher than our expectations of people. We’re not yet used to the idea that people can be hurt or even killed as the result of a decision made by an AI. Perhaps we’ll get to the point where we can accept that a much smaller percentage of people will be hurt or killed, but we don’t seem to be there yet.

Second, we need to directly address accountability. Like it or not, self-driving cars have killed people and will likely do so again. This leads to what I call “the insurance issue”: when a self-driving car kills someone, who do we sue? The owner? The car manufacturer? The AI engineers who developed the models employed by the car? Society insists on holding people accountable, where appropriate, for the actions that they take behind the wheel, and it will insist on the same accountability for self-driving cars. Therein lies the rub.

One strategy to address accountability would be to simply define terms and conditions for self-driving cars: when you enter the car, you are asked to accept them. This is spectacularly naïve. First, the terms and conditions for software exist to protect the producer of the software, not the end users. Second, it’s dubious whether people read them, or understand them when they do.

To be clear, I don’t work on self-driving cars, so I can’t realistically have any impact on the safety issue. Luckily a lot of very smart people are actively working on the safety problem, and they have made great progress. I do have some ideas about how to address the accountability issue, although, also to be clear, I’m neither a lawyer nor an insurance expert. But I am a smart person who is very good at solving problems. Furthermore, when I say “self-driving car” I mean the real thing, where there is no human driver waiting to take over if things go wrong. The car, not the people, does the driving. I realize that this is not where we are today – we’re still effectively in beta/pilot testing of the technology.

The strategy that I propose is the following:

  1. Someone is the “designated driver” (DD). This means that someone assumes responsibility for the actions taken by the car. It requires a process where the person logs into the car and indicates that they accept responsibility; if this doesn’t occur, the car does not go into self-driving mode. The DD wouldn’t be required to stay in the car while it’s driving, thereby enabling taxi-like business models, but they would still be held accountable. It also enables non-drivers to take advantage of the mobility provided by automobiles.
  2. Designated drivers must work through a training and configuration process to be eligible. This is the “secret sauce” of this strategy. The idea is that the DD spends a few hours working through scenarios where their preferences are explored, similar to “trolley problem” cases. For example, if there are three people in a crosswalk and you are alone in the car, who should the car sacrifice if it comes to that? If your spouse is with you, how should the car act? When your child is with you, how should the car act? If there is a person with a baby stroller in the crosswalk, how should the car act? When the people in the crosswalk are all senior citizens, how should the car act? And so on. Each DD works through a collection of scenarios, perhaps hundreds, to configure the driving preferences for which they will be held liable (a rough sketch of how this might look in software appears after this list). Furthermore, potential DDs are put through sufficient training before the configuration process so that they understand the implications of what they’re doing. This training doesn’t need to be long, but it does need to work through the fundamental implications.
  3. The DD doesn’t need to be a licensed driver, but they do need cognitive maturity. Remember, true self-driving cars drive themselves. Yes, it would be good if the DD could drive, but the real issue is their ability to configure their preferences. That requires them to exercise their moral compass, not their technical driving skills.
  4. The car adopts the driving preferences of the DD. When a DD logs into a self-driving car, their configured preferences are used by the AI to make life-critical decisions. This effectively puts the DD virtually behind the wheel. If the car does end up harming, or even killing, someone, the DD is held responsible just as if they had been driving.
  5. There is a safety “seal of approval” for self-driving cars. A regulatory body, such as the Canadian Standards Association (CSA) in Canada or the National Highway Traffic Safety Administration (NHTSA) in the US, certifies that the vehicle has been approved to self-drive. This enables DDs to identify cars that they can trust, and it ensures that manufacturers are responsible for producing vehicles that conform to the defined safety standards of the jurisdictions in which self-driving vehicles are legal.
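
To make the mechanics a bit more concrete, here is a minimal sketch in Python of how steps 1, 2, and 4 might fit together. Everything in it – Scenario, PreferenceProfile, SelfDrivingCar, and so on – is a hypothetical name I’ve invented for illustration; this is a sketch of the idea, not an implementation of any real vehicle’s software.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass(frozen=True)
class Scenario:
    """One trolley-problem-style situation presented during configuration."""
    scenario_id: str
    description: str          # e.g. "Three people are in the crosswalk; you are alone."
    options: tuple[str, ...]  # the choices the DD may select between


@dataclass
class PreferenceProfile:
    """The DD's recorded choices -- the configuration they are held liable for."""
    dd_id: str
    choices: dict[str, str] = field(default_factory=dict)  # scenario_id -> chosen option

    def record(self, scenario: Scenario, choice: str) -> None:
        if choice not in scenario.options:
            raise ValueError(f"{choice!r} is not an option for {scenario.scenario_id}")
        self.choices[scenario.scenario_id] = choice


def run_configuration_session(dd_id: str, scenarios: list[Scenario]) -> PreferenceProfile:
    """Step 2: walk the DD through each scenario and record their choice."""
    profile = PreferenceProfile(dd_id=dd_id)
    for scenario in scenarios:
        print(scenario.description)
        for number, option in enumerate(scenario.options, start=1):
            print(f"  {number}. {option}")
        selected = int(input("Your choice: ")) - 1
        profile.record(scenario, scenario.options[selected])
    return profile


@dataclass
class SelfDrivingCar:
    """Steps 1 and 4: the car refuses to self-drive without an accountable DD."""
    active_profile: Optional[PreferenceProfile] = None

    def log_in_designated_driver(self, profile: PreferenceProfile,
                                 responsibility_accepted: bool) -> bool:
        """Enter self-driving mode only if the DD explicitly accepts responsibility."""
        if not responsibility_accepted:
            return False               # no acceptance, no self-driving mode
        self.active_profile = profile  # the DD is now virtually behind the wheel
        return True

    def resolve_dilemma(self, scenario_id: str, fallback: str) -> str:
        """Consult the DD's configured preference when a life-critical decision arises."""
        if self.active_profile is None:
            raise RuntimeError("No designated driver has logged in")
        return self.active_profile.choices.get(scenario_id, fallback)


# Example: record one preference, log the DD in, and query the preference.
crosswalk = Scenario(
    scenario_id="crosswalk-3",
    description="Three people are in the crosswalk; you are alone in the car.",
    options=("protect the pedestrians", "protect the occupant"),
)
profile = PreferenceProfile(dd_id="dd-42")
profile.record(crosswalk, "protect the pedestrians")

car = SelfDrivingCar()
if car.log_in_designated_driver(profile, responsibility_accepted=True):
    print(car.resolve_dilemma("crosswalk-3", fallback="minimize total harm"))
```

The important design choice is in resolve_dilemma: the car consults the DD’s own recorded choices, falling back to a default only when the DD was never asked about a comparable scenario – which is exactly the gap that working through hundreds of configuration scenarios is meant to shrink.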

Just as it’s your choice to accept accountability when you physically get behind the wheel, this strategy enables you to become accountable when you virtually get behind the wheel. Right now it seems to me that car makers are trying to create a universal set of rules for all cars to follow, or at least all cars of a given brand. This addresses the safety issue but not the accountability issue. To address accountability we must develop a universal set of trade-offs – the trolley scenarios – that the car’s “driver” chooses and is held responsible for. Let the humans decide up front. In short, put the moral decisions back into the hands of the people, where they belong.

You may find some of my other blog postings about artificial intelligence to be of interest. Enjoy!

2 Comments

  • Curtis Hibbs
    Posted November 20, 2023 12:10 am

    I like your strategy.

    I think this could easily be abstracted into a general strategy for AGI agents. This example then serves as a concrete expression for the self-driving use case.

    • Scott Ambler
      Posted November 21, 2023 8:39 am

      Thanks. I was thinking along the same lines, albeit only for life-critical applications. However, I think you’re right that it’s a generalizable concept. For example, if I were to use an AI to autorespond to email I would certainly want to configure it with my preferences. That could be one configuration session up front, a series of mini-sessions (i.e. “What should I do with emails like this?”), or a combination of the two (most likely).
