Teaching Self-Driving Cars to Break the Law

The promise of self-driving cars is that they will be smarter, and therefore safer, than human drivers. They will see what we miss and react faster than even those of us with the best reflexes. The hard part is teaching them when they should break the law in the interest of public safety. It sounds counterintuitive, but human drivers break the law for good reasons every day. We regularly veer across the double yellow line to avoid people, pets, and car doors flung open into the path of traffic. Technically, crossing that line is illegal, but it is sometimes the right thing to do, and self-driving cars need to learn this, too.
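To make that idea concrete, here is a minimal sketch, in Python, of how "lawful by default, override for safety" lane logic might look. Every name and condition in it is a hypothetical illustration for this article, not code from any real autonomous-driving system:

```python
# A toy sketch of "lawful by default, override for safety" lane logic.
# All names and conditions here are hypothetical illustrations, not taken
# from any real autonomous-driving stack.

from dataclasses import dataclass

@dataclass
class LaneState:
    obstacle_in_lane: bool        # e.g., a pedestrian, pet, or open car door
    oncoming_traffic_clear: bool  # no vehicle approaching in the opposing lane
    can_brake_in_time: bool       # stopping safely is still an option

def plan_maneuver(state: LaneState) -> str:
    """Return the planned maneuver, preferring legal options.

    The double yellow line is treated as a hard rule that may be broken
    only when staying in the lane would cause harm and the opposing lane
    is verifiably clear.
    """
    if not state.obstacle_in_lane:
        return "stay_in_lane"          # default: obey the law
    if state.can_brake_in_time:
        return "brake"                 # legal way to avoid the obstacle
    if state.oncoming_traffic_clear:
        return "cross_double_yellow"   # illegal, but the safest choice left
    return "emergency_stop"            # last resort: stop as hard as possible

# Example: a car door swings open ahead, there is no room to brake,
# and the opposing lane is clear.
print(plan_maneuver(LaneState(True, True, False)))  # -> cross_double_yellow
```

Even in a sketch this small, the ordering of the rules encodes a value judgment: braking is preferred to crossing the line, and crossing the line is preferred to a collision.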
According to a report from Forbes, this is the tricky part of autonomous-car technology: not making the technology work, but the ethics. Cars can be programmed to recognize situations and react the way a human might, but this kind of judgment hasn't been widely tested, despite its importance to the process. Ethics-based programming might even become a required feature, and it could go further than deciding what to do in a potential accident. What if a car could recognize that a person, even the driver, is sick? The car could call for help on its own, going beyond driving to actually aiding people. Before self-driving cars become the norm, programmers have to work out both the mechanical and the ethical challenges of putting these vehicles in control.
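The driver-health scenario can be sketched the same way. The sensor inputs and thresholds below are invented for illustration; no production system is being described:

```python
# A hypothetical sketch of a driver-wellness check that escalates to a
# call for help. Inputs and thresholds are invented for illustration.

def respond_to_driver_state(heart_rate: int, is_responsive: bool) -> str:
    """Escalate from a gentle prompt to an emergency call as signs worsen."""
    if is_responsive and 50 <= heart_rate <= 120:
        return "continue_driving"
    if is_responsive:
        # Vital signs look off, but the driver can still answer a prompt.
        return "prompt_driver_and_suggest_pullover"
    # Unresponsive driver: pull over safely and summon help.
    return "pull_over_and_call_emergency_services"

# Example: very low heart rate and no response to prompts.
print(respond_to_driver_state(heart_rate=40, is_responsive=False))
```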