Artificial intelligence and driverless cars: Does it lead to an ethical trade-off?

2019/08/11 Innoverview

Opinion: Who lives? Who dies? Who decides?

The ethics surrounding driverless cars and autonomous decision making are hotly debated subjects.

In a situation of certain fatality, should the driverless car hit A and B to save C, or swerve to hit C to save A and B? The moral conundrum is hopelessly difficult to resolve. The scenario echoes the renowned trolley problem – should you kill one person to save five? Perhaps there will never be a universally accepted solution to this dilemma, yet in the case of driverless-car artificial intelligence, this is the kind of ethical question that concerns people most.

There is no global consensus on the law governing driverless cars. So far, Germany is the only country that has legislated on the issue and has attempted to set out guidelines for its manufacturers. German law requires that a driverless car be manned by a human, with a black box recording when the vehicle is in the human’s control and when it is under the control of the vehicle’s self-piloting technology. It is thought that this will go a long way towards resolving liability should an accident happen: if the accident was the fault of the human, they may be personally liable; if it was the fault of the artificial intelligence, liability may rest with the manufacturer.

This appears to be a step towards a sensible solution to the liability problem; but what if the human element was taken out of driving completely? In this scenario, the lives of the passengers and other road users would be in the hands of the vehicle’s self-piloting technology.

This in itself has thrown up some troubling reports. Driverless technology is trained to recognise the human shape and form: it is shown images of real people so that it can distinguish a person from an inanimate object. However, a recent study has suggested that driverless technology is less capable of recognising non-Caucasian people, thus putting them at greater risk of being harmed or killed by a driverless car. How is this possible?

Studies suggest that the technology is fed millions of images in order to learn to recognise humans; however, the majority of these images are reportedly of Caucasian people. The technology is therefore conditioned to recognise a white person as a pedestrian, with the consequence that a darker-skinned person is less likely to be identified as a pedestrian – which could be catastrophic, even fatal.
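The mechanism described above can be illustrated with a deliberately simplified sketch. Everything here is a hypothetical assumption for illustration – a one-dimensional "appearance feature", made-up group means, and a toy nearest-centroid detector – not real perception-system code or real image statistics. The point is only that when one group dominates the training set, the learned "pedestrian" prototype drifts towards that group, and detection rates for the under-represented group suffer.

```python
import random

random.seed(0)

# Hypothetical 1-D "appearance feature" per class; the means are
# arbitrary illustrative values, not real image statistics.
def sample(mean, n):
    return [random.gauss(mean, 0.5) for _ in range(n)]

# Imbalanced training set: group A pedestrians vastly outnumber group B.
group_a_train = sample(2.0, 900)   # well-represented pedestrian group
group_b_train = sample(4.0, 100)   # under-represented pedestrian group
background    = sample(6.0, 1000)  # inanimate objects

# Toy nearest-centroid detector: the pedestrian centroid is dominated
# by group A, so it sits far from typical group B examples.
ped_centroid = sum(group_a_train + group_b_train) / 1000
bg_centroid  = sum(background) / 1000

def is_pedestrian(x):
    return abs(x - ped_centroid) < abs(x - bg_centroid)

def recall(examples):
    return sum(is_pedestrian(x) for x in examples) / len(examples)

recall_a = recall(sample(2.0, 1000))  # held-out group A pedestrians
recall_b = recall(sample(4.0, 1000))  # held-out group B pedestrians
print(f"group A recall: {recall_a:.2f}, group B recall: {recall_b:.2f}")
```

On this toy setup, nearly every group A pedestrian is detected while a substantial fraction of group B pedestrians are classified as background – not because the detector "intends" to discriminate, but because the imbalance in its training data shapes what it learns a pedestrian looks like.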

Whilst there are many safety sceptics, there are those who suggest that driverless cars could be safer than those that require human handling. The idea is that a driverless car programmed to follow the applicable road laws, and trained to spot and avert dangerous situations, will reduce harmful consequences considerably. Whilst there will undoubtedly be errors and a human cost, some argue that driverless cars will nonetheless create a safer environment than the current one.

We are a few years away from driverless cars being commonplace on the roads; that is, if the manufacturers can guarantee that their technology is dependable. No doubt accidents will occur, allowing scope for massive and endless litigation, the cost of which insurers will ultimately seek to claw back from drivers through insurance premiums.

With no concrete guidelines in the UK, how can manufacturers avoid liability should a ‘trolley problem’ type situation occur? The human or artificial driver who avoids A and B and hits C is susceptible to a claim by C or their estate if it can be shown that the driver was negligent or reckless, or that the technology was defective or biased in some way. The court may make allowances for human fallibility in assessing that question, and will also take into account the contributory negligence of a pedestrian involved in the accident when determining the level of compensation. But what fallibility will be allowed to the manufacturers of driverless cars?

In the absence of any legal precedent, the courts are likely initially to take a very strict approach and reject arguments that flexibility should be given to developing technology; liability will rest firmly on the manufacturers’ shoulders in the event of an accident caused by driverless technology.

Universally accepted guidelines should be in place to set a benchmark for driverless car manufacturers. A fragmented approach that varies from manufacturer to manufacturer, or indeed from country to country, should be avoided at all costs. Following in Germany’s footsteps, there should be unanimous agreement that driverless cars must avoid death and/or injury to humans no matter what.

The principle that all people should be treated as equal, and that there should be no discrimination in respect of race, age or physicality, should be paramount. Legislators have a duty to ensure that manufacturers are not left to create a moral algorithm that departs from these key principles.

(Copyright: IoT News: https://www.iottechnews.com/news/2019/aug/09/artificial-intelligence-and-driverless-cars-does-it-lead-ethical-trade-/)