On 20 March 2018 a pedestrian, Elaine Herzberg, died after being hit by a driverless Uber car in the US. At the time of the incident the autonomous vehicle failed to identify Ms Herzberg as a pedestrian and therefore took no action to avoid hitting her. The human ‘safety’ driver was believed to have been watching television at the time, and Ms Herzberg was crossing an unlit road without paying attention.
The incident has understandably focused the minds of all stakeholders on the question of who is liable in such circumstances. So far, the consensus seems to be that the manufacturer should be responsible (although this was not established in the Uber case, which settled quickly and out of the public eye).
Liability - what are the possible options?
Autonomous vehicles were always going to pose complex liability considerations when involved in road traffic accidents.
If a human driver fails to exercise reasonable care in avoiding an accident, they would be liable under the principles of negligence. If the automated features of the vehicle failed to note the presence of a victim, the manufacturer of the vehicle could also be held responsible under product liability principles. If a victim fails to take care, they may also be found to have contributed to the accident or to the damage arising from it.
The Uber case highlights that the errors the vehicle made were similar to those a human driver might make, in that the vehicle failed to note the presence of an obstacle. What is different, however, is that the failure in this instance came from a machine and not a human. This raises the question of whether a machine can ever be negligent. For now, the answer is a resounding no. That brings us back to product liability, together with the principles of strict liability.
Strict liability has traditionally been prominent in product liability cases, including those involving vehicles. However, the unique merging of human and vehicle raises new questions about its applicability, because multiple parties may share liability (owner, “driver”, manufacturer, insurer, software developer). How liability is apportioned will also depend on the level of autonomy a vehicle provides, which can differ greatly.
Automated and Electric Vehicles Bill
The Bill provided an ideal platform to drill down into those liability considerations and provide a safety net to capture at least some foreseeable events – like the Uber incident.
Instead, having received Royal Assent on 19 July 2018, the new law takes the arguably safer route of widening an insurer’s existing liability for an accident caused by an automated vehicle (provided the vehicle is insured).
The key provisions include:
- Amendments to the Road Traffic Act 1988 so that compulsory third party insurance is extended to damage caused by an autonomous vehicle when driving itself. The right of recovery from parties who have contributed to the accident is preserved.
- Listing of automated vehicles by the Secretary of State. How the list will look and operate is yet to be set out, but the list (presumably maintained by the DVLA) will identify the vehicles to which the new insurance regime applies.
- Liability exclusions where an accident results from unauthorised alterations to, or a failure to update, the vehicle’s software by the insured. The onus of responsibility for installing safety-critical software therefore rests with the human user and not the manufacturer.
- A strict standard of care placed on the human driver not to allow the vehicle to begin driving itself when it is “inappropriate” to do so.
Comment - where next?
For the time being, the UK government appears determined to keep the existing legal framework and legislate retrospectively. Whilst we appreciate the dilemma of how to legislate when the technology is not yet fully developed, this seems something of a missed opportunity. It is of course impossible to legislate for every scenario. However, the determination to put the UK at the forefront of the technology should, to our minds, go hand in hand with the UK being the driving force behind shaping the international standards that will be required globally, so that vehicle cover is harmonised as far as possible.
Meanwhile, the new law confirms there is no current desire to create a legal duty on manufacturers to report new models or to ensure that safety-critical software is kept up to date. Instead, the new measures rely on insurers’ strong right of recovery. In most cases, liability will fall on insurers in the first instance, so that victims of accidents caused by autonomous vehicles will be able to receive compensation easily and quickly, albeit with the insurer able to pursue recovery from a product liability or professional indemnity insurer.
Looking forward, those responsible for further legislation (including the Law Commission) must strive to align the legislative process with the practical reality of liability and coverage. They will have the opportunity to provide further clarity on the level of automation which is intended to fall within the scope of the legislation. They will also have the chance to define a measure of driver behaviour against a reasonable standard and remove the risk of any confusion around the ‘residual’ role of a human driver.
Doing so should then assist in meeting the objective that whoever, or whatever, is driving will owe exactly the same duties in relation to compliance with road traffic laws as the driver of a conventional vehicle. Whilst that may not prevent further incidents like the Uber one above, it should help achieve clarity and certainty for both insurers and consumers around liability in the event of motor accidents involving partially, highly and fully autonomous vehicles.