Responsible AI - let’s make driving boring again

The RAILS project - Responsible AI for Long-term Trustworthy Autonomous Systems - is focused on questions of responsibility and how these are understood, mapped, and experienced within the domain of autonomous vehicles. This focus is particularly crucial where autonomous systems have lifelong-learning capacities and can therefore change significantly during their post-deployment lifespan.

Autonomous systems are not designed to be deployed in conditions of perfect stasis, as they are unlikely to encounter such conditions in real-world environments. They are frequently designed for changing environments, like public roads, and may also be designed to change themselves over time, for instance through learning capabilities. Moreover, these changes in deployed systems and in their operating conditions are likely to take place against a shifting contextual background of broader societal change (e.g. other technologies, ‘black swan’ events, or simply the day-to-day operation of communities). The effects of such change - on the systems themselves, on the environments within which they operate, and on the humans with whom they engage - must be considered as part of a responsible innovation approach. RAILS is therefore examining social and legal contexts, as well as technical requirements, to assess whether and how these systems can be designed, developed, and operated in ways that are responsible, accountable, and trustworthy.

During its first few months of operation, the RAILS project identified a need to reach out to its industry partners and other collaborators, including policymakers and think tanks, to collectively unpick some of the preconceptions and implications of ‘responsibility’ that had emerged during a series of qualitative interviews with domain experts. The resulting workshop, with participants including Imperium Drive, AWS, Oxbotica, and Addleshaw Goddard, was held at Worcester College, Oxford, in June 2022. Several themes were evident during the discussions, but some key points bear directly on these questions of ‘responsibility’.

The myth of machine infallibility

Publics have become acculturated to the idea that humans make mistakes but machines do not - seen perhaps most clearly in the UK’s recent Post Office scandal, where hundreds of sub-postmasters were prosecuted on the basis that the accounting software could not possibly be incorrect. This perception is not generally shared by designers, who understand and expect that machines and technology can and do fail. However, this expectation of infallibility in the public consciousness may create a significant gap in understandings of responsibility if designers are working towards something approximating a ‘minimum viable’ model while societal expectations are of a ‘perfect’ system. This is especially true of self-driving cars, which are frequently promoted as safer than human-driven ones. If such a perception of infallibility takes hold, then accidents - particularly where there are gaps in the social infrastructure around insurance, accident investigation, regulation and so forth - are likely to seriously undermine the deployment of self-driving cars and long-term societal trust in the technology.

Trade-offs between cost and safety

Linked to the above is the relationship between safety - the reduction of risk - and cost. Commercial vehicles have to strike a balance between ‘acceptable’ levels of safety, according to calculated risks, and how expensive they would need to be to achieve higher safety levels. An ‘infallibility myth’ of the kind discussed above, under which a self-driving vehicle must be safer than an equivalent human-driven vehicle, could push the cost at the point of sale so high that self-driving cars would in essence be unaffordable for the majority of the population. This would materially damage one of the purported benefits of self-driving vehicles - that they will increase accessibility for the elderly, disabled, and people who cannot drive themselves - if their target population cannot afford them.

Systemic trust

Understandings of fallibility are not necessarily fatal to public trust in a product, however. Where there is a perception of a trustworthy system of development, there can still be high levels of acceptance despite an acknowledgement that failures will occur. This might include elements such as: corporate transparency and willingness to admit errors; regulatory frameworks that drive cross-industry safety standards; organisational ‘no-blame’ cultures that permit admissions of failure; accurate understandings of calculated risk; a lack of hype; and other factors. These factors can, by and large, be seen in the aviation industry, which enjoys a high level of public trust and remains (by some measures) a safer way to travel than driving. However, for this to develop in the autonomous car sector would require a sharp reduction in the ‘hype’ around self-driving cars and a corresponding reduction in anticipatory excitement around their deployment. Self-driving cars need to be perceived as products of a rigorous, robust, responsible system of development, with ongoing work on safety - they need, above all, to be boring.

Next steps

The RAILS project team is continuing to analyse the themes of the workshop, which will contribute to the development of the technical side of the project. This work focuses on the corner cases that can create challenges for a system’s designed-in learning, and on creating a cloud-based model system for evaluating the causal responsibility and accountability of autonomous systems. This model system will persist after the end of the project and remain available for use within the wider TAS network and RAS community. Another technical work package aims to develop an assurance case for the safe operation of lifelong-learning and self-adaptive autonomous systems in post-deployment settings. Finally, RAILS will draw these elements together to design and evaluate an adaptive governance framework for the continuous development and long-term deployment of autonomous systems.


This article was previously published on the Trustworthy Autonomous Systems Hub website.