The new legislative framework aims to support the government's ambition of capturing a 6% share of the automated vehicle (AV) market by 2035, which the Department for Transport (DfT) has estimated could be worth upwards of £41.7 billion.
Background
On 19 August 2022, the government published a policy paper entitled 'Responsible Innovation in Automated Vehicles'. The paper put forward recommendations to support the DfT's roadmap for developing a legislative framework for AVs. This updated regulatory framework will need to take account of the ramifications that AV technology will have for data privacy, data sharing and the fairness principle (i.e., how processing may affect the individuals concerned).
Data Privacy
AVs collect and process vast quantities of data from their surrounding environment, which has significant privacy implications for the transfer of personal data. Under UK data protection law, controllers are required to have a lawful basis for processing personal data; in the context of goods and services, this is normally an individual's consent.
Although the user of an AV can consent to their personal data being processed, AVs often need to process the personal data of other road users without their explicit consent. For example, some AV companies have explored using the biometric data of other road users, including facial recognition and 'gaze detection' technologies that are used to predict those road users' intentions.
Without prior consent, the use of these technologies would only be lawful if it satisfied the 'legitimate interests' basis under the UK GDPR (the retained EU law version of the GDPR). Given the ambiguity over this question, the policy paper recommends that AV regulators clarify whether the processing of personal data of individuals outside the AV is lawful under Article 6 of the UK GDPR.
Fairness Principle
All data-driven technologies have the potential for various forms of algorithmic bias. Facial recognition technologies, for instance, are already well known for their issues relating to protected characteristics such as race.
Whilst the datasets required for an AV's core functionality (for instance, detecting types of road users and their movement) mean that algorithmic bias is less of an issue in this industry, researchers have already identified potential data gaps caused by a concentration of testing and data collection in certain areas. Philip Koopman has described one such case, in which an AV system failed to identify people in high-visibility clothing because the training datasets had not included construction zones.
Moreover, less frequently encountered people and objects, such as wheelchair users and wheelchairs, are likely to be underrepresented in training datasets.
The paper recommends that AV companies minimise algorithmic bias by reporting on the fit between their training data and their operational design domains (ODDs), i.e. the defined conditions under which a given AV system is designed to operate. Any substantial change to an ODD would require a new report on the training data.
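To illustrate what reporting the fit between training data and an ODD might involve, the following is a purely hypothetical sketch in Python, not drawn from the policy paper. It assumes a simple labelled training set and a list of categories the ODD is expected to contain, and flags any category falling below an illustrative representation threshold, such as the construction workers and wheelchair users mentioned above.

```python
from collections import Counter

# Hypothetical example: audit how well training-data labels cover the
# categories an AV is expected to encounter within its operational
# design domain (ODD). All names and thresholds are illustrative.

ODD_CATEGORIES = [
    "pedestrian",
    "cyclist",
    "wheelchair_user",
    "construction_worker",
]

def coverage_report(training_labels, min_share=0.01):
    """Return each ODD category's share of the training labels and
    whether it meets the (illustrative) minimum representation."""
    counts = Counter(training_labels)
    total = sum(counts.values()) or 1  # guard against an empty dataset
    report = {}
    for category in ODD_CATEGORIES:
        share = counts.get(category, 0) / total
        status = "OK" if share >= min_share else "UNDERREPRESENTED"
        report[category] = (share, status)
    return report

# A toy dataset skewed towards common road users.
labels = ["pedestrian"] * 900 + ["cyclist"] * 95 + ["wheelchair_user"] * 5

for category, (share, status) in coverage_report(labels).items():
    print(f"{category:20s} {share:6.2%}  {status}")
```

In practice an AV company's ODD and training-data audits would be far richer, covering weather, lighting, road types and so on, but the underlying check, namely whether each foreseeable ODD category is adequately represented in the training data, is the same.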
Data Sharing
The safety-related issues associated with new AI technology will require AV companies to take part in data-sharing programmes, particularly where the data is safety-critical.
However, it is important to note that the policy paper recommends retaining some degree of market competition on safety. This contrasts with other industries, such as aviation and medicine, where a 'no-blame' safety culture is standard. That culture has recently come under fire following the Boeing 737 Max crashes, which suggested that it can breed complacency.
The policy paper hopes that such complacency can be avoided through a 'blended' approach, which draws on aspects of a 'no-blame' culture whilst simultaneously incentivising safety innovation through market competition.
The government policy paper can be found here.
Please contact Jose Saras and Xavier Prida if you have any questions about the data protection implications around AI technologies.
The material in this article is only for general review of the topics covered and does not constitute legal advice. No legal or business decision should be based on its content.