As artificial intelligence (“AI”) continues to evolve, the European Union Agency for Cybersecurity (“ENISA”) has recently published an overview report on AI cybersecurity and standards.
ENISA states that: “The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation.”
Therefore, the report: (i) seems to be a good indication of the direction AI cybersecurity requirements and standards in the EU are taking; and (ii) could assist software developers and service providers in planning in advance how they develop and shape their AI products to ensure they are in line with future requirements.
Key recommendations to ensure standardisation
The report contains a section setting out a list of recommended actions for various target audiences to ensure standardisation for the cybersecurity of AI:
- Use standardised AI terminology for cybersecurity.
- Develop technical guidance on how existing standards relating to software cybersecurity should be applied to AI.
- Establish coordination between cybersecurity and AI technical committees for standards so that AI cybersecurity concerns can be addressed and a coherent standards outcome achieved.
- Encourage R&D in areas where standardisation is limited by technological development.
- Ensure coherence between the draft EU AI Act and other laws on cybersecurity (such as the Cyber Resilience Act).
The report is crucially set against the backdrop of the draft EU AI Act, which is very close to becoming law, as ENISA aims to influence standards-developing organisations (“SDOs”) and public sector/government bodies regulating AI technologies on the role of standards in helping to address cybersecurity issues in AI. ENISA stresses the importance of standards in the implementation of the upcoming EU AI Act and recommends a set of actions to ensure standardisation is achieved.
What is AI and cybersecurity of AI?
First and foremost, ENISA states that the definition and scope of AI need to be understood, and emphasises that it is key for SDOs to be aligned in their understanding of what AI is.
The report borrows the draft EU AI Act’s definition of AI, acknowledging that it is inherently broad because AI technology is ever evolving; ENISA therefore focuses its analysis of standards on machine learning (“ML”). ENISA asserts that the ML aspects of AI are what make it prone to vulnerabilities that affect the cybersecurity of AI.
Cybersecurity of AI has two dimensions: the traditional one, which is intended to protect against attacks on the confidentiality, integrity and availability of assets (“CIA”), and a broader one, which focuses on trustworthiness features such as data quality, oversight, robustness, accuracy, explainability, transparency and traceability. Such trustworthiness features are necessary to ensure the proper functioning of cybersecurity systems.
The focus of the report is on standards that can be harmonised. Many SDOs have already set up technical committees in order to coordinate standardisation in this area and address any gaps.
The report highlights several problem areas in the existing technical standards landscape of cybersecurity and AI. The report finds that at present, there is very little consistency among the SDOs.
Considering that AI is, fundamentally, a type of software, certain software security measures can be transposed into the AI domain (general-purpose standards). However, this transposition is only partial, given that AI also includes technical and organisational elements beyond software. Guidance from SDOs on the application of general-purpose standards to AI is needed.
Businesses that make use of AI solutions and/or are engaged in cybersecurity should stay informed, as the report concludes that some standardisation gaps might become apparent only as AI technologies advance and with further study of how standardisation can support cybersecurity. Similarly, some aspects of cybersecurity are still the subject of R&D analysis, and therefore might not yet be mature enough to be exhaustively standardised.
Companies should ensure system-specific analysis of their AI to ensure appropriate security measures are in place and be prepared to adapt their use of AI to meet changing cybersecurity standards.
Ahead of the EU AI Act, ENISA and other standards bodies will continue to influence and support its finalisation and, subsequently, its implementation. ENISA is gathering relevant information from stakeholders on AI risk management, cybersecurity requirements, and data security.
Please contact Jose Saras if you would like to find out more.
Please see the full ENISA report here.
The material in this article is only for general review of the topics covered and does not constitute legal advice. No legal or business decision should be based on its content.
This article is written in the English language. Preiskel & Co LLP is not responsible for any translation of all or part of its content into any language.
On 11 May 2023, a key committee of lawmakers in the European Parliament voted in favour of amendments to the draft AI legislation. See further at https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
 The scope of ENISA’s analysis is limited to the standards of the International Organization for Standardization (“ISO”) and International Electrotechnical Commission (“IEC”), the European Committee for Standardization (“CEN”) and European Committee for Electrotechnical Standardization (“CENELEC”), and the European Telecommunications Standards Institute (“ETSI”).