ETSI to work on making artificial intelligence secure

October 03, 2019 // By Julien Happich
Standards organization ETSI announced it has created a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI). The group will develop technical specifications to mitigate threats arising from the deployment of AI throughout multiple ICT-related industries.

Securing AI Problem Statement

This specification will be modelled on the ETSI GS NFV-SEC 001 “Security Problem Statement” which has been highly influential in guiding the scope of ETSI NFV and enabling “security by design” for NFV infrastructures. It will define and prioritise potential AI threats along with recommended actions. The recommendations contained in this specification will be used to define the scope and timescales for the follow-up work.

Data Supply Chain Report

Data is a critical component in the development of AI systems: both raw data and the information and feedback supplied by other AI systems and by humans in the loop. However, access to suitable data is often limited, forcing developers to resort to less suitable sources. Compromising the integrity of training data has been demonstrated to be a viable attack vector against AI systems. This report will summarise the methods currently used to source data for training AI, review existing initiatives for developing data-sharing protocols, and analyse the requirements for standards that ensure the integrity and confidentiality of shared data, information and feedback.

The founding members of the new ETSI group include BT, Cadzow Communications, Huawei Technologies, NCSC and Telefónica. The first meeting of ISG SAI will be held in Sophia Antipolis on 23 October.

ETSI – www.etsi.org
