2. Security of the Machine Learning System

The following questionnaire is composed of 16 questions meant to evaluate how secure your machine learning system is. These questions are organized into two sections. The first section aims to evaluate whether you have assessed the threats against your system and identified its potential vulnerabilities. The second section aims to assess whether you have processes and techniques in place that can mitigate such attacks.

Target: Data Scientist (or Data Engineer, Security expert)

Security assessment:

1. Have you assessed and identified the security threats to your machine learning system (e.g., by performing threat analysis or threat modeling of your ML system)?

2. Have you performed security testing on your machine learning system (to discover potential vulnerabilities)?

3. Have you assessed the vulnerability of your machine learning system to the following machine learning-specific attacks? (Select all that apply)

4. Is your machine learning system compliant with specific security standards (e.g., ISO/IEC 27000-series)?

5. Is your machine learning system certified for cybersecurity (e.g., using the certification scheme created by the Cybersecurity Act in Europe)?

Mitigation methods (specific to attacks against machine learning):

6. Have you deployed defense(s) to detect and/or prevent attacks of the following types? (Select all that apply)
7. Have you deployed any defense(s) specifically designed to protect your machine learning system from adversarial ML attacks? (Select all that apply)

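By way of illustration only, the sketch below shows one such defense, adversarial training with FGSM-perturbed examples. It assumes a PyTorch classifier named model, a data loader yielding (inputs, labels) batches with inputs scaled to [0, 1], and an optimizer; these names are hypothetical and not prescribed by this questionnaire.

    # Illustrative sketch only: one epoch of FGSM-based adversarial training.
    # Assumes a PyTorch classifier `model`, a DataLoader `loader` of (inputs, labels)
    # batches with inputs scaled to [0, 1], and an `optimizer` (all hypothetical).
    import torch
    import torch.nn.functional as F

    def adversarial_epoch(model, loader, optimizer, epsilon=0.03):
        model.train()
        for inputs, labels in loader:
            inputs.requires_grad_(True)
            loss = F.cross_entropy(model(inputs), labels)
            grad, = torch.autograd.grad(loss, inputs)
            adv = (inputs + epsilon * grad.sign()).clamp(0, 1).detach()  # FGSM step
            optimizer.zero_grad()
            F.cross_entropy(model(adv), labels).backward()  # train on adversarial batch
            optimizer.step()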

Mitigation methods (good ML practices with side attack resilience):

8. Do you ensure that the data used to train your machine learning system is: (select all that apply)
9. Do you have a sanitization process to detect and remove “abnormal” data, “outliers” or data points with inconsistent labels from your training dataset?
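
For illustration, a minimal sketch of one possible sanitization step of this kind, assuming tabular training features in a NumPy array X with labels y (hypothetical names) and using scikit-learn's isolation forest to flag outlying rows; the contamination rate is an arbitrary assumption, not a recommendation.

    # Illustrative sketch only: drop rows flagged as outliers before training.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def sanitize(X: np.ndarray, y: np.ndarray, contamination: float = 0.01):
        detector = IsolationForest(contamination=contamination, random_state=0)
        flags = detector.fit_predict(X)  # -1 = outlier, 1 = inlier
        keep = flags == 1
        return X[keep], y[keep]
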
10. Is access to your training data controlled and restricted to the minimum required to handle it?
11. Do you monitor and document relevant performance metrics (e.g., precision, recall, false positives) of your machine learning system over time?
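
As a simple illustration of such monitoring, the sketch below appends time-stamped precision and recall to a CSV log whenever a labelled evaluation batch is available; the file name and the availability of binary ground-truth labels are assumptions.

    # Illustrative sketch only: log time-stamped precision/recall for a labelled batch.
    # Assumes binary 0/1 labels and predictions.
    import csv
    from datetime import datetime, timezone
    from sklearn.metrics import precision_score, recall_score

    def log_metrics(y_true, y_pred, path="ml_metrics_log.csv"):
        row = [
            datetime.now(timezone.utc).isoformat(),
            precision_score(y_true, y_pred),
            recall_score(y_true, y_pred),
        ]
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow(row)
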
12. Do you monitor if the data input to your machine learning system at inference comes from the same distribution as the data used to train it?
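
One way such a check is sometimes approximated, sketched here under the assumption of purely numeric features, is a per-feature two-sample Kolmogorov-Smirnov test comparing a reference sample of the training data with a window of recent inference inputs; the significance level is an arbitrary illustration.

    # Illustrative sketch only: flag features whose recent inputs drift from training data.
    import numpy as np
    from scipy.stats import ks_2samp

    def drifted_features(train: np.ndarray, recent: np.ndarray, alpha: float = 0.01):
        drifted = []
        for j in range(train.shape[1]):
            _, p_value = ks_2samp(train[:, j], recent[:, j])
            if p_value < alpha:  # reject "same distribution" for this feature
                drifted.append(j)
        return drifted
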
13. Do you filter the data input to your machine learning system at inference in any way (e.g., to filter out “abnormal” inputs)?
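
A very basic form of such filtering, sketched for numeric features only, is to reject inputs that fall outside the per-feature range seen during training; the margin is an assumption, and real deployments typically combine this with richer validation.

    # Illustrative sketch only: reject inference inputs outside the training range.
    import numpy as np

    class RangeFilter:
        def __init__(self, X_train: np.ndarray, margin: float = 0.05):
            span = X_train.max(axis=0) - X_train.min(axis=0)
            self.low = X_train.min(axis=0) - margin * span
            self.high = X_train.max(axis=0) + margin * span

        def is_valid(self, x: np.ndarray) -> bool:
            return bool(np.all(x >= self.low) and np.all(x <= self.high))
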
14. Do you have any means to limit the queries to your model during inference?
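
Such limits are usually enforced at the serving layer; below is a minimal sketch of a fixed-window, per-client query limiter. The client identifier and the limits are hypothetical and would depend on your serving infrastructure.

    # Illustrative sketch only: fixed-window query limiter keyed by a client identifier.
    import time
    from collections import defaultdict

    class QueryLimiter:
        def __init__(self, max_queries: int = 100, window_seconds: int = 60):
            self.max_queries = max_queries
            self.window = window_seconds
            self.state = defaultdict(lambda: [0.0, 0])  # client -> [window_start, count]

        def allow(self, client_id: str) -> bool:
            now = time.monotonic()
            start, count = self.state[client_id]
            if now - start >= self.window:
                self.state[client_id] = [now, 1]  # start a new window
                return True
            if count < self.max_queries:
                self.state[client_id][1] = count + 1
                return True
            return False
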
15. Did you define fallback plans to address errors from your ML system during inference, and do you have governance procedures in place to trigger them (e.g., means to detect errors from your ML system and to cope with them)? (Select all that apply)
16. Do you monitor if your machine learning system has changed in a way that requires a review of its robustness and vulnerabilities?