Evaluating Transparency in Artificial Intelligence Systems: Adapting the Z-Inspection® Framework for the MANOLO Project
Karanxha, Giulia (2025)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2025060621342
Abstract
Transparency is a key requirement for Trustworthy AI, mandated by the EU AI Act to ensure accountability, trust, and compliance. Existing evaluation methods are often conceptual or domain-specific, or rely on qualitative checklists, which limits their practical use across different environments.
This thesis addresses this gap through a Systematic Literature Review (SLR) of 28 studies, identifying how transparency is defined and operationalised. The findings stress a lack of structured, measurable approaches that align with regulatory expectations while remaining practical for development workflows.
To address the lack of generalisable, benchmarkable methods for evaluating transparency, this thesis proposes an artefact-based adaptation of the Z-Inspection® Framework for the MANOLO Project. The adaptation introduces five interlinked components: (1) a Stakeholder Transparency Requirements Catalogue, (2) a Transparency Artefact Registry (TAR), (3) a YAML-based metadata schema, (4) a Transparency Scorecard, and (5) a Transparency Backlog. These components support artefact-level traceability, documentation quality, and stakeholder alignment across the AI lifecycle.
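To illustrate how such components could fit together, a registry entry under the YAML-based metadata schema might take a form along the following lines. All field names and values here are illustrative assumptions for the sake of example, not the schema actually defined in the thesis:

```yaml
# Hypothetical entry in the Transparency Artefact Registry (TAR).
# Field names are illustrative only; the real schema is specified in the thesis.
artefact:
  id: TAR-0001
  name: Model card for a MANOLO component
  type: documentation            # e.g. documentation | dataset | model | decision-log
  lifecycle_stage: development   # design | development | deployment | monitoring
  stakeholders:                  # audiences drawn from the Stakeholder
    - regulators                 # Transparency Requirements Catalogue
    - end-users
  requirements_covered:
    - STR-12                     # reference into the requirements catalogue
  scorecard:
    completeness: 4              # e.g. a 1-5 documentation-quality rating
    last_reviewed: 2025-05-01
  backlog_items:
    - Add an intended-use section for end-users
```

A machine-readable record of this kind is what would make artefact-level traceability and scorecard aggregation possible across the AI lifecycle.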
The adapted framework contributes to both research and practice by offering a replicable method to operationalise transparency evaluation. While empirical validation is pending, this work provides a practical foundation for aligning AI development practices with transparency obligations under the EU AI Act.