Summary#

Transparency#

Transparency

The property that no part of a decision, or of the decision-making process, is hidden.

In this subject, transparency refers to the interpretability of processes and decision-making systems involving algorithms. In the context of AI systems, transparency requires that processes which impact individuals be made public (though this does not mean publishing the code).

This is different from explainability, which requires that the decisions made by algorithms are interpretable and understandable by humans.

Requirements of Transparency#

Automated decision-making systems should provide:

  • Clarity in the procurement of data, funding, resources, etc.

  • Clarity of implementation

    • a publicised technical implementation - at least a summary of the process (there is no requirement to publish the code)

  • Systems allowing data subjects to access information about how their data is processed and stored

For example, censorship on social media is often handled by AI systems. A high level of transparency in this process could be achieved by taking steps like:

  • publishing clear and accessible criteria for disallowed content,

  • making clear the role and scope of the censorship algorithm,

  • publishing information about the technical details of the algorithm,

  • making clear how the algorithm was trained, for example the data that was used, especially if it was acquired from users,

  • providing systems for data subjects to access and modify how their data is processed and stored,

  • etc.
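The measures above could also be surfaced programmatically, alongside each automated decision. The following is a hypothetical sketch only - the class, field names, and URLs are invented for illustration and do not come from any real platform's API:

```python
from dataclasses import dataclass


# Hypothetical transparency record attached to an automated moderation
# decision. All field names and URLs are illustrative, not a real API.
@dataclass
class ModerationTransparencyRecord:
    decision: str            # e.g. "removed", "allowed", "flagged for review"
    criteria_url: str        # published criteria for disallowed content
    algorithm_role: str      # role and scope of the algorithm in the decision
    technical_summary: str   # high-level summary of the technical approach
    training_data_note: str  # provenance of training data, incl. user data
    data_access_url: str     # where the data subject can access/modify their data

    def to_public_summary(self) -> str:
        """Render the record as a human-readable notice for the affected user."""
        return (
            f"Decision: {self.decision}\n"
            f"Content criteria: {self.criteria_url}\n"
            f"Algorithm's role: {self.algorithm_role}\n"
            f"How it works: {self.technical_summary}\n"
            f"Training data: {self.training_data_note}\n"
            f"Your data: {self.data_access_url}"
        )
```

Publishing such a record with every decision would address each bullet above: the criteria, the algorithm's role, a technical summary, training-data provenance, and a route for data subjects to access their data.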

Nominal vs. Effective Transparency#

Nominal Transparency

Transparency measures that offer transparency in name only, with no regard for whether the information is easily accessible, easily consumed, and easily understood.

Effective Transparency

Transparency which demonstrably enables users to understand what the platform does with their data and what the implications of their actions on the platform are.