Summary#

AI Harms#

The implementation of AI systems can cause harm in various ways, including:

  • eroding privacy (e.g. when collecting data to train large models),

  • producing biased models,

  • producing inaccurate models,

  • producing models that are not robust,

  • increasing exposure to cyber-security risks.

These potential harms are what legal frameworks for governing AI seek to address.

Four Models of AI Regulation#

Soft Law Codes

The standards and principles of AI ethics that influence behaviour but are not codified in enforceable hard law.

Data Protection Laws

Laws that regulate the use of data in automated systems, including its collection, storage, and processing.

General Legal Principles

Laws not specific to data protection or AI that nonetheless impact how AI systems are implemented and deployed.

AI-specific Laws?

Policies, not yet enacted, that would specifically address the potential harms of AI systems.

Data Protection Law#

The General Data Protection Regulation (GDPR)#

An EU regulation that establishes a set of individual rights:

  • The right to be informed,

  • The right of access,

  • The right to rectification,

  • The right to erasure,

  • The right to restrict processing,

  • The right to data portability,

  • The right to object,

  • Rights in relation to automated decision making and profiling.

The Australian Privacy Act (1988)#

A relatively weak law that governs the collection and use of personal information.

Specific Law for AI#

Currently, proposals are being made for ex-ante regulations that would establish standards AI systems must meet, preventing harm before it occurs.

The two pieces of regulation that were discussed in the lecture are:

  • The California bot law

  • The EU AI draft law