Tutorial 8 Activities
Activities
The use of AI in everyday life may sometimes be profoundly misleading. For example:
AI might give humans partial information – e.g. Alexa says ‘I recommend restaurant X’, without disclosing that Amazon is being paid for that recommendation.
AI might disguise itself as a human – e.g. a chatbot therapist for mental health that doesn’t disclose it is a bot.
AI might overstate its capacities – e.g. an AI investment advisor that claims it can make you rich by trading in cryptocurrencies (nb. don’t rely on this…).
Exercise
In your groups, identify:
An example (not one of the examples given above) of a ‘misleading’ digital or AI product in each of the three categories above;
The kinds of harm (actual, and normative or ethical) to humans that may arise from this misleading conduct;
The ‘regulatory’ response (i.e. interventions that reduce the likely harm) to these products, such as transparency requirements, explanations, human-centred values in design, or legal bans.
Note
Questions prepared by Professor Jeannie Paterson.
Note
As always, post your answers on the forum discussion.