AI Policy
Watershed’s values guide everything we do. We strive to be inclusive, transparent, responsible, kind and hopeful in all aspects of our work – including our use of AI.
As a pioneering creative technology organisation, our approach to AI use is intentional.
This means:
- We resist AI as a pervasive, ambient or default workplace tool.
- We champion and support artists and creatives. Watershed does not commission or create artistic or creative work that has been wholly made using generative AI.
- We recognise that some staff members will use generative AI tools as assistive technologies.
- We are realistic. We recognise AI tools and assistants are being added to the software packages and systems used in Watershed’s day-to-day operation. Watershed is also operating in a context of shrinking resources. From time-to-time, we may need to use AI tools to support business operations. We will always strive to do so with intention.
To deliver our approach
- We conduct ethical, social and environmental impact assessments around new uses of AI.
- We use transparent crediting that celebrates the people we work with and will always make it clear if and how AI has been used as part of a project.
- We keep a register of AI tools and services that can be used at Watershed, alongside their impact assessments.
With thanks to Rachel Coldicutt at Careful Industries for helping us develop Watershed's AI policy.
Rachel describes Watershed's approach as "conscious and inclusive risk taking".
For any questions or feedback email info@watershed.co.uk
Our policy
The use of AI tools and systems has a range of environmental impacts, and can lead to discriminatory outcomes for individuals and hoarding of power by a small number of billionaire-owned companies.
These technologies can also have accessibility and inclusion benefits, and can be used to increase Watershed’s operational effectiveness.
As such, all staff are asked to be mindful that their use of AI tools and technologies comes with consequences and use them only when necessary. Organisationally, we will implement AI-enabled tools and systems when we are confident that there is a robust business case.
Watershed has the following policies in place for management of AI:
- Meeting transcriptions and summaries
- Use of generative AI tools
- Systems that use machine learning
- Transparency statements
- Impact Assessments
These will be reviewed annually, or more frequently if required.
1. Meeting transcriptions and summaries
It is Watershed policy to only use AI transcriptions and summaries of video conversations when the benefit to an individual or team is shown to outweigh potential broader societal harms [1].
Specifically:
- As part of an individual’s broader package of reasonable workplace adjustments.
- When there is a clear case that a mostly accurate, verbatim transcript is required (such as during a research interview).
Use of Recordings, Transcriptions and Summaries
Recordings and transcriptions may contain errors [2] and so may not be used as part of HR or grievance procedures or as legal documents, such as minutes. Outputs (recordings, transcriptions and summaries) should not be distributed to others without the consent of everyone featured and should always be edited or checked before distributing them.
Staff should also be mindful that AI tends to generate very long documents and meeting notes that simply shift the burden of labour from the note-taker to the reader.
Consent
In line with guidance from the Information Commissioner's Office, everyone who is recorded or transcribed must give clear consent. If possible, this should be arranged in advance so that alternative methods of transcription can be arranged if any participant raises a reasonable objection. Ordinarily, the use of transcription to support a reasonable adjustment will be considered to have priority over other concerns.
Declining Recording and Transcription in Meetings With External Stakeholders
From time-to-time, staff may be invited to take part in external meetings at which recording or transcription is turned on without consent. In these cases, staff are empowered to decline, stating in writing or verbally that it is not Watershed policy to use generative AI as an ambient tool, and that AI transcriptions and summaries of video conversations are only used when the benefit to an individual or team is shown to outweigh potential broader societal harms.
Tools
Watershed’s IT team will confirm which tools are suitable for use (depending on whether the meeting takes place over Zoom, Teams, or another video or audio conferencing package); requests to use other forms of software may be subject to an impact assessment.
Storage
Transcriptions will only be stored for 30 days, after which time they will be automatically deleted.
2. Use of generative AI tools
Watershed does not support the use of generative tools to create images, audio or video in any public-facing outputs.
From time-to-time, staff members may use the generative AI numerical, text-generation and summarisation tools in Microsoft Copilot. For specialist tasks, including software development, Claude.ai may be used.
Accuracy is important: staff should always keep a human in the loop and ask one other person to check the accuracy of outputs.
3. Systems that use machine learning
Organisational systems may be purchased or developed that use machine learning and other kinds of AI. These systems will be subject to impact assessments.
4. Transparency statements
If generative AI has been used, we will let others know by adding a short transparency statement such as a footnote to a document, or in the comments of any code that has been generated.
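As an illustrative sketch only (the wording and placement are an assumption, not a mandated form), a transparency statement in the comments of generated code could look like this:

```python
# Transparency statement (illustrative wording):
# Parts of this module were drafted with a generative AI tool and
# reviewed by a human before use, in line with Watershed's AI policy.

def greet(name: str) -> str:
    """Return a short greeting. Body drafted with AI assistance, human-checked."""
    return f"Hello, {name}!"

print(greet("Watershed"))
```

The statement sits alongside the code it describes, so anyone reading or reusing the file sees how it was produced.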
5. Impact Assessments
Use of AI tools at Watershed is subject to an ethical, social and environmental assessment [3], and – at first use – each use case will be logged in the Watershed Generative AI Register. The Register will be reviewed annually to remove outdated and defunct use cases.
Further reading suggestions
[1] Societal harms could broadly be considered to include:
- Bias and Discrimination
- Misinformation and Disinformation
- Economic and Labor Disruptions
- Privacy and Surveillance
- Environmental Impact
- Safety and Security Risks
Beyond the individual: governing AI’s societal harm, Internet Policy Review
Women Reclaiming AI, co-founded by Watershed's Pervasive Media Studio resident Carol Manton
[2] Nine risks caused by AI notetakers, by Careful Industries
[3] Introducing the Careful Consequence Check
Your privacy
Further details are available in our privacy policy in relation to Watershed's use of AI.
Sharing our policy
This policy can be shared or adapted under a Creative Commons Attribution-NonCommercial licence. The licence requires:
- Attribution: "AI Policy by Watershed, licensed under CC BY-NC 4.0"
- For a modified or adapted work, you must indicate that changes were made: "This work is adapted from AI Policy by Watershed, licensed under CC BY-NC 4.0. Changes were made to the original."
- NonCommercial: you may not use this material for commercial purposes.