Collective Content AI Policy

Purpose

As an agency that works with technology companies, Collective Content is aware that AI is used to great effect in some of our own clients’ solutions. This policy reflects the recent rise of public generative AI platforms, such as ChatGPT and Gemini, and their use for creative and operations work.

AI can support the operations of many businesses, whether through automation, decision support or content creation. We believe that AI, and GenAI in particular, has the potential to augment some areas of our operations, but we also have a responsibility to consider the implications of using it – both for how we work and for the assets we create for our clients.

This policy represents an ongoing reflection on how we use AI responsibly in our business and how we communicate that use in a transparent way. It has been produced by a small working group responsible for policy, best practice, research and training on AI.

Scope

Agency activities covered by this policy:

  • Research
  • Writing
  • Design
  • Editing and quality assurance
  • Operations and policy development
  • Recruitment and hiring

Ethical principles

As the EU AI Act states:

(6) As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.

This human-centric approach forms the basis of our engagement and use of AI.

  • Transparency – we will always be transparent with colleagues and clients about our use and applications of AI.

  • Confidentiality – the commercial confidence of our clients and their customers is our highest priority, and we will never expose client-confidential material to AI tools.

  • Accountability – we will track the evolution of AI regulations to ensure we protect our content and that of our clients on matters of copyright, privacy and plagiarism.

  • Sustainability – AI needs to be used responsibly given the extra computing it requires, so we will always be mindful of its environmental impact.

  • Privacy – we regularly record calls and produce transcripts using AI-assisted tools, typically built into Microsoft 365 or Slack. We always ask for permission before recording, and we do not share recordings or transcripts outside the organisation without the consent of call participants.

Agency activities and AI

Research

We’re open to using new tools and technology to help in the research stage of client work. For example, AI can be helpful for explaining new and complex concepts or technologies in simple terms. But we always advise our editors to tread with care. If you use AI to come up with a topic for thought leadership, then you aren’t really demonstrating thought leadership, just borrowing someone else’s. The ideas you start from may not be new, but they should be your own.

We also have first-hand experience of how AI hallucinates – in other words, how it makes up facts where convenient truths do not exist. Therefore, while we encourage our writers to experiment with AI in their research, we insist they always verify data and information sources independently to ensure they exist and are accurate.

Nothing generated by AI is copied into client assets. Nor do we input any sensitive or confidential client information into an AI tool for research.

With many former journalists on our writing roster, we have spent years interviewing experts, and there’s no substitute for a human-to-human conversation. Face-to-face conversations (even over video) will carry added weight as other sources of content become harder to validate as original.

Writing

Our position is that we will not use any AI technology to write content for our clients. This includes copy for outlines and assets, as well as documents such as content strategies and social copy. We extend this to design activities, as well as animation and video creation, although there are edge cases where AI tools can augment human graphic designers.

GenAI models are trained on vast amounts of pre-existing content drawn from the internet, and they use that content to generate answers to questions. This means there is a reasonable likelihood that content generated by AI will include copyrighted material – potentially from a client’s competitor.

As writers, we are proud of the quality, authenticity and accuracy of what we produce for clients. The use of AI for content creation compromises that position.

Design

We take a similar approach to visual design as to writing, in that we don’t use AI to generate or materially alter images. We won’t use tools such as Midjourney or DALL-E for graphic design, whether for clients or for internal marketing purposes, because these tools are trained on artists’ work without their consent.

However, AI tools built into Photoshop and other image editors can be useful for making minor adjustments to aspect ratios or similar low-level tasks. We will advise clients if we use these tools in a significant way.

Editing and quality assurance

Submitting a written asset to a GenAI tool along with editorial guidelines and style instructions can produce accurately proofed documents. However, doing so risks compromising client IP and the protection of commercial content.

The editorial process at CC is managed solely by human editors. Using AI might deliver quicker results, but it will not help us improve as writers and editors. Human judgement is not always perfect, but it can detect and avoid bias in ways that AI tools cannot.

Operations and policy development

We have used AI to develop internal policy documents related to day-to-day operations, such as HR practices and IT usage policies. It can also help us improve internal processes.

Alongside more traditional research methods, AI tools can help us understand best practice for such policies. GenAI can also be useful for generating templates and boilerplate statements for further development.

Recruitment and hiring

We won’t filter applications or CVs through AI. As a human-centric organisation, we read every application and CV that lands in front of us. In a similar vein, if you’re putting together a CV or resume, we strongly discourage you from using AI. As a company so focused on words, we want to read yours, not ChatGPT’s.

Meetings, calls and transcripts

Meetings and interviews are a fundamental part of what we do, and we think there’s no substitute for properly engaged human interaction. While AI tools can be useful for transcribing meetings and interviews, we prefer to be fully present so we can engage better with participants.

You will always have our full attention. AI is no substitute for interactions between people, not least because AI-generated summaries of calls and conversations can be inaccurate or miss important nuance.

We also recognise that AI tools can support colleagues with disabilities or people who are neurodivergent. AI-enabled tools that can be used to address issues of accessibility are of great value.

Training

In such a fast-evolving field, appropriate training can help us keep up to speed with best practice and make sense of the latest developments. We will offer all colleagues the opportunity to receive training on the appropriate and meaningful use of AI tools.

As an organisation, we have a duty to stay on top of trends in AI and in technology in general, and a passion for doing so. Our AI working group is an example of this, but we encourage the whole team to share links and thoughts on AI matters.

Working with this AI policy

Our evolving AI policy has always been communicated to our writers and broader team, including the network of freelance colleagues we work with. We expect all staff, both permanent and freelance, to abide by this policy at all times. Updates to the policy will be communicated promptly.

In addition, we look forward to having ongoing discussions with partners and clients about their own approaches to using AI, as the technology evolves.

Review and update schedule

AI is a rapidly evolving discipline, and we expect this policy to evolve as new technology, tools and use cases emerge. We will review this policy at least quarterly and maintain a record of changes whenever it is updated.

If you would like to use this policy in your own organisation, you are free to do so – but please let us know.

Definitions

AI: An engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. [‘The Language of Trustworthy AI: An In-Depth Glossary of Terms’, NIST]

GenAI: An artificial intelligence (AI) technology that automatically generates content in response to written prompts. The generated content includes texts, software code, images, videos and music. [‘Introduction to Generative AI’, UCL]

Bias: A systematic error. In the context of fairness, we are concerned with unwanted bias that places privileged groups at systematic advantage and unprivileged groups at systematic disadvantage. [‘The Language of Trustworthy AI: An In-Depth Glossary of Terms’, NIST]

Content marketing: A strategic marketing approach of creating and distributing valuable, relevant and consistent content to attract and acquire a clearly defined audience, with the objective of driving profitable customer action. [Glossary for Content Marketing, CMI]

Privacy: Freedom from intrusion into the private life or affairs of an individual. [‘The Language of Trustworthy AI: An In-Depth Glossary of Terms’, NIST]

v. 2.250924

25 September 2024
