Evolving GenAI guidelines: Collective Content’s AI Policy 2.0


Written by

Collective Content Team

29/10/2024

Almost exactly a year ago, we published a piece titled Why we have a GenAI policy – and why you should too. The blog post introduced our formal guidelines for how we use (and don’t use) generative AI (GenAI) in our work as a content agency. At the time, 61 per cent of respondents to the Content Marketing Institute’s (CMI) annual B2B Content Marketing Outlook lacked any formal guidelines for using GenAI.

As is the norm in the world of technology, things have changed in that year – and fast. The 2025 edition of the CMI’s survey found that the share of companies without formal GenAI guidelines has dropped from 61 per cent to 45 per cent – a positive change.

Creating a set of guidelines is not the finish line, however. As weary as we have become of phrases like ‘in today’s ever-evolving technology landscape’, that evolution is real and has real impact. The near-constant pace of AI innovation demands an ongoing practice of researching and interrogating each development as it arises. This is particularly true for organisations working almost exclusively with tech businesses, such as Collective Content.

As AI capabilities change, so must the way we interface with and regulate the technology. Knowing this, we set to work on updating our initial AI guidelines. We arrived at our AI Policy 2.0 at the end of September – a policy we have chosen to make publicly available, and even invite other companies to borrow from, with acknowledgement of its origin.

Developing our AI Policy 2.0

There’s no good excuse for not having an AI policy. That might sound overblown – harsh, even – but given the likelihood of hallucinations (more accurately described as confabulations, because GenAI is a predictive text technology without consciousness or awareness), legal/copyright risks and potential breaches of NDAs, we stand firmly in that position.

We’ve noticed that many technology businesses have well-structured, extensive AI policies on their websites. This is an area in which many organisations would do well to take a leaf out of the tech sector’s book. And with prickly subjects like shadow AI (employees using AI without disclosing their usage), we’d all do well to have regularly updated AI policies that clearly indicate what is permissible and what is strictly prohibited.

The creation of our AI Policy 2.0 was partly a natural byproduct of our AI working group’s regular meetings, but it was no accident. We meet weekly to discuss our research into AI developments, risks and emerging use cases, and within six months of sharing our first set of guidelines, we were starting to have conversations about updating them. Our whole team is also split into groups, each investigating and experimenting with a different AI tool so that we understand the practice, not just the theory. The AI Policy 2.0 is the culmination of months of research and discussion, plus around three weeks of writing and iterating the new version.

What’s changed?

Originally, the focus of our GenAI guidelines was on the written aspect of content creation. As a team of writers, this was a natural starting point. There are countless other areas where AI can enter the picture, though, and we recognise it’s important not to omit these.

We also tackled the issue of AI and GenAI in design work, which raises a multitude of issues to consider, given that many tools were trained on photography and art without the creators’ permission – or any attempt to seek it. With the development of tools such as Midjourney and DALL-E, it would be an oversight not to include our stance on image generation in our policy. We arrived at the position that:

“We won’t use tools such as Midjourney or DALL-E for graphic design for clients or internal marketing purposes. These tools are trained on artists’ work without consent.”

You’ll notice that we’ve broadened our horizons and are using the wider term AI, rather than strictly GenAI, in our updated policy. Our new policy covers areas that go beyond the use of GenAI to create content, such as operations, recruitment and hiring. The bottom line is this: if it uses AI, we need to regulate it.

This isn’t the end for our AI policy. We expect to update this on a regular basis as new risks and advancements enter the arena. We will continue to help you make sense of GenAI while on our own journey with AI and its regulation, and we would love to hear your thoughts on what we’ve shared so far.
