Generative artificial intelligence (GenAI) – AI tools that can create content, including text responses, articles and images – is potentially the most significant disruptor in marketing of this decade. Since the release of ChatGPT in November 2022, and the subsequent arrival of Google’s Bard, Microsoft’s Copilot and more, people in content creation roles have been learning how to contextualise and work with GenAI.
As the tools evolve, so do the benefits and drawbacks. One of the biggest advantages of GenAI tools is the sheer speed with which you can generate an answer to a prompt, without doing the research or the thinking yourself. But you can’t always trust these answers. AI hallucinations, in which a tool presents its best guess at an answer as established fact, have even been described as a feature, not a bug. These are just two examples – you can read more of our thoughts on the pros and cons of using GenAI for content in our blog post.
Respecting your clients and your craft in an AI-driven world
Perhaps the most significant risk of using GenAI to create content is that anything you input into a chatbot to generate a response may be used to train that AI tool. This means you forfeit the privacy of whatever you enter – and you can’t guarantee that the tool won’t churn out sections of your prompts in responses to other users.
Why does this matter? For content creators working with clients, it’s common for clients to divulge sensitive, confidential information in their briefs to help inform the article, blog or report that you’re working on. If you bypass the writing process by pasting a brief into ChatGPT or a similar tool, you could be breaching your client’s privacy policy or any non-disclosure agreement (NDA) you’ve signed – and you’ll be releasing their confidential information into the public domain by giving the AI tool the opportunity to reuse it in future responses to other users.
This is why it’s vital that your team has an informed GenAI policy that sets out how staff can and can’t use AI in their work. Despite the real and significant risk of breaching NDAs by using AI to create content, many organisations lack a formal policy or guidelines to mitigate it.
The Content Marketing Institute’s 14th Annual B2B Marketing Survey reported that 61 per cent of respondents do not have formal guidelines in place for using GenAI tools. This is a worrying statistic: the speed at which AI tools are developing is outstripping organisations’ ability to regulate their use. For clients working with agencies to create content, this should be a concern.
How we became early adopters of formal AI guidelines
Once a week, our internal AI working group meets to discuss the developments, benefits and risks of GenAI, specifically in content. Within the first few meetings, it became clear that these risks would become real problems if we didn’t try to tame the AI beast early.
So, in June 2023, Content Director Aled Herbert devised a series of statements setting out our position on AI, with guidelines for our internal staff and our freelancers. Collective Content’s official AI guidelines hit every inbox soon after, stating that:
“Our position is that we will not use any AI technology to create content for our clients. This includes copy for outlines and assets, as well as for documents like content strategies and social copy.”
And furthermore:
“Nothing generated by AI is copied into client assets. Nor do we input any sensitive/confidential client information into an AI tool for research or ideation.
“Naturally, as writers, we are proud of the quality, authenticity and accuracy of what we produce for clients. The use of AI for content creation compromises that position. This is why it’s so important that we tread with caution around the matter of using AI in our work.”
Is AI completely prohibited at Collective Content?
In a word, no. Our statement continues, informing our staff and freelancers that we’re open to using new tools and technology in the research and ideation stages of client work. So while we encourage our writers to experiment with AI for ideation and research, we insist they always verify data and information sources to confirm that they exist and are accurate.
This allows our team to play around with AI tools and use them with these guidelines in mind. We don’t want to echo the sentiments of the monks who believed that the printing press would make people lazy. We’re not here to condemn technological developments. But we are passionate about making sure that we use these tools in ways that are ethical and compliant, putting our clients’ needs and interests first.
With that said, we encourage everyone to learn how to use AI to boost creativity and efficiency. But, just as important, we encourage you to consider the real-world implications of relying on GenAI to create your content, and to create a policy for your organisation that keeps the risks at bay.
Introducing Collective Content’s AI Policy 2.0
Since writing the above, our AI working group has continued to discuss the developments arising almost daily in the GenAI space. We were early adopters of an AI policy in the content agency world, but just as the technology is evolving, so must our thinking, guidance and strategy for how we integrate (or don’t integrate) AI into our own operations.
Meeting weekly to build on our original policy, we arrived at a second iteration of the guidelines mentioned above: our AI Policy 2.0. We also made the somewhat bold decision to publish this policy publicly – you can find the full document on our website – and make it free for other organisations to adopt.
What’s changed?
Originally, our focus was on the written aspect of content creation. As a team of writers, this is a natural starting point. But what we do goes beyond just text on a page: we have a talented pool of designers who take our words and turn them into something beautiful. With the development of tools such as Midjourney and DALL-E, it would be an oversight not to include our stance on image generation in our policy. We arrived at the position that:
“We won’t use tools such as Midjourney or DALL-E for graphic design for clients or internal marketing purposes. These tools are trained on artists’ work without consent.”
But what about tools that have AI functionality built into them, we hear you ask?
“…AI tools built into Photoshop and other image editors can be useful for making minor adjustments to aspect ratios or similar low-level tasks. We will advise clients if we use these tools in a significant way.”
We also covered some less obvious areas where AI might play a role, including recruitment and hiring talent, meetings, calls and transcripts, and training.
This isn’t the end for our AI policy. We expect to update it regularly as new risks and advancements enter the arena. We will continue to help you make sense of GenAI as we navigate our own journey with the technology and its regulation, and we would love to hear your thoughts on what we’ve shared so far.
Follow us on LinkedIn to stay up to date with our developments.