Why we have a GenAI policy – and why you should too


30/10/2023

WRITTEN BY

Eve Michell
Senior Writer

Eve joined us as our first apprentice in 2020. Since then, she’s become a key member of our AI working group, as well as our social media and newsletter teams. Working across a wide range of clients, she loves learning about cybersecurity, B2B SaaS and all kinds of enterprise technology. She’s an avid reader and published poet, having studied English Literature and Creative Writing in Canterbury, and she helps run an independent zine venture called EXIT Press.

Generative artificial intelligence (GenAI) – AI tools that can create content, including text responses, articles and images – is potentially the most significant disruptor in marketing this decade. Since the release of ChatGPT in November 2022, and the subsequent development of Google’s Bard, Microsoft’s Copilot and more, people in content creation roles have been learning how to contextualise and work with GenAI.

As the tools evolve, so do the benefits and drawbacks. One of the biggest advantages of GenAI tools is the sheer speed with which you can generate an answer to a prompt, without doing the research or the thinking yourself. But you can’t always trust those answers. AI hallucinations – in which an AI tool makes its best guess at answering a question but presents false conclusions as fact – are being described as a feature, not a bug. These are just two examples; you can read more of our thoughts on the pros and cons of using GenAI for content in our blog post.

Respecting your clients and your craft in an AI-driven world

Perhaps the most significant risk of using GenAI to create content is that anything you input into a chatbot to generate a response may be used to train that AI tool. This means you could be forfeiting the privacy of whatever you input – and you can’t guarantee that the tool won’t churn out sections of your prompts to other users.

Why does this matter? For content creators working with clients, it’s common for clients to divulge sensitive, confidential information in their briefs to help inform the article, blog or report that you’re working on. If you decide to bypass the content writing process by pasting a brief into ChatGPT or a similar tool, you could be breaching your client’s privacy policy or any non-disclosure agreement (NDA) you’ve signed – and you’ll be releasing their top-secret information into the public domain by giving the AI tool the opportunity to use it in future responses to other users.

This is why it’s vital that your team has an informed GenAI policy that dictates how staff can and can’t use AI in their work. Despite the real and significant risk of breaching NDAs by using AI to create content, many organisations lack a formal policy or guidelines to mitigate this risk.

The Content Marketing Institute’s 14th Annual B2B Marketing Survey reported that 61 per cent of respondents do not have formal guidelines in place for using GenAI tools. This is a worrying statistic: it shows that the pace at which AI tools are developing is outstripping organisations’ ability to regulate how they’re used. For clients working with agencies to create content, this should be a concern.

How we became early adopters of formal AI guidelines

Once a week, our internal AI working group meets to discuss the developments, benefits and risks of GenAI, specifically in content. Within the first few meetings, we agreed that the risks associated with AI could become a real problem if we didn’t try to tame the beast early.

So, in June 2023, Content Director Aled Herbert drafted a series of statements setting out our position on AI, along with guidelines for our internal staff and our freelancers. Collective Content’s official AI guidelines hit everyone’s inbox soon after, stating that:

“Our position is that we will not use any AI technology to create content for our clients. This includes copy for outlines and assets, as well as for documents like content strategies and social copy.”

And furthermore:

“Nothing generated by AI is copied into client assets. Nor do we input any sensitive/confidential client information into an AI tool for research or ideation.

Naturally, as writers, we are proud of the quality, authenticity and accuracy of what we produce for clients. The use of AI for content creation compromises that position. This is why it’s so important that we tread with caution around the matter of using AI in our work.”

Is AI completely prohibited at Collective Content?

In a word, no. Our statement continues, informing our staff and freelancers that we’re open to using new tools and technology to help in the research and ideation stages of client work. Therefore, while we encourage our writers to experiment with AI for ideation and research, we insist they always verify data and information sources to ensure that they exist and are accurate.

This allows our team to play around with AI tools and use them with these guidelines in mind. We don’t want to echo the sentiments of the monks who believed that the printing press would make people lazy. We’re not here to condemn technological developments. But we are passionate about making sure that we use these tools in ways that are ethical and compliant, putting our clients’ needs and interests first.

With that said, we encourage everyone to learn how to use AI to boost creativity and efficiency. But, just as important, we encourage you to consider the real-world implications of relying on GenAI to create your content, and to create a policy for your organisation that keeps the risks at bay.
