Friend or foe? Creating content with artificial intelligence
Artificial intelligence (AI) technology is transforming the world of content. But in some cases, it’s hindering rather than helping comms teams – and quality control measures are becoming more important than ever.
You’d be hard-pressed to find a content marketing outlet that hasn’t been littered with mentions of AI over the past year. Machine learning tools have launched left and right across different mediums, with diverse outcomes – from artificially rendered images to increasingly realistic deepfake videos. When it comes to written content, intelligent chatbots are churning out essays at the drop of a hat, creating snappy social posts and even inventing recipes.
One-third of brands are now using generative AI in at least one function – and leaders across the content industry have hailed its transformative impact.
Media company News Corp Australia revealed that it has used generative AI to produce 3,000 news stories a week. BuzzFeed, meanwhile, claims the technology has not only cut down the time spent creating written content but has, in some cases, garnered more engagement than media made exclusively by humans.
Yes, the tech is shiny and exciting – but don’t go bowing to the robot overlords just yet. Content created by AI is not foolproof. In their efforts to satisfy prompts, large language models – which are not explicitly designed to provide truthful answers – have been known to produce outdated and even entirely made-up information. And if these issues creep into your content, you risk irreparably damaging your brand’s reputation.
In other words, if you’re considering recruiting an AI tool to your comms team, your quality control measures need to be airtight.
Keep your authenticity intact
Amid the rise of artificial intelligence, the journalistic values of transparency, accuracy and authenticity have never been more essential. And maintaining these values still requires the human touch.
Take data privacy. With AI systems using vast swathes of data to train their algorithms, concerns have skyrocketed around personal information security. Rather than being let loose, these tools need to be closely monitored by people who recognise the need to keep data secure – and who can spot potential risks.
Clear briefing, editing and robust fact-checking, led by real people, are also crucial components of quality control that become even more important when dealing with a content-creating machine.
Some brands that neglected quality control measures when relying on AI have suffered reputational damage – or made laughing stocks of themselves. One supermarket meal planner app generated deadly recipes for its customers, including “poison bread sandwiches” and mocktails containing bleach, which soon went viral on social media.
Another case saw CNET come under fire after it emerged that the tech outlet had been quietly publishing articles for months with the aid of “automation technology”. While audiences condemned the organisation’s lack of transparency, it was also revealed that more than half of the stories published using AI contained factual errors or plagiarised material.
Over-relying on AI didn’t just damage the brand’s reputation; ironically, it also ended up creating more work for its staff. The company has since launched a new editorial policy that sets out how its teams will use the technology in the future.
Generative AI platforms may be able to produce long-form articles at the click of a button, but without a clear brief from a competent journalist or comms expert, they may pull facts and quotes out of thin air. Without the deft hand of an experienced editor, an article’s quality and authenticity can’t be guaranteed. And if there is no eagle-eyed fact-checker on hand to confirm its accuracy, you risk publishing a piece that is plagiarised or riddled with errors – and losing the trust of your audience.
Looking ahead
As of May 2023, only a fifth of newsrooms reported having developed guidelines on using AI – a clear sign that quality control around the technology is an area where content teams still need to establish some ground rules.
There’s no denying that this tech is transformational. When explored safely, AI can be an incredibly useful tool that will forever reshape how content is created. Its very potential to change the world is exactly why it requires careful consideration.
As companies continue to test these tools, they are recognising the need to put robust guidelines in place for how and when comms teams rely on AI – and to communicate this usage clearly to their audiences.
Prioritising clear, quality-focused measures will help brands reassure staff, stakeholders and audiences that their company values remain intact. After all, AI can help elevate your content processes – but it can’t replace them entirely.
AI at Speak Media
It’s impossible to ignore the headlines on AI. We believe that, while the ground-breaking technology offers huge opportunities in content creation, the risks surrounding accuracy, quality and copyright mean that it’s essential to take a considered approach.
Our team are testing AI tools internally to assess how they can best be used across the business to support our clients’ communications goals and boost efficiency – while maintaining our robust quality control processes and protecting our high editorial standards.
3 tips for quality control
1. Keep a human touch: Maintaining accuracy and authenticity in your content output is essential – and not something that AI can replicate. Implement careful briefing, diligent editing, and robust proofing and fact-checking in your processes, led by real people.
2. Establish guidelines: Create clear processes on how your team will be using any AI tools. Be transparent with your audience and stakeholders about how you are using AI in your work.
3. Protect your data: Consider the potential data and security risks when using AI platforms. Ensure that any AI usage aligns with your brand’s ethical standards and doesn’t risk compromising data.
Want to find out how to implement a best-in-class editorial strategy that connects your organisation to your always-on audience?
Contact Gabrielle from our client services team at gabriellebridle@speakmedia.co.uk or on LinkedIn.