The uncanny valley: how to keep your brand’s authenticity intact when using AI

Artificial intelligence (AI) has increasingly been adopted by comms teams to help develop everything from social posts to artificially rendered images – but the rise of misinformation and offensive content has raised concerns about authenticity. We explore why transparency is vital when using these tools – and how to maintain the trust of your audience.


Since exploding into the public eye in late 2022 with the viral rollout of ChatGPT and AI-powered selfie apps, the technology has held the attention of many corporate comms teams.

In a 2023 Deloitte survey, more than half of business leaders reported that their organisation was evaluating or experimenting with the tech – and 80% said that generative AI would increase efficiencies in their organisation. What’s more, the generative AI market is expected to grow by at least 120% year on year.

But with great power comes great responsibility (yes, we just quoted Spider-Man). The rapid growth of AI usage in branded communications has reinforced the need for quality control measures – from robust editing to fail-safe fact-checking to the indispensable human touch.

Amid concerns around the spread of misinformation and offensive content, Meta has announced that it will begin labelling AI-generated images on Facebook and Instagram. And YouTube now requires creators to disclose when ‘realistic’ content – anything viewers could mistake for a real person, place, scene or event – is made with altered or synthetic media, including generative AI.

In the EU, legal measures are even being put in place through the AI Act – the world’s first-ever comprehensive legal framework on AI. These new rules aim to ensure that AI systems respect fundamental rights, safety and ethical principles in Europe and beyond.

One buzzword that has been front and centre – even before AI dominated the news – is ‘authenticity’. A recent survey found that authenticity matters to 88% of consumers when deciding which brands to support – so it goes without saying that transparency and credibility are essential for any organisation that wants to retain its audience. For teams considering using AI to develop their corporate communications, that can feel like running the gauntlet.

Which side are you on?

Beauty brand Dove has drawn a line in the sand. Last month, the company pledged not to use AI in its advertising, reinforcing its commitment to “protecting, celebrating and championing Real Beauty”.

Sharing this statement as part of its marketing campaign ‘The Dove Code’, the organisation highlighted the vast number of women who feel pressure to alter their appearance because of what they see online, “even when they know the images are fake or AI-generated”.

Is this announcement a bold move? Yes, but it also makes sense from a comms perspective. Rather than racing with competitors to adopt AI technology, Dove has taken a smart, considered approach that aligns with its brand ethos. It shows audiences that the company is serious about upholding its values.

Commenting on the move, Harris Wilkinson, chief creative officer at The Marketing Arm, said that more organisations would likely follow suit in the future: “If brands claim they are authentic, eventually their audiences will demand they prove it.”

This is not to say your company has to reject AI technology entirely. What’s most important is keeping your brand’s genuine values front and centre when creating corporate content – whether you use the technology or not.

Tread carefully

A recent global study by Getty Images found that virtually all consumers (98%) consider ‘authentic’ images and video pivotal to establishing audience trust. Needless to say, comms teams need to think very carefully about how they present AI-supported content to their audience. This may be part of the reason that, reportedly, fewer than half of journalists use generative AI in their work.

Our view? Leave the heavy lifting to your comms team when it comes to protecting quality, and make sure you have measures in place to manage risk. Data-saturated as they may be, AI tools don’t have your team’s expert insights or years of experience. Nor are they yet able to understand your brand purpose, values and comms landscape inside and out. Left unchecked, AI tools can produce – at best – generic, unengaging writing and – at worst – misinformation and inauthenticity.

Of course, AI has its uses as a content-creation tool, and learning more about its capabilities is essential. If you’re curious about the technology, or trialling ways to effectively incorporate it into your corporate comms processes, prioritise transparency. Be open and honest about how your company is using AI to help reassure your readers that your brand values remain intact.

When marrying AI with authenticity in corporate comms, remember: if protecting your brand’s reputation is your priority, it’s a marathon, not a sprint. A slow and steady approach might not feel like a quick win, but it will keep your audience cheering from the sidelines every step of the way.


AI at Speak Media

We believe that while AI offers huge opportunities in content creation, the risks surrounding accuracy, quality and copyright mean that it’s essential to take a considered approach to the technology.

Our team are testing AI tools internally to assess how they can best be used across the business to support our clients’ communications goals and boost efficiency – while maintaining our robust quality control processes and protecting our high editorial standards.


Want regular branded content insights sent straight to your inbox? Sign up for the Speak Media newsletter.


If you want to find out how Speak can help your business, contact Gabrielle Bridle from our client services team at gabriellebridle@speakmedia.co.uk or on LinkedIn.

