When Gen AI Erodes Trust, Brands Take the Fall

Generative AI’s potential to change the way we work and shop holds amazing promise, but this new technology also poses heightened risks for brands. Maintaining customer trust will be a perpetual challenge as generative AI (GenAI) changes the fraud landscape in two important ways. First, GenAI is already making it easier for professional and amateur scammers to create websites, marketplace stores and social media accounts that impersonate legitimate brands. Second, fake news and other disinformation created with GenAI are eroding consumers’ trust in many online platforms, which can have a negative impact on brand trust and safety.

Businesses that want to maintain the confidence of their customers need to understand these threats to brand trust and develop strategies for counteracting them. That requires keeping a closer eye on brand assets and on how well platforms are protecting their own reputations.

A Stronger Focus on Brand Authenticity

In a recent international survey of consumer attitudes on ecommerce, fraud, and customer experience, 23% of shoppers said they had been deterred from making online purchases for two reasons: uncertainty about whether a website was legitimate, and the risk of online scams.

Online shoppers are wise to be wary. From 2021 through the first half of 2023, the Federal Trade Commission logged $2 billion in fraud losses to scam websites and $2.7 billion in losses to social media scams. As the FTC notes, the low cost of targeting ads to potential victims makes social platforms an easy place to prey on shoppers.

Now GenAI makes it easier than ever for criminals to generate ad copy, website content, and social media posts that look and sound virtually identical to messaging from trusted brands. Add logos and a lookalike URL or social media handle, and brand impersonators can dupe even careful shoppers online. That can have long-term consequences for brands. Once a customer experiences fraud on a website, 84% will never shop there again, according to the consumer attitudes survey.

The rising risk of losing customers' trust and lifetime value means that businesses need to become more proactive about protecting their brands and shutting down impostors. Gartner predicts that within two years, more than half of CMOs will be using "content authenticity technology, enhanced monitoring, and brand-endorsed user-generated content (UGC) to protect their brands from deception unleashed by GenAI." Identifying impostor sites, accounts, and marketplace shops, and then getting them taken down, is an ongoing task that requires its own type of AI technology and automation.

Avoiding “Guilt by Association” with Untrustworthy Platforms

It's not only ecommerce sites that consumers worry about. Gartner found that half of U.S. shoppers plan to spend less time on social media by 2025 because they're worried about fake news and ads. Consumers aren't optimistic about the way things are trending on social platforms, either. More than 70% believe that GenAI's increasing role in social media will "harm user experience."

Especially for brands that market heavily through social media platforms, it's increasingly important to protect brand trust from impostors, and to evaluate each platform's own trustworthiness to avoid brand damage by association. Brands that maintain a presence on platforms and sites that have lost credibility risk wasting their marketing efforts, because consumers either won't see their messaging or won't trust the brand.

Cultivating and Maintaining Consumer Brand Trust

With the risk of "guilt by association," maintaining customer trust may require some marketing strategy and budget adjustments. For example, a business that currently spends most of its social media budget on platforms seeing an increase in disinformation and a decrease in user engagement may need to switch to outlets with higher levels of credible content and engagement, even at higher ad rates. Forrester Research similarly recommends that businesses "double down on marketing activities aligned to tier-one media and ensure that they are seen as reliable and trusted sources for journalists, buyers, investors, and other critical audiences."

Due to GenAI’s disruptive potential in terms of fake news and declining trust, businesses should regularly reassess their marketing channels in terms of brand trust, engagement trends, and rates of misinformation in each channel and platform. At some point, the metrics may show that it’s time to leave a particular social platform or stop advertising with a particular network due to rates of misinformation, fraud, consumer distrust, or falling engagement.

Brands may also need to adapt their messaging strategy to address customers' questions or concerns about trust and legitimacy. Posting the brand's trusted URLs and social media links on the company's website, social pages, email signatures, and even packaging can help customers be sure they're in the right place. So can sharing the brand's security policies, such as the fact that employees will never ask for customers' account passwords. This kind of safety messaging should be an ongoing conversation that evolves as new forms of brand impersonation emerge. The overarching goal is to communicate to customers that the brand values their security.

If communicating about safety sounds like a familiar recommendation, it is, but it's more urgent now. Bad actors have been trying to impersonate brands online to steal customer information since the dawn of ecommerce. The difference now is that GenAI stands to make the tools of brand impersonation more precise and easier for more people to use. Stepping up brand protection practices, taking extra care about where your brand is present, and communicating about safety with customers are time-tested best practices that can help preserve the trust the brand has earned from customers, and protect it from new threats.


Original article at: https://streetfightmag.com/2024/02/21/when-gen-ai-erodes-trust-brands-take-the-fall/