AI Fakes in Marketing: How Brands Deliberately Create Fictional Case Studies, Reviews and Personas in 2025

In 2025, the marketing world is witnessing an unprecedented evolution driven by artificial intelligence. While AI has empowered brands with personalisation and automation, a new, controversial trend has emerged—intentionally fabricated AI content. Companies now create fictional customer reviews, success stories, and even entire personalities to strengthen brand trust and boost sales. But how far can this illusion go before it erodes consumer confidence?

Strategic Fiction: Why Brands Choose to Fabricate

Many marketers justify the use of AI fakes as a strategic move to stay ahead in a hyper-competitive digital landscape. In cases where real customer feedback is lacking or delayed, fictionalised testimonials crafted by generative models fill the void. This enables brands to maintain a consistent narrative and keep engagement rates high during critical campaigns.

Another motivation stems from the desire to create viral moments. A compelling AI-generated case study—despite being fictional—can attract massive attention on social media, provided it mimics the storytelling dynamics that resonate with users. These narratives often blend emotional hooks with data points, making them seem legitimate.

Interestingly, some brands also use AI-generated personas as spokespersons or ambassadors. These digital avatars, backed by machine learning, simulate influencer characteristics, interact with audiences, and even respond to comments, creating a believable illusion of authenticity.
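Mechanically, keeping such an avatar "in character" at scale is mostly plumbing: a text-generation backend wrapped in persona rules, ideally with a mandatory disclosure appended to every reply. The sketch below is purely illustrative; `generate_text` is a placeholder for whatever model API a brand actually uses, and the persona fields and disclosure wording are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """Invented example of a scripted brand avatar."""
    name: str
    tone: str          # e.g. "upbeat", "expert"
    topics: list[str]  # subjects the avatar is allowed to discuss

DISCLOSURE = "(AI persona operated by the brand)"

def generate_text(prompt: str) -> str:
    """Placeholder for a real generation backend (any LLM API)."""
    return f"Thanks for asking! [reply drafted from: {prompt[:40]}...]"

def reply_to_comment(persona: Persona, comment: str) -> str:
    """Draft an on-brand reply and always append the disclosure tag."""
    prompt = (
        f"You are {persona.name}, tone: {persona.tone}. "
        f"Stay within topics: {', '.join(persona.topics)}. "
        f"Reply briefly to: {comment}"
    )
    return f"{generate_text(prompt)} {DISCLOSURE}"

if __name__ == "__main__":
    avatar = Persona("Nova", "upbeat", ["running", "hydration"])
    print(reply_to_comment(avatar, "Does this drink help recovery?"))
```

The design point is that the disclosure lives in the reply function itself, not in the persona's script, so no individual campaign can quietly drop it.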

Consumer Awareness and the Fine Line of Deception

As this practice becomes more widespread, consumer awareness is catching up. In 2025, digital natives are increasingly sceptical and adept at identifying content that seems “too perfect.” Tools for detecting AI-generated content have also improved, enabling watchdogs and journalists to expose inauthentic practices quickly.
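As a rough illustration of how such detection tooling is wired up, the snippet below scores a testimonial with an off-the-shelf classifier from the Hugging Face Hub. The model name is one public example, not an endorsement, and single-model scores are noisy; investigators typically combine several detectors with metadata and provenance checks.

```python
# A minimal sketch of machine-text detection, assuming the
# `transformers` library and a public detector checkpoint.
# Treat detector scores as one weak signal, never as proof.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # one public example
)

testimonial = (
    "This serum transformed my skin in just three days. "
    "I have never felt more confident in my life!"
)

result = detector(testimonial)[0]
print(f"label={result['label']}  score={result['score']:.2f}")
```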

Nevertheless, many consumers remain unaware of the extent to which AI fakes permeate marketing. Transparency becomes key. Brands that disclose AI use and clarify the fictional nature of some testimonials tend to retain public trust. Lack of disclosure, however, can lead to backlash and reputational damage.

From a legal perspective, jurisdictions such as the EU are moving from proposals to enforceable rules: the EU AI Act's transparency obligations require that machine-generated content be clearly flagged. Rules of this kind could reshape how businesses deploy such strategies in the near future.

The Rise of Fictional Influencers and Brand Representatives

Digital avatars—powered by sophisticated neural networks—are now leading marketing campaigns. Unlike traditional influencers, these fictional characters are immune to scandals, infinitely scalable, and fully programmable. Brands craft their backstories, personalities, and public behaviour to align precisely with corporate goals.

For instance, a virtual athlete promoting a health drink can perform superhuman feats in videos and offer perfectly curated fitness advice on social media. These avatars build followings and even “collaborate” with real influencers to boost credibility.

However, this raises critical ethical questions. Can a consumer truly connect with a brand represented by a non-existent human? In many cases, the connection is psychological rather than rational, mirroring how audiences once believed in TV characters and mascots. The emotional bond is real, even if the figure is not.

Implications for Authenticity in Brand Communication

Authenticity has long been a cornerstone of effective branding. With AI fakes in play, the meaning of authenticity is being redefined: rather than a faithful reflection of reality, it increasingly denotes emotional consistency and narrative coherence.

Some experts argue that if a fictional influencer aligns with brand values and provides real value to users—such as entertainment or education—the deception is harmless. Others warn that this sets a dangerous precedent, where trust is manipulated for profit.

The public’s reaction will likely shape how far this trend goes. If consumers begin to demand transparency and penalise deception, brands will be forced to retreat from fictionalisation and return to genuine user interactions.

Case Studies and Future Outlook

In 2025, several brands have already come under scrutiny for publishing entirely fabricated case studies. A notable example involved a skincare brand that used AI to generate before-and-after photos and customer feedback, only to be exposed by online investigators. The fallout included social media boycotts and an official investigation into misleading advertising.

On the other hand, some brands are finding success by integrating AI fakes ethically. A travel app, for example, uses fictional testimonials during beta phases but clearly labels them as simulated experiences, which lets the company showcase features while maintaining transparency. A sketch of this disclosure-by-default pattern follows.
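One way to make this approach hard to get wrong is to bake the disclosure into the data model, so a simulated testimonial cannot be rendered without its label. The sketch below is hypothetical; the field names and label wording are invented for illustration.

```python
from dataclasses import dataclass

SIMULATED_LABEL = "Simulated testimonial for demo purposes"

@dataclass(frozen=True)
class Testimonial:
    author: str
    text: str
    is_simulated: bool  # True for AI-generated placeholder content

def render(t: Testimonial) -> str:
    """Render a testimonial; the disclosure is appended automatically."""
    if t.is_simulated:
        return f"\u201c{t.text}\u201d [{SIMULATED_LABEL}]"
    return f"\u201c{t.text}\u201d - {t.author}"

beta_card = Testimonial("Demo User", "Planning my trip took five minutes!", True)
print(render(beta_card))
```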

Looking ahead, it’s likely that the ethical use of AI-generated marketing will become standard practice. Regulations will compel transparency, and consumer expectations will demand it. Brands that embrace honesty—while still leveraging the creative power of AI—will stand out in an increasingly artificial digital marketplace.

Guidelines for Ethical Use of AI in Marketing

To navigate this evolving terrain, marketers must establish ethical boundaries. Clear labelling, disclaimer practices, and consumer education should be top priorities. It’s essential to respect the audience’s right to know whether a testimonial or brand story is real or fabricated.
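One way teams operationalise such labelling is a pre-publication gate that blocks any AI-generated asset lacking a visible disclosure. A minimal sketch, with invented field names and label strings, follows; a real pipeline would hook this check into a CMS or review workflow.

```python
REQUIRED_DISCLOSURES = ("AI-generated", "Simulated testimonial", "AI persona")

def check_publishable(asset: dict) -> None:
    """Raise if an AI-generated asset carries no visible disclosure.

    `asset` is a hypothetical record, e.g.:
    {"ai_generated": True, "body": "...", "label": "AI-generated"}
    """
    if not asset.get("ai_generated"):
        return  # human-made content needs no AI label
    label = asset.get("label", "")
    if not any(tag in label for tag in REQUIRED_DISCLOSURES):
        raise ValueError("AI-generated asset lacks a disclosure label")

check_publishable({"ai_generated": True, "body": "...", "label": "AI-generated"})
```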

Additionally, collaboration with legal and compliance teams ensures that campaigns stay within the limits of emerging AI regulation. Brands should also monitor consumer sentiment continuously to adjust strategies based on public feedback.

Ultimately, the use of AI in marketing is not inherently unethical. The intent and transparency behind its use will determine whether it builds trust or undermines it. In 2025, responsible innovation—not deception—must be the path forward.