Consumer perspective

  —  Column 

Content is not connection:
why AI needs behavioral science for consumer satisfaction*

KEYWORDS 

AI

Ethics

Consumer adoption

Product experience

Brand trust

DEI


About the Author

Kathryn Ambroze

Kathryn Ambroze is a behavioral neuroscientist with experience in consumer research and methodological innovation. She earned her Bachelor's in Neuroscience and Business from Muhlenberg College and her Master's in Behavioral and Decision Sciences at the University of Pennsylvania. She currently works at JP Morgan Chase as a Senior User Researcher, infusing behavioral science and design thinking into the customer experience.

Behavioral Scientist and User Researcher, JP Morgan Chase

Within the past few years, artificial intelligence (AI) has quickly become embedded in consumers' everyday lives. What started as cheeky conversations about robots and automation has rapidly shifted to AI becoming a valued tool for both large corporations and personal households alike. The rapid integration of AI into consumer experiences comes with many benefits and drawbacks, but more importantly, it has led to a clear need to understand human behavior to ensure this tool is used appropriately—not only by consumers themselves but also by companies trying to stay competitive. By incorporating behavioral science principles into AI development and consumer education, organizations can create valuable experiences that use AI to enhance, rather than overrun, the consumer-company dynamic.

*Note: The views and opinions expressed are solely those of the author and do not necessarily reflect the official position or policy of any affiliated organization, employer, or company.

Introduction

Artificial intelligence (AI) has surged in development in recent years, weaving its way into consumers’ daily routines. Whether it's large language models (LLMs) like ChatGPT and Gemini or voice assistants like Apple's Siri and Amazon's Alexa, AI has become a normal fixture for consumers. Because of this fast integration into consumers' lives, companies are scrambling to bolt AI onto their workflows and products wherever they can, often driven by the fear of being left behind. But this panic-driven approach has led many companies to throw AI into inappropriate situations, creating disjointed experiences rather than intuitive solutions. There needs to be reflection on the consumer experience to ensure that AI serves a purpose rather than checks a box. Behavioral science can bridge the critical gap between AI's capabilities and what consumers actually need. Applying behavioral insights to AI applications can ensure AI actually improves consumer experiences instead of making them more complex.


Keeping it real has its perks

Positioning a company in the digital landscape is one of the most crucial strategies that connects brands with consumers. Every touchpoint—from online marketing to websites and apps—shapes how consumers form impressions. In today's environment, where online stimuli bombard consumers constantly, standing out becomes increasingly challenging. Many companies have turned to AI to build efficiencies, create cost savings, and increase market share. However, AI-generated ads have often made companies appear inauthentic. Both Coca-Cola and Toys "R" Us recently experimented with AI-generated ads that received significant backlash for awkward facial expressions, unnatural movements, and distorted details.


These glitches created controversy particularly because they appeared in ads involving sensitive topics like childhood nostalgia and holiday traditions. AI-generated content has been called out for making consumers feel uneasy, a reaction explained by the "uncanny valley" phenomenon—the negative response people have to something that is almost, but not quite, human. This near-human quality evokes feelings that not only make consumers uncomfortable but also distract from the ad's core message. If a consumer is focused on the irregularly shaped trucks in the Coke commercial, it's unlikely they will be focused on having a soda during the holidays. When determining how and whether to design an ad campaign with AI, companies must account for these limitations and consider their audiences' wants carefully.


Even with the ridicule surrounding ads trying to be innovative, AI tools continue to fuel the influx of content flooding digital channels. Sifting through the clutter has become increasingly burdensome for consumers. Since social media platforms tend to reward consistent posting, many have leaned on AI to produce a continuous stream of high-volume posts. Unfortunately, this often results in the low-quality, AI-generated content that has been coined "AI slop". Examples like @ethos_atx, the fake Instagram restaurant that posts AI-generated images of pancakes topped with caviar or impossibly large chicken tenders, are major contributors to the digital pollution that makes it harder for consumers to determine what is real.


It will take time and experimentation to understand what works and what flops, but companies must consider the consequences of AI use for the consumer experience. The context in which the technology is used matters. Online content is often the first touchpoint with the consumer, which contributes to the overall impression of the brand. Negative consumer reactions to AI-generated content can create a halo effect that damages brand perception, reinforcing undesirable associations like being lazy or insincere. By contrast, consumers may react differently if AI is used to serve algorithmic nudges that tailor an ad to their interests. The key to success is not whether to use AI, but how to use it. While it is challenging to stay relevant and appear genuine, consumers need clear signals of authenticity to determine what or whom to trust.


The words we use matter

Familiarity with AI stems from its exponential growth in popularity. Debates about AI's influence on human behavior have traditionally appeared in science fiction and futuristic contexts. Entertainment, like the 2013 movie Her, capitalizes on salacious storylines about building romantic relationships with operating systems similar to Apple’s Siri. While these concepts once felt far-fetched, anthropomorphic language around AI now carries major implications for how individuals interact with these systems.


The language people use shapes how consumers perceive not only the technology but also what it can accomplish. Consumers unintentionally build one-way emotional connections by imposing human-like qualities onto AI. In recent years, LLMs have showcased the most prevalent examples of anthropomorphized AI. Nielsen et al. (1) examined consumer behavior around prompt writing and identified degrees of AI anthropomorphism that build connection, ranging from courtesy to companionship. Courtesy involves using polite language such as "please" or "thank you" to respect social norms and establish the expected tone in responses. Other degrees include reinforcement (saying "Good job!"), roleplay (asking the chatbot to assume a specific profession), and companionship, the development of a deep emotional connection with AI as a way for an individual to feel supported. Considering these degrees of anthropomorphism is important when integrating features like chatbots or automated texts into a company's experience. Understanding consumer perspectives helps companies design a stronger experience that better meets expectations.

AI-lmost protected doesn’t cut it

Prompts are not the only way that individuals anthropomorphize AI. When discussing the output, phrases like "AI thinks..." or "The model understands..." miscommunicate the technology's actual capabilities. AI excels at pattern recognition and merely simulates emotions, intentions, and self-reflection. While this distinction may minimally impact AI usability, it fundamentally changes how people use and interpret these tools. When consumers develop emotional connections to AI, they are more likely to project human imperfections and biases onto the tool's output. This anthropomorphism, as Placani (2) notes, exaggerates and over-inflates AI performance, subtly shifting consumer perceptions of what AI can do. In behavioral science terms, this is overconfidence: a bias that leads individuals to believe that something, like AI, is more reliable than it is, which can result in being too trusting and oversharing sensitive information.


A lack of understanding of how AI collects data can lead to harmful repercussions for consumers. While consumers may not fully understand AI's intricacies, they worry about uncomfortable and unknown uses of their personal information. Companies must implement safeguards to ensure consumers understand the ramifications of information sharing. This responsibility extends to companies integrating these models into their experiences: What measures will protect consumers from oversharing when interacting with AI systems? Responsible deployment of AI systems is essential for data security. Not only will this ensure consumers feel safe using the technology, but it will also shape how consumers feel about the company.


The average consumer should not have to understand how AI works in order to be protected from having sensitive information exposed. If consumers are under the impression that interacting with a site, chatbot, or other digital experience is risky, they will avoid it. Incorporating salient messaging around this technology gives consumers the chance to make an informed decision about how they use AI. By addressing anthropomorphism tendencies, overconfidence biases, and privacy concerns through a behavioral science lens, companies can create AI systems that work with human psychology rather than against it—ultimately delivering technology that serves rather than confuses the consumer.

Trust is the main motivator

The integrity of digital experiences remains fragile. As AI increasingly permeates consumer interactions, companies must mindfully deliver on promises to consumers. While AI offers tremendous potential for creating personalized, convenient, tailored experiences, it can also mishandle data to exploit consumers through deepfakes, voice cloning, and misrepresentation. Mills et al. (3) highlight how data misuse with AI can manipulate individuals by exploiting their biases. Consumers often use AI to fill knowledge gaps, which places them in vulnerable positions because LLM outputs are notorious for hallucinations, or producing inaccurate information. This lapse in accuracy causes consumers to question what and whom they can trust. Questions around authenticity extend to every touchpoint, from reviews to products to companies themselves. If consumers doubt whether a company is real, why would they buy from it?


For consumers to engage with an experience, they must first trust it. Transparent communication about AI's role in consumer experiences enables companies to leverage the technology while maintaining the authenticity consumers demand. Demonstrating how and why the product fits into the consumer lifestyle creates the foundation for strong partnerships between consumers and brands. In building consistent patterns of high quality that consumers rely on, they can feel confident in their decisions to trust the company.


Conclusion

Staying connected to the consumer is critical for companies to stay relevant, but far too often companies feel the need to jump on a trend or a fad without considering how it will impact their customers. To be sustainable, brands must make a mindset shift acknowledging that a strong consumer relationship is an investment rather than a cost. Evaluating the consumers’ needs, motivations, and habits gives insight into ways to serve them more effectively. Further, using AI can augment the product lifecycle and the consumer experience, but only if implemented properly. Success stories, like Netflix's personalized content recommendations increasing engagement and subscriptions, demonstrate how behavioral insights can guide AI implementation to enhance rather than diminish the consumer experience. However, misuse of AI—whether through overreliance or lack of transparency—may lead to consumers finding an alternative or abandoning the product altogether.


The old adage of “quality over quantity” remains relevant in the conversations around how companies use AI to improve the overall experience. Organizations that acknowledge AI's role while maintaining human oversight and authentic connections will build lasting customer loyalty and drive meaningful innovation. The future of consumer experiences lies not in complete AI automation, but in the thoughtful integration of AI guided by behavioral science principles that prioritize consumer transparency, well-being and trust.
