Andrew Ciszczon


AI in Marketing: Resistance, Rhetoric, Reality

AI-generated cartoon about AI usage. Image credit: Ideogram 2.0

Meta and Coca-Cola have been in some hot water recently. All around AI. That got me thinking. And to no surprise, there’s a lot to unpack. 

Let’s start with a brief recap of a few recent examples, including Lego. Then we’ll start peeling back some layers. 

Coca-Cola Re-imagines Their Iconic Ad, Holidays Are Coming

Photo by Hert Niks via pexels.com

Go straight to the source for the news and more detail. 

Coca-Cola used generative AI to create video ads, seeking to replicate the magic that came from the original ad. While most humans probably wouldn’t notice, the more discerning spotted some signs. 

Then, its AI origins were confirmed, snowballing into a social media backlash. Admittedly, once you know, it’s hard to unsee. 

But it’s interesting to wonder: if people hadn’t known, how would they really have responded? Did they notice something was off and not care? Or did they not notice at all? 


Meta AI Profiles

Photo by Julio Lopez via pexels.com

Go straight to the source for the news and more detail.

Meta created AI-based profiles on its platforms. Though Meta labeled them, albeit subtly, the profiles were designed to feel like real people, complete with backstories and content such as pictures of their kids…hmmm.  

It seems these profiles were flying under the radar until recent comments from a Meta VP sparked a volley of internet sleuthing. Once discovered, the AI profiles ultimately buckled under some pointed questioning. 


Lego Uses AI-Generated Art for a Ninjago Quiz

Photo by Andrew Ciszczon.


Go straight to the source for the news and more detail.

This one is pretty self-explanatory. Lego used AI-generated art for a Ninjago quiz. Fingers and Naruto-style headbands were dead giveaways. While adults do enjoy Ninjago, me included, the primary audience is youth. I wonder whether kids noticed or cared. Some might even appreciate the crossover to Naruto.  


The resistance: what’s it all about?

Photo by cottonbro studio via pexels.com


Ultimately, it’s fear. Fear of being replaced and, for some, fear of losing control to machines. 

In some ways this is understandable. AI has certainly hit the peak of inflated expectations. According to Gartner, the creators of the hype cycle, generative AI is 2-5 years from plateauing. Below is an example of the hype cycle graphic with some other mainstream technologies to help illustrate where they stand.

Claude 3.5 Sonnet



This sparks many questions, but one came to mind when talking to my girlfriend (who came up with the title for this post) as our kids were running around causing chaos. 

Does the pushback vary from generation to generation?

While the research largely reflects what you’d assume, there are nuances to it. For instance, baby boomers are more apt to use the technology if it’s integrated into something they already use. That’s a nice nugget for a potential target audience that might otherwise be deprioritized. As a marketer, that’s valuable insight when considering how to deploy the technology. More nuggets await. 

For more generational takeaways, check out AEM - understanding-generational-differences-in-the-age-of-ai 


Get Started

Let’s presume you’re ready to start experimenting with Generative AI. 

Where do you start? 

I recommend giving a few of the models a try with some simple examples. Are there topics you don’t understand? Ask a model to explain them as if you were a 5th grader. Enter the same prompt into multiple models to see which one you like best. A word of caution: the models can change significantly with each update, so the one you prefer now may not be your favorite later.

Prompting and priming the tool is crucial. As with so many other things in life, you get out what you put in. 

Here’s a straightforward guide I found via Google search, but what’s even easier is asking the tool itself!
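
If you’re comfortable with a tiny bit of code, you can even script the side-by-side comparison. Below is a minimal sketch, assuming you have Anthropic and OpenAI API keys set as environment variables and the official Python packages installed; the model names and prompt are placeholders and will age quickly.

```python
# Minimal sketch: send the same prompt to two models and compare the answers.
# Assumes ANTHROPIC_API_KEY and OPENAI_API_KEY are set in the environment,
# and that the `anthropic` and `openai` packages are installed.
import anthropic
from openai import OpenAI

PROMPT = "Explain the Gartner hype cycle as if I were a 5th grader."

# Claude (Anthropic)
claude = anthropic.Anthropic()
claude_reply = claude.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name; check current versions
    max_tokens=500,
    messages=[{"role": "user", "content": PROMPT}],
)
print("--- Claude ---")
print(claude_reply.content[0].text)

# GPT (OpenAI)
gpt = OpenAI()
gpt_reply = gpt.chat.completions.create(
    model="gpt-4o",  # example model name; check current versions
    messages=[{"role": "user", "content": PROMPT}],
)
print("--- GPT ---")
print(gpt_reply.choices[0].message.content)
```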

Keep in mind that anything you enter isn’t guaranteed to be private, even if you pay for it. Models improve with more data, so what happens when companies are incentivized to use the data they have access to? 

For now, Claude, Copilot, and supposedly Google’s Gemini rank well on privacy. This list isn’t exhaustive and the last one is certainly a question mark. Google’s actions in other spaces would give me some concern when it comes to privacy.

Another useful bit is that not all models do the same thing. 

While Claude can do visuals to a degree, it’s limited to diagrams when it comes to creation. It has image understanding capabilities…that are also limited. And, since its approach prioritizes privacy, it does not have a live connection to the internet. Claude 3.5 Sonnet has a knowledge cutoff of April 2024. That means it doesn’t know that Trump won the election or that LA is reeling from wildfires. 

Gemini can do almost everything…if you’re willing to pay for it and employ the other tools needed. Not all of its generative AI capabilities are native to the app itself. They are embedded in other tools. 

I’m personally a fan of Ideogram. It specializes in generating images, does a good job, and has a social element. Outputs from other people’s prompts are shown, enabling you to like and follow people. It really produces some amazing images, and kudos must be given to the people who prompt them. 

Can generative AI do video? 

Yes, but even with Coca-Cola’s resources, the output is inferior. The tools available to a normal person are even less capable, but still worth a visit. 


Personal Example from a Marketer’s Perspective 

A lot of people are going to fall into the same boat that my team and I were in. 

The team was small and mighty, but operating at 85% capacity or more is not sustainable for long. Our spend was limited as well, probably 0.8% of net sales or lower for true marketing spend. 

Despite this, we were being asked to do more and often with an eye on doing it with less spend. 

Sound familiar? 

That’s what makes generative AI so compelling. For a relatively small investment, we get a tool that offers a valuable starting point at the least and produces a final output at the most; anything in between is possible and likely. 

We started with taglines and were surprised by how little input was needed to get something good out of it. We gave it some context on who we are, some brand attributes, plus some adjectives for what we wanted. 

BAM!

Something that used to take us multiple brainstorms and whiteboard sessions, adding up to hours of at least four people’s time, now took 20-30 minutes of dialogue and a short vote. 
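
For illustration, here’s a minimal sketch of the kind of primed prompt that can produce a first round of taglines. The brand details below are invented, not our actual inputs; swap in your own context, attributes, and adjectives.

```python
# Sketch of a primed tagline prompt. All brand details here are hypothetical.
brand_context = (
    "We are a family-owned outdoor gear company selling durable backpacks "
    "to weekend hikers in North America."
)
brand_attributes = ["rugged", "approachable", "honest"]
desired_tone = ["short", "memorable", "playful but not silly"]

prompt = f"""You are helping a marketing team brainstorm taglines.

Company context: {brand_context}
Brand attributes: {', '.join(brand_attributes)}
We want taglines that are: {', '.join(desired_tone)}

Give us 10 candidates, each under 8 words, with one sentence on why it
fits the brand. Avoid puns on competitor names."""

print(prompt)  # paste into the model of your choice, or send it via an API
```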

Gaining confidence, the team was set loose and encouraged to use the technology wherever it made sense. Guidelines dictated which tools to use or not use and what could or couldn’t be entered.  

Product descriptions, project plan feedback, social media posts, email copy, blog posts and more started to flow more freely. The tech didn’t replace anyone; it made us better. 

And I have a hypothesis why. 

I’ve noticed throughout my career that many people find it incredibly difficult to deal with a blank canvas. 

Pose a question or problem without good facilitation and progress will be painful. 

But, give people something to comment on and they will! 

That something can be generative AI and it can be done quickly. In minutes or sometimes seconds, you can share a starting point that gets people’s creative juices flowing. It can lead to new lines of thought and questions, sometimes resembling very little of what you started with. 

This process lacks a name. While the moniker is already taken, I think “creative destruction” describes it well. 


What are the strategic options for marketers? 


1 - Don’t use it at all

Maybe you don’t need or want what the technology has to offer. In the short run, that might be ok. In the long run, unless the backlash prevails and/or governments intervene to limit its use, it will be clear who’s using it and who isn’t. Those who use the technology will be much more productive without compromising quality of output. In five years this could be the new norm.

There is a space where intentionally ignoring generative AI makes sense, though: when it’s good for your brand. If your target audience has a propensity to shun generative AI or new technology in general, then avoiding it and promoting that fact should work in your favor. Especially if competitors go in the opposite direction. You can double down on authenticity, trust, and valuing people. This is sure to resonate with many. 


2 - Hope - use it without making it clear, hoping no one notices…including your own company.

Perhaps you’re a glass-half-full person, a shoot-first-and-ask-questions-later person, someone willing to ride out the storm, or you believe the backlash will ultimately dissipate into something that can be completely ignored. Or maybe you feel like you’ve never had to disclose the use of a specific technology or tool before, so why start now?

Whatever the case might be, integrate the tech into your processes and don’t bother with making it known. You’re willing to assume all the risk and reap the rewards for the least amount of effort. 


3 - Transparency - use it and make it clear

You see the promise of generative AI and you either want or need it to increase your efficiency and effectiveness. You and your team have goals and you know help is not on the way. 

But, you acknowledge there’s an audience who has reservations. You believe your brand is strong enough and/or perhaps innovation is a major part of your brand identity.

Sally forth with adoption, but do come up with a means to let your audience know when the technology is used or not. This could be some kind of icon that tells your audience AI was used and/or a company policy in the same vein as current cookie and data privacy policies. 

You’ll want to sit down with stakeholders, not just those in your team. IT, product, HR, PR/communications, and maybe even Corporate Responsibility/ESG should be informed at the least. 

Let’s dive a little bit into IT and ESG. One is obvious and the other not so much.

IT will want to know how you’re using it and what the benefits are. They will have many questions. Does it have access to your data? Does it need to be connected to any internal systems? What’s the policy on what can be entered? How will the risks be mitigated? 

Chances are high IT is going to take control of the decision making and maybe even deployment.

The key here is to be prepared with solid responses and the business case. Show them and leadership it’s been well thought out. Chances are equally high that leadership is being exposed to generative AI’s potential, so it’s only a matter of time before they face pressure to communicate a strategy. Use this to your advantage. 

ESG? Sustainability? What does that have to do with AI, you ask? 

The answer is data centers. 

They need power to run and power to stay cool. More often than not, that power is generated from fossil fuels like coal and natural gas. 

According to an article by the World Economic Forum, “the computational power needed for sustaining AI’s growth is doubling roughly every 100 days.” Compounded over a year, that’s roughly a twelvefold increase. 

Hence Microsoft’s deal with Three Mile Island, and why you’re probably hearing more about nuclear.

If you’re a larger company and/or ESG matters to you, your Scope 3 emissions will increase every year until more environmentally friendly ways to generate power become widespread.

Is the impact significant? Maybe not now. 


4 - Transparency plus - use it and create complementary content.

The potential and the backlash both feel very real to you. This strategy takes the transparency option to a different level. It exploits what you and your team do best, producing content that matches your brand and goals. 

Effort will be the highest here, but you position yourself and the organization to reap the rewards while mitigating the risk. 

The main difference from the previous option is how you will communicate the use of AI. You’re going to create complementary content. At the very least, it will explain how your company uses the technology and how it addresses the concerns. This concept can be deployed at the organizational level via a policy and corresponding blog post. 

What’s more interesting is applying it at the content level somehow.

You can be specific about how AI and humans collaborated on a given piece of content. You can explain that no people lost their jobs. You can share how happy they are to produce more or focus more on the creative aspects of marketing. You could even have fun with AI’s shortcomings in image and video generation. 


More questions, not answers

After validating the promise of generative AI, what other points should you consider before diving in? 

Here are some starters:

  1. How will you keep track of which assets were AI generated and which weren’t? DAMs (digital asset management systems) are probably adjusting to this. Tools you already use probably offer their own generative AI. This could increase the administrative effort if you want to keep track (see the sketch after this list).

  2. What happens with the data? Who owns the output? 

  3. Does this offer branding opportunities? If authenticity and trust are paramount to your industry, staying away from generative AI might not be a bad idea. Especially if competitors actively use it. 

  4. Are your vendors/partners/agencies using it? Or more likely, how are they using it? And, can you use this to your advantage to negotiate cost reductions or quicker deadlines? 

  5. Should you use one tool or several? Many marketing technologies now offer their own native AI, e.g. HubSpot, Salesforce, Adobe, Canva. The list goes on. Is it better to use one so it has a growing library to reference?

  6. How are you going to stay on top of the tech? It’s moving fast. Similar to how Apple or Google release a new phone every year, the AI companies are trying to publish a new and improved model.
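
On the first question, here’s a minimal sketch of the kind of provenance record you could attach to each asset, whether in a DAM, a spreadsheet, or a naming convention. The field names are made up for illustration; most DAMs will let you model something similar with custom metadata.

```python
# Hypothetical provenance record for tracking AI involvement per asset.
# Field names are illustrative, not tied to any particular DAM product.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AssetProvenance:
    asset_id: str
    ai_involvement: str                   # "none", "assisted", or "generated"
    tool: Optional[str] = None            # e.g. "Claude 3.5 Sonnet", "Ideogram"
    prompt_summary: Optional[str] = None
    human_reviewer: Optional[str] = None
    logged_on: date = field(default_factory=date.today)

record = AssetProvenance(
    asset_id="2025-01-blog-header-cartoon",
    ai_involvement="generated",
    tool="Ideogram",
    prompt_summary="Cartoon about AI usage in marketing",
    human_reviewer="A. Ciszczon",
)
print(record)
```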


Reflection

The AI partner piece is here. The prompt I used is below:

Please write a blog post about the use of generative AI in marketing. Cover topics such as any current backlash from people, some explanation (but not detailed) on what the technology is and drawbacks, some approaches to overcoming drawbacks, limitations and potential backlash, and finally other considerations or questions that a marketer should consider. Please provide examples where appropriate to help the content resonate with the reader. These should be stories as well as multiple generative AI tools to offer comparisons to each making it clear what one can do today versus others. Please also create your own title.

I’m not a professional writer. That’s probably clear. Each piece offers something on its own.

What I especially like from Claude’s piece is:

  • the writing. It sounds more polished. Maybe less authentic and certainly not colloquial.

  • the structure it offered for drawbacks. Framing each as a challenge with a solution was a nice touch.

  • great questions. It offered great questions for marketers to consider.

  • succinct. It was more succinct and probably more cohesive.

What I don’t like:

  • the Jasper.ai example was a hallucination. A quick follow-up confirmed it; see below. I only asked because I couldn’t find it on their website or using Google. But it did remind me about Jasper and the need to fact-check. Essentially, Jasper AI is a one-stop AI shop for marketers.


Claude 3.5 Sonnet admits to its hallucination

The blog posts are even more powerful together. Unfortunately, no one is going to take the time to read both posts.

Even with some AI assistance, it took me about 20 minutes to mind-map the main points and another 4 hours to find related content as source material, write the post, and fine-tune it. 

Ideogram helped with images that weren’t from Getty. I also used Claude to break this piece up into more digestible chunks for other platforms.