A Growing Competition
There's been much discussion about the "AI arms race" between countries like the U.S. and China. However, within the U.S., a similarly intense competition is emerging among major tech companies. This race could significantly impact any business using these platforms to connect with consumers—which includes virtually everyone.
Recently, Google, Meta, and TikTok introduced new AI-based features for users, including advertisers, creators, and merchants. These additions join a growing list of generative AI tools launched by platforms such as Amazon, Wix, eBay, BigCommerce, Shopify, and Cart.com.
While each platform's AI capabilities differ, their common goal is to speed up and simplify the creation of advertising and product-listing content. The expected outcome is more effective content produced quickly and cost-efficiently, which in theory benefits everyone: businesses can create more varied, personalized listings, and the platforms where they advertise profit in turn.
Key Considerations Before Embracing AI Tools
Before fully adopting these innovative AI tools, brands should ponder several critical questions about their effectiveness and implications.
Training Material Behind Generative AI Models
Many AI tools available to advertisers and sellers on tech platforms can generate images or text, such as diverse backgrounds for product ads or product listings. While much of this content seems mundane—like generic seasonal or landscape images—brands need to scrutinize its origins.
Though receiving AI assistance to enhance creative content for numerous ads or product listings might appear harmless, it can be a slippery slope. Some individuals might not mind that their casual beach or food photos are among the countless images used to train AI models capable of producing replicas. However, consider how unsettling it might be for someone like Scarlett Johansson to hear her voice replicated by AI (a claim OpenAI has denied, although the resemblance was uncanny). What about videos of your children or pets, your face, or your body?
Based on information publicly shared by leaders in generative AI, it is reasonable to assume that this is happening in some cases, especially with content that has been "shared publicly." This issue extends beyond copyrighted material or the likenesses of professionals and celebrities; individuals who believe their personal content is immune from AI "scraping" should reconsider.
Potential Copyright Violations
This issue poses significant risks for businesses. If you use AI-generated content from these platforms for your advertising and brand assets, you could inadvertently incorporate copyrighted material. Imagine if a generated background in one of your ads closely resembles a copyrighted image by a nature photographer who decides to pursue legal action. Could your brand face infringement claims?
Many platforms creating these tools, including Google and Meta, assert that brands own the imagery created on their platforms. Does this ownership also mean brands are responsible if issues arise regarding the creation of these images?
Currently, there are no definitive answers to these questions. As FTC Chair Lina Khan recently noted, regulation often lags behind, especially given the rapid pace of technological innovation. A notable copyright lawsuit involving generative AI underscores these complexities: A group of visual artists sued several text-to-image AI companies for copyright infringement, but the case was dismissed partly due to lack of specificity. The plaintiffs are now seeking to clarify in an amended lawsuit the different levels of responsibility between the original open-source AI model, Stability AI's Stable Diffusion, and the tools from companies such as DeviantArt and Midjourney that are built on it.
The broader issue of business and individual liability when using these platforms remains unresolved and is unlikely to be addressed by this particular lawsuit. It will likely take years for legislators and the justice system to untangle the complex layers of these issues and establish clear guidelines around fair use. This is why "caution" and "slow" should be the guiding principles for now, although these are challenging to adopt when many brands are under intense pressure to keep up.
Comfort with Brand Content Being Used to Train AI
Another consideration is whether businesses are comfortable with their content being utilized to train AI models. One of Google's latest innovations allows brands to upload a reference picture to generate similar backgrounds for their product imagery, ensuring consistency with brand guidelines and aesthetics. However, when asked, executives did not clarify whether these uploaded images might then be used for future training of the model.
Companies should at least consider whether they are comfortable with the idea of their uploaded assets—such as imagery, campaigns, style guides, lookbooks, and brand-specific language—being potentially used to help other companies generate content at scale. If this concept makes you uneasy, it might be wise to hold off or at the very least thoroughly examine the platform’s privacy policy to see if it offers any safeguards.
The sticking point often lies with the notion of "publicly available content," which likely includes ads you are running online or content you are posting on your public social media feeds. These may already be used to train AI models.
The Cost of "Free" AI Tools
Adages persist because they contain timeless truths. In the case of these "free" generative AI tools, two come to mind: "There's no such thing as a free lunch" and "follow the money."
The revelations about data privacy and the extent to which our personal lives have been mined by big tech over the past few decades should have taught us that nothing online is truly “free.” We've all learned that the cost of the free tools and services we access online is our data and attention, which digital platforms convert into real monetary value.
The exchange online is similar for advertisers and sellers, though perhaps more transparent. When companies like Meta, Google, and TikTok help customers sell more on their platforms with AI-powered tools, they also profit through fees and commissions. This creates a clear win-win situation.
But given the historical precedent, one must ask: What else might these platforms be gaining in exchange? One possible answer is a rich repository of brand creatives, inspirational imagery, product details, and language variations, all of which could potentially be used to train AI for future replication.
To be clear, there is no solid evidence that this is happening universally. However, as the lawsuit against Stability AI and the OpenAI-Johansson incident highlight, proving such practices is challenging. What we do know is that for years many of these companies engaged in questionable data collection and tracking practices simply because they could: either no one noticed, or the mechanisms to monitor and regulate these actions were not yet in place. This history alone should foster a healthy dose of skepticism and caution in this new domain, where the stakes could be even higher.
One thing these platforms certainly gain is insight into what performs well and what doesn’t within their ecosystem. This is why each platform offers its own version of tools that essentially serve the same purpose. The more they can get people to use their tools on their platform, the more information they gather to enhance their position in the AI race.
Not all of this is necessarily a bad trade-off. We live in a capitalist system, and companies deserve to be compensated for the services they provide. The issue lies in the opacity of these exchanges: we should be aware of the "cost" of these services even if it isn't measured in dollars and cents.
Why It Matters
One response to all these complex, indirect questions might be a sense of fatalism: "It’s already happening, so why bother trying to change it?" However, the rapid advancement of this technology is precisely why businesses and individuals need to continue researching and questioning AI.
The entities designed to oversee the industry are struggling to keep up. Therefore, slowing down and asking critical questions about the direction of AI is essential. This diligence helps ensure we don’t look back in regret at the choices being made today.