Artificial intelligence (AI) has already eliminated jobs—lots of them.
The rapid rise of generative tools like ChatGPT is a cause for concern for a lot of writers I know, because they don’t want their jobs to be the next to get cut. There are also plenty of concerns that are a bit less weighty: Does my content strategy work anymore? Will the stories I write still rank on search engine results pages (SERPs)? Does it even matter if they rank?
Given the speed of ChatGPT’s growth and the serious concerns numerous tech leaders have raised about AI more generally, it’s difficult to say how scared anyone should be. One thing you can be sure of, however: If ChatGPT and similar tools are a threat to your content program, your content program was in danger before these tools became available.
Here are a few situations in which you should fear generative AI.
1. You should worry if your content strategy mirrors ChatGPT’s approach to content creation.
If you’ve experimented with ChatGPT, you’ve probably discovered quickly that while it produces polished prose, it’s not very original. Let me rephrase that: It’s not original at all. Unique creation is not part of AI’s DNA, because it doesn’t have any DNA to begin with (though it does have plenty of useful smarts).
ChatGPT is great at producing summaries, comparative copy and listicle-like overviews of topics it finds on the internet. So, if your content creation process is to pick a subject, search for it online and summarize the information shared in the top five ranking search results, ChatGPT can probably do this more efficiently than you ever will, and it’s a threat. But if this is your approach, your written content is probably lackluster anyway, even if it ranks No. 1 in Google search for 200 relevant search terms.
Before ChatGPT, search engines had already developed plenty of ways to prioritize unique content, content that didn’t already exist somewhere on the internet. So even if ChatGPT hadn’t arrived to break the search-and-summarize strategy for good, content made this way was already on the way out.
2. You should be concerned if your content has no values and no point of view.
Informational searches account for the majority of all queries on mainstream search engines. They outweigh the combined volume of commercial, transactional and navigational searches, even when you factor in search volumes from less-traditional search engines such as Amazon. It’s also simple to answer these informational queries with generative AI like ChatGPT.
The thing is, ChatGPT can take information that seems factual (without knowing whether it actually is) and remix or repurpose it much faster than a person ever could. This is a game-changer in how people go hunting for information. Informational search volume is consequently dropping fast and will continue to do so as generative AI cuts into the pie.
The content that will elude ChatGPT is content that conveys an opinion, a genuine point of view that goes beyond facts. No matter how much we’d like truth with a capital T to show up in our search results, there are plenty of queries that don’t have factual answers yet. The list of things we know will always be much smaller than the list of things we don’t know, and fortunately ChatGPT is built with many limitations that respect the unknown by acknowledging its own lack of certainty.
Think of the difference between a grade school report and an argumentative essay. A report tends to be factual: Humans need water to survive, for instance. An argument, on the other hand, is only worth arguing if it’s about something still unsettled or unknown: Access to clean water is a basic human right, for example.
Generative AI is not well equipped to answer values-based questions. ChatGPT can reproduce others’ opinions, but the tool is quick to acknowledge its own limitations in forming and even evaluating opinions, so there’s a constant need for human voices with timely, authentic, courageous opinions on topics that matter.
Both search and social platforms today reward content that evokes emotion and stands for something. Opinions, in other words. Even more importantly, this is the kind of content human readers, listeners, viewers and collaborators value, too.
3. You should change gears quickly if your content could have come from just anyone in the first place.
We’ve all encountered content that feels too vanilla. Sometimes it’s a listicle rehashing things everyone already knows. Other times it’s an animation that feels so generic you could swear you’ve already seen it in a YouTube ad. Maybe it’s a story that’s trying to have an opinion, as discussed above, but just keeps repeating what everyone else has already said and accepted.
Whatever the case, you’re in danger if you’re putting out content that your competitors, your peers or even your audience could have produced themselves. If that many other entities can do it, generative AI can, too.
Here are some of the elements this kind of content lacks:
- Institutional authority and credibility.
- New data or distinctive insights.
- A strong point of view.
- Topical and temporal relevance to your brand.
- A genuine human story that comes from you and your colleagues, partners, customers, etc.
If, on the other hand, you’re creating content that could only originate with you and your organization, you’re striking gold. ChatGPT can summarize or report on such content, but it can’t come up with it in the first place. This feeds into two reasons you should be confident in the content you’re creating, no matter how popular ChatGPT is.
Everyone needs net-new content. And that includes ChatGPT.
Generative AI models need to learn from somewhere. Unique content creators provide that needed source. This in turn gives immense power to the creators of truly original material, since their contributions will inform what AI turns out in coming years.
Fresh content could range from information about products that have never existed before to social proof, from influencer or celebrity interviews to what we call news (hopefully). The long and short of it is that if you’re producing content that doesn’t yet exist on the internet, you’re adding value that ChatGPT can’t compete with. Yet.
You can be confident if you’re creating content that demands context and authority.
Copyright is a sticky subject, but it’s one of the places where governments most frequently cross party lines in agreement, supporting intellectual property protections as an economic imperative. If you’re creating content that you can show you truly own, you’re in a valuable space. If you’re creating content that others will cite, then your position is even stronger.
Academic or industry research is one example: Citing your sources is critical to getting your research published, and publishing is key to your institution’s authority. Furthermore, it’s not going to fly to cite ChatGPT as a source. So, if you’re the one getting cited because you’re the one with new data to share, you’re in luck, as long as the data is relevant.
Technical or regulated topics across B2B spaces tend to demand a level of context and transparency as well, so content about financial services, insurance, technical products, government policy or legal issues will continue to defy pure AI generation. There are plenty of concerns about fabricated or deepfake content in B2C circles as well. This all promises valuable returns for the true creators out there.
ChatGPT and other tools can, of course, produce citations of existing content as needed. But if you’re the one being cited, these generative tools are serving you more than they are cutting into your lane of authority.
ChatGPT is trained to spread fear about itself—because so much of what it analyzes is fear-centric content. But if you’re a content creator who makes new things, who creates from a place of authority, whose work adds value that doesn’t yet exist, then you can proceed with assurance.
There’s an audience of humans out there that needs what you’re about to make.