The impact of deepfakes on marketing
While searching for AI experts, I came across a deepfake. It wasn't obvious at first, given his seemingly legitimate profile and social media engagement. But after seeing the same eerie AI-generated photo of Dr. Lance B. Eliot all over the web, it was clear he wasn't a real person. So I followed him and dug into his work.
The ubiquitous Dr. Lance B. Eliot
Eliot has over 11,000 followers on LinkedIn, and we have two connections in common. Both have thousands of LinkedIn followers themselves and decades of experience in AI, with roles as investors, analysts, speakers, columnists and CEOs. LinkedIn members engage with Eliot even though all of his posts are repetitive thread hijacks pointing to his numerous Forbes articles.
On Forbes, Eliot posts every one to three days under nearly identical headlines. After reading a few articles, it's obvious that the content is AI-generated tech jargon. One of the bigger problems with Eliot's extensive Forbes portfolio is that the site limits readers to five free stories per month before prompting them to purchase a subscription for $6.99 per month or $74.99 per year. It gets more complicated now that Forbes has officially been put up for sale, with a price tag of around $800 million.
Eliot's content is also available behind a Medium paywall, which charges $5 per month. And a thin profile of Eliot appears in Cision, Muckrack and Sam Whitmore Media Survey, expensive paid media services used by a large majority of PR professionals.
Then there are the online sales of Eliot's books. He sells them through Amazon, fetching just over $4 per title, though Walmart offers them cheaper. On Thriftbooks, Eliot's Pearls of Wisdom sells for around $27, a bargain compared to the $28 price on Porchlight. It's a safe bet that book sales are driven by fake reviews. Still, a few disappointed humans bought the books and gave them low ratings, calling the content repetitive.
Damage to major brands and individual identities
After clicking a link to Eliot's Stanford University profile, I opened another browser and landed on the real Stanford website, where a search for Eliot produced no results. A side-by-side comparison showed that the red of the logo on Eliot's Stanford page was not the same shade as on the genuine page.
A similar thing happened on Cornell's arXiv site. With just a slight tweak to the Cornell logo, one of Eliot's academic papers had been published there, filled with typos and shoddy AI-generated content dressed up in the standard format of an academic research paper. The paper cited a long list of sources, including Oliver Wendell Holmes, who apparently published in an 1897 edition of the Harvard Law Review, three years after his death.
Those not interested in reading Eliot’s content can head to his podcasts, where a bot spouts out meaningless jargon. An excerpt from a listener’s review reads, “If you enjoy listening to someone read verbatim from a paper script, this is a great podcast for you.”
The URL posted next to Eliot's podcasts promotes his self-driving car website, which initially led to a dead end. Refreshing the same link led to Techbrium, one of Eliot's fake employer websites.
It’s amazing how Eliot is able to do all of this while still making time to speak at executive leadership summits hosted by HMG Strategy. The fake events feature big-name tech companies listed as partners, with a who’s who of advisors and real executive bios from Zoom, Adobe, SAP, ServiceNow and the Boston Red Sox, among others.
Attendance at HMG events is free for senior technology executives, provided they register. According to HMG's terms and conditions, "If for any reason you are unable to attend and cannot send a direct report to attend on your behalf, a $100 no-show fee will be charged to cover meal and staffing costs."
The cost of ignoring deepfakes
Further digging into Eliot led to a two-year-old Reddit thread that called him out and quickly veered into hard-to-follow conspiracy theories. Eliot may not be an anagram or an NSA front, but he is one of the millions of money-making deepfakes online that are increasingly difficult to spot.
Examining the financial ripple effects of deepfakes raises the question of who is responsible when they generate revenue for themselves and their partners, not to mention the costs of downloaded malware, chasing fake leads and paying for spammy affiliate marketing links.
Arguably, a keen eye can spot a deepfake by a blurry or missing background, odd hair, odd eyes, and robotic voices that don't sync with mouths. But if that were universally true, deepfakes wouldn't be costing billions in losses by running financial scams and impersonating real people.
AI hasn't yet fixed all the flaws that give away a deepfake's lack of authenticity, but it is actively fixing them. Articles like this one are exactly what help AI learn and improve. That leaves the responsibility for spotting deepfakes with individuals, forcing them to be vigilant about who they let into their networks and lives.
Kathy Keating is a real person and founder of ProsInComms, a public relations consultancy.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.
If you want to read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.
You might even consider contributing an article of your own!