March 18, 2023 9:10 AM
Image Credit: CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY/Getty
In researching AI experts, I came across a deepfake. It wasn’t obvious at first, given his seemingly legit profile and engagement on social networks. Yet after seeing the same creepy AI-generated photo of Dr. Lance B. Eliot all over the web, it was clear that he wasn’t a real person. So I followed him and learned his grift.
The ubiquitous Dr. Lance B. Eliot
Eliot has over 11,000 followers on LinkedIn, and we have two connections in common. Both have thousands of LinkedIn followers and decades of experience in AI, with roles spanning investor, analyst, keynote speaker, columnist and CEO. LinkedIn members engage with Eliot even though all his posts are repetitive thread-jacking that leads to his many Forbes articles.
On Forbes, Eliot publishes every one to three days under nearly identical headlines. After reading a few articles, it’s obvious the content is AI-generated tech jargon. One of the biggest issues with Eliot’s extensive Forbes portfolio is that the site limits readers to five free stories a month, after which they are directed to purchase a $6.99-a-month or $74.99-a-year subscription. This gets complicated now that Forbes has officially put itself up for sale with a price tag in the neighborhood of $800 million.
Eliot’s content is also available behind a Medium paywall, which charges $5 a month. And a thin profile of Eliot appears in Cision, Muckrack and the Sam Whitmore Media Survey — expensive paid media services relied upon by the vast majority of PR professionals.
Then there’s the sale of Eliot’s books online. He sells them through Amazon, fetching a little over $4 per title, though Walmart offers them for less. On Thriftbooks, Eliot’s pearls of wisdom sell for around $27, which is a deal compared to the $28 price tag on Porchlight. A safe bet is that book sales are driven by fake reviews. Yet a few disappointed humans purchased the books and gave them low ratings, calling out the content for being repetitive.
The damage to big brands and individual identities
After clicking a link to Eliot’s Stanford University profile, I opened the real Stanford website in another browser, where a search for Eliot produced zero results. A side-by-side comparison showed that the red brand color on Eliot’s Stanford page was not the same shade as on the authentic page.
A similar experience happened on Cornell’s arXiv site. One of Eliot’s academic papers was posted there under a slightly tweaked Cornell logo, filled with typos and more low-quality AI-generated content presented in a standard academic research paper format. The paper cited an extensive list of sources, including Oliver Wendell Holmes, who apparently published in an 1897 edition of the Harvard Law Review — three years after he died.
Those not interested in reading Eliot’s content can make their way to his podcasts, where a bot spouts meaningless lingo. An excerpt from one listener’s review reads, “If you like listening to someone read word for word from a script of paper, this is a great podcast for you.”
The URL posted next to Eliot’s podcasts promotes his website about self-driving cars, which initially led to a dead end. Refreshing the same link led to Techbrium, one of Eliot’s fake employer websites.
It’s amazing how Eliot is able to do all of this and still carve out time to speak at executive leadership summits organized by HMG Strategy. The fake events feature big-name tech companies listed as partners, with a who’s who of advisors and real bios of executives from Zoom, Adobe, SAP, ServiceNow and the Boston Red Sox, among others.
Attendance at HMG events is free for senior technology executives, provided they register. According to HMG’s terms and conditions, “If for some reason you are unable to attend, and unable to send a direct report in your stead, a no-show fee of $100 will be charged to cover costs of meals and service staff.”
The cost of ignoring deepfakes
A deeper dig into Eliot led to a two-year-old Reddit thread calling him out and quickly veering into hard-to-follow conspiracy theories. Eliot may not be an anagram or tied to the NSA, but he’s one of the millions of deepfakes making money online that are getting harder to spot.
Looking at the financial ripple effects of deepfakes raises questions about who is responsible when they generate revenue for themselves and their partners. That’s before counting the costs they impose on others: malware downloads, sales teams chasing fake prospects, and payouts on spammy affiliate marketing links.
Arguably, a keen eye can spot a deepfake by its fuzzy or missing background, strange hair, oddly set eyes, and robotic voice that doesn’t sync with its mouth. But if that were universally true, deepfakes wouldn’t be costing billions in losses through financial scams and the impersonation of real people.
AI has not yet solved the problems that make deepfakes hard to spot, though researchers are actively working on them, and articles like this one that out deepfakes give detection models more to learn from. Until then, the responsibility for spotting deepfakes falls to individuals, forcing them to be vigilant about who they let into their networks and lives.
Kathy Keating is a real person and founder of ProsInComms, a PR consultancy.