AI Influencers – Virtually Unaccountable
At the risk of sounding far older than I am, the far-out imaginings of my childhood are beginning to materialise. Predictions made nearly 30 years ago about what the future would look like are now coming true. As a child, I was awed by the possibilities of self-driving cars, pervasive holo-technology and machines that could think for themselves.
These sorts of musings made their way into news features. They were the hallmark of science fiction that attempted to picture the future. We only need to think of the Back to the Future franchise and its famous predictions for the year 2015. Those films got much right and much wrong. On the one hand, they guessed we'd be using biometrics to scan eyes and fingerprints, and foresaw high-tech eyewear with capabilities that echoed those of Google Glass. On the other, they expected us to be travelling by hoverboard and wearing footwear with self-tying laces.
A constant theme in science fiction's 21st-century predictions has been the emergence of artificial intelligence. Consider 2001: A Space Odyssey's HAL 9000, or films like WarGames and Metropolis. Perhaps these dark yet wacky imaginings of the future have contributed to a large-scale misunderstanding of what AI actually means. There's certainly a slice of older demographics who believe that AI is something to be feared: that it will put them out of work or, at the extreme, that machines will rise up against humanity.
Getting AI Wrong
One of the biggest misconceptions is that AI is a new invention. In fact, the field dates back to the 1950s, and its theoretical foundations go back further still. In 1936, British mathematician Alan Turing proposed the theory of computation, suggesting that a machine, working with the binary digits '0' and '1', could simulate any conceivable act of mathematical deduction. Famous for cracking the Enigma code during the Second World War, and for the cruel injustice he suffered because of his sexuality, Turing laid the groundwork for machines that use information and reason to solve problems and make decisions. In short, AI has been around for decades.
Whether or not AI is ethical is another conversation. What's clear is that many businesses continue to underplay its rising use, and therefore its relevance, to marketing strategy. The truth is that AI plays an important part in cost optimisation. Far from being an expensive luxury reserved for state-of-the-art machine learning projects, AI is routinely used to generate revenue: to improve customer experience, to analyse data more efficiently and to provide cost-effective performance metrics. It is not limited to the big tech giants, but is used across the field.
AI in Practice
We've seen brands launch chatbots programmed to decipher and respond to a specific customer's needs. We've also seen a host of virtual assistants flood the marketplace. AI has automated workloads, collecting data and using algorithms to categorise work and roll out service requests. eCommerce has employed AI to optimise logistics, automate production processes and predict consumer behaviour. Notably, AI has been used to improve marketing and advertising campaigns, tracking user behaviour and updating strategies accordingly. This may paint a picture of a cynical, data-driven world, but it's a reality that eCommerce has long since become accustomed to.
Artificial Influencers – A Step too Far?
Influencer marketing is by no means a novel undertaking.
According to Business Insider, brands are set to spend up to $15 billion on influencer marketing by 2022.
Partnering with influencers has become an everyday practice for many brands. In addition to establishing paid promotions with celebrities, brands work with more niche influencers whose interests reflect their corner of the market. Marketers are refining this symbiosis, investing more time and resources into improving the effectiveness of their influencer campaigns. They are therefore selective about choosing the right influencers, looking for a fan base that will convert into customers. After all, a brand can't predict an influencer's success from their follower count alone: much of that following may be made up of bots, as a low engagement rate on their posts often indicates.
Perhaps it's to avoid these fraudulent influencers that brands have begun to rely on AI influencers. The concept has been around for years, yet it wasn't until 2018 that it began to gain traction. It may seem odd to use a robot for influencer marketing; for an industry that strives to create a human connection between brand and consumer, it looks counterintuitive. Yet AI influencers have been used by some of the world's biggest brands for successful campaigns.
Let's define the two types of artificial influencer. There's the standard virtual influencer, which doesn't actually use AI but serves as a fictitious key opinion leader (KOL) and mouthpiece for a brand. Then there's the second type, less common but growing in significance: the influencer that uses artificial intelligence to generate a persona programmed to capture a following.
Lil Miquela is the most famous example of the first type. She’s a fashionable faux 19-year-old with almost 3 million Instagram followers. She, like her peers, is beautiful, popular, enthusiastic about the clothes she wears and the brands she likes. She’s modelled for some of the world’s biggest fashion houses, including Prada, Chanel, Louis Vuitton and Calvin Klein. She’s also not real. Rather, she was made to be as attractive and appealing as possible without alienating real-life people.
Lil Miquela speaks as if she were a person with feelings who cares about causes. She's been used to voice support for the Black Lives Matter movement and, ironically, to speak out against the exacting beauty standards imposed on young people.
Alternatively, look at how fashion photographer Cameron-James Wilson has designed a team of virtual models used to promote Smart cars, Ellesse and KFC.
But what's the result when brands use programmes as aspirational avatars? Is it a natural progression of today's marketing trends, a reflection of where the industry is moving?
One argument is that these digital creations of the marketing industry have been used to manipulate young people, threatening their wellbeing. That’s certainly the stance taken by Internet Matters, a children-focused online safety NGO.
The risk comes when these AI avatars are used to mimic the trust-building, recognition-centred relationships that real influencers form. It's believed that within a matter of years, machine learning will enable virtual influencers to generate social media content tailored to the data gathered about their followers. Put simply, these avatars are being primed to manipulate their audiences as effectively as possible. Some virtual influencers fail to disclose that they're not real, that they're echoing the thoughts of a brand manager, or that they're the product of algorithmic communication. Brand leaders use these digital personas to better manage their campaigns, simulating interest from a fake young person in an attempt to inspire real ones.
Given that these influencers share their feelings, which in reality are the words of a creative lead or computer programme, would it not be more ethical to operate with a higher degree of transparency?
With AI reaching into every aspect of our lives, it should be no surprise that virtual influencers have made their debut. But they also deserve scrutiny. Brands have the advantage of complete creative freedom over the direction and implementation of their influencer campaigns. Yet the distance that comes with using a virtual avatar may just be saving them from the standards of accountability we expect of a human. Does the teen following a digital influencer know that they're following the mouthpiece of a brand? Do they appreciate that its words have been greenlit by a marketing manager whose values and politics may be vastly different from their own? Do they realise that it's by design that they identify with a virtual influencer?
Yes, I can respect the necessity of technological advancement. However, I also believe it's worth honouring authentic and open marketing.