Instagram has a reputation for being fake: the jewel in the social-media crown of people pretending to be something they’re not. It’s the home of influencers, the impossibly beautiful, and pictures that don’t resemble anyone’s real life – but the fakeness of Instagram goes further than those who use it to make a living.
Today, we’re moving past the vanity of the ‘gram, exploring the fake behavior created by bots, spam accounts, scams, and invalid clicks, and seeing how they affect its users and advertisers.
It’s estimated that there are 100 million bot accounts on Instagram, costing businesses upwards of $1.3 billion every year. With one in ten accounts being fake, Instagram has a lot to answer for when it comes to proving who’s legitimate.
You only need to look in the comments to see fake accounts – you can spot them a mile off. Bots can only get the gist of what an image shows and struggle to write replies that make sense. They’re also often rude, promotional, or a mix of the two: “Don’t view my story if you don’t want to _____” is a typical example.
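Because these comments lean so heavily on a handful of stock phrasings, a crude keyword filter can catch many of them. Below is a minimal sketch of that idea; the phrase list is hypothetical, drawn from the kinds of bot comments described above, not from any official spam database.

```python
import re

# Hypothetical phrases typical of bot comments; illustrative only.
SPAM_PATTERNS = [
    r"don'?t view my story",
    r"free followers?",
    r"check (out )?my (page|profile|bio)",
    r"dm me",
    r"link in bio",
]

def looks_like_bot_comment(comment: str) -> bool:
    """Flag a comment if it matches any common spam phrasing."""
    text = comment.lower()
    return any(re.search(pattern, text) for pattern in SPAM_PATTERNS)

print(looks_like_bot_comment("Don't view my story if you don't want to win"))  # True
print(looks_like_bot_comment("Lovely shot, where was this taken?"))            # False
```

A real moderation system would combine signals – account age, posting rate, comment relevance to the image – rather than rely on phrase matching alone, which is trivial for spammers to dodge.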
Not only does spam take away from a user’s experience, it also damages the value the platform provides. Who wants to create content for, or advertise in, a space littered with bots? Instagram has repeatedly stated that they’re cracking down on inauthentic behavior, most recently in August 2020, when they claimed they’d ask users flagged for “coordinated inauthentic behavior” to prove their identity. Looking at today’s newsfeed, however, it’s clear this has made little difference.
Instagram’s bot problem is overshadowed by that of its older sibling, Facebook. Facebook has reportedly removed 1.7 billion bots over the last few years – and even that is unlikely to be anywhere near all of them. Advertisers will know that Instagram Ads run through the same Ad Manager as Facebook Ads, and users will know that when one platform goes down, the other often follows. So how much crossover is there for bots, too? After all, the same leadership, morals, and priorities apply to both platforms – and they aren’t focused on eliminating fake behavior.
Instagram has also become rife with scams. This is likely because its audience is always looking for quick ways to get ahead, so any scam that promises increased attention will find interested victims. These scams range from fake influencer sponsorships and giveaways to investment schemes and free followers and likes – you name it, there’s an Instagram scam for it.
One example that seems to do well is the sale of verification badges. For a fee, these sellers claim they can guarantee you ‘blue tick status’. What makes them more believable is that the accounts sending the messages are themselves verified, so their sales pitch of “I can vouch for you” seems like a genuine leg up. In reality, anyone can apply for a verification badge – your chances of being accepted are the same with or without paying $700.
“Cheat The Algorithm”
What exacerbates Instagram’s bot issue is its culture. It’s a platform where follower counts and likes are king; anything seen to damage them is thrown to the engagement-hungry wolves. The Instagram algorithm is a frequent target of blame, as many users believe it works against them by hiding their posts or limiting their reach.
While there are a lot of tips out there to help users increase their engagement organically, such as posting consistently, it’s easier for frustrated users to feel unfairly penalized by the AI and look for a quick win.
Introducing: buying followers.
As more people attempt to accelerate their rise to influencer stardom, fraudsters can make a lot of money selling fake followers. The problem is that this fuels the platform’s bot problem and can actually get the buyer penalized under Instagram’s policies. While Instagram isn’t great at stopping bots and scams at scale, it does seem quick to ban people for more obvious violations, such as a sudden spike in activity.
Instagram’s heavy-handedness in temporarily disabling accounts has led to a phenomenon known as shadowbanning: some users claim that Instagram limits their reach and visibility by showing their content to fewer people and excluding their posts from hashtags. Head of Instagram Adam Mosseri denied the existence of shadowbanning in a 2019 interview, but this has not stopped people from feeling marginalized by the platform’s algorithms and acting out.
Should Instagram’s focus be on blocking genuine users who employ shady tactics to increase their visibility, or on the creators of the bots that enable them? These bots, it’s important to note, go on to click on paid ads.
Who’s Checking The Ad Traffic Quality?
Invalid clicks affect every type of PPC ad, and Instagram is no exception. In fact, given the site’s culture, it’s easy to see how bots and fake behavior dominate and leak into paid ads. Not only are some of the ads shown fake, but clicks on genuine ads can be too. This wastes advertisers’ budgets and skews their data, engagement metrics, and chances of being shown.
Much as with the rest of the platform’s problems, Instagram hasn’t made much effort to ensure the traffic driven to advertisements is free from bots and data-center traffic. If bots are interacting with genuine users, they’re likely interacting with your ads too.
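Advertisers who want to sanity-check their own click logs can apply simple filters of the kind click-fraud tools use: discard clicks from known data-center IP ranges and from sources clicking at implausible rates. The sketch below illustrates the idea; the IP ranges (reserved documentation blocks) and the rate threshold are assumptions for demonstration, not real data-center lists.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Assumed values for illustration: documentation-reserved IP blocks
# standing in for data-center ranges, and an arbitrary rate cap.
DATA_CENTER_RANGES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
MAX_CLICKS_PER_MINUTE = 10

@dataclass
class Click:
    source_ip: str
    clicks_last_minute: int  # clicks seen from this source in the last minute

def is_invalid_click(click: Click) -> bool:
    """Mark clicks from data-center ranges or abnormally fast sources as invalid."""
    addr = ip_address(click.source_ip)
    if any(addr in net for net in DATA_CENTER_RANGES):
        return True
    return click.clicks_last_minute > MAX_CLICKS_PER_MINUTE

clicks = [
    Click("192.0.2.44", 1),    # data-center range -> invalid
    Click("203.0.113.7", 2),   # normal behavior -> valid
    Click("203.0.113.9", 40),  # implausible rate -> invalid
]
valid = [c for c in clicks if not is_invalid_click(c)]
print(len(valid))  # 1
```

In practice this is only a first pass – sophisticated bots rotate residential IPs and pace their clicks – but even a coarse filter like this surfaces how much of a campaign’s spend never reached a human.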
Instagram has a lot of work to do to create a better experience for its users and advertisers. The increasing saturation of bots in comments, illegitimate ad clicks, and phishing activity means it’s fallen to users and advertisers to protect themselves: while Instagram’s feed may look pretty from the outside, it’s full of ugly truths hidden beneath.