
The elusive socialbot: How fraudulent accounts affect social media engagement


One in five social media users accepts friend requests from people they don’t know. 51% of those accounts are bots.

And that was reported back in 2011. Since then, the exact number of bots infiltrating social media (known as socialbots) has remained hard to pin down, but they have become far more prevalent.


You’d probably like to think you can tell the difference between a real human being and a bot. But the socialbot accounts are getting more advanced as they learn and adapt to the patterns of human interaction on social media. Socialbots acquire friends, likes and upvotes on their accounts. Photos and personal information, which create a “normal” looking profile, are lifted from the profiles of unsuspecting real people. Socialbots can produce comments online that imitate the pace and frequency of human social media activity.
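The cadence mimicry described above can be sketched in a few lines. This is a toy illustration only; the distribution and its parameters are invented for the example, not drawn from any real bot:

```python
import random

# Toy sketch of posting-cadence mimicry: instead of posting at
# machine-regular intervals, a bot draws its delays from a skewed
# distribution so its activity pattern looks human. The lognormal
# parameters here are invented purely for illustration.

def human_like_delays(n, seed=0):
    """Generate n inter-post delays (in seconds) with human-like irregularity."""
    rng = random.Random(seed)
    # Lognormal has a long tail: mostly short gaps, occasional long lulls.
    return [rng.lognormvariate(6.0, 1.2) for _ in range(n)]

def robotic_delays(n, interval=300.0):
    """The giveaway pattern: an identical gap between every post."""
    return [interval] * n

print(human_like_delays(3))   # irregular gaps
print(robotic_delays(3))      # fixed gaps: 300.0, 300.0, 300.0
```

Early detection heuristics keyed on exactly this kind of regularity, which is why newer bots randomize their timing.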

It is almost impossible to identify them with certainty. As a rule, if a major social media platform such as Twitter, with all its top engineers, has failed to resolve a problem after many years, yet you believe you've figured out a solution, you probably haven't. However, if you think you might be able to assist Twitter in weeding out fake accounts, they're hiring.

It's a cat-and-mouse game. As soon as one bot or fake account is deleted, another pops up in its place. New bots can be built to bypass phone number authentication, email verification and CAPTCHAs. Even advanced machine learning algorithms designed to detect fake accounts produce false positives and false negatives – a risk most platforms are unwilling to take – and as a society, we have to weigh whether it is worse to mistake lies for truth or truth for lies.
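The false-positive/false-negative tradeoff can be sketched with a toy threshold classifier. Every score, name and threshold here is hypothetical; no real detection system works on four accounts and a single number:

```python
# Toy illustration of the detection tradeoff: each account has a
# "bot score" from some hypothetical model, and accounts scoring at
# or above a threshold get flagged. A strict threshold misses bots
# (false negatives); a lax one flags real users (false positives).

def flag_accounts(scores, threshold):
    """Return the set of account names whose bot score meets the threshold."""
    return {acct for acct, score in scores.items() if score >= threshold}

def error_counts(scores, truth, threshold):
    """Count false positives (humans flagged) and false negatives (bots missed)."""
    flagged = flag_accounts(scores, threshold)
    false_pos = sum(1 for a in flagged if not truth[a])
    false_neg = sum(1 for a, is_bot in truth.items() if is_bot and a not in flagged)
    return false_pos, false_neg

# Hypothetical scores and ground truth (True = bot).
scores = {"alice": 0.2, "bot_1": 0.9, "bot_2": 0.6, "carol": 0.7}
truth = {"alice": False, "bot_1": True, "bot_2": True, "carol": False}

print(error_counts(scores, truth, 0.8))  # strict: (0, 1) – one bot slips through
print(error_counts(scores, truth, 0.5))  # lax:    (1, 0) – one human wrongly flagged
```

No threshold eliminates both error types at once, which is the platforms' dilemma in miniature.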

What we do know for certain is that programs are developed specifically with the aim of influencing discussions on social media. Echo chambers are artificially created all the time. In 2011, even the US military developed bots that would spread pro-America messages and comments online to influence discourse in foreign countries.

What appear to be the opinions and thoughts of real people are, in fact, often just some code and a script, a form of AI that learns to imitate real human interaction so precisely that it could fool a large group of people into, for instance, voting for a particular candidate. Angry Internet mobs could develop over exaggerated claims via constant exposure, and outrage can literally be manufactured.

Worse still, daily outrage could eventually lead to desensitization and apathy – a cry-wolf scenario in which, after being deluged with false information for so long, we no longer recognize the truth.


Many people noticed a significant and odd difference during the 2016 presidential campaign between online polls and traditional political polls. Online polls were more likely to show widespread support for Trump, whereas traditional media polls were consistently favorable to Clinton. What happened? Users of 4Chan and r/The_Donald deployed bots and brigaded online debate polls to skew the numbers in Trump's favor. Trump pointed to his surging popularity on social media as evidence of a much larger following that went unreported, and criticized the reliability of the official polls. He was not wrong.

On Reddit, especially in the r/politics sub (which has over 3 million subscribers), users frequently suspect one another of having fake accounts. Some even suspect that all the top-rated articles on the front page every day are posted and upvoted by bots, artificially creating the impression of consensus and support for particular (liberal) political views. However, there's no way to prove it. In the depths of the comments sections, people accuse each other of being paid "shills," and go on witch hunts through account histories to determine whether a user seems legitimate. Because such accusations distract from civil discussion, moderators warn that posts accusing others of shilling may be deleted.

During the primaries, it was Clinton supporters (or critics of Trump) who were accused of being paid frauds, employed by a propaganda organization called Correct the Record. While CTR was a real organization that provided information and talking points that Democrats could share when discussing politics online, it has never been proven that CTR paid members of the public to support Clinton. But its existence started a lot of arguments on Reddit in 2016, back when most of the commenting subscribers of r/politics were pro-Bernie and couldn't believe that anyone would actually support Clinton.

But it was Trump who had more fake accounts supporting him online. Between the first and second debates, an Oxford research team found that bots accounted for roughly one-third of Trump's online support, compared with less than one-fifth of Clinton's.

Now that the politics subreddit’s most popular posts and discussions are anti-Trump, with half the articles on the front page reporting vague evidence of the Trump campaign’s collusion with Russia, those who defend Trump or doubt the media’s anonymous sources are often accused of being bots or frauds hired by Russia. Former FBI agent Clint Watts testified in early April that Russian bots and fake accounts were indeed being used to influence public opinion online.

The demographics of Reddit, however, reveal that its users tend to skew liberal. The loud, aggressive anti-Trump and anti-GOP sentiment is most likely a real backlash against the current administration. The problem is, socialbots are so advanced that it’s hard to tell the difference between a genuine outpouring of support and an invasion of con artists and pranksters. Lack of clarity opens the door for conspiracy theories.

One suggestion to counteract bot accounts seems promising, though it is not currently in practice – public shame. Any account that has previously been deleted as a bot would be tagged on its return, with a label to the effect of "Banned as potential bot." Degrees of confidence could be determined by how many tags an account acquires. Fake upvotes and likes by bots would be tagged on the posts of celebrities and politicians, including the POTUS, making it easier to determine how many real people actually support a post or tweet. Implementing this would require a complex system involving many layers of machine learning and human reporters who identify potential bot accounts. Ideally, bot brigades would be revealed, and social media transparency could drive a decline in bot usage through public humiliation.
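The tag-and-confidence idea can be sketched as a small data model. To be clear, this is a hypothetical scheme proposed in the article, not a feature of any existing platform, and the tag thresholds below are invented:

```python
# Sketch of the proposed "public shame" tagging scheme (hypothetical;
# no platform implements this). Each ban-and-return cycle adds a tag
# to an account, and confidence that it is a bot grows with tag count.

from dataclasses import dataclass, field

@dataclass
class Account:
    name: str
    tags: list = field(default_factory=list)

    def add_tag(self, reason="Banned as potential bot"):
        """Record a public tag each time the account is flagged and banned."""
        self.tags.append(reason)

    def bot_confidence(self):
        """Map tag count to a rough, invented confidence label."""
        n = len(self.tags)
        if n == 0:
            return "unflagged"
        if n < 3:
            return "suspected bot"
        return "likely bot"

acct = Account("definitely_a_human_99")
acct.add_tag()
print(acct.bot_confidence())  # suspected bot
acct.add_tag()
acct.add_tag()
print(acct.bot_confidence())  # likely bot
```

Because the tags are public, anyone viewing a post could see how much of its support came from repeatedly flagged accounts, which is the transparency the proposal counts on.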

However, there are drawbacks to this approach. Not only would it require more manpower and vetted employees to determine which accounts are fake, but it might backfire. One could easily send 10,000 fake bot comments, likes or upvotes to the posts of an enemy to negatively affect their online reputation. Anyone could defend themselves by saying, “Those bots are sent by people who wish to undermine my popularity.” One can easily imagine Trump declaring all his bot supporters to be part of a conspiracy against him by the deep state.

It’s yet another Orwellian phenomenon most people never envisioned for the future. Did anyone think we would one day accuse each other of being robots when we disagree? Stranger still is that many of those who accuse others of being robots are robots themselves. The influence of artificial feedback has created an Internet that is littered with landmines of false information. One day, historians may look back on this as the time when society fought a cold war of lies and suffered a plague of deception that brainwashed millions of people due to rapid technological advancement. And social media influence on our opinions, feelings, and choices is not going away anytime soon.

Rebecca Chance
