Social bots, fake news and other forms of disruptive intervention in opinion-making processes have both a public and a political impact. In January, the Committee on Education, Research and Technology Assessment, chaired by Patricia Lips (CDU/CSU), held a round-table discussion with experts on the subject. Prof. Dr. Dr. Dietmar Janetzko, Professor of Business Informatics and Business Process Management at the Cologne Business School, was invited to this discussion as an authority on the topic.
Social bots are programs that act under a false identity within social media, where they try to influence opinion-making processes. How effective they are at manipulating opinion, and whether and how lawmakers should react to them, was disputed among the experts present. Topics of discussion included mandatory registration for social bots as well as stronger media education to counter potential opinion manipulation.
In the following interview, Prof. Janetzko offers insight into the activities of social bots, sheds light on whether they are actually dangerous, and explains how they can affect the economy.
How does one recognize a social bot within social networks?
As of now, there is no surefire way of doing this, but there are indicators. Social bots are mainly active on Twitter. Here, the ratio of friends to followers can be a clue: social bots usually follow many accounts while having few followers of their own. However, this criterion should be used with caution and is no guarantee, because bots are often active in bot networks and follow one another, so the friend-to-follower ratio is becoming more balanced for bots as well. Further indicators include the profile picture (a logo or a cartoon figure instead of a real photo can be a sign) and the time and frequency of posts. For example, tweets sent in the middle of the night, or several in quick succession within a very short period, can point to a bot. But here, too, developers can reprogram their bots at any time and change the times and intervals of posts.
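To make these indicators more concrete, the following is a minimal scoring sketch. The account fields (friends_count, followers_count, a flag for a real profile photo, a list of posting times) and all thresholds are assumptions chosen for illustration, not the API of any specific platform or a method endorsed in the interview.

```python
# Illustrative heuristic for the bot indicators mentioned above.
# Field names and thresholds are assumptions; this is a rough first
# filter, not a reliable detector.

from datetime import datetime
from typing import List


def bot_indicator_score(friends_count: int,
                        followers_count: int,
                        has_real_photo: bool,
                        post_times: List[datetime]) -> float:
    """Return a rough score in [0, 1]; higher means more bot-like signals."""
    score = 0.0

    # Indicator 1: follows many accounts but has few followers.
    if followers_count > 0 and friends_count / followers_count > 10:
        score += 0.4
    elif followers_count == 0 and friends_count > 100:
        score += 0.4

    # Indicator 2: no real profile photo (e.g. a logo or cartoon figure).
    if not has_real_photo:
        score += 0.2

    # Indicator 3: a large share of posts in the middle of the night
    # (here arbitrarily defined as 1-5 a.m.).
    if post_times:
        night_posts = sum(1 for t in post_times if 1 <= t.hour < 5)
        if night_posts / len(post_times) > 0.5:
            score += 0.2

    # Indicator 4: bursts of several posts within a very short time span.
    times = sorted(post_times)
    bursts = sum(
        1 for earlier, later in zip(times, times[1:])
        if (later - earlier).total_seconds() < 5
    )
    if bursts >= 3:
        score += 0.2

    return min(score, 1.0)
```

An account that follows 2,000 others, has 30 followers, uses a logo as its avatar and posts ten tweets within a minute would score high here; but, as the interview stresses, developers can change posting behaviour at any time, so such a heuristic is at best a preliminary filter, never proof.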
Can social bots be reported and their developers identified?
Any abuse can be reported to the social network. The use of social bots breaches the networks' terms and conditions, so it is in their own interest to curb such activities. Social media platforms are certainly already combatting them, but one could imagine further interventions. The developers of the bots themselves are difficult to identify. There are approaches: one could theoretically establish a link between a bot and its developer based on the program's blueprint, but the chain of evidence is quite unreliable.
How do you explain the widespread interest in social bots and related issues such as fake news?
The current, quite intense discussion can be explained, on the one hand, against the backdrop of Brexit and the US presidential election, two events in which social bots were deployed. On the other hand, this year's Bundestag election and the presidential election in France also explain the massive attention. The Bundestag takes this topic very seriously, which is why mandatory registration for social bots is currently being deliberated. However, various groups doubt both the feasibility and the usefulness of such a measure.
Judging by many media reports and statements from political leaders, one could assume that social bots are dangerous and could manipulate this year's election. What is your stance on this?
With the information sources currently available, it appears unlikely that social bots can influence large numbers of people. To qualify this, however: political contests often end in a near-stalemate, and in such situations even a small number of people can have an impact. In my opinion, bots and related instruments of influence can make a topic appear bigger than it actually is. I also want to add that it is not only social bots that can be dangerous, but also the reaction to them. The fact, for example, that the government would contemplate censorship-like measures is concerning.
What do you think of the increased efforts in the area of media education to respond to problems related to social bots and fake news?
The call for media education is understandable, because there are unfortunately still confused ideas about social media. However, from my point of view, many efforts in media education take too instrumental an approach to social networks, for example with a view to the labor market. Critical analysis plays only a marginal role in current media education.
Can social bots also have an impact on the economy? How could companies use bots?
Social bots can influence the economy on various levels, for example in the financial sector. When financial service providers make recommendations based on prevailing opinion on social media, this opens the door to share price manipulation. A similar scheme existed even before social bots: the so-called pump-and-dump principle, in which a share is talked up on the basis of false information and then sold off by the very people who made the false claims. Another way to exert influence is to spread rumors via social bots so that companies fall into disrepute. Companies, on the other hand, can also use bots constructively: to a certain extent, bots can take over customer communication, clear up misunderstandings or, if necessary, counteract a shitstorm.