Study of TikTok, X ‘For You’ feeds in Germany finds far-right political bias ahead of federal elections

Recommendation algorithms operated by social media giants TikTok and X have shown evidence of substantial far-right political bias in Germany ahead of a federal election that takes place Sunday, according to new research carried out by Global Witness.

The non-governmental organization (NGO) analyzed the content displayed to new users via the platforms’ algorithmically sorted “For You” feeds, and found that both skewed heavily toward amplifying content favoring the far-right AfD party.

Global Witness’ tests identified the most extreme bias on TikTok, where 78% of the political content that was algorithmically recommended to its test accounts, and came from accounts the test users did not follow, was supportive of the AfD party. (It notes this figure far exceeds the level of support the party is achieving in current polling, where it attracts backing from around 20% of German voters.)

On X, Global Witness found that 64% of such recommended political content was supportive of the AfD.

Testing for general left- or right-leaning political bias in the platforms’ algorithmic recommendations, its findings suggest that non-partisan social media users in Germany are being exposed to right-leaning content more than twice as much as left-leaning content in the lead-up to the country’s federal elections.

Again, TikTok displayed the greatest right-wing skew, per its findings, showing right-leaning content 74% of the time. X was not far behind, at 72%.

Meta’s Instagram was also tested and found to lean right over a series of three tests the NGO ran. But the level of political bias it displayed in the tests was lower, with 59% of political content being right-wing.

Testing “For You” for political bias

To test whether the social media platforms’ algorithmic recommendations were displaying political bias, the NGO’s researchers set up three accounts apiece on TikTok and X, along with a further three on Meta-owned Instagram. They wanted to establish the flavor of content platforms would promote to users who expressed a non-partisan interest in consuming political content.

To present as non-partisan users, the test accounts were set up to follow the accounts of the four biggest political parties in Germany (conservative/right-leaning CDU; center-left SPD; far-right AfD; left-leaning Greens), along with their respective leaders’ accounts (Friedrich Merz, Olaf Scholz, Alice Weidel, Robert Habeck).

The researchers operating the test accounts also ensured that each account clicked on the top five posts from each account it followed and engaged with the content, watching any videos for at least 30 seconds and scrolling through any threads, images, and so on, per Global Witness.

They then manually collected and analyzed the content each platform pushed at the test accounts — finding there was a substantial right-wing skew in what was being algorithmically pushed to users.
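Global Witness has not published tooling for this step, but the aggregation it describes is simple to illustrate. Below is a minimal, hypothetical Python sketch of how manually labeled recommendations might be tallied into the kind of percentages reported above; the `Post` record, its fields, and the labels are assumptions for illustration, not the NGO’s actual schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record for one algorithmically recommended post,
# hand-labeled as in the methodology described above.
@dataclass
class Post:
    platform: str        # e.g. "tiktok", "x", "instagram"
    from_followed: bool  # True if it came from an account the test user follows
    lean: str            # manual label: "left", "right", or "non-political"

def partisan_share(posts: list[Post], platform: str) -> dict[str, float]:
    """Share of left- vs. right-leaning content among recommendations
    that came from accounts the test user did *not* follow."""
    sample = [
        p for p in posts
        if p.platform == platform
        and not p.from_followed
        and p.lean in ("left", "right")
    ]
    counts = Counter(p.lean for p in sample)
    total = sum(counts.values()) or 1  # avoid division by zero on an empty sample
    return {lean: counts[lean] / total for lean in ("left", "right")}

# Toy example: three labeled recommendations on one platform.
posts = [
    Post("tiktok", False, "right"),
    Post("tiktok", False, "right"),
    Post("tiktok", False, "left"),
]
print(partisan_share(posts, "tiktok"))  # {'left': 0.333..., 'right': 0.666...}
```

The substance of the study lies in the manual labeling rather than the arithmetic: each recommended post had to be judged by a human as political or not, and as supportive of a party or not, before any such tally could be run.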

“One of our main concerns is that we don’t really know why we were suggested the particular content that we were,” Ellen Judson, a senior campaigner looking at digital threats for Global Witness, told TechCrunch in an interview. “We found this evidence that suggests bias, but there’s still a lack of transparency from platforms about how their recommender systems work.”

“We know they use lots of different signals, but exactly how those signals are weighted, and how they are assessed for if they might be increasing certain risks or increasing bias, is not very transparent,” Judson added.

“My best inference is that this is a kind of unintended side effect of algorithms which are based on driving engagement,” she continued. “And that this is what happens when, essentially, what were companies designed to maximize user engagement on their platforms end up becoming these spaces for democratic discussions — there’s a conflict there between commercial imperatives and public interest and democratic objectives.”

The findings chime with other social media research Global Witness has undertaken around recent elections in the U.S., Ireland, and Romania. And, indeed, various other studies over recent years have also found evidence that social media algorithms lean right — such as this research project last year looking into YouTube.

As far back as 2021, an internal study by Twitter — as X was called before Elon Musk bought and rebranded the platform — found that its algorithms promoted more right-leaning content than left.

Nonetheless, social media firms typically try to dance away from allegations of algorithmic bias. And after Global Witness shared its findings with TikTok, the platform suggested the researchers’ methodology was flawed — arguing it was not possible to draw conclusions of algorithmic bias from a handful of tests. “They said that it wasn’t representative of regular users because it was only a few test accounts,” noted Judson.

X did not respond to Global Witness’ findings. But Musk has talked about wanting the platform to become a haven for free speech generally, though that may in practice be code for promoting a right-leaning agenda.

It’s certainly notable that X’s owner has used the platform to personally campaign for the AfD, tweeting to urge Germans to vote for the far-right party in the upcoming elections, and hosting a livestreamed interview with Weidel ahead of the poll — an event that has helped to raise the party’s profile. Musk has the most-followed account on X.

Toward algorithmic transparency?

“I think the transparency point is really important,” says Judson. “We have seen Musk talking about the AfD and getting lots of engagement on his own posts about the AfD and the livestream [with Weidel] … [But] we don’t know if there’s actually been an algorithmic change that reflects that.”

“We’re hoping that the Commission will take [our results] as evidence to investigate whether anything has occurred or why there might be this bias going on,” she added, confirming Global Witness has shared its findings with EU officials who are responsible for enforcing the bloc’s algorithmic accountability rules on large platforms.

Studying how proprietary content-sorting algorithms function is challenging, as platforms typically keep such details under wraps, claiming these code recipes as commercial secrets. That’s why the European Union enacted the Digital Services Act (DSA), its flagship online governance rulebook, in a bid to improve the situation by empowering public interest research into democratic and other systemic risks on major platforms, including Instagram, TikTok, and X.

The DSA includes measures to push major platforms to be more transparent about how their information-shaping algorithms work, and to be proactive in responding to systemic risks that may arise on their platforms.

But even though the regime kicked in on the three tech giants back in August 2023, Judson notes some elements of it have yet to be fully implemented.

Notably, Article 40 of the regulation, which is intended to enable vetted researchers to gain access to non-public platform data to study systemic risks, hasn’t come into effect because the EU has yet to pass the delegated act needed to implement that part of the law.

Aspects of the DSA also lean on platforms self-reporting risks, with enforcers then receiving and reviewing their reports. So the first batch of risk reports from platforms may well be the weakest in terms of disclosures, Judson suggests, as enforcers will need time to parse them and, if they find shortfalls, push platforms for more comprehensive reporting.

For now — without better access to platform data — she says public interest researchers still can’t know for sure whether there is baked-in bias in mainstream social media.

“Civil society is watching like a hawk for when vetted researcher access becomes available,” she adds, saying they are hoping this piece of the DSA public interest puzzle will slot into place this quarter.

The regulation has failed to deliver quick results on concerns attached to social media and democratic risks, and the EU’s approach may ultimately prove too cautious to move the needle fast enough to keep up with algorithmically amplified threats. But it’s also clear that the EU is keen to avoid being accused of crimping freedom of expression.

The Commission has open investigations into all three of the social media firms implicated by the Global Witness research, but there has been no enforcement in this election integrity area so far. It has, however, recently stepped up scrutiny of TikTok, opening a fresh DSA proceeding following concerns that the platform served as a key conduit for Russian interference in Romania’s presidential election.

“We’re asking the Commission to investigate whether there is political bias,” adds Judson. “[The platforms] say that there isn’t. We found evidence that there may be. So we’re hoping that the Commission would use its increased information[-gathering] powers to establish whether that’s the case, and … address that if it is.”

The pan-EU regulation empowers enforcers to levy penalties of up to 6% of global annual turnover for infringements, and even temporarily block access to violating platforms if they refuse to comply.
