The COVID-19 pandemic and subsequent lockdowns allowed people around the world to connect the only way they could – using social media. However, this has led to a new wave of misinformation being spread throughout the internet, a problem New Zealand has only had to face since the pandemic began.
“We didn’t really have much misinformation targeted at New Zealanders prior to the COVID outbreak,” Cocker says.
According to Cocker, misinformation was circulating online before the COVID pandemic, but little of it was relevant to New Zealand.
“We have started to see more misinformation specifically targeting New Zealand activities – specifically responding to what the Government is doing.”
“There’s definitely been a lot more – I’m not saying that it’s out of hand, but it’s gone from a thing that was talked about and happening in other places in the world, to being something that is directly relevant to New Zealanders.”
Influencers and misinformation
The freedom rallies that occurred worldwide a couple of weeks ago, but most notably in locked-down Sydney, put a spotlight on social media personalities or ‘influencers’ who attended the march, and yelled about their freedom via placards and Instagram stories.
Australian media outlet Pedestrian.TV published a list blasting 20 influencers, including New Zealand's former hero Egg Boy, who either attended the protest or posted profusely about it on their platforms.
Almost all of the people on the list are verified and have millions of followers between them, meaning millions of people saw the anti-lockdown, anti-COVID, anti-vaccination messages they were posting.
The Australian outlet accused the influencers of “feeding their followers unsafe advice that’s putting everyone at risk”.
Cocker says when the spread of misinformation starts to put the public’s health at risk, it becomes a problem.
“There is harm caused to society by disrupting the public health messaging, and the systems that the Government is putting in place to protect everybody,” he says.
“We always talk about harmful content, so harmful hate speech, harmful digital communications, harmful disinformation. At the point that something becomes harmful, then you need to have a formal response to it.”
What should the ‘formal response’ be?
Newshub asked experts whether verified accounts should lose their verification if they are reported repeatedly for promoting misinformation.
Meisner and Christensen believe a penalty system would help combat the spread of misinformation.
“Based on how quickly misinformation can spread, we think a penalty system would be a good step in helping these platforms moderate the spread of misinformation,” they say.
“Social media empowers people to believe that their opinion is valid, regardless of their level of expertise on the topic. If verified accounts were at risk of losing their blue tick we think it would definitely lead to more considered sharing.”
Cocker says that whether an account is verified or not, there should be a punishment.
“There absolutely should be penalties – firstly, not because people are reported a lot of times, but because people are in breach of the rules,” he says.
“Facebook and Instagram have rules around misinformation and if people breach those, there should be punishment.”
When it comes to removing verification of an account, Cocker says he is “more of a fan of removing their ability to use the platform if they share misinformation deliberately”.
What social media platforms are doing to combat misinformation
When the COVID pandemic struck, Facebook was quick to introduce a pop-up directing users to a website with more information about the virus whenever COVID-19 is mentioned on the platform. It also employs third-party fact-checkers and flags posts that have been proven false.