governance of technology and governance through technology
For each quote, we plan to discuss it, place it in its context, and examine how it fits into the broader discussion about bias.
NYTimes
This isn’t about free expression. This is about paying for reach. And paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle. It’s worth stepping back in order to address.
Doubtless, big debates about free speech will be raised about the new policy, but no company has to sell ads it doesn’t want to sell. And, as much as it feels like it is a public square, Twitter is a private enterprise that can make money any way it likes.
Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.
And he called for political ad regulation well beyond simply ad transparency.
Would we really block ads for important political issues like climate change or women’s empowerment? Instead, I believe the better approach is to work to increase transparency. (Zuckerberg)
Washington Post
Barriers to mass communication are lower. With platforms such as Facebook and Twitter, modern-day purveyors of disinformation need only a computer or smartphone and an internet connection to reach a potentially huge audience – openly, anonymously or disguised as someone or something else, such as a genuine grassroots movement. In addition, armies of people, known as trolls, and so-called internet bots – software that performs automated tasks quickly – can be deployed to drive large-scale disinformation campaigns.
In what’s known as state-sponsored trolling, for instance, governments create digital hate mobs to smear critical activists or journalists, suppress dissent, undermine political opponents, spread lies and control public opinion.
But the Oxford report singles out China as having become “a major player in the global disinformation order.” Along with those two countries, five others – India, Iran, Pakistan, Saudi Arabia and Venezuela – have used Facebook and Twitter “to influence global audiences,” according to the Oxford report.
Twitter and Facebook, in August 2019, revealed a Chinese state-backed information operation launched globally to de-legitimize the pro-democracy movement in Hong Kong.
A Rand Corp. study of the conflict in eastern Ukraine, which has claimed some 13,000 lives since 2014, found the Russian government under President Vladimir Putin ran a sophisticated social media campaign that included fake news, Twitter bots, unattributed comments on web pages and made-up hashtag campaigns to “mobilize support, spread disinformation and hatred and try to destabilize the situation.”
Before India’s 2019 elections, shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos to denigrate opponents. The country also has been plagued with deadly violence spurred by rumors that spread via WhatsApp groups.
The most sophisticated disinformation operations use troll farms, artificial intelligence and internet bots – what the Oxford researchers call “cyber troops” – to flood the zone with social-media posts or messages to make a fake or doctored story appear authentic and consequential.
A Singapore law that took effect Oct. 2 allows for criminal penalties of up to 10 years in prison and a fine of up to S$1 million ($720,000) for anyone convicted of spreading online inaccuracies. The responsibility for identifying falsehoods detrimental to the public interest was given to government ministers.
Vice
Platforms having to “play up” falsehoods for users
Organized calls to action being used to attack accounts
With the way the platform is engineered now (with anonymity), this will always arise
Eliminating anonymity would make people accountable for their actions and words