Once upon a time, social media was used to connect us with family and friends. Businesses used it to reach customers and advertisers, and in some alternate reality, social media is still an innocent tool used to bring people together. But in our current reality, social media has turned into a far more nefarious tool used to spread misinformation and divide people.
How did companies like Google and Facebook become such a threat?
Congress wanted to know the same thing. In July, the CEOs of Amazon, Apple, Google and Facebook—four of the most powerful figures in tech—testified before members of Congress at an antitrust hearing. Concerns raised included whether the companies had abused their market power to keep rivals from threatening their positions, along with questions about privacy, security and the proliferation of misleading information on their platforms. These hearings hint that more regulation may be on the way.
Below, E&P has compiled a list of other developments for news publishers to watch.
At the Congressional hearing, Google CEO Sundar Pichai came in for a fair share of scrutiny, along with Google's parent company, Alphabet. According to a report from the New York Times, lawmakers' questions focused on the company's search engine; they accused Google of lifting content from other websites to keep users within an enclosed environment of its own search results in order to make more advertising dollars.
In addition to this probe, the Justice Department's antitrust inquiry into Alphabet continues to move forward quickly, spurred by the personal interest of Attorney General William P. Barr. The Times reported in September that Justice Department officials told lawyers involved in the case to wrap up their work by the end of the month. However, most of the lawyers working on the investigation opposed the deadline; some argued that they could bring a strong case against the tech company but needed more time.
YouTube, a Google product, has also been battling misinformation, especially during the COVID-19 pandemic. The platform, along with others including Facebook and Twitter, successfully blocked Plandemic: Indoctornation—a follow-up to Plandemic, a video promoting falsehoods about the pandemic—from going viral. The original Plandemic video racked up more than 8 million views across social platforms, with one YouTube version hitting 7.1 million views before it was removed, according to The Verge. But this time around, YouTube quickly began removing full uploads for violating its policies on COVID misinformation. These actions to prevent the sequel's spread show that perhaps progress is possible.
At the same time, Google has been fighting back against a new Australian code of conduct that would force tech giants to pay for news on their platforms. The News Media Bargaining Code, as it’s called, aims “to address bargaining power imbalances between Australian news media businesses and digital platforms, specifically Google and Facebook.”
It would require the platforms to negotiate with news media on how to pay for content and advise media companies of changes to algorithmic ranking and presentation of news. The Australian Competition and Consumer Commission (ACCC) released a draft of the code for public consultation on July 31.
Google responded several weeks after the news broke with an open letter, served to millions of Australians visiting Google through a pop-up. The letter argues that the new law would force the company to provide users with a dramatically worse Google Search and YouTube, could lead to users' data being handed over to big news businesses, and would put the free services users rely on in the country at risk. It also claims that Google already pays news media businesses "millions of dollars" and sends them "billions of free clicks every year."
The letter was met with pushback from the ACCC: "The open letter published by Google today contains misinformation about the draft news media bargaining code which the ACCC would like to address." The commission also stated that Google will not be required to charge Australians for the use of its services or share any additional user data with Australian news businesses unless it chooses to do either of those things.
With this law, Australia is making it clear that a healthy news media industry is essential to democracy and that the damage Big Tech has done to it can no longer be ignored.
Since the 2016 American election cycle, Facebook, along with sister platforms Instagram and WhatsApp, has been prominently engaged in combating misleading information. Consequently, Facebook began bracing for this year's election early through efforts like the Voting Information Center, launched in June. The feature sits at the top of users' Facebook and Instagram feeds and provides people with accurate information about voting as well as the tools they need to register.
However, Facebook doesn't expect its troubles to end on Election Day (Nov. 3). In case President Donald Trump interferes once the votes are in, Facebook is "laying out contingency plans and walking through postelection scenarios," the New York Times reported. A few scenarios include Trump wrongly claiming he won another four-year term or seeking to invalidate the results by asserting that the Postal Service lost mail-in ballots. The Times also reported that the company discussed a "kill switch" to shut off political advertising after Election Day.
Shortly after this report, news broke that Facebook would stop accepting new political advertising in the U.S. in the week leading up to the election. However, candidates and political action committees can choose to target existing ads at different groups or adjust their level of spending, according to The Verge.
Facebook also recently announced plans to partner with academics on a new research project, which will study how the 2020 election plays out on the platform and how it affects things like voter participation and the spread of misinformation. The findings, Facebook said, would be published around mid-2021 at the earliest. If successful, the results could settle once and for all the question of social media's influence on elections.
At the Congressional antitrust hearing in July, Facebook's role in the proliferation of misinformation was a common theme. House antitrust chair David Cicilline (D-R.I.) suggested Facebook allows misinformation in order to reap advertising dollars. One example he pointed to was the Breitbart video circulating on Facebook's platforms that falsely called hydroxychloroquine a cure for COVID-19. Facebook CEO Mark Zuckerberg said such content does not benefit the business and that the video was removed for violating the company's policies. However, Cicilline pointed out that it took five hours for the video to be removed, by which time 20 million people had already viewed it.
Like Google, Facebook is also attempting to fight the ACCC’s News Media Bargaining Code. In a blog post, the company stated, “Assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia from sharing local and international news on Facebook and Instagram.”
The post claims that news represents a fraction of what Facebook users see in their news feeds and is not a significant source of revenue for the company. In fact, it suggests that news publishers benefit the most from the relationship because the platforms allow them to reach a large audience, which in turn "allows them to sell more subscriptions and advertising." It also states that the company already invests "millions of dollars in Australian news businesses."
The Chinese app TikTok took its current form in 2018, when owner ByteDance folded in Musical.ly, a lip-syncing app it had acquired. Popular with teens, the app is known for dancing videos, beauty tutorials and comedic clips, but lately, security concerns have surrounded the platform.
In July, reports that Amazon had instructed some employees to delete TikTok from their phones due to security concerns made waves. Although the company later stated that the request was made in error, the move still resonated with users.
Like countless other apps installed on our mobile devices, TikTok is a platform that captures data as you use it. Although tech security experts haven't seen a documented threat materialize, Trump has argued that the app is a national security threat, and in August, the president signed two executive orders. According to the New York Times, the first order banned transactions with TikTok within 45 days. The second order gave the company 90 days to divest from its American assets.
Consequently, ByteDance was forced to seek the sale of TikTok’s U.S. operations. The sprint to acquire the app heated up at the end of August, with three groups submitting bids for TikTok’s operations in the U.S., Canada, Australia and New Zealand. The groups included Microsoft and Walmart, Oracle, and Centricus Asset Management LTD and Triller Inc.
At the same time, the company filed a lawsuit against the U.S. government claiming that the Trump administration deprived it of due process when Trump issued the executive order.
As the deadline to sell neared, the company announced that it would not sell TikTok’s U.S. operations to Microsoft or Oracle, nor would it give the source code to any U.S. buyers, according to reports. However, Oracle is reportedly going to be picked as TikTok’s “trusted tech partner” in the U.S.
Should TikTok get banned, there are plenty of other platforms ready to take its place, such as the newly launched Instagram Reels and YouTube Shorts.
In the midst of all this, TikTok launched tiktokus.info and a new Twitter account (@tiktok_comms) in an effort to fight "rumors and misinformation about TikTok proliferating in Washington and in the media." Like most other social media platforms, TikTok is trying to stay ahead of misinformation, including in the lead-up to the election. Thus, TikTok has broadened its fact-checking partnerships with PolitiFact and Lead Stories (which already covered misinformation related to COVID-19, climate change and more) to fact-check potential misinformation related to the 2020 U.S. election.
In July, hackers took control of some of Twitter's most famous users' accounts as part of a cryptocurrency scam. Bloomberg reported that the attackers gained access to 130 Twitter accounts, including those of Barack Obama and Elon Musk. Following the incident, Twitter limited functionality for all verified accounts to prevent the scam from spreading. Users with a blue checkmark by their name were forbidden from posting for approximately two hours, according to the Guardian. Three people were later charged in connection with the breach.
The hack was just the latest example of how social media platforms are vulnerable, despite the heavy security measures that are put into place.
Over the summer, Twitter also stated that it was under investigation by the FTC for alleged privacy violations. The company said it inadvertently used phone numbers and email addresses that some users uploaded for security reasons to target them with ads. The practice is a potential violation of a 2011 FTC consent order in which the company agreed to better protect personal data, according to Bloomberg.
The 2011 agreement (which sprang from a 2009 hack) barred Twitter for 20 years from "misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information." The company said the investigation may lead to a loss of $150 million to $250 million.
Twitter has also been busy combating misleading information on its platform. Much like Facebook's, Twitter's role in proliferating misinformation and disinformation grew out of the 2016 U.S. presidential election. In the lead-up to this year's election, Twitter has taken several steps, including implementing a tool that "enables people to report deliberately misleading information about how to participate in an election or other civic event," according to the platform. In a noteworthy move, Twitter also banned political ads last fall, claiming that "political message(s) reach should be earned, not bought."
Additionally, for the first time, the social media network favored by Trump has labeled his tweets as "potentially misleading." For example, in May, Trump posted two tweets claiming that mail-in ballots would be "substantially fraudulent" and would result in a "rigged election." Twitter affixed a link to the tweets that says, "Get the facts about mail-in ballots." Clicking it directs users to experts saying "mail-in ballots are very rarely linked to voter fraud."
While Twitter has made moves to fight misinformation, the platform has been timid about fact-checking and removing false content. With the election around the corner, Twitter should be prepared for a new wave of misinformation and disinformation. Forget an edit button. Twitter needs a fact-checking button.