Humans Disrupting Digital? Expect More Regulation, Fact-Checking and Curation


As news publishing enters a new decade, digital will continue to be a disruptor among us. The good news is that the industry has learned how to harness its power and use the platform effectively in newsrooms and advertising departments. Digital can be a powerful tool when used right, but it can also be our own worst enemy: we’ve seen sales numbers decline, readers jump ship and misinformation spread like wildfire, thanks to digital. Whatever comes next, news publishers should be prepared.

No one can predict where digital will take us in 2020, but we compiled a list of trends to watch in this new year. From rules and regulation to the rise of fact-checking and artificial intelligence, digital will undoubtedly disrupt us as it has before, but the time has come for us to turn the tables.

Data Regulation Across the U.S.

At the start of this year, the California Consumer Privacy Act (CCPA) went into effect. As explained by CNET, the law gives California residents the right to know what kind of data companies have collected about them, the right to ask that companies not sell their data and the right to request that their data be deleted. These rights cover data collected from any source, whether online or on paper forms.

According to Fortune magazine, several big companies, such as Microsoft, and even some smaller ones, like Boston-based internet service provider Starry, are voluntarily complying in all 50 states to create goodwill. Aside from goodwill, it makes sense to extend these protections to all 50 states, as several are already following California’s lead.

CNET reported that Nevada recently passed its own privacy law, which went into effect in October; however, it applies only to data collected from consumers over the internet. Maine passed its own law in June, which requires internet providers in the state to get customers’ permission before selling or sharing data with a third party. In addition, Washington considered a law last year, although it was not passed. CNET suggests the bill could be reintroduced this year.

An impact assessment prepared for the California Department of Justice estimates that the CCPA could cost companies with 20 employees $50,000 in initial costs to comply with the law. A larger company with more than 500 employees could incur, on average, an initial cost of $2 million.

At this rate, companies might be willing to grant residents of other states the same rights as Californians before those states pass laws of their own with new rules and regulations.

The Move to First-Party Cookies

With the recent rise of anti-tracking features (browsers, extensions, policies, etc.) from the likes of Apple, Mozilla and Google, as well as new regulations like the General Data Protection Regulation (GDPR) and the CCPA, publishers are increasingly making the switch from the third-party cookie to the first-party cookie.

News Corp, the Washington Post and Insider Inc. are a few of the publishers that, Digiday recently reported, are attempting to make the switch, as they can no longer rely on the third-party cookie.

News Corp (parent company of the Wall Street Journal, and the Times and Sun in the U.K.) developed a news ID for individual readers so that they can be identified without third-party cookies—the feature tracks readers across its multiple sites.

Similarly, Insider Inc. developed reader IDs, which aren’t personally identifiable but can provide insight into “reader behaviors, interests and intents, in order to create effective targeting segments for marketers.”

The Post developed Zeus Insights, a first-party ad targeting tool that offers contextual targeting capabilities along with user-intent predictions for marketers, according to Digiday.
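To make the pattern these publishers describe concrete, here is a minimal sketch of how a site might issue its own anonymous first-party reader ID. The cookie name, domain and attributes are invented for illustration; this is not any of these publishers’ actual implementation.

```python
import hashlib
import uuid

def issue_reader_id_cookie(domain: str) -> str:
    """Build a Set-Cookie header assigning an anonymous first-party reader ID."""
    # A random token, hashed so it carries no personally identifiable information.
    reader_id = hashlib.sha256(uuid.uuid4().bytes).hexdigest()[:32]
    # Scoped to the publisher's own domain, which is what makes it first-party;
    # SameSite=Lax keeps the cookie out of most cross-site requests.
    return (
        f"reader_id={reader_id}; Domain={domain}; Path=/; "
        "Max-Age=31536000; Secure; HttpOnly; SameSite=Lax"
    )

header = issue_reader_id_cookie("example-news.com")
print(header)
```

Because the ID is set by the publisher’s own server on its own domain, it survives the browser-level blocking aimed at third-party trackers while still letting the publisher recognize a returning reader across its pages.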

As consumers continue to demand rights over their data, we should see more publishers and companies alike rely on first-party cookies. Social media will continue to struggle with stopping misinformation from spreading.

The Tech Battle Against Misinformation Will Continue

As we enter 2020, the coverage of the U.S. presidential election will certainly ramp up. So will the spread of misinformation on social media.

“It’s likely that there will be a high volume of misinformation and disinformation pegged to the 2020 election, with the majority of it being generated right here in the United States, as opposed to coming from overseas,” Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, told Politico.

In its report, Politico makes a valid point: Policing domestic content is tricky. Social media is a readily available tool for Americans to use to partake in their democracy and their posts won’t “leave obvious markers such as ads written in broken English.” The organization also points out that bad actors are learning. For example, they can now better avoid automatic detection despite big tech pouring money into resources to dismantle fake accounts.

To make matters worse, social media has also struggled with how to handle political campaigns. We’ve seen Twitter ban political ads, Facebook openly say it will not fact-check them and Google restrict how the campaigns can target users online.

It’s also important to point out that should artificial intelligence (AI) be democratized as so many 2020 predictions foresee, this will pose another hurdle for social media.

According to the Verge, research conducted by scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University and Adobe Research showed that deepfakes are becoming easier to create every day. Imagine the kind of deepfakes that could be created with technology like Google Duplex, which uses realistic synthetic speech to make automated calls to restaurants or stores, or Adobe’s VoCo software, which lets users edit recordings of speech as easily as Photoshopping a picture.

While these technologies aren’t yet perfected (or, in Adobe’s case, released to the public), in a prediction piece for The Enterprisers Project, Max Lytvyn, head of revenue and co-founder of Grammarly, said, “More and more advanced technologies are available with little to no overhead cost or time commitment and algorithms can be effective with progressively smaller data sets. This trend democratizes AI innovation and also enables smaller and more niche AI tools to be created.” Expect to see more media companies steer away from algorithms and use more human curation instead.

Human Curation Over Algorithms

As 2019 ended, tech companies and media groups set out to rely less on algorithms and more on human curation to shine a light on original, professional journalism.

Facebook, one of media’s biggest disruptors, announced last summer it would launch a News Tab, a new section inside its mobile application that would utilize journalists to curate the day’s top stories although most other stories would still be algorithmically sorted and ranked.

While Facebook’s News Tab just launched in October, Apple, on the other hand, has been using this approach since 2015, when it replaced its Newsstand with News, a free app that matches users with their preferred publications and allows them to curate feeds they enjoy. According to the New York Times, the app attracted little fanfare at launch, but shortly after, Apple announced that humans would start selecting its top stories.

This could be a step in the right direction, as human curators may reduce the amount of misinformation that spreads online. In the Times article, Lauren Kern, Apple News’ editor-in-chief, explained that she values “accuracy over speed,” an approach that has several times kept the platform from promoting stories filled with false claims.

Even Google has taken professional journalism into account, making changes to its search rater guidelines to help original reporting surface more prominently in search results and stay there longer.

While none of these companies has steered away from algorithms entirely, the shift seems to be better for everyone involved. Users see more trusted, original reporting, and as a result, those stories get more traffic. In addition, Big Tech could help stop the spread of the misinformation and disinformation it has come under fire for. If this formula succeeds, it will be interesting to watch how human curation is incorporated into these companies, and whether there will be a ripple effect that reaches news publishers.

As Big Tech Struggles, Digital Advertising May Fall

Due to data privacy concerns and regulation laws, Big Tech may suffer a setback in advertising sales.

Last year it became clear that confidence in Big Tech was plummeting quickly. For example, Pew Research Center released a study in June showing that only 50 percent of Americans thought tech companies had a positive impact on the country, down 21 percentage points from 2015, when 71 percent held that view.

Thus, online users might be more inclined to utilize their new rights via the GDPR and the CCPA (see point number one). If enough users make this request, targeted ads would be less effective, which could lead to this setback for Big Tech.

In addition, eMarketer reports that internet users are concerned about how Facebook uses their data, which leads the research firm to expect usage of the social network to decrease in 2020: from 38 minutes daily in 2019 to 37 minutes in 2020. Losing a minute may not seem like a big difference, but when time spent is currency, those 60 seconds will end up being costly.

One advertiser, Brandon Rhoten, CMO of Potbelly Sandwich Works, told eMarketer, “We’re there to reach and hopefully influence consumers in a positive way. So, if we see a demographic shift occur in the platform, if we see reach change, if we see efficiencies shift, those are things that give us direct pause to say ‘Hey, we’re not so sure about this as the primary mechanism of advertising.’”

If other advertisers share the same thoughts as Rhoten, then digital advertising is in trouble.

It’s not just Facebook. Recently, major changes to YouTube went into effect that may impact the platform’s advertising revenue. The Verge reported that due to a settlement with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act, targeted ads will now be restricted from running on children’s videos.

In addition, the Washington Post reported the Justice Department had taken interest in opening a federal antitrust investigation into Google. In September, news came that half of the nation’s state attorneys general were also readying an investigation to identify potential antitrust violations.  

AI Investment Will Grow…but For How Long?

A new report by Charlie Beckett, a professor at the London School of Economics and Political Science, titled “New Powers, New Responsibilities: A Global Survey of Journalism and Artificial Intelligence,” revealed that many of the newsrooms surveyed are already AI-active.

The report, based on 71 news organizations from 32 different countries, said nearly half of the respondents were already using AI. Examples include robot journalism, personalization of newsfeeds, predictive analytics, speech-to-text services, photo tagging and even spell check.

At the New York Times, Project Feels uses reader insights and machine learning to determine how its articles make readers feel. Last year, Guardian Australia announced the publication of its first automated news story, produced by an automated system called ReporterMate. The Associated Press was an early adopter of AI: in 2014, the news organization partnered with Automated Insights, a technology company specializing in natural language generation software, to begin automating quarterly earnings reports and generating minor league and college game stories.
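To give a sense of what template-based “robot journalism” for earnings stories looks like, here is a toy sketch in that spirit. The function name, company and figures are invented for illustration; this is not AP’s or Automated Insights’ actual system, which is far more sophisticated.

```python
def earnings_recap(company: str, quarter: str, eps: float, prior_eps: float) -> str:
    """Turn structured earnings data into a one-sentence recap."""
    change = eps - prior_eps
    # Pick wording based on the direction of the year-over-year change.
    if change > 0:
        trend = f"up from ${prior_eps:.2f} a year earlier"
    elif change < 0:
        trend = f"down from ${prior_eps:.2f} a year earlier"
    else:
        trend = "flat with the same quarter last year"
    return f"{company} reported {quarter} earnings of ${eps:.2f} per share, {trend}."

print(earnings_recap("Acme Corp", "third-quarter", 1.42, 1.10))
# → Acme Corp reported third-quarter earnings of $1.42 per share, up from $1.10 a year earlier.
```

Real systems layer many more templates, data checks and stylebook rules on top of this idea, but the core move, mapping structured figures onto prose templates, is the same.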

The biggest challenges cited for adopting AI are lack of resources, knowledge or skills, and the fear of human workers losing jobs to AI. But respondents in Beckett’s report claim that AI will create work as well as reduce it.

The “Future of Work Survey Report” (a survey of 1,500 employed adult Americans) by Sykes showed that 67 percent of Americans think of tools, machines or software that could assist them with tasks and make their job more efficient when they hear the words “automation technologies” or “robots.”

The idea of AI as a useful resource is becoming more accepted in workplaces as leaders invest in resources and training for their employees. When asked if their employer was providing training and/or resources to help keep up with changes in technology, 44 percent answered “Yes, some” and 13 percent answered “Yes, a lot.”

With this growing acceptance, expect more players to enter the AI game, but in the end, will the human touch win out?

