Later in August, the Times website went down again, this time as a result of a Distributed Denial of Service (DDoS) attack launched by the Syrian Electronic Army (SEA), a group of hackers who support embattled President Bashar Assad. The attack, which crashed the Times website at about 3 p.m. Eastern time, wasn’t fully resolved until the next morning.
The hacking of the Times website is only the latest in a string of attacks that have disrupted the operations of the Financial Times, the Washington Post and the Las Vegas Sun, to name just a few.
As media organizations continue to grow their presence online, these attacks underscore the vulnerability of their websites, where software, vendors, users, advertisers and layers upon layers of complexity are often integrated and piled upon one another, even as traffic continues to grow.
DDoS attacks are one of the largest threats media companies face online today. Political coverage, a cornerstone of most journalistic enterprises, can easily spur strong opinions that lead to attacks like the one the Times faced. What makes these attacks so difficult for IT departments to handle, especially those at smaller media companies with limited budgets, is that anyone can commission a DDoS attack for a nominal fee. In addition, attackers can easily use several different routes and protocols to shut down a website.
In the Times case, the hacker group attacked the company’s domain name registrar, Melbourne IT, gaining log-in credentials to its system through a phishing attack on a sales partner. The group changed the authoritative Domain Name System (DNS) servers to point to Syrian Electronic Army websites, in effect redirecting the Times’ traffic to its own pages, a particularly effective type of attack, according to Cory Von Wallenstein, the Chief Technology Officer of Dyn, which provides DNS services for Twitter.
“What makes this attack so dangerous is what’s called the TTL… or time to live,” said Wallenstein. “Changes of this nature are globally cached on recursive DNS servers for typically 86,400 seconds, or a full day. Unless operators are able to purge caches, it can take an entire day (sometimes longer) for the effects to be reversed.”
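The caching behavior Von Wallenstein describes can be illustrated with a minimal sketch of a recursive resolver’s cache (the class and names here are hypothetical and heavily simplified; real resolvers are far more complex):

```python
import time

class CachingResolver:
    """Illustrative sketch of recursive-resolver caching, not a real resolver."""

    def __init__(self, clock=time.time):
        self._cache = {}        # domain -> (record, expiration timestamp)
        self._clock = clock

    def put(self, domain, record, ttl=86400):
        # Cache the answer for the record's TTL -- 86,400 seconds is a full day.
        self._cache[domain] = (record, self._clock() + ttl)

    def lookup(self, domain):
        entry = self._cache.get(domain)
        if entry and self._clock() < entry[1]:
            # Still inside the TTL window: the cached (possibly hijacked)
            # answer keeps being served, even if the registrar has since
            # corrected the record upstream.
            return entry[0]
        return None             # expired: the resolver must re-query upstream

    def purge(self, domain):
        # Operators who can purge their caches recover immediately.
        self._cache.pop(domain, None)
```

The point of the sketch: once a hijacked record is cached, fixing it at the registrar does nothing for resolvers already holding the bad answer; they serve it until the TTL lapses or the cache is purged.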
If DDoS attacks aren’t enough to keep IT directors up at night, software vulnerabilities are another leading cause of downtime for most media websites. Regular scanning of all systems and regular software updates have led to a decrease in the number of vulnerabilities uncovered at many websites, according to a report by WhiteHat Security. Still, according to Jeremiah Grossman, co-founder and Chief Technology Officer of WhiteHat Security, many media companies are playing catch-up when it comes to their web security.
“This collective data has shown that many organizations do not yet consider they need to proactively do something about software security. It is apparent that these organizations take the approach of ‘wait-until-something-goes-wrong’ before kicking into gear, unless there is some sense of accountability,” said Grossman. “Website security is an ever-moving target, and organizations need to better understand how various parts of the software development lifecycle (SDLC) affect the introduction of vulnerabilities, which leave the door open to breaches.”
A good number of smaller media companies rely on content management systems like Drupal, Joomla! and WordPress to power their websites. While these systems make website development and maintenance easy and inexpensive for budget-conscious media companies, all three can be a haven for hackers looking to take advantage of poorly designed free plugins and code vulnerabilities.
“The problem comes in when you have a CMS that has plugins that are developed by third party vendors,” said Ben Fisher, the lead consultant at Steady Demand and publisher of HostingNews.com. “Often free plugins are rarely updated and could contain holes in security that a site owner will probably not take the time to investigate and can compromise a website very easily.”
According to Kurt Hagerman, the Director of Information Security for FireHost, application layer threats are the most prevalent path to a site-wide shutdown and continue to be on the rise. “Since media companies depend so heavily on the Internet to get their content out, application vulnerabilities represent the greatest threat,” Hagerman says.
Web security also isn’t limited to lines of code. According to Brett Haines, the Director of Operations for Atlantic.Net, physical security of data and web services is just as important as web security. “For most, physical security is easy to conceptualize as you can see it but it is a concern that must not be forgotten,” said Haines. “Cameras, access logging, biometric access, locked cabinets or cages, and conduit for all exposed wiring are just a few steps to physically secure your data.”
Ways to protect your organization
For media companies large and small, it all starts with risk assessment, where individual threats and risks are considered alongside the steps (and costs) needed to mitigate those risks. The Roanoke Times might not face the same level of external threat as the Washington Post, but both should have a detailed plan that accounts for the required internal infrastructure and identifies which elements of web management are best outsourced, given each organization’s risk tolerance and budget.
“There is no magic bullet or formula for allocating budget in percentages to the various categories; this is unique to each organization,” says Hagerman.
According to Haines, every media company should not only have a robust backup solution in place for restoring services, but should also keep employees fully trained and updated on those procedures in the event of a shutdown. “It is common to have data and physical hardware backups for most companies,” Haines said. “However, some fail to plan for employee turnover in key positions, or even allow that key employee forget how to restore the services properly, as the last restore exercise was a year or more ago.”
In terms of potential application vulnerabilities, Hagerman notes there are three primary controls that media companies can put in place to help mitigate their risk. First, developers should be educated on secure coding practices, and media companies should put processes in place to ensure those practices are followed. Second, application vulnerability testing should be performed on code throughout the development process, especially before it’s released into production. And lastly, every media company, large and small, should actively manage a web application firewall that filters all public traffic, providing additional protection against application vulnerabilities.
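As one concrete illustration of the secure coding practices Hagerman mentions, consider SQL injection, a classic application-layer vulnerability. The sketch below (generic, not drawn from any specific company’s code) contrasts a vulnerable query built by string concatenation with a parameterized one:

```python
import sqlite3

# Toy article database standing in for a CMS backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT, published INTEGER)")
conn.execute("INSERT INTO articles VALUES (1, 'Front page', 1), (2, 'Draft', 0)")

def find_articles_unsafe(term):
    # Vulnerable: attacker-controlled `term` is spliced into the SQL string.
    return conn.execute(
        f"SELECT title FROM articles WHERE title = '{term}' AND published = 1"
    ).fetchall()

def find_articles_safe(term):
    # Parameterized: the driver binds `term` as data, never as SQL syntax.
    return conn.execute(
        "SELECT title FROM articles WHERE title = ? AND published = 1", (term,)
    ).fetchall()

# A crafted input like "x' OR '1'='1' --" makes the unsafe query leak the
# unpublished draft; the parameterized query treats it as a literal title.
```

Application vulnerability testing of the kind Hagerman recommends would exercise exactly these crafted inputs against every query path before release.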
Media companies should also consider running at least two sites in different locations in a fully redundant configuration, so that an outage at one location won’t lead to a site-wide collapse.
“A secure, high performance infrastructure is a good high level goal,” suggests Hagerman, who notes that solid perimeter security, including at least strong network and application level firewalls, is a must. “An infrastructure that supports scalability is also important and can help control and keep costs as low as possible while providing the performance needed to support the business.”
WhiteHat Security notes that security training for programmers can have a very positive effect on the overall security of a media organization’s website. According to the report, organizations that provide some amount of instructor-led or computer-based software security training for their programmers experienced 40 percent fewer security vulnerabilities and resolved them 59 percent faster.
What to do if and when your site goes down
When the New York Times went down, the news cycle didn’t go down with it. In the midst of an afternoon- and evening-long shutdown, the Times began posting “key news articles” in full on its popular Facebook page, marking the first time the newspaper’s articles debuted anywhere but its own properties.
Throughout the afternoon, the Times posted articles like “Egypt Declares State of Emergency as Scores Are Killed in Crackdown”, “Ahead of Israeli-Palestinian Peace Talks, Rocket Fire and Air Strikes” and “Rep. Jesse Jackson Jr. Sentenced to 2.5 Years” directly to its Facebook page (they’re still up at Facebook.com/nytimes/notes if you want to take a look) and used its other social media properties, including its 9-plus million followers on Twitter, to direct traffic to its Facebook page.
Another alternative is to have a separate backup website ready to go in the event of a crash. In the Times case, the August 28 attack didn’t affect nytco.com, the site normally devoted to presenting the “who we are” and “what we do” of the Times organization. So editors made the decision to post content there until the attack was resolved.
“We decided to publish yesterday on nytco.com because it was available to us and not impacted by the attack and had the benefit of being an already established and clearly recognizable domain associated with the Times,” Times spokesperson Eileen Murphy told Poynter. “It offered a good alternate publishing platform.”
Fisher suggests that when IT directors and editors develop their backup editorial plans in the event of a site failure, one social network should be at the forefront of their plans: Google+.
Because it drives little direct traffic, Google+ often loses the battle for editorial attention to social networks such as Facebook and Twitter, which deliver a more robust stream of referrals. But there are many benefits to using it as the direct posting source of news and information in the event of a site meltdown, including one factor that matters greatly to media companies: Search Engine Optimization, or SEO.
“It’s important to remember that everything that happens in Google+ also happens all across Google,” notes Fisher. “So if you’re posting a news story directly to Google+, it will become available to users in a Google search within seven seconds.”
Fisher says a smart backup strategy would be to post important content directly to the media organization’s Google+ page during an outage, and to use other social networks, like Facebook and Twitter, mainly to drive traffic and readers there.
Another benefit of posting content on Google+ is that when your site does come back online, you can re-post that same content on your own domain, and Google will redirect search traffic to your original domain rather than punish you SEO-wise for the duplicate posting.
“Most people don’t know the benefits Google+ has to offer news organizations” said Fisher. “Google may be gaming the system, but it enables you to keep your readers engaged while your site is down, and once it’s live again, you have a direct link to the story on your Web site already in Google search.”
In addition to finding the proper venue to keep important reporting operations running during a shutdown, it is equally important to keep everyone updated on the progress of repairs.
“One of the largest complaints received when downtime occurs is if a client is in the dark about what is going on,” says Haines. “The population is becoming more and more aware that computer systems fail and that services will go offline. Most will be angry about the service being offline but when they are informed as to what is or had occurred, most will apply some understanding to the issue.”
The key is to have a plan ready to go and editors trained to implement it if your website happens to crash. In the Times case, the crash happened without the Times’ own web server being hacked, so any media organization could be vulnerable at any given moment.
“There is no ‘set it and forget it’ system when it comes to IT,” said Haines. “It is a must to continually work towards improving your systems while maintaining them.”
Rob Tornoe is a cartoonist and reporter for Editor & Publisher, and can be reached at firstname.lastname@example.org.