
Monday, October 24, 2016

Lessons Learned from the DynDNS DDoS


As everyone probably knows, Dyn was recently hit by a massive DDoS attack which in turn caused many large sites to be either unresponsive or extremely sluggish. Dyn was hosting DNS records for these organizations when a flood of SYN packets and garbage DNS queries against their DNS service brought it to its knees. The attack caused legitimate DNS requests for these sites to be “lost in the mix,” with a steady flow of garbage requests saturating Dyn's DNS service. After watching the attack play out, I had a few thoughts on the subject I thought I’d share.
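
To make the “lost in the mix” point concrete, here’s a toy sketch. The single bounded queue and the numbers are my own simplification for illustration, not Dyn’s actual architecture; it just shows how legitimate lookups get crowded out once junk queries dominate the arrival rate.

```python
import random

# Toy model (not Dyn's real infrastructure): a DNS service that can only
# take on a fixed number of requests per interval. When junk queries vastly
# outnumber legitimate ones, most legitimate lookups never get serviced.

QUEUE_CAPACITY = 1000      # requests the service can accept per interval
LEGIT_QPS = 500            # legitimate queries arriving per interval
FLOOD_QPS = 50_000         # garbage queries arriving per interval

arrivals = ["legit"] * LEGIT_QPS + ["junk"] * FLOOD_QPS
random.shuffle(arrivals)                 # both kinds arrive interleaved
serviced = arrivals[:QUEUE_CAPACITY]     # everything past capacity is dropped

legit_served = serviced.count("legit")
print(f"legitimate queries answered: {legit_served}/{LEGIT_QPS} "
      f"({100 * legit_served / LEGIT_QPS:.1f}%)")
```

Run it a few times and the legitimate answer rate hovers around 2%, which is roughly what “lost in the mix” feels like from the outside.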

I’ve personally fought DDoS attacks in the past and they’re not fun. To be blunt, they’re a pain in the butt. Many times they come out of nowhere, and it’s an all-hands-on-deck situation when the flood starts. But after seeing the recent attacks on Krebs, OVH and now Dyn, it seems everyone on Twitter has become a DDoS expert overnight. Dealing with DDoS attacks takes skill and, most importantly, experience, so let’s not take the subject lightly. We need to learn from our own mistakes and from the incidents of others to achieve the best security we can possibly offer. Let’s not just be Twitter warriors with nothing to back it up. Okay, I feel better now.

That being said, now that we all know DDoS is a huge issue (because the media doesn’t lie, of course!), those who work in the security field can’t plead ignorance anymore. Just because your industry doesn’t normally see DDoS attacks doesn’t mean one won’t pop up and smack you in the face. With the tools and vulnerable systems available to create massive botnets, we might only be seeing the beginning of what’s in store. Everyone in charge of security needs to start creating a DDoS runbook today, and it needs to become a tabletop exercise within your incident response plan. Incident handlers and groups outside of security need to understand how to handle DDoS attacks when they occur. The last thing you want is for an attack to hit without any preparation. The Dyn team did a great job explaining to the public how the attack was being handled and gave frequent updates through their status site: www.dynstatus.com. This is important during an attack that knocks you off the grid. Communication is key during this time, especially to your customers.
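
As one concrete item a runbook can codify (the hostnames and thresholds below are hypothetical), a quick check of whether your public names still resolve, and how fast, helps responders tell whether the problem is DNS or something further downstream.

```python
import socket
import time

# Hypothetical hostnames and thresholds -- substitute your own.
HOSTNAMES = ["www.example.com", "api.example.com"]
SLOW_SECONDS = 1.0   # flag lookups slower than this

for name in HOSTNAMES:
    start = time.monotonic()
    try:
        # Collect every address the resolver hands back for this name.
        addrs = {info[4][0] for info in socket.getaddrinfo(name, 443)}
        elapsed = time.monotonic() - start
        status = "SLOW" if elapsed > SLOW_SECONDS else "OK"
        print(f"{status:4} {name} -> {sorted(addrs)} ({elapsed:.2f}s)")
    except socket.gaierror as exc:
        print(f"FAIL {name} did not resolve: {exc}")
```

Something this small, run on a schedule from outside your own network, is often the first signal that your DNS provider is the thing under attack rather than your servers.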

Another thing to consider is how a DDoS attack will be mitigated. With attacks clocking in at over 1 Tbps, there is no on-premise DDoS mitigation appliance in the world that’s going to handle the load right off the bat. Not only will the appliances not physically handle the load, but the ISPs will have issues carrying traffic of this magnitude. The current infrastructure just isn’t designed to handle this amount of traffic traversing its network. The best method of mitigating these attacks isn’t with onsite DDoS appliances, but with cloud providers like Akamai (formerly Prolexic), Cloudflare, or Google’s Jigsaw (Project Shield). They’ve positioned their networks to be resilient, with multiple scrubbing centers throughout the world to absorb and filter malicious traffic as close to the source as possible. By using anycast and having customer traffic directed to them via BGP, these cloud providers make sure they don’t become a bottleneck and allow customers to receive large amounts of bandwidth via proxy. I personally feel this is the only way to efficiently defend against the volumetric attacks we’ve seen this past month. Also, Colin Doherty was announced as the new CEO of Dyn on October 6th. He was formerly the CEO of Arbor Networks (a company selling and specializing in on-premise DDoS solutions). I don’t know if this had anything to do with the situation, but it’s interesting. If anything, hopefully his experience in the industry helped with the mitigation.
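
If you go the cloud-scrubbing route, it’s worth verifying you’re actually behind it. Here’s a minimal sketch under assumed conditions: the prefixes are placeholders, and your provider would publish the real proxy ranges to check against.

```python
import ipaddress
import socket

# Placeholder prefixes -- replace with the proxy ranges your scrubbing
# provider publishes. The check passes only if every A record for the
# hostname lands inside one of those ranges.
PROVIDER_PREFIXES = [ipaddress.ip_network(p)
                     for p in ("198.51.100.0/24", "203.0.113.0/24")]

def is_behind_provider(hostname: str) -> bool:
    addrs = {info[4][0]
             for info in socket.getaddrinfo(hostname, 80, socket.AF_INET)}
    return all(any(ipaddress.ip_address(a) in net for net in PROVIDER_PREFIXES)
               for a in addrs)

print(is_behind_provider("www.example.com"))
```

The general caveat with this proxy model (not specific to the Dyn incident) is that if DNS still hands out your origin’s real address anywhere, attackers can aim traffic straight at it and bypass the scrubbing layer entirely.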

The cloud providers who absorb and mitigate DDoS traffic on their networks are going to have to expand their available bandwidth quickly. Many cloud-based DDoS mitigation providers aim to keep their capacity a certain percentage above the largest DDoS attack on record, which means every time the record grows, their bandwidth target grows with it. They too have to scale toward the attacks as they come in: they’re not only dealing with the one large attack occurring today, but possibly three more like it tomorrow at the same time. These providers need to keep a close eye on bandwidth utilization and attack sizes monthly to keep up with the growing botnets.
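
As a back-of-the-envelope illustration (every number here is made up, keyed only to the >1 Tbps figure above), the capacity math looks roughly like this:

```python
# Rough capacity planning sketch with invented numbers.
largest_attack_gbps = 1_200    # on the order of the >1 Tbps attacks seen in 2016
headroom = 0.5                 # stay 50% above the largest attack on record
concurrent_attacks = 3         # worst case: several record-size attacks at once

required_capacity_gbps = largest_attack_gbps * (1 + headroom) * concurrent_attacks
print(f"target scrubbing capacity: {required_capacity_gbps:,.0f} Gbps")
# -> target scrubbing capacity: 5,400 Gbps
```

The uncomfortable part is that the first line is set by the attackers, not the defenders, so the target keeps moving.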

I’m not sure what happened with the Dyn attack from a mitigation standpoint, but it’s a good opening for customers to start speaking with their third-party vendors about incident response, especially DDoS. Many third parties say they have DDoS prevention, but how? Is it homegrown? On-premise? In the cloud? These questions need to be answered. Also, if a DDoS hits a SaaS provider, will all of its clients go down? These and similar questions need to be asked of your cloud providers to validate that your hosted services will be available when needed.

IoT will continue to be an issue going forward when it comes to DDoS. I don’t see anything in the near future putting a stop to the abuse of IoT systems on the internet. In Brian Krebs’ latest article he mentions Underwriters Laboratories and how their mark has served as a sign of approval for electrical devices going to market. I think something similar has to exist in the future that assists with reviewing the code of appliances before they’re put onto the internet. At this point I’d settle for standard OWASP Top 10-type scans, but I’d like to see static analysis testing for vulns as well. I don’t know how this will work with systems overseas, since most of the Mirai botnet consisted of infected DVRs and IP cameras from a Chinese company named XiongMai Technologies. Either way, we need to at least follow standard security practices of password management, patching and secure coding when it comes to IoT devices. This isn’t rocket science, especially when many of these systems were using default hardcoded passwords and being logged into remotely over telnet. Sigh.
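
As a small defensive example (the addresses are placeholders; only probe equipment you own), even a few lines of scripting can flag the kind of telnet exposure Mirai abused:

```python
import socket

# Minimal audit sketch: check devices on your own network for an exposed
# telnet service (23/tcp), the same service Mirai logged into with default
# credentials. The addresses below are placeholders.
DEVICES = ["192.0.2.10", "192.0.2.11"]

for host in DEVICES:
    try:
        with socket.create_connection((host, 23), timeout=2):
            print(f"{host}: telnet (23/tcp) is OPEN -- change default "
                  f"credentials and disable the service if you can")
    except OSError:
        print(f"{host}: telnet not reachable")
```

It won’t tell you whether the firmware is any good, but it catches the lowest-hanging fruit before someone else’s scanner does.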

My concern with botnets of this size is that someone is going to quietly build multiple IoT botnets and unleash something with traffic levels that can’t be stopped. There are other vulnerable IoT systems on the web which will eventually be found, but what if next time they aren’t used right away? What if the creator keeps finding vulns in different systems and ends up with a botnet-of-botnets with enough power to overwhelm even the largest DDoS cloud providers? Now take this a step further: what if it were then used for political purposes or terrorism? I know this sounds like fear mongering, but it’s a valid concern; in that scenario, people could be hurt or killed in the process. This worries me given the number of insecure IoT devices being connected to the internet today. It might seem farfetched, but it’s no longer outside the scope of reality. The Mirai botnet was identified in the Dyn attack (by Flashpoint, Level 3 and Akamai), but it seems there were other systems being controlled in the botnet too. There appears to be a never-ending pool of IoT devices that attackers can select from at this point.

As of right now I haven’t seen any official motive for the attack, but there doesn’t always have to be one. I’ve seen people claim it was a test run for the United States election, WikiLeaks took credit for it because America cut off Assange’s internet access, internet activists blamed Russia, and so on. Either way, everyone in security needs to be prepared for these attacks, and if you’re not already planning now, at least start thinking about it. We’re no longer given the luxury of being comfortably numb.
