Tuesday, November 8, 2016

Change the candidates by changing the process


In order to establish change in a nation we need to bring solutions to issues, not complaints. During the 2016 election cycle I’ve heard more griping about the available candidates than I’ve ever heard during any previous Presidential election. The reason citizens are voting today isn’t because they think their candidate will serve this country with respect, but because they’re scared of the “other” candidate winning. People are no longer voting for candidates of quality; they’re being pushed by fear to vote for the candidate they dislike less. When we cast our ballot for the lesser of two evils we’re still voting for evil. That’s what people don’t seem to realize when they follow this logic.

Anyway, that’s the issue, but we’re not here to just complain without action. What can we do to fix this? In my opinion we need more transparency in government. We’ve seen many candidates promoted based on wealth and on being part of an already established arrangement. With the release of documents by WikiLeaks we saw just how much of this was true within the Democratic party. When your own party leadership has to resign over back channel deals made to promote one candidate and suppress another, democracy has been taken from the people. When a candidate makes crude claims you personally disagree with, but they represent the party you’ve voted for in the past, and you now feel remorse about your available options, we have a problem as a nation. This is what many Americans feel right now, and it’s these two candidates that will receive the majority of the votes during this election.

In order to add more transparency to this process and to bring about change from the people, I think we should add an additional voting option. I’ve heard many people say this election that they’re not only upset about the candidates, but so annoyed with the options that they’re not voting at all. This is the type of outrage that our government needs to hear through the casting of ballots, but when your protest vote doesn’t get recorded, how do you bring about change?

What if, hypothetically, we had the option to vote “abstain” on the ballot? Many people already abstain by not voting in protest, but what if your protest vote was tallied and recorded? We’ve tried to start other parties and this hasn’t really taken hold with the majority of Americans in a two party system, but if there was an option to record your displeasure with the available candidates, it might help produce better party nominees. Too many people feel the need to vote for anyone within their party, but this would allow you to go outside it. Additional parties would be a welcome change, but having the option to cast a recorded protest vote is another idea I find very interesting.

Now you’re probably saying, “So what, you’re just taking a vote away from a candidate. Who cares?” Well, what if enough people voted in this manner that the majority “abstained” from selecting an available candidate? If a state, or even the country, isn’t satisfied with the candidates, the decision gets pushed to the House of Representatives to vote on, just as it would if no candidate reached 270 electoral votes. This of course isn’t perfect, but it’s an option, it adds transparency to the voting process, and it allows candidates to be held responsible. Essentially, it gives the power back to the people, not the party.

I’d be very interested in your thoughts.

Wednesday, October 26, 2016

The Digital Defenders: Privacy Guide for Kids (Comic)

Check out EDRi's "Digital Defenders guide on privacy". It's a comic directed towards kids about the benefits of privacy and security. It covers privacy on social media, password security, smartphones, and even how to use Signal and Tor, all in a well-drawn comic. Overall, this is an awesome piece of work that drives home a great message to children.

If possible, please donate to their cause here: Donations.

Tuesday, October 25, 2016

Threat Intelligence Sharing Should Start at the Top

How many vendor phone calls do you dodge every day? One of the most consistent calls I receive is from vendors selling the latest, greatest “Threat Intelligence” product.  If you are not familiar with threat intelligence, it is the aggregation of suspicious or known malicious indicators from multiple sources around the world.  This information is then used to warn subscribers of impending threats, a way for a subscriber of a particular service to get “actionable intelligence” before an attack lands.  Sounds neat!
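Strip away the marketing and the core mechanic is simple. Here is a minimal sketch in Python of what feed aggregation boils down to; the indicators and helper names are made up purely for illustration:

```python
# Hypothetical sketch of a threat intelligence feed: aggregate indicators
# (IPs, domains, hashes) from multiple sources, then match activity against
# the merged set. All indicators below are made-up examples.

known_bad = set()

def ingest_feed(indicators):
    """Merge one source's indicators into the shared blocklist."""
    known_bad.update(indicators)

# Each source would normally be a subscription feed or a partner report.
ingest_feed(["198.51.100.23", "203.0.113.7"])        # e.g., honeypot hits
ingest_feed(["203.0.113.7", "evil-domain.example"])  # e.g., shared by a peer

def is_suspicious(indicator):
    """The 'actionable intelligence' part: does this match a known threat?"""
    return indicator in known_bad

print(is_suspicious("203.0.113.7"))  # True -> warn the subscriber
```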

However, I have heard at least one brave webcaster declare that threat intelligence is a steaming pile of dung. This is a bold statement in a world that seems overrun with constant news of cyber-attacks and an even louder tocsin from the public about the urgency of stopping them. After a recent meeting with a threat intelligence provider, I too am starting to hold my nose when I am given the pitch. Most threat intelligence vendors will proudly speak of information sharing; that is, when they see a pattern of malicious traffic forming against one of their clients, they will share that information through their threat intelligence feed with their other clients.

We are all aware by now of the unprecedented DDoS attack against Brian Krebs in mid-September.  This attack was the largest DDoS ever witnessed on the internet; traffic clocked at 620 Gbps was aimed at Brian Krebs’ server. We all felt threatened that such an attack could be so easily carried out using all of the unsecured IoT devices out there.  We were all equally shocked at Akamai’s initial response to dump Brian, yet we understood the difficult business decision they had to make to protect their paying customers.

So, why am I all of a sudden holding my nose about threat intelligence?  A vendor was demonstrating their “superior threat intelligence product” and part of their presentation included a boastful commentary about how they saw the attack against Krebs forming before it took place.  Their excellent intelligence gathering capabilities allowed them to see the attack against Akamai in formation.

Allow that to sink in for a moment.

Here are some questions for that vendor: Are you actually boasting that you stood idly by when you witnessed the formation of the greatest attack to date against the entire internet? 
And this model you are selling derives its power from information sharing?

The incongruence of ideology here is somewhat baffling. It is like boasting about your superior powers in space defense, yet when an asteroid capable of an extinction-level event is heading towards the planet, you choose to stand by because it will not impact your country.  What is the logical or ethical sense in that?

I understand business decisions, and sharing with a competitor is generally considered a poor one, but if threat intelligence companies won’t share their information with another intelligence company in the greater interest of preserving the internet, why should they expect anyone to subscribe to their sharing and intelligence service? Threat intelligence sharing should start at the top.

Guest Author: Art Logan

Monday, October 24, 2016

Lessons Learned from the DynDNS DDoS


As everyone probably knows, Dyn was recently hit by a massive DDoS which in turn caused large sites to be either nonresponsive or extremely sluggish. Dyn was hosting DNS records for these organizations when a SYN flood attack against their DNS service brought it to its knees. The attack caused legitimate DNS requests for these sites to be “lost in the mix” with a steady flow of garbage requests saturating Dyn's DNS service. After watching the attack play out, I had a few thoughts on the subject I thought I’d share.

I’ve personally fought DDoS attacks in the past and they’re not fun. To be bluntly honest, they’re a pain in the butt.  Many times they come out of nowhere and it’s an all-hands-on-deck situation when the flood starts. But after seeing the recent attacks on Krebs, OVH and now Dyn, it seems that everyone on Twitter has become a DDoS expert overnight. It takes skill and, most importantly, experience to deal with DDoS attacks, so let’s not take the subject lightly. We need to learn from our mistakes and from the incidents of others to achieve the best security we can possibly offer. Let’s not just be Twitter warriors with nothing to back it up. Okay, I feel better now.

That being said, now that we all know DDoS is a huge issue (because the media doesn’t lie, of course!), those who work in the security field can’t plead ignorance anymore. Just because your industry doesn’t normally see DDoS attacks doesn’t mean they won’t pop up and smack you in the face now. With the tools and vulnerable systems available to create massive botnets, we might only be seeing the beginning of what’s in store. Everyone in charge of security needs to start creating a DDoS runbook today, and it needs to become a tabletop exercise within your incident response plan (a skeleton of what that might look like is sketched below). Incident handlers and groups outside of security need to understand how to handle DDoS attacks when they occur. The last thing you want is for an attack to occur without any preparation. The Dyn team did a great job explaining to the public how the attack was being handled and gave frequent updates through this site: www.dynstatus.com. This is important during an attack that knocks you off the grid. Communication is key during this time, especially to your customers.
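For what it’s worth, a runbook doesn’t have to start as anything fancy. Here is a bare-bones sketch in Python of the phases one might walk through; every phase name and step here is a placeholder to adapt, not a standard:

```python
# Skeleton of a DDoS runbook you could exercise as a tabletop.
# All phases and steps are illustrative placeholders.
DDOS_RUNBOOK = {
    "detect":   ["edge/NetFlow alerts fire", "confirm with ISP or DNS provider"],
    "triage":   ["classify the attack: volumetric, protocol, or application layer"],
    "mitigate": ["engage the scrubbing provider", "swing BGP/DNS as rehearsed"],
    "comms":    ["update the status page regularly", "notify customers and execs"],
    "review":   ["post-incident: what held, what broke, update this runbook"],
}

for phase, steps in DDOS_RUNBOOK.items():
    print(phase.upper())
    for step in steps:
        print(f"  [ ] {step}")
```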

Another thing to consider is how a DDoS attack will be mitigated. With attacks clocking in at over 1 Tbps, there is no on-premise DDoS mitigation appliance in the world that’s going to handle the load right off the bat. Not only will the appliances not physically handle the load, but the ISPs will have issues delivering traffic of this magnitude. The current infrastructure just isn’t designed to handle this amount of traffic traversing its network. The best method of mitigating these attacks isn’t with onsite DDoS appliances, but with cloud providers like Akamai (formerly Prolexic), Cloudflare, or Google Jigsaw. They’ve positioned their networks to be resilient, with multiple scrubbing centers throughout the world to absorb and filter the malicious traffic as close to the source as possible. By using anycast and having customer traffic directed to them via BGP, these cloud providers make sure they don’t become a bottleneck and allow customers to receive large amounts of bandwidth via proxy. I personally feel this is the only way to efficiently defend against the volumetric attacks we’ve seen this past month.

Also, Colin Doherty was announced as the new CEO of Dyn on October 6th. He was formerly the CEO of Arbor Networks (a company selling and specializing in on-premise DDoS solutions). I don’t know if this had anything to do with the situation, but it’s interesting. If anything, hopefully his experience in the industry helped with the mitigation.

The cloud providers who absorb and mitigate DDoS traffic on their networks are going to have to expand their available bandwidth quickly. Cloud based DDoS mitigation providers generally aim to keep their capacity a certain percentage higher than the largest DDoS attack on record, increasing it each time attacks grow. This is because they too have to scale toward the attacks as they come in. They’re not only dealing with the one large attack occurring today, but possibly three more like it tomorrow at the same time. These providers need to keep a close eye on bandwidth utilization and attack size monthly to keep up with the growing botnet sizes.
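To make that concrete, here is some back-of-the-envelope math; only the 620 Gbps figure comes from the Krebs attack mentioned above, while the headroom percentage and concurrency assumption are mine for illustration, not industry numbers:

```python
# Rough capacity planning for a scrubbing provider. Only the 620 Gbps
# figure (the Krebs attack) is from the post; the rest is assumed.
largest_attack_gbps = 620      # largest publicly reported attack at the time
headroom = 0.50                # stay 50% above the record (assumed policy)
concurrent_attacks = 3         # plan for several record-size attacks at once

required_capacity = largest_attack_gbps * (1 + headroom) * concurrent_attacks
print(f"Capacity target: {required_capacity:.0f} Gbps")  # 2790 Gbps
```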

I’m not sure what happened with the Dyn attack from a mitigation standpoint, but it’s a good opening for customers to start speaking with their third party vendors about incident response, especially DDoS. Many third parties say they have DDoS prevention, but how? Is it home grown? On-premise? In the cloud? These questions need to be answered.  Also, if a DDoS hits a SaaS provider, will all of its clients go down? These and similar questions need to be asked of your cloud providers to validate that your hosted services will be available when needed.

IoT will continue to be an issue going forward when it comes to DDoS. I don’t see anything in the near future putting a stop to the abuse of IoT systems on the internet.  In Brian Krebs’ latest article he mentions Underwriters Laboratories and how its seal of approval has long served as a sign that electronics are fit to go to market. I think there has to be something similar in the future that assists with reviewing the code of appliances before they’re put onto the internet. At this point I’d settle for standard OWASP Top 10 type scans, but I would love to see static analysis testing done for vulns. I don’t know how this will work with systems overseas, since most of the DVRs and IP cameras infected by Mirai came from a Chinese company named XiongMai Technologies. Either way, we need to at least follow standard security practices of password management, patching and secure coding when it comes to IoT devices. This isn’t rocket science, especially when many of these systems were using default hardcoded passwords and could be logged into remotely over telnet. Sigh.
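If you’re wondering whether any device like that is sitting on your own network, a first check can be as simple as the sketch below. The address range is an example; only scan networks you’re authorized to test:

```python
# Defensive sketch: look for hosts on your own network with telnet (port 23)
# exposed -- the same door Mirai walked through. 192.168.1.0/24 is just an
# example range; substitute your own.
import socket

def telnet_open(host, timeout=1.0):
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(1, 255):
    host = f"192.168.1.{i}"
    if telnet_open(host):
        print(f"{host} has telnet exposed -- change defaults or disable it")
```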

My concern with botnets of this size is that someone’s going to quietly build multiple IoT botnets and unleash something with traffic levels that can’t be stopped. There are other vulnerable IoT systems on the web which will eventually be found, but what if next time they aren’t used right away? What if the creator keeps finding other vulns in different systems and ends up with a botnet-of-botnets with enough power to overwhelm even the largest DDoS cloud providers? Now take this a step further: what if this was then used for political purposes or terrorism?  I know this sounds like fear mongering, but it’s a valid concern. In that case, people could be hurt or killed in the process. This is my worry with the amount of insecure IoT devices being connected to the internet today. It might seem farfetched, but it’s no longer outside the scope of reality. The Mirai botnet was identified (by Flashpoint, Level 3 and Akamai) as being used in the Dyn attack, but it seems that there were other systems being controlled in the botnet too. At this point there’s a seemingly never-ending pool of IoT devices that attackers can select from.

As of right now I haven’t seen any official motive for the attack, but there doesn’t always have to be one. I’ve seen people claim it was a test run for the United States election, WikiLeaks took credit for it over America pulling Assange’s internet access, internet activists blamed Russia, etc. Either way, everyone in security needs to be prepared for these attacks, and if you’re not already planning now, at least start thinking about it. We’re no longer given the luxury of being comfortably numb.

Tuesday, October 18, 2016

WikiLeaks and the Dead Man's Code

No matter how you personally feel about Julian Assange and his organization WikiLeaks, the silencing of his internet access is a clear attempt at pressuring him not to release the information he's in possession of. At this point, the cutting of his internet in the Ecuadorian embassy appears to be the action of a state actor attempting to quiet WikiLeaks. I think it would be foolish to believe this will stop WikiLeaks from moving on with their mission of transparency; it's more of a power move by those concerned about what he might have.

WikiLeaks as an organization has proven resilient against attacks in the past (whether the financial blockade in which Visa, Mastercard and PayPal denied the processing of donations to their site, Amazon dropping them from its service, or constant DDoS attacks against their site), but this particular attempt was more personal. I'm not sure what the mindset was behind removing his internet access, but I would have to think those who orchestrated this outage knew he'd have contingency plans in place for something of this nature.

Yesterday there were multiple tweets from the WikiLeaks Twitter account which people called a "Dead Man's Code". This started rumors that Assange had been killed and that these were decryption codes for sensitive information about to be released. The tweets have since been deleted, but are now considered pre-commitment codes, a way to prove the authenticity of any downloads of the document dumps WikiLeaks has in their possession. After the latest dumps concerning Hillary Clinton there have been rumors that the documents were being edited, or that they were fake. Maybe this is WikiLeaks' attempt to let people validate them before downloading.

Either way, it's a difficult place for both parties involved. Assange has been holed up in the embassy for years and is supposedly in bad health. It would be dangerous to assume that someone who holds potentially damaging information against another party could be pressured into falling in line. He's cornered right now, and that makes him even more dangerous to his opposition.

It should also be mentioned that WikiLeaks only publishes what they're given. There's a fair amount of editing done to the documents themselves, but they're given to this organization because people feel the need to shed light on what they deem inappropriate behavior. If there's information damaging to people within these leaks, it wasn't this group that went out and "stole" it; they were given the documents, and WikiLeaks has made it their duty to attempt to bring transparency to situations they deem important.

We need to keep perspective when thinking about WikiLeaks. Many people don't like the organization because of Assange's ego, the way they seem to attack certain individuals, or the damage these documents may cause to a group. At the end of the day, it's my opinion that trying to intimidate WikiLeaks into going quiet also intimidates whistleblowers out of having a voice. This, in my opinion, is bigger than Julian's and Hillary's egos combined. There needs to be a place where people can report wrongdoing (after multiple attempts to make the problem known through standard channels), and for the time being that place seems to be WikiLeaks.

Monday, October 17, 2016

OpenSSL Vulnerabilities Allow DoS Attacks

On September 22nd, 2016, the OpenSSL project announced fixes for more than a dozen vulnerabilities in its cryptographic library. Among the bugs was one that allows attackers to carry out DoS attacks against software using the library.

What's the problem

OpenSSL is a popular open-source cryptographic library which allows for the creation of encrypted internet connections using SSL or TLS, and it's used by the vast majority of websites today. A critical vulnerability (CVE-2016-6304) is present in OpenSSL versions 1.0.1, 1.0.2 and 1.1.0 and has been fixed in the new releases 1.1.0a, 1.0.2i and 1.0.1u. The vulnerability lies in the fact that during successive TLS renegotiations, the server doesn't release the memory allocated for one of the TLS protocol extensions (status_request); it simply overwrites the pointer to it without freeing the old allocation, causing a memory leak.

TLS renegotiation is a mechanism that allows a client or server to change TLS connection settings on the fly without interrupting the current session. The parties exchange Hello messages and certificates as in a conventional handshake, but they do so over the already established secure channel. The status_request extension speeds up checking of the server certificate's status when the server supports OCSP stapling. By abusing this extension, an attacker can cause a memory leak each time a TLS renegotiation is requested. The size of the leak ranges from 16 to 64 kilobytes per renegotiation (depending on the version of OpenSSL in use).
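Some quick math shows why that matters. The renegotiation count below is a hypothetical attacker loop; only the 16-64 KB per-renegotiation figure comes from the advisory:

```python
# Rough cost of CVE-2016-6304: each renegotiation on a vulnerable server
# leaks the previous status_request allocation without freeing it.
leak_per_renegotiation_kb = 64   # worst case per the advisory; 16 KB at the low end
renegotiations = 16_000          # hypothetical attacker loop on one connection

leaked_mb = leak_per_renegotiation_kb * renegotiations / 1024
print(f"~{leaked_mb:.0f} MB leaked")  # ~1000 MB from a single chatty client
```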

A little background on OCSP (Online Certificate Status Protocol): this protocol is supported by all modern web browsers and is designed to verify the digital certificate installed on a site. OCSP divides responsibilities between client and server. When an application or web browser attempts SSL certificate validation, the client sends an HTTP request to an online OCSP responder, which returns the status of the certificate. However, to speed up this validation for the client, the server itself can query the OCSP servers and then return the OCSP response to the client within the handshake. This mechanism is called OCSP stapling, and it allows the client to avoid wasting resources contacting the OCSP servers itself.
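If you want to see stapling in action, one approach is to shell out to the openssl command line tool, whose -status flag requests the status_request extension during the handshake. A minimal sketch, with example.com standing in for any host you want to test:

```python
# Ask a server to staple an OCSP response during the TLS handshake by
# shelling out to the openssl CLI. The target host is just an example.
import subprocess

result = subprocess.run(
    ["openssl", "s_client", "-connect", "example.com:443", "-status"],
    input=b"", capture_output=True, timeout=15,
)
# Servers that staple print an "OCSP Response Data:" block; servers that
# don't print "OCSP response: no response sent".
print(b"OCSP Response Data" in result.stdout)
```
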
That's not all

The OpenSSL security bulletin from September 22nd also describes another vulnerability, CVE-2016-6307 (rated low severity). An error in tls_get_message_header() in library version 1.1.0 could allow an attacker to carry out DoS attacks by sending a message with an excessively large declared length, triggering an oversized memory allocation. Later it became clear that the patch for CVE-2016-6307 spawned yet another vulnerability (CVE-2016-6309): the fix introduced a buffer handling error (a dangling pointer left behind when the message buffer is reallocated) that could cause applications to execute arbitrary code. After this was determined, another patch was released to fix the defect.

How to protect yourself

Servers using OpenSSL versions prior to 1.0.1g are not affected by the CVE-2016-6304 vulnerability in the default configuration. Administrators of vulnerable systems can build OpenSSL with the no-ocsp option to mitigate the chances of a DoS against their systems. In addition to this fix, the OpenSSL team has also fixed another vulnerability (CVE-2016-6305) in library version 1.1.0, which could be used to carry out DoS attacks. Staying current on patches, as always, will help remediate the risks within the OpenSSL libraries.
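As a starting point for that patch check, Python's ssl module will tell you which OpenSSL build it is linked against. Note this only reflects the library on that one machine, not whatever your servers terminate TLS with:

```python
# Print the OpenSSL version this Python interpreter is linked against, to
# compare with the fixed releases (1.1.0a, 1.0.2i, 1.0.1u) from the advisory.
import ssl

print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.0.2g  1 Mar 2016"
print(ssl.OPENSSL_VERSION_INFO)  # numeric tuple for programmatic checks
```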

Guest Author: Alex Bod, Information Security Researcher and the founder of Gods. He runs the penetration testing services provided by Gods Hackers Team.