The Target security breach is another eye-opening reminder that these compromises don’t just happen in the movies. This is going on everywhere, and we need to protect ourselves from being the next Target (pun intended). Even with the limited information available about the breach, there are quite a few lessons to be learned. First, let’s discuss a few things Target has done well.
- I think Target has taken responsibility for what occurred and hasn’t been “hiding too much”. Some have said that Target knew about the breach for a few days before alerting the public, but so what? The company was dealing with a criminal investigation, and if it wasn’t confident the bleeding had stopped, alerting the world would only have disrupted the investigation and tipped off the attackers. I personally think what they did was more than appropriate.
- With all the phishing and misinformation going around due to the breach, Target set up a page on its site to be the authoritative source for details on the breach, including the e-mails it has sent out, videos from the CEO, etc. You can visit that page here: https://corporate.target.com/about/payment-card-issue.aspx
- They brought in third parties right away to deal with the breach. For an issue this big, at a company as large as Target, outside help is needed: law enforcement was engaged immediately, along with Mandiant (cyber investigators) to work the incident.
Now for a few lessons learned. This isn’t meant to point fingers, but to learn from past breaches and do our best to ensure that history doesn’t repeat itself. There is no perfectly secure network, and we shouldn’t be on our high horses when speaking about other companies’ breaches.
- It seems right now that the POS breach started with a vulnerability in a Target website, with the attackers then moving deeper into the network to attack the POS systems. Take note: your website will almost always be the most exposed part of your organization. It’s public-facing and, depending on its configuration, could leave you open to large breaches. Keeping your public-facing applications secured with proper vulnerability scanning, segmentation and monitoring is a must (read my blog series on network security design for more on this: Part 1, Part 2, Part 3 and Part 4).
- Multiple articles report that the stolen data was siphoned to an offsite server, where the attackers collected almost 11GB of information. This leads me to a few observations:
- It doesn’t sound like proper egress filtering was in place if the attackers were able to siphon data directly out of the network. With proper egress filtering, FTP wouldn’t have worked from just any server.
- The best-case scenario is that Target did have proper egress filtering, but the attackers were able to determine which systems could get out via FTP. If so, that’s arguably worse: it means they had free rein across the network to probe for an exit point and still weren’t noticed.
- Why was FTP allowed outbound to begin with? It’s an insecure protocol and should be removed from your network if possible (watch this video to learn about common firewall misconfigurations).
- Monitoring and thresholds apparently weren’t configured to alert the Target infosec team to unusual activity on the network.
- Why was a user account with admin rights created without anyone noticing? Was it added to Domain Admins? Proper logging needs to be in place on all systems, with a SIEM correlating events against suspicious activity.
- NetFlow data should be pulled from the firewalls and routers to spot when something in the traffic patterns isn’t kosher. Baselining “normal” traffic and alerting on thresholds will help catch someone dumping 11GB of data out of your network via FTP (see the sketch just below for the kind of check I mean).
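To make the egress-filtering and NetFlow points above a little more concrete, here’s a minimal Python sketch of the kind of check I’m talking about. It assumes a hypothetical CSV export of flow records; the file name, field names, allowed ports and the 500 MB threshold are all placeholders you’d tune for your own environment. It simply flags outbound flows on disallowed ports (FTP included) or of unusually large size:

```python
import csv

# Hypothetical CSV export of flow records from your firewalls/routers.
# The file name and field names are assumptions, not a real product format.
FLOW_EXPORT = "netflow_export.csv"

INTERNAL_PREFIXES = ("10.", "172.16.", "192.168.")  # adjust to your own addressing
ALLOWED_EGRESS_PORTS = {53, 80, 443}                # FTP (21) and anything else outbound is suspect
BYTES_THRESHOLD = 500 * 1024 * 1024                 # flag single flows over ~500 MB

def is_internal(ip):
    return ip.startswith(INTERNAL_PREFIXES)

with open(FLOW_EXPORT, newline="") as f:
    for flow in csv.DictReader(f):
        src, dst = flow["src_ip"], flow["dst_ip"]
        port = int(flow["dst_port"])
        size = int(flow["bytes"])

        # Only interested in traffic leaving the network
        if not is_internal(src) or is_internal(dst):
            continue

        if port not in ALLOWED_EGRESS_PORTS:
            print(f"ALERT: outbound traffic on disallowed port {port}: {src} -> {dst}")

        if size > BYTES_THRESHOLD:
            print(f"ALERT: unusually large outbound flow ({size / 1024 ** 2:.0f} MB): {src} -> {dst}")
```

In a real environment this kind of logic belongs in your flow collector or SIEM with proper baselining and alerting, not in a one-off script, but the idea is the same: know what “normal” egress looks like and get paged when it isn’t.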
- How was the network segmented? I’m sure there was some type of network segmentation, since Target must comply with PCI-DSS, but there were obviously holes in it somewhere.
- The attack seems to have started from the web, moved into the network and eventually reached the POS systems. The attackers were pivoting across multiple segments of the network, which is very interesting (they were able to get mailing addresses, e-mail addresses, etc., which means they moved beyond the POS systems and deeper into the network). We need to lock these segments down so that only needed traffic is allowed into the appropriate networks (a quick way to spot-check this is sketched below).
- Oh, and just because Target was PCI-DSS compliant doesn’t mean the company was “hacker proof”.
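On the segmentation point, here’s a quick-and-dirty Python spot-check you could run from inside a POS segment to see which other internal networks are actually reachable. The target hosts and ports below are made-up examples, and this is no substitute for a proper firewall rule review or penetration test:

```python
import socket

# Hypothetical hosts in other internal segments; replace with your own targets.
# Run from a machine in the POS segment: most of these should NOT be reachable.
TARGETS = [
    ("10.10.20.5", 445),   # corporate file server (SMB)
    ("10.10.30.5", 3389),  # admin jump box (RDP)
    ("10.10.40.5", 1433),  # database segment (MSSQL)
]

for host, port in TARGETS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    result = s.connect_ex((host, port))  # 0 means the TCP connection succeeded
    s.close()
    status = "REACHABLE (review your ACLs)" if result == 0 else "blocked/unreachable"
    print(f"{host}:{port} -> {status}")
```

If anything comes back reachable that shouldn’t be, it’s time to revisit the ACLs between those segments.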
- If you’re running a POS system in your network, make sure it’s locked down.
- The malware that was found was pretty nasty, but as more information about this breach comes out, we need to understand just how the attackers pushed it to so many terminals.
- Did they compromise an account with admin rights to push or install this malware across the network? Did the registers have local admin accounts? Were these ever audited and secured? These are questions we need to ask about our own POS systems (a simple log-review sketch follows below).
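And on the question of admin accounts appearing out of nowhere, here’s a minimal sketch of the kind of correlation a SIEM should be doing anyway. It assumes Windows security events are already forwarded to a central collector and exported as JSON lines; the file name and field names are assumptions on my part. Event IDs 4720 and 4732 are the standard Windows security events for account creation and local group membership changes:

```python
import json

# Hypothetical JSON-lines export of Windows security events from a central
# collector; the file name and field names here are assumptions.
EVENT_LOG = "security_events.jsonl"

ACCOUNT_CREATED = 4720        # Windows: "A user account was created"
ADDED_TO_LOCAL_GROUP = 4732   # Windows: "A member was added to a security-enabled local group"
PRIVILEGED_GROUPS = {"Administrators", "Domain Admins"}

with open(EVENT_LOG) as f:
    for line in f:
        event = json.loads(line)
        event_id = event.get("event_id")

        if event_id == ACCOUNT_CREATED:
            print(f"REVIEW: account '{event.get('target_account')}' created by "
                  f"'{event.get('subject_account')}' on {event.get('computer')}")

        elif event_id == ADDED_TO_LOCAL_GROUP and event.get("group") in PRIVILEGED_GROUPS:
            print(f"ALERT: '{event.get('target_account')}' added to privileged group "
                  f"'{event.get('group')}' on {event.get('computer')}")
```

In practice this belongs in your SIEM as a correlation rule that alerts in real time, not a script you run after the fact, but if new admin accounts and group membership changes aren’t generating review tickets somewhere, nobody is going to notice the one that matters.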
As the story unfolds we’ll get more details about the Target compromise, but for now we need to take these lessons and either apply them to our networks or audit our existing configurations against them. We never want to see a breach happen, especially one that affects so many people, but if we don’t learn and change because of it, we’re doomed to be the next Target.