Frontline Sentinel: Cyber Security Advisory, Privacy and Digital Activism, by Matthew Pascucci.<br />
<br />
LDAP injection: How can it be exploited in an attack? (January 28, 2018)<br />
<br />
Joomla is a popular content management system that accounts for almost 3% of all websites on the internet, and it has been downloaded over 84 million times. RIPS Technologies, a static analysis company, recently found it to be vulnerable to an LDAP injection flaw. The vulnerability sat in the Joomla code base for over eight years, and a patch has since been released to remediate the blind LDAP injection.<br />
<br />
This type of attack targets the login pages of sites that use LDAP for authentication: by abusing the entries inserted into the login form, an attacker can extract, view or change the data behind it.<br />
<br />
An LDAP injection attack, especially a blind one like the attack used here, aims to abuse the authentication process of passing credentials to controllers, as an LDAP server stores users' usernames and passwords in a directory. With this particular vulnerability, there's a complete lack of input sanitization, enabling an attacker's script to rotate guesses through the login field and slowly extract a user's credentials -- this is the blind part of the injection, and it is usually aimed at an administrator account to gain complete access to the Joomla control panel.<br />
<br />
With this vulnerability, an attacker can submit LDAP query syntax into the login form in an attempt to slowly read the LDAP database one bit of information per request. When the scripted attack runs, it submits login attempts rapidly, working through the possible characters in the credentials until it completes the password. Since the attack is scripted and aimed at the system's login form, it makes quick work of Joomla systems that use LDAP for authentication.<br />
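To make the blind, character-by-character extraction concrete, here is a minimal Python sketch. The mock login function stands in for a vulnerable server that builds an unsanitized filter such as (&amp;(uid=admin)(password=&lt;input&gt;)); the secret value and the wildcard behavior are hypothetical illustrations, not the actual Joomla code.

```python
# Sketch of how a blind LDAP injection recovers a password one
# character at a time. mock_ldap_login stands in for a vulnerable
# login form; a trailing * acts as an LDAP prefix wildcard, which is
# exactly what the injection abuses. Everything here is hypothetical.
import string

SECRET = "s3cret"  # the password the attacker doesn't know

def mock_ldap_login(injected_password_filter: str) -> bool:
    # A real server would evaluate (&(uid=admin)(password=<input>)).
    if injected_password_filter.endswith("*"):
        return SECRET.startswith(injected_password_filter[:-1])
    return injected_password_filter == SECRET

def extract_password(max_len: int = 32) -> str:
    recovered = ""
    alphabet = string.ascii_letters + string.digits
    for _ in range(max_len):
        for ch in alphabet:
            # Each guess is one login attempt: does the password
            # start with what we've recovered so far plus this char?
            if mock_ldap_login(recovered + ch + "*"):
                recovered += ch
                break
        else:
            break  # no character matched: password fully recovered
    return recovered

print(extract_password())  # recovers "s3cret" without ever seeing it
```

Each successful wildcard match costs one login attempt, which is why a script with no rate limiting in its way can walk the whole credential fairly quickly.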
<br />
It's probably safe to say that not many Joomla servers use LDAP for authentication, but some certainly do: LDAP remains a very common authentication backend elsewhere in the enterprise.<br />
<br />
The first thing you should do is determine whether your site is vulnerable. Anyone running Joomla versions 1.5 through 3.7.5 with LDAP authentication enabled on an unpatched site is affected. A patch that specifically addresses this issue has been released, and installing it mitigates the vulnerability.<br />
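That check can be sketched as a simple version comparison. The 1.5 through 3.7.5 range comes from the advisory above; the parsing helper and parameter names are illustrative, and a real audit should go by the official patch level rather than a string comparison.

```python
# Minimal sketch: is a Joomla version string in the range affected by
# the LDAP injection flaw (1.5 through 3.7.5, per the advisory)?
# The flaw is only exploitable when LDAP authentication is enabled.
def parse_version(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str, uses_ldap_auth: bool) -> bool:
    if not uses_ldap_auth:
        return False
    return parse_version("1.5") <= parse_version(version) <= parse_version("3.7.5")

print(is_vulnerable("3.7.5", uses_ldap_auth=True))   # True: patch needed
print(is_vulnerable("3.8.0", uses_ldap_auth=True))   # False: fixed release
print(is_vulnerable("3.7.5", uses_ldap_auth=False))  # False: LDAP not in use
```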
<br />
Using these plug-ins for authentication naturally brings up the topic of multifactor authentication. Your authentication architecture should no longer rely on single-factor authentication for applications, especially public-facing ones. Multifactor authentication limits the risk that a vulnerability or data leak exposes usable credentials to attackers.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/LDAP-injection-How-can-it-be-exploited-in-an-attack<br />
<br />
BlueBorne vulnerabilities: Are your Bluetooth devices safe? (January 28, 2018)<br />
<br />
Last month, a series of Bluetooth vulnerabilities, discovered by research firm Armis Inc., was disclosed that enables a remote connection to a device without the affected user noticing.<br />
<br />
The vulnerabilities were reported on Android, Linux, Windows and iOS devices. The vendors were all contacted to create patches for the BlueBorne vulnerabilities and worked with Armis via a responsible disclosure process. The concern now is the vast number of Bluetooth devices that might not update efficiently. That concern, combined with the challenge of getting the Android update out through all of its device manufacturers, will be the biggest hurdle in remediating the BlueBorne vulnerabilities.<br />
<br />
The BlueBorne vulnerabilities enable attackers to perform remote code execution and man-in-the-middle attacks. The attack is dangerous because of the broad range of Bluetooth devices out in the wild and the ease with which an attacker can remotely connect to them and intercept traffic. With this exploit, an attacker doesn't have to be paired with the victim's device; the victim's device can be paired with something else, and it doesn't even have to be in discoverable mode. Essentially, if you have an unpatched system with Bluetooth enabled, your exposure is high.<br />
<br />
However, the affected vendors have done a good job releasing patches for the BlueBorne vulnerabilities. Microsoft patched the bug in a July release, and Apple's iOS 10 isn't affected. The issue is with Android, which is historically slow to patch vulnerabilities and will have to work with its device manufacturers to have the patch pushed down.<br />
<br />
Likewise, the larger issue will be with all of the smart devices and internet of things devices installed on networks, meaning your TVs, keyboards, lightbulbs and headphones could all be vulnerable. There's probably a smaller risk of sensitive data being exposed on these devices, but attackers can still intercept information through them and use them to propagate the attack further.<br />
<br />
Another concern with these vulnerabilities is the possibility of a worm being created, released in a crowded area and spreading itself through devices in close proximity to each other. A particular exploit might not work on every phone, but the scenario is still possible given the right code and circumstances. For example, if the worm were released in a stadium or large crowd, it could theoretically spread through any systems that haven't been properly patched.<br />
<br />
Being able to perform code injection to take over a system, or to mount man-in-the-middle attacks that steal information, is extremely worrisome. These attacks happen inside the firewall, and the attacker never needs to join your network to execute them. It's essentially a backdoor that enables attackers to compromise systems from a distance, right inside your perimeter.<br />
<br />
It is extremely important that you patch all systems if you have the capability to do so, and that you disable Bluetooth when it's not needed.<br />
<br />
How can Windows digital signature check be defeated? (January 28, 2018)<br />
<br />
Recently, SpecterOps researcher Matt Graeber determined that there is a way to bypass a Windows digital signature check by editing two specific registry keys. This is an important discovery because Windows relies on digital signatures to validate the authenticity of binary files as a security measure.<br />
<br />
Digital signatures are used by Windows and others to determine whether a file was tampered with in transit. Being able to validate both the integrity of a received file and that it actually came from the party that signed it is important, since digital signatures work on trust -- when a system can be made to work around this feature, it opens the door to malicious activity.<br />
<br />
It's also important to state that a digital signature doesn't secure the file; it grants a level of trust based on the private key it was signed with. If that key is stolen or used maliciously, the system will still approve the digital signature check.<br />
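That distinction (a valid signature proves origin and integrity, not safety) can be shown with a toy sketch. It uses HMAC from Python's standard library as a stand-in for asymmetric signing, since the stdlib has no RSA; the key and messages are purely illustrative.

```python
# Toy illustration: a valid signature proves a file came from the key
# holder and wasn't altered -- nothing more. HMAC stands in for
# asymmetric signing here; a real code-signing scheme uses key pairs.
import hashlib
import hmac

signing_key = b"stolen-or-legitimate-key"  # trust hinges entirely on this

def sign(data: bytes) -> bytes:
    return hmac.new(signing_key, data, hashlib.sha256).digest()

def verify(data: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(data), signature)

malware = b"trojaned installer"
sig = sign(malware)  # if the key is stolen, malware signs just as validly

print(verify(malware, sig))            # True: signature checks out
print(verify(b"clean installer", sig)) # False: any tampering is caught
```

The verification step never inspects what the bytes do; that is exactly why a stolen or abused key, as in the CCleaner case below, turns the trust model against you.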
<br />
Many Windows security features and security products rely on the trust and guarantees that a digital signature check brings with it. The CCleaner malware last month spread precisely because it was signed with a legitimate certificate, which led the OS to trust the code. In his research report, Graeber wrote, "Subverting the trust architecture of Windows, in many cases, is also likely to subvert the efficacy of security products."<br />
<br />
The attack focuses on two registry keys that, when adjusted, enable an attacker to make any file appear to carry another valid signature. This isn't done by injecting code into the system but purely through registry modification, meaning an attacker can do it remotely if they have access to the registry. It also means they must be an administrator on the system, which isn't incredibly hard to escalate to if they don't already have that permission.<br />
<br />
Locking down administrator rights to limit changes to these keys, and implementing monitoring to detect when they're modified, would be one way to catch this, though it requires collecting logs from all of your systems. A group policy could also be created to restrict access to these keys in greater detail, but these are all reactive measures.<br />
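The monitoring idea above can be sketched as a baseline-and-diff check. The registry paths and values below are invented placeholders (this article doesn't reproduce the actual keys from Graeber's research), and on a real host you would read the values via Windows APIs or collected event logs rather than an in-memory snapshot.

```python
# Sketch of baseline monitoring for sensitive registry values. The key
# paths are illustrative placeholders, not the actual keys from the
# research; on a real host you'd read them via winreg or event logs.
import hashlib
import json

def fingerprint(reg_values: dict) -> str:
    # Stable hash of the key/value pairs so any modification is visible.
    canonical = json.dumps(reg_values, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

baseline = {
    r"HKLM\SOFTWARE\Example\TrustProvider\Dll": "original.dll",
    r"HKLM\SOFTWARE\Example\TrustProvider\FuncName": "OriginalFunc",
}
baseline_hash = fingerprint(baseline)

# Later sweep: an attacker has pointed the trust provider elsewhere.
current = dict(baseline)
current[r"HKLM\SOFTWARE\Example\TrustProvider\Dll"] = "attacker.dll"

if fingerprint(current) != baseline_hash:
    changed = [k for k in baseline if baseline[k] != current.get(k)]
    print("ALERT: trust-provider registry keys modified:", changed)
```

As noted, this is reactive: it tells you the trust architecture was tampered with after the fact, which is why layered defenses still matter.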
<br />
The issue once again comes down to trust, as digital signatures are one of the areas put in place to protect you from impersonation. They're also among the most likely mechanisms to be abused for malicious purposes, especially by malware looking to slip past application whitelisting such as Microsoft's Windows Defender Device Guard.<br />
<br />
There need to be more procedures around digital signature protection to guard against malicious files entering your endpoints, such as reputation services, sandboxes and next-generation malware protection that doesn't rely on signatures.<br />
<br />
Is a digital signature check needed? Yes, but it's one layer of protection against malware, and abusing the trust placed in these signatures allows it to be bypassed. In the end, we simply need to add more layers to our defense.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/How-can-Windows-digital-signature-check-be-defeated<br />
<br />
Active Cyber Defense Certainty Act: Should we 'hack back'? (January 28, 2018)<br />
<br />
Recently, Georgia Congressman Tom Graves proposed a bill named the Active Cyber Defense Certainty Act, which individuals in the cyber community have taken to calling the hack back bill. It is touted as a cyberdefense act that will enable those who have been hacked to defend themselves in an offensive manner, essentially attempting to fill the holes the antiquated Computer Fraud and Abuse Act has left wide open.<br />
<br />
I'm a big fan of evolving our laws to bring them into a modern state when it comes to cybersecurity, but I feel this law will cause more harm than good. Allowing others to hack back without the proper oversight -- which I feel is extremely lacking in the proposed bill -- will create cyber vigilantes more than anything else. I also feel that this law can be abused by criminals, and it doesn't leave us in any better state than we're in now.<br />
<br />
First, the jurisdiction of the Active Cyber Defense Certainty Act only applies to the U.S. If someone notices an attack coming from a country outside the U.S., or if stolen data is being stored outside the boundaries of our borders, then they won't be able to hack back.<br />
<br />
This already severely limits the effectiveness of the bill, as attackers can avoid consequences simply by launching an attack from a foreign IP. It also enables pranksters or attackers to create problems for Americans by deliberately launching attacks from compromised systems in the U.S. against other IPs inside the country. The victims would then have the legal right to hack back against those source IPs, while the organizations whose systems were hijacked, unaware of what happened, could in turn start attacking them back.<br />
<br />
In theory, this would create a hacking loop within the U.S. and would end up causing disarray, giving an advantage to the hackers. Not only can systems be hacked by a malicious entity, but they can be legally hacked by Americans following the initial attack; hackers would essentially be starting a dispute between two innocent organizations.<br />
<br />
On that note, if attackers launch attacks from the U.S. against other systems within the U.S., it's possible for them to attack the systems that regulate our safety. And what if they attack the systems of our healthcare providers, critical infrastructure or economy? Do we really want someone who might not be trained well enough to defend against attacks poking at these systems? This isn't safe, and it borders on being negligent on the part of those who were compromised.<br />
<br />
The mention of "qualified defenders with a high degree of confidence of attribution" really leaves the door open to what someone can do under the Active Cyber Defense Certainty Act. First, what makes someone a "qualified defender," and how is a "high confidence of attribution" determined? Is there a license or certification someone must hold in order to request the ability to hack back? Even with such a credential, they still wouldn't know the architecture or systems they're looking to compromise in order to defend themselves. What tools are they able to use, and what level of diligence must be shown for attribution? This is a recipe for disaster, and it's also very possible that emotions could get in the way when determining what to delete or how far to go.<br />
<br />
The Active Cyber Defense Certainty Act also mentions contacting the FBI in order to review the requests coming into the system before companies are given the right to hack back. This could lead to an overwhelming number of requests for an already stretched cyber department within the FBI.<br />
<br />
If anything, I feel that the bill should leave these requests to the Department of Homeland Security instead of the FBI, as an entirely new team would need to be created just to handle these requests. This team should be the one acting as the liaison to the victim organizations.<br />
<br />
For example, if we knew someone stole a piece of physical property, and we knew where it was being stored, we'd most likely call the local authorities and let them know what occurred. In the case of cybercrime, this bill has us alert the authorities and then go after our stolen goods ourselves. That's a mistake that could lead to disaster.<br />
<br />
Lastly, there are technical issues that might make this a lot more difficult than people think. What if a system is being attacked by the public Port Address Translation/Network Address Translation address of an organization? Are they going to start looking for ways into that network even though they can't access anything public-facing?<br />
<br />
Also, what will happen if cloud systems are being used as the source of an attack? How do you track systems that might be moving or destroyed before someone notices? In that case, you could end up attacking the wrong organization. I personally don't trust someone attacking back and making changes to a system that they don't manage, since it leaves the door open for errors and issues later on that we're not even considering now.<br />
<br />
Data theft today is a massive concern, but the privacy implications and overzealous vigilantism of this bill could make a bad situation much worse. The Active Cyber Defense Certainty Act should be removed from consideration, and the focus should be put on how Americans can work toward creating a better threat intelligence and cybersecurity organization that can act as a governing body when attacks like these occur. Leaving such matters in the hands of those affected will never produce positive results.<br />
<br />
iOS updates: Why are some Apple products behind on updates? (January 28, 2018)<br />
<br />
A new study from mobile security vendor Zimperium Inc. showed that nearly a quarter of the iOS devices it scanned weren't running the latest version of the operating system. If Apple controls iOS updates, and enterprise mobility management vendors can't block them, then why are so many devices running older versions? Are there other ways to block iOS updates?<br />
<br />
Zimperium's study showed that more than 23% of the iOS devices it scanned weren't running the latest version of Apple's operating system. Apple does have a more streamlined method of updating its mobile devices than its main competitor, Android, but only because it controls both the hardware and the software -- Apple doesn't rely on disparate manufacturers to apply patches.<br />
<br />
That being said, it came as a surprise to many that so many iOS devices weren't on the latest release; however, there are a few reasons why almost a quarter of iOS devices are behind.<br />
<br />
For starters, some people just don't want the new update when it becomes available. Even though iOS updates can be nagging, it's possible to delay them or have your device remind you to install them later. It would be interesting to know how many devices are only one release behind to see whether people are holding off temporarily or indefinitely.<br />
<br />
Another reason that devices might not be up to the latest version is that legacy devices may not support the newest update -- the newer releases of iOS aren't compatible with every device. This might be a small percentage of devices, but it's still part of the 23%.<br />
<br />
Likewise, certain devices have been jailbroken, and thus could have issues receiving updates. These are possible issues that can add up to the 23% found by Zimperium, but there are some configuration and operational changes that might also cause a delayed update.<br />
<br />
By default, automatic iOS updates are enabled, and that's a great way for Apple to continue pushing over 75% of its devices to run the latest software update. While you can have the automatic updates disabled on an iOS device and delete the update after it's been downloaded, there is probably only a small percentage of devices operating like this.<br />
<br />
Also, there's most likely a small percentage of people that don't have their devices connected to Wi-Fi, which is often how the update is downloaded, if not via iTunes on a computer.<br />
<br />
Lastly, if a device can't access apple.com, then it cannot receive the update. In the past, I've seen web filters block iPads from accessing apple.com to limit what could be downloaded from iTunes. With this filtering in place, you're also stopping the download of the latest iOS update.<br />
<br />
When all of these small issues add up, you can understand the percentage of devices that aren't running the latest update. However, I'm still curious to see what the average patching cycle for devices is after an update is released, as it's possible that Zimperium's scan was in the middle of a release, which could have inflated the numbers a bit.<br />
<br />
Either way, there will always be issues with patching systems, but as consumer devices go, Apple is doing a pretty good job of having its iOS devices updated in the field.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/iOS-updates-Why-are-some-Apple-products-behind-on-updates<br />
<br />
PGP keys: Can accidental exposures be mitigated? (January 28, 2018)<br />
<br />
Recently, security researcher Juho Nurminen attempted to contact Adobe via its Product Security Incident Response Team (PSIRT) regarding a security bug he wanted to report. Instead, he stumbled across something much more vulnerable.<br />
<br />
It turns out that Adobe had published on its website not only the PSIRT's public key, which is used to send it encrypted emails, but the corresponding private PGP key as well. After being contacted privately by Nurminen, Adobe moved quickly to revoke and replace the key.<br />
<br />
Having the entire key pair published on the site could have led to phishing, decryption of traffic, impersonation, and spoofed or signed messages appearing to come from Adobe's PSIRT. This was extremely embarrassing for Adobe; however, its ability to act quickly was its saving grace.<br />
<br />
One thing Adobe did right was putting a passphrase on the key because, without it, the private key is useless to those with malicious intent. This is a step every organization should take to protect against the accidental release of a key, or against an attacker who gains access to keys and attempts to use them maliciously. Be warned, though: passphrase protection is only as good as the passphrase itself, and a weak passphrase increases the probability of it being brute-forced.<br />
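Some rough arithmetic shows why the passphrase's strength is the whole game here. The guess rate below is an assumption for illustration only; real cracking speed depends heavily on the key-derivation function protecting the key.

```python
# Back-of-envelope estimate of brute-forcing a key passphrase as its
# length grows. The guess rate is an assumed figure for illustration;
# real rates depend on the KDF (e.g., iterated hashing) guarding the key.
GUESSES_PER_SECOND = 1e9   # assumed offline cracking rate
ALPHABET = 95              # printable ASCII characters

def years_to_crack(length: int) -> float:
    keyspace = ALPHABET ** length
    # On average an attacker searches half the keyspace.
    return (keyspace / 2) / GUESSES_PER_SECOND / (3600 * 24 * 365)

for n in (6, 8, 10, 12):
    print(f"{n} chars: ~{years_to_crack(n):.2e} years")
```

Under these assumptions a 6-character passphrase falls in minutes, while each added pair of characters multiplies the work by roughly four orders of magnitude, which is exactly the margin that kept the leaked Adobe key from being immediately usable.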
<br />
Having procedures in place to quickly revoke PGP keys when needed should be part of your organization's incident response plan. This might not be a common occurrence for many people; however, being able to manage certificates in an expedited fashion could not only save your organization, but could also stop those with malicious intent from attempting to impersonate you.<br />
<br />
Acting quickly is extremely important. Luckily, the Adobe private key had limited use -- it was only used for the PSIRT's email communication, so it wasn't as publicly used as some of Adobe's other keys.<br />
<br />
As for how the private key was published in the first place, that's a different issue -- I'd be very curious to know why it was sent, and who sent it. There should be some type of privileged access control around these keys internally, which I'm assuming sits with a different department from the one managing the CMS.<br />
<br />
I understand things can accidentally be miscommunicated or published, but there seems to have been a few breakdowns in the communication process for the Adobe private key to have been published to the internet. I'm hoping Adobe was able to learn from the experience, make adjustments and tighten their security.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/PGP-keys-Can-accidental-exposures-be-mitigated<br />
<br />
VMware AppDefense: How will it address endpoint security? (January 28, 2018)<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">VMware recently added a new service called AppDefense to its cybersecurity portfolio; it aims to lower false positives and utilize least privilege to secure the endpoints living on a host. VMware also has NSX to create microsegmentation at the network layer, which can integrate with AppDefense. With AppDefense, however, security is taken down a layer, to the endpoints themselves.</span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">The first major benefit of having VMware AppDefense is that it understands what the endpoints were provisioned to do and their intended behavior. The AppDefense service is in the hypervisor and has a detailed understanding of what's normal within the endpoints. If something changes, such as malware reaching a system, then it's able to detect that the endpoint is doing something outside of what it was designed to do.</span></span></div>
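The "intended behavior" idea can be sketched as a manifest-versus-observed comparison. This is a conceptual illustration only: the process names, ports and data structures are hypothetical, not the actual AppDefense API.

```python
# Conceptual sketch of intended-state monitoring in the spirit of
# AppDefense: the hypervisor knows what a workload was provisioned to
# do and flags anything outside that manifest. All names are invented.
INTENDED_STATE = {
    "web-frontend": {
        "processes": {"nginx", "php-fpm"},
        "outbound_ports": {443, 3306},
    },
}

def deviations(vm_name: str, observed_processes: set, observed_ports: set) -> dict:
    manifest = INTENDED_STATE[vm_name]
    # Set difference: anything observed that wasn't provisioned.
    return {
        "unexpected_processes": observed_processes - manifest["processes"],
        "unexpected_ports": observed_ports - manifest["outbound_ports"],
    }

# A cryptominer has appeared and is calling out on an odd port.
alert = deviations("web-frontend", {"nginx", "php-fpm", "xmrig"}, {443, 4444})
print(alert)  # {'unexpected_processes': {'xmrig'}, 'unexpected_ports': {4444}}
```

Because the manifest describes only what the workload should do, anything outside it is inherently suspicious, which is what keeps the false-positive rate low.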
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">This feature helps reduce false positives within your network and enables overworked security teams to focus on the alerts that truly matter. By alerting only when a system deviates from its intended behavior, it cuts down analysts' triage time and keeps the emphasis where it belongs: on detecting and responding to the incidents that count.</span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">Utilizing least privilege is a security staple, and using it whenever possible is always recommended. With AppDefense, you're able to build off of what VMware NSX started and drop least privilege down from the network layer to the endpoint. This further increases the ability to lock down your systems to only what's needed and limit your threat exposure.</span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">When alerts within AppDefense are found, it's possible to kick off a response from NSX to take action and to block communications, take snapshots for forensics, or even shut down the endpoint. This detailed control of what can occur after an alert has been found with AppDefense enables endpoints to be isolated and for remediation to occur quickly and efficiently. The automation of AppDefense and the integration of NSX enables in-depth security and an added layer of visibility into workloads that might have been overlooked in the past.</span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">With the creation of the NSX and AppDefense services, VMware has been making big strides in security by focusing on the fundamentals. Giving analysts visibility into their networks, applying least privilege down to the endpoint, and understanding behavior changes all enable quicker incident response. I'm excited to see how VMware's security portfolio continues to evolve.</span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;"><br /></span></span></div>
<div>
<span style="color: #666666; font-family: NeueHaasGroteskText W01, Helvetica, Arial, sans-serif;"><span style="font-size: 18px;">My article at: http://searchsecurity.techtarget.com/answer/VMware-AppDefense-How-will-it-address-endpoint-security</span></span></div>
Killer discovery: What does a new Intel kill switch mean for users? (January 28, 2018)<div>
Recently, security researchers from Positive Technologies discovered a way to disable the Intel Management Engine (ME) through an undocumented setting that referenced a National Security Agency (NSA) program.</div>
<div>
<br /></div>
<div>
Over the years, the Intel ME has caused controversy, with many touting it as a backdoor into systems for governments, mainly the NSA. With the discovery of the Intel kill switch, many people took it as a nefarious, secretive method the NSA used to spy on systems. But before we jump to any conclusions, let's dig deeper into what actually occurred.</div>
<div>
<br /></div>
<div>
First of all, the Intel ME has long been considered a security risk and a backdoor by many people. The ME runs on its own separate CPU, it can't be disabled out of the box, its code is unaudited, and it's used by Active Management Technology (AMT) to remotely manage systems. Likewise, it has full access to the TCP/IP stack and to memory, it can be active while the system is hibernating or powered off, and it has a dedicated connection to the network interface card.</div>
<div>
<br /></div>
<div>
These facts must be pointed out to form a more logical hypothesis based on what the researchers found. The risk that the Intel ME could come under attack, or contain a vulnerability that enables attackers to access systems directly without ever touching the OS, is a large concern in general, but especially for government agencies.</div>
<div>
<br /></div>
<div>
By setting an undocumented flag in a configuration file, the researchers were able to turn off the Intel ME function and disable it from being used. The setting was labeled HAP, which stands for High Assurance Platform, a framework developed by the NSA as part of a guide on how to secure computing platforms.</div>
<div>
<br /></div>
<div>
Intel has further confirmed that the HAP switch within the configuration was put there per the request of the U.S. government; however, it was only used in a limited release, and it is not an official part of the supported configuration.</div>
<div>
<br /></div>
<div>
Now, before we get too upset about the NSA, I firmly believe that asking for the Intel kill switch was a good move. The Intel ME is an accident waiting to happen, and if it can't be disabled by default, then a configuration switch that kills its function actually helps harden a device's security. I wouldn't be concerned about the NSA requesting the kill switch, since the agency is presumably trying to harden U.S. government systems against attack.</div>
<div>
<br /></div>
<div>
Intel and other vendors include configuration changes like this in their hardware to accommodate the needs of large customers. Overall, the HAP change simply enables you to harden your system by taking the Intel ME feature out of use. The blame should land more on Intel for shipping this function in the first place than on the NSA for looking to remove it.</div>
<div>
<br /></div>
<div>
My article at: http://searchsecurity.techtarget.com/answer/Killer-discovery-What-does-a-new-Intel-kill-switch-mean-for-users</div>
<div>
<br /></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com23tag:blogger.com,1999:blog-8294091315472179425.post-41800448025599411182018-01-28T22:33:00.002-05:002018-01-28T22:33:14.771-05:00WireX botnet: How did it use infected Android apps?WireX was recently taken down by a supergroup of collaborating researchers from Akamai Technologies, Cloudflare, Flashpoint, Google, Oracle, RiskIQ and Team Cymru. This group worked together to eliminate the threat of WireX and, in doing so, brought together opposing security vendors to work toward a common goal.<br />
<br />
The WireX botnet was a growing menace, and it was taken down swiftly and collectively. We're starting to see this happen more often, and this was a great example of what the security community can do when information is shared.<br />
<br />
The WireX botnet was an Android-based threat that consisted of over 300 different infected apps found in the Google Play Store. The botnet started ramping up application-based distributed denial-of-service (DDoS) attacks that were able to continually launch, even if the app wasn't in use.<br />
<br />
The WireX botnet is assumed to have been created for click fraud to profit from advertising, but it quickly moved toward DDoS once the botnet grew large enough. The WireX botnet itself is estimated at 70,000 endpoints, though some researchers think it might be larger. Due to the fluid nature of mobile endpoints, the IP addresses of these systems are likely to change as users move geographically.<br />
<br />
The researchers were able to work together and share data on the attacks they were seeing and piece together their intelligence to get a complete story. By sharing details on a peculiar DDoS attack against a particular customer with this collective group, the teams were able to identify the source of the attack as malicious Android apps. After determining the source, they were then able to reverse engineer the apps, find the command-and-control servers, and remove them. The group worked with service providers to assist with cleaning the networks and with Google to remove the infected apps.<br />
<br />
Security groups are now coming together more frequently to help defeat large attacks on the internet. Previously, we saw a very competitive industry -- and there are still some vendors that don't play nice -- but, in general, it's encouraging to watch competitors team up to stop attacks for the common good, and not as a marketing scheme.<br />
<br />
This trend stems directly from larger attacks, such as Mirai and NotPetya, which have recently hit the internet on a global scale. Many of the same vendors that worked together on the WireX takedown also teamed up in response to the Mirai and NotPetya attacks.<br />
<br />
At this point, vendors are working together to protect themselves and their customers, since botnets of this scale threaten everyone; they are also collaborating because sharing data gives a clearer look into these threats and, thus, speeds remediation.<br />
<br />
We saw from the internet of things attacks with the Mirai botnet just how devastating a DDoS attack can be on the internet, so when a similar Android botnet was ramping up on mobile devices, it was in everyone's best interest to act quickly. The lesson -- remove a threat as a team before it reaches the strength of something like Mirai -- was learned and applied to the WireX botnet.<br />
<div>
<br /></div>
<div>
My article at: http://searchsecurity.techtarget.com/answer/WireX-botnet-How-did-it-use-infected-Android-apps</div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com141tag:blogger.com,1999:blog-8294091315472179425.post-69541704358949598772018-01-28T22:31:00.002-05:002018-01-28T22:31:45.591-05:00How should security teams handle the Onliner spambot leak?A list of 711 million records stolen by the Onliner spambot was recently discovered, and it's utterly staggering to think of the sheer size of this data set. To put things into perspective: the United States has only 323 million people. Even if every American had their data on this list, it would account for less than half of the records.<br />
<br />
The list of data that the Onliner spambot stole was given to security researcher Troy Hunt, who then imported the entire list into his site, Have I been pwned? The site maintains a searchable database of email addresses and usernames that have surfaced in the largest breaches to date, such as those at LinkedIn, Adobe and Myspace.<br />
<br />
It would be beneficial for you to personally validate whether your email addresses or usernames have been compromised in these breaches. When you submit an email address or username, the site queries the aggregated list of dumped credentials and informs you if you were a part of it. If your credentials are found in the aggregated list, then you should reset the passwords for those accounts immediately.<br />
<br />
There are also ways for organizations to determine and be notified if a user account on their domain has been caught in a data breach. Once an enterprise has submitted its domain name to the site and completed the verification process, an email is sent each time an email address with that domain is found in a data breach that's within the Have I been pwned? database.<br />
<br />
In addition to changing passwords as soon as possible, users should also determine if they are reusing the hacked password on any other sites. If so, those passwords should be changed as well, since we've seen attackers use breaches like these and attempt to reuse the credentials on other sites in hopes of the credentials being the same.<br />
<br />
Users who reuse credentials should start using a password vault to store their passwords, as this is an easier way to manage multiple complex passwords for different accounts. Likewise, users should enable some form of multifactor authentication on their accounts to limit the effect of massive breaches, as attackers won't have the second form of authentication. Even though the credentials would still be public, the second factor would not be within these lists, thus acting as a stopgap to prevent attackers from using these accounts.<br />
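Password-reuse checks can also be automated against Troy Hunt's Pwned Passwords service, a companion to Have I been pwned? It uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash are ever sent, and the matching happens locally. A minimal Python sketch, assuming the public `https://api.pwnedpasswords.com/range/` endpoint:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(suffix, response_text):
    """Parse the 'SUFFIX:COUNT' lines the range endpoint returns and
    report how many breaches contained our hash, or 0 if none did."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password):
    """Query the Pwned Passwords range API (requires network access)."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as resp:
        return count_in_response(suffix, resp.read().decode("utf-8"))
```

Only the five-character hash prefix ever leaves the machine; the full password and full hash stay local, which makes this safe to run against production credentials.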
<br />
Using Have I been pwned? as a tool to increase your situational awareness on the status of current major breaches, such as the Onliner spambot, is an added way to keep yourself and your organization safe. Similarly, enforcing multifactor authentication and eliminating credential reuse can go a long way to help you stay safe.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/How-should-security-teams-handle-the-Onliner-spambot-leakMatthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com7tag:blogger.com,1999:blog-8294091315472179425.post-55839230469524348342018-01-28T22:30:00.002-05:002018-01-28T22:30:18.996-05:00Monitoring employee communications: What do EU privacy laws say?According to the European Court of Human Rights, employers must inform their employees if their business-related communications are being monitored while they work for the organization. The court ruled that there must be a clear explanation of the type of monitoring, the timeframes, which content is monitored and which administrators have access to the data.<br />
<br />
The EU's privacy laws are head and shoulders above those in the United States. Just look at their General Data Protection Regulation (GDPR), which will go into effect soon.<br />
<br />
The GDPR regulates the privacy of EU citizens in relation to user data being sent to third parties, breach notification requirements, data security restrictions and the right to be forgotten. GDPR also necessitates that companies perform privacy impact assessments, validate the existence of a data protection officer and review how data is transferred to other countries. Organizations that don't meet these stipulations will be fined. While these are just a few examples of how the EU is enforcing the regulation, it shows that it takes the privacy of its citizens' data extremely seriously.<br />
<br />
When it comes time to review how monitoring employee communications should be handled within the workplace, it's not surprising to see that the EU is taking a similar privacy-based approach.<br />
<br />
Personally, I have no problem with what they're doing, and I agree that people should be alerted when their communications are being monitored. I also don't have an issue with organizations monitoring employee communications from a business perspective -- in today's world, both of these options need to be in place. Organizations need to monitor communications to validate that attacks and insider threats aren't occurring, but users should be made aware of how and when this is occurring -- it should never come as a surprise.<br />
<br />
When you start a company, you normally use some type of communication filtering system, such as for email or the web. In the United States, it's legal to monitor these communications as long as they're a part of the organization and not for the user's personal use. This means that if you're browsing personal websites on a business-related internet network or system, then it will be monitored.<br />
<br />
Many organizations whitelist particular categories in their filtering, such as banking, so there's never a question of whether they're monitoring personal information that doesn't pose a risk to the organization. Just keep in mind that anything employer-owned can be monitored.<br />
<br />
Furthermore, unlike the EU, the legal right to monitor and how far it can go in the U.S. is state-dependent. There are no federal guidelines on how monitoring employee communications should be handled, and it's completely left up to the local and state levels to decide.<br />
<br />
My article at: http://searchsecurity.techtarget.com/answer/Monitoring-employee-communications-What-do-EU-privacy-laws-sayMatthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com18tag:blogger.com,1999:blog-8294091315472179425.post-42952460710183284592018-01-28T22:28:00.002-05:002018-01-28T22:28:31.657-05:00How does the Ursnif Trojan variant exploit mouse movements?As security researchers and vendors improve the security within their products, malicious actors are continually looking for ways to bypass them and continue their efforts. This cat and mouse game continues to play out, and is best seen in how malware authors are continually developing creative ways to create new attacks or workarounds. Many times, these techniques are very creative and, with a new variant of the Ursnif Trojan, we saw attackers use mouse movements to decrypt and evade sandbox detection.<br />
<br />
Sandboxes are used to validate that downloaded files from the internet are safe to run on the endpoint. They're sent to the sandbox and executed on a virtual machine to determine their intended purpose. Since this can detect malware, attackers are continually looking for ways to bypass this security layer.<br />
<br />
Multiple methods have been used in the past to detect sandboxes, such as searching for VMware registry keys, virtual network adapters, or unusually low CPU and RAM counts, as well as simply sitting idle for hours to outlast the sandbox's analysis window.<br />
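As a toy illustration of those heuristics (my own sketch, not code from any real malware family), the check below flags an environment as a likely VM when it sees sparse hardware or a MAC address with a vendor prefix commonly associated with VMware adapters:

```python
# MAC address vendor prefixes (OUIs) commonly associated with VMware
# virtual adapters; treat this list as illustrative, not exhaustive.
VMWARE_OUIS = ("00:05:69", "00:0c:29", "00:50:56")

def looks_like_sandbox(cpu_count, ram_gb, mac_address):
    """Naive VM heuristic: sparse hardware or a VMware-style MAC."""
    if cpu_count is not None and cpu_count <= 2:
        return True  # analysis VMs are often given few CPU cores
    if ram_gb is not None and ram_gb <= 2:
        return True  # ...and little RAM
    if mac_address.lower().startswith(VMWARE_OUIS):
        return True  # virtual NIC betrays the hypervisor
    return False
```

Real malware uses many more, and more reliable, signals -- which is exactly why defenders configure sandboxes to report plausible hardware.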
<br />
Sitting idle is an effective way to avoid sandboxes, since scans rarely last for hours, and the malware withholds its malicious actions if it's tipped off by any of these indicators. This allows the files to enter your network where, like a Trojan horse, they'd wreak havoc.<br />
<br />
The Ursnif Trojan's spin on sandbox detection is to use the previous and current mouse pointer locations to validate that it's not sitting in a sandbox. The technique, discovered by Forcepoint Security Labs, takes the delta between these pointer locations and uses it to create a base seed that drives decryption.<br />
<br />
The Ursnif Trojan iterates through candidate seeds to decipher the key and, once a candidate matches the proper checksum -- essentially a brute-force search -- the malware executes the remainder of the code. In a sandbox, the delta value of the mouse movement is always zero, so the malware can never derive the proper key from that starting point. Since this is the case, it will never execute within a sandboxed environment.<br />
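The scheme can be sketched in Python, with a toy XOR cipher, a SHA-256 checksum and a seed offset standing in for Ursnif's actual routines -- all three are illustrative assumptions, not the real implementation. The key property survives the simplification: when the pointer never moves, as in most automated sandboxes, the delta-derived seed is zero and the payload can never be recovered.

```python
import hashlib

def toy_decrypt(blob, seed):
    # stand-in XOR cipher keyed on the candidate seed
    key = hashlib.md5(str(seed).encode()).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def recover_payload(blob, payload_sha256, positions, max_tries=1000):
    """Derive a base seed from the delta between the last two mouse
    positions, then brute-force nearby seeds until the checksum matches."""
    (x1, y1), (x2, y2) = positions[-2], positions[-1]
    delta = abs(x2 - x1) + abs(y2 - y1)
    if delta == 0:
        return None  # sandbox: pointer never moved, key is unreachable
    for offset in range(max_tries):
        candidate = toy_decrypt(blob, delta + offset)
        if hashlib.sha256(candidate).hexdigest() == payload_sha256:
            return candidate  # checksum matched: real user environment
    return None
```

Because the checksum only matches the genuine plaintext, an analyst replaying the binary with a motionless pointer sees nothing but failed decryption attempts.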
<br />
Read the rest of my article here: <a href="http://searchsecurity.techtarget.com/answer/How-does-the-Ursnif-Trojan-variant-exploit-mouse-movements">http://searchsecurity.techtarget.com/answer/How-does-the-Ursnif-Trojan-variant-exploit-mouse-movements</a>Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com6tag:blogger.com,1999:blog-8294091315472179425.post-62644690764859041592018-01-28T22:27:00.000-05:002018-01-28T22:27:02.290-05:00Flash's end of life: How should security teams prepare?Whether you're a fan of Adobe Flash or not, it has been a building block for interactive content on the web, and we must acknowledge what it has accomplished before talking about its eventual removal from the internet. The Flash plug-in helped usher in a new age of web browsing and, at the same time, was a frequent target for vulnerabilities and exploits within browsers.<br />
<br />
As HTML5 -- now a W3C standard -- grows in popularity, use of the once-dominant Flash is diminishing. HTML5 enables a more secure and efficient browsing experience that works across both mobile and desktop platforms.<br />
<br />
Adobe is aware that, even though Flash is steadily declining, many sites still rely on its technology to function; therefore, Adobe has set Flash's end of life for 2020. The company knew it needed to give clients currently using its software the proper lead time to migrate to other software to run their applications before pulling the plug.<br />
<br />
Adobe itself has encouraged those using Flash to migrate any existing Flash content to new open formats. Until the end-of-life date, Adobe has said it will continue to support Flash with regular security patches, features and capabilities, after which it will stop updating and distributing the software. Hearing this, I get the feeling that Adobe will keep Flash on life support for a while before completely pulling the plug on the project altogether.<br />
<br />
In order to not be caught off guard when Flash's end of life is official, security teams should be aware of which applications in their organization are currently using Flash, and then create migration paths to have them updated to HTML5 or other open standards. Even if there might be small portions of support after 2020, you never want to be running end-of-life code, especially code that has historically had security vulnerabilities.<br />
<br />
Also, security teams should take notice of which desktops are currently using the Flash plug-in and attempt to have it removed around this time. Since Flash acceptance has declined, and will continue to take a nose-dive after this news, there should be less need for the Flash plug-in moving forward.<br />
<br />
You should prepare for Flash's end of life by taking stock of your systems; keep the plug-in only where users must reach sites that haven't migrated away from Flash yet, and remove it everywhere else. By following the principle of least privilege and installing only the software that's needed, you limit the attack surface.<br />
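Taking stock can be partially automated. The sketch below checks a machine for Flash plug-in artifacts at a few commonly cited install locations; the path list is an illustrative assumption and should be adjusted for your own fleet and inventory tooling:

```python
import os

# Locations where the Flash plug-in has commonly been installed;
# these are illustrative assumptions, not an exhaustive inventory.
CANDIDATE_PATHS = [
    r"C:\Windows\System32\Macromed\Flash",
    r"C:\Windows\SysWOW64\Macromed\Flash",
    "/Library/Internet Plug-Ins/Flash Player.plugin",
    "/usr/lib/mozilla/plugins/libflashplayer.so",
]

def find_flash_installs(paths=CANDIDATE_PATHS):
    """Return the subset of candidate paths present on this machine."""
    return [p for p in paths if os.path.exists(p)]
```

Run across the fleet via your existing endpoint-management agent, this gives a quick hit list of machines that still need the plug-in removed.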
<br />
Eventually, Flash won't be supported at all, and if bugs are then found within the software, attackers could exploit them -- for example, in phishing attacks that target users of sites built around Flash that haven't migrated away. If you don't need it, don't install it.<br />
<br />
Read the rest of my article here: <a href="http://searchsecurity.techtarget.com/answer/Flashs-end-of-life-How-should-security-teams-prepare">http://searchsecurity.techtarget.com/answer/Flashs-end-of-life-How-should-security-teams-prepare</a>Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com162tag:blogger.com,1999:blog-8294091315472179425.post-52611057659047683492018-01-28T22:25:00.000-05:002018-01-28T22:25:10.276-05:00How does a private bug bounty program compare to a public program?It really depends on what you're looking to offer and receive out of your bug bounty program. There are differences between a public and private bug bounty; normally, we see programs start as private, and then work their way to public. This isn't always the case, but most of the time, organizations will open a private bug bounty by inviting a subset of security researchers in order to test the waters, before having it publicly available to the community.<br />
<br />
There are a few things to consider before launching a public bug bounty. There's going to be a testing period with your application, and before you call down the thunder from the internet abroad, it's wise to work with a group of skilled researchers or an organization that specializes in this area to validate your processes and procedures.<br />
<br />
Many times, organizations aren't comfortable opening this to the public, and they tend to limit the scope of the testing and who can test; your risk appetite will reduce the number of tests and also limit the vulnerabilities that can be found within the application. Many organizations want to validate their security posture, use external resources to test their security and supplement this testing to find vulnerabilities before malicious actors do.<br />
<br />
Before flipping from a private to a public bug bounty program, there are a few things to consider. First, open the program to researchers or organizations that are tested and trusted. You don't want to go to just anyone right away, as vulnerabilities could cost you your reputation and revenue if they are found.<br />
<br />
Since many of these researchers are doing this for financial gain, you need to have a firm grip on your payout structure within the private bug bounty to better understand how to use it if it goes public. Are your applications so insecure that you'll be paying out numerous bounties at a high rate? Understanding your payout structure upfront will help you maintain a manageable bug bounty program.<br />
<br />
Before you go public with a bug bounty program, you also need a good reason to make the program public. What is the end goal of going public versus keeping it private? If you want to find vulnerabilities, and you have a process to do this internally, then maybe a private vulnerability program is right for you. If you already have a vulnerability management process in place and are performing static and dynamic analysis, but want to supplement that with additional manual testing from a larger community, then public testing might be what you're looking for.<br />
<br />
Lastly, it's very important to have a bug bounty rules of engagement page on your site or application to let participants know how to act, what to expect and the rewards for each bug. It will also help to let researchers know what to expect when it comes to how bugs should be submitted using responsible disclosure practices.<br />
<br />
Many sites have bug bounties now, but just because you open yours publicly doesn't mean you'll have a horde of white hat hackers crashing through your site to search for bugs. Determining the right bounty amounts, the sections of code you'd like tested and how to respond operationally when you start seeing attacks is important to your bug bounty submissions and your overall day-to-day operations.<br />
<br />
Read the rest of my article here:<a href="http://searchsecurity.techtarget.com/answer/How-does-a-private-bug-bounty-program-compare-to-a-public-program"> http://searchsecurity.techtarget.com/answer/How-does-a-private-bug-bounty-program-compare-to-a-public-program</a>Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com25tag:blogger.com,1999:blog-8294091315472179425.post-9305928941284939422018-01-28T22:22:00.002-05:002018-01-28T22:22:27.589-05:00WoSign certificates: What happens when Google Chrome removes trust?The certificate authority WoSign and its subsidiary StartCom will no longer be trusted by Google as of its Chrome 61 release. Over the past year, Google has slowly been phasing out trust for StartCom and WoSign certificates, and as of September 2017, trust has been completely removed.<br />
<br />
As a certificate authority (CA), having the support of browsers is mandatory for your business to thrive, and without the support of Chrome and other browsers, WoSign is in danger.<br />
<br />
Google Chrome isn't the only browser taking a stance against WoSign certificates, as other large web browsers have either deprecated support for them or are in the midst of removing them. Microsoft, Mozilla and Apple have all taken action against WoSign over what's being called continued negligent security practices by the Chinese company. The only browser currently not taking action against WoSign is Opera -- though it should be noted that Opera was purchased last year by a Chinese investment consortium named Golden Brick Silk Road.<br />
<br />
There are many reasons WoSign certificates are considered unsafe by the major web browsers. These issues include back-dated and long-lived SHA-1 certificates; certificates identical except for their NotBefore dates; and certificates with duplicate serial numbers.<br />
<br />
Google has gone back and forth with WoSign regarding these issues, and WoSign released a statement regarding how they're handling the situation.<br />
<br />
As part of the process, Qihoo 360, a Chinese security technology company and majority owner of WoSign, agreed last year to replace WoSign CEO Richard Wang as a show of good faith that it's looking to get a better understanding of the industry and regain trust. It seems this wasn't done; WoSign still hasn't named a new CEO, and Wang has been working in a different role in the business.<br />
<br />
Also, WoSign said it recently passed a security assessment, and it is asking to remain a trusted CA. It's not likely that this will turn things around; it might be too little, too late for the Chinese CA.<br />
<br />
WoSign issues free certificates and, because of this, seems to have a large user base in China. If you're a customer of WoSign or StartCom, it would be beneficial to replace your certificates with ones from a fully trusted provider. If a switch is not made, you could see issues with communication, VPNs or connections to sites that are using these certificates on their web servers.<br />
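To inventory affected endpoints, you can extract each server's certificate issuer (for instance, with `openssl s_client -connect host:443 | openssl x509 -noout -issuer`) and match it against the distrusted CA names. A minimal matching helper in Python, assuming the issuer strings have already been extracted:

```python
# CA name fragments distrusted by the major browsers in this episode
DISTRUSTED_CAS = ("wosign", "startcom")

def is_distrusted(issuer):
    """Return True if a certificate issuer string names a distrusted CA."""
    lowered = issuer.lower()
    return any(name in lowered for name in DISTRUSTED_CAS)
```

Feed it the issuer line from every internal endpoint and you have a punch list of certificates to replace before Chrome 61 reaches your users.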
<br />
Read the rest of the article here: <a href="http://searchsecurity.techtarget.com/answer/WoSign-certificates-What-happens-when-Google-Chrome-removes-trust">http://searchsecurity.techtarget.com/answer/WoSign-certificates-What-happens-when-Google-Chrome-removes-trust</a>Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com139tag:blogger.com,1999:blog-8294091315472179425.post-90696984017582624292018-01-28T22:20:00.003-05:002018-01-28T22:20:25.814-05:00How can peer group analysis address malicious apps?Google has had issues in the past with malicious Android apps found in the Google Play Store. The company has since taken to machine learning, peer group analysis and Google Play Protect to improve the security and privacy of these apps. By utilizing these techniques, Google is taking a proactive approach to limit attackers from publishing apps that could take advantage of users after being installed on their mobile devices. This article will explain how these actions can increase security, while asking a few other questions regarding their vetting process.<br />
<br />
By using machine learning and peer grouping, Google looks to discover a malicious app by comparing its functionality to similar apps, then alerting when something is out of the norm for its category. Machine learning helps review each app, as well as the functions and privacy settings being used by other apps in the Google Play Store.<br />
<br />
The peer grouping creates somewhat of a category for these apps and searches for anomalies in new apps coming into the store. This can baseline the apps for what is considered normal activity, and then compare that activity to a standard. In theory, these comparable apps should be similar in fashion, and abnormalities are then flagged for review by Google.<br />
<br />
An example of this would be a flashlight app that needs access to your contacts, GPS and camera. There is essentially no need for this app to have permission to access these functions and, thus, it would be flagged by peer group analysis as something outside the norm.<br />
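A simplified version of that peer-group check can be expressed in a few lines of Python. This is my own illustrative sketch of the idea, not Google's implementation: permissions requested by an app but rarely seen among its category peers get flagged for review.

```python
from collections import Counter

def flag_permission_anomalies(app_permissions, peer_permissions, threshold=0.2):
    """Flag permissions requested by fewer than `threshold` of peer apps.

    app_permissions: set of permissions for the app under review
    peer_permissions: list of permission sets, one per peer app
    """
    counts = Counter()
    for perms in peer_permissions:
        counts.update(perms)
    n = len(peer_permissions)
    # a permission is anomalous if almost no peer in the category asks for it
    return {p for p in app_permissions if counts[p] / n < threshold}
```

For the flashlight example, contacts and location access would be flagged immediately, since essentially no peer in the category requests them.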
<br />
Personally, I'm a big fan of using machine learning to assist with discovery and to guide engineers toward better decisions, but I also believe it's neither a standard nor a framework.<br />
<br />
We're also seeing this machine learning functionality used to improve security and privacy within the Google ecosystem of apps. This is a fantastic way to determine potential issues within the app store, but I think requiring particular standards to be in place before apps are allowed to be published may be a better first step in achieving enhanced privacy.<br />
<br />
Such standards could include enforcing NIST and OWASP Mobile standards, or validating that all EU apps meet the General Data Protection Regulation -- or, if there's health-related information in the app, that it passes HIPAA-related standards. This would be difficult to enforce, since there might be multiple categories and frameworks the app has to adhere to, but this would take a security-first approach when putting an app through the store for vetting.<br />
<br />
Read the rest of the article here:<a href="http://searchsecurity.techtarget.com/answer/How-can-peer-group-analysis-address-malicious-apps"> http://searchsecurity.techtarget.com/answer/How-can-peer-group-analysis-address-malicious-apps</a>Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com45tag:blogger.com,1999:blog-8294091315472179425.post-14877019321027850422018-01-28T22:14:00.001-05:002018-01-28T22:16:58.267-05:00What security risks does rapid elasticity bring to the cloud?<div style="background-color: white; color: #666666; line-height: 1.75em; margin-bottom: 1.5em; margin-top: 1.5em;">
<span style="font-family: inherit;">One of the major benefits of anything living in the cloud is the ability to measure resources and use rapid elasticity to quickly scale as the environment demands. The days of being locked into physical hardware are over, and the benefits of rapid elasticity in cloud computing are attractive to many organizations.</span></div>
<div style="background-color: white; color: #666666; line-height: 1.75em; margin-bottom: 1.5em; margin-top: 1.5em;">
<span style="font-family: inherit;">There are some concerns -- stemming more from a lack of education about cloud computing -- that an organization needs to be aware of before using these features. Like anything else, the cloud can be deployed securely, but without understanding how to implement these services, an organization can find itself at risk.</span></div>
<div style="background-color: white; color: #666666; line-height: 1.75em; margin-bottom: 1.5em; margin-top: 1.5em;">
<span style="font-family: inherit;">With measured services, which are cloud services that are monitored and measured by the provider according to usage, an organization can leverage resource metering to perform particular automated actions. These systems can expand based on thresholds as part of an on-demand service model.</span></div>
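As a concrete illustration of threshold-driven scaling (an assumed, simplified policy -- not any provider's actual autoscaler), the function below decides a new instance count from measured CPU utilization:

```python
def scale_decision(cpu_utilization, current_instances, min_instances=2,
                   max_instances=20, scale_up_at=0.80, scale_down_at=0.30):
    """Return the new instance count for a simple threshold policy.

    Scales out by one instance above the high-water mark and in by one
    below the low-water mark, clamped to the allowed range.
    """
    if cpu_utilization > scale_up_at:
        return min(current_instances + 1, max_instances)
    if cpu_utilization < scale_down_at:
        return max(current_instances - 1, min_instances)
    return current_instances
```

Real policies add cooldown periods and multi-metric checks; the security-relevant point is that every instance a rule like this spins up must inherit the same hardened configuration as the rest of the fleet.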
<div style="background-color: white; color: #666666; line-height: 1.75em; margin-bottom: 1.5em; margin-top: 1.5em;">
<span style="font-family: inherit;">As a cloud footprint can swell or deflate with demand, there are multiple security concerns to consider with the fluctuating infrastructure of potential PaaS systems. Managing data in the cloud needs <a href="http://www.computerweekly.com/news/450418882/Ransomware-Protect-yourself-with-good-backup-and-cloud-policies" style="color: #00b3ac; text-decoration-line: none; transition: color 0.2s;">proper policy and configuration</a> to validate its security. This is always a concern, but there are some unique use cases when it comes to cloud security because of the elastic nature of the infrastructure.</span></div>
<div style="background-color: white; color: #666666; line-height: 1.75em; margin-bottom: 1.5em; margin-top: 1.5em;">
<span style="font-family: inherit;">Read the rest of the article here: http://searchcloudsecurity.techtarget.com/answer/What-security-risks-does-rapid-elasticity-bring-to-the-cloud</span></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com13tag:blogger.com,1999:blog-8294091315472179425.post-84162400525019384592017-10-16T11:30:00.002-04:002017-10-16T11:30:33.420-04:00Open Letter to Congressman Tom Graves on the “Active Cyber Defense Certainty Act”<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="MsoNormal">
To the Honorable Tom Graves:<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
In November of 2015, I was invited to now-retired Congressman Steve Israel’s Cyber Consortium to participate with other security professionals in the community in discussing cyber security-related issues affecting both our organizations and our communities. During this meeting, you were invited to speak about your thoughts on cyber security, the issues you’re dealing with in Congress and your support for the CISA bill. Listening to you describe your concerns over the OPM breach, I noticed how seriously you take the issue of cyber security. I didn’t personally agree with some of the stances taken in the room, but you don’t have to agree on everything to initiate progress. I applaud your dedication and attention to cyber security and will continue to be interested in your thoughts, even if we might have differing opinions. That being said, I have concerns with your latest bill proposed to Congress: the “<i>Active Cyber Defense Certainty Act</i>”.<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Each time I see someone propose reform to the “Computer Fraud and Abuse Act,” it piques my interest. Evolving our laws alongside the ever-changing cyber industry is both necessary and incredibly difficult to accomplish, and I appreciate your effort to modernize them. With that in mind, I’m concerned that the newly proposed ACDC bill crosses some boundaries I’d like to bring to your attention.<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
As you’re most likely aware, many of the cyber incidents occurring today are launched from systems that criminals have already compromised and are using as a guise for their attacks. An attacker could be proxied through multiple systems across various countries, with the face of the attack appearing to be an innocent bystander. Approving a “hack back” against such an entity puts this unknowing victim in the middle of a complicated and intrusive scenario. Not only has the victim already been compromised by a malicious entity, but it is now being legally attacked by others who assume it has done them harm. Congressman Graves, these devices could be systems that assist with our economy’s growth, hold personal records that affect the privacy of our citizens’ data or even aid our healthcare industry. The collateral damage that could occur from hack backs is unknown and risky. Essentially, if someone determines they were compromised by a system in the United States and starts the process of hacking back, the system’s owners might notice the attack and begin hacking them back in turn. This could create a perpetual hacking battle that wasn’t even started by the actors involved. In theory, this method would cause disarray all over the internet, with a system unknowingly used as a front by a criminal sparking a hacking war between two innocent organizations.</div>
<div class="MsoNormal">
<o:p></o:p></div>
<div class="MsoNormal">
To interrupt these systems without oversight is dangerous
for us all. In reading through the bill I noticed that these cyber defense
techniques should only be used by “<i>qualified
defenders with a high degree of confidence of attribution</i>”. From this
statement, what qualifications must a defender hold before attempting
to hack back? And what constitutes a high degree of confidence in
attribution? Since this bill only applies within American jurisdiction, I
feel attackers will sidestep this threat by using foreign fronts to launch
their attacks, getting around being “hacked back”. This limits the bill’s
effectiveness as it’s currently written. The ability to track, launch code or
use beaconing technology to assist with attribution of an attack is dangerous
to our privacy. I agree that this is an issue, one that needs to be dealt
with, but it should be dealt with directly by law enforcement, not the
citizens themselves. I’ve read the requirement that the FBI’s National Cyber
Investigative Joint Task Force will first review the incident before the “hack
back” can occur, which offers a certain level of oversight, but I
don’t think it’s enough. I understand the resources within the
FBI are stretched, but leaving this in the hands of those affected by a breach
allows emotions to get involved. This is one reason we call the police when
there’s a dispute in our local communities: they’re trained, have a third-party
perspective and attempt not to make it personal. I fear that those hacking
back will act on emotion, and that this could lead to carelessness and
neglect that bring about greater damage. <o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Lastly, technology is always changing, and confident
attribution is incredibly difficult. If an attack is seen from a
particular public IP address, it’s possible that the NAT’d (Network Address
Translation) source is shielding multiple other internal addresses.
Attacking this address gives no attribution as to where the data or
attacks are actually sourced. Also, given the fluid environment of cloud-based
systems, a malicious actor can launch an attack from a public CSP (cloud
service provider) and quickly erase attribution of the true source. I noticed
the language within the bill referencing “<i>types of tools and techniques that defenders
can use</i>” to assist with hacking back. Will there be an approved list of
tools and techniques that active defenders are required to use to stay within
the boundaries of this law? Or will active defenders be able to use the tools
of their choice? Depending on the tools and how they’re used, they could cause
unexpected damage to the systems being “hacked back”. Finally, there’s mention
of removing the stolen data if found, and I’m concerned defenders will not be
careful with this deletion and could cause major damage to systems
legitimately hosting other applications. Deleting this data could also
interfere with investigations and forensics, and might not solve the issue
long term. Stolen data is digital; just because it’s deleted in
one place doesn’t mean it’s been removed permanently. <o:p></o:p></div>
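<div class="MsoNormal">
As a brief illustration of the NAT attribution problem described above (a sketch of my own, not part of the bill or any tool; all addresses are hypothetical and drawn from the reserved documentation ranges):

```python
# Hypothetical NAT translation table: several internal hosts share one
# public IP address. An outside observer who sees an attack from the
# public address cannot tell which internal machine was responsible.
nat_table = {
    ("10.0.0.5", 50312): "203.0.113.7",   # compromised workstation
    ("10.0.0.9", 50990): "203.0.113.7",   # payroll server
    ("10.0.0.23", 51417): "203.0.113.7",  # receptionist desktop
}

observed_attacker = "203.0.113.7"
# Every internal host behind the NAT is an equally plausible source.
candidates = [src for src, pub in nat_table.items() if pub == observed_attacker]
print(len(candidates))  # 3 -- the public IP alone identifies no one
```

A "hack back" aimed at that public address therefore risks hitting every one of those machines, including the innocent ones.
</div>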
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Congressman Graves, I respect what you’re doing for our
country, but I’m concerned with the methods in place to protect the privacy of
the data and systems being actively hacked by defenders. I’m anxious about the
overzealous vigilantism we might see from defenders looking to defend
themselves, their systems or their stolen data. You’re an outside-the-box
thinker who is passionate about the protection of our country, and I admire
that, but the methods could cause more harm than good as the bill is
currently written. I implore you to reconsider the prospect of having
a nation of defenders actively attempting to restore their data from sources
that were most likely being used without their owners’ consent. The unintended
privacy consequences, and the potential destruction of systems and even life,
are too important not to mention. If I could advise in any way, it would be
to have our country focus on the fundamentals of cyber security before it
starts writing licenses to hack. <o:p></o:p></div>
<div class="MsoNormal">
Thank you for your service and your continued efforts to
protect our nation from future cyber events. <o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Sincerely,<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<div class="MsoNormal">
Matthew Pascucci<o:p></o:p></div>
<div class="MsoNormal">
<br /></div>
<br />
<div class="MsoNormal">
<br /></div>
</div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com14tag:blogger.com,1999:blog-8294091315472179425.post-5189331921108415922017-09-11T16:54:00.000-04:002017-09-11T16:54:19.534-04:00The Equifax breach - Now what?<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
<span style="font-family: inherit;"><span style="background-color: white;">By now we’re all probably very aware of the </span><a href="http://www.npr.org/2017/09/08/549549935/equifax-breach-exposes-personal-data-of-143-million-people" rel="noopener" style="background-color: white; margin: 0px; padding: 0px;" target="_blank">massive Equifax hack</a><span style="background-color: white;"> that exposed 143 million Americans’ social security numbers, birth dates, addresses and drivers’ licenses. A small subset of credit cards and personal identifying documents was also accessed, affecting an unknown number of Canadian and UK citizens as well. </span><span id="more-3173" style="background-color: white; margin: 0px; padding: 0px;"></span><span style="background-color: white;">According to a statement released by Equifax, the breach occurred from mid-May through July 2017. They discovered the breach on July 29</span><span style="background-color: white; line-height: 0; margin: 0px; padding: 0px; position: relative; top: -0.5em; vertical-align: baseline;">th</span><span style="background-color: white;">, which means attackers were actively working for well over a month, if not more, exfiltrating this treasure trove of data. Equifax also stated that criminals exploited a vulnerability in their web application to gain access to sensitive data as the means of compromising their site.</span></span></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
<span style="font-family: inherit;">Here are a few of my thoughts on the Equifax breach: </span><a href="https://www.ccsinet.com/blog/equifax-breach-what-now/"><span style="color: black; font-family: inherit;">https://www.ccsinet.com/blog/equifax-breach-what-now/</span></a></div>
<div style="text-align: left;">
<span style="font-family: inherit;"><br /></span></div>
<div style="text-align: left;">
<span style="font-family: inherit;">Also, here's my bald head on CBS news talking about it: </span><a href="http://newyork.cbslocal.com/2017/09/08/equifax-breach-fallout/amp/"><span style="color: black; font-family: inherit;">http://newyork.cbslocal.com/2017/09/08/equifax-breach-fallout/amp/</span></a></div>
<div style="text-align: left;">
<br /></div>
<br /></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com4tag:blogger.com,1999:blog-8294091315472179425.post-18319939941798022202017-09-07T16:02:00.000-04:002017-09-07T16:02:34.399-04:00How do network management systems simplify security?<div dir="ltr" style="text-align: left;" trbidi="on">
Today, many network management systems aim to increase visibility into the network and focus more on security. Since security is often left to the administrators of each department, having additional security built into these tools is becoming common.<br />
<br />
Network management systems that provide security insight are useful tools for your networking team. However, there are a few things to consider before implementing one.<br />
<br />
From a security perspective, monitoring a network is important because, as all data has to run through it, it's a good place to look for anomalies and incidents. There has also been a shift in the security field to make behavior analysis the norm when monitoring for malicious activity.<br />
<br />
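As a minimal sketch of what threshold-based monitoring for anomalies can look like (the function name, baseline and readings below are hypothetical, not taken from any particular NMS product):

```python
# Hypothetical sketch: flag intervals whose bandwidth readings exceed a
# configured multiple of the normal baseline, the way a network management
# system might surface a DDoS or worm outbreak. All numbers are made up.

def flag_anomalies(samples_mbps, baseline_mbps, threshold_factor=3.0):
    """Return indices of samples exceeding threshold_factor x baseline."""
    limit = baseline_mbps * threshold_factor
    return [i for i, mbps in enumerate(samples_mbps) if mbps > limit]

# Six interval readings in Mbps; the spike in the middle trips the alert.
readings = [90, 110, 95, 870, 905, 100]
print(flag_anomalies(readings, baseline_mbps=100))  # [3, 4]
```

In practice an NMS derives the baseline dynamically per interface rather than from a fixed number, but the alerting principle is the same.<br />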
There are other things to look for in network management systems that help administrators detect threats, and one of them is performance. If you're able to gauge the performance of your equipment or applications, you're better able to detect incidents that place load on those systems, based on the thresholds they're configured with. This includes the bandwidth usage of systems that might experience slowdowns due to distributed denial-of-service attacks or a worm outbreak within the network. Read more of my article at the link below:<br />
<br />
<a href="http://searchsecurity.techtarget.com/answer/How-do-network-management-systems-simplify-security">http://searchsecurity.techtarget.com/answer/How-do-network-management-systems-simplify-security</a></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com5tag:blogger.com,1999:blog-8294091315472179425.post-29114891037495674012017-09-07T15:59:00.001-04:002017-09-07T15:59:09.640-04:00How can enterprises secure encrypted traffic from cloud applications?<div dir="ltr" style="text-align: left;" trbidi="on">
With many applications being utilized in a SaaS model, it's important to encrypt the traffic between end users and applications. When personal and sensitive data is transferred, processed or stored off local premises, the connections between these points need to be secured.<br />
<br />
Many large websites now default to SSL/TLS, increasing the amount of encrypted traffic on the internet. This is a plus for data security, but malicious actors can and do take advantage of this encryption for their malware, spoofing and C2 servers. Flexible, well-designed and inexpensive services from organizations like Let's Encrypt and Amazon Web Services are also put to malicious use by attackers. It's for this reason that enterprises need to make monitoring of encrypted traffic, and decryption appliances, mandatory in their networks.<br />
<br />
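To make the monitoring point concrete, here is a small sketch of my own (not from the article) showing how a passive monitor can at least recognize TLS sessions by their record-layer header, even when it cannot decrypt them:

```python
# Illustrative heuristic: a TLS stream opens with a record-layer header of
# content type 0x16 (handshake) followed by a 0x03,0x0X protocol version.
# A monitor can use this to count encrypted sessions without decrypting them.

def looks_like_tls_handshake(first_bytes: bytes) -> bool:
    """Heuristic check for a TLS ClientHello at the start of a TCP stream."""
    return (
        len(first_bytes) >= 3
        and first_bytes[0] == 0x16              # record type: handshake
        and first_bytes[1] == 0x03              # major version byte (SSL3/TLS)
        and first_bytes[2] in (0x00, 0x01, 0x02, 0x03, 0x04)
    )

print(looks_like_tls_handshake(b"\x16\x03\x01\x02\x00"))  # True
print(looks_like_tls_handshake(b"GET / HTTP/1.1\r\n"))    # False
```

A real decryption appliance goes much further (SNI inspection, certificate validation, full decryption at a trusted middlebox), but even this cheap check lets a monitor measure how much of its traffic is encrypted.<br />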
The recent increase in SSL/TLS traffic within networks is cause for both delight and concern. The security community has seen the need for encryption, but so have malicious actors. From a network security standpoint, it's important to be cautious when dealing with encrypted traffic. Its use is only going to grow from here, and the majority of internet traffic will move toward end-to-end encryption. Read more of my article at the link below:<br />
<br />
<a href="http://searchsecurity.techtarget.com/answer/How-can-enterprises-secure-encrypted-traffic-from-cloud-applications">http://searchsecurity.techtarget.com/answer/How-can-enterprises-secure-encrypted-traffic-from-cloud-applications</a><br />
<br /></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com13tag:blogger.com,1999:blog-8294091315472179425.post-24332042378932172992017-09-07T15:57:00.000-04:002017-09-07T15:57:05.896-04:00Should an enterprise BYOD strategy allow the use of Gmail?<div dir="ltr" style="text-align: left;" trbidi="on">
Creating separate accounts for business use on a third-party platform can be risky, but it depends on the context.<br />
<br />
Google offers organizations the ability to host their mail on its platform, and it also offers additional features to manage these accounts -- though these features are not part of Google's free service. There are privacy concerns regarding enterprise use of Google business accounts, but having employees use personal Gmail accounts for business purposes is a separate matter.<br />
<br />
This enterprise BYOD strategy is a risky idea. Using a free service outside of the organization's control and making it the recommended communication method is dangerous. The organization will have no control over the data being sent or the security policy wrapped around the communications. There is no data loss prevention applied to what's being sent, there's no web filtering or antiphishing protection, and the forensic data and logging of the email are lost.<br />
<br />
Essentially, creating a separate personal account as part of an enterprise BYOD strategy actually severely limits BYOD security, and organizations should avoid doing it.<br />
<br />
<a href="http://searchsecurity.techtarget.com/answer/Should-an-enterprise-BYOD-strategy-allow-the-use-of-Gmail">http://searchsecurity.techtarget.com/answer/Should-an-enterprise-BYOD-strategy-allow-the-use-of-Gmail</a></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com9tag:blogger.com,1999:blog-8294091315472179425.post-70359868698868765142017-09-07T15:55:00.003-04:002017-09-07T15:55:53.846-04:00What should you do when third-party compliance is failing?<div dir="ltr" style="text-align: left;" trbidi="on">
Having your data held, processed or transmitted by a third party always carries security risk. Essentially, you have to trust an organization other than your own with the security and care of your data.<br />
<br />
The third party or business partner could perform security up to or even beyond your standards, but there's always the possibility for negligence. If there's even the slightest concern that a third party is being careless with the security of your organization's data, you should act immediately.<br />
<br />
Before giving your data to a third party or business partner, there should be a thorough review of the partner and how it performs security. This can include security questionnaires, on-site visits, audits of the third party's environment and a review of its regulatory certifications. Vendor management has become one of the largest areas of concern when it comes to data governance, and it's a growing risk if due diligence isn't done upfront. Read more of my article at the link below:<br />
<br />
<a href="http://searchsecurity.techtarget.com/answer/What-should-you-do-when-third-party-compliance-is-failing">http://searchsecurity.techtarget.com/answer/What-should-you-do-when-third-party-compliance-is-failing</a></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com1tag:blogger.com,1999:blog-8294091315472179425.post-40361514275135290902017-09-01T08:59:00.002-04:002017-09-01T08:59:28.063-04:00Security Researchers and Responsible Vulnerability Disclosure <div dir="ltr" style="text-align: left;" trbidi="on">
I was asked to comment on the following article regarding responsible disclosure of vulnerabilities by security researchers. This debate has been resurrected over the past couple of months, and in my opinion there's work to be done on both sides. Below is the article in which I was quoted:<br />
<br />
<a href="https://www.tripwire.com/state-of-security/security-data-protection/security-researchers-protect-organizations-means-necessary/">https://www.tripwire.com/state-of-security/security-data-protection/security-researchers-protect-organizations-means-necessary/</a></div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com9tag:blogger.com,1999:blog-8294091315472179425.post-91634369775631532922017-08-29T10:48:00.003-04:002017-08-29T10:48:48.081-04:00Gotta Respect the Hacker Hustle<div dir="ltr" style="text-align: left;" trbidi="on">
Many times you'll see attackers exploit low-hanging fruit to breach a network, but other times they really have to work to get into a target. That effort has to be respected. I'm not saying hacking into an organization for malicious gain is acceptable, but the skills have to be respected. If you can't respect your competition, there's a good chance you'll be beaten by them.<br />
<br />
Here's an article I was quoted in regarding the HBO attackers:<br />
<br />
<div style="margin-bottom: .0001pt; margin: 0in;">
<span style="font-family: "Helvetica","sans-serif";"><a href="https://www.scmagazineuk.com/hbo-breach-accomplished-with-hard-work-by-hacker-poor-security-practices-by-victim/article/680639/">https://www.scmagazineuk.com/hbo-breach-accomplished-with-hard-work-by-hacker-poor-security-practices-by-victim/article/680639/</a></span><span style="font-family: "Helvetica","sans-serif"; font-size: 9.0pt;"><o:p></o:p></span></div>
</div>
Matthew Pascuccihttp://www.blogger.com/profile/07395762527897221899noreply@blogger.com2