Joomla is a popular content management system that accounts for almost 3% of all websites on the internet, and it has been downloaded over 84 million times. A static analysis firm called RIPS Technologies recently found an LDAP injection vulnerability in it. The flaw sat in the Joomla code for over eight years, and the Joomla project recently released a patch to remediate the blind LDAP injection.
This type of attack takes place through the login pages of sites that use LDAP for authentication, and it can compromise data or applications by abusing the entries submitted to the software in an attempt to extract, view or change data.
An LDAP injection attack, especially a blind one like the attack used here, aims to abuse the authentication process in which credentials are passed to the LDAP server, which stores users' usernames and passwords in its directory. With this particular vulnerability, there's a complete lack of input sanitization, enabling an attacker's script to rotate attempts through the login field and slowly extract a user's credentials -- this is the blind part of the injection -- and it is usually aimed at an administrator account to get complete access to the Joomla control panel.
With this vulnerability, an attacker can submit LDAP query syntax into the login form in an attempt to slowly pull data out of the LDAP directory one small request at a time. When the scripted attack runs, it's able to quickly submit multiple login attempts, and it can eventually work through all the possible characters in the credentials until it completes the password. Since the attack is scripted and aimed at the system's login form, it makes quick work of Joomla systems that use LDAP for authentication.
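To make the mechanics concrete, here is a minimal Python sketch of that character-by-character extraction loop. It is not the exploit RIPS published: the try_login stub stands in for the vulnerable login form, the injected filter syntax is only illustrative, and the hard-coded secret simply simulates a credential stored in the directory.

import string

# Hypothetical target: the login form builds an LDAP filter from raw input, e.g.
#   (&(uid=<username>)(userPassword=<password>))
# With no sanitization, a crafted username turns the password check into a
# true/false oracle that leaks one character of the stored credential per request.

SECRET = "s3cr3t"  # stands in for the credential stored in the LDAP directory


def try_login(username: str, password: str) -> bool:
    """Stub for the vulnerable login endpoint.

    A real attack would POST these values to the login form and read the
    success/failure response; here we just evaluate the injected wildcard
    test (userPassword=<prefix>*) against a local secret.
    """
    marker = "userPassword="
    if marker in username:
        guess = username.split(marker, 1)[1].rstrip(")*")
        return SECRET.startswith(guess)
    return False


def extract_password(charset: str = string.ascii_lowercase + string.digits) -> str:
    """Recover the credential one character at a time via the boolean oracle."""
    recovered = ""
    while True:
        for ch in charset:
            # Injected username closes the uid clause and adds a prefix test,
            # e.g.  admin)(userPassword=s3*
            payload = f"admin)(userPassword={recovered + ch}*"
            if try_login(payload, "irrelevant"):
                recovered += ch
                break
        else:
            return recovered  # no character matched: the full value is recovered


if __name__ == "__main__":
    print("Extracted:", extract_password())

The point is simply that a boolean success-or-failure response, repeated a few hundred times, is enough to reconstruct a password.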
It's probably safe to say that not many Joomla sites use LDAP for authentication, but LDAP is used frequently enough for authentication elsewhere that this setup is almost certainly in use somewhere.
The first thing you should do is review whether your site is vulnerable. Anyone running Joomla versions 1.5 through 3.7.5 is vulnerable if they're using LDAP authentication on an unpatched site. A patch has since been released that specifically addresses this issue, and installing it mitigates the vulnerability.
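As a quick sanity check, a simple version comparison like the following is enough to tell whether an installation falls in the affected range. It assumes you've already pulled the version string from your own Joomla instance, and it says nothing about whether the LDAP plug-in is enabled or the patch has already been applied.

def parse_version(v: str) -> tuple:
    """Turn '3.7.5' into (3, 7, 5) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))


def is_vulnerable(installed: str) -> bool:
    """True if the version falls in the affected 1.5 - 3.7.5 range.

    This only checks the version number; it does not know whether the site
    actually uses LDAP authentication or already has the fix applied.
    """
    v = parse_version(installed)
    return parse_version("1.5") <= v <= parse_version("3.7.5")


if __name__ == "__main__":
    for version in ("1.5", "3.6.2", "3.7.5", "3.8.0"):
        print(version, "->", "in affected range" if is_vulnerable(version) else "outside affected range")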
Using these plug-ins for authentication naturally brings up the topic of multifactor authentication. Your authentication architecture should no longer rely on single-factor authentication for applications, especially public-facing ones. Adding a second factor limits the risk that a vulnerability or data leak like this one exposes usable credentials to attackers.
My article at: http://searchsecurity.techtarget.com/answer/LDAP-injection-How-can-it-be-exploited-in-an-attack
Sunday, January 28, 2018
BlueBorne vulnerabilities: Are your Bluetooth devices safe?
Last month, research firm Armis Inc. discovered a series of Bluetooth vulnerabilities that enable an attacker to remotely connect to a device without the affected user noticing.
The vulnerabilities were reported on Android, Linux, Windows and iOS devices. These vendors were all contacted to create patches for the BlueBorne vulnerabilities and worked with Armis via a responsible disclosure of the exploit. The concern now is the vast number of Bluetooth devices that might not update efficiently. This, combined with the challenge of getting the Android update out to all of that platform's manufacturers, will be the biggest hurdle in remediating the BlueBorne vulnerabilities.
The BlueBorne vulnerabilities enable attackers to perform remote code execution and man-in-the-middle attacks. They're dangerous because of the broad range of Bluetooth devices out in the wild and the ease with which an attacker can remotely connect to them and intercept traffic. With this exploit, an attacker doesn't have to be paired with the victim's device; the victim's device can be paired with something else, and it doesn't have to be set to discoverable mode. Essentially, if you have an unpatched system with Bluetooth enabled, your exposure is high.
However, the affected vendors have done a good job releasing patches for the BlueBorne vulnerabilities. Microsoft patched the bug in a July release, and Apple devices running iOS 10 aren't affected. The issue is with Android, which is historically slow to patch vulnerabilities and will have to work with its vendors to have the patch pushed down.
Likewise, the larger issue will be with all of the smart devices and internet of things devices that are installed on networks, meaning your TVs, keyboards, lightbulbs and headphones could all be vulnerable. There's probably a smaller risk of data being exposed on these devices, but they can still be used to intercept information and to propagate the issue further.
Another concern with these vulnerabilities is the possibility of a worm being created, released in a crowded area and spreading itself through devices in close proximity to each other. Particular exploits might not work on all phones, but it could still be possible given the right code and circumstances. For example, if the worm were released in a stadium or a large crowd, it could theoretically spread to any systems that haven't been properly patched.
Being able to perform code injection to take over a system, or to mount man-in-the-middle attacks that can be used to steal information, is extremely worrisome. These attacks happen inside the firewall, and the attacker never has to join your network to execute them. It's essentially a backdoor that enables attackers to compromise systems from a distance, yet from within your network.
It is extremely important that you patch all systems if you have the capability to do so, or that you disable Bluetooth devices when they're not needed.
How can Windows digital signature check be defeated?
Recently, SpecterOps researcher Matt Graeber determined that there is a way to bypass a Windows digital signature check by editing two specific registry keys. This is an important discovery because Windows uses digital signature protection to validate the authenticity of binary files as a security measure.
Digital signature protection is used by Windows and others to determine whether a file was tampered with on its way to the receiving party. Being able to validate the integrity of a received file, and that it actually came from the party that signed it, is important since digital signatures work on trust -- when a system can work around this feature, it opens the door to malicious activity.
It's also important to state that digital signatures don't secure the file; they give it a level of trust based on the private key it was signed with. If that key is stolen or used maliciously, the system will still pass the digital signature check.
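The following short Python sketch illustrates that point in the abstract. It uses the third-party cryptography package (pip install cryptography) rather than the Windows signing APIs, so treat it as a conceptual illustration of signature checks, not a model of how Authenticode works.

# A minimal illustration of what a signature check buys you -- and what it
# doesn't. If the private key leaks, an attacker can produce signatures that
# verify just as cleanly as the legitimate publisher's.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

binary = b"original vendor binary"
signature = private_key.sign(binary, padding.PKCS1v15(), hashes.SHA256())

# Integrity: any tampering with the file invalidates the signature.
try:
    public_key.verify(signature, b"tampered binary", padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("Tampered file rejected")

# But authenticity is only as strong as control of the private key: anyone
# holding it can sign malware that verifies successfully.
malware = b"malicious payload"
stolen_signature = private_key.sign(malware, padding.PKCS1v15(), hashes.SHA256())
public_key.verify(stolen_signature, malware, padding.PKCS1v15(), hashes.SHA256())
print("Malware signed with the stolen key verifies cleanly")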
Many Windows security features and security products rely on the trust and guarantees that a digital signature check brings with it. In the case of the CCleaner malware last month, it spread due to having been signed by a legitimate certificate, which led to the code being trusted by the OS. In his research report, Graeber wrote, "Subverting the trust architecture of Windows, in many cases, is also likely to subvert the efficacy of security products."
The attack focuses on two specific registry keys that, when adjusted, enable an attacker to make files appear to carry any other valid signature. This isn't done by injecting code into the system, but through registry key modification, meaning an attacker can do it remotely if they have access to the registry. It also means they must be an admin on the system, which isn't incredibly hard to escalate to if they don't already have that permission.
Locking down administrator rights to limit changes to these keys and implementing monitoring to detect when they're modified would be one way to catch this, even though it would require collecting logs from all of your systems. It's also possible that a group policy could be created to restrict access to these keys in greater detail, but these are all reactive methods for dealing with the problem.
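As a rough illustration of that reactive monitoring, here is a small, Windows-only Python sketch that polls a set of registry values and flags changes. The key paths in it are placeholders, not the keys Graeber identified, and in practice you'd lean on Windows audit events or EDR telemetry rather than a polling script.

import time
import winreg

WATCHED_KEYS = [
    # (hive, subkey, value name) -- placeholders, substitute the keys from the research
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Example\TrustProvider", "Dll"),
]


def read_value(hive, subkey, name):
    """Return the current data for a registry value, or None if it's absent."""
    try:
        with winreg.OpenKey(hive, subkey) as key:
            data, _value_type = winreg.QueryValueEx(key, name)
            return data
    except OSError:
        return None


def watch(interval_seconds=60):
    """Record a baseline, then alert whenever a watched value changes."""
    baseline = {k: read_value(*k) for k in WATCHED_KEYS}
    while True:
        time.sleep(interval_seconds)
        for k in WATCHED_KEYS:
            current = read_value(*k)
            if current != baseline[k]:
                print(f"ALERT: {k[1]}\\{k[2]} changed: {baseline[k]!r} -> {current!r}")
                baseline[k] = current


if __name__ == "__main__":
    watch()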
The issue once again comes down to trust, as this is one area that's put in place to protect you from impersonation. It also happens to be the mechanism most likely to be abused for malicious purposes, especially by malware looking to bypass internal controls and slip past application whitelisting, such as Microsoft's Windows Defender Device Guard.
There need to be more procedures around digital signature protection to protect against malicious files entering your endpoints, such as reputation services, sandboxes and next-generation malware protection that doesn't rely on signatures.
Is a digital signature check needed? Yes, but it's a layer in the protection against malware, and abusing the trust of these signatures enables them to be bypassed. In the end, we simply need to add more layers to our defense.
My article at: http://searchsecurity.techtarget.com/answer/How-can-Windows-digital-signature-check-be-defeated
Active Cyber Defense Certainty Act: Should we 'hack back'?
Recently, Georgia Congressman Tom Graves proposed a bill named the Active Cyber Defense Certainty Act, which individuals in the cyber community have taken to calling the hack back bill. It's being touted as a cyberdefense act that will enable those who have been hacked to defend themselves in an offensive manner, and it essentially attempts to fill the holes the antiquated Computer Fraud and Abuse Act has left wide open.
I'm a big fan of evolving our laws to bring them into a modern state when it comes to cybersecurity, but I feel this law will cause more harm than good. Allowing others to hack back without the proper oversight -- which I feel is extremely lacking in the proposed bill -- will create cyber vigilantes more than anything else. I also feel that this law can be abused by criminals, and it doesn't leave us in any better state than we're in now.
First, the jurisdiction of the Active Cyber Defense Certainty Act only applies to the U.S. If someone notices an attack coming from a country outside the U.S., or if stolen data is being stored outside the boundaries of our borders, then they won't be able to hack back.
This already severely limits the effectiveness of the bill, as attackers can easily sidestep it -- and avoid consequences -- by launching an attack from a foreign IP. It also enables pranksters or attackers to create problems for Americans by purposely launching attacks from compromised systems in the U.S. against other IPs inside the country. The victims would then have the legal right to hack back against the offending IPs, while the organizations behind those compromised systems remain unaware of what happened until they find themselves under attack and, in turn, start attacking back.
In theory, this would create a hacking loop within the U.S. and would end up causing disarray, giving an advantage to the hackers. Not only can systems be hacked by a malicious entity, but they can be legally hacked by Americans following the initial attack; hackers would essentially be starting a dispute between two innocent organizations.
On that note, if attackers launch attacks from the U.S. against other systems within the U.S., it's possible for them to attack the systems that regulate our safety. And what if they attack the systems of our healthcare providers, critical infrastructure or economy? Do we really want someone who might not be trained well enough to defend against attacks poking at these systems? This isn't safe, and it borders on being negligent on the part of those who were compromised.
The mention of "qualified defenders with a high degree of confidence of attribution" really leaves the door open to what someone can do under the Active Cyber Defense Certainty Act. First, what makes someone a "qualified defender," and how are they determining a "high confidence of attribution"? Is there a license or certification that someone must have in order to request the ability to hack back? Even if they did receive something similar, they still wouldn't know the architecture or systems they're looking to compromise in order to defend themselves. What tools are they able to use, and what level of diligence must be shown for attribution? This is a recipe for disaster, and it's also very possible that emotions could get in the way when determining what to delete or how far to go.
The Active Cyber Defense Certainty Act also mentions contacting the FBI in order to review the requests coming into the system before companies are given the right to hack back. This could lead to an overwhelming number of requests for an already stretched cyber department within the FBI.
If anything, I feel that the bill should leave these requests to the Department of Homeland Security instead of the FBI, as an entirely new team would need to be created just to handle these requests. This team should be the one acting as the liaison to the victim organizations.
For example, if we knew someone stole a piece of physical property, and we knew where they were storing it, we'd most likely call the local authorities and let them know what occurred. In the case of cybercrime, this bill gives us the ability to alert the authorities and then go after our stolen goods ourselves. That is a mistake that could lead to disaster.
Lastly, there are technical issues that might make this a lot more difficult than people think. What if a system is being attacked by the public Port Address Translation/Network Address Translation address of an organization? Are they going to start looking for ways into that network even though they can't access anything public-facing?
Also, what will happen if cloud systems are being used as the source of an attack? How do you track systems that might be moving or destroyed before someone notices? In that case, you could end up attacking the wrong organization. I personally don't trust someone attacking back and making changes to a system that they don't manage, since it leaves the door open for errors and issues later on that we're not even considering now.
Data theft today is a massive concern, but the privacy implications and overzealous vigilantism of this bill could make a bad situation much worse. The Active Cyber Defense Certainty Act should be removed from consideration, and the focus should be put on how Americans can work toward creating a better threat intelligence and cybersecurity organization that can act as a governing body when attacks like these occur. Leaving such matters in the hands of those affected will never produce positive results.
iOS updates: Why are some Apple products behind on updates?
A new study from mobile security vendor Zimperium Inc. showed that nearly a quarter of the iOS devices it scanned weren't running the latest version of the operating system. If Apple controls iOS updates, and enterprise mobility management vendors can't block them, then why are so many devices running older versions? Are there other ways to block iOS updates?
Zimperium's study showed that more than 23% of the iOS devices it scanned weren't running the latest and greatest version of Apple's operating system. Apple has a more streamlined method of updating its mobile devices than its main competitor, Android, but only because it controls both the hardware and the software -- Apple doesn't rely on disparate manufacturers to apply patches.
That being said, it came as a surprise that so many iOS devices weren't running the bleeding-edge release; however, there are a few reasons why almost a quarter of iOS devices are delinquent.
For starters, some people just don't want the new update when it becomes available. Even though iOS updates can be nagging, it's possible to delay them or have your device remind you to install it later. It would be interesting to know how many devices are only one update behind the latest update to see if people are holding off temporarily or indefinitely.
Another reason that devices might not be up to the latest version is that legacy devices may not support the newest update -- the newer releases of iOS aren't compatible with every device. This might be a small percentage of devices, but it's still part of the 23%.
Likewise, certain devices have been jailbroken, and thus could have issues receiving updates. These are possible issues that can add up to the 23% found by Zimperium, but there are some configuration and operational changes that might also cause a delayed update.
By default, automatic iOS updates are enabled, and that's a great way for Apple to keep over 75% of its devices on the latest software update. While you can disable automatic updates on an iOS device and delete an update after it's been downloaded, probably only a small percentage of devices operate this way.
Also, there's most likely a small percentage of people that don't have their devices connected to Wi-Fi, which is often how the update is downloaded, if not via iTunes on a computer.
Lastly, if a device can't access apple.com, then it cannot receive the update. In the past, I've seen web filters block iPads from accessing apple.com to limit what could be downloaded from iTunes. With this filtering in place, you're also stopping the download of the latest iOS update.
When all of these small issues add up, you can understand the percentage of devices that aren't running the latest update. However, I'm still curious to see what the average patching cycle for devices is after an update is released, as it's possible that Zimperium's scan was in the middle of a release, which could have inflated the numbers a bit.
Either way, there will always be issues with patching systems, but as consumer devices go, Apple is doing a pretty good job of having its iOS devices updated in the field.
My article at: http://searchsecurity.techtarget.com/answer/iOS-updates-Why-are-some-Apple-products-behind-on-updates
PGP keys: Can accidental exposures be mitigated?
Recently, security researcher Juho Nurminen attempted to contact Adobe via their Product Security Incident Response Team (PSIRT) regarding a security bug he wanted to report. Instead, he stumbled across something far more serious.
It turns out that Adobe published not only their public PGP key -- the one used to send them encrypted email -- on their website, but the corresponding private key, as well. After being contacted privately by Nurminen, Adobe moved quickly to revoke the key and have it changed.
Having the entire key pair published on the site could have led to phishing, decryption of traffic, impersonation, and spoofed or signed messages appearing to come from Adobe's PSIRT. This was extremely embarrassing for Adobe; however, their ability to act quickly was their saving grace.
One thing they did right was putting a passphrase on the key because, without that passphrase, the Adobe private key is useless to those with malicious intent. This is one step every organization should take to protect against the accidental release of a key, or against an attacker gaining access to keys and attempting to use them maliciously. Be warned, though -- this protection is only as good as the passphrase itself, and a weak passphrase increases the probability of it being brute-forced.
Having procedures in place to quickly revoke PGP keys when needed should be part of your organization's incident response plan. This might not be a common occurrence for many people; however, being able to manage certificates in an expedited fashion could not only save your organization, but could also stop those with malicious intent from attempting to impersonate you.
Acting quickly is extremely important. Luckily, the Adobe private key had limited use -- the certificate was only being used for email communication with the PSIRT, so it wasn't as publicly used as some of their other certificates.
As for how the certificate was published in the first place, that's a different issue -- I'd be very curious to know why it was published at all, and who published it. There should be some type of privileged access control in place for these certificates internally, which I'm assuming involves a different department from the one managing the CMS.
I understand things can accidentally be miscommunicated or published, but there seem to have been a few breakdowns in the communication process for the Adobe private key to end up on the internet. I'm hoping Adobe was able to learn from the experience, make adjustments and tighten their security.
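One cheap, generic control -- a hypothetical sketch, not anything Adobe has described -- is to scan content bound for a public site for private key material before it goes out. It only catches ASCII-armored keys sitting in plain text, but it would flag exactly this kind of paste error.

import sys
from pathlib import Path

# Markers that should never appear in anything destined for a public website.
PRIVATE_KEY_MARKERS = (
    "BEGIN PGP PRIVATE KEY BLOCK",
    "BEGIN RSA PRIVATE KEY",
    "BEGIN OPENSSH PRIVATE KEY",
)


def scan(paths):
    """Walk the given paths and report any files containing private key markers."""
    findings = []
    for path in paths:
        for file in Path(path).rglob("*"):
            if not file.is_file():
                continue
            try:
                text = file.read_text(errors="ignore")
            except OSError:
                continue
            for marker in PRIVATE_KEY_MARKERS:
                if marker in text:
                    findings.append((file, marker))
    return findings


if __name__ == "__main__":
    hits = scan(sys.argv[1:] or ["."])
    for file, marker in hits:
        print(f"BLOCK PUBLICATION: {file} contains '{marker}'")
    sys.exit(1 if hits else 0)

Run as a pre-publication step, a nonzero exit code would stop the content push until someone reviews the finding.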
My article at: http://searchsecurity.techtarget.com/answer/PGP-keys-Can-accidental-exposures-be-mitigated
VMware AppDefense: How will it address endpoint security?
VMware recently added a new service called AppDefense to their cybersecurity portfolio that aims to lower false positives and utilize least privilege in order to secure the endpoints living on a host. VMware also has NSX to create microsegmentation at the network layer, which can integrate with AppDefense. With AppDefense, however, security is taken down a layer to the endpoints themselves.
The first major benefit of having VMware AppDefense is that it understands what the endpoints were provisioned to do and their intended behavior. The AppDefense service is in the hypervisor and has a detailed understanding of what's normal within the endpoints. If something changes, such as malware reaching a system, then it's able to detect that the endpoint is doing something outside of what it was designed to do.
This feature helps reduce false positives within your network and enables overworked security teams to focus on the alerts that truly matter. Because the alerts monitor each system's behavior and verify that it's operating as intended, the time analysts spend triaging them is reduced. AppDefense recognizes that detecting and responding to incidents is key, and these alerts help security teams focus on what's important.
Utilizing least privilege is a security staple, and using it whenever possible is always recommended. With AppDefense, you're able to build off of what VMware NSX started and drop least privilege down from the network layer to the endpoint. This further increases the ability to lock down your systems to only what's needed and limit your threat exposure.
When AppDefense raises an alert, it's possible to kick off a response from NSX to block communications, take snapshots for forensics or even shut down the endpoint. This detailed control over what happens after an alert fires enables endpoints to be isolated and remediation to occur quickly and efficiently. The automation in AppDefense and the integration with NSX enable in-depth security and an added layer of visibility into workloads that might have been overlooked in the past.
With NSX and now AppDefense, VMware has been making big strides in security by focusing on the fundamentals. Giving analysts visibility into their networks and endpoints, enforcing least privilege and understanding when behavior changes all enable quicker incident response. I'm excited to see how VMware continues to evolve in this space.
My article at: http://searchsecurity.techtarget.com/answer/VMware-AppDefense-How-will-it-address-endpoint-security
Killer discovery: What does a new Intel kill switch mean for users?
Recently, security researchers from Positive Technologies discovered a way to disable the Intel Management Engine (ME) through an undocumented setting that references a National Security Agency (NSA) program.
Over the years, the Intel ME has caused controversy while being touted as a backdoor into systems for governments, mainly the NSA. With the finding of the Intel kill switch, many people seemed to take it as a nefarious and secretive method the NSA used to spy on systems. But, before we jump to any conclusions, let's dig deeper into what actually occurred.
First of all, the Intel ME has been considered a security risk and a backdoor by many people in the past. The ME has its own separate CPU, it runs unaudited code, it can't be disabled out of the box and it's used by Active Management Technology (AMT) to remotely manage systems. Likewise, it has full access to the TCP/IP stack and to memory, it can be active while the system is hibernating or turned off, and it has a dedicated connection to the network interface card.
These facts are worth pointing out in order to form a more logical hypothesis about what the researchers found. The risk that the Intel ME function could come under attack, or could contain a vulnerability enabling attackers to access systems directly without ever touching the OS, is a large concern in general, but especially for government agencies.
By setting an undocumented field in a configuration file, the researchers found a way to turn off the Intel ME function entirely. The field is labeled HAP, which stands for High Assurance Platform -- a framework developed by the NSA as part of a guide on how to secure computing platforms.
Intel has further confirmed that the HAP switch within the configuration was put there per the request of the U.S. government; however, it was only used in a limited release, and it is not an official part of the supported configuration.
Now, before we get too upset about the NSA, I firmly believe that asking to have the Intel kill switch enabled was a good move. The Intel ME is an accident waiting to happen, and if it can't be disabled by default, then a configuration setting that kills its function actually helps harden the device's security. I wouldn't be concerned about the NSA requesting the Intel kill switch, since it's probably just trying to harden the U.S. government's systems against attack.
Intel and other vendors include config changes like this in their hardware to accommodate the needs of large customers. Overall, the HAP config change simply enables you to harden your system against the use of the Intel ME feature. The blame should land more on Intel for allowing this function in the first place than on the NSA for looking to remove it.
My article at: http://searchsecurity.techtarget.com/answer/Killer-discovery-What-does-a-new-Intel-kill-switch-mean-for-users
WireX botnet: How did it use infected Android apps?
WireX was recently taken down by a supergroup of collaborating researchers from Akamai Technologies, Cloudflare, Flashpoint, Google, Oracle, RiskIQ and Team Cymru. This group worked together to eliminate the threat of WireX and, in doing so, brought together opposing security vendors to work toward a common goal.
The WireX botnet was a growing menace, and it was taken down swiftly and collectively. We're starting to see this happen more often, and this was a great example of what the security community can do when information is shared.
The WireX botnet was an Android-based threat that consisted of over 300 different infected apps found in the Google Play Store. The botnet started ramping up application-based distributed denial-of-service (DDoS) attacks that were able to continually launch, even if the app wasn't in use.
The WireX botnet is assumed to have been created for click fraud in order to make money off of advertising, but it seemed to move quickly toward DDoS attacks once it had grown large enough. The botnet itself is estimated at 70,000 endpoints, though some researchers think it might be larger. Due to the fluid nature of mobile endpoints, the IP addresses of these systems are likely to change as users move geographically.
The researchers were able to work together and share data on the attacks they were seeing and piece together their intelligence to get a complete story. By sharing details on a peculiar DDoS attack against a particular customer with this collective group, the teams were able to identify the source of the attack as malicious Android apps. After determining the source, they were then able to reverse engineer the apps, find the command-and-control servers, and remove them. The group worked with service providers to assist with cleaning the networks and with Google to remove the infected apps.
Security groups are now coming together more frequently to help defeat large attacks on the internet. Previously, we saw a very competitive industry -- and there are still some vendors that don't play nice -- but, in general, it's encouraging to watch competitors team up and work together to stop attacks for the common good and not for a marketing scheme.
This has to do directly with larger attacks, such as Mirai and NotPetya, which recently hit the internet on a global scale. Many of the same vendors that worked together on the WireX takedown also teamed up on the Mirai and NotPetya attacks.
At this point, vendors are working together to protect themselves and their customers, since all botnets must be addressed; however, they're also working with each other because it allows for a clearer look into these threats and, thus, quicker remediation.
We saw from the internet of things attacks with the Mirai botnet just how devastating a DDoS attack can be on the internet, so when a similar Android botnet was ramping up on mobile devices, it was in everyone's best interest to act quickly. The lesson -- remove a threat as a team before it reaches the strength of something like Mirai -- was learned and applied to the WireX botnet.
My article at: http://searchsecurity.techtarget.com/answer/WireX-botnet-How-did-it-use-infected-Android-apps
How should security teams handle the Onliner spambot leak?
A list of 711 million records stolen by the Onliner spambot was recently discovered, and it's utterly staggering to think of the sheer size of this data set. To put things into perspective: the United States only has 323 million people. Even if everyone in America had their data on this list, it would only make up half of that data.
The list of data that the Onliner spambot stole was given to security researcher Troy Hunt, who then imported the entire list onto his site Have I been pwned? This site creates a searchable database of email addresses and usernames that have shown up following today's largest breaches, such as those at LinkedIn, Adobe and Myspace.
It would be beneficial for you to personally validate whether your email addresses or usernames have been compromised in these breaches. When you submit an email address or username, the site queries the aggregated list of dumped credentials and informs you if you were a part of it. If your credentials are found in the aggregated list, then you should reset the passwords for those accounts immediately.
There are also ways for organizations to determine and be notified if a user account on their domain has been caught in a data breach. Once an enterprise has submitted its domain name to the site and completed the verification process, an email is sent each time an email address with that domain is found in a data breach that's within the Have I been pwned? database.
In addition to changing passwords as soon as possible, users should also determine whether they are reusing the compromised password on any other sites. If so, those passwords should be changed as well, since we've seen attackers take credentials from breaches like these and try them on other sites in hopes that the same credentials have been reused.
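For the password-reuse angle specifically, the same service exposes a Pwned Passwords range API that can be checked without sending the password itself anywhere: only the first five characters of its SHA-1 hash leave your machine, and the comparison happens locally. This is a rough sketch of that lookup; it assumes the requests package is installed, and the API details may change over time.

import hashlib

import requests


def password_breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # The API returns every hash suffix sharing the five-character prefix,
    # so the full password hash is never transmitted.
    response = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    response.raise_for_status()
    for line in response.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    count = password_breach_count("password123")
    print("Seen in breaches:" if count else "Not found in breaches:", count)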
Some advice for users who reuse their credentials would be to start using a password vault to store passwords, as this is an easier way to manage multiple complex passwords for different accounts. Likewise, users should attempt to use some sort of multifactor authentication on their accounts to limit the effect of massive breaches, as attackers won't have the second form of authentication. Even though the credentials would still be public, the second factor would not be within these lists, thus acting as a stopgap to limit attackers from using these accounts.
Using Have I been pwned? as a tool to increase your situational awareness on the status of current major breaches, such as the Onliner spambot, is an added way to keep yourself and your organization safe. Similarly, enforcing multifactor authentication and eliminating credential reuse can go a long way to help you stay safe.
My article at: http://searchsecurity.techtarget.com/answer/How-should-security-teams-handle-the-Onliner-spambot-leak
Monitoring employee communications: What do EU privacy laws say?
According to the European Court of Human Rights, employers must inform their employees if their business-related communications are being monitored while they work for the organization. The court ruled that there must be a clear description of the type of monitoring, the timeframes, which content is monitored and which administrators have access to the data.
The EU's privacy laws are head and shoulders above those in the United States. Just look at their General Data Protection Regulation (GDPR), which will go into effect soon.
The GDPR regulates the privacy of EU citizens in relation to user data being sent to third parties, breach notification requirements, data security restrictions and the right to be forgotten. GDPR also necessitates that companies perform privacy impact assessments, validate the existence of a data protection officer and review how data is transferred to other countries. Organizations that don't meet these stipulations will be fined. While these are just a few examples of how the EU is enforcing the regulation, it shows that it takes the privacy of its citizens' data extremely seriously.
When it comes time to review how monitoring employee communications should be handled within the workplace, it's not surprising to see that the EU is taking a similar privacy-based approach.
Personally, I have no problem with what they're doing, and I agree that people should be alerted when their communications are being monitored. I also don't have an issue with organizations monitoring employee communications from a business perspective -- in today's world, both of these options need to be in place. Organizations need to monitor communications to validate that attacks and insider threats aren't occurring, but users should be made aware of how and when this is occurring -- it should never come as a surprise.
Most companies run some type of communication filtering system for email or the web. In the United States, it's legal to monitor these communications as long as they occur on the organization's systems and aren't purely the user's personal business. This means that if you're browsing personal websites on a business network or system, that activity can be monitored.
Many organizations recognize this and whitelist certain categories, such as banking, from filtering so there's never a question of whether they're monitoring personal information that doesn't pose a risk to the organization. Just keep in mind that anything employer-owned can be monitored.
Furthermore, unlike the EU, the legal right to monitor and how far it can go in the U.S. is state-dependent. There are no federal guidelines on how monitoring employee communications should be handled, and it's completely left up to the local and state levels to decide.
My article at: http://searchsecurity.techtarget.com/answer/Monitoring-employee-communications-What-do-EU-privacy-laws-say
How does the Ursnif Trojan variant exploit mouse movements?
As security researchers and vendors improve the security of their products, malicious actors continually look for ways to bypass them. This cat-and-mouse game plays out most visibly in how malware authors keep devising new attacks and workarounds. These techniques are often very creative, and with a new variant of the Ursnif Trojan, we saw attackers use mouse movements to decrypt their payload and evade sandbox detection.
Sandboxes are used to validate that files downloaded from the internet are safe to run on the endpoint. The files are sent to the sandbox and executed on a virtual machine to determine their intended purpose. Since this can detect malware, attackers are continually looking for ways to bypass this security layer.
There have been multiple methods used in the past to detect sandboxes, such as searching for VMware registry keys or virtual network adapters, checking for unusually low CPU and RAM, or simply doing nothing for hours to determine whether the file is running on a VM.
In the idle case, the malware sits dormant to wait out the sandbox, since automated scans don't run for hours, and it holds off on its malicious actions whenever it's tipped off by these telltale variables. This allows the file to slip into your network where, like a Trojan horse, it can wreak havoc.
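To make those checks concrete, here's a minimal, Windows-only Python sketch of the same style of environment probing. The registry paths and RAM threshold are illustrative assumptions, not a definitive list of what any particular malware family inspects; defenders can run similar checks to see what their sandboxes give away.

```python
import winreg  # Windows-only standard library module

import psutil  # third-party: pip install psutil

# Registry keys that commonly indicate hypervisor guest tools (illustrative list).
VM_REGISTRY_KEYS = [
    r"SOFTWARE\VMware, Inc.\VMware Tools",
    r"SOFTWARE\Oracle\VirtualBox Guest Additions",
]

def looks_like_sandbox(min_ram_gb=4):
    """Return True if the host shows common signs of being a sandbox/VM."""
    for key_path in VM_REGISTRY_KEYS:
        try:
            winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
            return True
        except OSError:
            pass  # key not present; keep checking
    # Sandbox VMs are often provisioned with very little memory.
    if psutil.virtual_memory().total < min_ram_gb * 1024 ** 3:
        return True
    return False

if __name__ == "__main__":
    print("Possible sandbox/VM:", looks_like_sandbox())
```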
The Ursnif Trojan's spin on sandbox detection is to use the previous and current mouse pointer locations to validate that it's not sitting in a sandbox. The technique, discovered by Forcepoint Security Labs, takes the delta between these pointer positions and uses it to create a base seed that assists with decryption.
The Ursnif Trojan works through the base seeds to derive the key, and once a candidate matches the proper checksum -- essentially a brute force-like search -- the malware decrypts and executes the remainder of its code. In a sandbox, the delta value of the mouse movement is always zero, so the malware can never derive the correct key from that starting point. Because of this, it never executes within a sandboxed environment.
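To illustrate the general idea -- this is an invented, Windows-only sketch, not Ursnif's actual code -- the snippet below derives a candidate key from mouse-pointer deltas. The seed derivation and checksum test are assumptions made for demonstration; the point is that with no movement the delta stays at zero and no usable key is ever produced, so the payload would stay encrypted in a sandbox.

```python
import ctypes
import hashlib
import time

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

def cursor_position():
    """Read the current mouse pointer position via the Win32 API."""
    pt = POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(pt))
    return pt.x, pt.y

def derive_key_from_mouse(expected_prefix, attempts=100):
    """Try to derive a key from pointer deltas; returns None if the mouse never moves."""
    prev = cursor_position()
    for _ in range(attempts):
        time.sleep(0.1)
        cur = cursor_position()
        delta = abs(cur[0] - prev[0]) + abs(cur[1] - prev[1])
        prev = cur
        if delta == 0:
            continue  # no movement -> no usable seed (the sandbox case)
        candidate = hashlib.sha256(str(delta).encode()).digest()
        if candidate[:len(expected_prefix)] == expected_prefix:  # stand-in checksum test
            return candidate
    return None
```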
Read the rest of my article here: http://searchsecurity.techtarget.com/answer/How-does-the-Ursnif-Trojan-variant-exploit-mouse-movements
Flash's end of life: How should security teams prepare?
Whether you're a fan of Adobe Flash or not, it has been a building block for interactive content on the web, and we should acknowledge what it accomplished before talking about its eventual removal from the internet. The Flash plug-in helped usher in a new age of web browsing and, at the same time, became a prime target for vulnerabilities and exploits within browsers.
As HTML5 becomes more popular -- it's now close to being the universal standard -- use of the once-popular Flash is diminishing. HTML5 enables a more secure and efficient browsing experience that works across both mobile and desktop platforms.
Adobe is aware that, even though Flash is steadily declining, many sites still rely on its technology to function; therefore, Adobe has set 2020 as the timeframe for Flash's end of life. The company knew it needed to give clients currently using its software enough lead time to migrate their applications to other technologies before pulling the plug.
Adobe itself has encouraged those using Flash to migrate any existing Flash content to new open formats. Adobe has said it will stop updating and distributing Flash at the end of 2020, but will continue to support it until then with regular security patches and maintained features and capabilities. Hearing this, I get the feeling they'll keep Flash on life support for a while before they completely pull the plug on the project.
In order to not be caught off guard when Flash's end of life is official, security teams should be aware of which applications in their organization are currently using Flash, and then create migration paths to have them updated to HTML5 or other open standards. Even if there might be small portions of support after 2020, you never want to be running end-of-life code, especially code that has historically had security vulnerabilities.
Also, security teams should take note of which desktops currently have the Flash plug-in installed and work to have it removed in the same timeframe. Since Flash adoption has declined, and will continue to nose-dive after this news, there should be little need for the plug-in moving forward.
You should prepare for Flash's end of life by taking stock of your systems: remove the plug-in wherever it isn't needed, keeping it only on systems that must connect to sites that haven't yet migrated away from Flash. By following the principle of least privilege and installing only the software that's needed, you limit the attack surface.
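A simple inventory script can help with that stocktaking. The sketch below walks a web root looking for common Flash markers in page and script files; the root path and the file extensions it scans are assumptions you'd adjust for your own environment.

```python
import os
import re

# Markers that commonly indicate Flash content in web pages; extend as needed.
FLASH_PATTERN = re.compile(
    r"\.swf|application/x-shockwave-flash|shockwave-flash", re.IGNORECASE
)
SCANNED_EXTENSIONS = (".html", ".htm", ".js", ".jsp", ".php", ".aspx")

def find_flash_usage(web_root):
    """Yield (path, line_number) for files that reference Flash content."""
    for dirpath, _, filenames in os.walk(web_root):
        for name in filenames:
            if not name.lower().endswith(SCANNED_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as handle:
                    for lineno, line in enumerate(handle, start=1):
                        if FLASH_PATTERN.search(line):
                            yield path, lineno
            except OSError:
                continue  # unreadable file; skip it

if __name__ == "__main__":
    for path, lineno in find_flash_usage("/var/www/html"):  # assumed web root
        print(f"{path}:{lineno}")
```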
Eventually, Flash won't be supported at all, and if bugs are found in the software after that, attackers could use them in phishing attacks that lure users to sites still built around Flash that haven't migrated away. If you don't need it, don't install it.
Read the rest of my article here: http://searchsecurity.techtarget.com/answer/Flashs-end-of-life-How-should-security-teams-prepare
How does a private bug bounty program compare to a public program?
It really depends on what you're looking to offer and get out of your bug bounty program. There are differences between a public and a private bug bounty; normally, we see programs start as private and then work their way to public. This isn't always the case, but most of the time, organizations will open a private bug bounty by inviting a subset of security researchers to test the waters before making it publicly available to the community.
There are a few things to consider before launching a public bug bounty. There's going to be a testing period with your application, and before you call down the thunder from the internet at large, it's wise to work with a group of skilled researchers or an organization that specializes in this area to validate your processes and procedures.
Many times, organizations aren't comfortable opening this up to the public, and they tend to limit the scope of the testing and who can test it; your risk appetite will dictate the number of tests and also limit the vulnerabilities that can be found within the application. Many organizations want to validate their security posture, use external resources to test their security and supplement this testing to find vulnerabilities before they're found by malicious actors.
Before flipping from a private to a public bug bounty program, there are a few things to consider. First, open the program to researchers or organizations that are tested and trusted. You don't want to go to just anyone right away, as vulnerabilities could cost you your reputation and revenue if they are found.
Since many of these researchers are doing this for financial gain, you need to have a firm grip on your payout structure within the private bug bounty to better understand how to use it if it goes public. Are your applications so insecure that you'll be paying out numerous bounties at a high rate? Understanding your payout structure upfront will help you maintain a manageable bug bounty program.
Before you go public with a bug bounty program, you also need a good reason for the program to be public. What is the end goal of going public versus keeping it private? If you want to find vulnerabilities and you have a process to do this internally, then maybe a private program is right for you. If you already have a vulnerability management process in place and are performing static and dynamic analysis, but want to supplement that with additional manual testing from a larger community, then public testing might be what you're looking for.
Lastly, it's very important to have a bug bounty rules-of-engagement page on your site or application to let participants know how to act, what to expect and the rewards for each bug. It also helps researchers know how bugs should be submitted under responsible disclosure practices.
Many sites have bug bounties now, but just because you open yours publicly doesn't mean you'll have a horde of white hat hackers crashing through your site searching for bugs. Determining the right bounty amounts, the sections of code you'd like tested and how to respond operationally when you start seeing attacks is important to your bug bounty submissions and your overall day-to-day operations.
Read the rest of my article here: http://searchsecurity.techtarget.com/answer/How-does-a-private-bug-bounty-program-compare-to-a-public-program
WoSign certificates: What happens when Google Chrome removes trust?
The certificate authority WoSign and its subsidiary StartCom will no longer be trusted by Google as of its Chrome 61 release. Over the past year, Google has slowly been phasing out trust for StartCom and WoSign certificates, and as of September 2017, trust has been completely removed.
For a certificate authority (CA), browser support is essential for the business to thrive, and without the support of Chrome and other browsers, WoSign is in danger.
Google Chrome isn't the only browser taking a stance against WoSign certificates; other large web browsers have either deprecated support for them or are in the midst of removing it. Microsoft, Mozilla and Apple have all taken action against WoSign for what's being called continued negligent security practices by the Chinese company. The only browser currently not taking action against WoSign is Opera -- though it should be noted that Opera was purchased last year by a Chinese investment consortium named Golden Brick Silk Road.
There are many reasons WoSign certificates are considered unsafe by the major web browsers. The issues include back-dated SHA-1 certificates with long validity periods, certificates that are identical except for their NotBefore dates, and certificates with duplicate serial numbers.
Google has gone back and forth with WoSign regarding these issues, and WoSign released a statement regarding how they're handling the situation.
As part of the process, Qihoo 360, a Chinese security technology company and majority owner of WoSign, agreed last year to replace WoSign CEO Richard Wang as a show of good faith that it wanted to better understand the industry and regain the trust of the major browser vendors. It seems this wasn't done; WoSign still hasn't named a new CEO, and Wang has continued working with the company in a different role.
WoSign also said it recently passed a security assessment, and it is appealing to remain a trusted CA. It's not likely this will turn things around; it might be too little, too late for the Chinese CA.
WoSign operates a free certificate authority and, because of this, appears to have a large user base in China. If you're a customer of WoSign or StartCom, it would be wise to replace your certificates with ones from a fully trusted provider. If you don't make the switch, you could run into issues with communications, VPNs or connections to sites whose web servers use these certificates.
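If you're not sure what certificates your servers or partners are presenting, a quick issuer check can help. Here's a rough Python sketch using the standard ssl module and the third-party cryptography library; the issuer substrings it flags are assumptions about how WoSign and StartCom typically appear in the issuer field.

```python
import ssl

from cryptography import x509  # third-party: pip install cryptography

SUSPECT_ISSUERS = ("wosign", "startcom")  # substrings to flag; adjust as needed

def issuer_string(hostname, port=443):
    """Fetch a host's leaf certificate and return its issuer as a readable string."""
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.issuer.rfc4514_string()

def uses_suspect_ca(hostname):
    issuer = issuer_string(hostname)
    flagged = any(name in issuer.lower() for name in SUSPECT_ISSUERS)
    return flagged, issuer

if __name__ == "__main__":
    for host in ["example.com"]:  # replace with your own hostnames
        flagged, issuer = uses_suspect_ca(host)
        print(f"{host}: {'REPLACE CERT' if flagged else 'ok'} ({issuer})")
```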
Read the rest of the article here: http://searchsecurity.techtarget.com/answer/WoSign-certificates-What-happens-when-Google-Chrome-removes-trust
How can peer group analysis address malicious apps?
Google has had issues in the past with malicious Android apps found in the Google Play Store. The company has since turned to machine learning, peer group analysis and Google Play Protect to improve the security and privacy of these apps. By using these techniques, Google is taking a proactive approach to keep attackers from publishing apps that could take advantage of users once installed on their mobile devices. This article explains how these actions can increase security, while raising a few questions about the vetting process.
By using machine learning and peer grouping, Google is looking to discover malicious apps by comparing each app's functionality to that of similar apps, and then sending an alert when something is out of the norm for its category. Machine learning helps review apps, as well as the functions and privacy settings used by other apps in the Google Play Store.
Peer grouping effectively creates categories for these apps and searches for anomalies in new apps entering the store. It baselines what counts as normal activity for a category and then compares new apps against that standard. In theory, comparable apps should behave similarly, and abnormalities are flagged for review by Google.
An example of this would be a flashlight app that requests access to your contacts, GPS and camera. There is essentially no need for such an app to have permission to access these functions and, thus, it would be flagged by peer group analysis as something outside the norm.
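To make the peer-group idea concrete, here's a toy Python sketch -- not Google's implementation -- that flags permissions rarely requested by an app's category peers. The category, peer frequencies and rarity threshold are all invented for illustration.

```python
# Invented peer statistics: the fraction of apps in a category requesting each permission.
PEER_PERMISSION_FREQ = {
    "flashlight": {
        "android.permission.FLASHLIGHT": 0.97,
        "android.permission.INTERNET": 0.40,  # ads are common in this category
        "android.permission.CAMERA": 0.04,
        "android.permission.READ_CONTACTS": 0.01,
        "android.permission.ACCESS_FINE_LOCATION": 0.02,
    }
}

def unusual_permissions(category, requested, rarity_threshold=0.05):
    """Return the requested permissions that few peer apps in the same category ask for."""
    peers = PEER_PERMISSION_FREQ.get(category, {})
    return [perm for perm in requested if peers.get(perm, 0.0) < rarity_threshold]

if __name__ == "__main__":
    app_permissions = [
        "android.permission.FLASHLIGHT",
        "android.permission.READ_CONTACTS",
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.CAMERA",
    ]
    print(unusual_permissions("flashlight", app_permissions))
    # -> contacts, location and camera are flagged as outside the norm for this category
```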
Personally, I'm a big fan of machine learning for finding issues and guiding engineers toward better decisions, but I also believe it's neither a standard nor a framework.
We're also seeing this machine learning functionality used to improve security and privacy within the Google ecosystem of apps. This is a fantastic way to determine potential issues within the app store, but I think requiring particular standards to be in place before apps are allowed to be published may be a better first step in achieving enhanced privacy.
Such standards could include enforcing NIST and OWASP Mobile standards, or validating that all EU apps meet the General Data Protection Regulation -- or, if there's health-related information in the app, that it passes HIPAA-related standards. This would be difficult to enforce, since there might be multiple categories and frameworks the app has to adhere to, but this would take a security-first approach when putting an app through the store for vetting.
Read the rest of the article here: http://searchsecurity.techtarget.com/answer/How-can-peer-group-analysis-address-malicious-apps
What security risks does rapid elasticity bring to the cloud?
One of the major benefits of anything living in the cloud is the ability to measure resources and use rapid elasticity to quickly scale as the environment demands. The days of being locked into physical hardware are over, and the benefits of rapid elasticity in cloud computing are attractive to many organizations.
There are some concerns -- stemming more from a lack of understanding of cloud computing -- that an organization needs to be aware of before using these features. Like anything else, the cloud can be deployed securely, but without understanding how to implement these services, an organization can put itself at risk.
With measured services -- cloud services that are monitored and metered by the provider according to usage -- an organization can leverage resource metering to trigger automated actions. These systems can expand based on thresholds as part of an on-demand service model.
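As a rough illustration of how metering feeds those automated actions, here's a provider-agnostic Python sketch of threshold-driven scaling. The metric, thresholds and instance caps are assumptions; note that capping the maximum is itself a security and cost control, since it keeps a traffic spike or abuse from scaling the environment (and the bill) without bound.

```python
def scaling_decision(avg_cpu_percent, current_instances,
                     scale_up_at=75, scale_down_at=25,
                     min_instances=2, max_instances=20):
    """Return the desired instance count based on a measured utilization metric."""
    if avg_cpu_percent > scale_up_at:
        # Scale out, but never beyond the hard cap that bounds cost and blast radius.
        return min(current_instances + 1, max_instances)
    if avg_cpu_percent < scale_down_at:
        # Scale in, but keep a minimum footprint for availability.
        return max(current_instances - 1, min_instances)
    return current_instances

if __name__ == "__main__":
    print(scaling_decision(avg_cpu_percent=82, current_instances=4))  # -> 5
    print(scaling_decision(avg_cpu_percent=12, current_instances=4))  # -> 3
```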
As a cloud footprint can swell or shrink with demand, there are multiple security concerns to consider with the fluctuating infrastructure of PaaS systems. Managing data in the cloud requires proper policy and configuration to ensure its security. This is always a concern, but the elastic nature of the infrastructure creates some unique use cases for cloud security.
Read the rest of the article here: http://searchcloudsecurity.techtarget.com/answer/What-security-risks-does-rapid-elasticity-bring-to-the-cloud