Monday, October 16, 2017

Open Letter to Congressman Tom Graves on the “Active Cyber Defense Certainty Act”

To the Honorable Tom Graves:

In November of 2015, I was invited to now-retired Congressman Steve Israel’s Cyber Consortium to join other security professionals in the community in discussing the cyber security issues affecting both our organizations and communities. During this meeting you were invited to speak about your thoughts on cyber security, the issues you’re dealing with in Congress and your approval of the CISA bill. After listening to you describe your concerns over the OPM breach, I could see how seriously you took the issue of cyber security. I didn’t personally agree with some of the stances taken in the room, but you don’t have to agree on everything to initiate progress. I applaud your dedication and attention to cyber security and will continue to be interested in your thoughts, even if we might have differing opinions. That being said, I have concerns with your latest bill proposed to Congress: the “Active Cyber Defense Certainty Act”.

Each time I see someone propose reform to the “Computer Fraud and Abuse Act,” it piques my interest. Evolving our laws alongside the ever-changing cyber industry is both needed and incredibly difficult to accomplish, and I appreciate your effort to modernize them. With that in mind, I’m concerned that the newly proposed ACDC bill crosses some boundaries I’d like to bring to your attention.

As you’re most likely aware, many of the cyber incidents occurring today are launched from systems that criminals have already compromised and are using as a guise for their attacks. This could end up being an attacker proxied through multiple systems across various countries, with the face of the attack showing as an innocent bystander. Granting approval to perform a “hack back” against this entity puts an unknowing victim in the middle of a complicated and intrusive scenario. Not only are they already compromised by a malicious entity, but they’re now being legally attacked by others who assume the victim has done them harm. Congressman Graves, these devices could end up being systems that assist with our economy’s growth, hold personal records that could affect the privacy of our citizens’ data, or even aid our healthcare industry. The collateral damage that could occur from hack backs is unknown and risky. Essentially, if someone determines they were compromised by a system in the United States and starts the process of hacking back, the system’s owners might notice the attack and begin hacking back themselves. This in turn could create a perpetual hacking battle that wasn’t even started by the actors involved. In theory, this method will cause disarray all over the internet, with a system being unknowingly used as a front by a criminal to start a hacking war between two innocent organizations.
 
Interrupting these systems without oversight is dangerous for us all. In reading through the bill, I noticed that these cyber defense techniques should only be used by “qualified defenders with a high degree of confidence of attribution”. From this statement, what qualifications does a defender have to hold before they attempt to hack back? And what constitutes a high degree of confidence in attribution? Seeing that this bill is focused only on American jurisdiction, I personally feel attackers will sidestep this threat by using foreign fronts to launch their attacks, getting around being “hacked back”. This somewhat limits the bill’s effectiveness as it’s currently written. Allowing defenders to track, launch code or use beaconing technology to assist with attribution of an attack is dangerous to our privacy. I agree that this is an issue, one that needs to be dealt with, but it should be dealt with directly by law enforcement, not by citizens themselves. I’ve read the requirement that the FBI’s National Cyber Investigative Joint Task Force will first review the incident before the “hack back” can occur, which offers a certain level of oversight, but I don’t think there’s enough. I understand the resources within the FBI are stretched, but leaving this in the hands of those affected by the breach allows emotions to get involved. This is one reason why we call the police if there’s a dispute in our local communities. They’re trained, have a third-party perspective and attempt not to make it personal. I feel emotions will run high on the part of those hacking back, and this could lead to carelessness and neglect that bring about greater damage.

Lastly, the technology is always changing, and attaining confident attribution is incredibly difficult. If an attack was seen from a particular public IP address, it’s possible that the NAT’d (Network Address Translation) source is shielding multiple other internal addresses. Attacking this address will give no attribution as to where the data or attacks are actually sourced. Also, given the fluid environment of cloud-based systems, a malicious actor can launch an attack from a public CSP (cloud service provider) in a way that quickly removes attribution as to where the source was located. I noticed the language within the bill referencing “types of tools and techniques that defenders can use” to assist with hacking back. Will there be an approved listing of tools and techniques that active defenders are required to use to stay within the boundaries of this law? Or will active defenders be able to use the tools of their choice? Depending on the tools and how they’re used, they could cause unexpected damage to the systems being “hacked back”. There’s also mention of removing the stolen data if found, and I’m concerned defenders will not be efficient with this data deletion and could cause major damage to systems legitimately hosting other applications. Deleting this data could at times interfere with investigations and forensics, and might not solve the issue long term. This stolen data is digital, and just because it’s deleted in one place doesn’t mean it’s been removed permanently.

Congressman Graves, I respect what you’re doing for our country, but I’m concerned with the methods in place to protect the privacy of the data and systems being actively hacked by defenders. I’m anxious about the overzealous vigilantism that might be displayed by defenders looking to defend themselves, their systems or their stolen data. You’re an outside-the-box thinker and passionate about the protection of our country, and I love that, but as the bill is currently written, the methods in place could cause more harm than good. I personally implore you to reconsider having a nation of defenders actively attempting to restore their data from sources that were most likely being used without their owners’ consent. The unintended consequences for privacy, the destruction of systems and even the risk to life are too important not to mention. If I could advise in any way, it would be to have our country start focusing on the fundamentals of cyber security before it starts writing licenses to hack.
Thank you for your service and your continued efforts to protect our nation from future cyber events.

Sincerely,

Matthew Pascucci



Monday, September 11, 2017

The Equifax breach - Now what?

By now we’re all probably very aware of the massive Equifax hack that exposed 143 million Americans’ Social Security numbers, birth dates, addresses and driver’s licenses. A small subset of credit cards and personal identifying documents was also accessed, along with limited personal information belonging to an undetermined number of Canadian and UK citizens. According to a statement released by Equifax, the breach occurred from mid-May through July 2017. They discovered the breach on July 29th, which means attackers were actively working well over a month, if not more, at exfiltrating this treasure trove of data. Equifax also stated that criminals exploited a vulnerability in their web application as the means of compromising their site and gaining access to sensitive data.

Here are a few of my thoughts on the Equifax breach: https://www.ccsinet.com/blog/equifax-breach-what-now/

Also, here's my bald head on CBS news talking about it: http://newyork.cbslocal.com/2017/09/08/equifax-breach-fallout/amp/


Thursday, September 7, 2017

How do network management systems simplify security?

Today, many network management systems aim to increase visibility into the network and focus more on security. Since security is often left to the administrators of each department, having additional security built into these tools is becoming common.

Network management systems that provide security insight are useful tools for your networking team. However, there are a few things to consider before implementing one.

From a security perspective, monitoring a network is important because, as all data has to run through it, it's a good place to look for anomalies and incidents. There has also been a shift in the security field to make behavior analysis the norm when monitoring for malicious activity.

Another thing to look for in network management systems that helps administrators detect threats within the data is performance. If you're able to gauge the performance of your equipment or applications, you're better able to detect incidents that cause load on the systems, based on the thresholds they're configured with. This also includes the bandwidth usage of systems that might experience slowdowns due to distributed denial-of-service attacks or a worm outbreak within the network. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-do-network-management-systems-simplify-security
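
As a toy illustration of the thresholding idea above, here's a minimal Python sketch (all numbers invented) of the kind of utilization alarm a network management system applies to interface counters:

```python
# Toy utilization alarm: flag samples that cross a percentage of link capacity.
def check_utilization(samples_mbps, capacity_mbps, threshold=0.8):
    for minute, rate in enumerate(samples_mbps):
        if rate > capacity_mbps * threshold:
            print(f"minute {minute}: {rate} Mb/s exceeds "
                  f"{threshold:.0%} of the {capacity_mbps} Mb/s link")

# A flat baseline, then the sort of spike a DDoS or worm outbreak produces.
check_utilization([120, 130, 125, 910, 960], capacity_mbps=1000)
```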

How can enterprises secure encrypted traffic from cloud applications?

With many applications being utilized in a SaaS model, it's important to encrypt the traffic between end users and applications. When personal and sensitive data is transferred, processed or stored off local premises, the connections between these points need to be secured.

Many large websites default to SSL/TLS, increasing the encrypted traffic on the internet. This is a plus for data security, but malicious actors can and do take advantage of this encryption with their malware, spoofing and C2 servers. With organizations like Let's Encrypt and Amazon Web Services offering flexible, well-designed and inexpensive technologies, attackers use these same services for malicious purposes. It's for this reason that enterprises need to make monitoring of encrypted traffic and decryption appliances mandatory in networks.

The recent increase in SSL/TLS traffic within networks is cause for both delight and concern. The security community has seen the need for encryption, but so have malicious actors. From a network security standpoint, it's important to be cautious when dealing with encrypted traffic. Its use is only going to grow from here, and the majority of internet traffic will move toward end-to-end encryption. Read more of my article at the below link:

http://searchsecurity.techtarget.com/answer/How-can-enterprises-secure-encrypted-traffic-from-cloud-applications

Should an enterprise BYOD strategy allow the use of Gmail?

Creating separate accounts for business use on a third-party platform can be risky, but it depends on the context.

Google offers organizations the ability to host their mail on its platform, and it also offers additional features to manage these accounts -- though these features are not part of Google's free service. There are privacy concerns regarding enterprise use of Google business accounts, but having employees use personal Gmail accounts for business purposes is a separate matter.

This enterprise BYOD strategy is a risky idea. Using a free service outside of the organization's control and making it the recommended communication method is dangerous. The organization will have no control over the data being sent or the security policy wrapped around the communications. There is no data loss prevention applied to what's being sent, there's no web filtering or antiphishing protection, and the forensic data and logging of the email are lost.

Essentially, creating a separate personal account as part of an enterprise BYOD strategy actually severely limits BYOD security, and organizations should avoid doing it.

http://searchsecurity.techtarget.com/answer/Should-an-enterprise-BYOD-strategy-allow-the-use-of-Gmail

What should you do when third-party compliance is failing?

The security of your data being held, processed or transmitted by a third party is always a security risk. Essentially, you have to trust an organization other than your own with the security and care of your data.

The third party or business partner could perform security up to or even beyond your standards, but there's always the possibility for negligence. If there's even the slightest concern that a third party is being careless with the security of your organization's data, you should act immediately.

Before giving your data to a third party or business partner, there should be a thorough review of the partner and how it performs security. This can include security questionnaires, on-site visits, audits of the third party's environment and a review of its regulatory certifications. Vendor management has become one of the largest areas of concern when it comes to data governance, and it's a growing risk if due diligence isn't done upfront. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/What-should-you-do-when-third-party-compliance-is-failing

Friday, September 1, 2017

Security Researchers and Responsible Vulnerability Disclosure

I was asked to comment on the following article regarding responsible disclosure of vulnerabilities by security researchers. This is a debate that's been resurrected over the past couple of months. In my opinion, there's work to be done on both sides. Below is the article I was quoted in regarding the subject:

https://www.tripwire.com/state-of-security/security-data-protection/security-researchers-protect-organizations-means-necessary/

Tuesday, August 29, 2017

Gotta Respect the Hacker Hustle

Many times you'll see attackers exploit low-hanging fruit to breach a network, but other times they really have to work to get into a target. This due diligence has to be respected. I'm not saying hacking into an organization for malicious gain is acceptable, but the skills have to be respected. If you can't respect your competition, there's a good chance you'll be beaten by them.

Here's an article I was quoted in regarding the HBO attackers:

Thursday, August 24, 2017

Weighing In on Encryption Backdoors

Here's an article I was quoted in regarding why it's a bad idea to give the government a backdoor into encryption:

https://www.venafi.com/blog/security-professionals-weigh-on-encryption-backdoors-a-bad-idea-given-governments-own-data

Infosecurity Fall 2017 Virtual Conference Agenda

I'm speaking at the Infosecurity Fall 2017 Virtual Conference on September 20th. My session, "All You Need to Know about NYC Cyber Regulations", will be presented with two other speakers.

New regulations announced this year will ensure that within New York State, there will be ‘minimum security standards’ that financial services firms will be obliged to meet. The intention of these measures is to encourage organizations to keep pace with changes in technology and ensure a cybersecurity program that ‘is adequately funded and staffed’.

In this opening keynote, we will look at the overarching obligations of the NYC Cyber Regulations and evaluate what the minimum standards will be and how businesses will need to adapt to fit into this framework.

  • What exactly are the NYC Cyber Regulations?
  • How can businesses comply, and what could the penalties be for non-compliance?
  • Will this spread to other jurisdictions, like DC and Massachusetts, or even California?
  • How does this affect national companies that operate across different states, including New York?

Sign up for the virtual conference with Infosec Magazine here: https://www.infosecurity-magazine.com/virtual-conferences/imvc-fall-2017/

Wednesday, August 23, 2017

FEMA Virtual Cybersecurity Tabletop Exercise

Yesterday I took part in a FEMA virtual tabletop exercise with my local county. It was great seeing other counties around the country prepare for cyberthreats against their infrastructure. These tabletop collaborations give local businesses and governments the ability to bounce ideas off one another about what they've seen, what's worked for them and sound advice moving forward.


What’s needed for the first NYS DFS cybersecurity transitional phase?

The first transitional phase of the New York State Department of Financial Services (NYS DFS) cybersecurity regulation is upon us. As of August 28, 2017, covered entities are required to be in compliance with the first phase of the 23 NYCRR Part 500 standard.

The NYS DFS was kind enough not to drop the entire regulation on businesses all at once, and instead broke adherence up into transitional phases. This means organizations have the opportunity to create a phased approach based on these transitional phases and become compliant over the next two years.

With the first phase deadline arriving shortly, covered entities are required to have particular aspects of the regulation in place within this timeframe.

For the first transitional phase, covered entities that aren’t exempt will need to adhere to the following sections within the guidance. Read the rest of my article at HelpNetSecurity here:

https://www.helpnetsecurity.com/2017/08/23/nys-dfs-cybersecurity-transitional-phase/

Monday, August 21, 2017

Top 10 Security Challenges of 2017

I was quoted in SCMagazine regarding the top 10 security challenges of 2017. To ease the suspense, my top concern was "patching". I know it's not sexy, but I'm still very concerned about it based on the patching procedures we've seen this year. Check out the link below for the other controls people are dealing with now.

https://www.scmagazine.com/top-10-security-challenges-for-2017/article/682314/

Friday, August 18, 2017

Using OSINT against Online Child Predators

The Internet is a potentially dangerous place for users. This is especially so for children. Oftentimes, these younger users don’t yet understand that some people harbor bad intentions. They are therefore prime targets of digital predators who would seek to prey upon them online. I'm quoted in this article regarding how to keep children safe online.

https://www.tripwire.com/state-of-security/security-awareness/hacking-innocent-lives-using-osint-online-child-predators/

Wednesday, August 16, 2017

Can a PCI Internal Security Assessor validate level 1 merchants?

There are differences between Internal Security Assessors and Qualified Security Assessors (QSA), as well as the assessments they're able to validate. With these assessments, there are also particular levels of providers and merchants that require different standards of validation.

Internal Security Assessors are normally employees of the organization being assessed. This closeness to the business can create a better understanding of the processes of the system owners, but when level 1 service providers are involved, there needs to be a third-party perspective.

A service provider is defined as an entity that processes, stores or transmits cardholder data on behalf of another business or organization. Like merchants, there are multiple levels of service providers, and a level 1 merchant requires a Qualified Security Assessor to complete the reports on compliance.

Read more at my article below:

http://searchsecurity.techtarget.com/answer/Can-a-PCI-Internal-Security-Assessor-validate-level-1-merchants

How is the Samba vulnerability different from EternalBlue?

The vulnerability in Samba -- as well as WannaCry ransomware -- shows that every organization needs to apply appropriate patches and enforce configuration management in its systems to defend itself against security risks.

The Linux and Windows issues are similar in that both created remote concerns by having port 445 open on the perimeter. Samba is used to enable Linux devices, such as printers, to communicate with Windows systems, and it is a key element in interoperability between the operating systems.

It's interesting that the Samba vulnerability (CVE-2017-7494) was announced soon after the WannaCry ransomware spread. While neither has anything to do with the other, seeing this vulnerability just cements the urgent need for IT security to move back to the fundamentals.

Both of the vulnerabilities are concerning for remote execution if the systems are exposed to the internet and are unpatched. Also, both of the vulnerabilities require a payload to be dropped in order to achieve their results. In the case of WannaCry, it was EternalBlue that was used to power the malware; in the Samba vulnerability, there was no known malware wrapped around the exploit. Read my article below:

http://searchsecurity.techtarget.com/answer/How-is-the-Samba-vulnerability-different-from-EternalBlue

Could the WannaCry decryptor work on other ransomware strains?

The WannaCry ransomware caused a panic in the security industry, and researchers Benjamin Delpy, Adrien Guinet and Matt Suiche created a decryptor that might be able to retrieve encrypted files being held ransom by WannaCry.

The WannaCry decryptor tools work on the majority of Windows systems affected by the ransomware; this includes Windows XP, Windows 7, Windows 2003 and Windows 2008 systems. The caveat is that the WannaCry decryptor tool requires the infected system to still have, in memory, the associated prime numbers that were used by the malware to create the RSA key pairs to encrypt the data.
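
To make that caveat concrete, here's a minimal Python sketch (using the cryptography library) of the underlying trick: once the two primes are recovered from memory, every remaining RSA private-key component can be recomputed from them. Generating a key here merely simulates the recovery step:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Simulate recovery: generate a key, then pretend we only scraped its primes
# out of the infected process's memory, the way the decryptor tools do.
original = rsa.generate_private_key(public_exponent=65537, key_size=2048)
p, q = original.private_numbers().p, original.private_numbers().q
e = 65537

# Rebuild the full private key from p and q alone.
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)
rebuilt = rsa.RSAPrivateNumbers(
    p=p, q=q, d=d,
    dmp1=d % (p - 1), dmq1=d % (q - 1), iqmp=pow(q, -1, p),
    public_numbers=rsa.RSAPublicNumbers(e, n),
).private_key()

# The rebuilt key decrypts what the original public key encrypted.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ct = original.public_key().encrypt(b"held hostage", oaep)
assert rebuilt.decrypt(ct, oaep) == b"held hostage"
```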

The two tools that can be used to decrypt WannaCry files are WannaKey and WanaKiwi. The WanaKiwi tool took the ideas of the WannaKey decryptor and added documentation and an easier method of deployment. Read my article below:

http://searchsecurity.techtarget.com/answer/Could-the-WannaCry-decryptor-work-on-other-ransomware-strains

How are hackers using Unicode domains for spoofing attacks?

Trust is a necessity in cybersecurity, and it's one of the main reasons attackers continually try to exploit this emotion when assaulting networks.

We put a lot of time and defensive effort into verifying that a particular party on the internet is who they say they are, and we do this with good reason. But because of this need for trust, attackers rely on spoofing as a standard method of exploitation. The more an attacker can deceive someone, the higher his probability of success, or cover, while attempting an exploit.

Here is where the recent proof of concept that shows attackers can abuse Unicode domains to look like legitimate sites comes into play. Attackers are able to trick users into clicking on particular links that look like they are from legitimate domains, but that actually lead to malicious sites.

This deception works because many letters look very similar within Unicode domains, especially within Latin and Cyrillic character sets. There is no distinguishable difference between many of these letters to the human eye, but computers treat them differently, and attackers use this to their advantage. Read my article below:

http://searchsecurity.techtarget.com/answer/How-are-hackers-using-Unicode-domains-for-spoofing-attacks
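
As a rough illustration of the Latin/Cyrillic lookalike problem discussed above, the Python sketch below converts a domain to the punycode form resolvers actually see and flags labels containing non-Latin letters. It's only a heuristic -- legitimate internationalized domains will trip it too:

```python
import unicodedata

def char_scripts(label):
    # The first word of a character's Unicode name approximates its script
    # (LATIN, CYRILLIC, GREEK, ...).
    return {unicodedata.name(c).split()[0] for c in label if c.isalpha()}

def audit(domain):
    ascii_form = domain.encode("idna").decode("ascii")  # what DNS resolves
    suspicious = [lbl for lbl in domain.split(".")
                  if char_scripts(lbl) - {"LATIN"}]
    print(f"{domain!r} resolves as {ascii_form!r}; suspicious labels: {suspicious}")

audit("apple.com")   # pure Latin: punycode form is unchanged
audit("аррle.com")   # Cyrillic 'а' and 'р' mixed in: a homoglyph spoof
```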

Did DDoS attacks cause the FCC net neutrality site to go down?

With any DDoS attack, the best way to investigate is to review the logs. Due to the sensitivity of the information submitted to the Federal Communications Commission (FCC) net neutrality site, and the potential for released IP addresses to increase privacy risks for users submitting their opinions, the logs have not been publicly released for review. The FCC's CIO, David Bray, stated that, after reviewing the logs, it was determined that nonhuman bots were creating a large number of comments on the FCC net neutrality site via an API. He also mentioned that the systems creating the large wave of comment traffic weren't part of a botnet of infected systems, but came from a publicly available cloud service.

If this truly was a botnet pumping large amounts of comments to the FCC's net neutrality site -- possibly for spam-related purposes -- while there was a large influx of users attempting to post opinions and comments regarding the net neutrality policy, it's likely that the application reacted in a manner that's identical to a DDoS attack. We know that the API was hit hard from public comments made by the FCC and it's these application-based resources that can become very expensive when it comes to utilization. Read my article below:

http://searchsecurity.techtarget.com/answer/Did-DDoS-attacks-cause-the-FCC-net-neutrality-site-to-go-down

How can OSS-Fuzz and other vulnerability scanners help developers?

In December 2016, Google released its project, dubbed OSS-Fuzz, as an open source tool to fuzz applications for security and stability concerns. The tool doesn't scan every piece of open source software; in order to be accepted by OSS-Fuzz, an open source project must have a large following or be considered software that's critical to global infrastructure.

In the past year, the project has scanned 47 applications and has found over 1,000 vulnerabilities, with over a quarter of those being security vulnerabilities.

Developers running an open source project should definitely look to integrate into Google's project. The code of the fuzz target, or the code being fuzzed for vulnerabilities, should be part of the project's source code repository.

Developers also need to have seeds so that the fuzzing can be more efficient. Google recommends having a "minimal set of inputs that provides maximal code coverage." Developers also need to be aware of what's being fuzzed in their code, and the coverage of the fuzzers should be reviewed to validate that the application is being tested efficiently. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-can-OSS-Fuzz-and-other-vulnerability-scanners-help-developers
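
OSS-Fuzz targets for C and C++ are libFuzzer-style functions; Google's Atheris fuzzer later brought the same one-input-per-call idiom to Python. As a rough sketch of the shape of a fuzz target (parse_config is a hypothetical stand-in for a project's real code):

```python
import sys
import atheris  # pip install atheris

def parse_config(text):
    # Hypothetical code under test; a real target would import the project's parser.
    if text.startswith("[") and not text.endswith("]"):
        raise ValueError("unterminated section header")  # the bug the fuzzer will find

def TestOneInput(data: bytes):
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return  # reject non-UTF-8 inputs rather than "finding" them
    parse_config(text)

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```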

How does the Microsoft Authenticator application affect password use?

Protecting passwords has always been a thorn in the side of security practitioners looking to secure their organizations. The call to kill passwords has been out there for years and, recently, Microsoft took a stab at it by limiting password use with the new phone-based sign-in available in the Microsoft Authenticator app.

As the iconic comic XKCD says, "Through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess." Truer words have never been spoken.

With similar concerns today, the National Institute of Standards and Technology (NIST) came out with new guidance that includes making passwords longer, not necessarily more complex, and rotating them only as needed to reduce the risk of forgotten and poorly created passwords. With these changes, people have moved toward two-factor authentication, configured on as many accounts as possible, to strengthen passwords with a second factor, and it's here that Microsoft improves the idea of using a second device for authentication even more.
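
The back-of-the-envelope math behind "longer beats complex" is easy to check, assuming a 95-character printable set and a Diceware-style 7,776-word list:

```python
import math

print(f"8-char complex password: {8 * math.log2(95):.1f} bits")    # ~52.6
print(f"4-word passphrase:       {4 * math.log2(7776):.1f} bits")  # ~51.7
print(f"5-word passphrase:       {5 * math.log2(7776):.1f} bits")  # ~64.6
```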

By downloading the app for either iOS or Android, users logging into Microsoft applications are able to sync their mobile device as a way to authenticate the login request to the particular application. By selecting the type of account being used for logon, the mobile app can be configured to receive a validation each time a user logs into a program that's been configured to use Microsoft Authenticator. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-does-the-Microsoft-Authenticator-application-affect-password-use

What are the challenges of migrating to HTTPS from HTTP?

The United States Patent and Trademark Office (USPTO) recently had an issue switching from HTTP to HTTPS on its website, and had to temporarily revert to HTTP during the process.

In June of 2015, the U.S. government mandated that all publicly accessible federal websites provide secure connections to their services to protect data in transit. This is important because, without HTTPS, all traffic going to these sites and services is sent in the clear and risks being eavesdropped on by an attacker.

Migrating to HTTPS has gotten much easier over the past couple of years, but there are still issues and concerns to consider when making the move. A few large vendors, like Google, are deprecating HTTP by alerting Chrome users when they try to access a site that uses HTTP and that may send sensitive data. Google Chrome will eventually show a security warning for all HTTP sites.
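
One simple sanity check during such a migration is confirming that the HTTP endpoint now bounces visitors to HTTPS and sets an HSTS header. A minimal sketch with Python's requests library, with example.com as a placeholder host:

```python
import requests

def check_https_redirect(host):
    resp = requests.get(f"http://{host}/", allow_redirects=True, timeout=10)
    for hop in resp.history:  # each 301/302 along the redirect chain
        print(hop.status_code, "->", hop.headers.get("Location"))
    print("final URL:", resp.url)
    print("HSTS:", resp.headers.get("Strict-Transport-Security"))

check_https_redirect("example.com")
```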

In the past, one of the major pain points for organizations moving to SSL was the cost of the certificate, but Let's Encrypt stepped in to issue free certificates for anyone who requested them, which helped push the progress of those looking to make the jump to HTTPS. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/What-are-the-challenges-of-migrating-to-HTTPS-from-HTTP

How did Webroot's antivirus signature update create false positives?

Webroot Inc.'s issue happened on Apr. 24 between 1800 and 2100 Coordinated Universal Time, when it tagged particular Windows OS system files as W32.Trojan.Gen malware. Once these files were tagged as malicious, they went into quarantine, and the systems were left inoperative.

An antivirus signature update was pushed down from the Webroot cloud service, updating the agents with the false positive and triggering a chain reaction that caused all the Windows systems receiving the update to quarantine the files. It was reported that the antivirus signature update was only active for 13 minutes, but so many managed service providers were utilizing the service and pushing updates to their clients that the issue propagated to additional endpoints.

Shortly after the issue, Webroot started working on ways to remediate the problem, and social media started lighting up with comments and potential workarounds in an attempt to get the files back -- including removing Webroot, restoring the needed files from backup and rebooting. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-did-Webroots-antivirus-signature-update-create-false-positives

Evaluating Public Cloud Storage Providers

A move to the public cloud is a major shift in an organization's architecture, and it provides many computing and performance benefits that aren't available from a locally installed storage network. But before selecting a public cloud storage provider, you must ensure its offerings are a good fit for your organization. Review the cost, architecture and security at my article below:

http://searchcloudstorage.techtarget.com/feature/Four-criteria-for-evaluating-public-cloud-storage-providers

Comparing the Leading Public Cloud Storage Providers

Amazon, Microsoft and Google dominate the public cloud market when it comes to addressing an organization's budget, security requirements, and infrastructure and business needs.

Other niche and large public cloud service providers -- including IBM, Virtustream, Rackspace and NTT Communications -- provide beneficial services as well, but they don't have the same market share as the big three.

Although Amazon Simple Storage Service (S3), Google Cloud Storage and Microsoft Azure Storage offer similar storage features, there are a number of differentiating factors that companies should consider before selecting a service. Read more about these services at the link below:

http://searchcloudstorage.techtarget.com/feature/Comparing-the-leading-public-cloud-service-providers

A look at the services the leading public cloud vendors provide

All three of the major public cloud vendors provide storage services that can be used by organizations ranging from small- and medium-sized businesses to enterprises. Each vendor's public cloud services are similar in nature, so deciding which one(s) to select can be difficult. Some small, but significant, differences between each service can help businesses decide. Read my article below to get the details on what solution is best for you.

http://searchcloudstorage.techtarget.com/feature/A-look-at-the-services-the-leading-public-cloud-vendors-provide

Update - Using AWS Organizations to Secure Your Cloud Accounts

AWS Organizations was designed to allow cloud administrators working in Amazon Web Services (AWS) to manage accounts more securely and efficiently. Essentially, AWS Organizations creates custom policies that can be applied to users/groups to manage security, create better automation and simplify billing. There is some overlap with AWS IAM (Identity and Access Management) services, but the two are more complementary, and Organizations builds off of the IAM policies already in place.

AWS Organizations allows the management of accounts under a new entity. This entity is built into a hierarchy, and the policies and organizational units can be built within each other for management. I couldn't help but think of Microsoft's Active Directory when looking at it the first time, but that goes for anything with organizational units (OU) and hierarchy. Each particular OU can have policy applied to it, and the user/group will inherit the policy of the OU in which they reside. This also means that each user/group can only be in one OU at a time, but can have multiple policies applied to it since the OUs can be nested. The groups can be created by region, user, group or other elements.
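
As a sketch of what that hierarchy looks like in practice, here's how a deny-list service control policy might be created and attached to an OU with boto3. The policy content is a hypothetical guardrail, and the OU ID is a placeholder (real IDs come from list_organizational_units_for_parent):

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical guardrail: nothing under this OU may leave the org or stop CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="BaselineGuardrails",
    Description="Deny actions that weaken org-wide auditing",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# "ou-xxxx-xxxxxxxx" is a placeholder for a real organizational unit ID.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-xxxx-xxxxxxxx")
```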

With both AWS IAM and AWS Organizations, there can be a little overlap, but you can think of Organizations as a way of containing the rights of users. The IAM policies can still be created and even pushed through Organizations, but it's the guardrail to determine least privilege. If users go against these policies, it's possible to contain or restrict them with blacklist or whitelist policies. This helps to keep the permissions and security of users to what's deemed necessary by hierarchical policy enforcement. AWS Organizations is the framework that IAM policies can use to tighten security. Read the rest of my article at the link below:

http://searchcloudsecurity.techtarget.com/answer/How-can-AWS-Organizations-help-secure-cloud-accounts

Public Cloud Storage Offers Scalability and Performance

Using public cloud storage services lets organizations offload management tasks and the costs associated with supporting physical hardware to an external provider. An organization's data is stored in the provider's data center and the provider manages and maintains all facets of the data center, including power, cooling and server maintenance. As a result, organizations don't have to worry about archive planning, implementing security practices or conducting resource planning for future data growth.

Public cloud storage services are also cost-effective; organizations pay only for the resources they use. Public cloud storage provides a scalable and agile environment for businesses to increase or decrease storage on demand.

Organizations use the public cloud to store both structured and unstructured data. Many applications that have made their way to the cloud -- such as those that use back-end databases or structured data -- handle data from applications that tie directly into cloud database services. This type of cloud storage environment is appealing to companies that are either just starting out and don't want to purchase hardware or that are looking for scalable storage that doesn't require a large capital expenditure. Read the rest of my article at the link below:

http://searchcloudstorage.techtarget.com/feature/Public-cloud-storage-services-offer-scalability-and-performance

What's the difference between software containers and sandboxing?

There are a few things to understand upfront when speaking about the differences between sandboxing and software containers, which are sometimes called "jails," and before you make a decision on which one to implement. The answer is a combination of both, but many organizations might not be able to afford or have the expertise to implement both. Hopefully, understanding how they're used will allow enterprises to make an educated decision moving forward.

Sandboxes became a big hit a few years back, after we realized malware was still making its way past antivirus software and infecting our networks. The issue with antivirus is that all systems need signature-based agents installed on the machines, and they have to be updated to at least give the endpoint a fighting chance against malware. Since antivirus wasn't catching everything -- even when it was fully updated and installed on workstations -- the use of sandboxing grew.

Sandboxing relies on multiple virtual machines (VMs) to catch traffic as it ingresses/egresses in the network, and it is used as a choke point for malicious activity detection. The goal of sandboxing is to take unknown files and detonate them within one of the VMs to determine if the file is safe for installation. Since there are multiple evasion techniques, this doesn't always make for a foolproof solution; it's just an extra layer of defense. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/Whats-the-difference-between-software-containers-and-sandboxing

How can enterprises leverage Google Project Wycheproof?

The name Wycheproof was chosen because it's the smallest mountain in the world, and stands at a whopping 486 feet above sea level. The reason this particularly unimpressive mountain serves as the namesake of this project is because the tool is only the beginning.

The authors of this tool said they wanted to "create something with an achievable goal," and allowing others to use these tools without "digesting decades worth of academic literature" could lead to increased adoption of this tool and improved security across the internet.

If you're developing applications and using cryptographic libraries, this tool could be something to keep in your toolbox for further investigation and implementation. Project Wycheproof includes over 80 test cases for crypto libraries, and it has already uncovered 40 security bugs that are currently being worked on.
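
Wycheproof also publishes its checks as JSON test-vector files that other projects replay against their own libraries. Below is a rough harness sketch assuming the repository's aes_gcm_test.json layout (testGroups containing hex-encoded tests); verify the actual schema against the repo before relying on it:

```python
import json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("aes_gcm_test.json") as f:  # test-vector file from the Wycheproof repo
    suite = json.load(f)

for group in suite["testGroups"]:
    if group.get("tagSize") != 128:
        continue  # keep the sketch to full-length tags
    for t in group["tests"]:
        if t["result"] != "valid":
            continue  # "invalid"/"acceptable" vectors need decrypt-side handling
        key, iv = bytes.fromhex(t["key"]), bytes.fromhex(t["iv"])
        msg, aad = bytes.fromhex(t["msg"]), bytes.fromhex(t["aad"])
        expected = bytes.fromhex(t["ct"]) + bytes.fromhex(t["tag"])
        try:
            got = AESGCM(key).encrypt(iv, msg, aad)
        except ValueError:
            continue  # parameter shapes the library refuses outright
        if got != expected:
            print("library disagrees with vector", t["tcId"])
```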

When using tools like Project Wycheproof, or performing security research, you must always attempt to notify the vendor or organization that's responsible for the vulnerabilities discovered. Are there flaws or danger when exposing these faults in the wild? Absolutely. The issue then becomes how to notify others of these vulnerabilities after they're found in crypto libraries. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-can-enterprises-leverage-Googles-Project-Wycheproof

Basic steps to improve network device security

Routers are gateways to networks, and often, they're the first devices compromised when an attacker enters your network. Because of this, a router should be as hardened as possible before it's put on your network. With this in mind, there are a few areas we can focus on to improve network device security.

The first step is to place these systems squarely within the network vulnerability management process. This includes running authorized scans of the routers with an account that's able to access the system and determining what risks are present within the router. These risks could be out-of-date patches, running insecure protocols, being versions behind on images and so on. Getting a solid risk assessment of your routers on a scheduled basis can help you to get a foothold on where your risks are and what needs to change, all while being tracked as metrics.

Along the same lines, there are tools that can connect to network equipment and review router configurations and rule sets for security and compliance checks. This is a higher level of network device security than vulnerability management, since it reviews the rule set of the device and makes recommendations based on best practices. It's something to strive for, but verifying that the routers are free from vulnerabilities should be the first priority. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/What-basic-steps-can-improve-network-device-security-in-enterprises

Cisco CloudCenter Orchestrator Vulnerability

This vulnerability gives an unauthenticated, remote attacker the ability to install Docker containers to the system, and could potentially allow him to attain escalated privileges, such as root. This was made possible by a misconfiguration that makes the Docker management port accessible to attackers, and allows them to submit Docker containers to the Cisco CloudCenter Orchestrator without an administrator's knowledge.

Docker is open source software that allows you to run multiple instances of an application on virtualized hardware, with the flexibility to move these containers into cloud platforms for high portability. These containers are typically more lightweight than a usual virtual machine, and run under a host that shares similar libraries. The applications running in these containers can quickly be spun up or ported to hosts that support them. The concern with the recently disclosed Cisco vulnerability is that there could be additional containers or applications running in your CloudCenter Orchestrator that weren't configured by you, and which are being used for malicious purposes. Read the rest of my article at the below link:

http://searchcloudsecurity.techtarget.com/answer/How-does-the-Cisco-CloudCenter-Orchestrator-vulnerability-work
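
To make the exposure class concrete: Docker's conventional plaintext remote API answers REST calls without authentication, so simply being reachable is enough for an attacker to launch containers. A minimal probe sketch follows; note that 2375 is Docker's conventional plaintext API port, not necessarily the port CloudCenter Orchestrator used, and the IP is a documentation-range placeholder:

```python
import requests

def docker_api_exposed(host, port=2375):
    # Any unauthenticated answer to /version means anyone can launch containers.
    try:
        r = requests.get(f"http://{host}:{port}/version", timeout=5)
        return r.ok and "ApiVersion" in r.json()
    except (requests.RequestException, ValueError):
        return False

print(docker_api_exposed("203.0.113.10"))  # placeholder IP from TEST-NET-3
```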

What should enterprises know about how a stored XSS exploit works?

Cross-site scripting, or XSS, is a web application attack that attempts to inject malicious code into a vulnerable application. The application isn't at risk during this attack; the main purpose of XSS is to exploit the account or user attempting to use the application.

There are a few different types of XSS -- such as stored, reflective and others -- but in this article, we'll briefly go over the stored version of the exploit, which recently affected VMware's ESXi hypervisor.

Stored XSS is also called persistent XSS: the attacker aims to make an XSS exploit a permanent part of an application, as opposed to a reflected XSS attack, where the user might have to click on a crafted link to exploit the vulnerable app.
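
To make the distinction concrete, here's a toy Flask sketch of the stored variant: the payload is persisted once, then replayed into every visitor's page unless output encoding neuters it. All names are illustrative:

```python
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)
comments = []  # stands in for the application's database

@app.route("/comment", methods=["POST"])
def add_comment():
    comments.append(request.form["text"])  # stored verbatim: the root of the flaw
    return "ok"

@app.route("/")
def index():
    # Vulnerable version: "<br>".join(comments) would run any stored <script>
    # for every visitor. Escaping on output renders the payload inert.
    return "<br>".join(escape(c) for c in comments)
```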

In this case, a permanent XSS exploit means the application has been modified to allow software -- such as a web browser -- to automatically load the exploit without user interaction. The stored XSS is now part of the app, and will load each time a user interacts with the application. This type of cross-site scripting is not as common as reflective XSS, but it's definitely higher risk. If an attacker is able to find a high-value application with a high hit rate, they can do a lot of damage. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/What-should-enterprises-know-about-how-a-stored-XSS-exploit-works

What to consider about signatureless malware detection

Antivirus isn't dead; it's just changing. People have been calling for the death of antivirus for years, but in reality, it isn't possible. There will always be a need for endpoint protection, no matter what anyone says. No one in their right mind is going to leave an endpoint purposely unprotected, so calling for the death of antivirus is a little premature. The people who call for the execution of antivirus are likely fed up with how it works, and put too much faith in its ability to catch every malware sample.

Using signature-based antimalware means you're always one step behind attackers, and for those not using a defense-in-depth approach, this reliance on endpoint protection can cause a false sense of security.

This backlash against the old method has caused many companies, both vendors and customers, to move toward more of a signatureless malware detection model. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/tip/What-to-consider-about-signatureless-malware-detection

What's the best corporate email security policy for erroneous emails?

The issue here is that Uber doesn't verify email addresses, and these erroneous emails were being sent directly to a different user, who was able to view private information about the real customer.

With that being said, if there are multiple emails incoming to an organization regarding accidental sign-ups or verification, it is an enterprise's right to block these incoming messages without question.

Unlike personal email, which the user has control over, corporate email security is the responsibility of the enterprise for which the employee works. This account, the emails and everything associated with it, are property of said organization.

If there is ever an issue with emails accidentally being sent to the company and affecting it adversely, the company has the right to block these emails in its mail gateways or spam and phishing filters as part of the corporate email security policy. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/Whats-the-best-corporate-email-security-policy-for-erroneous-emails

How identity governance and access management systems differ

Access management is the process by which a company identifies, tracks and controls which users have access to a particular system or application. Access management systems are more concerned with the assimilation of users, creating profiles and the process of controlling and streamlining the manual effort of granting users the proper access and roles. Having a process and the due diligence in place to create the roles, groups and permissions first is necessary with access management. Access management systems rely on the framework of which users have which rights and how that's accomplished.

This is somewhat different from identity governance, in which administrators are more concerned about giving users new access to roles and alerting the security team to attempts by unauthorized users to access resources.

Identity governance relies on policies to determine if updated access is too risky for a particular user based on his previous access and behavior. These governance policies can be put into an automated workflow when a change is deemed a risk, and allows the owners of the application or the data to sign off on the update. This fixes the issue of having to recertify users annually, and takes more of an incremental approach to auditing access. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-do-identity-governance-and-access-management-systems-differ

Are you still using SMBv1 on your system?

Server Message Block, or SMB, is a file sharing protocol that allows operating systems and applications to read and write data to a system. It also allows a system to request services from a server.

The latest versions of the Windows operating system support SMB v2 and SMB v3, and Microsoft is attempting to deprecate the use of SMB v1 within its software.

There have been numerous vulnerabilities tied to the use of Windows SMB v1, including remote code execution and denial-of-service exploits. These two vulnerabilities can leave a system crippled, or allow attackers to compromise a system using this vulnerable protocol.
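
One way to tell whether a host still negotiates the legacy dialect is to offer it only SMBv1 and see whether it accepts. Here's a sketch assuming the impacket library's SMBConnection API, with a documentation-range IP as a placeholder:

```python
from impacket.smbconnection import SMBConnection, SMB_DIALECT  # pip install impacket

def speaks_smb1(host):
    try:
        conn = SMBConnection(host, host, preferredDialect=SMB_DIALECT, timeout=10)
        conn.close()
        return True   # server accepted the legacy NT LM 0.12 dialect
    except Exception:
        return False  # SMBv1 refused, or port 445 closed/filtered

print(speaks_smb1("192.0.2.25"))  # placeholder IP from TEST-NET-1
```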

Throughout the years, Microsoft has patched its operating system for similar vulnerabilities in Windows SMB v1, and has introduced new versions of the protocol to eliminate the use of this first version of SMB. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-can-users-tell-if-Windows-SMB-v1-is-on-their-systems

Universal Second Factor Device for Facebook

Facebook recently introduced the ability to use what it calls a Facebook Security Key as a second factor of authentication for its site. In order to use this feature within Facebook, the user needs to own a universal second factor device, or U2F security key, to enable login approvals through the security section of their profile.

The universal second factor standard was created by Google and Yubico, and uses the FIDO protocol with standard public key cryptography to provide a secure second form of authentication.

A U2F security key is registered with a service, like Facebook, by approving it during the registration process. This is done by pressing the button on the universal second factor device when prompted, which starts the process of creating the second factor. This approval creates a key pair, in which the public key is sent to the online service and linked to the particular user's account. The private key is kept locally on the universal second factor device, and is never sent to the provider. This registration process creates the key pair for the second factor of authentication that is used each time during login going forward. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-does-a-universal-second-factor-device-secure-Facebook-users
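
Under the hood, the flow above reduces to challenge-response signatures over a device-held key pair. Here's a conceptual Python sketch with an ECDSA P-256 key (the curve U2F uses) and the cryptography library standing in for the hardware token:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the token mints a key pair; only the public half leaves the device.
device_private = ec.generate_private_key(ec.SECP256R1())
stored_by_service = device_private.public_key()  # linked to the user's account

# Login: the service sends a fresh challenge; the token signs it locally.
challenge = os.urandom(32)
signature = device_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification with the stored public key; raises InvalidSignature on mismatch.
stored_by_service.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("second factor verified")
```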

What to Know About IAM In the Cloud Before Implementation

Identity management is a complex topic that has been making its way into the cloud and enhancing the prospect for companies to federate and engage with previously unavailable identity services.

By embracing security services in the cloud, or security as a service (SECaaS), enterprises are able to streamline and take advantage of more flexible services that they might have been struggling to maintain on premises or for which they weren't staffed.

One of the more popular SECaaS applications is identity and access management. This service can either be fully maintained within a cloud platform or it can work with systems at a customer's site in a hybrid model.

Both identity management -- the ability to create, modify and delete an identity -- and access management -- the authorization of that identity for only the proper resources -- are extremely necessary in today's environment. Having the capability to create roles with the proper access to resources, while keeping security in mind, is of the utmost importance to an organization utilizing the cloud. Read the rest of my article at the link below:

http://searchcloudsecurity.techtarget.com/tip/What-enterprises-need-to-know-about-cloud-IAM-before-implementation

AirWatch Agent and Inbox Vulnerabilities

AirWatch is software that can be used to protect against compromised mobile devices -- known as rooted devices -- and that allows security settings, email and other functions to be applied to a phone for defense against attackers.

In this case, two vulnerabilities allowed attackers to root devices without the AirWatch software noticing. Normally, when AirWatch software is installed on a device, it checks whether the device is already rooted. A policy can be created in the agent console that tells the agent what to do if this is found during enrollment. Typically, the policy is configured to have the AirWatch Agent decline the install if it's being attempted on a rooted device.

AirWatch also has apps that can be installed within its suite of products, and one of these apps, the AirWatch Inbox -- a containerized email client that's supposed to provide separation from the data within it and the rest of the device -- was also found to be vulnerable. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-did-vulnerabilities-in-AirWatch-Agent-and-Inbox-work

Google Cloud KMS Security Benefits

Google Cloud Key Management Service (KMS) allows its customers to create, use, rotate and destroy encryption keys in the cloud. Customers can create keys in Google Cloud KMS using AES-256.
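
As a rough sketch of the workflow, assuming the google-cloud-kms Python client, with every identifier a placeholder for your own project, key ring and key:

```python
from google.cloud import kms  # pip install google-cloud-kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "my-ring", "my-key")

# Encrypt locally held plaintext under the cloud-managed AES-256 key.
resp = client.encrypt(request={"name": key_name, "plaintext": b"card-number"})

# Decrypt it again; KMS never hands out the key material itself.
plain = client.decrypt(request={"name": key_name, "ciphertext": resp.ciphertext})
assert plain.plaintext == b"card-number"
```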

This is important for Google -- the company recently made a big push to vie for enterprise customers -- because Amazon Web Services (AWS) and Microsoft Azure have had this capability for some time. The addition of Google Cloud KMS shows that it's maturing into a real contender within the enterprise cloud space. Allowing a cloud-based KMS to store Google-created symmetric keys makes implementing encryption approachable and easy.

One of the main challenges with key management systems is handling the keys when there are complicated systems and in-house expertise already at play. With Google's Cloud KMS, there is no longer the need to have an on-premises system, like a hardware security module, and lack of scalability is no longer a concern. Read the rest of my article at the link below:

http://searchcloudsecurity.techtarget.com/answer/Google-Cloud-KMS-What-are-the-security-benefits

How companies should prepare for EU GDPR Compliance

Beginning in May 2018, all businesses housing data from European residents will have to abide by the EU GDPR. If companies don't abide by the rules defined by the GDPR, they can be fined up to 20 million euros or 4% of their annual turnover, whichever is greater.

With this regulation, Microsoft has taken steps to protect the data it holds in the cloud before the GDPR goes into effect. Microsoft is one of the largest cloud service providers in the world, and will need to comply with the more stringent regulations being imposed by the EU data directive to continue doing business under GDPR.

Under this regulation, the EU can validate how companies collect, process or store data on any European resident, and enterprises must comply with its directive on securing EU user privacy. This law pushes companies outside the EU to comply with its rules if they want to continue doing business with EU citizens. This may be a challenge for global e-commerce retailers that weren't following these directives completely in the past. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-should-companies-prepare-for-EU-GDPR-compliance

Why You Need Separate Administrator Accounts

Creating a policy for separate administrator accounts isn't something unusual; it's actually becoming a standard. Creating a separate account for administrators allows for the proper separation of duties and security on accounts that have full access to systems and data.

This can become an issue within a larger company -- not because it can't be done, but because it's going to take more work in order to complete. A lot of this work might also change the mindset of the administrators and management.

I've seen large companies create separate accounts for administrators in a few ways. One way was creating read-only accounts for reviewing systems' configurations and access, so administrators didn't accidentally create an issue within a system when reviewing the application. On the flip side, I've also seen accounts created that were only used when the administrator was about to make a change to a system, or when he needed to elevate his rights. Read more of my article at the below link:

http://searchsecurity.techtarget.com/answer/Are-separate-administrator-accounts-a-good-idea-for-enterprises

How a Slack Vuln Exposed User Auth Tokens

Frans Rosen, a security researcher at web security company Detectify, discovered a Slack vulnerability that essentially enabled attackers to gain access to another Slack user's chats, messages, file content and more. The vulnerability would have enabled an attacker to gain complete access to another user's account by having the victim visit a malicious page that redirected the Slack WebSocket to the malicious site, stealing the user's session token in the process.

Rosen originally found this Slack vulnerability on the browser version of the application. He submitted the bug to Slack, and it was fixed within five hours. Slack's bug bounty program paid him $3,000 for the vulnerability submission.

The major reason this Slack vulnerability could have been successfully exploited was due to the fact that the application wasn't properly checking messages when using cross-origin communication. With this flaw in place, an attacker could create a malicious link that abused this trust, and directed the user to a page of the attacker's choosing. This site would then be configured to steal the authentication token from the user who assumed they were logging into Slack. The proof-of-concept attack also abused the postMessage function and the WebSocket protocol on which the application relies for communication. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-did-a-Slack-vulnerability-expose-user-authentication-tokens
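
The general defense against this class of bug is to validate the origin of any cross-origin message or WebSocket handshake against an allowlist before trusting it. Here's a minimal Python sketch of that check; the allowlist and function name are illustrative, not Slack's actual code:

    from urllib.parse import urlparse

    # Illustrative allowlist -- a real deployment would list its own origins.
    TRUSTED_ORIGINS = {"https://slack.com", "https://app.slack.com"}

    def is_trusted_origin(origin: str) -> bool:
        # Reject any message or handshake whose Origin isn't allowlisted.
        parsed = urlparse(origin)
        return f"{parsed.scheme}://{parsed.netloc}" in TRUSTED_ORIGINS

    assert is_trusted_origin("https://app.slack.com")
    assert not is_trusted_origin("https://attacker.example")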

MongoDB Security Issues and How to Resolve Them

Recently, there was a surge of attacks looking for misconfigured installations of MongoDB on the internet. The attackers were abusing the lack of authentication and remote accessibility to these MongoDB instances by deleting an original database and holding a copy of it for ransom.

These and other MongoDB security misconfigurations and vulnerabilities aren't completely related to patch management, and are more in the realm of configuration management. There are a few ways to improve MongoDB security and protect your database from attackers.

The major issue here lies with certain versions of MongoDB shipping with loose default configurations. The responsibility in this case lies firmly with the administrators who install the database software and don't manage it appropriately. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/What-MongoDB-security-issues-are-still-unresolved
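
As a rough illustration of the fix, the two most important changes are binding mongod to a private interface and enabling authentication. Here's a hedged Python sketch using pymongo that creates the first administrative user over the localhost exception; the user name, password and roles are examples only:

    from pymongo import MongoClient

    # Sketch: assumes mongod.conf sets net.bindIp: 127.0.0.1 and
    # security.authorization: enabled, so only local connections work
    # and the localhost exception allows creating the first admin user.
    client = MongoClient("mongodb://127.0.0.1:27017/")
    client.admin.command(
        "createUser", "dbadmin",
        pwd="use-a-strong-secret-here",  # placeholder -- never hardcode in production
        roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
    )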

How can the latest LastPass vulns be mitigated?

Tavis Ormandy, a Google Project Zero researcher, has been a thorn in the side of LastPass for the past year. In 2016, he found multiple vulnerabilities in its software, and in March 2017, he discovered multiple new exploits in the LastPass password management tool that enabled password theft and remote code execution.

LastPass is a password manager that creates random passwords, enables applications and websites to auto-fill credentials when possible, and maintains a digital wallet of credentials. This tool improves credential hygiene and keeps users from reusing passwords. Because of this, LastPass has become a target, and Ormandy's findings have helped the company improve its security by remediating the LastPass vulnerabilities before they could be exploited in the wild. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-can-the-latest-LastPass-vulnerabilities-be-mitigated
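
For a sense of what a password manager does when generating credentials, here's a small Python sketch using the standard library's secrets module; the length and character set are illustrative choices, not LastPass internals:

    import secrets
    import string

    # Sketch: generate a random 20-character password with a CSPRNG,
    # the way a password manager might, rather than using random().
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())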

Should the Vulnerabilities Equities Process be Codified Into Law?

The Vulnerabilities Equities Process was created to guide government agencies through the decision-making process of releasing or withholding vulnerabilities they've discovered. The answer isn't as black and white as it sounds; it's a complex issue that can be polarizing for those who deal with it directly. The call to codify the VEP, or to formalize the process into law, has both pros and cons. Read more of my article at the link below:

http://searchsecurity.techtarget.com/answer/Should-the-Vulnerabilities-Equities-Process-be-codified-into-law

Using a SOC 2 Report to Evaluate Cloud Providers

There are a few tools that can be used when assessing a cloud service provider, and a SOC 2 report is one of them. If a cloud provider or vendor has a SOC 2 report available, it can be extremely useful for understanding the company's controls when it comes to security, availability, processing integrity, confidentiality and privacy. If the third party cannot provide a SOC 2 report, it's possible that they haven't had an assessment performed, or that they're not willing to disclose this data.

It's always best to receive a Type 2 SOC 2, but many vendors might send over a SOC 3 to prove that work has been completed. The Type 2 SOC 2 report will not only review the design of the controls in question, but will also detail their operating effectiveness over the review period. If possible, try to get a Type 2 SOC 2 from the vendor as a first step. Read more at the link below:

http://searchcloudsecurity.techtarget.com/answer/How-can-enterprises-use-SOC-2-reports-to-evaluate-cloud-providers

Domain Validation Certificates: What Are the Security Implications?

Let's Encrypt is a free and open certificate authority that enables those who might not be able to afford or configure HTTPS on their web servers to protect their sites.
Using tools that partner with Let's Encrypt, such as the Electronic Frontier Foundation's Certbot, website administrators can freely enable TLS on their sites, and can even automate security functions such as cipher suite selection and other encryption features.
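
Once a certificate is issued (via Certbot, for example), it's worth verifying it programmatically; this is handy for confirming automated renewal is actually working. Here's a minimal Python sketch that connects to a site and prints the certificate's issuer and expiry -- the hostname is a placeholder:

    import socket
    import ssl

    hostname = "example.com"  # placeholder -- use your own domain
    context = ssl.create_default_context()

    # Open a TLS connection and inspect the validated certificate.
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Issuer: ", dict(x[0] for x in cert["issuer"]))
            print("Expires:", cert["notAfter"])
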
The major goal of Let's Encrypt is a secure internet, with all sessions encrypted in transit. Let's Encrypt has major sponsors -- including Mozilla, Cisco, the Electronic Frontier Foundation, Google and Facebook -- that have offered their support for the service. Read more at the link below:

Patching telecom infrastructure can become a challenge

As with many priority systems, patching can become an arduous, even political, battle within an enterprise. These systems can be deemed so critical by the organization that patching them is viewed as a risk to the business, which is counterintuitive from a security standpoint. This is normally the case when these systems run on outdated or legacy operating systems, where installing patches would void a support agreement, or where the organization doesn't have the funds or architecture to test the patches' functionality in a QA environment. Read more at the link below:

http://searchsecurity.techtarget.com/answer/Why-is-patching-telecom-infrastructures-such-a-challenge

How does a privacy impact assessment affect enterprise security?

A privacy impact assessment is a review of how an organization handles the sensitive or personal data flowing through its systems. Through this review, the organization -- or potentially a hired third party -- will examine internal corporate processes, procedures and even technology to determine how private data on users or customers is being collected, stored and processed. This is commonly seen within government agencies, and sometimes within organizations storing large amounts of private data on their users or customers, such as those in healthcare, e-commerce or other industries. Read more at the link below:

http://searchsecurity.techtarget.com/answer/How-does-a-privacy-impact-assessment-affect-enterprise-security

Friday, July 28, 2017

AI and the Future of Cybersecurity: Analyzing & Identifying Cybercrime (Webinar)

On August 17th I'll be co-presenting a webinar with Darktrace on "AI and the Future of Cybersecurity: Analyzing & Identifying Cybercrime".

In today’s world, it is critical to be proactive. Ransomware, malware, insider threats and IoT attacks are evolving rapidly, which means that prevention tactics must keep up and evolve at an equally rapid pace. Zero-day attacks can be detected and prevented when businesses incorporate AI and machine learning into their cyber defense strategy.

If you want to learn more, please register for the webinar (seats are limited): http://www.ccsinet.com/ai-future-cybersecurity/

Monday, July 3, 2017

Targeted iPhone Phishing Scams (Trident Zero Day)

Here's the video of an interview I did for News12 regarding iPhone users being targeted for phishing scams related to the Trident zero day. Time to update!

http://longisland.news12.com/story/35241250/cybersecurity-expert-warns-of-scam-targeting-iphone-users

The Rise of Artificial Intelligence in Cyber Security

The rise of behavioral analytics, machine learning, artificial intelligence, or whatever the latest nomenclature being promoted by vendors is, has taken the security community by storm and shows no signs of stopping. It's almost impossible not to see these phrases mentioned on new preventative solutions going to market, and rightfully so. For an industry accustomed to relying on static signatures, known bad hashes and singular alerting, this technology is a welcome relief for defenders, and we've seen the market capitalize on our desire for it. Here's an article I wrote for SC Magazine on how AI became the darling of an industry: https://www.scmagazine.com/how-artificial-intelligence-became-the-darling-of-an-industry/article/666778/

Monday, May 15, 2017

WannaCry - It's Time To Get Back To Basics

I've been asked to comment on the WannaCry Ransomware by a few groups. Here are my thoughts on what happened and what the logical next steps are. You can read the blog post here: http://www.ccsinet.com/blog/wannacry-keep-calm-and-remember-the-basics/
Honestly, this is a wake-up call for the security community to "Get Back to Basics". Plain and simple. 
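
One of those basics is knowing where SMB is reachable when it shouldn't be, since WannaCry spread over TCP 445. Here's a small Python sketch that checks whether that port is open on a host; the address is a placeholder, and you should only scan systems you're authorized to test:

    import socket

    # Sketch: check whether TCP 445 (SMB) is reachable on a host.
    def smb_port_open(host: str, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, 445), timeout=timeout):
                return True
        except OSError:
            return False

    print(smb_port_open("192.0.2.10"))  # placeholder address (TEST-NET-1)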

Friday, May 5, 2017

Defeating Ransomware With A Little Help From Your Friends

We all know this so it doesn't have to be said, but I'm going to say it anyway: Ransomware sucks. Anyone who's suffered at the hands of attackers making money by holding personal or business data hostage knows just how much it sucks. The issue doesn't seem to be going away, either; it's getting exponentially more difficult to deal with as attackers hone their techniques and companies continue to deal with limited security resources.

Last month I worked with CCSI to write a whitepaper on behavioral analytics and machine learning and how they can be applied to detect and prevent attackers in your network. On May 11th, CCSI is hosting a webinar to review this whitepaper and the role MSSPs can play in using this technology to help you become more secure.

The key questions to ask when attempting to defeat ransomware are:

1. Will your current technology detect ransomware in your network?
2. If it does detect it, will it prevent it?
3. How do you respond to these notifications, especially during off-hours or with a limited staff?

This webinar reviews the role of MSSPs in this space and how they help your organization become more resilient by using this technology to detect, prevent and respond to ransomware in your network 24x7x365.

There is limited space for this webinar so sign up soon: http://www.ccsinet.com/ccsi-webinar-defeat-ransomware/

Tuesday, April 25, 2017

Using Machine Learning and Behavior Analysis to Assist with Threat Detection

Here's a whitepaper I wrote with CCSI describing what machine learning is and how you can use behavior analysis to assist your organization with threat detection. Few things over the past few years have changed the way we defend our networks like these two.

Attackers are consistently breaching enterprise networks in attempts to compromise confidential data, and the hard truth is they’re not slowing down. Data breaches have almost become commonplace in today’s news, and we’ve seen businesses hit with attacks that cost them millions of dollars in lost revenue, fines and consumer trust. The majority of these organizations already had the traditional security commodities in place (e.g., logging, firewalls, SIEM) and yet were still breached by dedicated attackers. In today’s attack landscape, advanced attackers are able to bypass many of these defenses with persistent, dedicated attacks directed towards an organization’s user base and the vulnerabilities within its security architecture. The unfortunate truth is that when using only traditional security defenses, the odds are heavily weighted in the attackers’ favor. Adding behavioral analysis and machine learning to a business’s cyber defenses brings visibility into threats, something sorely needed in today’s networks.
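
To illustrate the core idea behind behavioral analysis at its simplest, here's a toy Python sketch that baselines a host's daily outbound traffic and flags a day that deviates more than three standard deviations from the mean. The numbers are fabricated for the example; real products model far richer behavior:

    import statistics

    # Toy behavioral baseline: daily outbound megabytes for one host.
    baseline_mb = [120, 135, 110, 142, 128, 131, 125, 138, 119, 127]
    mean = statistics.mean(baseline_mb)
    stdev = statistics.stdev(baseline_mb)

    def is_anomalous(observed_mb: float) -> bool:
        # Flag anything more than 3 standard deviations from the mean.
        return abs(observed_mb - mean) > 3 * stdev

    print(is_anomalous(130))   # False -- a normal day
    print(is_anomalous(2400))  # True -- possible exfiltration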

Tuesday, March 28, 2017

MegaplanIT Supports PCI SSC North America Community Meeting with Platinum Sponsorship

I've worked with the group over at MegaplanIT for quite some time and have nothing but great things to say about them and their company. Their professionalism, technical ability and business acumen have always impressed me, which is why, when I heard they were sponsoring the PCI SSC North America Community Meeting as a Platinum sponsor, I wanted to give them the recognition they deserve. Over the years MegaplanIT has grown to become a trusted partner in the security and compliance space, and it's great seeing good people succeed. I would highly recommend reaching out to them for any PCI-related services. Below is their new press release - kudos, guys!

MegaplanIT Supports PCI SSC North America Community Meeting with Platinum Sponsorship

MegaplanIT, LLC, is the Platinum sponsor for the PCI SSC North America Community Meeting being held in Orlando, Florida, in September 2017.

Scottsdale, Arizona – March 2017

MegaplanIT, LLC, a PCI QSA company and premier provider of security and compliance solutions, has announced that it will be participating in the PCI SSC North America Community Meeting this September as a Platinum sponsor. The event, which takes place September 12-14 in Orlando, Florida, is a principal conference bringing together stakeholders from the payment card industry to participate in discussions on the latest standards, technologies and strategic initiatives shared by the PCI Council.

“We are excited for the opportunity to partner with the PCI Council as a Platinum sponsor in this year’s PCI SSC North America Community Meeting. By sponsoring the event, we hope to display MegaplanIT’s continued commitment to, and appreciation of, the PCI Council’s hard work and guidance,” says Michael Vitolo, managing partner of MegaplanIT. He goes on to share, “With this support of the Council, we’re continually looking to develop strong relationships and work with other organizations to become a trusted partner within the payment card industry, while offering the best services available to our customers.”

Through this Platinum sponsorship, MegaplanIT believes that showcasing its brand during this PCI community event demonstrates its level of commitment and dedication to clients in need of PCI and security-related services.

For further details please contact:

Jerry Abowd
Principal Account Manager
MegaPlanIT, LLC
800-891-1634 ext 105

Thursday, March 23, 2017

10 Must Read Infosec Books

I was recently asked to participate in selecting one information security book to add to a round-up of recommended reading for infosec pros. The round-up includes ten selections from different people and was published by Tripwire here.

There are many great books out there I wanted to recommend, but since I only had one spot on the list I wanted to make it count. My selection, even though it's an older book, was Extrusion Detection: Security Monitoring for Internal Intrusions by Richard Bejtlich.

The technology in this book might have changed, but the concepts are still the same. In order to defend the confidential data within your network, there needs to be proper extrusion detection in place to catch intruders who have compromised your internal systems and are attempting to siphon sensitive data out of your network. There's been a huge emphasis on preventing threats in the past, but we have to adopt the mindset that we're already breached and focus on how to deal with it. This book gives you some serious food for thought on how that can be applied, and it was eye-opening for me when I read it almost a decade ago.
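
As a trivial illustration of the extrusion-detection mindset, here's a Python sketch that reviews flow records and flags internal hosts sending unusually large volumes outside the network. The flows, addresses and threshold are fabricated for the example:

    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")  # example internal range
    THRESHOLD_MB = 500                   # example egress threshold

    flows = [
        {"src": "10.0.1.5", "dst": "10.0.2.9", "mb": 900},      # stays internal
        {"src": "10.0.1.7", "dst": "203.0.113.44", "mb": 650},  # large egress
        {"src": "10.0.1.9", "dst": "198.51.100.2", "mb": 12},
    ]

    # Flag large transfers leaving the internal network.
    for f in flows:
        if ip_address(f["dst"]) not in INTERNAL and f["mb"] > THRESHOLD_MB:
            print(f"Possible exfiltration: {f['src']} -> {f['dst']} ({f['mb']} MB)")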

Wednesday, March 22, 2017

Update - Remediating the NTP Daemon DoS Vulnerability

Multiple vulnerabilities were recently discovered in the Network Time Protocol (NTP) daemon, along with a patch to remediate them. The release containing the fix -- NTP 4.2.8p9 -- was published by the Network Time Foundation's NTP Project.

A researcher named Magnus Stubman discovered the vulnerability and, instead of going public, took the mature route and privately informed the community of his findings. The remediation was part of the NTP 4.2.8p9 release. Stubman has written that the vulnerability he discovered could allow unauthenticated users to crash the NTP daemon with a single malformed UDP packet that causes a null pointer dereference (you can read more about the technical details of the exploit from Stubman on his personal website). This means an attacker could craft a UDP packet towards the service, resulting in an unhandled condition that crashes the process.

This denial-of-service (DoS) attack on the NTP daemon is dangerous because systems rely on synchronizing their time to within milliseconds of each other to operate properly, keep authentication protocols working smoothly, timestamp records for compliance, correlate security logs and so on. Without the NTP daemon working properly in an environment, errors could cascade quickly throughout the network. The threat to the environment is real, and if it's not patched, an attacker could take advantage of this vulnerability. Read the rest of my article at the link below:

http://searchsecurity.techtarget.com/answer/How-can-enterprises-fix-the-NTP-daemon-vulnerability-to-DoS-attacks
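
To confirm a server is running a patched daemon, you can ask it for its reported version. Here's a hedged Python sketch that shells out to the standard ntpq utility; it assumes ntpq is installed, and the target host is a placeholder:

    import subprocess

    # Sketch: query an NTP server's reported version via ntpq.
    # Output parsing is best-effort; the version string format varies.
    def ntp_version(host: str) -> str:
        out = subprocess.run(
            ["ntpq", "-c", "rv 0 version", host],
            capture_output=True, text=True, timeout=10,
        )
        return out.stdout.strip()

    print(ntp_version("pool.ntp.org"))  # look for ntpd 4.2.8p9 or later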

Tuesday, February 21, 2017

New York State’s New Cybersecurity Regulation and What It Means to You

New York is launching a new cybersecurity regulation that will come into effect March 1. This new regulation targets the banking and insurance sectors, with the aim of better protecting institutions and consumers against the bad actors that target these firms.
This new cybersecurity regulation, believed to be the first of its kind adopted by a U.S. state, highlights both the need for such protection and our inability so far to quell the attacks on businesses and government agencies, regardless of the countless monies invested in information security.
Take a look at the rest of the article here to determine what this means for you: http://www.ccsinet.com/ny-states-cybersecurity-regulation/

Friday, February 10, 2017

Establishing a Data Protection Committee for the Boardroom

In other countries, especially in Europe, there’s a requirement to have data protection committees to enforce the privacy and protection of a country’s or organization’s data. In America we don’t have those particular laws, but establishing such a committee is something we should still strive towards, even if it’s not yet mandated by government. Establishing a data protection committee within an organization requires upper management approval, an understanding of risk and law, and the proper tools to complete the job. With this in mind, the two largest concerns for data itself are security and privacy. These two topics overlap in certain areas, but each can also stand alone. When building a committee to protect these two aspects of data, we need to understand what the role of the committee is and how it will function going forward.

By far the most important part of the committee is its membership: who’s been asked to attend. There need to be chairs, preferably co-chairs, who have been either voted onto or assigned to the committee by upper management or leadership. The committee itself should include members from all walks of the business, not only those in the security field. By including only members from security, you miss out on valuable insight from other areas of the business.

Membership should include representation from legal, compliance, particular business units, M&A teams, security and privacy, operations, etc. The membership can grow, but it should be kept to individuals who have the authority and acumen to make decisions regarding the topics at hand. They don’t always have to be experts on data security, but they should bring knowledge of their business unit or field and how it relates to the protection of the organization’s data. These members should be a cross-functional group of individuals working together, with potentially a few advisors to help guide the conversation. This group should be in attendance for the majority of the committee meetings, not continually sending someone in their place; if that happens, the meetings will be derailed and won’t bring about change. The tone of the committee should be one of top-down management making strategic decisions about data security, and it should be less operational in nature.

The committee should stimulate conversation with each business group while guiding, proposing and advising the company on how to handle data protection as an organization. Members will have to understand the current threat landscape and where the company stands in protecting its data and privacy. From there, they’ll have to understand where the gaps lie within their strategic vision. Once this occurs, they can start putting plans in motion for standards and deliverables for subsequent meetings. By creating a vision for the future and reacting to the gaps that exist in the company today, the data protection committee can start making real progress within the organization.

With this progress, there will also need to be resources, budget and metrics. Proposing a plan for the future might require budget, but many times there are things that can be done without spending a dime. Creating an agenda for each meeting, with the appropriate deliverables to be accomplished, is a helpful way to measure the committee’s progress. Bringing metrics on these deliverables and holding people accountable for their data protection tasks will help drive involvement and participation. Long story short, this data protection committee needs to be made up of people from throughout the business who are looking to the future to protect the security and privacy of the data your organization holds. By using this committee to shine a light on your data protection efforts, you can improve the safety of your data going forward.