Tuesday, August 29, 2017

Gotta Respect the Hacker Hustle

Many times you'll see attackers exploit low-hanging fruit to breach a network, but other times they really have to work to get into a target. That diligence has to be respected. I'm not condoning hacking into an organization for malicious gain, but the skill involved has to be respected. If you can't respect your competition, there's a good chance you'll be beaten by them.

Here's an article I was quoted in regarding the HBO attackers:

Thursday, August 24, 2017

Weighing In on Encryption Backdoors

Here's an article I was quoted in regarding why it's a bad idea to give the government a backdoor into encryption:

Infosecurity Fall 2017 Virtual Conference Agenda

I'm speaking at the Infosecurity Fall 2017 Virtual Conference on September 20th. My session, "All You Need to Know about NYC Cyber Regulations," is a discussion with two other speakers.

New regulations announced this year will ensure that within New York State, there will be ‘minimum security standards’ that financial services firms will be obliged to meet. The intention of these measures is to encourage organizations to keep pace with changes in technology and ensure a cybersecurity program that ‘is adequately funded and staffed’.

In this opening keynote, we will look at the overarching obligations of the NYC Cyber Regulations and evaluate what the minimum standards will be and how businesses will need to adapt to fit into this framework.

  • What exactly are the NYC Cyber Regulations?
  • How can businesses comply and what could the penalties be for non-compliance?
  • Will this spread to other jurisdictions, like DC and Massachusetts, or even California?
  • How does this affect national companies that operate across many different states, New York included?

Sign up for the virtual conference with Infosecurity Magazine here:

Wednesday, August 23, 2017

FEMA Virtual Cybersecurity Tabletop Exercise

Yesterday I took part in a FEMA virtual tabletop exercise with my local county. It was great seeing other counties around the country prepare for cyberthreats against their infrastructure. These tabletop collaborations give local businesses and governments the chance to compare notes on what they've seen, what's worked for them, and sound advice moving forward.

What’s needed for the first NYS DFS cybersecurity transitional phase?

The first transitional phase of the New York State Department of Financial Services (NYS DFS) cybersecurity regulation is upon us. As of August 28, 2017, covered entities are required to be in compliance with the first phase of the 23 NYCRR Part 500 standard.

The NYS DFS was kind enough not to drop the entire regulation on businesses all at once, breaking adherence up into transitional phases. This means organizations have the opportunity to create a phased approach based on these transitional phases and become compliant over the next two years.

With the first phase deadline arriving shortly, covered entities are required to have these particular aspects of the regulation in place during this timeframe.

For the first transitional phase, covered entities that aren't exempt will need to adhere to the following sections within the guidance. Read the rest of my article at HelpNetSecurity here:

Monday, August 21, 2017

Top 10 Security Challenges of 2017

I was quoted in SC Magazine regarding the top 10 security challenges of 2017. To ease the suspense, my top concern was patching. I know it's not sexy, but I'm still very concerned by it based on the patching procedures we've seen this year. Check out the link below for the other controls people are dealing with now.

Friday, August 18, 2017

Using OSINT against Online Child Predators

The Internet is a potentially dangerous place for users. This is especially so for children. Oftentimes, these younger users don’t yet understand that some people harbor bad intentions. They are therefore prime targets of digital predators who would seek to prey upon them online. I'm quoted in this article regarding how to keep children safe online.

Wednesday, August 16, 2017

Can a PCI Internal Security Assessor validate level 1 merchants?

There are differences between Internal Security Assessors and Qualified Security Assessors (QSA), as well as the assessments they're able to validate. With these assessments, there are also particular levels of providers and merchants that require different standards of validation.

Internal Security Assessors are normally employees of the organization being assessed. This closeness to the business can create a better understanding of the processes of the system owners, but when level 1 service providers are involved, there needs to be a third-party perspective.

A service provider is defined as an entity that processes, stores or transmits cardholder data on behalf of another business or organization. Like merchants, there are multiple levels of service providers, and a level 1 merchant requires a Qualified Security Assessor to complete the reports on compliance.

Read more at my article below:

How is the Samba vulnerability different from EternalBlue?

The vulnerability in Samba -- as well as WannaCry ransomware -- shows that every organization needs to apply appropriate patches and enforce configuration management in its systems to defend itself against security risks.

The affected Linux and Windows systems are similar in that both create remote concerns when port 445 is open on the perimeter. Samba is used to enable Linux devices, such as printers, to communicate with Windows systems, and it is a key element in having interoperability between the operating systems.

It's interesting that the Samba vulnerability (CVE-2017-7494) was announced soon after the WannaCry ransomware spread. While neither has anything to do with the other, seeing this vulnerability just cements the urgent need for IT security to move back to the fundamentals.

Both of the vulnerabilities are concerning for remote execution if the systems are exposed to the internet and are unpatched. Also, both of the vulnerabilities require a payload to be dropped in order to achieve their results. In the case of WannaCry, it was EternalBlue that was used to power the malware; in the Samba vulnerability, there was no known malware wrapped around the exploit. Read my article below:
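Since both issues hinge on SMB/Samba being reachable, a quick check for port 445 exposure is a reasonable first triage step. A minimal sketch of such a probe (how you inventory the hosts to check is up to you):

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Attempt a TCP connect; True means the port accepted a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this against your perimeter ranges for port 445; any host that answers deserves an immediate look at its SMB/Samba patch level.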

Could the WannaCry decryptor work on other ransomware strains?

The WannaCry ransomware caused a panic in the security industry, and researchers Benjamin Delpy, Adrien Guinet and Matt Suiche created a decryptor that might be able to retrieve encrypted files being held ransom by WannaCry.

The WannaCry decryptor tools work on the majority of Windows systems affected by the ransomware; this includes Windows XP, Windows 7, Windows 2003 and Windows 2008 systems. The caveat is that the WannaCry decryptor tool requires the infected system to still have, in memory, the associated prime numbers that were used by the malware to create the RSA key pairs to encrypt the data.
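That in-memory requirement is the whole trick: once the primes p and q are recovered, the private exponent can be recomputed directly. A textbook-scale sketch of the arithmetic (toy numbers, nothing like WannaCry's actual key size):

```python
def rebuild_private_key(p, q, e):
    """Given the two RSA primes and the public exponent,
    recompute the modulus and private exponent."""
    n = p * q
    phi = (p - 1) * (q - 1)   # Euler's totient of n
    d = pow(e, -1, phi)       # modular inverse of e (Python 3.8+)
    return n, d

# Toy example: encrypt with the public key, decrypt with the rebuilt one.
n, d = rebuild_private_key(61, 53, 17)
ciphertext = pow(65, 17, n)        # "encrypt" the message 65
plaintext = pow(ciphertext, d, n)  # decrypt with the recovered d
```

This is why a reboot kills the recovery: the primes only exist in the infected process's memory, and once they're gone the private key can't be rebuilt.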

The two tools that can be used to decrypt WannaCry files are WannaKey and WanaKiwi. The WanaKiwi tool took the ideas of the WannaKey decryptor and added documentation and an easier method of deployment. Read my article below:

How are hackers using Unicode domains for spoofing attacks?

Trust is a necessity in cybersecurity, and it's one of the main things attackers continually try to exploit when assaulting networks.

We put a lot of time and defensive effort into verifying that a particular party on the internet is who they say they are, and we do this with good reason. But because of this need for trust, attackers rely on spoofing as a standard method of exploitation. The more an attacker can deceive someone, the higher his probability of success, or cover, while attempting an exploit.

Here is where the recent proof of concept that shows attackers can abuse Unicode domains to look like legitimate sites comes into play. Attackers are able to trick users into clicking on particular links that look like they are from legitimate domains, but that actually lead to malicious sites.

This deception works because many letters look very similar within Unicode domains, especially within Latin and Cyrillic character sets. There is no distinguishable difference between many of these letters to the human eye, but computers treat them differently, and attackers use this to their advantage. Read my article below:
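One practical defense is to render suspicious hostnames in their Punycode (ASCII) form, which makes the substitution visible. A small sketch using Python's built-in IDNA codec; the sample domain swaps in a Cyrillic 'а' for the Latin one:

```python
def flag_homograph(domain):
    """Return the ASCII (Punycode) form of a domain containing
    non-ASCII characters, or None if it's already plain ASCII."""
    if all(ord(ch) < 128 for ch in domain):
        return None
    return domain.encode("idna").decode("ascii")

legit = flag_homograph("apple.com")        # all ASCII -> None
spoof = flag_homograph("\u0430pple.com")   # Cyrillic 'а' -> xn-- form
```

A mail gateway or proxy that logs the `xn--` form next to the display form gives humans a fighting chance to spot the lookalike.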

Did DDoS attacks cause the FCC net neutrality site to go down?

With any DDoS attack, the best way to investigate it is to review the logs. Due to the sensitivity of the information submitted to the Federal Communications Commission (FCC) net neutrality site, and the potential for IP addresses to increase privacy risks for users submitting their opinions, the logs have not been publicly released for review. The FCC's CIO, David Bray, stated that, after reviewing the logs, it was determined that nonhuman bots were creating a large number of comments to the FCC net neutrality site via an API. He also mentioned that the systems creating the large wave of comment traffic weren't part of a botnet of infected systems, but came from a publicly available cloud service.

If this truly was bot traffic pumping large numbers of comments to the FCC's net neutrality site -- possibly for spam-related purposes -- while a large influx of users was attempting to post opinions and comments regarding the net neutrality policy, it's likely that the application reacted in a manner identical to a DDoS attack. We know from the FCC's public comments that the API was hit hard, and it's these application-based resources that can become very expensive when it comes to utilization. Read my article below:
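Whatever the traffic's origin, an API that meters per-client request rates degrades more gracefully under a comment flood. A minimal sliding-window sketch (the limit and window values are arbitrary illustrations):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        while q and now - q[0] > self.window:  # drop expired entries
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

In production this lives at the gateway, keyed by API token or source address, so one noisy client can't starve the application tier.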

How can OSS-Fuzz and other vulnerability scanners help developers?

In December 2016, Google released its project, dubbed OSS-Fuzz, as an open source tool to fuzz applications for security and stability concerns. The tool doesn't scan every piece of open source software; in order to be accepted by OSS-Fuzz, an open source project must have a large following or be considered software that's critical to global infrastructure.

In the past year, the project has scanned 47 applications and has found over 1,000 bugs, with over a quarter of those being security vulnerabilities.

Developers running an open source project should definitely look to integrate into Google's project. The code of the fuzz target, or the code being fuzzed for vulnerabilities, should be part of the project's source code repository.

Developers also need to have seeds so that the fuzzing can be more efficient. Google recommends having a "minimal set of inputs that provides maximal code coverage." Developers also need to be aware of what's being fuzzed in their code, and the coverage of the fuzzers should be reviewed to validate that the application is being tested efficiently. Read the rest of my article at the link below:
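The seed idea is what makes fuzzing efficient: start from valid inputs and mutate. OSS-Fuzz drives coverage-guided engines like libFuzzer, but the core loop can be sketched in a few lines (the `parse` target here is a stand-in with a planted bug, not any real project):

```python
import random

def mutate(data):
    """Flip one random bit of one random byte of a seed input."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    flipped = data[i] ^ (1 << random.randrange(8))
    return data[:i] + bytes([flipped]) + data[i + 1:]

def fuzz(target, seeds, iterations=1000):
    """Run mutated seeds through `target`; collect inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(random.choice(seeds))
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes

# Stand-in target: blows up on any input starting with byte 0xFF.
def parse(data):
    if data[:1] == b"\xff":
        raise ValueError("malformed header")

crashes = fuzz(parse, seeds=[b"\xfe\x00\x01", b"\x00\x00"], iterations=2000)
```

Notice how the seed matters: `b"\xfe..."` is one bit flip away from the crashing input, which is exactly the "minimal inputs, maximal coverage" point Google makes.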

How does the Microsoft Authenticator application affect password use?

Protecting passwords has always been a thorn in the side of security practitioners looking to secure their organizations. The call to kill passwords has been out there for years and, recently, Microsoft took a stab at it by limiting password use with new phone-based sign in available on the Microsoft Authenticator app.

As the iconic comic XKCD says, "Through 20 years of effort, we've successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess." Truer words have never been spoken.

With similar concerns today, the National Institute of Standards and Technology (NIST) came out with new guidance that included making passwords longer, not necessarily more complex, and rotating them only as needed to reduce the risk of forgotten and poorly created passwords. With these changes, people have moved toward two-factor authentication, configured on as many accounts as possible, to strengthen passwords with a second factor, and it's here that Microsoft improves the idea of using a second device for authentication even more.

By downloading the app for either iOS or Android, users logging into Microsoft applications are able to sync their mobile device as a way to authenticate the login request to the particular application. By selecting the type of account being used for logon, the mobile app can be configured to receive a validation each time a user logs into a program that's been configured to use Microsoft Authenticator. Read the rest of my article at the link below:
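Authenticator's push approval is proprietary, but the one-time codes such apps can also generate follow the open HOTP/TOTP standards (RFC 4226 and RFC 6238). A minimal sketch of the code derivation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, step=30, digits=6):
    """RFC 6238: HOTP keyed by the current 30-second time step."""
    return hotp(secret, int(time.time()) // step, digits)
```

Because both sides derive the same code from a shared secret and the clock, the phone can prove possession of the secret without it ever crossing the wire at login time.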

What are the challenges of migrating to HTTPS from HTTP?

The United States Patent and Trademark Office (USPTO) recently had an issue switching from HTTP to HTTPS on its website, and had to temporarily revert back to HTTP during the process.

In June of 2015, the U.S. government mandated that all publicly accessible federal websites provide secure connections to their services to protect data in transit. This is important because all traffic going to these sites and services is being sent in the clear, and has the risk of being eavesdropped on by an attacker.

Migrating to HTTPS has gotten much easier over the past couple of years, but there are still issues and concerns that should be considered when making the move. A few large vendors, like Google, are deprecating HTTP by alerting the user when they try to access an HTTP site in Chrome that may send sensitive data. Google Chrome will eventually show a security warning for all HTTP sites.
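For sites making the move, the server-side cutover is usually a permanent redirect from the HTTP listener to HTTPS. A hedged nginx sketch (the hostname and certificate paths are placeholders; the Let's Encrypt-style paths assume certbot's default layout):

```nginx
server {
    listen 80;
    server_name example.com;
    # Permanently redirect all cleartext traffic to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

The 301 also helps search engines consolidate onto the HTTPS URLs instead of treating the two schemes as duplicate sites.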

In the past, one of the major pain points for organizations moving to SSL was the cost of the certificate, but Let's Encrypt stepped in to issue free certificates for anyone who requested them, which helped push the progress of those looking to make the jump to HTTPS. Read the rest of my article at the link below:

How did Webroot's antivirus signature update create false positives?

Webroot Inc.'s issue happened on April 24 between 1800 and 2100 Coordinated Universal Time, when an update tagged particular Windows OS system files as W32.Trojan.Gen. Once these files were tagged as malicious, they went into quarantine, and the systems were left inoperative.

An antivirus signature update was pushed down from the Webroot cloud service, updating the agents with the false positive and triggering a chain reaction that caused all the Windows systems receiving the update to quarantine the files. It was reported that the antivirus signature update was only active for 13 minutes, but so many managed service providers were utilizing the service and pushing updates to their clients that the issue propagated to additional endpoints.

Shortly after the issue, Webroot started working on ways to remediate the problem, and social media started lighting up with comments and potential workarounds in an attempt to get the files back -- including removing Webroot, restoring the needed files from backup and rebooting. Read the rest of my article at the link below:

Evaluating Public Cloud Storage Providers

A move to the public cloud is a major shift in an organization's architecture, and it provides many computing and performance benefits that aren't available from a locally installed storage network. But before selecting a public cloud storage provider, you must ensure its offerings are a good fit for your organization. Review the cost, architecture and security at my article below:

Comparing the Leading Public Cloud Storage Providers

Amazon, Microsoft and Google dominate the public cloud market when it comes to addressing an organization's budget, security requirements, and infrastructure and business needs.

Other niche and large public cloud service providers -- including IBM, Virtustream, Rackspace and NTT Communications -- provide beneficial services as well, but they don't have the same market share as the big three.

Although Amazon Simple Storage Service (S3), Google Cloud Storage and Microsoft Azure Storage offer similar storage features, there are a number of differentiating factors that companies should consider before selecting a service. Read more about these services at the link below:

A look at the services the leading public cloud vendors provide

All three of the major public cloud vendors provide storage services that can be used by organizations ranging from small- and medium-sized businesses to enterprises. Each vendor's public cloud services are similar in nature, so deciding which one(s) to select can be difficult. Some small, but significant, differences between each service can help businesses decide. Read my article below to get the details on what solution is best for you.

Update - Using AWS Organizations to Secure Your Cloud Accounts

AWS Organizations was designed to allow cloud administrators working in Amazon Web Services (AWS) to manage accounts more securely and efficiently. Essentially, AWS Organizations creates custom policies that can be applied to users/groups to manage security, create better automation and simplify billing. There is some overlap with AWS IAM (Identity and Access Management) services, but it is more complementary, and it builds off of the IAM policies already in place.

AWS Organizations allows the management of accounts under a new entity. This entity is built into a hierarchy, and the policies and organizational units can be built within each other for management. I couldn't help but think of Microsoft's Active Directory when looking at it the first time, but that goes for anything with organizational units (OU) and hierarchy. Each particular OU can have policy applied to it, and the user/group will inherit the policy of the OU in which they reside. This also means that each user/group can only be in one OU at a time, but can have multiple policies applied to it since the OUs can be nested. The groups can be created by region, user, group or other elements.

With both AWS IAM and AWS Organizations, there can be a little overlap, but you can think of Organizations as a way of containing the rights of users. The IAM policies can still be created and even pushed through Organizations, but it's the guardrail to determine least privilege. If users go against these policies, it's possible to contain or restrict them with blacklist or whitelist policies. This helps to keep the permissions and security of users to what's deemed necessary by hierarchical policy enforcement. AWS Organizations is the framework that IAM policies can use to tighten security. Read the rest of my article at the link below:
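The "guardrail" idea is concrete in service control policies: an SCP attached to an OU caps what member accounts can do regardless of their IAM policies. A hedged sketch of a blacklist-style SCP (the denied actions are illustrative choices, not a recommended baseline):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrgAndDisablingLogs",
      "Effect": "Deny",
      "Action": [
        "organizations:LeaveOrganization",
        "cloudtrail:StopLogging"
      ],
      "Resource": "*"
    }
  ]
}
```

Even an account administrator with full IAM rights inside the account can't perform an action an attached SCP denies, which is exactly the hierarchical enforcement described above.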

Public Cloud Storage Offering Scalability and Performance

Using public cloud storage services lets organizations offload management tasks and the costs associated with supporting physical hardware to an external provider. An organization's data is stored in the provider's data center and the provider manages and maintains all facets of the data center, including power, cooling and server maintenance. As a result, organizations don't have to worry about archive planning, implementing security practices or conducting resource planning for future data growth.

Public cloud storage services are also cost-effective; organizations pay only for the resources they use. Public cloud storage provides a scalable and agile environment for businesses to increase or decrease storage on demand.

Organizations use the public cloud to store both structured and unstructured data. Many applications that have made their way to the cloud -- such as those that use back-end databases or structured data -- handle data from applications that tie directly into cloud database services. This type of cloud storage environment is appealing to companies that are either just starting out and don't want to purchase hardware or that are looking for scalable storage that doesn't require a large capital expenditure. Read the rest of my article at the link below:

What's the difference between software containers and sandboxing?

There are a few things to understand upfront about the differences between sandboxing and software containers (sometimes called "jails") before you decide which one to implement. The answer is often a combination of both, but many organizations might not have the budget or the expertise for that. Hopefully, understanding how they're used will allow enterprises to make an educated decision moving forward.

Sandboxes became a big hit a few years back, after we realized malware was still making its way past antivirus software and infecting our networks. The issue with antivirus is that all systems need signature-based agents installed on the machines, and they have to be updated to at least give the endpoint a fighting chance against malware. Since antivirus wasn't catching everything -- even when it was fully updated and installed on workstations -- the use of sandboxing grew.

Sandboxing relies on multiple virtual machines (VMs) to catch traffic as it ingresses/egresses in the network, and it is used as a choke point for malicious activity detection. The goal of sandboxing is to take unknown files and detonate them within one of the VMs to determine if the file is safe for installation. Since there are multiple evasion techniques, this doesn't always make for a foolproof solution; it's just an extra layer of defense. Read the rest of my article at the link below:

How can enterprises leverage Google Project Wycheproof?

The name Wycheproof was chosen because it's the smallest mountain in the world, standing at a whopping 486 feet above sea level. The reason this particularly unimpressive mountain serves as the namesake of the project is that the tool is only the beginning.

The authors of this tool said they wanted to "create something with an achievable goal," and allowing others to use these tools without "digesting decades worth of academic literature" could lead to increased adoption of this tool and improved security across the internet.

If you're developing applications and using cryptographic libraries, this tool could be something to keep in your toolbox for further investigation and implementation. Project Wycheproof searches for 80 test cases in crypto libraries, and it has already found 40 security bugs that are currently being worked on.
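The style of check Wycheproof automates is essentially a battery of known-answer and edge-case tests run against a crypto library. A toy illustration of the pattern using a published SHA-256 vector (Wycheproof's real suites target signatures, key exchange and the like, not hashes alone):

```python
import hashlib

# Known-answer vector: SHA-256("abc") from the FIPS 180 test data.
KNOWN_VECTORS = [
    (b"abc",
     "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"),
]

def run_known_answer_tests(digest_fn, vectors):
    """Return the list of inputs whose computed digest mismatches."""
    failures = []
    for message, expected in vectors:
        if digest_fn(message).hexdigest() != expected:
            failures.append(message)
    return failures

failures = run_known_answer_tests(hashlib.sha256, KNOWN_VECTORS)
```

The value of a suite like this is regression coverage: every time the library updates, the same battery runs, so a subtle break surfaces immediately.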

When using tools like Project Wycheproof, or performing security research, you must always attempt to notify the vendor or organization that's responsible for the vulnerabilities discovered. Are there flaws or danger when exposing these faults in the wild? Absolutely. The issue then becomes how to notify others of these vulnerabilities after they're found in crypto libraries. Read the rest of my article at the link below:

Basic steps to improve network device security

Routers are gateways to networks, and often, they're the first devices compromised when an attacker enters your network. Because of this, a router should be as hardened as possible before it's put on your network. With this in mind, there are a few areas we can focus on to improve network device security.

The first step is to place these systems squarely within the network vulnerability management process. This includes running authorized scans of the routers with an account that's able to access the system and determining what risks are present within the router. These risks could be out-of-date patches, running insecure protocols, being versions behind on images and so on. Getting a solid risk assessment of your routers on a scheduled basis can help you to get a foothold on where your risks are and what needs to change, all while being tracked as metrics.
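Scan output only helps if the findings map to actions, so it's worth classifying what the scanner reports. A small sketch that buckets common router findings by service (the service-to-risk mapping here is my own illustrative choice, not a standard):

```python
# Illustrative mapping of discovered services to risk notes.
RISKY_SERVICES = {
    23:  "telnet: cleartext management -- disable, use SSH",
    80:  "http admin: cleartext management -- prefer HTTPS",
    161: "snmp: check for default 'public'/'private' community strings",
}

def classify_router_findings(open_ports):
    """Map a scanner's open-port list to prioritized risk notes."""
    return {port: RISKY_SERVICES[port]
            for port in open_ports if port in RISKY_SERVICES}

findings = classify_router_findings([22, 23, 161, 443])
```

Feeding a table like this from each scheduled scan gives you exactly the trackable metrics mentioned above: counts per risk category, per device, over time.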

Along the same lines, there are tools that can connect to network equipment and review router configurations and rule sets for security and compliance checks. This is a higher level of network device security than vulnerability management, since it reviews the rule set of the device and makes recommendations based on best practices. It's something to strive for, but verifying that the routers are free from vulnerabilities should be the first priority. Read the rest of my article at the link below:

Cisco CloudCenter Orchestrator Vulnerability

This vulnerability gives an unauthenticated, remote attacker the ability to install Docker containers to the system, and could potentially allow him to attain escalated privileges, such as root. This was made possible by a misconfiguration that makes the Docker management port accessible to attackers, and allows them to submit Docker containers to the Cisco CloudCenter Orchestrator without an administrator's knowledge.

Docker is open source software that allows you to run multiple instances of an application on virtualized hardware, with the flexibility to have these containers moved into cloud platforms for high portability. These containers are typically more lightweight than a usual virtual machine, and will run under a host that's sharing similar libraries. The applications running in these containers can quickly be spun up or ported to hosts that support them. The concern with the recently disclosed vulnerability from Cisco means there could be additional containers or applications running in your CloudCenter Orchestrator that weren't configured by you, and which are being used for malicious purposes. Read the rest of my article at the below link:
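The underlying misconfiguration class -- a Docker management port reachable without TLS -- can also be caught in the daemon configuration itself. A hedged sketch that flags insecure `hosts` entries in a daemon.json-style dict (the field names follow Docker's documented daemon config, but treat the check as illustrative):

```python
def insecure_docker_endpoints(daemon_config):
    """Flag tcp:// listeners when TLS verification isn't enabled."""
    if daemon_config.get("tlsverify"):
        return []
    return [host for host in daemon_config.get("hosts", [])
            if host.startswith("tcp://")]

# Example daemon.json contents: a management port open to the world, no TLS.
bad = insecure_docker_endpoints(
    {"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]})
```

Anyone who can reach an endpoint like that can submit containers, which is precisely the kind of unauthenticated access the CloudCenter Orchestrator advisory describes.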

What should enterprises know about how a stored XSS works?

Cross-site scripting, or XSS, is a web application attack that attempts to inject malicious code into a vulnerable application. The application isn't at risk during this attack; XSS' main purpose is to exploit the account or user attempting to use the application.

There are a few different types of XSS -- such as stored, reflective and others -- but in this article, we'll briefly go over the stored version of the exploit, which recently affected VMware's ESXi hypervisor.

Stored XSS is also called persistent XSS, because the attacker aims to make the exploit a permanent part of an application, instead of relying on a reflected XSS attack, where the user might have to click on a crafted link to trigger the vulnerable app.

In this case, a persistent XSS exploit means the application can be modified to allow software -- such as a web browser -- to automatically load the exploit without user interaction. The stored XSS becomes part of the app, and will load each time a user interacts with the application. This type of cross-site scripting is not as common as reflective XSS, but it's definitely higher risk. If an attacker is able to find a high-value application with a high hit rate, they can do a lot of damage. Read the rest of my article at the link below:
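The standard defense against stored XSS is output encoding: escape user-supplied content at render time so a planted script is displayed as text instead of executed. A minimal sketch with Python's stdlib (the template markup is a stand-in for whatever your framework renders):

```python
from html import escape

def render_comment(comment):
    """Escape user content before embedding it in a page template."""
    return f"<div class='comment'>{escape(comment)}</div>"

payload = "<script>alert('stored XSS')</script>"
safe_html = render_comment(payload)  # tags arrive as &lt;script&gt;...
```

Most modern template engines do this automatically; the bugs show up where developers bypass the escaping for "trusted" stored content.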

What to consider about signatureless malware detection

Antivirus isn't dead, it's just changing. People have been calling for the death of antivirus for years, but in reality, it isn't possible. There will always be a need for endpoint protection, no matter what anyone says. No one in their right mind is going to leave an endpoint purposely unprotected, so calling for the death of antivirus is a little premature. The people who call for the execution of antivirus are likely fed up with how it works, and put too much faith in its ability to catch every malware sample.

Using signature-based antimalware means you're always one step behind attackers, and for those not using a defense-in-depth approach, this reliance on endpoint protection can cause a false sense of security.

This backlash against the old method has caused many companies, both vendors and customers, to move toward more of a signatureless malware detection model. Read the rest of my article at the link below:

What's the best corporate email security policy for erroneous emails?

The issue here is that Uber doesn't verify email addresses, and these erroneous emails were being sent directly to a different user who was able to view private information on the real customer.

With that being said, if there are multiple emails incoming to an organization regarding accidental sign-ups or verification, it is an enterprise's right to block these incoming messages without question.

Unlike personal email, which the user has control over, corporate email security is the responsibility of the enterprise for which the employee works. This account, the emails and everything associated with it, are property of said organization.

If there is ever an issue with emails accidentally being sent to the company and affecting it adversely, the company has the right to block these emails in its mail gateways or spam and phishing filters as part of the corporate email security policy. Read the rest of my article at the link below:

How identity governance and access management systems differ

Access management is the process by which a company identifies, tracks and controls which users have access to a particular system or application. Access management systems are more concerned with the onboarding of users, creating profiles, and controlling and streamlining the manual effort of granting users the proper access and roles. Having a process and the due diligence in place to create the roles, groups and permissions first is necessary with access management. Access management systems rely on a framework that defines which users have which rights and how that's accomplished.

This is somewhat different from identity governance, in which administrators are more concerned about giving users new access to roles and alerting the security team to attempts by unauthorized users to access resources.

Identity governance relies on policies to determine if updated access is too risky for a particular user based on his previous access and behavior. These governance policies can be put into an automated workflow when a change is deemed a risk, and allows the owners of the application or the data to sign off on the update. This fixes the issue of having to recertify users annually, and takes more of an incremental approach to auditing access. Read the rest of my article at the link below:
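That incremental, risk-gated approach can be sketched as a simple policy check: routine changes are auto-approved, while risky ones are routed to the data owner for sign-off. The routing rule below is purely illustrative:

```python
def route_access_change(user_roles, requested_role, sensitive_roles):
    """Auto-approve routine changes; route risky ones to the owner."""
    if requested_role in sensitive_roles:
        return "owner_signoff"   # workflow: owner must approve
    if requested_role in user_roles:
        return "no_change"       # user already holds the role
    return "auto_approve"

decision = route_access_change(
    user_roles={"reader"},
    requested_role="payroll_admin",
    sensitive_roles={"payroll_admin", "db_admin"})
```

Real governance products score risk from prior access and behavior rather than a static set, but the workflow shape -- evaluate at change time, escalate only the risky cases -- is the same.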

Are you still using SMBv1 on your system?

The Server Message Block, or SMB, protocol is a file sharing protocol that allows operating systems and applications to read and write data to a system. It also allows a system to request services from a server.

The latest versions of the Windows operating system support SMB v2 and SMB v3, and Microsoft is attempting to deprecate the use of SMB v1 within its software.

There have been numerous vulnerabilities tied to the use of Windows SMB v1, including remote code execution and denial-of-service exploits. These two vulnerabilities can leave a system crippled, or allow attackers to compromise a system using this vulnerable protocol.

Throughout the years, Microsoft has patched its operating system for similar vulnerabilities in Windows SMB v1, and has introduced new versions of the protocol to eliminate the use of this first version of SMB. Read the rest of my article at the link below:
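One way to find lingering SMBv1 on a network is to offer a server only the old dialect and see whether it negotiates. Building that probe starts with the SMB1 negotiate request; a sketch of its byte layout (offsets follow the SMB1 wire format, but treat this as illustrative -- actually sending probes on a production network needs authorization and care):

```python
import struct

def smb1_negotiate_request(dialect=b"NT LM 0.12"):
    """Sketch of an SMB_COM_NEGOTIATE request offering a single SMB1
    dialect, wrapped in a NetBIOS session message header."""
    smb_header = (
        b"\xffSMB"        # SMB1 protocol magic
        + bytes([0x72])   # command: SMB_COM_NEGOTIATE
        + b"\x00" * 27    # status/flags/tid/pid/uid/mid fields (zeroed)
    )
    dialect_entry = b"\x02" + dialect + b"\x00"  # buffer-format byte + name
    # Word count 0, then little-endian byte count, then the dialect list.
    body = b"\x00" + struct.pack("<H", len(dialect_entry)) + dialect_entry
    smb = smb_header + body
    netbios = b"\x00" + len(smb).to_bytes(3, "big")  # session msg + length
    return netbios + smb

packet = smb1_negotiate_request()
```

A host that answers this with a successful negotiate response is still speaking SMBv1 and should go on the remediation list.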

Universal Second Factor Device for Facebook

Facebook recently introduced the ability to use what they call a Facebook Security Key as a second factor of authentication to its site. In order to use this feature within Facebook, the user needs to own a universal second factor device, or U2F security key, to enable login approvals through the security section of their profile.

The universal second factor standard was created by Google and Yubico, and uses the FIDO protocol with standard public key cryptography to provide a secure second form of authentication.

A U2F security key is registered with a service, like Facebook, by approving it during the registration process. This is done by pressing the button on the universal second factor device when prompted, which starts the process of creating the second factor. This approval creates a key pair, in which the public key is sent to the online service and linked to the particular user's account. The private key is kept locally on the universal second factor device, and is never sent to the provider. This registration process creates the key pair for the second factor of authentication that is used each time during login going forward. Read the rest of my article at the link below:

What to Know About IAM In the Cloud Before Implementation

Identity management is a complex topic that has been making its way into the cloud and enhancing the prospect for companies to federate and engage with previously unavailable identity services.

By embracing security services in the cloud, or security as a service (SECaaS), enterprises are able to streamline and take advantage of more flexible services that they might have been struggling to maintain on premises or for which they weren't staffed.

One of the more popular SECaaS applications is identity and access management. This service can either be fully maintained within a cloud platform or it can work with systems at a customer's site in a hybrid model.

Both identity management -- the ability to create, modify and delete an identity -- and access management -- the authorization of that identity for only the proper resources -- are extremely necessary in today's environment. Having the capability to create roles with the proper access to resources, while keeping security in mind, is of the utmost importance to an organization utilizing the cloud. Read the rest of my article at the link below:
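The split between the two halves -- identity management (create, modify, delete) and access management (authorizing an identity for only the proper resources) -- can be sketched as below. Names and roles are illustrative, not any particular cloud provider's API:

```python
class IAMService:
    def __init__(self):
        self.identities = {}   # user -> set of roles
        self.role_grants = {}  # role -> set of resources it may access

    # --- identity management ---
    def create_identity(self, user):
        self.identities[user] = set()
    def modify_identity(self, user, roles):
        self.identities[user] = set(roles)
    def delete_identity(self, user):
        self.identities.pop(user, None)

    # --- access management ---
    def define_role(self, role, resources):
        self.role_grants[role] = set(resources)
    def is_authorized(self, user, resource) -> bool:
        roles = self.identities.get(user, set())
        return any(resource in self.role_grants.get(r, set()) for r in roles)

iam = IAMService()
iam.define_role("billing-admin", {"invoices", "payment-gateway"})
iam.create_identity("dana")
iam.modify_identity("dana", {"billing-admin"})
print(iam.is_authorized("dana", "invoices"))    # True
print(iam.is_authorized("dana", "hr-records"))  # False -- not in the role
iam.delete_identity("dana")
print(iam.is_authorized("dana", "invoices"))    # False -- identity removed
```

In a hybrid model, the identity store might stay on premises while the cloud service handles the authorization checks.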

AirWatch Agent Vulns in Inbox

AirWatch is mobile device management software that protects against compromised -- or rooted -- mobile devices, and allows security settings, email and other functions to be applied to a phone for defense against attackers.

In this case, two vulnerabilities allowed attackers to root devices without the AirWatch software noticing. Normally, when AirWatch software is installed on a device, it checks whether the device is already rooted. A policy can be created on the agent console that informs the agent what to do if this is found during enrollment. Typically, the policy is configured to have an installation of the AirWatch Agent decline the install if it's being attempted on a rooted device.
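A hedged sketch of the kind of root check an MDM agent runs at enrollment, and of the decline-on-rooted policy described above. The su paths are common Android indicators; the list, function names and policy flag are all illustrative, not AirWatch's actual implementation:

```python
import os

# Common locations of the su binary on rooted Android devices (illustrative).
SU_PATHS = ["/system/bin/su", "/system/xbin/su", "/sbin/su"]

def looks_rooted(exists=os.path.exists) -> bool:
    """Heuristic root check: any su binary present?"""
    return any(exists(p) for p in SU_PATHS)

def enrollment_decision(policy_block_rooted: bool,
                        exists=os.path.exists) -> str:
    """Apply the console policy: decline enrollment on a rooted device."""
    if policy_block_rooted and looks_rooted(exists):
        return "decline"   # agent refuses to enroll the device
    return "enroll"

# Simulate a rooted device by stubbing the filesystem check.
print(enrollment_decision(True, exists=lambda p: p == "/system/bin/su"))
```

The two vulnerabilities in question let attackers root a device without tripping exactly this kind of check.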

AirWatch also has apps that can be installed within its suite of products, and one of these apps, the AirWatch Inbox -- a containerized email client that's supposed to provide separation from the data within it and the rest of the device -- was also found to be vulnerable. Read the rest of my article at the link below:

Google Cloud KMS Security Benefits

Google Cloud Key Management Service (KMS) allows its customers to create, use, rotate and destroy encryption keys in the cloud. Customers can create symmetric AES-256 keys directly in the Google Cloud KMS.
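A minimal model of that create/rotate/destroy lifecycle: a named key holds versioned key material, rotation mints a new primary version, and old versions can be destroyed once nothing depends on them. This mimics the concepts only -- it is not the Google Cloud KMS client API, and the resource name is a made-up placeholder:

```python
import secrets

class ManagedKey:
    """Illustrative versioned-key container, KMS-style."""
    def __init__(self, name):
        self.name = name
        self.versions = {}   # version number -> 256-bit key material
        self.primary = 0
        self.rotate()        # creating the key mints version 1

    def rotate(self):
        """Mint a new primary version; old versions stay for decryption."""
        self.primary += 1
        self.versions[self.primary] = secrets.token_bytes(32)  # AES-256-sized

    def destroy_version(self, version):
        """Remove old key material once it's no longer needed."""
        self.versions.pop(version, None)

key = ManagedKey("example-project/app-data-key")  # placeholder name
v1 = key.primary
key.rotate()                # new data is encrypted under version 2
key.destroy_version(v1)     # retire version 1's material
print(key.primary)          # 2
```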

This is important for Google -- the company recently made a big push to vie for enterprise customers -- because Amazon Web Services (AWS) and Microsoft Azure have had this capability for some time. The addition of Google Cloud KMS shows that Google is maturing into a real contender within the enterprise cloud space. A cloud-based KMS that stores Google-created symmetric keys makes implementing encryption in the cloud simple and acceptable for enterprises.

One of the main challenges with key management systems is handling the keys when there are complicated systems and in-house expertise already at play. With Google's Cloud KMS, there is no longer the need to have an on-premises system, like a hardware security module, and lack of scalability is no longer a concern. Read the rest of my article at the link below:

How Companies Should Prepare for EU GDPR Compliance

Beginning in May 2018, all businesses handling the data of EU residents will have to abide by the EU GDPR. Companies that don't abide by the rules defined by the GDPR can be fined up to 20 million euros or 4% of their annual global turnover, whichever is greater.
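The upper tier of GDPR fines is "whichever is greater" of the two figures -- a flat 20 million euro floor or 4% of annual worldwide turnover -- which a one-line calculation makes clear (the turnover figures below are hypothetical):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine: EUR 20M or 4% of annual turnover,
    whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a 300M euro business, 4% is 12M, so the 20M floor applies.
print(max_gdpr_fine(300_000_000))    # 20000000
# For a 2B euro business, 4% (80M) exceeds the floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```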

With this regulation, Microsoft has taken steps to protect the data it holds in the cloud before the GDPR goes into effect. Microsoft is one of the largest cloud service providers in the world, and will need to comply with the more stringent regulations being imposed by the EU data directive to continue doing business under GDPR.

Under this regulation, the EU can validate how companies collect, process or store data on any EU resident, and enterprises must comply with its directives on securing EU user privacy. The law pushes companies outside the EU to comply with these rules if they want to continue doing business with EU residents. This may be a challenge for global e-commerce retailers that weren't following these directives completely in the past. Read the rest of my article at the link below:

Why You Need Separate Administrator Accounts

Creating a policy for separate administrator accounts isn't something unusual; it's actually becoming a standard. Creating a separate account for administrators allows for the proper separation of duties and security on accounts that have full access to systems and data.

This can become an issue within a larger company -- not because it can't be done, but because it's going to take more work in order to complete. A lot of this work might also change the mindset of the administrators and management.

I've seen large companies create separate accounts for administrators in a few ways. One approach was creating read-only accounts for reviewing system configurations and access, so administrators didn't accidentally create an issue within a system when reviewing an application. On the flip side, I've also seen accounts created that were used only when the administrator was about to make a change to a system, or needed to elevate his rights. Read more of my article at the below link:
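The two-account pattern above boils down to a simple rule: the day-to-day account can read, and only the separate elevated account can change things. A sketch, with illustrative account names and roles:

```python
# Map accounts to roles: a daily read-only review account and a
# separate elevated account used only when making changes.
ACCOUNT_ROLES = {
    "jsmith":       "read-only",  # daily driver for reviewing configs
    "jsmith-admin": "admin",      # used only to make changes
}

def authorize(account: str, action: str) -> bool:
    role = ACCOUNT_ROLES.get(account)
    if action == "read":
        return role in ("read-only", "admin")
    if action == "change":
        return role == "admin"    # changes require the elevated account
    return False

print(authorize("jsmith", "change"))        # False -- review account can't modify
print(authorize("jsmith-admin", "change"))  # True
```

The point of the separation is that a compromised or misused daily account can't modify systems, and every change is tied to a deliberate elevation.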

How a Slack Vuln Exposed User Auth Tokens

Frans Rosen, a security researcher at web security company Detectify, discovered a Slack vulnerability that essentially enabled attackers to gain access to another Slack user's chats, messages, file content and more. The vulnerability would have enabled an attacker to gain complete access to a user's account by luring them to a malicious page that redirected the Slack WebSocket to the malicious site, stealing the user's session token in the process.

Rosen originally found this Slack vulnerability on the browser version of the application. He submitted the bug to Slack, and it was fixed within five hours. Slack's bug bounty program paid him $3,000 for the vulnerability submission.

The major reason this Slack vulnerability could have been successfully exploited was due to the fact that the application wasn't properly checking messages when using cross-origin communication. With this flaw in place, an attacker could create a malicious link that abused this trust, and directed the user to a page of the attacker's choosing. This site would then be configured to steal the authentication token from the user who assumed they were logging into Slack. The proof-of-concept attack also abused the postMessage function and the WebSocket protocol on which the application relies for communication. Read more of my article at the link below:
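The missing check lived in browser JavaScript, but the rule that was absent is language-agnostic: only accept cross-origin messages whose origin is on an explicit allowlist, compared exactly rather than by substring. A hedged Python sketch (the trusted origins are illustrative):

```python
from urllib.parse import urlparse

# Exact origins we trust messages from (illustrative list).
TRUSTED_ORIGINS = {"https://slack.com", "https://app.slack.com"}

def accept_message(origin: str) -> bool:
    """Exact-match origin check for a postMessage-style handler."""
    parsed = urlparse(origin)
    # Rebuild scheme://host so suffix tricks like
    # https://slack.com.evil.example can't pass a substring check.
    normalized = f"{parsed.scheme}://{parsed.hostname}"
    return normalized in TRUSTED_ORIGINS

print(accept_message("https://app.slack.com"))           # True
print(accept_message("https://slack.com.evil.example"))  # False
print(accept_message("http://slack.com"))                # False -- wrong scheme
```

Without a check like this, any page a user visits can post messages into the handler, which is exactly the trust the proof of concept abused.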

MongoDB Security Issues and How to Resolve Them

Recently, there was a surge of attacks looking for misconfigured installations of MongoDB on the internet. The attackers were abusing the lack of authentication and remote accessibility to these MongoDB instances by deleting an original database and holding a copy of it for ransom.

These and other MongoDB security misconfigurations and vulnerabilities aren't completely related to patch management, and are more in the realm of configuration management. There are a few ways to improve MongoDB security and protect your database from attackers.

The major issue here lies with certain versions of MongoDB coming with loose default configurations. The responsibility in this case lies firmly with the administrators installing the database software and not managing it appropriately. Read more of my article at the link below:
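Two configuration changes close off exactly the exposure described above: bind the listener to a private interface instead of all interfaces, and require authentication. A minimal mongod.conf fragment in the YAML format MongoDB uses:

```yaml
# /etc/mongod.conf -- restrict network exposure and require auth
net:
  bindIp: 127.0.0.1        # listen only on localhost (or an internal IP)
  port: 27017
security:
  authorization: enabled   # clients must authenticate before accessing data
```

With `authorization` enabled, remember to create an administrative user first, or you'll lock yourself out along with the attackers.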

How can the latest LastPass vulns be mitigated?

Tavis Ormandy, a Google Project Zero researcher, has been a thorn in the side of LastPass for the past year. In 2016, he found multiple vulnerabilities in its software, and in March 2017, he discovered multiple new exploits in the LastPass password management tool that enabled password theft and remote code execution.

LastPass is a password manager that generates random passwords, auto-fills credentials into applications and websites when possible, and maintains a digital wallet of credentials. The tool improves credential hygiene and discourages password reuse. Because of this, LastPass has become a target, and Ormandy's findings have helped the company improve its security by remediating the LastPass vulnerabilities before they could be exploited in the wild. Read more of my article at the link below:

Should the Vulnerabilities Equities Process be Codified Into Law?

The Vulnerabilities Equities Process was created to guide government agencies through the decision-making process of releasing or withholding vulnerabilities they've discovered. This answer isn't as black and white as it sounds, and it's a complex issue that can be polarizing for those who deal with the issue directly. The call to codify the VEP, or to formalize the process into law, has both pros and cons. Read more of my article at the link below:

Using a SOC 2 Report to Evaluate Cloud Providers

There are a few tools that can be used when assessing a cloud service provider, and a SOC 2 report is one of them. If a cloud provider or vendor has a SOC 2 report available, it can be extremely useful to understand the company's controls when it comes to security, availability, processing, integrity, confidentiality and privacy. If the third party cannot provide a SOC 2 report, it's possible that they haven't had an assessment performed, or that they're not willing to disclose this data.
It's always best to receive a SOC 2 Type 2 report, but many vendors might send over a SOC 3 to prove that an assessment has been completed. The SOC 2 Type 2 report not only reviews the controls in question, but also details their operating effectiveness over the review period. If possible, try to get a SOC 2 Type 2 from the vendor as a first step. Read more at the link below:

Domain Validation Certificates: What Are the Security Implications?

Let's Encrypt is a free and open certificate authority that enables those who might not be able to afford or configure HTTPS on their web servers to protect their sites.
Using tools in partnership with Let's Encrypt, such as the Electronic Frontier Foundation's Certbot, enables website administrators to freely enable TLS on their sites, and to even automate security functions within cipher suites and other encryption features.
The major goal of Let's Encrypt is to create a secure internet, with all sessions encrypted in transit. Let's Encrypt has major sponsors assisting its community -- including Mozilla, Cisco, Electronic Frontier Foundation, Google, Facebook and others -- that have offered their support for the service. Read more at the link below:

Patching telecom infrastructure can become a challenge

As with many priority systems, patching can become an arduous and even political battle within an enterprise. These priority systems can be deemed so critical by the organization that patching them is viewed as a risk to the business, which is counterintuitive from a security standpoint. This is normally the case when these systems run on outdated or legacy operating systems, where installing patches would void a support agreement, or where the organization doesn't have the funds or architecture to test the patches' functionality in a QA environment. Read more at the link below:

How does a privacy impact assessment affect enterprise security?

A privacy impact assessment is a review of how an organization handles the sensitive or personal data flowing through its systems. Through this review, the organization -- or potentially a hired third party -- will review internal corporate processes, procedures and even technology to determine how privacy data on users or customers is being stored, collected and processed. This is commonly seen within government agencies, and sometimes within organizations storing large amounts of private data on their users or customers, as in healthcare, e-commerce and other industries. Read more at the link below: