Friday, August 18, 2017

Using OSINT against Online Child Predators

The internet can be a dangerous place for any user, and especially for children. Younger users often don't yet understand that some people harbor bad intentions, which makes them prime targets for digital predators seeking to prey upon them online. I'm quoted in this article on how to keep children safe online.

Wednesday, August 16, 2017

Can a PCI Internal Security Assessor validate level 1 merchants?

There are differences between Internal Security Assessors and Qualified Security Assessors (QSA), as well as the assessments they're able to validate. With these assessments, there are also particular levels of providers and merchants that require different standards of validation.

Internal Security Assessors are normally employees of the organization being assessed. This closeness to the business can create a better understanding of the processes of the system owners, but when level 1 service providers are involved, there needs to be a third-party perspective.

A service provider is defined as an entity that processes, stores or transmits cardholder data on behalf of another business or organization. Like merchants, service providers are tiered into levels, and level 1 merchants require a Qualified Security Assessor to complete their Reports on Compliance.

Read more at my article below:

How is the Samba vulnerability different from EternalBlue?

The vulnerability in Samba -- as well as WannaCry ransomware -- shows that every organization needs to apply appropriate patches and enforce configuration management in its systems to defend itself against security risks.

The Linux and Windows vulnerabilities are similar in that both create remote exposure when port 445 is open at the perimeter. Samba is used to enable Linux devices, such as printers, to communicate with Windows systems, and it is a key element of interoperability between the two operating systems.
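As a quick illustration of the perimeter check described above, here is a minimal sketch (not from the article) of testing whether TCP port 445 is reachable on a host; the example address in the comment is hypothetical.

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the SMB port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a host on the local network (hypothetical address)
# if smb_port_open("192.168.1.50"):
#     print("Port 445 is reachable -- verify it is not exposed at the perimeter")
```

A scan like this from outside the network boundary is one simple way to confirm that port 445 is not exposed to the internet.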

It's interesting that the Samba vulnerability (CVE-2017-7494) was announced soon after the WannaCry ransomware spread. While the two are unrelated, seeing this vulnerability only cements the urgent need for IT security to get back to the fundamentals.

Both vulnerabilities allow remote execution if the affected systems are exposed to the internet and left unpatched, and both require a payload to be dropped in order to achieve their results. In the case of WannaCry, the EternalBlue exploit was used to spread the malware; for the Samba vulnerability, no known malware has been wrapped around the exploit. Read my article below:

Could the WannaCry decryptor work on other ransomware strains?

The WannaCry ransomware caused a panic in the security industry, and researchers Benjamin Delpy, Adrien Guinet and Matt Suiche created a decryptor that might be able to retrieve encrypted files being held ransom by WannaCry.

The WannaCry decryptor tools work on the majority of Windows systems affected by the ransomware, including Windows XP, Windows 7, Windows Server 2003 and Windows Server 2008. The caveat is that the decryptor requires the infected system to still hold, in memory, the prime numbers the malware used to create the RSA key pair that encrypts the data.
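To see why the primes matter, here is an illustrative sketch (not the decryptors' actual code): once the two primes behind the RSA key pair are recovered from memory, the private key can be rebuilt with basic number theory.

```python
def rebuild_rsa_private_key(p: int, q: int, e: int = 65537):
    """Rebuild (n, d) from recovered primes; d is the private exponent."""
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    d = pow(e, -1, phi)              # modular inverse of e mod phi(n)
    return n, d

# Toy demonstration with small primes (real keys use much larger primes):
n, d = rebuild_rsa_private_key(61, 53)
message = 42
ciphertext = pow(message, 65537, n)  # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message  # decrypt with the rebuilt private key
```

This is why the tools only work while the infected process's memory is intact: once the primes are gone, reconstructing the private key this way is no longer possible.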

The two tools that can be used to decrypt WannaCry files are WannaKey and WanaKiwi. The WanaKiwi tool took the ideas of the WannaKey decryptor and added documentation and an easier method of deployment. Read my article below:

How are hackers using Unicode domains for spoofing attacks?

Trust is a necessity in cybersecurity, and that's exactly why attackers continually try to exploit it when assaulting networks.

We put a lot of time and defensive effort into verifying that a particular party on the internet is who they say they are, and we do this with good reason. But because of this need for trust, attackers rely on spoofing as a standard method of exploitation. The more an attacker can deceive someone, the higher their probability of success, or of cover, while attempting an exploit.

This is where the recent proof of concept comes into play: it shows that attackers can abuse Unicode domains to look like legitimate sites. Attackers are able to trick users into clicking links that appear to come from legitimate domains, but actually lead to malicious sites.

This deception works because many letters look very similar within Unicode domains, especially across the Latin and Cyrillic character sets. Many of these letters are indistinguishable to the human eye, but computers treat them as different characters, and attackers use this to their advantage. Read my article below:
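A short sketch makes the trick concrete: the domain below swaps the Latin "a" for the visually near-identical Cyrillic "а" (U+0430), so the two strings render alike but are different to a computer. This is illustration only, using Python's built-in IDNA codec.

```python
import unicodedata

# Lookalike domain: first character is CYRILLIC SMALL LETTER A (U+0430),
# not the Latin "a" -- visually near-identical, different code point.
spoofed = "\u0430pple.com"
genuine = "apple.com"

assert spoofed != genuine  # the strings differ even though they render alike
print(unicodedata.name(spoofed[0]))  # CYRILLIC SMALL LETTER A
print(unicodedata.name(genuine[0]))  # LATIN SMALL LETTER A

# Browsers that display the Punycode (ASCII) form make the deception visible:
print(spoofed.encode("idna"))  # b'xn--pple-43d.com'
```

Showing the Punycode form, as some browsers now do for mixed-script domains, is one of the simplest defenses against this class of spoofing.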

Did DDoS attacks cause the FCC net neutrality site to go down?

With any DDoS attack, the best way to investigate is to review the logs. Due to the sensitivity of the information submitted to the Federal Communications Commission (FCC) net neutrality site, and the privacy risks that releasing IP addresses could create for users submitting their opinions, the logs have not been publicly released for review. The FCC's CIO, David Bray, stated that a review of the logs determined that nonhuman bots were creating a large number of comments on the site via an API. He also noted that the systems generating this wave of comment traffic weren't part of a botnet of infected machines, but came from a publicly available cloud service.

If bots truly were pumping large numbers of comments into the FCC's net neutrality site -- possibly for spam-related purposes -- at the same time that a large influx of users was attempting to post opinions and comments on the policy, it's likely the application reacted in a manner identical to a DDoS attack. We know from the FCC's public comments that the API was hit hard, and it's these application-layer resources that become very expensive under heavy utilization. Read my article below:

How can OSS-Fuzz and other vulnerability scanners help developers?

In December 2016, Google released OSS-Fuzz, an open source tool for fuzzing applications to find security and stability issues. The tool doesn't scan every piece of open source software; to be accepted by OSS-Fuzz, an open source project must have a large following or be considered critical to global infrastructure.

Since then, the project has scanned 47 open source projects and found over 1,000 bugs, more than a quarter of them security vulnerabilities.

Developers running an open source project should definitely look to integrate into Google's project. The code of the fuzz target, or the code being fuzzed for vulnerabilities, should be part of the project's source code repository.
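For a sense of what a fuzz target looks like: OSS-Fuzz targets are typically C/C++ functions with the libFuzzer signature LLVMFuzzerTestOneInput(data, size), and the same shape is sketched here in Python for illustration. The parse_header function is a hypothetical stand-in for the code under test.

```python
def parse_header(data: bytes) -> int:
    """Toy parser: expects a 2-byte magic number followed by a length byte."""
    if len(data) < 3:
        raise ValueError("too short")
    if data[:2] != b"\x4f\x5a":
        raise ValueError("bad magic")
    return data[2]

def fuzz_target(data: bytes) -> None:
    """One fuzzing iteration: feed the raw input to the code under test.
    Clean rejections of malformed input are fine; only crashes or hangs
    count as findings."""
    try:
        parse_header(data)
    except ValueError:
        pass  # expected rejection of malformed input is not a bug

# The fuzzer calls the target millions of times with mutated inputs:
for sample in (b"", b"\x4f\x5a\x10", b"\xff" * 64):
    fuzz_target(sample)
```

Keeping a target like this in the project's own repository means it evolves with the code it exercises, which is the integration point the paragraph above describes.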

Developers also need to provide seeds so that fuzzing is more efficient; Google recommends a "minimal set of inputs that provides maximal code coverage." Finally, developers should be aware of what's being fuzzed in their code, and review the fuzzers' coverage to validate that the application is being tested efficiently. Read the rest of my article at the link below: