Category Archives: Tutorial

These posts are step-by-step how-to guides that will educate you on how to accomplish a given security goal.

Releasing the TA for Microsoft Windows Defender for Splunk

TA_for_Microsoft_Windows_Defender_Splunkbase

Splunkbase now hosts my first public contribution to the Splunk ecosystem: the TA for Microsoft Windows Defender. This TA allows easy integration of your Microsoft Windows Defender-protected environment into common Splunk tooling. Included in this are Malware Common Information Model (CIM) mappings, which can be used to gain more insight into malware events, Windows Defender signature updates, and scan behavior. It also allows you to use the Splunk Enterprise Security malware investigation workflow with Microsoft Windows Defender.

The TA is licensed as Apache 2.0. Issue tracking and pull requests can be found on its GitHub repository.

Communication OPSEC

One of the hardest parts of operational security when dealing with opponents more powerful than yourself is ensuring that you have multiple levels of protection. In some circumstances, you do not even want your opponents to know that any communication is occurring. This post will cover methods for creating mechanisms for hiding, even in plain sight. The value of such training is not just knowing how to do this yourself, but understanding how your opposition may do the same.

Before you go much further, you should be aware of the basics of operational security. For foundational reading, I suggest looking through the primer from the U.S. Navy. Additionally, you should read the grugq’s blog, especially his material on countering state-level entities.

The first thing you have to determine is exactly what level of information security you need. For the rest of this article, it will be assumed that you are dealing with state-level opposition; if nothing else, that sets a high-end baseline from which to operate. Your own threat level, however, you will need to work out for yourself.

Once you have done that, you need to determine what your covers will be. If you are doing anything against such a determined adversary, you need to maintain a “normal” life. This means having friends who are not part of the targeted group. In doing so, you create a baseline against which to look for abnormalities: it means you can spot people tailing you, and set up methods for contacting anyone you need to reach even while under observation.

The fine line that needs to be trodden provides a way to look for observation, as described above. Routine allows you to observe: to notice new people on a path you know well, or strange activity by people whose patterns you understand better than they do themselves.

An integrated OPSEC is not just an act, it is a lifestyle. It has to be lived so that even someone watching your every move has no indication of what is being done. At its pinnacle, a practitioner can be confronted with an event that demonstrates a threat without showing any outward reaction.

The first step in doing so is learning how to lie. There are many books that illustrate how to do so in simple ways, such as Covert Persuasion. As you get better, reading how people can look for lies can also be invaluable, to understand how you will be evaluated. Both Liespotting: Proven Techniques to Detect Deception as well as What Every BODY Is Saying: An Ex-FBI Agent’s Guide to Speed-Reading People provide good primers.

Beyond those, however, there are some simple suggestions. The movie Ocean’s Eleven has a nice quick introduction.

One of the things that clip hints at, but never directly says, is that you should avoid volunteering unnecessary details. Those are the places where stories get tripped up. For instance, you tell a story about growing up and your high school, but you cannot tell the truth of it because it would expose who you really are; the real details would let people piece together your identity. But will you remember your invented school’s mascot? Its school colors? Are those details even important?

Usually, people like to give details because it makes them seem friendly. That is something you have to quash. Never give away what is not requested, but learn to do so in a way that does not appear unfriendly. People will remember if you refuse to talk about anything personal, but they may not notice if you talk about how you liked or disliked school and shrug about it with a laugh. That is the difference between being memorably obstinate and forgettably average.

Once you have a good grip on how to lie, you need to learn when to lie. Someone attempting subversion by any method, including information gathering, does not want to have to use that skill. They want to look normal in all ways: a life that in no way intersects with this hidden pursuit, many friends but none too close, and work that brings in the necessary income and preferably provides cover for “anomalous” activities.

Routine is powerful, but dangerous. It allows you to know people…the clerks at storefronts, employees walking the same path to and from work, even the homeless who frequent the area. Changes in this routine do not require panic, but they should draw attention. Evaluate the changes and how they should be accounted for in your plans for the day.

If you were planning on making contact with other conspirators, such flags should cause a delay in the action. That is why whenever any plan is made to do so, there should be fallback plans that can be acted upon that would not raise any eyebrows. Knowing other destinations near the point of contact is critical in order to do so. This is even true if you are planning on making contact over digital methods.

There is an old meme jokingly stating that there is no fear behind multiple layers of protection. There is some truth to it, however. If you are reaching out to someone you are working with clandestinely, it should never be through a method that has any ties to you. This means using a device reserved only for that purpose, from a physical location you cannot be tied to, through multiple VPNs, proxies, and Tor. Any identities used here should have no connection to you either, which means you don’t create user IDs based on your favorite book. A simple source for such things is to go to a local library, grab a random book off the shelf, and use a character name, an author, or a publisher.

When working on these clandestine efforts, one of the things to keep in mind is that secrecy is important for all players involved. This has been historically true, given the difficulty in proving loyalties. From Greenpeace to Sabu, parties have come in with false allegiances or turned. The less that “compatriots” know of each other, the less that can be betrayed.

This means that communication should be done with as little face-to-face contact as possible. Using dead drops to pass messages remains useful, especially for non-digital forms. If the identity of a message’s recipient is not important (for instance, when trying to get sensitive information to someone who will disseminate it regardless of the content), there exist digital dead drop systems that can be used. To use these safely, however, you need the awareness of the patterns of your life described above. You cannot utilize a dead drop if you do not know what to look for as a threat.

The ability to compact data increases the ability to conceal it. In previous decades, microfiche was often used to pass information. Now a microSD card can hold 64 GB of data in only 165 mm³. Short of strip-searching everyone, you cannot prevent information from sneaking out. This is why security policies often disable USB ports (warning: PDF) and memory card slots to ensure that these tools cannot be used for exfiltration.

It also allows for the information to be hidden in multiple locations. This goes from the traditional hollow book all the way to compartments hidden in furniture. With the sizes described, all that is needed is a small space. Each of those could be used for a dead drop in conjunction with any public space, such as a library or coffee-house.

Doing all of these things requires dedication. As others have said, utilizing OPSEC correctly doesn’t mean practicing until you can do it right; it means practicing until you cannot do it incorrectly. On the other side, detecting behavior that raises questions is as easy as looking for slips in those behaviors. Most people do not have the training necessary to live OPSEC, and thus will make mistakes.

Personal Medical Device Security

Former Vice President Dick Cheney spoke recently about how the wireless connection to his pacemaker was disabled in order to prevent attack. This is a warranted fear: several years ago, proof-of-concept attacks were demonstrated against insulin pumps. Like much embedded technology, medical devices are extremely difficult to update when security flaws are found.

These flaws are even more problematic given the wireless technology included in many of the devices. The FDA only recently released guidelines for those connections, while Europe has had guidelines for the last two years. Before such guidance existed, the risks were even higher.

Technological protections are starting to appear as well. RSA, for instance, is working on implementing encryption for these connections, and more energy-efficient technologies are emerging. These are not yet widely implemented, if at all. For now, absent those protections, devices should have their wireless turned off. The risk of exploitation is low for now, with no actual attack known as of yet. The potential impact of a successful attack, however, should give pause.

Process, Process, Process

Security is not a state. It is instead a process, a method by which you operate. Today I will be starting a series explaining several of the processes that are critical for information security. First will be change control, the process by which technology and its derivative processes are modified. It is a formal method of control that ensures changes are introduced to a system in a coordinated way.

This process, when correctly utilized, applies to any configuration modification. The obvious prerequisite is a desire to implement something that will add value for whoever uses it. Once something exists that can do so, the following steps can begin.

Change management is typically seen as a process involving six steps. Each of them is a clear checkpoint that can be used to not only evaluate how the given change is progressing, but how to undo it when necessary.

Step 1: Record

In the first step, a proposal is made. This proposal will result in the modification in one or more configuration items, or CIs. These are identified as anything under the control of the change management system, which should in theory be the entirety of an information infrastructure. Included in the proposal should be the importance of the change, the complexity of implementing it, and the impact of doing so.

Step 2: Assess

The proposal should be directed to change management: an individual or team that will determine the risk of implementing the proposal, as well as who will be responsible for doing so. Anyone with a stake in the involved CI(s) should be involved in these determinations, although not necessarily with veto power.

Step 3: Plan

Once the determination is made to move forward with a change, the person(s) determined to have ownership of the change take over. They will specify the implementation of the change in question, step by step, in detail. Additionally, a plan for rolling back the change must be prepared before it can go further, because any modification of a system can go wrong at either the planning or implementation stage. For instance, what would be done if the change affects something that wasn’t anticipated, or if a power outage occurs mid-change and the modification breaks?

Step 4: Build

With the plan completed, it is sent to the stakeholders identified in step two. With their approval, or modifications based on critique, the plan can begin. In this stage, any work necessary prior to its installation is done. This can include creating any configurations that are necessary for the change and staging them. Additionally, testing is done on anything staged to ensure that when installed the change will function as desired. This is where having reliable unit testing is invaluable.

Step 5: Implement

With testing complete, the change is ready for implementation. Once again, the implementors work with stakeholders to plan for the time to do so. The change is then implemented according to that schedule. If necessary, the regression plan decided upon in step three is activated and the change rolled back. Otherwise, a post-implementation assessment is done, and stakeholders report on any adjustments that need to be made either to the change or the change process in the future.

Step 6: Close

The final stage is the simplest to explain, but sometimes the most complex. This is where stakeholders agree that the change has been successfully implemented. While easy in theory, with poor planning in stage three this stage can go on for some time. With proper planning, however, this stage can be as short as a single meeting.
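The six steps above can be captured in a lightweight change record. Below is a minimal sketch in shell; the file name, CI name, and field labels are illustrative assumptions, not part of any formal standard.

```shell
# Minimal change-record skeleton following the six steps above.
# File name, CI name, and field labels are illustrative assumptions.
cat > change-0001.txt <<'EOF'
RECORD:    modify web proxy configuration; CI web-proxy-01; importance, complexity, impact noted
ASSESS:    change management and CI stakeholders accept the risk; an owner is assigned
PLAN:      step-by-step implementation written, plus rollback (restore prior config)
BUILD:     new configuration staged and tested before installation
IMPLEMENT: deployed in the agreed window; rollback not needed; post-review held
CLOSE:     stakeholders sign off that the change is complete
EOF
# One line per step makes it easy to verify nothing was skipped
# before the change is closed.
grep -c '^' change-0001.txt
```

Even a skeleton like this gives each checkpoint an auditable trail, which is what makes rollback and post-implementation review possible.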

What makes this process important is that it introduces regularity to updates. Patches are necessary for maintaining a technically secure environment. Without regularity, security updates become few and far between because of the trauma each one can cause.

Vulnerabilities are tracked through Common Vulnerabilities and Exposures identifiers, or CVEs. In 2009, the average time between a CVE being published and its exploitation was a mere ten days (warning: PDF). By the end of 2012, some exploits appeared within hours of vulnerability disclosure. Faster change management procedures minimize that window of exposure, which makes change management fundamentally important to security.

 

How to Handle Server-Side Passwords

Math can be hard. This is not a bad thing; in fact, security depends on it. The difficulty of factoring large numbers into their prime factors is what makes systems like GPG actually secure. Encryption is best for systems where you will later need to read the protected data, for instance financial data.

Hashing uses a similar thought process: a deterministic algorithm transforms an input string into another string, called a hash, in a repeatable process. This is best suited for uses such as passwords. In many systems, when you create a password, the “password” itself is hashed; when you log in, the string you enter is hashed using the same process, and authorization is granted if the result equals the stored hash. Cryptographic hashing is useful because, with some certainty, you cannot construct an arbitrary string to match a given hash.

Let’s look at a demonstration using the tool sha1pass:

# Let's hash the word ThisIsMyPassword
$ sha1pass ThisIsMyPassword
$4$1/CFCl67$zEr7Fd2L+YKBY4TsPb4hSnYd96A$
# Now, let's hash the word ThisIsMypassword, changing the final word to lowercase
$ sha1pass ThisIsMypassword
$4$SO6FyQhU$8m1Vigcfobz8oIIt5F6tycSCcso$

What this demonstrates is that even a simple change in a string can result in a dramatically different hash result. You can also change the hash by adding an additional random string, called a salt, so that even if multiple people use the same password it will result in different hashes.

EDIT Dec 12 2013: I should have noted that if you do not supply a salt, sha1pass will use a random salt. In the above examples, they are “1/CFCl67” and “SO6FyQhU”. Thanks to Eduard Iten for pointing this out.

Below, you see how to specify a salt when generating the hash.

# Using string "Salt" for the salt
$ sha1pass ThisIsMyPassword Salt
$4$Salt$EOzpsoDQyjcDrhYNzg4WUREMovg$
# Using string "Salts" for the salt
$ sha1pass ThisIsMyPassword Salts
$4$Salts$s4bYTD/KvrRzmdCg0/lK47Dgc+0$

As you see, changing that also results in different output. What is problematic is that the salt has to be stored in plain text somewhere. While file or database permissions should protect it, if it is gathered along with the hash values, password crackers can get to work. Recent advances in password cracking have come from programmable GPUs, which allow up to 160 million password attempts per SECOND. The high end of cracking is currently around 350 billion attempts per second.

Given this, people have taken to adding work to generating password hashes. Schemes such as PBKDF2 run the initial string through a hashing algorithm, as before. What is different is that the result is then fed back through the algorithm again and again, for a number of iterations you specify.

# Using grub-mkpasswd-pbkdf2 and the password "ThisIsMyPassword",
# test using 100000 and 1000000 iterations
$  time grub-mkpasswd-pbkdf2 -c 100000
Enter password:

Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.07802F2A6CD41B90ED2C2E1740F67AAFFDD8A628DE4738DF4536D48367B019657F566D9293F61292FD922B5A8EC79BC133E27FA002A363BC9EF9C5914A22284B.59DBAED532019376DE96DC35FC10C498E43D295C583A414EB0A31BF6379127C366AEE3DA064D4C0088D21BF31F5C00AAA5AE7BB7462DE6BA83B896D1A7CAFF48
grub-mkpasswd-pbkdf2 -c 100000  0.48s user 0.00s system 20% cpu 2.373 total
$ time grub-mkpasswd-pbkdf2 -c 1000000
Enter password:

Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.1000000.9329135ED8527C220E9DAF039A1D63D2324DFB64FF18F7CA5420F80A2FEC1E01323B8096333D7EAD2F98DB8AAAAB7099280C2A8B097893299637F4B6F88A538E.F22A9CFD6B443266158F8218531E12C1D1889C0A259BF6805D571CFE563AE53F5BF9452C05E478587A99B12AC354949FB8D48C9AFB59DDD9DB7D3025FA86146F
grub-mkpasswd-pbkdf2 -c 1000000  4.76s user 0.01s system 68% cpu 7.010 total

Not only does the result obviously change, the time necessary to calculate the hash goes up linearly with the iteration count. The password-cracking cluster described above drops from 350 billion attempts per second to several hundred thousand. That protection holds absent advances either in finding algorithm vulnerabilities or in the underlying mathematics that would allow generating collisions.
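The back-of-envelope arithmetic behind that drop is simply raw hash throughput divided by iteration count, using the figures cited above:

```shell
# A cluster computing 350 billion raw hashes per second, faced with
# a scheme requiring 1,000,000 hash iterations per password guess:
raw_per_sec=350000000000
iterations=1000000
echo $((raw_per_sec / iterations))  # effective guesses per second
```

That works out to 350,000 guesses per second, which is why the iteration count is often called a work factor: the defender pays the cost once per login, while the attacker pays it on every guess.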

Finally, we come to the latest example of how not to handle passwords. A service should never want the underlying password itself; what it needs is proof of knowledge of it for authentication. Even if a password is forgotten, a new one can be sent through what is believed to be a known communication channel. There is therefore never a reason to be able to recover the original.

That last link very casually states that encryption should not be used for passwords. While better than plain text for password storage, encryption requires that the underlying system be able to decrypt them. When that system is compromised, as in Adobe’s recent breach, every customer password can suddenly be recovered and made available. That breach even exposed all of the password hints, which, as XKCD points out, creates the world’s largest crossword puzzle.

A back door into the information will be used, and never in a good way. Stick with known-good hashing methods, and add as much work as your system can support. Be sure to stay aware of advances that necessitate modifications of either the hash used or the work factor associated with it.
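Putting the pieces together (salt, iterated hashing, and compare-on-login), here is a minimal sketch of the verify-by-rehash flow. It assumes the openssl command-line tool is available, and uses sha512crypt (an iterated, salted scheme) purely as a stand-in for whatever known-good method your platform provides.

```shell
# At account creation: store only the salt and the resulting hash,
# never the password itself. (Salt and passwords are examples.)
salt="examplesalt"
stored=$(openssl passwd -6 -salt "$salt" 'ThisIsMyPassword')

# At login: rehash the submitted string with the same salt and
# compare against the stored value.
attempt=$(openssl passwd -6 -salt "$salt" 'ThisIsMyPassword')
[ "$attempt" = "$stored" ] && echo "authorized"

# A near-miss (lowercase p) produces a completely different hash.
wrong=$(openssl passwd -6 -salt "$salt" 'ThisIsMypassword')
[ "$wrong" = "$stored" ] || echo "denied"
```

Note that at no point does the service need to reverse anything; the only stored secret is a value that must be recomputed, at full work-factor cost, for every guess.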

Enhancing Browser Protection with Sandfox

In my own infrastructure, I use Sandfox to enhance my web browsing security. This Linux and OS X tool creates a secure chroot to ensure that even if your browser is compromised, it cannot access critical files. While I use it with Firefox, in theory it can be used to sandbox any application.

Before I do anything, I create a limited user who will only be used for web browsing. You should have pwgen installed before running this command, or replace that part of the command with another long, randomly generated password. You should never log into this account.

# For the rest of this tutorial, that user's
# name will be "USERNAME-sandbox".
# Note: useradd -p expects an already-hashed password, so passing
# random plaintext here leaves the account with a password no one
# can actually type, which is exactly what we want.
useradd -m -p "$(pwgen -sy 30 1)" "$(whoami)-sandbox"

Once you have done that, you may initialize the sandbox.

# Create a Firefox sandbox but don't start Firefox
sudo sandfox --profile=firefox

For instance, I have mine configured to only have access to a specific downloads directory. That way you can control where any downloaded file goes, and ensure it does not simply end up in your home directory, where it could do more damage if exploited. Remember to ensure that the sandboxed account has write permission and your own account has at least read permission on this folder.

# Add an additional bind to an existing sandbox named "firefox"
sudo sandfox --sandbox=firefox --bind /location/of/download/directory
# Force update of the firefox sandbox after editing its profile
# to add new binds.  (missing binds will be added, but existing
# binds will NOT be removed)
sudo sandfox --sandbox=firefox --make

Once that is configured, I alias firefox in my profile to ensure that whenever I launch it, the sandbox will be created. Change the file you modify to reflect the shell you use; I use zsh. Also be sure to modify the alias to use the correct username generated in the first step.

alias firefox='gksudo sandfox --profile=firefox --user=USERNAME-sandbox /usr/bin/firefox'

This adds another layer of protection, although it is not insurmountable to an attacker. Anything you allow write access to can potentially be modified by a successful attacker. Sandboxing can work in Windows as well, although I have not used any of the options available and therefore cannot make any recommendations.

Link

For over a decade now I’ve been responsible for maintaining security resources and advising Sophos customers and partners about security best practices.
I also do a fair bit of public speaking for Sophos on emerging threats and protection strategies and am always in contact with IT professionals and end users.
What I haven’t done so well is make sure that those closest to me get the same benefit from my experience.

So here’s a checklist of what I did.

via Security begins at home – how to do a “back to basics” security overhaul on your family network | Naked Security.

This is a good addition to my previous articles on personal and wireless security, and it offers a few other backup options to consider. My only major issue with the article is that it suggests encrypting only backups stored in the cloud. Data should be encrypted in all locations, especially those outside your total control. Despite that, it is overall a short, useful checklist.