Splunkbase now hosts my first public contribution to the Splunk ecosystem: the TA for Microsoft Windows Defender. This TA allows easy integration of your Microsoft Windows Defender-protected environment into common Splunk tooling. Included are Malware Common Information Model (CIM) mappings, which can be used to gain more insight into malware events, Windows Defender signature updates, and scan behavior. The TA also allows you to use the Splunk Enterprise Security malware investigation workflow with Microsoft Windows Defender.
The TA is licensed as Apache 2.0. Issue tracking and pull requests can be found on its GitHub repository.
One of the hardest parts of operational security when dealing with opponents more powerful than yourself is maintaining multiple levels of protection. In some circumstances, you do not even want your opponents to know that communication is occurring at all. This post covers methods for hiding, even in plain sight. The value of such training lies not just in knowing how to do it yourself, but in understanding how your opposition may do the same.
Before you go much further, you should be aware of the basics of operational security. For foundational reading, I suggest looking through the primer from the U.S. Navy. Additionally, you should look at the grugq’s blog, especially his writing on countering state-level entities.
The first thing you have to determine is what level of operational security your situation actually requires. For the rest of this article, it will be assumed that you are dealing with state-level opposition; if nothing else, that sets a high-end baseline from which to operate. Your own threat level, however, you need to figure out on your own.
Once you have done that, you need to determine what your covers will be. If you are doing anything against such a determined adversary, you need to maintain a “normal” existence. This means having friends who are not part of the targeted group. In doing so, you create a baseline that allows you to look for abnormalities. This means you can look for people tailing you, and set up methods for contacting anyone you need to even while under observation.
The fine line that must be trodden provides a way to look for observation, as described above. Routine allows you to observe: to spot new people on a path you know well, or strange activity by people whose patterns you understand better than they know them themselves.
Integrated OPSEC is not just an act; it is a lifestyle. It has to be lived so that even someone watching your every move has no indication of what is being done. At its pinnacle, a practitioner can be confronted with an event that demonstrates a threat without showing any outward reaction.
Beyond those, however, there are some simple suggestions. The movie Ocean’s Eleven has a nice quick introduction.
One of the things that clip hints at, but never directly says, is that you should avoid volunteering unnecessary details. Those are the places where stories can be tripped up. For instance, you tell a story about growing up and your high school, but you cannot tell the truth of it, because the true details would let people piece together who you really are. However, are you going to remember your invented school’s mascot? Its school colors? Are those details even important?
Usually, people like to give details because it makes them seem friendly. That is something you have to quash. Never give away what is not requested, but learn to do so in a way that does not appear unfriendly. People will remember if you refuse to talk about anything personal, but they may not notice if you talk about how you liked or disliked school and shrug about it with a laugh. That is the difference between being memorably obstinate and forgettably average.
Once you have a good grip on how to lie, you need to learn when to lie. Someone attempting subversion by any method, including information gathering, does not actually want to exercise that skill. They want to look normal in all ways: a life that in no way intersects with this hidden passion, many friends but none too close, and work that brings in the necessary income and preferably provides cover for “anomalous” activities.
Routine is powerful, but dangerous. It allows you to know people: the clerks at storefronts, employees walking the same path to and from work, even the homeless who frequent the area. Changes in this routine do not require panic, but they should draw attention. Evaluate the changes, and how they should factor into your plans for the day.
If you were planning on making contact with other conspirators, such flags should cause a delay in the action. That is why whenever any plan is made to do so, there should be fallback plans that can be acted upon that would not raise any eyebrows. Knowing other destinations near the point of contact is critical in order to do so. This is even true if you are planning on making contact over digital methods.
There is an old meme, jokingly stating that there is no fear behind multiple layers of protection. There is some truth to this, however. If you are reaching out to someone you are working with clandestinely, it should never be through a method that has any ties to you. This means using a device that is used only for that purpose, from a physical location that cannot be tied to you, using multiple VPNs, proxies, and Tor. Any identities used here should have no connection to you either, which means you don’t create user IDs based on your favorite book. A simple source for such things is to go to a local library, grab a random book off the shelf, and take a character name, an author, or a publisher.
When working on these clandestine efforts, one of the things to keep in mind is that secrecy is important for all players involved. This has been historically true, given the difficulty in proving loyalties. From Greenpeace to Sabu, parties have come in with false allegiances or turned. The less that “compatriots” know of each other, the less that can be betrayed.
This means that communication should be done with as little face-to-face contact as possible. Using dead drops to pass messages remains useful, especially for non-digital forms. If the recipient of a message is not important (for instance, when trying to get sensitive information to someone who will disseminate it regardless of the content), there exist digital dead drop systems that can be used. To use these safely, however, you need to know the rhythms of your life as described above. You cannot utilize a dead drop if you do not know what to look for as a threat.
The more data can be compacted, the easier it is to conceal. In previous decades, microfiche was often used to pass information. Now, a microSD card can hold up to 64 GB of data in only 165 mm³. Short of strip-searching everyone, you cannot prevent information sneaking out. This is why security policies will often disable USB ports (warning: PDF) and memory card slots to ensure that these tools cannot be used for exfiltration.
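A quick back-of-the-envelope calculation puts that density in perspective (a minimal sketch; the capacity and volume figures are simply the ones quoted above):

```python
# Storage density of a 64 GB microSD card occupying 165 mm^3.
capacity_mb = 64 * 1000   # 64 GB expressed in megabytes (decimal units)
volume_mm3 = 165

density = capacity_mb / volume_mm3
print(f"{density:.0f} MB per cubic millimetre")
```

Roughly 388 MB in every cubic millimetre, which is why a card can vanish into almost any hiding place.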
It also allows for the information to be hidden in multiple locations. This goes from the traditional hollow book all the way to compartments hidden in furniture. With the sizes described, all that is needed is a small space. Each of those could be used for a dead drop in conjunction with any public space, such as a library or coffee-house.
Doing all of these things requires dedication. As others have said, utilizing OPSEC correctly doesn’t mean practicing until you can do it right. It means practicing until you cannot do it incorrectly. On the other side, detecting behavior that raises questions is as easy as looking for slips in those behaviors. Most people do not have the training necessary to live OPSEC, and thus will make mistakes.
Former Vice President Dick Cheney spoke recently about how the wireless connection to his pacemaker was disabled in order to prevent attack. This is a warranted fear, as several years ago proof of concept attacks were shown against insulin pumps. Like much of embedded technology, it is extremely difficult to update medical devices when security flaws are found.
These flaws are even more problematic given the wireless technology that is included in many of the devices. The FDA just recently released guidelines for those connections, while Europe has had guidelines for the last two years. Without those, risks had been even higher for years.
Technological protections are starting to show up as well. RSA, for instance, is working on implementing encryption for these connections. There are also technologies that are more energy-efficient. These are not widely implemented, if at all. For now, absent those protections, devices should have their wireless turned off. The risk of exploitation is low for now, with no actual known attack made as of yet. The hypothetical impact of doing so, however, should give pause.
Then we assumed that the attack against the centrifuge drive system was the simple and basic predecessor after which the big one was launched, the attack against the cascade protection system. The cascade protection system attack is a display of absolute cyberpower. It appeared logical to assume a development from simple to complex. Several years later, it turned out that the opposite was the case. Why would the attackers go back to basics? […]
In other words, blowing the cover of this online sabotage campaign came with benefits. Uncovering Stuxnet was the end of the operation, but not necessarily the end of its utility. Unlike traditional Pentagon hardware, one cannot display USB drives at a military parade. The Stuxnet revelation showed the world what cyberweapons could do in the hands of a superpower. It also saved America from embarrassment. If another country — maybe even an adversary — had been first in demonstrating proficiency in the digital domain, it would have been nothing short of another Sputnik moment in U.S. history. So there were plenty of good reasons not to sacrifice mission success for fear of detection.
In previous posts, I have discussed how information has value. This is both in the eyes of attackers as well as the information’s original controllers. I have also written several tutorials, such as those on GPG or OPSEC, that by implication state that anonymity has worth. After all, what is the point of privacy if it has no value? What the article above demonstrates is a corollary to that theme. Specifically, when you utilize stealth, one of its powers is found in giving it up.
The mathematics that investigates competitive self-interest is known as game theory. Simplified, it assumes that rational actors behave logically; specifically, rational actors attempt to maximize success, however that is quantified in the models involved. It is utilized in everything from economic modeling to poker. In this case, we will see how utilizing a more obvious weapon than its predecessor is rational.
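As a toy illustration of that logic (every payoff number below is invented for the example, not drawn from any real analysis), a rational actor’s choice can be computed directly from a payoff matrix:

```python
# Hypothetical payoffs for the row player: each strategy scored against
# each possible opponent response. All values are illustrative only.
payoffs = {
    "covert sabotage": {"retaliate": 2, "ignore": 3},
    "overt cyberweapon": {"retaliate": 4, "ignore": 5},
}

def maximin(payoffs):
    # A cautious rational actor maximizes the worst-case payoff.
    return max(payoffs, key=lambda strategy: min(payoffs[strategy].values()))

print(maximin(payoffs))
```

In this invented matrix the overt option wins even against the worst response, which is the shape of the argument made below.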
It was no secret that the United States was working against Iran’s nuclear research programs. What was not clear was what efforts, if any, were being made outside of economic sanctions or other “non-violent” means (whether “non-violent” properly applies to sanctions is outside the scope of this discussion).
What the article quoted above indicates is that the Iranian nuclear research facilities had been compromised for some time. While it did not bring the research to a stop, it would both delay it and raise doubts as to the capabilities of the technical staff involved. This would hopefully allow time for other avenues to bring the research to a complete halt. However, despite the value in this, the technique was modified to instead utilize the payload found in Stuxnet.
This would increase the visibility of the attack, but it is possible that, at this point, visibility was the desired goal. It would continue causing delays in the research program, with a smaller risk of escalation than a military strike (warning: PDF). Additionally, even if discovered, it would most likely not cause a war, given the ongoing debate over the role of cyber attacks in warfare. What it would do, however, is expose that the U.S. had the capability.
The results were predictable: the U.S. and its allies faced retribution. While not desirable, such attacks were easier to absorb than the projected asymmetric physical responses, such as car bombings. The response instead came as software-based asymmetric attacks, such as those against Saudi Aramco and various Western banks. The similarity between the methods used in the initial attack on Natanz and those used in the response indicates that this may have been the desired outcome.
Given that an asymmetric response was going to happen, and that malware and other information-based attacks are already utilized and asymmetric (warning: PDF), perhaps that channel was desired precisely because it was an existing threat. Assuming that those making the decision were in fact rational actors, they saw that the revelations that would come from Stuxnet would inform other actors of their capabilities. Doing so showed that, in the event of such an attack on its own infrastructure, the U.S. could respond in kind.
Security is not a state. It is instead a process, a method by which you operate. Today I will be starting a series explaining several of the processes that are critical for information security. First will be change control, the process by which technology and the processes built on it are modified. It is a formal method of control by which you ensure that changes are introduced to a system in a coordinated manner.
This process, when correctly utilized, applies to any configuration modification. The obvious prerequisite is a desire to implement something that will increase value for whoever utilizes it. Once something exists that can do so, the following steps can begin.
Change management is typically seen as a process involving six steps. Each of them is a clear checkpoint that can be used to not only evaluate how the given change is progressing, but how to undo it when necessary.
Step 1: Record
In the first step, a proposal is made. This proposal will result in the modification in one or more configuration items, or CIs. These are identified as anything under the control of the change management system, which should in theory be the entirety of an information infrastructure. Included in the proposal should be the importance of the change, the complexity of implementing it, and the impact of doing so.
Step 2: Assess
The proposal should be directed to change management: an individual or team that will determine the risk of implementing the proposal, as well as who will be responsible for doing so. These determinations should involve anyone with a stake in the affected CI(s), although not necessarily with veto power.
Step 3: Plan
Once the determination is made to move forward with a change, the person(s) determined to have ownership of the change take over. They will specify the implementation of the change in question, step by step, in detail. Additionally, a plan for rolling back the change must be prepared before it can go further, because any modification of a system can go wrong in either the planning or implementation stages. For instance, what would be done if the change affects something that wasn’t anticipated, or if a power outage occurs during the change and the modification breaks?
Step 4: Build
With the plan completed, it is sent to the stakeholders identified in step two. With their approval, or after modifications based on their critique, work can begin. In this stage, any work necessary prior to installation is done. This can include creating any configurations that are necessary for the change and staging them. Additionally, testing is done on anything staged to ensure that, when installed, the change will function as desired. This is where having reliable unit testing is invaluable.
Step 5: Implement
With testing complete, the change is ready for implementation. Once again, the implementors work with stakeholders to schedule a time to do so. The change is then implemented according to that schedule. If necessary, the rollback plan decided upon in step three is activated and the change rolled back. Otherwise, a post-implementation assessment is done, and stakeholders report on any adjustments that need to be made, either to the change or to the change process, in the future.
Step 6: Close
The final stage is the simplest to explain, but sometimes the most complex. This is where stakeholders agree that the change has been successfully implemented. While easy in theory, with poor planning in stage three this stage can go on for some time. With proper planning, however, this stage can be as short as a single meeting.
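As a sketch, the six checkpoints above could be modeled as an ordered state machine, with the rollback plan from step three acting as a gate before build (the class and stage names are my own illustration, not from any formal standard):

```python
from enum import Enum

class Stage(Enum):
    RECORD = 1
    ASSESS = 2
    PLAN = 3
    BUILD = 4
    IMPLEMENT = 5
    CLOSE = 6

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.stage = Stage.RECORD
        self.rollback_plan = None  # must exist before leaving PLAN

    def advance(self):
        # Enforce the rollback gate described in step three.
        if self.stage is Stage.PLAN and self.rollback_plan is None:
            raise RuntimeError("a rollback plan is required before building")
        if self.stage is Stage.CLOSE:
            raise RuntimeError("change is already closed")
        self.stage = Stage(self.stage.value + 1)

change = ChangeRequest("Patch the web tier")
change.advance()                                    # RECORD -> ASSESS
change.advance()                                    # ASSESS -> PLAN
change.rollback_plan = "Reinstall previous package"
change.advance()                                    # PLAN -> BUILD
print(change.stage)                                 # Stage.BUILD
```

The value of encoding it this way is that a change simply cannot reach implementation without the rollback plan existing first.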
What makes this process important is that it introduces regularity to updates. Patches are necessary for maintaining a technically secure environment. Without regularity, security updates become few and far between because of the trauma they can cause.
Vulnerabilities are catalogued through Common Vulnerabilities and Exposures identifiers, or CVEs. In 2009, the average time between a CVE and exploitation was a mere ten days (warning: PDF). By the end of 2012, some exploits were appearing within hours of vulnerability disclosure. Faster change management procedures minimize exposure, which makes them fundamentally important to security.
The U.S. military has over 1,000 military bases, distributed over 20 countries, containing at least 290,605 buildings (warning: PDF). Each of those hooks into military networks that remain prominent targets. There are countless stories about successful breaches of military infrastructure to gain information, at least according to what is publicly available. There are also those who target these networks as part of political activism. Overall, their role as one of the largest targets in the world is a known problem.
The U.S. Department of Defense announced in September that they intended to create a Joint Information Environment. This would involve integrating the various networks that they control into a single controlled design. In doing so, it would dramatically reduce the threat surface that the largest network in the world faces.
Currently, the Department of Defense has broad guidelines (warning: PDF) to allow for communication between branches. This has led to the development of improved systems that are foundational to the next stage of integration. For instance, C2 Central allows the sharing of information between the hundreds of networks across the branches. It also demonstrates the scale of the problem. The difficulties faced with the BACN project have also shown the layers of sensors, networks, and even basic lack of networking that the pieces of the military infrastructure contain.
The larger the environment, the harder it is to create universal policy. Powerful interests come into play at every juncture. Some of it is the desire to keep fiefdoms functioning. Some of it is disagreement over the choices made to remove the systems used in certain circumstances. There are as many reasons why not to integrate as there are players involved.
Regardless, reducing the technologies and networks in play reduces the attack surface. It also increases the need for confidence in the technologies chosen for this reduced environment. The military has made poor choices on this matter before. The project also faces potential conflicts with its BYOD strategy. The process would take years, most likely at least a decade, and will involve process design, technology replacement, and retraining of every single person who interacts with the redesign. Even with the best of intentions, this will not be an easy task.
Math can be hard. This is not a bad thing; in fact, security depends on it. The difficulty of factoring the product of two large prime numbers is what makes systems like GPG secure. Encryption is best for systems where you will want to look at the protected data again, for instance financial data.
Hashing uses a similar principle: a one-way algorithm creates a string, called a hash, from another string in a repeatable process. This is best suited for uses such as passwords. When you create a password, many systems hash the “password” itself; when you log in, the string you enter is hashed using the same process and grants authorization if it equals the stored hash. Cryptographic hashing is useful because, with some certainty, you cannot create an arbitrary string to match a given hash.
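The flow just described can be sketched in a few lines of Python (unsalted SHA-1, purely for illustration; this is not a recommendation for real password storage):

```python
import hashlib

def sha1_hex(s):
    # Repeatable one-way transform: the same input always yields the same digest.
    return hashlib.sha1(s.encode()).hexdigest()

# At account creation, only the digest is stored, never the password.
stored_digest = sha1_hex("ThisIsMyPassword")

def login(attempt):
    # Hash the attempt with the same process and compare to the stored digest.
    return sha1_hex(attempt) == stored_digest

print(login("ThisIsMyPassword"))   # True
print(login("ThisIsMypassword"))   # False
```

The service never needs to keep, or even be able to recover, the password itself; matching digests are proof enough.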
Let’s look at a demonstration using the tool sha1pass:
# Let's hash the word ThisIsMyPassword
$ sha1pass ThisIsMyPassword
# Now, let's hash the word ThisIsMypassword, changing the final word to lowercase
$ sha1pass ThisIsMypassword
What this demonstrates is that even a simple change in a string can result in a dramatically different hash result. You can also change the hash by adding an additional random string, called a salt, so that even if multiple people use the same password it will result in different hashes.
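That salting behavior can be sketched directly (again with SHA-1, for illustration only): prepend an independent random salt per user, and identical passwords produce different stored hashes.

```python
import hashlib
import os

def salted_sha1(password, salt):
    # The salt is prepended to the password before hashing.
    return hashlib.sha1(salt + password.encode()).hexdigest()

# Two users choose the same password but receive independent random salts.
salt_a, salt_b = os.urandom(8), os.urandom(8)
digest_a = salted_sha1("ThisIsMyPassword", salt_a)
digest_b = salted_sha1("ThisIsMyPassword", salt_b)

# Different salts yield different digests for the same password.
print(digest_a != digest_b)
```

This is what defeats precomputed lookup tables: an attacker must crack each salted hash individually.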
EDIT Dec 12 2013: I should have noted that if you do not supply a salt, sha1pass will use a random salt. In the above examples, they are “1/CFCl67” and “SO6FyQhU”. Thanks to Eduard Iten for pointing this out.
Below, you see how to specify a salt when generating the hash.
# Using string "Salt" for the salt
$ sha1pass ThisIsMyPassword Salt
# Using string "Salts" for the salt
$ sha1pass ThisIsMyPassword Salts
As you see, changing the salt also results in different output. What is problematic is that the salt has to be stored in plain text somewhere. While file or database permissions should protect it, if the salts are gathered along with the hashes, password crackers can get to work. Recent advancements in password cracking have come from programmable GPUs, which allow up to 160 million password attempts per second on a single card. The high end of cracking clusters currently reaches around 350 billion attempts per second.
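To put those rates in perspective, consider exhausting every possible 8-character lowercase password (the password policy is my own assumption for the example; the rates are the ones quoted above):

```python
# Candidates for an 8-character password drawn from 26 lowercase letters.
keyspace = 26 ** 8   # 208,827,064,576 candidates

for label, rate in (("single GPU", 160_000_000),
                    ("cracking cluster", 350_000_000_000)):
    seconds = keyspace / rate
    print(f"{label}: {seconds:,.1f} seconds to exhaust")
```

A single GPU clears the whole space in roughly twenty minutes; the cluster does it in under a second.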
Given this, people have taken to adding work to generating password hashes. Algorithms such as PBKDF2 allow you to run the initial string through a hashing algorithm, as before. What is different is that the result is then fed back through the algorithm repeatedly, for an additional number of iterations you specify.
# Using grub-mkpasswd-pbkdf2 and the password "ThisIsMyPassword",
# test using 100000 and 1000000 iterations
$ time grub-mkpasswd-pbkdf2 -c 100000
PBKDF2 hash of your password is grub.pbkdf2.sha512.100000.07802F2A6CD41B90ED2C2E1740F67AAFFDD8A628DE4738DF4536D48367B019657F566D9293F61292FD922B5A8EC79BC133E27FA002A363BC9EF9C5914A22284B.59DBAED532019376DE96DC35FC10C498E43D295C583A414EB0A31BF6379127C366AEE3DA064D4C0088D21BF31F5C00AAA5AE7BB7462DE6BA83B896D1A7CAFF48
grub-mkpasswd-pbkdf2 -c 100000 0.48s user 0.00s system 20% cpu 2.373 total
$ time grub-mkpasswd-pbkdf2 -c 1000000
PBKDF2 hash of your password is grub.pbkdf2.sha512.1000000.9329135ED8527C220E9DAF039A1D63D2324DFB64FF18F7CA5420F80A2FEC1E01323B8096333D7EAD2F98DB8AAAAB7099280C2A8B097893299637F4B6F88A538E.F22A9CFD6B443266158F8218531E12C1D1889C0A259BF6805D571CFE563AE53F5BF9452C05E478587A99B12AC354949FB8D48C9AFB59DDD9DB7D3025FA86146F
grub-mkpasswd-pbkdf2 -c 1000000 4.76s user 0.01s system 68% cpu 7.010 total
Not only does the result change, the time necessary to calculate the hash goes up linearly with the iteration count. The password-cracking cluster described above drops from 350 billion attempts per second to several hundred thousand, absent advances either in finding algorithm vulnerabilities or in the underlying mathematics that allow for generating collisions.
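The same linear scaling is visible with the PBKDF2 implementation in Python’s standard library (the password, salt, and iteration counts below are arbitrary choices for the demonstration):

```python
import hashlib
import time

password, salt = b"ThisIsMyPassword", b"Salt"

for iterations in (100_000, 1_000_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations)
    elapsed = time.perf_counter() - start
    # Ten times the iterations costs roughly ten times the work,
    # for defender and attacker alike.
    print(f"{iterations:>9} iterations: {elapsed:.3f}s, key {key.hex()[:16]}...")
```

A defender pays that cost once per login; an attacker pays it once per guess, which is the entire point.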
Finally, we can turn to a new example of how not to handle passwords. A service should never want the underlying password itself; what it needs is proof of knowledge of it, for authentication. Even if a password is forgotten, a new one should be sent through what is believed to be a known communication channel. Therefore, there is never a reason to allow the original password to be retrieved.
That last link very casually states that encryption should not be used for passwords. While better than plain text for password storage, encryption requires that the underlying system have the ability to decrypt the passwords. When that system is compromised, as in Adobe’s recent breach, every customer password can suddenly be recovered and made available. That breach even exposed all of the password hints, which, as XKCD points out, creates the world’s largest crossword puzzle.
A back door into the information will be used, and never in a good way. Stick with known-good hashing methods, and add as much work as your system can support. Be sure to stay aware of advances that necessitate modifications of either the hash used or the work factor associated with it.
When I am working with clients, sometimes the hardest lesson is in calculating the value of information. Part of this is the difficulty of the risk calculations that determine appropriate care. The other part is figuring out what others think that information is worth, to estimate the chance of being targeted in an attack.
Sometimes you get to know how valuable the information you control is. The Federal Reserve, for instance, knows that the data it releases is worth a fortune. As a result, it sets up elaborate security for each report release. Despite this, timings from trades related to the “no taper” decision announced in September 2013 indicated that it still managed to leak early.
When you know that value, you still want to know how people would acquire it. The technical term for this is penetration testing: outsiders are paid to pose as attackers and gain all the access they can, in order to illustrate the ways in which the client is vulnerable. A good example of this can be found in the description from Adam Penenberg. The technical explanation of that same story can be found in a writeup from his attackers.
What makes this much harder is when you don’t even know what is valuable under your control. These are circumstances where the information you control becomes much more valuable to a given attacker. Examples can be pulled from the attack on Mat Honan. There, the value was found in the mere knowledge of his email address, and then the last four digits of his credit card number, which Amazon displays once the account is compromised.
This illustrates how “trivial” information can have great value to the right audience, and why data security needs to be confirmed for all information under control and not just that which the controller believes to have worth. You will be compromised not for what you think is important, but for what your attackers decide they want. When you plan around everything having value, you can plan better how to protect yourself and those who depend on your business.
In my own infrastructure, I use Sandfox to enhance my web browsing security. This is a Linux and OS X tool that creates a secure chroot to ensure that even if your browser is compromised, it cannot access critical files. While I use it with Firefox, in theory it can be used to sandbox any application.
Before I do anything, I create a limited user who will only be used for web browsing. You should have pwgen installed before running this command, or replace that part of the command with another long, randomly generated password. You should never log into this account.
# For the rest of this tutorial, that user's
# name will be "USERNAME-sandbox".
# Note: useradd -p expects an already-hashed password; passing the
# plain-text output of pwgen leaves the account effectively impossible
# to log into with a password, which is the intent here.
sudo useradd -m -p `pwgen -sy 30 1` "`whoami`-sandbox"
Once you have done that, you may initialize the sandbox.
# Create a Firefox sandbox but don't start Firefox
sudo sandfox --profile=firefox
For instance, I have mine configured to only have access to a specific downloads directory. That way you can control where any downloaded file goes, and ensure it does not just end up in your home directory, where it could do more damage if exploited. Remember to ensure that the sandboxed account has write permission to this folder, and that your own user account has at least read permission.
# Add an additional bind to an existing sandbox named "firefox"
sudo sandfox --sandbox=firefox --bind /location/of/download/directory
# Force update of the firefox sandbox after editing its profile
# to add new binds. (missing binds will be added, but existing
# binds will NOT be removed)
sudo sandfox --sandbox=firefox --make
Once that is configured, I alias firefox in my shell profile to ensure that whenever I launch it, the sandbox will be used. You should change the file you modify to reflect the shell you use; I use zsh. Also ensure that you modify the alias to use the correct username created in the first step.
alias firefox='gksudo sandfox --profile=firefox --user=USERNAME-sandbox /usr/bin/firefox'
This adds another layer of protection, although it is not insurmountable to an attacker. Anything you allow write access to can potentially be modified by a successful attacker. Sandboxing can work in Windows as well, although I have not used any of the options available and therefore cannot make any recommendations.