Category Archives: Commentary

These posts are commentary and thoughts on world events, mostly on technology and its interactions with current events.

The Value in Releasing Information

Then we assumed that the attack against the centrifuge drive system was the simple and basic predecessor after which the big one was launched, the attack against the cascade protection system. The cascade protection system attack is a display of absolute cyberpower. It appeared logical to assume a development from simple to complex. Several years later, it turned out that the opposite was the case. Why would the attackers go back to basics? […]

In other words, blowing the cover of this online sabotage campaign came with benefits. Uncovering Stuxnet was the end of the operation, but not necessarily the end of its utility. Unlike traditional Pentagon hardware, one cannot display USB drives at a military parade. The Stuxnet revelation showed the world what cyberweapons could do in the hands of a superpower. It also saved America from embarrassment. If another country — maybe even an adversary — had been first in demonstrating proficiency in the digital domain, it would have been nothing short of another Sputnik moment in U.S. history. So there were plenty of good reasons not to sacrifice mission success for fear of detection.

via Stuxnet’s Secret Twin – By Ralph Langner | Foreign Policy.

In previous posts, I have discussed how information has value, both to attackers and to the information’s original controllers. I have also written several tutorials, such as those on GPG or OPSEC, that imply anonymity has worth. After all, what is the point of privacy if it has no value? The article above demonstrates a corollary to that theme: one of the powers of stealth lies in giving it up.

The mathematics that investigates competitive self-interest is known as game theory. Simplified, it states that rational actors will attempt to maximize their payoff, however that is quantified in the model involved. It is utilized in everything from economic modeling to poker. In this case, we will see how deploying a more conspicuous weapon than its predecessor can be rational.
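As an illustration of that reasoning, here is a toy expected-utility calculation in Python. The strategies, payoffs, and probabilities are entirely invented for this sketch; they are not drawn from the article or any real analysis of the decision.

```python
# Toy expected-utility model of a "stay covert vs. reveal capability"
# decision. All payoffs and probabilities below are made up purely
# for illustration.

def expected_value(outcomes):
    """Expected payoff of a strategy: sum of payoff * probability."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Each strategy maps to (payoff, probability) pairs over its outcomes.
strategies = {
    # Covert sabotage: modest delay to the program, small chance of exposure.
    "stay_covert": [(3, 0.7), (-1, 0.3)],
    # Conspicuous attack: larger deterrent signal, higher retaliation risk.
    "reveal_capability": [(5, 0.6), (-2, 0.4)],
}

# A rational actor picks the strategy with the highest expected payoff.
best = max(strategies, key=lambda s: expected_value(strategies[s]))
for name, outcomes in strategies.items():
    print(f"{name}: expected payoff = {expected_value(outcomes):.2f}")
print("rational choice:", best)
```

With these made-up numbers, revealing the capability carries the higher expected payoff (2.2 versus 1.8), mirroring the argument above: the deterrent value of exposure can outweigh the cost of losing stealth.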

It was no secret that the United States was working against Iran’s nuclear research programs. What was not clear was what efforts, if any, were being made outside of economic sanctions and other “non-violent” means (whether the adjective “non-violent” actually applies to sanctions is outside the scope of this discussion).

What the article quoted above indicates is that the Iranian nuclear research facilities had been compromised for some time. While it did not bring the research to a stop, it would both delay it and raise doubts as to the capabilities of the technical staff involved. This would hopefully allow time for other avenues to bring the research to a complete halt. However, despite the value in this, the technique was modified to instead utilize the payload found in Stuxnet.

This would increase the visibility of the attack, but by that point visibility may have been the goal. It would continue causing delays in the research program, with a smaller risk of escalation than a military strike (warning: PDF). Additionally, even if discovered, it would most likely not cause a war, given the ongoing debate over the role of cyber attacks in warfare. What it would do, however, is demonstrate that the United States had the capability to carry out such attacks.

The results of this were predictable: the U.S. and its allies faced retribution. While not desirable, such attacks were easier to absorb than the projected asymmetric physical responses, such as car bombings. The response instead came in software-based asymmetric form, such as the attacks on Saudi Aramco and various Western banks. The correlation between the methods used in the initial attack on Natanz and those used in the response indicates that this may have been the desired outcome.

Given that an asymmetric response was going to happen anyway, and that malware and other information-based attacks are already an established asymmetric threat (warning: PDF), perhaps a software response was preferable precisely because it was a known quantity. Assuming that those making the decision were in fact rational actors, they saw that the many revelations that would come from Stuxnet would inform other actors of U.S. capabilities. By doing so, it showed that in the event of such an attack on its own infrastructure, the U.S. could respond in kind.

The Difficulty of Doing Network Security Correctly: The U.S. Defense Department as a Case Study

The U.S. military has over 1,000 military bases, distributed over 20 countries, containing at least 290,605 buildings (warning: PDF). Each of those hooks into the military networks that remain targets, and prominent ones at that. Judging from what is publicly available, there are countless stories of successful breaches of military infrastructure to gain information. There are also those who target these networks as part of political activism. Overall, their role as one of the largest targets in the world is a known problem.

The U.S. Department of Defense announced in September that they intended to create a Joint Information Environment. This would involve integrating the various networks that they control into a single controlled design. In doing so, it would dramatically reduce the threat surface that the largest network in the world faces.

Currently, the Department of Defense has broad guidelines (warning: PDF) to allow for communication between branches. This has led to the development of improved systems that are foundational to the next stage of integration. For instance, C2 Central allows the sharing of information between the hundreds of networks across the branches. It also demonstrates the scale of the problem. The difficulties faced with the BACN project have likewise exposed the layers of sensors, networks, and in places the basic lack of networking, that the pieces of the military infrastructure contain.

The larger the environment, the harder it is to create universal policy. Powerful interests come into play at every juncture. Some of it is the desire to keep fiefdoms functioning. Some of it is disagreement over which systems will be retired and when. There are as many reasons not to integrate as there are players involved.

Regardless, reducing the technologies and networks in play reduces the attack surface. It also increases the need for confidence in the technologies chosen for this reduced environment. The military has made poor choices on this matter before. The project also faces potential conflicts with its BYOD strategy. The process will most likely take years, probably at least a decade, and will involve process design, technology replacement, and retraining of every single person who interacts with the redesign. Even with the best of intentions, this will not be an easy task.


Value is in the Eye of the Beholder

When I am working with clients, sometimes the hardest lesson is in calculating the value of information. Part of this is the difficulty of working out the risk calculations that determine appropriate care. The second part is figuring out what others think that information is worth, to determine the chance of being targeted in an attack.
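One common way to structure that first calculation is the classic annualized loss expectancy formula (ALE = single loss expectancy × annualized rate of occurrence). Here is a minimal sketch in Python; the asset value, exposure factor, and incident rate are invented numbers for a hypothetical asset, not figures from any real engagement.

```python
# Classic risk-quantification formulas. The asset values and rates
# below are invented for illustration only.

def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: cost of one incident = asset value * fraction of value lost."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, annual_rate):
    """ALE: expected yearly loss = SLE * expected incidents per year."""
    return sle * annual_rate

# Hypothetical customer database worth $500,000; a breach loses 40% of
# that value, and we estimate one breach every five years (rate 0.2).
sle = single_loss_expectancy(500_000, 0.4)
ale = annualized_loss_expectancy(sle, 0.2)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```

The resulting ALE ($40,000 per year under these assumptions) gives a rough ceiling on what it is rational to spend annually protecting that asset, which is what "appropriate care" means in practice.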

Sometimes you get to know how valuable the information you control is. The Federal Reserve, for instance, knows that the data it releases is worth a fortune. As a result, it sets up elaborate security for each report release. Despite this, timings from trades related to the “no taper” decision announced in September 2013 indicated that the data still managed to leak early.

When you know that value, you still want to know how people would acquire it. The technical term for finding out is penetration testing: outsiders are paid to pose as attackers and gain all the access they can, in order to illustrate the methods to which the client is vulnerable. A good example of this can be found in the description from Adam Penenberg. The technical explanation from that same story can be found in a writeup from his attackers.

What makes this much harder is when you don’t even know what is valuable under your control. These are circumstances where the information you control becomes much more valuable to a given attacker. Examples of this can be pulled from the attack on Mat Honan. There, the value was found in the mere knowledge of his email address, and then the last four digits of his credit card number, which Amazon displays once the account is compromised.

This illustrates how “trivial” information can have great value to the right audience, and why data security needs to be confirmed for all information under control and not just that which the controller believes to have worth. You will be compromised not for what you think is important, but for what your attackers decide they want. When you plan around everything having value, you can plan better how to protect yourself and those who depend on your business.


Economics of Malware: Presentation

Thanks for those of you who have been following my series on the economics of malware (part 1, part 2, part 3). I presented about that topic at Madison’s Nerd Nite today (October 30, 2013). This is the presentation, along with notes necessary to give it. It is available under a Creative Commons – Attribution/Non-Commercial/Share-Alike license.



For over a decade now I’ve been responsible for maintaining security resources and advising Sophos customers and partners about security best practices.
I also do a fair bit of public speaking for Sophos on emerging threats and protection strategies and am always in contact with IT professionals and end users.
What I haven’t done so well is make sure that those closest to me get the same benefit from my experience.

So here’s a checklist of what I did.

via Security begins at home – how to do a “back to basics” security overhaul on your family network | Naked Security.

This is a good addition to my previous articles on personal and wireless security. It offers a few other backup options to consider. My only major issue with the article is that it suggests encrypting only the backups stored in the cloud. Data should be encrypted in all locations, especially those outside your total control. Despite that, it is overall a short, useful checklist.



Now that we have enough details about how the NSA eavesdrops on the Internet, including today’s disclosures of the NSA’s deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.
For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn’t part of today’s story — it was in process well before I showed up — but everything I read confirms what the Guardian is reporting.
At this point, I feel I can provide some advice for keeping secure against such an adversary.

via Schneier on Security: How to Remain Secure Against the NSA.

My time is currently split preparing to present at Madison, Wisconsin’s Nerd Nite on October 30, 2013. Schneier covers several important points on how to handle security against a state-level actor, but the lessons are worth implementing in general.

Lesson three in particular is valuable. If you assume someone CAN be listening to your activity, it is easier to avoid doing something stupid. Where you have reason to fear being discovered, act on those fears. Air gaps or one-way network connections can protect confidential information better than any firewall.


Obscurity itself, however, when added to a system that already has decent controls in place, is not necessarily a bad thing. In fact, when done right, obscurity can be a strong addition to an overall approach.

via Security and Obscurity Revisited | Daniel Miessler.

This is a good exploration of why “bad” defenses can still help, if used in addition to “good” defenses. As long as the additional layers do not undermine known-good policy or procedure, add them.