Sunday, December 18, 2016

What lies behind the veil…

Preface: This article is solely for security interest and research, nothing more. No political ideas or motivations shall be discussed herein.

Background/timeline information:
“On Friday October 21, 2016 from approximately 11:10 UTC to 13:20 UTC and then again from 15:50 UTC until 17:00 UTC, Dyn came under attack by two large and complex Distributed Denial of Service (DDoS) attacks against our Managed DNS infrastructure.” [1] [DYN website]

We all know about this DDoS attack and it has been discussed and analyzed by many, so this article will not address this attack, but rather I will discuss one of the events that was quietly going on in the background during this attack.

As a security researcher, the DDoS attack was interesting, but I wanted to know whether any other events were happening at the same time, so I headed over to BGP Stream [2] to see how, if at all, BGP routes might have been affected by this attack. It did not take me very long to notice something very interesting. During the last part of the second attack on Dyn, a set of events caught my attention. It appears that during this time, AS3267 State Institute of Information Technologies and Telecommunications (SIIT&T Informika) [3], controlled by the Russian Federation, was attempting to hijack several other AS routes between 10/21/2016 17:08:25 (UTC) and 10/21/2016 17:08:51 (UTC). In fact, all 42 events against 22 different AS’s occurred precisely within this window, with the advertisements starting at 17:08:25 (UTC) and all of them ending at 17:08:51 (UTC), a total of 26 seconds.
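
The origin-mismatch pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration, not actual BGP Stream output: the record format, prefixes, and expected-origin table are assumptions made for the example.

```python
# Hypothetical sketch: flag possible origin hijacks by comparing the origin AS
# seen in BGP announcements against the expected origin for each prefix.
# Prefixes and the record format are illustrative, not real BGP Stream data.

EXPECTED_ORIGIN = {
    "203.0.113.0/24": 16625,  # illustrative prefix, e.g. Akamai
    "198.51.100.0/24": 2497,  # illustrative prefix, e.g. IIJ
}

def find_possible_hijacks(announcements):
    """announcements: iterable of dicts with 'prefix', 'origin_as', 'timestamp'.
    Returns the announcements whose origin AS does not match expectations."""
    alerts = []
    for ann in announcements:
        expected = EXPECTED_ORIGIN.get(ann["prefix"])
        if expected is not None and ann["origin_as"] != expected:
            alerts.append(ann)
    return alerts

sample = [
    {"prefix": "203.0.113.0/24", "origin_as": 3267,
     "timestamp": "2016-10-21T17:08:25Z"},   # unexpected origin -> alert
    {"prefix": "198.51.100.0/24", "origin_as": 2497,
     "timestamp": "2016-10-21T17:00:00Z"},   # matches expectation -> ignored
]

for alert in find_possible_hijacks(sample):
    print(f"possible hijack: {alert['prefix']} announced by AS{alert['origin_as']}")
```

Services like BGP Stream do essentially this at internet scale, correlating announcements from many route collectors rather than a static table.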

26 seconds doesn’t sound like a very long time, but for an AS, a tremendous amount of data can be gathered during that brief window. It’s also short enough not to cause any serious outages and, most likely, not to be noticed by many. I don’t pretend to know why this occurred, only that it did, and during a time when much of the internet in the US was being affected by a larger-scale attack. I’m not much of a believer in coincidences, especially when it comes to internet networking, so in my mind this seemed to be some sort of test. (Pure speculation on my part.)

Here is the breakdown, by country, of the AS’s affected by these hijack attempts:

Isle of Man (1)
United Kingdom (1)
Japan (1)
Slovakia (1)
Canada (2)
Ukraine (2)
Germany (2)
Netherlands (4)
United States (7)

As you can see from these stats, these events were not solely targeted at the US, even though the US was the most targeted country. I will provide all of the affected AS’s at the end of this article for your own research, but preliminarily, as far as I can see, there isn’t a solid connection between all of the AS’s that appear to have been targeted. Also interesting is the fact that all 22 AS’s were targeted twice in succession, all during the exact same time period.

Conspiracy theories abound these days and I have no intention of getting involved in those discussions, but I do believe this information is relevant due to the circumstances under which these events occurred. The inference is yours to make; I’m just reporting on the data collected.

All data collected from BGP Stream:
Affected AS
Expected Origin AS: Akamai Technologies, Inc. (AS 16625)
Expected Origin AS: Akamai Technologies, Inc. (AS 35994)
Expected Origin AS: Carolina Internet, Ltd. (AS 13618)
Expected Origin AS: Contabo GmbH (AS 51167)
Expected Origin AS: CW Vodafone Group PLC (AS 1273)
Expected Origin AS: FOP Smirnov V'yacheslav Valentunovuch (AS 30860)
Expected Origin AS: Host Europe GmbH (AS 8972)
Expected Origin AS: Insitu, Inc (AS 27214)
Expected Origin AS: Internet Initiative Japan Inc. (AS 2497)
Expected Origin AS: LeaseWeb Netherlands B.V. (AS 60781)
Expected Origin AS: Lertas NET s.r.o. (AS 201924)
Expected Origin AS: Loco Digital LTD (AS 58277)
Expected Origin AS: Mohawk Internet Technologies (AS 14537)
Expected Origin AS: Rackspace Hosting (AS 27357)
Expected Origin AS: Rackspace Hosting (AS 33070)
Expected Origin AS: Server Central Network (AS 23352)
Expected Origin AS: Serverius Holding B.V. (AS 50673)
Expected Origin AS: Velcom (AS 30407)
Expected Origin AS: Webhost Limited (AS 34738)
Expected Origin AS: Webzilla B.V. (AS 35415)
Expected Origin AS: WIBO International s.r.o. (AS 59939)

UPDATE: 12-19-16

@DynResearch responded with the following:

They explained that while the advertisements were leaked, they were restricted to inside the AS3267 customer cone, meaning no one that wasn't already talking to AS3267 received the advertisements. (See links below for Tweets.)
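
The customer-cone containment that @DynResearch described can be expressed as a simple membership check: did any collector peer outside the cone observe the leaked route? The AS numbers and cone contents below are purely illustrative assumptions, not Dyn’s actual data.

```python
# Hypothetical sketch: check whether a leaked advertisement propagated beyond
# an AS's customer cone. Cone membership here is made up for illustration;
# real cone data comes from AS-relationship inference (e.g. CAIDA AS Rank).

CUSTOMER_CONE_AS3267 = {3267, 64501, 64502}  # illustrative AS numbers

def leaked_outside_cone(observing_peers, cone):
    """Return the observing peer ASes that are NOT inside the customer cone.
    An empty result means the leak was contained, as Dyn reported."""
    return [asn for asn in observing_peers if asn not in cone]

# Only cone members saw the route, so the impact was contained:
print(leaked_outside_cone([64501, 64502], CUSTOMER_CONE_AS3267))  # []

# A peer outside the cone seeing the route would indicate wider propagation:
print(leaked_outside_cone([64501, 64999], CUSTOMER_CONE_AS3267))  # [64999]
```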

NOTE: Original data available on request. 


Sunday, October 2, 2016

Pumpkin-Spiced Cyber

So, ‘tis the season for all things Pumpkin-Spiced; from coffee, to pet shampoo, to Cheerios, to, well, you get the picture. It has become one of the many jokes of the internet and it’s getting more prevalent every year. In thinking about this phenomenon, I immediately thought about how it could apply to cyber security, and it didn’t take very long to make the correlation.

The Pumpkin-Spiced phenomenon was born out of the seasonal events of Halloween and Thanksgiving in an attempt to “spice things up” a bit in order to boost sales of various products. And while there really aren’t too many seasonal events in cyber security (not counting the increased malspam campaigns around Christmas and tax season), the general principle of “let’s spice things up” still applies. After all, take the Pumpkin-Spice away and the coffee is still just coffee; likewise, if you take away all of the blinky lights, buzzwords and snazzy dashboards, security products are still just security products. So if the coffee was bad before it was “spiced up”, then you just have flavored bad coffee, and the same goes for security products. In other words, if your product is not good at the core, then no matter what you add to make it more appealing, you still just have a bad product.

I believe that in the security industry, we understand this better than others as we have to deal with security products every day, usually the choice of these products being well beyond our control, so we end up with Pumpkin-Spiced tools that might look great in presentation, but normally fall well short in actual functionality. Thus when we see the next new Pumpkin-Spiced whatever, palms immediately go to faces.

This is the real world of marketing that we all face every day, in every aspects of our lives. It might be fine for coffee and various other personal products, because we have a choice in those matters, but in the marketing of security products, this could mean the difference between better security with a solid product or less security with a mediocre product that looks impressive to management.

So as much as this is a mild hit-piece on the marketing of Pumpkin-Spiced everything to the masses, it’s more of a call for rational marketing when it comes to security products. I know marketing will never go away, but I implore security vendors to concentrate on making strong products first and let the marketing come second. You’ll be surprised at the difference this will make, as the marketing of a really great product will come much easier, without the need for the meaningless splash of Pumpkin-Spice.

Monday, August 17, 2015

Major Identity Theft Protection Firm Compromised…

In a late day announcement, it was revealed that a major anti-identity theft company has been hit by a massive data breach that resulted in the loss of the personal information of tens of millions of customers. Citizens that have already been hit by one or more of the many data breaches that have occurred over the past couple of years are in shock, wondering what they will do now…

Yeah, this didn’t actually happen, not today at least, but one day and probably one day soon, I feel positive it will and when it does, it just might be the wake-up call to users that nothing is really truly safe. Of course if it doesn’t happen, then that means those anti-identity theft companies are doing it right!

But seriously, is this what it’s actually going to take to get customers and employees to start actively demanding that companies take more responsibility for the information they either willingly give them, or are forced to give them for the ‘privilege’ of doing business with their firms or to get a job? I personally believe that in the realm of information security, people will have to be backed into a corner, with nowhere left to turn, before they get fed up enough to demand that these companies and government organizations take information security as seriously as it needs to be taken in this day and age.

Credit monitoring is a band-aid for the victims of a data breach, and ‘cyber’ insurance is a crutch for the companies and organizations that are willing to pay the premiums rather than allocate sufficient resources to properly secure users’ data. Don’t believe me? Let a user’s cable, internet or cell phone access go down for even a few minutes and you will see hell raised with a fury! Just ask any service provider’s customer service reps; they’ll tell you. So until companies either willingly start doing what is necessary to properly protect customers, or until users take it upon themselves to demand such protections, the title of this article is destined to come true.


Saturday, May 30, 2015


No need to read a bunch into this, I’m simply writing this to confirm that attribution is the new propaganda. Shouldn't really even be surprised at this statement, but it needs to be said.

We all laugh and make jokes about attribution and how much bullshit revolves around this particular subject, but the plain and simple truth of the matter is that when the media talks about the attribution associated with most data breaches and stolen PII, the general public only sees one thing, whatever BS is spouted out. It doesn't matter if it’s CNN, MSNBC or Fox, the fact of the matter is that now attribution is being used as propaganda to sway the general population to whatever means is politically expedient at the moment.

It could be China, North Korea, the Russians, Anonymous, or any number of other “actors” that happen to be convenient at the time, and while a lot of this may be true, the means and uses of these disclosures are still suspect in my opinion; after all, we still don’t even know who pulled off the Sony hack, right? I’m not trying to get too deep into the weeds with this; I’m just saying that we need to look outside our echo chamber and into the real world of normal people, and consider what they think and believe about the things the news reports. People believe what they want to believe, and they are being pointed to conclusions that may or may not be true.

So, to my initial point, attribution is being used as propaganda. No surprise there right? I know the people that are reading this know the deal, so I’ll just stop for now…

Saturday, May 16, 2015


Security, Tactics and Defenses

Note: This post is NOT about sex or porn, not really, but kinda, but no…

So, I was listening to Paul’s Security Weekly podcast (@securityweekly) and they were talking about vulns and rankings and how they affect companies, and how CVE rankings aren’t always relevant to a company’s security policies because there’s no one-size-fits-all method for a particular organization. So I had this idea about how security vulns relate to different organizations the way different diseases relate to the human body. I almost didn’t write this because Paul said he hated the medical references, but then he brought up a plot line from CSI: Cyber and I felt better about it…

So, much like was discussed on their show, not every company is vulnerable to every exploit out there, no matter how severe. Much like a non-smoker is mostly not going to get lung cancer; this is not always the case, but the odds are easily relatable. However, on the flip side, if there is a new virus out there, both companies and people can be affected by it (no, I didn’t mean that the same virus can affect computers and people, so just stop that). Here are a few examples; you can run with them as you wish.

Case 1: Let’s say we have two people. Person A’s family has a history of heart disease and person B’s family doesn’t. Since person A knows this, they pay very close attention to what they eat, go to regular doctor visits and exercise, reducing the risks they know could potentially cause problems. This is a great security practice for an organization: they have identified that they could have issues if they do nothing. But person B, not having a history of this issue, doesn’t worry about all these preventive measures and just does whatever they want, eating everything that’s bad for most people, blowing off doctor visits and being a couch potato.

In this example we have two people with varying risks practicing very different strategies. One person has identified the risk and is actively working on mitigating it, while the other person doesn’t have the same risk but isn’t taking into account all of the other circumstances that could potentially have devastating impacts. In truth, both are still very vulnerable to heart disease. Even though person A has been actively trying to prevent a heart attack, they can still have one; but even if they do, their body is better prepared for the aftermath. If the same thing happens to person B, they will more than likely be surprised and, more to the point, their body will not be strong enough to recover from such a traumatic event. So, even though you might not be vulnerable to a particular disease, you still can’t completely ignore the possibilities.

Case number one was very specific, so let’s take a look at case number two.

Case 2: Viruses.
They can potentially affect everyone, no matter what you do, especially if you do nothing! But let’s just say that different people do different things to help prevent getting sick from viruses and yet we all still get sick at some point in our life, because that is a fact of being a human being. It might be something we did or didn't do, something we didn't think about or just by dumb luck. However, how often we get sick and how well we recover is directly related to the things we do to prevent getting sick to begin with, wouldn't you say?

Let’s say that person A always makes sure they take their vitamins and tries to be healthy, but one day they catch a bad bug. Since they thought they were safe because of all of their preventative measures, when they actually do get really sick, they don’t have any remediation medicines in their house to help reduce the impact of the infection, and in the end have to go to the doctor to get medicine to help them recover.

Now let’s move to person B, who doesn’t really do a lot to prevent catching a bug, but when they do get one, they have a whole plethora of medicines in their household to help them recover without having to go to the doctor. The answer is simple in this case: both methods combined are the true path to take. You should always try not to get sick, but not to the extreme; likewise, you shouldn’t skip prevention entirely and just rely on being prepared if you do get sick. Try to stay healthy, but always know that you will probably get sick at some point and have a plan for when/if you do.

So, now let’s all go out there and be smart and healthy, but not naïve!

Wednesday, March 25, 2015

Baseline – Not the Dubstep You’re Looking For…

Disclaimer: I have no real-world ICS/SCADA system or security experience; I’m just a guy that takes in information and thinks about better ways to do things.

We've all heard about taking a “baseline” of your network environment so you have some way to gauge and detect anomalous behavior on your systems to, hopefully, help catch any type of malicious activity before it gets out of control, or at the very least, have a good idea of where to start when performing IR if there is a network breach. And, like most of us that already work in well-established networks, we know how difficult and time-consuming a task this would be. But in a well-designed ICS production environment, this might be a less traumatic experience than one might think.

First of all, if a production ICS network is properly segregated (as it should be), the traffic flowing over the network is really not that complex, because the protocols used aren't that complex. Modbus, DNP3 and most of the other ICS protocols out there operate on very small frame sizes and command lists compared to other network protocols, so while there may be a lot of traffic flowing back and forth between a controller and a device, the commands being sent are known and can generally be predicted based on their configuration and what action the system is designed to perform. In other words, you’re not going to see your Smart-Meter or valve controller performing Google searches or streaming YouTube videos unless something has gone terribly wrong! (Yes, that was a terrible joke.)
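
To illustrate how small that command surface is, here is a minimal sketch of whitelisting Modbus function codes against a baseline. The baseline set and traffic sample are assumptions for the example, not from any real capture; a real monitor would decode function codes from packet payloads.

```python
# Hypothetical sketch: because ICS protocols like Modbus use a small, fixed
# set of function codes, a simple whitelist baseline can flag anomalous
# commands. The baseline below is illustrative: codes 3/4 read registers,
# 6/16 write registers — typical controller/device chatter.

BASELINE_FUNCTION_CODES = {3, 4, 6, 16}

def flag_anomalies(observed_codes):
    """Return the function codes never seen during baselining, sorted."""
    return sorted(set(observed_codes) - BASELINE_FUNCTION_CODES)

# Codes 8 (diagnostics) and 43 (device identification) were not part of
# normal operation, so they stand out immediately:
traffic = [3, 3, 4, 16, 8, 3, 43]
print(flag_anomalies(traffic))  # [8, 43]
```

The same idea generalizes to DNP3 or any protocol with a bounded command vocabulary: the smaller the legitimate set, the cheaper the anomaly check.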

This being the case, an ICS production environment is ripe for baselining even if it’s already up and running (which most are). So this is the first step in beginning to secure an ICS production network, and there really isn't any reason why this shouldn't be happening right now in all major, and even minor, critical infrastructure environments. We all know this isn't the case, but it should be nonetheless. If a critical infrastructure company can get over this first, daunting hurdle, keeping this baseline up to date can actually become relatively easy in the future. Let me explain…

Once the major baseline is established and the network engineers know what normal traffic looks like and what might be abnormal, it becomes much easier to tune inline controls to recognize and flag real warning signs that something may be amiss in their systems. And at this point, maintaining this baseline can become very easy if the proper deployment controls are put into place when the engineers either have to replace a controller or deploy a new system.

I am sure (or hopeful, however you want to look at it) that when an ICS controller is replaced due to failure or through a system upgrade, or anytime a new piece of equipment is going to be introduced into the system, extensive testing occurs to make sure the new device is functioning properly before being deployed into the production environment. This is only logical and makes perfect sense, but it is also the time to not only make sure the operation and control of the system is well established, but also the perfect time to baseline the new system’s network traffic as it’s being run through all normal conditional testing. By imposing this new deployment protocol you can capture all of the communications of the system in its purest form and have a perfect baseline of what to expect its typical traffic to look like: from normal operational commands, to fault conditions, to extreme fail-safe actuation, or any anomalous traffic that might indicate that someone is trying to breach, or has breached, the network.
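
That bench-testing deployment protocol could be sketched like this, with the caveat that the device names, command labels, and flow-tuple representation are hypothetical; a real implementation would build the profiles from packet captures taken during each test condition.

```python
# Hypothetical sketch: record a traffic profile for each test condition
# (normal, fault, fail-safe) while the new device is on the bench, then
# compare live traffic against the union of all recorded profiles.

from collections import defaultdict

def build_baseline(test_runs):
    """test_runs: dict mapping condition name -> list of (src, dst, command)
    flow tuples observed during that test. Returns per-condition flow sets."""
    baseline = defaultdict(set)
    for condition, flows in test_runs.items():
        for flow in flows:
            baseline[condition].add(flow)
    return baseline

def unexpected_flows(live_flows, baseline):
    """Return live flows never observed under ANY bench-test condition."""
    known = set().union(*baseline.values())
    return [f for f in live_flows if f not in known]

# Illustrative bench-test captures for a new valve controller:
runs = {
    "normal":    [("plc1", "valve7", "write_coil")],
    "fault":     [("plc1", "valve7", "read_status")],
    "fail_safe": [("plc1", "valve7", "emergency_close")],
}
baseline = build_baseline(runs)

# In production, a flow to an unknown host with an unknown command is exactly
# the kind of anomaly the baseline is meant to surface:
live = [("plc1", "valve7", "write_coil"),
        ("plc1", "evil_host", "file_transfer")]
print(unexpected_flows(live, baseline))
```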

This can be helpful in any kind of network, but ICS/SCADA networks can greatly leverage this kind of process with more precision than any other network environment I can imagine, to great benefit not only to the company, but to the environment and to the consumers of the products produced by critical infrastructure companies.



Saturday, March 14, 2015

Of Cats and Security

So I was listening to Paul’s Security Weekly (@securityweekly) podcast last Thursday night when one of their guests, one Michael Santarcangelo (@catalyst), used the phrase, “Risk Catnip”. I almost fell on the floor laughing, as he weaved that phrase into his thought without any hesitation. It surprised everyone on the show and we all got a great laugh out of it.

The next day, since I loved that phrase so much, I decided to re-Tweet his phrase along with some other phrases ending with “catnip”. One of those phrases was “Threat Catnip”. A follower of mine by the name of @PeterGanzevles (Hacktic) replied with about the best response I believe I ever heard, he coined the term “Threatnip”, which got me thinking… (I know, I know, keep your jokes to yourself).


“Threatnip”, as it turns out, is actually a real thing and it’s used all the time as a lure to get executives to buy into Threat Intelligence products like reports, dashboards, blinky boxes and consultations. And much like catnip, once the prey has pounced on the lure and played around a bit, the thrill is gone, along with a considerable amount of money that could have been put to better use. Now I’m not saying that there is no use for Threat Intelligence; in fact, quite the opposite is true. But there has to be more than just the “Threat” part, because, as “Intelligence” implies, it must serve as a function of a continuous cycle of security posture improvement.

The moral of this short story is this: don’t be a “Threatnip” peddler, be a total solutions provider!

Here are some people that are much wiser than I on this subject:

Edward McCabe (@edwardmccabe):

John Berger

Rafal Los (@Wh1t3Rabbit)