Think of cybersecurity today as you would an 8-track tape player, and think of 8-track cartridges as the equivalent of Twitter, Snapchat, Instagram and Facebook. Then fast-forward about five years and imagine a digitized version of music delivery (as in iTunes) taking the form of passive social media embedded in an augmented-reality world.

Huh? Here’s what I mean: Many technology analysts predict that today’s concept of actively “posting” or “sharing” will be frowned upon in the future and will be entirely replaced by a passive stream of your life’s experiences, whereabouts, and media consumption. Andy Warhol’s droll prediction of 15 minutes of fame will be expanded to 24×7.

We will have a 24-hour channel of “you” that is always live (or automatically programmed); always accessible to your friends (or, if you were born in the age of transparency, post-2000, accessible to anyone); and always completely “authentic”. Any effort to actively post something will be seen as “manual editing” and broadly perceived as a major no-no. The quality of streams will be determined by the community and by algorithms, surfacing the highlights of your experience through machine learning, and the result will be assumed to reflect the “real” you as opposed to today’s “Facebook” you. The wisdom of crowds will enforce that authenticity by calling out clever fakes and workarounds.

In addition, we will all be riding a layer of augmented reality in which our experiences are enhanced by geo-centric assistance and suggestions for food, beverage, entertainment, transportation, relaxation, stimulation, elimination, learning, exercise, sleep, housing, shopping, clothing, etc., filtered through machine learning and/or crowd sourcing we have chosen so as to provide us with only the things we “like” and none of the things we “don’t like”.

These two emerging paths will merge to create a slew of social products and new forms of media advertising designed to entice not just you, the traveler, but also, like a natural viral infection, the people following you on your life’s journey. Whatever you are doing or consuming will become a catalyst for others’ discovery.

This means that today’s forms of paid user acquisition will become obsolete, replaced by “product and experience placement.” This will be great for you, because the prices you pay for products and services will be offset by the exposure you bring to the brands you use. The cooler you are, the bigger your network, and the better your conversion-from-viewers (CFV) numbers, the less your life will cost.

The relevance of social networks will be perishable and will rely entirely on context. They will move in and out of your augmented reality as they become useful. Say you take a trip to New York City in April: your social network will come to life, enabling Big Apple navigation, events, connections, restaurants, friends, hotels, etc., and then just as suddenly disappear when your trip is over, to be replaced by the next passive network infestation. Given the absence of manual editing, these networks will be trusted and will become an effective proxy for empathy and truth. We will fall in love with machine learning.

Setting aside whether you think this all sounds “amazing and awesome” or “nightmarish”, not only will your augmented reality be continually under siege from advertising and product-placement wars but, more importantly, you will be even less able to distinguish truth from fiction than you are today. Our current ad blockers may advance in tandem with conventional advertising and product-placement technologies, but ideas will be more difficult to deal with than products, machine learning or otherwise.

If our inability to attend to the cybersecurity issues around IoT to date is any indication, our future augmented-reality platforms will become giant petri dishes for fraud and misdirection. Imagine what happens to your awareness if you receive all of your “information” from Fox News, or conversely from MSNBC. It is one thing to have a million records containing personally identifiable information stolen from a company’s databases via a breach; it is quite another to be able to continually influence the purchasing decisions of billions of consumers. A product marketing manager’s dream come true? Sure.

But the implications are obviously much greater and widespread and elevate the issue of cybersecurity to a different level.

I am sure we will all be able to install adaptive artificial intelligence, combined with instant crowd-sourced filtering, that will override unwelcome parts of our augmented-reality experience, and these defenses will work fine right up until the moment the bad guys figure out how to get around them. In today’s world, that usually takes about 30 days, and I see nothing in the way of technological advances that will lengthen that cycle.

The recent technology advances that have enabled this rapid evolution toward a new world of spontaneous and copious information, served up through our augmented-reality platforms (our iPhone as today’s version of the 8-track player), are exciting and loaded with opportunity for consumers, entrepreneurs, and capitalism as a whole.

It would be useful, however, if we could slow things down a bit and seriously address the cybersecurity risks associated with this direction before we plunge ahead. Because if we don’t, I am not so worried about bad guys influencing consumer behavior or even global politics as I am about our own government finding the rationalization to swoop in and “protect” us all by erecting yet another institution to regulate our collective behavior.

Whether it’s the future of passive social media and augmented reality, the present state of IoT defenses, our immediate inability to protect our national infrastructure (vis-à-vis the October 21st DDoS probe), the sensitive data that resides in most small and medium-sized businesses, or the medical and surgical devices in hospitals and treatment centers, we need to take the issue seriously and begin to implement technology and service solutions that can mitigate these attacks and deal with them appropriately.

If we don’t start sending that message now, we will be forever doomed to this cycle of probe, attack, breach, exfiltrate and conquer with no end in sight. And the really big prizes for able cyber-criminals are waiting in the wings.



In spite of headlines that might lead us to believe that security breaches are the result of external hackers (Russia) attacking our perimeter defenses, or of the continued failure of all our advanced technology, the majority of breaches actually occur due to some action or failure of someone inside the enterprise.

IBM’s recent 2016 Cybersecurity Intelligence Index found that over 60% of all attacks were carried out by insiders. Of those attacks, over 75% involved malicious intent, and the balance were due to inadvertent mistakes. Of all the industries studied, manufacturing, financial services and healthcare came in as the top three, owing to their stores of personal data, intellectual property, physical inventory, and massive financial assets under management.

Regardless of the differences in assets defended and regulatory requirements, the common denominator among all these industries was people. They all had employees and each of them represented some form of an insider threat.

There are three primary types of insider threats:

  • Joyce in accounting. Cyber criminals posing as trusted employees through hijacked identities easily compromise corporate systems via phishing email attacks enhanced by social engineering. It happens multiple times every day and, given the regulatory fines at stake, it is rarely reported.
  • Just plain dumb. Human error is a major factor and a consistently recurring theme in most breaches. It comes in the form of mis-addressed emails, lost devices, and sensitive or confidential data sent to insecure home systems, as well as well-intentioned system admins whose complete access to corporate systems can amplify a small mistake or sit at the root of a privilege compromise.
  • A thief named Bob. With the advances in and availability of exploit kits and malware on the Dark Web, anyone can become a script kiddie these days, and almost all of us have a price. Disgruntled and even otherwise complacent employees are easily corrupted [I will probably get flak for this but …], and the threat of a malicious employee whose intent is to steal or damage is a very real risk today. At stake are competitive and secret intelligence, proprietary data in the form of algorithms, formulas, designs, plans, drawings, code, etc., sensitive employee PII or PHI, and high-value market intelligence. And some employees, contractors, spouses and subcontractors may just have a vendetta against the enterprise.

Not only can malicious actors erase evidence of their activities and presence, their access privileges gain them ingress to trusted systems in ways that fly under the radar of even the most advanced technologies. Before anyone gets upset here, I realize there are [and we have partnered with] some very cool technology solutions that specifically address insider threat in a way that can isolate, identify and apprehend the actor in process, based on behavioral analytics and machine learning. But even with this assist, managers still need to be aware of certain behaviors and of ways to focus their security efforts to get the greatest returns on these defenses:

  • The Holy Grail. We often fail to properly identify the assets at greatest risk and give them the most rigorous protection and monitoring. The bad guys aren’t really interested in your annual 10-K; they want those engine designs, and affording each the same level of protection may not be the best strategy.
  • Assessing Employee Access. While it may not be politically correct, implementing a tiered monitoring of all users with a particular focus on those with the broadest and most authoritative access is probably a good idea. These would include system administrators, key product developers, contractors, suppliers, vendors, and top level executives.
  • Block and Tackle.
  1. Implementing the automatic application of software patches will close holes that hackers can use to access your network.
  2. Developing, implementing and enforcing strong policies for user identities and passwords will make stealing credentials much harder.
  3. Continually collecting and monitoring data on every device that touches your network makes sure that you will be the first to know if you’ve been hacked and the forensics will tell you exactly where and by whom. Anyone not running a SIEM/SOC these days is asking for trouble.
  4. Developing and implementing continuous user training, education and awareness programs are the key to reducing and even eliminating the “just plain dumb” insider mistakes. An ongoing program of testing against spoofs and fake exercises goes a long way to increase your employees’ cyber-situational awareness at a disproportionately low cost compared to the potential risk reduction.
  • Implement Behavioral Analytics. The nice thing about insider threats is that they depend on people, and people are creatures of habit. As a result, anomalous behavior is fairly easy to spot by analytics engines set to monitor those behaviors, built on adaptive machine learning that makes them smarter over time. User and event behavioral analytics are also really good at spotting policy violations that may not be associated with malicious behavior, the detection of which may improve your overall security landscape as a by-product.
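To make the habit-based idea concrete, here is a minimal sketch of the baseline-and-deviation logic underlying such analytics engines. The data, function name and z-score threshold are hypothetical illustrations, not any vendor’s actual implementation; real user-behavior analytics products model far richer signals (hosts, data volumes, privilege use) and adapt their baselines over time.

```python
import statistics

def is_anomalous(login_hours, new_hour, z_threshold=3.0):
    """Flag a login whose hour of day deviates sharply from a user's
    established baseline, using a simple z-score test."""
    mean = statistics.mean(login_hours)
    stdev = statistics.pstdev(login_hours)
    if stdev == 0:
        # No variation in the baseline: any different hour is suspect.
        return new_hour != mean
    return abs(new_hour - mean) / stdev > z_threshold

# A user who habitually logs in around 9 a.m.:
baseline = [9, 9, 10, 8, 9, 10, 9, 8]
is_anomalous(baseline, 9)   # in line with habit -> False
is_anomalous(baseline, 3)   # a 3 a.m. login stands out -> True
```

The same pattern, applied per user across many behavioral dimensions, is what lets these engines surface the insider whose activity suddenly stops looking like a creature of habit.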

So, the next time we see a headline about a breach, let’s keep in mind that external attacks represent the minority of breaches and that the actor probably had insider help, whether in the form of an unintended identity share or outright collusion.

There is a lot you can do to make sure your company isn’t part of the next headline.



A team of researchers from NIST and the Institute of Electrical and Electronics Engineers published the results of a recent survey of end users where they discovered that a vast majority (over 94%) reported feeling “overwhelmed and bombarded, and tired of being on constant alert, adopting safe behavior, and trying to understand the nuances of online security issues.”

The multidisciplinary team of researchers found that users’ weariness led to feelings of “resignation, loss of control, fatalism, risk minimization, and decision avoidance, all characteristics of security fatigue.” In turn, that made them prone to “avoiding decisions, choosing the easiest option among alternatives, making decisions influenced by immediate motivations, behaving impulsively, and failing to follow security rules” both at work and in their personal online activities including banking and shopping.

On the other hand, a surprising majority of respondents (78%) also expressed skepticism that they would ever be targeted by hackers. “The data showed that many interviewees did not feel important enough for anyone to want to take their information, nor did they know anyone who had ever been hacked.”

The cognitive psychology researchers called the findings “critical,” and concluded that if people can’t use security, they won’t, and then neither we nor our nation will be secure.

And those findings appear to be at the crux of the problem. Every security professional I know will tell you that our biggest threat is users (that would be people) who cannot or will not abide by policies, procedures and best practices.

However, in many cases it is the policies, and not the users, that form the core of the problem, as these are often designed without any consideration for the user experience, in a vacuum of appreciation for how ordinary workers go about their day and whether it is reasonable to expect them to perform in certain ways. In many ways we have, with the best of intentions, set up conditions that are guaranteed to fail.

Those who know me will know that I go right to BYOD: a corporate policy that, while intended to give remote workers online access to networks and systems through a single (personal) mobile device, has created one of the broadest attack surfaces most companies will ever experience. Implemented without proper controls and security tools in place, BYOD equips employees with instruments of destruction regardless of whether use policies are adhered to.

That single example is one of many that, combined with user security fatigue and carelessly developed use policies, are leading to even greater threat exposure than we have enjoyed so far.

While we have not done the best job we could have with the tools and software available so far, we are further compounding the risk exposures through poorly thought out policies and practices, a lack of training and situational awareness and a failure to properly frame the risk responsibility that each employee must assume as part of their everyday activities.

While I am never a fan of Pollyannaish approaches to security issues, the study rightfully suggests three ways employers can try to alleviate security fatigue and help users maintain secure online habits and behavior:

1)     Limit the number of security decisions users need to make;

2)     Make it very simple for users to choose the right security action; and

3)     Design processes that encourage and maximize consistent decision making.

Simple enough, right? Of course not, but those three points are a good place to start. I’m not sure we have given any of this much thought, however, even though we probably don’t need a survey to tell us what we all feel ourselves. It has become increasingly difficult to remember 25-30 passwords or to keep track of which systems we have used for which purposes over time.

I am told that the researchers will continue their work, and will next interview professional computer users of varying levels of responsibility, including cybersecurity professionals, mid-level employees with responsibilities to protect personally identifiable information in fields such as health care, finance and education, and workers who use computers but for whom security is not their primary responsibility.

I am pretty sure I know what they will find.



Zero percent.

That’s the current unemployment rate in cybersecurity.

Twenty jobs open for every qualified candidate. Over 1.4 million open positions listed as “Information Security Analyst.”

Less than 10% of people employed in the information security field are women.

The average compensation for security analysts in California is $129,000 annually. The average pay for a CISO here in California is now above $430,000.

Market drivers:

1)     The volume of breaches and incidents of compromise increases every month, and the numbers reflect only those that are reported, which are estimated to represent only 40% of actual incidents.

2)     The complexity and sophistication of the attacks is growing almost exponentially. The bad guys get smarter, while the good guys struggle with bureaucracy, confusing point solutions, a dysfunctional vendor market, budget constraints, resource constraints and technical inadequacy.

3)     The speed with which the attack vectors morph is astounding and the emergence of the Cloud and the wealth of data shared online make it easier than ever for malicious actors to discover security weak spots and create new attack vectors much faster than we can recognize and identify them, let alone concoct a remedy.

4)     While there are many qualified and smart people working in network and systems administration who could easily make the leap into a security analyst role, we concentrate instead on seeking Unicorns from the outside to help us save the day. The job descriptions are ridiculous.

Cutting to the chase, why don’t we stop writing job descriptions with a huge range of skill sets that even most CISOs don’t have and instead look inside to promote or move some of our capable existing resources over into these roles and get them the training they need to come up to speed?

I am quite certain that we could train a bright system admin in less time than it would take to recruit and onboard an experienced security analyst. And that assumes we could find one, had the budget to hire one without creating all of the attendant jealousy, disruption and expectations, and were actually cool enough to attract the candidate we seek.

After all, why should Mr. DefCON Ninja choose to work for Crapbotics in Sunnyvale?

And the only reason we don’t hire a Ms. DefCON Ninja is that there aren’t any. There are, however, lots of women working in data privacy who could easily make the transition. We need to immediately abandon the notion that the ideal Unicorn candidate is a seasoned male IT professional with a host of credentials. Women make better IT analysts anyway [IMHO].

It might also be instructive to take a look at the actual job duties being performed by the majority of resources in the field. Many InfoSec teams spend much of their time reporting or manually entering data rather than dealing with security issues in the first place. Do we really need a CISSP or CEH cert to fill out a spreadsheet? We could reduce the size of the problem space by examining exactly what we need from these people before we rush to market.

A functional cyber-security program needs a leader, and that is probably someone who is either certified as a CISO or has the hardball-experience equivalent. It also needs one or two trained and/or experienced security analysts who can actually distinguish a real threat from a set of false positives and can evaluate outside vendor products and services against their own requirements. I am assuming that the typical struggling company here is not trying to create its own cyber-security solution, as that would be simply stupid [to be explained in detail in another post].

And assuming that the leader is positioned in the company where she should be [working with or for the IT leader and not above or separate from], that person needs a solid background in IT [probably a former IT manager] and a solid background in technology [probably a former programmer or network admin]. Management experience and social skills are a must as dealing with a wide variety of confused and frustrated executives is definitely part of the job description.

The technicians could qualify in a few short weeks through ethical hacker and CISSP certification, and with a minimum of vendor assistance they could wrap their brains around the issues and the solution architecture. One of the very best sources of technical knowledge is the CTO of your favorite security product or service vendor.

In order to satisfy some of the more formalized requirements embedded in audit or regulatory compliance issues, you don’t need to hire a seasoned CISO. There are lots of CISO-on-Demand and Virtual-CISO options available on the market and for a few thousand dollars, you can craft your compliance program and audit program so as to assure that your boxes are checked and your review schedule is solidified in ways that protect you from fines and may actually help you defend yourself as well.

A huge by-product is that your team members learn in the process all they need to know about the issues so that if you are able to retain them, you will be in good shape for the next go-around.

You can also hire an experienced CISO on that same temporary basis to work with your team to craft your own security program and create the actual policies and again, the experience rubs off, so your home team will be stronger for the future.

Until you get your basic cybersecurity hygiene up to par, even the best and most experienced security specialists will be constantly tasked with fighting fires and solving basic InfoSec problems. This is the stuff your network guys already do on a regular basis; today it’s called resolving and recovering from a network outage. What’s the difference, to a network engineer, between that and sorting out breach detections? Just a few new tools and a crash course in identity.

Of course, the last assumption in my thesis is that you will opt for automation wherever you can find it.

Whether you have a team of experienced and expensive security analysts or a rag-tag group of hastily trained converts from sysadmin, you don’t want them chasing down log events and trying to correlate historical evidence of intrusions with current network activity and sorting through false positives all day long.

You want to be able to leverage data analytics by making small investments in network behavioral tools and end-point detection technologies along with an external SOC/SIEM to assist with your program. This will always be way-y-y-y-y cheaper than hiring more security pros.

This will also allow you to establish a balance between reactive and proactive security so that your new team of internal security pros augmented by an outside consultant can improve your overall cybersecurity health.

You don’t have to be a victim of this current skills shortage.

With an honest evaluation of your current team, you can probably solve most of your InfoSec hiring problems, create a great career ladder and succession path for your employees, avoid the resentment and jealousy that would result from onboarding experienced, expensive InfoSec pros, and focus on your next problem, which will be the retention of your newly minted and suddenly marketable security analysts.

You Can Run But You Can’t Hide


Running works for a while but hackers can, and will, find you. Every time we think we have outsmarted the little devils, they whip out another workaround and we become toast.

A classic example is biometrics.

Many cybersecurity analysts recently got excited about facial recognition technology. Finally, a silver bullet appears; but zap, some enterprising security researchers just demonstrated a particularly disturbing new method of stealing a face, this one using 3-D rendering and Internet stalking.

Earlier this month at the Usenix security conference, researchers from the University of North Carolina at Chapel Hill presented a system that uses digital 3-D facial models, built from publicly available photos and displayed with mobile virtual reality technology, to defeat facial recognition systems four out of five times. One out of five would have been plenty, but this is a grand-slam homer.

Biometric facial recognition systems use motion and depth cues to identify their targets, so a flat two-dimensional photo won’t pass the snicker test. But a virtual-reality-style face, rendered in three dimensions, can provide the magic stuff these systems look for. And if you can port it to a smartphone’s screen, so much the better, which is exactly what the researchers did.
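As a toy illustration of why depth cues alone are beatable (the depth maps, function name and threshold below are invented for the example, and are far simpler than any real liveness check), a naive depth-variance test rejects a flat photo but happily accepts anything with plausible depth variation, including a rendered 3-D model:

```python
import statistics

def passes_depth_check(depth_map, min_variance=0.01):
    """Naive liveness test: reject inputs with (near-)uniform depth.
    A flat photo fails, but any input with realistic depth variation
    passes, which is why a VR-rendered head can slip through."""
    depths = [d for row in depth_map for d in row]
    return statistics.pvariance(depths) > min_variance

flat_photo = [[1.0, 1.0], [1.0, 1.0]]      # uniform depth: a printed photo
rendered_head = [[0.8, 1.0], [1.1, 1.3]]   # varied depth: real face OR 3-D render

passes_depth_check(flat_photo)     # False: rejected as a spoof
passes_depth_check(rendered_head)  # True: indistinguishable from a live face
```

The check can only distinguish "flat" from "not flat"; it has no way to tell a genuine face from a convincing 3-D fake, which is the gap the UNC attack exploits.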

The researchers, of course, used Facebook as their source (aka the new public library of biometric data), and they went about collecting images of their 20 volunteers the way any Google stalker might: through image search engines, professional photos, and publicly available assets on social networks like LinkedIn and Google+ in addition to Facebook. They were able to collect at least 3 and as many as 27 photos of each subject.

One of the researchers pointed out that many of the study volunteers were computer science researchers themselves, and most had made an active effort to protect their privacy online. Nonetheless, the group was able to find at least three photos of each of them.

They tested their virtual reality face renderers on five authentication systems—KeyLemon, Mobius, TrueKey, BioID, and 1D, all of which are available from the Google Play Store and the iTunes Store and are designed for protecting data and locking smartphones.

To test the security systems, the researchers had the subjects program each one to detect their real faces. Then they showed 3-D renders of each subject to the systems to see if they would accept them. In addition to making face models from online photos, the researchers also took indoor head shots of each participant, rendered them for virtual reality, and tested these against the five systems. Using the control photos, the researchers were able to trick all five systems in every case they tested.

Using just the public web photos alone, the researchers were able to trick four of the five systems with success rates up to 85 percent.

This is bad news for the facial authentication systems that have been proliferating lately in consumer products like laptops and smartphones. Google announced earlier this year that it is planning to put a dedicated image-processing chip into its smartphones to do image recognition, intended in part to improve Android’s facial authentication, which has proven to be, well, a joke. In the same breath, Google warns, “This is less secure than a PIN, pattern, or password. Someone who looks similar to you could unlock your phone.” And if that is so, then why bother at all?

While the UNC researchers agree that it would be possible to defend against their attack, the question remains as to how quickly facial authentication systems will evolve to keep up with new and rapidly evolving methods of spoofing. New systems will probably need to incorporate hardware and sensors in addition to mobile cameras and web cams, which will probably be challenging to implement on mobile devices where the hardware footprint is highly limited.

But none of this seems to dissuade vendors from ramming these immature and untested products out the door, or proud early adopters from glomming onto them. Documented risks be damned.

Reminder:  In the Office of Personnel Management breach last year, hackers stole data for 5.6 million people’s fingerprints. Those markers will be in the wild for the rest of the victims’ lives. That data breach debacle, and the UNC researchers’ study, should clearly illustrate the troubling nature of cyber-security fixes in general and biometric authentication in particular.

When your fingerprint or your mug slips into the ether, there is no password reset button.


NSA Leak Spotlights Critical Cyber-Security Problem For Business


“The Only Thing More Dangerous than Ignorance is Arrogance” ~A. Einstein, part-time theoretical physicist

The recent NSA leak has revealed a set of critical security vulnerabilities in market leading network products from companies like Cisco, Fortinet and Juniper.

The code samples released by the Shadow Brokers this week proved that they indeed were able to steal sensitive National Security information from what is supposed to be the best protected government agency on the planet, the National Security Agency.

Up until now, the Obama administration has required that agencies reveal any vulnerabilities they discover exclusively to a White House review board before releasing any of that information to equipment manufacturers or software producers. The methods revealed by the hack have now been disclosed to the product vendors but, as of this writing, not all have produced patches for their hardware. This conceit puts every user of those products at high risk until a patch is developed and applied.

Security experts are hoping the government will see this as a teachable moment. Baloney.

United States law enforcement and intelligence agencies routinely purchase vulnerabilities unknown to manufacturers in order to hack into devices and develop their own lists of zero-days, resembling in a weird way a schoolyard game of “Neener, neener, I’m smarter than you are.” Or, alternatively, “It’s my F**ing ball, and we’ll play by my rules or not at all.”

The NSA will say that this “Vulnerabilities Equities Process” (VEP), which allows it to decide which zero-days to keep for offensive purposes, is meant to minimize risk by keeping the arsenal as small as possible. Which might be acceptable if we were fighting a war in which the battlefield were contained to some physical coordinates and the source of weaponry were clearly identified as, say, Berlin, where we could “spy” on production and manufacturing and then get a step up on our adversaries’ methods and techniques. Or if we at least knew who our adversaries were.

By the administration’s own admission, hoarding zero-days makes commercial computing products less secure. And it is not just the Shadow Brokers: anyone with even the most rudimentary understanding of the cyber-security landscape knows that other nations and cyber-gangs will be on to the same vulnerabilities at the same time as the NSA, or even before. The apparent belief that, because they are the NSA, they are smarter than the bad guys not only fails the snicker test, it sets up a false sense of security for the citizens the agency is chartered to protect.

The agency is supposed to be responsible for global monitoring, collection, and processing of information and data for foreign intelligence and counterintelligence purposes, and charged with the protection of U.S. government communications and information systems against penetration and network warfare. That ship sailed.

To make matters worse, the code samples offered by the Shadow Brokers appear to be from 2013. Regardless of their purpose in releasing the code (many suspect it was held by the Russian government and is now being dangled in public as leverage against the U.S. fingering Russia in the Democratic Party hacks), had the NSA been under disclosure orders instead of the current protocols, the leak might not have been the security fiasco it is now.

The fiasco is that the vulnerabilities affect arguably 80% of the global network install base, and because network infections typically dwell undetected for upwards of 300 days, it is possible that hundreds of thousands of networks are infected right at this moment. Cisco quickly provided a workaround for one of the two vulnerabilities and issued an advisory on the other, which was patched in 2011, in order to raise awareness among its customers. It doesn’t really matter that patches are being released. The damage is likely already done.

This leads to the inevitable questions related to IoT in the not so distant future. Should the NSA, NSC, FBI, or other government agencies be required to inform Apple immediately when they find a security hole? What if the subject of the investigation were a smart home alarm system instead of an iPhone? What if the vulnerability is in the infrastructure behind a city’s electrical grid, an airport communication system, a dam, a water treatment facility or a hospital network?

As an example of the dangers implicit in the VEP, the Heartbleed Bug, which was made public in 2014, was a serious vulnerability in the widely-used OpenSSL cryptographic software library. The bug reportedly impacted the security of two-thirds of the world’s websites. It was widely reported that the NSA had been exploiting the Heartbleed Bug for two years prior to it being made public.

More recently, on April 14, 2016, the FBI, for the first time, disclosed to Apple a vulnerability affecting some iPhones and Macs. However, Apple announced later that the problem had already been discovered and repaired nine months prior to the FBI’s disclosure. This delay in disclosure raises serious questions about the effectiveness and the veracity of the VEP.

When the top hacking outfit on the planet is itself hacked, we should be concerned that keeping backdoors secure isn’t going to work.

Whether the Shadow Brokers hacked the NSA or the code was removed from the NSA by the Equation Group, the Agency’s own hacking group (more on them later), it appears to be a closely held secret that the agency was simply unable to protect. It is probably obvious that the theory that “the good guys” can create an encryption doorway that only the right intelligence agency will be able to pass through is bogus. Instead, it will always turn out that any back door of this nature will be easily hackable by anyone with a ten dollar toolkit.

For Cisco, the reveal may represent an unpleasant flashback to 2014, when Edward Snowden’s leaks demonstrated that the NSA was intercepting shipments of its equipment to install spyware. Then-CEO John Chambers wrote a letter to Obama at the time, arguing that the NSA’s practices had compromised his business. “We simply cannot operate this way,” Chambers wrote. “We need standards of conduct…to ensure that appropriate safeguards exist that serve national security objectives, while at the same time meet the needs of global commerce.”

It seems like it is beyond time that the government stops “protecting us” and starts reporting vulnerabilities it finds or acquires while there is still time for us to protect ourselves.

But, I don’t know. Maybe I missed a memo.

No pontificating, but I think it was some guy named Lincoln who, while memorializing the sacrifices of war to ensure the survival of America’s representative democracy, mentioned that the “government of the people, by the people, for the people, shall not perish from the earth.”

We Are Better Prepared for a Zombie Apocalypse


Last week, a discussion panel of cyber security and electrical industry stakeholders examined what could be done to protect U.S. public utilities from cyber-attacks, and what steps could be taken during a high-risk event to mitigate the effects on the grid.

It turns out that we now rely on DoE regional coordinators in each of the 10 Federal Emergency Management Agency (FEMA) regions to work with first responders in the event of a natural disaster or a terrorist attack (which may be the same thing). The panel cited an agreement signed by the Secretary of Energy in February, which identified these individuals as points of contact to share information with the DoE and states in the event of an energy supply disruption, as an important step toward cyber-security preparedness. This would supposedly serve to improve information sharing and communication during critical response activities.

I don’t know about you, but this sounds a lot like the ads for LifeLock where the “security monitor” tells the bank manager, “Yep, it looks like a robbery.” Except those are supposed to be funny. This is not.

It gets worse. They went on to applaud the fact that they are working on preparedness exercises to be held by federal agencies and the private sector that would include annual studies on the risks and hazards that might affect the energy sector. And, we actually pay these people?

Someone should point out to this group that despite their heroic preparedness efforts, U.S. cyber security is not nearly as prepared as it appears. As Arthur House, commissioner for the state of Connecticut Public Utilities Regulatory Authority, warned, “The thing to remember about cyber security, we are far better on paper to take care of things than we are operationally. It’s not as if the president could turn to the secretary of energy in the event of a grid cyber-attack and say ‘turn it back on.’”

As we saw in the Ukraine power grid attack, the real problem faced by the Ukrainian security engineers was not just the initial strike on the grid but the coordinated strike vectors that disrupted restoration attempts immediately afterward. We are not even close to addressing, let alone planning for, a similar recovery disruption here.

It doesn’t take much imagination to conjure a scenario where an attack on the electric grid would be accompanied by an attack on our financial sector or another attack on our water supply at the same time. Or, simply an attack on our recovery efforts through brute force DDoS vectors against all of our FEMA sites and disruption of our communication protocols.

As recently as last year, Jeh Johnson, Secretary of Homeland Security, said, “I’m sure FEMA has the capability to bring in backup transformers. If you want an inventory and a number, I couldn’t give you that.”

That might be because, in fact, there is almost no such capability in the realm of large power transformers (LPTs). Even if we had them, as the STEP (Spare Transformer Equipment Program) people claim we do, how would we transport equipment weighing half a million pounds or more across interstate lines in a rapid response to a critical outage? According to FEMA representatives, as of this moment, that capability has never been tested.

LPTs are essential to the functioning of the grid. Because they are very expensive, only the largest and most profitable power companies can afford to keep backup transformers on hand. Because the transformers are custom-made, they are not easily interchangeable. Because the equipment is huge, it is not easily transported. Because these transformers are, on average, thirty-eight to forty years old, some of them were originally delivered by rail systems that no longer exist. Because the vast majority of LPTs are built overseas, it takes a very long time to replace them.

The federal response to Hurricane Sandy is an interesting case in point. In addition to hitting major sections of New Jersey and Long Island, Sandy flooded New York City streets, tunnels, and subways, effectively cutting off all electric power to Lower Manhattan.

They brought in power trucks, flown in from places as far away as California on DOD [Department of Defense] planes, to begin replacing the poles and the lines. At one point FEMA had about eighteen thousand people working in that area going door-to-door, bringing people food and removing them from unsafe buildings until they could get the power back on.

It took more than five days before any power was restored to Lower Manhattan, but 95 percent of New York’s customers did have their power back after thirteen days. Even with a relatively small emergency caused by a hurricane, thousands of homes were lost throughout the region and tens of thousands were rendered homeless.

Where, then, might you and I find advice on how to cope with the aftermath of such an attack?

Howard A. Schmidt, the former cybersecurity coordinator for the Obama administration, a principal in Ridge-Schmidt Cyber LLC, a Washington consultancy in the field of cybersecurity, and a board member of one of our technology partners, Taasera, says, “There is no answer.”

No government agency has guidelines for private citizens because, according to Schmidt, there’s nothing any individual can do to prepare. “We’re so interconnected,” he said, that in terms of disaster preparation “it’s not just me anymore: it’s me and my neighbors and where I get my electricity from. There’s nothing I can do that can protect me if the rest of the system falters.”

The electrical industry panelists agreed that best practices for cyber security protection include layered defenses, regulatory oversight, external third party assessments and internal governance. Excuse me?

As Ted Koppel points out in his book, Lights Out, it would be helpful if the political world would just accept that there are two permanent conditions that are going to affect future generations: one is the global scourge of terrorism, the other is the digital forevermore. Within that world of the “digital forevermore” lies the prospect of a catastrophic cyber-attack on one of the U.S. power grids.

And that is the existential reality that the new president faces. I hope he or she is up to the job.

Back From BlackHat, Oh My!


One major online reporter recently returned from the BlackHat Conference in Las Vegas with a list of what he thinks are the four cybersecurity topics that were rooting many conversations, both on the expo floor and among experts and analysts in the briefing rooms. If what he says is true, I now know why we haven’t made any progress in Cyber-security in the last two years.

The BlackHat Conference started out as both an opportunity to share research and to demonstrate the fragility of computing systems, and a chance to show off new tools and technologies to defend against threats. I have no idea what it is now.

This was, amazingly, the 19th year of this six-day event, which began with four days of intense training for security practitioners of all levels, followed by a two-day main event including over 100 independently selected briefings, exhibits and awards.

Let me explain why the four topics depress me.

First, Behavior Baselining.

This simple-minded notion is based on the idea that a good way to determine whether you have a network infection might be to establish a baseline of normalcy and then measure subsequent variations from that baseline over time.

Establishing a useful baseline requires around six to eight weeks of observation, both to capture the norms and to accommodate occasional one-offs and anomalies.
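In its simplest form, the technique reduces to a summary statistic plus a deviation test. A minimal sketch, where the metric (outbound bytes per hour), the training window and the threshold are all hypothetical:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize a training window (e.g. 6-8 weeks of a metric) as mean/stdev."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical metric: outbound bytes per hour observed during baselining.
window = [1200, 1350, 1100, 1280, 1330, 1250, 1190, 1310]
baseline = build_baseline(window)

print(is_anomalous(1300, baseline))  # False: within normal variation
print(is_anomalous(9800, baseline))  # True: large deviation gets flagged
```

Note that this sketch also exposes the weakness discussed below: if the infection is already generating traffic during the training window, its activity becomes part of “normal.”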

Three years ago DarkTrace emerged on the Cyber-security software scene with a revolutionary approach to network infection detection using just that process followed by some pretty cool detection technology. DarkTrace has successfully raised over $85m in venture capital and purportedly has 1,000 customers worldwide.

DarkTrace was dismissed by most security analysts for two reasons: One, the baselining would not be able to identify an infection that already existed at the time the baselining began nor would it be able to detect an infestation during the baselining period. Two, it generated a ton of false positives requiring tuning down the filters to such an extent that the true positives might get easily lost in the noise.

The point is not that DarkTrace is a bad product; in fact, we were their first American technology partner and I regard them highly. The point is that they and their technique have been around for three years now, and several followers and lookalikes have entered the market. So, to say that Behavior Baselining is one of the four hot topics at Black Hat 2016 is either indicative of a security community that has been napping for three years or just plain wrong. I’m hoping for the latter.

Second, Active Response.

This topic is at least an indicator that our sensitivities have swung toward detection and away from prevention, and that alone is a good sign of progress. The premise here is that as organizations get better at detecting threats, the number of alerts their systems create also increases. This results in what security operations center (SOC) managers refer to as alert fatigue. Systems like DarkTrace don’t help. Due to the inability to respond, breaches persist for long periods of time. The Democratic National Committee hack is a good example of a long-term resident infection.

Active response is suddenly a hot topic when we and others like us have been developing both human and automated processes that enable our ability to respond to an attack as soon as it is detected within the monitored environment. For 3 years.

This reporter outlines processes that include communication with secondary systems such as a ticketing system, or collecting additional data, or an automatic configuration change such as modifying a firewall to block communication with a bad actor. This is neither rocket science, nor should it be a new revelation.
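Those processes reduce to a simple dispatch: match an alert against a policy, take an automated action, and record it for a human. A minimal sketch, with all field names, thresholds and the blocklist entirely hypothetical:

```python
# Hypothetical stand-ins for a firewall blocklist and a SOC ticket queue.
blocked_ips = set()
tickets = []

def handle_alert(alert):
    """Route a detection alert to an automated action and a SOC ticket."""
    actions = []
    if alert.get("confidence", 0) >= 0.9 and alert.get("remote_ip"):
        # e.g. push a deny rule to the firewall for the bad actor
        blocked_ips.add(alert["remote_ip"])
        actions.append("blocked " + alert["remote_ip"])
    # Always open a ticket so a human sees what was done automatically.
    tickets.append({"summary": alert.get("signature", "unknown"),
                    "actions": actions})
    return actions

handle_alert({"signature": "C2 beacon",
              "remote_ip": "203.0.113.9",
              "confidence": 0.95})
print(blocked_ips)  # {'203.0.113.9'}
```

The design point is that the automated step and the human notification happen together, which is exactly what relieves alert fatigue without removing oversight.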

What we should be talking about is improved machine response and artificial intelligence applied to the response mechanisms. It is hard for me to believe that active response is a hot topic in 2016.

Third, Security Analytics.

This is where we have to shout out a loud, C’mon Man!

He says that identifying trends and patterns in an organization is a good starting point to mitigate systemic problems as well as identifying threats and that there is a clear need for security and IT teams to use analytics to broaden their security and operations insights.

Security analytics have been around forever. They are better now than they were, but so are most things. This topic should have been extended to applied UBA (user behavior analytics), where we look for corollaries and use abductive reasoning algorithms to detect suspicious behaviors or to improve access authorities in complex systems.

He describes security analytics as data analysis across multiple sources of data, often log data enriched with non-log data such as threat intel, in order to provide actionable knowledge to security analysts and managers. There are over 20 such systems on the market, and in addition most major software products have embedded analytical capabilities into their threat-detection suites to provide just this capability. Again, not new technology and not new applications.
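The enrichment he describes reduces to a join between log events and an intel feed. A minimal sketch, with both data sets fabricated for illustration:

```python
# Fabricated threat-intel feed: indicator -> context.
threat_intel = {
    "198.51.100.7": "known botnet C2",
    "203.0.113.44": "credential-stuffing source",
}

# Fabricated connection log events.
log_events = [
    {"src": "192.0.2.10", "dst": "198.51.100.7", "bytes": 4096},
    {"src": "192.0.2.11", "dst": "93.184.216.34", "bytes": 512},
]

def enrich(events, intel):
    """Yield a tagged copy of any event whose destination appears in the feed."""
    for e in events:
        tag = intel.get(e["dst"])
        if tag:
            yield {**e, "intel": tag}

hits = list(enrich(log_events, threat_intel))
print(hits)  # one event, tagged "known botnet C2"
```

Every commercial analytics product does some version of this join, usually at much larger scale and across many more indicator types, which is exactly why it is not new.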

The place where we should be focusing security analytics is in IoT and in ICS and SCADA infrastructure, because it is there that we can get the best leverage for both vulnerability management and detection. And God knows we’re going to need it.

Finally, Public Key Cryptology.

I frankly have no idea why this topic is even relevant today. Beyond the fact that cryptography is embedded in most of the software and hardware systems that form the core of our financial systems and healthcare systems and has been leveraged by ransomware attackers, public key cryptology seems so old school that I am shocked it is even topical at this event.

We all know that public-key ciphers have never seriously challenged secret-key ciphers as techniques for encrypting large amounts of data, in part because they are much slower. It is also well publicized that the public-key encryption process computes a mathematical formula over plaintext, and attackers have exploited the mathematical nature of public-key encryption to uncover data in raw form.

Weak public keys have also fallen to successful brute-force attacks that break them and recover the corresponding private keys, which are subsequently used for masquerading during network attacks.
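The brute-force point is easy to demonstrate at toy scale: when an RSA modulus is small enough to factor, the private key falls out directly. A sketch using a textbook-sized key (real keys use 2048-bit moduli precisely so that the factoring step is infeasible):

```python
def factor(n):
    """Trial division -- only feasible because n is tiny."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

n, e = 3233, 17          # toy public key: n = 53 * 61
p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # recovered private exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)          # "encrypt" with the public key
print(pow(cipher, d, n) == msg)  # True: private key fully recovered
```

The entire attack is three lines once the modulus is factored, which is why key length, not the algorithm itself, carries all of the security.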

These are old and well-documented problems that have restricted ways that public-key encryption can be used safely.

One BlackHat training on public-key cryptology describes a focus on drawing out the foundations of cryptographic vulnerabilities and cryptographic exploitation primitives such as chosen block boundaries, and more protocol-related topics, including how to understand and trace authentication in complex protocols.

I’m sorry, but in my humble opinion if you haven’t got a solid handle on why you shouldn’t be using public-key cryptology by now, we are in deeper doo-doo than I thought.

So, there you are. Four topics from one of the premier conferences on cyber-security on the planet, and we are talking about three-year-old issues, technologies and approaches to solving very real, very current and very severe problems. And none of it is new.

The next time I scratch my head and tell you how confused I am by our lack of progress, please refer me to this blog post.

The Dark Overlord: One Bad Dude.


A New Twist to Healthcare Cyber-Attacks, and It’s Not Just Healthcare.

The recent cyber-attack on Banner Health, which was reported on August 3 and appears to have compromised the data of 3.7 million individuals, will likely stand as the largest healthcare data breach reported in 2016, and we are barely past the midpoint of the year.

What is unique about this attack apart from the sheer volume of records stolen was the attack vector; one not used before in the healthcare sector but hugely popular in retail. Banner Health says the breach started when attackers gained unauthorized access to payment card processing systems at some of its food and beverage outlets which led to direct access through the administrative network to the entire PHI database.

The obvious big red flashing light here is that the two networks were connected … as in, not separated.

Rebecca Herold, CEO of The Privacy Professor and co-founder of SIMBUS360 Security and Privacy Services, says breaches involving payment systems at healthcare organizations are frequently undetected. “Such systems are often maintained separately from the rest of the network, and often with the heavy involvement of the vendor who is supporting the systems. The POS systems have been shown to be notoriously lacking in strong security protections – yes, even when they have passed all PCI DSS [Payment Card Industry Data Security Standard] requirements.”

As we have reported repeatedly in the past, the Dark Overlord, who has claimed to have breached the databases of a number of healthcare entities, grabbing about 10 million patient records that he’s offering for sale on the dark web, may have struck yet again.

Previously specializing in ransomware for cash, the Dark Overlord has lately switched to a more remunerative resource: stolen PHI records. Among the healthcare providers that have recently confirmed cyberattacks by the Dark Overlord is Athens Orthopedic Clinic in Georgia, which reportedly lost 1,500 patient records after missing a Dark Overlord “ransom” deadline.

This is one bad dude. And, he is now claiming a new victim: a large healthcare software developer.

His advertisement went up on July 12 on The Real Deal, an online bazaar for stolen data, fake IDs and drugs. He is offering for sale what he claims to be the source code, software signing keys and customer license database for a Health Level Seven interface engine, a type of middleware that enables different kinds of software applications to exchange information. HL7 is a set of standards describing how electronic health information should be formatted.
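At its core, an HL7 v2 interface engine parses pipe-delimited messages into segments and fields before translating them between systems. A toy sketch with a fabricated message (a real engine also handles component separators, escaping, repetition, and the MSH field-numbering quirk):

```python
# Fabricated two-segment HL7 v2 message: a message header (MSH) and a
# patient identification segment (PID). Segments are separated by \r.
sample = (
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|20160803||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||123456||DOE^JANE||19700101|F"
)

def parse_hl7(message):
    """Return {segment_id: [fields]} for each segment in the message."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields[1:]
    return segments

msg = parse_hl7(sample)
print(msg["PID"][4])  # DOE^JANE -- patient name, components joined by '^'
```

The value (and the danger) of the stolen source code is that an engine like this sits in the middle of every clinical data flow: whoever controls it sees every message it translates.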

In an interview over encrypted instant messaging, he declined to name the U.S. software company. Many vendors sell HL7 interface engines as part of their products. He also declined to say how he was able to compromise the company, but claimed he gained root-level access – meaning total administrative control – to its servers.

The Dark Overlord claims he also obtained the software’s signing keys. Software applications are usually “signed” with a digital signature, which then can be verified to ensure that a new version hasn’t been tampered with. Software companies guard those secret keys carefully. If stolen, an attacker could insert spying code into the application and sign it with the private key, making the modification of the code appear legitimate.
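A minimal sketch of why stolen signing keys are catastrophic; HMAC stands in here for a real asymmetric code-signing scheme, and the key and payloads are hypothetical:

```python
import hashlib
import hmac

def sign(key, payload):
    """Produce a signature over the payload with the vendor's secret key."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(key, payload, signature):
    """Accept the payload only if the signature matches (constant-time compare)."""
    return hmac.compare_digest(sign(key, payload), signature)

vendor_key = b"vendor-secret-signing-key"
release = b"legitimate application v1.0"
sig = sign(vendor_key, release)
print(verify(vendor_key, release, sig))  # True: genuine build verifies

# An attacker holding the stolen key can sign a backdoored build,
# and it verifies exactly as the genuine one does.
backdoored = b"application v1.0 + spying code"
print(verify(vendor_key, backdoored, sign(vendor_key, backdoored)))  # True
```

The verification step cannot distinguish the two builds, which is precisely why signing keys are guarded so carefully.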

Our Dark Overlord buddy claims there are two target buyers for this data. One, a smaller country outside the United States that may be looking to purchase a complete package for a fair price and use it in its own development or retail it directly after compilation. Or two, someone with nefarious intentions who would use the keys to push a backdoor to the original customers of the victim company.

Over the last several weeks, The Dark Overlord has placed three other batches of data up for sale on The Real Deal: 48,000 records apparently from a clinic in Farmington, Mo.; 397,000 records allegedly from a healthcare provider in Atlanta; and 9.3 million records allegedly from an unnamed health insurance provider.

The Farmington breach victims have corroborated his story, and he has also provided additional information from that breach, including scans of driver’s licenses and insurance cards. The clinic has not responded to repeated queries.

Of the 165 major healthcare data breaches – not yet including the Banner Health attack – added to the Department of Health and Human Services’ Office for Civil Rights’ “wall of shame” tally so far this year, 51, or nearly a third, are listed as hacking incidents, representing 2.8 million individual records.

As of Aug. 5, the OCR tally of major health data breaches listed 1,624 incidents affecting a total of 159.2 million individuals since federal regulators began keeping track in September 2009. And while hacker incidents represent less than 13 percent of the total breaches, those incidents account for an astounding 74 percent of the individuals affected. So, where are those records going and for what purpose?

Healthcare records contain the most valuable information available, including Social Security numbers, home addresses and patient health histories — making them more valuable to hackers than other types of data. Stolen credit cards go for $1-$3 each. Social Security numbers are $15. But complete health care records are a gold mine, going for $60 each. Medicare records, which are rarer, start at around $400 each. The reason they are so valuable is because criminals can use such records to order prescriptions, pay for treatments and surgery and even file false tax returns.

With a common healthcare record, you can basically own a person. You have all the information necessary to create a new account and fake an entire identity.

The greatest threat to the healthcare industry today is not from one-off hackers seeking quick paydays, but from organized gangs and foreign governments that can store intimate personal health data for future use against individuals.

For example, hackers last year stole the records of about 80 million customers of Anthem Inc., the second largest U.S. health insurer.

The presumption was that they were state actors, and the purpose was to harvest the database in order to create a dossier of individuals that they could use for social engineering for future attacks.

In addition, foreign governments could use healthcare information to target government employees with emails containing notices related to medical conditions they may have. When a targeted individual opens one of those emails, malware infects his or her desktop computer and heads right into the network.

The research firm Forrester recently predicted that hackers would release ransomware specifically directed at medical devices in 2016. The Independent Security Evaluators study showed that through both physical USB plants and remote attacks, hackers could take over heart defibrillators, insulin pumps and machines that emit radiation.

Cyber security in hospitals is struggling to keep up with these threats. In addition to my own view, which has been repeated ad nauseam herein, other security experts like James Scott argue for more investment in security systems and personnel at hospitals. Scott’s think tank recently issued a paper that also calls for better security among medical device manufacturers, but the real problem, according to the paper, is the Food and Drug Administration, whose policies don’t go far enough to make sure device manufacturers are proactively addressing cyber security issues.

The agency’s voluntary guidelines are “just standards, not regulatory,” says Scott. “It’s like, ‘Do it, don’t do it, whatever.’ It’s a ho-hum mentality.”

The Dark Overlord claims to have compromised some organizations using a zero-day vulnerability in Remote Desktop Protocol, which is implemented in many remote access clients (see our most recent post). It’s actually more probable that the attacks have succeeded due to weak passwords and RDP clients that are accessible over the internet.

It’s not just a healthcare problem. Critical infrastructures from utilities to traffic lights to municipal personnel databases are fumbling through the same jungle of cyber security unknowns. And as more and more of our physical world becomes networked and connected to the internet–the embedded sensors in our streets, the Internet of Things in our kitchen appliances, the “smart” cities all around us–there’s a sharply growing potential for cyber-attacks that have not just digital but dangerously physical ramifications as well.

And massive health data breaches are not going away anytime soon. In fact, they will get worse. As hackers become more sophisticated and organizations continue to fail to even catch up, we will see more and more reports of these types of breaches and escalation of the impacts. PHI will continue to bring high value on black markets and more of it will be stolen.

Until everyone places a higher, determined and ongoing emphasis on cyber-security, our personal healthcare data along with all other forms of stored PII will continue to remain at risk.

And, soon our interconnected physical world will start to make headlines as attacks are successfully aimed at critical infrastructure in healthcare, energy, transportation and defense.

Cyber-Crime Outpaces Cyber-Defense

Just when you thought it couldn’t get any easier, cybercriminals have received a new gift that lowers barriers to entry even further: a newly re-launched Russian website that makes it easy even for less technically skilled individuals to become cybercriminals. It handles everything one needs to run an online store, including anonymity and security, payment services, website design, and protection against DDoS attacks, all of which allows individuals with low and even non-existent technical skills to set up a cybercrime shop, and all for only $8/month (same as Hulu).

The service has quickly amassed over 25,000 subscribers who have earned a total of 253 million rubles or about $3.8 million US, and the most interesting thing about this service is that it is readily available on the surface web, the first of its kind that doesn’t hide down in the depths of the dark web. This is clearly a thumbing of the nose gesture on the part of the Russians aimed at US attempts to counter cyber-crime and economic insurgency.

Operating on the surface web, however, doesn’t preclude the platform from hosting nefariously illegal storefronts, including one used to sell hundreds of millions of compromised user accounts from LinkedIn, Myspace and Twitter. In fact, a majority of the sites hosted on the platform specialize in social media accounts registered by bots, stolen credentials, coupons for services that provide social network followers, and accounts for banking and other services that are directly monetized.

This is one of the moving parts behind a record-breaking statistic: half of the six million fraud crimes committed in the UK in the 12 months ending March 2016 were cyber-related. If you have to assemble your own exploit kit and you don’t have a channel for distribution, it is hard to make a living selling stolen IDs. The service aims to solve that problem the same way that Alibaba created a market for everything and anything as the world’s biggest online marketplace.

One measure of this move into online crime means that people are now six times more likely to be a victim of plastic card fraud than a victim of theft from the person, and around 17 times more likely than robbery.

Victims of fraud differ from other crime victims. They come from higher income households than victims of violence. They tend to be in managerial and professional occupations rather than manual occupations, students or long-term unemployed. There is also a strong indication that those living in the most affluent communities are more likely to be affected than those in urban and deprived areas. This is not surprising since it is the same groups that are most likely to be involved in online financial transactions.

The threat grows daily and while we all continue to try and find technology solutions for technology threats, it remains largely up to the individual user to work toward combating this crime wave. As we have said so many times, people need to use reliable Internet security on all connected devices, apply security updates as soon as they become available, download software only from trusted sources and be cautiously paranoid about e-mail and other messages that include attachments and links – even and especially now if they appear to come from friends.

In spite of America’s reluctance to acknowledge we are losing the fight, nearly all other Western countries have echoed what the UK’s National Crime Agency (NCA) said in its recent Cyber Crime Assessment report for 2016: criminal capability is outpacing industry’s ability to defend against attacks.

President Barack Obama on Tuesday instituted a new directive on cyber-attack coordination that aims to make clear how the federal government handles cyber incidents and better informs the public on what to do once they have been hacked. The directive institutes a Cyber Incident Severity Schema with a scale from level 0 to level 5 to classify a cyber-attack. According to the White House, any incident that ranks at a level 3 or higher is considered “significant.”
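As a sketch, the schema amounts to a small lookup table plus a threshold. The level labels below are the ones commonly reported for the schema; treat them as illustrative rather than official text:

```python
# Illustrative labels for the 0-5 Cyber Incident Severity Schema described
# above; the "significant at level 3+" cutoff comes from the directive.
SEVERITY = {
    0: "Baseline",
    1: "Low",
    2: "Medium",
    3: "High",
    4: "Severe",
    5: "Emergency",
}

def is_significant(level):
    """Per the directive, level 3 or higher counts as a 'significant' incident."""
    return level >= 3

for level, name in SEVERITY.items():
    print(level, name, is_significant(level))
```

Which makes the point below almost self-evident: the schema classifies and labels, but neither the lookup nor the threshold prevents anything.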

For the uninitiated, these attacks often take place months before they’re made public — leading to a system that’s largely in place to tell us about attacks that have already happened that we really can’t do anything about.

After all, it’s not like the criminals are tweeting that they have created a backdoor into OPM or spying on the Secretary of Defense or that they have access to Obama’s email.


In fact, the Cyber Incident Severity Schema is more likely a scoreboard for getting pwned (hacker slang for being conquered or owned) by hackers and announcing just how badly it hurt. Instead of serving any useful purpose, this schema will, not unlike the Bush-era Homeland Security Advisory System, become a talking point on the 24-hour news cycle, a vehicle for spreading panic, a government handbook for how best to whip the population into a frenzy based on months-old threats – many of which will have seen the bulk of their damage done by the time we get to classifying them.

It is clear to us that crime and terror are becoming cyber-enabled as the world’s operational initiatives continue to become digital, and the enemies of freedom adapt to and learn to leverage technological advancements.

Without an increase in honest transparency around the scale of this problem and lacking a determined effort to create the digital equivalent of a Manhattan project,  we will continue to see news of increasingly catastrophic attacks on financial and government institutions and national infrastructure along with an increase in global cyber-crime.

The Cyber Incident Severity Schema is a disappointing response, and some might argue a stupid and childish one, to what is probably the greatest threat to our national security in history.

It is at least embarrassing.