Acohido: Cyber-insurance is still nascent, yet worth a look

Pulitzer-winning journalist Byron V. Acohido is the founder and executive editor of Last Watchdog, a pioneering security webzine. One of the nation’s most respected cybersecurity and privacy experts, Acohido conceived and delivered a nationally recognized body of work for USA Today, chronicling the frenetic evolution of cybercrime in its formative stages.

OPAQ: Some 32 percent of U.S. businesses purchased some form of cyber liability and/or data breach coverage in the last six months, compared to 29 percent in October 2016, according to a survey by the Council of Insurance Agents and Brokers (CIAB). Do you think this growth will continue—and why?

BA: Demand for cyber insurance absolutely will increase at a healthy clip for the foreseeable future. That’s because the value of business data and intellectual property today far outstrips the value of the physical plant. Think about it: we can do astounding things with cloud computing and mobile devices. And yet the business networks that support Internet-centric commerce remain chock full of security holes. Criminals get this, and will continue to take full advantage. Meanwhile, businesses are scrambling to figure out how to deal with data theft, network disruptions and cyber fraud. And we are in the very earliest stages of dialing in insurance to help them offset these emerging exposures.

OPAQ: There are a number of barriers for purchasers of cyber insurance, including a lack of standardization in policies and pricing, difficulty determining risk, difficulty establishing attribution when a breach or incident occurs, and so on. What are your thoughts on these barriers, and how should the insurance industry address them?

BA: There’s nothing, really, stopping the industry from taking the first step of standardizing the basic terminology to use in cyber policies. Right now there is none. Standardized language would pave the way for underwriters to begin more assertively partnering with cybersecurity vendors to come up with innovations to measure cyber risks. Insurers could become much more proactive about incentivizing companies to embrace more rigorous security policies and practices. As the pool of lower-risk policyholders grows, the industry could then begin to extend policies to cover specific cyber exposures that today are not routinely covered.

OPAQ: There is risk in buying cyber insurance in terms of mitigating losses. For instance, Target received an estimated $100 million in coverage, which didn’t even cover half of the $290 million it lost. How can companies avoid this sort of outcome?

BA: No company should rely solely on insurance to eliminate all, or even most, cyber exposures. In the current environment, where hackers probe business networks 24/7/365, network security should be a top priority for all organizations. It’s a cliché, but true, that there is no silver bullet. The use of layered security technologies remains vital; no less vital are continually refining and enforcing policies and training employees. A cyber policy can then be thoughtfully purchased to offset the remaining risk.

OPAQ: Given these barriers, do you have any tips for CSOs seeking carrier quotes?

BA: It’s an interesting time to go shopping for cyber coverage. Even though the insurance industry has left many things undone, there is wide recognition of the pent-up demand. The result is that there are many companies competing aggressively to sell policies. In a sense, it’s a buyers’ market. Numerous options are available to get some level of cyber coverage from somebody. The problem, of course, is that the devil is in the fine print. So it is important to find a knowledgeable, trustworthy agent to guide you through the due diligence process.

OPAQ: Finally, what could security vendors be doing to help their customers with cyber insurance – data collection, navigating insurance decisions, partnering, and so on?

BA: The path forward for security vendors, at this point, seems to be much the same as for insurance buyers – become knowledgeable about this emerging market and align yourself with smart, trustworthy partners. A few pioneering partnerships between insurance companies and security vendors are already out there, and I expect this trend to accelerate over the next few years.

Consistency and Cost Savings from Cloud-Based Security

Bob Brandt is an information security expert, most recently as the Global Security Architect at 3M. While at 3M, he focused on integration efforts for 3M application services across cloud and mobile platforms. Bob also devoted significant effort to improving 3M’s malware protection capabilities. He was on the governing body for several Twin Cities CISO Summits and co-chaired the Twin Cities chapter of the Identity Management Meetup for several years. Follow Bob on Twitter: @bobbrandt.

OPAQ: Which cyber security threats seem to be foiling enterprises today, and the vendors that serve them?

BB: The human factor is still a weak point. There are improvements that could be made to phishing defenses, as that is one of the main channels through which these attacks succeed, and phishers only need a low hit rate to be successful. A cloud service, however, can deliver a consistent way of looking at data across all the various usage patterns. For example, every app has a web version and a mobile app, and those are distinctly different deployment patterns. All the traffic, whether it comes from a Wi-Fi, wired or mobile network, goes through the same cloud service on its way to the application, and this enables companies to provide consistent security. It’s also more cost-effective to secure your applications through a cloud service than to use several different technologies.

Another key area where companies are falling down is the privacy, governance and risk around data. If you had proper controls on the data, it wouldn’t matter if someone stole the whole database, because they couldn’t crack open the encrypted data.

OPAQ: If you could start a company in the security industry today, what would be the focus?

BB: I’d probably work on a service that applied and enforced controls on data, such as authorizing people to access data and tracking that. For instance, in a hospital environment, the software would track data on who looked at patient data and when, because there should be very few people doing that. Even those who are authorized should have a reason for accessing your personal data. If the fields are naturally encrypted at the data layer, it would be hard for hackers to use it. Axiomatics and BigID are two of the companies working on this today.

OPAQ: Are there big differences in how midsize to large enterprises should approach security compared with smaller companies? Especially since smaller companies can still have large databases of sensitive information of value to hackers?

BB: First off, I’ll say that the cloud is a great equalizer. I think everyone should use cloud services for security. Large enterprises might have a few experts on staff to keep vendors honest and to customize the solution if needed. Smaller companies might rely more on a managed service provider as they don’t want to pay for IT staff, but on their own, they can’t keep up with changing security needs and threats. The differences are mainly on how to staff for security. The functionality is about the same, regardless of company size, and most of it should run in the cloud. Another advantage of the cloud is if you are running applications in an outside service, your business benefits from the traffic data of thousands of companies. An event like a single packet doesn’t mean much, but across all those companies it does. The cloud providers can see patterns from the data which can result in early detection of the threat.

OPAQ: Security skills are at a premium. How do you think companies should best handle this challenge moving forward?

BB: People still tend to talk mainly about firewalls and hackers, but that problem will be solved. In the future the skills will be less about malware analysis and more related to application security and integration, digital signatures, and connecting clouds securely. If we just built security into transaction APIs, the noise of malware would go down substantially. Increasingly, security is becoming automated. A vendor’s solution can now effectively take the place of a firewall administrator.

There are threat analytics services which are largely automated and look for patterns in big data sets. These services can tell a customer when an attack might be coming—the kind of analysis that a customer would never be able to see just by looking at its own data.

ELK Stack for Security Operations, Analytics and Reporting

The challenges that IT security teams face can often be condensed into two critical questions: how can I grapple with the security data that we generate, and how can our tools rapidly adapt to our changing business needs?

In search of answers to these questions, more and more organizations are turning to data intelligence solutions such as ELK Stack, which includes the Elasticsearch, Logstash and Kibana tools for data analytics and visualization. Also known as Elastic Stack, ELK Stack has become the top choice of many companies for log aggregation and monitoring within their security operations centers.

Why ELK Stack?

Platforms such as ELK Stack are notable first and foremost for their flexibility. As long as you have analysts trained on how to use the Elasticsearch Query DSL (domain-specific language), making changes and adaptations in response to an evolving IT landscape is easy.
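To make this concrete, here is a minimal sketch of what a Query DSL search body can look like, built as a Python dictionary. The field names (`event.action`, `source.ip`, `@timestamp`) and the denied-traffic scenario are illustrative assumptions, not part of any standard schema in the article.

```python
# A minimal sketch of an Elasticsearch Query DSL search body, expressed as a
# Python dict. The field names and the denied-traffic scenario are
# illustrative assumptions for this example.
def build_denied_traffic_query(source_ip, hours=24):
    """Find denied firewall events from one source IP in a recent window."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"event.action": "denied"}},
                    {"term": {"source.ip": source_ip}},
                    {"range": {"@timestamp": {"gte": f"now-{hours}h"}}},
                ]
            }
        },
        "size": 100,  # cap the number of raw hits returned
    }

query = build_denied_traffic_query("203.0.113.7")
```

An analyst comfortable with the DSL can adapt this same shape to new fields or time windows as the environment changes, which is exactly the flexibility described above.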

It is important to emphasize that when you use Elastic Stack, a pre-built set of models is not always available to use right away. Rather, you will have to make preparations so that you can accurately model your security operations, capturing the kind of information that you really want to analyze and report.

Fortunately, Elastic Stack has robust capabilities for collecting exactly the information that you need to stay aware of important events within your IT environment. From geographical location to business units and network segments, Elastic Stack can provide metadata about anything that is germane to the security metrics that you want to analyze. With Elastic Stack, you can easily issue queries, generate and save visual reports, and track those in a central dashboard to see how certain key performance indicators change over time.
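As an illustration of the KPI-over-time view described above, the sketch below builds a Query DSL aggregation body that buckets security events per day and then by business unit. The `business_unit` field is an assumption; any metadata attached at ingest time could be used the same way.

```python
# Illustrative only: a Query DSL aggregation body that buckets events per day,
# then by business unit. The "business_unit" field name is an assumption.
kpi_aggregation = {
    "size": 0,  # return only aggregation buckets, not raw documents
    "aggs": {
        "events_over_time": {
            "date_histogram": {"field": "@timestamp", "interval": "day"},
            "aggs": {
                "by_business_unit": {
                    "terms": {"field": "business_unit", "size": 10}
                }
            },
        }
    },
}
```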

ELK Stack Features for Cyber Security

Although ELK Stack is fairly agnostic in terms of use cases, many IT teams have used it successfully for managing their cyber security monitoring and reporting. Below are just a few of the features that make ELK Stack such a strong choice when it comes to cyber security.

Level of Content Control

As your organization evolves and grows, you will likely want to change and expand on the types of content that you collect and index. Elastic Stack gives you the control to do exactly that by adding new data attributes to your documents while retaining your links to old content. By doing so, Elastic Stack forms a flexible solution that can evolve and grow alongside you.

Visualization

ELK Stack makes it easy for users to define new visualizations and queries and to save these views for later use. Elastic Stack 5 can also analyze time-series data, which is particularly germane to security analytics because it reveals how important metrics have changed over time. Through the power of visualization, you can search for correlations between particular data sources or data types as defined in your security analysts’ queries.

Community

Finally, the powerful community that has built up around Elastic Stack is one of its most underrated features. Users have contributed tools such as sample visualizations and even pre-built virtual machine images with Elastic Stack already loaded on them. What is more, there is a wealth of tutorials and training courses available to help you understand how to capture data and interpret the results that you collect. For example, the SANS Institute runs a course on security analytics with ELK and provides the VM as open source software.

Best Practices for ELK Stack with Cyber Security

Multi-tenancy

One of the biggest issues with Elastic Stack right out of the box is a lack of support for multi-tenancy. If you want to individualize and personalize what certain members of the security team see in your Elastic Stack dashboards, you will need to use either a commercial add-on called X-Pack or some of the products we’ve integrated into OPAQ’s GreySpark reporting and monitoring technology. Be aware of these multi-tenancy limitations, and make plans to accommodate these features if you have a need for them.

Deployment

Some organizations choose to deploy Elastic Stack ad hoc as a tool and then leave it as-is after they have finished configuring it for their purposes. However, Elastic Stack requires care and management over time, and it will not necessarily scale up or down quickly without some effort on your part. If you want to use Elastic Stack as a long-term solution, you must carefully contemplate the deployment infrastructure that you need and the data volume that you will handle.

Clusters should be balanced appropriately according to your business needs. The log data that you plan to put into the system should be roughly proportional to the frequency with which you plan to query it. On the other hand, if you have a consistent amount of data over time but an increasing number of people using the system, for example, you might need to increase the number of client nodes that are serving these queries and requests.
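As a rough illustration of this balancing exercise, the sketch below estimates how many data nodes a deployment might need from daily ingest, retention and replica count. The per-node capacity and headroom figures are arbitrary assumptions; real sizing depends on hardware, index mappings and query load.

```python
# A back-of-the-envelope sizing sketch. The per-node capacity (2 TB) and 70%
# headroom factor are assumptions for illustration, not recommendations.
def estimate_data_nodes(daily_gb, retention_days, replicas=1,
                        usable_gb_per_node=2000, headroom=0.7):
    """Estimate data node count from ingest rate, retention and replicas."""
    total_gb = daily_gb * retention_days * (1 + replicas)
    effective_gb = usable_gb_per_node * headroom
    return int(-(-total_gb // effective_gb))  # ceiling division

# e.g. 50 GB/day of logs, 90-day retention, one replica
nodes = estimate_data_nodes(50, 90)
```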

Curation

The data that you generate through ELK Stack can get large and unwieldy. If it is left unmanaged, then the system will eventually start to buckle under its own weight. Make sure that you have processes in place for maintenance and monitoring of the stack itself, so that you can anticipate such a situation well in advance.
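A minimal sketch of this kind of housekeeping, assuming daily indices named like `logstash-YYYY.MM.DD` (a common Logstash default) and an arbitrary 30-day retention window; in practice, a dedicated tool such as Elasticsearch Curator usually handles this.

```python
# Sketch of time-based index curation: pick which daily indices fall outside
# a retention window. The naming pattern and 30-day window are assumptions.
from datetime import datetime, timedelta

def indices_to_delete(index_names, today, retention_days=30):
    """Return the indices whose embedded date is older than the cutoff."""
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        try:
            day = datetime.strptime(name, "logstash-%Y.%m.%d")
        except ValueError:
            continue  # skip indices that don't match the daily pattern
        if day < cutoff:
            stale.append(name)
    return stale

names = ["logstash-2017.01.01", "logstash-2017.03.01", ".kibana"]
stale = indices_to_delete(names, datetime(2017, 3, 15))
```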

Visualization

Viewing and sharing Kibana visualizations is one of the greatest benefits of using ELK Stack. You can easily send a link to the results to your colleagues, as well as export and print them for use elsewhere. Many of these visualizations can be reused, repurposed, and moved around to best fit your needs as your IT environment evolves.

Final Thoughts

It should be little wonder that so many organizations have turned to using ELK Stack as their log management tool of choice, making it an integral part of their cyber security workflow. Whether you have no infrastructure currently in place or one that has become unruly and difficult to manage, ELK Stack gives you the flexibility to adapt your data reporting processes as your organization requires.

3 Types of Cyber Security Assessment Tools You Need to Know About

No one wants to be caught off guard by a cyber attack. But simply buying and deploying the latest cyber security products and services is no guarantee of success. Minimizing the risk of a breach requires you to periodically assess the effectiveness of your security program so that you know your organization remains sufficiently protected. Here is a look at three classes of cyber security assessment tools in use today and what each has to offer for your business.

Tool Class 1: Checklists

Simple tools like checklists are very common in the field of governance, risk management and compliance, as well as in cyber insurance. Checklists are simple to implement and may only consist of a piece of paper or a spreadsheet that poses questions about your organization’s cyber security technologies and policies. Such a list might also enumerate a series of actions for you to complete that may help reduce your risk of an attack or breach.
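To illustrate, a checklist can be represented as weighted yes/no questions and scored mechanically. The questions and weights below are purely hypothetical, not drawn from any real assessment framework:

```python
# A hypothetical checklist rendered as data: each item is a yes/no question
# with a weight. Questions and weights are illustrative only.
def checklist_score(answers, checklist):
    """Return the percentage of weighted points earned by 'yes' answers."""
    earned = sum(w for q, w in checklist if answers.get(q))
    total = sum(w for _, w in checklist)
    return round(100 * earned / total)

CHECKLIST = [
    ("Is there a named executive responsible for security?", 3),
    ("Are offsite backups tested at least quarterly?", 2),
    ("Do vendor SLAs cover breach notification?", 2),
    ("Is multi-factor authentication enforced for admins?", 3),
]

answers = {
    "Is there a named executive responsible for security?": True,
    "Is multi-factor authentication enforced for admins?": True,
}
score = checklist_score(answers, CHECKLIST)  # 6 of 10 weighted points
```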

While simple to initiate, checklists falter in that they are both constrained by a preset number of questions and require a great deal of manual effort to complete and maintain. In addition, checklists can suffer from subjectivity when answers depend heavily on who is answering the questions at any given moment.

Nonetheless, checklists do have their purpose in providing a broad overview of your organization’s situation and helping you meet compliance requirements. These tools can surface questions that had not yet crossed your mind, such as whether you have staff and executives dedicated to cyber security or what your service-level agreements with your providers look like.

Tool Class 2: Security Analytics

The next level of cyber security assessment consists of tools for security analytics — perhaps more accurately called security statistics. These tools can track and report a wide variety of data about the security of your IT environment: how many vulnerabilities were detected in the last month, how many systems were patched, how many firewall hits occurred or how many times a wrong password was entered.

Although this information may seem useful on its own, it is divorced from any greater business context. For example, an unusually high number of incorrect passwords entered for a particular month might be due to the fact that a large number of employees had to create a new password that month. In addition, these security analytics tools often present only a single snapshot in time, without any time-series or temporal elements. Without the surrounding information, users are left in the dark about whether a rise or fall in any given statistic is due to their own actions, organizational policies, a technical misconfiguration or some other cause such as an attack.
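One simple way to add at least some of that missing context is to compare the current figure against its own history rather than reading it in isolation. The sketch below, using made-up numbers, flags a count of failed logins only when it sits far outside the historical baseline:

```python
# Sketch: judge a security statistic against its own history instead of in
# isolation. All numbers here are made up for illustration.
from statistics import mean, stdev

def deviation_from_baseline(history, current):
    """Return how many standard deviations `current` sits above the mean."""
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma else 0.0

failed_logins_by_month = [120, 135, 110, 125, 130, 118]
z = deviation_from_baseline(failed_logins_by_month, 310)
flag = z > 3  # only escalate when far outside the baseline
```

A spike that still needs explaining (a password-reset campaign, an attack) at least gets separated from ordinary month-to-month noise.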

An additional caution with security analytics tools is that interpreting these figures can be a highly subjective process. Statisticians will be the first to tell you that you can create any number of narratives simply by manipulating the same data in different ways. Maintaining a consistent process across analysts is critical for reliable results. In deploying security analytics tools, the onus remains on the enterprise to staff knowledgeable analysts who can interpret the results with a high level of fidelity.

Despite these limitations, security analytics tools do have a place in an organization’s cyber security posture. Statistics provide some crucial perspective by helping you understand the scope and impact of the things that happen within your network. Awareness of your environment will help prioritize and direct your daily operations.

Tool Class 3: Business Intelligence

Finally, we come to the most advanced class of cyber security assessment tools: those that make use of business intelligence (BI). Unlike the other two tool classes, BI platforms apply a consistent model or framework through which data points are viewed and interpreted. These products often use artificial intelligence and machine learning techniques to take in data, make decisions, receive feedback and adjust their behavior accordingly so that they can make better, wiser decisions over time.

Of course, there is a downside to applying BI: the difficulty involved in reaching this stage in the first place. The information security field still lacks a common standard that describes exactly which metrics are important for an organization to measure. As such, choosing a framework can be a politically fraught process filled with disagreements and tug-of-war games. Once the negotiations have concluded, however, most organizations find that the value supplied by BI platforms was well worth the effort.

Previously, we mentioned that security analytics tools are often limited because they lack the context behind them. As a metaphor, consider the project of building a house. With the plans before you, a particular wall may be 12 units long. But without the scale – a common method for measuring – the number of units is meaningless — it could be meters, yards, feet, furlongs or something else entirely. The scale, defining the model upon which the plans are built, allows everyone to move forward with the project and verify the outcomes. This common understanding is exactly what BI platforms are able to provide.

Final Thoughts

Although business intelligence tools seem likely to be the way of the future, each of these tool classes has its own time and place, depending on the organization. It is also worth noting that BI tools are a plausible alternative, if not a preferable one, to checklists and security analytics tools. Any organization, from a small business to a massive corporation, can use BI as part of its cyber security strategy.

ISE® Northeast Forum Recognizes OPAQ Customer for Innovative Security Project

We’re excited to announce that one of our customers has been selected as a nominee for the 2017 ISE® Northeast Project Award. The nomination is based on the security achievements of our customer Sandy Alexander, a midsize marketing communications company that provides an array of services including CG studio services, digital printing, direct mailing, data-driven marketing solutions, and retail visual merchandising.

Here’s the background on Sandy Alexander’s security project with OPAQ:

The company, which devotes 20% of its IT budget toward security, had been using a managed security service provider for branch office security management. While using an MSSP was a sensible approach to supplement Sandy Alexander’s small security staff, the benefits were not adding up. Justin Fredericks, the company’s IT director, says he was frustrated with the MSSP’s service quality and response time. He started looking for a new solution that would connect and secure its branch offices and vendors in a way that was less costly and complex, and ultimately more secure.

Sandy Alexander’s internal IT operations have dramatically improved using OPAQ’s centralized, automated security-as-a-service solution.  “My team no longer has to think about what policies are up on one site versus the other, or which IP addresses or VPN tunnels are where,” Fredericks says. “We have complete visibility and the ability to control these policies and rules across the entire environment on one dashboard.” He predicts that the company will save money, over time, compared with the MSSP.

Some details and benefits of the project include:

  • OPAQ’s solution was layered on top of the company’s IT infrastructure, including several branch offices and the sites of its vendors, such as data centers and manufacturing providers. The integration is deployed over redundant VPN connections. This was accomplished in one day!
  • The OPAQ 360 platform gives the IT department a central portal/dashboard to streamline policy enforcement, view status and alerts, manage threats and monitor all activity across its network.
  • The company now obtains complete security coverage over its IT infrastructure from one source: firewalls, intrusion and malware prevention, logging, reporting, analytics, and Distributed Denial of Service (DDoS) protection.
  • A distributed, branch office-based approach to security has been replaced with a centralized system; that means less complexity for IT and better visibility and control over the network and all users.
The Information Security Project of the Year Award Program Series has been running for more than 10 years now, and winners will be announced at the ISE® Northeast Forum and Awards on October 11, 2017 in New York City.
Security Operations Center: The SOC of the Future

Few subjects change as fast as cyber security; working in this field is a great way to stay on your toes. With an ever-evolving IT landscape and a constant barrage of new threats, it should hardly come as a surprise that the cyber security best practices and procedures of today look very different than those from even a few years ago. With the state of cyber security constantly in flux, where is the Security Operations Center (SOC) headed next?

Changes to Security Operations Center Technology

Data-Driven Automated Decision-Making at the Fore

Simply put, the future of security operations lies in automating responses to data-driven indicators of risk or compromise. And this will likely require advanced capabilities, including artificial intelligence (AI). This becomes clear with even a quick glance at the current state of the SOC: too much data, and no reliable way to collect and manage it, much less make sense of what it means for the day-to-day operations or the strategic business outlook the SOC supports.

Right now, SOCs are spending much of their time attempting to wrestle their data into a manageable form. But the IT security field is moving so rapidly that business intelligence (BI), and eventually AI, seems to be the future. Ultimately, AI and machine learning will be used to make our systems smart enough to first guide us in the right decision, and then eventually make the decision for us entirely based on our technological or business priorities.

When you think about it, this approach makes a certain kind of sense: attacks happen at machine speed. Some malware and ransomware are modified automatically every few seconds. And the technologies used in the automated generation and curation of news content are reaching a level of sophistication that will make uniquely generated, individually targeted phishing emails nearly indistinguishable from normal messages.

All this means that orchestration and automation, guided by intelligent analytics, will be the most important functions of the SOC of the future. And reporting on the performance of these automation systems, in the context of business operations, will gain even more importance as actions become more automated and less “controlled” by humans.

I am not alone in this line of thinking. Gartner, for one, plans to change its SOAR acronym describing security systems from Security Operations, Analytics and Reporting to Security Orchestration, Automation and Response. This move further echoes the shift toward BI, then eventually to AI.

Verifying AI Performance

So what does all of this mean for your business? Straightaway, not much. As you begin to bring automated systems into your SOC, your organization will inevitably go through a period of wariness and mistrust.

Things like rerouting a message, shutting down a port, turning off a besieged service or launching a counterattack can all be done by AI—but that does not necessarily mean you want to let automation handle that right away. Even those CISOs who embrace this new machine-centric approach will want to check for themselves that the AI is making good decisions based on the data.

In the period between today and full automation, AI will start by making small decisions at first, then larger ones as humans approve of its performance. If a phishing attack on one of your employees’ emails is successful, for example, then the AI can choose to take network access away from the user or their terminal. Of course, this is a fairly uncomplicated action to approve of, since you would likely do the same thing yourself by taking away the user’s laptop.

The key to ensuring trust in automation is keeping the false-positive rate (the number of times a remediation action is taken, when there was no cause) in check. False positives beleaguer many technologies that try to take over decision-making (SIEM systems are a common example). Transparency in AI decision-making is paramount to understanding how to adjust the systems to avoid false positives (or negatives, for that matter).
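That false-positive rate is straightforward to track once each automated action is recorded along with whether a real incident was behind it. A minimal sketch, using a hypothetical action log:

```python
# Sketch of false-positive bookkeeping for automated remediation: of all
# actions taken, how many had no real incident behind them? The log format
# (acted, was_real_incident) is a hypothetical convention for this example.
def false_positive_rate(actions):
    """`actions` is a list of (action_taken, was_real_incident) pairs."""
    taken = [real for acted, real in actions if acted]
    if not taken:
        return 0.0
    return sum(1 for real in taken if not real) / len(taken)

log = [(True, True), (True, False), (True, True), (True, True), (False, False)]
fpr = false_positive_rate(log)  # 1 of 4 actions taken lacked a real cause
```

Tracking this number over time is what lets the business see whether its trust in the automation is warranted.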

CISOs who want to allow AI to have greater influence and control will need to have these discussions and negotiations with the rest of the business. By starting with small slices, CISOs can make the business see the sense and utility in allowing the system to take actions such as automatically removing network access from phishing victims—even if (and especially if) that victim is the CEO.

In order to have these conversations, however, CISOs need to have confidence that the automated rules can outperform (in speed, cost or capabilities) their human equivalent. Unfortunately, proving this assertion is often easier said than done, as AI’s decision-making becomes less and less “human,” or understandable by humans. This has caused serious controversy in fields such as criminal sentencing, where AI algorithms are used to calculate the probability of recidivism for an offender.

If humans are unable to understand how the AI makes its decisions, the argument goes, then the process fails to be transparent and appears to be an arbitrary application of justice. While IT security is not nearly as socially charged as criminal sentencing, the problem is the same. Impeding business operations in the name of security, when the rationale is not understood, will cause mistrust in, and trouble for, the security operation as a whole.

The natural tension between the CIO and the CISO is over access to technology: the former tends to want to maximize access in the name of more flexibility for the business, while the latter looks to minimize access in order to ensure control and security. While this is by no means a new conflict, its implications will only grow as we hand off more decisions to AI.

Common Data Access and Analytics

In order to enable this level of automation, security and business data alike must be brought together and organized for common and centralized access. This then enables a communication and analytics platform to transform the data into information that is accessible and actionable.

A solution for the first requirement has existed for some time: the “data lake” as a central, disorganized (or organized) repository for all security and related data. The second part is emerging now. Gartner’s SOAR describes such a system, and several companies, like ours, offer security operations BI analytics systems – and these will evolve over time to include AI.

The final piece of creating an effective automated SOC is putting in place the feedback mechanisms for automating responses.

Changes to the Security Operations Center Team

More Analysts (For Now)

Technological change will be the driver that causes a ripple effect throughout your SOC. To begin with, your IT security team’s makeup and skill set will have to change significantly. In the interim period between the SOC of today and full automation of the future, you will actually need more analysts.

This is because during this period—what we like to call the era of trust-building—you will not only have to process and analyze each security incident, but also analyze the analytics (BI and AI) response and performance, in order to ensure that the system is doing exactly what it should be doing (and that you can explain it). This period may last months or years.

Why do I think it will take this long? Because there are existing examples in both healthcare and financial services. Both have had predictive models for years that can, somewhat reliably, flag fraudulent transactions.

In healthcare, the fraud detection problem is complex and the outcome matters quite a bit to the customer (e.g. Do we pay for this or not?), and business decisions have consistently come down on the side of caution, with very few claims payments denied through automation. In financial services, where, arguably, the outcome doesn’t matter as much (e.g. If your credit card gets declined, you can call the bank and sort things out quickly.), it took years to get to the point where credit card transactions were declined automatically based on analytics.

Incident Response

The way that you handle incident response (IR), too, will be significantly changed from the standard operating procedure of today. Right now, skilled professionals who can do incident response are few and far between, which means that most organizations have a forensics partner or Managed Security Service Provider (MSSP) on call for when an emergency strikes.

If your organization were to be attacked tomorrow, placing the call and assembling a competent team to stop the bleeding would involve major effort on your part. With centralized access to information, the potential exists for IR to be organized through the same automation channels that are well-defined and well-understood by everyone involved.

IR, specifically, will likely require human intervention for the foreseeable future. It is right after an incident happens that there is the greatest scrutiny on the operations of a security operations center. However, ready access to a central store of security knowledge and analytical support should act as a force-multiplier for in-house staff, enabling faster initial forensics and more surgical deployment of external resources.

Changes to Reporting

Today’s reporting is messy, cluttered and laborious. If your organization is like most right now, you probably have dozens of disparate systems deployed in your network, each releasing floods of data. An unlucky someone (perhaps you) is then tasked with harvesting this data and collating it manually in a spreadsheet or database.

Because of how complex this data is, and how independent each source is from the next, most organizations are reduced to counting events rather than creating security operations metrics such as key risk or control indicators. Rather than observing trends, organizations are forced to tally up disparate events, divorced from any meaningful context or information about their effect on business performance.

The more complex your reporting becomes, the harder it is for nontechnical users to understand the process and the results, making it seem that each report is bespoke. Without a standard, it might seem that you are grading your own paper. Imagine for a second if a CFO presented a financial report to the board using a completely different set of standards than those set forth by the Generally Accepted Accounting Principles (GAAP). They would likely not be around very long.

The SOC of the future will change all this, with automated, standardized reporting becoming the norm. Although you may not use the same metrics in the course of your operations as your peers, you should be using the same fundamental concepts to create and choose the metrics you track. Back to the financial services example, GAAP specifies how to calculate basic values such as revenue, assets or profit. However, it is up to the business to decide whether to use a specific metric: for example, return on assets or return on invested capital as their measure of financial performance.

Similarly, SOCs of the future will make these fundamental building blocks available by nature in the systems that are producing the data. They may customize their ultimate “measuring sticks,” but each of them will be based on fundamental, established metrics.

Changes to Security-Business Interactions

The Transformed CISO

One consequence of security becoming automated is the changing nature of the CISO role. When security operations, including analytics and response, are automated, the CISO is relieved of the drudgery of data management and, to a large degree, of being the focal point of day-to-day security decision-making.

Industries that came before cyber security benefited when analytics and automation created new realities for technology and business leaders, freeing them from that same drudgery. Similarly, the CISO's future role shifts from being a technologist or a security analyst to being a business strategist for the organization who happens to specialize in information security and management, much as the CFO is a business strategist who specializes in finance.

This evolution of responsibilities is not unlike the one that the CIO has undergone in recent years. Many CIOs began as technologists but were eventually invited over to the business side of things, where their job was to be an executive who understood technology.

In the future, we may see the role of the CISO being combined with that of the CIO or CTO in many organizations.

Security Baked-In

Another consequence for the security-business interaction—and perhaps the most significant one—is that your employees will just stop being aware of security. Rather, security will be part and parcel of the fundamental way of conducting business, baked into every process and interaction as part of the push for transparency and metrics, across business and security operations.

Final Thoughts

The SOC of the future will build in security early on during the development stage, rather than coming in later, trying to secure an environment when it is already too late. As we mentioned above, the fundamental end goal is for security to become much more automated and less manual.

In certain respects, this is already happening; we see it in trends such as forced automatic updates to Windows 10 and the push for moving data to the cloud, where it is centralized and easier to access—and there is a building consensus that it is more secure there as well.

This prompts the question, “What will it take for you to trust in automation (AI or BI) for IT security?” For sure, the boundaries will be different for each organization. Trust will come with transparency and reliability of automation.

Of course, no system is perfect, and when a significant incident occurs, there will be a desire to assign blame, whether that be the product or the person who decided to trust it. The key is to have the security and business operations metrics needed to evaluate both sides of the equation (automated vs. manual operations).

Each organization will find a balance between human and machine that it feels comfortable with, but we predict, because history has shown it, that the balance will shift far more toward machine automation as the SOC marches forward.

Managed Security Services Pricing Models: Land and Expand

“Land and expand” (L&E) is a business strategy in which customers usually sign up at one of multiple potential entry levels and then purchase additional products and/or services later on. Once you have your foot in the door, it becomes much easier for you to cross-sell and upsell than if you push a higher-end solution or take an aggressive sales posture from the outset.

L&E is highly strategic, since it requires a longer-term perspective. As a result, this approach is a solid fit for many managed security service providers.

Why Should You Choose Land and Expand?

In many cases, managed security service providers (MSSPs) struggling to attract new customers believe that their problems stem from the cost of their services. L&E allows you to test that hypothesis by offering several different levels of products or services at different prices, making it easier to “land” new clients with a smaller starting offering.

All too often, customers will move on from an MSSP early on because they need additional services that the MSSP does not offer, or because they feel lost and adrift after the sale. L&E is also intended to solve this issue of attrition by encouraging you to “expand” your engagement level with existing customers.

How Do You Get Started With Land and Expand?

Generally, MSSPs using an L&E approach tailor their offerings in a way that allows their initial cost-effective solutions to appeal to the top customer problems or priorities in a preliminary way while higher-end solutions address the problem in a more holistic or comprehensive way. After successfully onboarding and supporting a new customer, the MSSP expands their value to the customer to solve additional problems or sell new services to the customer. The idea behind L&E is clear: It is usually easier to persuade an existing client to upgrade its services than it is to acquire new clients.

This means that strong post-sales connections are crucial for any MSSP hoping to practice L&E. If you wave goodbye to your clients once the sale is done without looking in the rearview mirror, then you lose the opportunity to cross-sell or upsell later on. The time during and immediately after the sale must be a highly positive experience in order for your customers to be open to future buying decisions.

In addition to a strong sales team, you also need a solid engineering team to provide assistance during and after the transition. Your customers need to feel supported throughout the entire installation process — and a smooth, well-supported implementation is essential in order to present a positive first impression.

MSSPs using L&E often adopt a subscription-based licensing model, which fits their strategy better and also projects an aura of confidence about their offerings. Rather than obliging the client to pay a lump sum up front, you signal faith in your ability to win your customers over again and again each time they decide whether to renew their subscription.

Who Is Land and Expand Right For?

Although L&E is probably one of the most versatile approaches to pricing, it is also not an easy feat to achieve. Many smaller MSSPs have employees who wear many hats, such as tech workers who also pitch in on sales. However, an L&E approach stands the greatest chance of success when used by a larger MSSP with separate dedicated teams that are able to support this model.

L&E can be difficult to pull off, because it requires your entire company to collaborate and operate with the same customer-first mentality. From the initial sale all the way to the point of expansion, you need many different departments to embrace the task of customer acquisition and retention. If you are going to start using L&E as a managed security services business strategy, you had better be ready to go all-in.

Land and Expand: Real-World Example

A real-life MSSP that is successfully using the L&E model offers customers a basic report after an entry-level vulnerability scan that highlights the results from all of the IPs within their network. Customers interested in a report that also includes the scan results from their external IPs purchase the next level of service, which costs slightly more. Basic pricing is available for one-time scans, and bundles can be purchased at a discounted rate for quarterly or monthly options.
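The tiered pricing described above can be sketched in a few lines of Python; the tier names, prices and discount rates here are illustrative assumptions, not the vendor's actual figures:

```python
# Hypothetical tier and bundle pricing for an entry-level scan service.
# All numbers are invented for illustration.
TIERS = {"internal_scan": 500.0, "internal_plus_external": 650.0}
BUNDLE_DISCOUNT = {"one_time": 0.0, "quarterly": 0.10, "monthly": 0.20}

def bundle_price(tier: str, cadence: str) -> float:
    """Price per scan after applying the bundle discount for the chosen cadence."""
    return round(TIERS[tier] * (1 - BUNDLE_DISCOUNT[cadence]), 2)

print(bundle_price("internal_scan", "one_time"))          # 500.0
print(bundle_price("internal_plus_external", "monthly"))  # 520.0
```

The design point of L&E shows up directly in the table: each step up in tier or cadence is a small, natural expansion rather than a new sale from scratch.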

It is common for MSSPs using the L&E approach to offer a bundle pairing product capabilities with training and certification, thus increasing the value to their end-user. All this means is that once customers have graduated from their entry-level solution, they have several options for where they want to go next.

In order to increase the likelihood of customer expansion, the MSSP deploys dedicated teams at each stage of the process:

  • First, a sales team promotes the entry-level solution to potential customers.
  • Second, an engineering team follows up immediately after the sale in order to answer any lingering questions and to make sure that the implementation worked and continues to work smoothly.
  • Third, a customer success and retention team performs routine checks with the customer on a monthly basis throughout the year in order to better support the customer and make renewals and upgrades more likely.

The greatest benefit of L&E is that no customer is ever taken for granted. The MSSP is always making the most of its lowest-hanging fruit, and customer satisfaction rises due to better support and communication throughout the relationship. Often, the conversion rate rises and time to cross-sell/upsell decreases. It is not surprising that asking for a small spend up front to build trust and prove yourself naturally facilitates larger purchases in due time.

Gartner’s SOAR: An Engineer’s Perspective

Organizations that ramp up their spending on cyber security tools inevitably face the question: “Am I really getting what I need out of this? If not, is it because I am not using things properly; because I need to invest more into information security; or because the threat landscape has changed under my feet?”

Of course, the answer is usually a combination of all three, but there is also a larger element at play. The world of cyber security is currently experiencing a sea change in terms of what technologies organizations are using and how they’re using them.

Where once you had discrete, independent tools, each with its own purpose, you are now seeing all-inclusive solutions that unite these tools under one roof. What’s more, a growing number of managed security services providers (MSSPs) are offering to take the whole matter off your hands and handle things themselves as part of their core competency.

In response to these changes, IT research firm Gartner has introduced the concept of a “SOAR” (security operations, analytics and reporting) technology stack – a comprehensive cyber security platform that uses logical and analytical capabilities to support operational information security programs. So, how should you use the idea of SOAR to interpret your own cyber infrastructure?

The Direction of Cyber Security Solutions

Gartner’s SOAR is a natural extension of where information security management is going as an industry; but similar to the discrete point-products that came before it, SOAR is merely a means to an end – effective continuous management of information security risks. Initially, cyber security solutions focused on identifying potential threats. As technology has advanced, these tools have progressed to assessing threats’ severity, to responding to threats and finally to mitigating them.

To accomplish these goals, you need something in the middle of your cyber security operations — something that can bring together your separate systems and data, and find the bigger picture amid all the noise. SOAR technologies give companies this singular perspective by siphoning real data from a variety of sources: SIEMs (security information and event management software), GRC software (governance, risk management and compliance), service desks, forensic tools and so on.
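One way to picture that middle layer is as a normalization step. This Python sketch (with invented field names; real SIEM and GRC schemas differ) maps events from different sources into one common shape so they can be analyzed together:

```python
# A minimal sketch of the "something in the middle": normalize events from
# heterogeneous sources (SIEM, GRC, service desk) into one common schema.
def normalize(source: str, raw: dict) -> dict:
    if source == "siem":
        return {"source": "siem", "asset": raw["host"],
                "severity": raw["priority"], "summary": raw["rule_name"]}
    if source == "grc":
        return {"source": "grc", "asset": raw["system"],
                "severity": raw["risk_level"], "summary": raw["finding"]}
    if source == "service_desk":
        return {"source": "service_desk", "asset": raw["ci"],
                "severity": raw["urgency"], "summary": raw["subject"]}
    raise ValueError(f"unknown source: {source}")

events = [
    normalize("siem", {"host": "web-01", "priority": 4, "rule_name": "Brute-force login"}),
    normalize("grc", {"system": "web-01", "risk_level": 3, "finding": "Missing patch SLA"}),
]

# With one schema, cross-source questions become simple queries.
by_asset = [e for e in events if e["asset"] == "web-01"]
print(len(by_asset))  # 2
```

The singular perspective comes from the common schema, not from any one feed: once both a SIEM alert and a GRC finding point at the same asset, the bigger picture emerges.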

The good news is that all of your information security spending was not for naught. As more security operations data becomes visible and available, applying business intelligence techniques to cyber security is now more popular than ever. The true value of your cyber infrastructure comes from assembling the disparate pieces of your organization’s network and systems and gleaning valuable insights and analyses from them.

Final Thoughts on Gartner’s SOAR

There are two things that companies need to consider when they evaluate their SOAR technologies. First, compare the number of tools that you have deployed with their net performance. If you are getting less out of your solutions than you put into them, then you are not being maximally efficient. Having two tools that do almost the same thing does not really make you safer — it should make you question why you need two tools that cover the same territory.

Second, modern security solutions give you a high degree of visibility into your cyber infrastructure. With that visibility, however, comes a heap of work that will always far exceed the amount of resources that you can throw at it. What is more, your organization’s cyber adversaries will always be able to outgun you. It is their core competency to attack you but not your organization’s core competency to defend.

As a result, you need to think smart and have specific priorities for your security operations activities, judiciously deploying the resources available to you. By doing so, you will be able to outflank and beat your would-be attackers, even at a numerical disadvantage. Today, many midsize enterprises look to MSSPs and MSPs to manage their security operations activities. These service providers are able to serve many different customers with an automated security solution that can scale on demand.

Security Operations Center: The Changing and Evolving SOC

Security spending in 2016 was nearly 35x that of 2004, and market analysts project it to grow more than 10% year-over-year for the foreseeable future. The overall innovation and rapid advancement of information technology underpin this growth, and the proliferation of cyber crime is its major driver.

The evolution of the cyber security market in response to IT expansion leads to a number of secondary outcomes. Among them are a rapidly-increasing surface area (in number of devices, number of connections, or amount of data) which must be defended, huge leaps in both offensive and defensive technology and capabilities, growing complexity and inter-connectivity among distributed essential business processes, and a constant need to manage the threats and vulnerabilities created by these advances and changes in surface area.

Constrained by limited resources, security management has focused on maximizing the utilization of tools to stay ahead in the day-to-day battles of cyber security. In the trenches, it is all too easy to miss the big picture. Just as individual machine-utilization thinking (as opposed to throughput utilization) constrained manufacturing companies prior to the invention of lean manufacturing, cyber warriors were caught in a never-ending pursuit of working backlogs of individual sensor alerts, rather than managing IT security performance for the best result at the enterprise level.

Like a game of security “whack-a-mole,” the task got harder and harder as more security systems created more events and alerts. The irony is that the addition of systems is meant to make organizations more secure, while the reality is that even fully-resourced organizations struggle to understand the performance of any individual security system, never mind the enterprise’s overall security posture.

One of the most effective approaches that has emerged to address this in recent years is the Security Operations Center (SOC), where both organizational and technical information security resources are centrally managed.

The Security Operations Center

The SOC developed much the same way as its older sibling, the Network Operations Center (NOC); just as the NOC centralized all manner of information and data flow, the SOC, at its foundation, is the nerve center of all IT security functions. The SOC is where all the security data flows to, where all the decisions are made and where all the activity happens — very much like a tactical operations center in the military. The SOC is where the rest of the organization puts its trust that cyber security risk is being managed.

Within the SOC are staff who are suited to perform three essential security functions: monitoring, planning and execution. The SOC collocates (often physically, but sometimes virtually) highly technical and skilled employees, such as security engineers and security analysts. This promotes economies of scale and efficacy of operations. Using this structure, the SOC has evolved rapidly in response to organizations experiencing dramatic and enduring increases in cyber crime.

The Challenges Presented by Change

A SOC is only as good as its ability to match the pace of organizational and environmental change. Lately, most businesses have undergone digital transformations that continue unabated. Meanwhile, every alteration to the IT landscape and business operation within an enterprise affects cyber security operations, compounding the pressure from cyber threats. This includes minor changes, such as new software versions, all the way to major shifts, such as the rise of mobile computing and the cloud.

The complexity of the problems a SOC addresses mirrors that of the business environment, and transformational changes in business IT infrastructure require the same in security. Without a complementary shift in SOC capabilities, it is very easy for security operations to fall behind.

Then two things can happen: First, the SOC becomes divorced from the business decisions that are made as a result of IT security operations. Second, with the growing distance between security and business performance, a prevention mindset sets in – rather than managing security operations to the organizational impact of cyber threats, the primary measure of success, which both business and security leaders agree on, becomes that no incidents happen.

Perfect security is a myth, and any success factor predicated on it is destined to disappoint. Just as other business processes embrace decision support and business intelligence systems to measure operational performance and drive optimization, so should SOC operations. IT security is business security.

The History of the Security Operations Center (SOC)

“The customer is always wrong!” This seems like a poor way to market cyber security solutions, yet it is prevalent in the industry. All too often, sales and marketing focus on telling their audience just how insecure they are, and that they need to adopt some new solution immediately in order to have a chance of fixing things. It is sales by sowing fear, uncertainty and doubt (FUD). FUD became the predominant sales tactic among security vendors because they struggle to measure the efficacy of their solutions.

This analysis will not use FUD. Our goal in this segment is to present a view of how SOC organizations have developed over time, what common shortcomings exist, and how the new SOC will avoid these. To be concise, our point is that the legwork that you and others have done in building SOC capabilities for your organization has all been in support of building the next generation of SOCs.

In the Beginning, there was Prevention and Response

During the lifetime of the SOC, its prevailing paradigm has largely been one of prevention and immediate response. Defenders purchased and deployed tools and platforms based on their ability to prevent incidents, and to rapidly respond when an incident was detected.

As organizations came to rely more and more on digital systems and content, the amount of “stuff” that needed protecting grew quickly. However, because IT was seen in a supporting role for business operations – a cost center – so was cyber security. Like IT, security was to be guaranteed at minimum cost, and it clearly belonged with IT. Security vendors responded with FUD, creating solutions that would identify and prevent sometimes extremely obscure, edge-case events. Best intentions and corporate dynamics led to dozens of products, each offering part of the “solution” to the security challenge. What was lost was knowledge of what contribution any individual product was making to the overall security posture of an organization, when each product type was creating a steady stream of data incompatible with the streams from all other products.

This leads us to a common problem SOC leaders face: wasting extraordinary amounts of time generating reports that nobody in business leadership can or wants to read.

Telling the Story of IT Security

Volumes have been written about how to explain the value and benefits of IT to non-technical business leaders (we have even written one of those volumes ourselves). IT security is every bit as complex as IT, and it additionally suffers from its value being defined through the absence of cyber incidents. Proving a negative (or the absence of something) is a notoriously tricky problem, ideally solved with monetary loss probabilities, which are generally not attainable from the data available to security teams. Technical explanations are met with quizzical looks, and over-simplifications fail to convey a believable message. So what is a CISO to do when the CEO walks into the SOC and asks, “How are we doing?”

In order to be successful in their roles, SOC leaders must be able to instill an understanding of what they do and how it impacts the business. Since getting to monetary loss probabilities involves time-consuming analysis, these probabilities do not lend themselves to day-to-day calculation of operational metrics for cyber security. Luckily, there are three categories of metrics that are obtainable and, with a little coaching, quite understandable by both technical and non-technical executives: key performance, risk and control indicators.
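As an illustration, the three categories might be computed from routinely available operational counts; the formulas below are example choices, not a standard, and the numbers are invented:

```python
# Illustrative inputs a SOC could collect without heavy analysis.
alerts_received, alerts_triaged = 1200, 1100
unpatched_critical, total_critical_assets = 8, 200
controls_tested, controls_passing = 50, 47

# One example metric per category:
kpi_triage_rate = alerts_triaged / alerts_received         # key performance indicator
kri_exposure = unpatched_critical / total_critical_assets  # key risk indicator
kci_control_pass_rate = controls_passing / controls_tested # key control indicator

print(f"KPI {kpi_triage_rate:.0%}  KRI {kri_exposure:.0%}  KCI {kci_control_pass_rate:.0%}")
```

Each ratio answers a plain-language question (Are we keeping up? How exposed are we? Are our controls working?), which is what makes them explainable to non-technical executives.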

Where is the Security Operations Center going next?

The SOC is about to get a lot more intelligent. Business intelligence has changed the game for other executives, and now it is poised to be transformative for strategic cyber security leadership.

The evolution of the cyber security industry has arrived at the same place as many IT technologies did only recently: platform disruption. Traditional business models and technologies rely on information-sharing via a complex lattice of interconnection. Platforms, on the other hand, provide a common communication and application interface backbone that obviates the need for process-to-process or technology-to-technology interconnects.

Several recent developments indicate that the cyber security industry is on the cusp of such disruption:

  • The emergence of Security Operations Analytics and Reporting (SOAR) applications, which are essentially business intelligence platforms for security operations
  • The aggregation of capabilities in security technologies such as next-generation firewalls
  • M&A activity in cyber security creating virtual platforms that combine capabilities

Change is always challenging; however, this is also an era of excitement, growth and opportunity in security operations. The next frontier for security operations is here. Are you ahead of the curve?

A Methodical Approach to Key Control Indicators

On its face, the analysis of a key control indicator (KCI) can seem rather straightforward. Do you have confidence in the controls implemented to monitor your environment – are you “in control”?

You will have to answer two questions. First, do you have the controls you need? Second, are they working as intended? Unfortunately, the complexity of security organizations can make it difficult to answer these deceptively simple questions.

For example, you might implement a next generation firewall with a number of security features and opt to enable some of them later. Security operations happens, and your plan is once again overtaken by events (OBE’d) – and who has time to go back?

When it comes to KCIs, you need to plan, execute and monitor your control infrastructure in a managed way. But what is the best way to do that?

First, Examine Your Environment

Similar to key risk indicators (KRIs), it all begins with critical introspection. Where (or what) are your “crown jewels” that you need to protect? From there, what are the compliance/control boundaries?

Take the time to consider these two questions carefully, as the answers you provide – which will be unique to your organization – will change how you architect your environment, as well as how you protect it.

It is helpful to consider using asset classes of devices, networks, users, data and applications, and breaking each down into the cyber defense categories defined by the National Institute of Standards and Technology (NIST): identify, protect, detect, respond and recover.

This five-by-five grid will give you a solid foundation for your defensive strategy.
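The grid itself is easy to represent in code. This Python sketch (with hypothetical control entries) builds the 25 cells and surfaces coverage gaps as the empty ones:

```python
# Asset classes and the NIST cyber defense functions from the text above.
ASSET_CLASSES = ["devices", "networks", "users", "data", "applications"]
NIST_FUNCTIONS = ["identify", "protect", "detect", "respond", "recover"]

# One cell per (asset class, function) pair; each holds the controls covering it.
grid = {(a, f): [] for a in ASSET_CLASSES for f in NIST_FUNCTIONS}

# Example entries (hypothetical controls, for illustration only):
grid[("devices", "identify")].append("asset inventory")
grid[("networks", "detect")].append("IDS sensors")
grid[("data", "protect")].append("encryption at rest")

# Gaps in coverage are simply the cells with no controls listed.
gaps = [cell for cell, controls in grid.items() if not controls]
print(f"{len(grid)} cells, {len(gaps)} uncovered")  # 25 cells, 22 uncovered
```

Even this toy version shows the grid's value: it turns "are we covered?" from a feeling into an enumerable list of uncovered cells.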

Next, Anticipate Control Complexities

As a baseline, most organizations have policies in place to ensure compliance, procedures that enforce or put those policies into practice, and audits that confirm that those procedures are being followed. Again, as with the concept of the KCI itself, this can seem simple and linear.

In reality, the waters are easily muddied when you take the different variables into account.

You might perform an audit to confirm that all of your compliance boxes are checked appropriately. But, during the course of your business operations, you may have purchased many different systems and products to help prop up your security infrastructure. This can introduce a level of complexity that makes any measurement of what is actually going on quite difficult.

How can you tell what components are contributing to your compliance, what components are working well and what components should be removed from your security infrastructure?

Finally, Measure Control Indicators

When it comes to your controls, in order to have the right policy, procedure and audit processes in place, you need that higher-level, comprehensive understanding of your company’s security and compliance environments. This is particularly true given that what you are often seeking to detect in these cases is the unexpected – whether that is a misconfiguration or a security incident.

While often these adverse impacts are analyzed as performance indicators, they can also fall under the umbrella of your control indicators, in the event that a control that should have been in place was not.

The more measured and methodical your approach to forming those control processes in the first place, the better you will be able to understand your environment, remain in compliance and protect what matters most.