Simplified Microsegmentation — From the Cloud

It is time to change the way organizations approach network segmentation. In the past few years we have seen a mounting collection of threats target the wide-open nature of most organizations’ internal computer networks. Although security pros have been harping on this for some time, most networks remain crunchy on the outside and chewy in the middle – once attackers get past the perimeter, they often have access to anything and everything inside the organization.

Recent threats have repeatedly exploited this exposure. We’ve seen incidents where entire organizations are crippled by ransomware spreading internally within their networks. We’ve seen the return of internet worms like WannaCry and NotPetya. We’ve seen more automated attacks that pivot from an initial point of compromise within a Windows network to Domain Admin access. In fact, experts are predicting significant increases in the volume of these attacks because of developments in attack automation.

Almost every organization needs to improve the segmentation of its internal network to cut down on these threats. So what is preventing organizations from taking action?

Traditional Network Segmentation is Complex and Difficult to Manage

Unfortunately, the traditional approach to implementing network segmentation poses significant challenges. Configuring and managing internal firewalls and VLANs is both labor intensive and relatively inflexible. Network architecture is usually driven by the need to provide connectivity rather than security. Organizing machines with different security requirements onto separate VLANs is complex, and as soon as the work is done, users demand changes. Deploying multi-factor authentication for internal applications and services can also be a daunting project as each application must be separately integrated.

It’s no wonder organizations — particularly midsize enterprises — continue to struggle with implementing a smart, sustainable network segmentation strategy. What are midsize enterprises — and the service providers supporting them — supposed to do?

Zero Trust Software-Defined Network Segmentation from the Cloud

The term “microsegmentation” has recently become a buzzword in the IT world. Microsegmentation solutions provide a manageable way to lock down east/west traffic policies for cloud workloads. However, many of the threats we’re seeing – ransomware, worms, and domain lateralization – target end user workstations instead. What organizations need is easy-to-deploy, software-defined microsegmentation that is flexible enough to support the entire enterprise network.

Since the acquisition of Drawbridge Networks in May 2017, we have embarked on integrating unique intellectual property into the OPAQ Cloud that allows users to manage software-defined microsegmentation for the entire enterprise, from a single pane of glass. The OPAQ PathProtect™ capability dramatically simplifies network segmentation, enhances network visibility and control, and enforces policy locally at each device, whether it’s a cloud workload or an employee laptop.

OPAQ PathProtect™ works by connecting software agents running on endpoints with a central controller hosted in the OPAQ Cloud. This architecture provides visibility and control from the cloud into every network interaction happening on every endpoint. This capability gives you the power to investigate incidents, protect against insider and external attacks, and prevent certain devices, such as compromised endpoints, from talking to other workstations on the network.

Microsegmentation with OPAQ PathProtect™ can be used to define granular access segments for users that operate independently from the network’s hardware and physical topology. It also can be easily updated when business needs change. Segments can be defined based on user identity, group membership and job function, and they will follow users as their laptops move throughout the network. OPAQ PathProtect™ can be used to enforce multi-factor authentication for access to any resource or service on the network, without any need to integrate with individual applications. This is possible because the central controller oversees all communication within the network and can authenticate users before allowing traffic to flow.
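
To make the idea concrete, identity-based segmentation can be pictured as a central controller evaluating default-deny rules keyed on user and group rather than on IP address or VLAN. The sketch below is purely hypothetical – the rule names, fields, and logic are illustrative assumptions, not OPAQ's actual policy engine:

```python
# Hypothetical sketch of identity-based segmentation rules: the controller
# decides whether traffic may flow based on who the user is, not where the
# device sits on the physical network. All names here are illustrative.

SEGMENT_RULES = [
    # (user_group, destination_service, requires_mfa)
    ("finance",     "erp-server",        True),
    ("engineering", "git-server",        False),
    ("it-admins",   "domain-controller", True),
]

def allow_flow(user_group, destination, mfa_verified):
    """Return True if traffic from a user in `user_group` may reach
    `destination`. Default-deny: anything not explicitly listed is
    blocked, in line with Zero Trust principles."""
    for group, service, needs_mfa in SEGMENT_RULES:
        if group == user_group and service == destination:
            return mfa_verified if needs_mfa else True
    return False

print(allow_flow("engineering", "git-server", mfa_verified=False))  # True
print(allow_flow("engineering", "erp-server", mfa_verified=True))   # False (no rule: default-deny)
print(allow_flow("finance", "erp-server", mfa_verified=False))      # False (MFA required)
```

Because the rules reference identity rather than addresses, the policy follows a user's laptop wherever it moves on the network – which is the property the paragraph above describes.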

These capabilities allow organizations to adopt a security posture that is more aligned with Zero Trust security principles, in which users only have access to the specific applications required by their job function. Cutting down on unnecessary access closes the avenues that malware and network attackers use to spread laterally within an organization.

Microsegmentation for Endpoints, Not Just Data Centers

OPAQ PathProtect™ is a microsegmentation solution that can protect the whole network, including workstations, servers, datacenters, and cloud workloads, supporting the following capabilities and use cases:

  • Network Visibility provides detailed topological views of the interactions between hosts on the internal network. It is possible to drill down into different timeframes, hosts, users, process names, ports, and protocols for complete insight into network activity.
  • Network Access Control (NAC) governs which hosts and users can access which resources and services on the network. For example, unmanaged hosts can be prevented from accessing sensitive servers, and are identified and cataloged when they send traffic.
  • Multi-Factor Authentication (MFA) integration enables step-up authentication to tighten security for VPN access and within the internal network.
  • Granular Segmentation is completely separate from the physical network architecture and addressing; it can be used to segment specific devices, applications, and data, and it keeps track of hosts as they move around the network.
  • Quarantine allows organizations to quickly isolate infected hosts from sensitive resources at the touch of a button.

To find out more, view the press announcement, sign up for our upcoming webcast and schedule a demo to see how simple microsegmentation can be from the cloud.

Game-Changer: What OPAQ’s Selection of Palo Alto Networks Really Means

We’re thrilled to have announced our partnership with Palo Alto Networks, which opens up tremendous opportunities for our MSP, MSSP, and VAR partners to deliver enterprise-grade security-as-a-service from the OPAQ Cloud.

This is a huge deal. This agreement furthers OPAQ’s mission to provide fully integrated networking and enterprise-grade security as a simple, cloud-based service. It means that OPAQ partners are empowered with:

  • A subscription model designed to make enterprise-grade security affordable and accessible to midsize enterprises. The traditional approach has put the enterprise-grade security midsize enterprises need out of reach because it is too costly and complex to manage. The OPAQ Cloud is a game changer: it makes that level of security accessible and affordable, which means new, lucrative revenue opportunities for partners.
  • Fortune 100-grade network security that’s known and trusted. The OPAQ Cloud integrates best-of-breed security capabilities that are powered by known, trusted security technologies, such as Palo Alto Networks, and other industry leaders and unique OPAQ intellectual property.
  • Cloud network engineered for speed, strength, and flexibility. OPAQ owns and operates its own private network backbone. In addition to integrating best-of-breed security capabilities into the fabric of the platform, OPAQ optimizes the speed and performance of network traffic by leveraging transit and peering relationships with world-class providers.
  • Single interface designed for simplified management, compliance, and reporting. The OPAQ 360 portal provides a single pane of glass where all customer security policies and network traffic can be centrally managed and enforced — all without the cost and complexity associated with managing dozens of security products from multiple vendors.

We chose Palo Alto Networks because they are a proven technology leader in next-generation security technologies. Bringing Palo Alto Networks into the OPAQ Cloud makes enterprise-grade network security much more accessible for midsize enterprises and manageable for solution providers supporting midsize enterprises.

For more information on OPAQ’s partnership with Palo Alto Networks, read the press release here.

OPAQ CTO Tom Cross Writes on Lateralization Attacks in First Article on CSO Online

Lateralization attacks are a common element of sophisticated breaches today. An adversary will typically gain a foothold inside the victim’s network by installing malware on a vulnerable device.

From there, the attacker will compromise other computers within the organization by moving laterally throughout the compromised network. A number of experts are predicting an increase this year in Windows Domain lateralization attacks. Organizations are increasingly looking for a solution that can prevent lateralization attacks and keep them from spreading in their networks.

OPAQ chief technology officer Tom Cross was recently invited to be a regular contributor to CSOonline, one of our industry’s most respected publications. In Tom’s first article, he discusses lateralization attacks against Windows networks, and how to defend against them. You can read the full article here.

OPAQ Shortlisted for Best Emerging Technology in 2018 SC Awards

We received some exciting news last week. The OPAQ Cloud was named an Excellence Award finalist in the Best Emerging Technology category for the 2018 SC Awards, an annual competition that recognizes the top solutions in the cybersecurity industry.

This is the second major accolade our technology has received in the past two weeks. The OPAQ Cloud was recently named best (Platinum) Network Security solution in the 2017 GSN Homeland Security Awards for cybersecurity excellence.

Making this list is very gratifying. The SC Awards are widely regarded as the gold standard in cybersecurity. The winners will be announced at the SC Awards ceremony on April 17 in San Francisco, in conjunction with the RSA Conference, the industry’s largest gathering.

According to Illena Armstrong, VP, Editorial for SC Media, “OPAQ Networks has demonstrated unique innovation in its approach to protecting companies from the onslaught of malicious attacks and other threats. Their solution represents some of the most effective security technology on the market today.”

The OPAQ Cloud is a security-as-a-service platform that integrates a private network backbone with built-in enterprise-grade security capabilities from the world’s leading technology providers and our own intellectual property.

Our vision was to create a solution that makes advanced cybersecurity protection accessible to midsize companies that lack the resources and staff to knit together and manage multiple products themselves. With the OPAQ Cloud there’s no hardware or software to buy, install and manage.

Since many midsize companies lack in-house security expertise, the OPAQ Cloud is available from managed service providers who can remotely monitor and protect their networks.

You can read the SC Awards press release here.

OPAQ Cloud Named Best Network Security Solution by Gov Security News

We are pleased to report that the OPAQ Cloud platform was recently named best (Platinum) Network Security/Enterprise Firewall solution in the 2017 GSN Homeland Security Awards for cybersecurity excellence.

The Awards are hosted by Government Security News (GSN) to recognize excellence and leadership in the Cyber Security and Homeland Security sectors. Winners were selected based on a combination of technological innovation, ability to address a recognized government IT security need, and flexibility to meet both current and future needs. Category winners were ranked with Platinum, Gold and Silver designations.

The OPAQ Cloud is tailored to meet the unique needs of State and Local governments, which face the same sophisticated security threats, like ransomware, as larger federal agencies, but tend to lack the resources and technical experts to adequately protect their networks.

The massive WannaCry cyberattack that infected computers in at least 150 countries several months ago is a good example. In the aftermath, many State IT officials said they often don’t have enough money to effectively fight sophisticated cyber threats. And the scale of that attack made them even more concerned.

Doug Robinson, executive director of the National Association of State Chief Information Officers (NASCIO) went on the record to say: “This is a big wake-up call because it is cyber disruption. States and local government need to address this because it’s a serious threat. We have urged states to take action immediately.”

There are many security products that try to do some really great things for state and local governments. However, many products and management systems are isolated and do not talk to each other.

This is why automation and orchestration are becoming a game-changing necessity for state and local governments. Leveraging automation can help state and local governments effectively detect and respond to threats at speed. This is what the OPAQ Cloud is designed to do — and it’s why we were honored with the GSN Homeland Security Award.

To find out more about the GSN Homeland Security Award, see the announcement. To learn about the OPAQ Cloud and the benefits of security-as-a-service visit

Tom Cross Joins Forbes Technology Council

We are pleased to report that our CTO, Tom Cross, has been accepted into the Forbes Technology Council, an invitation-only community for world-class CIOs, CTOs and technology executives.

Tom joins other Forbes Tech Council members, who are hand-selected to contribute thought leadership articles. As an expert in cyber security, cyber conflict, malware and vulnerability research, Tom will cover these topics and more.

In his first article for Forbes, Tom weighs in on recent changes to international export controls covering computer network intrusion software.

The Wassenaar Arrangement is a governing body that crafts export control rules for technology with military and civilian applications. There are 42 member states that are party to the Wassenaar Arrangement, including the United States.

The new rules represent a significant victory for computer security practitioners, since they remove obstacles that have interfered with the ability of researchers in different countries to exchange security intelligence on vulnerabilities and malware.

In the article, Tom explores the reasoning behind the old rules, why they were flawed and how the new changes will benefit the cyber security industry. He also discusses why, despite this important victory, more work remains to be done before the new rules are implemented in the United States. You can read Tom’s full article here.

Why we Pivoted to a 100 Percent Channel Sales Model

Today we announced the OPAQ Channel Partner Program and the completion of our transition to an indirect sales model. There are a number of reasons for this change.

First, many midsize enterprises look to service providers to deliver security services. These organizations struggle to protect themselves from cyber threats due to the shortage and high cost of skilled IT professionals, the growing sophistication of attacks, and the complexity of managing multiple security products and services. These challenges have spiked demand among midsize enterprises for outsourced security. According to Gartner, Inc., services will make up over half of all security spending, at $57.7bn in 2018. Meanwhile, spending on security outsourcing services will total $18.5bn, an 11 percent increase from 2017.

Second, both midsize enterprises and service providers struggle with the upfront expense and complexity of acquiring, configuring and maintaining multiple hardware and software security products from different vendors.

For many midsize enterprises, the capital cost of implementing a Fortune 500-grade security infrastructure, not including the human resources to manage it, is overwhelming. Meanwhile, service providers that want to offer managed security services face a similar dilemma, only from a scalability and profit margin standpoint. The traditional hardware/software model requires they purchase products, install them at the customer site(s) and then manage the infrastructure.

Many of the partners’ midsize enterprise customers require complete outsourcing while others prefer a co-managed or self-managed approach. And our partners know which model best suits the customer. We have invested significant time and resources in the development of our “single pane of glass” approach.

This enables partners to deliver end-to-end network security across their customers’ distributed infrastructures — including data centers, branch offices, mobile and remote workers, and IoT devices. The OPAQ 360 portal, a web-based interface, enables our partners to centrally provision, configure and manage an unlimited number of customer sites and policies remotely. Our Partner Portal also makes it simple for partners to go to a single place in order to access training, sales support, deal registration, and other resources that are essential in helping them to accelerate time-to-value.

According to one of our channel partners, Tom Turkot, vice president of client solutions for Arlington Computer Products, “The OPAQ Cloud is a game changer.”

You can read today’s announcement here: OPAQ Channel Partner Program Press Release. Or for information about the OPAQ Channel Partner Program visit:

KPI v. KRI v. KCI: Key Cyber Security Indicators

Companies that have spent significant resources and money on managing their cyber security environment understandably want to know the results of all this expenditure. As such, it is important for Managed Security Service Providers (MSSPs) to be able to provide customers with some visibility into those results. However, results only tell you half the story. For instance, they may demonstrate that there was a breach, but, without significant forensic effort, will not necessarily provide the sequence of events or failures which led up to the compromise.

Organizations are complex and have many performance measures. Most have designated key performance indicators (KPIs) at various levels of the organization, which business management agrees are the most important metrics to monitor. They are designed to be leading indicators of business performance. Key risk indicators (KRIs) are similar in that they are leading indicators; however, rather than signal performance, they signal increased probability of events that have a negative impact on business performance. Then there are key control indicators (KCIs), which are closely related to KRIs in that they measure the effectiveness of risk controls.

Business managers use KPIs to show where things are going well or poorly and KRIs to indicate when the probability of the latter is increasing. KCIs are a measure of how well risk controls are performing. MSSPs can, and should, do the same using security data which is commonly available for most of their clients.

More on KPIs, KRIs and KCIs

You may hear these terms used interchangeably; however, they are distinct and should be treated differently in order to keep them understandable.

  • Key performance indicator (KPI): Shows how the business is performing based on the goals and objectives leadership has set as well as the progress that is being made toward those goals. For security operations, this metric might be used in an effort to resolve open items or tackle a backlog of unresolved security investigations.
  • Key risk indicator (KRI): Measures the company’s level of risk, and how its risk profile changes over time. An example for security operations is to use metrics that measure the severity of threats and vulnerabilities being reported by sensors. Another is to look at where in the defensive chain events are occurring (e.g., endpoint-based events are more “risky” than firewall or WAF events). Finally, make sure you have a good understanding of the business role the assets involved play. Security events that occur on critical assets present more risk than those on noncritical ones.
  • Key control indicator (KCI): Indicates how much control a company has over its environment and its level of risk, or how effectively a particular control is working. Putting this in context with IT security operations, a question to ask is whether you have the necessary controls across all areas of the business – for example, the NIST Cyber Security Framework functional areas (identify, protect, detect, respond and recover). Knowing that these functions have sufficient coverage throughout your defense in depth (devices, applications, networks, data and users) gives you a degree of confidence in your controls.
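
As a minimal sketch of how the three indicator types might actually be computed from security-operations data – the field names, thresholds, and weights below are assumptions for illustration, not any particular product's schema:

```python
# Illustrative computation of one KPI, one KRI and one KCI from
# security-operations data. All field names and thresholds are assumed.

cases = [
    {"status": "open",   "severity": 3, "hours_to_resolve": None},
    {"status": "closed", "severity": 7, "hours_to_resolve": 12},
    {"status": "closed", "severity": 5, "hours_to_resolve": 36},
]
deployed_controls = {"identify", "protect", "detect"}
csf_functions = {"identify", "protect", "detect", "respond", "recover"}

# KPI: operational performance -- mean time to resolve closed cases.
resolved = [c["hours_to_resolve"] for c in cases if c["status"] == "closed"]
kpi_mttr = sum(resolved) / len(resolved)

# KRI: risk signal -- share of cases at high severity (>= 7).
kri_high_sev = sum(1 for c in cases if c["severity"] >= 7) / len(cases)

# KCI: control effectiveness -- coverage of the NIST CSF functions.
kci_coverage = len(deployed_controls & csf_functions) / len(csf_functions)

print(kpi_mttr)       # 24.0 hours
print(kri_high_sev)   # ~0.33
print(kci_coverage)   # 0.6
```

Each number answers a different question: the KPI measures how efficiently the team works, the KRI how threatening the environment currently is, and the KCI how completely controls cover the framework.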

How to Use These Metrics

The interplay between the performance, risk and control metrics is the key feedback that an organization needs in order to be confident that investments in cyber security are appropriate. Now that we have defined the appropriate use for the individual metrics, let’s see some examples of how to apply them:

  • Risk is the probability of bad things happening multiplied by the business cost if they do. You can estimate the probability by looking at the number, place (where in the defense-in-depth model) and severity of events measured by sensors. For the impact, or real cost, look at which hosts are involved. Are they where the crown jewels are kept, or more of an extra store-room full of old furniture? Faced with so much data, organizations can be afflicted with “analysis paralysis,” so simplify these measures into risk metrics everyone can understand.
  • Performance metrics are meant to show how efficient an organization is at accomplishing its mission. In cyber security, the mission happens to be risk mitigation. So performance is how well you manage your backlog of open security cases, time to resolution, etc. with respect to the staff and systems you have. There are significant parallels to customer support metrics in this category.
  • Controls mitigate risks and enable performance. In cyber security, technical (security sensors) and process controls are your bread and butter. They also generate the data that drive risk metrics and allow you to optimize performance. Compliance measures are your friend here. Measure your degree of coverage against a framework such as NIST CSF.
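
The probability-times-business-impact risk estimate described above can be sketched in a few lines. The layer and asset weights here are illustrative assumptions, chosen only to show the shape of the calculation:

```python
# Sketch of a risk metric: events deeper in the defense-in-depth model
# and on critical assets contribute more risk. Weights are assumed.

# Where in the defense-in-depth model the sensor sits (deeper = riskier).
LAYER_WEIGHT = {"firewall": 1, "waf": 2, "endpoint": 4}
# Business role of the asset (crown jewels vs. the old-furniture store-room).
ASSET_WEIGHT = {"critical": 3, "noncritical": 1}

def risk_score(events):
    """Sum severity x layer weight x asset weight across events --
    a simple proxy for probability times business impact."""
    return sum(
        e["severity"] * LAYER_WEIGHT[e["layer"]] * ASSET_WEIGHT[e["asset"]]
        for e in events
    )

events = [
    {"severity": 5, "layer": "firewall", "asset": "noncritical"},  # 5*1*1 = 5
    {"severity": 5, "layer": "endpoint", "asset": "critical"},     # 5*4*3 = 60
]
print(risk_score(events))  # 65 -- the endpoint event on a critical asset dominates
```

Note how two events of identical sensor severity produce very different risk contributions once location and asset criticality are factored in, which is exactly the point of separating raw event data from risk metrics.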

Generating the metrics here seems like a daunting task at first. But, once you start simplifying and categorizing the measures, you will find that you can come to a reasonable set quickly. Then you need to automate their calculation. With experience, you will learn whether you’ve chosen the right KPIs and KRIs, and you can make adjustments as necessary. Getting started can be a challenge for MSSPs, but it’s 80 percent of the battle.

The most important thing to remember is that the statistics coming out of your cyber security systems are not KPIs, KRIs or KCIs. They are just data. Decide what performance, risk or control measures you need in order to clearly explain the state of security operations to the business you support.

Test these on business managers to make sure they resonate, adjust and go again. The more consistent and transparent your measures, the more confidence your clients will have in their security investments.

Putting KPIs, KRIs and KCIs into Practice

On one hand, you have a large amount of security data – the proverbial big data problem. On the other hand, you need actionable output – a list of what to do now to transform your clients’ security programs into a high performance business driver. Metrics will guide your path to success, but generating consistent and reliable information security metrics is hard. So here are a few steps to get you started.

Step 1: Understand your Coverage, Operations, and Compliance Challenges

Security operations involves a set of functions being performed across a set of assets. The NIST Cyber Security Framework (CSF) provides a core list of the functions, and the Cyber Defense Matrix does a fine job of aligning those functions against a representative set of assets. Categorizing the deployed security products or processes in your client’s environment within the matrix will establish coverage and identify gaps in the program’s architecture.

Operationalizing the matrix by collecting, identifying and assigning the output data from your security products to each cell in the matrix shows evidence of operations and serves as your first step in addressing the ‘big security data’ problem.  Gaps between what you thought you had deployed and what actually shows up as evidence of operations will provide you with an immediate ‘to-do list’.  

Applying a control framework (such as CIS Top 20, GDPR, or FFIEC) adds depth to each of the intersections by mapping specific security controls to both deployed security products and your client’s assets.  The resultant overlay identifies gaps in your compliance effort and your second ‘to-do list’.  When combined with your operational to-dos, the entire list can be mapped to a 30, 60, 90-day plan of action with key milestones. Wash, rinse and repeat for each of your lines of business or departments, and you now have a path for your journey.
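
Step 1 can be pictured as filling in a functions-by-assets grid and reading the empty cells off as the gap to-do list. The product names in this sketch are made up for illustration; only the matrix idea comes from the text above:

```python
# Sketch of a coverage matrix: NIST CSF functions crossed with asset
# classes. Product names in `deployed` are invented for illustration.

functions = ["identify", "protect", "detect", "respond", "recover"]
assets = ["devices", "applications", "networks", "data", "users"]

# Deployed products mapped to the (function, asset) cells they cover.
deployed = {
    ("identify", "devices"):  "asset-inventory tool",
    ("protect",  "networks"): "firewall",
    ("detect",   "devices"):  "EDR agent",
    ("detect",   "networks"): "IDS",
}

# Empty cells are the gaps -- the immediate to-do list.
gaps = [(f, a) for f in functions for a in assets if (f, a) not in deployed]

print(f"{len(gaps)} of {len(functions) * len(assets)} cells uncovered")
for f, a in gaps[:3]:
    print(f"gap: no {f} coverage for {a}")
```

Layering a control framework over the same grid, as the paragraph above describes, simply adds a second set of requirements per cell; the gap-listing logic stays the same.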

Step 2: Measure your Efficacy 

With security products and processes deployed and more on the way as you move down your path, it is time to measure the effectiveness of each action and ensure its alignment with the business.  Recall that operationalizing the matrix served as the first step in solving the big data challenge by categorizing the data and applying business context through the assets in the matrix and each line of business or department.

Enriched with this context, the security data can now be normalized and analyzed to produce key metrics, or as we called them earlier, KPIs, KRIs and KCIs. Examples include the rate at which new threats or vulnerabilities appear for KRIs, the treatment of symptoms versus root causes for KPIs, or the reduction of defensive workload for KCIs.

With metrics in place, each to-do on your journey can be seen as a resultant change in one or more metrics. What’s more, the value of fixing operational to-dos or implementing a specific control can be measured and communicated specific to the business context it affects. At each milestone on the journey, thresholds for metrics can be set to determine success or identify needed adjustments in the plan.

Final Thought

It’s all about the journey. A successful information security program is not an end-state, but a continually monitored and adjusted compilation of people, process and technologies. Mapping the program’s functions with your client’s assets and required controls provides you the steps needed to mature your program while metrics will keep you honest about how well the program is performing.

Good Security Depends Upon Automation, Analytics and Outsourcing

Joshua Margolin is Principal Analyst at Clutch. He received his BA in Business Communications from the University of New Hampshire, and his MA in Technology & Entrepreneurship from Georgetown University.

OPAQ: Which are the hottest areas within the security tech sector right now in terms of customer demand and innovation?

JM: To set the stage, companies worry most about whether they will be too late in implementing security technology. Another important consideration is the job market, because there isn’t enough cyber security talent to go around. Companies don’t know where they stand from a risk profile standpoint and once they do, many aren’t sure how to address it. There’s going to be less of a demand for security consultants and analysts because more companies will defer to automation solutions for detection, monitoring, privileged access and transparency. The fact that you can subscribe to security services in the cloud means that you don’t need to hire a team of experienced analysts. Our recent survey indicated that 70% of large companies will invest more in cybersecurity technology over the next year.

Another top category is Internet of Things (IoT). Large enterprises have a lot to gain by integrating IoT into their core business. On the consumer side, we are seeing more of these devices all the time – from smart home and car technology to wearables. Companies need to determine whether or not they should invest money in endpoint protections considered outside the traditional realms of interaction.

OPAQ: What types of customers are becoming more interested in cloud or outsourced security services and how do you think this market will evolve?

JM: It makes sense to outsource these activities, especially for smaller companies because it’s so expensive to staff your own team of security experts. Yet before you spend money with any vendor, it’s worth the investment to hire a threat intelligence agency. These companies audit internal data and practices while considering the wider marketplace, all in an effort to determine what threats would most likely be encountered. Companies easily fall into the illusion that technology is the panacea. Not every business requires the same degree of security or even the same approach. It’s also important to remember that at least half of a company’s needs can be addressed by sound policy and effective training. For many companies, hiring a SaaS provider or two is sufficient. With larger project scopes, an MSSP is ideal because they will integrate several complementary SaaS products and manage the vendor relationships.

OPAQ: Earlier this year, both Gartner and IDC predicted 7-8% growth in worldwide IT security spending. How do companies best decide how to use a bigger budget?

JM: It will first depend on what internal expertise they have out of the gate. Any company that has a CSO or CIO has experience and networks to help figure this out. What’s difficult is when a company has no internal IT to rely on. This leaves them at the mercy of vendors’ salesmanship. They might be driven by the fear factor or they might misallocate budget to bring a contractor in-house. This only drives the costs way up. It might offer more peace of mind when compared to outsourcing but then the company is limited by the expertise of any single person. There’s a lot more to gain by tapping into wider talent pools.

OPAQ: Are developers and engineers having a hard time staying abreast of threats and developing the right solutions to counteract new threats and recover from them?

JM: The market for malware and ransomware is booming. There are a lot of talented people out there with malicious intent. These actors are often well financed by corporations or governments and they will find a way in; it’s only a matter of time. Technologists and engineers on the good side are always going to be chasing down the black hat actors. It’s better to be adaptive and react in the nick of time, all made more possible than ever thanks to advances in predictive analytics and artificial intelligence. That’s where the new frontier is for cybersecurity.

Considering Compliance in the Cloud

Gates Marshall is Director of Cyber Services at CompliancePoint. He has many years of experience in information security consulting with expertise across secure architectural design, vulnerability and penetration testing, OWASP, forensics, incident response, GDPR, FISMA, MARS-E, and cryptographic control design and implementation.

OPAQ: What exactly do we mean these days by “cloud compliance” versus other security and compliance topics?

GM: In some respects, there is not a big difference between on-premise and the cloud. HIPAA or PCI standards don’t make special exceptions for the cloud. The rules apply the same everywhere. There are also some cloud-specific compliance solutions out there like CloudeAssurance or CSA Star Certification, which allow organizations to achieve a quantifiable rating on compliance. Yet for a lot of things, being compliant in the cloud is not much different than having a data center somewhere or a colocation provider.

A significant problem is that when people sign on with a cloud service provider (CSP), they sometimes think they are outsourcing the due diligence aspect of compliance. Google, Microsoft and Amazon have a number of certifications, but these are to certify their own services. They are not certifying that their merchants and other customers are compliant in any specific client-level implementation.

OPAQ: There are some differences, though, right?

GM: The way you can configure systems in the cloud is different than a traditional on-premise installation. For instance, take PCI DSS, which is a fairly prescriptive standard for merchants. It calls for a demilitarized zone (DMZ) separate from your LAN to isolate and protect credit card data with a firewall. CSPs may support other mechanisms, like AWS security groups, to provide similar functionality; however, doing so still doesn’t meet all of the compliance requirements for a DMZ. So organizations are using these new cloud services, but they are missing some of the requirements as they relate to architecture controls and/or logical segmentation.
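To make the segmentation point concrete, here is a minimal sketch of how security-group-style rules might be audited for internet exposure of a cardholder data environment (CDE). The rule format, subnet ranges, and names are hypothetical illustrations for this article, not AWS’s actual API:

```python
import ipaddress

# Hypothetical representation of security-group-style inbound rules.
# PCI DSS expects the cardholder data environment (CDE) not to be
# directly reachable from untrusted networks; this sketch flags rules
# that expose an assumed CDE subnet to the whole internet.
CDE_SUBNET = ipaddress.ip_network("10.0.2.0/24")  # assumed CDE address range

rules = [
    {"name": "web-dmz", "dest": "10.0.1.0/24", "source": "0.0.0.0/0", "port": 443},
    {"name": "cde-db",  "dest": "10.0.2.0/24", "source": "0.0.0.0/0", "port": 5432},
    {"name": "cde-app", "dest": "10.0.2.0/24", "source": "10.0.1.0/24", "port": 8443},
]

def violations(rules, cde=CDE_SUBNET):
    """Return names of rules that allow inbound traffic to the CDE from anywhere."""
    bad = []
    for r in rules:
        dest = ipaddress.ip_network(r["dest"])
        source = ipaddress.ip_network(r["source"])
        # A /0 source (0.0.0.0/0) means "any address on the internet".
        if dest.overlaps(cde) and source.prefixlen == 0:
            bad.append(r["name"])
    return bad

print(violations(rules))  # the internet-exposed database rule is flagged
```

The point of the sketch is that a security group can express the same *intent* as a DMZ firewall rule, but an auditor still has to verify the resulting segmentation, it does not come for free.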

OPAQ: How would you describe the level of security and compliance support at the major cloud providers?

GM: They do quite a bit to reduce the burden of compliance. Most of them produce good documentation to declare what we call a service provider controls responsibility matrix. It shows what the provider is doing around compliance, which helps because it both reduces the burden on the customer and makes clear where the customer’s remaining responsibilities begin. Security at the large CSPs has improved a lot, for instance with services like Amazon CloudWatch for monitoring. All the major providers now have good auditing capabilities for the management interface and offer multifactor authentication. These developments give customers more confidence in the cloud.

OPAQ: Is security protection in the cloud as good as or better than an enterprise on-premise environment?

GM: We tend to have an affinity for legacy configurations in the on-premise world. By that I mean we set something up, it works, and we never change it. It’s security via obscurity. When you go through the transformation process to become a cloud-first organization, you need to fix all those legacy issues that were acceptable in the LAN environment. You can’t be so sloppy. Cloud providers may be less secure than on-premise environments, however, because you’re letting someone else manage the Layer 1 infrastructure. The physical addressing, networking, and storage configurations now fall on the CSP. They may have weaknesses that you don’t know about, and the customer has to depend on third-party attestations. Hypervisor hopping has been a concern for a while. If a CSP’s hypervisor technology has a flaw, a malicious actor could jump between different customers’ VM guests through the hypervisor. There aren’t any disclosed examples of this happening, but it’s always a risk in a multi-tenant environment.

OPAQ: Yet most if not all of the massive breaches in recent years have been in on-premise environments, right?

GM: While this is true, many of these breaches could have taken place in the cloud. Equifax had a real problem with inventory because they didn’t have visibility into the software that should have been patched. That scenario could have also occurred with a CSP. Vulnerability management is critical in any implementation. Accenture did have an issue in the cloud recently, which could have been disastrous. In October, it was discovered that the global consulting firm had left an AWS S3 storage location unsecured, leaving over 100GB of customer data accessible without authentication to anyone on the Internet with the correct S3 URL. The same kind of insecure configuration could just as easily occur with on-premise technologies. No matter where your data sits, IT needs to secure the location against exploitable configurations and software flaws.
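As an illustration of the kind of automated check that can catch this class of misconfiguration, here is a hedged sketch that scans an S3-style bucket policy for Allow statements open to any principal. The policy document below is a made-up example for this article, not the actual Accenture configuration:

```python
import json

# Simplified, hypothetical bucket policy in the S3 policy document format.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}
""")

def public_statements(policy):
    """Return the Sids of Allow statements granted to any principal, unconditionally."""
    open_sids = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        # "Principal": "*" and "Principal": {"AWS": "*"} both mean "everyone".
        wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if stmt.get("Effect") == "Allow" and wildcard and "Condition" not in stmt:
            open_sids.append(stmt.get("Sid", "<unnamed>"))
    return open_sids

print(public_statements(policy))  # ['PublicRead']
```

A scan like this is equally applicable on-premise, which is the interviewee’s point: the exploitable artifact is the configuration, not the hosting model.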

OPAQ: Do you foresee more regulation in the area of cloud compliance and security?

GM: Yes. The EU’s General Data Protection Regulation (GDPR) has huge potential to change a lot of things in tech. It goes into enforcement in 2018, and may become a global standard for privacy. GDPR applies to any organization that uses the data of people who are in the EU at the time of data collection. Two key principles of GDPR are that companies and organizations should use data minimization to keep the smallest amount of data possible and use consent mechanisms to ensure they’re authorized to hold or use that data. If you have 10 million customer records, but determine that you only need to keep two million records and purge the rest, your risks go down. If a breach occurs, there is less data loss and lower costs to mitigate the impacts of the loss. Information privacy is the next frontier. The large CSPs realize that if they don’t get in front of this, they will lose business. This will require that CSPs look closely at the leading cyber risk rating mechanisms, and adopt one or two of them. I think we’ll also see more CSPs provide guidance on how to meet global data security and privacy requirements in an effort to help customers help themselves.
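To sketch what the data minimization principle might look like in practice, here is a hypothetical retention rule that splits customer records into keep and purge sets. The record shape, consent flag, and one-year retention window are assumptions made for illustration, not requirements taken from the GDPR text:

```python
from datetime import date, timedelta

# Assumed policy: keep a record only if the subject has active consent
# or there has been activity within the retention window.
RETENTION = timedelta(days=365)
TODAY = date(2018, 5, 25)  # fixed "today" for a reproducible example

records = [
    {"id": 1, "consent": True,  "last_activity": date(2018, 4, 1)},
    {"id": 2, "consent": False, "last_activity": date(2016, 1, 15)},
    {"id": 3, "consent": False, "last_activity": date(2018, 1, 10)},
]

def minimize(records, today=TODAY, retention=RETENTION):
    """Split records into (keep, purge) under the assumed retention rule."""
    keep, purge = [], []
    for r in records:
        if r["consent"] or (today - r["last_activity"]) <= retention:
            keep.append(r)
        else:
            purge.append(r)
    return keep, purge

keep, purge = minimize(records)
print([r["id"] for r in keep], [r["id"] for r in purge])  # [1, 3] [2]
```

Shrinking the retained set this way is exactly the risk reduction described above: fewer records held means less data lost, and lower mitigation cost, if a breach does occur.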