Server Security Lies Deep in Hardware

  • A new era of cyber warfare aimed at exploiting hardware vulnerabilities is emerging.
  • End-of-Support (EoS) for Windows Server 2008/2008 R2 is fast approaching.
  • As no layer is immune, expect higher IT operating costs.

Replacing servers is often delayed. Confronted with competing business priorities, limited budgets and personnel, and a sense of comfort as current servers reliably hum along, IT leaders find delay easy to rationalize. Yet delays are not without risk and trade-offs. Cases in point are two circumstances that small and midsized enterprises (SMEs) should seriously consider and, in our opinion, act on now.

Those circumstances are:

A new era of cyber warfare aimed at exploiting hardware vulnerabilities is emerging—the sample of published vulnerability disclosures listed below clearly demonstrates the existence of server hardware vulnerabilities and a growing number of attack variants. For SMEs with susceptible servers, cyber and associated business risks are increasing.

  • January 2018: Spectre and Meltdown vulnerabilities reported.
  • August 2018: Foreshadow and Foreshadow-NG vulnerabilities reported.
  • November 2018: Five new attack variants on Spectre and two new attack variants on Meltdown revealed.
  • January 2019: Baseboard Management Controller (BMC) vulnerability reported.

End-of-Support (EoS) for Windows Server 2008/2008 R2 is fast approaching—Reaching EoS on January 14, 2020, SMEs still relying on this server operating system (OS) face a perilous and expensive trade-off, namely:

  • Complimentary provisioning of security updates ends—Complimentary security updates for newly discovered vulnerabilities stop being released through Windows Update.
  • Extra spending becomes the new standard—Security updates remain available for three years after Windows Server 2008/2008 R2 exits its “Extended Support” phase in January 2020, but only for a significant extra fee and a three-year commitment. Also, the updates are limited to security issues rated critical or important. With the industrialized state of cyber warfare, attackers systematically prey on the weak and vulnerable. Declining even these limited paid updates moves your servers closer to the bullseye.
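For planning purposes, the remaining runway before an EoS date reduces to simple date arithmetic. The sketch below uses the January 14, 2020 date cited above; the function name and example date are illustrative, not part of any Microsoft tooling.

```python
from datetime import date

# End of Extended Support for Windows Server 2008/2008 R2, per the
# lifecycle date cited in the text.
EOS_DATE = date(2020, 1, 14)

def days_until_eos(today: date, eos: date = EOS_DATE) -> int:
    """Days of complimentary security updates remaining (negative once past EoS)."""
    return (eos - today).days

# Example: as of January 14, 2019, one year of free updates remained.
print(days_until_eos(date(2019, 1, 14)))  # 365
```

A negative result signals that the server is already past EoS and, absent a paid Extended Security Updates agreement, is no longer receiving fixes.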

IMPLICATIONS OF REMAINING WITH AGING SERVERS RUNNING WINDOWS SERVER 2008, INCLUDING THE “R2” SUCCESSOR

Although the implications of continuing with aging servers and an aging server OS will vary by company, several apply broadly. This section is devoted to those.

As no layer is immune, expect higher IT operating costs

Historically, exploitable vulnerabilities were concentrated in the host operating system and the software layers above it. Hardware vulnerabilities and attack variants, as summarized above, are a more recent development but are growing in number. Consequently, IT operating costs are poised to increase: additional patching of operating systems and hypervisors will be required to mitigate chipset vulnerabilities (e.g., Meltdown), and/or servers will need to be pulled offline to return chipsets to original factory specifications.
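Whether a given server has mitigations in place for these chipset flaws can be checked directly. On Linux, the kernel (since 4.15) reports speculative-execution mitigation status under /sys/devices/system/cpu/vulnerabilities; on Windows, Microsoft's SpeculationControl PowerShell module serves a similar purpose. The sketch below, with hypothetical helper names, parses the Linux interface:

```python
from pathlib import Path

# Sysfs directory where the Linux kernel reports speculative-execution
# mitigation status (one file per vulnerability, e.g. 'meltdown').
SYSFS_VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

def read_mitigation_status(vuln_dir: Path) -> dict:
    """Map each reported vulnerability name to the kernel's status line."""
    if not vuln_dir.is_dir():
        return {}  # interface absent (old kernel, or not Linux)
    return {f.name: f.read_text().strip() for f in sorted(vuln_dir.iterdir())}

def unmitigated(status: dict) -> list:
    """Names whose status line begins with 'Vulnerable' (no mitigation active)."""
    return [name for name, line in status.items() if line.startswith("Vulnerable")]

if __name__ == "__main__":
    status = read_mitigation_status(SYSFS_VULNS)
    for name, line in status.items():
        print(f"{name}: {line}")
    print("Unmitigated:", unmitigated(status) or "none reported")
```

Any entry reported as "Vulnerable" indicates a flaw the running kernel knows about but has not mitigated, which is exactly the patch-or-pull-offline decision point described above.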

Cost of data breaches and security incidents continues to rise

Whether calculated as average total cost or as cost per lost or stolen record, the average cost of a data breach rose from the previous year, according to the Ponemon Institute (now $3.86 million in average total cost, and $148 per record). Moreover, the likelihood of one or more material data breaches within the next 24 months has risen to 27.9%, a demonstration that data-exfiltrating attackers return. Placed into the broader context of all incidents (those that resulted in a data breach and those that did not), retargeting is common: FireEye calculated that nearly two-thirds of its incident response clients were retargeted by the same or a similarly motivated attack group within a 19-month period. Bottom line, the cost of a data breach is rising and is seldom limited to a single incident. And with hardware vulnerabilities adding to the overall attack surface, both the frequency of incidents and the severity of data breaches are poised to increase.

Breach detection remains significantly slower than attackers’ ability to succeed

According to FireEye, the good news is that the time to detect a breach after initial compromise (i.e., dwell time) has fallen precipitously, from 416 days in 2011 to 78 days in 2018 (for comparison, Ponemon pegs the mean time to identify a data breach at 197 days). While this is laudable progress, the bad news is that attackers are typically more than 10x faster. According to Nuix, professional penetration testers and incident responders (groups that approximate criminal hackers in expertise and effectiveness) stated that in 80% of their attempts they overcame perimeter defenses, identified valuable data, and exfiltrated that data in less than two days. For additional comparison, 85% of the data breaches investigated by FireEye had dwell times considerably longer than two days: a minimum of one week and up to several years. Now, considering the relative newness of hardware vulnerabilities and attack variants, indicators of compromise (IoCs) may not be as well known or identifiable (the clues will likely be more elusive). These circumstances add to dwell time and to the business implications of data breaches and other security incidents.

Downtime is costly and backup potentially risky

Server hardware vulnerabilities inject unknowns into whether aging servers can remain online, powering business without exposing the business to intolerable risk. Unknowns include:

  • How quickly and comprehensively will hardware vulnerabilities be identified?
  • Can identified vulnerabilities be remediated with software patches?
  • How quickly will software patches become available?
  • Will IT staff be available to apply remediation?
  • And more disconcerting with hardware vulnerabilities, will on-site remediation even be possible?

MIGRATION TO THE CLOUD IS NOT ALWAYS THE OPTIMAL CHOICE

Migrating workloads to the public cloud (i.e., Infrastructure-as-a-Service) is a growing trend. Even so, Frost & Sullivan’s research shows that, while hosting workloads in the public cloud is one of many options that SMEs use, the cloud is not always the optimal choice. To demonstrate, the following series of survey-based findings illustrates IT decision-makers’ views on on-premises versus cloud-hosted workloads. For comparison purposes, survey responses are segmented by IT decision-makers employed at businesses with ‘Less than $30 million’ in annual revenues (a proxy for SMEs) versus ‘More than $30 million’.

Indicative of a hybrid IT model, a variety of deployment options are currently used for hosting workloads. Additionally, workload deployment plans for the next two years foretell similar diversity. In other words, the hybrid IT model will continue to be prominent into the foreseeable future.

Another cloud consideration is that returning a workload from the cloud to a business-managed environment (i.e., repatriation) is common. Forty-three percent of the ‘Less than $30M’ cohort repatriated workloads versus 48% of the ‘More than $30M’ cohort. For the ‘Less than $30M’ cohort, the reasons for workload repatriation span compliance, security, and operations (see next chart). Reflecting its higher repatriation rate and greater IaaS adoption, the ‘More than $30M’ cohort’s percentages were also higher for each of the same repatriation reasons.

REASONS WORKLOADS WERE REPATRIATED—TOP 5 REASONS FOR THE ‘LESS THAN $30M’ COHORT

This collection of findings funnels to a conclusion that an ‘all on-premises’ or ‘all cloud’ deployment model is unlikely for most SMEs. Moreover, a full migration to the cloud by SMEs with established on-premises footprints is even less likely. As previously shown, cloud migration issues and missed expectations offer a cautionary tale. Nevertheless, SMEs want choice in order to optimally match business objectives with deployment options. To that end, we believe a hybrid IT model will continue to grow in prominence, with workloads shifting back and forth among hosting locations (i.e., fluidity). Supporting fluidity, OS compatibility between on-premises servers and cloud is critical and is one of the recommended server features listed in the next section.

RECOMMENDED SECURE SERVER FEATURES

To assist in reaching a qualified server modernization decision, we recommend evaluating your server options based on the following built-secure features.

  • Immutable Authenticity Assurance
  • Authoritative Alerts
  • Simple Recovery to Trusted State
  • Built Compliant
  • Native Data-at-Rest Protection

Enhanced security is not the sole objective in server replacement. There should also be a step-up in performance and agility. Additionally, the relationship between the server hardware and server OS should deliver a synergistic “one plus one equals three” windfall. With this in mind, we recommend examining:

  • The server’s ability to fine tune performance to workloads
  • The server OS’s complementary features in security, storage, and virtualization
  • Cloud compatibility of the server OS
