Windows Servers and Storage – Choosing Your Server

  • When choosing a physical server, you must determine the degree of scalability you need.
  • The advent of x86 virtualization is changing the way organizations select their physical servers.
  • It makes economic sense for manufacturers to offer composable systems, which can be configured to meet a broad spectrum of needs.

A physical server is such a fundamental component of an information-processing system that its operational state directly affects an organization’s ability to run the business. Selecting, installing and implementing your servers are critical challenges. When deciding which servers to invest in, first carefully consider your proposed use. Start by making a thorough list of the applications your organization needs to run properly. Then talk to the likely users of those applications. The applications themselves can generally be put into two broad categories:

  • “Services” applications provide infrastructure services such as email, Internet access, intranet facilities and, when needed, extranet capability.
  • “Domain-Specific” applications handle business-oriented work. These applications will usually be commercial products that are externally acquired or custom applications developed specifically for the company.

Installation and System Testing

When you add a newly acquired physical server to an existing system’s setup, there are two key actions: installation and testing. Yes, the manufacturer will have conducted basic tests of the hardware and software. But you must exhaustively test the system that’s running your organization’s applications.
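
As one illustration of what that testing can include, the sketch below checks that each critical application answers on the new server. It is only a starting point, and the host names, ports and service list are placeholders for your own environment:

    import socket

    # Placeholder list of services the new server is expected to provide;
    # replace the host names and ports with your own applications.
    SERVICES = {
        'email (SMTP)': ('mail.example.internal', 25),
        'intranet (HTTP)': ('intranet.example.internal', 80),
        'line-of-business app': ('erp.example.internal', 8443),
    }

    def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in SERVICES.items():
        status = 'OK' if is_reachable(host, port) else 'FAILED'
        print(f'{name:<22} {host}:{port}  {status}')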

However, if your new system is intended to replace rather than supplement an existing system, you face additional work. How much work to expect depends on whether the new system is completely compatible with the old one.

If the systems are compatible, the test process should pay special attention to storage. Here you have two options: Reuse the old storage on the new system (after a complete backup) or create a complete backup of the data on the current system and restore it to the new system.

For instance, there are two types of data representation (referring to byte order): Little Endian and Big Endian. Little Endian stores the low-order byte of a number at the lowest memory address and the high-order byte at the highest address; Big Endian stores the high-order byte at the lowest address. Systems based on the Intel x86 architecture use Little Endian, while many RISC-based systems use Big Endian. Some programs are sensitive to data representation, which makes this important to test.
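
The difference is easy to see with Python's standard struct module; the short sketch below packs the same (arbitrary) 32-bit value both ways:

    import struct
    import sys

    value = 0x12345678  # an arbitrary 32-bit integer

    # '<' forces little-endian packing, '>' forces big-endian packing.
    little = struct.pack('<I', value)
    big = struct.pack('>I', value)

    print(little.hex())   # 78563412 -> low-order byte at the lowest address (x86 style)
    print(big.hex())      # 12345678 -> high-order byte at the lowest address (many RISC systems)

    # The native ordering of the machine running the test:
    print(sys.byteorder)  # 'little' on Intel x86 hardware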

The initial testing of the new server also provides an excellent opportunity to check that the operating procedures, especially backup and restore, are in good shape.
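
One simple way to exercise those procedures is to compare checksums of the original data and the restored copy. The sketch below assumes two hypothetical mount points and reads whole files into memory, so it is only suited to modest data sets:

    import hashlib
    from pathlib import Path

    def checksum_tree(root: Path) -> dict:
        """Return a {relative path: SHA-256 digest} map for every file under root."""
        return {
            str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in sorted(root.rglob('*')) if path.is_file()
        }

    # Hypothetical mount points for the original storage and the restored copy.
    original = checksum_tree(Path('/mnt/old_storage'))
    restored = checksum_tree(Path('/mnt/new_storage'))

    if original == restored:
        print('Restore verified: all files match.')
    else:
        missing = original.keys() - restored.keys()
        changed = {p for p in original.keys() & restored.keys() if original[p] != restored[p]}
        print(f'Mismatch: {len(missing)} files missing, {len(changed)} files changed.')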

Scalability Requirements

When choosing a physical server, you must determine the degree of scalability you need. In this context, scalability means the extent to which you can change the resources and performance of the system to match the organization’s growing needs without having to resort to a complete system replacement. Such adaptability can apply along several dimensions, such as data-handling capacity (processing power), main memory size and LAN or WAN network connections.

Server scale-up and scale-out are not mutually exclusive.

  • Scale-up, or vertical growth, adds processors to a symmetric multiprocessing (SMP) system.
  • Scale-out, or horizontal growth, adds extra individual servers (or nodes) to a cluster or grid system.
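
A quick back-of-the-envelope comparison makes the distinction concrete. The core and node counts below are purely illustrative, and real workloads rarely scale perfectly linearly:

    cores_per_node = 8      # illustrative starting point: one 8-core SMP server

    # Scale-up: grow the same SMP box, e.g. to 32 cores in one shared memory image.
    scaled_up_cores = 32

    # Scale-out: keep 8-core nodes and add more of them to a cluster or grid.
    nodes = 4
    scaled_out_cores = cores_per_node * nodes

    print(f'Scale-up : 1 node x {scaled_up_cores} cores = {scaled_up_cores} cores, one memory image')
    print(f'Scale-out: {nodes} nodes x {cores_per_node} cores = {scaled_out_cores} cores, memory split across nodes')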

Understanding Basic Server Architecture

Architecture is a fuzzy term used in several contexts. It generally refers to structural principles. For example, the instruction set architecture of a computer defines how the computer interprets memory contents when executing them as a program. Any given system may have its architecture described at many different levels, including the instruction set architecture (ISA), I/O, virtual memory and interconnects.

A server exists to provide services to its clients. Workloads vary and an architecture designed to efficiently support one class of applications could be very different from what another class may need.

Generally, the innermost high-performance interconnects are proprietary to the vendor and tuned for a specific purpose. Interconnects further out toward the edge of the system are more likely to adhere to standards.

Given an appropriate collection of interconnects and subsystems, it becomes possible to build systems of many different shapes and sizes, varying the mix of processors, memory, storage and connectivity.

VLSI and Interconnects

A key benefit of the physics behind scaling silicon technology to ever-smaller dimensions is that, with reduced size, you generally obtain higher performance while lowering power demands. Smaller transistors take less energy to switch states. However, problems occur when you need to connect these ever-shrinking transistors to actual wires to transfer data between chips that are some distance apart.

The energy needed to drive signals out of a chip package, across a board and into a distant chip does not scale with transistor technology. Therefore, larger transistors must be used for the interchip connections than are used for internal logic.

Furthermore, the transistors needed to handle the logical activities in the interface occupy a silicon area that shrinks with every generation of chip. The driver devices, however, do not shrink, and thus consume an increasing share of the chip's area and power budget.

Another challenge is keeping signals on many wires in sync with each other. When you are signaling at very high frequency, minor differences in wire length can produce significant differences in time of arrival. Connections that hook up multiple chips, such as classical buses, require even bigger transistors.

Physical and Virtual Workloads

The advent of x86 virtualization—the ability to run software that emulates a physical machine’s properties—is changing the way organizations select their physical servers. Many are relying on this technology to transform their current x86 physical workloads into virtual workloads through a process called physical to virtual (P2V) migration.

Organizations commonly discover that the resources of their physical servers are underused, and they rely on virtualization to increase utilization of those servers: one physical server can run multiple virtual workloads.

Because of this, server selection focuses on the ability to run the virtualization software or hypervisor engine. Applications are contained within the virtual machines that run on top of the hypervisor. Applications no longer dictate the selection of hardware since they address hardware through the emulation capabilities of the virtualization engine.

Most x86 virtualization technologies require x64 processors, which provide access to larger amounts of memory, and processors that include hardware-assisted virtualization capabilities. Host servers (physical servers running the virtualization engine) require vast amounts of memory and multiple processor cores to support the many virtual workloads that organizations choose to run on them.
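
As a quick way to check for those hardware-assist capabilities, the sketch below assumes a Linux host, where CPU feature flags are exposed in /proc/cpuinfo; on Windows the same information is reported by tools such as systeminfo. It looks for the Intel VT-x (vmx) or AMD-V (svm) flag:

    from pathlib import Path

    def has_hardware_virtualization() -> bool:
        """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm) support.

        Assumes a Linux host, where CPU feature flags appear in /proc/cpuinfo.
        """
        for line in Path('/proc/cpuinfo').read_text().splitlines():
            if line.startswith('flags'):
                flags = line.split(':', 1)[1].split()
                return 'vmx' in flags or 'svm' in flags
        return False

    if has_hardware_virtualization():
        print('Hardware-assisted virtualization is available on this processor.')
    else:
        print('No vmx/svm flag found; check the BIOS/UEFI settings or the processor model.')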

Physical Server Architectures

It makes economic sense for manufacturers to offer composable systems that an IT person can configure to meet a broad spectrum of needs simply by plugging together the right subsystems. A key consideration when selecting a physical server is the amount of computing or processing power needed. Because there are strict limits on the performance available from a single processor, an important factor in composability is the number of processors and processor cores deployed in a system. Ways in which multiple processors can be deployed within a system include:

  • Symmetric multiprocessing (SMP): This method arranges processors so each sees all system memory and all I/O, allowing programs to run on any processor (or processor core) and access all system resources.
  • Clustering: This technique builds the system by connecting separate, independent computer subsystems, each having its own processor, memory, storage and I/O. Clustered servers do not share workloads.
  • Grid Computing: This variant of clustering uses real, separate computers as its building blocks and a LAN or WAN as the system interconnect. Grid computing systems share workloads.

Each of these three approaches has its own strengths and weaknesses; each is appropriate for different application classes.
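
To make the SMP case concrete, the toy sketch below (illustrative only, not a deployment pattern) shows what shared memory buys you: every worker thread on one machine can read and write the same data structure, whereas cluster or grid nodes have private memory and must exchange data explicitly over the network:

    import os
    from concurrent.futures import ThreadPoolExecutor

    # SMP-style: worker threads run on one machine and see the same shared memory,
    # so they can all append results to a single list.
    shared_results = []

    def work(chunk_id: int) -> None:
        shared_results.append(f'chunk {chunk_id} processed in process {os.getpid()}')

    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        pool.map(work, range(8))

    print(len(shared_results), 'chunks completed in one shared address space')

    # In a cluster or grid, each node is a separate computer with its own memory;
    # the equivalent program would have to ship chunks and results over a LAN/WAN
    # (for example via sockets or a message queue) instead of touching a shared list.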
