Midrange systems architecture

Introduction
The midrange platform is positioned between the mainframe platform and the x86 platform. The size and cost of the systems, the workloads they can handle, their availability and performance, and the maturity of the platform are all higher than those of the x86 platforms, but lower than those of a mainframe.

Today midrange systems are produced by three vendors:

  • IBM produces the Power Systems series of midrange servers (the former RS/6000, System p, AS/400, and System i series).
  • Hewlett-Packard produces the HP Integrity systems.
  • Oracle produces SPARC servers, originally developed by Sun Microsystems.

Midrange systems are typically built using parts from only one vendor, and run an operating system provided by that same vendor. This makes the platform relatively stable, leading to high availability and security.


History
The term minicomputer evolved in the 1960s to describe the small computers that became possible with the use of integrated circuit (IC) and core memory technologies. Small was relative, however: a single minicomputer was typically housed in a few cabinets the size of a 19” rack.

The first commercially successful minicomputer was DEC’s PDP-8, launched in 1965. The PDP-8 sold for one-fifth the price of the smallest IBM 360 mainframe. This enabled manufacturing plants, small businesses, and scientific laboratories to have a computer of their own.

In the late 1970s, DEC produced another very successful minicomputer series called the VAX. VAX systems came in a wide range of models. They could easily be set up as a VAXcluster for high availability and performance.

DEC was the leading minicomputer manufacturer and the second largest computer company (after IBM). DEC was sold to Compaq in 1998; Compaq in turn became part of HP in 2002.

Minicomputers became powerful systems that ran full multi-user, multitasking operating systems like OpenVMS and UNIX. Halfway through the 1980s minicomputers became less popular as a result of the lower cost of microprocessor-based PCs, and the emergence of LANs. In places where high availability, performance, and security are very important, minicomputers (now better known as midrange systems) are still used.
Most midrange systems today run a flavor of UNIX, OpenVMS, or IBM i:

  • HP Integrity servers run HP-UX UNIX and OpenVMS.
  • Oracle/Sun’s SPARC servers run Solaris UNIX.
  • IBM's Power systems run AIX UNIX, Linux, and IBM i.

Midrange systems architecture
Midrange systems used to be based on specialized Reduced Instruction Set Computer (RISC) CPUs. These CPUs were optimized for speed and simplicity, but many of the technologies originating from RISC are now implemented in general-purpose CPUs. Some midrange systems are therefore moving from RISC-based CPUs to general-purpose CPUs from Intel, AMD, or IBM.

Most midrange systems use multiple CPUs and are based on a shared memory architecture. In a shared memory architecture all CPUs in the server can access all installed memory blocks. This means that changes made in memory by one CPU are immediately seen by all other CPUs. Each CPU operates independently from the others. To connect all CPUs with all memory blocks, an interconnection network is used, based on either a shared bus or a crossbar.

A shared bus connects all CPUs and all RAM, much like a network hub does. The available bandwidth is shared between all users of the shared bus. A crossbar is much like a network switch, in which every communication channel between one CPU and one memory block gets full bandwidth.
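The practical difference between the two interconnects can be illustrated with a toy model. The bandwidth figures below are made up purely for illustration; real interconnect bandwidths vary widely per system.

```python
# Toy model contrasting a shared bus with a crossbar interconnect.
# The 10 GB/s figure is hypothetical, not taken from any real system.

def effective_bandwidth_shared_bus(total_bw_gbs: float, active_cpus: int) -> float:
    """On a shared bus, the total bandwidth is divided among all
    CPUs that are communicating at the same time."""
    return total_bw_gbs / active_cpus

def effective_bandwidth_crossbar(link_bw_gbs: float, active_cpus: int) -> float:
    """On a crossbar, every CPU-to-memory channel gets the full
    link bandwidth, independent of how many other channels are active."""
    return link_bw_gbs

if __name__ == "__main__":
    for cpus in (1, 4, 8):
        bus = effective_bandwidth_shared_bus(10.0, cpus)
        xbar = effective_bandwidth_crossbar(10.0, cpus)
        print(f"{cpus} active CPUs: bus {bus:.2f} GB/s each, crossbar {xbar:.2f} GB/s each")
```

The model shows why a shared bus stops scaling beyond a handful of CPUs, while a crossbar keeps per-channel bandwidth constant at the cost of more complex hardware.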

The I/O system is also connected to the interconnection network, connecting I/O devices like disks or PCI based expansion cards.

Since each CPU has its own cache, and memory can be changed by other CPUs, cache coherence is needed in midrange systems. Cache coherence means that if one CPU writes to a location in shared memory, all other CPUs must update their caches to reflect the changed data. Maintaining cache coherence introduces a significant overhead. Special-purpose hardware is used to communicate between cache controllers to keep a consistent memory image.
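The behavior of that coherence hardware can be sketched in a few lines of code. This is a minimal write-invalidate model (the idea behind protocols such as MESI), not a description of any specific vendor's implementation; all names and addresses are illustrative.

```python
# Minimal sketch of write-invalidate cache coherence: when one CPU
# writes to shared memory, the matching cache lines in all other
# CPUs' caches are invalidated, so their next read fetches fresh data.

class Cache:
    def __init__(self):
        self.lines = {}                    # address -> cached value

    def read(self, memory, addr):
        if addr not in self.lines:         # cache miss: fetch from memory
            self.lines[addr] = memory[addr]
        return self.lines[addr]

    def invalidate(self, addr):
        self.lines.pop(addr, None)         # drop the stale line, if cached

class CoherentSystem:
    """Plays the role of the special-purpose coherence hardware:
    a write updates main memory and invalidates the line everywhere else."""
    def __init__(self, num_cpus):
        self.memory = {}
        self.caches = [Cache() for _ in range(num_cpus)]

    def write(self, cpu, addr, value):
        self.memory[addr] = value
        self.caches[cpu].lines[addr] = value
        for i, cache in enumerate(self.caches):
            if i != cpu:
                cache.invalidate(addr)

    def read(self, cpu, addr):
        return self.caches[cpu].read(self.memory, addr)

if __name__ == "__main__":
    system = CoherentSystem(num_cpus=2)
    system.write(0, 0x10, 1)
    print(system.read(1, 0x10))    # CPU 1 sees the value written by CPU 0
    system.write(1, 0x10, 2)
    print(system.read(0, 0x10))    # CPU 0's stale line was invalidated
```

The overhead mentioned above comes from the invalidation traffic: every write potentially triggers messages to every other cache controller.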

Shared memory architectures come in two flavors: Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA). Their cache coherent versions are known as ccUMA and ccNUMA.

UMA
The UMA architecture is one of the earliest styles of multi-CPU architectures, typically used in servers with no more than 8 CPUs. In a UMA system the machine is organized into a series of nodes containing either a processor or a memory block. These nodes are interconnected, usually by a shared bus. Via the shared bus, each processor can access all memory blocks, creating a single system image.

[Figure: UMA architecture]

UMA systems are also known as Symmetric Multi-Processor (SMP) systems. SMP is used in x86 servers as well as early midrange systems.

SMP technology is also used inside multi-core CPUs, where the interconnect is implemented on-chip and a single path connects the chip to the memory subsystem elsewhere in the system.

[Figure: SMP on a multi-core CPU]

UMA is supported by all major operating systems and can be implemented using most of today’s CPUs.

NUMA
In contrast to UMA, NUMA is a server architecture in which the machine is organized into a series of nodes, each containing processors and memory, that are interconnected, typically using a crossbar. NUMA is a newer architecture style than UMA and is better suited for systems with many processors.

[Figure: NUMA architecture]

A node can use memory on all other nodes, creating a single system image. But when a processor accesses memory not within its own node, the data must be transferred over the interconnect, which is slower than accessing local memory. Thus, memory access times are non-uniform, depending on the location of the memory, as the architecture’s name implies.
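The non-uniformity can be captured in a tiny model. The latency numbers below are invented for illustration; real local-to-remote ratios differ per system and per interconnect hop count.

```python
# Toy model of NUMA memory access time: the cost of an access
# depends on whether the memory lives in the CPU's own node.
# Both latency figures are hypothetical.

LOCAL_LATENCY_NS = 100    # access to memory within the CPU's own node
REMOTE_LATENCY_NS = 300   # access over the interconnect to another node

def access_time_ns(cpu_node: int, memory_node: int) -> int:
    """Return the modeled access time; non-uniform because it
    depends on where the memory is located."""
    if cpu_node == memory_node:
        return LOCAL_LATENCY_NS
    return REMOTE_LATENCY_NS

if __name__ == "__main__":
    print(access_time_ns(0, 0))   # local access: fast
    print(access_time_ns(0, 1))   # remote access: slower
```

This is why NUMA-aware operating systems and hypervisors try to schedule a process on the node that holds its memory: keeping accesses local avoids the interconnect penalty.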

Some of the current servers using NUMA architectures include systems based on AMD Opteron processors, Intel Itanium systems, and HP Integrity and Superdome systems. Most popular operating systems, such as OpenVMS, AIX, HP-UX, Solaris, and Windows, as well as virtualization hypervisors like VMware, fully support NUMA systems.


This entry was posted on Friday 25 September 2015


Disclaimer

The postings on this site are my opinions and do not necessarily represent CGI’s strategies, views or opinions.

 

Copyright Sjaak Laan