Performance concepts - Load balancing

To make optimal use of a horizontally scaled system, some form of load balancing is typically used to spread the load over multiple machines.

Load balancing uses multiple servers in a system that perform identical tasks, together known as a server farm. Examples are a web server farm, a mail server farm, or an FTP server farm. A load balancer automatically distributes tasks among the members of the server farm: it checks the current load on each server in the farm and routes each incoming request to the least busy server.
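The least-busy strategy described above can be sketched as follows. This is a minimal illustration, not a production load balancer; the server names and the idea of counting in-flight requests per server (a "least connections" policy) are assumptions for the example.

```python
class LeastBusyBalancer:
    """Route each incoming request to the server with the fewest active requests."""

    def __init__(self, servers):
        # Track the number of in-flight requests per server in the farm.
        self.active = {server: 0 for server in servers}

    def acquire(self):
        # Pick the server with the lowest current load and assign the request to it.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Call this when the server has finished handling the request.
        self.active[server] -= 1

balancer = LeastBusyBalancer(["web1", "web2", "web3"])
first = balancer.acquire()   # all servers idle, so the first server is chosen
second = balancer.acquire()  # a different, still-idle server is chosen next
```

Real load balancers measure load in more sophisticated ways (response times, CPU load reported by agents), but the selection principle is the same.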

A load balancer also increases availability: when a server in the server farm is unavailable, the load balancer notices this and ensures no requests are sent to that server until it is back online. Of course, the availability of the load balancer itself becomes critical in this setup.
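A sketch of this behaviour, under the assumption of a simple round-robin policy with a health-check mechanism that marks servers up or down (the server names and method names are hypothetical):

```python
class HealthAwareBalancer:
    """Round-robin balancer that skips servers marked as unavailable."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(servers)  # kept up to date by periodic health checks
        self.index = 0

    def mark_down(self, server):
        # Called when a health check fails for this server.
        self.healthy.discard(server)

    def mark_up(self, server):
        # Called when the server passes its health check again.
        self.healthy.add(server)

    def next_server(self):
        # Walk the ring until a healthy server is found.
        if not self.healthy:
            raise RuntimeError("no healthy servers in the farm")
        while True:
            server = self.servers[self.index % len(self.servers)]
            self.index += 1
            if server in self.healthy:
                return server

balancer = HealthAwareBalancer(["web1", "web2", "web3"])
balancer.mark_down("web2")  # health check for web2 failed
# requests are now routed to web1 and web3 only, until web2 is marked up again
```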

It is also important to realise that server load balancing introduces new challenges. The servers must be functionally identical: each web server in a load balanced setup, for instance, must have access to the same information. Furthermore, the application running on a load balanced system must cope with the fact that each request can be handled by a different server; in other words, the application must be stateless for this to work.

A typical example is a web application asking the user for a username and password. When the form is sent to the user by web server number one, but the reply (the filled-in form) is routed by the load balancer to web server number two, the web application must be able to handle this. If it cannot, the load balancer must be made more intelligent: it must track application state and route all requests belonging to one session to the same server, a technique known as session affinity or sticky sessions.
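One common way to implement session affinity is to hash a session identifier (for example, a session cookie) to a fixed server, so every request in the same session lands on the same machine. A minimal sketch, assuming a stable list of servers and a session ID taken from the request:

```python
import hashlib

def sticky_server(session_id, servers):
    """Map a session ID to a fixed server so that all requests in the
    same session are handled by the same machine (session affinity)."""
    digest = hashlib.sha256(session_id.encode()).digest()
    return servers[int.from_bytes(digest[:8], "big") % len(servers)]

servers = ["web1", "web2", "web3"]
# Every request carrying the same session cookie hits the same server.
assert sticky_server("user-42-session", servers) == sticky_server("user-42-session", servers)
```

Note that hashing to a fixed server trades away some load balancing quality and, as the next paragraph observes, ties a session's fate to one server.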

Of course, if a server in the server farm goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost.

In the network realm, load balancing is used to spread network traffic over multiple network connections.

For instance, most network switches support port trunking, also known as link aggregation. In such a configuration, multiple Ethernet connections are combined into one virtual Ethernet connection providing higher throughput. For example, a network switch can trunk three 100 Mbit/s Ethernet connections into one virtual 300 Mbit/s connection; the switch then balances the traffic over the three physical lines.
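On the server side, the same idea is available as interface bonding. As a hedged illustration only: on a Linux host, three interfaces could be aggregated with LACP (IEEE 802.3ad) using iproute2 commands like the following. The interface names are hypothetical, and the switch ports must be configured for link aggregation as well.

```shell
# Create a bond device using LACP (802.3ad) mode.
ip link add bond0 type bond mode 802.3ad

# Enslave three physical interfaces to the bond (links must be down first).
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond0
ip link set eth3 down; ip link set eth3 master bond0

# Bring up the aggregated virtual interface.
ip link set bond0 up
```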

In storage systems, multiple connections are also common, not only to increase the bandwidth of the connections, but also to increase availability.


This entry was posted on Tuesday 17 May 2011

Disclaimer

The postings on this site are my opinions and do not necessarily represent CGI’s strategies, views or opinions.

 

Copyright Sjaak Laan