DevOps for infrastructure

DevOps is a contraction of the terms "development" and "operations". DevOps teams consist of developers, testers, and application systems managers, and each team is responsible for developing and running one or more business applications or services.

The whole team is responsible for developing, testing, and running their application(s). When an incident occurs with an application under the team's responsibility, every member of the DevOps team is responsible for helping to fix the problem. The DevOps philosophy is “You build it, you run it”.

While DevOps is typically used for teams developing and running functional software, the same philosophy can be applied to developing and running an infrastructure platform that functional DevOps teams can use. In an infrastructure DevOps team, infrastructure developers design, test, and build the infrastructure platforms and manage their lifecycle, while infrastructure operators keep the platform running smoothly, fix incidents, and apply small changes.


This entry was posted on Friday 06 January 2017

Infrastructure as a Service (IaaS)

Infrastructure as a Service provides virtual machines, virtualized storage, virtualized networking and the systems management tools to manage them.

[Figure: Infrastructure as a Service]

IaaS is typically based on cheap, commodity, white-label hardware. The philosophy is to keep costs down by accepting that hardware fails every now and then. Failed components are either replaced or simply removed from the pool of available resources.

IaaS provides simple, highly standardized building blocks to applications. It does not guarantee high levels of availability, performance, or security. Consequently, applications running on IaaS should be robust enough to tolerate failing hardware and should scale horizontally to increase performance.

In order to use IaaS, users must create and start a new server, and then install an operating system and their applications. Since the cloud provider only provides basic services, like billing and monitoring, the user is responsible for patching and maintaining the operating systems and application software.
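To make this concrete, the sketch below assumes an AWS account and the boto3 Python library; the AMI ID, key pair name, and instance type are hypothetical placeholders. It creates and starts a single virtual server with one API call; installing applications and patching the operating system remain the user's responsibility.

    import boto3  # AWS SDK for Python; other IaaS providers offer similar APIs

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create and start one virtual server (the AMI ID and key name are placeholders)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # operating system image chosen by the user
        InstanceType="t3.micro",           # small commodity-sized virtual machine
        KeyName="my-keypair",              # SSH key for logging in afterwards
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

    # From here on, patching the OS and installing applications is the user's job.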

Not all operating systems and applications can be used in an IaaS cloud; many software licenses prohibit use in a fully scalable, virtualized environment like IaaS, where it is impossible to know in advance on which machines software will run.


This entry was posted on Friday 28 October 2016

(Hyper) Converged Infrastructure

In a traditional infrastructure deployment, compute, storage and networking are deployed and managed independently, often based on components from multiple vendors. In a converged infrastructure, the compute, storage, and network components are designed, assembled, and delivered by one vendor and managed as one system, typically deployed in one or more racks. A converged infrastructure minimizes compatibility issues between servers, storage systems and network devices while also reducing costs for cabling, cooling, power and floor space.

A converged infrastructure is usually difficult to expand on demand; adding new resources typically requires deploying another rack of infrastructure. The following picture shows an example of a converged system.

[Figure: Example of a converged system]


While a converged infrastructure is deployed as individual components in a rack, a hyperconverged infrastructure (HCI) brings the same components together within a single server node.

A hyperconverged infrastructure comprises a large number of identical physical servers from one vendor with direct attached storage in the server and special software that manages all servers, storage, and networks as one cluster running virtual machines.

A hyperconverged infrastructure is easy to expand on demand by adding servers to the cluster. The following picture shows an example of a hyperconverged system.

[Figure: Example of a hyperconverged system]

Hyperconverged systems are an ideal candidate for deploying VDI environments (see section 12.3.3), because the storage is close to the compute (it is in the same box) and the solution scales well as the number of users grows.

A big advantage of converged and hyperconverged infrastructures is that there is only one vendor to deal with for firmware and software. Vendors of hyperconverged infrastructures provide all updates for compute, storage, and networking in one service pack, and deploying these patches is typically much easier than upgrading all individual components in a traditional infrastructure deployment.

Drawbacks of converged and hyperconverged infrastructures are:

  • Vendor lock-in – the solution is only beneficial if all infrastructure is from the same vendor
  • Scaling can only be done in fixed building blocks – if more storage is needed, compute must also be purchased. This can have a side effect: since some software licenses are based on the number of used CPUs or CPU cores, adding storage also means adding CPUs and hence leads to extra license costs.

This entry was posted on Friday 21 October 2016

Object storage

Object storage is a storage architecture that manages data as objects, where an object is defined as a file together with its metadata and a globally unique identifier called the object ID.

Examples of metadata are the filename, date and time stamps, owner, access permissions, the level of data protection, and replication settings (for instance, replication to a different geography).

Object storage stores and retrieves data using a REST API over HTTP, served by a web server, and is designed to be highly scalable.

Where a traditional file system provides a structure that simplifies locating files (for example, a log file is stored in /var/log/proxy/proxy.log), in object storage, a file’s object ID must be administered by the application using it. Using the object ID, the object can be found without knowing the physical location of the data. For example, an application has administered that its log file is stored in object ID 8932189023.

Using object IDs enables simplicity and massive scalability of the storage system, as the object ID is a link to an object that can be stored anywhere.
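As an illustration, the sketch below assumes an S3-compatible object store and the boto3 Python library; the endpoint, bucket name, and key are hypothetical, and in S3 the role of the object ID is played by the combination of bucket and key. The application records the identifier when storing the object and later retrieves the object using only that identifier, without knowing where the data physically resides.

    import boto3  # works against Amazon S3 or any S3-compatible object store

    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

    # Store the log data as an object; the application administers the identifier itself
    s3.put_object(Bucket="app-logs", Key="8932189023", Body=b"proxy log contents")

    # Later, retrieve the object using only that identifier
    obj = s3.get_object(Bucket="app-logs", Key="8932189023")
    print(obj["Body"].read())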

Data in object storage can’t be modified in place. Instead, if a file is modified, the original file must be deleted and a new file created, leading to a new object ID. This makes object storage unsuitable for frequently changing data, but a good fit for data that doesn't change much, like backups, archives, video and audio files, and virtual machine images.

Object storage allows for high availability using commodity servers with direct attached disk drives. It can be set up to replicate objects across multiple servers and locations (typically, at least three copies of every file are stored in multiple geographical zones). If one or more servers or disks fail, data can still be made available, without impact to the application or the end user.

While object storage was not designed to be used as a file system, some systems emulate a file system on top of object storage. For instance, S3FS creates a virtual file system based on Amazon S3 object storage that can be mounted to an operating system in the traditional way, albeit with significant performance degradation. A much better solution is to use object storage with applications designed for it.


This entry was posted on Friday 07 October 2016

Software Defined Networking (SDN) and Network Function Virtualization (NFV)

Software Defined Networking (SDN) is a relatively new concept. It allows networks to be defined and controlled using software external to the physical networking devices.

With SDN, a relatively simple physical network can be programmed to act as a hierarchical, complex, and secured virtual network that can easily be changed without touching the physical network components.

An SDN can be controlled from a single management console and open APIs can be used to manage the network using third party software. This is particularly useful in a cloud environment, where networks change frequently as machines are added or removed from a tenant’s environment. With a single click of a button or a single API call, complex networks can be created within seconds.
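For example, in a public cloud the virtual network itself is created through such API calls. The sketch below assumes an AWS environment and the boto3 Python library, with hypothetical address ranges; it creates a virtual network (VPC) with a subnet in a few calls.

    import boto3  # example against AWS; other clouds and SDN controllers offer similar APIs

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create a virtual network (VPC) and a subnet in it; the address ranges are placeholders
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("Created network", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])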

SDN works by decoupling the control plane and the data plane from each other, such that the control plane resides centrally while the data plane (the physical switches) remains distributed, as shown in the next figure.

[Figure: Software Defined Networking (SDN)]

In a traditional switch or router, the network device dynamically learns packet forwarding rules and stores them locally in ARP or routing tables. In an SDN, the distributed data plane devices forward network packets based on ARP or routing rules that are loaded into the devices by an SDN controller in the central control plane. This allows the physical devices to be much simpler and more cost effective.
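The following toy sketch (plain Python, not a real controller API) illustrates this division of labor: a central controller computes forwarding rules and installs them in simple data plane switches, which afterwards forward packets by nothing more than a table lookup.

    # Illustrative toy only: a central controller pushes forwarding rules to simple switches.
    class Switch:
        def __init__(self, name):
            self.name = name
            self.forwarding_table = {}   # destination -> output port, filled by the controller

        def install_rule(self, destination, port):
            self.forwarding_table[destination] = port

        def forward(self, destination):
            # The data plane only does a table lookup; all intelligence lives in the controller
            return self.forwarding_table.get(destination, "drop")

    class Controller:
        def __init__(self, switches):
            self.switches = switches

        def program_network(self, topology):
            # topology: {switch_name: {destination: output port}}, computed centrally
            for switch in self.switches:
                for destination, port in topology.get(switch.name, {}).items():
                    switch.install_rule(destination, port)

    sw1, sw2 = Switch("sw1"), Switch("sw2")
    Controller([sw1, sw2]).program_network({"sw1": {"10.0.1.5": 2}, "sw2": {"10.0.1.5": 7}})
    print(sw1.forward("10.0.1.5"))  # -> 2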

 

Network Function Virtualization

In addition to SDN, Network Function Virtualization (NFV) is a way to virtualize networking devices like firewalls, VPN gateways, and load balancers. Instead of having a hardware appliance for each network function, in NFV these appliances are implemented as virtual machines running software that performs the network functions.

Using APIs, NFV virtual appliances can be created and configured dynamically and on demand, leading to a flexible network configuration. It makes it possible, for instance, to deploy a new firewall as part of a script that creates a number of connected virtual machines in a cloud environment, as shown in the sketch below.
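As a minimal sketch, assuming an AWS environment and the boto3 Python library (the VPC ID and address range are hypothetical placeholders), a simple firewall-like function, here an AWS security group acting as a basic packet filter, can be created and configured from a script:

    import boto3  # example against AWS; a full NFV firewall appliance would run as a virtual machine

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Create a firewall-like network function and allow only HTTPS traffic into it
    sg = ec2.create_security_group(
        GroupName="web-firewall",
        Description="Allow HTTPS only",
        VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )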


This entry was posted on Friday 23 September 2016



