In recent weeks, the news has regularly reported on hackers who manage to steal data from public cloud environments, such as a recent hack at Amazon Web Services (AWS). The most common cause is not the security of the public cloud itself, but the way customers have set up their IT environments in the public cloud and how they maintain them. An automated setup of cloud environments is essential to prevent security issues. In this blog I show you what cloud suppliers offer in terms of security, what is expected of customers, and where things tend to go wrong.
Cloud suppliers put a lot of effort into security
Using a cloud supplier has a number of benefits compared to running your own data center: you don't need to make large upfront investments, you need fewer specialised administrators, and the cloud supplier takes care of the physical security of the data center and the setup of the cloud platform.
Because cloud providers deliver IT environments to a large number of customers from a wide variety of sectors (from finance to healthcare), they have to meet the most stringent requirements. They demonstrate their high level of security by ensuring that their data centers and practices meet the security standards generally accepted in the industries their customers operate in, such as PCI DSS (financial sector) and HIPAA (healthcare). By comparison, for most organizations it is impractical and very costly to have such external audits performed on their own environments. Cloud providers can afford this thanks to their economies of scale.
Security is a key issue for cloud providers
The market share, the number of data centers, and the number of services of the three largest cloud providers - Google Cloud, AWS, and Microsoft Azure - are huge. They operate data centers around the world with many hundreds of thousands of servers, large-scale storage, and highly complex network environments. They have large in-house teams for specific aspects of security, such as network security, encryption, identity & access management (IAM), and logging and monitoring. And because each cloud vendor serves a large number of customers, every customer benefits from the knowledge gained from all the others.
Cloud suppliers also have a lot at stake when it comes to security - they only have a right to exist if there is no reason to question their security. If they make a mistake, they risk losing their credibility as a reliable partner.
A secure setup of the cloud is crucial
The underlying cloud platform may have a high level of security, but the design of cloud environments is the responsibility of the organization (the customer) itself, and a customer may configure its security incompletely or incorrectly. To help their customers, major cloud providers offer tools and automated services that reduce the risk of errors.
By default, services in the public cloud are well protected. But in order to make use of services, they must be made accessible. And here things often go wrong. There are two reasons for this:
- The design of a cloud environment is different from what one is used to in an on-premises environment.
- A minor error can have very serious consequences - with a single click the customer environment can be made readable by the entire internet.
Setting up a cloud environment requires specialist knowledge. It really is a completely different platform than the traditional on-premises landscape, with many configuration options and new best practices. Customers must learn to master this new platform.
If a configuration error is made in such an environment without sufficient expertise, the effects can have a much greater impact than in an on-premises environment. For example, if the security of Amazon's S3 object storage is not configured properly, your data may become publicly accessible. A configuration error in an on-premises environment often has less far-reaching consequences.
Tools and automation are a great help
Fortunately, cloud providers offer a number of tools and services to reduce the risk of configuration errors, and some problems can even be resolved automatically. For example, cloud providers provide scripts that can be launched automatically to shut down globally readable data storage and then alert you. I would strongly recommend using the available tooling as much as possible, so that when a configuration error does occur, you are informed as soon as possible.
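To make this concrete, below is a minimal sketch of such an automated check: a function that scans storage bucket ACLs and flags any bucket readable by the entire internet. The grant structure is modeled loosely on Amazon S3 ACLs, but the bucket names, the data format, and the function itself are illustrative assumptions, not a real cloud provider API.

```python
# Group URIs that, when granted access in an S3-style ACL, make a
# bucket readable by everyone on the internet (or every AWS account).
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_buckets(bucket_acls):
    """Return the names of buckets whose ACL grants access to everyone.

    bucket_acls maps a bucket name to a list of grants; each grant is
    a dict with a 'grantee' URI and a 'permission' such as 'READ'.
    """
    public = []
    for name, grants in bucket_acls.items():
        for grant in grants:
            if grant["grantee"] in PUBLIC_GROUPS:
                public.append(name)
                break
    return public

# Illustrative ACLs: one private bucket, one accidentally public one.
acls = {
    "internal-reports": [
        {"grantee": "arn:aws:iam::123456789012:root",
         "permission": "FULL_CONTROL"},
    ],
    "website-assets": [
        {"grantee": "http://acs.amazonaws.com/groups/global/AllUsers",
         "permission": "READ"},
    ],
}

print(find_public_buckets(acls))  # flags the world-readable bucket
```

In a real environment, a check like this would run on a schedule or on every configuration change, and either alert an administrator or remove the offending grant automatically.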
In addition, it is of the greatest importance not to make manual changes in a cloud environment, but to make use of as much automation as possible. By using templates and scripts, components in the cloud can be implemented automatically and unambiguously.
Securing the public cloud is a shared responsibility
Public cloud environments have a very high security standard, in which access to cloud components is locked down by default. It is the responsibility of the cloud provider to keep its platform secure, but it is the responsibility of the organization to ensure the security of the configuration of the cloud. The security of the public cloud is a shared responsibility. Be aware that a small error in the cloud can have far-reaching consequences, use the available tools, and automate as much as possible; after all, scripts do not make mistakes, but people do.
This blog first appeared (in Dutch) on the CGI site.
This entry was posted on Thursday 17 January 2019
Until recently, most servers, storage, and networks were configured manually. Systems managers installed operating systems from an installation medium, added libraries and applications, patched the system to the latest software versions, and configured the software for that specific installation. This approach, however, is slow, error prone, and not easily repeatable; it introduces variances between server configurations that should be identical, and it makes the infrastructure very hard to maintain.
As an alternative, servers, storage, and networks can be created and configured automatically, a concept known as infrastructure as code.
The figure above shows the infrastructure as code building blocks. Tools to implement infrastructure as code include Puppet, Chef, Ansible, SaltStack, and Terraform. The process to create a new infrastructure component is as follows:
- Standard templates are defined that describe the basic setup of infrastructure components.
- Configurations of infrastructure components are defined in configuration definitions.
- New instances of infrastructure components can be created automatically by a creation tool, using the standard templates. This leads to a running, unconfigured infrastructure component.
- After an infrastructure component is created, the configuration tool automatically configures it, based on the configuration definitions, leading to a running, configured infrastructure component.
- When the new infrastructure component is created and configured, its properties, such as its DNS name and whether it is part of a load balancer pool, are automatically stored in the configuration registry.
- The configuration registry allows running instances of infrastructure to recognize and find each other and ensures all needed components are running.
- Configuration definition files and standard templates are kept in a version control system, which enables roll backs and rolling upgrades. This way, infrastructure is defined and managed the same way as software code.
The point of using configuration definition files and standard templates is not only that an infrastructure deployment can easily be implemented and rebuilt, but also that the configuration is easy to understand, test, and modify. Infrastructure as code ensures all infrastructure components that should be equal, are equal.
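The flow described above can be illustrated with a toy example: a standard template, a configuration definition, a creation step, a configuration step, and a configuration registry. All names and data structures here are illustrative; real tools such as Terraform, Puppet, or Ansible work at a much higher level of abstraction.

```python
# A toy simulation of the infrastructure-as-code flow.

TEMPLATE = {"type": "server", "os": "linux", "cpus": 2}      # standard template
CONFIG_DEFINITION = {"role": "webserver", "port": 443}       # configuration definition
registry = {}                                                # configuration registry

def create_instance(name, template):
    """Creation tool: returns a running but unconfigured component."""
    return {"name": name, **template, "configured": False}

def configure_instance(instance, definition):
    """Configuration tool: applies the configuration definition."""
    instance.update(definition)
    instance["configured"] = True
    return instance

def register(instance):
    """Store the component's properties in the configuration registry,
    so other running instances can find it."""
    registry[instance["name"]] = {
        "dns": instance["name"] + ".example.com",
        "role": instance["role"],
    }

# Create, configure, and register a new component from the template.
server = configure_instance(create_instance("web-01", TEMPLATE),
                            CONFIG_DEFINITION)
register(server)
print(registry["web-01"]["dns"])  # → web-01.example.com
```

Because `TEMPLATE` and `CONFIG_DEFINITION` are plain data, they can live in version control alongside application code, which is exactly what makes rollbacks and repeatable rebuilds possible.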
This entry was posted on Thursday 18 May 2017
In 2017, the third edition of my book on infrastructure architecture, "Infrastructure Architecture - Infrastructure Building Blocks and Concepts", was published.
Abstract
IT infrastructure has been the foundation enabling successful application deployments for many decades. Yet general and up-to-date infrastructure knowledge is not widespread. Experience shows that software developers, system administrators, and project managers often have little knowledge of the big influence IT infrastructure has on the performance, availability, and security of software applications.
This book explains the concepts, history, and implementation of IT infrastructures. Although many books can be found on each individual infrastructure building block, this is the first book to describe all of them: datacenters, servers, networks, storage, operating systems, and end user devices.
The building blocks described in this book provide functionality, but they also provide the non-functional attributes performance, availability, and security. These attributes are explained on a conceptual level in separate chapters, and more specifically in the chapters on each individual building block.
Whether you need an introduction to infrastructure technologies, a refresher course, or a study guide for a computer science class, you will find that the presented building blocks and concepts provide a solid foundation for understanding the complexity of today’s IT infrastructures.
This book can be used as a study book – it is used by a number of universities in the USA, as part of their IT architecture courses, based on the IS 2010.4 curriculum.
Download the Table of Contents.
A preview of the book can be downloaded here.
How to order
Hardcover ISBN 978-1-326-91297-0
eBook ISBN 978-1-326-92569-7
Hardcover: 446 pages
Note to the Third Edition
In the third edition of this book, a number of corrections were made, some terminology is explained in more detail, and several typos and syntax errors were fixed. In addition, the following changes were made:
- The infrastructure model was updated to reflect the Networking-Storage-Compute terminology used by most vendors today, and to emphasize the position of systems management.
- The chapter on infrastructure trends was removed; its text was blended into the other chapters.
- The amount of text on the historic context for each building block was reduced.
- The Virtualization chapter and Server chapter were combined and renamed to Compute.
- The storage chapter was reorganized to reflect the new storage building block model.
- The chapter on Security was rearranged and updated.
- Part IV on infrastructure management was added, with chapters on the infrastructure lifecycle, deployment options, assembling and testing, running the infrastructure, systems management processes, and decommissioning.
- In various parts of the book, new cloud technology concepts were added, like Software Defined Networking (SDN), Software Defined Storage (SDS), Software Defined Datacenters (SDDC), Infrastructure as a Service (IaaS), infrastructure as code, and container technology.
- A chapter was added explaining the infrastructure purchase process, as this is part of the IS 2010.4 curriculum.
- All footnotes were converted to endnotes.
- The index was renewed.
- Finally, as technology advanced in the past years, the book was updated to contain the most recent information.
Course Material
The book is used in a number of universities in the USA, Australia, Chile, and Kuwait, as study material for their IT infrastructure courses. The book is especially suited for courses based on the IS 2010.4 curriculum. A reference matrix of the IS 2010.4 curriculum topics (as used in many universities in the USA) and the relevant sections in this book is provided in the appendix.
Based on requests from university professors, I created a set of course materials. It contains all pictures used in the book in both Visio and high-resolution PNG format, the list of abbreviations, a PowerPoint slide deck for each chapter (715 slides in total), and a set of test questions per chapter (204 questions in total).
The course materials can be downloaded here. Read the course setup first in the Excel sheet "Course setup".
----------------------------------------------
Previous Edition (Second Edition)
While the third edition is more up to date than the previous version, for those who want to keep using the second edition, it is still available from the following bookstores:
- From my publisher Lulu.com the book can be ordered in hardcover paper format.
- From the Apple iTunes Bookstore (for the iPad).
- From Barnes and Noble the book can be ordered as an eBook (NookBook).
Hardcover ISBN 978-1-291-25079-5
eBook ISBN 978-1-291-25682-6
Some course material of the second edition can be found here.
This entry was posted on Tuesday 31 January 2017
DevOps is a contraction of the terms "developer" and "system operator". DevOps teams consist of developers, testers and application systems managers, and each team is responsible for developing and running one or more business applications or services.
The whole team is responsible for developing, testing, and running their application(s). In case of incidents with the applications under their responsibility, every member of the DevOps team is responsible for helping to fix the problem. The DevOps philosophy is "You build it, you run it".
While DevOps is typically used for teams developing and running functional software, the same philosophy can be used to develop and run an infrastructure platform for functional DevOps teams to build on. In an infrastructure DevOps team, infrastructure developers design, test, and build the infrastructure platforms and manage their lifecycle, while infrastructure operators keep the platform running smoothly, fix incidents, and apply small changes.
This entry was posted on Friday 06 January 2017
Infrastructure as a Service (IaaS) provides virtual machines, virtualized storage, virtualized networking, and the systems management tools to manage them.
IaaS is typically based on cheap commodity white-label hardware. The philosophy is to keep costs down by accepting that hardware will fail every now and then. Failed components are either replaced or simply removed from the pool of available resources.
IaaS provides simple, highly standardized building blocks to applications. It does not guarantee high availability, performance, or security levels. Consequently, applications running on IaaS should be robust enough to cope with failing hardware and should be horizontally scalable to increase performance.
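This "let it fail" philosophy can be sketched in a few lines: a pool of servers from which failed machines are simply removed, while work is spread horizontally over whatever remains. The class, the node names, and the placement scheme are all illustrative assumptions, not a real IaaS API.

```python
class ResourcePool:
    """A toy pool of commodity servers; failed nodes are dropped, not repaired."""

    def __init__(self, servers):
        self.healthy = set(servers)

    def mark_failed(self, server):
        # A failed machine is not fixed in place; it is removed
        # from the pool of available resources.
        self.healthy.discard(server)

    def place_request(self, request_id):
        # Horizontal scaling: spread requests over the servers
        # that are still alive.
        servers = sorted(self.healthy)
        if not servers:
            raise RuntimeError("no healthy servers left")
        return servers[request_id % len(servers)]

pool = ResourcePool(["node-a", "node-b", "node-c"])
pool.mark_failed("node-b")   # hardware failure: drop the node
print(pool.place_request(0), pool.place_request(1))
```

The application keeps serving requests after the failure because it was built to scale across interchangeable machines, which is exactly the robustness IaaS demands of its workloads.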
In order to use IaaS, users must create and start a new server, and then install an operating system and their applications. Since the cloud provider only provides basic services, like billing and monitoring, the user is responsible for patching and maintaining the operating systems and application software.
Not all operating systems and applications can be used in an IaaS cloud; many software licenses prohibit the use of a fully scalable, virtual environment like IaaS, where it is impossible to know in advance on which machines the software will run.
This entry was posted on Friday 28 October 2016