
Containers - What does a modern enterprise container environment look like?


When the trend towards virtualisation arose, many companies were sceptical at first. In modernisation projects, operations teams and stakeholders still had to be convinced that applications could be deployed on virtual servers instead of requiring dedicated hardware. Over time and with growing operational experience, however, the default changed. Nowadays, no one needs to be convinced of virtualisation anymore. On the contrary, today it is the use of a dedicated server that has to be justified by specific requirements.

We are currently experiencing a similar situation with the use of container technologies. The advantages are obvious, such as:

  • Compared to classic VMs, containers are very lean, saving resources on the virtualisation hosts
  • Containers can run in multiple instances and can be booted up very quickly
  • Container images can easily be migrated between environments or systems without any compatibility issues arising

Nevertheless, containers are still the exception, especially among large clients. Many companies have gained experience in initial PoCs, but comprehensive adoption is far from complete. Why is that? In our experience, many companies underestimate the complexity of such a lasting change. Simply building a container platform is not enough to produce measurable improvements. Instead, the way an IT department works has to change. The entire supply chain needs to be examined, from the developer who writes the code to the administrator responsible for operating an application.

In this blog series, we'll demonstrate what a modern container platform can look like and which processes are the prerequisite for safe operation.

So, what does an optimal container platform look like? Firstly, it's not about which products are used. Product selection depends on many factors and would lead the article away from its core purpose, namely to provide an overview of the composition of a container platform. Nevertheless, we mention a few example products to give a first impression.

In our view, a hybrid container architecture makes the most sense: hybrid environments allow the most flexible response to business and compliance requirements. Temporarily scaling a service is probably cheaper to implement in the cloud, while compliance rules may well require on-premises operation of another service. Most companies have already selected a primary cloud provider. For the container platform described here, it doesn't matter which one: all major cloud providers, such as Azure, AWS and Google, support container technologies.

We'd now like to have a closer look at the individual components of the architecture:

Repository
The developed code is checked in to an on-premises or cloud repository, tested and managed in different builds. Common repositories include GitHub, Bitbucket and GitLab, among others. The repository doesn't yet contain a container image, but the code itself, which is managed according to standard software development methods.

Build pipeline
In order to create a container from the checked-in code, it must be transformed into an image. This task is carried out by the build pipeline. Using input files such as a Dockerfile, the container image is created and tested, either manually or automatically. A finished image is stored in a registry. For an automated build pipeline, products such as Jenkins, Codeship or Bamboo are possible.
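The input file mentioned above is typically a Dockerfile. As a sketch, here is a minimal one for a hypothetical Node.js service; the base image, port and file names are placeholder assumptions, not part of any specific platform:

```dockerfile
# Hypothetical example: package a small Node.js service as an image.
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application code itself
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

The layered structure means that unchanged steps are served from the build cache, which is one reason container builds are so fast.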
Alongside orchestration, the build pipeline is certainly the most important component and brings about the biggest change in the container world. Developing a build pipeline takes time, but it pays off within a short period thanks to shorter test and deployment times. The high degree of automation also improves quality.
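To make the build pipeline more concrete, here is a hedged sketch of a declarative Jenkinsfile with build, test and push stages. The registry URL, image name and credentials ID are illustrative assumptions only:

```groovy
// Hypothetical declarative Jenkins pipeline: build, test and push an image.
// registry.example.com, myapp and 'registry-creds' are placeholders.
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the test suite inside the freshly built image
                sh 'docker run --rm registry.example.com/myapp:${BUILD_NUMBER} npm test'
            }
        }
        stage('Push to registry') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'registry-creds',
                        usernameVariable: 'REG_USER', passwordVariable: 'REG_PASS')]) {
                    sh 'echo "$REG_PASS" | docker login registry.example.com -u "$REG_USER" --password-stdin'
                    sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
                }
            }
        }
    }
}
```

Tagging each image with the build number keeps every pipeline run traceable back to a specific code state.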

Registry
The registry contains container images and makes them available to the container engine. Registries can be publicly available or restricted to a specific group of users. Likewise, you can opt for a cloud provider or operate a registry locally. As with the other components, you're spoiled for choice: Docker Hub, Nexus or AWS ECR can serve as a registry, among others.

Container engine
The container engine is the heart of the container platform. It provides the interface, executes commands, and is where the containers ultimately run. The market standard is certainly the Docker engine.

Orchestration
In order to operate containers reliably and in a fail-safe way, the orchestration layer determines which containers run where, with how many instances, and so on. Here, too, there are numerous products, such as Kubernetes, OpenShift or Docker Swarm. These typically distinguish between master and worker nodes, which, with Docker Swarm for example, can also mix Windows and Linux hosts. Alternatively, managed cloud services such as Azure AKS are available.
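With Kubernetes as the orchestrator, the decision "which image, how many instances" is expressed declaratively. The following is a minimal sketch of a Deployment manifest; the image reference and application name are assumed placeholders:

```yaml
# Hypothetical Kubernetes Deployment: run three instances of an image
# and let the orchestrator replace failed containers automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3          # desired number of running instances
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 3000
```

The orchestrator continuously reconciles the actual state against this declaration, which is exactly the fail-safe behaviour described above.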

Systems management
Management systems for the components listed above are essential in an enterprise environment. Most hosts will be Linux-based, even if it's fundamentally possible to run a Docker swarm on Windows hosts, for example. Systems such as Satellite are therefore needed to patch the environment. Equally important is Role-Based Access Control (RBAC), i.e. who is allowed to do what. Most companies are likely to use an Active Directory environment, which must also be connected to the container platform accordingly. Monitoring, logging and backup/restore should not be forgotten either, of course.
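How RBAC and a directory service come together can be sketched with Kubernetes' built-in RBAC objects. The namespace and group name below are assumptions; in practice the group would be mapped from Active Directory via the cluster's identity provider:

```yaml
# Hypothetical RBAC example: a namespaced Role that only allows
# reading pods, bound to a directory group named "ops-team".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: read-pods-ops-team
subjects:
  - kind: Group
    name: "ops-team"   # e.g. mapped from an AD group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```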

If you want to successfully introduce and operate a container platform, you are spoiled for choice. Besides the questions of which products to use and how working methods change, know-how usually has to be built up, too. With the use of container technologies, established work processes change rapidly: the classic infrastructure admin has to deal with components from software development, and everything is implemented and fully automated through scripts. This is a transition not only for Windows administrators. The use of containers is ideally suited to a DevOps team that builds, operates and further develops the platform. Numerous vendors frequently release new functionality, which is a great advantage on the one hand but requires a rethink of how IT teams are organised on the other.

In our following articles, we will describe how to build a hybrid container platform (on-premises and cloud).


Friday, 15 November 2019