
After writing about the architecture of a modern container platform in general in our first blog article (LINK), we would like to delve deeper into practice with this article. We'll describe a technical solution that helps companies comply with a wide range of legal requirements while remaining flexible and cost-effective.

Strict requirements apply not only to regulated industries such as banking and insurance, but also to telecommunications companies and other sectors. Cloud solutions offer tremendous flexibility and can be more economical than on-premises solutions. However, when it comes to data protection, many organisations find it difficult to implement compliant cloud solutions – and instead forgo them entirely. Our sample architecture consists of both on-premises and cloud components, which can be moved as required. As a result, fewer resources need to be provided locally than in purely on-premises scenarios. At the same time, new nodes can be provisioned faster, without having to make expensive hardware available in the internal data centre.

For our demo, we wrote a simple Python web application, which in our case is checked into a Bitbucket code repository. Jenkins then automatically builds a container image from the code, runs basic test cases, and pushes the image to our private Docker registry. Finally, Jenkins triggers either our cloud or our on-premises Docker Swarm cluster, and Swarm deploys the image to the respective worker nodes.
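The demo application itself can be very small. Our actual app isn't shown in this article, so the following is only a minimal sketch of such a Python web service using just the standard library (the response text and port are assumptions):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


def build_response() -> bytes:
    # Hypothetical payload; the real demo app's content differs.
    return b"Hello from the container demo!"


class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = build_response()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Port 8000 is an arbitrary choice for this sketch.
    HTTPServer(("", 8000), DemoHandler).serve_forever()
```

An app like this is easy to containerise and gives the pipeline something concrete to build, test, and push.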


If the application processes data relevant to data protection, the container is deployed on the on-premises Docker Swarm. If the data isn't critical, the cloud Docker Swarm cluster is used. Jenkins can thus be understood as a kind of switch, controlled by two simple buttons in the Jenkins UI, which decides where the container is allowed to run.
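Conceptually, this "switch" simply maps a data-protection flag to a deployment target. A minimal sketch of that decision logic (the cluster names are hypothetical):

```python
# Hypothetical cluster identifiers; the real names depend on your setup.
ON_PREM = "onprem-swarm"
CLOUD = "cloud-swarm"


def choose_cluster(processes_protected_data: bool) -> str:
    """Return the Swarm cluster a container may be deployed to."""
    # Workloads relevant to data protection must stay on-premises;
    # everything else may run in the cloud cluster.
    return ON_PREM if processes_protected_data else CLOUD
```

In Jenkins, the flag would come from a build parameter (the two buttons), and the chosen cluster determines which Swarm manager the pipeline talks to.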

In our example, we decided to deploy the code repository, registry, and build pipeline on-premises. Of course, it would also be possible to install one component on-premises and one in the cloud, but in our view this only increases complexity. We deliberately chose two separate clusters to show how easy it is to decide via Jenkins which cluster a container image will be deployed to. Besides, many companies already operate several clusters anyway.

Naturally, it would also be possible to implement a single cluster that includes on-premises master and worker nodes as well as cloud worker nodes. Using node labels and placement constraints, you could then prevent a container relevant to data protection from running in the cloud, for example.
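In Docker Swarm, such a rule is expressed with node labels and service placement constraints (e.g. `node.labels.location==onprem`), which the scheduler evaluates itself. The sketch below only models that eligibility check to illustrate the idea; the label names and node names are assumptions:

```python
def node_satisfies(constraints: dict, node_labels: dict) -> bool:
    """Check whether a node's labels satisfy a service's placement constraints."""
    return all(node_labels.get(key) == value for key, value in constraints.items())


def eligible_nodes(constraints: dict, nodes: dict) -> list:
    # Keep only the nodes on which the service is allowed to run.
    return [name for name, labels in nodes.items()
            if node_satisfies(constraints, labels)]


# Hypothetical mixed cluster: two on-prem workers, one cloud worker.
nodes = {
    "worker-1": {"location": "onprem"},
    "worker-2": {"location": "onprem"},
    "worker-3": {"location": "cloud"},
}

# A service handling protected data is pinned to on-prem nodes.
print(eligible_nodes({"location": "onprem"}, nodes))
```

With this approach, a single scheduler enforces the data-protection rule, at the price of running one cluster that spans both environments.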

In the video, you can see how quickly and easily a container can be built and deployed in the cloud swarm, and then moved to the on-premises cluster at the push of a button.

In the next article, we will automatically scale our demo app and dynamically deploy or remove instances. We'll then probably also work with a Kubernetes cluster, so that we can deliberately build demos with different orchestration tools (Swarm, Kubernetes, and OpenShift) and shed light on their pros and cons in due course. As always, feel free to tell us which topic we should write about next.