
Blog

DevOps, Agile, Software Development, Networking, Azure, Terraform, CI/CD, … there’s a lot to blog about! Stay informed and subscribe to my RSS feed!

2018


GitLab Kubernetes Integration with RBAC enabled

·2 mins
Officially, GitLab doesn’t support RBAC-enabled Kubernetes clusters yet, but with some manual configuration it is possible to integrate an RBAC-enabled Kubernetes cluster into GitLab.
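In short, the manual configuration boils down to something like this (the service account and binding names are just my examples):

```bash
# Create a service account for GitLab and grant it cluster-admin
kubectl create serviceaccount gitlab --namespace kube-system
kubectl create clusterrolebinding gitlab-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:gitlab

# Print the service account's token; paste it, together with the API
# server URL and CA certificate, into GitLab's Kubernetes settings
SECRET=$(kubectl get serviceaccount gitlab -n kube-system \
  -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -n kube-system \
  -o jsonpath='{.data.token}' | base64 --decode
```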

Docker on a CentOS 7 machine with an XFS filesystem can cause trouble when d_type is not supported

·2 mins
I try to automate almost everything. I use Docker to containerize in-house developed software and run these containers on CentOS 7 machines. When you’re using a modern CentOS 7 version, the XFS filesystems are configured correctly, with d_type support enabled. But when you run Docker containers on an older CentOS 7 installation, d_type support may be disabled, causing a lot of trouble when you chown and chmod files in a container: files are not found, are skipped, and so on.
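A quick way to check whether your machine is affected (adjust the path to where your XFS filesystem is mounted):

```bash
# Ask Docker whether the storage backend supports d_type
docker info | grep 'Supports d_type'

# Or inspect the XFS filesystem directly: ftype=1 means d_type works
xfs_info /var/lib/docker | grep ftype

# A filesystem created with ftype=0 cannot be converted in place; it
# has to be recreated, destroying all data on it (device is an example):
# mkfs.xfs -n ftype=1 /dev/sdX
```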

Backup Percona XtraDB Cluster or Galera Cluster with TwinDB

·3 mins
Database servers and clusters should be backed up regularly to prevent data loss when an error or disaster occurs. You can back up database servers logically using mysqldump, but you can also back up databases physically using Percona XtraBackup. XtraBackup can run full and incremental backups, stream them, and compress and encrypt them. TwinDB simplifies the use of XtraBackup and will automatically back up your Percona XtraDB Cluster on an hourly basis.
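For reference, a minimal XtraBackup run looks roughly like this (credentials and paths are examples):

```bash
# Full physical backup of a running MySQL/Percona server
xtrabackup --backup --user=backup --password=secret \
  --target-dir=/data/backups/full

# Prepare the backup so it is consistent and ready to restore
xtrabackup --prepare --target-dir=/data/backups/full
```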

Highly available MySQL database cluster to eliminate your next SPOF

·8 mins
In highly available production environments, such as a Software-as-a-Service cloud platform, you have to minimize downtime as much as possible. In most cases an application needs at least a database server, and if that database server becomes unavailable, the application stops working. The database server is then your most critical single point of failure (SPOF) to resolve. Percona XtraDB Cluster can help you eliminate this SPOF by setting up a master-master HA cluster.
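As a minimal sketch of the Galera settings involved, assuming a three-node cluster (addresses, the cluster name, and the library path are examples):

```bash
# Minimal Galera settings for one node of a three-node cluster
cat <<'EOF' > /etc/my.cnf.d/wsrep.cnf
[mysqld]
wsrep_on                 = ON
wsrep_provider           = /usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name       = my-pxc-cluster
wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_address       = 10.0.0.1
wsrep_sst_method         = xtrabackup-v2
binlog_format            = ROW
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
EOF

# Bootstrap the first node, then start the others normally
systemctl start mysql@bootstrap.service   # first node only
systemctl start mysql                     # remaining nodes
```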

Extend Kubernetes Persistent Volumes

·2 mins
In Kubernetes you can use Persistent Volumes to add persistent storage to your Docker containers. When creating a Persistent Volume (Claim), you have to configure a storage type and capacity. When your application becomes successful and your data outgrows that capacity, you have to either extend the volume or create a new Persistent Volume. The latter isn’t feasible in a production environment, but extending a Persistent Volume isn’t supported out of the box in Kubernetes either. There is a solution, though: extending the volume outside Kubernetes.
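Roughly, the idea looks like this, assuming an LVM-backed volume with an XFS filesystem (all names and sizes are examples):

```bash
# 1. Grow the backing volume on the storage side, outside Kubernetes
lvextend -L +10G /dev/vg0/pv-data
xfs_growfs /exports/pv-data

# 2. Tell Kubernetes about the new size by patching the
#    PersistentVolume object
kubectl patch pv pv-data -p '{"spec":{"capacity":{"storage":"20Gi"}}}'
```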

Install CFSSL and CFSSLJSON - CloudFlare's PKI toolkit

·2 mins
CFSSL is CloudFlare’s PKI/TLS toolkit: a command-line tool and an HTTP API server for signing, verifying, and bundling TLS certificates. Its companion tool cfssljson takes the JSON output of cfssl and writes the certificate, key, CSR, and bundle files to disk. This post shows how to install both tools.
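Installation essentially comes down to downloading two static binaries (the release URL and version below are examples; check CloudFlare’s releases for current ones):

```bash
# Download the cfssl and cfssljson binaries and put them on the PATH
curl -L -o /usr/local/bin/cfssl \
  https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -L -o /usr/local/bin/cfssljson \
  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

# Verify the installation
cfssl version
```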

2017


Kubernetes cluster on OpenStack - Part 1

·5 mins
For the last few weeks I have been working with Kubernetes and OpenStack. Getting a production-ready Kubernetes cluster running on OpenStack is a steep learning curve, especially because I didn’t want to use the available ready-to-use tools. In the next few blog posts, I want to share my experience of running Kubernetes on an OpenStack platform.

Request an API (bearer) token from GitLab JWT authentication to control your private Docker registry

·4 mins
My continuous integration and continuous delivery pipeline uses Docker containers and a private Docker registry to distribute and deploy my applications automatically. Unfortunately, the Docker command-line tool can’t really manage a Docker registry; it is only capable of pushing and pulling images (tags). This is a bit frustrating because, when you’re using your continuous integration pipeline to build containers, push them to the registry, and pull them again to run the QA, the registry will eat up all your disk space, since images are never removed. To clean up your ‘mess’, you have to remove the images manually, but it’s way cooler (and simpler) to use the Docker registry API for this job.
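The flow looks roughly like this (hostnames, credentials, and the repository path are examples):

```bash
# Request a bearer token from GitLab's JWT endpoint for the registry
TOKEN=$(curl -s -u "user:personal-access-token" \
  "https://gitlab.example.com/jwt/auth?service=container_registry&scope=repository:group/project:pull,push,delete" \
  | jq -r '.token')

# Use the token against the Docker registry HTTP API, e.g. to list tags
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://registry.example.com/v2/group/project/tags/list"
```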

Rancher and Rancher Compose command-line tools

·3 mins
For a DevOps engineer like me, command-line tools are essential for automating things. Rancher provides two CLI tools: rancher and rancher-compose. I use these two tools to automate deployments, upgrades, and cleanup when environments are no longer in use. Basically, it is possible to completely automate your continuous integration (CI) and continuous deployment (CD) pipeline (but that is something for another blog post!).
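For a taste, a deploy-and-cleanup sketch with Rancher 1.x-era tooling (the stack name and file names are examples; check your CLI version’s flags):

```bash
# Deploy or upgrade a stack with rancher-compose
rancher-compose --project-name my-stack \
  --file docker-compose.yml --rancher-file rancher-compose.yml \
  up -d --upgrade --confirm-upgrade

# Inspect running services, then remove the stack when done
rancher ps
rancher rm --type stack my-stack
```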

Persistent storage in Docker containers using Rancher-NFS

·3 mins
The number one challenge when using Docker in production environments is storage. On your local development machine you can mount a local directory into your Docker container, but in production this isn’t an option, because you don’t know whether the path exists or whether the container will always run on the same node. A solution is NFS, and when using Rancher, Rancher-NFS.
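A rough sketch of what that looks like in a Rancher 1.x compose file (the service and volume names are examples):

```bash
# docker-compose.yml with a named volume on the rancher-nfs driver;
# Rancher creates the volume on the NFS storage service for you
cat <<'EOF' > docker-compose.yml
version: '2'
services:
  web:
    image: nginx:stable
    volumes:
      - webdata:/usr/share/nginx/html
volumes:
  webdata:
    driver: rancher-nfs
EOF
```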