As part of my journey to “cloud” computing I built a service that uses MySQL, and in preparation for the initial deployment I set myself the following constraints:
- Deploy in containers
- Be able to tolerate the failure of some “VMs”
- Be able to grow/replace storage without downtime
Containers
There are pre-made mariadb:10.1 containers, but to avoid relying on a public registry I used the Microsoft Azure Container Service to upload my container. The integration with the standard docker tools for building and uploading containers just worked, and it also gives me a place to keep modified containers.
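For reference, pushing a locally built image with the standard docker tooling looks roughly like this (the registry name below is a placeholder, not the one I actually use):

```sh
# Log in to the private registry (registry name is a placeholder)
docker login myregistry.azurecr.io

# Tag the locally built image and push it to the private registry
docker tag mariadb:10.1 myregistry.azurecr.io/mariadb:10.1
docker push myregistry.azurecr.io/mariadb:10.1
```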
Cluster
With Azure it doesn’t seem possible to resize (grow) a volume online, and if I ever want to switch from ext4 to xfs (or zfs?) I need some form of fault-tolerant MySQL so I can take a node out and upgrade it. These days MariaDB 10.1 includes Galera support, and besides some systematic issues (which I don’t seem to run into, as I have little to no transactions) it seems quite easy to set up.
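The Galera-related part of the MariaDB configuration is small. A minimal sketch (cluster name and node addresses are placeholders, not my actual values) looks something like this:

```ini
# Minimal Galera settings for MariaDB 10.1 -- names and addresses are placeholders
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=mycluster
wsrep_cluster_address=gcomm://10.0.0.4,10.0.0.5,10.0.0.6
```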
Fault tolerance
Fault tolerance comes in a couple of flavors. Galera is a multi-master database where the cluster continues to accept writes as long as a majority of nodes is active. If I start with three nodes, I can take one out of the cluster for maintenance.
Kubernetes will reschedule a pod/container to a different machine (“agent”) in case one becomes unhealthy, and it exposes the Galera cluster through a LoadBalancer with a single IPv4 address. This means only active members of the cluster will be contacted.
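A sketch of such a Service (name and labels are placeholders, not necessarily what I use):

```yaml
# LoadBalancer Service in front of the Galera pods; only pods matching the
# selector (and passing their health checks) receive traffic
apiVersion: v1
kind: Service
metadata:
  name: galera
spec:
  type: LoadBalancer
  selector:
    app: galera
  ports:
  - port: 3306
    targetPort: 3306
```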
The last part is provided by Microsoft Azure’s availability sets. Distributing the agents into different zones should prevent all of them from going down at the same time during maintenance.
So in theory this looks quite nice, only practice will tell how this will play out.
Set-up
After having picked Microsoft Azure, Kubernetes and Galera, it was time to set them up. I started with an example found here. I had to remove some labels to make it work with the current format, moved the container to mariadb:10.1 and modified the default config.
I had to look around a bit to figure out how to get persistent storage. I am directly mounting the disk into the pod; an alternative is a persistent volume claim, which might be the better approach.
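As a sketch, the direct mount looks roughly like the following, using the azureDisk volume type as one way to express it (disk name, URI and pod name are placeholders); a PersistentVolumeClaim would decouple the pod from the concrete disk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-node1
  labels:
    app: galera
spec:
  containers:
  - name: mysql
    image: mariadb:10.1
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql   # the MariaDB data directory lives on the attached disk
  volumes:
  - name: mysql-data
    azureDisk:                    # direct mount of an Azure data disk; a PVC is the alternative
      diskName: mysql-node1.vhd
      diskURI: https://mystorageaccount.blob.core.windows.net/vhds/mysql-node1.vhd
```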
The biggest issue was starting the first service. It requires passing special parameters to initialize the cluster and involved a round of kubectl edit/kubectl delete to get it up. Having the second and third member join was easier.
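The special case is roughly this: the very first node has to be told to form a new cluster (e.g. via --wsrep-new-cluster), which ends up as a one-off argument in the pod spec that has to be edited away again once the cluster exists. A sketch of the containers section:

```yaml
# Bootstrap-only argument for the very first node; once the cluster exists this
# has to be removed again (hence the round of kubectl edit / kubectl delete)
containers:
- name: mysql
  image: mariadb:10.1
  args:
  - --wsrep-new-cluster
```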
Challenges/TODOs
Besides having to gain more experience with it, I do face a couple of problems with this setup and need to explore solutions (or wait for comments?).
I deployed my application before having a Kubernetes cluster and now need to migrate. The default networking of Kubernetes works by adding a lot of masquerading entries on the agents and masters. Inside the cluster these addresses are routable through the masquerading, but from the outside they are not reachable. I need to find a way to access the database, probably by sacrificing some redundancy first. The other option is to use kubectl expose, but I don’t want my cluster to have a public IPv4 address. I need to see how to get an internal load balancer with a private/internal IPv4 address.
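One direction to try: the Azure cloud provider can apparently create an internal load balancer through a Service annotation, depending on the Kubernetes version. Roughly:

```yaml
# Asking the Azure cloud provider for an internal (private IP) load balancer;
# support for this annotation depends on the Kubernetes version
apiVersion: v1
kind: Service
metadata:
  name: galera-internal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: galera
  ports:
  - port: 3306
```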
Galera cluster management is a bit troubling. The first time a node starts with a fresh disk it does not properly connect to the existing members, but it still registers itself with the LoadBalancer/Service. I manually need to kubectl delete the pod and wait for it to be rescheduled; that is probably easy to fix. The second part of the problem is that I should use health checks and only register the pod once it has connected to and synced with the primaries.
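A readiness probe along these lines should do it (assuming the root password is available through the usual MYSQL_ROOT_PASSWORD environment variable of the mariadb image):

```yaml
# Only mark the pod ready (and thus register it with the Service) once Galera
# reports that the node is synced with the cluster
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | grep -q Synced
  initialDelaySeconds: 30
  periodSeconds: 10
```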
Rolling upgrades seem to have a systematic issue too. The default behavior of the built-in replication controller is to launch a new pod (N+1), bring it up and only then stop the current Galera node (back to N). This falls apart with the way I mount the storage/disk: the new pod cannot mount the disk because it is still mounted by the old pod, and the old pod will not be deleted.
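One possible way out that I still need to try is a Deployment with a Recreate strategy instead of a rolling update, so the old pod releases the disk before its replacement is scheduled (apiVersion, names and labels here are just an illustration):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-node1
spec:
  replicas: 1
  strategy:
    type: Recreate   # stop the old pod (and free the disk) before starting the new one
  template:
    metadata:
      labels:
        app: galera
        node: node1
    spec:
      containers:
      - name: mysql
        image: mariadb:10.1
```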
Least problematic is auto-scaling. In the example set-up each node is a service by itself, using one persistent disk, which makes scaling the cluster a bit difficult. I can add new nodes and they will discover the master(s), but to have the masters remember the new nodes I would need to recycle the pods.