Replies: 4 comments 3 replies
-
Hi, having masters in different locations would definitely make the cluster more stable if one of the locations has issues. But for our clusters at work, I moved from using all three European locations to only the German ones, for both masters and workers, because occasional latency spikes between Germany and Finland caused problems. Now, for each cluster, I have 2 masters in Falkenstein and 1 in Nuremberg, and the workers are split between these two locations as well. Because of this setup, I sometimes run into issues with volumes. My advice is to keep the masters in the two German locations and, for now, keep all workers in just one of them. We should wait for feedback from the CSI driver developers on how to best handle volumes when workers are also in multiple locations.
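For illustration, here is a rough sketch of what such a two-location layout could look like in a hetzner-k3s config file. The key names (masters_pool, locations, worker_node_pools) and how the tool spreads masters across the listed locations are recalled from the README and may differ between versions, so treat them as assumptions and check the current docs:

```bash
# Sketch only: writing a hetzner-k3s config for 2 masters in Falkenstein,
# 1 in Nuremberg, and all workers in Falkenstein. Field names are assumptions
# based on memory of the README; instance types and location codes
# (cpx21, cpx31, fsn1, nbg1) are real Hetzner Cloud values.
cat > cluster_config.yaml <<'EOF'
masters_pool:
  instance_type: cpx21
  instance_count: 3
  locations:        # how masters are distributed over these is up to the tool/version
    - fsn1          # Falkenstein
    - nbg1          # Nuremberg
worker_node_pools:
  - name: workers
    instance_type: cpx31
    instance_count: 3
    location: fsn1  # keep all workers in one location while volumes are location-bound
EOF
```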
-
Are you talking about this csi-driver issue? Could you describe an example where it is an advantage to distribute the master nodes across different locations in Germany while all workers are still in the same location? Since there are only two locations in Germany, there would have to be two masters in one location and one master in the other. If one location has major problems, then I would expect the single master (or the two masters?) in the other location to be of no help. I hope I'm wrong and would be happy to learn more.
-
I meant this newer issue: hetznercloud/csi-driver#1059. The masters are all in Germany, but in different cities and different data centers, so it is still better than having all three masters in the same DC. As for big problems that might break the cluster completely: that is why I added support for k3s' built-in off-site etcd snapshots to S3. If a serious problem happens, you delete master2 and master3, restore a snapshot from before the problem on master1, then run the create command again with hetzner-k3s to recreate the other two masters and let them join master1. Have you checked the new settings for etcd snapshots to S3?
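To make that recovery path concrete, here is a rough sketch of the k3s-level mechanics it relies on. The --etcd-s3* and --cluster-reset* flags are k3s' own documented options; the endpoint, bucket, and credential values are placeholders, and the exact hetzner-k3s settings that wrap them are not shown here, so check its docs for the real names:

```bash
# Sketch of the k3s mechanics behind S3 etcd snapshots and a restore.
# Endpoint/bucket/credentials are placeholders; how hetzner-k3s exposes them
# in its config file is not shown here.

# 1. Periodic snapshots to S3 are driven by k3s server flags such as:
#    --etcd-s3 --etcd-s3-endpoint=<endpoint> --etcd-s3-bucket=<bucket> \
#    --etcd-s3-access-key=<key> --etcd-s3-secret-key=<secret> \
#    --etcd-snapshot-schedule-cron="0 */6 * * *" --etcd-snapshot-retention=28

# 2. To recover after a serious problem: on master1, stop k3s and reset the
#    cluster from a snapshot taken before the incident.
systemctl stop k3s
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=<snapshot-name> \
  --etcd-s3 \
  --etcd-s3-endpoint=<endpoint> \
  --etcd-s3-bucket=<bucket> \
  --etcd-s3-access-key=<key> \
  --etcd-s3-secret-key=<secret>
systemctl start k3s

# 3. Then delete master2 and master3 and re-run hetzner-k3s create with the
#    same config so the other two masters are recreated and rejoin master1.
```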
-
OK, thanks for your reply! In my case, should such a problem happen, I simply recreate the cluster, which only takes a couple of minutes. Being able to do so easily is a requirement I set for myself (and something I test once per month). Therefore I don't have any need for etcd backups and haven't tested them.
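For completeness, the tear-down/recreate cycle described here is roughly the following; the command names are as I recall them from the hetzner-k3s README, and the config file name is just a placeholder:

```bash
# Sketch of the monthly "can I recreate the cluster?" test described above.
# Verify the exact commands against the hetzner-k3s version you run.
hetzner-k3s delete --config cluster_config.yaml
hetzner-k3s create --config cluster_config.yaml
# Afterwards, workloads are re-deployed from manifests / GitOps as usual.
```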
-
I learned that in newer versions of hetzner-k3s I can place the masters in different locations.
In my case, I need the workers all in the same location, since they access Hetzner volumes, which are tied to a specific location.
Is there any benefit in placing my 3 masters each in a different location? Could that improve high availability, or would it make things needlessly more complicated?
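For background on the volume constraint: Hetzner Cloud volumes can only be attached to servers in the same location, and scheduling follows the nodes' topology labels. A quick way to check what your nodes report; I believe the Hetzner cloud-controller-manager sets the standard Kubernetes topology labels, but verify this on your own cluster:

```bash
# Show the region/zone each node advertises; pods using Hetzner volumes end up
# on nodes whose zone matches the volume's location.
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone

# Inspect a PersistentVolume's node affinity to see which location it is bound to
# (<pv-name> is a placeholder):
kubectl describe pv <pv-name>
```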