-
Yeah, this is super annoying. I'm working a lot on hetzner-k3s at the moment (we use it at the day job too), so I'm creating and deleting clusters very often, and this past week I've been running into resource unavailability issues quite frequently. The only workaround I have for now is for the worker nodes: I specify an autoscaled node pool for each instance type, so during my testing the autoscaler picks up whichever instances are available in any location among Falkenstein, Nuremberg and Helsinki, and that works fine for the workers. For the masters and static worker node pools there isn't much we can do other than wait for new capacity or try another location, unfortunately.
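Roughly, that kind of setup looks like the sketch below in the cluster config. It's only a sketch: the pool names, instance types, counts and locations are placeholders, and the exact schema can differ between hetzner-k3s versions, so check the config reference for the version you run.

```yaml
worker_node_pools:
  # One autoscaled pool per instance type / location combination, so the
  # cluster autoscaler can scale up whichever combination Hetzner can
  # actually provision at the moment. Names are purely illustrative.
  - name: autoscaled-cpx31-fsn1
    instance_type: cpx31
    location: fsn1        # Falkenstein
    autoscaling:
      enabled: true
      min_instances: 0
      max_instances: 3
  - name: autoscaled-cpx31-nbg1
    instance_type: cpx31
    location: nbg1        # Nuremberg
    autoscaling:
      enabled: true
      min_instances: 0
      max_instances: 3
  - name: autoscaled-cpx41-hel1
    instance_type: cpx41
    location: hel1        # Helsinki
    autoscaling:
      enabled: true
      min_instances: 0
      max_instances: 3
```

With `min_instances: 0`, pools whose instance type can't currently be provisioned simply stay empty, while the masters pool and any static worker pools still need their instances to be available at creation time.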
-
Good points. The problem with using another location is that persistent data (Hetzner Volumes used by the cluster) is tied to a specific location, so it can't simply move with the cluster.
-
I don't know if it's just me, but it's now been at least a full day that no x86 VMs bigger than CPX21 have been available in Nuremberg. Very annoying. It seems like Falkenstein has more resources available, but switching location is not that easy for me, since Hetzner Volumes and load balancers can't be moved to other locations AFAIK.
-
Note: The problem is documented by Hetzner itself here: https://status.hetzner.com/incident/aa5ce33b-faa5-4fd0-9782-fde43cd270cf
-
The problem has been documented at Hetzner for 10 months and it doesn't look like it will change anytime soon. Do you see any possibility for a workaround? Perhaps by transferring existing (running) instances to hetzner-k3s so that the tool takes over these instances?

I built my clusters with the idea that I could completely rebuild them within minutes if something went very wrong. In reality, I am very hesitant to do so, as there is a risk that hetzner-k3s might not get enough instances for 1-2 hours, leaving the cluster offline for a longer period of time.

Ideally, there would be a way to avoid releasing instances back to Hetzner when executing `hetzner-k3s delete`: instead of deleting the instances via the Hetzner API, hetzner-k3s could simply switch them off. I cannot estimate how complicated such a solution (or a similar one) would be to implement. I would be willing to pay for it.
-
My situation is certainly slightly special because I regularly destroy my cluster and recreate it from scratch. I do this because I'm in the middle of a long-running process of learning, creating and adjusting the cluster for later production use. One of my goals is that recreating the cluster should be something very well tested, not something exceptional.
The problem I'm facing, however, also affects other hetzner-k3s users who use the autoscaler or are simply setting up a new cluster.
What I'm talking about is that, more often than not, Hetzner appears not to have enough VM resources available to start a VM of a certain type. That's also noticeable in the normal Hetzner Cloud UI, where some VM types are grayed out and can't be ordered, like right now for Nuremberg.
hetzner-k3s in such a case surfaces the corresponding provisioning error from the Hetzner API. Right after deleting my cluster I often can't re-order the same VM types again and need to wait until they become available again.
This issue seems to be common.
Do we have any other options apart from just being patient? Any ideas?