Fixed not creating API server tunnel #19191
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: spowelljr

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files.

/ok-to-test
```diff
@@ -440,7 +440,7 @@ func (k *Bootstrapper) StartCluster(cfg config.ClusterConfig) error {

 // tunnelToAPIServer creates ssh tunnel between apiserver:port inside control-plane node and host on port 8443.
 func (k *Bootstrapper) tunnelToAPIServer(cfg config.ClusterConfig) error {
-	if cfg.APIServerPort != 0 {
+	if cfg.APIServerPort == 0 {
```
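To make the inverted guard concrete: with the old `!= 0` check, the function returned early exactly when a valid port *was* available, so the tunnel was never created. A minimal sketch of the fixed check (illustrative types and messages, not minikube's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-in for minikube's config.ClusterConfig; only the field
// relevant to the guard is modeled here.
type clusterConfig struct {
	APIServerPort int
}

// tunnelToAPIServer models the fixed guard: a zero port means no port was
// ever assigned, so there is nothing to tunnel to and we error out early.
// With the old `!= 0` guard the branches were swapped: a valid port caused
// the early return and the tunnel was never created.
func tunnelToAPIServer(cfg clusterConfig) error {
	if cfg.APIServerPort == 0 {
		return errors.New("apiserver port not set")
	}
	fmt.Printf("creating tunnel to apiserver on port %d\n", cfg.APIServerPort)
	return nil
}

func main() {
	// prints: creating tunnel to apiserver on port 8443
	if err := tunnelToAPIServer(clusterConfig{APIServerPort: 8443}); err != nil {
		fmt.Println("error:", err)
	}
}
```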
How is this not broken for everything else, or since forever? Is this logic new and only affecting the QEMU builtin network?
Yeah, I was looking into this because our functional tests are failing with this change. From what I saw, this is what happened:

Before the HA PR: tunnelToAPIServer was only called when it was necessary, i.e. for QEMU with the builtin network. If it wasn't necessary (e.g. the docker driver), it wasn't called.

After the HA PR but before this fix: tunnelToAPIServer is called for all drivers, but because the logic was broken, the tunnel wasn't actually created for any driver, which broke QEMU w/ builtin.

This PR: tunnelToAPIServer is called for all drivers, and now that the logic is fixed it actually creates a tunnel for all drivers. That fixes QEMU w/ builtin, but Docker (and maybe others) are having issues because the tunnel is being created when it shouldn't be.

So I need to figure out how tunnelToAPIServer was being called prior to the HA change and match that, so it's only called when necessary.
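The pre-HA behaviour described above can be sketched as a simple driver gate. This is illustrative only: the real minikube checks use driver.IsQEMU and network.IsBuiltinQEMU rather than raw string comparisons, and the helper name here is hypothetical.

```go
package main

import "fmt"

// Illustrative config; minikube's real code carries the driver and network
// names in config.ClusterConfig and checks them via helper predicates.
type clusterConfig struct {
	Driver  string
	Network string
}

// needsAPIServerTunnel sketches the pre-HA gating being described: only the
// QEMU driver with the builtin network needs the ssh tunnel; every other
// driver (e.g. docker) skips it.
func needsAPIServerTunnel(cfg clusterConfig) bool {
	return cfg.Driver == "qemu2" && cfg.Network == "builtin"
}

func main() {
	for _, cfg := range []clusterConfig{
		{Driver: "qemu2", Network: "builtin"},
		{Driver: "docker", Network: ""},
	} {
		// prints: qemu2: tunnel needed = true
		//         docker: tunnel needed = false
		fmt.Printf("%s: tunnel needed = %v\n", cfg.Driver, needsAPIServerTunnel(cfg))
	}
}
```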
Ah, I found it. Before the HA changes, APIServerPort was only used by QEMU:

```go
if driver.IsQEMU(host.Driver.DriverName()) && network.IsBuiltinQEMU(cfg.Network) {
	apiServerPort, err := getPort()
	if err != nil {
		return runner, preExists, m, host, errors.Wrap(err, "Failed to find apiserver port")
	}
	cfg.APIServerPort = apiServerPort
}
```

But now it's also being set via a flag:

```go
startCmd.Flags().Int(apiServerPort, constants.APIServerPort, "The apiserver listening port")
```

```go
APIServerPort: viper.GetInt(apiServerPort),
```
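The consequence of the flag default is that APIServerPort is now non-zero for every driver, so a zero-check can no longer stand in for "is this QEMU builtin?". A hedged sketch of the two code paths described above (the helper and its precedence are illustrative, not minikube code; 8443 mirrors the documented default apiserver port):

```go
package main

import "fmt"

// Mirrors constants.APIServerPort's default; used here only for illustration.
const defaultAPIServerPort = 8443

// resolvePort sketches the two eras described above. Post-HA, the flag
// default makes flagValue non-zero for every driver, so the first branch
// always wins and "port != 0" no longer identifies the QEMU builtin case.
func resolvePort(flagValue int, isQEMUBuiltin bool, dynamicPort int) int {
	if flagValue != 0 {
		return flagValue // post-HA: set for all drivers via the flag
	}
	if isQEMUBuiltin {
		return dynamicPort // pre-HA: only QEMU builtin assigned a port
	}
	return 0 // pre-HA: all other drivers left the port unset
}

func main() {
	fmt.Println(resolvePort(defaultAPIServerPort, false, 0)) // any driver, post-HA: 8443
	fmt.Println(resolvePort(0, true, 33011))                 // QEMU builtin, pre-HA: 33011
	fmt.Println(resolvePort(0, false, 0))                    // docker, pre-HA: 0
}
```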
Added the QEMU check below
Excellent, thanks for the investigation!
/retest-this-please
kvm2 driver with docker runtime
Times for minikube start: 49.2s 46.2s 48.4s 45.5s 47.6s
Times for minikube ingress: 25.0s 23.9s 24.4s 23.9s 23.9s

docker driver with docker runtime
Times for minikube start: 21.0s 20.5s 21.0s 24.0s 21.4s
Times for minikube ingress: 21.8s 21.8s 21.8s 21.3s 21.8s

docker driver with containerd runtime
Times for minikube start: 23.1s 22.4s 21.1s 22.8s 20.9s
Times for minikube ingress: 30.8s 31.8s 32.3s 32.2s 31.3s
These are the flake rates of all failed tests.
To see the flake rates of all tests by environment, click here.
Fixes #19173
Before:
After: