CSI-1655: added two sample apps #12
base: integration
Conversation
29f47e9 to 8294e49
I found 2 things I think should change and left comments.
I'm just now starting to manually test the logic and will hold off on commenting on it until I've tested; I pulled your code locally so I can test it there. (More changes might be requested after I finish testing.)
2009741 to 202d236
Thanks for your feedback. I just applied the changes you suggested.
@neoakris I also noticed that the stack cannot be deleted properly, since the ALB cannot be deleted first. Is there a way to solve this problem?
I remembered an important thing about the link you shared yesterday. I've discovered that occasionally, in theory, CDK will have 2-3 options for accomplishing the same end result.

new eks.Cluster(this, 'HelloEKS', {
version: eks.KubernetesVersion.V1_32,
albController: {
version: eks.AlbControllerVersion.V2_8_2,
additionalHelmChartValues: {
enableWafv2: false
}
},
kubectlLayer: new KubectlV32Layer(this, 'kubectl'),
});

^-- let's say this = option 1 for deploying the AWS LB Controller. Note that easy-eks's logic doesn't use that (notice the optional parameter albController is missing):

this.cluster = new eks.Cluster(this.stack, this.config.id, {
clusterName: this.config.id,
version: this.config.kubernetesVersion,
kubectlLayer: this.config.kubectlLayer,
vpc: this.config.vpc,
ipFamily: this.config.ipMode,
vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }],
defaultCapacity: 0,
tags: this.config.tags,
authenticationMode: eks.AuthenticationMode.API_AND_CONFIG_MAP,
mastersRole: assumableEKSAdminAccessRole, //<-- adds aws eks update-kubeconfig output
secretsEncryptionKey: kms_key,
});

That's intentional. While in theory there are sometimes 2-3 ways to accomplish an end result, in practice the options aren't equally flexible. If I recall correctly, I purposefully didn't use the method in the doc you linked because it could only deploy old versions and had extremely limited configurability. The doc you linked mentions a fix involving "cluster.albController", but that only applies when the cluster is created with the albController option. Your best bet at solving this is probably what we discussed in office hours.
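For reference, here's a minimal sketch of what that documented fix looks like. It assumes a cluster created with the albController option (so cluster.albController is defined); the manifest id and contents are illustrative, not taken from easy-eks:

import * as eks from 'aws-cdk-lib/aws-eks';

declare const cluster: eks.Cluster; // a cluster created with the albController option

// Hypothetical manifest containing an Ingress for the AWS LB Controller to reconcile into an ALB
const sample_ingress = cluster.addManifest('sample-ingress', {
  apiVersion: 'networking.k8s.io/v1',
  kind: 'Ingress',
  metadata: { name: 'sample-ingress' },
  spec: { ingressClassName: 'alb' }, // rules omitted for brevity
});

// Ensures the controller is created before the Ingress, and that on destroy
// the Ingress (and its ALB) is deleted before the controller is removed.
if (cluster.albController) {
  sample_ingress.node.addDependency(cluster.albController);
}

Since easy-eks doesn't set the albController option, cluster.albController wouldn't be available there; the snippet below shows how easy-eks already handles the analogous destroy-ordering problem for Karpenter.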
//v-- The following 2 lines help prevent cdk destroy issue
const karpenter_helm_chart_CFR = (stack.node.tryFindChild(config.id)?
.node.tryFindChild('chart-karpenter')?
.node.defaultChild as cdk.CfnResource
);
if(karpenter_helm_chart_CFR){
karpenter_helm_chart_CFR.applyRemovalPolicy(cdk.RemovalPolicy.RETAIN);
}
karpenter.node.addDependency(readiness_dependency);

^-- so something like the following, where 'chart-AWSLoadBalancerController' is the expected child id of the awsLoadBalancerController Helm Release:

const AWS_LBC_Cloud_Formation_Resource = (stack.node.tryFindChild(config.id)?
.node.tryFindChild('chart-AWSLoadBalancerController')?
.node.defaultChild as cdk.CfnResource
);
const apply_ingress_YAML = new eks.KubernetesManifest(stack, 'ingress_YAML',
{
cluster: cluster,
manifest: ingress_YAML,
overwrite: true,
prune: true,
}
);
apply_ingress_YAML.node.addDependency(AWS_LBC_Cloud_Formation_Resource);

Reminder:

const AWS_LBC_Cloud_Formation_Resource = (stack.node.tryFindChild(config.id)?
.node.tryFindChild('chart-AWSLoadBalancerController')?
.node.defaultChild as cdk.CfnResource
);

config.id's expected value is dev1-eks, so the above code snippet looks up the cluster construct named dev1-eks and then its 'chart-AWSLoadBalancerController' child.
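One detail worth noting: tryFindChild can return undefined (the as cdk.CfnResource cast hides that), so it may make sense to mirror the guard from the Karpenter snippet when wiring the dependency. A purely illustrative sketch, reusing the names above:

if (AWS_LBC_Cloud_Formation_Resource) {
    apply_ingress_YAML.node.addDependency(AWS_LBC_Cloud_Formation_Resource);
}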
@neoakris Thanks for this information. Yes, I have already tried your suggestion. By using