====== Setup ======
eksctl uses YAML config files to define clusters; the ones currently running are in /
the README.md file in that directory is a pretty complete markdown render on github: https://
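For orientation, eksctl configs follow its standard ClusterConfig schema; a minimal sketch with made-up names and sizes (the real settings live in hub-green.yaml) might look like:
<code>
# minimal eksctl ClusterConfig sketch - hypothetical values, not the real hub config
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dev-green
  region: us-west-2
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
</code>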
so, to create a new cluster:
  - Copy the latest-and-greatest (currently hub-green.yaml) to another one (dev-green.yaml, for example)
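Creating the cluster from the copied config would then look something like this (dev-green.yaml being the hypothetical copy from the step above):
<code>
eksctl create cluster -f dev-green.yaml
</code>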
Spinning up a new cluster on AWS takes a long time, about 20-30 minutes.
something reasonable.
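While you wait, you can keep an eye on progress; a couple of standard checks (not from the original notes; k is the kubectl alias used throughout this page):
<code>
# list clusters and their status in the region
eksctl get cluster --region us-west-2
# once it's up, confirm the worker nodes registered
k get nodes
</code>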
===== Configure the cluster =====
Next up is getting the cluster tooling in place: reverse proxy "
you can copy the /
/
convention isn't anything special.

You'll need to edit 01-08*.yaml to reference the new kubeContext (same name as set with k config rename-context) and masterHost (the eventual CNAME), then run them in order and let each finish; note that adminPassword: defaults to admin.
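As a sketch, the values being edited look something like this (hypothetical values; the actual keys and files are the 01-08*.yaml in the repo):
<code>
kubeContext: dev-green                        # same name you set with: k config rename-context
masterHost: dev-green.datasci.oregonstate.edu # the eventual CNAME
adminPassword: admin                          # the default - change it
</code>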
When it's up and going, you can find out the host to point the new CNAME at with:
<code>
k get svc master-ingress-nginx-ingress -n cluster-tools
</code>
or start with:
<code>
k ns cluster-tools
</code>
to select that namespace and do a k get all to see the various pieces living there, including the loadbalancer.

Assuming that all worked out, the usage dashboard should live at https://<cluster-name>.datasci.oregonstate.edu/ - login admin / admin by default.
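If you just want the load balancer hostname for the CNAME by itself, kubectl's standard jsonpath output works too (a sketch, not from the original notes):
<code>
k get svc master-ingress-nginx-ingress -n cluster-tools -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
</code>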
There is a work-in-progress dashboard for the hub; to install it click the "
icon in grafana and select "
a logging database server, and that should get you to the point of creating a hub based on one of the examples (e.g. cj-test.yaml); don't forget to change the kubeContext and clusterHostname variables to match the new cluster. The yaml files are executable with /
03-registry.yaml for example is\\
#!
Instead of listing kubeContext in this and various other files, we should be able to have a file like common.yaml with contents like\\
kubeContext:\\
clusterHostname:
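Filled in, such a common.yaml might read (hypothetical values, following the dev-green example above):
<code>
kubeContext: dev-green
clusterHostname: dev-green.datasci.oregonstate.edu
</code>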
And then replace the #! line with\\
#!/
--values common.yaml --values
if you're feeling brave, give it a try and see how it works; worst case, you delete the cluster with eksctl delete cluster --name <cluster-name> and start over ;)
===== Adding user access to the cluster =====
By default, the user that creates the cluster is the only one with access to it. To give someone else access, you need to do the following:\\
**k edit configmap -n kube-system aws-auth**\\
This will pop up an editor where you can edit some of the definitions in the cluster. Directly under the mapUsers section, add this:
<code>
mapUsers: |
  - userarn:
    username: name
    groups:
</code>
Spacing is important. You can get the userarn by running **aws sts get-caller-identity**.
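A filled-in entry might look like the following sketch; the account ID and user name are made up, and system:masters is the standard Kubernetes group for full admin access:
<code>
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/jdoe   # from: aws sts get-caller-identity
    username: jdoe
    groups:
      - system:masters                             # full cluster-admin access
</code>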
==== Changes in version 1.23 ====
New to version 1.23, you now have to add the Amazon EBS CSI driver as an Amazon EKS add-on to the EKS cluster.\\
Below are the steps to run after running the eksctl create cluster command above.\\
First you need to create the Amazon EBS CSI driver IAM role for service accounts. When the plugin is deployed, it creates and is configured to use a service account named ebs-csi-controller-sa. The service account is bound to a Kubernetes clusterrole that's assigned the required Kubernetes permissions. Before creating the IAM role, you first need to enable the OIDC provider.
eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=dev-yellow --approve\\
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster NAME_OF_CLUSTER --attach-policy-arn arn:
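The policy ARN got cut off above; the AWS managed policy for this driver is AmazonEBSCSIDriverPolicy, so the full command would look something like this (NAME_OF_CLUSTER stays a placeholder, and the trailing --approve is assumed, not from the original):
<code>
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster NAME_OF_CLUSTER \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve
</code>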
Then we can add on the EBS CSI driver.\\
**NOTE:** To get the arn name for the role created above, log in to the AWS console and go to the CloudFormation console. In the list of cloud stacks, find the one named "
eksctl create addon --name aws-ebs-csi-driver --cluster NAME_OF_CLUSTER --service-account-role-arn arn:
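Spelled out with a hypothetical role ARN (use the real one from the CloudFormation stack mentioned in the NOTE above):
<code>
# the account ID and role name here are hypothetical
eksctl create addon --name aws-ebs-csi-driver --cluster NAME_OF_CLUSTER \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole
</code>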