eksctl uses YAML config files to define clusters; the ones currently running are in /datascience/dsosuk8s/cluster/eksctl.
The README.md file in that directory is pretty complete (markdown render on github:
https://github.com/datasci-osu/dsosuk8s/blob/master/cluster/eksctl/README.md).
So, to create a new cluster:
  - Copy the latest-and-greatest (currently hub-green.yaml) to another one (dev-green.yaml, for example); the creation command itself is sketched below
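The create itself is presumably eksctl's config-file form (a sketch, using the copied dev-green.yaml from the example above):
  eksctl create cluster -f dev-green.yaml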
Spinning up a new cluster on AWS takes a long time, about 20-30 minutes. Once it is ready you can rename the context. The default name is something long and obnoxious. **k config get-contexts** will show the contexts including the new one, then **k config rename-context <current-name> <new-name>** to rename it to something reasonable.
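For example (assuming **k** is your kubectl alias and dev-green is the name you want):
  k config get-contexts
  k config rename-context <current-name> dev-green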
===== Configure the cluster =====
Next up is getting the cluster tooling in place: reverse proxy "ingress", autoscaling client, etc.
You can copy the /datascience/deployments/hub-green.datasci.oregonstate.edu folder to
/datascience/deployments/<new-name>.datasci.oregonstate.edu (this naming convention isn't anything special).
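A minimal sketch of that copy, again with dev-green as the example name:
  cp -r /datascience/deployments/hub-green.datasci.oregonstate.edu /datascience/deployments/dev-green.datasci.oregonstate.edu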
You'll need to edit 01-08*.yaml to reference the new kubeContext (same name as set with k config rename-context) and masterHost (the eventual CNAME), then run them in order and let each finish; note the adminPassword: in 06-grafana.yaml defaults to admin.
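Running them in order might look like this sketch (assuming the files are executable, as described below, and one file per number prefix, continuing through 08):
  cd /datascience/deployments/dev-green.datasci.oregonstate.edu
  ./01-*.yaml    # let each finish before starting the next
  ./02-*.yaml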
  
When it's up and going, you can find out the host to point the new CNAME with:
<code>
k get svc master-ingress-nginx-ingress -n cluster-tools
</code>
or start with:
<code>
k ns cluster-tools
</code>
to select that namespace, then do a **k get all** to see the various pieces living there, including the loadbalancer.

Assuming that all worked out, the usage dashboard should live at https://<cluster-name>.datasci.oregonstate.edu/grafana/ , login admin / admin by default.
There is a work-in-progress dashboard for the hub; to install it click the "+"
icon in grafana and select "import", paste the JSON from
a logging database server and that should get you to the point of creating a hub, based on one of the examples (e.g. cj-test.yaml) - don't forget to change the kubeContext and clusterHostname variables to match the new cluster. The yaml files are executable with /usr/bin/env -S; the first line in
03-registry.yaml for example is\\
#!/usr/bin/env -S helm kush upgrade registry /datascience/dsosuk8s/charts/docker-registry --install --kush-interpolate --values\\
where the contents of the file are taken as the param for the last --values (executing the file appends its own path as that final --values argument).

Instead of listing kubeContext in this and various other files, we should be able to have a file like common.yaml with contents like\\
kubeContext: dev-green\\
clusterHostname: dev-green.datasci.oregonstate.edu\\
and then replace the #! line with\\
#!/usr/bin/env -S helm kush upgrade registry /datascience/dsosuk8s/charts/docker-registry --install --kush-interpolate --values common.yaml --values\\
If you're feeling brave, give it a try and see how it works - worst case you delete the cluster with **eksctl delete cluster --name <cluster-name>** and start over ;)
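With common.yaml in place, each numbered file would run the same way - a sketch, assuming you execute it from the directory containing common.yaml:
<code>
./03-registry.yaml   # expands to: helm kush upgrade registry ... --values common.yaml --values ./03-registry.yaml
</code>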
===== Adding user access to the cluster =====
By default, the one who creates the cluster will be the only one with access to it. To add someone else, you will need to do the following:
  k edit configmap -n kube-system aws-auth
This will pop up an editor where you can edit some of the definitions in the cluster. Directly under the mapUsers section, add this:
<code>
mapUsers: |
  - userarn: arn:aws:iam::395763313923:user/name
    username: name
    groups:
      - system:masters
</code>
Spacing is important. You can get the userarn by running **aws sts get-caller-identity**.
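The Arn field of its JSON output is the value to paste in - the account ID and user name below are illustrative:
<code>
$ aws sts get-caller-identity
{
    "UserId": "AIDAEXAMPLEID",
    "Account": "395763313923",
    "Arn": "arn:aws:iam::395763313923:user/name"
}
</code>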
==== Changes to EKS 1.23 ====
New in version 1.23, you now have to add the Amazon EBS CSI driver as an Amazon EKS add-on to the EKS cluster.\\
Below are the steps to run after running the eksctl create cluster command above.\\
First you need to create the Amazon EBS CSI driver IAM role for service accounts. When the plugin is deployed, it creates and is configured to use a service account named ebs-csi-controller-sa. The service account is bound to a Kubernetes clusterrole that's assigned the required Kubernetes permissions. Before creating the IAM role, you first need to enable the OIDC provider.
  
  eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=dev-yellow --approve
  eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster NAME_OF_CLUSTER --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole
Then we can add on the EBS CSI driver.\\
**NOTE:** To get the ARN for the role created above, log in to the AWS console and go to the CloudFormation console. In the list of stacks, find the one named "eksctl-CLUSTER_NAME-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa". Click on the linked name, then go to the "Resources" tab. This should list one Role, AmazonEKS_EBS_CSI_DriverRole. Click on that name link and it will bring up a new page with the ARN to use in the driver add-on below.\\
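Alternatively, the same ARN can be fetched from the CLI (assuming the role name used above):
  aws iam get-role --role-name AmazonEKS_EBS_CSI_DriverRole --query Role.Arn --output text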
  eksctl create addon --name aws-ebs-csi-driver --cluster NAME_OF_CLUSTER --service-account-role-arn arn:aws:iam::395703310923:role/AmazonEKS_EBS_CSI_DriverRole --force
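To confirm the add-on came up (a quick check, not part of the original steps):
  eksctl get addon --name aws-ebs-csi-driver --cluster NAME_OF_CLUSTER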