From 807f7836fe766ee83e13a1cc8f6148fcead1048e Mon Sep 17 00:00:00 2001 From: David Young Date: Thu, 7 Feb 2019 12:28:38 +1300 Subject: [PATCH] Update CHANGELOG --- manuscript/CHANGELOG.md | 2 +- manuscript/kubernetes/digitalocean.md | 79 ---------- manuscript/kubernetes/infrastructure.md | 192 ------------------------ mkdocs.yml | 2 +- 4 files changed, 2 insertions(+), 273 deletions(-) delete mode 100644 manuscript/kubernetes/digitalocean.md delete mode 100644 manuscript/kubernetes/infrastructure.md diff --git a/manuscript/CHANGELOG.md b/manuscript/CHANGELOG.md index 38c1318..bef8202 100644 --- a/manuscript/CHANGELOG.md +++ b/manuscript/CHANGELOG.md @@ -9,8 +9,8 @@ ## Recent additions to work-in-progress +* Added detailed description (and diagram) of our [Kubernetes design](/kubernetes/design/), plus a [simple load-balancer design](kubernetes/loadbalancer/) to avoid the complexities/costs of permitting ingress access to a cluster (_7 Feb 2019_) * Added an [introductory/explanatory page, including a children's story, on Kubernetes](/kubernetes/start/) (_29 Jan 2019_) -* [IPFS Cluster](/recipes/ipfs-cluster/), providing inter-planetary docker swarm shared storage with the inter-planetary filesystem (_29 Nov 2018_) ## Recently added recipes diff --git a/manuscript/kubernetes/digitalocean.md b/manuscript/kubernetes/digitalocean.md deleted file mode 100644 index f7ee30c..0000000 --- a/manuscript/kubernetes/digitalocean.md +++ /dev/null @@ -1,79 +0,0 @@ -# Kubernetes on DigitalOcean - -IMO, the easiest Kubernetes cloud provider to experiment with is [DigitalOcean](https://m.do.co/c/e33b78ad621b) (_this is a referral link_). I've included instructions below to start a basic cluster. - -![Kubernetes on Digital Ocean](/images/kubernetes-on-digitalocean.jpg) - - -## Ingredients - -1. 
[DigitalOcean](https://m.do.co/c/e33b78ad621b) (_yes, this is a referral link, making me some 💰_) account, either linked to a credit card or (_my preference for a trial_) topped up with $5 credit from PayPal. - -## Preparation - -### Create DigitalOcean Account - -Create a project, and then from your project page, click **Manage** -> **Kubernetes (LTD)** in the left-hand panel: - -![Kubernetes on Digital Ocean Screenshot #1](/images/kubernetes-on-digitalocean-screenshot-1.png) - -Until DigitalOcean considers their Kubernetes offering to be "production ready", you'll need the additional step of clicking on **Enable Limited Access**: - -![Kubernetes on Digital Ocean Screenshot #2](/images/kubernetes-on-digitalocean-screenshot-2.png) - -The _Enable Limited Access_ button changes to read _Create a Kubernetes Cluster_. Cleeeek it: - -![Kubernetes on Digital Ocean Screenshot #3](/images/kubernetes-on-digitalocean-screenshot-3.png) - -When prompted, choose some defaults for your first node pool (_your pool of "compute" resources for your cluster_), and give it a name. In more complex deployments, you can use this concept of "node pools" to run certain applications (_like an inconsequential nightly batch job_) on a particular class of compute instance (_such as cheap, preemptible instances_). - -![Kubernetes on Digital Ocean Screenshot #4](/images/kubernetes-on-digitalocean-screenshot-4.png) - -That's it! Have a sip of your 🍷, a bite of your :cheese:, and wait for your cluster to build. While you wait, follow the instructions to set up kubectl (if you don't already have it). - -![Kubernetes on Digital Ocean Screenshot #5](/images/kubernetes-on-digitalocean-screenshot-5.png) - -DigitalOcean will provide you with a "kubeconfig" file to use to access your cluster. It's at the bottom of the page (_illustrated below_), and easy to miss (_in my experience_). 
-![Kubernetes on Digital Ocean Screenshot #6](/images/kubernetes-on-digitalocean-screenshot-6.png) - -## Release the kubectl! - -Save your kubeconfig file somewhere, and test it out by running ```kubectl --kubeconfig= get nodes``` - -Example output: -``` -[davidy:~/Downloads] 130 % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes -NAME STATUS ROLES AGE VERSION -festive-merkle-8n9e Ready 20s v1.13.1 -[davidy:~/Downloads] % -``` - -In the example above, my nodes were still being deployed. Repeat the command to see your nodes spring into existence: - -``` -[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes -NAME STATUS ROLES AGE VERSION -festive-merkle-8n96 Ready 6s v1.13.1 -festive-merkle-8n9e Ready 34s v1.13.1 -[davidy:~/Downloads] % - -[davidy:~/Downloads] % kubectl --kubeconfig=penguins-are-the-sexiest-geeks-kubeconfig.yaml get nodes -NAME STATUS ROLES AGE VERSION -festive-merkle-8n96 Ready 30s v1.13.1 -festive-merkle-8n9a Ready 17s v1.13.1 -festive-merkle-8n9e Ready 58s v1.13.1 -[davidy:~/Downloads] % -``` - -That's it. You have a beautiful new Kubernetes cluster ready for some action! - -## Chef's Notes - -1. Ok, yes, there's not much you can do with your cluster _yet_. But stay tuned, more Kubernetes fun to come! - -### Tip your waiter (donate) 👏 - -Did you receive excellent service? Want to make your waiter happy? (_..and support development of current and future recipes!_) See the [support](/support/) page for (_free or paid_) ways to say thank you! 👏 - -### Your comments? 
💬 diff --git a/manuscript/kubernetes/infrastructure.md b/manuscript/kubernetes/infrastructure.md deleted file mode 100644 index 9d2e105..0000000 --- a/manuscript/kubernetes/infrastructure.md +++ /dev/null @@ -1,192 +0,0 @@ -## Terraform - -We _could_ describe the manual gcloud/ssh steps required to deploy a Kubernetes cluster to Google Kubernetes Engine, but using Terraform allows us to abstract ourselves from the provider, and focus on just the infrastructure we need built. - -The Terraform config we produce is theoretically reusable across AWS, Azure, and OpenStack, as well as GCE. - -Install Terraform locally - on OSX, I used ```brew install terraform``` - -Confirm it's correctly installed by running ```terraform -v```. My output looks like this: - -``` -[davidy:~] % terraform -v -Terraform v0.11.8 - -[davidy:~] % -``` - -## Google Cloud SDK - -I can't remember how I installed gcloud, but I don't think I used homebrew. Run ```curl https://sdk.cloud.google.com | bash``` for a standard install, followed by ```gcloud init``` for the first-time setup. 
This works: - -``` -cat <<-"BREWFILE" > Brewfile -cask 'google-cloud-sdk' -brew 'kubectl' -brew 'terraform' -BREWFILE -brew bundle --verbose -``` - -### Prepare for terraform - -I followed [this guide](https://cloud.google.com/community/tutorials/managing-gcp-projects-with-terraform) to set up the following in the "best" way: - -Run ```gcloud beta billing accounts list``` to get your billing account - -``` -export TF_ADMIN=tf-admin-funkypenguin -export TF_CREDS=serviceaccount.json -export TF_VAR_org_id=250566349101 -export TF_VAR_billing_account=0156AE-7AE048-1DA888 -export TF_VAR_region=australia-southeast1 -export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS} - -gcloud projects create ${TF_ADMIN} --set-as-default -gcloud beta billing projects link ${TF_ADMIN} \ - --billing-account ${TF_VAR_billing_account} - -gcloud iam service-accounts create terraform \ - --display-name "Terraform admin account" -Created service account [terraform]. - -gcloud iam service-accounts keys create ${TF_CREDS} \ - --iam-account terraform@${TF_ADMIN}.iam.gserviceaccount.com -created key [c0a49832c94aa0e23278165e2d316ee3d5bad438] of type [json] as [serviceaccount.json] for [terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com] - -gcloud projects add-iam-policy-binding ${TF_ADMIN} \ - --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \ - --role roles/viewer -bindings: -- members: - - user:googlecloud2018@funkypenguin.co.nz - role: roles/owner -- members: - - serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com - role: roles/viewer -etag: BwV0VGSzYSU= -version: 1 - -gcloud projects add-iam-policy-binding ${TF_ADMIN} \ - --member serviceAccount:terraform@${TF_ADMIN}.iam.gserviceaccount.com \ - --role roles/storage.admin -bindings: -- members: - - user:googlecloud2018@funkypenguin.co.nz - role: roles/owner -- members: - - serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com - role: roles/storage.admin -- members: - - serviceAccount:terraform@funkypenguin-terraform-admin.iam.gserviceaccount.com - role: roles/viewer -etag: BwV0VGZwXfM= -version: 1 - -gcloud services enable cloudresourcemanager.googleapis.com -gcloud services enable cloudbilling.googleapis.com -gcloud services enable iam.googleapis.com -gcloud services enable compute.googleapis.com - -## FIXME -Enable the Kubernetes Engine API in the tf-admin project too, so that terraform can actually compute the versions of the engine available. - -## FIXME -I had to add compute admin, service admin, and kubernetes engine admin roles to my org-level account, in order to use gcloud container clusters get-credentials - -gsutil mb -p ${TF_ADMIN} gs://${TF_ADMIN} -Creating gs://funkypenguin-terraform-admin/... - -cat > backend.tf <<EOF -terraform { -  backend "gcs" { -    bucket  = "${TF_ADMIN}" -    path    = "/terraform.tfstate" -    project = "${TF_ADMIN}" -  } -} -EOF - -gsutil versioning set on gs://${TF_ADMIN} -Enabling versioning for gs://funkypenguin-terraform-admin/... 
export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS} -export GOOGLE_PROJECT=${TF_ADMIN} -``` - -### Create Service Account - -Since it's probably not a great idea to associate your own master Google Cloud account with your automation process (after all, you can't easily revoke your own credentials if they leak), create a Service Account for terraform under GCE, and grant it the "Compute Admin" role. - -Download the resulting JSON, and save it wherever you're saving your code. Remember to protect this .json file like a password, so add it to .gitignore if you're checking your code into git (_and if you're not checking your code into git, what's wrong with you? Just do it now!_) - -### Setup provider.tf - -I set up my provider like this, noting that the project name (which must already be created) came from the output of ```gcloud projects list```, and region/zone came from https://cloud.google.com/compute/docs/regions-zones/ - -``` -# Specify the provider (GCP, AWS, Azure) -provider "google" { -  credentials = "${file("serviceaccount.json")}" -  project     = "funkypenguin-mining-pools" -  region      = "australia-southeast1" -} -``` - -### Setup compute.tf - -Just playing, I set up this: - -``` -# Create a new instance -resource "google_compute_instance" "ubuntu-xenial" { -  name         = "ubuntu-xenial" -  machine_type = "f1-micro" -  zone         = "us-west1-a" - -  boot_disk { -    initialize_params { -      image = "ubuntu-1604-lts" -    } -  } - -  network_interface { -    network = "default" -    access_config {} -  } - -  service_account { -    scopes = ["userinfo-email", "compute-ro", "storage-ro"] -  } -} -``` - -### Initialize and plan (it's free) - -Run ```terraform init``` to initialize Terraform. - -Then run ```terraform plan``` to check that the plan looks good. - -### Apply (not necessarily free) - -Once your plan (above) is good, run ```terraform apply``` to put it into motion. This is the point where you may start incurring costs. 
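-The ```terraform output``` values used in the next section (```cluster_name```, ```cluster_zone```, ```project_id```) aren't created automatically; they assume your Terraform config declares matching outputs. A minimal sketch of an ```outputs.tf```, assuming a cluster resource named ```google_container_cluster.primary``` and a ```project_id``` variable (both names are illustrative - rename to match your own config): - -``` -# Hypothetical outputs.tf - adjust resource/variable names to match yours -output "cluster_name" { -  value = "${google_container_cluster.primary.name}" -} - -output "cluster_zone" { -  value = "${google_container_cluster.primary.zone}" -} - -output "project_id" { -  value = "${var.project_id}" -} -``` 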
### Setup kubectl - -``` -gcloud container clusters get-credentials $(terraform output cluster_name) --zone $(terraform output cluster_zone) --project $(terraform output project_id) -``` diff --git a/mkdocs.yml b/mkdocs.yml index 40022da..c5f3fb6 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -39,7 +39,7 @@ pages: - Kubernetes Cluster: - Start: kubernetes/start.md - Design: kubernetes/design.md - - Digital Ocean: kubernetes/cluster.md + - Cluster: kubernetes/cluster.md - Load Balancer: kubernetes/loadbalancer.md - Chef's Favorites: - Auto Pirate: