
Create and manage Kubernetes clusters

This guide provides a step-by-step walkthrough for creating Kubernetes clusters using Crossplane and ArgoCD.

Prerequisites

  1. Prior knowledge of Port Actions is essential for following this guide. Learn more about them here.
  2. A control plane cluster that will be used to create clusters and other infrastructure. We will use Crossplane.
  3. A GitOps operator for automatically running operations on our cluster based on changes in our manifests repository. We will be using ArgoCD.
  4. Helm installed on your machine.
  5. A GitHub repository to contain your resources, i.e. the GitHub workflow file, Port resources, and infrastructure manifests.
Starter Repository

Clone our starter repository here to follow along with the guide. The repository contains the following folders:

  • .github: contains the GitHub workflows.
  • argocd: contains the ArgoCD application manifests. This is where we define the application that automates our process through GitOps.
  • compositions: contains the Crossplane compositions that define what a cluster is.
  • crossplane-config: contains the manifests for installing Crossplane into your management cluster.
  • infra: will contain the cluster manifests created by the automation.
  • port: contains the Port blueprints and action definitions.
  • scripts: contains the script that the GitHub workflow uses to create cluster manifests.

1. Control Plane Setup

Creating a Kubernetes cluster

If you don't have a Kubernetes cluster, create one locally with kind.
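For example, assuming kind is installed, you can create a local management cluster with (the cluster name `management` is just an illustrative choice):

```shell
kind create cluster --name management
kubectl cluster-info --context kind-management
```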

  • Install Crossplane into what will be your management cluster.
helm repo add crossplane-stable https://charts.crossplane.io/stable

helm repo update

helm upgrade --install crossplane crossplane-stable/crossplane --namespace crossplane-system --create-namespace --wait

  • Now, let's install ArgoCD into the cluster.
kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

kubectl get pods -n argocd

ArgoCD comes with a default user: admin

To get the password, type the command below:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

To access the UI, use kubectl port-forwarding to connect to the API server without exposing the service.

kubectl port-forward svc/argocd-server -n argocd 8080:443

The API server can then be accessed using https://localhost:8080
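If you prefer the argocd CLI, you can log in through the same port-forward using the admin password read from the secret (this assumes the argocd CLI is installed locally):

```shell
argocd login localhost:8080 \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)" \
  --insecure
```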


  • Create the following manifests in the crossplane-config folder to install the Crossplane providers required for creating a new cluster.
What are Crossplane Compositions?

Compositions are templates for creating multiple managed resources as a single object. Learn more about them here.
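For illustration only, here is a trimmed sketch of what such a composition can look like (the composite kind and empty resource list are placeholders; the starter repository's compositions folder contains the full definitions). Note that the labels are what a claim's compositionSelector matches against:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: cluster-aws
  labels:
    provider: aws # matched by the claim's compositionSelector
    cluster: eks
spec:
  compositeTypeRef:
    apiVersion: devopstoolkitseries.com/v1alpha1
    kind: CompositeCluster # illustrative composite kind
  resources: [] # the managed resources (control plane, node groups, ...) go here
```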

crossplane-config/provider-kubernetes-incluster.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: crossplane-provider-kubernetes
  namespace: crossplane-system
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crossplane-provider-kubernetes
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
subjects:
  - kind: ServiceAccount
    name: crossplane-provider-kubernetes
    namespace: crossplane-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: crossplane-provider-kubernetes
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  serviceAccountName: crossplane-provider-kubernetes
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: crossplane-provider-kubernetes
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.9.0
  controllerConfigRef:
    name: crossplane-provider-kubernetes
crossplane-config/provider-helm-incluster.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: crossplane-provider-helm
  namespace: crossplane-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: crossplane-provider-helm
subjects:
  - kind: ServiceAccount
    name: crossplane-provider-helm
    namespace: crossplane-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: crossplane-provider-helm
spec:
  serviceAccountName: crossplane-provider-helm
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: crossplane-provider-helm
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-helm:v0.14.0
  controllerConfigRef:
    name: crossplane-provider-helm
kubectl apply --filename ./crossplane-config/provider-kubernetes-incluster.yaml

kubectl apply --filename ./crossplane-config/provider-helm-incluster.yaml

kubectl wait --for=condition=healthy provider.pkg.crossplane.io --all --timeout=300s

  • Design a template for creating a new cluster with the desired properties. This is what the GitHub workflow will copy and use to define the cluster manifest.
cluster-template.yaml
---
apiVersion: devopstoolkitseries.com/v1alpha1
kind: ClusterClaim
metadata:
  name: NAME
  namespace: infra
spec:
  id: NAME
  compositionSelector:
    matchLabels:
      provider: PROVIDER
      cluster: CLUSTER
  parameters:
    nodeSize: SIZE
    minNodeCount: 1
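For example, for a hypothetical cluster named dev-cluster on AWS EKS with small nodes, the rendered manifest produced from this template would look like:

```yaml
apiVersion: devopstoolkitseries.com/v1alpha1
kind: ClusterClaim
metadata:
  name: dev-cluster
  namespace: infra
spec:
  id: dev-cluster
  compositionSelector:
    matchLabels:
      provider: aws
      cluster: eks
  parameters:
    nodeSize: small
    minNodeCount: 1
```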

2. Workflow Creation

  • Create a GitHub workflow named create-cluster.yaml to check out the code, execute the create-cluster.sh script, and push the changes back to the repository.
Create Cluster Workflow
name: Create a cluster
on:
  workflow_dispatch:
    inputs:
      name:
        required: true
        description: "The name of the cluster"
      provider:
        required: true
        description: "The provider where the cluster is hosted"
        default: "aws"
      cluster:
        required: true
        description: "The type of the cluster"
      node-size:
        required: true
        description: "The size of the nodes"
        default: "small"
      min-node-count:
        required: true
        description: "The minimum number of nodes (autoscaler might increase this number)"
        default: "1"
jobs:
  deploy-app:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          persist-credentials: false
          fetch-depth: 0
      - name: Create cluster
        run: |
          chmod +x scripts/create-cluster.sh
          ./scripts/create-cluster.sh ${{ inputs.name }} ${{ inputs.provider }} ${{ inputs.cluster }} ${{ inputs.node-size }} ${{ inputs.min-node-count }}
      - name: Commit changes
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Create cluster ${{ inputs.name }}"
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}

  • Create a script named create-cluster.sh in the scripts folder of your repository to copy the template claim to the infra directory and replace placeholders with user inputs from Port.
create-cluster.sh
#!/bin/bash
set -e

NAME=$1
PROVIDER=$2
CLUSTER=$3
NODE_SIZE=$4
MIN_NODE_COUNT=$5

FILE_PATH=infra/${NAME}-cluster.yaml

cp crossplane/cluster-template.yaml $FILE_PATH
yq --inplace ".metadata.name = \"${NAME}\"" $FILE_PATH
yq --inplace ".spec.id = \"${NAME}\"" $FILE_PATH
yq --inplace ".spec.compositionSelector.matchLabels.provider = \"${PROVIDER}\"" $FILE_PATH
yq --inplace ".spec.compositionSelector.matchLabels.cluster = \"${CLUSTER}\"" $FILE_PATH
yq --inplace ".spec.parameters.nodeSize = \"${NODE_SIZE}\"" $FILE_PATH
yq --inplace ".spec.parameters.minNodeCount = ${MIN_NODE_COUNT}" $FILE_PATH
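To see what the script does end to end, here is a self-contained sketch you can run anywhere (it uses sed instead of yq purely for illustration, which works here because NAME, PROVIDER, CLUSTER, and SIZE are unique tokens in the template; all paths and values are hypothetical):

```shell
# Hypothetical, self-contained illustration of create-cluster.sh:
# copy the template, then substitute the placeholder tokens.
mkdir -p /tmp/demo/infra

cat > /tmp/demo/cluster-template.yaml <<'EOF'
apiVersion: devopstoolkitseries.com/v1alpha1
kind: ClusterClaim
metadata:
  name: NAME
  namespace: infra
spec:
  id: NAME
  compositionSelector:
    matchLabels:
      provider: PROVIDER
      cluster: CLUSTER
  parameters:
    nodeSize: SIZE
    minNodeCount: 1
EOF

NAME=my-cluster PROVIDER=aws CLUSTER=eks SIZE=small
FILE_PATH=/tmp/demo/infra/${NAME}-cluster.yaml

# Replace each placeholder token with the user-supplied value.
sed -e "s/NAME/${NAME}/g" \
    -e "s/PROVIDER/${PROVIDER}/g" \
    -e "s/CLUSTER/${CLUSTER}/g" \
    -e "s/SIZE/${SIZE}/g" \
    /tmp/demo/cluster-template.yaml > "${FILE_PATH}"

cat "${FILE_PATH}"
```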

  • Create another workflow named delete-cluster.yaml to delete the cluster file and push changes to the repository.
Delete Cluster Workflow
delete-cluster.yaml
name: Delete the cluster
on:
  workflow_dispatch:
    inputs:
      name:
        required: true
        description: "The name of the cluster"
jobs:
  deploy-app:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          persist-credentials: false
          fetch-depth: 0
      - name: Delete cluster
        run: |
          rm infra/${{ inputs.name }}-cluster.yaml
      - name: Commit changes
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Delete cluster ${{ inputs.name }}"
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}

3. Port Configuration

  • Create a Port blueprint that defines a data model for the cluster in the UI.
Blueprint
cluster-blueprint.json
{
  "identifier": "cluster",
  "description": "This blueprint represents a Kubernetes Cluster",
  "title": "Cluster",
  "icon": "Cluster",
  "schema": {
    "properties": {
      "provider": {
        "type": "string",
        "title": "Provider",
        "default": "aws",
        "description": "The provider where the cluster is hosted",
        "enum": ["aws", "gcp"]
      },
      "node-size": {
        "type": "string",
        "title": "Node Size",
        "default": "small",
        "description": "The size of the nodes",
        "enum": ["small", "medium", "large"]
      },
      "min-node-count": {
        "type": "number",
        "title": "Minimum number of nodes",
        "default": 1,
        "description": "The minimum number of nodes (autoscaler might increase this number)"
      },
      "kube-config": {
        "type": "string",
        "title": "Kube config",
        "description": "Kube config"
      },
      "status": {
        "type": "string",
        "title": "Status",
        "description": "The status of the cluster"
      }
    },
    "required": ["provider", "node-size", "min-node-count"]
  },
  "mirrorProperties": {},
  "calculationProperties": {},
  "relations": {}
}
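Blueprints can also be created programmatically. As a rough sketch, assuming you have a Port API access token in the PORT_TOKEN environment variable and the blueprint saved as cluster-blueprint.json, a request along these lines creates it:

```shell
curl -X POST "https://api.getport.io/v1/blueprints" \
  -H "Authorization: Bearer ${PORT_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @cluster-blueprint.json
```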

  • Create the Port actions for creating and deleting clusters:
    • Head to the self-service page.
    • Click on the + New Action button.
    • Click on the {...} Edit JSON button.
    • Copy and paste each of the following JSON configurations into the editor:
Create Cluster Action
Modification Required

Make sure to replace <GITHUB_ORG_ID> and <GITHUB_REPO_ID> with your GitHub organization and repository names respectively.

cluster-create-action.json
{
  "identifier": "cluster_create-cluster",
  "title": "Create a cluster",
  "description": "Create a new cluster.",
  "trigger": {
    "type": "self-service",
    "operation": "CREATE",
    "userInputs": {
      "properties": {
        "name": {
          "type": "string",
          "title": "Name",
          "description": "The name of the cluster"
        },
        "provider": {
          "type": "string",
          "title": "Provider",
          "default": "aws",
          "description": "The provider where the cluster is hosted",
          "enum": ["aws", "azure"]
        },
        "node-size": {
          "type": "string",
          "title": "Node Size",
          "default": "small",
          "description": "The size of the nodes",
          "enum": ["small", "medium", "large"]
        },
        "min-node-count": {
          "type": "string",
          "title": "Minimum number of nodes",
          "default": "1",
          "description": "The minimum number of nodes (autoscaler might increase this number)"
        }
      },
      "required": ["name", "provider", "node-size", "min-node-count"]
    },
    "blueprintIdentifier": "cluster"
  },
  "invocationMethod": {
    "type": "GITHUB",
    "org": "<GITHUB_ORG_ID>",
    "repo": "<GITHUB_REPO_ID>",
    "workflow": "create-cluster.yaml",
    "workflowInputs": {
      "{{if (.inputs | has(\"ref\")) then \"ref\" else null end}}": "{{.inputs.\"ref\"}}",
      "{{if (.inputs | has(\"name\")) then \"name\" else null end}}": "{{.inputs.\"name\"}}",
      "{{if (.inputs | has(\"provider\")) then \"provider\" else null end}}": "{{.inputs.\"provider\"}}",
      "{{if (.inputs | has(\"node-size\")) then \"node-size\" else null end}}": "{{.inputs.\"node-size\"}}",
      "{{if (.inputs | has(\"min-node-count\")) then \"min-node-count\" else null end}}": "{{.inputs.\"min-node-count\"}}"
    }
  },
  "publish": true
}
Delete Cluster Action
Modification Required

Make sure to replace <GITHUB_ORG_ID> and <GITHUB_REPO_ID> with your GitHub organization and repository names respectively.

cluster-delete-action.json
{
  "identifier": "cluster_delete-cluster",
  "title": "Delete the cluster",
  "description": "Delete the cluster.",
  "trigger": {
    "type": "self-service",
    "operation": "DELETE",
    "userInputs": {
      "properties": {
        "name": {
          "type": "string",
          "title": "Name",
          "description": "Confirm by typing the name of the cluster"
        }
      },
      "required": ["name"]
    },
    "blueprintIdentifier": "cluster"
  },
  "invocationMethod": {
    "type": "GITHUB",
    "org": "<GITHUB_ORG_ID>",
    "repo": "<GITHUB_REPO_ID>",
    "workflow": "delete-cluster.yaml",
    "workflowInputs": {
      "{{if (.inputs | has(\"ref\")) then \"ref\" else null end}}": "{{.inputs.\"ref\"}}",
      "{{if (.inputs | has(\"name\")) then \"name\" else null end}}": "{{.inputs.\"name\"}}"
    }
  },
  "publish": true
}

4. GitOps Setup

  • Now, to add the final piece of the automation, create an ArgoCD application responsible for syncing the GitHub repository state into the management cluster, where Crossplane creates the resources.
kubectl apply --filename apps.yaml
ArgoCD Application
tip
  • Change the repoURL to your repository.
  • Create the destination namespace, or change it to an existing one.
  • Make sure the ArgoCD project referenced in spec.project exists (ArgoCD ships with a default project you can use instead).
apps.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-infra
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: production
  source:
    repoURL: https://github.com/port-labs-labs/crossplane-demo
    targetRevision: HEAD
    path: infra
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
      allowEmpty: true
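After applying apps.yaml, you can confirm that ArgoCD created the application and check its sync status:

```shell
kubectl get applications -n argocd
```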

5. Let's test it

  • On the self-service page, go to the Create a cluster action and fill in the cluster properties.
  • Click the Execute button to trigger the creation process.
  • The GitHub workflow will generate a manifest and commit it to the infra folder of your repository.
  • ArgoCD will synchronize the manifest into the management cluster.
  • Crossplane will create the cluster resources in the specified provider.

Done! πŸŽ‰ You can now create and delete clusters from Port.