Install k3s with dashboard in Rocky9
1 Kubernetes Dashboard via NodePort and Auth Token on K3S
1.1 Description
The goal is to install and expose the Kubernetes Dashboard using NodePort and an authentication token, allowing LAN users to access it without port forwarding.
2 Setup K3S
2.1 Step 1: Upgrade and install necessary packages
dnf -y upgrade
dnf -y install setroubleshoot-server curl lsof wget tar vim git bash-completion
2.2 Step 2: Disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a
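To verify, swapon should print nothing and free should report 0B of swap:
swapon --show
free -h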
2.3 Step 3: Open necessary firewall ports
systemctl disable firewalld --now   # it is recommended to disable firewalld; keep it enabled only if you know how to handle it, in which case open the ports below instead
firewall-cmd --permanent --add-port=30443/tcp                       # dashboard
firewall-cmd --permanent --add-port=443/tcp                         # ingress controller
firewall-cmd --permanent --add-port=6443/tcp                        # API server
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16   # pods
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16   # services
firewall-cmd --reload
reboot
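If you kept firewalld enabled, you can verify the rules after the reboot, for example:
firewall-cmd --list-ports
firewall-cmd --zone=trusted --list-sources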
2.4 Step 4: Install k3s
curl -sfL https://get.k3s.io | sh
grep 'kubectl completion bash' $HOME/.bashrc || echo 'source <(kubectl completion bash)' >> $HOME/.bashrc
Check k3s version:
k3s -v
# Expected output:
# k3s version v1.30.4+k3s1 (98262b5d)
# go version go1.22.5
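As a quick sanity check, the node should report Ready once k3s is up (node name and versions will differ on your system):
kubectl get nodes -o wide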
2.5 Step 5: Install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | sh
helm completion bash > /etc/bash_completion.d/helm
grep KUBECONFIG $HOME/.bashrc || echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> $HOME/.bashrc
Log out and back in to apply these changes, then continue with the Dashboard setup via Helm below.
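After logging back in, you can confirm that Helm can talk to the cluster, for example:
helm version
helm ls -A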
3 Setup Dashboard
3.1 Step 1: Add the Kubernetes Dashboard Helm repo and install the Dashboard
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
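Before continuing, you may want to wait until all Dashboard pods are running, for example:
kubectl -n kubernetes-dashboard get pods
kubectl -n kubernetes-dashboard wait --for=condition=Ready pods --all --timeout=300s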
3.2 Upgrade existing Dashboard
# kubernetes-dashboard [root@k3s01 ~]# kubectl config set-context --current --namespace kubernetes-dashboard
Context "default" modified.
# kubernetes-dashboard [root@k3s01 ~]# helm search repo kubernetes-dashboard -l | head
NAME                                        CHART VERSION   APP VERSION   DESCRIPTION
kubernetes-dashboard/kubernetes-dashboard   7.6.1                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.6.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.5.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.4.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.3.2                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.3.1                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.3.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.2.0                         General-purpose web UI for Kubernetes clusters
kubernetes-dashboard/kubernetes-dashboard   7.1.3                         General-purpose web UI for Kubernetes clusters
# kubernetes-dashboard [root@k3s01 ~]# helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --version 7.6.1 -n kubernetes-dashboard --reuse-values
Release "kubernetes-dashboard" has been upgraded. Happy Helming!
NAME: kubernetes-dashboard
LAST DEPLOYED: Fri Nov 29 07:02:49 2024
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************
Congratulations! You have just installed Kubernetes Dashboard in your cluster.
To access Dashboard run:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
kubectl -n kubernetes-dashboard get svc
Dashboard will be available at:
https://localhost:8443
3.3 Step 2: Expose Dashboard via NodePort
kubectl patch service kubernetes-dashboard-kong-proxy -n kubernetes-dashboard --type='merge' -p '{
  "spec": {
    "type": "NodePort",
    "ports": [
      {
        "name": "kong-proxy-tls",
        "port": 443,
        "protocol": "TCP",
        "targetPort": 8443,
        "nodePort": 30443
      }
    ],
    "selector": {
      "app.kubernetes.io/component": "app",
      "app.kubernetes.io/instance": "kubernetes-dashboard",
      "app.kubernetes.io/name": "kong"
    },
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}'
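To confirm the patch took effect, the service should now be of type NodePort and map port 443 to 30443:
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy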
3.4 Step 3: Create Service Account and RoleBinding for Admin Access
echo 'apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
' | kubectl apply -f -
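It can take a moment for the token controller to populate the secret; you can check it with:
kubectl -n kubernetes-dashboard describe secret admin-user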
3.5 Step 4: Retrieve the Admin User Token
kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d
Use the retrieved token to log in to the Kubernetes Dashboard at https://your.cluster.fqdn:30443
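Alternatively, on Kubernetes 1.24 and newer you can request a short-lived token instead of reading the long-lived secret:
kubectl -n kubernetes-dashboard create token admin-user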
4 Uninstall Dashboard
To remove the Kubernetes Dashboard, run the following commands:
helm uninstall kubernetes-dashboard --namespace kubernetes-dashboard
kubectl get all -n kubernetes-dashboard
kubectl delete namespace kubernetes-dashboard
helm repo remove kubernetes-dashboard
5 Applications on K3S
This is a cookbook for installing applications on the setup described above.
5.1 AWX setup with operator
5.1.1 Install Operator and instance
export NAMESPACE=awx
kubectl create namespace $NAMESPACE
kubectl config set-context --namespace=$NAMESPACE --current
cd ; mkdir awx ; cd awx
git clone https://github.com/ansible/awx-operator.git
cd awx-operator
git tag
git checkout tags/2.19.1
vim kustomization.yaml
------
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=2.19.1
  - awx-bitbull.yml
# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.19.1
# Specify a custom namespace in which to install AWX
namespace: awx
...
------
vim awx-bitbull.yml
------
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: bitbull
spec:
  ingress_type: ingress
  ingress_hosts:
    - hostname: k3s01.domain.tld
...
------
kubectl get namespaces
dnf -y install make
make deploy
kubectl apply -k .
kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager
kubectl get awx
# find password to log into awx
kubectl get secret bitbull-admin-password -o jsonpath='{.data.password}' | base64 --decode
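Once the operator has finished, the instance should be reachable through the ingress hostname defined in awx-bitbull.yml; a quick check (using the example hostname from above):
kubectl get ingress -n awx
curl -kIL http://k3s01.domain.tld/
Log in with the default admin user (admin, unless changed) and the password retrieved above.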
5.1.2 Setup AWX Backup
The AWX operator ships several CRDs such as AWX, AWXBackup, and AWXRestore.
Let's use these to implement a truly Kubernetes-native backup, as sketched below.
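A minimal sketch of an on-demand backup, assuming the instance is named bitbull as above and using the AWXBackup CRD provided by the operator:
echo '---
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-demo
  namespace: awx
spec:
  deployment_name: bitbull
' | kubectl apply -f -
The operator should then write a database dump and the instance secrets to a backup PVC, which an AWXRestore object can restore from later.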