Multi-cluster KCP — Compute as a Service
KCP is a minimal Kubernetes API server that only knows about basic types and CustomResourceDefinitions (CRDs). KCP doesn’t know about most of the core Kubernetes types (Pods, Deployments, etc.), and expects users to define them as needed and to run controllers that respond to those resources.
KCP is a generic CustomResourceDefinition apiserver that is divided into multiple “logical clusters” that enable multitenancy of cluster-scoped resources such as CRDs and Namespaces. Each of these logical clusters is fully isolated from the others, allowing different teams, workloads, and use cases to live side by side.
KCP implements Compute as a Service via the concept of Transparent Multi-Cluster (TMC). With TMC, Kubernetes clusters are attached to a kcp installation to execute workload objects from the users’ workspaces: the workload objects are synced down to those clusters, and the objects’ status is synced back up.
In this blog, we will connect multiple clusters of different architectures to a single kcp server and deploy workloads to the registered clusters.
Prerequisites:
- An IBM x86 VM for deploying the kcp server
- An IBM x86 OpenShift cluster for executing workloads
- An IBM Power OpenShift cluster for executing workloads
You can deploy a Red Hat OpenShift cluster on IBM Power Systems Virtual Servers using steps in this article: https://developer.ibm.com/components/ibm-power/tutorials/install-ocp-on-power-vs/
KCP server setup on x86 VM
1. Install Go
git clone https://github.com/rpsene/goconfig.git
cd ./goconfig
source ./go.sh install
2. Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
3. Install Docker
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker
4. Install kcp and kcp plugins
wget https://github.com/kcp-dev/kcp/releases/download/v0.8.2/kcp_0.8.2_linux_amd64.tar.gz
tar -xvf kcp_0.8.2_linux_amd64.tar.gz
wget https://github.com/kcp-dev/kcp/releases/download/v0.8.2/kubectl-kcp-plugin_0.8.2_linux_amd64.tar.gz
tar -xvf kubectl-kcp-plugin_0.8.2_linux_amd64.tar.gz
export PATH=$PATH:/root/bin/
5. Start kcp server
kcp start
6. Access kcp server in another terminal
export KUBECONFIG=/root/.kcp/admin.kubeconfig #path to kubeconfig
kubectl api-resources
Register clusters to kcp server
1. Create a workspace
kubectl kcp workspace .
kubectl kcp workspace create --type=Organization org1
kubectl kcp workspace use org1
kubectl kcp workspace create workspace1
kubectl kcp workspace use workspace1
2. Copy kubeconfig of clusters to be registered
mkdir /root/.kube && cd /root/.kube
vi kubeconfig.x86 #Copy kubeconfig contents of x86 cluster
vi kubeconfig.ppc #Copy kubeconfig contents of Power cluster
3. Install the Red Hat OpenShift Pipelines operator on both clusters
4. Register x86 and Power clusters to kcp
#Register x86 cluster
kubectl kcp workload sync test-x86 --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.8.2 --resources deployments.apps,services,ingresses.networking.k8s.io,pipelines.tekton.dev,pipelineruns.tekton.dev,tasks.tekton.dev,runs.tekton.dev,networkpolicies.networking.k8s.io --output-file=syncer-x86.yaml
KUBECONFIG=/root/.kube/kubeconfig.x86 kubectl apply -f syncer-x86.yaml
kubectl label synctarget test-x86 sync=x86

#Register Power cluster
kubectl kcp workload sync test-ppc --syncer-image ghcr.io/kcp-dev/kcp/syncer:v0.8.2 --resources deployments.apps,services,ingresses.networking.k8s.io,pipelines.tekton.dev,pipelineruns.tekton.dev,tasks.tekton.dev,runs.tekton.dev,networkpolicies.networking.k8s.io --output-file=syncer-ppc.yaml
KUBECONFIG=/root/.kube/kubeconfig.ppc kubectl apply -f syncer-ppc.yaml
kubectl label synctarget test-ppc sync=ppc
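The two registration commands above are identical apart from the target name, so the flag list can be built once and reused. The sketch below only prints the commands instead of running them; the variable names are ours, not part of kcp.

```shell
# Shared flags for both sync targets. RESOURCES and SYNCER_IMAGE are
# local helper variables, not kcp options.
RESOURCES="deployments.apps,services,ingresses.networking.k8s.io,pipelines.tekton.dev,pipelineruns.tekton.dev,tasks.tekton.dev,runs.tekton.dev,networkpolicies.networking.k8s.io"
SYNCER_IMAGE="ghcr.io/kcp-dev/kcp/syncer:v0.8.2"

for arch in x86 ppc; do
  # echo the command rather than executing it, so the sketch works offline
  echo kubectl kcp workload sync "test-$arch" \
    --syncer-image "$SYNCER_IMAGE" \
    --resources "$RESOURCES" \
    --output-file="syncer-$arch.yaml"
done
```

Dropping the leading `echo` runs the commands for real; the generated `syncer-*.yaml` files are then applied to the matching physical cluster as shown above.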
In the above commands, you can pass the list of resources that you want to sync. By default, only Deployments are synced. In this blog, we also sync Tekton pipeline resources.
5. Create Locations pointing to SyncTargets
A Location represents a collection of SyncTarget objects selected via instance labels. You can read more about it here.
#x86 location pointing to x86 synctarget
cat > x86_location.yaml << EOF
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Location
metadata:
  name: x86-location
spec:
  instanceSelector:
    matchLabels:
      sync: x86
  resource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
EOF
kubectl apply -f x86_location.yaml
kubectl label location x86-location loc=x86

#ppc location pointing to ppc synctarget
cat > ppc_location.yaml << EOF
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Location
metadata:
  name: ppc-location
spec:
  instanceSelector:
    matchLabels:
      sync: ppc
  resource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
EOF
kubectl apply -f ppc_location.yaml
kubectl label location ppc-location loc=ppc
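Since the two Location manifests differ only in the name and the `sync` label, a small helper can stamp them out. `make_location` is a local shell function of ours, not a kcp command.

```shell
# Generate a kcp Location manifest for a given name and sync label.
# make_location is a local helper, not part of the kcp CLI.
make_location() {
  local name=$1 label=$2
  cat << EOF
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Location
metadata:
  name: $name
spec:
  instanceSelector:
    matchLabels:
      sync: $label
  resource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
EOF
}

make_location x86-location x86 > x86_location.yaml
make_location ppc-location ppc > ppc_location.yaml
```

The generated files are then applied and labeled exactly as in the two steps above.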
6. Create namespaces for deploying workloads
kubectl create ns hello-world
kubectl label namespace hello-world ns=x86
kubectl create ns hello-world-ppc
kubectl label namespace hello-world-ppc ns=ppc
7. Create Placements connecting Locations to namespaces
A Placement represents a selection rule that chooses ONE Location via location labels and binds the selected Location to MULTIPLE namespaces in a user workspace.
#x86 placement
cat > x86_placement.yaml << EOF
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Placement
metadata:
  name: x86-placement
spec:
  locationSelectors:
  - matchLabels:
      loc: x86
  locationResource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
  namespaceSelector:
    matchLabels:
      ns: x86
  locationWorkspace: root:org1:workspace1
EOF
kubectl apply -f x86_placement.yaml

#ppc placement
cat > ppc_placement.yaml << EOF
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Placement
metadata:
  name: ppc-placement
spec:
  locationSelectors:
  - matchLabels:
      loc: ppc
  locationResource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
  namespaceSelector:
    matchLabels:
      ns: ppc
  locationWorkspace: root:org1:workspace1
EOF
kubectl apply -f ppc_placement.yaml
Deploy workload to registered clusters
In this blog, we will deploy a Tekton pipeline to the registered clusters and fetch the output logs for each architecture.
git clone https://github.com/snehakpersistent/tekton-pipeline.git
cd tekton-pipeline

#To deploy on the x86 cluster, use namespace hello-world
#To deploy on the ppc cluster, use namespace hello-world-ppc
kubectl apply -f tasks/ -n <ns>
kubectl apply -f pipeline.yaml -n <ns>
kubectl apply -f pipelinerun.yaml -n <ns>
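The three apply commands can be repeated for each namespace. The loop below just prints the fully substituted commands; `hello-world` is routed to the x86 cluster and `hello-world-ppc` to the Power cluster by the placements created earlier.

```shell
# Build the full list of apply commands for both namespaces.
# CMDS is a local helper variable; the commands are printed, not executed,
# so this sketch works without a live kcp server.
CMDS=""
for ns in hello-world hello-world-ppc; do
  for manifest in tasks/ pipeline.yaml pipelinerun.yaml; do
    CMDS="$CMDS
kubectl apply -f $manifest -n $ns"
  done
done
echo "$CMDS"
```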
This syncs the Tekton pipeline to the respective registered cluster. You can check the logs of the PipelineRun with the tkn CLI run against the target cluster, e.g. tkn pipelinerun logs --last -f -n <ns>.
That's all, folks! Thanks for reading. Hope you found this tutorial helpful :)