Commit 0ad8d455 authored by ThanKarab

Added kubernetes deployment.

parent 1efb08ee
Merge request !308: Dev/add deployment with kubernetes
apiVersion: v2
name: exareme
description: A helm chart for Kubernetes deployment of Exareme
version: 0.1.0
type: application
# Exareme development deployment with Kubernetes on a single machine
## Configuration
The following packages need to be installed:
```
docker
kubectl
helm
```
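Note that `kind` itself is also required for the local cluster below. A quick sanity check that the tools are on the PATH (any recent versions should do):
```
docker --version          # container runtime backing the kind nodes
kubectl version --client  # kubernetes CLI
helm version              # chart-based deployment tool
kind version              # kubernetes-in-docker, used for the local cluster
```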
## Setup the kubernetes cluster with kind
1. Create the cluster using the e2e_tests setup (you can create a custom one if you want):
```
kind create cluster --config Federated-Deployment/kubernetes/kind_configuration/kind_cluster.yaml
```
2. After the nodes have started, remove the master taint from the control plane and label the nodes:
```
kubectl taint nodes kind-control-plane node-role.kubernetes.io/master-
kubectl label node kind-control-plane nodeType=master
kubectl label node kind-worker nodeType=worker
kubectl label node kind-worker2 nodeType=worker
```
3. (Optional) Load the docker images into the kubernetes cluster; otherwise the images will be pulled from Docker Hub:
```
kind load docker-image hbpmip/exareme:latest
```
4. Deploy the Exareme kubernetes pods using the helm chart:
```
helm install exareme Federated-Deployment/kubernetes/
```
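To verify that the deployment came up, a check along these lines should work (pod names will vary):
```
kubectl get nodes --show-labels  # nodeType labels on master/workers
helm list                        # the "exareme" release should be listed
kubectl get pods -o wide         # master, worker and keystore pods
```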
# Exareme deployment with Kubernetes
## Configuration
The following packages need to be installed on **master/worker** nodes:
```
docker
kubelet
kubeadm
```
Packages needed on the **master** node only:
```
helm
```
To configure kubernetes to use docker as its container runtime, also follow this [guide](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker "guide").
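The main step in that guide is switching docker's cgroup driver to systemd so it matches the kubelet. A minimal sketch of that configuration (values as in the guide, adjust to your setup):
```
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```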
## Cluster Management
### Initialize the cluster
On the **master** node:
1. Run the following command to initialize the cluster:
```
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```
2. To enable kubectl, run the following commands, as prompted by the output of the previous step:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
3. Install the Calico network plugin in the cluster:
```
kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
```
4. Allow master-specific pods to run on the **master** node with:
```
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl label node <master-node-name> nodeType=master
```
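At this point the control plane should be up; the node reports `Ready` once Calico is running:
```
kubectl get nodes                              # the master should become Ready
kubectl get pods -n kube-system | grep calico  # calico pods should be Running
```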
### Add a worker node to the cluster
1. On the **master** node, get the join token with the following command:
```
kubeadm token create --print-join-command
```
Run the printed command on the **worker** node, with `sudo`, to join the cluster (an example join command is shown after this list).
2. Allow worker-specific pods to run on the **worker** node with:
```
kubectl label node <worker-node-name> nodeType=worker
```
3. If the node has status `Ready,SchedulingDisabled`, run:
```
kubectl uncordon <node-name>
```
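For illustration, the join command printed in step 1 has the following shape; the address, token and hash here are placeholders, use the exact command printed on your master:
```
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```
Once joined, the node should appear in `kubectl get nodes` on the **master** node.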
### Remove a worker node from the cluster
On the **master** node, execute the following commands:
```
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>
```
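The drained node should then disappear from the node list:
```
kubectl get nodes   # the removed node should no longer be listed
```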
## Deploy Exareme
1. Configure the [helm chart values](values.yaml).
- `exareme_images -> version` should be the exareme services' image version on Docker Hub.
- `data_path` should be set to the path on the workers' host machines that contains the data.
- `workers` sets the number of worker pods in the cluster.
1. From the `exareme` folder, deploy the services:
```
helm install exareme Federated-Deployment/kubernetes/
```
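A quick check that the release is up, using the `app: exareme-worker` label from this chart's worker deployment:
```
helm list                               # release "exareme" should be deployed
kubectl get pods -l app=exareme-worker  # one pod per configured worker
kubectl get pods -o wide                # master and keystore pods as well
```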
### Change the running Exareme version
1. Modify the `exareme_images -> version` value in the [helm chart values](values.yaml) accordingly.
1. Upgrade the helm chart with:
```
helm upgrade exareme Federated-Deployment/kubernetes/
```
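To confirm the new version rolled out, one option is to inspect the running images (the `app: exareme-worker` label comes from this chart):
```
helm history exareme   # a new revision should appear
kubectl get pods -l app=exareme-worker \
    -o jsonpath='{.items[*].spec.containers[*].image}'
```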
### Increase/reduce the number of workers
1. Modify the `workers` value in the [helm chart values](values.yaml) accordingly.
1. Upgrade the helm chart with:
```
helm upgrade exareme Federated-Deployment/kubernetes/
```
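The number of worker pods should then match the new `workers` value:
```
kubectl get pods -l app=exareme-worker
```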
### Restart the federation
You can restart the federation with helm by running:
```
helm uninstall exareme
helm install exareme Federated-Deployment/kubernetes/
```
## Firewall Configuration
Using firewalld, the following rules should be applied.
On the **master** node:
```
firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API server port
firewall-cmd --permanent --add-port=30000/tcp # Exareme master service NodePort
```
On all nodes:
```
firewall-cmd --zone=public --permanent --add-rich-rule='rule protocol value="ipip" accept' # Allow IPIP (protocol 4) traffic for the Calico network plugin.
```
With these rules, kubectl can only be run on the **master** node.
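Permanent firewalld rules only take effect after a reload:
```
firewall-cmd --reload
firewall-cmd --list-ports   # confirm 6443/tcp and 30000/tcp on the master
```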
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 9090
      - containerPort: 30050
        hostPort: 8500
    extraMounts:
      - hostPath: /opt/exareme_data_1
        containerPath: /opt/data
  - role: worker
    extraMounts:
      - hostPath: /opt/exareme_data_2
        containerPath: /opt/data
  - role: worker
    extraMounts:
      - hostPath: /opt/exareme_data_3
        containerPath: /opt/data
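With this kind configuration, the master NodePort 30000 is mapped to host port 9090 and the keystore NodePort 30050 to host port 8500, so a reachability check from the host might look like this (assuming the services answer plain HTTP):
```
curl -s http://localhost:9090   # exareme master, via NodePort 30000
curl -s http://localhost:8500   # keystore (consul), via NodePort 30050
```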
@@ -46,4 +46,4 @@ spec:
     - protocol: TCP
       port: 8500
       targetPort: 8500
-      nodePort: 30000
\ No newline at end of file
+      nodePort: 30050
\ No newline at end of file
@@ -18,33 +18,34 @@ spec:
         nodeType: master
       containers:
         - name: exareme-master
-          image: hbpmip/exareme:24.1.2
+          image: {{ .Values.exareme_images.repository }}/exareme:{{ .Values.exareme_images.version }}
           imagePullPolicy: IfNotPresent
           ports:
             - containerPort: 9090
           volumeMounts:
             - mountPath: /root/exareme/data
-              name: csvs
+              name: data
           env:
+            - name: ENVIRONMENT_TYPE
+              value: "{{ .Values.exareme.environment_type }}"
+            - name: LOG_LEVEL
+              value: "{{ .Values.exareme.log_level }}"
+            - name: CONVERT_CSVS
+              value: "{{ .Values.exareme.convert_csvs }}"
+            - name: TEMP_FILES_CLEANUP_TIME
+              value: "{{ .Values.exareme.temp_file_cleanup_time }}"
+            - name: NODE_COMMUNICATION_TIMEOUT
+              value: "{{ .Values.exareme.node_communication_timeout }}"
             - name: CONSULURL
               value: "exareme-keystore-service:8500"
             - name: FEDERATION_ROLE
               value: "master"
             - name: NODE_NAME
               value: "master"
-            - name: TEMP_FILES_CLEANUP_TIME
-              value: "30"
-            - name: NODE_COMMUNICATION_TIMEOUT
-              value: "30000"
-            - name: ENVIRONMENT_TYPE
-              value: "PROD"
-            - name: LOG_LEVEL
-              value: "INFO"
-            - name: CONVERT_CSVS
-              value: "TRUE"
       volumes:
-        - name: csvs
+        - name: data
           hostPath:
-            path: /etc/exareme
+            path: {{ .Values.data_path }}
 ---
@@ -60,4 +61,4 @@ spec:
     - protocol: TCP
       port: 9090
       targetPort: 9090
-      nodePort: 30090
+      nodePort: 30000
@@ -5,7 +5,7 @@ metadata:
   labels:
     app: exareme-worker
 spec:
-  replicas: 2
+  replicas: {{ .Values.workers }}
   selector:
     matchLabels:
       app: exareme-worker
@@ -28,32 +28,33 @@ spec:
               topologyKey: "kubernetes.io/hostname"
       containers:
         - name: exareme-worker
-          image: hbpmip/exareme:24.1.2
+          image: {{ .Values.exareme_images.repository }}/exareme:{{ .Values.exareme_images.version }}
           imagePullPolicy: IfNotPresent
           ports:
             - containerPort: 9090
           volumeMounts:
             - mountPath: /root/exareme/data
-              name: csvs
+              name: data
           env:
+            - name: NODE_NAME
+              valueFrom:
+                fieldRef:
+                  fieldPath: spec.nodeName
+            - name: ENVIRONMENT_TYPE
+              value: "{{ .Values.exareme.environment_type }}"
+            - name: LOG_LEVEL
+              value: "{{ .Values.exareme.log_level }}"
+            - name: CONVERT_CSVS
+              value: "{{ .Values.exareme.convert_csvs }}"
+            - name: TEMP_FILES_CLEANUP_TIME
+              value: "{{ .Values.exareme.temp_file_cleanup_time }}"
+            - name: NODE_COMMUNICATION_TIMEOUT
+              value: "{{ .Values.exareme.node_communication_timeout }}"
             - name: CONSULURL
               value: "exareme-keystore-service:8500"
             - name: FEDERATION_ROLE
               value: "worker"
-            - name: TEMP_FILES_CLEANUP_TIME
-              value: "30"
-            - name: NODE_COMMUNICATION_TIMEOUT
-              value: "30000"
-            - name: ENVIRONMENT_TYPE
-              value: "PROD"
-            - name: LOG_LEVEL
-              value: "INFO"
-            - name: CONVERT_CSVS
-              value: "TRUE"
-            - name: NODE_NAME
-              valueFrom:
-                fieldRef:
-                  fieldPath: spec.nodeName
       volumes:
-        - name: csvs
+        - name: data
           hostPath:
-            path: /etc/exareme
+            path: {{ .Values.data_path }}
exareme_images:
  repository: hbpmip
  version: 24.2.0
data_path: /opt/data
exareme:
  log_level: INFO
  environment_type: PROD
  convert_csvs: TRUE
  temp_file_cleanup_time: 30
  node_communication_timeout: 30000
workers: 2