diff --git a/docs/develop/deploy_plugin.md b/docs/develop/deploy_plugin.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d73dd684623917b50c8dcaa075e1331eb47a83c
--- /dev/null
+++ b/docs/develop/deploy_plugin.md
@@ -0,0 +1,114 @@
+# Deploy plugin
+
+This section details how plugins can be deployed on EBRAINS infrastructure (OpenShift). It should also apply, more generally, to other infrastructure providers offering OpenShift and/or Kubernetes flavors.
+
+## Prerequisites
+
+- Docker
+- Access to a self-hosted Docker registry (e.g. docker-registry.ebrains.eu) [1]
+- Access to an OpenShift cluster (e.g. okd.hbp.eu)
+- The OpenShift CLI `oc` installed, see [https://github.com/openshift/origin/releases/tag/v3.11.0](https://github.com/openshift/origin/releases/tag/v3.11.0) (for the CLI approach)
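+
+You can quickly verify the tooling before starting (a sketch; version numbers will differ on your machine):
+
+```sh
+# each command should print a version rather than "command not found"
+docker --version
+oc version
+```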
+
+
+## How
+
+!!! warning
+    This guide assumes the plugin developers are using the template repo provided at [https://github.com/FZJ-INM1-BDA/siibra-toolbox-template](https://github.com/FZJ-INM1-BDA/siibra-toolbox-template), as demonstrated by [https://github.com/fzj-inm1-bda/siibra-jugex/tree/feat_workerFrontend](https://github.com/fzj-inm1-bda/siibra-jugex/tree/feat_workerFrontend).
+
+!!! info
+    This guide assumes plugin developers have successfully tested their plugin locally.
+
+!!! info
+    This guide covers the initial deployment of the application. Documentation on updating an already deployed service can be found at [update_plugin_deployment.md](update_plugin_deployment.md).
+
+You can deploy the plugin either via GUI or CLI.
+
+## How (via CLI)
+
+0. (Can be skipped if the project already exists.) Go to https://docker-registry.ebrains.eu/ and create a project (hereafter referred to as `<project_name>`). Decide on a name for your application within this project, hereafter referred to as `<app_name>`.
+
+1. In the root working directory, build the server image with
+
+    ```sh
+    docker build \
+        -f http.server.dockerfile \
+        -t docker-registry.ebrains.eu/<project_name>/<app_name>:latest-server \
+        .
+    ```
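+
+    Optionally, sanity-check the image before pushing it (a sketch; the template's server listens on port 6001, and the container may log broker connection errors since no redis is running yet):
+
+    ```sh
+    docker run --rm -p 6001:6001 \
+        docker-registry.ebrains.eu/<project_name>/<app_name>:latest-server
+    ```
+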
+2. In the root working directory, build the worker image with
+
+    ```sh
+    docker build \
+        -f http.worker.dockerfile \
+        -t docker-registry.ebrains.eu/<project_name>/<app_name>:latest-worker \
+        .
+    ```
+3. Log in to the Docker registry via the CLI
+
+    ```sh
+    docker login -u <USERNAME> -p <PASSWORD> docker-registry.ebrains.eu
+    ```
+
+    !!! info
+        Most Docker registries do **not** require your actual password. On docker-registry.ebrains.eu (Harbor), you can obtain a token with automatic expiry by clicking your profile > CLI Secret.
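+
+    To keep the secret out of your shell history, you can also pipe it to `docker login` (a sketch; `<CLI_SECRET>` is the token obtained above):
+
+    ```sh
+    # --password-stdin reads the secret from standard input instead of the command line
+    echo "<CLI_SECRET>" | docker login -u <USERNAME> --password-stdin docker-registry.ebrains.eu
+    ```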
+
+4. Push both the worker and server images to the Docker registry
+
+    ```sh
+    docker push docker-registry.ebrains.eu/<project_name>/<app_name>:latest-worker
+    docker push docker-registry.ebrains.eu/<project_name>/<app_name>:latest-server
+    ```
+
+5. Log in to the OpenShift admin dashboard. (Create a project if you haven't already, hereafter referred to as `<okd_project_name>`.) Enter the project by clicking it.
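+
+    Alternatively, once logged in via the CLI (see step 7 below), the project can also be created from the terminal:
+
+    ```sh
+    oc new-project <okd_project_name>
+    ```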
+
+6. Copy `openshift-service-tmpl.yml` to your working directory, or `cd` into the directory containing it.
+
+7. Copy the login command via `(top right) [Your Username]` > `Copy Login Command`. Launch a terminal, paste the login command and hit enter.
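+
+    The pasted command will look something like the following (a sketch; the token is generated by the dashboard and expires after a while):
+
+    ```sh
+    oc login https://okd.hbp.eu:443 --token=<token copied from the dashboard>
+    ```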
+
+8. Select the project with `oc project <okd_project_name>`
+
+9. Start the service with
+
+    ```sh
+    oc new-app \
+        -f openshift-service-tmpl.yml \
+        -p TOOLBOX_NAME=my-app \
+        -p TOOLBOX_ROUTE=my-app-route.apps.hbp.eu \
+        -p TOOLBOX_WORKER_IMAGE=docker-registry.ebrains.eu/<project_name>/<app_name>:latest-worker \
+        -p TOOLBOX_SERVER_IMAGE=docker-registry.ebrains.eu/<project_name>/<app_name>:latest-server
+    ```
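+
+    You can then verify that the deployment is progressing (assuming the default object names from the template):
+
+    ```sh
+    oc status     # overview of the objects the template created
+    oc get pods   # server, worker and redis pods should eventually be Running
+    oc get route  # prints the public hostname serving the toolbox
+    ```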
+
+## How (via GUI)
+
+0. - 5. Follow the corresponding steps of "How (via CLI)" above.
+
+6. Deploy a redis instance via GUI:
+- `(top right) Add to project` > `Deploy Image` > `(radio button) Image Name`
+- enter `docker-registry.ebrains.eu/monitoring/redis:alpine3.17` in the text field
+- click `(button) [magnifying glass]`
+- change or remember the `name` attribute. Hereafter this attribute will be referred to as `<redis_instance_name>`
+- click `(primary button) Deploy`
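+
+From any terminal that is logged in to the cluster, you can optionally confirm the instance started (a sketch; `Deploy Image` names the created objects after the `name` attribute):
+
+```sh
+oc get dc <redis_instance_name>   # the deployment config created by the GUI
+oc get pods                       # the redis pod should eventually be Running
+```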
+
+7. Deploy the server via GUI:
+
+- `(top right) Add to project` > `Deploy Image` > `(radio button) Image Name`
+- enter `docker-registry.ebrains.eu/<project_name>/<app_name>:latest-server` in the text field
+- click `(button) [magnifying glass]`
+- under `Environment Variables`, add the following environment variables[2]:
+    - `SIIBRA_TOOLBOX_CELERY_BROKER`=`redis://<redis_instance_name>:6379`
+    - `SIIBRA_TOOLBOX_CELERY_RESULT`=`redis://<redis_instance_name>:6379`
+- under `Labels`, add the following labels:
+    - `app_role`=`server`
+- click `(primary button) Deploy`
+
+8. Deploy worker via GUI: repeat 7. but
+    - use `docker-registry.ebrains.eu/<project_name>/<app_name>:latest-worker` as the image
+    - under `Labels` use the following labels:
+        - `app_role`=`worker`
+
+9. Create route (TBD)
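+
+    Until this section is written, one possible approach from the CLI is `oc create route edge` (a sketch; `<server_service_name>` refers to the Service fronting your server pods, which you may need to create first):
+
+    ```sh
+    oc create route edge <app_name>-route \
+        --service=<server_service_name> \
+        --hostname=<app_name>.apps.hbp.eu
+    ```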
+
+
+[1] Docker Hub rate-limits image pulls per IP address. It is likely that the OpenShift cluster would easily exceed this quota.
+
+[2] You may have to adjust the variable names if you have changed them in your project.
+
diff --git a/docs/develop/openshift-service-tmpl.yml b/docs/develop/openshift-service-tmpl.yml
new file mode 100644
index 0000000000000000000000000000000000000000..4a3e770be4fd830593479bd202ea7682873aecfa
--- /dev/null
+++ b/docs/develop/openshift-service-tmpl.yml
@@ -0,0 +1,320 @@
+apiVersion: template.openshift.io/v1
+kind: Template
+labels:
+  template: siibra-toolbox-deploy-template
+metadata:
+  annotations:
+    description: Deploy siibra toolbox
+    tags: python,async
+  name: siibra-toolbox-deploy-template
+objects:
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-redis
+  spec:
+    replicas: 1  # a single, non-clustered redis instance; its state is not shared across replicas
+    revisionHistoryLimit: 10
+    selector:
+      deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-redis
+    template:
+      metadata:
+        labels:
+          app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+          deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-redis
+      spec:
+        containers:
+        - image: docker-registry.ebrains.eu/monitoring/redis:alpine3.17
+          imagePullPolicy: Always
+          name: redis
+          resources: {}
+          terminationMessagePath: /dev/termination-log
+          terminationMessagePolicy: File
+
+        dnsPolicy: ClusterFirst
+        restartPolicy: Always
+        schedulerName: default-scheduler
+        securityContext: {}
+        terminationGracePeriodSeconds: 30
+
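+# This Service exposes the redis pods inside the cluster, so that the
+# broker/result URLs below (redis://siibra-toolbox-deploy-${TOOLBOX_NAME}-redis:6379)
+# can resolve the instance by name.
+- apiVersion: v1
+  kind: Service
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-redis
+  spec:
+    ports:
+    - name: 6379-tcp
+      port: 6379
+      protocol: TCP
+      targetPort: 6379
+    selector:
+      deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-redis
+    type: ClusterIP
+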
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+      app_role: worker
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-worker
+  spec:
+    replicas: 3
+    revisionHistoryLimit: 10
+    selector:
+      deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-worker
+    template:
+      metadata:
+        labels:
+          app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+          app_role: worker
+          deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-worker
+      spec:
+        containers:
+        - env:
+          - name: SIIBRA_TOOLBOX_NAME
+            value: ${TOOLBOX_NAME}
+          - name: SIIBRA_TOOLBOX_CELERY_BROKER
+            value: redis://siibra-toolbox-deploy-${TOOLBOX_NAME}-redis:6379
+          - name: SIIBRA_TOOLBOX_CELERY_RESULT
+            value: redis://siibra-toolbox-deploy-${TOOLBOX_NAME}-redis:6379
+
+          # see [2]
+          
+          # - name: SIIBRA_TOOLBOX_DATA_DIR
+          #   value: ${SHARED_VOLUME_MOUNT}
+
+          # see [1]
+
+          # - name: SIIBRA_TOOLBOX_LOG_DIR
+          #   value: ${LOG_VOLUME_MOUNT}
+
+          image: ${TOOLBOX_WORKER_IMAGE}
+          imagePullPolicy: Always
+          name: siibra-toolbox-deploy-${TOOLBOX_NAME}-worker
+          resources: {}
+          terminationMessagePath: /dev/termination-log
+          terminationMessagePolicy: File
+          volumeMounts:
+
+          # see [2]
+
+          # - mountPath: ${SHARED_VOLUME_MOUNT}
+          #   name: volume-${SHARED_VOLUME_WORKER_VOLUME_NAME}
+
+          # see [1]
+          
+          # - mountPath: ${LOG_VOLUME_MOUNT}
+          #   name: volume-${LOG_VOLUME_WORKER_VOLUME_NAME}
+
+        dnsPolicy: ClusterFirst
+        restartPolicy: Always
+        schedulerName: default-scheduler
+        securityContext: {}
+        terminationGracePeriodSeconds: 30
+        volumes:
+
+        # see [2]
+        
+        # - name: volume-${SHARED_VOLUME_WORKER_VOLUME_NAME}
+        #   persistentVolumeClaim:
+        #     claimName: toolbox-storage
+
+        
+        # see [1]
+
+        # - name: volume-${LOG_VOLUME_WORKER_VOLUME_NAME}
+        #   persistentVolumeClaim:
+        #     claimName: log-volume
+
+- apiVersion: v1
+  kind: DeploymentConfig
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+      app_role: server
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-server
+  spec:
+    replicas: 1
+    revisionHistoryLimit: 10
+    selector:
+      deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-server
+    template:
+      metadata:
+        labels:
+          app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+          app_role: server
+          deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-server
+      spec:
+        containers:
+        - env:
+          - name: SIIBRA_TOOLBOX_NAME
+            value: ${TOOLBOX_NAME}
+          - name: SIIBRA_TOOLBOX_CELERY_BROKER
+            value: redis://siibra-toolbox-deploy-${TOOLBOX_NAME}-redis:6379
+          - name: SIIBRA_TOOLBOX_CELERY_RESULT
+            value: redis://siibra-toolbox-deploy-${TOOLBOX_NAME}-redis:6379
+            
+          # see [2]
+
+          # - name: SIIBRA_TOOLBOX_DATA_DIR
+          #   value: ${SHARED_VOLUME_MOUNT}
+
+          # see [1]
+
+          # - name: SIIBRA_TOOLBOX_LOG_DIR
+          #   value: ${LOG_VOLUME_MOUNT}
+          image: ${TOOLBOX_SERVER_IMAGE}
+          imagePullPolicy: Always
+
+          # You can choose to have a liveness probe.
+          # Here, it is assumed to be served at /ready.
+          # Uncomment if your server implements one.
+
+          # livenessProbe:
+          #   failureThreshold: 3
+          #   httpGet:
+          #     path: /ready
+          #     port: 6001
+          #     scheme: HTTP
+          #   initialDelaySeconds: 10
+          #   periodSeconds: 10
+          #   successThreshold: 1
+          #   timeoutSeconds: 1
+
+          name: siibra-toolbox-deploy-${TOOLBOX_NAME}-server
+          ports:
+          - containerPort: 6001
+            protocol: TCP
+
+          # You can choose to have a readiness probe.
+          # Here, it is assumed to be served at /ready.
+          # Uncomment if your server implements one.
+
+          # readinessProbe:
+          #   failureThreshold: 3
+          #   httpGet:
+          #     path: /ready
+          #     port: 6001
+          #     scheme: HTTP
+          #   initialDelaySeconds: 3
+          #   periodSeconds: 10
+          #   successThreshold: 1
+          #   timeoutSeconds: 6
+
+          resources: {}
+          terminationMessagePath: /dev/termination-log
+          terminationMessagePolicy: File
+          volumeMounts:
+          
+          # see [2]
+          
+          # - mountPath: ${SHARED_VOLUME_MOUNT}
+          #   name: volume-${SHARED_VOLUME_SERVER_VOLUME_NAME}
+          
+          # see [1]
+
+          # - mountPath: ${LOG_VOLUME_MOUNT}
+          #   name: volume-${LOG_VOLUME_SERVER_VOLUME_NAME}
+
+        dnsPolicy: ClusterFirst
+        restartPolicy: Always
+        schedulerName: default-scheduler
+        securityContext: {}
+        terminationGracePeriodSeconds: 30
+        volumes:
+
+        # see [2]
+
+        # - name: volume-${SHARED_VOLUME_SERVER_VOLUME_NAME}
+        #   persistentVolumeClaim:
+        #     claimName: toolbox-storage
+
+        # see [1]
+
+        # - name: volume-${LOG_VOLUME_SERVER_VOLUME_NAME}
+        #   persistentVolumeClaim:
+        #     claimName: log-volume
+
+- apiVersion: v1
+  kind: Service
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-service
+  spec:
+    ports:
+    - name: 6001-tcp
+      port: 6001
+      protocol: TCP
+      targetPort: 6001
+    selector:
+      deploymentconfig: siibra-toolbox-deploy-${TOOLBOX_NAME}-server
+    type: ClusterIP
+
+- apiVersion: v1
+  kind: Route
+  metadata:
+    labels:
+      app: siibra-toolbox-deploy-${TOOLBOX_NAME}
+    name: siibra-toolbox-deploy-${TOOLBOX_NAME}-route
+  spec:
+    host: ${TOOLBOX_ROUTE}
+    port:
+      targetPort: 6001-tcp
+    tls:
+      insecureEdgeTerminationPolicy: Redirect
+      termination: edge
+    to:
+      kind: Service
+      name: siibra-toolbox-deploy-${TOOLBOX_NAME}-service
+      weight: 100
+    wildcardPolicy: None
+
+parameters:
+- description: Toolbox name
+  name: TOOLBOX_NAME
+  required: true
+- description: Toolbox route, without scheme (i.e. no http(s)://). Should match [a-z0-9][a-z0-9-]*[a-z0-9].apps(-dev)?.hbp.eu
+  name: TOOLBOX_ROUTE
+  required: true
+- description: Docker image for the worker
+  name: TOOLBOX_WORKER_IMAGE
+  required: true
+- description: Docker image for the server
+  name: TOOLBOX_SERVER_IMAGE
+  required: true
+
+- description: Randomly generated volume name. Do not overwrite
+  from: '[a-z0-9]{8}'
+  generate: expression
+  name: SHARED_VOLUME_SERVER_VOLUME_NAME
+- description: Randomly generated volume name. Do not overwrite
+  from: '[a-z0-9]{8}'
+  generate: expression
+  name: SHARED_VOLUME_WORKER_VOLUME_NAME
+- description: Path where shared volume will be mounted. Applies to both server and
+    worker pods.
+  name: SHARED_VOLUME_MOUNT
+  value: /siibra_toolbox_volume
+- description: Randomly generated volume name. Do not overwrite
+  from: '[a-z0-9]{8}'
+  generate: expression
+  name: LOG_VOLUME_WORKER_VOLUME_NAME
+- description: Randomly generated volume name. Do not overwrite
+  from: '[a-z0-9]{8}'
+  generate: expression
+  name: LOG_VOLUME_SERVER_VOLUME_NAME
+- description: Path where the log volume will be mounted. Applies to both server and
+    worker pods.
+  name: LOG_VOLUME_MOUNT
+  value: /siibra_toolbox_logs
+
+
+
+# [1] enabling the logging volume
+#
+# If you would like shared log storage between worker and server,
+# create a persistent volume claim named `log-volume`,
+# then uncomment the blocks marked `see [1]` above.
+
+
+# [2] enabling the shared data volume
+#
+# If you would like shared data storage between worker and server,
+# create a persistent volume claim named `toolbox-storage`,
+# then uncomment the blocks marked `see [2]` above.
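+
+# For reference, a minimal PersistentVolumeClaim for [2] could look like the
+# following (a sketch; access mode, size and storage class depend on your
+# cluster, and `oc create -f <file>.yaml` would create it). The claim for [1]
+# is analogous, with the name `log-volume`.
+#
+# apiVersion: v1
+# kind: PersistentVolumeClaim
+# metadata:
+#   name: toolbox-storage
+# spec:
+#   accessModes:
+#   - ReadWriteMany
+#   resources:
+#     requests:
+#       storage: 1Gi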
diff --git a/docs/develop/update_plugin_deployment.md b/docs/develop/update_plugin_deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..2fd9f9570028dac87fd50b6dd37124d915c8b441
--- /dev/null
+++ b/docs/develop/update_plugin_deployment.md
@@ -0,0 +1 @@
+TBD
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index 98192f9c1553737df895415a6b76a895d1fc9f40..e6cf9cd5001c9ab26c54b88c6d6a27f02d2d80eb 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,6 +1,6 @@
 # Interactive Atlas Viewer
 
-The interactive atlas viewer is a browser based viewer of brain atlases. Tight integration with the Human Brain Project Knowledge Graph allows seamless querying of semantically and spatially anchored datasets. 
+The interactive atlas viewer is a browser-based viewer of brain atlases. Tight integration with the EBRAINS Knowledge Graph allows seamless querying of semantically and spatially anchored datasets.
 
 ![](images/desktop_bigbrain_cortical.png)