diff --git a/README.md b/README.md
index 893551080b3e6fc20aa0d9d4e62443df3b0d03fb..ebd275d9c94672199ba0a022330f0e6a74eb9adf 100644
--- a/README.md
+++ b/README.md
@@ -44,47 +44,22 @@ OPERATION: The operation to perform on the spack environment (one of the followi
   
 ## Copy spack .yaml files and packages to the Openshift job pod that does the build
 
-The gitlab runner copies the spack .yaml files and packages to the OpenShift job pod.
+The GitLab runner copies the various files needed for the build to the OpenShift job pod (a sketch of this step follows the list below).
+- It copies the {spack, repo}.yaml files, the create_JupyterLab_kernel.sh script, and the packages/ directory
 - The runner waits until the job's pod is running to start copying the files
-- The pod (built from tc/ebrains-spack-build-env:latest image) waits until the necessary file(s) has finished copying so that it can continue the build process
+- The pod (built from [tc/ebrains-spack-build-env:latest image](https://docker-registry.ebrains.eu/harbor/projects/8/repositories/ebrains-spack-build-env)) waits until the necessary file(s) has finished copying so that it can continue the build process
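+
+Under the hood, the copy step might look roughly like the following. This is a minimal sketch, assuming the runner uses the `oc` CLI; the job label and destination path are illustrative placeholders rather than the actual values used by the pipeline:
+```
+# Sketch only: wait for the build Job's pod to be Running, then copy the build inputs.
+# The job-name label and the destination path are hypothetical examples.
+POD=$(oc get pods -l job-name=ebrains-spack-build -o jsonpath='{.items[0].metadata.name}')
+until [ "$(oc get pod "$POD" -o jsonpath='{.status.phase}')" = "Running" ]; do sleep 5; done
+
+# Copy the Spack environment files, the kernel-creation script and the packages/ directory
+oc cp spack.yaml "$POD":/opt/app-root/src/
+oc cp repo.yaml "$POD":/opt/app-root/src/
+oc cp create_JupyterLab_kernel.sh "$POD":/opt/app-root/src/
+oc cp packages "$POD":/opt/app-root/src/packages
+```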
 
-## Bulding software binaries with Spack
+## Building software binaries with Spack
 
-**ToDo The current build path needs to be automated with CI (e.g. gitlab runners)**
-
-- As a reference you can find [here](https://spack.readthedocs.io/en/latest/command_index.html) the Spack commands list
-- We will need to have different Spack environments to test specs (and build packages) and when tests are passing release specs from testing environments to production environment
-
-- First of all we need to create an appropriate build environment (at a dedicated VM or container image or gitlab runner). The build environment must run on the same OS as the Collaboratory base container image and fulfill all Spack's pre-requisites ( [1](https://ashki23.github.io/spack.html), [2](https://spack.readthedocs.io/en/latest/getting_started.html) )
-- The Spack installation folder at build time must (currently) be:
-```
-/opt/app-root/src
-```
-to match the Spack installation directory on the current Collab base container image
-- At the build environment a recent version of gcc must be running. Install the most appropriate gcc version based on the OS of the build environment. Below you can find the instructions to install the appropriate gcc version for CentOS 7 (current OS of Collaboratory base image).
-Instructions to install **"devtoolset-9"** [here](https://linuxize.com/post/how-to-install-gcc-compiler-on-centos-7/).
-- Then we can start building and installing tools with Spack:
-```
-spack install arbor %gcc@9.3.1
-spack install neuron %gcc@9.3.1
-spack install nest %gcc@9.3.1 +python
-```
-and perform tests for the installations:
-
-https://docs.arbor-sim.org/en/stable/tutorial/single_cell_model.html  
-https://neuron.yale.edu/neuron/static/docs/neuronpython/firststeps.html  
-https://nest-simulator.readthedocs.io/en/nest-2.20.1/auto_examples/one_neuron.html  
-
-- (Potentially) fix any potential errors for shared libraries (info [1](https://github.com/cdr/code-server/issues/766), [2](https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html), [3](https://serverkurma.com/linux/how-to-install-and-update-gcc-on-centos-7/))
-
-## Delivery of the software binaries
-
-**ToDo:This methodology will change, and the delivery will be be implemented via Gitlab CI and use of the shared NFS drive**
-
-- After build is complete, we move on to zip the **"$SPACK_ROOT"** and the **~/.spack** folders of the build environment and transfer them to CSCS Object Storage (currently, in the future it will be a shared NFS drive)
-- The artifacts that were built can now be used from the Collaboratory running containers to load and run the available simulation tools
+- The build process is powered by Spack, a multi-platform package manager that builds and installs multiple versions and configurations of software.
+- A Job (object) in OpenShift is responsible for the build process.
+- The GitLab runner starts a new Job that runs on an OpenShift pod using the container image developed in [this](https://gitlab.ebrains.eu/akarmas/ebrains-spack-build-env/) repository, which holds all the Spack specifics needed for the build process. Any Spack configuration necessary for a successful build should be changed through the Spack configuration files found in the present repository.
+- The OpenShift Job's pod mounts an NFS drive that is also mounted by all Collaboratory Lab containers and performs the entire build process with Spack on that drive; as a result, all the installed software is readily available to the Collaboratory Lab containers (a sketch of the build commands follows the list below)
+- A schema of the build process can be found [here](https://drive.ebrains.eu/smart-link/6adcd99f-c088-472e-a596-37ac38869051/)
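+
+As a rough illustration, the commands executed inside the Job's pod could look like the sketch below. The NFS mount point and the environment name are assumptions for illustration only; the Spack commands themselves (`spack env create`, `spack env activate`, `spack concretize`, `spack install`) are standard:
+```
+# Sketch only: build the environment on the shared NFS drive.
+# /srv/main-spack-instance and the environment name "ebrains" are hypothetical.
+export SPACK_ROOT=/srv/main-spack-instance/spack
+. $SPACK_ROOT/share/spack/setup-env.sh
+
+# Create (if needed) and activate the environment defined by the spack.yaml in this repository
+spack env create ebrains spack.yaml || true
+spack env activate ebrains
+
+# Concretize and install every spec in the environment; binaries land on the NFS drive,
+# so they are immediately visible to the Collaboratory Lab containers that mount it
+spack concretize -f
+spack install
+```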
 
 ## Activating software in the Collaboratory Lab containers
 
-- Currently to load the pre-built simulation tools in the Collaboratory Lab containers refer to **Quickstart** at the beggining of this file (currently the Object Storage at CSCS is used to download the pre-built software as a stop-gap measure until a shared NFS drive is available).
-- This process will change in the future as all simulation tools will be available in the Collaboratory running containers from a shared NFS drive
+- Currently, to activate the pre-built simulation tools in the Collaboratory Lab containers, refer to **Quickstart** at the beginning of this file
+- There are two options: i) using the simulation tools directly in the notebooks, and ii) using the simulation tools from a terminal in a Collaboratory Lab container
+
+**ToDo: put the necessary activation commands in the startup script of a JupyterLab container spawned in OpenShift to hide all implementation details from the users**
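+
+A minimal sketch of what such a startup snippet might contain is shown below. The NFS path and environment name are assumptions, and the call to create_JupyterLab_kernel.sh only illustrates where the kernel-registration step could be hooked in; see **Quickstart** for the commands that are currently in use:
+```
+# Sketch only: activation commands a Lab container startup script could run.
+# /srv/main-spack-instance and the environment name "ebrains" are hypothetical.
+export SPACK_ROOT=/srv/main-spack-instance/spack
+. $SPACK_ROOT/share/spack/setup-env.sh
+spack env activate ebrains            # option ii): tools usable from a terminal
+sh create_JupyterLab_kernel.sh        # option i): tools exposed as a JupyterLab kernel
+```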