# Build and distribute software with Spack
**Quickstart**
- Open a terminal in a running Collaboratory Lab container and execute the following:
```
git clone https://gitlab.ebrains.eu/akarmas/ebrains-spack-builds.git
cd ebrains-spack-builds
source ./load_sim_tools.sh
```
- Then you can start Python and import the available tools.
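A quick way to verify the tools loaded correctly could look like this (a hypothetical helper, assuming `load_sim_tools.sh` has put the simulators on `PYTHONPATH` in the current shell):

```shell
# Hypothetical sanity check; assumes load_sim_tools.sh was sourced first.
check_tools() {
  python -c "import arbor, neuron, nest" && echo "simulation tools available"
}
```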
The following variables must be set up initially and re-configured whenever tokens expire:
- `OPENSHIFT_TOKEN`: token used to log in to the OpenShift cluster (with the "gitlab" service account)
- `OPENSHIFT_DEV_SERVER`: URL of the OpenShift development cluster, needed for deploying software to the lab-int environment
- `BUILD_ENV`: name of the environment in which to deploy the software of the next commit
- `OPERATION`: operation to perform on the Spack environment (one of: testing, create, update, delete)
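For local experimentation, the variables could be exported like this (all values below are placeholders; real values come from the OpenShift cluster and the GitLab CI/CD variable settings):

```shell
# Placeholder values for illustration only.
export OPENSHIFT_TOKEN="<gitlab-service-account-token>"
export OPENSHIFT_DEV_SERVER="https://dev.openshift.example.org:6443"  # hypothetical URL
export BUILD_ENV="ebrains-test"   # hypothetical environment name
export OPERATION="update"         # one of: testing, create, update, delete
```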
## Copy spack .yaml files and packages to the OpenShift job pod that does the build
The GitLab runner copies the spack .yaml files and packages to the OpenShift job pod.
- The runner waits until the job's pod is running before it starts copying the files
- The pod (built from the tc/ebrains-spack-build-env:latest image) waits until the necessary files have finished copying, so that it can continue the build process
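The wait-then-copy step above could be sketched roughly as follows (the function name, pod argument, and target path are assumptions; it presumes `oc` is already logged in to the development cluster with the token described above):

```shell
# Hypothetical sketch: wait for the job pod to be Running, then copy files in.
wait_and_copy() {
  local pod="$1"; shift
  # Poll the pod phase until it reports Running
  until [ "$(oc get pod "$pod" -o jsonpath='{.status.phase}')" = "Running" ]; do
    sleep 5
  done
  # Copy each spack .yaml file / package directory into the pod
  for f in "$@"; do
    oc cp "$f" "$pod:/opt/app-root/src/$(basename "$f")"
  done
}
```

Usage might look like `wait_and_copy <build-pod-name> spack.yaml packages/`.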
**ToDo: The current build path needs to be automated with CI (e.g. GitLab runners)**
- For reference, the full list of Spack commands is available [here](https://spack.readthedocs.io/en/latest/command_index.html)
- We will need separate Spack environments: specs are tested (and packages built) in testing environments, and once tests pass, specs are released from testing to the production environment
- First of all, we need to create an appropriate build environment (on a dedicated VM, a container image, or a GitLab runner). The build environment must run the same OS as the Collaboratory base container image and fulfill all of Spack's prerequisites ( [1](https://ashki23.github.io/spack.html), [2](https://spack.readthedocs.io/en/latest/getting_started.html) )
- The Spack installation folder at build time must (currently) be:
```
/opt/app-root/src
```
to match the Spack installation directory on the current Collab base container image
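Setting this up could look like the following sketch (an assumption: it clones the official Spack repository directly into the required path; adjust if the Collab image lays Spack out differently):

```shell
# Hypothetical setup; SPACK_ROOT value is taken from this document.
setup_spack() {
  export SPACK_ROOT=/opt/app-root/src
  git clone https://github.com/spack/spack.git "$SPACK_ROOT"
  # Enable the spack command in the current shell
  . "$SPACK_ROOT/share/spack/setup-env.sh"
}
```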
- The build environment must provide a recent version of GCC. Install the most appropriate GCC version for the OS of the build environment; for CentOS 7 (the current OS of the Collaboratory base image), instructions to install **"devtoolset-9"** are available [here](https://linuxize.com/post/how-to-install-gcc-compiler-on-centos-7/).
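On CentOS 7, the devtoolset-9 installation could be sketched as below (assumes root privileges and follows the standard Software Collections route from the linked guide; the final step makes the compiler visible to Spack so that `%gcc@9.3.1` specs can resolve):

```shell
# Sketch for CentOS 7; requires root privileges.
install_devtoolset9() {
  yum install -y centos-release-scl
  yum install -y devtoolset-9
  # Register the new compiler with Spack inside the devtoolset environment
  scl enable devtoolset-9 "spack compiler find"
}
```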
- Then we can start building and installing tools with Spack:
```
spack install arbor %gcc@9.3.1
spack install neuron %gcc@9.3.1
spack install nest %gcc@9.3.1 +python
```
and perform tests for the installations:
- [Arbor: single cell model tutorial](https://docs.arbor-sim.org/en/stable/tutorial/single_cell_model.html)
- [NEURON: first steps with Python](https://neuron.yale.edu/neuron/static/docs/neuronpython/firststeps.html)
- [NEST: one neuron example](https://nest-simulator.readthedocs.io/en/nest-2.20.1/auto_examples/one_neuron.html)
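Before walking through the full tutorials, a minimal smoke test could confirm each simulator imports and responds (a hedged sketch; it assumes the packages above were installed successfully and uses each simulator's public Python API):

```shell
# Hypothetical smoke tests for the three installed simulators.
smoke_test() {
  spack load arbor neuron nest
  python -c "import arbor; print(arbor.__version__)"
  python -c "import neuron; neuron.h.Section(name='soma')"
  python -c "import nest; nest.Create('iaf_psc_alpha')"
}
```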
- Fix any shared-library errors that may appear (info [1](https://github.com/cdr/code-server/issues/766), [2](https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html), [3](https://serverkurma.com/linux/how-to-install-and-update-gcc-on-centos-7/))
## Delivery of the software binaries
**ToDo: This methodology will change; delivery will be implemented via GitLab CI and use of the shared NFS drive**
- After the build is complete, we zip the **"$SPACK_ROOT"** and **~/.spack** folders of the build environment and transfer them to CSCS Object Storage (in the future, this will be a shared NFS drive)
- The built artifacts can then be used from the running Collaboratory containers to load and run the available simulation tools
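The packaging and upload step above could be sketched as follows (the `swift` CLI and the container name "spack-builds" are assumptions; any Object Storage client would do):

```shell
# Hypothetical packaging step; archive names and container are placeholders.
package_and_upload() {
  tar czf spack-root.tar.gz -C "$SPACK_ROOT" .
  tar czf spack-dot.tar.gz -C "$HOME" .spack
  swift upload spack-builds spack-root.tar.gz spack-dot.tar.gz
}
```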
## Activating software in the Collaboratory Lab containers
- To load the pre-built simulation tools in the Collaboratory Lab containers, refer to the **Quickstart** at the beginning of this file (the Object Storage at CSCS is currently used to download the pre-built software, as a stop-gap measure until a shared NFS drive is available).
- This process will change in the future, as all simulation tools will be made available to the running Collaboratory containers from a shared NFS drive