ebrains-spack-builds

Build and distribute software tools with Spack for the EBRAINS Collaboratory Lab containers.

Quickstart for Notebooks

  • Open a terminal in a running Collaboratory Lab container and execute the following:
# update the MODULEPATH of the JupyterLab container to include the Spack arch modulefiles paths
module use /srv/test-build/spack/share/spack/modules/linux-centos7-broadwell/
module use /srv/test-build/spack/share/spack/modules/linux-centos7-x86_64/

# install the JupyterLab kernel created for the Spack tools
jupyter kernelspec install --user /srv/jupyterlab_kernels/int/release20210930/spack_python_kernel_release_20210930/
  • Then select File > New Launcher
  • Open a new Notebook with the "EBRAINS_release_20210930" kernel
  • Import the tools available with Spack:
import nest
import arbor
import neuron
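
To confirm that the kernel was registered before launching a notebook, you can list the installed kernels from the same terminal; this is an optional check, not part of the original steps:
# the newly installed kernel should appear in this listing
jupyter kernelspec list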

Quickstart for terminal

  • Open a terminal in a running Collaboratory Lab container and execute the following:
 git clone https://gitlab.ebrains.eu/akarmas/ebrains-spack-builds.git
 cd ebrains-spack-builds
 source ./load_sim_tools.sh
  • Then you can start Python and import the available tools
  • Currently installed:
    arbor
    neuron
    nest
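
As a quick sanity check after sourcing the load script, the following one-liner (run from the same terminal) should complete without errors for the tools listed above:
# import all currently installed simulation tools in one go
python -c "import arbor, nest, neuron; print('simulation tools loaded')"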

GitLab CI/CD environment variables

The following variables must be set up if they are missing, and re-configured whenever tokens expire.

OPENSHIFT_TOKEN: Token used to log in to the OpenShift cluster (with the "gitlab" service account)
OPENSHIFT_DEV_SERVER: The URL of the OpenShift development cluster, needed for deploying software in the lab-int environment
BUILD_ENV: The name of the environment in which to deploy the software of the next commit
OPERATION: The operation to perform on the Spack environment (one of: testing, create, update, delete)
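
As an illustration of how the first two variables are typically consumed, a login step in the pipeline could look roughly like the following; the exact commands are defined in the repository's .gitlab-ci.yml, so treat this only as a sketch:
# log in to the development cluster with the service-account token (sketch)
oc login "$OPENSHIFT_DEV_SERVER" --token="$OPENSHIFT_TOKEN"
# BUILD_ENV and OPERATION are then read by the build scripts to pick the target Spack environment and the action to perform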

Copy Spack .yaml files and packages to the OpenShift job pod that does the build

The GitLab runner copies the various files needed for the build to the OpenShift job pod.

  • It copies the {spack, repo}.yaml files, the create_JupyterLab_kernel.sh script, and the packages/ directory
  • The runner waits until the job's pod is running before it starts copying the files (a sketch of this step follows below)
  • The pod (built from the tc/ebrains-spack-build-env:latest image) waits until the necessary files have finished copying before it continues the build process
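
A minimal sketch of the copy step, assuming the pod name is available in a POD variable and using the generic oc wait / oc cp commands; the actual file list and destination paths are defined in the CI configuration:
# wait until the job's pod is ready to receive files
oc wait --for=condition=Ready pod/$POD --timeout=300s
# copy the build inputs into the pod (destination path is illustrative)
oc cp spack.yaml $POD:/srv/build/spack.yaml
oc cp repo.yaml $POD:/srv/build/repo.yaml
oc cp create_JupyterLab_kernel.sh $POD:/srv/build/create_JupyterLab_kernel.sh
oc cp packages/ $POD:/srv/build/packages/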

Building software binaries with Spack

  • The build process is powered by Spack, a multi-platform package manager that builds and installs multiple versions and configurations of software.
  • A Job (object) in OpenShift is responsible for the build process.
  • The GitLab runner starts a new Job running in an OpenShift pod that uses the container image developed in this repository, which holds all the Spack specifics needed for the build process. Any Spack configuration necessary for a successful build should be changed through the Spack configuration files found in this repository.
  • The OpenShift Job's pod mounts an NFS drive that is also mounted by all Collaboratory Lab containers, and it performs the entire build process with Spack on that NFS drive; as a result, all the installed software is readily available to the Collaboratory Lab containers (a minimal sketch of the Spack steps follows this list)
  • A schema of the build process can be found here
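
In essence, the job drives a Spack environment build on the shared NFS drive. The following is only a rough sketch under assumed paths and an assumed environment name, not the exact pipeline commands:
# make Spack available in the job pod (installation path is illustrative)
. /srv/test-build/spack/share/spack/setup-env.sh
# create and activate a target environment from the spack.yaml in this repository ("ebrains-env" is a placeholder name)
spack env create ebrains-env spack.yaml
spack env activate ebrains-env
# build everything declared in the environment and regenerate modulefiles
spack install
spack module tcl refresh -y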

Activating software in the Collaboratory Lab containers

  • Currently, to activate the pre-built simulation tools in the Collaboratory Lab containers, refer to the Quickstart sections at the beginning of this file
  • There are two options: i) using the simulation tools directly in notebooks, and ii) using the simulation tools from a terminal in a Collaboratory Lab container

ToDo: put the necessary activation commands in the startup script of a JupyterLab container spawned in OpenShift to hide all implementation details from the users
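
A possible shape for that startup-script addition, simply reusing the activation commands from the Quickstart above (paths and kernel location as documented in this file; the final script may differ):
# make the Spack-built modulefiles visible in the container
module use /srv/test-build/spack/share/spack/modules/linux-centos7-broadwell/
module use /srv/test-build/spack/share/spack/modules/linux-centos7-x86_64/
# register the pre-built kernel so users see it without any manual steps
jupyter kernelspec install --user /srv/jupyterlab_kernels/int/release20210930/spack_python_kernel_release_20210930/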