Unpin py-pybind11
@akarmas This should fix the conflict with arbor; however, we introduced the "pin" to work around a scipy error… let's see what happens.
Edited by Eric Müller
mentioned in commit 1222f345
Did not succeed.
py-scipy and py-matplotlib failed.
Pipeline here
Not sure. Do you know, @emuller?
I believe so; this should be a side-effect of ed52e858, which basically "unpins" py-torch's dependency on py-pybind11, i.e.:

--- spack/var/spack/repos/builtin/packages/py-torch/package.py 2022-05-04 17:52:46.765756617 +0200
+++ packages/py-torch/package.py 2022-07-11 18:33:23.899361894 +0200
@@ -101,7 +101,7 @@ class PyTorch(PythonPackage, CudaPackage
     depends_on('py-future', when='@1.1: ^python@:2', type=('build', 'run'))
     depends_on('py-pyyaml', type=('build', 'run'))
     depends_on('py-typing', when='^python@:3.4', type=('build', 'run'))
-    depends_on('py-pybind11@2.6.2', when='@1.8:', type=('build', 'link', 'run'))
+    depends_on('py-pybind11@2.6.2:', when='@1.8:', type=('build', 'link', 'run'))
     depends_on('py-pybind11@2.3.0', when='@1.1:1.7', type=('build', 'link', 'run'))
     depends_on('py-pybind11@2.2.4', when='@:1.0', type=('build', 'link', 'run'))
     depends_on('py-dataclasses', when='@1.7: ^python@3.6', type=('build', 'run'))
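For reference, the effect of that one-character change can be checked with Spack's own Python API; a minimal sketch (run via spack python; py-pybind11@2.9.2 is just a stand-in for whatever newer version another package such as arbor might request):

# Sketch only: compare the old exact pin with the new open version range.
from spack.spec import Spec

pinned   = Spec('py-pybind11@2.6.2')    # old constraint: exactly 2.6.2
unpinned = Spec('py-pybind11@2.6.2:')   # new constraint: 2.6.2 or newer
newer    = Spec('py-pybind11@2.9.2')    # hypothetical newer version requested elsewhere

print(newer.satisfies(pinned))     # False -> versions cannot be reconciled, concretization conflict
print(newer.satisfies(unpinned))   # True  -> the open range admits the newer version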
I guess so… we have those arbor packages installed (on lab-int):

jovyan@jupyterhub-nb-USERNAME:/opt/app-root/src$ /srv/test-build/spack/bin/spack find -Lv arbor
==> 10 installed packages
-- linux-centos7-x86_64 / gcc@10.3.0 ----------------------------
j3v6fg3exrqhfblzbl2p6nolhsa3mukl arbor@0.5.2~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo cuda_arch=none
u3bzjzfk5cgqrmwldn4mas2rncfomzzd arbor@0.5.2~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo cuda_arch=none
vdcabrcspln5nbtojxk5wyp55is42mbu arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
4nhs2gkitb72lheuxst32lgeasmxwpcv arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml~python~vectorize build_type=RelWithDebInfo cuda_arch=none
74ympck7o6fajr5v335zp2oahbsghvvt arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo cuda_arch=none
ttkhvpevd3wdkl5vddgvn7a3hmbdyxk2 arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
ljuh23i77f6qggxh77yxypz6femnfjov arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
y4e2jzuhtxkc7mzvheoi6auamhem7p4f arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
qei3nktdjpbbivnzhy7myono3p5xnumg arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
r2fcprd3kc7i4r6fsk5ss3fklqreujsq arbor@0.6~assertions~cuda~doc~ipo+mpi+neuroml+python~vectorize build_type=RelWithDebInfo
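If it helps, here is a rough, untested sketch (assuming Spack's Python API, run via spack python against that installation) that reports which py-pybind11, if any, each installed arbor spec was concretized against, to judge whether those installs would need a rebuild after the unpin:

import spack.store

# Walk the installed arbor specs in the local Spack database.
for spec in spack.store.db.query('arbor'):
    if 'py-pybind11' in spec:            # is py-pybind11 part of this spec's DAG?
        dep = spec['py-pybind11']
        print(spec.short_spec, '->', dep.name, dep.version)
    else:
        print(spec.short_spec, '-> no py-pybind11 in its DAG')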
Maybe @elmath can say something about when/if the last "full" (re)build happened…
Edited by Eric Müller