    Runtime distributed context (#485) · 5fde0b00
    Benjamin Cumming authored and Sam Yates committed
    Move the choice of distributed communication model from a compile-time decision (the old `arb::communication::communication_policy` type) to a run-time decision.
    
    * Add `arb::distributed_context` class that provides the required interface for distributed communication implementations, using type-erasure to provide value semantics.
    * Add two implementations for the distributed context: `arb::mpi_context` and `arb::local_context`.
    * Allow distribution over a user-supplied MPI communicator by providing it as an argument to `arb::mpi_context`.
    * Add `mpi_error` exception type to wrap MPI errors.
    * Move contents of the `arb::communication` namespace to the `arb` namespace.
    * Add preprocessor for-each utility `ARB_PP_FOREACH`.
    * Rewrite all examples and tests to use the new distributed context interface.
    * Add documentation for distributed context class and semantics, and update documentation for load balancer and simulation classes accordingly.
    
    Fixes #472
index.rst 1.77 KiB

Arbor

https://travis-ci.org/eth-cscs/arbor.svg?branch=master

What is Arbor?

Arbor is a high-performance library for computational neuroscience simulations.

The development team is based at high-performance computing (HPC) centers:

  • The Swiss National Supercomputing Centre (CSCS), Jülich, and BSC, collaborating in work package 7.5.4 of the HBP.
  • The aim is to prepare neuroscience users for new HPC architectures.

Arbor is designed from the ground up for many-core architectures:

  • Written in C++11 and CUDA;
  • Distributed parallelism using MPI;
  • Multithreading with TBB and C++11 threads;
  • Open source and open development;
  • Sound development practices: unit testing, continuous integration, and validation.

Features

We are actively developing Arbor, improving performance and adding features. Some key features include:

  • Optimized back ends for CUDA, KNL and AVX2 intrinsics.
  • Asynchronous spike exchange that overlaps compute and communication.
  • Efficient sampling of voltage and current on all back ends.
  • Efficient implementation of all features on GPU.
  • Reporting of memory and energy consumption (when available on platform).
  • An API for adding new cell types, e.g. LIF and Poisson spike generators.
  • Validation tests against numeric/analytic models and NEURON.