Commit fa7ea458 authored by akuesters's avatar akuesters Committed by Benjamin Cumming
Python Documentation PR (#687)

Update documentation for Python.

    splits the conceptual model ideas from the C++ docs into their own section
    has C++ and Python docs for recipes, domain decomposition, etc.

fixes #667

Added the following documentation (structure):

GETTING STARTED:

    Installing Arbor/Requirements/Optional Requirements/Python
    Installing Arbor/Building and Installing Arbor/Python Front End

MODEL BASICS:

    Overview
    Common Types
    Recipes
    Domain Decomposition
    Simulations

PYTHON:

    Overview
    Common Types
    Recipes
    Domain Decomposition
    Simulations

DEVELOPERS:

    Python Profiler
    Python Unit Testing

GETTING STARTED has two new sections: optional Python requirements, and how to build the Python front end.

MODEL BASICS describes Arbor's concepts in general, independent of programming language; general conceptual information was moved here from the C++ API docs or newly added.

PYTHON describes Arbor's Python front end with the same structure as MODEL BASICS and the C++ API (needs updating as features are added or changed in new Python PRs).

The DEVELOPERS section has two new subsections on meter management and unit testing with the Python front end.

Further, the existing documentation received some corrections (for obvious errors, e.g. duplicated text and unfinished sentences) and cross-references between sections.
parent fa549238
Showing with 1212 additions and 13 deletions
File added
.. _cppcommon:
Common Types
============
......
......@@ -101,7 +101,7 @@ Hardware
.. cpp:member:: int gpu_id
The identifier of the GPU to use.
The gpu id corresponds to the ``int device`` parameter used by CUDA API calls
to identify gpu devices.
Set to -1 to indicate that no GPU device is to be used.
......@@ -117,7 +117,6 @@ Execution Context
The :cpp:class:`proc_allocation` class enumerates the hardware resources on the local hardware
to use for a simulation.
A :cpp:class:`arb::context` ...
.. cpp:namespace:: arb
......@@ -228,7 +227,6 @@ Documentation for the data structures used to describe domain decompositions.
.. Note::
Setting the GPU back end is only meaningful if the
:cpp:class:`cell_group` type supports the GPU backend.
If
.. cpp:class:: domain_decomposition
......
.. _cppoverview:
Overview
=========
......
.. _cpprecipe:
Recipes
===============
An Arbor **recipe** is a description of a model. The recipe is queried during the model
building phase to provide cell information, such as:
* the number of cells in the model;
* the type of a cell;
* a description of a cell;
* incoming network connections on a cell.
The :cpp:class:`arb::recipe` class documentation is below.
Why Recipes?
......@@ -139,7 +133,7 @@ Class Documentation
The type used to describe a cell depends on the kind of the cell.
The interfaces for querying the kind and description of a cell are
separate to allow the cell type to be provided without building
a full cell description, which can be very expensive.
**Optional Member Functions**
......
......@@ -52,6 +52,7 @@ Class Documentation
Simulations take the following inputs:
* The **constructor** takes:
* an :cpp:class:`arb::recipe` that describes the model;
* an :cpp:class:`arb::domain_decomposition` that describes how the
cells in the model are assigned to hardware resources;
......
......@@ -7,7 +7,7 @@ Arbor
What is Arbor?
--------------
Arbor is a high-performance library for computational neuroscience simulations.
The development team is from high-performance computing (HPC) centers:
......@@ -42,6 +42,24 @@ Some key features include:
install
.. toctree::
:caption: Arbor Models:
model_intro
model_common
model_recipe
model_domdec
model_simulation
.. toctree::
:caption: Python:
py_overview
py_common
py_recipe
py_domdec
py_simulation
.. toctree::
:caption: C++ API:
......@@ -57,7 +75,9 @@ Some key features include:
library
simd_api
profiler
py_profiler
sampling_api
cpp_distributed_context
cpp_dry_run
py_unittest
.. _installarbor:
Installing Arbor
################
......@@ -113,6 +115,13 @@ Arbor uses MPI to run on HPC cluster systems.
Arbor has been tested on MVAPICH2, OpenMPI, Cray MPI, and IBM MPI.
More information on building with MPI is in the `HPC cluster section <cluster_>`_.
Python
~~~~~~
Arbor has a Python front end, which requires Python 3.6.
To use MPI in combination with the Python front end, `mpi4py <https://mpi4py.readthedocs.io/en/stable/install.html#>`_ is required as a Python site-package.
Documentation
~~~~~~~~~~~~~~
......@@ -325,9 +334,42 @@ example:
export CPATH="/opt/cuda/include:$CPATH"
cmake -DARB_WITH_GPU=ON
.. Note::
Arbor supports and has been tested on the Kepler (K20 & K80), Pascal (P100) and Volta (V100) GPUs.
Python Front End
----------------
Arbor can be used with a python front end which is enabled by setting the
CMake ``ARB_WITH_PYTHON`` option:
.. code-block:: bash
cmake .. -DARB_WITH_PYTHON=ON
By default ``ARB_WITH_PYTHON=OFF``. When this option is turned on, a python module called :py:mod:`arbor` is built.
Depending on the configuration of the system where Arbor is being built, the
C++ compiler may not be able to find ``mpi4py`` when Arbor is configured with both Python (``-DARB_WITH_PYTHON=ON``) and MPI (``-DARB_WITH_MPI=ON``).
The easiest workaround is to add the path to the include directory containing the header to the
``CPATH`` environment variable before configuring and building Arbor, for
example:
.. code-block:: bash
# search for the path to python's site-package mpi4py
for p in `python3 -c 'import sys; print("\n".join(sys.path))'`; do echo ===== $p; ls $p | grep mpi4py; done
===== /path/to/python3/site-packages
mpi4py
# set CPATH and run cmake
export CPATH="/path/to/python3/site-packages/mpi4py/include/:$CPATH"
cmake .. -DARB_WITH_PYTHON=ON -DARB_WITH_MPI=ON
.. _install:
Installation
......
.. _libref:
Library Reference
#################
......
.. _modelcommon:
Common Types
=================
The basic unit of abstraction in an Arbor model is a cell.
A cell represents the smallest model that can be simulated.
Cells interact with each other only via spike exchange.
Cells can be of various types, admitting different representations and implementations.
A *cell group* represents a collection of cells of the same type together with an implementation of their simulation.
Arbor currently supports specialized leaky integrate-and-fire cells and cells representing artificial spike sources, in addition to multi-compartment neurons.
Since the neuron model and the associated workflow are formulated from a cell-centered perspective, cell identifiers and indexes are used throughout.
.. table:: Cell identifiers and indexes
======================== ====================== ===========================================================
Identifier/ Index        Type                   Description
======================== ====================== ===========================================================
gid integer The global identifier of the cell associated with the item.
index unsigned integer The index of the item in a cell-local collection.
cell member tuple (gid, index) The global identification of a cell-local item
associated with a unique cell, identified by the member `gid`,
and identifying an item within a cell-local collection by the member `index`.
cell size unsigned integer Counting collections of cells.
cell local size unsigned integer Counting cell-local data.
cell kind                enumerator             The identification of the cell type/kind,
used by the model to group equal kinds in the same cell group:
* Cell with morphology described by branching 1D cable segments,
* Leaky-integrate and fire neuron,
* Regular spiking source,
* Spike source from values inserted via description.
======================== ====================== ===========================================================
Example
An example of the cell member identifier is uniquely identifying a synapse in the model.
Each synapse has a post-synaptic cell (`gid`), and an `index` into the set of synapses on the post-synaptic cell.
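The `(gid, index)` scheme can be sketched in plain Python (an illustration of the concept only, not Arbor's API; the names are made up):

```python
from collections import namedtuple

# A cell-local item identifier: a global cell id (gid) plus an index
# into a cell-local collection (here: the synapses on that cell).
CellMember = namedtuple("CellMember", ["gid", "index"])

# The third synapse (index 2) on the post-synaptic cell with gid 42.
synapse = CellMember(gid=42, index=2)

# Tuples compare lexicographically: first by gid, then by index.
assert CellMember(1, 5) < CellMember(2, 0)
```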
Further, probes are specified to interact with the model; the item or value that is subjected to a probe is specific to a particular cell type.
Probes are specified in the recipe that is used to initialize a model, by cell `gid` and probe index.
The probe's address is cell-type specific location information, specific to the cell kind of `gid`.
C++ specific common types are explained in detail in :ref:`cppcommon` and in :ref:`pycommon` for Arbor's python front end.
.. _modeldomdec:
Domain Decomposition
====================
A *domain decomposition* describes the distribution of the model over the available computational resources. The description partitions the cells in the model as follows:
* group the cells into cell groups of the same kind of cell;
* assign each cell group to either a CPU core or GPU on a specific MPI rank.
The number of cells in each cell group depends on different factors, including the type of the cell, and whether the cell group will run on a CPU core or the GPU. The domain decomposition is solely responsible for describing the distribution of cells across cell groups and domains.
Load Balancers
--------------
A *load balancer* generates the domain decomposition using the
model recipe and a description of the available computational resources on which the model will run, as described by an execution context.
Currently Arbor provides one load balancer; more will be added over time.
Hardware
--------
*Local resources* are locally available computational resources, specifically the number of hardware threads and the number of GPUs.
An *allocation* enumerates the computational resources to be used for a simulation, typically a subset of the resources available on a physical hardware node.
Execution Context
-----------------
An *execution context* contains the local thread pool, and optionally the GPU state and MPI communicator, if available. Users of the library configure contexts, which are passed to Arbor methods and types.
Detailed documentations can be found in :ref:`cppdomdec` for C++ and in :ref:`pydomdec` for python.
.. _modelintro:
Overview
=========
Arbor's design enables scalability through abstraction.
To this end, Arbor makes a distinction between the **description** of a model and the
**execution** of a model:
a *recipe* describes a model, and a *simulation* is an executable instantiation of a model.
To be able to simulate a model, three basic steps need to be considered:
* first, describe the neuron model by defining a recipe;
* then, get the local computational resources, the execution context, and partition the load balance;
* finally, execute the model by initiating and running the simulation.
.. topic:: Concepts
:ref:`modelrecipe` represents a set of neuron constructions and connections with *mechanisms* specifying ion channel and synapse dynamics in a cell-oriented manner. This has the advantage that cell data can be initialized in parallel.
A cell represents the smallest unit of computation and forms the smallest unit of work distributed across processes. Different :ref:`modelcommon` can be utilized.
:ref:`modelsimulation` manage the instantiation of the model and the scheduling of spike exchange as well as the integration for each cell group. A cell group represents a collection of cells of the same type computed together on the GPU or CPU. The partitioning into cell groups is provided by :ref:`modeldomdec` which describes the distribution of the model over the locally available computational resources.
In order to visualise detected spikes a spike recorder can be used, and to analyse Arbor's performance a meter manager is available.
.. _modelrecipe:
Recipes
===============
An Arbor *recipe* is a description of a model. The recipe is queried during the model
building phase to provide cell information, such as:
* the number of cells in the model;
* the type of a cell;
* a description of a cell, e.g. with soma, synapses, detectors, stimuli;
and optionally, e.g.:
* the number of spike targets;
* the number of spike sources;
* incoming network connections from other cells terminating on a cell.
General Best Practices
----------------------
.. topic:: Think of the cells
When formulating a model, think cell-first, and try to formulate the model and
the associated workflow from a cell-centered perspective. If this isn't possible,
please contact the developers, because we would like to develop tools that help
make this simpler.
.. topic:: Be reproducible
Arbor is designed to give reproducible results when the same model is run on a
different number of MPI ranks or threads, or on different hardware (e.g. GPUs).
This only holds when a recipe provides a reproducible model description, which
can be a challenge when a description uses random numbers, e.g. to pick incoming
connections to a cell from a random subset of a cell population.
To get a reproducible model, use the cell global identifier `gid` to seed random number generators.
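For instance, a recipe that picks random incoming connections can seed a private generator with the target cell's `gid`, so the same connections are chosen regardless of rank count or cell distribution. A plain-Python sketch (the function name and parameters are illustrative, not part of the Arbor API):

```python
import random

def pick_sources(gid, num_cells, fan_in):
    # Seed a private generator with the target gid: the same incoming
    # connections are chosen no matter how cells are distributed.
    rng = random.Random(gid)
    candidates = [g for g in range(num_cells) if g != gid]
    return rng.sample(candidates, fan_in)

# The same gid always yields the same incoming connections.
assert pick_sources(7, 100, 5) == pick_sources(7, 100, 5)
```

Using a single shared generator instead would make the result depend on the order in which cells are visited, and hence on the domain decomposition.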
Mechanisms
----------------------
The description of multi-compartment cells also includes the specification of ion channel and synapse dynamics.
In the recipe, these specifications are called *mechanisms*.
Implementations of mechanisms are either hand-coded, or a translator (modcc) is used to compile a
subset of NEURON's mechanism specification language, NMODL.
Examples
Common examples are the *passive/leaky integrate-and-fire* model, the *Hodgkin-Huxley* mechanism, the *(double-)exponential synapse* model, or the *sodium current* model for an axon.
The detailed documentations and specific best practices for C++ recipes can be found in :ref:`cpprecipe` and in :ref:`pyrecipe` covering python recipes.
.. _modelsimulation:
Simulations
===========
A simulation is the executable form of a model and is used to interact with and monitor the model state. In the simulation the neuron model is initiated and the spike exchange and the integration for each cell group are scheduled.
From recipe to simulation
-------------------------
To build a simulation the following are needed:
* A recipe that describes the cells and connections in the model.
* A context used to execute the simulation.
The workflow to build a simulation is to first generate a domain decomposition that describes the distribution of the model over the local and distributed hardware resources (see :ref:`modeldomdec`), then build the simulation from the recipe, the domain decomposition and the execution context. Optionally experimental inputs that can change between model runs, such as external spike trains, can be injected.
The recipe describes the model, the domain decomposition describes how the cells in the model are assigned to hardware resources and the context is used to execute the simulation.
Simulation execution and interaction
------------------------------------
Simulations provide an interface for executing and interacting with the model:
* The simulation is executed (*run*) by advancing the model state from the current simulation time to another, with a maximum time step size.
* The model state can be *reset* to its initial state before the simulation was started.
* *Sampling* of the simulation state can be performed during execution with samplers and probes (e.g. compartment voltage and current), as can spike output, with the total number of spikes generated since either construction or reset.
Detailed documentation can be found in C++ API :ref:`cppsimulation` and :ref:`pysimulation` for Arbor's python frontend.
.. _pycommon:
Common Types
=====================
Cell Identifiers and Indexes
----------------------------
The types defined below are used as identifiers for cells and members of cell-local collections.
.. module:: arbor
.. class:: cell_member
.. function:: cell_member()
Construct a cell member with default values :attr:`gid = 0` and :attr:`index = 0`.
.. function:: cell_member(gid, index)
Construct a cell member with parameters :attr:`gid` and :attr:`index` for global identification of an item in a cell-local collection.
Items of type :class:`cell_member` must:
* be associated with a unique cell, identified by the member :attr:`gid`;
* identify an item within a cell-local collection by the member :attr:`index`.
An example is uniquely identifying a synapse in the model.
Each synapse has a post-synaptic cell (with :attr:`gid`), and an :attr:`index` into the set of synapses on the post-synaptic cell.
Lexicographically ordered by :attr:`gid`, then :attr:`index`.
.. attribute:: gid
The global identifier of the cell.
.. attribute:: index
The cell-local index of the item.
Local indices for items within a particular cell-local collection should be zero-based and numbered contiguously.
An example of a cell member construction reads as follows:
.. container:: example-code
.. code-block:: python
import arbor
# construct
cmem1 = arbor.cell_member()
cmem2 = arbor.cell_member(0, 0)
# set gid and index
cmem1.gid = 1
cmem1.index = 1
.. class:: cell_kind
Identify the cell type/kind, used by the model to group cells of the same kind in the same cell group (enumeration).
.. attribute:: cable1d
A cell with morphology described by branching 1D cable segments.
.. attribute:: lif
A leaky-integrate and fire neuron.
.. attribute:: spike_source
A cell that generates spikes at a user-supplied sequence of time points.
.. attribute:: benchmark
A proxy cell used for benchmarking.
An example of a cell construction of :class:`cell_kind.cable1d` reads as follows:
.. container:: example-code
.. code-block:: python
import arbor
kind = arbor.cell_kind.cable1d
Probes
------
Yet to be implemented.
.. _pydomdec:
Domain Decomposition
====================
Decomposition
-------------
As defined in :ref:`modeldomdec` a domain decomposition is a description of the distribution of the model over the available computational resources.
Therefore, the following data structures are used to describe domain decompositions.
.. currentmodule:: arbor
.. class:: backend_kind
Indicate which hardware backend to use for running a :class:`cell_group` (enumeration).
.. attribute:: multicore
Use the multicore backend.
.. attribute:: gpu
Use the GPU back end.
.. Note::
Setting the GPU back end is only meaningful if the
:class:`cell_group` type supports the GPU backend.
.. class:: domain_decomposition
Describe a domain decomposition. The class is solely responsible for describing the
distribution of cells across cell groups and domains.
It holds cell group descriptions (:attr:`groups`) for cells assigned to
the local domain, and a helper function (:func:`gid_domain`) used to
look up which domain a cell has been assigned to.
The :class:`domain_decomposition` object also has meta-data about the
number of cells in the global model, and the number of domains over which
the model is distributed.
.. Note::
The domain decomposition represents a division of **all** of the cells in
the model into non-overlapping sets, with one set of cells assigned to
each domain.
A domain decomposition is generated either by a load balancer or is
directly specified by the user, and it is a requirement that the
decomposition is correct:
* Every cell in the model appears in one and only one of the cell :attr:`groups` on one and only one local :class:`domain_decomposition` object.
* :attr:`num_local_cells` is the sum of the number of cells in each of the :attr:`groups`.
* The sum of :attr:`num_local_cells` over all domains matches :attr:`num_global_cells`.
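These requirements can be checked mechanically. A plain-Python sketch of such a validation, over per-domain lists of cell-group gid lists (illustrative only, not part of the Arbor API):

```python
def check_decomposition(domains, num_global_cells):
    # domains[d] is the list of cell groups on domain d,
    # each group being a list of gids.
    seen = set()
    total = 0
    for groups in domains:
        num_local_cells = sum(len(group) for group in groups)
        total += num_local_cells
        for group in groups:
            for gid in group:
                # Every cell appears in exactly one group on one domain.
                assert gid not in seen, "cell appears in two groups"
                seen.add(gid)
    # The local counts must sum to the global cell count.
    assert total == num_global_cells

# Two domains, four cells, no overlap: a valid decomposition.
check_decomposition([[[0, 1]], [[2], [3]]], 4)
```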
.. function:: gid_domain(gid)
Query the domain id that a cell is assigned to (using global identifier :attr:`arbor.cell_member.gid`).
.. attribute:: num_domains
The number of domains that the model is distributed over.
.. attribute:: domain_id
The index of the local domain.
Always 0 for non-distributed models, and corresponds to the MPI rank
for distributed runs.
.. attribute:: num_local_cells
The total number of cells in the local domain.
.. attribute:: num_global_cells
The total number of cells in the global model
(sum of :attr:`num_local_cells` over all domains).
.. attribute:: groups
The description of the cell groups on the local domain.
See :class:`group_description`.
.. class:: group_description
The description of a set of cells of the same kind that are grouped together in a cell group in an :class:`arbor.simulation`.
.. function:: group_description(kind, gids, backend)
Construct a group description with parameters :attr:`kind`, :attr:`gids` and :attr:`backend`.
.. attribute:: kind
The kind of cell in the group.
.. attribute:: gids
The (list of) gids of the cells in the cell group, **sorted in ascending order**.
.. attribute:: backend
The back end on which the cell group is to run.
Load Balancers
--------------
Load balancing generates a :class:`domain_decomposition` given an :class:`arbor.recipe`
and a description of the hardware on which the model will run. Currently Arbor provides
one load balancer, :func:`partition_load_balance`, and more will be added over time.
If the model is distributed with MPI, the partitioning algorithm for cells is
distributed with MPI communication. The returned :class:`domain_decomposition`
describes the cell groups on the local MPI rank.
.. Note::
The :class:`domain_decomposition` type is simple and
independent of any load balancing algorithm, so users can supply their
own domain decomposition without using one of the built-in load balancers.
This is useful for cases where the provided load balancers are inadequate,
and when the user has specific insight into running their model on the
target computer.
.. function:: partition_load_balance(recipe, context)
Construct a :class:`domain_decomposition` that distributes the cells
in the model described by an :class:`arbor.recipe` over the distributed and local hardware
resources described by a :class:`context`.
The algorithm counts the number of each cell type in the global model, then
partitions the cells of each type equally over the available nodes.
If a GPU is available, and if the cell type can be run on the GPU, the
cells on each node are put into one large group to maximise the amount of fine
grained parallelism in the cell group.
Otherwise, cells are grouped into small groups that fit in cache, and can be
distributed over the available cores.
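The grouping scheme described above can be sketched in plain Python (an illustration only, not Arbor's implementation; all names and the group size are made up):

```python
def partition(kinds, num_domains, gpu_kinds=(), group_size=4):
    """Split the gids of each cell kind evenly over domains; a
    GPU-capable kind forms one large group per domain, other kinds
    are chunked into small cache-friendly groups."""
    domains = [[] for _ in range(num_domains)]
    for kind in sorted(set(kinds)):
        gids = [g for g, k in enumerate(kinds) if k == kind]
        chunk = -(-len(gids) // num_domains)  # ceiling division
        for d in range(num_domains):
            local = gids[d * chunk:(d + 1) * chunk]
            if not local:
                continue
            if kind in gpu_kinds:
                domains[d].append(local)  # one large group
            else:
                domains[d] += [local[i:i + group_size]
                               for i in range(0, len(local), group_size)]
    return domains

# 10 "cable" cells (GPU-capable) and 4 "lif" cells over 2 domains.
doms = partition(["cable"] * 10 + ["lif"] * 4, num_domains=2,
                 gpu_kinds=("cable",))
```

Note how each domain ends up with one large "cable" group and small "lif" groups, mirroring the trade-off between fine-grained GPU parallelism and cache-sized CPU groups.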
.. Note::
The partitioning assumes that all cells of the same kind have equal
computational cost, hence it may not produce a balanced partition for
models with cells that have a large variance in computational costs.
Hardware
--------
.. class:: proc_allocation
Enumerate the computational resources to be used for a simulation, typically a
subset of the resources available on a physical hardware node.
.. container:: example-code
.. code-block:: python
# Default construction uses all detected cores/threads, and the first GPU, if available.
import arbor
alloc = arbor.proc_allocation()
# Remove any GPU from the resource description.
alloc.gpu_id = -1
.. function:: proc_allocation()
Construct an allocation by setting the number of threads to the number available locally for execution, and
choosing either the first available GPU, or no GPU if none are available.
.. function:: proc_allocation(threads, gpu_id)
Construct an allocation by setting the number of threads to :attr:`threads` and selecting the GPU with :attr:`gpu_id`.
.. attribute:: threads
The number of CPU threads available locally for execution.
.. attribute:: gpu_id
The identifier of the GPU to use.
The :attr:`gpu_id` corresponds to the ``int device`` parameter used by CUDA API calls
to identify gpu devices.
Set to -1 to indicate that no GPU device is to be used.
See ``cudaSetDevice`` and ``cudaDeviceGetAttribute`` provided by the
`CUDA API <https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__DEVICE.html>`_.
.. function:: has_gpu()
Query (with True/False) whether a GPU is selected (i.e. whether :attr:`gpu_id` is not ``-1``).
Execution Context
-----------------
The :class:`proc_allocation` class enumerates the hardware resources on the local hardware
to use for a simulation.
.. class:: context
A :class:`context` is a handle for the interfaces to the hardware resources used in a simulation.
It contains the local thread pool, and optionally the GPU state
and MPI communicator, if available. Users of the library do not directly use the functionality
provided by :class:`context`, instead they configure contexts, which are passed to
Arbor methods and types.
.. function:: context()
Construct the (default) local context that uses all detected threads and a GPU if any are available.
.. function:: context(proc_allocation)
Construct a local context that uses the local resources described by :class:`proc_allocation`.
.. function:: context(proc_allocation, mpi_comm)
Construct a context that uses the local resources described by :class:`proc_allocation`, and
uses an MPI communicator (see :class:`arbor.mpi_comm`, to be documented) for distributed calculation.
.. function:: context(threads, gpu)
Construct a context that uses a set number of :attr:`threads` and gpu id :attr:`gpu`.
.. attribute:: threads
The number of threads available locally for execution (default: 1).
.. attribute:: gpu
The index of the GPU to use (default: none for no GPU).
.. function:: context(threads, gpu, mpi)
Construct a context that uses a set number of :attr:`threads`, gpu id :attr:`gpu`, and MPI communicator :attr:`mpi`.
.. attribute:: threads
The number of threads available locally for execution (default: 1).
.. attribute:: gpu
The index of the GPU to use (default: none for no GPU).
.. attribute:: mpi
An MPI communicator (see :class:`arbor.mpi_comm`, to be documented; default: none for no MPI).
.. attribute:: has_mpi
Query whether the context uses MPI for distributed communication.
.. attribute:: has_gpu
Query whether the context has a GPU.
.. attribute:: threads
The number of threads available locally for execution.
.. attribute:: ranks
The number of distributed domains (equivalent to the number of MPI ranks).
.. attribute:: rank
The numeric id of the local domain (equivalent to MPI rank).
Here are some examples of how to create a :class:`context`:
.. container:: example-code
.. code-block:: python
import arbor
# Construct a non-distributed context that uses all detected available resources.
context = arbor.context()
# Construct a context that:
# * does not use a GPU, regardless of whether one is available;
# * uses 8 threads in its thread pool.
alloc = arbor.proc_allocation(8, -1)
context = arbor.context(alloc)
# Construct a context that:
# * uses all available local hardware resources;
# * uses the standard MPI communicator MPI_COMM_WORLD for distributed computation.
alloc = arbor.proc_allocation() # defaults to all detected local resources
comm = arbor.mpi_comm()
context = arbor.context(alloc, comm)
.. _pyoverview:
Overview
=========
This section describes the usage of Arbor's Python front end :py:mod:`arbor`, with examples and detailed descriptions of its features.
The python front end is the main interface through which Arbor is used.
.. _prerequisites:
Prerequisites
~~~~~~~~~~~~~
Once Arbor is built in the folder ``path/to/arbor/build`` (and/or installed to ``path/to/arbor/install``; see the :ref:`installarbor` documentation), Python needs to be set up by setting
.. code-block:: bash
export PYTHONPATH="path/to/arbor/build/lib:$PYTHONPATH"
or, in case of installation
.. code-block:: bash
export PYTHONPATH="path/to/arbor/install/lib/python3/site-packages:$PYTHONPATH"
With this setup, Arbor's python module :py:mod:`arbor` can be imported with python3 via
>>> import arbor
.. _simsteps:
Simulation steps
~~~~~~~~~~~~~~~~
Then, following the :ref:`modelsimulation` description, Arbor's Python module :py:mod:`arbor` can be used to
* first, **describe** the neuron model by defining a recipe;
* then, get the local **resources**, the **execution context**, and partition the **load balance**;
* finally, **execute** the model by initiating and running the simulation.
In order to visualise the results a **spike recorder** can be used, and to analyse Arbor's performance a **meter manager** is available.
These steps are described and examples are given in the next subsections :ref:`pycommon`, :ref:`pyrecipe`, :ref:`pydomdec` and :ref:`pysimulation`.
.. note::
Detailed information on Arbor's python features can be obtained with the ``help`` function, e.g.
>>> help(arbor.recipe)
Python Profiler
===============
Arbor's Python module :py:mod:`arbor` has a profiler for fine-grained timing and memory-consumption measurements of regions of interest in the code.
Instrumenting Code
------------------
Developers manually instrument the regions to profile.
This allows the developer to only profile the parts of the code that are of interest, and choose the appropriate granularity for profiling different regions.
Once a region of code is marked for the profiler, the application will track the total time spent in the region, and how much memory (and, if available, energy) is consumed.
Marking Regions
~~~~~~~~~~~~~~~
To measure time and memory (and energy) consumption, Arbor's meter manager can be used from Python.
First the meter manager needs to be initiated, then the metering started, and checkpoints set wherever the manager should report the meters.
Measurement covers the interval from the start to the first checkpoint, and thereafter the intervals between successive checkpoints.
Checkpoints are labelled by a string describing the process being measured.
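The start/checkpoint pattern can be mimicked with plain Python wall-clock timers (a conceptual stand-in only; the actual meter manager also records memory and, where available, energy):

```python
import time

class Meters:
    """Records the wall-clock time between successive checkpoints."""
    def start(self):
        self._t0 = time.perf_counter()
        self.checkpoints = []

    def checkpoint(self, name):
        # Store the elapsed time since start (or the last checkpoint),
        # labelled by a string describing the measured region.
        now = time.perf_counter()
        self.checkpoints.append((name, now - self._t0))
        self._t0 = now

meters = Meters()
meters.start()
sum(i * i for i in range(10000))   # region of interest
meters.checkpoint('compute')
```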
Running the Profiler
~~~~~~~~~~~~~~~~~~~~~
The profiler does not need to be started or stopped by the user.
It needs to be initialized before entering any profiling region.
It is initialized using the information provided by the execution context.
At any point a summary of profiler region times and consumption can be obtained.
For example, the following will record and summarize the total time and memory spent:
.. container:: example-code
.. code-block:: python
import arbor
context = arbor.context()
meter_manager = arbor.meter_manager()
meter_manager.start(context)
n_cells = 100
recipe = my_recipe(n_cells)
meter_manager.checkpoint('recipe create', context)
decomp = arbor.partition_load_balance(recipe, context)
meter_manager.checkpoint('load balance', context)
sim = arbor.simulation(recipe, decomp, context)
meter_manager.checkpoint('simulation init', context)
tSim = 2000
dt = 0.025
sim.run(tSim, dt)
meter_manager.checkpoint('simulation run', context)
print(arbor.make_meter_report(meter_manager, context))
Profiler Output
------------------
The ``meter_report`` holds a summary of the accumulated meters.
Calling ``make_meter_report`` generates a profile summary, which can be printed using ``print``.
The example above produces output like:
>>> ---- meters -------------------------------------------------------------------------------
>>> meter time(s) memory(MB)
>>> -------------------------------------------------------------------------------------------
>>> recipe create 0.000 0.001
>>> load balance 0.000 0.009
>>> simulation init 0.005 0.707
>>> simulation run 3.357 0.028
For each region there are up to three values reported:
.. table::
:widths: 20,50
============= =========================================================================
Value Definition
============= =========================================================================
time (s) The total accumulated time (in seconds) spent in the region.
memory (MB)   The total memory consumption (in megabytes) in the region.
energy (kJ)   The total energy consumption (in kilojoules) in the region (if available).
============= =========================================================================
.. _pyrecipe:
Recipes
=================
A recipe describes neuron models in a cell-oriented manner and supplies methods to provide cell information. Details on why Arbor uses recipes and general best practices can be found in :ref:`modelrecipe`.
.. currentmodule:: arbor
.. class:: recipe
Describe a model by describing the cells and network, without any information about how the model is to be represented or executed.
All recipes derive from this abstract base class.
Recipes provide a cell-centric interface for describing a model. This means that
model properties, such as connections, are queried using the global identifier
(:attr:`arbor.cell_member.gid`) of a cell. In the description below, the term :attr:`gid` is used as shorthand
for "the cell with global identifier :attr:`gid`".
**Required Member Functions**
The following member functions (besides a constructor) must be implemented by every recipe:
.. function:: num_cells()
The number of cells in the model.
.. function:: cell_kind( gid )
The cell kind of the cell with global identifier :attr:`gid` (return type: :class:`arbor.cell_kind`).
.. function:: cell_description( gid )
A high level description of the cell with global identifier :attr:`gid`,
for example the morphology, synapses and ion channels required to build a multi-compartment neuron.
The type used to describe a cell depends on the kind of the cell.
The interfaces for querying the kind and the description of a cell are separate
so that the cell kind can be provided without building a full cell description,
which can be very expensive.
**Optional Member Functions**
.. function:: num_sources( gid )
The number of spike sources on :attr:`gid`.
.. function:: num_targets( gid )
The number of event targets on :attr:`gid` (e.g. synapses).
.. function:: connections_on( gid )
A list of all the incoming connections for :attr:`gid`.
Each connection should have a post-synaptic target :attr:`connection.destination` that matches the argument :attr:`gid`, and a valid synapse id :attr:`arbor.cell_member.index` on :attr:`gid`.
See :class:`connection`.
By default returns an empty list.
.. function:: event_generator(index, weight, schedule)
A list of all the event generators that are attached to the :attr:`gid` with cell-local :attr:`index`, weight and schedule (:class:`regular_schedule`, :class:`explicit_schedule` or :class:`poisson_schedule`).
By default returns an empty list.
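The ring-wiring logic used in the example recipe later in this section can be checked in isolation with plain Python, without Arbor installed: each cell :attr:`gid` receives a connection from cell ``gid-1``, with the ring closed at cell 0. The helper name ``ring_source`` below is illustrative, not part of the Arbor API.

.. code-block:: python

    def ring_source(gid, n_cells):
        """Return the gid of the presynaptic cell in a ring of n_cells:
        cell gid is driven by cell gid-1, and cell 0 by the last cell."""
        return n_cells - 1 if gid == 0 else gid - 1

    # For a ring of 4 cells the sources are 3, 0, 1, 2 for gids 0..3.
    print([ring_source(gid, 4) for gid in range(4)])  # [3, 0, 1, 2]
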
.. class:: connection
Describe a connection between two cells: a pre-synaptic source and a post-synaptic destination.
The source is typically a threshold detector on a cell or a spike source.
The destination is a synapse on the post-synaptic cell.
.. function:: connection(source, destination, weight, delay)
Construct a connection between the :attr:`source` and the :attr:`destination` with a :attr:`weight` and time :attr:`delay`.
.. attribute:: source
The source of the connection (type: :class:`arbor.cell_member`).
.. attribute:: destination
The destination of the connection (type: :class:`arbor.cell_member`).
.. attribute:: weight
The weight of the connection (S⋅cm⁻²).
.. attribute:: delay
The delay time of the connection (ms).
.. class:: regular_schedule
.. function:: regular_schedule()
Construct a default regular schedule with an empty time range and zero time step size.
.. function:: regular_schedule(tstart, tstop, dt)
Construct a regular schedule as a list of times from :attr:`tstart` to :attr:`tstop` in steps of :attr:`dt`.
.. attribute:: tstart
The start time (ms).
.. attribute:: tstop
The end time (ms).
.. attribute:: dt
The time step size (ms).
.. class:: explicit_schedule
.. function:: explicit_schedule()
Construct a default explicit schedule with an empty list.
.. attribute:: times
Set the list of times in the schedule (ms).
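The relation between the two schedule types can be sketched in plain Python (this is not Arbor API, and it assumes the regular schedule covers the half-open interval ``[tstart, tstop)``): a regular schedule corresponds to a list of times that could equally be given to an explicit schedule.

.. code-block:: python

    def regular_times(tstart, tstop, dt):
        """Times of a regular schedule: tstart, tstart+dt, ... < tstop (ms)."""
        times = []
        t = tstart
        while t < tstop:
            times.append(round(t, 9))  # round to suppress float drift
            t += dt
        return times

    print(regular_times(0.0, 1.0, 0.25))  # [0.0, 0.25, 0.5, 0.75]
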
.. class:: poisson_schedule
To be implemented.
Cells
------
A multicompartmental cell in Arbor's Python front end can be created by making a soma and adding synapses at specific segment locations.
.. class:: make_soma_cell
Construct a single compartment cell with properties:
- diameter 18.8 µm;
- Hodgkin-Huxley (HH) mechanisms (with default parameters as described below);
- bulk resistivity 100 Ω·cm (default);
- capacitance 0.01 F⋅m⁻² (default).

The default parameters of the HH mechanisms are:

- Na-conductance 0.12 S⋅m⁻²,
- K-conductance 0.036 S⋅m⁻²,
- passive conductance 0.0003 S⋅m⁻² and
- passive potential -54.3 mV.
.. class:: segment_location( segment, position )
Describe a location on a cell in terms of a cell-local segment index and a relative position along the segment.
.. attribute:: segment
The segment as cell-local index.
.. attribute:: position
The relative position along the segment, between 0 and 1.
.. class:: mccell
.. function:: add_synapse( location )
Add an exponential synapse at the given segment location.
.. function:: add_stimulus( location, t0, duration, weight )
Add a stimulus to the cell at a specific location, with start time t0 (ms), duration (ms) and weight (nA).
.. function:: add_detector( location, threshold )
Add a detector to the cell at a specific location and threshold (mV).
An example recipe describing a ring network of multicompartmental cells reads as follows:
.. container:: example-code

    .. code-block:: python

        import arbor

        # A recipe, that describes the cells and network of a model, can be defined
        # in Python by implementing the arbor.recipe interface.
        class ring_recipe(arbor.recipe):

            def __init__(self, n=4):
                # The base C++ class constructor must be called first, to ensure that
                # all memory in the C++ class is initialized correctly.
                arbor.recipe.__init__(self)
                self.ncells = n

            # The num_cells method that returns the total number of cells in the model
            # must be implemented.
            def num_cells(self):
                return self.ncells

            # The cell_description method returns a cell.
            def cell_description(self, gid):
                # Make a soma cell.
                cell = arbor.make_soma_cell()

                # Add a synapse at segment 0, position 0.5.
                loc = arbor.segment_location(0, 0.5)
                cell.add_synapse(loc)

                # Add a stimulus to the first cell (gid 0) at t0 = 0 ms,
                # for a duration of 20 ms, with weight 0.01 nA.
                if gid==0:
                    cell.add_stimulus(loc, 0, 20, 0.01)

                return cell

            def num_targets(self, gid):
                return 1

            def num_sources(self, gid):
                return 1

            # The kind method returns the type of cell with gid.
            # Note: this must agree with the type returned by cell_description.
            def kind(self, gid):
                return arbor.cell_kind.cable1d

            # Make a ring network.
            def connections_on(self, gid):
                # Define the source of cell gid as the previous cell with gid-1;
                # caution: close the ring at gid 0.
                src = self.num_cells()-1 if gid==0 else gid-1
                return [arbor.connection(arbor.cell_member(src,0), arbor.cell_member(gid,0), 0.1, 10)]
.. _pysimulation:
Simulations
===========
A simulation is the executable form of a model.
From recipe to simulation
-------------------------
To build a simulation the following concepts are needed:
* an :class:`arbor.recipe` that describes the cells and connections in the model;
* an :class:`arbor.context` used to execute the simulation.
The workflow to build a simulation is to first generate an
:class:`arbor.domain_decomposition` from the :class:`arbor.recipe` and :class:`arbor.context`, describing how the model is distributed
over the local and distributed hardware resources (see :ref:`pydomdec`). Then, the simulation is built using the :class:`arbor.domain_decomposition`.
.. container:: example-code

    .. code-block:: python

        import arbor

        # Get hardware resources and create a context.
        resources = arbor.proc_allocation()
        context = arbor.context(resources)

        # Initialise a recipe of user-defined type my_recipe with 100 cells.
        n_cells = 100
        recipe = my_recipe(n_cells)

        # Get a description of the partition of the model over the cores
        # (and GPU if available) on the node.
        decomp = arbor.partition_load_balance(recipe, context)

        # Instantiate the simulation.
        sim = arbor.simulation(recipe, decomp, context)

        # Run the simulation for 2000 ms with a time step of 0.025 ms.
        tSim = 2000
        dt = 0.025
        sim.run(tSim, dt)
.. currentmodule:: arbor
.. class:: simulation
A simulation is constructed from a recipe, and then used to update and monitor the model state.
Simulations take the following inputs:
* an :class:`arbor.recipe` that describes the model;
* an :class:`arbor.domain_decomposition` that describes how the cells in the model are assigned to hardware resources;
* an :class:`arbor.context` which is used to execute the simulation.
Simulations provide an interface for executing and interacting with the model:
* **Advance the model state** from one time to another, and reset the model to its initial state before the simulation was started.
* Sample the simulation state during the execution (e.g. compartment voltage and current) and generate spike output by using an **I/O interface**.
**Constructor:**
.. function:: simulation(recipe, dom_dec, context)
Initialize the model described by a :attr:`recipe`, with cells and network distributed according to :attr:`dom_dec`, and computation resources described by :attr:`context`.
.. attribute:: recipe
An :class:`arbor.recipe`.
.. attribute:: dom_dec
An :class:`arbor.domain_decomposition`.
.. attribute:: context
An :class:`arbor.context`.
**Updating Model State:**
.. function:: reset()
Reset the state of the simulation to its initial state to rerun the simulation.
.. function:: run(tfinal, dt)
Run the simulation from current simulation time to :attr:`tfinal`,
with maximum time step size :attr:`dt`.
.. attribute:: tfinal
The final simulation time (ms).
.. attribute:: dt
The time step size (ms).
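Since ``run`` advances the integrator in steps of at most :attr:`dt`, the number of time steps taken for a given :attr:`tfinal` can be estimated with a small, Arbor-independent sketch (``n_steps`` is an illustrative helper, not part of the Arbor API):

.. code-block:: python

    import math

    def n_steps(tfinal, dt):
        """Number of time steps of size at most dt needed to reach tfinal (ms)."""
        return math.ceil(tfinal / dt)

    # The example in this section: 2000 ms simulated in 0.025 ms steps.
    print(n_steps(2000, 0.025))  # 80000
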
Recording spikes
----------------
In order to analyze the simulation output, spikes can be recorded.
**Types**:
.. class:: spike
.. function:: spike()
Construct a spike with defaults :attr:`arbor.cell_member.gid` = 0 and :attr:`arbor.cell_member.index` = 0.
.. attribute:: source
The spike source (of type: :class:`arbor.cell_member` with :attr:`arbor.cell_member.gid` and :attr:`arbor.cell_member.index`).
.. attribute:: time
The spike time (ms, default: -1 ms).
.. class:: sprec
.. function:: sprec()
Initialize the spike recorder.
.. attribute:: spikes
A list of the recorded spikes (each of type :class:`spike`).
**I/O interface**:
.. function:: make_spike_recorder(simulation)
Record all spikes generated over all domains during a simulation (return type: :class:`sprec`).
.. container:: example-code

    .. code-block:: python

        import arbor

        # Instantiate the simulation.
        sim = arbor.simulation(recipe, decomp, context)

        # Build the spike recorder.
        recorder = arbor.make_spike_recorder(sim)

        # Run the simulation for 2000 ms with a time step of 0.025 ms.
        tSim = 2000
        dt = 0.025
        sim.run(tSim, dt)

        # Get the recorder's spikes.
        spikes = recorder.spikes

        # Print the spikes and the corresponding spike times.
        for i in range(len(spikes)):
            spike = spikes[i]
            print(' cell %2d at %8.3f ms'%(spike.source.gid, spike.time))
.. code-block:: none

    SPIKES:
     cell  0 at    5.375 ms
     cell  1 at   15.700 ms
     cell  2 at   26.025 ms
     cell  3 at   36.350 ms
     cell  4 at   46.675 ms
     cell  5 at   57.000 ms
     cell  6 at   67.325 ms
     cell  7 at   77.650 ms
     cell  8 at   87.975 ms
     cell  9 at   98.300 ms
The recorded spikes can then, for instance, be visualized in a raster plot of neuron :attr:`gid` against spike time.
.. container:: example-code

    .. code-block:: python

        import numpy as np
        import math
        import matplotlib.pyplot as plt

        # Use a raster plot to visualize spiking activity.
        tVec = np.arange(0, tSim, dt)
        SpikeMat_rows = n_cells               # number of cells
        SpikeMat_cols = math.floor(tSim/dt)   # number of time steps
        SpikeMat = np.zeros((SpikeMat_rows, SpikeMat_cols))

        # Save the spike trains in a matrix:
        # (if cell n spikes in time step k, then SpikeMat[n,k]=1, else 0)
        for i in range(len(spikes)):
            spike = spikes[i]
            tCur = math.floor(spike.time/dt)
            SpikeMat[spike.source.gid][tCur] = 1

        for i in range(SpikeMat_rows):
            for j in range(SpikeMat_cols):
                if SpikeMat[i,j] == 1:
                    x1 = [i, i+0.5]
                    x2 = [j, j]
                    plt.plot(x2, x1, color='black')

        plt.title('Spike raster plot')
        plt.xlabel('Spike time (ms)')
        tick = range(0, SpikeMat_cols+10000, 10000)
        label = range(0, tSim+250, 250)
        plt.xticks(tick, label)
        plt.ylabel('Neuron (gid)')
        plt.show()
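The nested plotting loop above scales poorly with the number of time steps. As an alternative sketch, assuming only that each recorded spike exposes ``source.gid`` and ``time`` as in the example above, spike times can be grouped per cell into lists that are suitable for, e.g., matplotlib's ``eventplot``. The helper name ``group_spikes_by_gid`` is illustrative, not Arbor API.

.. code-block:: python

    def group_spikes_by_gid(gids, times, n_cells):
        """Group spike times (ms) per cell gid, e.g. for plt.eventplot."""
        per_cell = [[] for _ in range(n_cells)]
        for gid, t in zip(gids, times):
            per_cell[gid].append(t)
        return per_cell

    # Example data in place of the recorded spikes.
    gids  = [0, 1, 0, 2]
    times = [5.375, 15.7, 105.4, 26.025]
    print(group_spikes_by_gid(gids, times, 3))
    # [[5.375, 105.4], [15.7], [26.025]]
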
.. figure:: Rasterplot

    An exemplary spike raster plot.