Unverified Commit b3521c23 authored by Brent Huisman, committed by GitHub

Tutorial uses all logical cores

Add tutorial on using all logical cores on a machine, facilitated by the new flag to context creation.
parent b78b0367
......@@ -114,8 +114,14 @@ The execution
To create a simulation, we must create an :class:`arbor.context` and :py:class:`arbor.domain_decomposition`.
Step **(12)** creates a default execution context, and uses the :func:`arbor.partition_load_balance` to create a
default domain decomposition. You can print the objects to see what defaults they produce on your system.
Step **(12)** initializes the ``threads`` parameter of :class:`arbor.context` with the ``avail_threads`` flag. By supplying
this flag, a context is constructed that will use all locally available threads. On your local machine this will match the
number of logical cores in your system. Especially with large numbers
of cells you will notice the speed-up. (You could instantiate the recipe with 5000 cells and observe the difference. Don't
forget to turn off plotting if you do; it will take more time to generate the image than to run the actual simulation!)
:func:`arbor.partition_load_balance` creates a default domain decomposition, which
for contexts initialized with ``threads=avail_threads`` distributes cells evenly over the available cores. You can print the
objects to see what defaults they produce on your system.
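As a minimal sketch (assuming the ``recipe`` object from the earlier steps of this tutorial), the two objects can be constructed and inspected like this:

import arbor

# Construct a context that uses every locally available thread.
context = arbor.context("avail_threads")
print(context)  # reports the resources (threads, GPU) the context will use

# Let Arbor distribute the cells of the recipe over that hardware.
decomp = arbor.partition_load_balance(recipe, context)
print(decomp)   # shows how the cells are grouped over the available cores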
Step **(13)** sets all spike generators to record using the :py:class:`arbor.spike_recording.all` policy.
This means the timestamps of the generated events will be kept in memory. By default, these are discarded.
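A minimal sketch of that step, assuming ``sim`` is the :py:class:`arbor.simulation` built above:

sim.record(arbor.spike_recording.all)  # keep every generated spike event in memory
sim.run(100)                           # run the simulation for 100 ms
print(len(sim.spikes()), "spikes were recorded")  # available only because of the recording policy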
......
......@@ -26,6 +26,10 @@ Step **(11)** is changed to generate a network with five hundred cells.
The hardware context
********************
The configuration of the :py:class:`arbor.context` will need to be changed to reflect the change in hardware.
First of all, we scrap setting ``threads="avail_threads"`` and instead use
`MPI <https://en.wikipedia.org/wiki/Message_Passing_Interface#Overview>`_ to distribute the work over nodes, cores and threads.
Step **(12)** uses the Arbor-built-in :py:class:`MPI communicator <arbor.mpi_comm>`, which is identical to the
``MPI_COMM_WORLD`` communicator you'll know if you are familiar with MPI. The :py:class:`arbor.context` takes a
communicator for its ``mpi`` parameter. Note that you can also pass in communicators created with ``mpi4py``.
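As a hedged sketch, the MPI-enabled context could be created in either of the following ways; ``mpi4py`` is an optional dependency and is shown here only for illustration, and the exact initialization call may differ depending on how Arbor was built:

import arbor

# Option 1: Arbor's built-in wrapper around MPI_COMM_WORLD.
arbor.mpi_init()                               # initialize MPI if no other library has done so
context = arbor.context(mpi=arbor.mpi_comm())

# Option 2: pass a communicator created with mpi4py (importing MPI initializes it).
# from mpi4py import MPI
# context = arbor.context(mpi=MPI.COMM_WORLD)

print(context)  # each rank reports its local resources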
......
......@@ -117,7 +117,7 @@ if __name__ == "__main__":
for k,v in vars(opt).items():
print(f"{k} = {v}")
context = arbor.context()
context = arbor.context("avail_threads")
print(context)
meters = arbor.meter_manager()
......
......@@ -108,8 +108,8 @@ class ring_recipe (arbor.recipe):
ncells = 4
recipe = ring_recipe(ncells)
# (12) Create a default execution context, domain decomposition and simulation
context = arbor.context()
# (12) Create an execution context using all locally available threads, domain decomposition and simulation
context = arbor.context("avail_threads")
decomp = arbor.partition_load_balance(recipe, context)
sim = arbor.simulation(recipe, decomp, context)
......@@ -120,7 +120,6 @@ sim.record(arbor.spike_recording.all)
handles = [sim.sample((gid, 0), arbor.regular_schedule(0.1)) for gid in range(ncells)]
# (15) Run simulation for 100 ms
sim.progress_banner()
sim.run(100)
print('Simulation finished')
......