%% Cell type:markdown id: tags:
# RateML model generation
### Using RateML to generate the Python or CUDA models
%% Cell type:markdown id: tags:
Building rate-based models in RateML starts by creating an XML model file. To understand which constructs can be used to build the model, one should take a closer look at the README file. The cell below prints the latest README file from the repository. Every construct that can be used is explained there.
%% Cell type:code id: tags:
``` python
# set root
ROOT = "/opt/app-root/src/drive/My Libraries/main/tvb-root/tvb_library/tvb/rateML/"
```
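%% Cell type:markdown id: tags:
If your drive is mounted at a different location, adjust ROOT accordingly. The cell below is a quick sanity check (a minimal sketch) that the path exists before the following cells read files from it.
%% Cell type:code id: tags:
``` python
import os
# verify that the RateML folder is reachable before reading files from it
print(os.path.isdir(ROOT), ROOT)
```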
%% Cell type:code id: tags:
``` python
from IPython.display import Markdown, display, Code
mdfile = open(ROOT + "README.md","r")
model = mdfile.read()
display(Markdown(model))
mdfile.close()
```
%% Cell type:markdown id: tags:
## Generate a model
After reading the README, one should be able to build an XML model file. Let's use the relatively small Kuramoto model as an example.
Your model should look similar to the Kuramoto Python file and define some constants, an exposure and the dynamics behavior. The dynamics for the Kuramoto consist of a state variable, a derived variable and a time derivative. Apart from the derived variable, these are the constructs that a RateML XML model file should contain. The template tvb-root/tvb_library/tvb/rateML/XMLmodels/model_template.xml is an empty template which can be used to create a model XML file.
%% Cell type:code id: tags:
``` python
# Open the model template
from IPython.display import Markdown, display, Code
model_location = ROOT + "XMLmodels/model_template.xml"
xmlfile = open(model_location,"r")
model = xmlfile.read()
display(Markdown(model))
xmlfile.close()
```
%% Cell type:markdown id: tags:
## Generating the model code
We will use an existing XML model file to generate a model called "infinite_theta". This model relates the biophysically relevant macroscopic quantities, namely the firing rate and the membrane potential, which govern the evolution of the neuronal network. These equations describe exactly all possible macroscopic dynamical states of the network (Montbrió, E., Pazó, D., and Roxin, A. (2015). Macroscopic Description for Networks of Spiking Neurons. Phys. Rev. X 5, 1–15. doi:10.1103/PhysRevX.5.021028).
#### Please open: "tvb-root/tvb_library/tvb/rateML/XMLmodels/montbrio.xml"
If you are satisfied with the preset time-derivative properties, save the file.
We will call the templating function in order to automatically generate the model code.
In XML2model.py the class
```python
RateML('model_filename', language=('python' | 'cuda'), 'path/to/your/XMLmodels', 'path/to/your/generatedModels')
```
will start the code generation.
If you get an error, it may be that the first 3 steps from '0_Install_TVB.ipynb' still need to be done, or that you haven't restarted this kernel (Kernel > Restart Kernel ...).
%% Cell type:code id: tags:
``` python
from tvb.rateML.XML2model import RateML
# put here the name of the xml file
# model_filename = 'oscillator'
model_filename = 'montbrio'
language = "python"
# language = "cuda"
XMLin = ROOT + "XMLmodels"
GenModOut = ROOT + "generatedModels"
RateML(model_filename, language, XMLin, GenModOut)
```
%% Output
2022-09-28 13:05:49,187 - INFO - tvb.rateML.XML2model - True validation of /opt/app-root/src/drive/My Libraries/main/tvb-root/tvb_library/tvb/rateML/XMLmodels/montbrio.xml against /opt/app-root/src/.local/lib/python3.8/site-packages/tvb/rateML/rML_v0.xsd
2022-09-28 13:05:49,199 - INFO - tvb.rateML.XML2model - model file generated montbrio
<tvb.rateML.XML2model.RateML at 0x7f221c3639a0>
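%% Cell type:markdown id: tags:
To check the result, the generated code can be displayed directly in the notebook. This is a minimal sketch which assumes the Python output is written as `<model_filename>.py` inside the `generatedModels` folder.
%% Cell type:code id: tags:
``` python
from IPython.display import Code, display
# Open the generated Python model (assumed filename: <model_filename>.py)
gen_location = GenModOut + "/" + model_filename + ".py"
genModFile = open(gen_location, "r")
display(Code(genModFile.read(), language='python'))
genModFile.close()
```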
# RateML-HPC
## Getting started
To make it easy for you to get started with GitLab, here's a list of recommended next steps.
Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!
## Add your files
- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:
```
cd existing_repo
git remote add origin https://gitlab.ebrains.eu/meta-optimization/rateml-hpc.git
git branch -M main
git push -uf origin main
```
## Integrate with your tools
- [ ] [Set up project integrations](https://gitlab.ebrains.eu/meta-optimization/rateml-hpc/-/settings/integrations)
## Collaborate with your team
- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Automatically merge when pipeline succeeds](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)
## Test and Deploy
Use the built-in continuous integration in GitLab.
- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)
***
# Editing this README
When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thank you to [makeareadme.com](https://www.makeareadme.com/) for this template.
## Suggestions for a good README
Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.
## Name
Choose a self-explaining name for your project.
## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.
## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.
## Support
Tell people where they can go to for help. It can be any combination of an issue tracker, a chat room, an email address, etc.
## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.
## Contributing
State if you are open to contributions and what your requirements are for accepting them.
For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.
You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.
## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.
## License
For open source projects, say how it is licensed.
## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
# Environment setup: activate the compute-time project and load the modules
# needed for the RateML CUDA runs on the JSC clusters.
#module use $OTHERSTAGES
#module load Stages/2020
jutil env activate -p icei-hbp-2021-0003
ml Stages/2022 GCCcore/.11.2.0
ml PyCUDA/2021.1
ml SciPy-Stack
ml numba
# make the TVB library importable for the model driver
export PYTHONPATH=$PYTHONPATH:/p/project/training2221/$USER/tvb-root/tvb_library
#export PYTHONPATH=$PYTHONPATH:/p/project/cslns/vandervlag1/uniRate/tvb-root/scientific_library/
#ml SciPy-Stack/2020-Python-3.8.5
#ml numba
#export PYTHONPATH=$PYTHONPATH:/p/project/cslns/vandervlag1/uniRate/tvb-root/scientific_library/
Files added: images/figure_Goldman.png (380 KiB), images/montbriosweep.png (290 KiB)
#!/usr/bin/env bash
##SBATCH -A [!!enter your project here!!]
#SBATCH -A icei-hbp-2021-0003
#SBATCH --partition=gpus
#SBATCH --gres=gpu:1
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --time=04:30:00
#SBATCH -J RateML
# Run the Zerlaut AdEx
# srun python $PROJECT/$USER/tvb-root/tvb_library/tvb/rateML/run/model_driver_zerlaut.py -s0 5 -s1 1 -s2 1 -s3 1 -s4 1 -n 500 -dt .1 -sm 4 -v -w -x7
# Run the MontbrioPazoRoxin model
srun python $PROJECT/$USER/tvb-root/tvb_library/tvb/rateML/run/model_driver_montbrio.py -s0 5 -n 400 -dt .1 -sm 3 -v -w -r 76
%% Cell type:markdown id: tags:
# RateML
### Using RateML to generate a CUDA model file and PyUnicore to execute parameter sweeps on an HPC cluster
%% Cell type:code id: tags:
``` python
# tmp branch clone till argparse bugfix is accepted
!git clone -b rateml_ebrains_fix --single-branch -v https://github.com/the-virtual-brain/tvb-root.git
```
%% Cell type:code id: tags:
``` python
import os
# %tb
# !pip install -U pip setuptools
# !pip3 install -U pip
# !pip list
# install tvb-library
os.chdir("/mnt/user/drive/Shared with groups/RateML TVB/tvb-root/scientific_library/")
# !python setup.py install
!pip install .
# os.chdir("/mnt/user/drive/Shared with groups/RateML TVB/tvb-root/tvb_bin/")
# install tvb-bin
# !python setup.py install
# install tvb-data
# ! pip install tvb-data
# os.chdir("/mnt/user/drive/Shared with groups/RateML TVB/")
```
%% Cell type:markdown id: tags:
## Building a model
Building rate-based models in RateML starts by creating an XML model file. To understand which constructs can be used to build the model, one should take a closer look at the README file. The cell below prints the latest README file from the repository. Every construct that can be used is explained there.
%% Cell type:code id: tags:
``` python
from IPython.display import Markdown, display, Code
mdfile = open("tvb-root/scientific_library/tvb/rateML/README.md","r")
model = mdfile.read()
display(Markdown(model))
mdfile.close()
```
%% Cell type:markdown id: tags:
## Generate a model
After reading the README, one should be able to build an XML model file. Let's use the relatively small Kuramoto model as an example.
Your model should look similar to the Kuramoto Python file and define some constants, an exposure and the dynamics behavior. The dynamics for the Kuramoto consist of a state variable, a derived variable and a time derivative. Apart from the derived variable, these are the constructs that a RateML XML model file should contain. The template tvb-root/scientific_library/tvb/rateML/XMLmodels/model_template.xml is an empty template which can be used to create a model XML file.
%% Cell type:code id: tags:
``` python
# Open the model template
model_filename = "model_template"
model_location = "tvb-root/scientific_library/tvb/rateML/XMLmodels/"+model_filename+".xml"
xmlfile = open(model_location,"r")
model = xmlfile.read()
display(Markdown(model))
xmlfile.close()
```
%% Cell type:markdown id: tags:
## Generating the model code
We will call the templating function in order to automatically generate the model code.
In XML2model.py the class
```python
RateML('model_filename', language=('python' | 'cuda'), 'path/to/your/XMLmodels', 'path/to/your/generatedModels')
```
will start the code generation.
%% Cell type:code id: tags:
``` python
# hacky solution until the argparse bugfix is merged into tvb
import sys
sys.path.insert(1, 'tvb-root/scientific_library/')
import tvb
sys.path.insert(1, 'tvb-root/scientific_library/tvb/rateML/')
from XML2model import RateML
```
%% Cell type:code id: tags:
``` python
from tvb.rateML.XML2model import RateML
# %tb
# some preexisting examples:
# model_filename = 'montbrio'
model_filename = 'oscillator'
# model_filename = 'kuramoto'
# model_filename = 'rwongwang'
# model_filename = 'epileptor'
language = "cuda"
XMLin = "tvb-root/scientific_library/tvb/rateML/XMLmodels/"
GenModOut = "tvb-root/scientific_library/tvb/rateML/generatedModels/"
RateML(model_filename, language, XMLin, GenModOut)
```
%% Output
2022-06-02 16:51:52,497 - INFO - tvb.rateML.XML2model - True validation of tvb-root/scientific_library/tvb/rateML/XMLmodels/oscillator.xml against /opt/app-root/src/.local/lib/python3.8/site-packages/tvb/rateML/rML_v0.xsd
<tvb.rateML.XML2model.RateML at 0x7fe2966dacd0>
%% Cell type:code id: tags:
``` python
from IPython.display import Markdown, display, Code
# Open the generated model
model_location = "tvb-root/scientific_library/tvb/rateML/generatedModels/"+model_filename+".c"
genModFile = open(model_location,"r")
model = genModFile.read()
display(Code(model, language='c'))
# display(Markdown(model))
genModFile.close()
```
%% Cell type:markdown id: tags:
## Simulating the result
If the model displays all its features to your wishes, it is time to take it for a spin on a GPU. The sites that are able to run the models are the JUSUF and JUWELS clusters at Forschungszentrum Jülich. This will only work if you have an LDAP account for these clusters and you are registered in the PyUnicore database. If you don't have access, any other CUDA-enabled GPU will run your generated model, using \_\_main\_\_.py in the rateML/run/ folder.
%% Cell type:markdown id: tags:
### Setup PyUnicore
%% Cell type:code id: tags:
``` python
# !pip install pyunicore --upgrade
import pyunicore.client as unicore_client
import json
import os
```
%% Cell type:code id: tags:
``` python
token = clb_oauth.get_token()
tr = unicore_client.Transport(token)
r = unicore_client.Registry(tr, unicore_client._HBP_REGISTRY_URL)
HPC_LOC = "https://zam2125.zam.kfa-juelich.de:9112/JUSUF/rest/core"
# HPC_LOC = 'https://zam2125.zam.kfa-juelich.de:9112/JUWELS/rest/core'
tr.preferences="group:icei-hbp-2021-0003"
site = unicore_client.Client(transport=tr,site_url=HPC_LOC)
site.access_info()
# r.site_urls
# site.get_storages()
# r.site('JUWELS')
```
%% Cell type:code id: tags:
``` python
model_filename='oscillator'
```
%% Cell type:markdown id: tags:
### Transfer model
Transfer the generated model file to JUSUF.
%% Cell type:code id: tags:
``` python
STOR_LOC = 'https://zam2125.zam.kfa-juelich.de:9112/JUSUF/rest/core'
# STOR_LOC = 'https://zam2125.zam.kfa-juelich.de:9112/JUDAC/rest/core/'
base_url = STOR_LOC + "/storages/PROJECT/"
# base_url = "https://zam2125.zam.kfa-juelich.de:9112/JUSUF/rest/core/storages/PROJECT/"
# base_url = "https://zam2125.zam.kfa-juelich.de:9112/JUWELS/rest/core/storages/PROJECT/"
source_location = "drive/Shared with groups/RateML TVB/tvb-root/scientific_library/tvb/rateML/generatedModels/" + model_filename + ".c"
source_path = os.path.join(os.environ['HOME'], source_location)
storage = unicore_client.Storage(tr, base_url)
# storage_location = "wikicollab/RateML/" + model_filename + ".c"
# # storage_location = "test/" + model_filename + ".c"
# # storage_location = model_filename + ".c"
# storage.upload(source_path, destination = storage_location)
```
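%% Cell type:markdown id: tags:
To actually perform the transfer, the upload call that is commented out in the cell above can be run as below. This is a sketch which assumes the `wikicollab/RateML/` folder used by the job script already exists on the PROJECT storage.
%% Cell type:code id: tags:
``` python
# upload the generated CUDA model next to the job scripts on the PROJECT storage
storage_location = "wikicollab/RateML/" + model_filename + ".c"
storage.upload(source_path, destination=storage_location)
```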
%% Cell type:markdown id: tags:
### Job setup
%% Cell type:markdown id: tags:
Before the job can be executed on the HPC, some other parameters need to be set up as well.
We specify a grid of parameter values to sweep, which will be set up on the HPC according to:
```python
couplings = np.logspace(1.6, 3.0, coupling)
speeds = np.logspace(0.0, 2.0, speed)
params_iter = itertools.product(speeds, couplings)
params = np.array([vals for vals in params_iter], np.float32)
```
%% Cell type:code id: tags:
``` python
# by setting the coupling and speed sizes:
coupling = "32"
speed = "32"
# simulation time
simtime = "400"
# The default values of 66, 68, 76, 80, 96, 192 or 998 can be selected and is processed accordingly:
# connectivity = Connectivity.from_file(source_file="connectivity_"+n_nodes+".zip")
n_nodes = "76"
# Set the number of states of your model. The Kuramoto has 1, ReducedWongWang, Generic2DOscillator and Montbrio have 2 and Epileptor has 6 states.
states = "2"
```
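%% Cell type:markdown id: tags:
For reference, the sweep explores every combination of the coupling and speed values, so the parameter axis of the result has coupling * speed entries (1024 for 32 x 32). The sketch below only reproduces the grid locally to show its size; the actual grid is built by the driver on the HPC as in the snippet above.
%% Cell type:code id: tags:
``` python
import itertools
import numpy as np
# reproduce the parameter grid that the driver builds on the HPC
couplings = np.logspace(1.6, 3.0, int(coupling))
speeds = np.logspace(0.0, 2.0, int(speed))
params = np.array(list(itertools.product(speeds, couplings)), np.float32)
print(params.shape)  # (1024, 2) for a 32 x 32 sweep
```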
%% Cell type:markdown id: tags:
#### Create the unicore job:
%% Cell type:code id: tags:
``` python
my_job = {}
# only source the environment setup script on the cluster
my_job['Executable'] = "source /p/project/cslns/wikicollab/RateML/activate;"
my_job['RunOnLoginNode'] = "true"
my_job['Job type'] = "interactive"
my_job['Imports'] = []
my_job['Exports'] = []
my_job['Resources'] = {}
job = site.new_job(job_description=my_job)
```
%% Cell type:code id: tags:
``` python
my_job = {}
# executable / application
# arguments for runthingsJusuf are: backend modelname couplings speeds
# my_job['Executable'] = "source /p/project/cslns/wikicollab/RateML/activate; \
# /p/project/cslns/wikicollab/RateML/runthingsJuwels "+model_filename+" "+coupling+" "+speed+" "+simtime+" "+n_nodes+" "+states
# ./runthingsJusuf "+model_filename+" "+coupling+" "+speed+" "+simtime+" "+n_nodes+" "+states
my_job['Executable'] = "cd /p/project/cslns/wikicollab/RateML/; \
./runthingsJuwels "+model_filename+" "+coupling+" "+speed+" "+simtime+" "+n_nodes+" "+states
# environment vars
# run this on login node, not in batch system
my_job['RunOnLoginNode'] = "true"
my_job['Job type'] = "interactive"
# data stage in - TBD
my_job['Imports'] = []
# data stage out - TBD
my_job['Exports'] = []
# Resources - TBD
my_job['Resources'] = {}
my_job
```
%% Cell type:markdown id: tags:
#### Submit Job to selected HPC cluster
%% Cell type:code id: tags:
``` python
job = site.new_job(job_description=my_job)
```
%% Cell type:markdown id: tags:
### Info about JOB
%% Cell type:code id: tags:
``` python
if job.is_running():
    print('Job is running')
job.poll()
print('Job is finished')
```
%% Cell type:markdown id: tags:
### Fetch results
Copy the output log from cluster to Collab
%% Cell type:code id: tags:
``` python
remote = storage.stat("wikicollab/RateML/output.out")
# help(remote)
# storage.download("../wikicollab/RateML/output.out")
remote.download("output.out")
with open("output.out", "r") as f:
for line in f:
print (line.rstrip())
```
%% Cell type:markdown id: tags:
Copy the error log from JUSUF to Collab
%% Cell type:code id: tags:
``` python
remote = storage.stat("wikicollab/RateML/error.er")
# import time
# from IPython.display import clear_output
# for i in range(4):
# clear_output(wait=True)
remote.download("error.er")
with open("error.er", "r") as f:
for line in f:
print (line.rstrip())
# time.sleep(5)
```
%% Cell type:markdown id: tags:
#### Transfer the produced data from HPC
%% Cell type:code id: tags:
``` python
remote = storage.stat("wikicollab/RateML/tavg_data")
remote.download("tavg_data")
```
%% Cell type:code id: tags:
``` python
job.poll()
print('Job finished!')
result_job = {}
wd = job.working_dir
result_job["stderr"] = [x.decode('utf8') for x in wd.stat("/stderr").raw().readlines()]
result_job["stdout"] = [x.decode('utf8') for x in wd.stat("/stdout").raw().readlines()]
result_job
```
%% Cell type:markdown id: tags:
#### Unpickle it
%% Cell type:code id: tags:
``` python
import pickle
import numpy, sys
# numpy.set_printoptions(threshold=sys.maxsize)
tavg_file = open('tavg_data', 'rb')
tavg_data1 = pickle.load(tavg_file)
tavg_file.close()
# tavg_data(simsteps, states, nodes, paramscombi)
tavg_data1.shape
```
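%% Cell type:markdown id: tags:
The last axis of `tavg_data1` indexes the parameter combinations. Assuming it follows the order of `itertools.product(speeds, couplings)` shown earlier (couplings varying fastest), a flat index can be mapped back to its (speed, coupling) pair as in this sketch; the index `i` is just an example.
%% Cell type:code id: tags:
``` python
import numpy as np
# rebuild the sweep grid (32 x 32, as set up before job submission)
n_couplings, n_speeds = 32, 32
couplings = np.logspace(1.6, 3.0, n_couplings)
speeds = np.logspace(0.0, 2.0, n_speeds)
i = 100  # example flat index into the last axis of tavg_data1
print("speed =", speeds[i // n_couplings], "coupling =", couplings[i % n_couplings])
```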
%% Cell type:markdown id: tags:
#### Plot the data
%% Cell type:code id: tags:
``` python
import matplotlib.pyplot as plt
import numpy as np
# plot(np.cos(tavg_data[:, :, 0]) + np.r_[:76]/15.0, 'k', alpha=.2)
plt.plot((tavg_data1[:, 1, :, 0]), 'k', alpha=.2)
plt.xlim(0, 400)
plt.show()
```
%% Cell type:code id: tags:
``` python
import tqdm
import numpy as np
cc = np.zeros((1024, 76, 76), 'f')
for i in tqdm.trange(1024):
    # cc[i] = np.corrcoef(tavg_data[..., 0].T)
    cc[i] = np.corrcoef(tavg_data1[:, 0, :, i].T)
```
%% Cell type:code id: tags:
``` python
from tvb.simulator.lab import *
from matplotlib.pyplot import imshow
source_file='connectivity_76.zip'
connectivity=connectivity.Connectivity.from_file(source_file)
imshow(cc[1023])
# adding all correlation matrices to a single one, using a least squared method (?). Taken from TVB collab
# l2 = np.sqrt(np.sum(np.sum((cc - connectivity.weights)**2, axis=-1), axis=-1)).reshape((32, 32))
# l2 = ((np.sum((cc - connectivity.weights)**2, axis=-1)))
# l2 = np.sum(l2, axis=-1)
# l2 = np.sqrt(l2)
# l2
```
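%% Cell type:markdown id: tags:
Before plotting `l2` in the next cell, the commented distance computation above has to be run. The sketch below follows those commented lines and reshapes the 1024 parameter combinations back onto the 32 x 32 sweep grid.
%% Cell type:code id: tags:
``` python
import numpy as np
# distance between each simulated FC and the structural connectivity weights,
# reshaped onto the 32 x 32 (speed, coupling) sweep grid
l2 = np.sqrt(np.sum(np.sum((cc - connectivity.weights)**2, axis=-1), axis=-1)).reshape((32, 32))
l2.shape
```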
%% Cell type:code id: tags:
``` python
# %pylab inline
# plot the FC-SC distance over the 32 x 32 sweep grid (l2 from the cell above)
imshow(l2)
```
%% Cell type:markdown id: tags:
# RateML
## Defining new models in TVB using RateML
In this demo we show how to use RateML to automatically generate Python models from LEMS-based XML and run a simulation in TVB.
First let us gather the latest version (currently a beta); if it is already cloned, one should do a '!git pull'.
%% Cell type:code id: tags:
``` python
# install latests tvb libs
! pip install tvb-library
! pip install tvb-data
```
%% Cell type:markdown id: tags:
## Building a model
Building rate-based models in RateML starts by creating an XML model file. To understand which constructs can be used to build the model, one should take a closer look at the README file. The cell below prints the latest README file from the repository. Every construct that can be used is explained there. The Python model generation has no coupling or noise specification through RateML, so one can skip the coupling, noise and 'running an example' sections of the README.
%% Cell type:code id: tags:
``` python
from IPython.display import Markdown, display, Code
mdfile = open("tvb-root/scientific_library/tvb/rateML/README.md","r")
model = mdfile.read()
display(Markdown(model))
mdfile.close()
```
%% Cell type:markdown id: tags:
## Generate and Run an example model
After reading the README, one should be able to build an XML model file. Let's use the relatively small Kuramoto model as an example.
Your model should look similar to the Kuramoto Python file and define some constants, an exposure and the dynamics behavior. The dynamics for the Kuramoto consist of a state variable, a derived variable and a time derivative. Apart from the derived variable, these are the constructs that a RateML XML model file should contain.
%% Cell type:code id: tags:
``` python
# Open the Kuramoto model
model_location = "tvb-root/scientific_library/tvb/rateML/XMLmodels/kuramoto.xml"
xmlfile = open(model_location,"r")
model =xmlfile.read()
display(Markdown(model))
xmlfile.close()
```
%% Cell type:markdown id: tags:
## Generating the model code and integrating it into TVB
We will call the templating function in order to automatically generate the model code. This will be directly moved to the right path in the tvb library so it will be recognized by the simulator when we call it later.
In XML2model.py the class
```python
RateML('model_filename', language=('python' | 'cuda'), 'path/to/your/XMLmodels', 'path/to/your/generatedModels')
```
will start the code generation.
%% Cell type:code id: tags:
``` python
# hacky solution until the argparse bugfix is merged into tvb
import sys
sys.path.insert(1, 'tvb-root/scientific_library/')
import tvb
sys.path.insert(1, 'tvb-root/scientific_library/tvb/rateML/')
from XML2model import RateML
```
%% Output
here
%% Cell type:code id: tags:
``` python
# from tvb.rateML.XML2model import RateML
# import numpy
# %tb
# help(RateML)
# Start generation
# some preexisting examples:
model_filename = 'montbrio'
# model_filename = 'oscillator'
# model_filename = 'kuramoto'
# model_filename = 'rwongwang'
# model_filename = 'epileptor'
# model_filename = "kuramoto_python"
language = "python"
XMLin = "tvb-root/scientific_library/tvb/rateML/XMLmodels"
GenModOut = "tvb-root/scientific_library/tvb/rateML/generatedModels"
# RateML(model_filename=None, language=None, XMLfolder=None, GENfolder=None)
RateML(model_filename=model_filename, language=language, XMLfolder=XMLin, GENfolder=GenModOut)
```
%% Output
No traceback available to show.
<XML2model.RateML at 0x7f17453fef40>
%% Cell type:markdown id: tags:
It will also do XSD validation against rateML/rML_v0.xsd and report validity. If all is well, it should report that the file was generated. In addition to the output directory, it will place a copy of the model in the TVB simulator folder and make it recognizable for TVB.
Let's take a look at the generated model file:
%% Cell type:code id: tags:
``` python
# Open the generated Python model (the output is assumed to be named <model_filename>.py)
model_location = "tvb-root/scientific_library/tvb/rateML/generatedModels/" + model_filename + ".py"
genModFile = open(model_location,"r")
model = genModFile.read()
display(Code(model, language='python'))
# display(Markdown(model))
genModFile.close()
```
%% Cell type:code id: tags:
``` python
model_filename = 'montbrio'
```
%% Cell type:markdown id: tags:
## Simulating the result
If the model displays all its features to your wishes, it is time to take it for a spin in TVB.
If it simulates, the time series of your generated model will be plotted on screen.
%% Cell type:code id: tags:
``` python
from matplotlib.pyplot import *
from tvb.rateML.run.regular_run import regularRun
from tvb.simulator.lab import *
import numpy
%tb
# Settings for parameter sweeps can be found in "The Virtual Brain: a simulator of primate brain network dynamics".
#rww simtime 5e3
simtime = 5e2
# simtime = 2**10
g = 0.1
# g = 0.5 / 50.0
g = 0.0042
s = 4.0
globalspeed = 1
# dt = 2**-3
dt=1
# period = 2**-2
period = 1
# your_gen_model = 'Kuramoto_pythonT'
# your_gen_model = 'Kuramoto'
# your_gen_model = 'RwongwangT'
# your_gen_model = 'ReducedWongWangExcInh'
# your_gen_model = "Generic2dOscillator"
# your_gen_model = "OscillatorT"
your_gen_model = 'MontbrioPazoRoxin'
# coupling=coupling.Linear(a=np.array([0.5 / 50.0]))
# s = numpy.linspace(0, 1, 50).reshape((1, -1, 1))
# integrator=integrators.EulerStochastic(dt=dt, noise=noise.Additive(nsig=numpy.array([1e-5])))
# for oscillatorT
# integrator = integrators.HeunDeterministic(dt=2**-4)
integrator=integrators.HeunStochastic(dt=dt, noise=noise.Additive(nsig=numpy.array([1e-5])))
# integrator = integrators.EulerDeterministic(dt=dt)
#Initialise
source_file='connectivity_76.zip'
connectivity=connectivity.Connectivity.from_file(source_file)
# disabled global speed
(time, data) = regularRun(simtime, g, s, dt, period).simulate_python(your_gen_model)
figure()
plot(data[:, 0, :, 0], 'k', alpha=0.1)
show()
# data.shape(simtime/period, state, node, ?)
# tavg_data = np.squeeze(data[0][1])
# data[0][1].shape
# FCloc = []
# FCloc = np.corrcoef(tavg_data.T)
# FCloc
data.shape
```
%% Cell type:code id: tags:
``` python
from matplotlib.pyplot import *
figure()
plot(data[:, 0, :, 0], 'k', alpha=0.1)
show()
```
%% Cell type:code id: tags:
``` python
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(ncols=2, figsize=(12, 3))
# time series of the first state variable (time x nodes)
tavg_data = data[:, 0, :, 0]
# functional connectivity: correlation between regions over time
FCloc = np.corrcoef(tavg_data.T)
# structural connectivity weights from the connectome loaded above
SC = connectivity.weights
sns.heatmap(FCloc, xticklabels='',
            yticklabels='', ax=ax[0],
            cmap='coolwarm', vmin=-1, vmax=1)
sns.heatmap(SC / SC.max(), xticklabels='', yticklabels='',
            ax=ax[1], cmap='coolwarm', vmin=-1, vmax=1)
# correlation between simulated FC and SC
r = np.corrcoef(FCloc.ravel(), SC.ravel())[0, 1]
print("Correlation = " + str(r))
ax[0].set_title('simulated FC. \nSC-FC r = ' + str(r))
ax[1].set_title('SC')
plt.show()
```
%% Cell type:code id: tags:
``` python
# calculating the corrcoef for every node over the number of simsteps
# all to all correlation of the regions.
numpy.set_printoptions(threshold=1000)
# import tqdm
# cc = np.zeros((1024, 76, 76), 'f')
# for i in tqdm.trange(1024):
# # cc[i] = np.corrcoef(tavg_data[..., 0].T)
# cc[i] = np.corrcoef(data[:, 0, :, 0].T)
# print(cc)
cc = np.corrcoef(data[:, 0, :, 0].T)
cc.shape
cc
```
%% Cell type:code id: tags:
``` python
# cc1 =[[43,21,25,42,57,59],[99,65,79,75,87,81], [88,34,67,35,15,17]]
# ccc = np.corrcoef(cc1)
# ccc.shape
# ccc
# cccc =[[43,21,25],[99,65,79]]
# np.corrcoef(cccc)
np.sum([[0, 1], [2, 5], [2, 6]], axis=-1)
```
%% Cell type:code id: tags:
``` python
# single simulation: cc is (76, 76), so the FC-SC distance is a scalar
# (the (32, 32) reshape only applies to the 32 x 32 parameter sweep on the HPC)
l2 = np.sqrt(np.sum((cc - connectivity.weights)**2))
```
%% Cell type:code id: tags:
``` python
# l2 is a scalar here; plot the simulated FC matrix instead
imshow(cc)
```
This TVB RateML tutorial contains 4 notebooks with self-explanatory titles.
- 0_Install_TVB.ipynb: sets up the local and remote (on a GPU-enabled supercomputer) TVB installation.
- 1_RateML_ModGen.ipynb: runs RateML's code generator for local Python or remote CUDA models for TVB.
- 2_Simulate_Local.ipynb: enables simulation of the generated .py models in the EBRAINS environment.
- 3_Simulate_HPC.ipynb: enables simulation of the generated .c models on an HPC environment.
For a remote installation of TVB on one of the HPC sites, one needs to register [here](https://judoor.fz-juelich.de/register). Once registered, a project with compute time can be acquired via the Fenix infrastructure; details can be found [here](https://wiki.ebrains.eu/bin/view/Collabs/fenix-icei/).
If one only wants to do local simulations with RateML-generated Python models, one should do the first 3 steps of 0_Install_TVB.ipynb and can disregard 3_Simulate_HPC.ipynb.
The RateML-generated CUDA driver and model can be run standalone on any CUDA-capable (virtual) machine. For more information check out the [readme](https://github.com/the-virtual-brain/tvb-root/blob/master/tvb_library/tvb/rateML/README.md).