From cea01e2f48dbb570ec4c292f7629a740aa32d967 Mon Sep 17 00:00:00 2001
From: Sharon Yates <30626642+sharoncy@users.noreply.github.com>
Date: Fri, 22 Mar 2024 11:47:11 +0100
Subject: [PATCH] Update README.md

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 90bf0f3..14db165 100644
--- a/README.md
+++ b/README.md
@@ -7,9 +7,9 @@ https://quint-workflow.readthedocs.io/en/latest/
 # Usage
 As input PyNutil requires:
 1. An alignment JSON generated with the QuickNII or VisuAlign software
-2. A segmented image for each brain section with the feature-of-interests displayed in a unique RGB colour. 
+2. A segmentation file for each brain section, with the features of interest displayed in a unique RGB colour (most common image formats are accepted, e.g. png, jpg, and jpeg).
 
-To run PyNutil, first fill in a test.json with the paths to the required input files. This includes the reference atlas volume, atlas label file, segmentation directory, and path to the alignment json (from QuickNII or VisuAlign).
+To run PyNutil, first fill in a test.json with the correct reference atlas and the paths to the required input files: the reference atlas volume, the atlas label file, the segmentation directory, and the alignment JSON (from QuickNII or VisuAlign).
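+
+For reference, the sketch below shows one way such a settings file could be put together. It is illustrative only: the key names, file names, and RGB colour value are placeholders, not PyNutil's exact schema.
+```
+# Illustrative sketch only: the key names and paths below are placeholders,
+# not the exact schema expected by PyNutil.
+import json
+
+settings = {
+    "volume_path": "annotation_volume.nrrd",            # reference atlas volume (placeholder)
+    "label_path": "atlas_labels.csv",                    # atlas label file (placeholder)
+    "segmentation_folder": "test_data/segmentations",    # directory of segmentation files (placeholder)
+    "alignment_json": "test_data/alignment.json",        # QuickNII/VisuAlign alignment JSON (placeholder)
+    "colour": [255, 0, 0],                               # RGB colour of the feature of interest (placeholder)
+}
+
+with open("test.json", "w") as f:
+    json.dump(settings, f, indent=4)
+```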
 
 Then run testOOP.py from outside the PyNutil directory (cd ..) to initiate the job.
 
@@ -24,6 +24,7 @@ pnt.quantify_coordinates()
 
 pnt.save_analysis("PyNutil/outputs/myResults")
 ```
+PyNutil generates a series of reports, which are saved to the specified output directory (here "PyNutil/outputs/myResults").
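+
+As a rough illustration, the reports can then be inspected with standard tools. The snippet below assumes the reports are CSV files and uses a placeholder filename; check the actual output directory for the real file names.
+```
+# Illustrative sketch only: assumes CSV output; the filename is a placeholder.
+import pandas as pd
+
+report = pd.read_csv("PyNutil/outputs/myResults/report.csv")
+print(report.head())
+```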
  
 # Acknowledgements
 PyNutil is developed at the Neural Systems Laboratory at the Institute of Basic Medical Sciences, University of Oslo, Norway, with support from the EBRAINS infrastructure and funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA).
-- 
GitLab