KM3BUU
======

.. image:: https://git.km3net.de/simulation/km3buu/badges/master/pipeline.svg
    :target: https://git.km3net.de/simulation/km3buu/pipelines

.. image:: https://git.km3net.de/simulation/km3buu/badges/master/coverage.svg

The KM3BUU project is an integrated environment for GiBUU studies within the
KM3NeT experiment.

Installation
------------

The main `KM3BUU` project is python based and uses ``singularity`` for running
GiBUU. Python version 3.5 or higher and singularity version 3.3 or higher are
required (e.g. `v3.4 <https://sylabs.io/guides/3.4/user-guide/>`__).
In the default workflow the singularity image is built remotely via the KM3NeT
docker server, but it can also be built locally (see :ref:`Local Machine`).

`KM3BUU` is not provided via a python package manager and can be installed as
follows. First the repository needs to be cloned:

::

    git clone https://git.km3net.de/simulation/km3buu
    cd km3buu

After downloading the repository the package can be installed via:

::

    pip install -e .

GiBUU Only Usage
~~~~~~~~~~~~~~~~

The repository design also allows usage without the python environment. In this
scenario the singularity container providing the GiBUU environment has to be
built first. This can be done locally if root privileges are available:

::

    make build

If root privileges are not available, e.g. when running `KM3BUU` on a compute
cluster, the container can also be built remotely via the KM3NeT docker server:

::

    make buildremote

If the python environment is used afterwards, the file path of the container
can be written to the configuration file, so the container does not have to be
built again.

For running GiBUU the jobcards have to be moved to a sub-folder within the
jobcards folder of the project. Each sub-folder represents a set of jobcards,
which can be passed to GiBUU by:

::

    make run CARDSET=examples

This specific command runs all jobcards within the ``jobcards/examples`` folder
and stores the output inside the folder ``output``. The folder structure of the
``jobcards`` folder is applied to the output.

Theory
------

In order to retrieve correct results and provide correct KM3NeT weights (w2),
the treatment of the GiBUU weights is an important step.

A brief description of the GiBUU weights and how to calculate actual cross
sections is given on the `GiBUU Homepage
<https://gibuu.hepforge.org/trac/wiki/perWeight>`__, and a more detailed
description of the calculation can be found in Chapter 8.3 of the `PhD thesis
of Tina Leitner <https://inspirehep.net/literature/849921>`__.

As mentioned in the description of the output flux file in the `documentation
<https://gibuu.hepforge.org/Documentation/code/init/neutrino/initNeutrino_f90.html#robo1685>`__,
this is not taken into account inside the weights. Following that description,
the GiBUU event weights can be converted to a binned cross section via

.. math::

    \frac{d\sigma}{dE} = \frac{\sum_{i\in I_\text{bin}} w_i}{\Delta E}\cdot\frac{1}{E\Phi},

where :math:`\Phi` is the simulated flux. As the weights are given for each run
individually, the weights also have to be divided by the number of runs.
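As a purely illustrative sketch of this prescription (not part of the KM3BUU
API), the per-event weights can be histogrammed with ``numpy`` and divided by
the bin width, the flux normalisation and the number of runs. All names below
(``energies``, ``weights``, ``flux_norm``, ``n_runs``) are placeholders:

.. code-block:: python3

    import numpy as np

    def binned_xsec(energies, weights, bin_edges, flux_norm, n_runs):
        # Sum the per-event GiBUU weights falling into each energy bin
        summed, _ = np.histogram(energies, bins=bin_edges, weights=weights)
        # Divide by the bin width, the flux normalisation term from the
        # formula above and the number of runs
        return summed / np.diff(bin_edges) / flux_norm / n_runs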
Tutorial
--------

The python framework is built around the GiBUU workflow, i.e. a jobcard is
processed and the output files are written out. The jobcards are technically
FORTRAN namelists and can be created using a `Jobcard` object. In the example
this is done by loading an existing jobcard:

.. code-block:: python3

    >>> from km3buu.jobcard import Jobcard, read_jobcard
    >>> jc = read_jobcard("jobcards/examples/example.job")

In the next step the jobcard is processed:

.. code-block:: python3

    >>> from km3buu.ctrl import run_jobcard
    >>> run_jobcard(jc, "./output")
    0

Finally, the output can be parsed using a `GiBUUOutput` object:

.. code-block:: python3

    >>> from km3buu.output import GiBUUOutput
    >>> data = GiBUUOutput("./output")
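Putting the three steps together, a minimal end-to-end script could look like
the following sketch; it only combines the calls shown above, and the jobcard
path and output directory are example values that may need to be adapted:

.. code-block:: python3

    from km3buu.jobcard import read_jobcard
    from km3buu.ctrl import run_jobcard
    from km3buu.output import GiBUUOutput

    # Load an existing example jobcard (a FORTRAN namelist)
    jc = read_jobcard("jobcards/examples/example.job")

    # Run GiBUU on the jobcard; a return value of 0 indicates success
    if run_jobcard(jc, "./output") == 0:
        # Parse the GiBUU output files written to the output directory
        data = GiBUUOutput("./output")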