Commit b781d9a1 authored by Stefan Reck

minor changes and move stuff around

parent 666afcf1
1 merge request: !14 revive make_data_split

PKGNAME=orcasong
ALLNAMES = $(PKGNAME)
ALLNAMES += orcasong_contrib
install:

@@ -28,12 +28,6 @@ instructions on how to do this.
 The resulting DL files can already be used as input for networks!
 
-Step 2.2: Quickly define which files to concatenate
----------------------------------------------------
-If wanted, a list with all DL files that should go into one specific file
-can be produced with :ref:`make_data_split`. Here, the directories and run_ids
-making up the train and validation sets can be set in a config.
-
 Step 3: Concatenate
 -------------------
 Mandatory for training files, recommended for everything else.

@@ -47,6 +41,10 @@ See :ref:`concatenate` for details.
 X runs for your training set. Instead, choose runs randomly over
 the whole period.
 
+.. note::
+    For mixing e.g. neutrinos and muons, a list with all DL files that
+    should go into one specific file can be produced with
+    :ref:`make_data_split`.
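
For illustration, this amounts to a two-step flow (a sketch: ``make_dsplit``
and ``concatenate`` are the console scripts registered in setup.py below; the
file names are made up, and passing the list file directly to ``concatenate``
is an assumption, not verified here)::

    make_dsplit my_data_split_config.toml   # writes train/validation file lists
    concatenate train_files.txt             # merges the listed DL files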
 
 Step 4: Shuffle
 ---------------

@@ -8,9 +8,18 @@ Orcasong comes with some tools to further process data.
 Make_data_split
 ---------------
-Create datasets for different tasks (like classification or regression) from
-the files resulting from OrcaSong, based on the run_id. This is particularly
-helpful for a run-by-run data analysis or to generate equally large datasets
-per class. A toml config is used, in which the directories and ranges of runs
-to be considered can be specified, as well as the subdivision into training
-and validation sets. Detailed descriptions for the options available can be
-found in the example config in the subfolder make_data_split_configs. As
-output, a list in txt format with the filepaths belonging to one set is
-created that can be passed to the concatenate for creating one single file
-out of the many. In fact, with the option make_qsub_bash_files, scripts for
-the concatenation and shuffle, to be directly submitted on computing
-clusters, are created.
+Create datasets for different tasks (like classification or regression) from the files
+resulting from OrcaSong, based on the run_id. This is particularly helpful
+for a run-by-run data analysis or to generate equally large datasets per class.
+A toml config is used, in which the directories and ranges of runs to be considered
+can be specified, as well as the subdivision into training and validation sets.
+Detailed descriptions of the available options can be found in examples/make_data_split_config.toml.
+As output, a list in txt format with the filepaths belonging to one set is created,
+which can be passed to concatenate to create one single file out of the many.
+With the option make_qsub_bash_files, scripts for the concatenation and shuffle,
+ready to be submitted directly on computing clusters, are also created.
 
 Can be used via the command line::
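
The config might look roughly like this (a purely hypothetical sketch; the key
names below are illustrative only, and the real option names are documented in
examples/make_data_split_config.toml)::

    # hypothetical keys, for illustration only
    [input_group_1]
    dir = "/path/to/dl_files"
    run_ids_train = [1, 100]
    run_ids_validate = [101, 120]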

@@ -60,4 +69,4 @@ or import function for general postprocessing:
    postproc_file(output_filepath_concat)

There's also a faster (beta) version available, called h5shuffle2.
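
A minimal sketch of that import-based use (the module path is inferred from
the h5shuffle entry point in setup.py below; the file name is made up)::

    from orcasong.tools.postproc import postproc_file

    # Post-process/shuffle a concatenated file. The exact arguments and
    # defaults of postproc_file are not verified here.
    postproc_file("concatenated_train_set.h5")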

@@ -30,9 +30,9 @@ setup(
         'concatenate=orcasong.tools.concatenate:main',
         'h5shuffle=orcasong.tools.postproc:h5shuffle',
         'h5shuffle2=orcasong.tools.shuffle2:run_parser',
+        'make_dsplit=orcasong.tools.make_data_split:main',
         'plot_binstats=orcasong.plotting.plot_binstats:main',
         'make_nn_images=legacy.make_nn_images:main',
-        'make_dsplit=orcasong_contrib.data_tools.make_data_split.make_data_split:main']}
+    ]}
 )
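
Each entry point string maps a console command to a Python callable; after this
commit, make_dsplit resolves to main in orcasong.tools.make_data_split. A
minimal sketch of the equivalent direct call (assuming orcasong is installed)::

    # Same effect as invoking `make_dsplit` on the command line;
    # argument handling inside main() is assumed, not verified.
    from orcasong.tools.make_data_split import main

    main()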
__author__ = 'Stefan Reck, Michael Moser, Daniel Guderian'