Artefact in early run trigger rates
At run start, a dip in trigger rates is sometimes observed.
After discussion with Alex Enzenhöfer, it appears that these dips are caused by the Datafilters buffering data, which then cannot contribute to the trigger rate. When the buffered data is later released, a corresponding excess appears in the monitoring.
This is an issue because the monitoring shows a dip followed by an excess in the trigger rates, which does not reflect the real observed trigger rate.
Would it be possible to
- Retroactively correct the trigger rates, so that at least older data (>30 min ago) are accurate? or
- Add a marker indicating the region in which the trigger rates are expected to be affected by this run start procedure, so that at least these dips can be taken with a grain of salt.
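
The second option could be sketched roughly as follows. This is a hypothetical illustration only, not the actual monitoring code: the function name, the sample format, and the 30-minute window length are all assumptions.

```python
from datetime import datetime, timedelta

# Assumed length of the window after run start in which the buffering
# artefact (dip followed by excess) may distort the trigger rate.
ARTEFACT_WINDOW = timedelta(minutes=30)

def flag_run_start_samples(samples, run_start, window=ARTEFACT_WINDOW):
    """Attach a 'suspect' flag to trigger-rate samples.

    samples: list of (timestamp, rate) tuples.
    Returns (timestamp, rate, suspect) tuples, where suspect is True
    for samples within `window` of run start, i.e. where the dip/excess
    artefact is expected.
    """
    return [
        (t, rate, (t - run_start) < window)
        for t, rate in samples
    ]
```

A monitoring display could then grey out or annotate the flagged samples instead of (or in addition to) a retroactive correction.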