# Statistical inference

## How to run limits

- As a temporary workaround, if you want to run multiple commands without the delay of loading the environment each time, run:

  ```
  cmbEnv /bin/zsh # or /bin/bash
  ```

  Alternatively, add `cmbEnv` in front of each command, e.g.

  ```
  cmbEnv python3 -c 'print("hello")'
  ```
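
  A short sketch of the first pattern, assuming `cmbEnv` is available on your `PATH` as above; the inner commands are just the ones from the steps below:

  ```
  # Open a single shell with the environment already loaded ...
  cmbEnv /bin/bash
  # ... and, inside that shell, run as many commands as needed, e.g.:
  #   python3 StatInference/dc_make/create_datacards.py --input PATH_TO_SHAPES --output PATH_TO_CARDS --config PATH_TO_CONFIG
  #   law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log
  # Type `exit` to leave the environment shell when done.
  ```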

- Create datacards:

  ```
  python3 StatInference/dc_make/create_datacards.py --input PATH_TO_SHAPES --output PATH_TO_CARDS --config PATH_TO_CONFIG
  ```

  Available configurations (an example invocation is sketched after this list):

  - For X->HH->bbtautau Run 2: `StatInference/config/x_hh_bbtautau_run2.yaml`
  - For X->HH->bbWW Run 3: `StatInference/config/x_hh_bbww_run3.yaml`
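
  For example, a sketch of building the X->HH->bbtautau Run 2 cards, reusing the `PATH_TO_SHAPES`/`PATH_TO_CARDS` placeholders from above and prefixing the command with `cmbEnv` so it runs inside the environment:

  ```
  # Build datacards for X->HH->bbtautau Run 2 (replace the placeholders with real paths).
  cmbEnv python3 StatInference/dc_make/create_datacards.py \
      --input PATH_TO_SHAPES \
      --output PATH_TO_CARDS \
      --config StatInference/config/x_hh_bbtautau_run2.yaml
  ```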

- Run limits:

  ```
  law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log
  ```

  Hints (a combined example is sketched after this list):

  - use `--workflow htcondor` to submit to HTCondor (by default it runs locally)
  - add `--remove-output 4,a,y` to remove previous output files
  - add `--print-status 0` to get the status of the workflow (where `0` is the depth). Useful to get the output file name.
  - for more details see the cms-hh inference documentation
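
  For example, submitting the limit workflow to HTCondor and then querying its status to find the output files (flags as listed above):

  ```
  # Submit the limit workflow to HTCondor instead of running it locally.
  law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log \
      --workflow htcondor

  # Print the workflow status at depth 0, which also shows the output file names.
  law run PlotResonantLimits --version dev --datacards 'PATH_TO_CARDS/*.txt' --xsec fb --y-log \
      --print-status 0
  ```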

- Plot Pulls and Impacts:

  ```
  law run PlotPullsAndImpacts --version dev --datacards "PATH_TO_CARDS/specific_card.txt" --hh-model NO_STR --parameter-values r=1 --parameter-ranges r,-100,100 --method robust --PlotPullsAndImpacts-order-by-impact True --mc-stats True --PullsAndImpacts-custom-args="--expectSignal=1"
  ```

  Hints (a per-card loop is sketched after this list):

  - don't pass the datacards as a `*.txt` wildcard, because pulls and impacts should be run for each mass point separately
  - add `--remove-output 4,a,y` to remove previous output files
  - add `--print-status 0` to get the status of the workflow (where `0` is the depth). Useful to get the output file name.
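
  Since each mass point needs its own run, a sketch of looping over the individual datacards (assuming one `.txt` card per mass point under `PATH_TO_CARDS`, and that the task is invoked through `law run` as above):

  ```
  # Run pulls and impacts separately for every datacard (one card per mass point).
  for card in PATH_TO_CARDS/*.txt; do
      law run PlotPullsAndImpacts --version dev --datacards "$card" \
          --hh-model NO_STR --parameter-values r=1 --parameter-ranges r,-100,100 \
          --method robust --PlotPullsAndImpacts-order-by-impact True --mc-stats True \
          --PullsAndImpacts-custom-args="--expectSignal=1"
  done
  ```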