Distribution-dependent weighted union bound
Oneto L.; Ridella S.
2021-01-01
Abstract
In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk R(h) of a hypothesis h chosen from a set H of m hypotheses. The Union Bound (UB) allows one to state that P{L(R̂(h), δq_h) ≤ R(h) ≤ U(R̂(h), δp_h)} ≥ 1-δ, where R̂(h) is the empirical risk, if it is possible to prove that P{R(h) ≥ L(R̂(h), δ)} ≥ 1-δ and P{R(h) ≤ U(R̂(h), δ)} ≥ 1-δ, when q_h and p_h are chosen before seeing the data such that q_h, p_h ∈ [0, 1] and ∑_{h∈H}(q_h + p_h) = 1. If no a priori information is available, q_h and p_h are set to 1/(2m), i.e., distributed uniformly over H. This approach gives poor results since, in fact, a learning procedure targets only particular hypotheses, namely those with small empirical error, disregarding the others. In this work we set q_h and p_h in a distribution-dependent way, increasing the weight assigned to functions with small true risk. We call this proposal the Distribution-Dependent Weighted UB (DDWUB), and we derive sufficient conditions on the choice of q_h and p_h under which DDWUB outperforms or, in the worst case, degenerates into the UB. Furthermore, theoretical and numerical results show the applicability, the validity, and the potential of DDWUB.
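The weighted union bound described in the abstract can be illustrated with a minimal sketch. This is not the paper's DDWUB construction; it is a generic instantiation using one-sided Hoeffding bounds, where each hypothesis h receives confidence shares δq_h (lower bound) and δp_h (upper bound), with uniform weights q_h = p_h = 1/(2m) recovering the classical UB. The function names and the choice of Hoeffding's inequality are illustrative assumptions, not the paper's actual bounds.

```python
import math

def hoeffding_upper(emp_risk, n, delta_h):
    """One-sided Hoeffding upper bound on the true risk from n samples,
    holding with probability at least 1 - delta_h (illustrative choice)."""
    return emp_risk + math.sqrt(math.log(1.0 / delta_h) / (2.0 * n))

def hoeffding_lower(emp_risk, n, delta_h):
    """One-sided Hoeffding lower bound, symmetric to the upper bound."""
    return emp_risk - math.sqrt(math.log(1.0 / delta_h) / (2.0 * n))

def union_bound_intervals(emp_risks, n, delta, q=None, p=None):
    """Simultaneous [L, U] intervals for all m hypotheses via the union bound.
    Weights q_h, p_h must be data-independent and satisfy sum(q) + sum(p) = 1;
    the default q_h = p_h = 1/(2m) is the classical (unweighted) UB."""
    m = len(emp_risks)
    if q is None:
        q = [1.0 / (2 * m)] * m
    if p is None:
        p = [1.0 / (2 * m)] * m
    assert abs(sum(q) + sum(p) - 1.0) < 1e-9, "weights must sum to 1"
    return [(hoeffding_lower(r, n, delta * qh),
             hoeffding_upper(r, n, delta * ph))
            for r, qh, ph in zip(emp_risks, q, p)]
```

Shifting weight toward a hypothesis tightens its interval at the expense of the others, which is the mechanism the abstract exploits: favoring functions with small risk yields tighter bounds exactly where the learning procedure looks.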
File | Description | Access | Size | Format
---|---|---|---|---
entropy-23-00101.pdf | Journal article (publisher's version) | Closed access | 762.59 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.