Personal Study of ML
FPR (false positive rate) is the false-alarm rate: the fraction of negative samples that are wrongly flagged as positive.
\[FPR = \frac{FP}{TN + FP}\]$FP$ is the number of false positives and $TN$ is the number of true negatives.
Example 1
For bitwise accuracy $BA(w_1, w_2) \sim B(n, 0.5)/n$, an original (non-AI-generated) image $I_0$, watermark $w$, and decoder $D$, the FPR, i.e., the probability that the detector wrongly flags $I_0$ as an AI-generated image, is derived as follows.
Suppose $BA(D(I_0), w) = \frac{m}{n}$ for an original image $I_0$, where $n$ is the length of the watermark and $m$ is the number of matched bits between $D(I_0)$ and $w$. The key idea is that the service provider should pick the ground-truth watermark $w$ uniformly at random. Thus, $m$ is a random variable following a binomial distribution $B(n, 0.5)$.
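A quick Monte Carlo sketch of this key idea (the watermark length $n = 32$, the stand-in decoder output, and the trial count are all assumptions for illustration): matching a fixed decoded bit string against uniformly random watermarks gives $m \sim B(n, 0.5)$, so the expected bitwise accuracy is 0.5.

```python
import random

random.seed(0)
n = 32          # watermark length (assumed for illustration)
trials = 20000

# Stand-in for D(I_0): any fixed bit string works, since w is random.
decoded = [random.getrandbits(1) for _ in range(n)]

matches = []
for _ in range(trials):
    w = [random.getrandbits(1) for _ in range(n)]   # random ground-truth watermark
    m = sum(d == b for d, b in zip(decoded, w))     # matched bits
    matches.append(m)

mean_ba = sum(matches) / (trials * n)
print(f"empirical E[BA] = {mean_ba:.3f} (theory: 0.500)")
```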
\[\begin{aligned} FPR_{single} &= Pr(BA(D(I_0), w) > \tau) \\ &= Pr(m > n\tau) = \sum_{k= \lceil n\tau \rceil}^{n} \begin{pmatrix} n \\ k \end{pmatrix} \frac{1}{2^{n}} \end{aligned}\]To make $FPR_{single}(\tau) < \eta$, $\tau$ should be at least
\[\tau^* = \min \left\{ \tau : \sum_{k= \lceil n\tau \rceil}^n \begin{pmatrix} n \\ k \end{pmatrix} \frac{1}{2^{n}} < \eta \right\}\]Since the ground-truth watermark is picked uniformly at random, a non-watermarked image matches it only by chance, so its bitwise accuracy in the detection test is not very high. However, if the accuracy comes out too high, the detection is treated as a false alarm. Conversely, if an adversary drives the accuracy too low, that case is also counted as false.
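The threshold $\tau^*$ can be found by scanning the possible values $\tau = m/n$; a minimal sketch (the values $n = 32$ and $\eta = 10^{-4}$ are assumed for illustration, and the tail is summed from $\lceil n\tau \rceil$ as in the formula above):

```python
from math import ceil, comb

def fpr_single(n: int, tau: float) -> float:
    # Pr(m > n*tau) for m ~ Binomial(n, 0.5), summed from ceil(n*tau)
    return sum(comb(n, k) for k in range(ceil(n * tau), n + 1)) / 2 ** n

def tau_star(n: int, eta: float) -> float:
    # Smallest threshold tau = m/n (m integer) with FPR_single(tau) < eta.
    for m in range(n + 1):
        if fpr_single(n, m / n) < eta:
            return m / n
    return 1.0

n, eta = 32, 1e-4   # assumed example values
t = tau_star(n, eta)
print(f"tau* = {t:.5f}, FPR_single(tau*) = {fpr_single(n, t):.2e}")
```

For $n = 32$ and $\eta = 10^{-4}$ this gives $\tau^* = 27/32$, since the binomial tail from $k = 27$ is about $5.7 \times 10^{-5}$ while the tail from $k = 26$ is about $2.7 \times 10^{-4}$.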
\[\begin{aligned} FPR_{double} &= Pr(BA(D(I_0), w) > \tau \text{ or } BA(D(I_0), w) < 1 - \tau) \\ &= 2\,Pr(m > n\tau) = 2\sum_{k= \lceil n\tau \rceil}^{n} \begin{pmatrix} n \\ k \end{pmatrix} \frac{1}{2^{n}} \end{aligned}\]By the symmetry of $B(n, 0.5)$, the lower tail $Pr(m < n(1-\tau))$ has the same mass as the upper tail, hence the factor of 2.
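The two-tailed FPR follows directly from the single-tailed one; a minimal sketch, reusing the assumed example values $n = 32$ and $\tau = 27/32$:

```python
from math import ceil, comb

def fpr_double(n: int, tau: float) -> float:
    # Pr(BA > tau or BA < 1 - tau): by symmetry of Binomial(n, 0.5),
    # both tails carry equal mass, so this is twice the upper tail.
    tail = sum(comb(n, k) for k in range(ceil(n * tau), n + 1)) / 2 ** n
    return 2 * tail

n, tau = 32, 27 / 32   # assumed example values
print(f"FPR_double = {fpr_double(n, tau):.2e}")
```

Note that for a fixed $\tau$, the double-tailed detector has exactly twice the FPR of the single-tailed one, so $\tau$ must be set slightly higher to meet the same $\eta$.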
- For AI-generated content: embed the watermark, run detection, and compare the similarity of the decoded bits against a randomly chosen watermark.
TPR is the probability of a positive test result conditioned on truly being positive. Also called sensitivity.
\[TPR = \frac{TP}{TP + FN}\]TNR is the probability of a negative test result conditioned on truly being negative. Also called specificity.
\[TNR = \frac{TN}{TN + FP}\]
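The three rates above can be computed together from confusion-matrix counts; a minimal sketch (the counts are made-up illustrative numbers, not from any experiment):

```python
def rates(tp: int, fn: int, tn: int, fp: int) -> dict:
    # Confusion-matrix rates from raw counts.
    return {
        "TPR (sensitivity)": tp / (tp + fn),
        "TNR (specificity)": tn / (tn + fp),
        "FPR (false alarm)": fp / (tn + fp),
    }

r = rates(tp=90, fn=10, tn=80, fp=20)   # illustrative counts
print(r)
```

Note that $FPR = 1 - TNR$, since both are normalized by the same negative count $TN + FP$.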