
ISYE 6412 HW #2

Name: Haoming Jiang


##Problem #1
###(a)
$R_{\delta_c}(\theta) = E(L(\theta,\delta_c(X))) = r(2c\sigma) - E(I(\theta\in [\bar{X}_n-c\sigma,\bar{X}_n+c\sigma])) \\ = r(2c\sigma) - P(\bar{X}_n -\theta \in [-c\sigma,c\sigma]) \\ = r(2c\sigma) - P(\frac{\sqrt{n}(\bar{X}_n-\theta)}{\sigma} \in [-\sqrt{n}c,+\sqrt{n}c])$.
With the fact that $\frac{\sqrt{n}(\bar{X}_n-\theta)}{\sigma} \sim N(0,1)$, $R_{\delta_c}(\theta) = r(2c\sigma) - P(Z \in [-\sqrt{n}c,+\sqrt{n}c]) = 2cr\sigma-2\Phi(c\sqrt{n}) + 1$.
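As a sanity check on the closed form, here is a minimal Monte Carlo sketch (the values of $n$, $\sigma$, $r$, $c$, $\theta$ are arbitrary choices, not from the problem; `numpy` and `scipy` are assumed to be available):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma, r, c, theta = 25, 2.0, 0.3, 0.5, 1.0  # hypothetical values

# Closed form: R = 2*c*r*sigma - 2*Phi(c*sqrt(n)) + 1
closed_form = 2 * c * r * sigma - 2 * norm.cdf(c * np.sqrt(n)) + 1

# Monte Carlo: loss = r*(2*c*sigma) - 1{theta in [xbar - c*sigma, xbar + c*sigma]}
xbar = rng.normal(theta, sigma / np.sqrt(n), size=1_000_000)
loss = r * (2 * c * sigma) - (np.abs(xbar - theta) <= c * sigma)
print(closed_form, loss.mean())  # the two values should agree to ~3 decimals
```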
###(b)
$\frac{d}{dc}R_{\delta_c}(\theta) = 2r\sigma - 2\phi(c\sqrt{n})\cdot\sqrt{n}$, where $\phi$ is the density of $Z$. So $\frac{d}{dc}R_{\delta_c}(\theta) = 2r\sigma-\frac{2\sqrt{n}}{\sqrt{2\pi}}e^{-nc^2/2}$.
###(c)
Since $e^{-nc^2/2} \leq 1$, the derivative is positive for all $c$ whenever $r\sigma > \sqrt{n}/\sqrt{2\pi}$. In that case $R_{\delta_c}(\theta)$ is strictly increasing in $c$, so the risk is minimized at $c=0$.
###(d)
When $r\sigma \leq \sqrt{n}/\sqrt{2\pi}$, the risk is minimized where the derivative vanishes. Setting $\frac{d}{dc}R_{\delta_c}(\theta) = 2r\sigma - \frac{2\sqrt{n}}{\sqrt{2\pi}}e^{-nc^2/2} = 0$ gives $-nc^2/2 = \log\left(\sqrt{\frac{2\pi}{n}}r\sigma\right)$, so $c_{opt} = \sqrt{\frac{2}{n}\log\left(\frac{\sqrt{n/(2\pi)}}{r\sigma}\right)}$ (the argument of the logarithm is $\geq 1$ under the condition above, so the square root is real).
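This can be checked numerically; the sketch below (with hypothetical $n$, $\sigma$, $r$ satisfying $r\sigma < \sqrt{n/(2\pi)}$) compares $c_{opt}$ against a brute-force grid minimization of the closed-form risk from (a):

```python
import numpy as np
from scipy.stats import norm

n, sigma, r = 25, 2.0, 0.3  # hypothetical; r*sigma = 0.6 < sqrt(25/(2*pi)) ≈ 1.99

def risk(c):
    """Closed-form risk from (a): 2*c*r*sigma - 2*Phi(c*sqrt(n)) + 1."""
    return 2 * c * r * sigma - 2 * norm.cdf(c * np.sqrt(n)) + 1

c_opt = np.sqrt((2 / n) * np.log(np.sqrt(n / (2 * np.pi)) / (r * sigma)))
grid = np.linspace(0, 2, 200_001)
print(c_opt, grid[np.argmin(risk(grid))])  # both ≈ 0.3100
```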
###(e)
In this case $c_{opt}=z_{\alpha/2}/\sqrt{n}$, so $\log\left(\frac{\sqrt{n/(2\pi)}}{r^*\sigma}\right) = z_{\alpha/2}^2/2$, which means $r^* = \frac{\sqrt{n}}{\sqrt{2\pi}\,\sigma}e^{-z_{\alpha/2}^2/2}$.
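Plugging $r^*$ back into the formula from (d) should recover $c_{opt} = z_{\alpha/2}/\sqrt{n}$ exactly; a quick check with hypothetical $\alpha$, $n$, $\sigma$:

```python
import numpy as np
from scipy.stats import norm

alpha, n, sigma = 0.05, 25, 2.0                   # hypothetical values
z = norm.ppf(1 - alpha / 2)                       # z_{alpha/2} ≈ 1.96
r_star = np.sqrt(n / (2 * np.pi)) * np.exp(-z**2 / 2) / sigma
c_opt = np.sqrt((2 / n) * np.log(np.sqrt(n / (2 * np.pi)) / (r_star * sigma)))
print(c_opt, z / np.sqrt(n))                      # both ≈ 0.39199
```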
##Problem #2
###(a)
$S = \{0,1\}$
$\Omega$: $P(\text{head}) = 1/(2+\theta),\ P(\text{tail}) = 1 - 1/(2+\theta)$, $\theta \in \{0,1\}$
$D = \{d_0,d_1\}$
$L_\theta(d)= I(d\ \text{is wrong})$
###(b)
$R_\delta(\theta) = E_\theta(I(\delta\ \text{is wrong})) = P_\theta(\delta\ \text{reaches the wrong decision})$
$R_{\delta_1}(\theta) = \theta$
$R_{\delta_2}(\theta) = 1-\theta$
$R_{\delta_3}(\theta) = 1/(2+\theta)$
$R_{\delta_4}(\theta) = 1-1/(2+\theta)$
###(c)
(I) $\delta_1,\ \delta_2,\ \delta_3$ are admissible; $\delta_4$ is inadmissible, since $\delta_3$ dominates it ($R_{\delta_3}(0) = R_{\delta_4}(0) = 1/2$ while $R_{\delta_3}(1) = 1/3 < 2/3 = R_{\delta_4}(1)$).
(II) $r_{\delta_1}(\pi) = 0.1$
$r_{\delta_2}(\pi) = 0.9$
$r_{\delta_3}(\pi) = 29/60$
$r_{\delta_4}(\pi) = 31/60$
$\delta_1$ is Bayes.
(III) $r_{\delta_1}(\pi) = 0.6$
$r_{\delta_2}(\pi) = 0.4$
$r_{\delta_3}(\pi) = 0.4$
$r_{\delta_4}(\pi) = 0.6$
$\delta_2$ and $\delta_3$ are Bayes.
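These Bayes risks follow mechanically from the risk functions in (b). The priors themselves are not restated above; the numbers in (II) and (III) are consistent with $\pi(\theta=0) = 0.9$ and $\pi(\theta=0) = 0.4$ respectively, which the sketch below assumes:

```python
from fractions import Fraction as F

# Risk functions from (b), as functions of theta in {0, 1}
risks = {
    "d1": lambda t: F(t),
    "d2": lambda t: 1 - F(t),
    "d3": lambda t: F(1, 2 + t),
    "d4": lambda t: 1 - F(1, 2 + t),
}
for pi0 in (F(9, 10), F(2, 5)):  # inferred priors for (II) and (III)
    bayes = {k: pi0 * R(0) + (1 - pi0) * R(1) for k, R in risks.items()}
    print(pi0, bayes)  # (II): d1 smallest; (III): d2 and d3 tie at 2/5
```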
###(d)
Suppose $\delta_1$ is also Bayes for some prior, and write the prior as $P_\pi(p = 1/3) = \alpha$ (i.e., $P_\pi(\theta = 1) = \alpha$). Then
$r_{\delta_1}(\pi) = \alpha$
$r_{\delta_2}(\pi) = 1-\alpha$
$r_{\delta_3}(\pi) = (1-\alpha)/2 + \alpha/3$
$r_{\delta_4}(\pi) = (1-\alpha)/2 + 2\alpha/3$
and $r_{\delta_1}(\pi)$ is the smallest if and only if $\alpha \leq \frac{3}{7}$; the binding comparison is with $\delta_3$, since $\alpha \leq (1-\alpha)/2 + \alpha/3 \Leftrightarrow \alpha \leq 3/7$, while the comparisons with $\delta_2$ and $\delta_4$ give the weaker constraints $\alpha \leq 1/2$ and $\alpha \leq 3/5$.
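A small exact sweep confirming the threshold (using `fractions` to avoid rounding):

```python
from fractions import Fraction as F

def bayes(alpha):
    """Bayes risks of the four rules as functions of alpha = P(theta = 1)."""
    return {
        "d1": alpha,
        "d2": 1 - alpha,
        "d3": (1 - alpha) / 2 + alpha / 3,
        "d4": (1 - alpha) / 2 + 2 * alpha / 3,
    }

for alpha in (F(3, 7) - F(1, 100), F(3, 7), F(3, 7) + F(1, 100)):
    b = bayes(alpha)
    print(alpha, b["d1"] <= min(b.values()))  # True, True, False
```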
##Problem #3
$R_\delta(f_i) = \int L(f_i,\delta(x))f_i(x)dx$
$r_\delta(\pi) = R_\delta(f_0) \pi(f_0) + R_\delta(f_1) \pi(f_1)\\
= \int (L(f_0,\delta(x))f_0(x)\pi(f_0)+L(f_1,\delta(x))f_1(x)\pi(f_1))dx$.
For convenience, denote the integrand $L(f_0,\delta(x))f_0(x)\pi(f_0)+L(f_1,\delta(x))f_1(x)\pi(f_1)$ by $h(x)$.
For a given $x$: if $\delta(x) = f_0$, then $h(x) = w_1f_1(x)\pi(f_1)$; if $\delta(x) = f_1$, then $h(x) = w_0f_0(x)\pi(f_0)$.
To minimize $r_\delta(\pi)$, choose $\delta(x)$ to minimize $h(x)$ pointwise:
$\delta(x) = f_0$ when $w_1f_1(x)\pi(f_1) < w_0f_0(x)\pi(f_0) \Leftrightarrow \frac{f_1(x)}{f_0(x)} < \frac{w_0\pi(f_0)}{w_1\pi(f_1)}$;
$\delta(x) = f_1$ when $w_1f_1(x)\pi(f_1) > w_0f_0(x)\pi(f_0) \Leftrightarrow \frac{f_1(x)}{f_0(x)} > \frac{w_0\pi(f_0)}{w_1\pi(f_1)}$;
either $\delta(x) = f_0$ or $\delta(x) = f_1$ when $w_1f_1(x)\pi(f_1) = w_0f_0(x)\pi(f_0) \Leftrightarrow \frac{f_1(x)}{f_0(x)} = \frac{w_0\pi(f_0)}{w_1\pi(f_1)}$.
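To see the rule in action, here is a minimal Monte Carlo sketch with hypothetical choices $f_0 = N(0,1)$, $f_1 = N(1,1)$, $w_0 = 2$, $w_1 = 1$, $\pi(f_0) = 0.3$ (none of these come from the problem); the Bayes threshold $k = \frac{w_0\pi(f_0)}{w_1\pi(f_1)}$ should give a smaller estimated risk than perturbed thresholds:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
w0, w1, pi0 = 2.0, 1.0, 0.3            # hypothetical weights and prior
k = (w0 * pi0) / (w1 * (1 - pi0))      # Bayes threshold for f1(x)/f0(x)

def mc_bayes_risk(threshold, m=500_000):
    """Estimate the Bayes risk of the rule: decide f1 iff LR > threshold."""
    truth_f1 = rng.random(m) < (1 - pi0)             # True means x ~ f1
    x = rng.normal(truth_f1.astype(float), 1.0)      # f0 = N(0,1), f1 = N(1,1)
    lr = norm.pdf(x, 1, 1) / norm.pdf(x, 0, 1)
    decide_f1 = lr > threshold
    loss = np.where(truth_f1, w1 * (~decide_f1), w0 * decide_f1)
    return loss.mean()

print(mc_bayes_risk(k), mc_bayes_risk(2 * k), mc_bayes_risk(k / 2))  # k is best
```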
##Problem #4
###(a)
This follows at once from our discussion in class that a procedure $\delta$ is Bayes relative to $\pi$ if and only if, for every $x$, it assigns a decision $\delta(x)$ which minimizes (over $D$) $h_\pi^*(x,d) = \int_\Omega L(\theta,d)p_\theta(x)\pi(\theta)d\theta$,
or equivalently minimizes $h_\pi(x,d) = \int_\Omega L(\theta,d) \frac{p_\theta(x)\pi(\theta)}{m(x)}d\theta = \int_\Omega L(\theta,d) \pi(\theta|x)d\theta$.
###(b)
When $r=2$, we have $h_\pi(x,d) = E(\theta^2|x) - 2E(\theta|x)d+d^2$, which is minimized at $d=E(\theta|x)$, the posterior mean.
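A grid-search sketch on a small made-up discrete posterior illustrates this: the minimizer of $E(|\theta-d|^2 \mid x)$ over a fine grid matches the posterior mean.

```python
import numpy as np

theta = np.array([0.0, 1.0, 3.0])
post = np.array([0.2, 0.5, 0.3])        # hypothetical posterior pi(theta | x)
grid = np.linspace(-1, 4, 100_001)

# h(d) = E[(theta - d)^2 | x] evaluated on the grid
h = ((theta[None, :] - grid[:, None]) ** 2 * post).sum(axis=1)
print(grid[np.argmin(h)], (theta * post).sum())   # both ≈ 1.4
```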
###(c)
When $r=1$, we have
$h_\pi(x,d) = \int_\Omega |\theta-d| \pi(\theta|x)d\theta = \int_{\theta>d} (\theta-d) \pi(\theta|x)d\theta+\int_{\theta<d} (d-\theta) \pi(\theta|x)d\theta$,
so
$\frac{\partial h_\pi(x,d)}{\partial d} = -\int_{\theta>d} \pi(\theta|x)d\theta+\int_{\theta<d} \pi(\theta|x)d\theta = P(\theta<d|x)-P(\theta>d|x)$,
which is monotonically increasing in $d$ and equals $0$ when $P(\theta<d|x) = P(\theta>d|x)$. In other words, $h_\pi(x,d)$ is minimized when $d$ is a median of the posterior distribution.
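The same made-up posterior illustrates the $r=1$ case: under absolute-error loss the grid minimizer is the posterior median rather than the mean.

```python
import numpy as np

theta = np.array([0.0, 1.0, 3.0])
post = np.array([0.2, 0.5, 0.3])        # hypothetical posterior pi(theta | x)
grid = np.linspace(-1, 4, 100_001)

# h(d) = E[|theta - d| | x]; the posterior median is 1.0 (cdf first reaches 0.5 there)
h = (np.abs(theta[None, :] - grid[:, None]) * post).sum(axis=1)
print(grid[np.argmin(h)])               # ≈ 1.0, not the mean 1.4
```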