A couple of exercises from *Functions of Random Variables* by Xiaojing Ye (Department of Mathematics & Statistics, Georgia State University).
**Example.** If the pdf of $X$ is

$$f(x)= \begin{cases}6x(1-x), & \text{if } 0<x<1 \\ 0, & \text{otherwise}\end{cases}$$

find the probability density function of $Y=X^3$.
$$\Pr[Y \le k] = \Pr[X^3 \le k] = \Pr[X \le k^{1/3}]$$

We know that

$$\int_0^{k^{1/3}} 6x(1-x)\,dx = \int_0^{k^{1/3}} (6x - 6x^2)\,dx = 3(k^{1/3})^2 - 2(k^{1/3})^3 = 3k^{2/3} - 2k.$$
Differentiating with respect to $k$, the pdf of $Y$ is $g(k) = 2k^{-1/3} - 2$ for $k \in (0,1)$.
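As a quick sanity check (my own addition, not part of the original exercise), here is a minimal Monte Carlo sketch in Python. It assumes only `numpy`, and uses the fact that $f(x)=6x(1-x)$ on $(0,1)$ is exactly the Beta(2,2) density, so $X$ can be sampled directly:

```python
# Monte Carlo check of g(k) = 2*k**(-1/3) - 2 for Y = X**3.
# Assumption: f(x) = 6x(1-x) on (0,1) is the Beta(2,2) density.
import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2, 2, size=1_000_000)  # samples of X
y = x**3                            # samples of Y

hist, edges = np.histogram(y, bins=np.linspace(0.05, 0.99, 50), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = 2 * centers**(-1/3) - 2         # derived density
print(np.max(np.abs(hist - g)))     # roughly zero (coarser near 0, where g blows up)
```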
**Example.** Let $X$ be a random variable with pdf $f(x)$ and $Y=|X|$. Show that the pdf of $Y$ is

$$g(y)= \begin{cases}f(y)+f(-y), & \text{if } y>0 \\ 0, & \text{otherwise}\end{cases}$$
Let's consider, for $k > 0$,

$$\Pr[Y \le k] = \Pr[|X| \le k] = \Pr[-k \le X \le k] = \int_{-k}^{k} f(y)\,dy.$$

Then if we take the derivative with respect to $k$, we will have

$$g(k) = \frac{d}{dk}\Pr[Y \le k] = \frac{d}{dk}\int_{-k}^{k} f(y)\,dy = f(k) - (-f(-k)) = f(k) + f(-k).$$
**Example.** Use the previous result to find the pdf of $Y=|X|$ where $X$ is the standard normal RV.

For $X$: $f(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)$.

So for $Y$:

$$g(y) = \sqrt{\frac{2}{\pi}} \exp\left(-\frac{y^2}{2}\right) \quad \text{for } y > 0,$$

the half-normal distribution.
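A quick numerical sketch of this (again my own check, assuming only `numpy`): take absolute values of standard normal samples and compare the histogram with the derived density.

```python
# Check the half-normal density g(y) = sqrt(2/pi) * exp(-y**2/2), y > 0.
import numpy as np

rng = np.random.default_rng(0)
y = np.abs(rng.standard_normal(1_000_000))  # Y = |X|

hist, edges = np.histogram(y, bins=np.linspace(0, 3, 40), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = np.sqrt(2 / np.pi) * np.exp(-centers**2 / 2)
print(np.max(np.abs(hist - g)))  # small, up to Monte Carlo noise
```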
**Example.** Suppose the joint pdf of $(X_1, X_2)$ is

$$f(x_1, x_2)= \begin{cases}6 e^{-3x_1-2x_2}, & \text{if } x_1, x_2>0 \\ 0, & \text{otherwise}\end{cases}$$

Find the pdf of $Y=X_1+X_2$.
Let's consider, for $k > 0$ (writing $x$ for $x_1$ and $y$ for $x_2$),

$$\begin{aligned}
\Pr[Y \le k] = \Pr[X_1 + X_2 \le k] &= \int_0^{k} \int_0^{k-y} 6 e^{-3x-2y}\,dx\,dy \\
&= 6\int_0^k e^{-2y} \int_0^{k-y} e^{-3x}\,dx\,dy \\
&= -2\int_0^k e^{-2y}\, e^{-3x}\Big|_0^{k-y}\,dy \\
&= -2\int_0^k e^{-2y}\left(e^{-3(k-y)} - 1\right)dy \\
&= -2\int_0^k \left(e^{-3k+y} - e^{-2y}\right)dy \\
&= -2\left(e^{-3k+y} + \tfrac{1}{2}e^{-2y}\right)\Big|_0^k \\
&= -2\left(e^{-2k} + \tfrac{1}{2}e^{-2k} - e^{-3k} - \tfrac{1}{2}\right) \\
&= -3e^{-2k} + 2e^{-3k} + 1.
\end{aligned}$$

And if we take the derivative with respect to $k$, we will have the pdf of $Y$: $g(k) = 6e^{-2k} - 6e^{-3k}$ for $k > 0$.
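One more self-check sketch (my own, not in the original): the joint density factors as $(3e^{-3x_1})(2e^{-2x_2})$, i.e. independent exponentials with rates 3 and 2, which `numpy` can sample directly (its `exponential` takes the scale, the reciprocal of the rate).

```python
# Check g(k) = 6*exp(-2k) - 6*exp(-3k) for Y = X1 + X2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.exponential(1/3, n) + rng.exponential(1/2, n)  # rates 3 and 2

hist, edges = np.histogram(y, bins=np.linspace(0, 3, 40), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = 6 * np.exp(-2 * centers) - 6 * np.exp(-3 * centers)
print(np.max(np.abs(hist - g)))  # small, up to Monte Carlo noise
```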
Now let's look at some discrete distributions.
**Example.** Let $X$ be the number of heads in tossing a fair coin 4 times. Then the pmf $f$ of $X$ is

| $x$ | $f(x)$ |
|---|---|
| 0 | 0.0625 |
| 1 | 0.25 |
| 2 | 0.375 |
| 3 | 0.25 |
| 4 | 0.0625 |
Find the pmf $g$ of $Y=\frac{1}{1+X}$.
The function $u(x) = \frac{1}{1+x}$ is one-to-one on $[0, \infty)$, so each value of $X$ maps to a distinct value of $Y$ and the probabilities carry over unchanged:

$$y = 1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \tfrac{1}{5}$$

and

$$g(y) = \tfrac{1}{16}, \tfrac{1}{4}, \tfrac{3}{8}, \tfrac{1}{4}, \tfrac{1}{16}$$

respectively.
**Example.** Now let's find the pmf of $Z = (X-2)^2$. This map is not one-to-one, so the probabilities of $x$-values with the same image add up: $z=1$ comes from $x=1,3$ and $z=4$ from $x=0,4$. Hence

$$z = 0, 1, 4$$

$$f_Z(z) = \tfrac{3}{8}, \tfrac{1}{2}, \tfrac{1}{8}$$
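Both discrete transformations amount to pushing each $x$-probability onto its image and summing collisions. A small sketch of that idea (the `pushforward` helper is my own name, exact arithmetic via `fractions`):

```python
# pmf of a transformed discrete RV: push each probability onto u(x),
# summing whenever two x-values collide (as with z = (x-2)**2).
from collections import defaultdict
from fractions import Fraction

f = {0: Fraction(1, 16), 1: Fraction(1, 4), 2: Fraction(3, 8),
     3: Fraction(1, 4), 4: Fraction(1, 16)}

def pushforward(pmf, u):
    g = defaultdict(Fraction)
    for x, p in pmf.items():
        g[u(x)] += p
    return dict(g)

print(pushforward(f, lambda x: Fraction(1, 1 + x)))  # one-to-one: same probs
print(pushforward(f, lambda x: (x - 2) ** 2))        # {0: 3/8, 1: 1/2, 4: 1/8}
```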
**Theorem.** Let $X$ be continuous with pdf $f(x)$ and $Y=u(X)$, where $u$ is strictly monotone on $\operatorname{supp}(f)$. Then the pdf $g(y)$ of $Y$ is

$$g(y)=f(w(y))\,|w'(y)|$$

where $w$ is the inverse of $u$ (i.e., $y=u(x)$ iff $x=w(y)$).
Proof. Assume first that $u$ is monotonically increasing on the support of $f$. Then

$$\Pr[Y \le y] = \Pr[u(X) \le y] = \Pr[X \le w(y)].$$

Taking the derivative of both sides with respect to $y$ (chain rule, with $w'(y) > 0$), we have

$$g(y) = f(w(y))\,w'(y) = f(w(y))\,|w'(y)|.$$

If instead $u$ is decreasing, then $\Pr[Y \le y] = \Pr[X \ge w(y)] = 1 - \Pr[X \le w(y)]$, and differentiating gives $g(y) = -f(w(y))\,w'(y) = f(w(y))\,|w'(y)|$ since $w'(y) < 0$, the same result.
**Example.** Let $X$ be an exponential RV with parameter $\lambda$, i.e. $f(x) = \lambda e^{-\lambda x}$ for $x > 0$. Find the pdf of $Y = \sqrt{X}$.

Here $Y = u(X)$ with $u(x) = \sqrt{x}$, so $X = w(Y)$ with $w(y) = y^2$ and $w'(y) = 2y$. By the theorem,

$$g(y) = f(y^2)\,|2y| = 2\lambda y\, e^{-\lambda y^2} \quad \text{for } y > 0.$$
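A histogram sketch of this result (my own check; the rate $\lambda = 1.5$ is an arbitrary choice for the test):

```python
# Check g(y) = 2*lam*y*exp(-lam*y**2) for Y = sqrt(X), X exponential(rate lam).
import numpy as np

lam = 1.5  # arbitrary rate chosen for the check
rng = np.random.default_rng(0)
y = np.sqrt(rng.exponential(1 / lam, 1_000_000))  # numpy takes the scale

hist, edges = np.histogram(y, bins=np.linspace(0.05, 2.5, 40), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = 2 * lam * centers * np.exp(-lam * centers**2)
print(np.max(np.abs(hist - g)))  # small, up to Monte Carlo noise
```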
**Example.** If $F(x)$ is the distribution function of the continuous RV $X$, find the pdf of $Y=F(X)$.

Take $u(x) = F(x)$, so $w(y) = F^{-1}(y)$. From $F(F^{-1}(y)) = y$, differentiating both sides gives $F'(F^{-1}(y))\,(F^{-1})'(y) = 1$, i.e. $w'(y) = \frac{1}{F'(F^{-1}(y))}$. Hence

$$g(y) = f(F^{-1}(y))\,|w'(y)| = \frac{f(F^{-1}(y))}{F'(F^{-1}(y))} = 1 \quad \text{for } 0 < y < 1,$$

since $f = F'$. So $Y$ is uniform on $(0,1)$.
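This is the probability integral transform. A quick sketch with `scipy` (the normal parameters below are arbitrary, picked just for the check): applying a CDF to its own samples should give something indistinguishable from Uniform(0,1).

```python
# Y = F(X) should be Uniform(0,1) for any continuous X; check with a
# Kolmogorov-Smirnov test on an arbitrary normal example.
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)
y = norm.cdf(x, loc=2.0, scale=3.0)
print(kstest(y, "uniform"))  # large p-value: consistent with Uniform(0,1)
```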
**Example.** Let $X$ be the standard normal random variable. Find the pdf of $Z=X^2$.

With $u(x) = x^2$ we have $w(z) = z^{1/2}$ and $w'(z) = \frac{1}{2}z^{-1/2}$. Since $u$ is not one-to-one on $\mathbb{R}$ and both $x$ and $-x$ map to the same $z$, the two branches contribute equally, giving a factor of $2$:

$$g(z) = 2 \times \frac{1}{2}z^{-1/2} \times \frac{1}{\sqrt{2\pi}} e^{-z/2} = \frac{z^{-1/2}}{\sqrt{2\pi}}\, e^{-z/2} \quad \text{for } z > 0.$$
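This is exactly the chi-square density with 1 degree of freedom; a short grid comparison against `scipy` (my own check):

```python
# The derived g(z) equals the chi-square(1) density.
import numpy as np
from scipy.stats import chi2

z = np.linspace(0.1, 5, 50)
g = z**(-0.5) / np.sqrt(2 * np.pi) * np.exp(-z / 2)
print(np.max(np.abs(g - chi2.pdf(z, df=1))))  # ~0 up to floating point
```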
**Example.** Let $(X_1, X_2)$ have joint density

$$f(x_1, x_2)= \begin{cases}e^{-(x_1+x_2)}, & \text{if } x_1, x_2>0, \\ 0, & \text{otherwise.}\end{cases}$$

Find the pdf of $Y=\frac{X_1}{X_1+X_2}$.
Let's consider $U = \frac{X_1}{X_1+X_2}$ and $V = X_1 + X_2$; then $X_1 = UV$ and $X_2 = V - UV$. Now the Jacobian matrix is

$$\begin{pmatrix} v & u \\ -v & 1-u \end{pmatrix},$$

whose determinant is $v - uv + uv = v$. Then we will have the joint pdf

$$g(u,v) = e^{-uv - (v - uv)}\, v = v\, e^{-v} \quad \text{for } 0 < u < 1,\ v > 0.$$

Since we are interested in $U$, we integrate out $v$ over its support $(0, \infty)$. Integration by parts gives

$$\int v e^{-v}\,dv = -v e^{-v} + \int e^{-v}\,dv = -v e^{-v} - e^{-v},$$

so $\int_0^\infty v e^{-v}\,dv = 1$, and the probability density function of $Y$ is $f(y) = 1$ for $0 < y < 1$: $Y$ is uniform on $(0,1)$.
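A quick uniformity check by simulation (my own sketch, assuming `numpy` and `scipy`):

```python
# Y = X1/(X1+X2) should be Uniform(0,1) for iid Exp(1) inputs.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
x1 = rng.exponential(1.0, 100_000)
x2 = rng.exponential(1.0, 100_000)
print(kstest(x1 / (x1 + x2), "uniform"))  # large p-value expected
```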
**Example.** Let $(X_1, X_2)$ be uniformly distributed in $(0,1)^2$. Find the pdf of $Y=X_1+X_2$.

Let's consider $U = X_1+X_2$ and $V = X_2$; then we will have $X_1 = U-V$ and $X_2 = V$.
So the Jacobian matrix is

$$\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}$$

and its determinant is $1$. So the joint pdf of $U$ and $V$ is $1$ for $0 < v < 1$ and $0 < u - v < 1$. Now we integrate with respect to $v$ for fixed $u$:

If $0 < u \le 1$:

$$g(u) = \int_0^u 1 \, dv = u$$

If $1 \le u < 2$:

$$g(u) = \int_{u-1}^1 1\, dv = 2-u$$

This is the triangular density on $(0,2)$.
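Here is a minimal histogram sketch of the triangular shape (my own check, not in the original notes):

```python
# Check the triangular density on (0,2) for U = X1 + X2, X1, X2 ~ U(0,1) iid.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000) + rng.random(1_000_000)

hist, edges = np.histogram(u, bins=np.linspace(0, 2, 40), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
g = np.where(centers <= 1, centers, 2 - centers)
print(np.max(np.abs(hist - g)))  # small, up to Monte Carlo noise
```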
**Example.** Let $(X_1, X_2, X_3)$ be RVs with joint pdf $f$ as follows:

$$f(x_1, x_2, x_3)= \begin{cases}e^{-(x_1+x_2+x_3)}, & \text{if } x_1, x_2, x_3>0 \\ 0, & \text{otherwise}\end{cases}$$

Suppose $Y_1=X_1+X_2+X_3$, $Y_2=X_2$, $Y_3=X_3$. Find the marginal pdf of $Y_1$.
The inverse transformation is

$$X_1 = Y_1 - Y_2 - Y_3, \quad X_2 = Y_2, \quad X_3 = Y_3,$$

with Jacobian matrix

$$\begin{pmatrix} 1 & -1 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

So the Jacobian determinant is $1$, and the joint pdf of $(Y_1, Y_2, Y_3)$ is

$$g(y_1, y_2, y_3) = e^{-y_1} \quad \text{with constraints } 0 < y_2 < y_1,\ 0 < y_3 < y_1 - y_2.$$
Now if we integrate with respect to $y_3$ over $(0, y_1 - y_2)$, we get

$$\int_0^{y_1 - y_2} e^{-y_1}\,dy_3 = e^{-y_1}(y_1 - y_2),$$

and integrating with respect to $y_2$ over $(0, y_1)$ gives the marginal pdf

$$g(y_1) = \int_0^{y_1} e^{-y_1}(y_1 - y_2)\,dy_2 = \frac{1}{2}\,y_1^2\, e^{-y_1} \quad \text{for } y_1 > 0,$$

which is the gamma density with shape $3$ and scale $1$.
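A grid comparison against `scipy`'s gamma pdf, as a sketch (my own check):

```python
# g(y1) = y1**2 * exp(-y1) / 2 is the Gamma(shape=3, scale=1) density.
import numpy as np
from scipy.stats import gamma

y = np.linspace(0.1, 10, 50)
g = 0.5 * y**2 * np.exp(-y)
print(np.max(np.abs(g - gamma.pdf(y, a=3))))  # ~0 up to floating point
```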
**Theorem.** The mgf of $Y=X_1+\cdots+X_n$, where $X_1, \ldots, X_n$ are independent RVs with MGFs $M_{X_1}(t), \ldots, M_{X_n}(t)$ respectively, is

$$M_Y(t)=\prod_{i=1}^n M_{X_i}(t).$$
Proof.

$$\mathbb{E}[\exp(Yt)] = \mathbb{E}[\exp((X_1 + \cdots + X_n)t)] = \mathbb{E}\left[\prod_{i=1}^n \exp(X_i t)\right]$$

Due to independence, we will have

$$\mathbb{E}[\exp(Yt)] = \prod_{i=1}^n \mathbb{E}[\exp(X_i t)] = \prod_{i=1}^n M_{X_i}(t).$$
**Example.** Suppose $X_1, \ldots, X_n$ are independent Poisson RVs with parameters $\lambda_1, \ldots, \lambda_n$ respectively. Find the distribution of $Y=X_1+\cdots+X_n$.

Let's consider the moment generating function:

$$\mathbb{E}[\exp(Yt)] = \prod_{i=1}^n \exp(\lambda_i(e^t-1)) = \exp\left((e^t-1)\sum_{i=1}^n \lambda_i\right)$$

So $Y$ has a Poisson distribution with parameter $\sum_{i=1}^n \lambda_i$.
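A simulation sketch of this closure property (my own check; the rates are arbitrary):

```python
# Sum of independent Poissons is Poisson with the summed rate.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
lams = [0.5, 1.2, 2.3]  # arbitrary rates chosen for the check
y = sum(rng.poisson(lam, 200_000) for lam in lams)

ks = np.arange(15)
emp = np.array([(y == k).mean() for k in ks])
print(np.max(np.abs(emp - poisson.pmf(ks, sum(lams)))))  # small
```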
**Example.** Suppose $X_1, \ldots, X_n$ are independent exponential RVs with the same parameter $\theta$ (scale parameter, pdf $\frac{1}{\theta}e^{-x/\theta}$). Find the distribution of $Y=X_1+\cdots+X_n$.

For a single $X$, with $t < \frac{1}{\theta}$,

$$\mathbb{E}[\exp(Xt)] = \int_0^{\infty} e^{xt}\, \frac{1}{\theta} e^{-x/\theta}\,dx = \frac{1}{\theta}\int_0^{\infty} e^{x(t-\frac{1}{\theta})}\,dx = \frac{1}{\theta}\cdot\frac{-1}{t-\frac{1}{\theta}} = \frac{1}{1-\theta t}.$$

So

$$\mathbb{E}[\exp(Yt)] = \left(\frac{1}{1-\theta t}\right)^n,$$

which is the MGF of the gamma distribution with shape $n$ and scale $\theta$.
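A numeric MGF check by Monte Carlo (my own sketch; valid only for $t < 1/\theta$, and the parameter values are arbitrary):

```python
# Estimate M_Y(t) = E[exp(t*Y)] by simulation and compare to (1-theta*t)**-n.
import numpy as np

theta, n, t = 2.0, 4, 0.2  # arbitrary choices; need t < 1/theta
rng = np.random.default_rng(0)
y = rng.exponential(theta, size=(1_000_000, n)).sum(axis=1)
print(np.exp(t * y).mean(), (1 - theta * t) ** (-n))  # should be close
```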
In total, it took me about an hour and 30 minutes to finish these exercises. It's quite slow, but I'm getting there, slowly.
Other References:
https://people.stat.sc.edu/hitchcock/stat512spring2012.html
https://www.math.wustl.edu/~sawyer/math494s10.html