Maximum Likelihood estimation

Data Science Asked by Mahajna on February 24, 2021

Given a sample $X_1, X_2, \dots, X_{100}$ and the density function $f(x;\theta) = \frac{1}{\pi \cdot \left(1+\left(x-\theta\right)^2\right)}$, find an approximate solution for $\hat{\theta}_{MLE}$.

My attempt:

I have found the joint likelihood $L(\theta; x_1, x_2, \dots, x_{100}) = \prod_{i=1}^{100} \frac{1}{\pi \cdot \left(1+\left(x_i-\theta\right)^2\right)}$

$\ell = \log(L) = -100\ln(\pi) - \sum_{i=1}^{100} \ln\left(1+(x_i-\theta)^2\right)$.

I’m not sure about this step:

$\frac{\partial}{\partial \theta}\left(\log(L)\right) = \sum_{i=1}^{100} \frac{2(x_i-\theta)}{1+(x_i-\theta)^2}$

Then I used Newton’s method to find the maximum.

This is the R script I used to compute it:

#derivative of log(L).
fun1 <- function(theta){
  y1 <- 0
  for(i in 1:length(x)){
    y1 <- y1 + (2*(theta-x[i]))/(1+(x[i]-theta)^2)
  }
  return(y1)
}


#derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}


# Newton's method.


guess <- function(theta_guess){
  theta2 <- theta_guess - fun1(theta_guess)/fun1.tag(theta_guess)
  return(theta2)
}
theta1 <- median(x)
epsilon <- 1

theta_before <- 0

while (epsilon > 0.0001) {
  theta1 <- guess(theta1)
  epsilon <- (theta_before - theta1)^2
  theta_before <- theta1
}

What I got was $\hat{\theta}_{MLE} = 5.166$.

I’m now trying to plot the data (in my case x) and check whether $\hat{\theta}_{MLE} = 5.166$ is actually a maximum.
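One way to do that check is to evaluate the log-likelihood on a grid around the estimate and plot it. The sketch below assumes the observations are in a numeric vector x; since the original data is not shown, a simulated Cauchy sample is used as a placeholder:

```r
# Sketch: plot the Cauchy log-likelihood around the estimate.
# Assumes the sample is in a numeric vector x; simulated placeholder data here.
set.seed(1)
x <- rcauchy(100, location = 5)

# Log-likelihood up to the constant -100*log(pi).
loglik <- function(theta) -sum(log(1 + (x - theta)^2))

# Evaluate on a grid centred at the median (a robust starting point).
grid <- seq(median(x) - 3, median(x) + 3, length.out = 500)
ll <- sapply(grid, loglik)

plot(grid, ll, type = "l", xlab = expression(theta), ylab = "log-likelihood")
abline(v = grid[which.max(ll)], lty = 2)

# Cross-check with a built-in optimizer on the same window.
optimize(loglik, interval = c(median(x) - 3, median(x) + 3), maximum = TRUE)$maximum
```

If the Newton iteration converged to the MLE, its answer should sit at (or very near) the peak of this curve.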

2 Answers

You have a typo in your formula:

#derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- y + 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}

The y + is missing inside the loop, so each term overwrites y instead of being accumulated.
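Bugs like this are easy to catch by comparing a hand-coded derivative against a central finite difference. A self-contained sketch (the sample vector x and the test point theta = 2 are placeholders):

```r
# Sketch: check a hand-coded derivative against a central finite difference.
# Placeholder sample; in the original question x would be the observed data.
set.seed(1)
x <- rcauchy(100, location = 5)

# Vectorized version of fun1 from the question.
fun1 <- function(theta) sum(2 * (theta - x) / (1 + (x - theta)^2))

# Central finite-difference approximation to fun1's derivative.
numeric.tag <- function(theta, h = 1e-6) {
  (fun1(theta + h) - fun1(theta - h)) / (2 * h)
}

# Any analytic fun1.tag should agree with numeric.tag(theta)
# to several decimal places at a few test values of theta.
numeric.tag(2)
```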

Correct answer by kate-melnykova on February 24, 2021

It seems it is even easier.

The MLE is defined as

$$ \theta_{MLE} = \operatorname{argmax}_{\theta} \, -\left(100 \ln \pi + \sum_{i=1}^{100} \ln\left(1 + (x_{i} - \theta)^{2}\right)\right) $$

so you need to minimize the sum of logs. Applying the exponential to each element of the sum does not change the result of the argmin, because the exponential is a monotone increasing function, so at the end of the day you have to solve

$$ \theta_{MLE} = \operatorname{argmin}_{\theta} \sum_{i=1}^{100} (x_{i} - \theta)^{2} $$

and since it is clearly convex, the argmin can be found where the derivative is zero, so

$$ \frac{\partial}{\partial \theta} \left( \sum_{i=1}^{100} (x_{i} - \theta)^{2} \right) = 0 $$

so finally

$$ \theta_{MLE} = \frac{1}{100} \sum_{i=1}^{100} x_{i} $$

which is the center of mass of the distribution of the observations.
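The last step, that the sample mean minimizes the sum of squares, is easy to verify numerically; a sketch with placeholder data:

```r
# Sketch: numerically confirm that the sample mean minimizes sum((x - theta)^2).
# Placeholder data; any numeric vector x works the same way.
set.seed(1)
x <- rcauchy(100, location = 5)

sse <- function(theta) sum((x - theta)^2)

# The minimizer found by optimize should agree with mean(x).
optimize(sse, interval = range(x))$minimum
mean(x)
```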

Answered by Nicola Bernini on February 24, 2021
