
In this Bayesian network, where does this posterior probability come from?

Cross Validated Asked by Vin on January 13, 2021

I’m reading Building Intelligent Interactive Tutors (Woolf, 2009) on student models for ITSs. On page 261, the author presents an example of a simple Bayesian network ($S \rightarrow E$), where $S$ is the unobserved skill variable (with states: $0$ = doesn’t know; $1$ = knows) and $E$ is the observed evidence variable (with states: $0$ = incorrect; $1$ = correct).

The author goes on to compute the posterior probability $P(S|E)$ for $S=1$ through Bayes’ rule, assuming the following probabilities:

  • Prior probability $P(S=1)=0.5$
  • $P(E=1|S=1)=0.8$
  • $P(E=0|S=0)=0.95$

He reaches the following answers:

  • $P(S=1|E=1)=0.94$
  • $P(S=1|E=0)=0.17$
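
For reference, these values follow directly from Bayes’ rule, using $P(E=1|S=0) = 1 - 0.95 = 0.05$ and $P(E=0|S=1) = 1 - 0.8 = 0.2$:

$$P(S=1|E=1) = \frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.05 \times 0.5} \approx 0.94, \qquad P(S=1|E=0) = \frac{0.2 \times 0.5}{0.2 \times 0.5 + 0.95 \times 0.5} \approx 0.17$$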

Then, he claims the revised posterior probability for $S=1$ is approximately $0.78$ for the first case and approximately $0.06$ for the second case. Where do these posterior probability values come from? What do they represent?

I’ve coded the example in Python (using the pgmpy library) and got the same values for $P(S=1|E=1)$ and $P(S=1|E=0)$. Here’s the code and its output (NewtonsLaw is $S$ and Problem023 is $E$):

from pgmpy.models import BayesianModel  # renamed BayesianNetwork in newer pgmpy releases
from pgmpy.inference import VariableElimination
from pgmpy.factors.discrete import TabularCPD

# Bayesian network structure: skill -> evidence
model = BayesianModel([('NewtonsLaw', 'Problem023')])

# CPD for the evidence node: columns are NewtonsLaw = 0, 1; rows are Problem023 = 0, 1
cpd_problem_023 = TabularCPD('Problem023', 2, [[0.95, 0.2],
                                               [0.05, 0.8]],
                             evidence=['NewtonsLaw'], evidence_card=[2])
# Uniform prior over the skill node; values must have shape (2, 1)
cpd_newtons_law = TabularCPD('NewtonsLaw', 2, [[0.5], [0.5]])

# Add the probabilities to the model and validate it
model.add_cpds(cpd_problem_023, cpd_newtons_law)
model.check_model()

# Query the posterior over the skill for each observed outcome
inference = VariableElimination(model)

# In current pgmpy, query() returns a DiscreteFactor directly;
# older releases returned a dict keyed by variable name
posterior_newtons_law_right = inference.query(['NewtonsLaw'], evidence={'Problem023': 1})
print(posterior_newtons_law_right)

posterior_newtons_law_wrong = inference.query(['NewtonsLaw'], evidence={'Problem023': 0})
print(posterior_newtons_law_wrong)

Output:

+--------------+-------------------+
| NewtonsLaw   |   phi(NewtonsLaw) |
|--------------+-------------------|
| NewtonsLaw_0 |            0.0588 |
| NewtonsLaw_1 |            0.9412 |
+--------------+-------------------+
+--------------+-------------------+
| NewtonsLaw   |   phi(NewtonsLaw) |
|--------------+-------------------|
| NewtonsLaw_0 |            0.8261 |
| NewtonsLaw_1 |            0.1739 |
+--------------+-------------------+
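
As a sanity check, the same posteriors can be reproduced by hand in plain Python (the variable names below are my own):

# Hand computation of the posteriors, mirroring the Bayes' rule calculation above
prior = 0.5               # P(S=1)
p_correct_knows = 0.8     # P(E=1|S=1)
p_incorrect_not = 0.95    # P(E=0|S=0)

# P(S=1|E=1)
num = p_correct_knows * prior
print(num / (num + (1 - p_incorrect_not) * (1 - prior)))   # 0.9412

# P(S=1|E=0)
num = (1 - p_correct_knows) * prior
print(num / (num + p_incorrect_not * (1 - prior)))         # 0.1739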

One Answer

If an answer still matters: I think that to obtain the revised posterior probabilities, you must perform parameter learning, as explained here: http://pgmpy.org/models.html

Your inference with pgmpy is correct, but you cannot derive revised posteriors through inference alone; instead, learn the parameters from data with model.fit(...).
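
As a rough sketch of what that could look like (the DataFrame below is entirely invented for illustration, and it treats the skill as observed, which is an idealization since $S$ is latent in practice):

import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import BayesianEstimator

# Hypothetical fully observed (skill, answer) records; values invented for illustration
data = pd.DataFrame({
    'NewtonsLaw': [1, 1, 0, 1, 0, 1, 0, 1],
    'Problem023': [1, 1, 0, 0, 0, 1, 1, 1],
})

model = BayesianModel([('NewtonsLaw', 'Problem023')])

# Re-estimate the CPDs from the data; the BDeu prior smooths the raw counts
model.fit(data, estimator=BayesianEstimator, prior_type='BDeu')

for cpd in model.get_cpds():
    print(cpd)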

Answered by Dyks Vail on January 13, 2021
