Finals
Scope of finals exam:
- Format: 2 hours, 4 questions (30 min/question); no special integration techniques needed
- Mathematical manipulation is expected, though calculators are not needed
- Questions may follow examples from the lecture notes
- Topical hints:
  - Binomial and normal distributions: Need to know the basics of probability (mean, variance, normalization), priors (conjugate, Jeffreys, (im)proper), and posterior predictive distributions.
  - Sampling with Metropolis-Hastings: Markov chains and proofs of stationarity; the MH algorithm, its acceptance formula, and the idea behind it (i.e. why the acceptance rule is chosen; a minimal sampler sketch follows this list)
  - Model comparison using the Savage-Dickey ratio: Bayes factor (less emphasis on model validation)
  - Binary classification and kernel methods: Validation using confusion-matrix parameters, derivation of the ROC curve, feature maps and kernel methods, validating kernels (less emphasis on regression and neural networks)
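The MH sketch referenced above: a minimal random-walk Metropolis-Hastings sampler in Python. The standard-normal target, step size, and chain length are illustrative assumptions, not taken from the lecture notes; with a symmetric proposal the Hastings correction cancels, so the acceptance rule reduces to min(1, p(x')/p(x)).
<code python>
import numpy as np

def log_target(x):
    # Unnormalized log-density to sample from; standard normal here (an assumption for illustration).
    return -0.5 * x**2

def metropolis_hastings(n_steps=10000, step=1.0, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Symmetric Gaussian proposal: q(x'|x) = q(x|x'), so the Hastings ratio cancels.
        x_prop = x + step * rng.normal()
        # Accept with probability min(1, p(x')/p(x)); compare in log space to avoid underflow.
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x = x_prop
        samples[i] = x  # on rejection the chain repeats x; this repeat is essential for stationarity
    return samples

samples = metropolis_hastings()
print(samples.mean(), samples.var())  # should be close to 0 and 1
</code>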
Topic 1: Binomial/normal distributions
$$$$ p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)} $$$$
Some terminology:
- Likelihood: $$p(x|\theta)$$ or $$L(\theta|x)$$
- Prior: $$p(\theta)$$ or $$\pi(\theta)$$
- Evidence/marginal likelihood: $$p(x)$$ or $$Z(x)$$
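As a worked instance of these terms (the standard conjugate pair for Topic 1; the Beta prior parameters a, b are generic placeholders): with likelihood $$\text{Bin}(x|n,\theta)$$ and prior $$\text{Beta}(\theta|a,b)$$, the posterior is again a Beta distribution,
$$$$ p(\theta|x) = \frac{\text{Bin}(x|n,\theta)\,\text{Beta}(\theta|a,b)}{p(x)} = \text{Beta}(\theta|a+x,\,b+n-x) $$$$
and the evidence $$p(x)$$ that normalizes it is the beta-binomial distribution.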
- Aleatoric (statistical) vs Epistemic (systematic) uncertainty
Basic definitions
$$$$ E[\tau] \equiv \langle\tau\rangle := \int dx\,\tau(x)\,p(x) $$$$
$$$$ \mu := E[x], \qquad \mu_n := E[(x-\mu)^n], \qquad Var[x] \equiv \sigma^2 := \mu_2 $$$$
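A one-line consequence of these definitions that gets used constantly (e.g. in the exercises below):
$$$$ \sigma^2 = E[(x-\mu)^2] = E[x^2] - 2\mu E[x] + \mu^2 = E[x^2] - E[x]^2 $$$$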
Show that for two random variables $$x, y$$: $$E[x] = E[E[x|y]]$$.
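A direct derivation, assuming $$x, y$$ have a joint density so that $$p(x,y) = p(x|y)\,p(y)$$:
$$$$ E[E[x|y]] = \int dy\, p(y) \int dx\, x\, p(x|y) = \int dx\,dy\, x\, p(x,y) = \int dx\, x\, p(x) = E[x] $$$$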
Show that for two random variables $$x, y$$: $$Var[x] = E[Var[x|y]] + Var[E[x|y]]$$.
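This follows by writing $$Var[x] = E[x^2] - E[x]^2$$, applying the previous identity to both terms, and using $$E[x^2|y] = Var[x|y] + E[x|y]^2$$:
$$$$ Var[x] = E[E[x^2|y]] - E[E[x|y]]^2 = E[Var[x|y]] + E[E[x|y]^2] - E[E[x|y]]^2 = E[Var[x|y]] + Var[E[x|y]] $$$$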
Binomial distribution
$$$$ \text{Bin}(x|n,\theta) = \binom{n}{x}\, \theta^x (1-\theta)^{n-x} $$$$
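Standard properties worth having at hand (textbook facts, included here for quick reference): normalization follows from the binomial theorem, and the first two moments are
$$$$ \sum_{x=0}^{n} \binom{n}{x} \theta^x (1-\theta)^{n-x} = (\theta + (1-\theta))^n = 1, \qquad E[x] = n\theta, \qquad Var[x] = n\theta(1-\theta) $$$$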