Every variable appears in the training examples (no unobserved variables).
Questions
What does the likelihood objective accomplish?
Is likelihood the right objective function?
How do we optimize the objective function?
What guarantees does the optimizer provide?
What is the mapping from data to model? In what ways can we incorporate our domain knowledge? How does this impact learning?
Settings
Setting 1: tabular clique potentials, $\psi_C(x_C) = \theta_{C, x_C}$.
Random variables are discrete.
The parameters are attached to the cliques (the connections between the nodes): each clique potential is itself a table of parameters, one entry per joint configuration of the clique.
ex) If there is only 1 node in a clique, the parameters form a vector. If there are 2 nodes, the parameters form a matrix. If there are 3 nodes, the parameters form a 3-dimensional tensor, etc.
Estimation: MLE by inspection (decomposable models) or Iterative Proportional Fitting (IPF).

Setting 2: feature-based clique potentials, $\psi_C(x_C) = \exp(\theta^\top f(x_C))$, where $\theta$ are the parameters and $f(x_C)$ is a feature function.
Random variables may be discrete or continuous.
Estimation: Generalized Iterative Scaling (GIS) or gradient-based methods.
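As a quick illustration of the two parameterizations, here is a minimal sketch for a clique of two binary variables; the table entries, features, and weights are invented for this example.

```python
import numpy as np

# Setting 1: tabular potential for a (hypothetical) clique of two binary variables.
# One free parameter per joint configuration: a 2 x 2 table (a matrix; a 1-node
# clique would give a vector, a 3-node clique a 3-dimensional tensor, etc.).
theta_table = np.array([[0.5, 1.2],
                        [0.1, 2.0]])

def psi_tabular(x1, x2):
    """psi_C(x_C) = theta_{C, x_C}: a pure table lookup."""
    return theta_table[x1, x2]

# Setting 2: feature-based potential psi_C(x_C) = exp(theta . f(x_C)).
# A small set of feature functions replaces the full table.
def f(x1, x2):
    """Feature vector f(x_C); the two features here are invented for illustration."""
    return np.array([float(x1 == x2),              # "the two nodes agree"
                     float(x1 == 1 and x2 == 1)])  # "both nodes are on"

theta_feat = np.array([0.7, -0.3])  # one weight per feature

def psi_features(x1, x2):
    return np.exp(theta_feat @ f(x1, x2))

print(psi_tabular(0, 1), psi_features(1, 1))
```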
Let $\mathcal{D} = \{x_1, \ldots, x_n\}$ be the observed data, with $x_i \overset{\text{i.i.d.}}{\sim} p(x; \theta)$.
Now, the MLE is given by:
$$ \hat{\theta}_{\text{MLE}} = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i; \theta). $$
A fixed-point update works by writing down the condition the solution must satisfy, assigning the resulting values as the updated parameters, and iterating until convergence.
$$ \phi_c^{(t+1)}(y_c) \leftarrow \phi_c^{(t)}(y_c)\, \frac{\epsilon(y_c)}{p^{(t)}(y_c)} $$
where $\epsilon(y_c)$ is the empirical marginal distribution of $y_c$, and $p^{(t)}(y_c)$ is obtained by inference in the graph with the current parameters, i.e., $p^{(t)}(y_c) = \sum_{y' : y'_c = y_c} p(y'; \theta^{(t)})$.
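Below is a minimal sketch of IPF on a toy model, assuming a chain A - B - C of binary variables with cliques {A, B} and {B, C}, brute-force enumeration for the inference step, and invented data; it is meant only to make the fixed-point update concrete.

```python
import numpy as np
from itertools import product

# Toy UGM: a chain A - B - C of binary variables with cliques {A, B} and {B, C}.
# Tabular clique potentials, stored as 2 x 2 tables.
psi = {("A", "B"): np.ones((2, 2)), ("B", "C"): np.ones((2, 2))}
idx = {"A": 0, "B": 1, "C": 2}

# Fully observed, invented data: each row is one joint setting (a, b, c).
data = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1],
                 [1, 1, 0], [1, 1, 1], [0, 0, 1]])

def joint(psi):
    """Normalized joint p(a, b, c) by brute-force enumeration (the inference step)."""
    p = np.zeros((2, 2, 2))
    for a, b, c in product(range(2), repeat=3):
        p[a, b, c] = psi[("A", "B")][a, b] * psi[("B", "C")][b, c]
    return p / p.sum()

def empirical_marginal(clique):
    """epsilon(y_c): empirical marginal distribution of the clique."""
    m = np.zeros((2, 2))
    for row in data:
        m[row[idx[clique[0]]], row[idx[clique[1]]]] += 1
    return m / len(data)

def model_marginal(clique, p):
    """p^(t)(y_c): clique marginal of the current model, from the joint."""
    drop = tuple(i for v, i in idx.items() if v not in clique)
    return p.sum(axis=drop)

# IPF: cycle through the cliques, rescaling each potential by
# (empirical marginal) / (current model marginal).
for _ in range(50):
    for clique in psi:
        p = joint(psi)
        psi[clique] *= empirical_marginal(clique) / model_marginal(clique, p)

# After convergence the model's clique marginals match the empirical ones.
print(np.round(model_marginal(("A", "B"), joint(psi)), 3))
print(np.round(empirical_marginal(("A", "B")), 3))
```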
Feature-based Clique Potentials
For large cliques these general (tabular) potentials are exponentially costly for inference, and they have an exponential number of parameters that we must learn from limited data.
We could make the cliques smaller, but this changes the dependency structure of the model.
Instead, we keep the same graphical model but use a less general parameterization of the clique potentials.
This is the idea behind feature-based models.
Features
Consider a clique $x_c$ of random variables in a UGM, e.g., three consecutive characters $c_1 c_2 c_3$ in a string of English text.
How would we build a model of $p(c_1, c_2, c_3)$?
If we use a single clique function over $c_1 c_2 c_3$, the full joint clique potential would be huge: $26^3 - 1$ parameters. However, we often know that some particular joint settings of the variables in a clique are quite likely or quite unlikely, e.g., "ing", "ion", ...
A "feature" is a function which is vacuous over all joint settings except a few particular ones on which it is high or low.
For example, we might have $f_{\text{ing}}(c_1, c_2, c_3)$, which is 1 if the string is 'ing' and 0 otherwise, and similar features for 'ion', etc.
In general, features can encode any prior knowledge we have about the clique, and we can use them to build the potential function for each clique.
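A minimal sketch of such indicator features, assuming lowercase character trigrams; the feature names are the ones used above.

```python
def f_ing(c1, c2, c3):
    """Indicator feature: 1 if the trigram is 'ing', 0 otherwise."""
    return 1.0 if (c1, c2, c3) == ("i", "n", "g") else 0.0

def f_ion(c1, c2, c3):
    """Indicator feature: 1 if the trigram is 'ion', 0 otherwise."""
    return 1.0 if (c1, c2, c3) == ("i", "o", "n") else 0.0

print(f_ing("i", "n", "g"), f_ing("c", "a", "t"))  # 1.0 0.0
```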
Features as Micropotentials
By exponentiating it, each feature function can be made into a "micropotential"; multiplying these micropotentials together gives a clique potential.
Example: a clique potential $\psi_c(c_1, c_2, c_3)$ could be expressed as:
$$ \psi_c(c_1, c_2, c_3) = e^{\theta_{\text{ing}} f_{\text{ing}}(c_1, c_2, c_3)} \times e^{\theta_{\text{ion}} f_{\text{ion}}(c_1, c_2, c_3)} \times \cdots = \exp\left\{ \sum_{k=1}^{K} \theta_k f_k(c_1, c_2, c_3) \right\} $$
This is still a potential over all $26^3$ possible settings, but it uses only $K$ parameters if there are $K$ features.
Note that by having one indicator function per combination of $x_c$, we recover the standard tabular potential.
Each feature has a weight $\theta_k$, which represents the numerical strength of the feature and whether it increases or decreases the probability of that configuration of the clique.
The marginal over the clique is a generalized exponential-family distribution; in fact, a GLM:
$$ p(c_1, c_2, c_3) \propto \exp\left\{ \sum_{k=1}^{K} \theta_k f_k(c_1, c_2, c_3) \right\}. $$
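A small sketch of a clique potential built from such micropotentials, assuming just the two indicator features above with invented weights; it needs only $K = 2$ parameters instead of a full trigram table.

```python
import math

# Invented weights for a handful of indicator features (K = 2 here, instead of
# the 26^3 - 1 free parameters of a full tabular potential over trigrams).
theta = {("i", "n", "g"): 2.0, ("i", "o", "n"): 1.5}

def psi(c1, c2, c3):
    """psi_c(c1, c2, c3) = prod_k e^{theta_k f_k} = exp(sum_k theta_k f_k).

    Each indicator feature fires on exactly one trigram, so the sum simply
    picks up the weight of the matching feature, if any.
    """
    return math.exp(sum(w for trigram, w in theta.items()
                        if (c1, c2, c3) == trigram))

print(psi("i", "n", "g"))  # e^2.0: the 'ing' micropotential fires
print(psi("t", "h", "e"))  # e^0 = 1: no feature fires
```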
In fact, with this setup the joint distribution is a log-linear (exponential-family) model in the parameters $\theta$.
Can we use IPF?
Not exactly, but we can modify it; this leads to Generalized Iterative Scaling.
Generalized Iterative Scaling (GIS)
Key Idea:
- Define a function which lower-bounds the log-likelihood
- Observe that the bound is tight at current parameters
- Increase lower-bound by fixed-point iteration in order to increase log-likelihood
Note: this idea is akin to the standard derivation of the Expectation-Maximization (EM) algorithm.
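As a concrete sketch, the code below runs GIS on a tiny, fully enumerable model. It assumes non-negative features padded with a slack feature so that the feature counts sum to a constant $C$ for every state (the standard GIS requirement); the domain, features, and data are all invented for illustration.

```python
import numpy as np
from itertools import product

# Small enumerable domain: x = (x1, x2, x3), each binary.
domain = list(product([0, 1], repeat=3))

def raw_features(x):
    """A few non-negative, invented features of x."""
    x1, x2, x3 = x
    return np.array([x1 * x2, x2 * x3, x1], dtype=float)

# GIS assumes sum_k f_k(x) = C for every x; pad with a slack feature.
C = max(raw_features(x).sum() for x in domain)
def features(x):
    f = raw_features(x)
    return np.append(f, C - f.sum())

F = np.array([features(x) for x in domain])   # one row of features per state
data = np.array([7, 7, 3, 6, 1, 7, 5])        # invented observations (indices into domain)
emp = F[data].mean(axis=0)                    # empirical feature expectations

def model_dist(theta):
    """p(x) proportional to exp(theta . f(x)), by brute-force enumeration."""
    logits = F @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

theta = np.zeros(F.shape[1])
for _ in range(2000):
    model = F.T @ model_dist(theta)           # model feature expectations
    # GIS fixed-point update: theta_k += (1/C) * log(E_emp[f_k] / E_model[f_k]).
    theta += np.log(emp / model) / C

print(np.round(emp, 3))
print(np.round(F.T @ model_dist(theta), 3))   # close to emp after convergence
```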
IPF vs GIS
IPF is a general algorithm for finding the MLE of UGMs (with tabular clique potentials):
$$ \phi_c^{(t+1)}(y_c) \leftarrow \phi_c^{(t)}(y_c)\, \frac{\epsilon(y_c)}{p^{(t)}(y_c)} $$
GIS is iterative scaling on general UGMs with feature-based clique potentials.
IPF is a special case of GIS in which the clique potentials are built on features defined as indicator functions of the clique configurations.
The two methods essentially use the same logic: set the next estimate to be the current estimate adjusted by the ratio of the empirical marginal to the current model marginal (multiplicatively for IPF, or additively in the log domain for GIS).
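A short sketch of why this works, using the notation of the IPF update: take one indicator feature per configuration $y_c$ of a clique, $f_{y_c}(x) = \mathbb{1}[x_c = y_c]$, and write the potential as $\phi_c(y_c) = e^{\theta_{y_c}}$. The indicator features of a given clique sum to one, so the GIS constant for that clique is $C = 1$, and the empirical and model feature expectations are exactly $\epsilon(y_c)$ and $p^{(t)}(y_c)$. The additive update on $\theta$ is then the multiplicative IPF update on $\phi$:

$$ \theta_{y_c}^{(t+1)} = \theta_{y_c}^{(t)} + \log\frac{\epsilon(y_c)}{p^{(t)}(y_c)} \quad\Longleftrightarrow\quad \phi_c^{(t+1)}(y_c) = \phi_c^{(t)}(y_c)\, \frac{\epsilon(y_c)}{p^{(t)}(y_c)}. $$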