Latent Dirichlet Allocation, originally proposed by Blei, Ng, and Jordan (2003), is a generative graphical model for mining latent topics of texts or any data with similar underlying statistical structures.
The Generative Model
The generative process of words in a document is as follows:

For each of the $K$ topics $k$:
    Choose a word distribution $\varphi_k \sim \mathrm{Dirichlet}(\beta)$.
For each document $m$ in the corpus:
    Choose a topic distribution $\theta_m \sim \mathrm{Dirichlet}(\alpha)$.
    For each of the $N_m$ word occurrences $n$ in the document:
        Choose a topic $z_{m,n} \sim \mathrm{Multinomial}(\theta_m)$.
        Choose a word $w_{m,n} \sim \mathrm{Multinomial}(\varphi_{z_{m,n}})$.
Here is a simple explanation of the notations:

$\alpha$ is a $K$ (number of topics) dimensional hyperparameter for topic distributions.
$\beta$ is a $V$ (number of words in the vocabulary) dimensional hyperparameter for word distributions over a given topic.
$\varphi_{k,w}$ is the probability of choosing word $w$ given that it is drawn from topic $k$.
$\theta_m$ is the topic distribution for document $m$.
$z_{m,n}$ is the corresponding topic for the $n$-th word in document $m$.
Later on, I am also going to use the following notations:

$\mathbf{w} = \{w_{m,n}\}$, $m = 1, \dots, M$, $n = 1, \dots, N_m$, are the observed words.
$\mathbf{z} = \{z_{m,n}\}$, indexed the same way, are the topic assignments of the word occurrences.
$n_k$ is a $V$ dimensional vector, in which $n_k^{(w)}$ is the count of word $w$ in the corpus assigned to topic $k$.
$n_{k,\neg(m,n)}$ is $n_k$ without counting the word occurrence at position $(m,n)$.
$n_m$ is a $K$ dimensional vector, in which $n_m^{(k)}$ is the count of words in document $m$ assigned to topic $k$.
$n_{m,\neg(m,n)}$ is $n_m$ without counting the word occurrence at position $(m,n)$.
The graphical representation of the above generative process is as follows (each node represents a random variable, filled with gray when observed; links denote dependences; rectangular plates mean repetition, i.e., the big plate indicates there are $M$ documents like that, and the small plate indicates there are $N_m$ word occurrences in each document):
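The generative process above can be sketched in a few lines of NumPy. The corpus sizes and hyperparameter values below are made-up illustrations, not part of the model definition:

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, M = 3, 20, 5           # topics, vocabulary size, documents (illustrative)
N = [10, 12, 8, 15, 9]       # word occurrences per document (illustrative)
alpha = np.full(K, 0.1)      # hyperparameter for topic distributions
beta = np.full(V, 0.01)      # hyperparameter for word distributions

phi = rng.dirichlet(beta, size=K)            # phi[k]: word distribution of topic k
docs = []
for m in range(M):
    theta_m = rng.dirichlet(alpha)           # topic distribution of document m
    z = rng.choice(K, size=N[m], p=theta_m)  # a topic for each word slot
    w = [rng.choice(V, p=phi[zn]) for zn in z]  # a word drawn from its topic
    docs.append(w)
```

Each document is just a list of word indices; the topic assignments `z` are the latent variables that the rest of the post tries to recover.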
To keep this note simple, I am not going to explain the generative process in detail here.
What we are interested in in this model are the random variables $\theta$, $\varphi$, and maybe $\mathbf{z}$. But before going to the inference of these latent random variables, we may first follow the routine and write down the likelihood of the data, i.e.:

$$p(\mathbf{w} \mid \alpha, \beta) = \int p(\varphi \mid \beta) \prod_{m=1}^{M} \int p(\theta_m \mid \alpha) \prod_{n=1}^{N_m} \sum_{z_{m,n}=1}^{K} p(z_{m,n} \mid \theta_m)\, p(w_{m,n} \mid \varphi_{z_{m,n}})\, d\theta_m\, d\varphi \tag{1}$$
Gibbs Sampling Learning
Gibbs sampling learning for LDA was first proposed by Griffiths and Steyvers (2004). The learning problem or inference problem they target is to estimate $\theta$ and $\varphi$. I haven't yet found a fully satisfying justification for using Gibbs sampling for LDA. One justification is that, given a valid sample $\mathbf{z}$, the posterior $p(\theta \mid \mathbf{z}, \alpha)$ is a Dirichlet p.d.f., so we can estimate $\theta$ by either maximizing it or taking its expectation/mean (since the expression is a Dirichlet p.d.f., its expectation/mean is easy to obtain, and in the literature this is the common practice, rather than using the mode obtained by maximization); a similar procedure holds for $\varphi$ as well. However, this justification is not satisfying, in the sense that it considers only a single sample of $\mathbf{z}$ rather than its distribution, and it says nothing about why one sample of $\mathbf{z}$ will be good enough. By considering the distribution of $\mathbf{z}$ (which is intractable and thus requires sampling methods), one can derive that

$$\mathbb{E}[\theta \mid \mathbf{w}] = \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{w})\, \mathbb{E}[\theta \mid \mathbf{z}, \mathbf{w}] \approx \frac{1}{S} \sum_{s=1}^{S} \mathbb{E}[\theta \mid \mathbf{z}^{(s)}, \mathbf{w}], \tag{2}$$

where each sample $\mathbf{z}^{(s)}$ is drawn from the distribution $p(\mathbf{z} \mid \mathbf{w})$. Here the expectation/mean is easier to obtain than the mode, because we have a summation over p.d.f.s and each p.d.f. is of a Dirichlet distribution. The expectation of a Dirichlet variable given one sample of $\mathbf{z}$ can be found in textbooks or on Wikipedia, and the expectation of the same variable over a set of samples is just the average of the per-sample expectations. Technically, we need to obtain multiple de-correlated samples of $\mathbf{z}$ according to E.q. 2; however, one sample may be good enough depending on the data (for example, for estimating $\varphi$, even one sample of $\mathbf{z}$ already contains many topic assignments for occurrences of the same word $w$).
I think E.q. 2 gives us enough motivation and justification for doing Gibbs sampling on LDA. Basically, we need to be able to generate samples from $p(\mathbf{z} \mid \mathbf{w})$, which is an intractable distribution. One popular way to do this is Gibbs sampling, which samples each variable in turn, one at a time, while keeping the others fixed.
In preparation for the derivation of Gibbs sampling for LDA, let me first highlight some important functions (as for the mathematics behind MCMC and Gibbs sampling, readers can refer to some good tutorials on the Internet):
Multinomial Beta function:

$$B(\alpha) = \frac{\prod_{k=1}^{K} \Gamma(\alpha_k)}{\Gamma\!\left( \sum_{k=1}^{K} \alpha_k \right)}$$
Note that in LDA, as in many other Bayesian statistical models, people use conjugate distributions for the likelihood and the prior, e.g., Multinomial and Dirichlet. The trick behind this is that the posterior will be in the same family as the prior, so that we can easily find a closed form for the integration of the posterior. In the case of Multinomial and Dirichlet, we have

$$\int p(\mathbf{z}_m \mid \theta_m)\, p(\theta_m \mid \alpha)\, d\theta_m = \frac{1}{B(\alpha)} \int \prod_{k=1}^{K} \theta_{m,k}^{\,n_m^{(k)} + \alpha_k - 1}\, d\theta_m = \frac{B(n_m + \alpha)}{B(\alpha)},$$

where $n_m$ is the vector of topic counts in document $m$.
This formula is extremely useful for deriving the following learning algorithm. (One more note: although it is called a multinomial distribution in the generative process above, each draw is actually a single-trial multinomial, which degenerates into a discrete (categorical) distribution. This doesn't affect the learning much, as the formula here also holds for a sequence of draws from a discrete distribution.)
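As a quick sanity check on this conjugacy identity, a Monte Carlo estimate of $\mathbb{E}_{\theta \sim \mathrm{Dir}(\alpha)}\big[\prod_k \theta_k^{n_k}\big]$ can be compared against the closed form $B(n + \alpha)/B(\alpha)$. The particular $\alpha$ and $n$ below are arbitrary illustrative choices:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def log_multi_beta(a):
    # log of the multinomial Beta function: sum_k log Gamma(a_k) - log Gamma(sum_k a_k)
    return sum(math.lgamma(x) for x in a) - math.lgamma(sum(a))

alpha = np.array([0.5, 1.0, 2.0])   # illustrative Dirichlet hyperparameter
n = np.array([3, 1, 2])             # illustrative count vector

# Closed form: B(alpha + n) / B(alpha)
closed_form = math.exp(log_multi_beta(alpha + n) - log_multi_beta(alpha))

# Monte Carlo: average of prod_k theta_k^{n_k} over Dirichlet draws
theta = rng.dirichlet(alpha, size=200_000)
monte_carlo = np.mean(np.prod(theta ** n, axis=1))
```

The two numbers should agree to within Monte Carlo error, which is the whole point of having the closed-form integral available.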
Then let’s derive the joint distribution of $\mathbf{w}$ and $\mathbf{z}$, which will be useful just one step later (notations are explained in the beginning of the post):

$$p(\mathbf{w}, \mathbf{z} \mid \alpha, \beta) = p(\mathbf{w} \mid \mathbf{z}, \beta)\, p(\mathbf{z} \mid \alpha) = \prod_{k=1}^{K} \frac{B(n_k + \beta)}{B(\beta)} \prod_{m=1}^{M} \frac{B(n_m + \alpha)}{B(\alpha)} \tag{3}$$
Now we can start to derive the key formula for Gibbs sampling, which is $p(z_{m,n} \mid \mathbf{z}_{\neg(m,n)}, \mathbf{w})$, where $\mathbf{z}_{\neg(m,n)}$ means the topic assignments for all word occurrences other than the one at position $(m,n)$. Without loss of generality, we assume that the position is in document $m$ and associated with the observed word $w$. Since this conditional is proportional (as a function of $z_{m,n}$) to the joint distribution derived above, cancelling all factors that do not involve position $(m,n)$ gives

$$p(z_{m,n} = k \mid \mathbf{z}_{\neg(m,n)}, \mathbf{w}) \propto \frac{B(n_k + \beta)}{B(n_{k,\neg(m,n)} + \beta)} \cdot \frac{B(n_m + \alpha)}{B(n_{m,\neg(m,n)} + \alpha)} \propto \frac{n_{k,\neg(m,n)}^{(w)} + \beta_w}{\sum_{w'=1}^{V} \left( n_{k,\neg(m,n)}^{(w')} + \beta_{w'} \right)} \left( n_{m,\neg(m,n)}^{(k)} + \alpha_k \right) \tag{4}$$

(the document-side normalizer is the same for every $k$ and has been dropped).
The above E.q. 4 gives us a very clear picture for implementing Gibbs sampling: one just needs to initialize $\mathbf{z}$ randomly, set up counters for the three count terms in the final expression of E.q. 4, and, each time a word is sampled, decrease the corresponding counters, sample according to the normalized E.q. 4, then increase the corresponding counters according to the sampling result. After “burn-in”, one can read out (de-correlated) samples and start estimating the parameters. By E.q. 2, Bayes’ rule, and the expectation of the Dirichlet distribution, we obtain the expectations of $\theta$ and $\varphi$ given a sample of $\mathbf{z}$ as follows:

$$\hat{\theta}_{m,k} = \frac{n_m^{(k)} + \alpha_k}{\sum_{k'=1}^{K} \left( n_m^{(k')} + \alpha_{k'} \right)} \tag{5}$$

$$\hat{\varphi}_{k,w} = \frac{n_k^{(w)} + \beta_w}{\sum_{w'=1}^{V} \left( n_k^{(w')} + \beta_{w'} \right)} \tag{6}$$
As mentioned above, one might also want to average parameters over multiple de-correlated samples according to E.q. 2.
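The counter-based procedure just described can be sketched as a compact collapsed Gibbs sampler. The toy corpus, the symmetric scalar hyperparameters, and the iteration count below are illustrative assumptions, not tuned values:

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_gibbs(docs, K, V, alpha, beta, n_iter=200):
    # alpha, beta: symmetric scalar hyperparameters (illustrative simplification)
    n_kw = np.zeros((K, V))            # topic-word counts  n_k^{(w)}
    n_mk = np.zeros((len(docs), K))    # document-topic counts  n_m^{(k)}
    n_k = np.zeros(K)                  # total words assigned to each topic
    z = [rng.integers(K, size=len(d)) for d in docs]   # random initialization
    for m, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[m][n]
            n_kw[k, w] += 1; n_mk[m, k] += 1; n_k[k] += 1
    for _ in range(n_iter):
        for m, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[m][n]
                # remove the current assignment from the counters
                n_kw[k, w] -= 1; n_mk[m, k] -= 1; n_k[k] -= 1
                # E.q. 4, up to a constant in k
                p = (n_kw[:, w] + beta) / (n_k + V * beta) * (n_mk[m] + alpha)
                k = rng.choice(K, p=p / p.sum())
                z[m][n] = k
                n_kw[k, w] += 1; n_mk[m, k] += 1; n_k[k] += 1
    # posterior-mean estimates of theta and phi from this single sample
    theta = (n_mk + alpha) / (n_mk.sum(axis=1, keepdims=True) + K * alpha)
    phi = (n_kw + beta) / (n_k[:, None] + V * beta)
    return theta, phi, z

# Toy corpus: two word groups that should separate into two topics.
docs = [[0, 1, 0, 1, 2], [1, 0, 2, 0, 1], [3, 4, 5, 3, 4], [4, 5, 3, 5, 4]]
theta, phi, z = lda_gibbs(docs, K=2, V=6, alpha=0.1, beta=0.01)
```

A production implementation would additionally discard burn-in iterations and optionally average the estimates over several de-correlated samples, as discussed above.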
Likelihood and perplexity
Until now, I didn’t mention the formula for calculating the likelihood of the data, $p(\mathbf{w} \mid \alpha, \beta)$, which can sometimes be important for testing convergence. In Griffiths and Steyvers (2004), the authors mention an approximation technique for computing $p(\mathbf{w} \mid \alpha, \beta)$: it is the harmonic mean of a set of values of $p(\mathbf{w} \mid \mathbf{z}^{(s)})$, where each $\mathbf{z}^{(s)}$ is sampled from the posterior $p(\mathbf{z} \mid \mathbf{w})$. That is:

$$p(\mathbf{w} \mid \alpha, \beta) \approx \left( \frac{1}{S} \sum_{s=1}^{S} \frac{1}{p(\mathbf{w} \mid \mathbf{z}^{(s)})} \right)^{-1}$$
Using the previous derivation, we can easily compute that:

$$p(\mathbf{w} \mid \mathbf{z}) = \prod_{k=1}^{K} \frac{B(n_k + \beta)}{B(\beta)} = \left( \frac{\Gamma(V\beta)}{\Gamma(\beta)^{V}} \right)^{K} \prod_{k=1}^{K} \frac{\prod_{w=1}^{V} \Gamma\!\left( n_k^{(w)} + \beta \right)}{\Gamma\!\left( n_k^{(\cdot)} + V\beta \right)} \tag{7}$$
where we have assumed a symmetric $\beta$ (a scalar) and written $n_k^{(\cdot)} = \sum_{w=1}^{V} n_k^{(w)}$. However, E.q. 7 can easily lead to numerical underflow/overflow if computed directly (due to the large Gamma terms). Here we can take its log and see what we get:

$$\log p(\mathbf{w} \mid \mathbf{z}) = C + \sum_{k=1}^{K} \left[ \sum_{w=1}^{V} \log \Gamma\!\left( n_k^{(w)} + \beta \right) - \log \Gamma\!\left( n_k^{(\cdot)} + V\beta \right) \right]$$
where $C$ is a constant:

$$C = K \left( \log \Gamma(V\beta) - V \log \Gamma(\beta) \right)$$
The log-Gamma function is usually built into the programming language, so you can use it directly. But now, since we only have $\ell_s = \log p(\mathbf{w} \mid \mathbf{z}^{(s)})$, and it can be quite “negatively large” (depending on the corpus), this again creates a computational difficulty when we compute the harmonic mean. So we again take the log of the harmonic mean, and the trick below enables us to compute it accurately:

$$\log p(\mathbf{w} \mid \alpha, \beta) \approx \log S - \log \sum_{s=1}^{S} e^{-\ell_s} = \log S - b - \log \sum_{s=1}^{S} e^{-\ell_s - b}$$

Here $b$ is introduced to make the log-sum-exp computed correctly (without overflow); usually we can set $b = \max_s (-\ell_s)$. Now, even if the corpus contains millions of documents, the above likelihood calculation should still work, since every intermediate quantity stays within the range of floating point numbers.
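The log-sum-exp trick for the harmonic mean can be sketched as follows; the per-sample log-likelihood values below are hypothetical stand-ins (a naive `exp(-l)` on numbers of this magnitude would overflow a double):

```python
import math

def log_harmonic_mean(log_liks):
    # log p(w) ~= log S - log sum_s exp(-l_s), shifted by b = max_s(-l_s)
    # so that every exp() argument is <= 0 and cannot overflow.
    S = len(log_liks)
    b = max(-l for l in log_liks)
    log_sum = b + math.log(sum(math.exp(-l - b) for l in log_liks))
    return math.log(S) - log_sum

log_liks = [-10452.3, -10447.8, -10450.1]   # hypothetical sample log-likelihoods
estimate = log_harmonic_mean(log_liks)
```

The harmonic mean is dominated by the smallest likelihood in the set, so the estimate lands close to the most negative $\ell_s$.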
The likelihood of held-out data is sometimes also a useful quantity: it reflects the generalization/predictive power of the model, and it is usually measured by perplexity, which is essentially the inverse of the geometric mean per-word likelihood. It is defined by:

$$\mathrm{perplexity}(\mathbf{w}_{\mathrm{test}}) = \exp\!\left( - \frac{\log p(\mathbf{w}_{\mathrm{test}})}{N_{\mathrm{test}}} \right)$$

where $N_{\mathrm{test}}$ is the total number of word occurrences in the held-out documents.
This can be evaluated as follows: first hold out some documents, then sample topic assignments for the words in those documents given a point estimate $\hat{\varphi}$, and then the likelihood (and hence the perplexity) of the held-out documents can be calculated as stated above.
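A minimal sketch of the perplexity computation, with hypothetical numbers for the held-out log-likelihood and word count. A handy sanity check: a uniform model over a vocabulary of size $V$ has perplexity exactly $V$.

```python
import math

def perplexity(log_lik, n_words):
    # exp( - log p(w_test) / N_test ): inverse geometric-mean per-word likelihood
    return math.exp(-log_lik / n_words)

# Hypothetical held-out log-likelihood over 2000 word occurrences.
ppl = perplexity(log_lik=-10451.3, n_words=2000)
```

Lower perplexity means the model assigns higher average probability to unseen words.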
Some resources on Gibbs Sampling for LDA
GibbsLDA++: A C/C++ Implementation of Latent Dirichlet Allocation using Gibbs sampling.
WinBUGS: Gibbs sampling tools for graphical models.