I need to use TensorFlow Probability to implement Markov Chain Monte Carlo with sampling from a Bernoulli distribution. However, my attempts produce results that are not consistent with what I would expect from a Bernoulli distribution.
I modified the example given in the documentation of tfp.mcmc.sample_chain (sampling from a diagonal-variance Gaussian) to draw from a Bernoulli distribution instead. Since the Bernoulli distribution is discrete, I used the RandomWalkMetropolis transition kernel rather than the Hamiltonian Monte Carlo kernel, which I expected would not work since it computes a gradient.
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def make_likelihood(event_prob):
    return tfd.Bernoulli(probs=event_prob, dtype=tf.float32)

dims = 1
event_prob = 0.3
num_results = 30000
likelihood = make_likelihood(event_prob)

states, kernel_results = tfp.mcmc.sample_chain(
    num_results=num_results,
    current_state=tf.zeros(dims),
    kernel=tfp.mcmc.RandomWalkMetropolis(
        target_log_prob_fn=likelihood.log_prob,
        new_state_fn=tfp.mcmc.random_walk_normal_fn(scale=1.0),
        seed=124),
    num_burnin_steps=5000)

chain_vals = states

# Compute sample stats.
sample_mean = tf.reduce_mean(states, axis=0)
sample_var = tf.reduce_mean(
    tf.squared_difference(states, sample_mean), axis=0)

# Initialize the variables.
init_op = tf.global_variables_initializer()

# Run the graph.
with tf.Session() as sess:
    sess.run(init_op)
    [sample_mean_, sample_var_, chain_vals_] = sess.run(
        [sample_mean, sample_var, chain_vals])

chain_samples = chain_vals_

print('Sample mean = {}'.format(sample_mean_))
print('Sample var = {}'.format(sample_var_))

fig, axes = plt.subplots(2, 1)
fig.set_size_inches(12, 10)
axes[0].plot(chain_samples)
axes[0].title.set_text("values sample chain tfd.Bernoulli")
sns.kdeplot(chain_samples[:, 0], ax=axes[1], shade=True)
axes[1].title.set_text("chain tfd.Bernoulli distribution")
fig.tight_layout()
plt.show()
I expected to see values for the states of the Markov chain in the interval [0,1].
The result for the Markov chain values does not look like what is expected for a Bernoulli distribution, nor does the KDE plot, as shown in this figure:
Do I have a conceptual flaw in my example, or a mistake in using the TensorFlow Probability API?
Or is there possibly an issue with the TensorFlow Probability implementation of Markov Chain Monte Carlo for a discrete distribution such as the Bernoulli distribution?
I think the source of your confusing experience is that you are still using a continuous proposal distribution in the RandomWalkMetropolis transition. The convention in TensorFlow Probability for integer distributions (including Bernoulli) is to implement a continuous relaxation by default. IIRC, for Bernoulli, that's pdf(x) ~ p ** x * (1 - p) ** (1 - x); once x becomes negative, this will steadily drive your random walk Markov chain toward -inf, as you observe.
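To see why the chain drifts that way, note that for p < 0.5 the relaxed log-density has a negative slope in x, so it grows without bound as x decreases. A quick sketch of the relaxed log-pmf in plain Python (computing the formula directly rather than calling TFP) illustrates this:

```python
import math

def relaxed_bernoulli_log_prob(x, p=0.3):
    # Continuous relaxation of the Bernoulli log-pmf:
    #   log pdf(x) = x * log(p) + (1 - x) * log(1 - p)
    # which is defined for any real x, not just x in {0, 1}.
    return x * math.log(p) + (1 - x) * math.log(1 - p)

# For p = 0.3 the slope log(p) - log(1 - p) is negative, so the
# log-density keeps increasing as x walks off toward -inf.
for x in [1.0, 0.0, -1.0, -5.0]:
    print(x, relaxed_bernoulli_log_prob(x))
```

Every proposal that steps further negative is therefore an uphill move for the Metropolis acceptance rule, which is exactly the runaway behavior in the plot.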
Several things you can do about this:
1. Pass validate_args=True to the Bernoulli constructor. This will crash if x is not 0 or 1, helping you detect the issue (but don't do this if you want non-integer results in the interval [0, 1]).
2. Use a different proposal function with RandomWalkMetropolis, one that keeps the chain on the support of the distribution.
I also filed a ticket about making a TransitionKernel for independence proposals (such as what I imagine you might need): https://github.com/tensorflow/probability/issues/218
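To make the second option concrete, one proposal that respects the discreteness is a symmetric 0 <-> 1 flip. Here is a minimal sketch of that Metropolis scheme in plain Python (the helper name is hypothetical, not part of the TFP API), just to show that a discreteness-respecting proposal recovers the expected Bernoulli behavior:

```python
import random

def metropolis_bernoulli_flip(p, num_results, num_burnin, seed=124):
    # Metropolis sampling of Bernoulli(p) on the support {0, 1},
    # using the deterministic flip x -> 1 - x as a symmetric proposal.
    rng = random.Random(seed)
    x = 0
    samples = []
    for i in range(num_burnin + num_results):
        proposal = 1 - x
        # Acceptance ratio pi(proposal) / pi(x) for the Bernoulli pmf.
        ratio = p / (1 - p) if proposal == 1 else (1 - p) / p
        if rng.random() < min(1.0, ratio):
            x = proposal
        if i >= num_burnin:
            samples.append(x)
    return samples

samples = metropolis_bernoulli_flip(0.3, num_results=30000, num_burnin=5000)
print(sum(samples) / len(samples))  # close to 0.3
```

Because the flip proposal is symmetric, the plain Metropolis acceptance ratio suffices, and every state the chain visits is a legal Bernoulli outcome; a new_state_fn with the same spirit could be plugged into RandomWalkMetropolis.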