I am not a statistician (more of a researchy web developer) but I've been hearing a lot about scipy and R these days. So out of curiosity I wanted to ask this question (though it might sound silly to the experts around here) because I am not sure of the advances in this area and want to know how people without a sound statistics background approach these problems.
Given a set of real numbers observed from an experiment, and assuming they belong to one of the many distributions out there (Weibull, Erlang, Cauchy, Exponential, etc.), are there any automated ways of finding the right distribution and the distribution parameters for the data? Are there any good tutorials that walk me through the process?
Real-world Scenario: For instance, let us say I ran a small survey of, say, 300 people and recorded how many people each person talks to every day, so I have the following information:
1 10
2 5
3 20
... ...
where X Y tells me that person X talked to Y people during the period of the survey. Now, using the information from the 300 people, I want to fit this data to a model. The question boils down to this: are there any automated ways of finding the right distribution and its parameters for this data, or if not, is there a good step-by-step procedure to achieve the same?
Distribution fitting is the process of selecting the statistical distribution that best fits your data. Examples of statistical distributions include the normal, gamma, Weibull, and smallest extreme value distributions.
A given distribution is a good fit if the data points roughly follow a straight line on a probability plot and the goodness-of-fit p-value is greater than 0.05.
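If you want to try those two checks yourself, here is a minimal sketch in base R, assuming your observations are in a numeric vector x and you are checking a normal fit (both assumptions are just for illustration):

qqnorm( x )                              # probability (Q-Q) plot: points near the line suggest a good fit
qqline( x )
ks.test( x, "pnorm", mean(x), sd(x) )    # Kolmogorov-Smirnov test: a large p-value means no evidence against the fit

Strictly speaking, the KS p-value is only approximate when the parameters are estimated from the same data you are testing, so treat it as a rough guide.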
This is a complicated question, and there are no perfect answers. I'll try to give you an overview of the major concepts, and point you in the direction of some useful reading on the topic.
Assume that you have a one-dimensional set of data, and a finite set of probability distribution functions that you think the data may have been generated from. You can consider each distribution independently and try to find parameters that are reasonable given your data. There are a couple of standard methods for setting the parameters of a probability distribution function given data; in my experience, maximum likelihood has been preferred in recent years, although this may not be the case in every field.
Here's a concrete example of how to estimate parameters in R. Consider a set of random points generated from a Gaussian distribution with mean of 0 and standard deviation of 1:
x = rnorm( n = 100, mean = 0, sd = 1 )
Assume that you know the data were generated using a Gaussian process, but you've forgotten (or never knew!) the parameters for the Gaussian. You'd like to use the data to give you reasonable estimates of the mean and standard deviation. In R, there is a standard library that makes this very straightforward:
library(MASS)
params = fitdistr( x, "normal" )
print( params )
This gave me the following output:
      mean           sd     
  -0.17922360    1.01636446 
 ( 0.10163645) ( 0.07186782)
Those are fairly close to the right answer, and the numbers in parentheses are the standard errors of the estimates, which you can use to build approximate confidence intervals around the parameters. Remember that every time you generate a new set of points, you'll get slightly different estimates.
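For example, a rough 95% confidence interval can be built from those standard errors with a normal approximation (a sketch, using the params object from above):

params$estimate - 1.96 * params$sd   # approximate lower bounds for mean and sd
params$estimate + 1.96 * params$sd   # approximate upper bounds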
Mathematically, this uses maximum likelihood to estimate both the mean and standard deviation of the Gaussian. The likelihood is (in this case) the probability of the data given particular values of the parameters, and maximum likelihood estimation finds the parameter values that maximize the probability of generating the input data; for some distributions this requires numerical optimization. In R, most of the work is done by fitdistr, which in certain cases will call optim.
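To see what that means concretely, here is a minimal sketch of doing the same maximization by hand with optim; the function name negloglik and the starting values are just illustrative, and the standard deviation is optimized on the log scale so the optimizer cannot wander into negative values:

# Negative log-likelihood of the data under a normal distribution;
# par[1] is the mean, par[2] is log(sd).
negloglik <- function( par, data ) {
    -sum( dnorm( data, mean = par[1], sd = exp( par[2] ), log = TRUE ) )
}
fit <- optim( par = c( 0, 0 ), fn = negloglik, data = x )
c( mean = fit$par[1], sd = exp( fit$par[2] ) )   # should be close to fitdistr's estimates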
You can extract the log-likelihood from your parameters like this:
print( params$loglik )
[1] -139.5772
It's more common to work with the log-likelihood rather than the likelihood, to avoid rounding errors. The joint probability of your data is a product of probabilities that are each less than 1, so even for a small data set it approaches 0 very quickly; summing the log-probabilities of your data is equivalent to multiplying the probabilities and avoids that underflow. Because each of those probabilities is less than 1, the log-likelihood here is negative, and a larger (less negative) log-likelihood means a better fit to your data.
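As a sanity check (a sketch using the params object fitted earlier), the reported value is just the sum of the log-densities of the data at the fitted parameters:

sum( dnorm( x, mean = params$estimate["mean"], sd = params$estimate["sd"], log = TRUE ) )
# matches the value of params$loglik printed above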
With computational tools like this, it's easy to estimate parameters for any distribution. Consider this example:
x = x[ x >= 0 ]
distributions = c( "normal", "exponential" )
for ( dist in distributions ) {
    print( paste( "fitting parameters for ", dist ) )
    params = fitdistr( x, dist )
    print( params )
    print( summary( params ) )
    print( params$loglik )
}
The exponential distribution doesn't generate negative numbers, so I removed them in the first line. The output (which is stochastic) looked like this:
[1] "fitting parameters for normal" mean sd 0.72021836 0.54079027 (0.07647929) (0.05407903) Length Class Mode estimate 2 -none- numeric sd 2 -none- numeric n 1 -none- numeric loglik 1 -none- numeric [1] -40.21074 [1] "fitting parameters for exponential" rate 1.388468 (0.196359) Length Class Mode estimate 1 -none- numeric sd 1 -none- numeric n 1 -none- numeric loglik 1 -none- numeric [1] -33.58996
The exponential distribution is actually slightly more likely to have generated this data than the normal distribution, likely because the exponential distribution doesn't have to assign any probability density to negative numbers.
All of these estimation problems get worse when you try to fit your data to more distributions. Distributions with more parameters are more flexible, so they'll fit your data better than distributions with fewer parameters. Also, some distributions are special cases of others (for example, the Exponential is a special case of the Gamma). Because of this, it's very common to use prior knowledge to constrain your choice of models to a subset of all possible models.
One trick to get around some problems in parameter estimation is to generate a lot of data, and leave some of the data out for cross-validation. To cross-validate your fit of parameters to data, leave some of the data out of your estimation procedure, and then measure each model's likelihood on the left-out data.
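Here is a minimal sketch of that idea, using the non-negative data and the MASS::fitdistr calls from above (the 70/30 split and the seed are arbitrary choices for illustration):

set.seed( 1 )                                          # illustrative seed
train_idx <- sample( seq_along( x ), size = floor( 0.7 * length( x ) ) )
train <- x[ train_idx ]
test  <- x[ -train_idx ]

fit_norm <- fitdistr( train, "normal" )
fit_exp  <- fitdistr( train, "exponential" )

# Held-out log-likelihoods: the larger (less negative) one is the better model
sum( dnorm( test, fit_norm$estimate["mean"], fit_norm$estimate["sd"], log = TRUE ) )
sum( dexp( test, fit_exp$estimate["rate"], log = TRUE ) )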
Take a look at fitdistrplus (http://cran.r-project.org/web/packages/fitdistrplus/index.html).
A couple of quick things to note:

- descdist, which provides a plot of skew vs. kurtosis of the data and also shows some common distributions.
- fitdist, which allows you to fit any distribution you can define in terms of a density and a CDF.
- gofstat, which computes the KS and AD statistics that measure the distance of the fit from the data.
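A minimal sketch of those three functions in action, assuming the package is installed and x is the non-negative data from the earlier example (the choice of Weibull and exponential candidates here is just illustrative):

library( fitdistrplus )

descdist( x )                           # Cullen and Frey skewness-kurtosis plot
fit_w <- fitdist( x, "weibull" )        # fit candidate distributions by maximum likelihood
fit_e <- fitdist( x, "exp" )
plot( fit_w )                           # density, CDF, Q-Q and P-P diagnostic plots
gofstat( list( fit_w, fit_e ) )         # KS, AD and other goodness-of-fit statistics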