I fitted a linear regression; the t-tests on the coefficients are two-tailed with 178 degrees of freedom. The summary
function gives me a p-value for each of my two t-values:
t value Pr(>|t|)
5.06 1.04e-06 ***
10.09 < 2e-16 ***
...
...
F-statistic: 101.8 on 1 and 178 DF, p-value: < 2.2e-16
I want to calculate the p-values of the t-values manually, using this formula:
p = 1 - 2*F(|t|)
p_value_1 <- 1 - 2 * pt(abs(t_1), 178)
p_value_2 <- 1 - 2 * pt(abs(t_2), 178)
I don't get the same p-values as in the model summary. Therefore, I want to know how the summary function's Pr(>|t|) differs from my formula, as I can't find the definition of Pr(>|t|).
Can you help me? Thanks a lot!
The p-value is calculated using the sampling distribution of the test statistic under the null hypothesis, the sample data, and the type of test being done (lower-tailed, upper-tailed, or two-sided). For a lower-tailed test, p-value = P(TS ≤ ts | H0 is true) = cdf(ts); for a two-sided test, the probability in both tails is added together.
For the coefficients of a simple regression, the test statistic follows a t distribution with n − 2 degrees of freedom (df), and the p-value is 2 × the area past |t| under that curve. In this example, df = 180 − 2 = 178.
Another way to approximate the p-value for a given t statistic is a t distribution table: look up the row for the relevant degrees of freedom and find the two critical values that |t| lies between; the p-value is bracketed by the corresponding tail probabilities.
What is being tested is whether the true value of the coefficient is zero (no relationship between predictor and response); this is a hypothesis test.
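To make the connection concrete, here is a minimal sketch using simulated data (the variable names, coefficients, and seed below are made up for illustration, not the asker's actual data): fit a simple regression with n = 180 observations, so the residual df is 178, and compare summary()'s Pr(>|t|) column with the manual two-sided calculation.
set.seed(1)
n <- 180                        # n - 2 = 178 residual degrees of freedom
x <- rnorm(n)
y <- 1 + 0.5 * x + rnorm(n)     # simulated response
fit <- lm(y ~ x)
tvals <- coef(summary(fit))[, "t value"]
# two-sided p-values computed by hand from the t-statistics
manual <- 2 * pt(-abs(tvals), df.residual(fit))
cbind(summary = coef(summary(fit))[, "Pr(>|t|)"], manual = manual)
The two columns printed by cbind() agree, which is exactly the relationship the answers below spell out.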
It is
2 * pt(-abs(t_value), df)
For example:
2 * pt(-5.06, 178)
#[1] 1.038543e-06
2 * pt(-10.09, 178)
#[1] 3.223683e-19
Alternatively, use
2 * pt(abs(t_value), df, lower.tail = FALSE)
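For comparison, the formula in the question, 1 - 2*pt(abs(t), df), is not the two-sided tail probability: because pt(abs(t), df) is greater than 0.5 for any positive t, it equals p - 1 and is therefore negative. A quick check with the two t-values copied from the question makes the difference visible:
t_values <- c(5.06, 10.09)
df <- 178
# the question's formula: 1 - 2*F(|t|) = p - 1, negative here
1 - 2 * pt(abs(t_values), df)
# the correct two-sided p-values: 2*(1 - F(|t|)) = 2*F(-|t|)
2 * pt(-abs(t_values), df)
The two results differ by exactly 1, since 2*(1 - F(|t|)) - (1 - 2*F(|t|)) = 1; the factor of 2 has to multiply the whole tail area 1 - F(|t|), not just F(|t|).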
We can compute the p-value Pr(>|t|) in the following equivalent ways:
tval <- 5.06
df <- 178
# compute area under the t-pdf
integrate(function(x) dt(x, df), -Inf, -tval)$value + integrate(function(x) dt(x, df), tval, Inf)$value # sum of two areas
# [1] 1.038543e-06
1-integrate(function(x) dt(x, df), -tval, tval)$value
# [1] 1.038543e-06
# 2-sided t-test: Pr_T(|t|>|tval|) = 2*(1 - F_T(|tval|)) = 2*F_T(-|tval|), where T~t(df=178)
2*(1 - pt(tval, df))
# [1] 1.038543e-06
2*pt(tval, df, lower.tail = FALSE)
# [1] 1.038543e-06
1 - (pt(tval, df) - pt(-tval, df))
# [1] 1.038543e-06
2*pt(-tval, df)
# [1] 1.038543e-06
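Relatedly, at the 5% significance level the two-sided test rejects exactly when |t| exceeds the critical value qt(0.975, df), i.e. exactly when the p-value computed above falls below 0.05. A small sketch of that equivalence, which is presumably what the "5% significance level" in the plot title below refers to:
crit <- qt(0.975, df)    # two-sided 5% critical value, about 1.97 for df = 178
crit
abs(tval) > crit         # TRUE here, so the p-value is below 0.05
2 * pt(-crit, df)        # equals 0.05 by construction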
The following illustrates the same thing geometrically, using a different (less extreme) value of the t-statistic. As we can see, there are two symmetric blue regions that together represent the corresponding probability under the two-sided t-test.
df <- 178
x <- seq(-6, 6, 0.01)
y <- dt(x, df)               # density of the t distribution with 178 df
tval <- 1.25                 # a less extreme t-statistic, chosen so the tails are visible
plot(x, y, type = 'l', main = 't-distribution and p-value (5% significance level, 2-sided t-test)')
abline(h = 0)
abline(v = c(tval, -tval), col = 'red')   # the observed +/- t-statistic
index1 <- which(x >= -tval)[1]            # first grid point at or beyond -tval
index2 <- which(x >= tval)[1]             # first grid point at or beyond +tval
# shade the left tail, P(T <= -tval)
polygon(x = c(x[1:index1], x[index1], x[1]),
        y = c(y[1:index1], 0, 0),
        col = "blue")
# shade the right tail, P(T >= tval)
polygon(x = c(x[index2], x[index2], x[index2:length(x)]),
        y = c(0, y[index2], y[index2:length(y)]),
        col = "blue")