I want to fit a piecewise regression with one break point xt, such that for x < xt we have a quadratic polynomial and for x >= xt we have a straight line. The two pieces should join smoothly, with continuity up to the 1st derivative at xt. Here's a picture of what it may look like:
I have parametrized my piecewise regression function as:

f(x) = a * x + b * x ^ 2                     for x < xt
f(x) = a * xt + b * xt ^ 2 + c * (x - xt)    for x >= xt

where a, b, c and xt are parameters to be estimated.
I want to compare this model with a quadratic polynomial regression over the whole range in terms of adjusted R-squared.
Here is my data:
y <- c(1, 0.59, 0.15, 0.078, 0.02, 0.0047, 0.0019, 1, 0.56, 0.13,
0.025, 0.0051, 0.0016, 0.00091, 1, 0.61, 0.12, 0.026, 0.0067,
0.00085, 4e-04)
x <- c(0, 5.53, 12.92, 16.61, 20.3, 23.07, 24.92, 0, 5.53, 12.92,
16.61, 20.3, 23.07, 24.92, 0, 5.53, 12.92, 16.61, 20.3, 23.07,
24.92)
My attempt goes as follows, for a known xt:
z <- pmax(0, x - xt)
x1 <- pmin(x, xt)
fit <- lm(y ~ x1 + I(x1 ^ 2) + z - 1)
But the straight line does not appear to be tangent to the quadratic polynomial at xt. Where am I going wrong?
This is an excellent exercise (maybe a hard one) for digesting the theory and implementation of linear models. My answer will contain two parts: first, estimating the model for a given break point c; second, selecting the best break point by a grid search.
I have to use a different parametrization, because the one you gave in your question is wrong! Your parametrization only ensures continuity of the function value, but not of the first derivative! That is why your fitted line is not tangent to the fitted quadratic polynomial at xt: with your setup, the left derivative at xt is a + 2 * b * xt while the right derivative is c, and nothing forces them to be equal. Instead, parametrize the model as

f(x) = b0 + b1 * (x - c) + b2 * pmin(x - c, 0) ^ 2

where c is the break point. For x >= c this is the straight line b0 + b1 * (x - c); for x < c it is the quadratic b0 + b1 * (x - c) + b2 * (x - c) ^ 2. Both pieces have value b0 and first derivative b1 at x = c, so continuity up to the first derivative holds by construction.
## generate design matrix
getX <- function (x, c) {
  x <- x - c
  cbind("beta0" = 1, "beta1" = x, "beta2" = pmin(x, 0) ^ 2)
}
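As a quick sanity check (the trial break point 16 here is an arbitrary value picked for illustration, not an estimate), you can inspect the design matrix this produces for the data above:

## illustration: design matrix at an arbitrary trial break point
getX(x, 16)

Rows with x < 16 get a nonzero beta2 column (the quadratic part); rows with x >= 16 have it zeroed out, leaving just the straight line.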
The function est below wraps up .lm.fit (for maximum efficiency) for estimation and inference of the model, at a given c:
## `x`, `y` give data points; `c` is known break point
est <- function (x, y, c) {
  ## model matrix
  X <- getX(x, c)
  p <- dim(X)[2L]
  ## solve least squares with QR factorization
  fit <- .lm.fit(X, y)
  ## compute Pearson estimate of `sigma ^ 2`
  r <- c(fit$residuals)
  n <- length(r)
  RSS <- c(crossprod(r))
  sig2 <- RSS / (n - p)
  ## coefficients summary table
  beta <- fit$coefficients
  R <- "dimnames<-"(fit$qr[1:p, ], NULL)
  Rinv <- backsolve(R, diag(p))
  se <- sqrt(rowSums(Rinv ^ 2) * sig2)
  tstat <- beta / se
  pval <- 2 * pt(abs(tstat), n - p, lower.tail = FALSE)
  tab <- matrix(c(beta, se, tstat, pval), nrow = p, ncol = 4L,
                dimnames = list(dimnames(X)[[2L]],
                                c("Estimate", "Std. Error", "t value", "Pr(>|t|)")))
  ## 2 * negative log-likelihood
  nega2logLik <- n * log(2 * pi * sig2) + (n - p)
  ## AIC / BIC
  aic <- nega2logLik + 2 * (p + 1)
  bic <- nega2logLik + log(n) * (p + 1)
  ## multiple R-squared and adjusted R-squared
  TSS <- c(crossprod(y - sum(y) / n))
  r.squared <- 1 - RSS / TSS
  adj.r.squared <- 1 - sig2 * (n - 1) / TSS
  ## return
  list(coefficients = beta, residuals = r, fitted.values = c(X %*% beta),
       R = R, sig2 = sig2, coef.table = tab, aic = aic, bic = bic, c = c,
       RSS = RSS, r.squared = r.squared, adj.r.squared = adj.r.squared)
}
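For example, a minimal sketch of fitting at a fixed break point (again, c = 16 is just an arbitrary trial value):

## sketch: fit the piecewise model at a fixed, arbitrary break point
fit16 <- est(x, y, c = 16)
fit16$coef.table      ## coefficient summary, like `summary.lm`
fit16$adj.r.squared   ## for comparison with the global quadratic fit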
As you can see, it also returns various summary statistics, as if summary.lm had been called. Now let's write another wrapper function, choose.c. It sketches RSS against c.grid and returns the best model, at the selected c.
choose.c <- function (x, y, c.grid) {
  if (is.unsorted(c.grid)) stop("'c.grid' is not increasing")
  ## model list
  lst <- lapply(c.grid, est, x = x, y = y)
  ## RSS trace
  RSS <- sapply(lst, "[[", "RSS")
  ## verbose
  plot(c.grid, RSS, type = "b", pch = 19)
  ## find `c` / the model minimizing `RSS`
  lst[[which.min(RSS)]]
}
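A minimal usage sketch, assuming a grid over the interior of the observed x range (the bounds and step here are my guesses, not from the original answer):

## sketch: grid search for the break point over an assumed range
c.grid <- seq(5, 20, by = 0.1)
best <- choose.c(x, y, c.grid)
best$c  ## the break point minimizing RSS on the grid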
So far so good. To complete the story, we also want a predict routine.
pred <- function (model, x.new) {
  ## prediction matrix
  X <- getX(x.new, model$c)
  p <- dim(X)[2L]
  ## predicted mean
  fit <- X %*% model$coefficients
  ## standard error of the estimated mean
  Qt <- forwardsolve(t(model$R), t(X))
  se <- sqrt(colSums(Qt ^ 2) * model$sig2)
  ## 95%-confidence interval
  alpha <- qt(0.025, length(model$residuals) - p)
  lwr <- fit + alpha * se
  upr <- fit - alpha * se
  ## return
  matrix(c(fit, se, lwr, upr), ncol = 4L,
         dimnames = list(NULL, c("fit", "se", "lwr", "upr")))
}
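A sketch of how the pieces work together, assuming best is the model returned by the choose.c sketch above:

## sketch: plot the data with the fitted curve and 95% confidence band
x.new <- seq(min(x), max(x), length.out = 200)
p <- pred(best, x.new)
plot(x, y)
lines(x.new, p[, "fit"], lwd = 2)
lines(x.new, p[, "lwr"], lty = 2)
lines(x.new, p[, "upr"], lty = 2)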
李哲源 is a genius, but I would like to suggest another solution, using the Heaviside (unit step) function: H(x) = 1 if x > 0; H(x) = 0 if x ≤ 0.
H <- function(x) as.numeric(x>0)
Then the function to fit is f(x, c) = b0 + b1 * (x - c) + b2 * (x - c)^2 * H(c - x), and it can be fitted with nls:
fit <- nls(y ~ b0 + b1 * (x - c) + b2 * (x - c) ^ 2 * H(c - x),
           start = list(b0 = 0, b1 = 0, b2 = 1, c = 0.5))
Testing it with 李哲源's toy example gives:
summary(fit)$parameters
Estimate Std. Error t value Pr(>|t|)
b0 0.1199124 0.03177064 3.774315 2.777969e-04
b1 -0.2578121 0.07856856 -3.281365 1.440945e-03
b2 2.4316379 0.40105205 6.063148 2.624975e-08
c 0.5400831 0.05287111 10.215089 5.136550e-17
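To try the same idea on the data from the question, a sketch like the following might work; the starting values are rough guesses read off the data (not from the original answer), and nls may need them tuned to converge:

## sketch: the nls approach on the question's data
## starting values are guesses and may require adjustment
fit2 <- nls(y ~ b0 + b1 * (x - c) + b2 * (x - c) ^ 2 * H(c - x),
            start = list(b0 = 0.05, b1 = -0.01, b2 = 0.003, c = 16))
summary(fit2)$parameters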