I would like to simulate (i.e. add rnorm error to) a transformed vector of dependent values, but I don't know how to preserve the transformation's properties while doing so. I made a toy example to demonstrate the problem.
I have a vector of interval observations (obs) which I transform for modelling:
set.seed(123)
sd = 0.1                # observation noise (masks stats::sd, which is fine here)
obs = rnorm(10, 10, 3)  # each value is an age class
# transform observations (in reality something more complex based on cumulative logits)
obs = obs / sum(obs)    # now sums to one
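Note that dividing by the sum discards the original scale, so inverting the transform later requires keeping the normalising constant around. A minimal sketch of that idea (the helper names `to_simplex`/`from_simplex` and `obs_raw` are my own, not from the question):

```r
set.seed(123)
obs_raw <- rnorm(10, 10, 3)            # untransformed ages, as above
total   <- sum(obs_raw)                # store the normalising constant
to_simplex   <- function(v) v / sum(v)
from_simplex <- function(p) p * total  # inverse only recoverable given 'total'
p <- to_simplex(obs_raw)
all.equal(from_simplex(p), obs_raw)    # TRUE
```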
These go into a model that estimates the standard deviation based on the transformation:
# model
predict = function(x){
  pred = (1:10)^x
  pred / sum(pred)     # predictions live on the same transformed (simplex) scale
}
model = function(x){
  # negative log-likelihood: dnorm is vectorised, and log = TRUE is needed
  # so that -sum() really is an NLL (sd is estimated in reality)
  -sum(dnorm(obs, predict(x), sd, log = TRUE))
}
mypar=optim(0,model,lower=0,upper=2,method='Brent')$par
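Since the comment notes that sd is estimated in reality, here is a hedged sketch of how the exponent and sd could be estimated jointly (it reuses `obs` and `predict` from above; the parameterisation via `log(sd)` is my own choice to keep sd positive):

```r
# sketch: estimate exponent and sd together; par = c(exponent, log_sd)
model2 <- function(par) {
  x <- par[1]
  s <- exp(par[2])               # optimise log(sd) so sd stays positive
  -sum(dnorm(obs, predict(x), s, log = TRUE))
}
fit <- optim(c(0, log(0.1)), model2, method = "Nelder-Mead")
fit$par                          # c(exponent, log_sd)
```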
# from my model I get predictions
out=predict(mypar)
# I would now like to simulate observations like this :
# (in reality I do this for predicted future values)
simu=mapply(rnorm,1,out,sd)
sum(simu)
[1] 1.208622
But if I do this, then my simulations of course no longer follow the transformation rule. In this toy case, the sum of simu should still be one.
I could apply an inverse transformation to the predicted values and simulate on that scale, but then my sd is not "appropriate" anymore.
How do I deal with this? Do I need to transform my sd somehow while doing the above (and if so, how)? Or is there another method?
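One common way to handle this (not necessarily what was ultimately used here) is to add the noise on an unconstrained scale and then re-apply the normalisation, so every simulated vector lands back on the unit simplex. A logistic-normal-style sketch, with the function `simulate_simplex` being my own illustration:

```r
# sketch: Gaussian noise on the log scale, then renormalise,
# so each simulated vector sums to one by construction
simulate_simplex <- function(p, sd) {
  z <- log(p) + rnorm(length(p), 0, sd)  # noise on the unconstrained scale
  exp(z) / sum(exp(z))                   # softmax back to the simplex
}
simu <- simulate_simplex(out, sd)
sum(simu)                                # exactly 1, up to floating point
```

Note that sd now acts on the log scale, so it is not directly comparable to an sd estimated on the transformed scale; that is exactly the "my sd is not appropriate anymore" concern, and it would need to be estimated on the same scale the noise is added on.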
Update: the problem arose from a tiny mistake in my inverse transformation function. Now it works fine; I can add error and the sum of the observations is one. I would delete the question, but it cannot be deleted because it has a bounty.