I've found that mem::drop does not necessarily run near where it gets called, which likely results in Mutex or RwLock guards being held during expensive computations. How can I control when drop gets called?

As a simple example, I've made the following test for a zeroing drop of cryptographic material work by using unsafe { ::std::intrinsics::drop_in_place(&mut s); } instead of simply ::std::mem::drop(s).
#[derive(Debug, Default)]
pub struct Secret<T>(pub T);

impl<T> Drop for Secret<T> {
    fn drop(&mut self) {
        unsafe { ::std::intrinsics::volatile_set_memory::<Secret<T>>(self, 0, 1); }
    }
}

#[derive(Debug, Default)]
pub struct AnotherSecret(pub [u8; 32]);

impl Drop for AnotherSecret {
    fn drop(&mut self) {
        unsafe { ::std::ptr::write_volatile::<AnotherSecret>(self, AnotherSecret([0u8; 32])); }
        assert_eq!(self.0, [0u8; 32]);
    }
}
#[cfg(test)]
mod tests {
    macro_rules! zeroing_drop_test {
        ($n:path) => {
            let p: *const $n;
            {
                let mut s = $n([3u8; 32]);
                p = &s;
                unsafe { ::std::intrinsics::drop_in_place(&mut s); }
            }
            unsafe { assert_eq!((*p).0, [0u8; 32]); }
        }
    }

    #[test]
    fn zeroing_drops() {
        zeroing_drop_test!(super::Secret<[u8; 32]>);
        zeroing_drop_test!(super::AnotherSecret);
    }
}
This test fails if I use ::std::mem::drop(s) or even

#[inline(never)]
pub fn drop_now<T>(_x: T) { }
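The failure can be made visible directly: passing a value to a function like drop gives it a new stack slot, at least in unoptimized builds, so the original bytes are left behind. A minimal sketch (helper names are mine, not from the question):

```rust
fn addr_of_val<T>(t: &T) -> usize {
    t as *const T as usize
}

// Consumes its argument, like ::std::mem::drop, and reports where the
// moved-in value ended up on the stack.
fn consume<T>(t: T) -> usize {
    addr_of_val(&t)
}

fn main() {
    let s = [3u8; 32];
    let outer = addr_of_val(&s);
    let inner = consume(s); // s is moved into consume's stack frame
    // In a debug build these addresses typically differ, so the original
    // bytes at `outer` are never touched by any zeroing done in consume.
    println!("outer: {:#x}, inner: {:#x}", outer, inner);
}
```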
It's obviously fine to use drop_in_place for a test that a buffer gets zeroed, but I'd worry that calling drop_in_place on a Mutex or RwLock guard might result in a use-after-free.
These two guards could maybe be handled with an approach like this:

#[inline(never)]
pub fn drop_now<T>(mut t: T) {
    unsafe { ::std::intrinsics::drop_in_place(&mut t); }
    unsafe { ::std::intrinsics::volatile_set_memory::<T>(&mut t, 0, 1); }
    ::std::mem::forget(t); // prevent a second drop when t goes out of scope
}
Answer from https://github.com/rust-lang/rfcs/issues/1850:

In debug mode, any call to ::std::mem::drop(s) physically moves s on the stack, so p points to an old copy that does not get erased. And unsafe { ::std::intrinsics::drop_in_place(&mut s); } works because it does not move s.
In general, there is no good way either to prevent LLVM from moving values around on the stack or to zero them after they are moved, so you must never put cryptographically sensitive data on the stack. Instead you must Box any sensitive data, like say:
#[derive(Debug, Default)]
pub struct AnotherSecret(Box<[u8; 32]>);

impl Drop for AnotherSecret {
    fn drop(&mut self) {
        *self.0 = [0u8; 32];
    }
}
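One way to see why boxing helps: moving a Box moves only the pointer, while the secret bytes stay at a fixed heap address, so a zeroing Drop really does overwrite the only copy of the data. A small sketch of this (not from the answer):

```rust
fn main() {
    let b = Box::new([3u8; 32]);
    let heap_addr = b.as_ptr() as usize;

    let moved = b; // moving the Box copies the pointer, not the 32 bytes
    assert_eq!(moved.as_ptr() as usize, heap_addr); // heap data never moved

    // By contrast, a plain [u8; 32] moved the same way may get a fresh
    // stack slot, leaving the old bytes behind un-zeroed.
}
```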
There should be no problem with Mutex or RwLock because they can safely leave residue on the stack when they are dropped.
Yes: side effects.
Optimizers in general, and LLVM in particular, operate under the as-if rule: you build a program which has a specific observable behavior, and the optimizer is given free rein to produce whatever binary it wants as long as it has the very same observable behavior.
Note that the burden of proof is on the compiler. That is, when calling an opaque function (defined in another library, for example) the compiler has to assume it may have side effects. Furthermore, side effects cannot be re-ordered, as this could change the observable behavior.
In the case of Mutex, for example, acquiring and releasing the Mutex is generally opaque to the compiler (it requires an OS call), so it is seen as a side effect. I would expect compilers not to fiddle with those.
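In practice this means the original question has a safe answer for guards: a MutexGuard's drop (the unlock) runs exactly where the guard goes out of scope or is passed to drop, even though the guard struct itself may leave residue on the stack. A sketch of both idioms (the example data is mine):

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(vec![1, 2, 3]);

    // Idiom 1: end the guard's scope with a block.
    let sum: i32 = {
        let guard = m.lock().unwrap();
        guard.iter().sum()
    }; // guard dropped here; the lock is released

    // Idiom 2: release explicitly with drop().
    let guard = m.lock().unwrap();
    let len = guard.len();
    drop(guard); // unlock happens now, before the expensive work below

    let expensive: i32 = (0..1_000).sum::<i32>() * sum * len as i32;
    assert_eq!(sum, 6);
    let _ = expensive;
}
```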
On the other hand, your Secret is a tricky case: most of the time there is no side effect in dropping the secret (zeroing out to-be-released memory is a dead write, to be optimized out), which is why you need to go out of your way to ensure it occurs: by convincing the compiler that there are side effects, using a volatile write.
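On stable Rust, the same side-effect trick is available without the unstable volatile_set_memory intrinsic, via std::ptr::write_volatile. A minimal sketch (the struct name is mine):

```rust
use std::ptr;

pub struct StackSecret([u8; 32]);

impl Drop for StackSecret {
    fn drop(&mut self) {
        // A volatile write counts as observable behavior under the as-if
        // rule, so the compiler may not elide it as a dead store.
        unsafe { ptr::write_volatile(&mut self.0, [0u8; 32]) };
    }
}

fn main() {
    let s = StackSecret([3u8; 32]);
    drop(s); // the volatile zeroing runs here, on this copy of s
}
```

Note that this only guarantees the write to the value's final location; as explained above, copies left behind by earlier moves are not covered, which is why boxing the sensitive bytes is still the recommendation.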