After some computations, I have a matrix of floating point numbers with some very small negative entries (e.g. -2.6e-9) which are approximating 0.
I know that this matrix should be positive definite, i.e. have all positive eigenvalues. However, I suspect that these approximately-zero but negative entries are making the matrix not positive definite, and hence I can't do a further computation I want to do (a Cholesky decomposition, for those wondering).
My question is this: is there any way I can force these extremely small negative entries that are approximating 0 to be 0.0?
I think I am misunderstanding floating point numbers here!
Floating-point decimal values generally do not have an exact binary representation, because of how the CPU represents floating-point data. For this reason, you may experience a loss of precision, and some floating-point operations may produce unexpected results.
In Julia, scalars of the default Float64 type are stored using eight bytes (64 bits), following the IEEE-754 standard. The mantissa (significand) holds the actual binary digits of the floating-point number.
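A quick illustration of this in the REPL (neither 0.1 nor 0.2 has an exact binary representation, so their sum is not exactly 0.3):
julia> 0.1 + 0.2
0.30000000000000004

julia> 0.1 + 0.2 == 0.3
false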
There are several nice answers to your question already, so let me comment on how to make your matrix positive definite if you have roundoff errors. The simplest way is to add a small diagonal matrix to your original matrix. Let me give an example. Take the following matrix:
julia> A = fill(-2.6e-12, 5, 5)
5×5 Matrix{Float64}:
 -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12
 -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12
 -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12
 -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12
 -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12  -2.6e-12
you will not make it positive definite just by setting all the small negative values to 0. But if you instead add some small values to its diagonal, it becomes positive definite:
julia> using LinearAlgebra

julia> A + sqrt(eps())*I
5×5 Matrix{Float64}:
  1.48986e-8  -2.6e-12     -2.6e-12     -2.6e-12     -2.6e-12
 -2.6e-12      1.48986e-8  -2.6e-12     -2.6e-12     -2.6e-12
 -2.6e-12     -2.6e-12      1.48986e-8  -2.6e-12     -2.6e-12
 -2.6e-12     -2.6e-12     -2.6e-12      1.48986e-8  -2.6e-12
 -2.6e-12     -2.6e-12     -2.6e-12     -2.6e-12      1.48986e-8
julia> isposdef(A + sqrt(eps())*I)
true
In practice you might need to do both things: round small negative values to 0 (if you know that negative entries are not possible in your problem) and add some small value to the diagonal of your matrix, as sketched below.
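A minimal sketch of that combined recipe (the helper name force_posdef, the tolerance, and the shift-growing loop are my own illustration, not a library function):
using LinearAlgebra

# Zero out tiny negative entries, enforce symmetry, then grow a
# diagonal shift until the matrix passes isposdef (i.e. until
# cholesky would succeed).
function force_posdef(A::AbstractMatrix; tol=1e-8)
    B = map(x -> (-tol < x < 0) ? zero(x) : x, A)
    B = (B + B') / 2                 # Cholesky requires exact symmetry
    shift = sqrt(eps())
    while !isposdef(B + shift * I)
        shift *= 10                  # a larger shift distorts A more, so start small
    end
    return B + shift * I
end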
Sure, one relatively easy way to do this with an arbitrary tolerance might be with isapprox. For example:
julia> A = fill(-2.6e-9, 5,5)
5×5 Matrix{Float64}:
 -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9
 -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9
 -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9
 -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9
 -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9  -2.6e-9
julia> map!(x -> isapprox(x, 0, atol=1e-8) ? 0 : x, A, A)
5×5 Matrix{Float64}:
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
Note that doing this with map! will not allocate, so it should be quite efficient:
julia> using BenchmarkTools

julia> @btime map!(x -> isapprox(x, 0, atol=1e-8) ? 0 : x, $A, $A)
30.984 ns (0 allocations: 0 bytes)
If you don't specify a tolerance (which is what the atol option does above), it will choose a reasonable default for general floating-point work using a somewhat complicated formula:
isapprox(x, y; atol::Real=0, rtol::Real=atol>0 ? 0 : √eps, nans::Bool=false[, norm::Function])
Inexact equality comparison. Two numbers compare equal if their relative distance or their absolute distance is within tolerance bounds: isapprox returns true if norm(x-y) <= max(atol, rtol*max(norm(x), norm(y))). The default atol is zero and the default rtol depends on the types of x and y. The keyword argument nans determines whether or not NaN values are considered equal (defaults to false).
For real or complex floating-point values, if an atol > 0 is not specified, rtol defaults to the square root of eps of the type of x or y, whichever is bigger (least precise). This corresponds to requiring equality of about half of the significand digits. Otherwise, e.g. for integer arguments or if an atol > 0 is supplied, rtol defaults to zero.
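Concretely, this is why comparing against exactly 0 can never succeed with the default rtol alone: the bound rtol*max(|x|, 0) = rtol*|x| is smaller than |x| for any nonzero x, so you need an explicit atol:
julia> isapprox(-2.6e-9, 0)
false

julia> isapprox(-2.6e-9, 0, atol=1e-8)
true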
As to whether anything is going wrong in your previous calculations: not necessarily, given that rounding error is an unavoidable part of working with fixed-precision floating-point numbers. However, -2.6e-9 does sound a bit large compared to
julia> eps(0.0)
5.0e-324
so it depends on what exactly you're doing. If you are, e.g., subtracting two large but similar floats from each other, then this level of error might not be surprising for Float64s.
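For example, this is the kind of cancellation that produces an error of roughly that size (the operands here are made up purely for illustration):
julia> (1e9 + 0.1) - 1e9   # the 0.1 is rounded to the nearest value representable near 1e9
0.10000002384185791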
A[A .≈ 0.0] .= 0.0
≈, which can be typed with the LaTeX shortcut \approx<tab>, is shorthand for the isapprox function, which you can use directly to set the specific tolerances you wish to use instead.
EDIT: As pointed out, the defaults don't work for testing against 0.0, because with y = 0 the relative bound rtol*|x| is always smaller than |x|, so only exact zeros match. Supply an explicit atol instead:
A[isapprox.(A, 0.0, atol = 1e-8)] .= 0.0
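For instance:
julia> A = fill(-2.6e-9, 3, 3);

julia> A[isapprox.(A, 0.0, atol = 1e-8)] .= 0.0;

julia> A
3×3 Matrix{Float64}:
 0.0  0.0  0.0
 0.0  0.0  0.0
 0.0  0.0  0.0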
Not necessarily the fastest, but the following seems to work:
julia> tol = 1e-5
1.0e-5
julia> X = [0.1 -1e-4 -1e-20; -2.6e-9 0.1 1.1e5]
2×3 Matrix{Float64}:
  0.1      -0.0001   -1.0e-20
 -2.6e-9    0.1      110000.0
julia> Y = ifelse.((X .< -tol) .| (X .> 0.0), X, 0.0)
2×3 Matrix{Float64}:
 0.1  -0.0001        0.0
 0.0   0.1      110000.0
This approach has the advantage of keeping genuinely negative numbers negative, so you can still spot possible errors.
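If you prefer to do the same thresholding in place, without allocating Y, an equivalent variant of this answer using Base.replace! might look like this (my adaptation, same tolerance logic):
julia> replace!(x -> (-tol <= x <= 0.0) ? 0.0 : x, X)
2×3 Matrix{Float64}:
 0.1  -0.0001        0.0
 0.0   0.1      110000.0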