Numpy high precision


I am using numpy and pyfits to manipulate spectra and I require high precision (something like 8-10 decimal places on values that can go as high as 10^12). For that the data type Decimal would be perfect (float64 is not good enough), but unfortunately numpy.interp does not like it:

File ".../modules/manip_fits.py", line 47, in get_shift
pix_shift = np.interp(x, xp, fp)-fp
File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1053, in interp
return compiled_interp(x, xp, fp, left, right)
TypeError: array cannot be safely cast to required type
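
As far as I can tell, the reason is that an array built from Decimal objects ends up with dtype=object, which interp cannot safely cast to the float64 buffers it uses internally. A quick illustration (toy values, not my real data):

import numpy as np
from decimal import Decimal

# Asking for dtype=Decimal does not give a Decimal-aware array;
# NumPy falls back to a generic object array.
a = np.array([Decimal(1), Decimal(2)], dtype=Decimal)
print(a.dtype)  # object

# np.interp works on float64 internally, so an object array cannot be
# cast "safely" and the call fails with the TypeError above.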

A simplified version of the code I use:

fp = np.array(range(new_wave.shape[-1]), dtype=Decimal)
pix_shift = np.empty_like(wave, dtype=Decimal)
x = wave
xp = new_wave
# fractional pixel shift between the two wavelength grids
pix_shift = np.interp(x, xp, fp) - fp

where 'wave' and 'new_wave' are one-dimensional numpy arrays representing a 1D spectrum. This code is needed to shift my spectra along the x-axis (which is wavelength).

My biggest issue is that further down in the code I divide each spectrum by a template spectrum constructed from the sum of all my spectra, in order to analyse the differences. Since I do not have enough decimal places, I am getting rounding errors. Any ideas?
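
To put a number on why float64 is not enough: at the magnitudes involved, the spacing between adjacent float64 values is already larger than the precision I need (this is just the standard ulp spacing, independent of my data):

import numpy as np

# Gap between neighbouring float64 values near 1e12: about 1.2e-4,
# i.e. only ~4 digits after the decimal point survive at that magnitude.
print(np.spacing(1e12))  # 0.0001220703125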

Thanks!

UPDATE:

Test Example:

import numpy as np
from decimal import Decimal, getcontext
getcontext().prec = 12

wave = np.array([Decimal(xx * np.pi) for xx in range(0, 10)], dtype=np.dtype(Decimal))
new_wave = np.array([Decimal(xx * np.pi + 0.5) for xx in range(0, 10)], dtype=np.dtype(Decimal))

fp = np.array(range(new_wave.shape[-1]), dtype=Decimal)
pix_shift = np.empty_like(wave, dtype=Decimal)

x = wave
xp = new_wave
pix_shift = np.interp(x, xp, fp) - fp

The error is:

Traceback (most recent call last):
  File "untitled.py", line 16, in <module>
    pix_shift = np.interp(x, xp, fp)-fp
  File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1053, in interp
    return compiled_interp(x, xp, fp, left, right)
TypeError: array cannot be safely cast to required type

This is the closest I can provide without using the real spectra in FITS format.

UPDATE 2: Some typical values of my spectra, printed using Decimal:

  18786960689.118938446044921875
  18473926205.282184600830078125
  18325454516.792461395263671875
  18400241010.149127960205078125
2577901751996.03857421875
2571812230557.63330078125
2567431795280.80712890625

The problem is that when I perform operations between them I get rounding errors. For instance, I create a template for all spectra by summing all of them, and then use this template to normalize every spectrum. An example:

Spectra = np.array([Spectrum1, Spectrum2, ...])
Template = np.nansum(Spectra, axis=0)

NormSpectra = Spectra/Template
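
(In terms of shapes, that step looks roughly like this, with toy values standing in for the real spectra:)

import numpy as np

# Toy stand-in: 3 spectra with 5 wavelength pixels each (arbitrary values).
Spectra = np.arange(15, dtype=np.float64).reshape(3, 5) + 1.0

# Summing over axis 0 collapses the "which spectrum" axis, leaving one
# template value per wavelength pixel.
Template = np.nansum(Spectra, axis=0)   # shape (5,)

# Broadcasting then divides every spectrum pixel-by-pixel by the template.
NormSpectra = Spectra / Template        # shape (3, 5)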

This should leave me with only the noise in the spectra (assuming the template is a good representation of the star). I also tried normalizing each spectrum to its total flux

(Spectrum1 = Spectrum1/np.nansum(Spectrum1), ...) 

as well as the template, but that gave even worse rounding errors.

Using Decimal would work fine for me, but I need to "shift" my spectra so all spectral features/lines are aligned.
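
One fallback I have been considering (only a sketch, untested on the real spectra, and interp_decimal is just my own helper name) is to do the linear interpolation in pure Python so everything stays Decimal, at the cost of speed:

from decimal import Decimal
from bisect import bisect_right

def interp_decimal(x, xp, fp):
    # Linear interpolation with the same clamping behaviour as np.interp,
    # carried out entirely with Decimal arithmetic.
    # x, xp, fp are sequences of Decimal; xp must be sorted increasing.
    out = []
    for xi in x:
        if xi <= xp[0]:
            out.append(fp[0])            # clamp on the left, like np.interp
        elif xi >= xp[-1]:
            out.append(fp[-1])           # clamp on the right
        else:
            j = bisect_right(xp, xi)     # xp[j-1] <= xi < xp[j]
            t = (xi - xp[j - 1]) / (xp[j] - xp[j - 1])
            out.append(fp[j - 1] + t * (fp[j] - fp[j - 1]))
    return out

It keeps full precision, but it will obviously be slow on long arrays.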

Hope this makes sense?