How are PyTorch's tensors implemented?

I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation.

What is the most efficient way to store tensors programmatically, specifically in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done?

I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 = 27 elements, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder.

The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.

asked Apr 09 '18 by RyanM

2 Answers

Contiguous array

The commonly used way to store such data is in a single array laid out as one contiguous block in memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other.

The only place where the dimensions are used is in calculating the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item at [2, 0, 0] you would need to know whether it is a 3x3x3 tensor, a 9x3x1 tensor, or a 27x1x1 tensor - in all cases the "storage" would be 27 items long, but in row-major order that same index maps to offset 18, 6, or 2 respectively. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension.
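Here is a minimal sketch of that shape/strides bookkeeping in Rust. The `Tensor` struct and method names are illustrative, not any particular library's API:

```rust
// A sketch of a dynamically-dimensioned, row-major tensor.
// Names (Tensor, offset, get) are made up for illustration.
struct Tensor {
    data: Vec<f64>,      // contiguous storage, e.g. 27 values for a 3x3x3 tensor
    shape: Vec<usize>,   // length of each dimension, e.g. [3, 3, 3]
    strides: Vec<usize>, // elements to skip per step along each dimension
}

impl Tensor {
    fn new(shape: Vec<usize>) -> Self {
        // Row-major strides: the last dimension is contiguous.
        let mut strides = vec![1; shape.len()];
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        let len: usize = shape.iter().product();
        Tensor { data: vec![0.0; len], shape, strides }
    }

    // Map an n-dimensional index to an offset in the flat array.
    fn offset(&self, index: &[usize]) -> usize {
        index.iter().zip(&self.strides).map(|(i, s)| i * s).sum()
    }

    fn get(&self, index: &[usize]) -> f64 {
        self.data[self.offset(index)]
    }
}

fn main() {
    // Same index, different shape => different offset into the same-sized storage.
    let t = Tensor::new(vec![3, 3, 3]);
    assert_eq!(t.offset(&[2, 0, 0]), 18);
    assert_eq!(Tensor::new(vec![9, 3, 1]).offset(&[2, 0, 0]), 6);
    println!("{}", t.get(&[2, 1, 1]));
}
```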

This does mean that resizing and similar operations may require copying the whole array, but that's OK: you trade the performance of those (rare) operations for performance in the much more common operations, e.g. sequential reads.

answered Nov 03 '22 by Peteris


By default, PyTorch stores its tensors in a dense format. According to the docs,

Each tensor has an associated torch.Storage, which holds its data. The tensor class provides multi-dimensional, strided view of a storage and defines numeric operations on it.
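A hedged Rust sketch of that tensor/storage split: several views share one reference-counted buffer and differ only in offset, shape, and strides, so an operation like transpose is just a metadata change. The names `Storage` and `TensorView` are made up for illustration; this is not PyTorch's actual implementation:

```rust
use std::rc::Rc;

// Illustrative sketch of the Tensor/Storage split, not PyTorch's real layout.
struct Storage {
    data: Vec<f32>, // the flat, contiguous buffer
}

struct TensorView {
    storage: Rc<Storage>, // shared, reference-counted buffer
    offset: usize,        // where this view starts in the buffer
    shape: Vec<usize>,
    strides: Vec<usize>,
}

impl TensorView {
    fn get(&self, index: &[usize]) -> f32 {
        let flat: usize = index.iter().zip(&self.strides).map(|(i, s)| i * s).sum();
        self.storage.data[self.offset + flat]
    }

    // A transpose only swaps shape and stride entries; no data is copied.
    fn transpose(&self, a: usize, b: usize) -> TensorView {
        let mut shape = self.shape.clone();
        let mut strides = self.strides.clone();
        shape.swap(a, b);
        strides.swap(a, b);
        TensorView { storage: Rc::clone(&self.storage), offset: self.offset, shape, strides }
    }
}

fn main() {
    let storage = Rc::new(Storage { data: (0..6).map(|x| x as f32).collect() });
    let t = TensorView { storage, offset: 0, shape: vec![2, 3], strides: vec![3, 1] };
    let tt = t.transpose(0, 1); // a 3x2 view over the same buffer
    assert_eq!(t.get(&[1, 2]), tt.get(&[2, 1]));
}
```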

As you may imagine, when storing large tensors with many zeros, it is a very inefficient use of memory to store all the values. Therefore, PyTorch also supplies sparse tensors. From that page:

Torch supports sparse tensors in COO(rdinate) format, which can efficiently store and process tensors for which the majority of elements are zeros.
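In COO format, only the nonzero values are stored, each paired with its full n-dimensional coordinate. A toy Rust sketch of the idea (illustrative names, not torch.sparse's actual layout):

```rust
// Minimal COO (coordinate-format) sparse tensor sketch.
struct CooTensor {
    shape: Vec<usize>,
    indices: Vec<Vec<usize>>, // indices[k] is the coordinate of the k-th nonzero
    values: Vec<f64>,         // values[k] is the k-th nonzero value
}

impl CooTensor {
    fn get(&self, index: &[usize]) -> f64 {
        // Linear scan for clarity; real implementations keep indices sorted or hashed.
        self.indices
            .iter()
            .position(|i| i.as_slice() == index)
            .map_or(0.0, |k| self.values[k]) // absent coordinates are implicit zeros
    }
}

fn main() {
    // A 1000x1000x1000 tensor with two nonzeros costs O(nnz) memory,
    // instead of 10^9 stored values in dense format.
    let t = CooTensor {
        shape: vec![1000, 1000, 1000],
        indices: vec![vec![0, 0, 0], vec![999, 42, 7]],
        values: vec![1.5, -2.0],
    };
    assert_eq!(t.get(&[999, 42, 7]), -2.0);
    assert_eq!(t.get(&[1, 2, 3]), 0.0);
}
```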

If you are interested in the trade-offs between different sparse formats, it may be useful to start with the vast literature that exists on storing sparse matrices. Many of the techniques used for matrices (which are effectively rank-2 tensors) translate to multiple dimensions. This master's dissertation goes into considerable detail about the implementation of sparse tensors specifically.

The summary of both the Wikipedia article and the dissertation is:

  • Formats which are easy to specify and modify are often computationally inefficient, so sparse structures are often constructed using one format but stored in another

  • It is important to choose a storage format which aligns with your expected usage. For instance, matrix-matrix multiplication requires efficient column access, while matrix-vector multiplication requires efficient row access (see the CSR sketch after this list).
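As a concrete illustration of both points, here is a plain-Rust sketch of CSR (compressed sparse row): built from easy-to-write COO-style triplets, then stored in a form that makes reading a row a contiguous slice. All names are illustrative:

```rust
// CSR (compressed sparse row) sketch: cheap row access; CSC would be the
// mirror-image choice when column access matters more.
struct CsrMatrix {
    row_ptr: Vec<usize>, // row_ptr[r]..row_ptr[r+1] spans row r's entries
    col_idx: Vec<usize>, // column of each stored value
    values: Vec<f64>,
}

impl CsrMatrix {
    // Build from (row, col, value) triplets, assumed sorted by row:
    // "construct in one format, store in another".
    fn from_triplets(rows: usize, triplets: &[(usize, usize, f64)]) -> Self {
        let mut row_ptr = vec![0; rows + 1];
        let mut col_idx = Vec::new();
        let mut values = Vec::new();
        for &(r, c, v) in triplets {
            row_ptr[r + 1] += 1; // count entries per row
            col_idx.push(c);
            values.push(v);
        }
        for r in 0..rows {
            row_ptr[r + 1] += row_ptr[r]; // prefix sum -> row boundaries
        }
        CsrMatrix { row_ptr, col_idx, values }
    }

    // Iterating one row touches a contiguous range: good for matrix-vector products.
    fn row(&self, r: usize) -> impl Iterator<Item = (usize, f64)> + '_ {
        (self.row_ptr[r]..self.row_ptr[r + 1]).map(move |k| (self.col_idx[k], self.values[k]))
    }
}

fn main() {
    let m = CsrMatrix::from_triplets(3, &[(0, 2, 1.0), (2, 0, 4.0), (2, 1, 5.0)]);
    for (col, val) in m.row(2) {
        println!("row 2: col {col} = {val}");
    }
}
```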

Lastly, I recommend looking at sprs for a good sparse matrix library for Rust. It may be a good place to start when looking to extend to multiple dimensions.

answered Nov 02 '22 by chthonicdaemon