I am using the torch C++ frontend and want to create a tensor with specified values in it. One way to achieve this is to allocate memory, set the values by hand, and then use torch::from_blob
to build a tensor on top of that memory block, but that does not seem clean enough to me.
At the very bottom of this document I found that I can use subscripting to directly access and modify the data. However, this approach has a large runtime overhead, likely because subscript access treats each element of the tensor as a 0-d tensor. The following code takes more than 2 seconds on my machine (at the -O3
optimization level), which is unreasonably long for a modern CPU.
torch::Tensor tensor = torch::empty({1000, 1000});
for (int i = 0; i < 1000; i++)
{
    for (int j = 0; j < 1000; j++)
    {
        tensor[i][j] = calc_tensor_data(i, j);
    }
}
Is there a clean and fast way to achieve this goal?
After hours of fruitless searching on the Internet, I formed a hypothesis and decided to give it a shot. It turns out that the accessor mentioned in the same document also works as an lvalue, although this feature is not mentioned in the documentation at all. The following code works fine, and it is as fast as manipulating the raw pointer directly.
torch::Tensor tensor = torch::empty({1000, 1000});
auto accessor = tensor.accessor<float, 2>();
for (int i = 0; i < 1000; i++)
{
    for (int j = 0; j < 1000; j++)
    {
        accessor[i][j] = calc_tensor_data(i, j);
    }
}