Hi everyone. I'm new to PyTorch and currently learning tensor indexing. I noticed that we can index a tensor either with tensor.index_select() or with tensor[sequence].
In [1]: x = torch.randn(3, 4)
In [2]: indices = torch.tensor([0, 2])
In [3]: x.index_select(0, indices)
Out[3]:
tensor([[ 0.2760, -0.9543, -1.0499, 0.7828],
[ 1.3514, -1.1289, 0.5052, -0.0547]])
In [4]: x[[0,2]]
Out[4]:
tensor([[ 0.2760, -0.9543, -1.0499, 0.7828],
[ 1.3514, -1.1289, 0.5052, -0.0547]])
I am puzzled by these two methods and looked for documentation, but couldn't find anything. Can anyone tell me whether there are any differences between them, and if so, what they are?
This looks like a remnant of the old (slower) indexing.
See this pull request.
I also believe that boolean (logical) indexing on tensors used to be unsupported.
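As a minimal sketch of what boolean indexing looks like in current PyTorch (assuming a recent version is installed), a mask of the same shape selects matching elements and flattens them:

```python
import torch

x = torch.arange(6).reshape(2, 3)  # tensor([[0, 1, 2], [3, 4, 5]])
mask = x > 2                       # boolean mask with the same shape as x
selected = x[mask]                 # picks masked elements, flattened to 1-D
# selected is tensor([3, 4, 5])
```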
a = torch.randn((1,3,4,4))
dim = 2
indices = [0,1]
%timeit a.index_select(dim, torch.tensor(indices))
12.7 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit a[:,:,indices,:]
16.7 µs ± 640 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
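Despite the timing gap, both calls above should produce identical results; a quick sanity check (assuming PyTorch is installed) is to compare them with torch.equal:

```python
import torch

a = torch.randn(1, 3, 4, 4)
dim = 2
indices = torch.tensor([0, 1])

by_select = a.index_select(dim, indices)  # select rows 0 and 1 along dim=2
by_slicing = a[:, :, indices, :]          # advanced indexing on the same dim

same = torch.equal(by_select, by_slicing)  # True: same values, same shape
```

Note that index_select requires a 1-D integer tensor and selects along a single dimension, whereas advanced indexing accepts plain Python lists and composes with slices on other dimensions.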