I have a program that uses a buffer pool to reduce allocations in a few performance-sensitive sections of the code.
Something like this:
package main

import (
	"bytes"
	"io"
	"log"
)

func main() {
	// some file or any data source
	var r io.Reader = bytes.NewReader([]byte{1, 2, 3})
	// initialize slice to max expected capacity
	dat := make([]byte, 20)
	// read some data into it, then trim to the actual length
	n, err := r.Read(dat)
	if err != nil {
		log.Fatal(err)
	}
	dat = dat[:n]
	// now I want to reuse it: grow it back to full capacity
	for len(dat) < cap(dat) {
		dat = append(dat, 0)
	}
	log.Println(len(dat))
	// add it to a free list for reuse later
	// bufferPool.Put(dat)
}
I always allocate fixed-length slices, which are guaranteed to be larger than the maximum size needed. I have to trim the slice to the actual data length to use the buffer, but I also need it to be the maximum length again before reading into it the next time.
The only way I know of to expand a slice is with append, so that is what I am using. The loop feels super dirty though, and potentially inefficient. My benchmarks show it isn't horrible, but I feel like there has to be a better way.
I know only a bit about the internal representation of slices, but if I could somehow just override the length value without actually adding data, that would be really nice. I don't really need to zero it out or anything.
Is there a better way to accomplish this?
"Extending" a slice to its capacity is simply a slice expression, and specify the capacity as the high index. The high index does not need to be less than the length. The restriction is:
For arrays or strings, the indices are in range if 0 <= low <= high <= len(a), otherwise they are out of range. For slices, the upper index bound is the slice capacity cap(a) rather than the length.
Example:
b := make([]byte, 10, 20) // length 10, capacity 20
fmt.Println(len(b), cap(b), b)
b = b[:cap(b)] // reslice up to the full capacity
fmt.Println(len(b), cap(b), b)
Output (try it on the Go Playground):
10 20 [0 0 0 0 0 0 0 0 0 0]
20 20 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
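Note that the capacity is still a hard upper bound: a high index greater than cap(b) is out of range and panics at run time. A minimal sketch to illustrate (the variable name is mine):

b := make([]byte, 10, 20)
_ = b[:20] // fine: the high index may exceed len(b), up to cap(b)
_ = b[:21] // panics: slice bounds out of range [:21] with capacity 20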
You can expand a slice to its capacity with slicing:
s = s[:cap(s)]
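Applied to the buffer-pool pattern from the question, you can Put the trimmed slice and restore its full length right after Get. Here is a minimal sketch using sync.Pool; the pool setup and the 20-byte buffer size are assumptions for illustration, not taken from the question:

package main

import (
	"bytes"
	"io"
	"log"
	"sync"
)

// bufPool hands out 20-byte buffers; 20 stands in for the maximum expected size.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 20) },
}

func main() {
	var r io.Reader = bytes.NewReader([]byte{1, 2, 3})

	dat := bufPool.Get().([]byte)
	dat = dat[:cap(dat)] // restore full length before reading
	n, err := r.Read(dat)
	if err != nil && err != io.EOF {
		log.Fatal(err)
	}
	dat = dat[:n] // trim to the data actually read
	log.Println(len(dat), cap(dat), dat)

	bufPool.Put(dat) // return it; the next user re-expands with dat[:cap(dat)]
}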