I often work with slices of structs. Here's an example of such a struct:
type MyStruct struct {
    val1, val2, val3    int
    text1, text2, text3 string
    list                []SomeType
}
So I define my slices as follows:
[]MyStruct
Let's say I have about a million elements in there and I'm working heavily with the slice: appending new elements, reordering them, and so on.
My understanding is that this leads to a lot of shuffling around of the actual structs: every move or swap copies the whole struct value. The alternative is to create a slice of pointers to the struct:
[]*MyStruct
Now the structs remain where they are and we only deal with pointers, which I assume have a smaller footprint and should therefore make my operations faster. But now I'm giving the garbage collector a lot more work.
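To illustrate what I mean by shuffling, here's a small self-contained sketch (sort.Slice is from the standard library; the field values are arbitrary placeholders). Sorting the value slice copies whole structs on every swap, while sorting the pointer slice only moves 8-byte pointers:

package main

import (
    "fmt"
    "sort"
)

type SomeType struct{}

type MyStruct struct {
    val1, val2, val3    int
    text1, text2, text3 string
    list                []SomeType
}

func main() {
    // Sorting the value slice: every swap copies the entire struct
    // (three ints, three string headers, one slice header).
    vals := []MyStruct{{val1: 3}, {val1: 1}, {val1: 2}}
    sort.Slice(vals, func(i, j int) bool { return vals[i].val1 < vals[j].val1 })

    // Sorting the pointer slice: the structs stay where they are;
    // only the pointers are swapped.
    ptrs := []*MyStruct{{val1: 3}, {val1: 1}, {val1: 2}}
    sort.Slice(ptrs, func(i, j int) bool { return ptrs[i].val1 < ptrs[j].val1 })

    fmt.Println(vals[0].val1, ptrs[0].val1) // 1 1
}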
Just got curious about this myself. Ran some benchmarks:
package main

import "testing"

type MyStruct struct {
    F1, F2, F3, F4, F5, F6, F7 string
    I1, I2, I3, I4, I5, I6, I7 int64
}

func BenchmarkAppendingStructs(b *testing.B) {
    var s []MyStruct
    for i := 0; i < b.N; i++ {
        s = append(s, MyStruct{})
    }
}

func BenchmarkAppendingPointers(b *testing.B) {
    var s []*MyStruct
    for i := 0; i < b.N; i++ {
        s = append(s, &MyStruct{})
    }
}
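(To reproduce: put these in a *_test.go file and run them with go test -bench=. .)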
Results:
BenchmarkAppendingStructs    1000000    3528 ns/op
BenchmarkAppendingPointers   5000000     246 ns/op
Takeaways: we're in the nanoseconds, so it's probably negligible for small slices. But over millions of operations, it's the difference between milliseconds and microseconds.
By the way, I tried running the benchmarks again with pre-allocated slices (a capacity of 1000000) to eliminate the overhead of append() periodically copying the underlying array. Appending structs dropped about 1000 ns/op; appending pointers didn't change at all.
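For reference, the pre-allocated variant looked roughly like this (a sketch assuming the same file and MyStruct as above; the function names are mine, and I've used b.N as the capacity rather than a hard-coded 1000000):

func BenchmarkAppendingStructsPrealloc(b *testing.B) {
    // Capacity is known up front, so append() never has to grow
    // and copy the underlying array during the loop.
    s := make([]MyStruct, 0, b.N)
    for i := 0; i < b.N; i++ {
        s = append(s, MyStruct{})
    }
}

func BenchmarkAppendingPointersPrealloc(b *testing.B) {
    s := make([]*MyStruct, 0, b.N)
    for i := 0; i < b.N; i++ {
        s = append(s, &MyStruct{})
    }
}

That result suggests the remaining cost for the struct version is copying each value into the slice, while the pointer version is dominated by allocating each new struct, which pre-allocation doesn't touch.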