Please consider these two snippets in Go and C++11. In C++, std::vector is a dynamic array that doubles its capacity as it grows, which gives an amortized O(1) insert operation. How can I achieve the same performance in Go? The problem is that this Go code is about 3 times slower than the C++ version on my hardware, consistently across many runs.
Compiled with:
go build vec.go
(go version go1.2.1 linux/amd64)
g++ -O2 -std=gnu++11 -o vec vec.cc
(g++ (Ubuntu 4.8.2-19ubuntu1) 4.8.2)
Go version (vec.go):
package main

type X struct {
    x int32
    y float64
}

const N int = 80000000

func main() {
    x := X{123, 2.64}
    s := make([]X, 1)
    for i := 0; i < N; i++ {
        s = append(s, x)
    }
}
C++11 version (vec.cc):
#include <vector>

const int N = 80000000;

struct X {
    int x;
    double y;
};

int main(void)
{
    X x{123, 2.64};
    std::vector<X> s(1);
    for (int i = 0; i < N; ++i) {
        s.push_back(x);
    }
}
Go's specification doesn't require any particular complexity for append(), but in practice it's also implemented in amortized constant time, as described in the answer to this question.
The current implementation works as follows: for slice capacities below 1024, it doubles the capacity as needed, and above 1024 it grows the capacity by a factor of about 1.25x. Growing by 1.25x is still amortized constant time, but it imposes a higher amortized constant factor than an implementation that always doubles; in exchange, 1.25x wastes less memory overall.
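If you want to see this growth policy in action on the Go version you are actually running, a small sketch like the following (my addition, not part of the original answer) prints each capacity change while appending; the exact thresholds and factors are implementation details and can differ between Go releases.

// Prints every capacity change of a slice as elements are appended,
// so you can observe the runtime's growth policy directly.
package main

import "fmt"

func main() {
    s := make([]int, 0)
    oldCap := cap(s)
    for i := 0; i < 10000; i++ {
        s = append(s, i)
        if c := cap(s); c != oldCap {
            fmt.Printf("len=%d: cap grew %d -> %d\n", len(s), oldCap, c)
            oldCap = c
        }
    }
}

On a runtime matching the description above you should see capacities roughly doubling up to 1024 and then growing by about 1.25x per reallocation.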
If the difference you're seeing is only a few times (even at very large N), then you're simply seeing different constant factors at play. I've noted myself that the machine code produced by the gc compiler is much more efficient than that generated by gccgo.
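As an aside not in the original answer: if N is known up front, you can remove reallocation and copying from the loop entirely by preallocating the slice's capacity with make (the C++ analogue would be calling s.reserve(N) on the vector). A minimal sketch of that, reusing the question's types and N:

// Preallocate capacity N so every append writes into memory that is
// already allocated; no reallocations or copies happen inside the loop.
package main

type X struct {
    x int32
    y float64
}

const N int = 80000000

func main() {
    x := X{123, 2.64}
    s := make([]X, 0, N) // length 0, capacity N
    for i := 0; i < N; i++ {
        s = append(s, x)
    }
}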
To verify for yourself that Go is operating in amortized constant time, try plotting the time it takes to run your algorithm for several different values of N.
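A minimal sketch of such a measurement (my addition, using simple wall-clock timing from the time package; for more careful numbers you'd use the testing package's benchmark support): if each append is amortized O(1), the total time should grow roughly linearly in N.

// Times the append loop for several values of N and prints the results,
// so the times can be plotted against N.
package main

import (
    "fmt"
    "time"
)

type X struct {
    x int32
    y float64
}

func main() {
    x := X{123, 2.64}
    for _, n := range []int{10000000, 20000000, 40000000, 80000000} {
        start := time.Now()
        s := make([]X, 1)
        for i := 0; i < n; i++ {
            s = append(s, x)
        }
        fmt.Printf("N=%d len=%d took %v\n", n, len(s), time.Since(start))
    }
}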