The "Map types" section of the go language specification describes the interface and general usage of map types and the "Go maps in action" post on The Go Blog casually mentions hash tables and "fast lookups, adds, and deletes".
The current runtime/hashmap.go source code describes its implementation as a hash table (which is typically amortized O(1)); however, I don't see any guarantee of performance characteristics (such as big-O bounds) in the language specification or other materials.
Does the Go language make any performance guarantees (e.g., constant-time insertion, lookup, and deletion) for map types, or only interface guarantees? (Compare with Java, where interfaces and implementations are clearly separate.)
    // A map is just a hash table. The data is arranged
    // into an array of buckets. Each bucket contains up to
    // 8 key/value pairs. The low-order bits of the hash are
    // used to select a bucket.
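For illustration, here is a minimal sketch of the bucket-selection arithmetic that comment describes. This is not the runtime's actual code; the bucket count is an arbitrary choice, and real maps grow the bucket array over time. It only shows how masking the low-order bits of a hash picks a bucket:

```go
package main

import "fmt"

// 2^bucketBits buckets; the count is arbitrary here. In the real
// runtime, each bucket holds up to 8 key/value pairs and the bucket
// array doubles as the map grows.
const bucketBits = 4
const numBuckets = 1 << bucketBits

// bucketIndex keeps only the low-order bits of the hash, which is
// equivalent to hash % numBuckets when numBuckets is a power of two.
func bucketIndex(hash uint64) uint64 {
	return hash & (numBuckets - 1)
}

func main() {
	for _, h := range []uint64{0x1234, 0xABCD, 0xFFFF} {
		fmt.Printf("hash %#x -> bucket %d\n", h, bucketIndex(h))
	}
}
```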
The language reference doesn't make explicit guarantees about the performance of maps. There's an implicit expectation that maps perform the way you expect hash tables to perform. I don't see how a performance guarantee could avoid being either vaguely specified or inaccurate.
Big-O complexity is a poor way to describe run-times for maps: practically speaking, the actual clock time is what matters, and complexity isn't. Theoretically, maps with keys from finite domains (such as ints) are trivially O(1) in space and time, while maps with keys from infinite domains (such as strings) require hashing, and the specifics of equality testing have to be included in the cost. That makes inserts and lookups best case O(log N) on average, since the keys have to be at least O(log N) in size on average for there to be N distinct keys. Unless you get these details right in the specification, it's going to be inaccurate, and the benefit of getting it right isn't clearly worth it.
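A rough sketch of that point, assuming nothing beyond the standard library (a real measurement should use the testing package's benchmarks; the sizes here are arbitrary): two maps hold the same number of entries, but hashing and equality checks must read the whole key, so lookups against long string keys cost measurably more than against short ones.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// timeLookups builds a map from the given keys and measures repeated
// lookups. Both maps below have identical entry counts; only key
// length differs, so any timing gap comes from hashing/comparing keys.
func timeLookups(keys []string) time.Duration {
	m := make(map[string]int, len(keys))
	for i, k := range keys {
		m[k] = i
	}
	start := time.Now()
	sum := 0
	for i := 0; i < 100; i++ {
		for _, k := range keys {
			sum += m[k]
		}
	}
	_ = sum // keep the loop from being optimized away
	return time.Since(start)
}

func main() {
	const n = 100_000
	short := make([]string, n)
	long := make([]string, n)
	for i := range short {
		short[i] = fmt.Sprintf("k%d", i)
		long[i] = strings.Repeat("x", 1024) + short[i] // same keys, padded
	}
	fmt.Println("short keys:", timeLookups(short))
	fmt.Println("long keys: ", timeLookups(long))
}
```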
Providing guarantees about actual run-time rather than complexity would also be difficult: there's a wide range of target machines, plus the confounding effects of caching and garbage collection.
To be precise, hash table lookup performance is O(1 + n/k), where n/k is the load factor (n entries spread over k buckets). The Go spec places no restriction on how many keys a map can hold, so the runtime must resize dynamically, rehashing incrementally, to keep the load factor bounded as the map grows or shrinks. This means near-O(1) lookups are easily achieved for constant-size maps, but can't even theoretically be guaranteed across massive insertions or deletions: those operations require reallocation, partial rehashing, and possibly garbage collection to maintain the load factor and reasonable memory usage.
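You can see the cost of that dynamic resizing from the outside. A minimal sketch, assuming nothing beyond the standard library (the size is arbitrary, and exact timings vary by machine and GC behavior): filling a map that starts small forces repeated growth and rehashing along the way, while passing a capacity hint to make (which the spec does define) lets the runtime allocate buckets up front.

```go
package main

import (
	"fmt"
	"time"
)

// fillMap inserts n entries and returns the elapsed wall time. The
// difference between the two calls in main reflects the reallocation
// and incremental rehashing a growing map performs.
func fillMap(m map[int]int, n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		m[i] = i
	}
	return time.Since(start)
}

func main() {
	const n = 5_000_000
	grown := make(map[int]int)     // starts small, must grow repeatedly
	hinted := make(map[int]int, n) // capacity hint: space allocated up front
	fmt.Println("no hint:  ", fillMap(grown, n))
	fmt.Println("with hint:", fillMap(hinted, n))
}
```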