 

Why join is faster than normal concatenation

I've seen several examples from different languages that clearly show that joining the elements of a list (array) is many times faster than concatenating the strings one by one. Why?

What algorithm underlies each operation, and why is one so much faster than the other?

Here is a Python example of what I mean:

# This is slow
x = 'a'
x += 'b'
...
x += 'z'

# This is fast
x = ['a', 'b', ... 'z']
x = ''.join(x)
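For reference, here is a small illustrative benchmark of the two approaches (my own sketch, not from the original post), using the standard timeit module:

```python
import timeit

# Build the same 50,000-character string with a += loop and with str.join.
chars = ['a'] * 50_000

def concat():
    s = ''
    for c in chars:
        s += c               # repeated concatenation
    return s

def join():
    return ''.join(chars)    # single join over the whole list

print('+= loop :', timeit.timeit(concat, number=10))
print('join    :', timeit.timeit(join, number=10))
```

Exact timings vary by interpreter: CPython can sometimes resize a string in place when it holds the only reference to it, so the += loop is not always quadratic in practice, but join is reliably the fast path.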
asked Feb 24 '10 by Ilian Iliev


5 Answers

The other responses have basically covered it, but if you want even more detail, Joel Spolsky has an article where he describes "Schlemiel the painter's algorithm", which is extremely relevant and nicely makes the case for why understanding this sort of low-level implementation detail still matters even when you're working in a high-level language like Python.

answered Sep 27 '22 by thraxil


The code in a join function knows up front all the strings it's being asked to concatenate and how large they are, so it can calculate the final string's length before beginning the operation.

Hence it needs to allocate memory for the final string only once, and can then place each source string (and delimiter) at the correct position in that memory.

On the other hand, a single += operation only ever sees two strings at a time, so all it can do is allocate enough memory for the concatenation of those two. Each subsequent += must do the same, allocating memory that the next += will discard, and each time the ever-growing string is copied from one place in memory to another.
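That two-pass idea (measure everything first, then allocate once and copy) can be sketched in Python itself. This is a hypothetical model using a bytearray as the preallocated buffer; the real str.join is implemented in C:

```python
def join_sketch(sep, parts):
    """Model of a join: size everything first, allocate once, copy once."""
    if not parts:
        return b''
    # Pass 1: compute the exact final length before allocating anything.
    total = sum(len(p) for p in parts) + len(sep) * (len(parts) - 1)
    buf = bytearray(total)               # single allocation of the final size
    # Pass 2: copy each source string (and delimiter) into place.
    pos = 0
    for i, p in enumerate(parts):
        if i:
            buf[pos:pos + len(sep)] = sep
            pos += len(sep)
        buf[pos:pos + len(p)] = p        # each byte is copied exactly once
        pos += len(p)
    return bytes(buf)

print(join_sketch(b'-', [b'a', b'b', b'c']))  # b'a-b-c'
```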

answered Nov 16 '22 by AnthonyWJones


The reason is that strings in Python (and many other languages) are immutable objects - that is, once created, they can't be changed. Instead, concatenating strings actually makes a new string consisting of the contents of the two smaller strings being concatenated, and then the variable is rebound to that new string.

Since creating a string takes a certain amount of time (need to allocate memory, copy the contents of the string to that memory, et cetera), making many strings takes longer than making a single string. Doing N concatenations requires creating N new strings in the process. join(), on the other hand, only has to create a single string (the final result) and thus works much faster.
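A quick illustrative demonstration of the immutability point (my own example):

```python
a = 'hello'
b = a + '!'          # concatenation builds a brand-new string object
print(a is b)        # False: a itself was never modified
print(a, b)

try:
    a[0] = 'H'       # strings cannot be changed in place
except TypeError as exc:
    print('immutable:', exc)
```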

answered Nov 16 '22 by Amber


This is because a larger and larger chunk of memory has to be allocated for the string concatenation:

x = 'a'  # String of size 1 allocated
x += 'b' # String of size 2 allocated, x copied, and 'b' added. Old x discarded
x += 'c' # String of size 3 allocated, x copied, and 'c' added. Old x discarded
x += 'd' # String of size 4 allocated, x copied, and 'd' added. Old x discarded
x += 'e' # String of size 5 allocated, x copied, and 'e' added. Old x discarded

So what happens is you perform large allocations and copies, but then turn around and throw them away. Very wasteful.

x = ['a', 'b', ..., 'z'] # 26 small allocations
x = ''.join(x) # A single, large allocation
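One way to make the waste visible (an illustrative sketch, not part of the original answer) is to keep every intermediate result instead of letting it be discarded:

```python
pieces = ['a', 'b', 'c', 'd', 'e']

intermediates = []
x = ''
for p in pieces:
    x = x + p                # each + allocates a brand-new string
    intermediates.append(x)  # hold on to it so we can inspect it

print(intermediates)         # ['a', 'ab', 'abc', 'abcd', 'abcde']
# 1 + 2 + 3 + 4 + 5 = 15 characters allocated to produce a 5-character result;
# all but the last string would normally be thrown away.
print(sum(len(s) for s in intermediates))
```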
answered Nov 16 '22 by Ignacio Vazquez-Abrams


See Python string join performance and one specific answer that describes it very well:

The advice is about concatenating a lot of strings.

To compute s = s1 + s2 + ... + sn,

  1. using +. A new string s1+s2 is created, then a new string s1+s2+s3 is created, ..., etc, so a lot of memory allocation and copy operations are involved. In fact, s1 is copied n-1 times, s2 is copied n-2 times, ..., etc.

  2. using "".join([s1,s2,...,sn]). The concatenation is done in one pass, and each char in the strings is copied only once.
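The difference in total copying can be tallied with a little arithmetic. This is an illustrative model of the left-to-right evaluation described above (starting from an empty result and appending each piece), not a measurement:

```python
def copies_with_plus(lengths):
    """Characters copied when appending pieces one at a time with +."""
    total, prefix = 0, 0
    for n in lengths:
        total += prefix + n   # re-copy the existing prefix, then the new piece
        prefix += n
    return total

def copies_with_join(lengths):
    """With join, each character is copied exactly once."""
    return sum(lengths)

lengths = [10] * 100          # 100 pieces of 10 characters each
print(copies_with_plus(lengths))   # 50500: grows quadratically with n
print(copies_with_join(lengths))   # 1000: linear in the total size
```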

answered Nov 16 '22 by Leo