
Best approach to holding large editable documents in memory

I need to hold a representation of a document in memory, and am looking for the most efficient way to do this.

Assumptions

  • The documents can be pretty large, up to 100MB.
  • More often than not the document will remain unchanged (i.e. I don't want to do unnecessary up-front processing).
  • Changes will typically be quite close to each other in the document (i.e. as the user types).
  • It should be possible to apply changes fast (without copying the whole document).
  • Changes will be applied in terms of offsets and new/deleted text, not line/column positions (see the sketch after this list).
  • It must work in C#.
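
For concreteness, here is a minimal sketch of what such an edit might look like (a hypothetical type of my own, not part of any library):

```csharp
// Hypothetical description of a single edit: replace DeletedLength
// characters starting at Offset with InsertedText. No line/column
// positions are involved.
public sealed class TextEdit
{
    public int Offset;          // character offset into the document
    public int DeletedLength;   // number of characters removed
    public string InsertedText; // text inserted at Offset (may be "")
}
```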

Current considerations

  • Storing the data as a string. Easy to code, fast to set, very slow to update (see the baseline sketch below).
  • An array of lines. Moderately easy to code, slower to set (the string has to be parsed into lines), faster to update (lines can be inserted and removed easily, but finding an offset requires summing line lengths).
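
For reference, the string approach boils down to a full copy per edit - a minimal sketch, reusing the hypothetical TextEdit type from above:

```csharp
// Naive baseline: each edit rebuilds the entire string, so a single
// keystroke in a 100 MB document copies roughly 100 MB of characters.
static string ApplyEdit(string document, TextEdit edit)
{
    return document.Remove(edit.Offset, edit.DeletedLength)
                   .Insert(edit.Offset, edit.InsertedText);
}
```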

There must be a load of standard algorithms for this kind of thing (it's not a million miles from disk allocation and fragmentation).

Thanks for your thoughts.

Colin asked May 08 '09


2 Answers

I would suggest breaking the file into blocks. All blocks have the same length when you load them, but the length of each block may change as the user edits it. This avoids moving 100 megabytes of data when the user inserts one byte at the front.

To manage the blocks, just put them - together with the offset of each block - into a list. If the user changes a block's length, you only have to update the offsets of the blocks after it. To find the block containing a given offset, you can use binary search.
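
Here is a minimal C# sketch of that scheme, using the StringBuilder-per-block representation suggested at the end of this answer (the class and method names are illustrative, not a standard API):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

public sealed class BlockedDocument
{
    private const int BlockSize = 16 * 1024;
    private readonly List<StringBuilder> _blocks = new List<StringBuilder>();
    private readonly List<int> _offsets = new List<int>(); // start offset of each block

    public BlockedDocument(string text)
    {
        // Load fixed-size blocks; only the last one may be shorter.
        for (int pos = 0; pos < text.Length; pos += BlockSize)
        {
            int len = Math.Min(BlockSize, text.Length - pos);
            _blocks.Add(new StringBuilder(text.Substring(pos, len)));
            _offsets.Add(pos);
        }
        if (_blocks.Count == 0) { _blocks.Add(new StringBuilder()); _offsets.Add(0); }
    }

    // Binary search over the cached start offsets: the last block whose
    // start offset is <= the requested document offset contains it.
    private int FindBlock(int offset)
    {
        int lo = 0, hi = _blocks.Count - 1;
        while (lo < hi)
        {
            int mid = (lo + hi + 1) / 2;
            if (_offsets[mid] <= offset) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    // Insert touches a single block, then shifts the start offsets of
    // every block after it - the two costs the block size balances.
    public void Insert(int offset, string text)
    {
        int i = FindBlock(offset);
        _blocks[i].Insert(offset - _offsets[i], text);
        for (int j = i + 1; j < _offsets.Count; j++)
            _offsets[j] += text.Length;
    }
}
```

Deletion is the same bookkeeping with the sign flipped; a fuller version would also split blocks that grow far beyond the target size and merge small neighbours.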

File size: 100 MiB
Block size: 16 KiB
Blocks: 6,400

Finding an offset using binary search (worst case): 13 steps
Modifying a block (worst case): copy 16,384 bytes and update 6,400 block offsets
Modifying a block (average case): copy 8,192 bytes and update 3,200 block offsets

The 16 KiB block size is just an example - you can balance the costs of the operations by choosing the block size, perhaps based on the file size and the relative frequency of the operations. Since an edit copies on the order of the block size B within one block and updates on the order of N/B offsets (for a document of size N), the total cost is minimized around B ≈ √N, so some simple math yields a near-optimal block size.

Loading will be quite fast, because you load fixed-size blocks, and saving should perform well too, because you write a few thousand blocks rather than millions of single lines. You can optimize loading by loading blocks only on demand, and you can optimize saving by writing only the blocks that changed (content or offset).

Finally, the implementation is not too hard, either. You could simply use the StringBuilder class to represent a block. But this solution will not work well for very long lines, with lengths comparable to the block size or larger, because you would have to load many blocks yet display only a small part of them, with the rest lying to the left or right of the window. You would probably have to use a two-dimensional partitioning model in that case.

Daniel Brückner answered Nov 15 '22


Good Math, Bad Math wrote an excellent article about ropes and gap buffers a while ago that details the standard methods for representing text files in a text editor, and even compares them for simplicity of implementation and performance. In a nutshell: a gap buffer - a large character array with an empty section immediately after the current position of the cursor - is your simplest and best bet.
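
A minimal gap buffer sketch in C# (the names are mine; this illustrates the idea rather than any library type):

```csharp
using System;

// The document lives in one char array with an unused "gap" at the
// cursor. Typing at the cursor is O(1); moving the cursor copies only
// the characters between the old and new positions.
public sealed class GapBuffer
{
    private char[] _buffer;
    private int _gapStart; // first unused index (== cursor position)
    private int _gapEnd;   // first used index after the gap

    public GapBuffer(string text, int initialGap = 4096)
    {
        _buffer = new char[text.Length + initialGap];
        text.CopyTo(0, _buffer, 0, text.Length);
        _gapStart = text.Length; // cursor starts at the end
        _gapEnd = _buffer.Length;
    }

    public int Length => _buffer.Length - (_gapEnd - _gapStart);

    // Move the cursor: shift the characters between the old and new
    // positions across the gap. Cheap when edits are close together.
    public void MoveCursor(int position)
    {
        if (position < _gapStart)
        {
            int count = _gapStart - position;
            Array.Copy(_buffer, position, _buffer, _gapEnd - count, count);
            _gapStart = position;
            _gapEnd -= count;
        }
        else if (position > _gapStart)
        {
            int count = position - _gapStart;
            Array.Copy(_buffer, _gapEnd, _buffer, _gapStart, count);
            _gapStart = position;
            _gapEnd += count;
        }
    }

    // Insert at the cursor; grow the array only when the gap is empty.
    public void Insert(char c)
    {
        if (_gapStart == _gapEnd) Grow();
        _buffer[_gapStart++] = c;
    }

    private void Grow()
    {
        var bigger = new char[_buffer.Length * 2];
        Array.Copy(_buffer, 0, bigger, 0, _gapStart);
        int tail = _buffer.Length - _gapEnd;
        Array.Copy(_buffer, _gapEnd, bigger, bigger.Length - tail, tail);
        _gapEnd = bigger.Length - tail;
        _buffer = bigger;
    }
}
```

MoveCursor is where the "changes close to each other" assumption pays off: consecutive keystrokes barely move the gap, so inserts stay amortized O(1).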

Nick Johnson answered Nov 15 '22