I am wondering how the emerging SSD technology affects (mostly systems) programming. Tons of questions arise, but here are some of the most obvious ones:
Storage type and capacity: getting an SSD (solid-state drive) should be near the top of your priorities, as it gives significant performance improvements over a standard hard drive. Every operation is noticeably faster with an SSD: booting the OS, compiling code, launching applications, and loading projects.
Memory cell types: a solid-state drive (SSD) is a flash-memory-based data storage device. Bits are stored in cells, which come in three types: 1 bit per cell (single-level cell, SLC), 2 bits per cell (multi-level cell, MLC), and 3 bits per cell (triple-level cell, TLC).
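To make the density difference between the cell types concrete, here is a back-of-the-envelope calculation; the arithmetic is illustrative only and uses no real-device parameters:

```python
# Illustrative: how many flash cells are needed to store 1 TiB of user
# data under each cell type. Pure arithmetic, not datasheet figures.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

capacity_bits = 2**40 * 8  # 1 TiB expressed in bits

for name, bits in BITS_PER_CELL.items():
    cells = capacity_bits // bits
    print(f"{name}: {cells:,} cells for 1 TiB")
```

The same die area therefore stores three times as much in TLC as in SLC, which is exactly the density-versus-endurance trade-off these cell types represent.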
Solid-state drives work completely differently from hard drives. They store data in NAND flash memory chips, which have no moving parts and near-instant access times. Early experiments with SSD-like technology started in the 1950s, and by the 1970s and 1980s such devices were being used in high-end supercomputers.
It is true that SSDs eliminate the seek-time issue for reads, but writing to them efficiently is quite tricky. We have been doing some research into these issues while looking for the best way to use SSDs for the Acunu storage core.
You might find these interesting:
One factor comes readily to mind...
There has been a growing trend towards treating hard drives as if they were tape drives, due to the high relative cost of moving the heads between widely separated tracks. This has led to efforts to optimise data access patterns so that the head can move smoothly across the surface rather than seeking randomly.
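The "treat disks like tapes" idea boils down to preferring sequential over random I/O. A minimal sketch that contrasts the two access patterns (block size, block count, and the scratch file are arbitrary choices for the demo; on a spinning disk the random pass is dramatically slower, while on an SSD, or with the file cached in RAM, the gap largely disappears):

```python
import os, random, tempfile, time

BLOCK = 4096   # bytes per read
BLOCKS = 256   # 1 MiB of data in total, so the demo runs quickly

# Create a scratch file filled with random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

def read_blocks(order):
    """Read the file one block at a time, in the given block order."""
    with open(path, "rb") as f:
        for i in order:
            f.seek(i * BLOCK)
            f.read(BLOCK)

# Sequential pass: blocks in ascending order.
t0 = time.perf_counter()
read_blocks(range(BLOCKS))
t_seq = time.perf_counter() - t0

# Random pass: the same blocks, shuffled.
order = list(range(BLOCKS))
random.shuffle(order)
t0 = time.perf_counter()
read_blocks(order)
t_rand = time.perf_counter() - t0

print(f"sequential: {t_seq:.4f}s  random: {t_rand:.4f}s")
os.remove(path)
```

On real workloads the effect is far larger than this toy file shows, since the OS page cache will absorb 1 MiB easily; the point is the shape of the experiment, not these particular timings.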
SSDs practically eliminate the seek penalty, so we can go back to not worrying so much about the layout of data on disk. (More accurately, we have a different set of worries, due to wear-levelling concerns).
While the seek times of SSDs are better than those of HDDs by an order of magnitude or two, they are still significant compared to RAM. Seek-related issues are therefore less severe, but they have not disappeared, and throughput is still much lower than RAM's. Apart from the storage technology itself, the interconnect matters: RAM sits physically very close to the CPU and other components on the motherboard and uses a dedicated bus, while mass-storage devices do not have this advantage. There exist battery-backed packages of RAM modules that can act as an ultra-fast HDD substitute, but if they attach via SATA, SCSI, or another typical disk interface, they are still slower than system RAM.
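To put rough numbers on that hierarchy, here is a sketch using widely quoted ballpark latencies; these are order-of-magnitude figures, not measurements, and actual values vary by device and generation:

```python
# Rough, commonly cited order-of-magnitude access latencies.
# Ballpark figures only; real devices vary widely.
latency_ns = {
    "RAM access":      100,          # ~100 ns
    "SSD random read": 100_000,      # ~100 us
    "HDD seek":        10_000_000,   # ~10 ms
}

ram = latency_ns["RAM access"]
for name, ns in latency_ns.items():
    print(f"{name:16s} ~{ns:>12,} ns  ({ns // ram:,}x RAM)")
```

Even with generous assumptions, an SSD read sits roughly three orders of magnitude above a RAM access, which is why the RAM/storage boundary still dominates data-structure design.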
This means that B-trees are still significant, and for high performance you still need to take care of what lives in RAM and what lives in permanent storage. Given the overall architecture and physical limitations (non-volatile writes will probably always tend to be slower than volatile ones), I think this gap may shrink, but I doubt it will disappear in any foreseeable future. Even within "RAM" there is no single speed: there are several levels of progressively faster (but smaller and more expensive) caches. So at least some of these differences are here to stay.
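As a concrete illustration of why B-trees remain relevant: their branching factor is typically chosen so that one node fills one storage block, which minimises the number of block reads per lookup. A sketch of that sizing calculation, using made-up round numbers for the block, key, and pointer sizes:

```python
import math

# Hypothetical sizes: one node should fill one 4 KiB block.
BLOCK_SIZE = 4096
KEY_SIZE = 16       # bytes per key (illustrative)
POINTER_SIZE = 8    # bytes per child pointer (illustrative)

# Largest k such that k keys plus (k+1) child pointers fit in a block:
#   k*KEY + (k+1)*PTR <= BLOCK  =>  k <= (BLOCK - PTR) / (KEY + PTR)
keys_per_node = (BLOCK_SIZE - POINTER_SIZE) // (KEY_SIZE + POINTER_SIZE)
fanout = keys_per_node + 1

n = 10**9  # a billion keys
height = math.ceil(math.log(n, fanout))  # roughly, block reads per lookup
print(f"fanout={fanout}, ~{height} block reads to find 1 of {n:,} keys")
```

A fanout in the hundreds means a lookup among a billion keys touches only a handful of blocks, and each of those block reads is exactly the expensive RAM-versus-storage boundary discussed above.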