While I have found many references that explain what they are and how they work, I’ve battled to find a nice, summarised explanation of why they are so much faster. The Microsoft article above goes into some detail about this, but I turned to ChatGPT and it gave me the summary I was looking for.
Noted here for my future reference:
Columnstore indexes are faster than traditional row-based storage indexes for several reasons:
- Compression: Columnstore indexes store data in a compressed format, reducing the amount of I/O required to retrieve data.
- Batch Processing: Columnstore indexes store data in columns instead of rows. When data is retrieved, it can be processed in bulk, reducing the overhead associated with reading and processing data one row at a time.
- Vector Processing: Columnstore indexes take advantage of modern hardware capabilities, such as vector processing, to speed up data processing.
- Data Filtering: Columnstore indexes can perform data filtering more efficiently than traditional indexes, since they can eliminate large portions of data before reading it.
- Dedicated Hardware: Columnstore indexes can be supported by dedicated hardware, such as columnar storage systems, which are optimized for columnar data storage and retrieval.
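The compression point is easiest to see with a toy example. The sketch below is not how SQL Server actually encodes segments — it uses its own dictionary and run-length schemes internally — but it shows the underlying idea: a column of repetitive values stored contiguously (common in analytical data, e.g. a region or status column) collapses to almost nothing under run-length encoding, whereas those same values interleaved row-by-row with other fields would not.

```python
# Illustrative only: why columnar layout compresses so well.
from itertools import groupby

def run_length_encode(column):
    """Collapse consecutive repeated values into (value, count) pairs."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

# 1,000 values, but only three distinct ones, stored together as a column.
region_column = ["EU"] * 400 + ["US"] * 350 + ["APAC"] * 250

encoded = run_length_encode(region_column)
print(encoded)  # [('EU', 400), ('US', 350), ('APAC', 250)]
# 1,000 stored values reduced to 3 (value, count) pairs — far less I/O to scan.
```

Less data on disk means fewer pages read, which is the first bullet's point in concrete terms.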
Overall, these design choices make columnstore indexes significantly faster than traditional row-based indexes for large data sets, especially for analytical queries that aggregate large amounts of data.
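To make the row-versus-column distinction concrete, here is a rough sketch (plain Python, nothing to do with SQL Server's actual storage engine) of aggregating a single field under each layout. In the row layout, every row must be touched just to extract one field; in the column layout, each field lives in its own contiguous list, so the aggregate scans exactly the one column it needs:

```python
# Illustrative only: row-oriented vs column-oriented aggregation.

# Row store: each record carries every field, including wide ones we don't need.
rows = [{"id": i, "amount": i * 2, "note": "x" * 50} for i in range(1000)]

# Aggregating "amount" still touches every whole row.
row_total = sum(row["amount"] for row in rows)

# Column store: one contiguous list per field.
columns = {
    "id": [r["id"] for r in rows],
    "amount": [r["amount"] for r in rows],
    "note": [r["note"] for r in rows],
}

# The aggregate now reads only the "amount" column and ignores the rest.
col_total = sum(columns["amount"])

assert row_total == col_total  # same answer, far less data scanned per field
```

This is also why the batch- and vector-processing bullets matter: a contiguous column is exactly the shape that bulk operations and SIMD hardware want to consume.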