How practical would it really be to use pixz to create a 1 TB archive of artifacts with a 3 KB median size and an 80 KB average size? Are there any blockers or performance penalties when using the pixz index to read a single artifact? What is the CPU cost of decompression? When reading a single artifact, what overhead does the fixed block size imply?
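One way to frame the fixed-block-size question: with block-based compression, reading a single artifact generally means decompressing the entire block that contains it, so the read amplification is roughly block size divided by artifact size. A minimal sketch of that model (the 4 MiB block size here is an illustrative assumption, not pixz's documented default):

```python
def read_amplification(artifact_size: int, block_size: int) -> float:
    """Bytes decompressed per byte actually needed, assuming the artifact
    fits inside one block and the whole block must be decompressed."""
    return block_size / artifact_size

# A 3 KB artifact inside an assumed 4 MiB block:
amp = read_amplification(3 * 1024, 4 * 1024 * 1024)  # ≈ 1365x amplification
```

Under this model a smaller block size reduces per-read CPU cost but tends to hurt the compression ratio, since each block is compressed independently.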
There can be up to 1 TB / 3 KB ≈ 330 million artifacts (using the 80 KB average instead gives ≈ 12.5 million). Assuming each artifact is named by a SHA-256 hash, i.e. 32 bytes, that is an index of roughly 10 GB, i.e. ~1% of the archive size.
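The back-of-the-envelope above can be checked in a few lines (decimal units, 32 bytes per SHA-256 name; a real index would also store at least an offset per entry, so this is a lower bound):

```python
TB = 10**12
KB = 10**3

n_artifacts = TB // (3 * KB)        # upper bound using the 3 KB median
index_bytes = n_artifacts * 32      # one 32-byte SHA-256 name per entry
fraction = index_bytes / TB         # index size relative to the archive
```

This gives about 333 million entries and a ~10.7 GB index, i.e. roughly 1% of the archive.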