I think compressed_arc_enabled=0 has valid use cases (and performance impacts), and thus should not be removed. ARC compression trades CPU for RAM, and as every read will eventually lead to the data being decompressed, it can very well reduce performance (the TPS metric) in some scenarios, caused by the overhead of repeatedly decompressing one and the same data block - which might well offset any benefit gained from being able to fit more blocks into the available RAM.

The 1st issue (illumos and FreeBSD) could be avoided by compressing the data directly with the algorithm it was originally read from the pool with when evicting it to L2ARC, which is (iirc) what ZoL is doing per the 2nd issue (leading to more data fitting into L2ARC). The 3rd issue is IMHO a non-issue, as the data has to be decompressed anyway: it was requested for a read, else it wouldn't be fetched from L2ARC in the first place. On the 4th issue, the linked comment states: "The performance overhead of this will be relatively low."

Regarding offload cards (and improved software implementations) arriving at a compressed representation that differs from the corresponding/old software implementation: the listed issues could all (possibly except the encryption case, haven't wrapped my head around that yet) be addressed by tracking the original on-disk block checksum through the ARC and giving the L2ARC header an on-L2-disk checksum (so reads can be verified against the raw data coming from the cache drive). This should decouple ARC compression from the on-disk format (allowing even blocks stored with compression=off to be compressed inside the ARC, and the other way around). This breaks at least dedup and nop-write (in the sense of not finding a match when an accelerator is added/removed - not in terms of data loss), and should get a clear warning in the documentation.
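The checksum-tracking idea above can be sketched roughly as follows. All names here (L2Header, l2arc_write, l2arc_read) are hypothetical illustrations, not actual OpenZFS structures; the point is only that the L2ARC read path verifies the raw cache-device bytes against a checksum taken at write-out time, so no recompression is needed, while the original block-pointer checksum is merely carried along.

```python
import hashlib


class L2Header:
    """Hypothetical L2ARC header carrying two checksums (illustration only)."""

    def __init__(self, l2_cksum: bytes, disk_cksum: bytes):
        self.l2_cksum = l2_cksum      # checksum of the bytes as written to the cache device
        self.disk_cksum = disk_cksum  # original on-pool block-pointer checksum, carried along


def l2arc_write(buf: bytes, disk_cksum: bytes) -> L2Header:
    # Checksum the buffer exactly as it goes onto the cache device,
    # regardless of how (or whether) it is compressed inside the ARC.
    return L2Header(hashlib.sha256(buf).digest(), disk_cksum)


def l2arc_read(dev_bytes: bytes, hdr: L2Header) -> bytes:
    # Verify the raw bytes read back from the cache device;
    # no recompression against the block pointer is required.
    if hashlib.sha256(dev_bytes).digest() != hdr.l2_cksum:
        raise IOError("L2ARC checksum error")
    return dev_bytes
```

This is only a sketch of the verification flow, not of the real ARC buffer lifecycle; in particular it ignores compression entirely, which is exactly the point - the cache-device checksum is independent of whichever implementation produced the compressed bytes.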
If a pool is moved to a system without QAT, or if QAT is temporarily disabled, this will result in L2ARC checksum errors when the blocks are recompressed but the resulting checksum is not the same as that in the block pointer. A similar case exists with the forthcoming ZSTD compression feature: in the future, if the version of the ZSTD algorithm is upgraded to take advantage of improvements in the compression ratio, the L2ARC recompression will result in a checksum mismatch.

Latent bugs caused by the "Compressed ARC is disabled" case being under-tested:
- 9321 arc_loan_compressed_buf() can increment arc_loaned_bytes by the wrong value (receiving a compressed stream on a system with compressed_arc disabled).
- A large working set of frequently accessed blocks will overwhelm the dbuf cache and spend a lot of time decompressing blocks cached in the ARC. Expanding the size of the dbuf cache results in a lot of double buffering (both the compressed and uncompressed version of the block).

We likely need to follow the not-yet-established OpenZFS Deprecation Policy, to give users warning that this feature is going away, and to give those with use cases for disabling the compressed ARC a chance to make those use cases known to us. See discussion of the deprecation policy in: Deprecate dedup send/receive.
Since the introduction of the Compressed ARC feature (6950 ARC should cache compressed data), it has been possible to disable the feature using the tunable: compressed_arc_enabled=0. It is unclear how many users operate systems with the compressed ARC feature disabled; however, it is clear that it gets a lot less testing than the default case. Over time, the assumption that the ARC is compressed, and dealing with the corner cases when it is not, has increased the complexity of the code base. A number of developers have expressed a desire to retire the ability to disable the Compressed ARC. It is often standing in the way of additional new features.

Pathological Behaviour when Compressed ARC is Disabled:
- On illumos and FreeBSD: Every read from the L2ARC requires the data to be re-compressed to validate the checksum.
- On Linux: Every write to the L2ARC requires the data to be re-compressed so that its checksum will match when it is later read back.
- On Linux: Every read from the L2ARC requires the data to be decompressed.
- With Native Crypto: Authenticating the data requires re-compression to verify the MAC.
- With Intel QuickAssist, and other offload cards, the implementation of GZIP is "decompress compatible", meaning the software gzip implementation can read data compressed with QAT-gzip, but the output is very often not bit-for-bit the same.
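The QuickAssist point - that two conforming implementations of the same algorithm need not produce identical bytes - is easy to demonstrate. Below is a minimal sketch using Python's zlib, with two DEFLATE encoding strategies standing in for "software gzip" versus "offload-card gzip"; the strategies are purely an illustration of decompress-compatible-but-not-bit-identical output, not what QAT actually does.

```python
import hashlib
import zlib

data = b"example block contents " * 128

def deflate(payload: bytes, strategy: int) -> bytes:
    # A conforming DEFLATE encoder; the strategy changes the encoding
    # choices but not the decompressed result.
    c = zlib.compressobj(level=6, strategy=strategy)
    return c.compress(payload) + c.flush()

soft = deflate(data, zlib.Z_DEFAULT_STRATEGY)  # stand-in for software gzip
card = deflate(data, zlib.Z_HUFFMAN_ONLY)      # stand-in for an offload card

# Both decompress to the same logical data ("decompress compatible")...
assert zlib.decompress(soft) == zlib.decompress(card) == data

# ...but the compressed bytes differ, so a checksum computed over the
# recompressed block no longer matches one recorded by the other
# implementation.
print(hashlib.sha256(soft).hexdigest() == hashlib.sha256(card).hexdigest())  # → False
```

This is why recompressing a block on a system with a different (but still correct) compressor can produce a checksum that disagrees with the one stored in the block pointer.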