Home
This is the Wiki home for LZF Compression library ("compress-lzf", or, for historical reasons, "Ning compress").
Compress-LZF is a Java library for encoding and decoding data, written by Tatu Saloranta ([email protected])
The primary compression format is LZF; but starting with version 0.9, there is also improved support for basic GZIP; the latter uses low-level JDK-provided Deflate functionality (which is based on native zlib).
The LZF data format this library supports is compatible with the original LZF library by Marc A Lehmann. There are other LZF variants that differ from this, such as the one used by the H2 database project (by Thomas Mueller); although the internal block compression structure is the same, block identifiers differ. This package uses the original LZF identifiers to be 100% compatible with existing command-line lzf tool(s).
The LZF algorithm itself is optimized for speed, with somewhat more modest compression: compared to GZIP, LZF can be 6-8 times as fast to compress, and 2-3 times as fast to decompress.
Finally, note that the library also provides a parallel compressor implementation (com.ning.compress.lzf.parallel.PLZFOutputStream), which can encode (compress) content using multiple processing cores: concurrent compression works on a chunk-by-chunk basis (64k max chunk size), so megabyte-sized content can be processed very efficiently.
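As a sketch of what parallel compression might look like (the PLZFOutputStream class is named above; the idea of wrapping a plain FileOutputStream and the file name are illustrative assumptions):

```java
import com.ning.compress.lzf.parallel.PLZFOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;

public class ParallelCompressExample {
    public static void main(String[] args) throws Exception {
        // Megabyte-sized content is where parallel compression pays off;
        // this placeholder buffer just stands in for real data.
        byte[] data = new byte[4 * 1024 * 1024];

        // PLZFOutputStream splits output into chunks (64k max) and
        // compresses them on multiple cores before writing them out.
        try (OutputStream out = new PLZFOutputStream(new FileOutputStream("big.lzf"))) {
            out.write(data);
        }
    }
}
```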
- JavaDocs
- LZF Format description
From Maven repository (http://repo1.maven.org/maven2/com/ning/compress-lzf/)
- 1.1.2 (January 2023)
Typical usage is via one of the programmatic interfaces:
- block-based interface (LZFEncoder, LZFDecoder)
- streaming interface (LZFInputStream/LZFFileInputStream, LZFOutputStream/LZFFileOutputStream)
- or, for the 'reverse' direction: LZFCompressingInputStream
- or, for parallel compression: PLZFOutputStream
- "push" interface (reverse of streaming): LZFUncompressor (NOTE: only for decompression)
When reading compressed data from a file, you can simply create an LZFFileInputStream (or LZFInputStream for other kinds of input) and use it to read content:
InputStream in = new LZFFileInputStream("data.lzf");
(Note, too, that the stream is buffered: there is no need for, or benefit from, using a BufferedInputStream!)
and similarly you can compress content using LZFFileOutputStream (or LZFOutputStream):
OutputStream out = new LZFFileOutputStream("results.lzf");
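Putting the two streams together, a complete file round-trip might look like this sketch (the file name and payload are made up; the constructors are the ones shown above):

```java
import com.ning.compress.lzf.LZFFileInputStream;
import com.ning.compress.lzf.LZFFileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class FileRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] payload = "Some content to compress".getBytes("UTF-8");

        // Write LZF-compressed content; the stream buffers internally.
        try (OutputStream out = new LZFFileOutputStream("results.lzf")) {
            out.write(payload);
        }

        // Read it back; decompression happens transparently.
        byte[] buf = new byte[payload.length];
        try (InputStream in = new LZFFileInputStream("results.lzf")) {
            int off = 0, n;
            while (off < buf.length && (n = in.read(buf, off, buf.length - off)) > 0) {
                off += n;
            }
        }
    }
}
```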
or you can even do the reverse, reading uncompressed data and compressing it as you read:
InputStream compressingIn = new LZFCompressingInputStream(new FileInputStream("results.txt"));
Compressing and decompressing individual blocks is just as simple:
byte[] compressed = LZFEncoder.encode(uncompressedData);
byte[] uncompressed = LZFDecoder.decode(compressedData);
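A complete round-trip using the block interface above might look like this (the sample data is made up; the encode/decode calls are the ones shown):

```java
import com.ning.compress.lzf.LZFDecoder;
import com.ning.compress.lzf.LZFEncoder;
import java.util.Arrays;

public class BlockRoundTrip {
    public static void main(String[] args) throws Exception {
        // Repetitive content compresses well with LZF.
        byte[] uncompressedData =
            "repetitive data repetitive data repetitive data".getBytes("UTF-8");

        // Encode produces a self-contained LZF chunk.
        byte[] compressed = LZFEncoder.encode(uncompressedData);

        // Decode restores the original bytes.
        byte[] uncompressed = LZFDecoder.decode(compressed);

        if (!Arrays.equals(uncompressedData, uncompressed)) {
            throw new AssertionError("round-trip mismatch");
        }
    }
}
```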
Finally, note that LZF encoded chunks have a length of at most 65535 bytes; longer content will be split into such chunks. This is done transparently, so that you can compress/uncompress blocks of any size; chunking is handled by LZF encoders and decoders.
It is also possible to use the jar as a command-line tool, since its manifest points to 'com.ning.compress.lzf.LZF' as the class having the main() method to call.
This means that you can use it like:
java -jar compress-lzf-1.1.2.jar
(which will display necessary usage arguments)
The jar also:
- is a valid (and extremely simple) OSGi bundle, to make it work nicely in OSGi containers
- has a basic Java Module (JPMS) descriptor, module-info.class, as of version 1.1
Check out jvm-compress-benchmark for a comparison of the space- and time-efficiency of this LZF implementation, relative to other available Java-accessible compression libraries.