Age | Commit message | Author | Files | Lines |
|
|
|
These variables are internal to liblzma and not exposed in the API.
|
|
This was done for both internal and API headers.
|
|
|
|
This creates an internal liblzma macro to test if the dictionary size
is valid for encoding.
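A minimal sketch of such a check (the macro name is made up;
LZMA_DICT_SIZE_MIN is the public minimum, and 1.5 GiB is assumed
here as the encoder's current upper limit):

    #include <lzma.h>

    // Hypothetical helper, not the actual liblzma macro.
    #define MY_IS_ENC_DICT_SIZE_VALID(size) \
        ((size) >= LZMA_DICT_SIZE_MIN \
            && (size) <= (UINT32_C(1) << 30) + (UINT32_C(1) << 29))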
|
|
Compilers don't warn about this by default on a 64-bit system:
truncating a ptrdiff_t (signed long) to uint32_t is diagnosed
only with -Wconversion by GCC and -Wshorten-64-to-32 by Clang.
|
|
|
|
|
|
This allows using two Filter IDs with the same
initialization function and data structures.
|
|
That is, if the specified nice_len is smaller than the minimum
of the match finder, silently use the match finder's minimum value
instead of reporting an error. The old behavior is annoying to users
and it complicates xz options handling too.
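Roughly, the new behavior is just a clamp instead of an error
(the names below are illustrative, not liblzma internals):

    // If the requested nice_len is below what the selected match
    // finder supports, silently raise it to that minimum.
    if (opts->nice_len < mf_min_nice_len)
        opts->nice_len = mf_min_nice_len;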
|
|
This uses it for the CRC table initializations when --enable-small
is used. It avoids mythread_once() overhead. It also means that
a build with --enable-small --disable-threads is then thread-safe
if this attribute is supported.
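A minimal sketch of the mechanism (the table and function names
below are illustrative, not the actual liblzma code):

    #include <stdint.h>

    static uint32_t my_crc32_table[256];

    // Runs automatically before main() when the compiler supports
    // __attribute__((__constructor__)), so no mythread_once() or
    // other lazy-init guard is needed.
    __attribute__((__constructor__))
    static void
    my_crc32_table_init(void)
    {
        for (uint32_t i = 0; i < 256; ++i) {
            uint32_t r = i;
            for (int j = 0; j < 8; ++j)
                r = (r >> 1)
                    ^ (UINT32_C(0xEDB88320) & (0U - (r & 1)));
            my_crc32_table[i] = r;
        }
    }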
|
|
|
|
Turns out that this is needed for .lzma files, as the spec in
the LZMA SDK says that the end marker may be present even if the size
is stored in the header. Such files are rare but exist in the
real world. The code in liblzma is so old that the spec didn't
exist in LZMA SDK back then and I had understood that such
files weren't possible (the lzma tool in LZMA SDK didn't
create such files).
This modifies the internal API so that LZMA decoder can be told
if EOPM is allowed even when the uncompressed size is known.
It's allowed with .lzma and not with other uses.
Thanks to Karl Beldan for reporting the problem.
|
|
With this it is possible to encode LZMA1 data without EOPM so that
the encoder will encode as much input as it can without exceeding
the specified output size limit. The resulting LZMA1 stream will
be a normal LZMA1 stream without EOPM. The actual uncompressed size
will be available to the caller via the uncomp_size pointer.
One missing thing is that the LZMA layer doesn't inform the LZ layer
when the encoding is finished, and thus the LZ layer may read more
input even though it won't be used. However, this doesn't matter if encoding is
done with a single call (which is the planned use case for now).
For proper multi-call encoding this should be improved.
This commit only adds the functionality for internal use.
Nothing uses it yet.
|
|
|
|
|
|
Caught by clang -Wused-but-marked-unused.
|
|
|
|
I should have always known this but I didn't. Here is an example
as a reminder to myself:
    #include <string.h>

    int mycopy(void *dest, void *src, size_t n)
    {
        memcpy(dest, src, n);
        return dest == NULL;
    }
In the example, a compiler may assume that dest != NULL because
passing NULL to memcpy() would be undefined behavior. Testing
with GCC 8.2.1, mycopy(NULL, NULL, 0) returns 1 with -O0 and -O1.
With -O2 the return value is 0 because the compiler infers that
dest cannot be NULL because it was already used with memcpy()
and thus the test for NULL gets optimized out.
In liblzma, if a null pointer was passed to memcpy(), there were
no checks for NULL *after* the memcpy() call, so I cautiously
suspect that it shouldn't have caused bad behavior in practice,
but it's hard to be sure, and the problematic cases had to be
fixed anyway.
Thanks to Jeffrey Walton.
|
|
|
|
Only one definition was visible in a translation unit.
It avoided a few casts and temp variables but it seems that
this hack doesn't work with link-time optimization in compilers,
as it's not C99/C11 compliant.
Fixes:
http://www.mail-archive.com/xz-devel@tukaani.org/msg00279.html
|
|
When optimizing, GCC can reorder code so that an uninitialized
value gets used in a comparison, which makes Valgrind unhappy.
It doesn't happen when compiled with -O0, which I tend to use
when running Valgrind.
Thanks to Rich Prohaska. I remember this being mentioned long
ago by someone else but nothing was done back then.
|
|
|
|
Thanks to Torsten Rupp for reporting this. I had
forgotten to run Valgrind before the 5.2.0 release.
|
|
I had missed this when writing the commit
5db75054e900fa06ef5ade5f2c21dffdd5d16141.
Thanks to Jun I Jin.
|
|
This doesn't change the match finder output.
|
|
This avoids a memzero() call for newly allocated memory,
which can be expensive when encoding small streams with
an over-sized dictionary.
To avoid using lzma_alloc_zero() for memory that doesn't
need to be zeroed, lzma_mf.son is now allocated separately,
which requires handling it separately in normalize() too.
Thanks to Vincenzo Innocente for reporting the problem.
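For reference, a zeroing allocation helper along these lines could
look like the sketch below; it illustrates the idea behind
lzma_alloc_zero(), not necessarily its exact code:

    #include <stdlib.h>
    #include <string.h>
    #include <lzma.h>

    // Allocate zero-initialized memory, honoring a custom
    // lzma_allocator when one is given. calloc() can often hand out
    // already-zeroed pages, while a custom allocator has to be
    // followed by an explicit memset().
    static void *
    my_alloc_zero(size_t size, const lzma_allocator *allocator)
    {
        if (allocator != NULL && allocator->alloc != NULL) {
            void *ptr = allocator->alloc(allocator->opaque, 1, size);
            if (ptr != NULL)
                memset(ptr, 0, size);

            return ptr;
        }

        return calloc(1, size);
    }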
|
|
There is a tiny risk of causing breakage: If an application
assigns lzma_stream.allocator to a non-const pointer, such
code won't compile anymore. I don't know why anyone would do
such a thing though, so in practice this shouldn't cause trouble.
Thanks to Jan Kratochvil for the patch.
|
|
|
|
|
|
It was 8 + nice_len / 4, now it is 4 + nice_len / 4.
This allows faster settings at lower nice_len values,
even though it seems that I won't use automatic depth
calculation with HC3 and HC4 in the presets.
|
|
When using -O2 with GCC, it liked to swap two comparisons
in one "if" statement. It's otherwise fine except that
the latter part, which is seemingly never executed, got
executed (nothing wrong with that) and then triggered a
warning in Valgrind about a conditional jump depending on an
uninitialized variable. A few people find this annoying,
so do things a bit differently to avoid the warning.
|
|
This should avoid some minor portability issues.
|
|
Thanks to Jonathan Nieder.
|
|
in the text editor.
|
|
Originally the idea was that using LZMA_FULL_FLUSH
with Stream encoder would read the filter chain
from the same array that was used to initialize the
Stream encoder. Since most apps wouldn't use
LZMA_FULL_FLUSH, most apps wouldn't need to keep
the filter chain available after initializing the
Stream encoder. However, due to my mistake, it
actually required keeping the array always available.
Since setting the new filter chain via the array
used at initialization time is not a nice way to do
it for a couple of reasons, this commit ditches it
and introduces lzma_filters_update(). This new function
also replaces the "persistent" flag used by LZMA2
(and the to-be-designed Subblock filter), which was also
an ugly thing to do.
Thanks to Alexey Tourbin for reminding me about the problem
that Stream encoder used to require keeping the filter
chain allocated.
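lzma_filters_update() is a public function; a minimal usage sketch
(error handling trimmed, preset chosen arbitrarily) could look like
this:

    #include <lzma.h>

    // Switch an already-initialized Stream encoder to a new filter
    // chain. The new chain takes effect when the next Block starts,
    // e.g. after flushing the current one with LZMA_FULL_FLUSH.
    static lzma_ret
    switch_filters(lzma_stream *strm)
    {
        // Static so the options stay valid while the encoder may
        // still refer to them.
        static lzma_options_lzma opt;
        if (lzma_lzma_preset(&opt, 9 | LZMA_PRESET_EXTREME))
            return LZMA_OPTIONS_ERROR;

        const lzma_filter filters[] = {
            { .id = LZMA_FILTER_LZMA2, .options = &opt },
            { .id = LZMA_VLI_UNKNOWN, .options = NULL },
        };

        return lzma_filters_update(strm, filters);
    }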
|
|
This replaces bswap.h and integer.h.
The tuklib module uses <byteswap.h> on GNU,
<sys/endian.h> on *BSDs and <sys/byteorder.h>
on Solaris, which may contain optimized code
like inline assembly.
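The idea, roughly (the HAVE_* names assume an Autoconf-style
config.h; the fallback is plain shifting):

    #include <stdint.h>

    // Sketch only: pick an optimized byte-swapping routine when a
    // system header provides one, otherwise use portable shifts.
    #if defined(HAVE_BYTESWAP_H)
    #    include <byteswap.h>        // glibc: bswap_32()
    #    define my_bswap32(x) bswap_32(x)
    #elif defined(HAVE_SYS_ENDIAN_H)
    #    include <sys/endian.h>      // *BSDs: bswap32()
    #    define my_bswap32(x) bswap32(x)
    #elif defined(HAVE_SYS_BYTEORDER_H)
    #    include <sys/byteorder.h>   // Solaris: BSWAP_32()
    #    define my_bswap32(x) BSWAP_32(x)
    #else
    #    define my_bswap32(x) \
            ((((x) >> 24) & UINT32_C(0x000000FF)) \
            | (((x) >> 8) & UINT32_C(0x0000FF00)) \
            | (((x) << 8) & UINT32_C(0x00FF0000)) \
            | (((x) << 24) & UINT32_C(0xFF000000)))
    #endif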
|
|
in lz_encoder_hash.h.
|
|
It seems that it is a problem in some cases if the same
version of XZ Utils produces different output on different
endiannesses, so this commit fixes that problem. The output
will still vary between different XZ Utils versions, but I
cannot avoid that for now.
This commit bloats the code on big endian systems by 1 KiB,
which should be OK since liblzma is bloated already. ;-)
|
|
Thanks to Christian Weisgerber for pointing out some of these.
|
|
|
|
Thanks to Jonathan Stott for the bug report.
|
|
Don't use libtool convenience libraries to avoid recently
discovered long-standing subtle but somewhat severe bugs
in libtool (at least 1.5.22 and 2.2.6 are affected). It
was found when porting XZ Utils to Windows
<http://lists.gnu.org/archive/html/libtool/2009-06/msg00070.html>
but the problem is significant also e.g. on GNU/Linux.
Unless --disable-shared is passed to configure, a static
library built from a set of convenience libraries will
contain PIC objects. That is, while libtool builds non-PIC
objects too, only PIC objects will be used from the
convenience libraries. On 32-bit x86 (tested on mobile XP2400+),
using PIC instead of non-PIC makes the decompressor 10 % slower
with the default CFLAGS.
So while xz was linked against static liblzma by default,
it got the slower PIC objects unless --disable-shared was
used. I tend to develop and benchmark with --disable-shared
due to faster build time, so I hadn't noticed the problem
in benchmarks earlier.
This commit also adds support for building Windows resources
into liblzma and executables.
|
|
Fix the ordering of libgnu.a and LTLIBINTL on the linker
command line and add the missing LTLIBINTL to tests/Makefile.am.
|
|
Some minor documentation cleanups were made at the same time.
|
|
Fortunately, this bug had no security risk other than accepting
some corrupt files as valid.
|
|
table, which is used also by LZ encoder. This was needed
because calling lzma_crc32() and ignoring the result is
a no-op due to lzma_attr_pure.
|
|
other compilers than MinGW. This may hurt readability
of the API headers slightly, but I don't know any
better way to do this.
|
|
and LZMA2. It is not supported by the .xz format or the xz
command line tool yet.
|
|
Half of developers were already forgetting to use these
functions, which could have caused total breakage in some future
liblzma version or even now if --enable-small was used. Now
liblzma uses pthread_once() to do the initializations unless
it has been built with --disable-threads, which makes these
initializations thread-unsafe.
When --enable-small isn't used, liblzma currently gets needlessly
linked against libpthread (on systems that have it). While it is
stupid for now, liblzma will need threads in future anyway, so
this stupidity will be temporary only.
When --enable-small is used, different code for CRC32 and CRC64
is now used than without --enable-small. This made the resulting
binary slightly smaller, but the main reason was to clean it up
and to handle the lack of lzma_init_check().
The pkg-config file lzma.pc was renamed to liblzma.pc. I'm not
sure if it works correctly and portably for static linking
(Libs.private includes -pthread or other operating system
specific flags). Hopefully someone complains if it is bad.
lzma_rc_prices[] is now included as a precomputed array even
with --enable-small. It's just 128 bytes now that it uses uint8_t
instead of uint32_t. The smaller array seemed to be at least as fast
as the more bloated uint32_t array on x86; hopefully it's not bad
on other architectures.
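For reference, the pthread_once() pattern for this kind of one-time
initialization looks roughly like this (names are illustrative, and
a trivial filler stands in for the actual CRC table computation):

    #include <pthread.h>
    #include <stdint.h>

    static uint32_t my_table[256];
    static pthread_once_t my_once = PTHREAD_ONCE_INIT;

    static void
    my_table_init(void)
    {
        // In liblzma this would compute the CRC32/CRC64 lookup
        // tables; a placeholder computation is used here.
        for (uint32_t i = 0; i < 256; ++i)
            my_table[i] = i * i;
    }

    // Safe to call from any thread before the table is used; the
    // initializer runs exactly once even if several threads race.
    static void
    my_init(void)
    {
        pthread_once(&my_once, &my_table_init);
    }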
|
|
which made the LZ decoder return too early after a dictionary
reset. This fixes it.
|
|
|
|
- Updated to the latest, probably final file format version.
- Command line tool reworked to not use threads anymore.
Threading will probably go into liblzma anyway.
- Memory usage limit is now about 30 % for decompression
and about 90 % for compression.
- Progress indicator with --verbose
- Simplified --help and full --long-help
- Upgraded to the last LGPLv2.1+ getopt_long from gnulib.
- Some bug fixes
|
|
|
|
|
|
|
|
- LZMA_VLI_VALUE_MAX -> LZMA_VLI_MAX
- LZMA_VLI_VALUE_UNKNOWN -> LZMA_VLI_UNKNOWN
- LZMA_HEADER_ERROR -> LZMA_OPTIONS_ERROR
|
|
|
|
|
|
broken. API has changed a lot and it will still change a
little more here and there. The command line tool doesn't
have all the required changes to reflect the API changes, so
it's easy to get "internal error" or trigger assertions.
|
|
specification. Simplify things by removing most of the
support for known uncompressed size in most places.
There are some miscellaneous changes here and there too.
The API of liblzma has got many changes and still some
more will be done soon. While most of the code has been
updated, some things are not fixed (the command line tool
will choke on an invalid filter chain, if nothing else).
Subblock filter is somewhat broken for now. It will be
updated once the encoded format of the Subblock filter
has been decided.
|
|
misunderstanding of the code. There's no tiny fix for this
problem, so I also cleaned up the code in general.
This reduces the speed of the encoder by 2-5 % in the fastest
compression mode ("lzma -1"). High compression modes should
have no noticeable performance difference.
This commit breaks things (especially LZMA_SYNC_FLUSH) but I
will fix them once the new format and LZMA2 have been roughly
implemented. Plain LZMA won't support LZMA_SYNC_FLUSH at all
and won't be supported in the new .lzma format. This may
change still but this is what it looks like now.
Support for known uncompressed size (that is, LZMA or LZMA2
without EOPM) is likely to go away. This means there will
be API changes.
|
|
size. The "fix" breaks LZMA_SYNC_FLUSH at end of stream
with known uncompressed size, but since it currently seems
likely that support for encoding with known uncompressed
size will go away anyway, I'm not fixing this problem now.
|
|
|
|
|
|
lz_get_byte(lz, 0) returns zero. This was broken by
1a3b21859818e4d8e89a1da99699233c1bfd197d.
|
|
get initialized as a side-effect after allocating a new decoder,
but not when the decoder was reused.
|
|
decoder. There's no danger of information leak here, so
it isn't required. Doing memzero() takes a lot of time
with large dictionaries, which could make it easier to
construct a DoS attack to consume too much CPU time.
|
|
That code is now almost completely in LZ coder, where
it can be shared with other LZ77-based algorithms in
future.
|
|
These changes implement support for LZMA_SYNC_FLUSH in LZMA
encoder, and move the temporary buffer needed by range encoder
from lzma_range_encoder structure to lzma_lz_encoder.
|
|
in match_c.h.
|
|
only in one place, which isn't performance critical.
|
|
|