Age | Commit message | Author | Files | Lines |
|
The API headers have many attributes but these were left
as is for now.
|
|
For compatibility with C23's [[noreturn]], tuklib_attr_noreturn
must be at the beginning of the declaration (before "extern" or
"static", and even before any GNU C's __attribute__).
This commit also moves all other function attributes to
the beginning of function declarations. "extern" is kept
at the beginning of a line so the attributes are listed on
separate lines before "extern" or "static".
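A minimal sketch of the resulting declaration layout (the function name and
the printf attribute are illustrative, not from the commit):

    tuklib_attr_noreturn
    lzma_attribute((__format__(__printf__, 1, 2)))
    extern void my_fatal(const char *fmt, ...);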
|
|
xrealloc() is obviously incorrect; modern GCC docs even
mention realloc() as an example where this attribute
cannot be used.
liblzma's lzma_alloc() and lzma_alloc_zero() would be
correct uses most of the time but custom allocators
may use a memory pool or otherwise hold the pointer
so aliasing issues could happen in theory.
The xstrdup() case likely was correct but I removed it anyway.
Now there are no __malloc__ attributes left in the code.
The allocations aren't in hot paths so this should make
no practical difference.
|
|
Thanks to Agostino Sarubbo.
Fixes: https://github.com/tukaani-project/xz/issues/62
|
|
Now the two variations of the format strings are created with
a macro, and the whole detection code can be easily disabled
on platforms where thousand separator formatting is known to
not work (MSVC has no support, and on DJGPP 2.05 it can have
problems in some cases).
|
|
SSIZE_MAX isn't readily available on MSVC. Removing it means
that there is one less thing to worry about when porting to MSVC.
|
|
The argument to vli_ceil4() should always guarantee the return value
is also a valid lzma_vli. Thus the highest three valid lzma_vli values
are invalid arguments. All callers of the function already ensure this,
so the assert is updated to match.
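For reference, a sketch of the helper and the tightened assertion (the exact
code in liblzma's index.c may differ slightly):

    static inline lzma_vli
    vli_ceil4(lzma_vli vli)
    {
        // The three highest valid lzma_vli values would round up past
        // LZMA_VLI_MAX, so they must not be passed in.
        assert(vli <= LZMA_VLI_MAX - 3);
        return (vli + 3) & ~LZMA_VLI_C(3);
    }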
|
|
This was not a security bug since there was no path to overflow
UINT64_MAX in lzma_index_append() or when it calls index_file_size().
The bug was discovered by a failing assert() in vli_ceil4() when called
from index_file_size() when unpadded_sum (the sum of the compressed size
of current Stream and the unpadded_size parameter) exceeds LZMA_VLI_MAX.
Previously, the unpadded_size parameter was checked to be not greater
than UNPADDED_SIZE_MAX, but no check was done once compressed_base was
added.
This could not have caused an integer overflow in index_file_size() when
called by lzma_index_append(). The calculation for file_size breaks down
into the sum of:
- Compressed base from all previous Streams
- 2 * LZMA_STREAM_HEADER_SIZE (size of the current Streams header and
footer)
- stream_padding (can be set by lzma_index_stream_padding())
- Compressed base from the current Stream
- Unpadded size (parameter to lzma_index_append())
The sum of everything except for Unpadded size must be less than
LZMA_VLI_MAX. This is guaranteed by overflow checks in the functions
that can set these values including lzma_index_stream_padding(),
lzma_index_append(), and lzma_index_cat(). The maximum value for
Unpadded size is enforced by lzma_index_append() to be less than or
equal to UNPADDED_SIZE_MAX. Thus, the sum cannot exceed UINT64_MAX since
LZMA_VLI_MAX is half of UINT64_MAX.
Thanks to Joona Kannisto for reporting this.
|
|
The "once_" variable was accidentally referred to as just "once". This
prevented building with Vista threads when
HAVE_FUNC_ATTRIBUTE_CONSTRUCTOR was not defined.
|
|
|
|
|
|
signal.h in WASI SDK doesn't currently provide sigprocmask()
or sigset_t. liblzma doesn't need them so this change makes
liblzma and xzdec build against WASI SDK. xz doesn't build yet
and the tests don't either as tuktest needs setjmp() which
isn't (yet?) implemented in WASI SDK.
Closes: https://github.com/tukaani-project/xz/pull/57
See also: https://github.com/tukaani-project/xz/pull/56
(The original commit was edited a little by Lasse Collin.)
|
|
|
|
To work around Automake lacking Windows resource compiler support, an
empty source file is compiled to overwrite the resource files for static
library builds. Translation units without an external declaration are
not allowed by the C standard and result in a warning when used with
-Wempty-translation-unit (Clang) or -pedantic (GCC).
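One way to keep such a file from being an empty translation unit is a single
dummy external declaration; a sketch (the identifier is hypothetical, not
necessarily what this commit uses):

    // empty_placeholder.c: a C translation unit must contain at least
    // one external declaration, so provide a harmless one.
    extern int xz_resource_placeholder_decl;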
|
|
|
|
Changed will print => prints in xz --robot --version description to
match --robot --info-memory description.
|
|
The comment used "flag" when referring to decoder options. Just
referring to them as options is more clear and consistent.
|
|
Reword "options required" to "options read". The previous wording
may have suggested that the options listed were all required when
the filters are used for encoding or decoding. Now it should be
more clear that the options listed are the ones relevant for
encoding or decoding.
|
|
This string is used to print a filename when using "xz -v" and
stderr isn't a terminal.
|
|
|
|
Maybe ICC always #defines _MSC_VER on Windows but now
it's very clear which code will get used.
|
|
|
|
In lzma_memcmplen(), the <intrin.h> header file is only included if
_MSC_VER and _M_X64 are both defined but _BitScanForward64() was
previously used if _M_X64 was defined. GCC for MSYS2 defines _M_X64 but
not _MSC_VER so _BitScanForward64() was used without including
<intrin.h>.
Now, lzma_memcmplen() will use __builtin_ctzll() for MSYS2 GCC builds as
expected.
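A sketch of the corrected guard (the helper name and structure are
illustrative; see memcmplen.h for the real condition):

    #if defined(_MSC_VER) && defined(_M_X64)
    #    include <intrin.h>
    #endif

    static inline unsigned
    ctz64(unsigned long long x)
    {
    #if defined(__GNUC__) || defined(__clang__)
        // Covers MSYS2 GCC, which defines _M_X64 but not _MSC_VER.
        return (unsigned)__builtin_ctzll(x);
    #elif defined(_MSC_VER) && defined(_M_X64)
        unsigned long i;
        _BitScanForward64(&i, x);
        return (unsigned)i;
    #else
    #    error "generic fallback omitted from this sketch"
    #endif
    }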
|
|
The Memory limit information section described three output
columns when it actually has six. This was reworded to
"multiple" to make it more future proof.
|
|
This change only impacts the compiler warning since it was impossible
for the wait_abs struct in stream_encode_mt() to be used before it was
initialized since mythread_condtime_set() will always be called before
mythread_cond_timedwait().
Since the mythread.h code is different between the POSIX and
Windows versions, this warning was only present on Windows builds.
Thanks to Arthur S for reporting the warning and providing an initial
patch.
|
|
None of the liblzma functions may throw an exception, so this
attribute should be applied to all liblzma API functions.
|
|
|
|
Wrong line was changed in 7062348bf35c1e4cbfee00ad9fffb4a21aa6eff7.
Also, this has >= instead of == since ints larger than 32 bits would
work too even if not relevant in practice.
|
|
Legacy Windows did not need to #include <intrin.h> to use the MSVC
intrinsics. Newer versions likely just issue a warning, but the MSVC
documentation says to include the header file for the intrinsics we use.
GCC and Clang can "pretend" to be MSVC on Windows, so extra checks are
needed in tuklib_integer.h to only include <intrin.h> when it is
actually needed.
|
|
Clang has support for __builtin_clz(), but previously Clang would
fall back to either the MSVC intrinsic or the regular C code. This was
discovered due to a bug where a new version of Clang required the
<intrin.h> header file in order to use the MSVC intrinsics.
Thanks to Anton Kochkov for notifying us about the bug.
|
|
AUTHORS was updated earlier, lzma.h was simply forgotten.
|
|
|
|
|
|
Thanks to Christian Hesse for reporting the issue.
Fixes: https://github.com/tukaani-project/xz/issues/44
|
|
|
|
The xz man page timestamp was intentionally left unchanged.
|
|
This was left in by mistake since an early version of the ARM64 filter
used a different struct for its options.
|
|
The \mainpage command is used in the first block of comments in lzma.h.
This changes the previously nearly empty index.html to use the first
comment block in lzma.h for its contents.
lzma.h is no longer documented separately, but this is for the better
since lzma.h only defined a few macros that users do not need to use.
The individual API header files all have a disclaimer that they should
not be #included directly, so there should be no confusion on the fact
that lzma.h should be the only header used by applications.
Additionally, the note "See ../lzma.h for information about liblzma as
a whole." was removed since lzma.h is now the main page of the
generated HTML and does not have its own page anymore. So it would be
confusing in the HTML version and was only a "nice to have" when
browsing the source files.
|
|
|
|
(This commit combines related commits from the master branch.)
If Capsicum support is missing from the kernel or xz is being run
in an emulator that lacks Capsicum support, the syscalls will fail
and set errno to ENOSYS. Previously xz would display an error and
exit, making xz unusable. Now it will check for ENOSYS and run
without sandbox support. Other tools like ssh behave similarly.
Displaying a warning for missing Capsicum support was considered
but such extra output would quickly become annoying. It would also
break test_scripts.sh in "make check".
Also move cap_enter() to be the first step instead of the last one.
This matches the example in the cap_rights_limit(2) man page. With
the current code it shouldn't make any practical difference though.
Thanks to Xin Li for the bug report, suggesting a fix, and testing:
https://github.com/tukaani-project/xz/pull/43
Thanks to Jia Tan for most of the original commits.
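A minimal sketch of this kind of ENOSYS fallback on FreeBSD (error reporting
simplified; the function and variable names are illustrative):

    #include <sys/capsicum.h>
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    static bool sandbox_active = false;

    static void
    sandbox_enter(void)
    {
        // Enter capability mode first, as in the cap_rights_limit(2)
        // example, then apply the rights limits (not shown).
        if (cap_enter() == 0) {
            sandbox_active = true;
        } else if (errno != ENOSYS) {
            perror("cap_enter");
            exit(1);
        }
        // errno == ENOSYS: the kernel or emulator lacks Capsicum;
        // continue silently without the sandbox.
    }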
|
|
Specified parameter and return values for API functions and documented
a few more of the macros.
|
|
lzma_lzma_preset() does not guarantee that the lzma_options_lzma are
usable in an encoder even if it returns false (success). If liblzma
is built with default configurations, then the options will always be
usable. However if the match finders hc3, hc4, or bt4 are disabled, then
the options may not be usable depending on the preset level requested.
The documentation was updated to reflect this complexity, since this
behavior was unclear before.
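A usage sketch of the documented caveat (the helper name is illustrative):
success from lzma_lzma_preset() alone is not enough, because the encoder
initialization can still reject the options in a reduced build.

    #include <stdbool.h>
    #include <lzma.h>

    static bool
    init_encoder(lzma_stream *strm, uint32_t preset)
    {
        lzma_options_lzma opt;
        if (lzma_lzma_preset(&opt, preset))
            return false; // unsupported preset value

        lzma_filter filters[] = {
            { .id = LZMA_FILTER_LZMA2, .options = &opt },
            { .id = LZMA_VLI_UNKNOWN, .options = NULL },
        };

        // May still fail (e.g. LZMA_OPTIONS_ERROR) if the match finder
        // required by the preset was disabled at build time.
        return lzma_stream_encoder(strm, filters, LZMA_CHECK_CRC64)
                == LZMA_OK;
    }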
|
|
The '\n' renders as a newline when the comments are converted to html
by Doxygen.
|
|
Shorten the description for lzma_raw_encoder_memusage() and
lzma_raw_decoder_memusage().
|
|
|
|
All functions now explicitly specify parameter and return values.
The notes and code annotations were moved before the parameter and
return value descriptions for consistency.
Also, the description above lzma_filter_encoder_is_supported() about
not being able to list available filters was removed since
lzma_str_list_filters() will do this.
|
|
In the C99 and C17 standards, section 6.5.6 paragraph 8 means that
adding 0 to a null pointer is undefined behavior. As of writing,
"clang -fsanitize=undefined" (Clang 15) diagnoses this. However,
I'm not aware of any compiler that would take advantage of this
when optimizing (Clang 15 included). It's good to avoid this anyway
since compilers might some day infer that pointer arithmetic implies
that the pointer is not NULL. That is, the following foo() would then
unconditionally return 0, even for foo(NULL, 0):
    void bar(char *a, char *b);

    int foo(char *a, size_t n)
    {
            bar(a, a + n);
            return a == NULL;
    }
In contrast to C, C++ explicitly allows null pointer + 0. So if
the above is compiled as C++ then there is no undefined behavior
in the foo(NULL, 0) call.
To me it seems that changing the C standard would be the sane
thing to do (just add one sentence) as it would ensure that a huge
amount of old code won't break in the future. Based on web searches
it seems that a large number of codebases (where null pointer + 0
occurs) are being fixed instead to be future-proof in case compilers
will some day optimize based on it (like making the above foo(NULL, 0)
return 0) which in the worst case will cause security bugs.
Some projects don't plan to change it. For example, gnulib and thus
many GNU tools currently require that null pointer + 0 is defined:
https://lists.gnu.org/archive/html/bug-gnulib/2021-11/msg00000.html
https://www.gnu.org/software/gnulib/manual/html_node/Other-portability-assumptions.html
In XZ Utils the null pointer + 0 issue should be fixed after this
commit. This adds a few if-statements and thus branches to avoid
null pointer + 0. These check for size > 0 instead of ptr != NULL
because this way bugs where size > 0 && ptr == NULL will likely
get caught quickly. None of them are in hot spots so it shouldn't
matter for performance.
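The added checks follow this shape (a sketch; the buffer names and the
process() call are illustrative):

    // Guarding on the size rather than on the pointer means that a
    // buggy size > 0 && buf == NULL combination still fails fast.
    if (size > 0) {
        const uint8_t *p = buf + pos; // never NULL + 0 here
        process(p, size);
    }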
A little less readable version would be replacing
    ptr + offset
with
    offset != 0 ? ptr + offset : ptr
or creating a macro for it:
    #define my_ptr_add(ptr, offset) \
            ((offset) != 0 ? ((ptr) + (offset)) : (ptr))
Checking for offset != 0 instead of ptr != NULL allows GCC >= 8.1,
Clang >= 7, and Clang-based ICX to optimize it to the very same code
as ptr + offset. That is, it won't create a branch. So for hot code
this could be a good solution to avoid null pointer + 0. Unfortunately
other compilers like ICC 2021 or MSVC 19.33 (VS2022) will create a
branch from my_ptr_add().
Thanks to Marcin Kowalczyk for reporting the problem:
https://github.com/tukaani-project/xz/issues/36
|
|
|
|
|
|
|
|
lzma_microlzma_decoder -> lzma_microlzma_encoder
|
|
Standardizing each function to always specify parameters and return
values. Also moved the parameters and return values to the end of each
function description.
|
|
Use "member" to refer to struct members as that's the term used
by the C standard.
Use lzma_options_delta.dist and such in docs so that in Doxygen's
HTML output they will link to the doc of the struct member.
Clean up a few trailing white spaces too.
|
|
|
|
|
|
Also adjusted preset value => preset level.
|
|
It gives C4146 here since unary minus with unsigned integer
is still unsigned (which is the intention here). Doing it
with subtraction makes it clearer and avoids the warning.
Thanks to Nathan Moinvaziri for reporting this.
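An illustration of the change (the surrounding context is hypothetical; only
the -x versus 0U - x difference matters):

    static uint32_t
    lowest_bit_mask(uint32_t x)
    {
        // return x & -x;     // MSVC warns: C4146 (unary minus on unsigned)
        return x & (0U - x);  // same value, written as subtraction
    }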
|
|
Standardizing each function to always specify parameters and return
values. Also moved the parameters and return values to the end of each
function description.
A few small things were reworded and long sentences broken up.
|
|
All functions now explicitly specify parameter and return values.
|
|
All functions now explicitly specify parameter and return values.
Also moved the note about SHA-256 functions not being exported to the
top of the file.
|
|
All functions now explicitly specify parameter and return values.
|
|
|
|
Add \private above this field and its sub-fields since it is not meant
to be modified by users.
|
|
LZMA_MEMLIMIT_ERROR was missing the "<" character needed to put
documentation after a member.
|
|
Standardizing each function to always specify params and return values.
Also fixed a small grammar mistake.
|
|
Added [out] annotations to parameters that are pointers and can have
their value changed. Also added a clarification to lzma_vli_is_valid.
|
|
Document LZMA_DELTA_DIST_MIN and LZMA_DELTA_DIST_MAX for completeness
and to avoid Doxygen warnings.
|
|
All functions now explicitly specify parameter and return values.
Also reworded the description of lzma_index_hash_init() for readability.
|
|
The bug is only a problem in applications that do not properly terminate
the filters[] array with LZMA_VLI_UNKNOWN or have more than
LZMA_FILTERS_MAX filters. This bug does not affect xz.
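For reference, a correctly terminated chain looks like this; the bug could
only trigger for callers that omit the terminator or exceed
LZMA_FILTERS_MAX:

    lzma_options_lzma opt_lzma2; // filled with lzma_lzma_preset() etc.

    lzma_filter filters[] = {
        { .id = LZMA_FILTER_LZMA2, .options = &opt_lzma2 },
        { .id = LZMA_VLI_UNKNOWN, .options = NULL }, // required terminator
    };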
|
|
|
|
Added a few sentences to the description for lzma_block_encoder() and
lzma_block_decoder() to highlight that the Block Header must be coded
before calling these functions.
|
|
|
|
|
|
|
|
|
|
Standardizing each function to always specify params and return values.
Output pointer parameters are also marked with doxygen style [out] to
make it clear. Any note sections were also moved above the parameter and
return sections for consistency.
|
|
The flag description for LZMA_STR_NO_VALIDATION was previously confusing
about the treatment of filters that cannot be used with the .xz format
(lzma1) without using LZMA_STR_ALL_FILTERS. Now, it is clear that
LZMA_STR_NO_VALIDATION is not a superset of LZMA_STR_ALL_FILTERS.
|
|
The previous documentation for lzma_str_to_filters() was technically
correct, but misleading. lzma_str_to_filters() returns NULL on success,
and since NULL is in practice defined as 0, it has the same value as
LZMA_OK. However, lzma_str_to_filters() does not return an lzma_ret, so
the documentation should be clearer about this.
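A usage sketch showing why the distinction matters (the wrapper name is
illustrative):

    #include <stdbool.h>
    #include <stdio.h>
    #include <lzma.h>

    static bool
    parse_filters(const char *str, lzma_filter *filters)
    {
        int error_pos;

        // NULL means success. A non-NULL return is a static error
        // message, not an lzma_ret, so don't compare it with LZMA_OK.
        const char *msg = lzma_str_to_filters(str, &error_pos,
                filters, 0, NULL);
        if (msg != NULL) {
            fprintf(stderr, "Error at position %d: %s\n", error_pos, msg);
            return false;
        }

        return true;
    }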
|
|
|
|
This prevents the reserved fields from being part of the generated
Doxygen documentation.
|
|
This improves the generated Doxygen HTML files to better highlight
how to properly use the liblzma API header files.
|
|
tuklib_physmem depends on GetProcAddress() for both MSVC and MinGW-w64
to retrieve a function address. The proper way to do this is to cast the
return value to the type of function pointer retrieved. Unfortunately,
this causes a cast-function-type warning, so the best solution is to
simply ignore the warning.
|
|
|
|
Calling coder_set_compression_settings() in list mode with verbose mode
on caused the filter chain and memory requirements to print. This was
unnecessary since the command results in an error and is not consistent
with other formats like lzma and alone.
|
|
It makes no difference here as the return value fits into an int
too and it then gets ignored but this looks better.
|
|
|
|
It doesn't warn on a 64-bit system because truncating
a ptrdiff_t (signed long) to uint32_t is diagnosed under
-Wconversion by GCC and -Wshorten-64-to-32 by Clang.
|
|
|
|
This is similar to 2ce4f36f179a81d0c6e182a409f363df759d1ad0.
The actual initialization of the variables is done inside
mythread_sync() macro. Clang doesn't seem to see that
the initialization code inside the macro is always executed.
|
|
|
|
|
|
On some platforms src/xz/suffix.c may need <strings.h> for
strcasecmp() but suffix.c includes the header when it needs it.
Unless there is an old system that otherwise supports enough C99
to build XZ Utils but doesn't have C89/C90-compatible <string.h>,
there should be no need to include <strings.h> in sysdefs.h.
|
|
SUSv2 and POSIX.1‐2017 declare only a few functions in <strings.h>.
Of these, strcasecmp() is used on some platforms in suffix.c.
Nothing else in the project needs <strings.h> (at least if
building on a modern system).
sysdefs.h currently includes <strings.h> if HAVE_STRINGS_H is
defined and suffix.c relied on this.
Note that dos/config.h doesn't #define HAVE_STRINGS_H even though
DJGPP does have strings.h. It isn't needed with DJGPP as strcasecmp()
is also in <string.h> in DJGPP.
|
|
clang and gcc differ in how they handle -Wformat-nonliteral. gcc will
allow a non-literal format string as long as the function takes its
format arguments as a va_list.
|
|
This affects only 32-bit x86 builds. x86-64 is OK as is.
I still cannot easily test this myself. The reporter has tested
this and it passes the tests included in the CMake build and
performance is good: raw CRC64 is 2-3 times faster than the
C version of the slice-by-four method. (Note that liblzma doesn't
include a MSVC-compatible version of the 32-bit x86 assembly code
for the slice-by-four method.)
Thanks to Iouri Kharon for figuring out a fix, testing, and
benchmarking.
|
|
This reverts commit 36edc65ab4cf10a131f239acbd423b4510ba52d5.
It was reported that it wasn't a good enough fix and MSVC
still produced (different kind of) bad code when building
for 32-bit x86 if optimizations are enabled.
Thanks to Iouri Kharon.
|
|
|
|
It quite probably was never needed, that is, any system where memory.h
was required likely couldn't compile XZ Utils for other reasons anyway.
XZ Utils 5.2.6 and later source packages were generated using
Autoconf 2.71 which no longer defines HAVE_MEMORY_H. So the code
being removed is no longer used anyway.
|
|
I haven't tested with MSVC myself and there doesn't seem to be
information about the problem online, so I'm relying on the bug report.
Thanks to Iouri Kharon for the bug report and the patch.
|
|
common/index.h is needed by liblzma internally and tests. common.h will
include and define many things that are not needed by the tests.
Also, this prevents include order problems because both common.h and
lzma.h define LZMA_API. On most platforms it results only in a warning
but on Windows it would break the build as the definition in common.h
must be used only for building liblzma itself.
|
|
This is for consistency with lzma_index_append.
|
|
|
|
|
|
HAVE_DECL_PROGRAM_INVOCATION_NAME is renamed to
HAVE_PROGRAM_INVOCATION_NAME. Previously,
HAVE_DECL_PROGRAM_INVOCATION_NAME was always set when
building with autotools. CMake would only set this when it was 1, and the
dos/config.h did not define it. The new macro definition is consistent
across build systems.
|
|
Previously, mytime.c depended on mythread.h for <time.h> to be included.
|
|
Previously, <sys/time.h> was always included, even if mythread only used
clock_gettime. <time.h> is still needed even if clock_gettime is not used
though because struct timespec is needed for mythread_condtime.
|
|
Previously, if threading was enabled HAVE_DECL_CLOCK_MONOTONIC would always
be set to 0 or 1. However, this macro was needed in xz so if xz was not
built with threading and HAVE_DECL_CLOCK_MONOTONIC was not defined but
HAVE_CLOCK_GETTIME was, it caused a warning during build. Now,
HAVE_DECL_CLOCK_MONOTONIC has been renamed to HAVE_CLOCK_MONOTONIC and
will only be set if it is 1.
|
|
|
|
Using return_if_error on lzma_lzma_lclppb_encode was improper because
return_if_error is expecting an lzma_ret value, but
lzma_lzma_lclppb_encode returns a boolean. This could result in
lzma_microlzma_encoder() returning an incorrect value, which would be
misleading for applications.
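A sketch of the mismatch; return_if_error() is roughly the following, so a
boolean true (1) would be forwarded as if it were an lzma_ret instead of
being mapped to a proper error code (the variable names and the chosen error
value below are illustrative):

    #define return_if_error(expr) \
    do { \
        const lzma_ret ret_ = (expr); \
        if (ret_ != LZMA_OK) \
            return ret_; \
    } while (0)

    // Wrong: lzma_lzma_lclppb_encode() returns bool, not lzma_ret.
    //     return_if_error(lzma_lzma_lclppb_encode(options, out));
    // Right: map the failure to a real lzma_ret value.
    if (lzma_lzma_lclppb_encode(options, out))
        return LZMA_OPTIONS_ERROR;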
|
|
|
|
|
|
|
|
|
|
|
|
The code that parses --memlimit options and --block-list modified
the argv[] when parsing the option string from optarg. This was
visible in "ps auxf" and such and could be confusing. I didn't
understand it back in the day when I wrote that code. Now a copy
is allocated when modifiable strings are needed.
|
|
The API docs gave an impression that such checks are done
but they actually weren't done. In practice it made little
difference since the calling code has a bug if these are NULL.
Thanks to Jia Tan for the original patch that checked for
block->filters == NULL.
|
|
This also sorts the symbol names alphabetically in liblzma_*.map.
|
|
If someone set up Clang to define __GNUC__ to 10 or greater,
symbol versioning broke. __has_attribute is supported by such GCC
and Clang versions that don't support __symver__, so this should
be a much better and simpler way to detect if __symver__ is
actually supported.
Thanks to Tomasz Gajc for the bug report.
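A sketch of the detection (the macro name is illustrative):

    // Ask for the attribute itself instead of guessing from __GNUC__.
    #if defined(__has_attribute)
    #    if __has_attribute(__symver__)
    #        define HAVE_SYMVER_ATTRIBUTE 1
    #    endif
    #endif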
|
|
It has some complicated downsides and its usefulness is more limited
than I originally thought. So this change is bad for certain very
specific situations but a generic solution that works for other
filters (and is otherwise better too) is planned anyway. And this
way 7-Zip can use the same compatible filter for the .7z format.
This is still marked as experimental with a new temporary Filter ID.
|
|
|
|
|
|
Thanks to Jia Tan for the original patch.
|
|
This was forgotten from 7484744af6cbabe81e92af7d9e061dfd597fff7b.
|
|
|
|
Thanks to Jia Tan.
|
|
Two uses: Displaying encoder filter chain when compressing with -vv,
and displaying the decoder filter chain in --list -vv.
|
|
lzma_str_to_filters() uses static error messages which makes
them not very precise. It tells the position in the string
where an error occurred though which helps quite a bit if
applications take advantage of it. Dynamic error messages can
be added later with a new flag if it seems important enough.
|
|
|
|
|
|
Here too this avoids the slightly ugly method to set
the uncompressed size.
Also moved the setting of dict_size to the struct initializer.
|
|
This avoids the need to use the slightly ugly method to
set the uncompressed size.
|
|
Some file formats need support for LZMA1 streams that don't use
the end of payload marker (EOPM), also known as the end of stream (EOS) marker.
So far liblzma API has supported decompressing such streams via
lzma_alone_decoder() when .lzma header specifies a known
uncompressed size. Encoding support hasn't been available in the API.
Instead of adding a new LZMA1-only API for this purpose, this commit
adds a new filter ID for use with raw encoder and decoder. The main
benefit of this approach is that then also filter chains are possible,
for example, if someone wants to implement support for .7z files that
use the x86 BCJ filter with LZMA1 (not BCJ2 as that isn't supported
in liblzma).
|
|
|
|
This allows using two Filter IDs with the same
initialization function and data structures.
|
|
|
|
|
|
|
|
|
|
Now that liblzma accepts these, we avoid the extra check and
there's one message less for translators too.
|
|
That is, if the specified nice_len is smaller than the minimum
of the match finder, silently use the match finder's minimum value
instead of reporting an error. The old behavior is annoying to users
and it complicates xz options handling too.
|
|
A tiny downside of this is that now 1-4 tiny allocations are made
for every Block because each worker thread needs its own copy of
the filter chain.
|
|
It not only makes no sense to put symbol versions into a static library
but it can also cause breakage.
By default Libtool #defines PIC if building a shared library and
doesn't define it for static libraries. This is documented in the
Libtool manual. It can be overridden using --with-pic or --without-pic.
configure.ac detects if --with-pic or --without-pic is used and then
gives an error if neither --disable-shared nor --disable-static was
used at the same time. Thus, in normal situations it works to build
both shared and static library at the same time on GNU/Linux,
only --with-pic or --without-pic requires that only one type of
library is built.
Thanks to John Paul Adrian Glaubitz from Debian for reporting
the problem that occurred on ia64:
https://www.mail-archive.com/xz-devel@tukaani.org/msg00610.html
|
|
lzma_filters_free() sets the options to NULL and ids to
LZMA_VLI_UNKNOWN so there is no need to do it by caller;
the filter arrays will always be left in a safe state.
Also use memcpy() instead of a loop to copy a filter chain
when it is known to be safe to copy LZMA_FILTERS_MAX + 1
(even if the elements past the terminator might be uninitialized).
|
|
This time it can happen when lzma_stream_encoder_mt() is used
to reinitialize an existing multi-threaded Stream encoder
and one of 1-4 tiny allocations in lzma_filters_copy() fail.
It's very similar to the previous bug
10430fbf3820dafd4eafd38ec8be161a6978ed2b, happening with
an array of lzma_filter structures whose old options are freed
but the replacement never arrives due to a memory allocation
failure in lzma_filters_copy().
|
|
The documentation mentions that lzma_block_encoder() supports
LZMA_SYNC_FLUSH but it was never added to supported_actions[]
in the internal structure. Because of this, LZMA_SYNC_FLUSH could
not be used with the Block encoder unless it was the next coder
after something like stream_encoder() or stream_encoder_mt().
|
|
This is small but convenient and should have been added
a long time ago.
|
|
|
|
The bug was in the single-threaded .xz Stream encoder
in the code that is used for both re-initialization and for
lzma_filters_update(). To trigger it, an application had
to either re-initialize an existing encoder instance with
lzma_stream_encoder() or use lzma_filters_update(), and
then one of the 1-4 tiny allocations in lzma_filters_copy()
(called from stream_encoder_update()) must fail. An error
was correctly reported but the encoder state was corrupted.
This is related to the recent fix in
f8ee61e74eb40600445fdb601c374d582e1e9c8a which is good but
it wasn't enough to fix the main problem in stream_encoder.c.
|
|
|
|
The encoder doesn't support dictionary sizes larger than 1536 MiB.
This is validated, for example, when calculating the memory usage
via lzma_raw_encoder_memusage(). It is also enforced by the LZ
part of the encoder initialization. However, the LZMA encoder with
LZMA_MODE_NORMAL did an unsafe calculation with dict_size before
such validation, and that resulted in an infinite loop if dict_size
was 2 << 30 or greater.
|
|
These were caught by clang -Wdocumentation.
|
|
|
|
|
|
|
|
This reverts commit 177bdc922cb17bd0fd831ab8139dfae912a5c2b8
and also does equivalent change to arm64.c.
Now that ARM64 filter will use lzma_options_bcj, this change
is not needed anymore.
|
|
This is incompatible with the previous version.
This has space/tab fixes in filter_*.c and bcj.h too.
|
|
It also works on E2K as it supports these intrinsics.
On x86-64 runtime detection is used so the code keeps working on
older processors too. A CLMUL-only build can be done by using
-msse4.1 -mpclmul in CFLAGS and this will reduce the library
size since the generic implementation and its 8 KiB lookup table
will be omitted.
On 32-bit x86 this isn't used by default for now because by default
on 32-bit x86 the separate assembly file crc64_x86.S is used.
If --disable-assembler is used then this new CLMUL code is used
the same way as on 64-bit x86. However, a CLMUL-only build
(-msse4.1 -mpclmul) won't omit the 8 KiB lookup table on
32-bit x86 due to a currently-missing check for disabled
assembler usage.
The configure.ac check should be such that the code won't be
built if something in the toolchain doesn't support it but
--disable-clmul-crc option can be used to unconditionally
disable this feature.
CLMUL speeds up decompression of files that have compressed very
well (assuming CRC64 is used as a check type). It is known that
the CLMUL code is significantly slower than the generic code for
tiny inputs (especially 1-8 bytes but up to 16 bytes). If that
is a real-world problem then there is already a commented-out
variant that uses the generic version for small inputs.
Thanks to Ilya Kurdyukov for the original patch which was
derived from a white paper from Intel [1] (published in 2009)
and public domain code from [2] (released in 2016).
[1] https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
[2] https://github.com/rawrunprotected/crc
|
|
This uses it for CRC table initializations when using --disable-small.
It avoids mythread_once() overhead. It also means that then
--disable-small --disable-threads is thread-safe if this attribute
is supported.
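A sketch of the mechanism (the function name is illustrative):

    #ifdef HAVE_FUNC_ATTRIBUTE_CONSTRUCTOR
    // Runs before main(), so no mythread_once() guard is needed and the
    // tables are ready even in a --disable-threads build.
    __attribute__((__constructor__))
    static void
    crc_tables_init(void)
    {
        // ... fill in the CRC32/CRC64 lookup tables ...
    }
    #endif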
|
|
It claims __GNUC__ >= 10 but doesn't support the __symver__ attribute.
Thanks to Stephen Sachs.
|
|
__SSE2__ is the correct macro for SSE2 support with GCC, Clang,
and ICC. __SSE2_MATH__ means doing floating point math with SSE2
instead of 387. Often the latter macro is defined if the first
one is but it was still a bug.
|
|
The other scripts don't need changes for .lz support because
in those scripts it is enough that xz supports .lz.
|
|
In practice this means making the scripts work when
the input files have an unsupported check type which
isn't a problem in practice unless support for
some check types has been disabled at build time.
|
|
The --arm64 option isn't actually implemented yet in the form
described in this commit.
Thanks to Jia Tan.
|
|
Modern 32-bit ARM in big endian mode still uses little endian for
instruction encoding, so the filters work on such
executables too. It's likely less confusing for users this way.
The --arm64 option hasn't been implemented yet (there is
--experimental-arm64 but it's different). The --arm64 option
is added now anyway because this is the likely result and the
strings need to be ready for translators.
Thanks to Jia Tan.
|
|
|
|
If configured with --disable-lzip-decoder then --long-help will
still list `lzip' in --format but I left it like that since
due to translations it would be messy to have two help strings.
Features are disabled only in special situations so wrong help
in such a situation shouldn't matter much.
Thanks to Michał Górny for the original patch.
|
|
Thanks to Michał Górny for the original patch.
|
|
Support for format version 0 was removed from lzip 1.18 for some
reason. .lz format version 0 files are rare (and old) but some
source packages were released in this format, and some people might
have personal files in this format too. It's very little extra code
to support it along side format version 1 so this commits adds
support for both.
The Sync Flush marker extension to the original .lz format
version 1 isn't supported. It would require changes to the
LZMA decoder itself. Such files are very rare anyway.
See the API doc for lzma_lzip_decoder() for more details about
the .lz format support.
Thanks to Michał Górny for the original patch.
|
|
This was forgotten from commit 59c4d6e1390f6f4176f43ac1dad1f7ac03c449b8.
|
|
"xz -v < regular_file > out.xz" doesn't display the percentage
and estimated remaining time because it doesn't even try to
check the input file size when input is read from stdin.
This could be improved but for now there's just a comment
to remind about it.
|
|
It worked for one input file since the counters are zero when
xz starts but they weren't reset when starting a new file in
passthru mode. For example, if files A, B, and C are one byte each,
then "xz -dcvf A B C" would show file sizes as 1, 2, and 3 bytes
instead of 1, 1, and 1 byte.
|
|
It is on the man page still.
|
|
|
|
|
|
|
|
|
|
It feels better that the initializations are sandboxed too.
They don't do anything that the pledge() call wouldn't allow.
|
|
Now it includes everything that the human-readable --info-memory shows.
|
|
This affects lzma_memusage() and lzma_memlimit_set() when used
with the threaded decompressor. Now all allocations are reported
by lzma_memusage() (so it's not misleading) and lzma_memlimit_set()
cannot lower the limit below that value.
The alternative would have been to allow lowering the limit if
doing so is possible by freeing the cached memory but since
the primary use case of lzma_memlimit_set() is to increase
memlimit after LZMA_MEMLIMIT_ERROR this simple approach
was selected.
The cached memory was always included when enforcing
the memory usage limit while decoding.
Thanks to Jia Tan.
|
|
This should be smaller too since it avoids the string constants.
|
|
Don't call InitOnceComplete() if initialization was already done.
So far mythread_once() has been needed only when building
with --enable-small. windows/build.bash does this together
with --disable-threads so the Vista-specific mythread_once()
is never needed by those builds. VS project files or
CMake-builds don't support HAVE_SMALL builds at all.
|
|
|
|
Example:
    $ xz -dc --single-stream good-0-empty.xz
    xz: good-0-empty.xz: Internal error (bug)
The code that tries to catch some input file issues early
didn't anticipate LZMA_STREAM_END which is possible in that
code only when --single-stream is used.
|
|
|
|
Now files with unsupported check will make xz display
a warning, set the exit status to 2 (unless --no-warn is used),
and then decompress the file normally. This is how it was
supposed to work since the beginning but this was broken by
the commit 231c3c7098f1099a56abb8afece76fc9b8699f05, that is,
a little before 5.0.0 was released. The buggy behavior displayed
a message, set exit status 1 (error), and xz didn't attempt to
decompress the file.
This doesn't matter today except for special builds that disable
CRC64 or SHA-256 at build time (but such builds should be used
in special situations only). The bug matters if a new check type
is added in the future and an old xz version is used to decompress
such a file; however, it's likely that such files would use a new
filter too and an old xz wouldn't be able to decompress the file
anyway.
The first hunk in the commit is the actual fix. The second hunk
is a cleanup since LZMA_TELL_ANY_CHECK isn't used in xz.
There is a test file for unsupported check type but it wasn't
used by test_files.sh, perhaps due to different behavior between
xz and the simpler xzdec.
|
|
|
|
Long ago it was used in list.c too but nowadays it's needed
only in io_open_src() so it's nicer to avoid a separate function.
|
|
Treating it as a warning (message + exit status 2) matches gzip
and it seems more logical as at that point the output file has
already been successfully closed. When it's a warning it is
possible to suppress it with --no-warn.
|
|
It's a waste of CPU time and electricity to leave the unfinished
worker threads running when it is known that their output will
get ignored.
|
|
On OpenBSD the number of cores online is often less
than what HW_NCPU would return because OpenBSD disables
simultaneous multi-threading (SMT) by default.
Thanks to Christian Weisgerber.
|
|
When encoders were disabled and threading enabled, outqueue.c and
outqueue.h were not compiled. The multi-threaded decoder required
these files, so compilation failed.
|
|
Also update the comment in liblzma's memcmplen.h.
Thanks to Michał Górny for the original patch for the reads.
|
|
The bug was fixed in 660739f99ab211edec4071de98889fb32ed04e98.
|
|
The documentation states LZMA_PROG_ERROR can be returned from
lzma_index_cat. Previously, lzma_index_cat could not return
LZMA_PROG_ERROR. Now, the validation is similar to
lzma_index_append, which does a NULL check on the index
parameter.
|
|
The check type of the last Stream in dest was never copied to
dest->checks (the code tried to copy it but it was done too late).
This meant that the value returned by lzma_index_checks() would
only include the check type of the last Stream when multiple
lzma_indexes had been concatenated.
In xz --list this meant that the summary would only list the
check type of the last Stream, so in this sense this was only
a visual bug. However, it's possible that some applications
use this information for purposes other than merely showing
it to the users in an informational message. I'm not aware of
such applications though and it's quite possible that such
applications don't exist.
Regular streamed decompression in xz or any other application
doesn't use lzma_index_cat() and so this bug cannot affect them.
|
|
Thanks to ArSaCiA Game.
|
|
If lzma_code() returns LZMA_MEMLIMIT_ERROR it is now possible
to use lzma_memlimit_set() to increase the limit and continue
decoding. This was supposed to work from the beginning but
there was a bug. With other decoders (.lzma or threaded .xz)
this already worked correctly.
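A usage sketch of the now-working pattern, assuming strm is an initialized
.xz decoder with a memory usage limit:

    lzma_ret ret = lzma_code(&strm, LZMA_RUN);
    if (ret == LZMA_MEMLIMIT_ERROR) {
        // Ask how much memory is actually needed, raise the limit,
        // and then continue with the same lzma_stream.
        lzma_memlimit_set(&strm, lzma_memusage(&strm));
        ret = lzma_code(&strm, LZMA_RUN);
    }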
|
|
|
|
|
|
Thanks to Jia Tan.
|