|
Maybe ICC always #defines _MSC_VER on Windows but now
it's very clear which code will get used.
|
|
|
|
|
|
The new Tests section describes basic information about the tests, how
to run them, and important details when cross compiling. We have had a
few questions about how to compile the tests without running them, so
hopefully this information will help others with the same question in the
future.
Fixes: https://github.com/tukaani-project/xz/issues/54
|
|
This adds an entry to "Other implementations of the .xz format" for
XZ for Java.
|
|
The Memory limit information section described three output
columns when it actually has six. This was reworded to
"multiple" to make it more future proof.
|
|
* Moved max_block_list_size from a global to local variable.
* Reworded error message in validate_block_list_filter().
* Removed helper function filter_chain_error().
* Changed 1 << X to 1U << X in many places
|
|
|
|
|
|
Changed will print => prints in xz --robot --version description to
match --robot --info-memory description.
|
|
The order is now consistent with the order the command line arguments
are documented earlier in the man page. The new order is:
1. --list
2. --info-memory
3. --version
Instead of the previous order:
1. --version
2. --info-memory
3. --list
|
|
|
|
The --filters-help option can be used to help create filter chains with the
--filters and --filtersX options. The message in --long-help is too
short to fully explain the syntax to construct complex filter chains.
In --robot mode, xz will only print the output from liblzma function
lzma_str_list_filters.
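An application can fetch the same listing directly from liblzma. A rough
sketch, assuming the lzma_str_list_filters() prototype from
lzma/string_conversion.h (output string pointer, filter ID, flags,
allocator):

    #include <stdio.h>
    #include <stdlib.h>
    #include <lzma.h>

    int main(void)
    {
        char *list = NULL;

        // LZMA_VLI_UNKNOWN as the filter ID requests the list of all
        // supported filters and their options.
        if (lzma_str_list_filters(&list, LZMA_VLI_UNKNOWN, 0, NULL)
                != LZMA_OK)
            return 1;

        puts(list);
        free(list); // allocated with malloc() when no custom allocator is used
        return 0;
    }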
|
|
The --block-list option description needed updating since the new
--filtersX option changes how it can be used. The new entry for
--filters1=FILTERS ... --filters9=FILTERS was created right after
the --filters option.
|
|
|
|
If a filter chain is set but not used in --block-list, it introduced
unexpected behavior such as requiring more memory than needed to
compress, reducing the number of threads in multi-threaded encoding, and
printing an incorrect amount of memory needed to decompress.
This also renames filters_init_mask => filters_used_mask. A filter is
assumed to be used if it is specified in --filtersX until
coder_set_compression_settings() determines which filters are referenced
in --block-list.
|
|
When opt_block_size is not used, the Block size for the multithreaded
encoder is derived from the minimum of the largest Block specified by
--block-list and the largest recommended Block size among all filter
chains, as calculated by lzma_mt_block_size(). This avoids using unnecessary
memory and ensures that all Blocks are large enough for the most memory
needy filter chain.
|
|
|
|
Previously, only the default filter chain could have its memory usage
adjusted. The filter chains specified with --filtersX were not checked
for memory usage. Now, all used filter chains will be adjusted if
necessary.
|
|
The block splitting logic and split_block() function are not needed if
encoders are disabled. This will help slightly reduce the binary size
when built without encoders and allow split_block() to use functions
that require encoders to be enabled.
|
|
This will only free filter chains created with --filters1-9 since the
default filter chain may be set from a static function variable. The
complexity to free the default filter chain is not worth the burden on
code maintenance.
|
|
--block-list is only supported when compressing to the .xz format. This
avoids silently ignoring --block-list when it cannot be used.
|
|
The new command line options are meant to be combined with --block-list.
They work as an optional extension to --block-list to specify a custom
filter chain for each block listed. The new options allow the creation
of up to 9 reusable filter chains. For instance:
xz --block-list=1:10MiB,3:5MiB,,2:5MiB,1:0 --filters1=delta--lzma2 \
--filters2=x86--lzma2 --filters3=arm64--lzma2
Will create the following blocks:
1. A block of size 10 MiB with filter chain delta, lzma2.
2. A block of size 5 MiB with filter chain arm64, lzma2.
3. A block of size 5 MiB with filter chain arm64, lzma2.
4. A block of size 5 MiB with filter chain x86, lzma2.
5. A block containing the rest of the file contents with filter chain
delta, lzma2.
|
|
This is a little cleaner than the previous implementation of
forget_filter_chain(). It is also more consistent since
lzma_str_to_filters() will always terminate the filter chain so there
is no need to terminate it later in coder_set_compression_settings().
|
|
Converting from string to filter will also need to be done for block
specific filter chains.
|
|
|
|
|
|
The --filters option uses the new lzma_str_to_filters() function
to convert a string into a full filter chain. Using this option
will reset all previous filters set by --preset, --[filter], or
--filters.
|
|
Fixed a bug where test_compress_* would all fail if arm64 or armthumb
filters were enabled for compression but arm was disabled. Since the
grep tests only checked for "define HAVE_ENCODER_ARM", this would match
on HAVE_ENCODER_ARM64 or HAVE_ENCODER_ARMTHUMB.
Now the config.h feature test requires " 1" at the end to prevent the
prefix problem. have_feature() was also updated for this even though
there are no known bugs currently affecting it. This is just in case future
features have a similar prefix problem.
|
|
|
|
Commit 78704f36e74205857c898a351c757719a6c8b666 added an empty
initializer {} to prevent a warning. The empty initializer is a GNU
extension and results in a build failure on MSVC. The -Wpedantic flag
warns about empty initializers.
|
|
|
|
Several tests were missing calls to lzma_index_end() to clean up the
lzma_index structs. The memory leaks were discovered by using
-fsanitize=address with GCC.
|
|
test_block_header was not properly freeing the filter options between
calls to lzma_block_header_decode(). The memory leaks were discovered by
using -fsanitize=address with GCC.
|
|
This change only affects the compiler warning; it was impossible
for the wait_abs struct in stream_encode_mt() to be used before it was
initialized because mythread_condtime_set() will always be called before
mythread_cond_timedwait().
Since the mythread.h code is different between the POSIX and
Windows versions, this warning was only present on Windows builds.
Thanks to Arthur S for reporting the warning and providing an initial
patch.
|
|
In lzma_memcmplen(), the <intrin.h> header file is only included if
_MSC_VER and _M_X64 are both defined but _BitScanForward64() was
previously used if _M_X64 was defined. GCC for MSYS2 defines _M_X64 but
not _MSC_VER so _BitScanForward64() was used without including
<intrin.h>.
Now, lzma_memcmplen() will use __builtin_ctzll() for MSYS2 GCC builds as
expected.
|
|
ci_build.sh was updated to accept disabling of __attribute__ ifunc
and CLMUL. This will allow -fsanitize=address to pass because ifunc
is incompatible with -fsanitize=address. The CLMUL implementation has
optimizations that potentially read past the buffer and mask out the
unwanted bytes.
This test will only run on Autotools Linux.
|
|
|
|
|
|
|
|
|
|
It's so that there's a clear difference in wording compared
to liblzma's integrity check types.
|
|
The ifunc method avoids indirection via the function pointer
crc64_func. This works on GNU/Linux and probably on FreeBSD too.
The previous __attribute__((__constructor__)) method is kept for
compatibility with ELF platforms that do not support ifunc.
The ifunc method has some limitations, for example, building
liblzma with -fsanitize=address will result in segfaults.
The configure option --disable-ifunc must be used for such builds.
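As a rough, self-contained sketch of the mechanism (the function names here
are made up and the real code in liblzma differs; compile with GCC on
GNU/Linux):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t crc64_generic(const uint8_t *buf, size_t size, uint64_t crc)
    {
        // Placeholder "generic" implementation.
        for (size_t i = 0; i < size; ++i)
            crc += buf[i];
        return crc;
    }

    static uint64_t crc64_fast(const uint8_t *buf, size_t size, uint64_t crc)
    {
        // Pretend this one uses CLMUL; same result, faster in real life.
        return crc64_generic(buf, size, crc);
    }

    // The resolver runs once when the binary is loaded. Every later call
    // to my_crc64() goes straight to the chosen function, with no function
    // pointer indirection per call.
    static uint64_t (*resolve_my_crc64(void))(const uint8_t *, size_t, uint64_t)
    {
        const int cpu_has_fast_path = 1; // in real code: a CPU feature check
        return cpu_has_fast_path ? &crc64_fast : &crc64_generic;
    }

    uint64_t my_crc64(const uint8_t *buf, size_t size, uint64_t crc)
            __attribute__((__ifunc__("resolve_my_crc64")));

    int main(void)
    {
        const uint8_t data[] = { 1, 2, 3 };
        printf("%llu\n", (unsigned long long)my_crc64(data, sizeof(data), 0));
        return 0;
    }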
Thanks to Hans Jansen for the original patch.
Closes: https://github.com/tukaani-project/xz/pull/53
|
|
The CMake build system will now verify whether __attribute__((__ifunc__()))
can be used in the build. If so, HAVE_FUNC_ATTRIBUTE_IFUNC will be
defined to 1.
|
|
configure.ac will now verify whether __attribute__((__ifunc__())) can be
used in the build. If so, HAVE_FUNC_ATTRIBUTE_IFUNC will be defined to 1.
|
|
Without the extra command, all of the CI tests were automatically
failing because the Ubuntu servers could not be reached properly.
|
|
|
|
Boost iostream uses `find_package` in quiet mode and then again uses
`find_package` with required. This second call triggers a
`add_library cannot create imported target "ZLIB::ZLIB" because another
target with the same name already exists.`
This can simply be fixed by skipping the alias part on secondary
`find_package` runs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Reword "options required" to "supported options". The previous may have
suggested that the options listed were all required anytime a filter is
used for encoding or decoding. The reword makes this more clear that
adjusting the options is optional.
|
|
None of the liblzma functions may throw an exception, so this
attribute should be applied to all liblzma API functions.
|
|
lzma_mt_block_size() was previously just an internal function for
the multithreaded .xz encoder. It is used to provide a recommended Block
size for a given filter chain.
This function is helpful to determine the maximum Block size for the
multithreaded .xz encoder when one wants to change the filters between
blocks. Then, this determined Block size can be provided to
lzma_stream_encoder_mt() in the lzma_mt options parameter when
initializing the coder. This requires one to know all the filter chains
they are using before starting to encode (or at least the filter chain
that will need the largest Block size), but that isn't a bad limitation.
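A rough sketch of the intended use (simplified; the exact error-return
convention of lzma_mt_block_size() is documented in lzma/container.h and
is not handled here):

    #include <lzma.h>

    // "filters" must be a filter chain terminated with LZMA_VLI_UNKNOWN.
    static lzma_ret init_mt_encoder(lzma_stream *strm,
            const lzma_filter *filters)
    {
        lzma_mt mt = {
            .flags = 0,
            .threads = 4,
            .timeout = 0,
            .filters = filters,
            .check = LZMA_CHECK_CRC64,
        };

        // Recommended Block size for this filter chain. When switching
        // between several chains, use the largest recommendation among
        // the chains that will be used.
        mt.block_size = lzma_mt_block_size(filters);

        return lzma_stream_encoder_mt(strm, &mt);
    }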
|
|
This creates an internal liblzma macro to test if the dictionary size
is valid for encoding.
|
|
|
|
|
|
|
|
The previous commit 6be460dde07113fe3f08f814b61ddc3264125a96 would cause
an error if the integer size was 32 bits.
|
|
|
|
The wrong line was changed in 7062348bf35c1e4cbfee00ad9fffb4a21aa6eff7.
Also, this has >= instead of == since ints larger than 32 bits would
work too even if not relevant in practice.
|
|
|
|
|
|
|
|
Legacy Windows did not need to #include <intrin.h> to use the MSVC
intrinsics. Newer versions likely just issue a warning, but the MSVC
documentation says to include the header file for the intrinsics we use.
GCC and Clang can "pretend" to be MSVC on Windows, so extra checks are
needed in tuklib_integer.h to only include <intrin.h> when it is
actually needed.
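Roughly, the kind of guard this means (a simplified sketch; the real checks
in tuklib_integer.h are more detailed and the macro name below is made up):

    // Include <intrin.h> only when the MSVC intrinsics will really be
    // used: genuine MSVC defines _MSC_VER, but so do some other compilers
    // on Windows, which prefer their own __builtin_* functions instead.
    #if defined(_MSC_VER) && !defined(__clang__) && !defined(__GNUC__) \
            && (defined(_M_X64) || defined(_M_IX86))
    #   include <intrin.h>
    #   define USE_MSVC_INTRINSICS 1
    #endif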
|
|
Clang has support for __builtin_clz(), but previously Clang would
fall back to either the MSVC intrinsic or the regular C code. This was
discovered due to a bug where a new version of Clang required the
<intrin.h> header file in order to use the MSVC intrinsics.
Thanks to Anton Kochkov for notifying us about the bug.
|
|
AUTHORS was updated earlier, lzma.h was simply forgotten.
|
|
|
|
|
|
|
|
|
|
Signed-off-by: Gabriela Gutierrez <gabigutierrez@google.com>
|
|
|
|
If the cache file is not removed, CMake will not reset configurations
back to their default values. In order to make the tests independent, it
is simplest to purge the cache. Unfortunately, this will slow down the
tests a little and repeat some checks.
|
|
Now that the threading is configurable, the liblzma CMake package only
needs the threading library when using POSIX threads.
|
|
The thread method is now configurable for the CMake build. It matches
the Autotools build by allowing ON (pick the best threading method),
OFF (no threading), posix, win95, and vista. If both Windows and
POSIX threading are available, then ON will choose Windows
threading. Windows threading will also not use:
    target_link_libraries(liblzma Threads::Threads)
since on systems like MinGW-w64 it would needlessly link the POSIX
threads library.
|
|
Now, CMake will run feature-disabling tests similar to the ones the
Autotools version ran before. In order to do this without repeating lines in
ci.yml, it now makes sense to use the GitHub Workflow matrix to create
a loop.
|
|
Also included various style cleanups and added helper functions for
repeated work.
|
|
This script is only meant to be run as part of the CI build/test process
on machines that are known to have bash (Ubuntu and macOS). If this
assumption changes in the future, then the bash specific commands will
need to be replaced with a more portable option. For now, it is
convenient to use bash commands.
|
|
|
|
|
|
It adds only one new policy related to FOLDERS which we don't use.
This makes it clear that the code is compatible with the policies
up to 3.26.
|
|
|
|
This allows users to change the features they build either in
CMakeCache.txt or by using a CMake GUI. The sources built for
liblzma are affected by this too, so only the necessary files
will be compiled.
|
|
It's obsolete in Autoconf >= 2.70 and just an alias for AC_PROG_CC
but Autoconf 2.69 requires AC_PROG_CC_C99 to get a C99 compiler.
|
|
This makes no functional difference in the generated configure
(at least with the Autotools versions I have installed) but this
change might prevent future bugs like the one that was just
fixed in the commit 5a5bd7f871818029d5ccbe189f087f591258c294.
|
|
|
|
This is broken in the releases 5.2.6 to 5.4.2. A workaround
for these releases is to pass EGREP='grep -E' as an argument
to configure in addition to --disable-threads.
The problem appeared when m4/ax_pthread.m4 was updated in
the commit 6629ed929cc7d45a11e385f357ab58ec15e7e4ad which
introduced the use of AC_EGREP_CPP. AC_EGREP_CPP calls
AC_REQUIRE([AC_PROG_EGREP]) to set the shell variable EGREP
but this was only executed if POSIX threads were enabled.
Libtool code also has AC_REQUIRE([AC_PROG_EGREP]) but Autoconf
omits it as AC_PROG_EGREP has already been required earlier.
Thus, if not using POSIX threads, the shell variable EGREP
would be undefined in the Libtool code in configure.
ax_pthread.m4 is fine. The bug was in configure.ac which called
AX_PTHREAD conditionally in an incorrect way. Using AS_CASE
ensures that all AC_REQUIREs always get run.
Thanks to Frank Busse for reporting the bug.
Fixes: https://github.com/tukaani-project/xz/issues/45
|
|
Thanks to Christian Hesse for reporting the issue.
Fixes: https://github.com/tukaani-project/xz/issues/44
|
|
|
|
|
|
|
|
The xz man page timestamp was intentionally left unchanged.
|
|
|
|
|
|
These should have been included in 5.3.2alpha already.
|
|
These should have been included in 5.3.2alpha already.
|
|
|
|
|
|
|
|
When the docs are installed, calling the directory "liblzma" is
confusing since multiple other files in the doc directory are for
liblzma. This should also make it more natural for distros when they
package the documentation.
|
|
This was left in by mistake since an early version of the ARM64 filter
used a different struct for its options.
|
|
Autogen now requires --no-doxygen or having doxygen installed to run
without errors.
|
|
|
|
|
|
The \mainpage command is used in the first block of comments in lzma.h.
This changes the previously nearly empty index.html to use the first
comment block in lzma.h for its contents.
lzma.h is no longer documented separately, but this is for the better
since lzma.h only defined a few macros that users do not need to use.
The individual API header files all have a disclaimer that they should
not be #included directly, so there should be no confusion on the fact
that lzma.h should be the only header used by applications.
Additionally, the note "See ../lzma.h for information about liblzma as
a whole." was removed since lzma.h is now the main page of the
generated HTML and does not have its own page anymore. So it would be
confusing in the HTML version and was only a "nice to have" when
browsing the source files.
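For reference, the mechanism is just Doxygen's \mainpage command in that
first comment block, roughly like this (illustrative, not the exact lzma.h
text):

    /**
     * \mainpage
     *
     * Text written in this first comment block of lzma.h becomes the
     * front page (index.html) of the generated HTML documentation.
     */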
|
|
Another command line option (--no-doxygen) was added to disable
creating the Doxygen documentation in cases where it is not wanted or
if the doxygen tool is not installed.
|
|
This is a helper script to generate the Doxygen documentation. It can be
run in 'liblzma' or 'internal' mode by setting the first argument. It
will default to 'liblzma' mode and only generate documentation for the
liblzma API header files.
The helper script will be run during the custom mydist hook when we
create releases. This hook already alters the source directory, so it's
fine to do it here too. This way, we can include the Doxygen-generated
files in the distribution and when installing.
In 'liblzma' mode, the JavaScript is stripped from the .html files and
the .js files are removed. This avoids license hassle from jQuery and
other libraries that Doxygen 1.9.6 puts into jquery.js in minified form.
|
|
Added an install-data-local target to install the Doxygen documentation
only when it has been generated. In order to correctly remove the docs,
a corresponding uninstall-local target was added.
If the Doxygen docs exist in the source tree, they will now also be
included in the distribution.
|
|
Instead of having Doxyfile.in configured by Autoconf, the tags that need
to be configured can now be piped into the doxygen command through stdin,
with the overrides placed after the Doxyfile's contents.
Going forward, the documentation should be generated in two different
modes: liblzma or internal.
liblzma is useful for most users. It is the documentation for just
the liblzma API header files. This is the default.
internal is for people who want to understand how xz and liblzma work.
It might be useful for people who want to contribute to the project.
|
|
|
|
|
|
Converts the existing lzma_index tests into tuktests and covers every
API function from index.h except for lzma_file_info_decoder, which can
be tested in the future.
|
|
Also remove unneeded "sandbox_allowed = false;" as this code
will never be run more than once (making it work with multiple
input files isn't trivial).
|
|
|
|
The warning causes the exit status to be 2, which would break many
scripted uses of xz. The sandbox usage is already very limited, so
silently disabling it in this case keeps xz more usable.
|
|
Thanks to Xin Li for recommending the fix.
|
|
The warning is only used when errno == ENOSYS. Otherwise, xz still
issues a fatal error.
|
|
If a system has the Capsicum header files but does not actually
implement the system calls, then this would render xz unusable. Instead,
we can check if errno == ENOSYS and not issue a fatal error.
|
|
cap_enter() puts the process into the sandbox. If later calls to
cap_rights_limit() fail, then the process can still have some extra
protections.
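A simplified sketch of the pattern (not the actual xz code; FreeBSD with
Capsicum support assumed):

    #include <sys/capsicum.h>
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    // Enter capability mode first so that, even if limiting individual
    // descriptors fails later, the process still has some protection.
    static void sandbox_enter(int in_fd)
    {
        if (cap_enter() != 0) {
            if (errno == ENOSYS) {
                // Kernel without Capsicum support: warn instead of
                // making this a fatal error.
                fprintf(stderr, "warning: failed to enable the sandbox\n");
                return;
            }

            fprintf(stderr, "cap_enter: %s\n", strerror(errno));
            exit(1);
        }

        // Then restrict the already-open input descriptor.
        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ, CAP_EVENT);
        (void)cap_rights_limit(in_fd, &rights);
    }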
|
|
lzma_lzma_preset() does not guarantee that the lzma_options_lzma are
usable in an encoder even if it returns false (success). If liblzma
is built with default configurations, then the options will always be
usable. However if the match finders hc3, hc4, or bt4 are disabled, then
the options may not be usable depending on the preset level requested.
The documentation was updated to reflect this complexity, since this
behavior was unclear before.
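For example, a minimal sketch of the check that the documentation now
spells out (the encoder initialization is where unusable options are
reported):

    #include <lzma.h>

    static lzma_ret init_lzma2_encoder(lzma_stream *strm, uint32_t preset)
    {
        // lzma_lzma_preset() returns false (zero) on success and true on
        // failure, and even success does not guarantee the options are
        // usable if liblzma lacks the needed match finders.
        lzma_options_lzma opt;
        if (lzma_lzma_preset(&opt, preset))
            return LZMA_OPTIONS_ERROR;

        lzma_filter filters[] = {
            { .id = LZMA_FILTER_LZMA2, .options = &opt },
            { .id = LZMA_VLI_UNKNOWN, .options = NULL },
        };

        return lzma_stream_encoder(strm, filters, LZMA_CHECK_CRC64);
    }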
|
|
Thanks to autoantwort for reporting the issue and suggesting
a different patch:
https://github.com/tukaani-project/xz/pull/42
|
|
The static global variables can be disabled if encoders and decoders
are not built. If they are not disabled and -Werror is used, the
unused-variable warning becomes an error.
|
|
The '\n' renders as a newline when the comments are converted to html
by Doxygen.
|
|
Shorten the description for lzma_raw_encoder_memusage() and
lzma_raw_decoder_memusage().
|
|
|
|
All functions now explicitly specify parameter and return values.
The notes and code annotations were moved before the parameter and
return value descriptions for consistency.
Also, the description above lzma_filter_encoder_is_supported() about
not being able to list available filters was removed since
lzma_str_list_filters() will do this.
|
|
|
|
In the C99 and C17 standards, section 6.5.6 paragraph 8 means that
adding 0 to a null pointer is undefined behavior. As of writing,
"clang -fsanitize=undefined" (Clang 15) diagnoses this. However,
I'm not aware of any compiler that would take advantage of this
when optimizing (Clang 15 included). It's good to avoid this anyway
since compilers might some day infer that pointer arithmetic implies
that the pointer is not NULL. That is, the following foo() would then
unconditionally return 0, even for foo(NULL, 0):
    void bar(char *a, char *b);

    int foo(char *a, size_t n)
    {
        bar(a, a + n);
        return a == NULL;
    }
In contrast to C, C++ explicitly allows null pointer + 0. So if
the above is compiled as C++ then there is no undefined behavior
in the foo(NULL, 0) call.
To me it seems that changing the C standard would be the sane
thing to do (just add one sentence) as it would ensure that a huge
amount of old code won't break in the future. Based on web searches
it seems that a large number of codebases (where null pointer + 0
occurs) are being fixed instead to be future-proof in case compilers
will some day optimize based on it (like making the above foo(NULL, 0)
return 0) which in the worst case will cause security bugs.
Some projects don't plan to change it. For example, gnulib and thus
many GNU tools currently require that null pointer + 0 is defined:
https://lists.gnu.org/archive/html/bug-gnulib/2021-11/msg00000.html
https://www.gnu.org/software/gnulib/manual/html_node/Other-portability-assumptions.html
In XZ Utils the null pointer + 0 issue should be fixed after this
commit. This adds a few if-statements and thus branches to avoid
null pointer + 0. These check for size > 0 instead of ptr != NULL
because this way bugs where size > 0 && ptr == NULL will likely
get caught quickly. None of them are in hot spots so it shouldn't
matter for performance.
A little less readable version would be replacing

    ptr + offset

with

    offset != 0 ? ptr + offset : ptr

or creating a macro for it:

    #define my_ptr_add(ptr, offset) \
            ((offset) != 0 ? ((ptr) + (offset)) : (ptr))
Checking for offset != 0 instead of ptr != NULL allows GCC >= 8.1,
Clang >= 7, and Clang-based ICX to optimize it to the very same code
as ptr + offset. That is, it won't create a branch. So for hot code
this could be a good solution to avoid null pointer + 0. Unfortunately
other compilers like ICC 2021 or MSVC 19.33 (VS2022) will create a
branch from my_ptr_add().
Thanks to Marcin Kowalczyk for reporting the problem:
https://github.com/tukaani-project/xz/issues/36
|
|
|
|
|
|
|
|
lzma_microlzma_decoder -> lzma_microlzma_encoder
|
|
Standardizing each function to always specify parameters and return
values. Also moved the parameters and return values to the end of each
function description.
|
|
|
|
|
|
On MicroBlaze, GCC 12 is broken in the sense that
__has_attribute(__symver__) returns true but it still doesn't
support the __symver__ attribute even though the platform is ELF
and symbol versioning is supported if using the traditional
__asm__(".symver ...") method. Avoiding the traditional method is
good because it breaks LTO (-flto) builds with GCC.
See also: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101766
For now the only extra symbols in liblzma_linux.map are the
compatibility symbols with the patch that spread from RHEL/CentOS 7.
These require the use of __symver__ attribute or __asm__(".symver ...")
in the C code. Compatibility with the patch from CentOS 7 doesn't
seem valuable on MicroBlaze so use liblzma_generic.map on MicroBlaze
instead. It doesn't require anything special in the C code and thus
no LTO issues either.
An alternative would be to detect support for __symver__
attribute in configure.ac and CMakeLists.txt and fall back
to __asm__(".symver ...") but then LTO would be silently broken
on MicroBlaze. It sounds likely that MicroBlaze is a special
case so let's treat it as such because that is simpler. If
a similar issue exists on some other platform too then hopefully
someone will report it and this can be reconsidered.
(This doesn't do the same fix in CMakeLists.txt. Perhaps it should
but perhaps CMake build of liblzma doesn't matter much on MicroBlaze.
The problem breaks the build so it's easy to notice and can be fixed
later.)
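For context, the two mechanisms look roughly like this (the function and
version node names are made up, and the version node must also exist in
the linker version script):

    #if HAVE_SYMVER_ATTRIBUTE /* hypothetical configure/CMake check */
    // GCC >= 10: the LTO-friendly attribute form.
    __attribute__((__symver__("my_func@MYLIB_1.0")))
    int my_func_v1(void)
    {
        return 1;
    }
    #else
    // Traditional form: more portable across GCC versions but it breaks
    // LTO (-flto) builds with GCC.
    __asm__(".symver my_func_v1, my_func@MYLIB_1.0");
    int my_func_v1(void)
    {
        return 1;
    }
    #endif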
Thanks to Vincent Fazio for reporting the problem and proposing
a patch (in the end that solution wasn't used):
https://github.com/tukaani-project/xz/pull/32
|
|
Use "member" to refer to struct members as that's the term used
by the C standard.
Use lzma_options_delta.dist and such in docs so that in Doxygen's
HTML output they will link to the doc of the struct member.
Clean up a few trailing white spaces too.
|
|
|
|
|
|
Also adjusted preset value => preset level.
|
|
It gives C4146 here since unary minus with unsigned integer
is still unsigned (which is the intention here). Doing it
with subtraction makes it clearer and avoids the warning.
Thanks to Nathan Moinvaziri for reporting this.
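For illustration, the difference is roughly this (not the actual liblzma
code):

    #include <stdint.h>

    // Both compute the same two's-complement result, but the first form
    // applies unary minus to an unsigned operand, which MSVC flags as
    // warning C4146.
    uint32_t negate_warns(uint32_t x) { return -x; }
    uint32_t negate_clean(uint32_t x) { return 0U - x; }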
|
|
Standardizing each function to always specify parameters and return
values. Also moved the parameters and return values to the end of each
function description.
A few small things were reworded and long sentences broken up.
|
|
All functions now explicitly specify parameter and return values.
|
|
All functions now explicitly specify parameter and return values.
Also moved the note about SHA-256 functions not being exported to the
top of the file.
|
|
All functions now explicitly specify parameter and return values.
|
|
|
|
Add \private above this field and its sub-fields since it is not meant
to be modified by users.
|
|
LZMA_MEMLIMIT_ERROR was missing the "<" character needed to put
documentation after a member.
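In other words, a generic illustration of the Doxygen syntax (not the
actual lzma/base.h content):

    typedef enum {
        /** Documentation written before a member */
        EXAMPLE_OK = 0,

        EXAMPLE_MEMLIMIT_ERROR = 6, /**< The "<" attaches the comment to
                                         the member on its left instead
                                         of to the next one. */
        EXAMPLE_COUNT
    } example_ret;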
|
|
Standardizing each function to always specify params and return values.
Also fixed a small grammar mistake.
|
|
|
|
Added [out] annotations to parameters that are pointers and can have
their value changed. Also added a clarification to lzma_vli_is_valid.
|
|
Document LZMA_DELTA_DIST_MIN and LZMA_DELTA_DIST_MAX for completeness
and to avoid Doxygen warnings.
|
|
All functions now explicitly specify parameter and return values.
Also reworded the description of lzma_index_hash_init() for readability.
|
|
start_time is relative to an arbitrary point in time; it's not
time of day, so using it for anything other than time differences
wouldn't make sense.
|
|
Now, the LZMA_VERSION_MAJOR, LZMA_VERSION_MINOR, and LZMA_VERSION_PATCH
macros do not need to be on consecutive lines in version.h. They can be
separated by more whitespace, comments, or even other content, as long
as they appear in the proper order (major, minor, patch).
|
|
|
|
Specified parameter and return values for API functions and documented
a few more of the macros.
|
|
|
|
The bug is only a problem in applications that do not properly terminate
the filters[] array with LZMA_VLI_UNKNOWN or have more than
LZMA_FILTERS_MAX filters. This bug does not affect xz.
|
|
Tests lzma_str_to_filters(), lzma_str_from_filters(), and
lzma_str_list_filters() API functions.
|
|
|
|
Added a few sentences to the description for lzma_block_encoder() and
lzma_block_decoder() to highlight that the Block Header must be coded
before calling these functions.
|
|
|
|
|
|
|
|
|
|
Standardizing each function to always specify params and return values.
Output pointer parameters are also marked with doxygen style [out] to
make it clear. Any note sections were also moved above the parameter and
return sections for consistency.
|
|
The flag description for LZMA_STR_NO_VALIDATION was previously confusing
about the treatment of filters that cannot be used with the .xz format
(lzma1) without using LZMA_STR_ALL_FILTERS. Now, it is clear that
LZMA_STR_NO_VALIDATION is not a superset of LZMA_STR_ALL_FILTERS.
|
|
The workflow action for our CI pipeline can only reference artifacts in
the source directory, so we should ignore these files if the ci_build.sh
is run locally.
|
|
|
|
|
|
mythread.h and thus liblzma already does it.
|
|
|
|
This way, if xz is stopped the elapsed time and estimated time
remaining won't get confused by the amount of time spent in
the stopped state.
This raises SIGSTOP. It's not clear to me if this is the correct way.
POSIX and glibc docs say that SIGTSTP shouldn't stop the process if
it is orphaned but this commit doesn't attempt to handle that.
Search for SIGTSTP in section 2.4.3:
https://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html
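A rough sketch of the pattern (simplified; the real signal handling in xz
is more careful, and the stopped_nsec bookkeeping here is only
illustrative):

    #include <signal.h>
    #include <stdint.h>
    #include <time.h>

    // Total time spent stopped; the progress code can shift start_time
    // forward by this amount so elapsed time and the ETA ignore it.
    static volatile uint64_t stopped_nsec = 0;

    static void tstp_handler(int sig)
    {
        (void)sig;

        struct timespec before, after;
        clock_gettime(CLOCK_MONOTONIC, &before);

        // Actually stop the process; execution resumes here after SIGCONT.
        raise(SIGSTOP);

        clock_gettime(CLOCK_MONOTONIC, &after);
        const int64_t diff = (int64_t)(after.tv_sec - before.tv_sec)
                * 1000000000 + (after.tv_nsec - before.tv_nsec);
        stopped_nsec += (uint64_t)diff;
    }

    static void install_tstp_handler(void)
    {
        struct sigaction sa;
        sa.sa_handler = &tstp_handler;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGTSTP, &sa, NULL);
    }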
|
|
Thanks to Rafael Fontenelle.
|
|
|
|
Clang can be configured to fake a too-high GCC version, so
this way it's more robust.
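That is, something along these lines (illustrative):

    // Clang defines __GNUC__ and can be told to report an arbitrarily
    // high GCC version, so test for __clang__ explicitly instead of
    // relying on the GCC version macros alone.
    #if defined(__clang__)
        // Clang-specific code path
    #elif defined(__GNUC__) && __GNUC__ >= 10
        // Code path for real GCC >= 10
    #endif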
|
|
The previous documentation for lzma_str_to_filters() was technically
correct, but misleading. lzma_str_to_filters() returns NULL on success,
which is in practice always defined to 0. This is the same value as
LZMA_OK, but lzma_str_to_filters() does not return lzma_ret so we should
be more clear.
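For example, a sketch of the intended calling pattern (see
lzma/string_conversion.h for the full documentation):

    #include <stdio.h>
    #include <lzma.h>

    static int parse_filters(const char *str)
    {
        // The array must have room for LZMA_FILTERS_MAX + 1 members.
        lzma_filter filters[LZMA_FILTERS_MAX + 1];
        int error_pos;

        // Returns NULL on success and a pointer to an error message on
        // failure; note the return type is const char *, not lzma_ret.
        const char *msg = lzma_str_to_filters(str, &error_pos, filters,
                0, NULL);
        if (msg != NULL) {
            fprintf(stderr, "%s (at position %d in \"%s\")\n",
                    msg, error_pos, str);
            return 1;
        }

        // ... use the chain, then free the allocated filter options,
        // for example with lzma_filters_free(filters, NULL).
        return 0;
    }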
|
|
This reverts commit 82e3c968bfa10e3ff13333bd9cbbadb5988d6766.
Macros in the reserved namespace (_foo or __foo) shouldn't be #defined
without a very good reason. Here the alternative would have been
to #define tuklib_has_warning(str) to an appropriate value.
Also the tuklib_* files should stay namespace clean if possible.
|
|
__has_warning and other __has_foo macros are meant to become
compiler-agnostic so it's not good to check for __clang__ with it.
This also relied on tuklib_common.h for #defining __has_warning
which was confusing as #defining reserved macros is generally
not a good idea.
|
|
Also edit style to match the existing coding style in the project.
|
|
|
|
This prevents the reserved fields from being part of the generated
Doxygen documentation.
|
|
A few Doxygen tags dating from 1.4.7 were obsolete. Version 1.8.17 was
released in 2019, so this should be compatible with reasonably modern
distros.
The purpose of Doxygen these days is for docs on the website, so it
doesn't necessarily have to work for everyone. Just when the maintainers
want to update the docs.
|
|
Doxygen is now configurable in autotools only with
--enable-doxygen=[api|all]. The default is "api", which will only
generate HTML output for liblzma API functions. The LaTeX documentation
output was also disabled.
|
|
This improves the generated Doxygen HTML files to better highlight
how to properly use the liblzma API header files.
|