It is enabled only when decompressing one file to stdout,
similar to how Capsicum is used.
Landlock was added in Linux 5.13.
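The sketch below shows roughly how such a whole-process Landlock sandbox can be set
up on Linux 5.13 or later. The function name and the exact set of handled access
rights are illustrative, not copied from xz; the syscalls have no libc wrappers,
so syscall(2) is used directly.

    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void sandbox_enter_landlock(void)
    {
        // Handle (and thus deny, since no rules are added) a set of
        // filesystem access rights. A realistic sandbox would handle
        // every right the running kernel supports.
        const struct landlock_ruleset_attr attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE
                    | LANDLOCK_ACCESS_FS_WRITE_FILE
                    | LANDLOCK_ACCESS_FS_READ_FILE
                    | LANDLOCK_ACCESS_FS_READ_DIR
                    | LANDLOCK_ACCESS_FS_REMOVE_DIR
                    | LANDLOCK_ACCESS_FS_REMOVE_FILE
                    | LANDLOCK_ACCESS_FS_MAKE_REG,
        };

        const int ruleset_fd = (int)syscall(SYS_landlock_create_ruleset,
                &attr, sizeof(attr), 0U);
        if (ruleset_fd == -1)
            return; // Kernel without Landlock: run unsandboxed.

        // Required before an unprivileged process may restrict itself.
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0)
            (void)syscall(SYS_landlock_restrict_self, ruleset_fd, 0U);

        close(ruleset_fd);
    }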
|
|
This removes support for FreeBSD 10.0 and 10.1 which used
<sys/capability.h> instead of <sys/capsicum.h>. Support for
FreeBSD 10.1 ended on 2016-12-31. So now FreeBSD >= 10.2 is
required to enable Capsicum support.
This also removes support for Capsicum on Linux (libcaprights)
which seems to have been unmaintained since 2017 and Linux 4.11:
https://github.com/google/capsicum-linux
|
|
|
|
Before this commit, the following writes "foo" to the
console and deletes the input file:
echo foo | xz > con_xz
xz --suffix=_xz --decompress con_xz
It cannot happen without --suffix because names like con.xz
are also special and so attempting to decompress con.xz
(or compress con to con.xz) will already fail when opening
the input file.
A similar thing is possible when compressing. The following
writes to "nul", and the input file "n" is deleted:
echo foo | xz > n
xz --suffix=ul n
Now xz checks whether the destination is a special file before
continuing. The DOS/DJGPP version had a check for this, but the
Windows (and OS/2) versions didn't.
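One possible way to detect such special destination names on Windows, shown only
as an illustration (this is not necessarily the exact check that was added):

    #include <windows.h>
    #include <stdbool.h>

    static bool dest_is_special_file(const char *path)
    {
        const HANDLE h = CreateFileA(path, GENERIC_READ,
                FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return false; // Doesn't exist or can't be opened: not special.

        // Console and NUL-style device names report FILE_TYPE_CHAR.
        const bool special = GetFileType(h) == FILE_TYPE_CHAR;
        CloseHandle(h);
        return special;
    }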
|
|
Thanks to Kelvin Lee for the original patches
and testing the modifications I made.
|
|
SSIZE_MAX isn't readily available on MSVC. Removing it means
there is one less thing to worry about when porting to MSVC.
|
|
|
|
Also remove unneeded "sandbox_allowed = false;" as this code
will never be run more than once (making it work with multiple
input files isn't trivial).
|
|
|
|
The warning causes the exit status to be 2, which would cause
problems for many scripted uses of xz. The sandbox usage is
already very limited, so silently disabling this keeps xz more usable.
|
|
Thanks to Xin Li for recommending the fix.
|
|
The warning is only used when errno == ENOSYS. Otherwise, xz still
issues a fatal error.
|
|
If a system has the Capsicum header files but does not actually
implement the system calls, then this would render xz unusable. Instead,
we can check if errno == ENOSYS and not issue a fatal error.
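A minimal sketch of the resulting pattern (the function name and the error
handling are illustrative, not the actual xz code):

    #include <sys/capsicum.h>
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void sandbox_enter(void)
    {
        if (cap_enter() == 0)
            return; // The sandbox is active.

        // Headers exist but the kernel lacks the Capsicum syscalls:
        // keep running without the sandbox instead of dying.
        if (errno == ENOSYS)
            return;

        fprintf(stderr, "xz: cap_enter() failed\n");
        exit(1);
    }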
|
|
cap_enter() puts the process into the sandbox. If later calls to
cap_rights_limit() fail, then the process can still have some extra
protections.
|
|
It makes no difference here, as the return value fits into an
int too and then gets ignored, but this looks better.
|
|
"xz -v < regular_file > out.xz" doesn't display the percentage
and estimated remaining time because it doesn't even try to
check the input file size when input is read from stdin.
This could be improved but for now there's just a comment
to remind about it.
|
|
|
|
Long ago it was used in list.c too but nowadays it's needed
only in io_open_src() so it's nicer to avoid a separate function.
|
|
Treating it as a warning (message + exit status 2) matches gzip,
and it seems more logical since at that point the output file
has already been successfully closed. When it's a warning, it
can still be suppressed with --no-warn.
|
|
It isn't any better now but it's consistent with
the rest of the code base.
|
|
OpenBSD does not allow changing the group of a file if the user
does not belong to that group. In contrast to Linux, OpenBSD also
fails if the new group is the same as the old one. Do not call
fchown(2) in that case; it would change nothing anyway.
This fixes an issue with Perl Alien::Build module.
https://github.com/PerlAlien/Alien-Build/issues/62
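Roughly, the workaround looks like this (the names are illustrative;
gid_of_source is assumed to come from stat()ing the source file):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void copy_group(int dest_fd, gid_t gid_of_source)
    {
        struct stat st;
        if (fstat(dest_fd, &st) != 0)
            return;

        // Nothing to do if the destination already has the right group,
        // so the OpenBSD-specific failure cannot happen.
        if (st.st_gid == gid_of_source)
            return;

        // (uid_t)-1 leaves the owner untouched; only the group changes.
        (void)fchown(dest_fd, (uid_t)-1, gid_of_source);
    }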
|
|
Previously this required using --force, but that has other
effects too which might be undesirable. Changing the behavior
of --keep has a small risk of breaking existing scripts, but
since this is a fairly special corner case, I expect the
likelihood of breakage to be low enough.
I think the new behavior is more logical. The only reason for
the old behavior was to be consistent with gzip and bzip2.
Thanks to Vincent Lefevre and Sebastian Andrzej Siewior.
|
|
Perhaps it's too drastic but on the other hand it will let me
learn about possible problems if people report the errors.
This won't be backported to the v5.2 branch.
|
|
|
|
xz --flush-timeout=2000, old version:
1. xz is started. The next flush will happen after two seconds.
2. No input for one second.
3. A burst of a few kilobytes of input.
4. No input for one second.
5. Two seconds have passed and flushing starts.
The first second counted towards the flush-timeout even though
there was no pending data. This can cause flushing to occur more
often than needed.
xz --flush-timeout=2000, after this commit:
1. xz is started.
2. No input for one second.
3. A burst of a few kilobytes of input. The next flush will
happen after two seconds counted from the time when the
first bytes of the burst were read.
4. No input for one second.
5. No input for another second.
6. Two seconds have passed and flushing starts.
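A simplified sketch of the corrected timing; the variable and helper names
(for example now_ms()) are illustrative, not the real xz code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    extern uint64_t flush_timeout_ms;   // value of --flush-timeout
    extern uint64_t now_ms(void);       // assumed monotonic clock helper

    static uint64_t flush_deadline = 0; // 0 means "no flush pending"

    static void on_input_read(size_t bytes_read)
    {
        // Start counting only when the first bytes after the previous
        // flush have actually been read.
        if (bytes_read > 0 && flush_deadline == 0)
            flush_deadline = now_ms() + flush_timeout_ms;
    }

    static bool flush_is_due(void)
    {
        if (flush_deadline == 0 || now_ms() < flush_deadline)
            return false;

        flush_deadline = 0; // Disarm until new input arrives.
        return true;
    }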
|
|
|
|
When input was blocked, xz --flush-timeout=1 would wake up every
millisecond and initiate flushing, which would have nothing to
flush and thus would just waste CPU time. The fix disables the
timeout when no input has been seen since the previous flush.
|
|
|
|
|
|
Or any off_t which isn't very big (such as the signed 64-bit
integer that most systems have). A small off_t could overflow if
the file being decompressed had a long enough run of zero bytes,
which would result in corrupt output.
|
|
lseek() returns -1 on error and checking for -1 is nicer.
|
|
This helps fix warnings from -Wsign-conversion and makes the
code look better too.
|
|
|
|
|
|
xz --list is random access so POSIX_FADV_SEQUENTIAL was clearly
wrong.
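A small sketch of the idea, assuming a flag that tells whether xz is running
in --list mode (the flag name is made up for this example):

    #include <stdbool.h>
    #include <fcntl.h>

    static void advise_access_pattern(int fd, bool mode_list)
    {
    #ifdef POSIX_FADV_SEQUENTIAL
        // --list seeks around the file; only compression and
        // decompression read the input sequentially.
        if (!mode_list)
            (void)posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    #endif
    }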
|
|
xz used to call utime() on Windows, but its result gets lost
on close(). Using _futime() seems to work.
Thanks to Martok for reporting the bug:
http://www.mail-archive.com/xz-devel@tukaani.org/msg00261.html
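A hedged sketch of the approach, assuming the timestamps were saved from the
source file earlier:

    #include <sys/utime.h>
    #include <time.h>

    static void copy_times_windows(int fd, time_t atime, time_t mtime)
    {
        struct _utimbuf buf;
        buf.actime = atime;
        buf.modtime = mtime;

        // _futime() works on the still-open descriptor, so the new
        // timestamps survive the close(), unlike with utime().
        (void)_futime(fd, &buf);
    }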
|
|
Thanks to Evan Nemerson.
|
|
unlink() can fail with EBUSY for open files on some
operating systems and file systems.
|
|
This reverts commit 7a11c4a8e5e15f13d5fa59233b3172e65428efdd.
It is a problem when libc has pipe2() but the kernel is too
old to have pipe2() and thus pipe2() fails. In xz it's pointless
to have a fallback for non-functioning pipe2(); it's better to
avoid pipe2() completely.
Thanks to Michael Fox for the bug report.
|
|
|
|
The sandboxing is used conditionally as described in main.c.
This isn't optimal but it was much easier to implement than
a full sandboxing solution and it still covers the most common
use cases where xz is writing to standard output. This should
have practically no effect on performance even with small files
as fork() isn't needed.
The C and locale libraries can open files as needed. This has been
fine in the past, but it's a problem with things like Capsicum.
io_sandbox_enter() tries to ensure that various locale-related
files have been loaded before cap_enter() is called, but it's
possible that there are other similar problems which haven't
been seen yet.
Currently Capsicum is available on FreeBSD 10 and later,
and there is a port to Linux too.
Thanks to Loganaden Velvindron for help.
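A condensed sketch of the idea; the capability sets and names shown here are
illustrative and the real code differs in detail:

    #include <sys/capsicum.h>

    static void sandbox_enter_capsicum(int in_fd, int out_fd)
    {
        cap_rights_t rights;

        // Limit the already-open descriptors to the few capabilities
        // that are still needed...
        (void)cap_rights_limit(in_fd,
                cap_rights_init(&rights, CAP_READ, CAP_FSTAT, CAP_SEEK));
        (void)cap_rights_limit(out_fd,
                cap_rights_init(&rights, CAP_WRITE, CAP_FSTAT, CAP_SEEK,
                CAP_FCHOWN, CAP_FCHMOD, CAP_FUTIMES));

        // ...then enter capability mode: from here on the process
        // cannot open new files at all.
        (void)cap_enter();
    }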
|
|
|
|
Now it reads the old flags instead of blindly setting O_NONBLOCK.
The old code may have worked correctly, but this is better.
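That is, roughly:

    #include <fcntl.h>

    static void set_nonblocking(int fd)
    {
        // Read the current flags and OR in O_NONBLOCK instead of
        // overwriting whatever was there.
        const int flags = fcntl(fd, F_GETFL);
        if (flags != -1)
            (void)fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }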
|
|
|
|
This is similar to the case with stdin.
Thanks to Brad Smith for the bug report and testing
on OpenBSD.
|
|
|
|
It's a problem at least on OpenBSD, which doesn't support
O_NONBLOCK on e.g. /dev/null. I wouldn't be surprised if it's
a problem on other OSes too, since this behavior is allowed
by POSIX.1-2008.
The code relying on this behavior was committed in June 2013
and included in 5.1.3alpha released on 2013-10-26. Clearly
the development releases only get limited testing.
|
|
|
|
When --flush-timeout=TIMEOUT is used, xz will use
LZMA_SYNC_FLUSH if read() would block and at least
TIMEOUT milliseconds have elapsed since the previous flush.
This can be useful in realtime-like use cases where the
data is simultaneously decompressed by another process
(possibly on a different computer). If new uncompressed
input data is produced slowly, without this option xz could
buffer the data for a long time until it would become
decompressible from the output.
If TIMEOUT is 0, the feature is disabled. This is the default.
This commit affects the compression side. Using xz on
the decompression side for the above purpose doesn't work
so well yet because there is quite a bit of input and
output buffering when decompressing.
Neither --long-help nor the man page has been updated yet.
The details of this feature may change.
|
|
|
|
Thanks to Christian Hesse.
|
|
Now both reading and writing should be free of
race conditions with signals.
There might still be signal handling issues left.
Signals are blocked during many operations to avoid
EINTR, but this may cause problems e.g. if writing to
stderr blocks when trying to display an error message.
|
|
It didn't affect the behavior of the code since -1
becomes true anyway.
|
|
It is possible that a signal to set user_abort arrives right
before a blocking system call is made. In this case the call
may block until another signal arrives, while the wanted
behavior is to make xz clean up and exit as soon as possible.
After this commit, the race condition is avoided on the
input side, which already uses non-blocking I/O. The output
side still uses blocking I/O and thus still has the race condition.
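As an illustration of the race (not the approach used in xz, which relies on
non-blocking I/O on the input side), one classic way to close it is pselect():
the abort signal stays blocked except while waiting, so a signal that sets
user_abort can never be lost right before a blocking call:

    #include <errno.h>
    #include <signal.h>
    #include <sys/select.h>

    extern volatile sig_atomic_t user_abort;

    // wait_mask is the signal mask to apply while waiting, e.g. one
    // with the abort signal unblocked.
    static int wait_for_input(int fd, const sigset_t *wait_mask)
    {
        while (!user_abort) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);

            // The mask is applied atomically only for the duration
            // of pselect(), so a pending signal always wakes it.
            const int ret = pselect(fd + 1, &readfds, NULL, NULL,
                    NULL, wait_mask);
            if (ret > 0)
                return 0; // fd is readable

            if (ret == -1 && errno != EINTR)
                return -1; // real error

            // EINTR: a signal arrived; the loop rechecks user_abort.
        }

        return -1; // aborted
    }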
|
|
|
|
Nowadays errno == EFTYPE is documented in open(2).
|
|
POSIX says that fcntl(fd, F_SETFL, flags) returns -1 on
error and "other than -1" on success. This is how it is
documented e.g. on OpenBSD too. On Linux, success with
F_SETFL is always 0 (at least according to fcntl(2)
from man-pages 3.51).
|
|
Due to a wrong variable name, when writing a sparse file
to standard output, *all* file status flags were cleared
(to the extent the operating system allowed it) instead of
only clearing the O_APPEND flag. In practice this worked
fine in the common situations on GNU/Linux, but I didn't
check how it behaved elsewhere.
The original flags were still restored correctly. I still
changed the code to use a separate boolean variable to
indicate when the flags should be restored instead of
relying on a special value in stdout_flags.
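A condensed sketch of the corrected logic (the real code differs in detail):

    #include <fcntl.h>
    #include <stdbool.h>
    #include <unistd.h>

    static int stdout_flags;
    static bool restore_stdout_flags = false;

    static void stdout_clear_append(void)
    {
        const int flags = fcntl(STDOUT_FILENO, F_GETFL);
        if (flags == -1 || !(flags & O_APPEND))
            return;

        // Clear only O_APPEND; keep every other status flag intact.
        if (fcntl(STDOUT_FILENO, F_SETFL, flags & ~O_APPEND) != -1) {
            stdout_flags = flags;
            restore_stdout_flags = true;
        }
    }

    static void stdout_restore_flags(void)
    {
        if (restore_stdout_flags)
            (void)fcntl(STDOUT_FILENO, F_SETFL, stdout_flags);
    }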
|
|
The input file can be a FIFO or something else that doesn't
support posix_fadvise(), so don't check the return value even
with an assertion. Nothing bad happens if the call to
posix_fadvise() fails.
|
|
Now the following works as you would expect:
echo foo | xz > foo.xz
echo bar | xz >> foo.xz
( xz -dc --single-stream ; xz -dc --single-stream ) < foo.xz
Note that it doesn't work if the input is not seekable
or if there is Stream Padding between the concatenated
.xz Streams.
|
|
Spot candidates by running these commands:
git ls-files |xargs perl -0777 -n \
-e 'while (/\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b/gims)' \
-e '{$n=($` =~ tr/\n/\n/ + 1); ($v=$&)=~s/\n/\\n/g; print "$ARGV:$n:$v\n"}'
Thanks to Jim Meyering for the original patch.
|
|
Try to avoid overwriting the source file if --force is
used and the generated destination filename refers to
the source file. This can happen with 8.3 filenames where
extra characters are ignored.
If the generated output file refers to a special file
like "con" or "prn", refuse to write to it even if --force
is used.
|
|
|
|
|
|
xz didn't compress setuid/setgid/sticky files and files
with multiple hard links even with --force. This bug was
introduced in 23ac2c44c3ac76994825adb7f9a8f719f78b5ee4.
Thanks to Charles Wilson.
|
|
|
|
Thanks to Joerg Sonnenberger.
|
|
Thanks to Jonathan Nieder.
|
|
xz --force accepted symlinks, but didn't remove
them after successful compression. Instead, an error
message was displayed.
|
|
The opening of the destination file is now delayed a little.
The coder is initialized, and if decompressing, the memory
usage of the first Block is compared against the memory
usage limit before the destination file is opened. This
means that if --force was used, the old "target" file won't
be deleted so easily when something goes wrong very early.
Thanks to Mark K for the bug report.
The above fix required some changes to progress message
handling. Now there is a separate function for setting and
printing the filename. It is used also in list.c.
list_file() now handles stdin correctly (gives an error).
A useless check for user_abort was removed from file_io.c.
|
|
|
|
Added a note to translators too.
Thanks to Robert Readman.
|
|
It will be used by --list.
|
|
to stdout even if --force is used.
--force will still enable compression of symlinks, but only
if they point to a regular file.
The new way simply seems more reasonable. It matches gzip's
behavior while the old one matched bzip2's behavior.
|
|
The problem was introduced when adding sparse file
support in 465d1b0d6518c5d980f2db4c2d769f9905bdd902.
Thanks to Charles Wilson.
|
|
Fix a missing _() in the error message too.
|
|
a regular file.
Sparse file creation can be disabled with --no-sparse.
I don't promise yet that the name of this option won't
change before 5.0.0. It's possible that the code that
checks when it is safe to use sparse output on stdout
is not good enough, and a more flexible command line
option is needed to configure sparse file handling.
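A minimal sketch of the idea behind sparse output; is_all_zeros() and the
error handling are illustrative helpers, not xz functions:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    static bool is_all_zeros(const uint8_t *buf, size_t size)
    {
        while (size-- > 0)
            if (*buf++ != 0)
                return false;
        return true;
    }

    static int write_maybe_sparse(int fd, const uint8_t *buf, size_t size)
    {
        if (size > 0 && is_all_zeros(buf, size)) {
            // Seek past the zeros instead of writing them; the
            // filesystem leaves a hole that reads back as zeros.
            // (The last block of the file still needs a real write
            // or ftruncate() so the file size ends up correct.)
            return lseek(fd, (off_t)size, SEEK_CUR) == -1 ? -1 : 0;
        }

        return write(fd, buf, size) == (ssize_t)size ? 0 : -1;
    }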
|
|
Thanks to Jouk Jansen.
|
|
Thanks to Jouk Jansen.
|
|
Separate a few reusable components from XZ Utils specific
code. The reusable code is now in "tuklib" modules. A few
more could be separated still, e.g. bswap.h.
Fix some bugs in lzmainfo.
Fix physmem and cpucores code on OS/2. Thanks to Elbert Pol
for help.
Add OpenVMS support into physmem. Add a few #ifdefs to ease
building XZ Utils on OpenVMS. Thanks to Jouk Jansen for the
original patch.
|
|
|
|
|
|
to avoid problems on systems that have system headers
with those names.
|