|
"xz -v < regular_file > out.xz" doesn't display the percentage
and estimated remaining time because it doesn't even try to
check the input file size when input is read from stdin.
This could be improved, but for now there's just a comment
in the code as a reminder.
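One possible improvement would be to fstat() standard input and use
the size when it turns out to be a regular file. A minimal sketch of
that idea (not the actual xz code; the helper name is made up):

  #include <sys/stat.h>
  #include <unistd.h>

  /* Hypothetical helper: return the size of stdin if it is a regular
   * file, or 0 when the size is unknown (pipe, terminal, etc.) and
   * the progress percentage therefore cannot be shown. */
  static unsigned long long
  stdin_size_or_zero(void)
  {
      struct stat st;

      if (fstat(STDIN_FILENO, &st) == 0 && S_ISREG(st.st_mode)
              && st.st_size > 0)
          return (unsigned long long)st.st_size;

      return 0;
  }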
|
|
Treating it as a warning (message + exit status 2) matches gzip
and seems more logical, as at that point the output file has
already been successfully closed. When it's a warning, it is
possible to suppress it with --no-warn.
|
|
Previously this required using --force but that has other
effects too which might be undesirable. Changing the behavior
of --keep has a small risk of breaking existing scripts but
since this is a fairly special corner case I expect the
likelihood of breakage to be low enough.
I think the new behavior is more logical. The only reason for
the old behavior was to be consistent with gzip and bzip2.
Thanks to Vincent Lefevre and Sebastian Andrzej Siewior.
|
|
It isn't any better now but it's consistent with
the rest of the code base.
|
|
OpenBSD does not allow changing the group of a file if the user
does not belong to that group. In contrast to Linux, OpenBSD also
fails if the new group is the same as the old one. Do not call
fchown(2) in this case; it would change nothing anyway.
This fixes an issue with Perl Alien::Build module.
https://github.com/PerlAlien/Alien-Build/issues/62
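A sketch of the workaround (illustrative names; assumes the source
file's stat() data is available):

  #include <sys/stat.h>
  #include <unistd.h>

  /* Skip fchown() when the group wouldn't change anyway, because on
   * OpenBSD the call fails even for a no-op change if the user isn't
   * a member of the group. dest_fd and src_st are assumptions made
   * for this example. */
  static void
  copy_group(int dest_fd, const struct stat *src_st)
  {
      struct stat dest_st;

      if (fstat(dest_fd, &dest_st) != 0)
          return;

      if (dest_st.st_gid != src_st->st_gid)
          (void)fchown(dest_fd, (uid_t)-1, src_st->st_gid);
  }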
|
|
|
|
xz --flush-timeout=2000, old version:
1. xz is started. The next flush will happen after two seconds.
2. No input for one second.
3. A burst of a few kilobytes of input.
4. No input for one second.
5. Two seconds have passed and flushing starts.
The first second counted towards the flush-timeout even though
there was no pending data. This can cause flushing to occur more
often than needed.
xz --flush-timeout=2000, after this commit:
1. xz is started.
2. No input for one second.
3. A burst of a few kilobytes of input. The next flush will
happen after two seconds counted from the time when the
first bytes of the burst were read.
4. No input for one second.
5. No input for another second.
6. Two seconds have passed and flushing starts.
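Roughly, the new behavior means starting the clock only when
unflushed data appears; a sketch (the variable and helper names are
made up for this example, not taken from the xz sources):

  #define _POSIX_C_SOURCE 200809L
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <time.h>

  /* Illustrative helper: current monotonic time in milliseconds. */
  static uint64_t
  now_ms(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
  }

  /* 0 means there is no pending data and thus no flush scheduled. */
  static uint64_t flush_deadline = 0;

  /* Call after every successful read of input. */
  static void
  note_input(size_t bytes_read, uint64_t flush_timeout_ms)
  {
      if (bytes_read > 0 && flush_deadline == 0)
          flush_deadline = now_ms() + flush_timeout_ms;
  }

  /* True once the timeout has elapsed since the first pending bytes. */
  static bool
  flush_is_due(void)
  {
      return flush_deadline != 0 && now_ms() >= flush_deadline;
  }

After a flush, the deadline would be reset to zero so that idle time
never counts toward the next flush.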
|
|
|
|
When the input was blocked, xz --flush-timeout=1 would wake up every
millisecond and initiate flushing, which had nothing to flush
and thus just wasted CPU time. The fix disables the
timeout when no input has been seen since the previous flush.
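In poll(2) terms the fix boils down to this kind of timeout
selection (a sketch; pending_input and remaining_ms are assumptions
made for this example):

  #include <poll.h>
  #include <stdbool.h>

  /* When nothing is waiting to be flushed, a timeout-driven wakeup
   * could not do any useful work, so block indefinitely instead of
   * waking up every few milliseconds for nothing. */
  static int
  choose_poll_timeout(bool pending_input, int remaining_ms)
  {
      return pending_input ? remaining_ms : -1;
  }

The timeout becomes finite again once the first new input bytes have
been read.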
|
|
|
|
|
|
Or any off_t which isn't very big (that is, not as big as the
signed 64-bit integer that most systems have). A small off_t
could overflow if the file being decompressed had a long enough
run of zero bytes, which would result in corrupt output.
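The problem can be sketched like this (simplified; OFF_T_MAX_GUESS
and the function are inventions for this example, not the real fix):

  #include <limits.h>
  #include <stddef.h>
  #include <sys/types.h>

  /* Rough guess of the largest safe off_t value for this sketch;
   * real code would derive the true maximum of off_t. */
  #define OFF_T_MAX_GUESS ((off_t)1 << (sizeof(off_t) * CHAR_BIT - 2))

  /* The pending hole (run of zero bytes to be skipped with lseek())
   * is accumulated in an off_t. Without a check like this, a small
   * off_t could overflow on a long enough run of zeros. */
  static int
  grow_hole(off_t *pending_hole, size_t zero_bytes)
  {
      if ((off_t)zero_bytes > OFF_T_MAX_GUESS - *pending_hole)
          return -1; /* Flush the hole with lseek() before growing it. */

      *pending_hole += (off_t)zero_bytes;
      return 0;
  }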
|
|
|
|
xz --list is random access so POSIX_FADV_SEQUENTIAL was clearly
wrong.
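A sketch of how the advice could be chosen (illustrative only;
list_mode is an assumption made for this example). The return value
is ignored on purpose since, as another entry in this log notes, the
input may be a FIFO that doesn't support posix_fadvise():

  #define _POSIX_C_SOURCE 200112L
  #include <fcntl.h>
  #include <stdbool.h>

  static void
  advise_input(int fd, bool list_mode)
  {
  #ifdef POSIX_FADV_SEQUENTIAL
      /* Failure is harmless, so the return value isn't checked. */
      (void)posix_fadvise(fd, 0, 0,
              list_mode ? POSIX_FADV_RANDOM : POSIX_FADV_SEQUENTIAL);
  #endif
  }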
|
|
|
|
In the v5.2 branch this feature is considered experimental
and thus disabled by default.
The sandboxing is used conditionally as described in main.c.
This isn't optimal but it was much easier to implement than
a full sandboxing solution and it still covers the most common
use cases where xz is writing to standard output. This should
have practically no effect on performance even with small files
as fork() isn't needed.
C and locale libraries can open files as needed. This has been
fine in the past, but it's a problem with things like Capsicum.
io_sandbox_enter() tries to ensure that various locale-related
files have been loaded before cap_enter() is called, but it's
possible that there are other similar problems which haven't
been seen yet.
Currently Capsicum is available on FreeBSD 10 and later
and there is a port to Linux too.
Thanks to Loganaden Velvindron for help.
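The Capsicum part of such a sandbox is roughly the following (a
FreeBSD sketch, not the actual io_sandbox_enter(); the exact rights
xz needs may differ and error handling is omitted):

  #include <sys/capsicum.h>
  #include <unistd.h>

  static void
  enter_capsicum_sandbox(int in_fd)
  {
      cap_rights_t rights;

      /* Limit the remaining descriptors to what is actually needed. */
      (void)cap_rights_limit(in_fd, cap_rights_init(&rights, CAP_READ));
      (void)cap_rights_limit(STDOUT_FILENO,
              cap_rights_init(&rights, CAP_WRITE));

      /* After this, opening new files fails, which is why the
       * locale-related files must already have been loaded. */
      (void)cap_enter();
  }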
|
|
xz used to call utime() on Windows, but its result gets lost
on close(). Using _futime() seems to work.
Thanks to Martok for reporting the bug:
http://www.mail-archive.com/xz-devel@tukaani.org/msg00261.html
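A sketch of the Windows-specific part (assuming a CRT file
descriptor for the still-open destination file; not the exact xz
code):

  #ifdef _WIN32
  #include <sys/utime.h>
  #include <time.h>

  /* On Windows, utime() on the path can effectively be undone when
   * the file handle is closed, but _futime() on the open descriptor
   * makes the timestamps stick. */
  static void
  copy_times_win(int dest_fd, time_t atime, time_t mtime)
  {
      struct _utimbuf buf;

      buf.actime = atime;
      buf.modtime = mtime;
      (void)_futime(dest_fd, &buf);
  }
  #endif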
|
|
Thanks to Evan Nemerson.
|
|
unlink() can fail with errno set to EBUSY for open files on some
operating systems and file systems.
|
|
This reverts commit 7a11c4a8e5e15f13d5fa59233b3172e65428efdd.
It is a problem when libc has pipe2() but the kernel is too
old to have pipe2() and thus pipe2() fails. In xz it's pointless
to have a fallback for non-functioning pipe2(); it's better to
avoid pipe2() completely.
Thanks to Michael Fox for the bug report.
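Avoiding pipe2() means falling back to the classic two-step
approach; for example (a sketch only — the flags the real code needs
may differ, FD_CLOEXEC is just used as an illustration):

  #include <fcntl.h>
  #include <unistd.h>

  /* Create a pipe and set close-on-exec without using pipe2(), which
   * can exist in libc yet fail at runtime on an old kernel. */
  static int
  make_pipe_cloexec(int fds[2])
  {
      if (pipe(fds) != 0)
          return -1;

      for (int i = 0; i < 2; ++i)
          if (fcntl(fds[i], F_SETFD, FD_CLOEXEC) == -1)
              return -1;

      return 0;
  }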
|
|
|
|
Now it reads the old flags instead of blindly setting O_NONBLOCK.
The old code may have worked correctly, but this is better.
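In other words, something along these lines (a sketch):

  #include <fcntl.h>

  /* Add O_NONBLOCK while preserving whatever status flags the
   * descriptor already has, instead of overwriting them all. */
  static int
  set_nonblocking(int fd)
  {
      const int flags = fcntl(fd, F_GETFL);
      if (flags == -1)
          return -1;

      return fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1 ? -1 : 0;
  }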
|
|
|
|
This is similar to the case with stdin.
Thanks to Brad Smith for the bug report and testing
on OpenBSD.
|
|
|
|
It's a problem at least on OpenBSD which doesn't support
O_NONBLOCK on e.g. /dev/null. I wouldn't be surprised if it's
a problem on other OSes too, since this behavior is allowed
in POSIX.1-2008.
The code relying on this behavior was committed in June 2013
and included in 5.1.3alpha released on 2013-10-26. Clearly
the development releases only get limited testing.
|
|
|
|
When --flush-timeout=TIMEOUT is used, xz will use
LZMA_SYNC_FLUSH if read() would block and at least
TIMEOUT milliseconds have elapsed since the previous flush.
This can be useful in realtime-like use cases where the
data is simultaneously decompressed by another process
(possibly on a different computer). If new uncompressed
input data is produced slowly, without this option xz could
buffer the data for a long time until it would become
decompressible from the output.
If TIMEOUT is 0, the feature is disabled. This is the default.
This commit affects the compression side. Using xz for
the decompression side for the above purpose doesn't work
so well yet because there is quite a bit of input and
output buffering when decompressing.
Neither --long-help nor the man page has been updated yet.
The details of this feature may change.
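On the liblzma side the flush itself is just an lzma_code() call
with LZMA_SYNC_FLUSH instead of LZMA_RUN; a rough sketch of the
decision (not the actual coder code; flush_is_due() is an assumed
helper like the one sketched earlier in this log):

  #include <lzma.h>
  #include <stdbool.h>

  /* Assumed helper: true when --flush-timeout has elapsed since the
   * first unflushed input bytes were read. */
  extern bool flush_is_due(void);

  static lzma_ret
  encode_step(lzma_stream *strm, bool eof_seen)
  {
      lzma_action action = LZMA_RUN;

      if (eof_seen)
          action = LZMA_FINISH;
      else if (flush_is_due())
          /* Make everything read so far decompressible from the
           * output without ending the .xz Stream. */
          action = LZMA_SYNC_FLUSH;

      return lzma_code(strm, action);
  }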
|
|
|
|
Thanks to Christian Hesse.
|
|
Now both reading and writing should be free of
race conditions with signals.
There might still be signal handling issues left.
Signals are blocked during many operations to avoid
EINTR, but that may cause problems, e.g. if writing to
stderr blocks while trying to display an error message.
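The blocking itself is the usual sigprocmask() pattern, roughly (a
sketch; the real code manages the signal set and the operations more
carefully):

  #include <signal.h>

  /* Run an operation with the handled signals blocked so that it
   * cannot be interrupted with EINTR halfway through. handled_set is
   * assumed to contain the signals xz handles. */
  static void
  with_signals_blocked(const sigset_t *handled_set, void (*op)(void))
  {
      sigset_t old_set;

      sigprocmask(SIG_BLOCK, handled_set, &old_set);
      op();
      sigprocmask(SIG_SETMASK, &old_set, NULL);
  }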
|
|
It didn't affect the behavior of the code since -1
becomes true anyway.
|
|
It is possible that a signal to set user_abort arrives right
before a blocking system call is made. In this case the call
may block until another signal arrives, while the wanted
behavior is to make xz clean up and exit as soon as possible.
After this commit, the race condition is avoided on the
input side, which already uses non-blocking I/O. The output
side still uses blocking I/O and thus still has the race condition.
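The race looks roughly like this (an illustrative sketch, not the
actual xz code):

  #include <signal.h>
  #include <stddef.h>
  #include <sys/types.h>
  #include <unistd.h>

  static volatile sig_atomic_t user_abort = 0;

  /* Racy version: if the signal arrives after the check but before
   * read() blocks, the process may sleep in read() indefinitely even
   * though user_abort is already set. */
  static ssize_t
  racy_read(int fd, void *buf, size_t size)
  {
      if (user_abort)
          return -1;

      return read(fd, buf, size); /* may block for a long time */
  }

  /* With O_NONBLOCK set on fd, read() returns with EAGAIN instead of
   * blocking, and the waiting can be done in a way that a signal can
   * interrupt (for example poll(), possibly combined with a
   * self-pipe), so user_abort gets noticed promptly. */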
|
|
|
|
Nowadays errno == EFTYPE is documented in open(2).
|
|
POSIX says that fcntl(fd, F_SETFL, flags) returns -1 on
error and "other than -1" on success. This is how it is
documented e.g. on OpenBSD too. On Linux, success with
F_SETFL is always 0 (at least according to fcntl(2)
from man-pages 3.51).
|
|
Due to a wrong variable name, when writing a sparse file
to standard output, *all* file status flags were cleared
(to the extent the operating system allowed it) instead of
only clearing the O_APPEND flag. In practice this worked
fine in the common situations on GNU/Linux, but I didn't
check how it behaved elsewhere.
The original flags were still restored correctly. I still
changed the code to use a separate boolean variable to
indicate when the flags should be restored instead of
relying on a special value in stdout_flags.
|
|
The input file can be a FIFO or something else that doesn't
support posix_fadvise(), so don't check the return value
even with an assertion. Nothing bad happens if the call
to posix_fadvise() fails.
|
|
Now the following works as you would expect:
echo foo | xz > foo.xz
echo bar | xz >> foo.xz
( xz -dc --single-stream ; xz -dc --single-stream ) < foo.xz
Note that it doesn't work if the input is not seekable
or if there is Stream Padding between the concatenated
.xz Streams.
|
|
Spot candidates by running these commands:
git ls-files |xargs perl -0777 -n \
-e 'while (/\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b/gims)' \
-e '{$n=($` =~ tr/\n/\n/ + 1); ($v=$&)=~s/\n/\\n/g; print "$ARGV:$n:$v\n"}'
Thanks to Jim Meyering for the original patch.
|
|
Try to avoid overwriting the source file if --force is
used and the generated destination filename refers to
the source file. This can happen with 8.3 filenames where
extra characters are ignored.
If the generated output file refers to a special file
like "con" or "prn", refuse to write to it even if --force
is used.
|
|
|
|
|
|
xz didn't compress setuid/setgid/sticky files and files
with multiple hard links even with --force. This bug was
introduced in 23ac2c44c3ac76994825adb7f9a8f719f78b5ee4.
Thanks to Charles Wilson.
|
|
|
|
Thanks to Joerg Sonnenberger.
|
|
Thanks to Jonathan Nieder.
|
|
xz --force accepted symlinks, but didn't remove
them after successful compression. Instead, an error
message was displayed.
|
|
The opening of the destination file is now delayed a little.
The coder is initialized and, if decompressing, the memory
usage of the first Block is compared against the memory
usage limit before the destination file is opened. This
means that if --force was used, the old "target" file won't
be deleted so easily when something goes wrong very early.
Thanks to Mark K for the bug report.
The above fix required some changes to progress message
handling. Now there is a separate function for setting and
printing the filename. It is used also in list.c.
list_file() now handles stdin correctly (gives an error).
A useless check for user_abort was removed from file_io.c.
|
|
|
|
Added a note to translators too.
Thanks to Robert Readman.
|
|
It will be used by --list.
|
|
to stdout even if --force is used.
--force will still enable compression of symlinks, but only
in case they point to a regular file.
The new way simply seems more reasonable. It matches gzip's
behavior while the old one matched bzip2's behavior.
|
|
The problem was introduced when adding sparse file
support in 465d1b0d6518c5d980f2db4c2d769f9905bdd902.
Thanks to Charles Wilson.
|
|
Fix a missing _() in the error message too.
|
|
a regular file.
Sparse file creation can be disabled with --no-sparse.
I don't promise yet that the name of this option won't
change before 5.0.0. It's possible that the code that
checks when it is safe to use sparse output on stdout
is not good enough, and that a more flexible command line
option is needed to configure sparse file handling.
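The core idea of the sparse output is to turn runs of zero bytes
into holes with lseek() instead of writing them; a simplified sketch
(not the actual implementation; error handling is trimmed down):

  #include <stdbool.h>
  #include <stddef.h>
  #include <sys/types.h>
  #include <unistd.h>

  static bool
  is_all_zero(const unsigned char *buf, size_t size)
  {
      for (size_t i = 0; i < size; ++i)
          if (buf[i] != 0)
              return false;

      return true;
  }

  /* Write a buffer, turning all-zero buffers into a hole by seeking
   * forward instead of writing. The last buffer of the file must be
   * written normally (or the file extended to its final size some
   * other way), otherwise a trailing hole would be lost. */
  static int
  write_buf_sparse(int fd, const unsigned char *buf, size_t size,
          bool is_last)
  {
      if (!is_last && is_all_zero(buf, size))
          return lseek(fd, (off_t)size, SEEK_CUR) == -1 ? -1 : 0;

      return write(fd, buf, size) == (ssize_t)size ? 0 : -1;
  }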
|
|
Thanks to Jouk Jansen.
|
|
Thanks to Jouk Jansen.
|
|
Separate a few reusable components from XZ Utils specific
code. The reusable code is now in "tuklib" modules. A few
more could be separated still, e.g. bswap.h.
Fix some bugs in lzmainfo.
Fix physmem and cpucores code on OS/2. Thanks to Elbert Pol
for help.
Add OpenVMS support into physmem. Add a few #ifdefs to ease
building XZ Utils on OpenVMS. Thanks to Jouk Jansen for the
original patch.
|
|
|
|
|
|
to avoid problems on systems with system headers with those
names.
|