author | Lasse Collin <lasse.collin@tukaani.org> | 2022-11-11 14:35:58 +0200
committer | Lasse Collin <lasse.collin@tukaani.org> | 2022-11-11 14:36:32 +0200
commit | 2f01169f5a81e21e7a7e5f799c32472c6277b1d5 (patch)
tree | fc605de1e3e02d35e6d29a123afc998f4e8e4241 /src/liblzma/common
parent | Scripts: Ignore warnings from xz. (diff)
download | xz-2f01169f5a81e21e7a7e5f799c32472c6277b1d5.tar.xz
liblzma: Fix incorrect #ifdef for x86 SSE2 support.
__SSE2__ is the correct macro for SSE2 support with GCC, Clang,
and ICC. __SSE2_MATH__ means that floating point math is done with
SSE2 instead of the x87 (387) FPU. The latter macro is usually
defined whenever the first one is, but testing it was still a bug.
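For illustration (not part of the commit), a compile-time check for SSE2 intrinsics in the spirit of the fixed condition could look like the sketch below. The macro name HAVE_SSE2_INTRIN is hypothetical; the __SSE2__ and _M_IX86_FP branches mirror the condition this commit installs.

```c
/* Sketch: compile-time SSE2 detection.
 *
 * __SSE2__      GCC, Clang, and ICC define this whenever SSE2
 *               instructions may be used (e.g. -msse2 or x86-64).
 * __SSE2_MATH__ GCC/Clang: floating point *math* uses SSE2 instead of
 *               the x87 (387) FPU. Testing this for intrinsics support
 *               is the bug the commit fixes.
 * _M_IX86_FP    MSVC on 32-bit x86: a value >= 2 means /arch:SSE2
 *               or higher.
 */
#if defined(__SSE2__) \
		|| (defined(_MSC_VER) && defined(_M_IX86_FP) \
			&& _M_IX86_FP >= 2)
#	define HAVE_SSE2_INTRIN 1  /* hypothetical macro name */
#else
#	define HAVE_SSE2_INTRIN 0
#endif
```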
Diffstat (limited to 'src/liblzma/common')
-rw-r--r-- | src/liblzma/common/memcmplen.h | 3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/src/liblzma/common/memcmplen.h b/src/liblzma/common/memcmplen.h
index dcfd8d6f..b76a0b63 100644
--- a/src/liblzma/common/memcmplen.h
+++ b/src/liblzma/common/memcmplen.h
@@ -80,8 +80,7 @@ lzma_memcmplen(const uint8_t *buf1, const uint8_t *buf2,
 #elif defined(TUKLIB_FAST_UNALIGNED_ACCESS) \
 		&& defined(HAVE__MM_MOVEMASK_EPI8) \
-		&& ((defined(__GNUC__) && defined(__SSE2_MATH__)) \
-			|| (defined(__INTEL_COMPILER) && defined(__SSE2__)) \
+		&& (defined(__SSE2__) \
 			|| (defined(_MSC_VER) && defined(_M_IX86_FP) \
 				&& _M_IX86_FP >= 2))
 	// NOTE: Like above, this will use 128-bit unaligned access which
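For context, the code guarded by this #ifdef compares buffers 16 bytes at a time using the SSE2 _mm_movemask_epi8() intrinsic. The sketch below is a minimal, hypothetical illustration of that technique, not the code from memcmplen.h: the function name and loop structure are assumptions, it ignores the limit handling and deliberate over-reads of the real lzma_memcmplen(), and it uses the GCC/Clang builtin __builtin_ctz() for the bit scan.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: length of the common prefix of buf1 and buf2,
 * assuming len is a multiple of 16 for simplicity. */
static size_t
common_prefix_len_sse2(const uint8_t *buf1, const uint8_t *buf2, size_t len)
{
	size_t i = 0;

	while (i < len) {
		const __m128i a = _mm_loadu_si128((const __m128i *)(buf1 + i));
		const __m128i b = _mm_loadu_si128((const __m128i *)(buf2 + i));

		/* Bit n of eq is 1 if byte n of a and b are equal. */
		const uint32_t eq
			= (uint32_t)_mm_movemask_epi8(_mm_cmpeq_epi8(a, b));

		if (eq != 0xFFFF) {
			/* Invert the mask so the first mismatching byte
			 * becomes the lowest set bit, then count
			 * trailing zeros (GCC/Clang builtin). */
			return i + (size_t)__builtin_ctz(~eq & 0xFFFF);
		}

		i += 16;
	}

	return len;
}
```

Because this path needs both unaligned 128-bit loads and the intrinsic itself, the #ifdef checks TUKLIB_FAST_UNALIGNED_ACCESS and HAVE__MM_MOVEMASK_EPI8 in addition to the compiler's SSE2 macro.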