about optimization options.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250271 91177308-0d34-0410-b5e6-96231b3b80d8
and front end tests should avoid this if possible.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250270 91177308-0d34-0410-b5e6-96231b3b80d8
tests by predefining _MM_MALLOC_H rather than using -ffreestanding.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250203 91177308-0d34-0410-b5e6-96231b3b80d8
Add intrinsics for the
XSAVE instructions (XSAVE/XSAVE64/XRSTOR/XRSTOR64)
XSAVEOPT instructions (XSAVEOPT/XSAVEOPT64)
XSAVEC instructions (XSAVEC/XSAVEC64)
XSAVES instructions (XSAVES/XSAVES64/XRSTORS/XRSTORS64)
Differential Revision: http://reviews.llvm.org/D13014
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250158 91177308-0d34-0410-b5e6-96231b3b80d8
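For illustration, a minimal sketch of how the new intrinsics might be used (hypothetical buffer handling; assumes -mxsave and an environment where the requested state components are enabled in XCR0):
#include <immintrin.h>
void checkpoint(void *area) {      /* 64-byte aligned, large enough save area */
  unsigned long long mask = 0x3;   /* x87 + SSE state, purely for illustration */
  _xsave(area, mask);              /* emits XSAVE  (_xsave64/_xsavec/_xsaves are analogous) */
  _xrstor(area, mask);             /* emits XRSTOR (_xrstor64/_xrstors are analogous) */
}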
check string accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250149 91177308-0d34-0410-b5e6-96231b3b80d8
!1 = !DIFile(filename: "/var/empty\5C<stdin>", directory: "E:\5Cllvm\5Cbuild\5Ccmake-ninja\5Ctools\5Cclang\5Ctest\5CCodeGen")
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250136 91177308-0d34-0410-b5e6-96231b3b80d8
This failed on AArch64 due to the type mismatch using int instead of
__builtin_va_list.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250112 91177308-0d34-0410-b5e6-96231b3b80d8
The test failed on Windows due to use of \ as a path separator rather than /.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250111 91177308-0d34-0410-b5e6-96231b3b80d8
Add support for the `-fdebug-prefix-map=` option as in GCC. The syntax is
`-fdebug-prefix-map=OLD=NEW`. When compiling files from a path beginning with
OLD, the debug info is rewritten so that the path begins with NEW instead. This is
particularly helpful if you are preprocessing in one path and compiling in
another (e.g. for a build cluster with distcc).
Note that the linear scan over the prefix map is not as costly as it may seem:
it normally runs once per file, and the map is expected to be small (1-2
entries), so the cost is roughly linear in the number of input paths.
Addresses PR24619.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250094 91177308-0d34-0410-b5e6-96231b3b80d8
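A hypothetical invocation following the OLD=NEW syntax above (both paths are made up):
clang -g -fdebug-prefix-map=/distcc/scratch/project=/home/alice/project -c foo.c
Paths in the emitted debug info that begin with /distcc/scratch/project are then recorded as if they began with /home/alice/project.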
This fixes a bug where one can take the address of a conditionally
enabled function to drop its enable_if guards. For example:
int foo(int a) __attribute__((enable_if(a > 0, "")));
int (*p)(int) = &foo;
int result = p(-1); // compilation succeeds; calls foo(-1)
Overloading logic has been updated to reflect this change, as well.
Functions with enable_if attributes that are always true are still
allowed to have their address taken.
Differential Revision: http://reviews.llvm.org/D13607
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@250090 91177308-0d34-0410-b5e6-96231b3b80d8
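Conversely, a minimal sketch of the case that remains valid after this change (reusing foo from the example above):
int bar(int a) __attribute__((enable_if(1, "")));
int (*q)(int) = &bar;   // still accepted: the condition is trivially true
int (*p)(int) = &foo;   // now rejected: foo's enable_if depends on its argument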
Rationale:
// sse3
__m128d test_mm_addsub_pd(__m128d A, __m128d B) {
return _mm_addsub_pd(A, B);
}
// mmx
void shift(__m64 a, __m64 b, int c) {
_mm_slli_pi16(a, c);
_mm_slli_pi32(a, c);
_mm_slli_si64(a, c);
_mm_srli_pi16(a, c);
_mm_srli_pi32(a, c);
_mm_srli_si64(a, c);
_mm_srai_pi16(a, c);
_mm_srai_pi32(a, c);
}
clang -msse3 -mno-mmx file.c -c
For this code we should be able to explicitly turn off MMX without affecting
the compilation of the SSE3 function, and then diagnose an error when
compiling the MMX function.
This is a preparatory patch to the actual diagnosis code which is
coming in a future patch. This sets us up to have the correct information
where we need it and verifies that it's being emitted for the backend
to handle.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249733 91177308-0d34-0410-b5e6-96231b3b80d8
that we can build up an accurate set of features rather than relying on
TargetInfo initialization via handleTargetFeatures to munge the list
of features.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249732 91177308-0d34-0410-b5e6-96231b3b80d8
Enums without an explicit, fixed, underlying type are implicitly given a
fixed 'int' type for ABI compatibility with MSVC. However, we can
enforce the standard-mandated rules on these types as-if we didn't know
this fact if the tag is not part of a definition.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249667 91177308-0d34-0410-b5e6-96231b3b80d8
These test updates are almost exclusively about the change in enum behavior:
enums without a definition are considered incomplete except when targeting
MSVC ABIs. Since these tests are interested in the 'incomplete-enum'
behavior, restrict them to %itanium_abi_triple.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249660 91177308-0d34-0410-b5e6-96231b3b80d8
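A hypothetical snippet illustrating the difference (forward-declaring an enum without a fixed underlying type may itself require MS extensions):
enum E;                  // no definition, no explicit underlying type
int n = sizeof(enum E);  // accepted for MSVC ABIs, where E behaves as if fixed to 'int';
                         // an incomplete-type error for Itanium-ABI targets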
-fms-compatibility
No ABI for C++ currently makes it possible to implement the standard
100% perfectly. We wrongly hid some of our compatible behavior behind
-fms-compatibility instead of tying it to the compiler ABI.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249656 91177308-0d34-0410-b5e6-96231b3b80d8
With this change, most 'g' options are rejected by CompilerInvocation.
They remain only as Driver options. The new way to request debug info
from cc1 is with "-debug-info-kind={line-tables-only|limited|standalone}"
and "-dwarf-version={2|3|4}". In the absence of a command-line option
to specify Dwarf version, the Toolchain decides it, rather than placing
Toolchain-specific logic in CompilerInvocation.
Also fix a bug in the Windows compatibility argument parsing
in which the "rightmost argument wins" principle failed.
Differential Revision: http://reviews.llvm.org/D13221
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249655 91177308-0d34-0410-b5e6-96231b3b80d8
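A hypothetical cc1 invocation built from the new options named above (the driver normally computes these flags itself; they are only spelled out here for illustration):
clang -cc1 -debug-info-kind=limited -dwarf-version=4 -emit-obj foo.c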
Testing has shown that it is at least as reliable as the old landingpad
pattern matching code.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249647 91177308-0d34-0410-b5e6-96231b3b80d8
branch-sensitive checks.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249499 91177308-0d34-0410-b5e6-96231b3b80d8
Use llvm.eh.exceptioncode to get the code out of EAX for x64. For
32-bit, the filter is responsible for storing it to memory for us.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249497 91177308-0d34-0410-b5e6-96231b3b80d8
Per Intel intrinsics guide:
- _mm256_stream_load_si256 takes `__m256i const *'
- _mm_stream_load_si128 takes `__m128i *', for no good reason.
Let's accept const* for both.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249213 91177308-0d34-0410-b5e6-96231b3b80d8
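A minimal sketch of what now compiles for both intrinsics (assumes SSE4.1 and AVX2 are enabled):
#include <immintrin.h>
__m128i load128(const __m128i *p) { return _mm_stream_load_si128(p); }
__m256i load256(const __m256i *p) { return _mm256_stream_load_si256(p); }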
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249179 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249176 91177308-0d34-0410-b5e6-96231b3b80d8
Currently FastISel doesn't know how to select vector bitcasts.
During instruction selection, fast-isel always falls back to SelectionDAG
every time it encounters a vector bitcast.
As a consequence of this, all the 'packed vector shift by immediate count'
test cases in avx2-builtins.c are optimized by the DAGCombiner.
In particular, the DAGCombiner would always fold trivial stack loads of
constant shift counts into the operands of packed shift builtins.
This behavior would start changing as soon as I reapply revision 249121.
That revision would teach x86 fast-isel how to select bitcasts between vector
types of the same size.
As a consequence of that change, fast-isel would less often fall back to
SelectionDAG. More importantly, DAGCombiner would no longer be able to
simplify the code by folding the stack reload of a constant.
No functional change.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249142 91177308-0d34-0410-b5e6-96231b3b80d8
test that our intrinsics behave the same under -fsigned-char and
-funsigned-char.
This further testing uncovered that the AVX-2 cmpgt for 8-bit
elements has been broken for a long time. This is fixed in the same way as
SSE4 handles the case.
The other ISA extensions currently work correctly because they use
specific instruction intrinsics. As soon as they are rewritten in terms
of generic IR, they will need to add these special casts. I've added the
necessary testing to catch this however, so we shouldn't have to chase
it down again.
I considered changing the core typedef to be signed, but that seems like
a bad idea. Notably, it would be an ABI break if anyone is reaching into
the innards of the intrinsic headers and passing __v16qi on an API
boundary. I can't be completely confident that this wouldn't happen due
to a macro expanding in a lambda, etc., so it seems much better to leave
it alone. It also matches GCC's behavior exactly.
A fun side note is that for both GCC and Clang, -funsigned-char really
does change the semantics of __v16qi. To observe this, consider:
% cat x.cc
#include <smmintrin.h>
#include <iostream>
int main() {
__v16qi a = { 1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
__v16qi b = _mm_set1_epi8(-1);
std::cout << (int)(a / b)[0] << ", " << (int)(a / b)[1] << '\n';
}
% clang++ -o x x.cc && ./x
-1, 1
% clang++ -funsigned-char -o x x.cc && ./x
0, 1
However, while this may be surprising, both Clang and GCC agree.
Differential Revision: http://reviews.llvm.org/D13324
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@249097 91177308-0d34-0410-b5e6-96231b3b80d8
recently when we started using direct conversion to model sign
extension. The __v16qi type we use for SSE v16i8 vectors is defined in
terms of 'char' which may or may not be signed! This causes us to
generate pmovsx and pmovzx depending on the setting of -funsigned-char.
This patch just forms an explicitly signed type and uses that to
formulate the sign extension. While this gets the correct behavior
(which we now verify with the enhanced test) this is just the tip of the
iceberg. Now that I know what to look for, I have found errors of this
sort *throughout* our vector code. Fortunately, this is the only
specific place where I know of users actively having their code
miscompiled by Clang due to this, so I'm keeping the fix for those users
minimal and targeted.
I'll be sending a proper email for discussion of how to fix these
systematically, what the implications are, and just how widely broken
this is... From what I can tell, we have never shipped a correct set of
builtin headers for x86 when users rely on -funsigned-char. Oops.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248980 91177308-0d34-0410-b5e6-96231b3b80d8
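A minimal sketch of the idea, with a hypothetical typedef name (the in-tree header may spell it differently):
typedef signed char v16qs __attribute__((__vector_size__(16)));
// (__v16qi)v : element type is plain 'char', so -funsigned-char flips the extension to pmovzx
// (v16qs)v   : element type is 'signed char', so the sign extension (pmovsx) no longer depends on -funsigned-char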
Summary: __nvvm_atom_cas_* returns the old value instead of whether the swap succeeds.
Reviewers: eliben, tra
Subscribers: jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D13306
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248951 91177308-0d34-0410-b5e6-96231b3b80d8
vst([1234]|[234]lane) instructions
This is the clang commit associated with llvm r248887.
This commit changes the interface of the vld[1234], vld[234]lane, and vst[1234],
vst[234]lane ARM neon intrinsics and associates an address space with the
pointer that these intrinsics take. This changes, e.g.,
<2 x i32> @llvm.arm.neon.vld1.v2i32(i8*, i32)
to
<2 x i32> @llvm.arm.neon.vld1.v2i32.p0i8(i8*, i32)
This change ensures that address spaces are fully taken into account in the ARM
target during lowering of interleaved loads and stores.
Differential Revision: http://reviews.llvm.org/D13127
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248888 91177308-0d34-0410-b5e6-96231b3b80d8
This patch corresponds to review:
http://reviews.llvm.org/D13190
Implemented the following interfaces to conform to ELF V2 ABI version 1.1.
vector signed __int128 vec_adde (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_adde (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed __int128 vec_addec (vector signed __int128, vector signed __int128, vector signed __int128);
vector unsigned __int128 vec_addec (vector unsigned __int128, vector unsigned __int128, vector unsigned __int128);
vector signed int vec_addc(vector signed int __a, vector signed int __b);
vector bool char vec_cmpge (vector signed char __a, vector signed char __b);
vector bool char vec_cmpge (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmpge (vector signed short __a, vector signed short __b);
vector bool short vec_cmpge (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmpge (vector signed int __a, vector signed int __b);
vector bool int vec_cmpge (vector unsigned int __a, vector unsigned int __b);
vector bool char vec_cmple (vector signed char __a, vector signed char __b);
vector bool char vec_cmple (vector unsigned char __a, vector unsigned char __b);
vector bool short vec_cmple (vector signed short __a, vector signed short __b);
vector bool short vec_cmple (vector unsigned short __a, vector unsigned short __b);
vector bool int vec_cmple (vector signed int __a, vector signed int __b);
vector bool int vec_cmple (vector unsigned int __a, vector unsigned int __b);
vector double vec_double (vector signed long long __a);
vector double vec_double (vector unsigned long long __a);
vector bool char vec_eqv(vector bool char __a, vector bool char __b);
vector bool short vec_eqv(vector bool short __a, vector bool short __b);
vector bool int vec_eqv(vector bool int __a, vector bool int __b);
vector bool long long vec_eqv(vector bool long long __a, vector bool long long __b);
vector signed short vec_madd(vector signed short __a, vector signed short __b, vector signed short __c);
vector signed short vec_madd(vector signed short __a, vector unsigned short __b, vector unsigned short __c);
vector signed short vec_madd(vector unsigned short __a, vector signed short __b, vector signed short __c);
vector unsigned short vec_madd(vector unsigned short __a, vector unsigned short __b, vector unsigned short __c);
vector bool long long vec_mergeh(vector bool long long __a, vector bool long long __b);
vector bool long long vec_mergel(vector bool long long __a, vector bool long long __b);
vector bool char vec_nand(vector bool char __a, vector bool char __b);
vector bool short vec_nand(vector bool short __a, vector bool short __b);
vector bool int vec_nand(vector bool int __a, vector bool int __b);
vector bool long long vec_nand(vector bool long long __a, vector bool long long __b);
vector bool char vec_orc(vector bool char __a, vector bool char __b);
vector bool short vec_orc(vector bool short __a, vector bool short __b);
vector bool int vec_orc(vector bool int __a, vector bool int __b);
vector bool long long vec_orc(vector bool long long __a, vector bool long long __b);
vector signed long long vec_sub(vector signed long long __a, vector signed long long __b);
vector signed long long vec_sub(vector bool long long __a, vector signed long long __b);
vector signed long long vec_sub(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_sub(vector unsigned long long __a, vector bool long long __b);
vector float vec_sub(vector float __a, vector float __b);
unsigned char vec_extract(vector bool char __a, int __b);
signed short vec_extract(vector signed short __a, int __b);
unsigned short vec_extract(vector bool short __a, int __b);
signed int vec_extract(vector signed int __a, int __b);
unsigned int vec_extract(vector bool int __a, int __b);
signed long long vec_extract(vector signed long long __a, int __b);
unsigned long long vec_extract(vector unsigned long long __a, int __b);
unsigned long long vec_extract(vector bool long long __a, int __b);
double vec_extract(vector double __a, int __b);
vector bool char vec_insert(unsigned char __a, vector bool char __b, int __c);
vector signed short vec_insert(signed short __a, vector signed short __b, int __c);
vector bool short vec_insert(unsigned short __a, vector bool short __b, int __c);
vector signed int vec_insert(signed int __a, vector signed int __b, int __c);
vector bool int vec_insert(unsigned int __a, vector bool int __b, int __c);
vector signed long long vec_insert(signed long long __a, vector signed long long __b, int __c);
vector unsigned long long vec_insert(unsigned long long __a, vector unsigned long long __b, int __c);
vector bool long long vec_insert(unsigned long long __a, vector bool long long __b, int __c);
vector double vec_insert(double __a, vector double __b, int __c);
vector signed long long vec_splats(signed long long __a);
vector unsigned long long vec_splats(unsigned long long __a);
vector signed __int128 vec_splats(signed __int128 __a);
vector unsigned __int128 vec_splats(unsigned __int128 __a);
vector double vec_splats(double __a);
int vec_all_eq(vector double __a, vector double __b);
int vec_all_ge(vector double __a, vector double __b);
int vec_all_gt(vector double __a, vector double __b);
int vec_all_le(vector double __a, vector double __b);
int vec_all_lt(vector double __a, vector double __b);
int vec_all_nan(vector double __a);
int vec_all_ne(vector double __a, vector double __b);
int vec_all_nge(vector double __a, vector double __b);
int vec_all_ngt(vector double __a, vector double __b);
int vec_any_eq(vector double __a, vector double __b);
int vec_any_ge(vector double __a, vector double __b);
int vec_any_gt(vector double __a, vector double __b);
int vec_any_le(vector double __a, vector double __b);
int vec_any_lt(vector double __a, vector double __b);
int vec_any_ne(vector double __a, vector double __b);
vector unsigned char vec_sbox_be (vector unsigned char);
vector unsigned char vec_cipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_cipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipher_be (vector unsigned char, vector unsigned char);
vector unsigned char vec_ncipherlast_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_shasigma_be (vector unsigned int, const int, const int);
vector unsigned long long vec_shasigma_be (vector unsigned long long, const int, const int);
vector unsigned short vec_pmsum_be (vector unsigned char, vector unsigned char);
vector unsigned int vec_pmsum_be (vector unsigned short, vector unsigned short);
vector unsigned long long vec_pmsum_be (vector unsigned int, vector unsigned int);
vector unsigned __int128 vec_pmsum_be (vector unsigned long long, vector unsigned long long);
vector unsigned char vec_gb (vector unsigned char);
vector unsigned long long vec_bperm (vector unsigned __int128 __a, vector unsigned char __b);
Removed the following interfaces either because their signatures have changed
in version 1.1 of the ABI or because they were implemented for ELF V2 ABI but
have actually been deprecated in version 1.1.
vector signed char vec_eqv(vector bool char __a, vector signed char __b);
vector signed char vec_eqv(vector signed char __a, vector bool char __b);
vector unsigned char vec_eqv(vector bool char __a, vector unsigned char __b);
vector unsigned char vec_eqv(vector unsigned char __a, vector bool char __b);
vector signed short vec_eqv(vector bool short __a, vector signed short __b);
vector signed short vec_eqv(vector signed short __a, vector bool short __b);
vector unsigned short vec_eqv(vector bool short __a, vector unsigned short __b);
vector unsigned short vec_eqv(vector unsigned short __a, vector bool short __b);
vector signed int vec_eqv(vector bool int __a, vector signed int __b);
vector signed int vec_eqv(vector signed int __a, vector bool int __b);
vector unsigned int vec_eqv(vector bool int __a, vector unsigned int __b);
vector unsigned int vec_eqv(vector unsigned int __a, vector bool int __b);
vector signed long long vec_eqv(vector bool long long __a, vector signed long long __b);
vector signed long long vec_eqv(vector signed long long __a, vector bool long long __b);
vector unsigned long long vec_eqv(vector bool long long __a, vector unsigned long long __b);
vector unsigned long long vec_eqv(vector unsigned long long __a, vector bool long long __b);
vector float vec_eqv(vector bool int __a, vector float __b);
vector float vec_eqv(vector float __a, vector bool int __b);
vector double vec_eqv(vector bool long long __a, vector double __b);
vector double vec_eqv(vector double __a, vector bool long long __b);
vector unsigned short vec_nand(vector bool short __a, vector unsigned short __b);
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248813 91177308-0d34-0410-b5e6-96231b3b80d8
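For illustration, a small sketch exercising three of the interfaces listed above (assumes a POWER8 target with the relevant features enabled):
#include <altivec.h>
vector signed long long demo(void) {
  vector signed long long v = vec_splats(7LL);  // vec_splats(signed long long)
  signed long long x = vec_extract(v, 1);       // vec_extract(vector signed long long, int)
  return vec_insert(x + 1, v, 0);               // vec_insert(signed long long, vector signed long long, int)
}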
Sema thinks the cast is a no-op, as it does when (e.g.) the
only thing that changes is an alignment attribute.
Fixed PR24944.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248775 91177308-0d34-0410-b5e6-96231b3b80d8
Currently it's 64-bit, which will lead to a mismatch between host and
device code if we compile for i386.
Differential Revision: http://reviews.llvm.org/D13181
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248753 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of the DSP feature over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248521 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Strictly speaking, the MIPS*R2 ISAs should not permit -mnan=2008 since this
feature was added in MIPS*R3. However, other toolchains permit this and we
should do the same.
Reviewers: atanasyan
Subscribers: cfe-commits
Differential Revision: http://reviews.llvm.org/D13057
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248481 91177308-0d34-0410-b5e6-96231b3b80d8
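A hypothetical command line that is now accepted rather than rejected (target triple and file name chosen purely for illustration):
clang -target mips-linux-gnu -march=mips32r2 -mnan=2008 -c foo.c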
This commit fixes an assert that is triggered when optnone is being
added to an IR function that is already marked with minsize and optsize.
rdar://problem/22723716
Differential Revision: http://reviews.llvm.org/D13004
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248191 91177308-0d34-0410-b5e6-96231b3b80d8
This was committed without the code review (http://reviews.llvm.org/D12938) being approved.
This reverts commit r248154.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248173 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, the availability of DSP instructions (ACLE 6.4.7) is handled in
a hand-rolled tricky condition block in lib/Basic/Targets.cpp, with a FIXME:
attached.
http://reviews.llvm.org/D12937 moved the handling of +t2dsp over to
ARMTargetParser.def in LLVM, to be in line with other architecture extensions.
This is the corresponding patch to clang, to clear the FIXME: and update
the tests.
Differential Revision: http://reviews.llvm.org/D12938
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248154 91177308-0d34-0410-b5e6-96231b3b80d8
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.
This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottom-most subvector) and __builtin_convertvector (to actually perform the sign extension).
Differential Revision: http://reviews.llvm.org/D12835
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@248092 91177308-0d34-0410-b5e6-96231b3b80d8
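For illustration, a sketch of that pattern (hypothetical helper and typedef names; the in-tree header differs in details):
#include <emmintrin.h>
typedef signed char v16qs __attribute__((__vector_size__(16)));
typedef short       v8hs  __attribute__((__vector_size__(16)));
static inline __m128i my_cvtepi8_epi16(__m128i V) {
  // Take the low 8 bytes as an explicitly signed vector, then sign-extend
  // each element to 16 bits with a generic vector conversion.
  return (__m128i)__builtin_convertvector(
      __builtin_shufflevector((v16qs)V, (v16qs)V, 0, 1, 2, 3, 4, 5, 6, 7),
      v8hs);
}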
Example:
typedef int __td3;
#pragma weak td3 = __td3
Differential Revision: http://reviews.llvm.org/D12904
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247975 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change adds support for `__builtin_ms_va_list`, a GCC extension for
variadic `ms_abi` functions. The existing `__builtin_va_list` support is
inadequate for this because `va_list` is defined differently in the Win64
ABI vs. the System V/AMD64 ABI.
Depends on D1622.
Reviewers: rsmith, rnk, rjmccall
CC: cfe-commits
Differential Revision: http://reviews.llvm.org/D1623
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247941 91177308-0d34-0410-b5e6-96231b3b80d8
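A minimal sketch of a variadic ms_abi function using the new builtin (assumes an x86-64 target; argument consumption elided):
void __attribute__((ms_abi)) ms_vararg(int count, ...) {
  __builtin_ms_va_list ap;
  __builtin_ms_va_start(ap, count);
  /* arguments would be read here using the Win64 va_list layout */
  __builtin_ms_va_end(ap);
}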
Mingw generally wraps an old copy of msvcrt.dll which has these
personalities, so things should work out, or so I hear. I haven't tested
it.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247902 91177308-0d34-0410-b5e6-96231b3b80d8
fixed the tests.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247892 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247883 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247882 91177308-0d34-0410-b5e6-96231b3b80d8
convert i64 to FP and vice versa
reduceps & reducepd
rangeps & rangepd
all in their 512bit versions
Differential Revision: http://reviews.llvm.org/D11716
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247881 91177308-0d34-0410-b5e6-96231b3b80d8
"opt -instnamer".
It reverts r231717.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247667 91177308-0d34-0410-b5e6-96231b3b80d8
Generating call assume(icmp %vtable, %global_vtable) after constructor
call for devirtualization purposes.
For more info go to:
http://lists.llvm.org/pipermail/cfe-dev/2015-July/044227.html
Edit:
This is a fixed version addressing PR24479 and another bug hit in Chrome.
An earlier version was reverted because of a ScalarEvolution bug (D12719).
Merged after John McCall's big patch (Added Address).
http://reviews.llvm.org/D11859
http://reviews.llvm.org/D12865
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247646 91177308-0d34-0410-b5e6-96231b3b80d8
Revert "Update cxx-irgen.cpp test to allow signext in alwaysinline functions."
Revert "[CodeGen] Remove wrapper-free always_inline functions from COMDATs"
Revert "Always_inline codegen rewrite."
Reason for revert: PR24793.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247620 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Implement DR262 (for C). This patch will mainly affect bitfields of type _Bool
Reviewers: fraggamuffin, rsmith
Subscribers: hubert.reinterpretcast, cfe-commits
Differential Revision: http://reviews.llvm.org/D10018
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247618 91177308-0d34-0410-b5e6-96231b3b80d8
Follow up to r247546. The test case reproduces the problem fixed by this commit.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247548 91177308-0d34-0410-b5e6-96231b3b80d8
The current implementation may end up emitting an undefined reference for
an "inline __attribute__((always_inline))" function by generating an
"available_externally alwaysinline" IR function for it and then failing to
inline all the calls. This happens when a call to such function is in dead
code. As the inliner is an SCC pass, it does not process dead code.
Libc++ relies on the compiler never emitting such undefined reference.
With this patch, we emit a pair of
1. internal alwaysinline definition (called F.alwaysinline)
2a. A stub F() { musttail call F.alwaysinline }
-- or, depending on the linkage --
2b. A declaration of F.
The frontend ensures that F.alwaysinline is only used for direct
calls, and the stub is used for everything else (taking the address of
the function, really). Declaration (2b) is emitted in the case when
"inline" is meant for inlining only (like __gnu_inline__ and some
other cases).
This approach, among other nice properties, ensures that alwaysinline
functions are always internal, making it impossible for a direct call
to such function to produce an undefined symbol reference.
This patch is based on ideas by Chandler Carruth and Richard Smith.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247494 91177308-0d34-0410-b5e6-96231b3b80d8
Revert "Always_inline codegen rewrite."
Breaks gdb & lldb tests.
Breaks on Fedora 22 x86_64.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@247491 91177308-0d34-0410-b5e6-96231b3b80d8