Similar to the resolution of gcc PR71876.
Nobody uses them or needs the [-Wincomplete-setjmp-declaration]
diagnostic.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@372299 91177308-0d34-0410-b5e6-96231b3b80d8
The clang intrinsic __builtin_preserve_access_index() currently
has signature:
const void * __builtin_preserve_access_index(const void * ptr)
This may cause a compiler warning when:
- the parameter type is "volatile void *" or "const volatile void *", or
- the type the result is assigned to does not have a "const" qualifier.
Further, this signature does not allow dereferencing the
builtin's result pointer, since it is a "const void *" type, which
adds an extra type-casting step for the user.
Let us change the signature to:
PointerT __builtin_preserve_access_index(PointerT ptr)
such that the result and argument types are the same.
With this, directly dereferencing the builtin return value
becomes possible.
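As an illustrative sketch (the struct and field names here are made up, not from the patch), the same-type signature now allows code like:
```
/* With the PointerT-in/PointerT-out signature, the builtin returns the same
 * pointer type it was given, so the result can be dereferenced directly
 * instead of first casting away "const void *". */
struct pkt { int len; };

int get_len(struct pkt *p) {
  /* The argument is "int *", so the result is "int *" as well. */
  return *__builtin_preserve_access_index(&p->len);
}
```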
Differential Revision: https://reviews.llvm.org/D67734
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@372294 91177308-0d34-0410-b5e6-96231b3b80d8
The signature of __builtin_rotateright64 was mistakenly changed from
unsigned to signed in r360863; this patch changes it back to
unsigned as intended.
This fixes PR43309.
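As a small usage sketch (values chosen arbitrarily, not part of the patch):
```
#include <stdint.h>

/* With the unsigned signature restored, rotating an unsigned 64-bit value
 * yields an unsigned result, e.g. ror3(1) == 0x2000000000000000. */
uint64_t ror3(uint64_t x) {
  return __builtin_rotateright64(x, 3);
}
```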
Reviewers: efriedma, hans
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D67606
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@371969 91177308-0d34-0410-b5e6-96231b3b80d8
The current implementation of the OpenCL builtin library uses 2
different hacks to get to the corresponding IR intrinsics from the
source. This will allow removing those.
This is the set that is currently used (minus a few vector ones).
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@367973 91177308-0d34-0410-b5e6-96231b3b80d8
This patch ensures built-in functions are rewritten using the proper
parent declaration.
Existing tests are modified to run in C++ mode to ensure the
functionality also works with C++ for OpenCL, without increasing the
testing runtime.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@365499 91177308-0d34-0410-b5e6-96231b3b80d8
For background of BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs that are
adjustable to struct/union layout changes, so the same
program can run on multiple kernels after being adjusted
before loading based on the native kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEPs, and since
a union is replaced by a structure in IR it is impossible to track
the field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
If these intrinsics are used blindly, i.e., all GEPs are transformed
to these intrinsics and later reduced back to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics apply only to
those GEPs. Such a builtin will prevent performance degradation
for people who do not use CO-RE, even for programs which use
bpf_probe_read().
For example, given the following source:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
Differential Revision: https://reviews.llvm.org/D61809
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@365438 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r365435.
Forgot to add the Differential Revision link. Will add it to the
commit message and resubmit.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@365436 91177308-0d34-0410-b5e6-96231b3b80d8
For background of BPF CO-RE project, please refer to
http://vger.kernel.org/bpfconf2019.html
In summary, BPF CO-RE intends to compile bpf programs that are
adjustable to struct/union layout changes, so the same
program can run on multiple kernels after being adjusted
before loading based on the native kernel structures.
In order to do this, we need to keep track of the GEP (getelementptr)
instruction base and result debuginfo types, so we
can adjust on the host based on kernel BTF info.
Capturing such information as an IR optimization is hard,
as various optimizations may have tweaked the GEPs, and since
a union is replaced by a structure in IR it is impossible to track
the field index for union member accesses.
Three intrinsic functions, preserve_{array,union,struct}_access_index,
are introduced.
addr = preserve_array_access_index(base, index, dimension)
addr = preserve_union_access_index(base, di_index)
addr = preserve_struct_access_index(base, gep_index, di_index)
here,
base: the base pointer for the array/union/struct access.
index: the last access index for array, the same for IR/DebugInfo layout.
dimension: the array dimension.
gep_index: the access index based on IR layout.
di_index: the access index based on user/debuginfo types.
If these intrinsics are used blindly, i.e., all GEPs are transformed
to these intrinsics and later reduced back to GEPs, we have
seen up to 7% more instructions generated. To avoid such an overhead,
a clang builtin is proposed:
base = __builtin_preserve_access_index(base)
such that the user wraps to-be-relocated GEPs in this builtin
and the preserve_*_access_index intrinsics apply only to
those GEPs. Such a builtin will prevent performance degradation
for people who do not use CO-RE, even for programs which use
bpf_probe_read().
For example, given the following source:
$ cat test.c
struct sk_buff {
int i;
int b1:1;
int b2:2;
union {
struct {
int o1;
int o2;
} o;
struct {
char flags;
char dev_id;
} dev;
int netid;
} u[10];
};
static int (*bpf_probe_read)(void *dst, int size, const void *unsafe_ptr)
= (void *) 4;
#define _(x) (__builtin_preserve_access_index(x))
int bpf_prog(struct sk_buff *ctx) {
char dev_id;
bpf_probe_read(&dev_id, sizeof(char), _(&ctx->u[5].dev.dev_id));
return dev_id;
}
$ clang -target bpf -O2 -g -emit-llvm -S -mllvm -print-before-all \
test.c >& log
The generated IR looks like below:
...
define dso_local i32 @bpf_prog(%struct.sk_buff*) #0 !dbg !15 {
%2 = alloca %struct.sk_buff*, align 8
%3 = alloca i8, align 1
store %struct.sk_buff* %0, %struct.sk_buff** %2, align 8, !tbaa !45
call void @llvm.dbg.declare(metadata %struct.sk_buff** %2, metadata !43, metadata !DIExpression()), !dbg !49
call void @llvm.lifetime.start.p0i8(i64 1, i8* %3) #4, !dbg !50
call void @llvm.dbg.declare(metadata i8* %3, metadata !44, metadata !DIExpression()), !dbg !51
%4 = load i32 (i8*, i32, i8*)*, i32 (i8*, i32, i8*)** @bpf_probe_read, align 8, !dbg !52, !tbaa !45
%5 = load %struct.sk_buff*, %struct.sk_buff** %2, align 8, !dbg !53, !tbaa !45
%6 = call [10 x %union.anon]* @llvm.preserve.struct.access.index.p0a10s_union.anons.p0s_struct.sk_buffs(
%struct.sk_buff* %5, i32 2, i32 3), !dbg !53, !llvm.preserve.access.index !19
%7 = call %union.anon* @llvm.preserve.array.access.index.p0s_union.anons.p0a10s_union.anons(
[10 x %union.anon]* %6, i32 1, i32 5), !dbg !53
%8 = call %union.anon* @llvm.preserve.union.access.index.p0s_union.anons.p0s_union.anons(
%union.anon* %7, i32 1), !dbg !53, !llvm.preserve.access.index !26
%9 = bitcast %union.anon* %8 to %struct.anon.0*, !dbg !53
%10 = call i8* @llvm.preserve.struct.access.index.p0i8.p0s_struct.anon.0s(
%struct.anon.0* %9, i32 1, i32 1), !dbg !53, !llvm.preserve.access.index !34
%11 = call i32 %4(i8* %3, i32 1, i8* %10), !dbg !52
%12 = load i8, i8* %3, align 1, !dbg !54, !tbaa !55
%13 = sext i8 %12 to i32, !dbg !54
call void @llvm.lifetime.end.p0i8(i64 1, i8* %3) #4, !dbg !56
ret i32 %13, !dbg !57
}
!19 = distinct !DICompositeType(tag: DW_TAG_structure_type, name: "sk_buff", file: !3, line: 1, size: 704, elements: !20)
!26 = distinct !DICompositeType(tag: DW_TAG_union_type, scope: !19, file: !3, line: 5, size: 64, elements: !27)
!34 = distinct !DICompositeType(tag: DW_TAG_structure_type, scope: !26, file: !3, line: 10, size: 16, elements: !35)
Note that @llvm.preserve.{struct,union}.access.index calls have metadata llvm.preserve.access.index
attached to instructions to provide struct/union debuginfo type information.
For &ctx->u[5].dev.dev_id,
. The "%6 = ..." represents struct member "u" with index 2 for IR layout and index 3 for DI layout.
. The "%7 = ..." represents array subscript "5".
. The "%8 = ..." represents union member "dev" with index 1 for DI layout.
. The "%10 = ..." represents struct member "dev_id" with index 1 for both IR and DI layout.
Basically, by traversing the use-def chain recursively for the 3rd argument of bpf_probe_read() and
examining all preserve_*_access_index calls, the debuginfo struct/union/array access indices
can be recovered.
The intrinsics also contain enough information to regenerate code for the IR layout.
For array and structure intrinsics, the proper GEP can be constructed.
For union intrinsics, replacing all uses of "addr" with "base" should be enough.
Signed-off-by: Yonghong Song <yhs@fb.com>
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@365435 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit SVN r364137. It seems to cause problems with
casting in C.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@364147 91177308-0d34-0410-b5e6-96231b3b80d8
This is reduced from MSVC's MSVCPRT 14.21.27702 atomic header. Because
Windows is an LLP64 environment, `long`, `long int`, and `int` are all
synonymous. Change the signature for `__iso_volatile_load32` and
`__iso_volatile_store32` to accept a `long int` instead. This allows
an implicit cast of `int` to `long int` while also permitting `long`
to be accepted.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@364137 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: According to the C99 standard, long long is at least 64 bits in
size. However, OpenCL C defines long long as a 128 bit signed
integer. This prevents one from using x86 builtins when compiling OpenCL C
code for x86 targets. The patch changes long long to long for OpenCL
only.
Patch by: Alexander Batashev <alexander.batashev@intel.com>
Reviewers: craig.topper, Ka-Ka, eandrews, erichkeane, Anastasia
Reviewed By: Ka-Ka, erichkeane, Anastasia
Subscribers: a.elovikov, yaxunl, Anastasia, cfe-commits, ivankara, etyurin, asavonic
Tags: #clang
Differential Revision: https://reviews.llvm.org/D62580
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@362391 91177308-0d34-0410-b5e6-96231b3b80d8
is 32/64 bits
Summary:
The definitions of the builtins __builtin_bswap32, __builtin_bitreverse32, __builtin_rotateleft32 and __builtin_rotateright32 rely on the int type being 32 bits wide on the target.
The definitions of the builtins __builtin_bswap64, __builtin_bitreverse64, __builtin_rotateleft64, and __builtin_rotateright64 rely on the long long type being 64 bits wide.
On targets where this is not the case (e.g. AVR), clang will generate faulty code (wrong llvm intrinsics).
This patch adds support for using 'Z' (the int32_t type) in Builtins.def. The builtins above are changed to be based on the int32_t type instead of the int type, and the int64_t type instead of the long long type.
The AVR backend (experimental) has a native int type that is only 16 bits wide. The supplied testcase will therefore fail if run on trunk, as clang will convert e.g. __builtin_bitreverse32 into llvm.bitreverse.i16 on AVR.
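For illustration (my own example, not from the patch), the user-visible contract is that the 32-bit builtins operate on a fixed 32-bit type even when the target's int is narrower:
```
#include <stdint.h>

/* With the 'Z' (int32_t) signature, this lowers to llvm.bswap.i32 even on a
 * 16-bit-int target such as AVR, instead of a truncated 16-bit operation. */
uint32_t swap_endian(uint32_t v) {
  return __builtin_bswap32(v);  /* 0x11223344 -> 0x44332211 */
}
```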
Reviewers: dylanmckay, spatel, rsmith, efriedma
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D61845
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@360863 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch implements `__builtin_is_constant_evaluated` as specified by [P0595R2](https://wg21.link/p0595r2). It is built on the back of Bill Wendling's work for `__builtin_constant_p()`.
More tests to come, but early feedback is appreciated.
I plan to implement warnings for common misuses like the one below in a following patch:
```
void foo(int x) {
if constexpr (std::is_constant_evaluated()) { // condition is always `true`. Should use plain `if` instead.
foo_constexpr(x);
} else {
foo_runtime(x);
}
}
```
Reviewers: rsmith, MaskRay, bruno, void
Reviewed By: rsmith
Subscribers: dexonsmith, zoecarver, fdeazeve, kristina, cfe-commits
Differential Revision: https://reviews.llvm.org/D55500
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@359067 91177308-0d34-0410-b5e6-96231b3b80d8
Future versions of MSVC make these intrinsics available on x86 & x64,
according to:
http://lists.llvm.org/pipermail/cfe-dev/2019-March/061711.html
The purpose of these builtins is to emit plain, non-atomic, volatile
stores when /volatile:ms (-cc1 -fms-volatile) is enabled.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@357220 91177308-0d34-0410-b5e6-96231b3b80d8
r355322 fixed this; however, it is being reverted due to concerns with
enabling it in other modes.
Change-Id: I6a939b7469b8fa196d5871a627eb2330dbd30f29
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@355698 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit 24400dafe16716f28cd0e7e5fa6e004c0e50686a.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@355697 91177308-0d34-0410-b5e6-96231b3b80d8
The above builtins are currently implemented for MSVC mode; however, GCC
also implements these. This patch enables them for all platforms.
Additionally, this corrects the type for these builtins to always be
'long int' to match the specification in the Intel Intrinsics Guide.
Change-Id: Ida34be98078709584ef5136c8761783435ec02b1
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@355322 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This makes it consistent with `memcmp` and `__builtin_bcmp`.
Also see the discussion in https://reviews.llvm.org/D56593.
Reviewers: jyknight
Subscribers: kristina, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D58120
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@354023 91177308-0d34-0410-b5e6-96231b3b80d8
This builtin has the same interface as __builtin_object_size, but has the
potential to be evaluated dynamically. It is meant to be used as a
drop-in replacement for libraries that use __builtin_object_size when
a dynamic checking mode is enabled. For instance,
__builtin_object_size fails to provide any extra checking in the
following function:
void f(size_t alloc) {
char* p = malloc(alloc);
strcpy(p, "foobar"); // expands to __builtin___strcpy_chk(p, "foobar", __builtin_object_size(p, 0))
}
This is an overflow if alloc < 7, but because LLVM can't fold the
object size intrinsic statically, it folds __builtin_object_size to
-1. With __builtin_dynamic_object_size, alloc is passed through to
__builtin___strcpy_chk.
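A hedged sketch of how a fortifying wrapper might use the dynamic form (the wrapper name is invented for illustration):
```
#include <stdlib.h>

/* Hypothetical fortified copy: with type 0, the builtin may be lowered to an
 * llvm.objectsize call that is resolved at run time to "alloc" rather than
 * folding to -1 at compile time. */
char *copy_checked(size_t alloc, const char *src) {
  char *p = malloc(alloc);
  return __builtin___strcpy_chk(p, src, __builtin_dynamic_object_size(p, 0));
}
```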
rdar://32212419
Differential revision: https://reviews.llvm.org/D56760
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@352665 91177308-0d34-0410-b5e6-96231b3b80d8
to reflect the new license.
We understand that people may be surprised that we're moving the header
entirely to discuss the new license. We checked this carefully with the
Foundation's lawyer and we believe this is the correct approach.
Essentially, all code in the project is now made available by the LLVM
project under our new license, so you will see that the license headers
include that license only. Some of our contributors have contributed
code under our old license, and accordingly, we have retained a copy of
our old license notice in the top-level files in each project and
repository.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@351636 91177308-0d34-0410-b5e6-96231b3b80d8
With commit r351627, LLVM gained the ability to apply (existing) IPO
optimizations on indirections through callbacks, or transitive calls.
The general idea is that we use an abstraction to hide the middle man
and represent the callback call in the context of the initial caller.
It is described in more detail in the commit message of the LLVM patch
r351627, the llvm::AbstractCallSite class description, and the
language reference section on callback-metadata.
This commit enables clang to emit !callback metadata that is
understood by LLVM. It does so in three different cases:
1) For known broker function declarations that are directly
generated, e.g., __kmpc_fork_call for the OpenMP parallel pragma.
2) For known broker functions that are identified by their name and
source location through the builtin detection, e.g.,
pthread_create from the POSIX thread API.
3) For user annotated functions that carry the "callback(callee, ...)"
attribute. The attribute has to include the name, or index, of
the callback callee and how the passed arguments can be
identified (as many as the callback callee has). See the callback
attribute documentation for detailed information.
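As an illustrative sketch of case 3 (all names invented), a user-annotated broker could look like this:
```
/* The callback attribute names the callee parameter ("cb") and the argument
 * it will be passed ("data"), so IPO can reason about the transitive call
 * from start() to worker() through run_async(). */
__attribute__((callback(cb, data)))
static void run_async(void (*cb)(void *), void *data) {
  cb(data);  /* broker invokes the callback with the payload */
}

static void worker(void *arg) { (void)arg; }

void start(void) {
  int payload = 42;
  run_async(worker, &payload);
}
```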
Differential Revision: https://reviews.llvm.org/D55483
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@351629 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch adds `__builtin_launder`, which is required to implement `std::launder`. Additionally, GCC provides `__builtin_launder`, so this brings Clang in line with GCC.
I'm not exactly sure what magic `__builtin_launder` requires, but based on previous discussions this patch applies a `@llvm.invariant.group.barrier`. As noted in previous discussions, this may not be enough to correctly handle vtables.
Reviewers: rnk, majnemer, rsmith
Reviewed By: rsmith
Subscribers: kristina, Romain-Geissler-1A, erichkeane, amharc, jroelofs, cfe-commits, Prazek
Differential Revision: https://reviews.llvm.org/D40218
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@349195 91177308-0d34-0410-b5e6-96231b3b80d8
for MS compatibility. Remove declarations from intrin.h and implementations from lzcntintrin.h
intrin.h had forward declarations for these and lzcntintrin.h had implementations that were only available with -mlzcnt or a -march that supported the lzcnt feature.
For MS compatibility we should always have these builtins available regardless of whether X86 is the target or the CPU supports the lzcnt instruction. The backends should be able to gracefully fall back to something supported even if it's just shifts and bit ops.
Unfortunately, gcc also implements 2 of the 3 function names here on X86 when the lzcnt feature is enabled.
This patch adds builtins for these for MSVC compatibility and drops the forward declarations from intrin.h. To keep gcc compatibility, the two intrinsics that collided have been turned into macros that use the X86 specific builtins with the lzcnt feature check. These macros are only defined when _MSC_VER is not defined. Without them being macros we can get a redefinition error because -fms-extensions doesn't seem to set _MSC_VER but does make the MS builtins available.
Should fix PR40014
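A usage sketch (assuming the MSVC-documented signatures), for illustration only:
```
/* In MS-compatibility builds these builtins are always available; on CPUs
 * without the lzcnt instruction the backend emits an equivalent sequence. */
unsigned int leading_zeros32(unsigned int v) {
  return __lzcnt(v);            /* e.g. __lzcnt(1u) == 31 */
}

unsigned long long leading_zeros64(unsigned long long v) {
  return __lzcnt64(v);
}
```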
Differential Revision: https://reviews.llvm.org/D55677
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@349098 91177308-0d34-0410-b5e6-96231b3b80d8
The custom handling seems to all be implemented already.
This avoids regressions in a future patch when float vectors
are ordinarily promoted to double vectors in variadic calls.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@347873 91177308-0d34-0410-b5e6-96231b3b80d8
Ensure __builtin_(add|sub|mul)_overflow return bool instead of void as per
specification (LanguageExtensions).
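A short illustration (my own example) of the bool result driving control flow directly:
```
#include <stdbool.h>
#include <limits.h>

/* __builtin_add_overflow stores a + b into *out and returns true iff the
 * addition overflowed, so the result can be used directly in a condition. */
bool add_clamped(int a, int b, int *out) {
  if (__builtin_add_overflow(a, b, out)) {
    *out = (a > 0) ? INT_MAX : INT_MIN;  /* clamp on overflow */
    return false;
  }
  return true;
}
```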
Differential Revision: https://reviews.llvm.org/D52875
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@344891 91177308-0d34-0410-b5e6-96231b3b80d8
Reviewers: rnk, mstorsjo, compnerd, TomTan, haripul, efriedma
Reviewed By: efriedma
Subscribers: efriedma, kristof.beyls, chrib, jfb, cfe-commits
Differential Revision: https://reviews.llvm.org/D52807
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@343881 91177308-0d34-0410-b5e6-96231b3b80d8
This is a partial retry of rL340137 (reverted at rL340138 because of gcc host compiler crashing)
with 1 change:
Remove the changes to make microsoft builtins also use the LLVM intrinsics.
This exposes the LLVM funnel shift intrinsics as more familiar bit rotation functions in clang
(when both halves of a funnel shift are the same value, it's a rotate).
We're free to name these as we want because we're not copying gcc, but if there's some other
existing art (eg, the microsoft ops) that we want to replicate, we can change the names.
The funnel shift intrinsics were added here:
https://reviews.llvm.org/D49242
With improved codegen in:
https://reviews.llvm.org/rL337966
https://reviews.llvm.org/rL339359
And basic IR optimization added in:
https://reviews.llvm.org/rL338218
https://reviews.llvm.org/rL340022
...so these are expected to produce asm output that's equal or better to the multi-instruction
alternatives using primitive C/IR ops.
In the motivating loop example from PR37387:
https://bugs.llvm.org/show_bug.cgi?id=37387#c7
...we get the expected 'rolq' x86 instructions if we substitute the rotate builtin into the source.
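A brief usage sketch (arbitrary constants, not from the patch):
```
#include <stdint.h>

/* The rotate builtins come in 8/16/32/64-bit widths; each lowers to the
 * corresponding funnel shift intrinsic with both inputs the same value. */
uint32_t mix(uint32_t h, uint32_t v) {
  h ^= v;
  return __builtin_rotateleft32(h, 13);
}
```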
Differential Revision: https://reviews.llvm.org/D50924
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@340141 91177308-0d34-0410-b5e6-96231b3b80d8
At least a couple of bots (gcc host compiler on PPC only?) are showing the compiler dying while trying to compile.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@340138 91177308-0d34-0410-b5e6-96231b3b80d8
This is a retry of rL340135 (reverted at rL340136 because of gcc host compiler crashing)
with 2 changes:
1. Move the code into a helper to reduce code duplication (and hopefully work-around the crash).
2. The original commit had a formatting bug in the docs (missing an underscore).
Original commit message:
This exposes the LLVM funnel shift intrinsics as more familiar bit rotation functions in clang
(when both halves of a funnel shift are the same value, it's a rotate).
We're free to name these as we want because we're not copying gcc, but if there's some other
existing art (eg, the microsoft ops that are modified in this patch) that we want to replicate,
we can change the names.
The funnel shift intrinsics were added here:
https://reviews.llvm.org/D49242
With improved codegen in:
https://reviews.llvm.org/rL337966
https://reviews.llvm.org/rL339359
And basic IR optimization added in:
https://reviews.llvm.org/rL338218
https://reviews.llvm.org/rL340022
...so these are expected to produce asm output that's equal or better to the multi-instruction
alternatives using primitive C/IR ops.
In the motivating loop example from PR37387:
https://bugs.llvm.org/show_bug.cgi?id=37387#c7
...we get the expected 'rolq' x86 instructions if we substitute the rotate builtin into the source.
Differential Revision: https://reviews.llvm.org/D50924
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@340137 91177308-0d34-0410-b5e6-96231b3b80d8
At least a couple of bots (PPC only?) are showing the compiler dying while trying to compile:
http://lab.llvm.org:8011/builders/clang-ppc64be-linux-multistage/builds/11065/steps/build%20stage%201/logs/stdio
http://lab.llvm.org:8011/builders/clang-ppc64be-linux-lnt/builds/18267/steps/build%20stage%201/logs/stdio
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@340136 91177308-0d34-0410-b5e6-96231b3b80d8
This exposes the LLVM funnel shift intrinsics as more familiar bit rotation functions in clang
(when both halves of a funnel shift are the same value, it's a rotate).
We're free to name these as we want because we're not copying gcc, but if there's some other
existing art (eg, the microsoft ops that are modified in this patch) that we want to replicate,
we can change the names.
The funnel shift intrinsics were added here:
D49242
With improved codegen in:
rL337966
rL339359
And basic IR optimization added in:
rL338218
rL340022
...so these are expected to produce asm output that's equal or better to the multi-instruction
alternatives using primitive C/IR ops.
In the motivating loop example from PR37387:
https://bugs.llvm.org/show_bug.cgi?id=37387#c7
...we get the expected 'rolq' x86 instructions if we substitute the rotate builtin into the source.
Differential Revision: https://reviews.llvm.org/D50924
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@340135 91177308-0d34-0410-b5e6-96231b3b80d8
gcc defines an intrinsic called __builtin_clrsb which counts the number of extra sign bits on a number. This is equivalent to counting the number of leading zeros on a positive number or the number of leading ones on a negative number and subtracting one from the result. Since we can't count leading ones we need to invert negative numbers to count zeros.
This patch will cause the builtin to be expanded inline while gcc uses a call to a function like clrsbdi2 that is implemented in libgcc. But this is similar to what we already do for popcnt. And I don't think compiler-rt supports clrsbdi2.
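To make the semantics concrete, a small sketch (my own example):
```
/* __builtin_clrsb(x) returns the number of redundant sign bits, i.e. the
 * number of bits following the most significant bit that are identical to
 * it. For a 32-bit int:
 *   __builtin_clrsb(0)  == 31
 *   __builtin_clrsb(-1) == 31
 *   __builtin_clrsb(1)  == 30  */
int redundant_sign_bits(int x) {
  return __builtin_clrsb(x);
}
```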
Differential Revision: https://reviews.llvm.org/D50168
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@339282 91177308-0d34-0410-b5e6-96231b3b80d8
__builtin_memmove (in non-type-punning cases).
This is intended to permit libc++ to make std::copy etc constexpr
without sacrificing the optimization that uses memcpy on
trivially-copyable types.
__builtin_strcpy and __builtin_wcscpy are not handled by this change.
They'd be straightforward to add, but we haven't encountered a need for
them just yet.
This reinstates r338455, reverted in r338602, with a fix to avoid trying
to constant-evaluate a memcpy call if either pointer operand has an
invalid designator.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@338941 91177308-0d34-0410-b5e6-96231b3b80d8
__builtin_memcpy and __builtin_memmove (in non-type-punning cases)."
It caused asserts during Chromium builds, see reply on the cfe-commits thread.
> This is intended to permit libc++ to make std::copy etc constexpr
> without sacrificing the optimization that uses memcpy on
> trivially-copyable types.
>
> __builtin_strcpy and __builtin_wcscpy are not handled by this change.
> They'd be straightforward to add, but we haven't encountered a need for
> them just yet.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@338602 91177308-0d34-0410-b5e6-96231b3b80d8
__builtin_memmove (in non-type-punning cases).
This is intended to permit libc++ to make std::copy etc constexpr
without sacrificing the optimization that uses memcpy on
trivially-copyable types.
__builtin_strcpy and __builtin_wcscpy are not handled by this change.
They'd be straightforward to add, but we haven't encountered a need for
them just yet.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@338455 91177308-0d34-0410-b5e6-96231b3b80d8
sed -Ei 's/[[:space:]]+$//' include/**/*.{def,h,td} lib/**/*.{cpp,h}
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@338291 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Introduce built-ins to read the unsafe stack top and bottom. The unsafe
stack top is required to implement garbage collection scanning for
Oilpan. Currently there is already a built-in 'get_unsafe_stack_start'
to read the bottom of the unsafe stack, but I chose to duplicate this
API because 'start' is ambiguous (e.g. Oilpan uses WTF::GetStackStart to
read the safe stack top.)
Reviewers: pcc
Reviewed By: pcc
Subscribers: llvm-commits, kcc
Differential Revision: https://reviews.llvm.org/D49152
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@337037 91177308-0d34-0410-b5e6-96231b3b80d8
Differential Revision: http://reviews.llvm.org/D49068
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@336658 91177308-0d34-0410-b5e6-96231b3b80d8
width. Add a min_vector_width function attribute and tag all x86 intrinsics with it
This is part of an ongoing attempt at making 512 bit vectors illegal in the X86 backend type legalizer due to CPU frequency penalties associated with wide vectors on Skylake Server CPUs. We want the loop vectorizer to be able to emit IR containing wide vectors as intermediate operations in vectorized code and allow these wide vectors to be legalized to 256 bits by the X86 backend even though we are targeting a CPU that supports 512 bit vectors. This is similar to what happens with an AVX2 CPU: the vectorizer can emit wide vectors and the backend will split them. We want this splitting behavior, but still be able to use new Skylake instructions that work on 256-bit vectors and support things like masking and gather/scatter.
Of course if the user uses explicit vector code in their source code we need to not split those operations. Especially if they have used any of the 512-bit vector intrinsics from immintrin.h. And we need to make it so that merely using the intrinsics produces the expected code in order to be backwards compatible.
To support this goal, this patch adds a new IR function attribute "min-legal-vector-width" that can indicate the need for a minimum vector width to be legal in the backend. We need to ensure this attribute is set to the largest vector width needed by any intrinsics from immintrin.h that the function uses. The inliner will be responsible for merging this attribute when a function is inlined. We may also need a way to limit inlining in the future as well, but we can discuss that in the future.
To make things more complicated, there are two different ways intrinsics are implemented in immintrin.h. Either as an always_inline function containing calls to builtins (which can be target specific or target independent) or vector extension code. Or as a macro wrapper around a target specific builtin. I believe I've removed all cases where the macro was around a target independent builtin.
To support the always_inline function case this patch adds attribute((min_vector_width(128))) that can be used to tag these functions with their vector width. All x86 intrinsic functions that operate on vectors have been tagged with this attribute.
To support the macro case, all x86 specific builtins have also been tagged with the vector width that they require. Use of any builtin with this property will implicitly increase the min_vector_width of the function that calls it. I've done this as a new property in the attribute string for the builtin rather than basing it on the type string so that we can opt into it on a per builtin basis and avoid any impact to target independent builtins.
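A hedged sketch of the always_inline header pattern described above (simplified, with a generic vector type and an invented function name rather than a real immintrin.h intrinsic):
```
typedef int v16si __attribute__((vector_size(64)));  /* 512-bit vector */

/* min_vector_width(512) tells the backend that 512-bit vectors must remain
 * legal in any function this wrapper is inlined into, instead of being
 * split to 256 bits. */
static __inline__ v16si
__attribute__((__always_inline__, __min_vector_width__(512)))
my_add_epi32(v16si a, v16si b) {
  return a + b;
}
```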
There will be future work to support vectors passed as function arguments and supporting inline assembly. And whatever else we can find that isn't covered by this patch.
Special thanks to Chandler who suggested this direction and reviewed a preview version of this patch. And thanks to Eric Christopher who has had many conversations with me about this issue.
Differential Revision: https://reviews.llvm.org/D48617
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@336583 91177308-0d34-0410-b5e6-96231b3b80d8
https://bugs.llvm.org/show_bug.cgi?id=37778
...shows a miscompile resulting from marking nan builtins as 'const'.
The nan libcalls/builtins take a pointer argument:
http://www.cplusplus.com/reference/cmath/nan-function/
...and the chars dereferenced by that arg are used to fill in the NaN constant payload bits.
"const" means that the pointer argument isn't dereferenced. That's translated to "readnone" in LLVM.
"pure" means that the pointer argument may be dereferenced. That's translated to "readonly" in LLVM.
This change prevents the IR optimizer from killing the lead-up to the nan call here:
double a() {
char buf[4];
buf[0] = buf[1] = buf[2] = '9';
buf[3] = '\0';
return __builtin_nan(buf);
}
...the optimizer isn't currently able to simplify this to a constant as we might hope,
but this patch should solve the miscompile.
Differential Revision: https://reviews.llvm.org/D48134
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@334628 91177308-0d34-0410-b5e6-96231b3b80d8
Adds support for these intrinsics, which are ARM and ARM64 only:
_interlockedbittestandreset_acq
_interlockedbittestandreset_rel
_interlockedbittestandreset_nf
_interlockedbittestandset_acq
_interlockedbittestandset_rel
_interlockedbittestandset_nf
Refactor the bittest intrinsic handling to decompose each intrinsic into
its action, its width, and its atomicity.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@334239 91177308-0d34-0410-b5e6-96231b3b80d8
We need to implement _interlockedbittestandset as a builtin for
windows.h, so we might as well do the whole family. It reduces code
duplication anyway.
Fixes PR33188, a long standing bug in our bittest implementation
encountered by Chakra.
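A small usage sketch in MS-compatibility mode (the flag variable and helper are invented):
```
/* _interlockedbittestandset atomically sets bit 0 of *flags and returns the
 * bit's previous value, so it can serve as a simple test-and-set acquire. */
long flags;

int try_acquire(void) {
  return _interlockedbittestandset(&flags, 0) == 0;  /* 1 if we got the lock */
}
```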
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@333978 91177308-0d34-0410-b5e6-96231b3b80d8
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@333594 91177308-0d34-0410-b5e6-96231b3b80d8
works with them and test accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@333156 91177308-0d34-0410-b5e6-96231b3b80d8
to checking for attributes on the call site - and fix up builtin
functions that we were testing for but not ensuring wouldn't be
optimized by the backend.
Leave one set of asm tests to make sure that we're also communicating
builtin-ness to TLI.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@333154 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics work exactly like all other atomic_fetch_* intrinsics and allow creating *atomicrmw* instructions with an ordering.
Updated the clang-extensions document.
Differential Revision: https://reviews.llvm.org/D46386
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@332193 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
A clang builtin for xray typed events. Differs from
__xray_customevent(...) by the presence of a type tag that is vended by
compiler-rt in typical usage. This allows xray handlers to expand logged
events with their type description and plugins to process traced events
based on type.
This change depends on D45633 for the intrinsic definition.
Reviewers: dberris, pelikan, rnk, eizan
Subscribers: cfe-commits, llvm-commits
Differential Revision: https://reviews.llvm.org/D45716
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@330220 91177308-0d34-0410-b5e6-96231b3b80d8
structure contents at runtime in circumstances where debuggers may not be easily available (such as in kernel work).
Patch by Paul Semel.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@329762 91177308-0d34-0410-b5e6-96231b3b80d8
The former is not a valid clang argument
Differential Revision: https://reviews.llvm.org/D45102
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@329323 91177308-0d34-0410-b5e6-96231b3b80d8
A recent addition to Coroutines TS (https://wg21.link/p0913) adds a pre-defined
coroutine noop_coroutine that does nothing. To implement this feature, we implemented
an llvm.coro.noop intrinsic that returns a coroutine handle to a coroutine that
does nothing when resumed or destroyed.
This patch adds a builtin __builtin_coro_noop() that maps to the llvm.coro.noop intrinsic.
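As a rough sketch (assuming the builtin takes no arguments and returns an opaque handle pointer; the wrapper below is hypothetical):
```
/* Returns the address of the no-op coroutine frame, whose resume and destroy
 * functions do nothing; in practice this is what a std::noop_coroutine()
 * implementation would hand back. */
void *noop_coroutine_address(void) {
  return __builtin_coro_noop();
}
```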
Related llvm change: https://reviews.llvm.org/D45114
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@328993 91177308-0d34-0410-b5e6-96231b3b80d8