path: root/src/pkg/runtime/stack.c
Commit message (Author, Date; Files, Lines changed)
* build: move package sources from src/pkg to src (Russ Cox, 2014-09-08; 1 file, -1128/+0)

  Preparation was in CL 134570043. This CL contains only the effect of
  'hg mv src/pkg/* src'. For more about the move, see golang.org/s/go14nopkg.

* runtime: fix panic/wrapper/recover math (Russ Cox, 2014-09-06; 1 file, -4/+0)

  The gp->panicwrap adjustment is just fatally flawed. Now that there is a
  Panic.argp field, update that instead. That can be done on entry only, so
  that unwinding doesn't need to worry about undoing anything. The wrappers
  emit a few more instructions in the prologue but everything else in the
  system gets much simpler.

  It also fixes (without trying) a broken test I never checked in.

  Fixes #7491.

  LGTM=khr
  R=khr
  CC=dvyukov, golang-codereviews, iant, r
  https://golang.org/cl/135490044

* runtime: disable StackCopyAlways (Russ Cox, 2014-09-05; 1 file, -1/+1)

  I forgot to clear this before submitting.

  TBR=khr
  CC=golang-codereviews
  https://golang.org/cl/132640044

* runtime: use reflect.call during panic instead of newstackcall (Russ Cox, 2014-09-05; 1 file, -53/+39)

  newstackcall creates a new stack segment, and we want to be able to
  throw away all that code.

  LGTM=khr
  R=khr, iant
  CC=dvyukov, golang-codereviews, r
  https://golang.org/cl/139270043

* runtime: convert panic/recover to Go (Keith Randall, 2014-09-05; 1 file, -2/+22)

  Created panic1.go just so diffs were available. After this CL is in,
  I'd like to move panic.go -> defer.go and panic1.go -> panic.go.

  LGTM=rsc
  R=rsc, khr
  CC=golang-codereviews
  https://golang.org/cl/133530045

* runtime: use new #include "textflag.h" (Russ Cox, 2014-09-04; 1 file, -1/+1)

  I did this just to clean things up, but it will be important when we
  drop the pkg directory later.

  LGTM=bradfitz
  R=r, bradfitz
  CC=golang-codereviews
  https://golang.org/cl/132600043

* runtime: make entersyscall/exitsyscall safe for stack splits (Russ Cox, 2014-09-03; 1 file, -0/+2)

  It is fundamentally unsafe to grow the stack once someone has made a call
  to syscall.Syscall. That function takes 6 uintptr arguments, but depending
  on the call some are pointers. In fact, some might be pointers to stack
  values, and we don't know which. That makes it impossible to copy the
  stack somewhere else. Since we want to delete all the stack splitting
  code, relying only on stack copying, make sure that Syscall never needs
  to split the stack.

  The only thing Syscall does is:

      call entersyscall
      make the system call
      call exitsyscall

  As long as we make sure that entersyscall and exitsyscall can live in the
  nosplit region, they won't ask for more stack.

  Do this by making entersyscall and exitsyscall set up the stack guard so
  that any call to a function with a split check will cause a crash. Then
  move non-essential slow-path work onto the m stack using onM and mark the
  rest of the work nosplit. The linker will verify that the chain of
  nosplits fits in the total nosplit budget.

  LGTM=iant
  R=golang-codereviews, iant
  CC=dvyukov, golang-codereviews, khr, r
  https://golang.org/cl/140950043

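  A minimal sketch of the nosplit discipline this change relies on. The
  function name is hypothetical; only the //go:nosplit directive and the
  linker's nosplit-budget check are real:

      package main

      import "fmt"

      // nosplitLeaf stands in for the kind of function the commit
      // describes: //go:nosplit omits the stack split check, and the
      // linker verifies that every chain of nosplit calls fits in the
      // fixed nosplit stack budget.
      //
      //go:nosplit
      func nosplitLeaf(x int) int {
          return x + 1 // tiny frame, no calls that could grow the stack
      }

      func main() {
          fmt.Println(nosplitLeaf(41))
      }
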
* runtime: Start and stop individual goroutines at gc safepoints (Rick Hudson, 2014-09-03; 1 file, -0/+8)

  Code to bring goroutines to a gc safepoint one at a time, do some work
  such as scanning, restart the goroutine, and then move on to the next
  goroutine. Currently this code does not do much useful work, but this
  infrastructure will be critical to future concurrent GC work.

  Fixed comments from reviewers.

  LGTM=rsc
  R=golang-codereviews, rsc, dvyukov
  CC=golang-codereviews
  https://golang.org/cl/131580043

* runtime: convert select implementation to Go (Keith Randall, 2014-09-02; 1 file, -0/+3)

  LGTM=rsc
  R=golang-codereviews, bradfitz, iant, khr, rsc
  CC=golang-codereviews
  https://golang.org/cl/139020043

* runtime: convert traceback*.c to Go (Russ Cox, 2014-09-02; 1 file, -3/+9)

  The two converted files were nearly identical. Instead of continuing
  that duplication, I merged them into a single traceback.go.

  Tested on arm, amd64, amd64p32, and 386.

  LGTM=r
  R=golang-codereviews, remyoudompheng, dave, r
  CC=dvyukov, golang-codereviews, iant, khr
  https://golang.org/cl/134200044

* runtime: change PC, SP values in Stkframe, Panic, Defer from byte* to uintptr (Russ Cox, 2014-09-01; 1 file, -5/+5)

  uintptr is better when translating to Go, and in a few places it's
  better in C too.

  LGTM=r
  R=golang-codereviews, r
  CC=golang-codereviews, iant, khr
  https://golang.org/cl/138980043

* runtime: rename SysAlloc to sysAlloc for Go (Russ Cox, 2014-08-30; 1 file, -1/+1)

  Renaming the C SysAlloc will let Go define a prototype without exporting
  it. For use in cpuprof.goc's translation to Go.

  LGTM=mdempsky
  R=golang-codereviews, mdempsky
  CC=golang-codereviews, iant
  https://golang.org/cl/140060043

* runtime: rename Lock to Mutex (Russ Cox, 2014-08-27; 1 file, -1/+1)

  Mutex is consistent with package sync, and when in the unexported Go form
  it avoids having a conflict between the type (now mutex) and the function
  (lock).

  LGTM=iant
  R=golang-codereviews, iant
  CC=dvyukov, golang-codereviews, r
  https://golang.org/cl/133140043

* runtime: fix nacl build (Russ Cox, 2014-08-27; 1 file, -1/+1)

  The NaCl "system calls" were assumed to have a return convention
  compatible with the C compiler, and we were using tail jumps to those
  functions. Don't do that anymore.

  Correct a mistake introduced in newstackcall during conversion from (SP)
  to (FP) notation. (Actually this fix, in asm_amd64p32.s, slipped into the
  C compiler change, but update the name to match what go vet wants.)

  Correct computation of the caller stack pointer in morestack: on
  amd64p32, the saved PC is the size of a uintreg, not uintptr. This may
  not matter, since it's been like this for a while, but uintreg is the
  correct one. (And on non-NaCl they are the same.)

  This will allow the NaCl build to get much farther. It will probably
  still not work completely. There's a bug in 6l that needs fixing too.

  TBR=minux
  CC=golang-codereviews
  https://golang.org/cl/134990043

* runtime: changes to g->atomicstatus (nee status) to support concurrent GC (Rick Hudson, 2014-08-27; 1 file, -17/+39)

  Every change to g->atomicstatus is now done atomically so that we can
  ensure that all gs pass through a gc safepoint on demand. This allows
  the GC to move from one phase to the next safely. In some phases the
  stack will be scanned. This CL only deals with the infrastructure that
  allows g->atomicstatus to go from one state to another. Future CLs will
  deal with scanning and monitoring what phase the GC is in.

  The major change was moving to a Gscan bit to indicate that the status
  is in a scan state. The only bug fix was in oldstack, where I wasn't
  moving to a Gcopystack state in order to block scanning until the new
  stack was in place. The proc.go file is waiting for an atomic load
  instruction.

  LGTM=rsc
  R=golang-codereviews, dvyukov, josharian, rsc
  CC=golang-codereviews, khr
  https://golang.org/cl/132960044

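  A sketch of the state-transition protocol described above: a
  compare-and-swap sets a scan bit so that exactly one party owns the
  goroutine's stack while it is scanned. The constants and function name
  here are illustrative, not the runtime's:

      package main

      import (
          "fmt"
          "sync/atomic"
      )

      const (
          gRunnable uint32 = 1      // hypothetical status value
          gScanBit  uint32 = 0x1000 // stands in for the Gscan bit
      )

      // castToScan atomically moves a status into its scan variant,
      // failing if the status changed underfoot.
      func castToScan(status *uint32, old uint32) bool {
          return atomic.CompareAndSwapUint32(status, old, old|gScanBit)
      }

      func main() {
          s := gRunnable
          if castToScan(&s, gRunnable) {
              fmt.Printf("scanning, status=%#x\n", s) // scanner owns the stack
              atomic.StoreUint32(&s, gRunnable)       // scan done, release
          }
      }
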
* cmd/gc, runtime: treat slices and strings like pointers in garbage collection (Russ Cox, 2014-08-25; 1 file, -13/+2)

  Before, a slice with cap=0 or a string with len=0 might have its base
  pointer pointing beyond the actual slice/string data into the next
  block. The collector had to ignore slices and strings with cap=0 in
  order to avoid misinterpreting the base pointer.

  Now, a slice with cap=0 or a string with len=0 still has a base pointer
  pointing into the actual slice/string data, no matter what. The
  collector can now always scan the pointer, which means strings and
  slices are no longer special.

  Fixes #8404.

  LGTM=khr, josharian
  R=josharian, khr, dvyukov
  CC=golang-codereviews
  https://golang.org/cl/112570044

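  The invariant is easy to poke at from user code. A small sketch using
  unsafe.SliceData (a real API, though added to the standard library
  long after this CL):

      package main

      import (
          "fmt"
          "unsafe"
      )

      func main() {
          s := make([]byte, 0) // len == cap == 0
          // Even a zero-capacity slice carries a base pointer that the
          // collector can scan unconditionally; it never points past
          // the underlying data into a neighboring block.
          fmt.Printf("data=%p len=%d cap=%d\n", unsafe.SliceData(s), len(s), cap(s))
      }
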
* runtime: convert channel operations to Go, part 1 (chansend1) (Keith Randall, 2014-08-24; 1 file, -1/+21)

  LGTM=dvyukov
  R=dvyukov, khr
  CC=golang-codereviews
  https://golang.org/cl/127460044

* runtime: convert common scheduler functions to Go (Dmitriy Vyukov, 2014-08-21; 1 file, -2/+2)

  These are required for chans, semaphores, timers, etc.

  LGTM=khr
  R=golang-codereviews, khr
  CC=golang-codereviews, rlh, rsc
  https://golang.org/cl/123640043

* runtime: allow copying of onM frame (Dmitriy Vyukov, 2014-08-19; 1 file, -1/+10)

  Currently goroutines in onM can't be copied/shrunk (including the very
  goroutine that triggers GC). Special case onM to allow copying.

  LGTM=daniel.morsing, khr
  R=golang-codereviews, daniel.morsing, khr, rsc
  CC=golang-codereviews, rlh
  https://golang.org/cl/124550043

* runtime: convert Gosched to Go (Dmitriy Vyukov, 2014-08-19; 1 file, -1/+1)

  LGTM=rlh, khr
  R=golang-codereviews, rlh, bradfitz, khr
  CC=golang-codereviews, rsc
  https://golang.org/cl/127490043

* runtime: improve diagnostics of non-copyable frames (Dmitriy Vyukov, 2014-08-19; 1 file, -3/+15)

  LGTM=khr
  R=golang-codereviews, khr
  CC=golang-codereviews, rlh, rsc
  https://golang.org/cl/124560043

* cmd/gc, runtime: refactor interface inlining decision into compiler (Russ Cox, 2014-08-18; 1 file, -2/+2)

  We need to change the interface value representation for concurrent
  garbage collection, so that there is no ambiguity about whether the
  data word holds a pointer or scalar.

  This CL does NOT make any representation changes. Instead, it removes
  representation assumptions from various pieces of code throughout the
  tree. The isdirectiface function in cmd/gc/subr.c is now the only place
  that decides that policy. The policy propagates out from there in the
  reflect metadata, as a new flag in the internal kind value.

  A follow-up CL will change the representation by changing the
  isdirectiface function. If that CL causes problems, it will be easy to
  roll back.

  Update #8405.

  LGTM=iant
  R=golang-codereviews, iant
  CC=golang-codereviews, r
  https://golang.org/cl/129090043

* runtime: fix data race in stackalloc (Dmitriy Vyukov, 2014-08-08; 1 file, -3/+5)

  Stack shrinking happens during the mark phase, and it assumes that it
  owns the stackcache in mcache. Stack cache flushing also happens during
  the mark phase, and it accesses the stackcaches without any
  synchronization. This leads to stackcache corruption:
  http://goperfd.appspot.com/log/309af5571dfd7e1817259b9c9cf9bcf9b2c27610

  LGTM=khr
  R=khr
  CC=golang-codereviews, rsc
  https://golang.org/cl/126870043

* runtime: simpler and faster GC (Dmitriy Vyukov, 2014-07-29; 1 file, -0/+1)

  Implement the design described in:
  https://docs.google.com/document/d/1v4Oqa0WwHunqlb8C3ObL_uNQw3DfSY-ztoA-4wWbKcg/pub

  Summary of the changes:
  - GC uses "2 bits per word" pointer type info embedded directly into the bitmap.
  - Scanning of stacks/data/heap is unified.
  - The old span types go away.
  - The compiler generates "sparse" 4-bit type info for GC (directly for the GC bitmap).
  - The linker generates "dense" 2-bit type info for data/bss (the same as stacks use).

  Summary of results:
  - -1680 lines of code total (-1000+ in mgc0.c only)
  - -25% memory consumption
  - -3-7% binary size
  - -15% GC pause reduction
  - -7% run time reduction

  LGTM=khr
  R=golang-codereviews, rsc, christoph, khr
  CC=golang-codereviews, rlh
  https://golang.org/cl/106260045

* undo CL 101570044 / 2c57aaea79c4 (Keith Randall, 2014-07-17; 1 file, -141/+246)

  Redo stack allocation. This is mostly the same as the original CL with
  a few bug fixes.

  1. Add racemalloc() for stack allocations.
  2. Fix poolalloc/poolfree to terminate free lists correctly.
  3. Adjust span ref count correctly.
  4. Don't use cache for sizes >= StackCacheSize.

  Should fix bugs and memory leaks in the original changelist.

  ««« original CL description
  undo CL 104200047 / 318b04f28372

  Breaks windows and race detector.
  TBR=rsc

  ««« original CL description
  runtime: stack allocator, separate from mallocgc

  In order to move malloc to Go, we need to have a separate stack
  allocator. If we run out of stack during malloc, malloc will not be
  available to allocate a new stack.

  Stacks are the last remaining FlagNoGC objects in the GC heap. Once
  they are out, we can get rid of the distinction between the
  allocated/blockboundary bits. (This will be in a separate change.)

  Fixes #7468
  Fixes #7424

  LGTM=rsc, dvyukov
  R=golang-codereviews, dvyukov, khr, dave, rsc
  CC=golang-codereviews
  https://golang.org/cl/104200047
  »»»

  TBR=rsc
  CC=golang-codereviews
  https://golang.org/cl/101570044
  »»»

  LGTM=dvyukov
  R=dvyukov, dave, khr, alex.brainman
  CC=golang-codereviews
  https://golang.org/cl/112240044

* undo CL 104200047 / 318b04f28372 (Keith Randall, 2014-06-30; 1 file, -216/+140)

  Breaks windows and race detector.
  TBR=rsc

  ««« original CL description
  runtime: stack allocator, separate from mallocgc

  In order to move malloc to Go, we need to have a separate stack
  allocator. If we run out of stack during malloc, malloc will not be
  available to allocate a new stack.

  Stacks are the last remaining FlagNoGC objects in the GC heap. Once
  they are out, we can get rid of the distinction between the
  allocated/blockboundary bits. (This will be in a separate change.)

  Fixes #7468
  Fixes #7424

  LGTM=rsc, dvyukov
  R=golang-codereviews, dvyukov, khr, dave, rsc
  CC=golang-codereviews
  https://golang.org/cl/104200047
  »»»

  TBR=rsc
  CC=golang-codereviews
  https://golang.org/cl/101570044

* runtime: stack allocator, separate from mallocgc (Keith Randall, 2014-06-30; 1 file, -140/+216)

  In order to move malloc to Go, we need to have a separate stack
  allocator. If we run out of stack during malloc, malloc will not be
  available to allocate a new stack.

  Stacks are the last remaining FlagNoGC objects in the GC heap. Once
  they are out, we can get rid of the distinction between the
  allocated/blockboundary bits. (This will be in a separate change.)

  Fixes #7468
  Fixes #7424

  LGTM=rsc, dvyukov
  R=golang-codereviews, dvyukov, khr, dave, rsc
  CC=golang-codereviews
  https://golang.org/cl/104200047

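  A toy sketch of the size-class idea behind a dedicated stack
  allocator: power-of-two classes, one free list per class, and reuse
  before fresh allocation. All names and sizes are illustrative, not the
  runtime's:

      package main

      import "fmt"

      const minStackShift = 11 // hypothetical smallest class: 2 KB

      // One free list per power-of-two size class, modeled as slices of
      // recycled buffers rather than raw memory spans. This demo only
      // covers classes 0-3 (2 KB through 16 KB).
      var pools [4][][]byte

      func classOf(size int) int {
          c := 0
          for s := 1 << minStackShift; s < size; s <<= 1 {
              c++
          }
          return c
      }

      func stackalloc(size int) []byte {
          c := classOf(size)
          if n := len(pools[c]); n > 0 { // reuse a freed stack if possible
              s := pools[c][n-1]
              pools[c] = pools[c][:n-1]
              return s
          }
          return make([]byte, 1<<(minStackShift+c)) // otherwise allocate fresh
      }

      func stackfree(s []byte) {
          c := classOf(len(s))
          pools[c] = append(pools[c], s)
      }

      func main() {
          s := stackalloc(4096)
          stackfree(s)
          fmt.Println("free stacks in class:", len(pools[classOf(4096)])) // 1
      }
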
* all: remove 'extern register M *m' from runtime (Russ Cox, 2014-06-26; 1 file, -47/+47)

  The runtime has historically held two dedicated values g (current
  goroutine) and m (current thread) in 'extern register' slots (TLS on
  x86, real registers backed by TLS on ARM).

  This CL removes the extern register m; code now uses g->m. On ARM, this
  frees up the register that formerly held m (R9). This is important for
  NaCl, because NaCl ARM code cannot use R9 at all.

  The Go 1 macrobenchmarks (those with per-op times >= 10 µs) are unaffected:

      BenchmarkBinaryTree17          5491374955   5471024381   -0.37%
      BenchmarkFannkuch11            4357101311   4275174828   -1.88%
      BenchmarkGobDecode               11029957     11364184   +3.03%
      BenchmarkGobEncode                6852205      6784822   -0.98%
      BenchmarkGzip                   650795967    650152275   -0.10%
      BenchmarkGunzip                 140962363    141041670   +0.06%
      BenchmarkHTTPClientServer           71581        73081   +2.10%
      BenchmarkJSONEncode              31928079     31913356   -0.05%
      BenchmarkJSONDecode             117470065    113689916   -3.22%
      BenchmarkMandelbrot200            6008923      5998712   -0.17%
      BenchmarkGoParse                  6310917      6327487   +0.26%
      BenchmarkRegexpMatchMedium_1K      114568       114763   +0.17%
      BenchmarkRegexpMatchHard_1K        168977       169244   +0.16%
      BenchmarkRevcomp                935294971    914060918   -2.27%
      BenchmarkTemplate               145917123    148186096   +1.55%

  Minux previously reported larger variations, but these were caused by
  run-to-run noise, not repeatable slowdowns.

  Actual code changes by Minux. I only did the docs and the benchmarking.

  LGTM=dvyukov, iant, minux
  R=minux, josharian, iant, dave, bradfitz, dvyukov
  CC=golang-codereviews
  https://golang.org/cl/109050043

* runtime: revise CL 105140044 (defer nil) to work on Windows (Russ Cox, 2014-06-12; 1 file, -1/+10)

  It appears that something about Go on Windows cannot handle the fault
  caused by a jump to address 0. The way Go represents and calls
  functions, this never happened at all, until CL 105140044. This CL
  changes the code added in CL 105140044 to make a jump to 0 impossible
  once again.

  Fixes #8047. (again, on Windows)

  TBR=bradfitz
  R=golang-codereviews, dave
  CC=adg, golang-codereviews, iant, r
  https://golang.org/cl/105120044

* runtime: fix defer of nil func (Russ Cox, 2014-06-12; 1 file, -1/+6)

  Fixes #8047.

  LGTM=r, iant
  R=golang-codereviews, r, iant
  CC=dvyukov, golang-codereviews, khr
  https://golang.org/cl/105140044

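  The behavior being fixed is observable in ordinary Go: deferring a nil
  function value panics when the deferred call would run, not at the
  defer statement itself. A small self-contained demonstration:

      package main

      import "fmt"

      func main() {
          defer func() {
              fmt.Println("recovered:", recover())
          }()
          var f func()
          defer f() // f is nil: panics only when the deferred call executes
          fmt.Println("reached end of main")
      }
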
* runtime: make continuation pc available to stack walk (Russ Cox, 2014-05-31; 1 file, -2/+6)

  The 'continuation pc' is where the frame will continue execution, if
  anywhere. For a frame that stopped execution due to a CALL instruction,
  the continuation pc is immediately after the CALL. But for a frame that
  stopped execution due to a fault, the continuation pc is the pc after
  the most recent CALL to deferproc in that frame, or else 0. That is
  where execution will continue, if anywhere.

  The liveness information is only recorded for CALL instructions. This
  change makes sure that we never look for liveness information except
  for CALL instructions. Using a valid PC fixes crashes when a garbage
  collection or stack copying tries to process a stack frame that has
  faulted.

  Record continuation pc in heapdump (format change).

  Fixes #8048.

  LGTM=iant, khr
  R=khr, iant, dvyukov
  CC=golang-codereviews, r
  https://golang.org/cl/100870044

* runtime: stack copier should handle nil defers without faulting (Keith Randall, 2014-05-27; 1 file, -2/+10)

  Fixes #8047.

  LGTM=rsc
  R=golang-codereviews, rsc
  CC=golang-codereviews
  https://golang.org/cl/101800043

* runtime: make sure associated defers are copyable before trying to copy a stack (Keith Randall, 2014-04-07; 1 file, -12/+57)

  Defers generated from cgo lie to us about their argument layout. Mark
  those defers as not copyable.

  CL 83820043 contains an additional test for this code and should be
  checked in (and enabled) after this change is in.

  Fixes bug 7695.

  LGTM=rsc
  R=golang-codereviews, rsc
  CC=golang-codereviews
  https://golang.org/cl/84740043

* cmd/gc, runtime: make GODEBUG=gcdead=1 mode work with liveness (Russ Cox, 2014-04-03; 1 file, -3/+3)

  Trying to make GODEBUG=gcdead=1 work with liveness and in particular
  ambiguously live variables.

  1. In the liveness computation, mark all ambiguously live variables as
     live for the entire function, except the entry. They are zeroed
     directly after entry, and we need them not to be poisoned thereafter.

  2. In the liveness computation, compute liveness (and deadness) for all
     parameters, not just pointer-containing parameters. Otherwise gcdead
     poisons untracked scalar parameters and results.

  3. Fix the liveness debugging print for -live=2 to use correct bitmaps.
     (It was not updated for compaction during the compaction CL.)

  4. Correct varkill during map literal initialization. It was killing the
     map itself instead of the inserted value temp.

  5. Disable aggressive varkill cleanup for call arguments if the call
     appears in a defer or go statement.

  6. In the garbage collector, avoid a bug scanning empty strings. An
     empty string is two zeros. The multiword code only looked at the
     first zero and then interpreted the next two bits in the bitmap as
     an ordinary word bitmap. For a string the bits are 11 00, so if a
     live string was zero length with a 0 base pointer, the poisoning
     code treated the length as an ordinary word with code 00, meaning
     it needed poisoning, turning the string into a poison-length string
     with base pointer 0. By the same logic I believe that a live nil
     slice (bits 11 01 00) will have its cap poisoned. Always scan the
     full multiword struct.

  7. In the runtime, treat both poison words (PoisonGC and PoisonStack)
     as invalid pointers that warrant crashes.

  Manual testing as follows:

  - Create a script called gcdead on your PATH containing:

        #!/bin/bash
        GODEBUG=gcdead=1 GOGC=10 GOTRACEBACK=2 exec "$@"

  - Now you can build a test and then run 'gcdead ./foo.test'.

  - More importantly, you can run 'go test -short -exec gcdead std'
    to run all the tests.

  Fixes #7676.

  While here, enable the precise scanning of slices, since that was
  disabled due to bugs like these. That now works, both with and without
  gcdead.

  Fixes #7549.

  LGTM=khr
  R=khr
  CC=golang-codereviews
  https://golang.org/cl/83410044

* cmd/gc, cmd/ld, runtime: compact liveness bitmaps (Russ Cox, 2014-04-02; 1 file, -5/+5)

  Reduce footprint of liveness bitmaps by about 5x.

  1. Mark all liveness bitmap symbols as 4-byte aligned (they were
     aligned to a larger size by default).

  2. The bitmap data is a bitmap count n followed by n bitmaps. Each
     bitmap begins with its own count m giving the number of bits. All
     the m's are the same for the n bitmaps. Emit this bitmap length once
     instead of n times.

  3. Many bitmaps within a function have the same bit values, but each
     call site was given a distinct bitmap. Merge duplicate bitmaps so
     that no bitmap is written more than once.

  4. Many functions end up with the same aggregate bitmap data. We used
     to name the bitmap data funcname.gcargs and funcname.gclocals.
     Instead, name it gclocals.<md5 of data> and mark it dupok so that
     the linker coalesces duplicate sets. This cut the bitmap data
     remaining after step 3 by 40%; I was not expecting it to be quite
     so dramatic.

  Applied to "go build -ldflags -w code.google.com/p/go.tools/cmd/godoc":

                         bitmaps            pclntab            binary on disk
      before this CL     1326600            1985854            12738268
      4-byte align       1154288 (0.87x)    1985854 (1.00x)    12566236 (0.99x)
      one bitmap len      782528 (0.54x)    1985854 (1.00x)    12193500 (0.96x)
      dedup bitmap        414748 (0.31x)    1948478 (0.98x)    11787996 (0.93x)
      dedup bitmap set    245580 (0.19x)    1948478 (0.98x)    11620060 (0.91x)

  While here, remove various dead blocks of code from plive.c.

  Fixes #6929.
  Fixes #7568.

  LGTM=khr
  R=khr
  CC=golang-codereviews
  https://golang.org/cl/83630044

* runtime: use correct pc to obtain liveness info during stack copy (Russ Cox, 2014-04-01; 1 file, -4/+8)

  The old code was using the PC of the instruction after the CALL.
  Variables live during the call but not live when it returns would not
  be seen as live during the stack copy, which might lead to corruption.
  The correct PC to use is the one just before the return address. After
  this CL the lookup matches what mgc0.c does.

  The only time this matters is if you have back-to-back CALL
  instructions:

      CALL f1  // x live here
      CALL f2  // x no longer live

  If a stack copy occurs during the execution of f1, the old code will
  use the liveness bitmap intended for the execution of f2 and will not
  treat x as live.

  The only way this situation can arise and cause a problem in a stack
  copy is if x lives on the stack, has had its address taken, and the
  compiler knows enough about the context to know that x is no longer
  needed once f1 returns. The compiler has never known that much, so
  using the f2 context cannot currently cause incorrect execution. For
  the same reason, it is not possible to write a test for this today.
  CL 83090046 will make the compiler precise enough in some cases that
  this distinction will start mattering. The existing stack growth tests
  in package runtime will fail if that CL is submitted without this one.

  While we're here, print the frame PC in debug mode and update the
  bitmap interpretation strings.

  LGTM=khr
  R=khr
  CC=golang-codereviews
  https://golang.org/cl/83250043

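  The same pc-1 convention shows up in user-level stack walking:
  runtime.Callers returns return addresses, and lookups subtract one so
  they land inside the CALL instruction. A sketch:

      package main

      import (
          "fmt"
          "runtime"
      )

      func main() {
          pcs := make([]uintptr, 8)
          n := runtime.Callers(1, pcs)
          for _, pc := range pcs[:n] {
              // pc is a return address; pc-1 falls within the CALL,
              // matching the "just before the return address" rule above.
              f := runtime.FuncForPC(pc - 1)
              if f == nil {
                  continue
              }
              file, line := f.FileLine(pc - 1)
              fmt.Printf("%s %s:%d\n", f.Name(), file, line)
          }
      }
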
* runtime: adjust GODEBUG=allocfreetrace=1 and GODEBUG=gcdead=1 (Russ Cox, 2014-04-01; 1 file, -1/+1)

  GODEBUG=allocfreetrace=1:

  The allocfreetrace=1 mode prints a stack trace for each block allocated
  and freed, and also a stack trace for each garbage collection.

  It was implemented by reusing the heap profiling support: if
  allocfreetrace=1 then the heap profile was effectively running at 1
  sample per 1 byte allocated (always sample). The stack being shown at
  allocation was the stack gathered for profiling, meaning it was derived
  only from the program counters and did not include information about
  function arguments or frame pointers. The stack being shown at free was
  the allocation stack, not the free stack. If you are generating this
  log, you can find the allocation stack yourself, but it can be useful
  to see exactly the sequence that led to freeing the block: was it the
  garbage collector or an explicit free?

  Now that the garbage collector runs on an m0 stack, the stack trace for
  the garbage collector was never interesting.

  Fix all these problems:

  1. Decouple allocfreetrace=1 from heap profiling.
  2. Print the standard goroutine stack traces instead of a custom format.
  3. Print the stack trace at time of allocation for an allocation, and
     print the stack trace at time of free (not the allocation trace
     again) for a free.
  4. Print all goroutine stacks at garbage collection. Having all the
     stacks means that you can see the exact point at which each
     goroutine was preempted, which is often useful for identifying
     liveness-related errors.

  GODEBUG=gcdead=1:

  This mode overwrites dead pointers with a poison value. Detect the
  poison value as an invalid pointer during collection, the same way that
  small integers are invalid pointers.

  LGTM=khr
  R=khr
  CC=golang-codereviews
  https://golang.org/cl/81670043

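  A minimal way to watch the new output (the GODEBUG flag is real; the
  program is just an allocation source):

      // Run as: GODEBUG=allocfreetrace=1 ./demo 2>trace.log
      package main

      import "fmt"

      func main() {
          // In allocfreetrace mode the runtime prints a goroutine-style
          // stack trace for this allocation and, later, for its free.
          b := make([]byte, 1<<20)
          fmt.Println(len(b))
      }
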
* runtime: enable 'bad pointer' check during garbage collection of Go stack frames (Russ Cox, 2014-03-27; 1 file, -0/+1)

  This is the same check we use during stack copying. The check cannot be
  applied to C stack frames, even though we do emit pointer bitmaps for
  the arguments, because (1) the pointer bitmaps assume all arguments are
  always live, not true of outputs during the prologue, and (2) the
  pointer bitmaps encode interface values as pointer pairs, not true of
  interfaces holding integers.

  For the rest of the frames, however, we should hold ourselves to the
  rule that a pointer marked live really is initialized. The interface
  scanning already implicitly checks this because it interprets the type
  word as a valid type pointer.

  This may slow things down a little because of the extra loads. Or it
  may speed things up because we don't bother enqueuing nil pointers
  anymore. Enough of the rest of the system is slow right now that we
  can't measure it meaningfully. Enable for now, even if it is slow, to
  shake out bugs in the liveness bitmaps, and then decide whether to turn
  it off for the Go 1.3 release (issue 7650 reminds us to do this).

  The new m->traceback field lets us force printing of fp= values on all
  goroutine stack traces when we detect a bad pointer. This makes it
  easier to understand exactly where in the frame the bad pointer is, so
  that we can trace it back to a specific variable and determine what is
  wrong.

  Update #7650

  LGTM=khr
  R=khr
  CC=golang-codereviews
  https://golang.org/cl/80860044

* runtime: eliminate false retention due to m->moreargp/morebuf (Dmitriy Vyukov, 2014-03-26; 1 file, -7/+10)

  m->moreargp/morebuf were not cleared in the case of preemption and
  stack growing, which can lead to persistent leaks of large memory
  blocks.

  It seems to fix the sync.Pool finalizer failures. I've run the test
  500,000 times without a single failure; previously it would fail dozens
  of times.

  Fixes #7633.
  Fixes #7533.

  LGTM=rsc
  R=golang-codereviews
  CC=golang-codereviews, khr, rsc
  https://golang.org/cl/80480044

* runtime: redo stack map entries to avoid false retention (Keith Randall, 2014-03-25; 1 file, -17/+41)

  Change two-bit stack map entries to encode:

      0 = dead
      1 = scalar
      2 = pointer
      3 = multiword

  If multiword, the two-bit entry for the following word encodes:

      0 = string
      1 = slice
      2 = iface
      3 = eface

  That way, during stack scanning we can check if a string is zero length
  or a slice has zero capacity. We can avoid following the contained
  pointer in those cases. It is safe to do so because it can never be
  dereferenced, and it is desirable to do so because it may cause false
  retention of the following block in memory.

  Slice feature turned off until issue 7564 is fixed.

  Update #7549

  LGTM=rsc
  R=golang-codereviews, bradfitz, rsc
  CC=golang-codereviews
  https://golang.org/cl/76380043

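  A sketch of decoding the two-bit entries described above; the constant
  names are illustrative, not the runtime's:

      package main

      import "fmt"

      const (
          bitsDead    = 0
          bitsScalar  = 1
          bitsPointer = 2
          bitsMulti   = 3 // next entry then selects string/slice/iface/eface
      )

      // decode unpacks nword two-bit entries from a bitmap, low bits first.
      func decode(bitmap []byte, nword int) []byte {
          out := make([]byte, 0, nword)
          for i := 0; i < nword; i++ {
              out = append(out, (bitmap[i/4]>>(uint(i%4)*2))&3)
          }
          return out
      }

      func main() {
          // One byte packing four entries: dead, scalar, pointer, multiword.
          fmt.Println(decode([]byte{0<<0 | 1<<2 | 2<<4 | 3<<6}, 4)) // [0 1 2 3]
      }
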
* runtime: fix spans corruption (Dmitriy Vyukov, 2014-03-14; 1 file, -10/+2)

  The problem was that spans ended up in the wrong lists after a split
  (e.g. in h->busy instead of h->central->empty). Also, the span can be
  non-swept before the split; I don't know what that can cause, but it's
  safer to operate on swept spans.

  Fixes #7544.

  R=golang-codereviews, rsc
  CC=golang-codereviews, khr
  https://golang.org/cl/76160043

* runtime: report "out of memory" in efence modeDmitriy Vyukov2014-03-141-2/+6
| | | | | | | | | | Currently processes crash with obscure message. Say that it's "out of memory". LGTM=rsc R=golang-codereviews CC=golang-codereviews, khr, rsc https://golang.org/cl/75820045
* runtime: do not shrink stacks if GOCOPYSTACK=0 (Dmitriy Vyukov, 2014-03-14; 1 file, -0/+2)

  LGTM=rsc
  R=golang-codereviews
  CC=golang-codereviews, khr, rsc
  https://golang.org/cl/76070043

* runtime: detect stack split after fork (Dmitriy Vyukov, 2014-03-13; 1 file, -0/+2)

  This check would have allowed us to easily prevent issue 7511.

  Update #7511

  LGTM=rsc
  R=rsc, aram
  CC=golang-codereviews
  https://golang.org/cl/75260043

* runtime: fix stack size check (Dmitriy Vyukov, 2014-03-13; 1 file, -4/+4)

  When we copy a stack, we check only the new size of the top segment.
  This is incorrect, because we can have other segments below it.

  LGTM=khr
  R=golang-codereviews, khr
  CC=golang-codereviews, rsc
  https://golang.org/cl/73980045

* runtime: efence support for growable stacks (Dmitriy Vyukov, 2014-03-12; 1 file, -4/+12)

  1. Fix the bug that shrinkstack returns memory to the heap. This causes
     growslice to misbehave: growslice manually initializes blocks, and
     in efence mode shrinkstack's free leads to partially-initialized
     blocks coming out of growslice, which in turn causes the GC to crash
     while treating the garbage as Eface/Iface.
  2. Enable efence for stack segments.

  LGTM=rsc
  R=golang-codereviews, rsc
  CC=golang-codereviews, khr
  https://golang.org/cl/74080043

* runtime: more Native Client fixes (Dave Cheney, 2014-03-11; 1 file, -1/+1)

  Thanks to Ian for spotting these.

  runtime.h: define uintreg correctly.
  stack.c: address a warning caused by the type of uintreg being 32 bits
  on amd64p32.

  Commentary (mainly for my own use):

  nacl/amd64p32 defines a machine with 64-bit registers, but the address
  space is limited to a 4 GB window (the window is placed randomly inside
  the full 48-bit virtual address space of a process). To cope with this,
  6c defines _64BIT and _64BITREG.

  _64BITREG is always defined by 6c, so both GOARCH=amd64 and
  GOARCH=amd64p32 use 64-bit wide registers. However, _64BIT itself is
  only defined when 6c is compiling for amd64 targets. The definition is
  elided for amd64p32 environments, causing int, uint, and other
  arch-specific types to revert to their 32-bit definitions.

  LGTM=iant
  R=iant, rsc, remyoudompheng
  CC=golang-codereviews
  https://golang.org/cl/72860046

* runtime: round stack size to power of 2 (Shenghou Ma, 2014-03-07; 1 file, -3/+3)

  Fixes the build on windows/386 and plan9/386.

  Fixes #7487.

  LGTM=mattn.jp, dvyukov, rsc
  R=golang-codereviews, mattn.jp, dvyukov, 0intro, rsc
  CC=golang-codereviews
  https://golang.org/cl/72360043

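  Rounding up to a power of two is a standard bit trick; a sketch of the
  usual idiom (the runtime's exact code may differ):

      package main

      import "fmt"

      // roundUpPow2 smears the highest set bit into every lower
      // position, then adds one, yielding the next power of two >= n.
      func roundUpPow2(n uint32) uint32 {
          if n == 0 {
              return 1
          }
          n--
          n |= n >> 1
          n |= n >> 2
          n |= n >> 4
          n |= n >> 8
          n |= n >> 16
          return n + 1
      }

      func main() {
          for _, n := range []uint32{1, 3000, 4096, 5000} {
              fmt.Println(n, "->", roundUpPow2(n)) // 1, 4096, 4096, 8192
          }
      }
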
* runtime: refactor and fix stack management code (Dmitriy Vyukov, 2014-03-07; 1 file, -29/+42)

  There are at least 3 bugs:

  1. g->stacksize accounting is broken during copystack/shrinkstack.
  2. stktop->free is not properly maintained during copystack/shrinkstack.
  3. stktop->free logic is broken: we can have stktop->free==FixedStack,
     and we will free it into the stack cache, but it actually comes from
     the heap as the result of a non-copying segment shrink.

  This shows up as at least spurious races on the race builders (maybe
  something else as well, I don't know).

  The idea behind the refactoring is to consolidate stacksize and segment
  origin logic in stackalloc/stackfree.

  Fixes #7490.

  LGTM=rsc, khr
  R=golang-codereviews, rsc, khr
  CC=golang-codereviews
  https://golang.org/cl/72440043

* runtime: shrink bigger stacks without any copying (Keith Randall, 2014-03-06; 1 file, -12/+66)

  Instead, split the underlying storage in half and free just half of it.
  Shrinking without copying lets us reclaim storage used by a previously
  profligate goroutine that has now blocked inside some C code.

  To shrink in place, we need all stacks to be a power of 2 in size.

  LGTM=rsc
  R=golang-codereviews, rsc
  CC=golang-codereviews
  https://golang.org/cl/69580044

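  Why power-of-two sizes make this possible, in illustrative arithmetic
  (addresses are fake; the sketch assumes the live frames sit in the
  high half, since Go stacks grow down):

      package main

      import "fmt"

      // shrinkInPlace splits a power-of-two stack [base, base+size) in
      // half. The unused low half can be freed; the live high half is
      // itself a valid, aligned stack of the next smaller class.
      func shrinkInPlace(base, size uintptr) (freeBase, newBase, newSize uintptr) {
          half := size / 2
          return base, base + half, half
      }

      func main() {
          freeBase, newBase, newSize := shrinkInPlace(0x10000, 8192)
          fmt.Printf("free [%#x,%#x), keep [%#x,%#x)\n",
              freeBase, freeBase+newSize, newBase, newBase+newSize)
      }
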