path: root/gdb/Makefile.in
Commit message | Author | Date | Files | Lines
* Handle Ada Pragma Import and Pragma Export (Tom Tromey, 2023-05-12; 1 file changed, -0/+1)

Ada can import C APIs and also export Ada constructs to C via Pragma Import and Pragma Export. This patch adds support for these to gdb, by arranging to either defer some aspects of a symbol to the underlying C symbol (for Import) or by introducing a second symbol (for Export). A somewhat tricky approach is needed, both because gdb doesn't generally handle symbol aliasing, and because Ada treats symbol names in an unusual way (as compared to the rest of gdb).
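As a rough, editor-added illustration of the kind of construct the commit describes (the names c_add/C_Add are hypothetical and not from the patch), the C side of an Ada Pragma Import binding might look like this, with the corresponding Ada declarations shown in the comment; gdb then has to treat the Ada name and the underlying C symbol as referring to one entity:

```cpp
/* C/C++ side of the binding.  On the Ada side one would write
   something like:

     function C_Add (A, B : Integer) return Integer;
     pragma Import (C, C_Add, "c_add");

   so that evaluating either "C_Add" (Ada) or "c_add" (C) in gdb
   should resolve to this single underlying symbol.  */
extern "C" int
c_add (int a, int b)
{
  return a + b;
}
```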
* Simplify auto_load_expand_dir_vars and remove substitute_path_component (Tom Tromey, 2023-05-05; 1 file changed, -1/+0)

This simplifies auto_load_expand_dir_vars to first split the string, then do any needed substitutions. This was suggested by Simon, and is much simpler than the current approach.

Then this patch also removes substitute_path_component, as it is no longer called. This is nice because it helps with the long term goal of removing utils.h.

Regression tested on x86-64 Fedora 36.
* gdb: move struct ui and related things to ui.{c,h} (Simon Marchi, 2023-05-01; 1 file changed, -0/+2)

I'd like to move some things so they become methods on struct ui. But first, I think that struct ui and the related things are big enough to deserve their own file, instead of being scattered through top.{c,h} and event-top.c.

Change-Id: I15594269ace61fd76ef80a7b58f51ff3ab6979bc
* PR gdb/30214: Prefer local include paths to system include paths (John Baldwin, 2023-03-10; 1 file changed, -2/+2)

Some systems may install binutils headers into a system location (e.g. /usr/local/include on FreeBSD) which may also include headers for other external packages used by GDB such as zlib or zstd. If a system include path such as /usr/local/include is added before local include paths to directories within a clone or release tarball, then headers from the external binutils package are used, which can result in build failures if the external binutils package is out of sync with the version of GDB being built.

To fix, sort the include paths in INTERNAL_CFLAGS_BASE to add CFLAGS for "local" components before external components.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=30214
Reviewed-By: Tom Tromey <tom@tromey.com>
* Remove two more files in gdb "distclean" (Tom Tromey, 2023-03-06; 1 file changed, -0/+1)

The recent work to have gdb link via libtool means that there are a couple more generated files in the build directory that should be removed by "distclean". Note that gdb can't really fully implement distclean due to the desire to put certain generated files into the distribution. Still, it can get pretty close.
* gdb: remove --disable-gdbmi configure option (Simon Marchi, 2023-02-23; 1 file changed, -5/+3)

I noticed that the --disable-gdbmi option was broken for almost a year (since 740b42ceb7c "gdb/python/mi: create MI commands using python"). The problem today is the python/py-cmd.c file. It is included in the build if Python support is enabled, and it calls into some MI functions (e.g. insert_mi_cmd_entry). If MI support is disabled, we get some undefined symbols like:

    mold: error: undefined symbol: insert_mi_cmd_entry(std::unique_ptr<mi_command, std::default_delete<mi_command> >)
    >>> referenced by py-micmd.c
    >>>               python/py-micmd.o:(micmdpy_install_command(micmdpy_object*))

The python/py-cmd.c file should be included in the build if both Python and MI support are enabled. It is not a case we support today, but it could be done with a bit more configure code. However, I think we should just remove the --disable-gdbmi option, and just include MI support unconditionally. Tom Tromey proposed a while ago to remove this option, but it ended up staying:

    https://inbox.sourceware.org/gdb-patches/20180628172132.28843-1-tom@tromey.com/

However, there was no strong opposition to removing it. The argument was just "bah, it doesn't hurt anybody". But given today's case, I would rather remove complexity than add some. I couldn't find anybody caring deeply for that option, and it's not like MI adds any external dependency. It's just a bit more code.

Removing the option will not break anybody using --disable-gdbmi (it can be found in many build scripts [1]), since we don't flag invalid configure flags.

So, remove the option from configure.ac, and adjust Makefile.in accordingly to always include the MI objects in the build.

[1] https://github.com/search?q=%22--disable-gdbmi%22&type=code

Change-Id: Ifcaa8c9fc4abc6fa686ed5fd984598644f745240
Approved-By: Tom Tromey <tom@tromey.com>
* gdb: add AMDGPU header files to HFILES_NO_SRCDIR (Simon Marchi, 2023-02-23; 1 file changed, -0/+2)

Commit 18b4d0736bc5 ("gdb: initial support for ROCm platform (AMDGPU) debugging") missed adding these header files to the HFILES_NO_SRCDIR list in the Makefile. Fix that now.

Change-Id: Ifd387096aef3d147b51aefa2037da5bf6373ea64
* gdb/dwarf2: split .debug_names reading code to own file (Simon Marchi, 2023-02-15; 1 file changed, -0/+2)

Move everything related to reading .debug_names from read.c to read-debug-names.c. The only entry point exposed by read-debug-names.{c,h} is dwarf2_read_debug_names.

Change-Id: I18b23f3c7a61b14abc3a46e4bf559bc2d078e8bc
Approved-By: Tom Tromey <tom@tromey.com>
* gdb/dwarf2: split .gdb_index reading code to own file (Simon Marchi, 2023-02-15; 1 file changed, -0/+2)

Move everything related to reading .gdb_index from read.c to read-gdb-index.c. The only entry point exposed by read-gdb-index.{c,h} is dwarf2_read_gdb_index.

Change-Id: I1e32c8f0720086538de8d2f612f27545377099bc
Approved-By: Tom Tromey <tom@tromey.com>
* Move some code from dwarf2/read.c to die.c (Tom Tromey, 2023-02-12; 1 file changed, -0/+1)

This patch introduces a new file, dwarf2/die.c, and moves some DIE-related code out of dwarf2/read.c and into this new file. This is just a small part of the long-term project to split up read.c. (According to 'wc', dwarf2/read.c is the largest file in gdb by around 8000 LOC.)

Regression tested on x86-64 Fedora 36.
* gdb: initial support for ROCm platform (AMDGPU) debugging (Simon Marchi, 2023-02-02; 1 file changed, -2/+15)

This patch adds the foundation for GDB to be able to debug programs offloaded to AMD GPUs using the AMD ROCm platform [1]. The latest public release of the ROCm platform at the time of writing is 5.4, so this is what this patch targets.

The ROCm platform allows host programs to schedule bits of code for execution on GPUs or similar accelerators. The programs running on GPUs are typically referred to as `kernels` (not related to operating system kernels).

Programs offloaded with the AMD ROCm platform can be written in the HIP language [2], OpenCL and OpenMP, but we're going to focus on HIP here. The HIP language consists of a C++ Runtime API and kernel language. Here's an example of a very simple HIP program:

    #include "hip/hip_runtime.h"
    #include <cassert>

    __global__ void
    do_an_addition (int a, int b, int *out)
    {
      *out = a + b;
    }

    int
    main ()
    {
      int *result_ptr, result;

      /* Allocate memory for the device to write the result to. */
      hipError_t error = hipMalloc (&result_ptr, sizeof (int));
      assert (error == hipSuccess);

      /* Run `do_an_addition` on one workgroup containing one work item. */
      do_an_addition<<<dim3(1), dim3(1), 0, 0>>> (1, 2, result_ptr);

      /* Copy result from device to host.  Note that this acts as a
         synchronization point, waiting for the kernel dispatch to complete. */
      error = hipMemcpyDtoH (&result, result_ptr, sizeof (int));
      assert (error == hipSuccess);

      printf ("result is %d\n", result);
      assert (result == 3);

      return 0;
    }

This program can be compiled with:

    $ hipcc simple.cpp -g -O0 -o simple

... where `hipcc` is the HIP compiler, shipped with ROCm releases. This generates an ELF binary for the host architecture, containing another ELF binary with the device code. The ELF for the device can be inspected with:

    $ roc-obj-ls simple
    1   host-x86_64-unknown-linux          file://simple#offset=8192&size=0
    1   hipv4-amdgcn-amd-amdhsa--gfx906    file://simple#offset=8192&size=34216
    $ roc-obj-extract 'file://simple#offset=8192&size=34216'
    $ file simple-offset8192-size34216.co
    simple-offset8192-size34216.co: ELF 64-bit LSB shared object, *unknown arch 0xe0* version 1, dynamically linked, with debug_info, not stripped
        ^ amdgcn architecture that my `file` doesn't know about

Running the program gives the very unimpressive result:

    $ ./simple
    result is 3

While running, this host program has copied the device program into the GPU's memory and spawned an execution thread on it. The goal of this GDB port is to let the user debug host threads and these GPU threads simultaneously. Here's a sample session using a GDB with this patch applied:

    $ ./gdb -q -nx --data-directory=data-directory ./simple
    Reading symbols from ./simple...
    (gdb) break do_an_addition
    Function "do_an_addition" not defined.
    Make breakpoint pending on future shared library load? (y or [n]) y
    Breakpoint 1 (do_an_addition) pending.
    (gdb) r
    Starting program: /home/smarchi/build/binutils-gdb-amdgpu/gdb/simple
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
    [New Thread 0x7ffff5db7640 (LWP 1082911)]
    [New Thread 0x7ffef53ff640 (LWP 1082913)]
    [Thread 0x7ffef53ff640 (LWP 1082913) exited]
    [New Thread 0x7ffdecb53640 (LWP 1083185)]
    [New Thread 0x7ffff54bf640 (LWP 1083186)]
    [Thread 0x7ffdecb53640 (LWP 1083185) exited]
    [Switching to AMDGPU Wave 2:2:1:1 (0,0,0)/0]

    Thread 6 hit Breakpoint 1, do_an_addition (a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>, b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>, out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
    24        *out = a + b;
    (gdb) info inferiors
      Num  Description       Connection   Executable
    * 1    process 1082907   1 (native)   /home/smarchi/build/binutils-gdb-amdgpu/gdb/simple
    (gdb) info threads
      Id   Target Id                                    Frame
      1    Thread 0x7ffff5dc9240 (LWP 1082907) "simple" 0x00007ffff5e9410b in ?? () from /opt/rocm-5.4.0/lib/libhsa-runtime64.so.1
      2    Thread 0x7ffff5db7640 (LWP 1082911) "simple" __GI___ioctl (fd=3, request=3222817548) at ../sysdeps/unix/sysv/linux/ioctl.c:36
      5    Thread 0x7ffff54bf640 (LWP 1083186) "simple" __GI___ioctl (fd=3, request=3222817548) at ../sysdeps/unix/sysv/linux/ioctl.c:36
    * 6    AMDGPU Wave 2:2:1:1 (0,0,0)/0                do_an_addition (
        a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
        b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
        out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
    (gdb) bt
    Python Exception <class 'gdb.error'>: Unhandled dwarf expression opcode 0xe1
    #0  do_an_addition (
        a=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
        b=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>,
        out=<error reading variable: DWARF-2 expression error: `DW_OP_regx' operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.>) at simple.cpp:24
    (gdb) continue
    Continuing.
    result is 3
    warning: Temporarily disabling breakpoints for unloaded shared library "file:///home/smarchi/build/binutils-gdb-amdgpu/gdb/simple#offset=8192&size=67208"
    [Thread 0x7ffff54bf640 (LWP 1083186) exited]
    [Thread 0x7ffff5db7640 (LWP 1082911) exited]
    [Inferior 1 (process 1082907) exited normally]

One thing to notice is the host and GPU threads appearing under the same inferior. This is a design goal for us, as programmers tend to think of the threads running on the GPU as part of the same program as the host threads, so showing them in the same inferior in GDB seems natural. Also, the host and GPU threads share a global memory space, which fits the inferior model.

Another thing to notice is the error messages when trying to read variables or printing a backtrace. This is expected for the moment, since the AMD GPU compiler produces some DWARF that uses some non-standard extensions:

    https://llvm.org/docs/AMDGPUDwarfExtensionsForHeterogeneousDebugging.html

There were already some patches posted by Zoran Zaric earlier to make GDB support these extensions:

    https://inbox.sourceware.org/gdb-patches/20211105113849.118800-1-zoran.zaric@amd.com/

We think it's better to get the basic support for AMD GPU in first, which will then give a better justification for GDB to support these extensions.

GPU threads are named `AMDGPU Wave`: a wave is essentially a hardware thread using the SIMT (single-instruction, multiple-threads) [3] execution model.

GDB uses the amd-dbgapi library [4], included in the ROCm platform, for a few things related to AMD GPU threads debugging. Different components talk to the library, as shown in the following diagram:

    +---------------------------+     +-------------+     +------------------+
    | GDB   | amd-dbgapi target | <-> |     AMD     |     |   Linux kernel   |
    |       +-------------------+     |  Debugger   |     |  +--------+      |
    |       | amdgcn gdbarch    | <-> |     API     | <=> |  | AMDGPU |      |
    |       +-------------------+     |             |     |  | driver |      |
    |       | solib-rocm        | <-> | (dbgapi.so) |     +--+--------+------+
    +---------------------------+     +-------------+

- The amd-dbgapi target is a target_ops implementation used to control execution of GPU threads. While the debugging of host threads works by using the ptrace / wait Linux kernel interface (as usual), control of GPU threads is done through a special interface (dubbed `kfd`) exposed by the `amdgpu` Linux kernel module. GDB doesn't interact directly with `kfd`, but instead goes through the amd-dbgapi library (AMD Debugger API on the diagram). Since it provides execution control, the amd-dbgapi target should normally be a process_stratum_target, not just a target_ops. More on that later.

- The amdgcn gdbarch (describing the hardware architecture of the GPU execution units) offloads some requests to the amd-dbgapi library, so that knowledge about the various architectures doesn't need to be duplicated and baked in GDB. This is for example for things like the list of registers.

- The solib-rocm component is an solib provider that fetches the list of code objects loaded on the device from the amd-dbgapi library, and makes GDB read their symbols. This is very similar to other solib providers that handle shared libraries, except that here the shared libraries are the pieces of code loaded on the device.

Given that Linux host threads are managed by the linux-nat target, and the GPU threads are managed by the amd-dbgapi target, having all threads appear in the same inferior requires the two targets to be in that inferior's target stack. However, there can only be one process_stratum_target in a given target stack, since there can be only one target per slot. To achieve it, we therefore resort to the hack^W solution of placing the amd-dbgapi target in the arch_stratum slot of the target stack, on top of the linux-nat target. Doing so allows the amd-dbgapi target to intercept target calls and handle them if they concern GPU threads, and offload to beneath otherwise. See amd_dbgapi_target::fetch_registers for a simple example:

    void
    amd_dbgapi_target::fetch_registers (struct regcache *regcache, int regno)
    {
      if (!ptid_is_gpu (regcache->ptid ()))
        {
          beneath ()->fetch_registers (regcache, regno);
          return;
        }

      // handle it
    }

ptids of GPU threads are crafted with the following pattern:

    (pid, 1, wave id)

Where pid is the inferior's pid and "wave id" is the wave handle handed to us by the amd-dbgapi library (in practice, a monotonically incrementing integer). The idea is that on Linux systems, the combination (pid != 1, lwp == 1) is not possible. lwp == 1 would always belong to the init process, which would also have pid == 1 (and it's improbable for the init process to offload work to the GPU and much less for the user to debug it). We can therefore differentiate GPU and non-GPU ptids this way. See ptid_is_gpu for more details.

Note that we believe that this scheme could break down in the context of containers, where the initial process executed in a container has pid 1 (in its own pid namespace). For instance, if you were to execute a ROCm program in a container, then spawn a GDB in that container and attach to the process, it will likely not work. This is a known limitation. A workaround for this is to have a dummy process (like a shell) fork and execute the program of interest.

The amd-dbgapi target watches native inferiors, and "attaches" to them using amd_dbgapi_process_attach, which gives it a notifier fd that is registered in the event loop (see enable_amd_dbgapi). Note that this isn't the same "attach" as in PTRACE_ATTACH, but being ptrace-attached is a precondition for amd_dbgapi_process_attach to work. When the debugged process enables the ROCm runtime, the amd-dbgapi target gets notified through that fd, and pushes itself on the target stack of the inferior. The amd-dbgapi target is then able to intercept target_ops calls. If the debugged process disables the ROCm runtime, the amd-dbgapi target unpushes itself from the target stack. This way, the amd-dbgapi target's footprint stays minimal when debugging a process that doesn't use the AMD ROCm platform; it does not intercept target calls.

The amd-dbgapi library is found using pkg-config. Since enabling support for the amdgpu architecture (amdgpu-tdep.c) depends on the amd-dbgapi library being present, we have the following logic for the interaction with --target and --enable-targets:

- if the user explicitly asks for amdgcn support with --target=amdgcn-*-* or --enable-targets=amdgcn-*-*, we probe for the amd-dbgapi and fail if not found
- if the user uses --enable-targets=all, we probe for amd-dbgapi, enable amdgcn support if found, disable amdgcn support if not found
- if the user uses --enable-targets=all and --with-amd-dbgapi=yes, we probe for amd-dbgapi, enable amdgcn if found and fail if not found
- if the user uses --enable-targets=all and --with-amd-dbgapi=no, we do not probe for amd-dbgapi, disable amdgcn support
- otherwise, amd-dbgapi is not probed for and support for amdgcn is not enabled

Finally, a simple test is included. It only tests hitting a breakpoint in device code and resuming execution, pretty much like the example shown above.

[1] https://docs.amd.com/category/ROCm_v5.4
[2] https://docs.amd.com/bundle/HIP-Programming-Guide-v5.4
[3] https://en.wikipedia.org/wiki/Single_instruction,_multiple_threads
[4] https://docs.amd.com/bundle/ROCDebugger-API-Guide-v5.4

Change-Id: I591edca98b8927b1e49e4b0abe4e304765fed9ee
Co-Authored-By: Zoran Zaric <zoran.zaric@amd.com>
Co-Authored-By: Laurent Morichetti <laurent.morichetti@amd.com>
Co-Authored-By: Tony Tye <Tony.Tye@amd.com>
Co-Authored-By: Lancelot SIX <lancelot.six@amd.com>
Co-Authored-By: Pedro Alves <pedro@palves.net>
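As an editor-added aside, the ptid scheme described in this commit message can be sketched as follows. The type and function names here (ptid_sketch, ptid_is_gpu_sketch) are hypothetical stand-ins, not the actual gdb declarations; the real ptid_is_gpu may differ in details:

```cpp
#include <cstdint>

/* Hypothetical stand-in for gdb's ptid type.  */
struct ptid_sketch
{
  int pid;            /* Process id.  */
  long lwp;           /* Lightweight process (thread) id.  */
  std::uint64_t tid;  /* Wave id, for GPU threads.  */
};

/* GPU waves get ptids of the form (pid, 1, wave-id).  On Linux,
   lwp == 1 with pid != 1 cannot occur for a real host thread, since
   lwp 1 belongs to init (which has pid 1), so that combination can be
   reserved for GPU waves.  */
static bool
ptid_is_gpu_sketch (const ptid_sketch &ptid)
{
  return ptid.pid != 1 && ptid.lwp == 1;
}
```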
* gdb: make user-created frames reinflatable (Simon Marchi, 2023-01-20; 1 file changed, -0/+1)

This patch teaches frame_info_ptr to reinflate user-created frames (frames created through create_new_frame, with the "select-frame view" command).

Before this patch, frame_info_ptr doesn't support reinflating user-created frames, because it currently reinflates by getting the current target frame (for frame 0) or frame_find_by_id (for other frames). To reinflate a user-created frame, we need to call create_new_frame, to make it look up an existing user-created frame, or otherwise create one. So, in prepare_reinflate, get the frame id even if the frame has level 0, if it is user-created. In reinflate, if the saved frame id is for a user-created frame, call create_new_frame.

In order to test this, I initially enhanced the gdb.base/frame-view.exp test added by the previous patch by setting a pretty-printer for the type of the function parameters, in which we do an inferior call. This causes print_frame_args to not reinflate its frame (which is a user-created one) properly. On one machine (my Arch Linux one), it properly catches the bug, as the frame is not correctly restored after printing the first parameter, so it messes up the second parameter:

    frame
    #0  baz (z1=hahaha, z2=<error reading variable: frame address is not available.>) at /home/simark/src/binutils-gdb/gdb/testsuite/gdb.base/frame-view.c:40
    40        return z1.m + z2.n;
    (gdb) FAIL: gdb.base/frame-view.exp: with_pretty_printer=true: frame
    frame
    #0  baz (z1=hahaha, z2=<error reading variable: frame address is not available.>) at /home/simark/src/binutils-gdb/gdb/testsuite/gdb.base/frame-view.c:40
    40        return z1.m + z2.n;
    (gdb) FAIL: gdb.base/frame-view.exp: with_pretty_printer=true: frame again

However, on another machine (my Ubuntu 22.04 one), it just passes fine, without the appropriate fix. I then thought about writing a selftest for that, which is more reliable. I left the gdb.base/frame-view.exp pretty printer test there; it's already written, and we never know, it might catch some unrelated issue some day.

Change-Id: I5849baf77991fc67a15bfce4b5e865a97265b386
Reviewed-By: Bruno Larsen <blarsen@redhat.com>
* gdb: move frame_info_ptr to frame.{c,h} (Simon Marchi, 2023-01-20; 1 file changed, -1/+0)

A patch later in this series will make frame_info_ptr access some fields internal to frame_info, which we don't want to expose outside of frame.c. Move the frame_info_ptr class to frame.h, and the definitions to frame.c. Remove frame-info.c and frame-info.h.

Change-Id: Ic5949759e6262ea0da6123858702d48fe5673fea
Reviewed-By: Bruno Larsen <blarsen@redhat.com>
* Initial implementation of Debugger Adapter Protocol (Tom Tromey, 2023-01-02; 1 file changed, -0/+1)

The Debugger Adapter Protocol is a JSON-RPC protocol that IDEs can use to communicate with debuggers. You can find more information here:

    https://microsoft.github.io/debug-adapter-protocol/

Frequently this is implemented as a shim, but it seemed to me that GDB could implement it directly, via the Python API. This patch is the initial implementation.

DAP is implemented as a new "interp". This is slightly weird, because it doesn't act like an ordinary interpreter -- for example it doesn't implement a command syntax, and doesn't use GDB's ordinary event loop. However, this seemed like the best approach overall.

To run GDB in this mode, use:

    gdb -i=dap

The DAP code will accept JSON-RPC messages on stdin and print responses to stdout. GDB redirects the inferior's stdout to a new pipe so that output can be encapsulated by the protocol.

The Python code uses multiple threads to do its work. Separate threads are used for reading JSON from the client and for writing JSON to the client. All GDB work is done in the main thread. (The first implementation used asyncio, but this had some limitations, and so I rewrote it to use threads instead.)

This is not a complete implementation of the protocol, but it does implement enough to demonstrate that the overall approach works.

There is a rudimentary test suite. It uses a JSON parser written in pure Tcl. This parser is under the same license as Tcl itself, so I felt it was acceptable to simply import it into the tree.

There is also a bit of documentation -- just documenting the new interpreter name.
* Update copyright year range in header of all files managed by GDB (Joel Brobecker, 2023-01-01; 1 file changed, -1/+1)

This commit is the result of running the gdb/copyright.py script, which automates updating the copyright year range of all source files managed by the GDB project to include year 2023.
* Use toplevel configure for GMP and MPFR for gdb (Andrew Pinski, 2022-12-21; 1 file changed, -7/+5)

This patch uses the toplevel configure parts for GMP/MPFR for gdb. The only thing is that gdb now requires MPFR for building. Before, it was a recommended but not required library. Also this allows building of GMP and MPFR from the toplevel directory, just like how it is done for GCC. The toplevel configure now errors out if the version of GMP or MPFR is wrong.

OK after GDB 13 branches?

Built gdb 3 ways:
- with GMP and MPFR in the toplevel (static library used at that point for both)
- with only MPFR in the toplevel (GMP distro library used and MPFR built from source)
- with neither GMP nor MPFR in the toplevel (distro libraries used)

Changes from v1:
* Updated gdb/README and gdb/doc/gdb.texinfo.
* Regenerated using unmodified autoconf-2.69

Thanks,
Andrew Pinski

ChangeLog:

    * Makefile.def: Add configure-gdb dependencies on all-gmp and all-mpfr.
    * configure.ac: Split out MPC checking from MPFR. Require GMP and MPFR if the gdb directory exist.
    * Makefile.in: Regenerate.
    * configure: Regenerate.

gdb/ChangeLog:

    PR bug/28500
    * configure.ac: Remove AC_LIB_HAVE_LINKFLAGS for gmp and mpfr. Use GMPLIBS and GMPINC which is provided by the toplevel configure.
    * Makefile.in (LIBGMP, LIBMPFR): Remove.
    (GMPLIBS, GMPINC): Add definition.
    (INTERNAL_CFLAGS_BASE): Add GMPINC.
    (CLIBS): Exchange LIBMPFR and LIBGMP for GMPLIBS.
    * target-float.c: Make the code conditional on HAVE_LIBMPFR unconditional.
    * top.c: Remove code checking HAVE_LIBMPFR.
    * configure: Regenerate.
    * config.in: Regenerate.
    * README: Update GMP/MPFR section of the config options.
    * doc/gdb.texinfo: Likewise.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28500
* Fix install-strip target (Hannes Domani, 2022-12-20; 1 file changed, -2/+2)

The libtool patch broke install-strip of gdb:

    /bin/sh ../../gdb/../mkinstalldirs /src/gdb/inst/share/gdb/python/gdb
    transformed_name=`t='s,y,y,'; \
      echo gdb | sed -e "$t"` ; \
    if test "x$transformed_name" = x; then \
      transformed_name=gdb ; \
    else \
      true ; \
    fi ; \
    /bin/sh ../../gdb/../mkinstalldirs /src/gdb/inst/bin ; \
    /bin/sh ./libtool --mode=install STRIPPROG='strip' /bin/sh /src/gdb/gdb.git/install-sh -c -s \
      gdb \
      /src/gdb/inst/bin/$transformed_name ; \
    /bin/sh ../../gdb/../mkinstalldirs /src/gdb/inst/include/gdb ; \
    /usr/bin/install -c -m 644 jit-reader.h /src/gdb/inst/include/gdb/jit-reader.h
    libtool: install: `/src/gdb/inst/bin/gdb' is not a directory
    libtool: install: Try `libtool --help --mode=install' for more information.

Since INSTALL_PROGRAM_ENV is no longer at the beginning of the command, the gdb executable is not installed with install-strip.
* gdb: move frame_info_ptr method implementations to frame-info.c (Simon Marchi, 2022-11-10; 1 file changed, -0/+1)

I don't see any particular reason why the implementations of the frame_info_ptr object are in the header file. It only seems to add some complexity. Since we can't include frame.h in frame-info.h, we have to add declarations of functions defined in frame.c, in frame-info.h. By moving the implementations to a new frame-info.c, we can avoid that.

Change-Id: I435c828f81b8a3392c43ef018af31effddf6be9c
Reviewed-By: Bruno Larsen <blarsen@redhat.com>
Reviewed-By: Tom Tromey <tom@tromey.com>
* Silence libtool during link (Tom Tromey, 2022-11-07; 1 file changed, -1/+1)

The switch to linking with libtool now shows a very long link line even when V=0. This patch arranges to silence libtool in this situation.

Approved-By: Simon Marchi <simon.marchi@efficios.com>
* gdb: link executables with libtool (Jose E. Marchesi, 2022-11-07; 1 file changed, -5/+9)

This patch changes the GDB build system in order to use libtool to link the several built executables. This makes it possible to refer to libtool libraries (.la files) in CLIBS.

As an application of the above:
- BFD now refers to ../libbfd/libbfd.la
- OPCODES now refers to ../opcodes/libopcodes.la
- LIBBACKTRACE_LIB now refers to ../libbacktrace/libbacktrace.la
- LIBCTF now refers to ../libctf/libctf.la

NOTE1: The addition of libtool adds a few new configure-time options to GDB. Among these, --enable-shared and --disable-shared, which were previously ignored. Now GDB shall honor these options when linking, picking up the right version of the referred libtool libraries automagically.

NOTE2: I have not tested the insight build.

NOTE3: For regenerating configure I used an environment with Autoconf 2.69 and Automake 1.15.1. This should match the previously used version as announced in the configure script.

NOTE4: Now the installed shared objects libbfd.so, libopcodes.so and libctf.so are used by gdb if binutils is installed with --enable-shared.

Testing performed:
- --enable-shared and --disable-shared (the default in binutils) work as expected: the linked executables link with the archive or shared libraries transparently.
- Makefile.in modified for EXEEXT = .exe. It installs the binaries just fine. The installed gdb.exe runs fine.
- Native build regtested in x86_64. No regressions found.
- Cross build for aarch64-linux-gnu built to exercise program_transform_name and friends. The installed aarch64-linux-gnu-gdb runs fine.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29372
Approved-By: Simon Marchi <simon.marchi@efficios.com>
* binutils, gdb: support zstd compressed debug sections (Fangrui Song, 2022-09-26; 1 file changed, -2/+6)

PR29397 PR29563: Add new configure option --with-zstd which defaults to auto. If pkgconfig/libzstd.pc is found, define HAVE_ZSTD and support zstd compressed debug sections for most tools.

* bfd: for addr2line, objdump --dwarf, gdb, etc
* gas: support --compress-debug-sections=zstd
* ld: support ELFCOMPRESS_ZSTD input and --compress-debug-sections=zstd
* objcopy: support ELFCOMPRESS_ZSTD input for --decompress-debug-sections and --compress-debug-sections=zstd
* gdb: support ELFCOMPRESS_ZSTD input. The bfd change references zstd symbols, so gdb has to link against -lzstd in this patch.

If zstd is not supported, ELFCOMPRESS_ZSTD input triggers an error. We can avoid HAVE_ZSTD if binutils-gdb imports zstd/ like zlib/, but this is too heavyweight, so don't do it for now.

```
% ld/ld-new a.o
ld/ld-new: a.o: section .debug_abbrev is compressed with zstd, but BFD is not built with zstd support
...
% ld/ld-new a.o --compress-debug-sections=zstd
ld/ld-new: --compress-debug-sections=zstd: ld is not built with zstd support
% binutils/objcopy --compress-debug-sections=zstd a.o b.o
binutils/objcopy: --compress-debug-sections=zstd: binutils is not built with zstd support
% binutils/objcopy b.o --decompress-debug-sections
binutils/objcopy: zstd.o: section .debug_abbrev is compressed with zstd, but BFD is not built with zstd support
...
```
* Rewrite registry.h (Tom Tromey, 2022-07-28; 1 file changed, -1/+0)

This rewrites registry.h, removing all the macros and replacing it with relatively ordinary template classes. The result is less code than the previous setup. It replaces large macros with a relatively straightforward C++ class, and now manages its own cleanup.

The existing type-safe "key" class is replaced with the equivalent template class. This approach ended up requiring relatively few changes to the users of the registry code in gdb -- code using the key system just required a small change to the key's declaration.

All existing users of the old C-like API are now converted to use the type-safe API. This mostly involved changing explicit deletion functions to be an operator() in a deleter class.

The old "save/free" two-phase process is removed, and replaced with a single "free" phase. No existing code used both phases. The old "free" callbacks took a parameter for the enclosing container object. However, this wasn't truly needed and is removed here as well.
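As an editor-added aside, the "typed key plus deleter class" pattern the message describes can be sketched roughly as follows. All names here (objfile_sketch, my_data, my_data_key) are hypothetical and this only mirrors the shape of the idea, not gdb's actual registry API:

```cpp
#include <memory>
#include <unordered_map>

struct objfile_sketch { };   // Stand-in for the container object.

struct my_data
{
  int hits = 0;              // Arbitrary per-container client data.
};

/* The old C-style "free" callback becomes an operator() in a deleter.  */
struct my_data_deleter
{
  void operator() (my_data *p) const { delete p; }
};

/* A toy "registry key": data of type my_data attached to an objfile,
   owned and cleaned up through the deleter above.  */
class my_data_key
{
public:
  my_data *get (objfile_sketch *obj)
  {
    auto it = m_storage.find (obj);
    return it == m_storage.end () ? nullptr : it->second.get ();
  }

  my_data *emplace (objfile_sketch *obj)
  {
    auto &slot = m_storage[obj];
    slot = std::unique_ptr<my_data, my_data_deleter> (new my_data);
    return slot.get ();
  }

private:
  std::unordered_map<objfile_sketch *,
                     std::unique_ptr<my_data, my_data_deleter>> m_storage;
};
```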
* struct packed: Unit tests and more operators (Pedro Alves, 2022-07-25; 1 file changed, -0/+1)

For PR gdb/29373, I wrote an alternative implementation of struct packed that uses a gdb_byte array for internal representation, needed for mingw+clang. While adding that, I wrote some unit tests to make sure both implementations behave the same. While at it, I implemented all relational operators.

This commit adds said unit tests and relational operators. The alternative gdb_byte array implementation will come next.

Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29373
Change-Id: I023315ee03622c59c397bf4affc0b68179c32374
* [AArch64] MTE corefile support (Luis Machado, 2022-07-19; 1 file changed, -0/+1)

Teach GDB how to dump memory tags for AArch64 when using the gcore command and how to read memory tag data back from a core file generated by GDB (via gcore) or by the Linux kernel. The format is documented in the Linux Kernel documentation [1].

Each tagged memory range (listed in /proc/<pid>/smaps) gets dumped to its own PT_AARCH64_MEMTAG_MTE segment. A section named ".memtag" is created for each of those segments when reading the core file back.

To save a little bit of space, given MTE tags only take 4 bits, the memory tags are stored packed as 2 tags per byte. When reading the data back, the tags are unpacked.

I've added a new testcase to exercise the feature.

Build-tested with --enable-targets=all and regression tested on aarch64-linux Ubuntu 20.04.

[1] Documentation/arm64/memory-tagging-extension.rst (Core Dump Support)
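As an editor-added aside, the "2 tags per byte" packing the message mentions works out roughly like the sketch below. The function names are hypothetical, and which nibble holds the first tag is an assumption here, not taken from the patch:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

/* Pack 4-bit MTE tags two per byte (assumed: first tag in the low
   nibble, second tag in the high nibble).  */
static std::vector<std::uint8_t>
pack_tags_sketch (const std::vector<std::uint8_t> &tags)
{
  std::vector<std::uint8_t> packed;
  for (std::size_t i = 0; i < tags.size (); i += 2)
    {
      std::uint8_t b = tags[i] & 0xf;
      if (i + 1 < tags.size ())
        b |= (tags[i + 1] & 0xf) << 4;
      packed.push_back (b);
    }
  return packed;
}

/* Reverse operation, performed when reading the core file back.  */
static std::vector<std::uint8_t>
unpack_tags_sketch (const std::vector<std::uint8_t> &packed)
{
  std::vector<std::uint8_t> tags;
  for (std::uint8_t b : packed)
    {
      tags.push_back (b & 0xf);
      tags.push_back (b >> 4);
    }
  return tags;
}
```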
* gdb/python: implement the print_insn extension language hook (Andrew Burgess, 2022-06-15; 1 file changed, -0/+1)

This commit extends the Python API to include disassembler support.

The motivation for this commit was to provide an API by which the user could write Python scripts that would augment the output of the disassembler.

To achieve this I have followed the model of the existing libopcodes disassembler, that is, instructions are disassembled one by one. This does restrict the type of things that it is possible to do from a Python script, i.e. all additional output has to fit on a single line, but this was all I needed, and creating something more complex would, I think, require greater changes to how GDB's internal disassembler operates.

The disassembler API is contained in the new gdb.disassembler module, which defines the following classes:

DisassembleInfo
    Similar to libopcodes disassemble_info structure, has read-only properties: address, architecture, and progspace. And has methods: __init__, read_memory, and is_valid. Each time GDB wants an instruction disassembled, an instance of this class is passed to a user written disassembler function; by reading the properties, and calling the methods (and other support methods in the gdb.disassembler module), the user can perform and return the disassembly.

Disassembler
    This is a base-class which user written disassemblers should inherit from. This base class provides base implementations of __init__ and __call__ which the user written disassembler should override.

DisassemblerResult
    This class can be used to hold the result of a call to the disassembler; it's really just a wrapper around a string (the text of the disassembled instruction) and a length (in bytes). The user can return an instance of this class from Disassembler.__call__ to represent the newly disassembled instruction.

The gdb.disassembler module also provides the following functions:

register_disassembler
    This function registers an instance of a Disassembler sub-class as a disassembler, either for one specific architecture, or, as a global disassembler for all architectures.

builtin_disassemble
    This provides access to GDB's builtin disassembler. A common use case that I see is augmenting the existing disassembler output. The user code can call this function to have GDB disassemble the instruction in the normal way. The user gets back a DisassemblerResult object, which they can then read in order to augment the disassembler output in any way they wish. This function also provides a mechanism to intercept the disassembler's reads of memory, thus the user can adjust what GDB sees when it is disassembling.

The included documentation provides a more detailed description of the API.

There is also a new CLI command added:

    maint info python-disassemblers

This command is defined in the Python gdb.disassemblers module, and can be used to list the currently registered Python disassemblers.
* Move 64-bit BFD files from ALL_TARGET_OBS to ALL_64_TARGET_OBS (Luis Machado, 2022-05-30; 1 file changed, -6/+7)

Doing a 32-bit build with "--enable-targets=all --disable-sim" fails to link properly:

    loongarch-tdep.o: In function `loongarch_gdbarch_init':
    binutils-gdb/gdb/loongarch-tdep.c:443: undefined reference to `loongarch_r_normal_name'
    loongarch-tdep.o: In function `loongarch_fetch_instruction':
    binutils-gdb/gdb/loongarch-tdep.c:37: undefined reference to `loongarch_insn_length'
    loongarch-tdep.o: In function `loongarch_scan_prologue(gdbarch*, unsigned long long, unsigned long long, frame_info*, trad_frame_cache*) [clone .isra.4]':
    binutils-gdb/gdb/loongarch-tdep.c:87: undefined reference to `loongarch_insn_length'
    binutils-gdb/gdb/loongarch-tdep.c:88: undefined reference to `loongarch_decode_imm'
    binutils-gdb/gdb/loongarch-tdep.c:89: undefined reference to `loongarch_decode_imm'
    binutils-gdb/gdb/loongarch-tdep.c:90: undefined reference to `loongarch_decode_imm'
    binutils-gdb/gdb/loongarch-tdep.c:91: undefined reference to `loongarch_decode_imm'
    binutils-gdb/gdb/loongarch-tdep.c:92: undefined reference to `loongarch_decode_imm'

Given the list of 64-bit BFD files in opcodes/Makefile.am:TARGET64_LIBOPCODES_CFILES, it looks like GDB's ALL_TARGET_OBS list is including files that should be included in ALL_64_TARGET_OBS instead.

This patch accomplishes this and enables a 32-bit build with "--enable-targets=all --disable-sim" to complete. Moving the bpf, tilegx and loongarch files to the correct list means GDB can find the correct disassembler function instead of finding a null pointer.

We still need the "--disable-sim" switch (or "--enable-64-bit-bfd") to make a 32-bit build with "--enable-targets=all" complete correctly.
* Move "catch load" to a new fileTom Tromey2022-04-291-0/+1
| | | | | | | | | | The "catch load" code is reasonably self-contained, and so this patch moves it out of breakpoint.c and into a new file, break-catch-load.c. One function from breakpoint.c, print_solib_event, now has to be exposed, but this seems pretty reasonable.
* gdbsupport: add path_join function (Simon Marchi, 2022-04-21; 1 file changed, -0/+1)

In this review [1], Eli pointed out that we should be careful when concatenating file names to avoid duplicated slashes. On Windows, a double slash at the beginning of a file path has a special meaning. So naively concatenating "/" and "foo/bar" would give "//foo/bar", which would not give the desired results. We already have a few spots doing:

    if (first_path ends with a slash)
      path = first_path + second_path
    else
      path = first_path + slash + second_path

In general, I think it's nice to avoid superfluous slashes in file paths, since they might end up visible to the user and look a bit unprofessional.

Introduce the path_join function that can be used to join multiple path components together (along with unit tests).

I initially wanted to make it possible to join two absolute paths, to support the use case of prepending a sysroot path to a target file path, or prepending the debug-file-directory to a target file path. But the code in solib_find_1 shows that it is more complex than this anyway (for example, when the right hand side is a Windows path with a drive letter). So I don't think we need to support that case in path_join. That also keeps the implementation simpler.

Change a few spots to use path_join to show how it can be used. I believe that all the spots I changed are guarded by some checks that ensure the right hand side operand is not an absolute path.

Regression-tested on Ubuntu 18.04. Build-tested on Windows, and I also ran the new unit-test there.

[1] https://sourceware.org/pipermail/gdb-patches/2022-April/187559.html

Change-Id: I0df889f7e3f644e045f42ff429277b732eb6c752
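As an editor-added aside, the behaviour being described can be sketched as below. This is a simplified stand-in, not the actual gdbsupport signature (the real path_join differs, e.g. in how it takes its arguments and how it treats absolute right-hand components):

```cpp
#include <initializer_list>
#include <string>

/* Join path components with exactly one '/' between them, avoiding
   the duplicated-slash problem described above.  */
static std::string
path_join_sketch (std::initializer_list<std::string> components)
{
  std::string result;
  for (const std::string &c : components)
    {
      if (!result.empty () && result.back () != '/')
        result += '/';
      result += c;
    }
  return result;
}

/* path_join_sketch ({"/usr", "local/", "include"}) == "/usr/local/include" */
```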
* Move target_read_string to target/target.c (Tom Tromey, 2022-04-14; 1 file changed, -1/+1)

This moves the two overloads of target_read_string to a new file, target/target.c, and updates both gdb and gdbserver to build this.
* Introduce the new DWARF index class (Tom Tromey, 2022-04-12; 1 file changed, -0/+2)

This patch introduces the new DWARF index class. It is called "cooked" to contrast against a "raw" index, which is mapped from disk without extra effort. Nothing constructs a cooked index yet.

The essential idea here is that index entries are created via the "add" method; then when all the entries have been read, they are "finalize"d -- name canonicalization is performed and the entries are added to a sorted vector.

Entries use the DWARF name (DW_AT_name) or linkage name, not the full name as is done for partial symbols. These two facets -- the short name and the deferred canonicalization -- help improve the performance of this approach. This will become clear in later patches, when parallelization is added.

Some special code is needed for Ada, because GNAT only emits mangled ("encoded", in the Ada lingo) names, and so we reconstruct the hierarchical structure after the fact. This is also done in the finalization phase.

One other aspect worth noting is that the way the "main" function is found is different in the new code. Currently gdb will notice DW_AT_main_subprogram, but won't recognize "main" during reading -- this is done later, via explicit symbol lookup. This is done differently in the new code so that finalization can be done in the background without then requiring a synchronization to look up the symbol.
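As an editor-added aside, the add-then-finalize flow described above can be sketched roughly like this. The type names and the trivial "canonicalization" are illustrative placeholders, not the actual cooked-index types:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

struct index_entry_sketch
{
  std::string name;        // DW_AT_name or linkage name as read.
  std::string canonical;   // Filled in during finalization.
};

struct cooked_index_sketch
{
  std::vector<index_entry_sketch> entries;

  /* Called while scanning DIEs; cheap, no canonicalization yet.  */
  void add (std::string name)
  {
    entries.push_back ({std::move (name), {}});
  }

  /* Called once, after all CUs have been read: canonicalize names and
     sort, which is what allows the scan itself to be parallelized.  */
  void finalize ()
  {
    for (index_entry_sketch &e : entries)
      e.canonical = e.name;   // Placeholder; real code canonicalizes.
    std::sort (entries.begin (), entries.end (),
               [] (const index_entry_sketch &a, const index_entry_sketch &b)
               { return a.canonical < b.canonical; });
  }
};
```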
* Introduce DWARF abbrev cache (Tom Tromey, 2022-04-12; 1 file changed, -0/+1)

The replacement for the DWARF psymbol reader works in a somewhat different way. The current reader reads and stores all the DIEs that might be interesting. Then, if it is missing a DIE, it re-scans the CU and reads them all. This approach is used for both intra- and inter-CU references.

I instrumented the partial DIE hash to see how frequently it was used:

    [ 0] -> 1538165
    [ 1] -> 4912
    [ 2] -> 96102
    [ 3] -> 175
    [ 4] -> 244

That is, most DIEs are never used, and some are looked up twice -- but this is just an artifact of the implementation of partial_die_info::fixup, which may do two lookups.

Based on this, the new implementation doesn't try to store any DIEs, but instead just re-scans them on demand. In order to do this, though, it is convenient to have a cache of DWARF abbrevs. This way, if a second CU is needed to resolve an inter-CU reference, the abbrevs for that CU need only be computed a single time.
* Add name splitting (Tom Tromey, 2022-04-12; 1 file changed, -0/+2)

The new DWARF index code works by keeping names pre-split. That is, rather than storing a symbol name like "a::b::c", the names "a", "b", and "c" will be stored separately. This patch introduces some helper code to split a full name into its components.
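As an editor-added aside, the splitting described above amounts to something like the following sketch. The function name is hypothetical, and the real helper has to handle more than plain "::" separators (templates, operators, other languages):

```cpp
#include <cstddef>
#include <string>
#include <vector>

/* Split "a::b::c" into the components "a", "b", "c".  */
static std::vector<std::string>
split_name_sketch (const std::string &full)
{
  std::vector<std::string> components;
  std::size_t start = 0;
  while (true)
    {
      std::size_t sep = full.find ("::", start);
      if (sep == std::string::npos)
        {
          components.push_back (full.substr (start));
          break;
        }
      components.push_back (full.substr (start, sep - start));
      start = sep + 2;
    }
  return components;
}
```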
* gdb: move gdb_disassembly_flag into a new disasm-flags.h file (Andrew Burgess, 2022-04-06; 1 file changed, -0/+1)

While working on the disassembler I was getting frustrated. Every time I touched disasm.h it seemed like every file in GDB would need to be rebuilt. Surely the disassembler can't be required by that many parts of GDB, right? Turns out that disasm.h is included in target.h, so pretty much every file was being rebuilt!

The only thing from disasm.h that target.h needed is the gdb_disassembly_flag enum, as this is part of the target_ops api. In this commit I move gdb_disassembly_flag into its own file. This is then included in target.h and disasm.h, after which the number of files that depend on disasm.h is much reduced.

I also audited all the other includes of disasm.h and found that the includes in mep-tdep.c and python/py-registers.c are no longer needed, so I've removed these.

Now, after changing disasm.h, GDB rebuilds much quicker. There should be no user visible changes after this commit.
* gdb/Makefile.in: move ALLDEPFILES earlier in Makefile.in (Andrew Burgess, 2022-04-03; 1 file changed, -218/+229)

If I do 'make tags' in the gdb build directory, the tags target does complete, but I see these warnings:

    ../../src/gdb/arm.c: No such file or directory
    ../../src/gdb/arm-get-next-pcs.c: No such file or directory
    ../../src/gdb/arm-linux.c: No such file or directory

The reason for this is the ordering of build rules and make variables in gdb/Makefile.in, specifically, the placement of the tags related rules and the ALLDEPFILES variable. The ordering is like this:

    TAGFILES_NO_SRCDIR = .... $(ALLDEPFILES) ....

    TAGS: $(TAGFILES_NO_SRCDIR) ....
            # Recipe uses $(TAGFILES_NO_SRCDIR)

    tags: TAGS

    ALLDEPFILES = .....

When the TAGS rule is parsed, TAGFILES_NO_SRCDIR is expanded, which then expands ALLDEPFILES, which, at that point in the Makefile is undefined, and so expands to the empty string. As a result TAGS does not depend on any file listed in ALLDEPFILES. However, when the TAGS recipe is invoked, ALLDEPFILES is now defined. As a result, all the files in ALLDEPFILES are passed to the etags program.

The ALLDEPFILES references three files, arm.c, arm-get-next-pcs.c, and arm-linux.c, which are actually in the gdb/arch/ directory, but in ALLDEPFILES these files don't include the arch/ prefix. As a result, the etags program ends up looking for these files in the wrong location.

As ALLDEPFILES is only used by the TAGS rule, this mistake was not previously noticed (the TAGS rule itself was broken until a recent commit).

In this commit I make two changes. First, I move ALLDEPFILES to be defined before TAGFILES_NO_SRCDIR; this means that the TAGS rule will depend on all the files in ALLDEPFILES. With this change the TAGS rule now breaks complaining that there's no rule to build the 3 files mentioned above. Next, I have added all *.c files in gdb/arch/ to ALLDEPFILES, including their arch/ prefix, and removed the incorrect (missing arch/ prefix) references.

With these two changes the TAGS (or tags if you prefer) target now builds without any errors or warnings.
* gdb/Makefile.in: fix 'make tags' build target (Reuben Thomas, 2022-04-03; 1 file changed, -1/+0)

The gdb_select.h file was moved to the gdbsupport directory long ago, but a reference was accidentally left in gdb/Makefile.in (in the HFILES_NO_SRCDIR variable); this commit removes that reference.

Before this commit, if I use 'make tags', here's what I see:

    $ make tags
    make: *** No rule to make target 'gdb_select.h', needed by 'TAGS'.  Stop.

After this commit 'make tags' completes, but I still see these warnings:

    ../../src/gdb/arm.c: No such file or directory
    ../../src/gdb/arm-get-next-pcs.c: No such file or directory
    ../../src/gdb/arm-linux.c: No such file or directory

These are caused by a separate issue, and will be addressed in the next commit.
* gdb/Makefile.in: remove SOURCES variable (Andrew Burgess, 2022-04-03; 1 file changed, -1/+0)

The SOURCES variable was added to gdb/Makefile.in as part of commit:

    commit fb40c20903110ed8af9701ce7c2635abd3770d52
    Date:   Wed Feb 23 00:25:43 2000 +0000

        Add mi/ and testsuite/gdb.mi/ subdirectories.

But as far as I can tell it was not used at the time it was added, and is not used today. Let's remove it.
* gdb: Remove support for S+core (Pedro Alves, 2022-03-17; 1 file changed, -3/+0)

GCC removed support for score back in 2014 already. Back then, we basically agreed about removing it from GDB too, but it ended up being forgotten. See:

    https://sourceware.org/pipermail/gdb/2014-October/044643.html

Following through this time around.

Change-Id: I5b25a1ff7bce7b150d6f90f4c34047fae4b1f8b4
* gdb/python/mi: create MI commands using python (Andrew Burgess, 2022-03-14; 1 file changed, -0/+1)

This commit allows a user to create custom MI commands using Python, similarly to what is possible for Python CLI commands.

A new subclass of mi_command is defined for Python MI commands, mi_command_py. A new file, gdb/python/py-micmd.c, contains the logic for Python MI commands.

This commit is based on work linked to from this mailing list thread:

    https://sourceware.org/pipermail/gdb/2021-November/049774.html

Which has also been previously posted to the mailing list here:

    https://sourceware.org/pipermail/gdb-patches/2019-May/158010.html

And was recently reposted here:

    https://sourceware.org/pipermail/gdb-patches/2022-January/185190.html

The version in this patch takes some core code from the previously posted patches, but also has some significant differences, especially after the feedback given here:

    https://sourceware.org/pipermail/gdb-patches/2022-February/185767.html

A new MI command can be implemented in Python like this:

    class echo_args(gdb.MICommand):
        def invoke(self, args):
            return { 'args': args }

    echo_args("-echo-args")

The 'args' parameter (to the invoke method) is a list containing (almost) all command line arguments passed to the MI command (--thread and --frame are handled before the Python code is called, and removed from the args list). This list can be empty if the MI command was passed no arguments.

When used within gdb the above command produced output like this:

    (gdb) -echo-args a b c
    ^done,args=["a","b","c"]
    (gdb)

The 'invoke' method of the new command must return a dictionary. The keys of this dictionary are then used as the field names in the mi command output (e.g. 'args' in the above). The values of the result returned by invoke can be dictionaries, lists, iterators, or an object that can be converted to a string. These are processed recursively to create the mi output. And so, this is valid:

    class new_command(gdb.MICommand):
        def invoke(self, args):
            return { 'result_one': { 'abc': 123, 'def': 'Hello' },
                     'result_two': [ { 'a': 1, 'b': 2 },
                                     { 'c': 3, 'd': 4 } ] }

Which produces output like:

    (gdb) -new-command
    ^done,result_one={abc="123",def="Hello"},result_two=[{a="1",b="2"},{c="3",d="4"}]
    (gdb)

I have required that the field names used in mi result output must match the regexp "^[a-zA-Z][-_a-zA-Z0-9]*$" (without the quotes). This restriction was never written down anywhere before, but seems sensible to me, and we can always loosen this rule later if it proves to be a problem. It is much harder to try and add a restriction later, once people are already using the API.

What follows are some details about how this implementation differs from the original patch that was posted to the mailing list.

In this patch, I have changed how the lifetime of the Python gdb.MICommand objects is managed. In the original patch, these objects were kept alive by an owned reference within the mi_command_py object. As such, the Python object would not be deleted until the mi_command_py object itself was deleted. This caused a problem: the mi_command_py objects were held in the global mi command table (in mi/mi-cmds.c), which, as a global, was not cleared until program shutdown. By this point the Python interpreter has already been shut down. Attempting to delete the mi_command_py object at this point was causing GDB to try and invoke Python code after finalising the Python interpreter, and we would crash.

To work around this problem, the original patch added code in python/python.c that would search the mi command table, and delete the mi_command_py objects before the Python environment was finalised.

In contrast, in this patch, I have added a new global dictionary to the gdb module, gdb._mi_commands. We already have several such global data stores related to pretty printers, and frame unwinders. The MICommand objects are placed into the new gdb._mi_commands dictionary, and it is this reference that keeps the objects alive. When GDB's Python interpreter is shut down gdb._mi_commands is deleted, and any MICommand objects within it are deleted at this point.

This change avoids having to make the mi_cmd_table global, and walk over it from within GDB's python related code.

This patch handles command redefinition entirely within GDB's python code, though this does impose one small restriction which is not present in the original code (detailed below); I don't think this is a big issue. However, the original patch relied on being able to finish executing the mi_command::do_invoke member function after the mi_command object had been deleted. Though continuing to execute a member function after an object is deleted is well defined, it is also (IMHO) risky: it's too easy for someone to later add a use of the object without realising that the object might sometimes have been deleted. The new patch avoids this issue.

The one restriction that is added to avoid this is that an MICommand object can't be reinitialised with a different command name, so:

    (gdb) python cmd = MyMICommand("-abc")
    (gdb) python cmd.__init__("-def")
    can't reinitialize object with a different command name

This feels like a pretty weird edge case, and I'm happy to live with this restriction.

I have also changed how the memory is managed for the command name. In the most recently posted patch series, the command name is moved into a subclass of mi_command, the Python mi_command_py, which inherits from mi_command and is then free to use a smart pointer to manage the memory for the name.

In this patch, I leave the mi_command class unchanged, and instead hold the memory for the name within the Python object, as the lifetime of the Python object always exceeds the c++ object stored in the mi_cmd_table. This adds a little more complexity in py-micmd.c, but leaves the mi_command class nice and simple.

Next, this patch adds some extra functionality: there's a MICommand.name read-only attribute containing the name of the command, and a read-write MICommand.installed attribute that can be used to install (make the command available for use) and uninstall (remove the command from the mi_cmd_table so it can't be used) the command. This attribute will be automatically updated if a second command replaces an earlier command.

This patch adds additional error handling, and makes more use of the gdbpy_handle_exception function.

Co-Authored-By: Jan Vrany <jan.vrany@labware.com>
* Some "distclean" fixes in gdbTom Tromey2022-03-011-1/+1
| | | | | | | | | | | | | | | | | | PR build/12440 points out that "make distclean" is broken in gdb. Most of the breakage comes from other projects in the tree, but we can fix some of the issues, which is what this patch does. Note that the yacc output files, like c-exp.c, are left alone. In a source distribution, these are included in the tarball, and if the user builds in-tree, we would not want to remove them. While that seems a bit obscure, it seems to me that "distclean" is only really useful for in-tree builds anyway -- out-of-tree I simply delete the entire build directory and start over. Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=12440
* gdb: add operator+= and operator+ overload for std::string (Andrew Burgess, 2022-02-25; 1 file changed, -0/+1)

This commit adds operator+= and operator+ overloads for adding gdb::unique_xmalloc_ptr<char> to a std::string. I could only find 3 places in GDB where this was useful right now, and these all make use of operator+=.

I've also added a self test for gdb::unique_xmalloc_ptr<char>, which makes use of both operator+= and operator+, so they are both getting used/tested.

There should be no user visible changes after this commit, except when running 'maint selftest', where the new self test is visible.
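As an editor-added aside, the shape of such overloads is roughly as sketched below, using a stand-in for gdb::unique_xmalloc_ptr<char> (a unique_ptr whose deleter calls free); the actual gdb declarations may differ:

```cpp
#include <cstdlib>
#include <memory>
#include <string>

struct free_deleter
{
  void operator() (char *p) const { std::free (p); }
};

/* Stand-in for gdb::unique_xmalloc_ptr<char>.  */
using unique_malloc_str = std::unique_ptr<char, free_deleter>;

/* Append the held C string (if any) to a std::string.  */
static std::string &
operator+= (std::string &lhs, const unique_malloc_str &rhs)
{
  if (rhs != nullptr)
    lhs += rhs.get ();
  return lhs;
}

/* Concatenate, building on the operator+= above.  */
static std::string
operator+ (std::string lhs, const unique_malloc_str &rhs)
{
  lhs += rhs;
  return lhs;
}
```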
* gdb: LoongArch: Add Makefile, configure and NEWS (Tiezhu Yang, 2022-02-11; 1 file changed, -0/+8)

This commit adds Makefile, configure and NEWS for LoongArch.

Signed-off-by: Zhensong Liu <liuzhensong@loongson.cn>
Signed-off-by: Qing zhang <zhangqing@loongson.cn>
Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
* Fix flex rule in gdb (Tom Tromey, 2022-02-01; 1 file changed, -14/+13)

Currently, if flex fails, it will leave the resulting .c file in the tree. This will cause a cascade of errors, and requires the manual deletion of the .c file in order to recreate the problem. It's better for the rule to fail such that the .c file is not updated. This way, 'make' will fail the same way every time -- which is much handier for debugging syntax errors. This fix just updates the Makefile rule to follow the way that the "yacc" rule already works.
* Move "catch exec" to a new fileTom Tromey2022-01-181-0/+1
| | | | | | | | The "catch exec" code is reasonably self-contained, and so this patch moves it out of breakpoint.c (the second largest source file in gdb) and into a new file, break-catch-exec.c.
* Move "catch fork" to a new fileTom Tromey2022-01-181-0/+1
| | | | | | | | The "catch fork" code is reasonably self-contained, and so this patch moves it out of breakpoint.c (the second largest source file in gdb) and into a new file, break-catch-fork.c.
* Move gdb_regex to gdbsupport (Tom Tromey, 2022-01-18; 1 file changed, -2/+0)

This moves the gdb_regex convenience class to gdbsupport.
* Move gdb obstack code to gdbsupport (Tom Tromey, 2022-01-18; 1 file changed, -2/+0)

This moves the gdb-specific obstack code -- both extensions like obconcat and obstack_strdup, and things like auto_obstack -- to gdbsupport.
* Implement putstr and putstrn in ui_file (Tom Tromey, 2022-01-05; 1 file changed, -0/+1)

In my tour of the ui_file subsystem, I found that fputstr and fputstrn can be simplified. The _filtered forms are never used (and IMO unlikely to ever be used) and so can be removed. And, the interface can be simplified by removing a callback function and moving the implementation directly to ui_file.

A new self-test is included. Previously, I think nothing was testing this code.

Regression tested on x86-64 Fedora 34.
* Automatic Copyright Year update after running gdb/copyright.py (Joel Brobecker, 2022-01-01; 1 file changed, -1/+1)

This commit brings all the changes made by running gdb/copyright.py as per GDB's Start of New Year Procedure. For the avoidance of doubt, all changes in this commit were performed by the script.
* Move ordinary gdbarch code to arch-utils (Tom Tromey, 2021-12-17; 1 file changed, -1/+0)

While I think it makes sense to generate gdbarch.c, at the same time I think it is better for ordinary code to be editable in a C file -- not as a hunk of C code embedded in the generator. This patch moves this sort of code out of gdbarch.sh and gdbarch.c and into arch-utils.c, then has arch-utils.c include gdbarch.c.
* gdb: only include mips and riscv targets if building with 64-bit bfd (Andrew Burgess, 2021-12-13; 1 file changed, -11/+11)

While testing another patch I was trying to build different configurations of GDB, and during one test build I ran into a problem. I configured with `--enable-targets=all --host=i686-w64-mingw32` and saw this error while linking GDB:

    .../i686-w64-mingw32/bin/ld: mips-tdep.o: in function `mips_gdbarch_init':
    .../src/gdb/mips-tdep.c:8730: undefined reference to `disassembler_options_mips'
    .../i686-w64-mingw32/bin/ld: riscv-tdep.o: in function `riscv_gdbarch_init':
    .../src/gdb/riscv-tdep.c:3851: undefined reference to `disassembler_options_riscv'

So the `disassembler_options_mips` and `disassembler_options_riscv` symbols are missing. This turns out to be because mips-dis.c and riscv-dis.c, in which these symbols are defined, are in the TARGET64_LIBOPCODES_CFILES list in opcodes/Makefile.am; these files are only built when we are building with a 64-bit bfd.

If we look further, into bfd/Makefile.am, we can see that all the files matching elf*-riscv.lo are in the BFD64_BACKENDS list, as are the elf*-mips.lo files, and (I know because I tried) the two disassemblers do, not surprisingly, depend on features supplied from libbfd.

So, though we can build most of GDB just fine for riscv and mips with a 32-bit bfd, if I understand correctly, the final GDB executable (assuming we could get it to link) would not understand these architectures at the bfd level, nor would there be any disassembler available. This sounds like a pretty useless GDB to me.

So, in this commit, I move the riscv and mips targets into GDB's list of targets that require a 64-bit bfd. After this I can build GDB just fine with the configure options given above.

This was discussed on the mailing list in a couple of threads:

    https://sourceware.org/pipermail/gdb-patches/2021-December/184365.html
    https://sourceware.org/pipermail/binutils/2021-November/118498.html

and it is agreed that it is unfortunate that the 32-bit riscv and 32-bit mips targets require a 64-bit bfd. If in the future this situation ever changes then it would be expected that some (or all) of this patch would be reverted. Until then though, this patch allows GDB to build when configured with --enable-targets=all, and when using a 32-bit libbfd.