Commit message log
* BUG: Histogramdd breaks on big arrays in Windows
  Resolved by changing `int` to `np.intp` in numpy/lib/histograms.py
* Removed the binary files
* Update test_histograms.py
* Update test_histograms.py
* Update test_histograms.py
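For reference, a small (made-up) `histogramdd` call exercising the fixed code path. The fix matters because `np.intp` matches the platform pointer width, while the previously used C-level `int` is 32-bit on 64-bit Windows:

```python
import numpy as np

# np.intp matches the platform pointer width (8 bytes on most 64-bit
# systems), so bin-index arithmetic cannot overflow the way a 32-bit
# C int does on 64-bit Windows for very large inputs.
rng = np.random.default_rng(0)
sample = rng.normal(size=(1000, 2))
hist, edges = np.histogramdd(sample, bins=(5, 5))
assert hist.shape == (5, 5)
assert hist.sum() == 1000
assert len(edges) == 2
```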
BUG: Decrement ref count in gentype_reduce if allocated memory not used
TYP: Spelling alignment for array flag literal
BUG: Fix boundschecking for `random.logseries`
Logseries previously did not enforce the upper bound to be strictly
exclusive, which led to incorrect behavior.
The NOT_NAN check is removed, since it was never used: the current bounded
version always excludes NaNs.
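A quick illustration of the tightened bound (behavior of NumPy releases that contain this fix): `p == 1` is now rejected outright instead of producing incorrect draws:

```python
import numpy as np

rng = np.random.default_rng(0)
draws = rng.logseries(0.5, size=5)   # a valid p
assert (draws >= 1).all()            # log-series support starts at 1

try:
    rng.logseries(1.0)               # upper bound is strictly exclusive
    raised = False
except ValueError:
    raised = True
assert raised
```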
This ensures graceful handling of large header files. Unfortunately,
it may be a bit inconvenient for users, hence the new kwarg and the
work-around of also accepting allow-pickle.
See also the documentation here:
https://docs.python.org/3.10/library/ast.html#ast.literal_eval
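For context, `ast.literal_eval` only evaluates Python literals, so parsing a header with it cannot execute code (the header string below is made up for illustration):

```python
import ast

# A .npy-style header is a Python dict literal; literal_eval parses it
# safely, evaluating only literals and never executing code.
header = "{'descr': '<f8', 'fortran_order': False, 'shape': (3, 4)}"
meta = ast.literal_eval(header)
assert meta["shape"] == (3, 4)

# Non-literal expressions are refused outright.
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
    rejected = False
except ValueError:
    rejected = True
assert rejected
```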
This makes sure the test is not flaky, but the test now requires
a leak checker (either valgrind or a reference-count-based checker
should work, in CPython at least).
Closes gh-21169
Not new things, but in touched lines...
In some cases, the replacement is clearly not what is intended;
in those (where setup was called explicitly), I mostly renamed
`setup` to `_setup`.
The `test_ccompile_opt` case is a bit confusing, so I left it alone
for now (it will probably fail).
The aarch64 wheel build tests are failing with OOM. The new test for
complex128 dot on huge vectors is responsible, as the usable memory
is incorrectly determined and the check for sufficient memory fails.
The fix here is to define the `NPY_AVAILABLE_MEM="4 GB"` environment
variable before the test call in `cibw_test_command.sh`.
DOC: Update delimiter param description.
Explicitly state that only single-character delimiters
are supported.
TST,TYP: Bump mypy to 0.981
TYP,MAINT: Change more overloads to play nice with pyright
TYP,ENH: Mark ``numpy.typing`` protocols as runtime checkable
REV: Loosen ``lookfor``'s import try/except again
Some BaseExceptions (at least the Skipped exception that pytest uses) need
to be caught as well. It seems easiest to be practical and keep ignoring
almost all exceptions in this particular code path.
Effectively reverts parts of gh-19393
Closes gh-22345
Co-authored-by: Sebastian Berg <sebastianb@nvidia.com>
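The pattern being described, as a minimal sketch (not the actual NumPy code): pytest's Skipped derives from BaseException, so a plain `except Exception` would let it escape a best-effort code path:

```python
# Minimal sketch: a best-effort helper that must not let any error
# (including BaseException subclasses such as pytest's Skipped) escape.
def best_effort(func, default=None):
    try:
        return func()
    except BaseException:
        # Deliberately broad: this code path ignores almost everything.
        return default

assert best_effort(lambda: 42) == 42
assert best_effort(lambda: 1 / 0) is None
```

Note that such a broad catch also swallows things like KeyboardInterrupt, which is only acceptable in genuinely best-effort paths like this one.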
BUG: Fix complex vector dot with more than NPY_CBLAS_CHUNK elements
The iteration was simply using the wrong value; the larger value
might even work sometimes, but then we do another iteration, counting
the remaining elements twice.
Closes gh-22262
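The bug class can be sketched in pure Python (toy chunk size standing in for `NPY_CBLAS_CHUNK`): the remainder pass must use the residual element count, not the chunk size, or elements are counted twice:

```python
import numpy as np

CHUNK = 4  # toy stand-in for NPY_CBLAS_CHUNK

def chunked_dot(a, b, chunk=CHUNK):
    # Accumulate the dot product chunk by chunk, as the CBLAS wrapper
    # does for vectors too long for a single 32-bit-int BLAS call.
    total = 0.0 + 0.0j
    i = 0
    n = len(a)
    while n - i >= chunk:
        total += np.dot(a[i:i + chunk], b[i:i + chunk])
        i += chunk
    # Remainder: use the *remaining* count (n - i), not `chunk`.
    total += np.dot(a[i:], b[i:])
    return total

rng = np.random.default_rng(0)
a = rng.normal(size=10) + 1j * rng.normal(size=10)
b = rng.normal(size=10) + 1j * rng.normal(size=10)
assert np.allclose(chunked_dot(a, b), np.dot(a, b))
```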
xref gh-21431
* BUG, SIMD: Handle overflow errors
  Overflows for remainder/divmod/fmod
* If a type's minimum value is divided by -1, an overflow will be raised
  and the result will be set to the minimum
* Handle overflow and return 0 in case of mod-related operations
* TST: New tests for overflow in division operations
* SIMD: Removed cvtozero
  Co-authored-by: Rafael Cardoso Fernandes Sousa <rafaelcfsousa@ibm.com>
* TST: Removed eval | Fixed raises cases
* TST: Changed `raise` to `warns`
* Changed `raise` to `warns` and test for `RuntimeWarning`
* Added results check back
* TST: Add additional tests for division-by-zero and integer overflow
  This introduces a helper to iterate through "interesting" array cases
  that could maybe be used in other places. Keep the other test intact;
  it adds a check for mixed types (which is just casts currently, but
  cannot hurt) and is otherwise thorough.
* MAINT: Remove nested NPY_UNLIKELY in division paths
  I am not certain the unlikely cases make much sense to begin with,
  but they are certainly not helpful within an unlikely block.
* TST: Add unsigned integers to integer divide-by-zero test
* BUG: Added missing `NAME` key to loop generator
* BUG, SIMD: Handle division overflow errors
* If a type's minimum value is divided by -1, an overflow will be raised
  and the result will be set to the minimum
* TST: Modified tests to reflect new overflow
* Update numpy/core/src/umath/loops_arithmetic.dispatch.c.src
  Co-authored-by: Rafael Sousa <90851201+rafaelcfsousa@users.noreply.github.com>
Co-authored-by: Rafael Cardoso Fernandes Sousa <rafaelcfsousa@ibm.com>
Co-authored-by: Sebastian Berg <sebastian@sipsolutions.net>
Co-authored-by: Rafael Sousa <90851201+rafaelcfsousa@users.noreply.github.com>
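The resulting behavior (in NumPy builds containing these fixes) can be observed directly: overflowing integer division warns and wraps to the type minimum, and mod-by-zero warns and returns 0:

```python
import warnings

import numpy as np

int_min = np.iinfo(np.int32).min
x = np.array([int_min], dtype=np.int32)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    q = x // np.int32(-1)                   # -INT_MIN is not representable
    r = np.array([5], dtype=np.int32) % np.int32(0)

assert q[0] == int_min                      # result is set to the minimum
assert r[0] == 0                            # mod-by-zero returns 0
assert any(issubclass(w.category, RuntimeWarning) for w in caught)
```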
BUG: Fix the implementation of numpy.array_api.vecdot
* Fix the implementation of numpy.array_api.vecdot
See https://data-apis.org/array-api/latest/API_specification/generated/signatures.linear_algebra_functions.vecdot.html
* Use moveaxis + matmul instead of einsum in vecdot
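A sketch of the moveaxis + matmul formulation (ignoring the spec's dtype and complex-conjugation details):

```python
import numpy as np

def vecdot_sketch(x1, x2, axis=-1):
    # Move the contracted axis to the end, then contract it via matmul,
    # which broadcasts correctly over the leading (batch) dimensions.
    a = np.moveaxis(x1, axis, -1)[..., None, :]   # (..., 1, n)
    b = np.moveaxis(x2, axis, -1)[..., :, None]   # (..., n, 1)
    return np.matmul(a, b)[..., 0, 0]

x1 = np.arange(24.0).reshape(2, 3, 4)
x2 = np.ones((3, 4))
out = vecdot_sketch(x1, x2)
assert out.shape == (2, 3)
assert np.allclose(out, np.einsum("...i,...i->...", x1, x2))
```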
TST: ensure ``np.equal.reduce`` raises a ``TypeError``
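For reference, the behavior under test (as it stands in current NumPy): `np.equal` has no `int, int -> int` loop, so reducing it over an integer array is a type error:

```python
import numpy as np

try:
    np.equal.reduce(np.arange(3))
    raised = False
except TypeError:
    raised = True
assert raised
```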
If we cache a promoted version of the loop, that promoted loop can mismatch
the correct one. This ends up being rejected later in the legacy paths
(which should be the only path currently used), but we should reject it here
(or in principle we could reject it after cache lookup, but we are fixing
up the operation DTypes here anyway, so we are already looking at the
signature).
A call sequence reproducing this directly is:
np.add(1, 2, signature=(bool, int, None))  # should fail
np.add(True, 2)  # a promoted loop
np.add(1, 2, signature=(bool, int, None))  # should still fail
Note that the errors differ, because the first one comes from the old
type resolution code and is currently less precise.
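The reproducer from the message, runnable: both signature calls must raise a TypeError, both before and after the promoted `bool + int` loop has been cached:

```python
import numpy as np

def fails_with_typeerror():
    try:
        np.add(1, 2, signature=(bool, int, None))
        return False
    except TypeError:
        return True

assert fails_with_typeerror()   # should fail
assert np.add(True, 2) == 3     # caches a promoted loop
assert fails_with_typeerror()   # must still fail after caching
```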
TYP,BUG: Reduce argument validation in C-based ``__class_getitem__``
Closes #22185
The __class_getitem__ implementations would previously perform basic validation of the passed value, i.e. check whether a tuple of the appropriate length was passed (e.g. np.dtype.__class_getitem__ would expect a single item or a length-1 tuple). As noted in the aforementioned issue, this approach can cause problems when (a) two or more parameters are involved and (b) a subclass is created in which one or more parameters are declared constant (e.g. a fixed dtype and a variably shaped array).
This PR fixes the issue by relaxing the runtime argument validation, thus mimicking the behavior of the standard library (more closely). While we could alternatively fix this by adding more special casing (e.g. only disabling validation when cls is not np.ndarray), I'm not convinced this would be worth the additional complexity, especially since the standard library also has zero runtime validation for all of its Py_GenericAlias-based implementations of __class_getitem__.
(Some edits by seberg to the commit message)
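After the change, parameterizing at runtime simply builds a `types.GenericAlias` without argument-count checks, matching the standard library:

```python
import types
from typing import Any

import numpy as np

# Parameterization returns a plain GenericAlias; the arguments are not
# validated at runtime (they only matter to static type checkers).
alias = np.ndarray[Any, np.dtype[np.float64]]
assert isinstance(alias, types.GenericAlias)
assert alias.__origin__ is np.ndarray
```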
TST,BUG: Use fork context to fix MacOS savez test
Since Python 3.8, the default start method for multiprocessing has been
changed from fork to spawn on macOS, while the default is still fork on
other Unix platforms [1], causing an inconsistency in the memory-sharing
model. This causes a memory-sharing problem for the test test_large_zip
on macOS, since the memory-sharing model differs between spawn and fork.
The fix: change the start method for this test back to fork within the
test-case context.
* In this context, the bug that caused the default start method to change
  to spawn on macOS is not triggered.
* The change is limited to the context, so the default start method is
  unaffected outside test_large_zip.
* All platforms now share the same memory-sharing model for this test.
* After the change, test_large_zip passes on macOS.
[1] https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
Closes gh-22203
Pyright just chooses the first matching overload whenever there is ambiguity in type resolution, which leads to NoReturn for the cross function in certain situations. Other overloads were changed to match. See ticket #22146
BUG: Expose heapsort algorithms in a shared header
Fix #22011
BUG: Support using libunwind for backtrace
Some systems (e.g. musl-based ones) do not have "execinfo.h", and
backtrace support is instead provided by libunwind.
Fixes #22084
This was giving many warnings like this one in the SciPy build:
```
scipy/special/_specfunmodule.c: In function 'complex_double_from_pyobj':
scipy/special/_specfunmodule.c:198:47: warning: passing argument 1 of 'PyArray_DATA' from incompatible pointer type [-Wincompatible-pointer-types]
198 | (*v).r = ((npy_cdouble *)PyArray_DATA(arr))->real;
| ^~~
| |
| PyObject * {aka struct _object *}
In file included from /home/rgommers/code/numpy/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/rgommers/code/numpy/numpy/core/include/numpy/arrayobject.h:5,
from /home/rgommers/code/numpy/numpy/f2py/src/fortranobject.h:16,
from scipy/special/_specfunmodule.c:22:
/home/rgommers/code/numpy/numpy/core/include/numpy/ndarraytypes.h:1524:29: note: expected 'PyArrayObject *' {aka 'struct tagPyArrayObject *'} but argument is of type 'PyObject *' {aka 'struct _object *'}
1524 | PyArray_DATA(PyArrayObject *arr)
| ~~~~~~~~~~~~~~~^~~
```
Fixing pointer mismatches is important for Pyodide/Emscripten.