-rw-r--r--  doc/gmp.texi  148
1 file changed, 74 insertions(+), 74 deletions(-)
diff --git a/doc/gmp.texi b/doc/gmp.texi
index 6334e54c4..77d79079f 100644
--- a/doc/gmp.texi
+++ b/doc/gmp.texi
@@ -25,7 +25,7 @@ software''. A copy of the license is included in
@ref{GNU Free Documentation License}.
@end copying
@c Note the @ref above must be on one line, a line break in an @ref within
-@c @copying will bomb in recent texinfo.tex (eg. 2004-04-07.08 which comes
+@c @copying will bomb in recent texinfo.tex (e.g. 2004-04-07.08 which comes
@c with texinfo 4.7), with messages about missing @endcsname.
@@ -59,7 +59,7 @@ software''. A copy of the license is included in
@c HTML:
@c
@c Nothing special is done for links to external manuals, they just come out
-@c in the usual makeinfo style, eg. "../libc/Locales.html". If you have
+@c in the usual makeinfo style, e.g. "../libc/Locales.html". If you have
@c local copies of such manuals then this is a good thing, if not then you
@c may want to search-and-replace to some online source.
@c
@@ -410,7 +410,7 @@ x
@c on printing a particular section, GMPreftop gives just the title.
@c
@c The texinfo manual recommends putting a likely section name in references
-@c like this, eg. "Introduction", but it seems better to just give the title.
+@c like this, e.g. "Introduction", but it seems better to just give the title.
@c
@iftex
@macro GMPreftop{info,title}
@@ -470,7 +470,7 @@ the GNU Lesser General Public License version 3 (see
the additional option of applying later versions of these licenses. (The
reason for this dual licensing is to make it possible to use the library with
programs which are licensed under GPL version 2, but which for historical or
-other reasons do not allow use under later versions of the GPL).
+other reasons do not allow use under later versions of the GPL.)
Programs which are not part of the library itself, such as demonstration
programs and the GMP testsuite, are licensed under the terms of the GNU
@@ -502,13 +502,13 @@ There is assembly code for these CPUs:
@cindex CPU types
ARM Cortex-A9, Cortex-A15, and generic ARM,
DEC Alpha 21064, 21164, and 21264,
-AMD K8 and K10 (sold under many brands, e.g. Athlon64, Phenom, Opteron)
+AMD K8 and K10 (sold under many brands, e.g. Athlon64, Phenom, Opteron),
Bulldozer, and Bobcat,
Intel Pentium, Pentium Pro/II/III, Pentium 4, Core2, Nehalem, Sandy bridge, Haswell, generic x86,
Intel IA-64,
Motorola/IBM PowerPC 32 and 64 such as POWER970, POWER5, POWER6, and POWER7,
MIPS 32-bit and 64-bit,
-SPARC 32-bit ad 64-bit with special support for all UltraSPARC models.
+SPARC 32-bit and 64-bit with special support for all UltraSPARC models.
There is also assembly code for many obsolete CPUs.
@@ -695,7 +695,7 @@ example @command{m68k-mac-linux-gnu-ranlib} is tried, then plain
@command{ranlib}. This makes it possible for a set of cross-compiling tools
to co-exist with native tools. The prefix is the argument to @samp{--host},
and this can be an alias, such as @samp{m68k-linux}. But note that tools
-don't have to be setup this way, it's enough to just have a @env{PATH} with a
+don't have to be set up this way, it's enough to just have a @env{PATH} with a
suitable cross-compiling @command{cc} etc.
Compiling for a different CPU in the same family as the build system is a form
@@ -745,7 +745,7 @@ Alpha:
@nisamp{alphapca57},
@nisamp{alphaev6},
@nisamp{alphaev67},
-@nisamp{alphaev68}
+@nisamp{alphaev68},
@nisamp{alphaev7}
@item
@@ -1061,7 +1061,7 @@ Enable profiling support, in one of various styles, @pxref{Profiling}.
@item @option{MPN_PATH}
@cindex @code{MPN_PATH}
Various assembly versions of each mpn subroutines are provided. For a given
-CPU, a search is made though a path to choose a version of each. For example
+CPU, a search is made through a path to choose a version of each. For example
@samp{sparcv8} has
@example
@@ -1138,7 +1138,7 @@ Currently no attempt is made to follow whatever conventions a system has for
installing library or header files built for a particular ABI@. This will
probably only matter when installing multiple builds of GMP, and it might be
as simple as configuring with a special @samp{libdir}, or it might require
-more than that. Note that builds for different ABIs need to done separately,
+more than that. Note that builds for different ABIs need to be done separately,
with a fresh @command{./configure} and @command{make} each.
@sp 1
@@ -1207,7 +1207,7 @@ gcc [built for 2.0n]
cc +DA2.0 +e
@end example
-Note that current versions of GCC (eg.@: 3.2) don't generate 64-bit
+Note that current versions of GCC (e.g.@: 3.2) don't generate 64-bit
instructions for @code{long long} operations and so may be slower than for
2.0w. (The GMP assembly code is the same though.)
@@ -1269,7 +1269,7 @@ each. The default is n32.
@item @samp{ABI=o32}
The o32 ABI is 32-bit pointers and integers, and no 64-bit operations. GMP
will be slower than in n32 or 64, this option only exists to support old
-compilers, eg.@: GCC 2.7.2. Applications can be compiled with no special
+compilers, e.g.@: GCC 2.7.2. Applications can be compiled with no special
flags on an old compiler, or on a newer compiler with
@example
@@ -1715,7 +1715,7 @@ script, it exits silently, having died writing a preamble to
@file{config.log}. Use @command{bash} 2.04 or higher.
@samp{make all} was found to run out of memory during the final
-@file{libgmp.la} link on one system tested, despite having 64Mb available.
+@file{libgmp.la} link on one system tested, despite having 64MiB available.
Running @samp{make libgmp.la} directly helped, perhaps recursing into the
various subdirectories uses up memory.
@@ -1806,7 +1806,7 @@ operation.
In particular for long-running GMP applications, and applications demanding
extremely large numbers, building and running the @code{tuneup} program in the
-@file{tune} subdirectory, can be important. For example,
+@file{tune} subdirectory can be important. For example,
@example
cd tune
@@ -2012,7 +2012,7 @@ Most functions don't need to use these pointer types directly; it works fine to
declare a function using the @code{mpz_t} or @code{const mpz_t} as the argument
types, the same "pointer decay" happens in the background regardless.
-Occasionally, it is useful to manipulate pointers directly, e.g, to
+Occasionally, it is useful to manipulate pointers directly, e.g., to
conditionally swap @emph{references} to a function's inputs without changing
the @emph{values} as seen by the caller, or returning a pointer to an
@code{mpz_t} which is part of a larger structure. For these cases, the pointer
@@ -2062,7 +2062,7 @@ Functions})
@item
Functions for floating-point arithmetic, with names beginning with
@code{mpf_}. The associated type is @code{mpf_t}. There are about 70
-functions is this class. (@pxref{Floating-point Functions})
+functions in this class. (@pxref{Floating-point Functions})
@item
Fast low-level functions that operate on natural numbers. These are used by
@@ -2446,7 +2446,7 @@ As an aside, consideration has been given at various times to some sort of
expression evaluation within the main GMP library. Going beyond something
minimal quickly leads to matters like user-defined functions, looping, fixnums
for control variables, etc, which are considered outside the scope of GMP
-(much closer to language interpreters or compilers, @xref{Language Bindings}.)
+(much closer to language interpreters or compilers, @xref{Language Bindings}).
Something simple for program input convenience may yet be a possibility, a
combination of the @file{expr} demo and the @file{pexpr} tree back-end
perhaps. But for now the above evaluators are offered as illustrations.
@@ -2508,7 +2508,7 @@ usually be very rare and testing every time would be wasteful.
The @code{ui} functions and the small number of @code{si} functions exist for
convenience and should be used where applicable. But if for example an
@code{mpz_t} contains a value that fits in an @code{unsigned long} there's no
-need extract it and call a @code{ui} function, just use the regular @code{mpz}
+need to extract it and call a @code{ui} function, just use the regular @code{mpz}
function.
@item In-Place Operations
@@ -2556,7 +2556,7 @@ it's zero (or a particular value).
However when testing divisibility by several small integers, it's best to take
a remainder modulo their product, to save multi-precision operations. For
-instance to test whether a number is divisible by any of 23, 29 or 31 take a
+instance to test whether a number is divisible by 23, 29 or 31 take a
remainder modulo @math{23@times{}29@times{}31 = 20677} and then test that.
The division functions like @code{mpz_tdiv_q_ui} which give a quotient as well
@@ -2971,7 +2971,7 @@ Before you report a bug, check it's not already addressed in @ref{Known Build
Problems}, or perhaps @ref{Notes for Particular Systems}. You may also want
to check @uref{https://gmplib.org/} for patches for this release.
-Please include the following in any report,
+Please include the following in any report:
@itemize @bullet
@item
@@ -3165,8 +3165,8 @@ characters are used: @code{0x} and @code{0X} for hexadecimal, @code{0b} and
@code{0B} for binary, @code{0} for octal, or decimal otherwise.
For bases up to 36, case is ignored; upper-case and lower-case letters have
-the same value. For bases 37 to 62, upper-case letter represent the usual
-10..35 while lower-case letter represent 36..61.
+the same value. For bases 37 to 62, upper-case letters represent the usual
+10..35 while lower-case letters represent 36..61.
This function returns 0 if the entire string is a valid number in base
@var{base}. Otherwise it returns @minus{}1.
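
For illustration, a small sketch (assuming @code{<gmp.h>} is included) of the base 37 to 62 rule with @code{mpz_set_str}: in base 62 an upper-case @samp{Z} is worth 35 and a lower-case @samp{z} is worth 61.

@example
mpz_t x;
mpz_init (x);
if (mpz_set_str (x, "Zz", 62) == 0)   /* valid base-62 string */
  gmp_printf ("%Zd\n", x);            /* prints 2231 = 35*62 + 61 */
mpz_clear (x);
@end example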
@@ -3366,7 +3366,7 @@ Set @var{rop} to the absolute value of @var{op}.
Division is undefined if the divisor is zero. Passing a zero divisor to the
division or modulo functions (including the modular powering functions
-@code{mpz_powm} and @code{mpz_powm_ui}), will cause an intentional division by
+@code{mpz_powm} and @code{mpz_powm_ui}) will cause an intentional division by
zero. This lets a program handle arithmetic exceptions in these functions the
same way as for normal C @code{int} arithmetic.
@@ -3453,7 +3453,7 @@ round the same as the other functions.
For positive @var{n} both @code{mpz_fdiv_q_2exp} and @code{mpz_tdiv_q_2exp}
are simple bitwise right shifts. For negative @var{n}, @code{mpz_fdiv_q_2exp}
-is effectively an arithmetic right shift treating @var{n} as twos complement
+is effectively an arithmetic right shift treating @var{n} as two's complement
the same as the bitwise logical functions do, whereas @code{mpz_tdiv_q_2exp}
effectively treats @var{n} as sign and magnitude.
@end deftypefun
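
The rounding difference described above can be seen in a minimal sketch (assuming @code{<gmp.h>}):

@example
mpz_t q, n;
mpz_init (q);
mpz_init_set_si (n, -5);
mpz_fdiv_q_2exp (q, n, 1);   /* floor(-5/2) = -3, an arithmetic right shift */
mpz_tdiv_q_2exp (q, n, 1);   /* trunc(-5/2) = -2, sign-and-magnitude shift */
mpz_clear (q); mpz_clear (n);
@end example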
@@ -3795,7 +3795,7 @@ part G.
@deftypefun void mpz_fib_ui (mpz_t @var{fn}, unsigned long int @var{n})
@deftypefunx void mpz_fib2_ui (mpz_t @var{fn}, mpz_t @var{fnsub1}, unsigned long int @var{n})
@cindex Fibonacci sequence functions
-@code{mpz_fib_ui} sets @var{fn} to to @m{F_n,F[n]}, the @var{n}'th Fibonacci
+@code{mpz_fib_ui} sets @var{fn} to @m{F_n,F[n]}, the @var{n}th Fibonacci
number. @code{mpz_fib2_ui} sets @var{fn} to @m{F_n,F[n]}, and @var{fnsub1} to
@m{F_{n-1},F[n-1]}.
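
For example, a short sketch (assuming @code{<gmp.h>}) of the two-value variant:

@example
mpz_t fn, fnsub1;
mpz_init (fn);
mpz_init (fnsub1);
mpz_fib2_ui (fn, fnsub1, 10);
gmp_printf ("F(10)=%Zd F(9)=%Zd\n", fn, fnsub1);   /* 55 and 34 */
mpz_clear (fn); mpz_clear (fnsub1);
@end example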
@@ -3808,7 +3808,7 @@ similar.
@deftypefun void mpz_lucnum_ui (mpz_t @var{ln}, unsigned long int @var{n})
@deftypefunx void mpz_lucnum2_ui (mpz_t @var{ln}, mpz_t @var{lnsub1}, unsigned long int @var{n})
@cindex Lucas number functions
-@code{mpz_lucnum_ui} sets @var{ln} to to @m{L_n,L[n]}, the @var{n}'th Lucas
+@code{mpz_lucnum_ui} sets @var{ln} to @m{L_n,L[n]}, the @var{n}th Lucas
number. @code{mpz_lucnum2_ui} sets @var{ln} to @m{L_n,L[n]}, and @var{lnsub1}
to @m{L_{n-1},L[n-1]}.
@@ -3874,7 +3874,7 @@ multiple times.
@cindex Integer logical functions
@cindex Integer bit manipulation functions
-These functions behave as if twos complement arithmetic were used (although
+These functions behave as if two's complement arithmetic were used (although
sign-magnitude is the actual implementation). The least significant bit is
number 0.
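
As an illustrative sketch only, @code{mpz_com} and @code{mpz_tstbit} show the two's complement view even though sign and magnitude is used internally:

@example
mpz_t a;
mpz_init_set_ui (a, 5);
mpz_com (a, a);                    /* a = ~5 = -6 in the two's complement view */
int bit100 = mpz_tstbit (a, 100);  /* 1: a negative value acts as an infinite string of 1 bits */
mpz_clear (a);
@end example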
@@ -3985,8 +3985,8 @@ characters are used: @code{0x} and @code{0X} for hexadecimal, @code{0b} and
@code{0B} for binary, @code{0} for octal, or decimal otherwise.
For bases up to 36, case is ignored; upper-case and lower-case letters have
-the same value. For bases 37 to 62, upper-case letter represent the usual
-10..35 while lower-case letter represent 36..61.
+the same value. For bases 37 to 62, upper-case letters represent the usual
+10..35 while lower-case letters represent 36..61.
Return the number of bytes read, or if an error occurred, return 0.
@end deftypefun
@@ -4023,7 +4023,7 @@ machines.
@cindex Integer random number functions
@cindex Random number functions
-The random number functions of GMP come in two groups; older function
+The random number functions of GMP come in two groups; older functions
that rely on a global state, and newer functions that accept a state
parameter that is read and modified. Please see the @ref{Random Number
Functions} for more information on how to use and not to use random
@@ -4550,7 +4550,7 @@ Compare @var{op1} and @var{num2}/@var{den2}. Return a positive value if
@var{num2} and @var{den2} are allowed to have common factors.
-These functions are implemented as a macros and evaluate their arguments
+These functions are implemented as macros and evaluate their arguments
multiple times.
@end deftypefn
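
A brief sketch of the comparison macros (assuming the usual @code{<gmp.h>} setup); note that @var{num2}/@var{den2} may carry a common factor:

@example
mpq_t q;
mpq_init (q);
mpq_set_ui (q, 1, 2);             /* q = 1/2 */
int sgn = mpq_cmp_ui (q, 2, 6);   /* positive: 1/2 > 2/6, the common factor is fine */
mpq_clear (q);
@end example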
@@ -4870,8 +4870,8 @@ The argument @var{base} may be in the ranges 2 to 62, or @minus{}62 to
decimal.
For bases up to 36, case is ignored; upper-case and lower-case letters have
-the same value; for bases 37 to 62, upper-case letter represent the usual
-10..35 while lower-case letter represent 36..61.
+the same value; for bases 37 to 62, upper-case letters represent the usual
+10..35 while lower-case letters represent 36..61.
Unlike the corresponding @code{mpz} function, the base will not be determined
from the leading characters of the string if @var{base} is 0. This is so that
@@ -5364,7 +5364,7 @@ This function requires that @var{s1n} is greater than or equal to
@deftypefun mp_limb_t mpn_neg (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n})
Perform the negation of @{@var{sp}, @var{n}@}, and write the result to
-@{@var{rp}, @var{n}@}. This is equivalent to calling @code{mpn_sub_n} with a
+@{@var{rp}, @var{n}@}. This is equivalent to calling @code{mpn_sub_n} with an
@var{n}-limb zero minuend and passing @{@var{sp}, @var{n}@} as subtrahend.
Return borrow, either 0 or 1.
@end deftypefun
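
A minimal sketch of @code{mpn_neg} on a two-limb operand (low-level @code{mpn} calls, shown purely for illustration):

@example
mp_limb_t s[2], r[2];
s[0] = 5;  s[1] = 0;                    /* the 2-limb value 5 */
mp_limb_t borrow = mpn_neg (r, s, 2);   /* r = two's complement of 5, borrow = 1 */
@end example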
@@ -5540,7 +5540,7 @@ Shift @{@var{sp}, @var{n}@} left by @var{count} bits, and write the result to
least significant @var{count} bits of the return value (the rest of the return
value is zero).
-@var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@minus{}1. The
+@var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@math{{}-1}. The
regions @{@var{sp}, @var{n}@} and @{@var{rp}, @var{n}@} may overlap, provided
@math{@var{rp} @ge{} @var{sp}}.
@@ -5553,7 +5553,7 @@ Shift @{@var{sp}, @var{n}@} right by @var{count} bits, and write the result to
most significant @var{count} bits of the return value (the rest of the return
value is zero).
-@var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@minus{}1. The
+@var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@math{{}-1}. The
regions @{@var{sp}, @var{n}@} and @{@var{rp}, @var{n}@} may overlap, provided
@math{@var{rp} @le{} @var{sp}}.
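
The return-value conventions of the two shifts can be sketched as follows (assuming a default build without nails, so @code{GMP_NUMB_BITS} equals @code{mp_bits_per_limb}):

@example
mp_limb_t s[1], r[1], out;
s[0] = 7;                                        /* binary ...111 */
out = mpn_rshift (r, s, 1, 2);                   /* r[0] = 1 */
/* out holds the two bits shifted out (11) in its most significant bits,
   i.e. out == (mp_limb_t) 3 << (GMP_NUMB_BITS - 2) */
out = mpn_lshift (r, s, 1, GMP_NUMB_BITS - 1);
/* r[0] == (mp_limb_t) 1 << (GMP_NUMB_BITS - 1); out == 3, the bits shifted
   out at the left, returned in the least significant bits */
@end example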
@@ -6120,11 +6120,11 @@ Initialize @var{rop} with a copy of the algorithm and state from @var{op}.
Initialize @var{state} with an algorithm selected by @var{alg}. The only
choice is @code{GMP_RAND_ALG_LC}, which is @code{gmp_randinit_lc_2exp_size}
described above. A third parameter of type @code{unsigned long} is required,
-this is the @var{size} for that function. @code{GMP_RAND_ALG_DEFAULT} or 0
+this is the @var{size} for that function. @code{GMP_RAND_ALG_DEFAULT} and 0
are the same as @code{GMP_RAND_ALG_LC}.
@c For reference, this is the only place gmp_errno has been documented, and
-@c due to being non thread safe we won't be adding to it's uses.
+@c due to being non thread safe we won't be adding to its uses.
@findex gmp_errno
@findex GMP_ERROR_UNSUPPORTED_ARGUMENT
@findex GMP_ERROR_INVALID_ARGUMENT
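
For completeness, a hedged sketch of state initialization using the current @code{gmp_randinit_default} rather than the obsolete @code{gmp_randinit}:

@example
gmp_randstate_t state;
mpz_t r;
gmp_randinit_default (state);
gmp_randseed_ui (state, 12345);   /* choose the seed carefully in real use */
mpz_init (r);
mpz_urandomb (r, state, 128);     /* uniform random integer, 0 <= r < 2^128 */
mpz_clear (r);
gmp_randclear (state);
@end example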
@@ -6150,7 +6150,7 @@ Free all memory occupied by @var{state}.
Set an initial seed value into @var{state}.
The size of a seed determines how many different sequences of random numbers
-that it's possible to generate. The ``quality'' of the seed is the randomness
+it's possible to generate. The ``quality'' of the seed is the randomness
of a given seed compared to the previous seed used, and this affects the
randomness of separate number sequences. The method for choosing a seed is
critical if the generated numbers are to be used for important applications,
@@ -6314,7 +6314,7 @@ meaningful for @samp{Z}, @samp{Q} and @samp{N}.
@samp{M} is a proxy for the C library @samp{l} or @samp{L}, according to the
size of @code{mp_limb_t}. Unsigned conversions will be usual, but a signed
-conversion can be used and will interpret the value as a twos complement
+conversion can be used and will interpret the value as a two's complement
negative.
@samp{n} can be used with any type, even the GMP types.
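
For instance, a small sketch of the @samp{M} modifier, which sizes the conversion for @code{mp_limb_t} without the caller knowing whether a limb is a @code{long} or a @code{long long}:

@example
mp_limb_t limb = 123;
gmp_printf ("%Mu %Mx\n", limb, limb);   /* decimal and hex, sized for mp_limb_t */
@end example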
@@ -6417,7 +6417,7 @@ if the C library @code{vsnprintf} is the older GLIBC 2.0.x style.
@deftypefunx int gmp_vasprintf (char **@var{pp}, const char *@var{fmt}, va_list @var{ap})
Form a null-terminated string in a block of memory obtained from the current
memory allocation function (@pxref{Custom Allocation}). The block will be the
-size of the string and null-terminator. The address of the block in stored to
+size of the string and null-terminator. The address of the block is stored to
*@var{pp}. The return value is the number of characters produced, excluding
the null-terminator.
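
A hedged sketch of @code{gmp_asprintf}; the returned block is released with the current free function obtained from @code{mp_get_memory_functions}:

@example
char *str;
mpz_t z;
void (*freefunc) (void *, size_t);
mpz_init_set_ui (z, 123456);
int len = gmp_asprintf (&str, "z = %Zd", z);   /* len excludes the terminator */
mp_get_memory_functions (NULL, NULL, &freefunc);
freefunc (str, strlen (str) + 1);              /* needs <string.h> for strlen */
mpz_clear (z);
@end example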
@@ -6457,7 +6457,7 @@ Print @var{op} to @var{stream}, using its @code{ios} formatting settings.
In hex or octal, @var{op} is printed as a signed number, the same as for
decimal. This is unlike the standard @code{operator<<} routines on @code{int}
-etc, which instead give twos complement.
+etc, which instead give two's complement.
@end deftypefun
@deftypefun ostream& operator<< (ostream& @var{stream}, const mpq_t @var{op})
@@ -7243,7 +7243,7 @@ Get or set the current precision of an @code{mpf_class}.
The restrictions described for @code{mpf_set_prec_raw} (@pxref{Initializing
Floats}) apply to @code{mpf_class::set_prec_raw}. Note in particular that the
-@code{mpf_class} must be restored to it's allocated precision before being
+@code{mpf_class} must be restored to its allocated precision before being
destroyed. This must be done by application code, there's no automatic
mechanism for it.
@end deftypefun
@@ -8116,7 +8116,7 @@ are
The @m{w_i,w[i]} are going to be determined, and when they are they'll give
the final result using @math{w=W(b)}, since
@m{xy=X(b)Y(b),x*y=X(b)*Y(b)=W(b)}. The coefficients will be roughly
-@math{b^2} each, and the final @math{W(b)} will be an addition like,
+@math{b^2} each, and the final @math{W(b)} will be an addition like this:
@tex
\def\GMPbox#1#2{%
@@ -8174,7 +8174,7 @@ nine @m{x_iy_j,x[i]*y[j]} for @math{i,j=0,1,2}, and would be equivalent merely
to a basecase multiply. Instead the following approach is used.
@math{X(t)} and @math{Y(t)} are evaluated and multiplied at 5 points, giving
-values of @math{W(t)} at those points. In GMP the following points are used,
+values of @math{W(t)} at those points. In GMP the following points are used:
@quotation
@multitable {@m{t=\infty,t=inf}M} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
@@ -8309,7 +8309,7 @@ values of @math{W(t)} at those points. In GMP the following points are used,
@end quotation
The number of additions and subtractions for Toom-4 is much larger than for Toom-3.
-But several subexpressions occur multiple times, for example @m{x_2+x_0,x2+x0}, occurs
+But several subexpressions occur multiple times, for example @m{x_2+x_0,x2+x0} occurs
for both @math{t=1} and @math{t=-1}.
Toom-4 is asymptotically @math{O(N^@W{1.404})}, the exponent being
@@ -8322,19 +8322,19 @@ original size each.
@cindex Toom multiplication
The Toom algorithms described above (@pxref{Toom 3-Way Multiplication},
-@pxref{Toom 4-Way Multiplication}) generalizes to split into an arbitrary
+@pxref{Toom 4-Way Multiplication}) generalize to split into an arbitrary
number of pieces. In general a split of two equally long operands into
@math{r} pieces leads to evaluations and pointwise multiplications done at
@m{2r-1,2*r-1} points. To fully exploit symmetries it would be better to have
a multiple of 4 points, that's why for higher degree Toom'n'half is used.
Toom'n'half means that the existence of one more piece is considered for a
-single operand. It can be virtual, i.e. zero, or real, when the two operand
+single operand. It can be virtual, i.e. zero, or real, when the two operands
are not exactly balanced. By choosing an even @math{r},
Toom-@m{r{1\over2},r+1/2} requires @math{2r} points, a multiple of four.
-The quadruplets of points include 0, @m{\infty,inf}, +1, -1 and
-@m{\pm2^i,+-2^i}, @m{\pm2^{-i},+-2^-i} . Each of them giving shortcuts for the
+The quadruplets of points include 0, @m{\infty,inf}, +1, @m{-1} and
+@m{\pm2^i,+-2^i}, @m{\pm2^{-i},+-2^-i}. Each of them giving shortcuts for the
evaluation phase and for some steps in the interpolation phase. Further tricks
are used to reduce the memory footprint of the whole multiplication algorithm
to a memory buffer equal in size to the result of the product.
@@ -8367,7 +8367,7 @@ pieces of @math{M=N/2^k} bits each. @math{N} must be a multiple of
the split falls on limb boundaries, avoiding bit shifts in the split and
combine stages.
-The evaluations, pointwise multiplications, and interpolation, are all done
+The evaluations, pointwise multiplications, and interpolation are all done
modulo @m{2^{N'}+1, 2^N'+1} where @math{N'} is @math{2M+k+3} rounded up to a
multiple of @math{2^k} and of @code{mp_bits_per_limb}. The results of
interpolation will be the following negacyclic convolution of the input
@@ -8469,7 +8469,7 @@ currently used. The notes here are merely for interest.
In general a split into @math{r+1} pieces is made, and evaluations and
pointwise multiplications done at @m{2r+1,2*r+1} points. A 4-way split does 7
pointwise multiplies, 5-way does 9, etc. Asymptotically an @math{(r+1)}-way
-algorithm is @m{O(N^{log(2r+1)/log(r+1)}), O(N^(log(2*r+1)/log(r+1)))}. Only
+algorithm is @m{O(N^{\log(2r+1)/\log(r+1)}), O(N^(log(2*r+1)/log(r+1)))}. Only
the pointwise multiplications count towards big-@math{O} complexity, but the
time spent in the evaluate and interpolate stages grows with @math{r} and has
a significant practical impact, with the asymptotic advantage of each @math{r}
@@ -8515,7 +8515,7 @@ For really large operands, we invoke FFT directly.
For operands between these sizes, we use Toom inspired algorithms suggested by
Alberto Zanoni and Marco Bodrato. The idea is to split the operands into
polynomials of different degree. GMP currently splits the smaller operand
-onto 2 coefficients, i.e., a polynomial of degree 1, but the larger operand
+into 2 coefficients, i.e., a polynomial of degree 1, but the larger operand
can be split into 2, 3, or 4 coefficients, i.e., a polynomial of degree 1 to
3.
@@ -8863,7 +8863,7 @@ code as described above. For two N-bit operands, the algorithm takes about
0.68 iterations per bit. For optimum performance some attention needs to be
paid to the way the factors of 2 are stripped from @math{a}.
-Firstly it may be noted that in twos complement the number of low zero bits on
+Firstly it may be noted that in two's complement the number of low zero bits on
@math{a-b} is the same as @math{b-a}, so counting or testing can begin on
@math{a-b} without waiting for @math{@abs{}(a-b)} to be determined.
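
The observation can be checked with a tiny sketch (the GCC/Clang builtin @code{__builtin_ctzl} is used here purely for illustration; it is not part of GMP):

@example
unsigned long a = 300, b = 28;
int za = __builtin_ctzl (a - b);   /* a-b = 272 = 0b100010000, 4 low zero bits */
int zb = __builtin_ctzl (b - a);   /* b-a wraps modulo the word size but also has 4 low zero bits */
/* za == zb, so the zero stripping can start from a-b directly */
@end example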
@@ -8872,7 +8872,7 @@ condition is data dependent. But on average there's only a few low zeros, so
an option is to strip one or two bits arithmetically then loop for more (as
done for AMD K6). Or use a lookup table to get a count for several bits then
loop for more (as done for AMD K7). An alternative approach is to keep just
-one of @math{a} or @math{b} odd and iterate
+one of @math{a} and @math{b} odd and iterate
@quotation
@math{a,b = @abs{}(a-b), @min{}(a,b)} @*
@@ -8924,10 +8924,10 @@ the initial quotient sequence for the two large numbers; the final quotient
may sometimes be one off.
@item
-It takes advantage of the fact the quotients are usually small. The division
+It takes advantage of the fact that the quotients are usually small. The division
operator is not used, since the corresponding assembler instruction is very
slow on most architectures. (This code could probably be improved further, it
-uses many branches that are unfriendly to prediction).
+uses many branches that are unfriendly to prediction.)
@item
It switches from double-limb calculations to single-limb calculations half-way
@@ -8987,7 +8987,7 @@ algorithm. Where Lehmer repeatedly chops off the top two limbs, calls
@code{mpn_hgcd2}, and applies the resulting matrix to the full numbers, the
sub-quadratic GCD chops off the most significant third of the limbs (the
proportion is a tuning parameter, and @math{1/3} seems to be more efficient
-than, e.g, @math{1/2}), calls @code{mpn_hgcd}, and applies the resulting
+than, e.g., @math{1/2}), calls @code{mpn_hgcd}, and applies the resulting
matrix. Once the input numbers are reduced to size below
@code{GCD_DC_THRESHOLD}, Lehmer's algorithm is used for the rest of the work.
@@ -9006,7 +9006,7 @@ handle this case. The binary algorithm is used only for single-limb GCDEXT.
Lehmer's algorithm is used for sizes up to @code{GCDEXT_DC_THRESHOLD}. Above
this threshold, GCDEXT is implemented as a loop around HGCD, but with more
book-keeping to keep track of the cofactors. This gives the same asymptotic
-running time as for GCD and HGCD, @m{O(M(N)\log N),O(M(N)*log(N))}
+running time as for GCD and HGCD, @m{O(M(N)\log N),O(M(N)*log(N))}.
One difference to plain GCD is that while the inputs @math{a} and @math{b} are
reduced as the algorithm proceeds, the cofactors @math{x} and @math{y} grow in
@@ -9072,7 +9072,7 @@ In all the routines sign changes for the result are accumulated using fast bit
twiddling which avoids conditional jumps.
The final result is calculated after verifying the inputs are coprime (GCD = 1)
-by raising @m{(-1)^e,(-1)^e}
+by raising @m{(-1)^e,(-1)^e}.
Much of the HGCD code is shared directly with the HGCD implementations, such
as the 2x2 matrix calculation, @xref{Lehmer's Algorithm} basecase and
@@ -9476,9 +9476,9 @@ recursively. The procedure can be best illustrated with an example,
@end quotation
Current code collects all the factors in a single list, with a loop and no
-recursion, and compute the product, with no special care for repeated chunks.
+recursion, and computes the product, with no special care for repeated chunks.
-When @math{n} is larger, computation pass trough prime sieving. An helper
+When @math{n} is larger, computations pass through prime sieving. A helper
function is used, as suggested by Peter Luschny:
@tex
$$\mathop{\rm msf}(n) = {n!\over\lfloor n/2\rfloor!^2\cdot2^k} = \prod_{p=3}^{n}
@@ -9890,10 +9890,10 @@ addressing done with the loop counter as a scaled index.
The final loop control cost can be amortised by processing several limbs in
each iteration (@pxref{Assembly Loop Unrolling}). This at least ensures loop
-control isn't a big fraction the work done.
+control isn't a big fraction of the work done.
Memory throughput is always a limit. If perhaps only one load or one store
-can be done per cycle then 3 cycles/limb will the top speed for ``binary''
+can be done per cycle then 3 cycles/limb will be the top speed for ``binary''
operations like @code{mpn_add_n}, and any code achieving that is optimal.
Integer resources can be freed up by having the loop counter in a float
@@ -9908,7 +9908,7 @@ twiddling.
@node Assembly Floating Point, Assembly SIMD Instructions, Assembly Functional Units, Assembly Coding
@subsection Floating Point
-@cindex Assembly floating Point
+@cindex Assembly floating point
Floating point arithmetic is used in GMP for multiplications on CPUs with poor
integer multipliers. It's mostly useful for @code{mpn_mul_1},
@@ -10150,7 +10150,7 @@ juggling.
If the latency of some instruction is greater than the loop time then it will
be necessary to unroll, so one register has a result ready to use while
-another (or multiple others) are still in progress. (@pxref{Assembly Loop
+another (or multiple others) are still in progress (@pxref{Assembly Loop
Unrolling}).
@@ -10311,12 +10311,12 @@ non-zero @code{_mp_size}. They can only be used as read-only constants. See
@end table
The various bitwise logical functions like @code{mpz_and} behave as if
-negative values were twos complement. But sign and magnitude is always used
+negative values were two's complement. But sign and magnitude is always used
internally, and necessary adjustments are made during the calculations.
Sometimes this isn't pretty, but sign and magnitude are best for other
routines.
-Some internal temporary variables are setup with @code{MPZ_TMP_INIT} and these
+Some internal temporary variables are set up with @code{MPZ_TMP_INIT} and these
have @code{_mp_d} space obtained from @code{TMP_ALLOC} rather than the memory
allocation functions. Care is taken to ensure that these are big enough that
no reallocation is necessary (since it would have unpredictable consequences).
@@ -10581,7 +10581,7 @@ limbs, then an extra added for the high being only non-zero, giving an
back with @code{mpf_get_prec} will take @code{_mp_prec} subtract 1 limb and
multiply by 32, giving 256 bits.
-Strictly speaking, the fact the high limb has at least one bit means that a
+Strictly speaking, the fact that the high limb has at least one bit means that a
float with, say, 3 limbs of 32-bits each will be holding at least 65 bits, but
for the purposes of @code{mpf_t} it's considered simply to be 64 bits, a nice
multiple of the limb size.
@@ -10622,7 +10622,7 @@ multiple of the limb size.
@end ifnottex
The size is 4 bytes written most significant byte first, being the number of
-subsequent data bytes, or the twos complement negative of that when a negative
+subsequent data bytes, or the two's complement negative of that when a negative
integer is represented. The data bytes are the absolute value of the integer,
written most significant byte first.
@@ -10750,7 +10750,7 @@ all subexpressions are evaluated to the precision of @code{f}.
@cindex Contributors
Torbj@"orn Granlund wrote the original GMP library and is still the main
-developer. Code not explicitly attributed to others, was contributed by
+developer. Code not explicitly attributed to others was contributed by
Torbj@"orn. Several other individuals and organizations have contributed
GMP. Here is a list in chronological order on first contribution:
@@ -10796,7 +10796,7 @@ improvements for population count. Robert also wrote highly optimized
Karatsuba and 3-way Toom multiplication functions for GMP 3, and contributed
the ARM assembly code.
-Torsten Ekedahl of the Mathematical department of Stockholm University provided
+Torsten Ekedahl of the Mathematical Department of Stockholm University provided
significant inspiration during several phases of the GMP development. His
mathematical expertise helped improve several algorithms.
@@ -10822,7 +10822,7 @@ Jason Moxham rewrote @code{mpz_fac_ui}.
Pedro Gimeno implemented the Mersenne Twister and made other random number
improvements.
-Niels M@"oller wrote the sub-quadratic GCD, extended GCD and jacobi code, the
+Niels M@"oller wrote the sub-quadratic GCD, extended GCD and Jacobi code, the
quadratic Hensel division code, and (with Torbj@"orn) the new divide and
conquer division code for GMP 4.3. Niels also helped implement the new Toom
multiply code for GMP 4.3 and implemented helper functions to simplify Toom
@@ -10869,7 +10869,7 @@ Ulrich Weigand ported GMP to the powerpc64le ABI.
contributed to GMP but are not listed above, please tell
@email{gmp-devel@@gmplib.org} about the omission!)
-The development of floating point functions of GNU MP 2, were supported in part
+The development of floating point functions of GNU MP 2 was supported in part
by the ESPRIT-BRA (Basic Research Activities) 6846 project POSSO (POlynomial
System SOlving).