% $Id: usersch3.tex 10084 2008-05-05 14:39:18Z kb $
% Copyright (c) 2000  The PARI Group
%
% This file is part of the PARI/GP documentation
%
% Permission is granted to copy, distribute and/or modify this document
% under the terms of the GNU General Public License
\chapter{Functions and Operations Available in PARI and GP}
\label{se:functions}

The functions and operators available in PARI and in the GP/PARI calculator
are numerous and ever-expanding. Here is a description of the ones available
in version \vers. It should be noted that many of these functions accept
quite different types as arguments, but others are more restricted. The list
of acceptable types will be given for each function or class of functions.
Except when stated otherwise, it is understood that a function or operation
which should make natural sense is legal. In this chapter, we will describe
the functions according to a rough classification. The general entry looks
something like:

\key{foo}$(x,\{\fl=0\})$: short description.

\syn{foo}{x,\fl}.

\noindent
This means that the GP function \kbd{foo} has one mandatory argument $x$, and
an optional one, $\fl$, whose default value is 0. (The $\{\}$ should not be
typed, it is just a convenient notation we will use throughout to denote
optional arguments.) That is, you can type \kbd{foo(x,2)}, or \kbd{foo(x)},
which is then understood to mean \kbd{foo(x,0)}. As well, a comma or closing
parenthesis, where an optional argument should have been, signals to GP it
should use the default. Thus, the syntax \kbd{foo(x,)} is also accepted as a
synonym for our last expression. When a function has more than one optional
argument, the argument list is filled with user supplied values, in order.
When none are left, the defaults are used instead. Thus, assuming that
\kbd{foo}'s prototype had been
$$\hbox{%
\key{foo}$(\{x=1\},\{y=2\},\{z=3\})$,%
}$$
typing in \kbd{foo(6,4)} would give
you \kbd{foo(6,4,3)}. In the rare case when you want to set some far away
argument, and leave the defaults in between as they stand, you can use the
``empty arg'' trick alluded to above: \kbd{foo(6,,1)} would yield
\kbd{foo(6,2,1)}. By the way, \kbd{foo()} by itself yields
\kbd{foo(1,2,3)} as was to be expected.

In this rather special case of a function having no mandatory argument, you
can even omit the $()$: a standalone \kbd{foo} would be enough (though we
do not recommend it for your scripts, for the sake of clarity). In defining
GP syntax, we strove to put optional arguments at the end of the argument
list (of course, since they would not make sense otherwise), and in order of
decreasing usefulness so that, most of the time, you will be able to ignore
them.

Finally, an optional argument (between braces) followed by a star, like
$\{\var{x}\}*$, means that any number of such arguments (possibly none) can
be given. This is in particular used by the various \kbd{print} routines.

\misctitle{Flags}. A \tev{flag} is an argument which, rather than conveying
actual information to the routine, instructs it to change its default
behaviour, e.g.~return more or less information. All such
flags are optional, and will be called \fl\ in the function descriptions to
follow. There are two different kinds of flags:

\item generic: all valid values for the flag are individually
described (``If \fl\ is equal to $1$, then\dots'').

\item binary:\sidx{binary flag} use customary binary notation as a
compact way to represent many toggles with just one integer. Let
$(p_0,\dots,p_n)$ be a list of switches (i.e.~of properties which take either
the value $0$ or~$1$); then the number $2^3 + 2^5 = 40$ means that $p_3$ and $p_5$
are set (that is, set to $1$), and none of the others are (that is, they
are set to $0$). This is announced as ``The binary digits of $\fl$ mean 1:
$p_0$, 2: $p_1$, 4: $p_2$'', and so on, using the available consecutive
powers of~$2$.
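
For instance, the value $\fl = 40 = 2^3 + 2^5$ sets exactly the toggles $p_3$
and $p_5$. To double-check which toggles a given value sets, one may write it
in base $2$, e.g.~with the \kbd{binary} function described further in this
chapter (most significant bit on the left):
\bprog
? binary(40)
%1 = [1, 0, 1, 0, 0, 0]
@eprog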

\misctitle{Mnemonics for flags}. Numeric flags as mentioned above are
obscure, error-prone, and quite rigid: should the authors
want to adopt a new flag numbering scheme (for instance when noticing
flags with the same meaning but different numeric values across a set of
routines), it would break backward compatibility. The only advantage of
explicit numeric values is that they are fast to type, so their use is only
advised when using the calculator \kbd{gp}.

As an alternative, one can replace a numeric flag by a character string
containing symbolic identifiers. For a generic flag, the mnemonic
corresponding to the numeric identifier is given after it as in

\bprog
fun(x, {flag = 0} ):

  If flag is equal to 1 = AGM, use an agm formula\dots
@eprog\noindent
which means that one can use indifferently \kbd{fun($x$, 1)} or \kbd{fun($x$,
AGM)}.

For a binary flag, mnemonics corresponding to the various toggles are given
after each of them. They can be negated by prepending \kbd{no\_} to the
mnemonic, or by removing such a prefix. These toggles are grouped together
using any punctuation character (such as ',' or ';'). For instance (taken
from description of $\tet{ploth}(X=a,b,\var{expr},\{\fl=0\},\{n=0\})$)

\centerline{Binary digits of flags mean: $1=\kbd{Parametric}$,
$2=\kbd{Recursive}$, \dots}

\noindent so that, instead of $1$, one could use the mnemonic
\kbd{"Parametric; no\_Recursive"}, or simply \kbd{"Parametric"} since
\kbd{Recursive} is unset by default (default value of $\fl$ is $0$,
i.e.~everything unset).

\misctitle{Pointers}.\varsidx{pointer} If a parameter in the function
prototype is prefixed with a \& sign, as in

\key{foo}$(x,\&e)$

\noindent it means that, besides the normal return value, the function may
assign a value to $e$ as a side effect. When passing the argument, the \&
sign has to be typed in explicitly. As of version \vers, this \tev{pointer}
argument is optional for all documented functions, hence the \& will always
appear between brackets as in \kbd{Z\_issquare}$(x,\{\&e\})$.

\misctitle{About library programming}.
The \var{library} function \kbd{foo}, as defined
at the beginning of this section, is seen to have two mandatory arguments,
$x$ and \fl: no PARI mathematical function has been implemented so as to
accept a variable number of arguments, so all arguments are mandatory when
programming with the library (often, variants are provided corresponding to
the various flag values). When not mentioned otherwise, the result and
arguments of a function are assumed implicitly to be of type \kbd{GEN}. Most
other functions return an object of type \kbd{long} integer in C (see
Chapter~4). The variable or parameter names \var{prec} and \fl\ always denote
\kbd{long} integers.

The \tet{entree} type is used by the library to implement iterators (loops,
sums, integrals, etc.) when a formal variable has to successively assume a
number of values in a given set. When programming with the library, it is
easier and much more efficient to code loops and the like directly. Hence
this type is not documented, although it does appear in a few library
function prototypes below. See \secref{se:sums} for more details.

\section{Standard monadic or dyadic operators}

\subseckbd{+$/$-}: The expressions \kbd{+}$x$ and \kbd{-}$x$ refer
to monadic operators (the first does nothing, the second negates $x$).

\syn{gneg}{x} for \kbd{-}$x$.

\subseckbd{+}, \kbd{-}: The expression $x$ \kbd{+} $y$ is the \idx{sum} and
$x$ \kbd{-} $y$ is the \idx{difference} of $x$ and $y$. Among the prominent
impossibilities are addition/subtraction between a scalar type and a vector
or a matrix, between vector/matrices of incompatible sizes and between an
intmod and a real number.

\syn{gadd}{x,y} $x$ \kbd{+} $y$, \funs{gsub}{x,y} for $x$ \kbd{-} $y$.

\subseckbd{*}: The expression $x$ \kbd{*} $y$ is the \idx{product} of $x$
and $y$. Among the prominent impossibilities are multiplication between
vector/matrices of incompatible sizes, between an intmod and a real
number. Note that because of vector and matrix operations, \kbd{*} is not
necessarily commutative. Note also that since multiplication between two
column or two row vectors is not allowed, to obtain the \idx{scalar product}
of two vectors of the same length, you must multiply a line vector by a
column vector, if necessary by transposing one of the vectors (using
the operator \kbd{\til} or the function \kbd{mattranspose}, see
\secref{se:linear_algebra}).

If $x$ and $y$ are binary quadratic forms, compose them. See also
\kbd{qfbnucomp} and \kbd{qfbnupow}.

\syn{gmul}{x,y} for $x$ \kbd{*} $y$. Also available is
\funs{gsqr}{x} for $x$ \kbd{*} $x$ (faster of course!).

\subseckbd{/}: The expression $x$ \kbd{/} $y$ is the \idx{quotient} of $x$
and $y$. In addition to the impossibilities for multiplication, note that if
the divisor is a matrix, it must be an invertible square matrix, and in that
case the result is $x*y^{-1}$. Furthermore note that the result is as exact
as possible: in particular, division of two integers always gives a rational
number (which may be an integer if the quotient is exact) and \emph{not} the
Euclidean quotient (see $x$ \kbd{\bs} $y$ for that), and similarly the
quotient of two polynomials is a rational function in general. To obtain the
approximate real value of the quotient of two integers, add \kbd{0.} to the
result; to obtain the approximate $p$-adic value of the quotient of two
integers, add \kbd{O(p\pow k)} to the result; finally, to obtain the
\idx{Taylor series} expansion of the quotient of two polynomials, add
\kbd{O(X\pow k)} to the result or use the \kbd{taylor} function
(see \secref{se:taylor}). \label{se:gdiv}
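
For instance, assuming the default accuracy of 28 decimal digits:
\bprog
? 1/3
%1 = 1/3
? 1/3 + 0.
%2 = 0.3333333333333333333333333333
? 1/3 + O(5^3)
%3 = 2 + 3*5 + 5^2 + O(5^3)
? 1/(1 + x) + O(x^4)
%4 = 1 - x + x^2 - x^3 + O(x^4)
@eprog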

\syn{gdiv}{x,y} for $x$ \kbd{/} $y$.

\subseckbd{\bs}: The expression \kbd{$x$ \bs\ $y$} is the \idx{Euclidean
quotient} of $x$ and $y$. If $y$ is a real scalar, this is defined as
\kbd{floor($x$/$y$)} if $y > 0$, and \kbd{ceil($x$/$y$)} if $y < 0$ and
the division is not exact. Hence the remainder \kbd{$x$ - ($x$\bs$y$)*$y$}
is in $[0, |y|[$.

Note that when $y$ is an integer and $x$ a polynomial, $y$ is first promoted
to a polynomial of degree $0$. When $x$ is a vector or matrix, the operator
is applied componentwise.
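
For instance, one gets the following quotients, whose associated remainders
\kbd{$x$ - ($x$\bs$y$)*$y$} are $1$, $2$ and $1$ respectively, all in $[0,|y|[$:
\bprog
? 10 \ 3
%1 = 3
? (-10) \ 3
%2 = -4
? 10 \ (-3)
%3 = -3
@eprog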

\syn{gdivent}{x,y} for $x$ \kbd{\bs} $y$.

\subseckbd{\bs/}: The expression $x$ \b{/} $y$ evaluates to the rounded
\idx{Euclidean quotient} of $x$ and $y$. This is the same as \kbd{$x$ \bs\ $y$}
except for scalar division: the quotient is such that the corresponding
remainder is smallest in absolute value and in case of a tie the quotient
closest to $+\infty$ is chosen (hence the remainder would belong to
$]-|y|/2, |y|/2]$).

When $x$ is a vector or matrix, the operator is applied componentwise.

\syn{gdivround}{x,y} for $x$ \b{/} $y$.

\subseckbd{\%}: The expression \kbd{$x$ \% $y$} evaluates to the modular
\idx{Euclidean remainder} of $x$ and $y$, which we now define. If $y$ is an
integer, this is the smallest non-negative integer congruent to $x$ modulo
$y$. If $y$ is a polynomial, this is the polynomial of smallest degree
congruent to $x$ modulo $y$. When $y$ is a non-integral real number,
 \kbd{$x$\%$y$} is defined as \kbd{$x$ - ($x$\bs$y$)*$y$}. This
coincides with the definition for $y$ integer if and only if $x$ is an
integer, but still belongs to $[0, |y|[$. For instance:
\bprog
? (1/2) % 3
%1 = 2
? 0.5 % 3
  ***   forbidden division t_REAL % t_INT.
? (1/2) % 3.0
%2 = 1/2
@eprog
Note that when $y$ is an integer and $x$ a polynomial, $y$ is first promoted
to a polynomial of degree $0$. When $x$ is a vector or matrix, the operator
is applied componentwise.

\syn{gmod}{x,y} for $x$ \kbd{\%} $y$.

\subsecidx{divrem}$(x,y,\{v\})$: creates a column vector with two components,
the first being the Euclidean quotient (\kbd{$x$ \bs\ $y$}), the second the
Euclidean remainder (\kbd{$x$ - ($x$\bs$y$)*$y$}), of the division of $x$ by
$y$. This avoids the need to do two divisions if one needs both the quotient
and the remainder. If $v$ is present, and $x$, $y$ are multivariate
polynomials, divide with respect to the variable $v$.

Beware that \kbd{divrem($x$,$y$)[2]} is in general not the same as
\kbd{$x$ \% $y$}; there is no operator to obtain it in GP:
\bprog
? divrem(1/2, 3)[2]
%1 = 1/2
? (1/2) % 3
%2 = 2
? divrem(Mod(2,9), 3)[2]
  ***   forbidden division t_INTMOD \ t_INT.
? Mod(2,9) % 6
%3 = Mod(2,3)
@eprog

\syn{divrem}{x,y,v}, where $v$ is a \kbd{long}. Also available as
\funs{gdiventres}{x,y} when $v$ is not needed.

\subseckbd{\pow}: The expression $x\hbox{\kbd{\pow}}n$ is \idx{powering}.
If the exponent is an integer, then exact operations are performed using
binary (left-shift) powering techniques. In particular, in this case $x$
cannot be a vector or matrix unless it is a square matrix (invertible
if the exponent is negative). If $x$ is a $p$-adic number, its
precision will increase if $v_p(n) > 0$. Powering a binary quadratic form
(types \typ{QFI} and \typ{QFR}) returns a reduced representative of the
class, provided the input is reduced. In particular, $x\hbox{\kbd{\pow}}1$ is
identical to $x$.

PARI is able to rewrite the multiplication $x * x$ of two \emph{identical}
objects as $x^2$, or $\kbd{sqr}(x)$. Here, identical means the operands are
two different labels referencing the same chunk of memory; no equality test
is performed. This is no longer true when more than two arguments are
involved.

If the exponent is not of type integer, this is treated as a transcendental
function (see \secref{se:trans}), and in particular has the effect of
componentwise powering on vector or matrices.

As an exception, if the exponent is a rational number $p/q$ and $x$ an
integer modulo a prime or a $p$-adic number, return a solution $y$ of
$y^q=x^p$ if it exists. Currently, $q$ must not have large prime factors.
Beware that
\bprog
    ? Mod(7,19)^(1/2)
    %1 = Mod(11, 19) /* is any square root */
    ? sqrt(Mod(7,19))
    %2 = Mod(8, 19)  /* is the smallest square root */
    ? Mod(7,19)^(3/5)
    %3 = Mod(1, 19)
    ? %3^(5/3)
    %4 = Mod(1, 19)  /* Mod(7,19) is just another cubic root */
@eprog

If the exponent is a negative integer, an \idx{inverse} must be computed.
For non-invertible \typ{INTMOD}, this will fail and implicitly exhibit a
non trivial factor of the modulus:
\bprog
    ? Mod(4,6)^(-1)
      ***   impossible inverse modulo: Mod(2, 6).
@eprog\noindent
(Here, a factor 2 is obtained directly. In general, take the gcd of the
representative and the modulus.) This is most useful when performing
complicated operations modulo an integer $N$ whose factorization is
unknown. Either the computation succeeds and all is well, or a factor $d$
is discovered and the computation may be restarted modulo $d$ or $N/d$.

For non-invertible \typ{POLMOD}, this will fail without exhibiting a
factor.
\bprog
    ? Mod(x^2, x^3-x)^(-1)
      ***   non-invertible polynomial in RgXQ_inv.

    ? a = Mod(3,4)*y^3 + Mod(1,4); b = y^6+y^5+y^4+y^3+y^2+y+1;
    ? Mod(a, b)^(-1);
      ***   non-invertible polynomial in RgXQ_inv.
@eprog\noindent
In fact the latter polynomial is invertible, but the algorithm used
(subresultant) assumes the base ring is a domain. If it is not the case,
as here for $\Z/4\Z$, a result will be correct but chances are an error
will occur first. In this specific case, one should work with $2$-adics.
In general, one can try the following approach
\bprog
    ? inversemod(a, b) =
    { local(m);
      m = polsylvestermatrix(polrecip(a), polrecip(b));
      m = matinverseimage(m, matid(#m)[,1]);
      Polrev( vecextract(m, Str("..", poldegree(b))), variable(b) )
    }
    ? inversemod(a,b)
    %2 = Mod(2,4)*y^5 + Mod(3,4)*y^3 + Mod(1,4)*y^2 + Mod(3,4)*y + Mod(2,4)

@eprog\noindent
This is not guaranteed to work either since it must invert pivots. See
\secref{se:linear_algebra}.

\syn{gpow}{x,n,\var{prec}} for $x\hbox{\kbd{\pow}}n$.

\subsecidx{bittest}$(x,n)$: outputs the $n^{\text{th}}$ bit of $x$ starting
from the right (i.e.~the coefficient of $2^n$ in the binary expansion of $x$).
The result is 0 or 1. To extract several bits at once as a vector, pass a
vector for $n$.
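
For instance, since $7 = 2^2 + 2^1 + 2^0$:
\bprog
? bittest(7, 3)
%1 = 0
? bittest(7, [0, 1, 2, 3])
%2 = [1, 1, 1, 0]
@eprog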

See \secref{se:bitand} for the behaviour at negative arguments.

\syn{bittest}{x,n}, where $n$ and the result are \kbd{long}s.

\subsecidx{shift}$(x,n)$ or $x$ \kbd{<<} $n$ (= $x$ \kbd{>>} $(-n)$): shifts
$x$ componentwise left by $n$ bits if $n\ge0$ and right by $|n|$ bits if $n<0$.
A left shift by $n$ corresponds to multiplication by $2^n$. A right shift of an
integer $x$ by $|n|$ corresponds to a Euclidean division of $x$ by $2^{|n|}$
with a remainder of the same sign as $x$, hence is not the same (in general) as
$x \kbd{\bs} 2^n$.
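
For instance, for a negative integer the result differs from that of \kbd{\bs}:
\bprog
? shift(-5, -1)
%1 = -2
? (-5) \ 2
%2 = -3
@eprog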

\syn{gshift}{x,n} where $n$ is a \kbd{long}.

\subsecidx{shiftmul}$(x,n)$: multiplies $x$ by $2^n$. The difference with
\kbd{shift} is that when $n<0$, ordinary division takes place, hence for
example if $x$ is an integer the result may be a fraction, while for shifts
Euclidean division takes place when $n<0$ hence if $x$ is an integer the result
is still an integer.
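
For instance:
\bprog
? shiftmul(5, -1)
%1 = 5/2
? shift(5, -1)
%2 = 2
@eprog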

\syn{gmul2n}{x,n} where $n$ is a \kbd{long}.

\subsec{Comparison and boolean operators}.\sidx{boolean operators} The six
standard \idx{comparison operators} \kbd{<=}, \kbd{<}, \kbd{>=}, \kbd{>},
\kbd{==}, \kbd{!=} are available in GP, and in library mode under the names
\tet{gle}, \tet{glt}, \tet{gge}, \tet{ggt}, \tet{geq}, \tet{gne} respectively.
The library syntax is $\var{co}(x,y)$, where \var{co} is the comparison
operator. The result is 1 (as a \kbd{GEN}) if the comparison is true, 0 (as a
\kbd{GEN}) if it is false. For the purpose of comparison, \typ{STR} objects are
strictly larger than any other non-string type; two \typ{STR} objects are
compared using the standard lexicographic order.

The standard boolean functions  \kbd{||} (\idx{inclusive or}), \kbd{\&\&}
(\idx{and})\sidx{or} and \kbd{!} (\idx{not}) are also available, and the
library syntax is \funs{gor}{x,y}, \funs{gand}{x,y} and \funs{gnot}{x}
respectively.

In library mode, it is in fact usually preferable to use the two basic
functions which are \funs{gcmp}{x,y} which gives the sign (1, 0, or -1) of
$x-y$, where $x$ and $y$ must be in $\R$, and \funs{gequal}{x,y} which can be
applied to any two PARI objects $x$ and $y$ and gives 1 (i.e.~true) if they are
equal (but not necessarily identical), 0 (i.e.~false) otherwise. Comparisons
to special constants are implemented and should be used instead of
\kbd{gequal}: \funs{gcmp0}{x} ($x==0$ ?), \funs{gcmp1}{x} ($x==1$ ?), and
\funs{gcmp_1}{x} ($x==-1$ ?).

Note that $\kbd{gcmp0}(x)$ tests whether $x$ is equal to zero, even if $x$ is
not an exact object. To test whether $x$ is an exact object which is equal to
zero, one must use \funs{isexactzero}{x}.

Also note that the \kbd{gcmp} and \kbd{gequal} functions return a C-integer,
and \emph{not} a \kbd{GEN} like \kbd{gle} etc.

\smallskip
GP accepts the following synonyms for some of the above functions: since we
thought it might easily lead to confusion, we don't use the customary C
operators for bitwise \kbd{and} or bitwise \kbd{or} (use \tet{bitand} or
\tet{bitor}), hence \kbd{|} and \kbd{\&} are accepted as\sidx{bitwise
and}\sidx{bitwise or} synonyms of \kbd{||} and \kbd{\&\&} respectively.
Also, \kbd{<>} is accepted as a synonym for \kbd{!=}. On the other hand,
\kbd{=} is definitely \emph{not} a synonym for \kbd{==} since it is the
assignment statement.

\subsecidx{lex}$(x,y)$: gives the result of a lexicographic comparison
between $x$ and $y$ (as $-1$, $0$ or $1$). This is to be interpreted in quite
a wide sense: It is admissible to compare objects of different types
(scalars, vectors, matrices), provided the scalars can be compared, as well
as vectors/matrices of different lengths. The comparison is recursive.

In case all components are equal up to the smallest length of the operands,
the more complex is considered to be larger. More precisely, the longest is
the largest; when lengths are equal, we have matrix $>$ vector $>$ scalar.
For example:
\bprog
? lex([1,3], [1,2,5])
%1 = 1
? lex([1,3], [1,3,-1])
%2 = -1
? lex([1], [[1]])
%3 = -1
? lex([1], [1]~)
%4 = 0
@eprog

\syn{lexcmp}{x,y}.

\subsecidx{sign}$(x)$: \idx{sign} ($0$, $1$ or $-1$) of $x$, which must be of
type integer, real or fraction.

\syn{gsigne}{x}. The result is a \kbd{long}.

\subsecidx{max}$(x,y)$ and \funs{min}{x,y}: creates the
maximum and minimum of $x$ and $y$ when they can be compared.

\syn{gmax}{x,y} and \funs{gmin}{x,y}.

\subsecidx{vecmax}$(x)$: if $x$ is a vector or a matrix, returns the maximum
of the elements of $x$, otherwise returns a copy of $x$. Error if $x$ is
empty.

\syn{vecmax}{x}.

\subsecidx{vecmin}$(x)$: if $x$ is a vector or a matrix, returns the minimum
of the elements of $x$, otherwise returns a copy of $x$. Error if $x$ is
empty.

\syn{vecmin}{x}.

\section{Conversions and similar elementary functions or commands}
\label{se:conversion}

\noindent
Many of the conversion functions are rounding or truncating operations. In
this case, if the argument is a rational function, the result is the
Euclidean quotient of the numerator by the denominator, and if the argument
is a vector or a matrix, the operation is done componentwise. This will not
be restated for every function.

\subsecidx{Col}$({x=[\,]})$: transforms the object $x$ into a column vector.
The vector has a single component, except when $x$ is a
vector or a quadratic form (in which case the resulting vector is simply the
initial object considered as a column vector), a matrix (the column of row
vectors comprising the matrix is returned), a character string (a column of
individual characters is returned), but more importantly when $x$ is a
polynomial or a power series. In the case of a polynomial, the coefficients
of the vector start with the leading coefficient of the polynomial, while for
power series only the significant coefficients are taken into account, but
this time by increasing order of degree.
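
For instance:
\bprog
? Col(x^2 + 3*x + 2)
%1 = [1, 3, 2]~
? Col(1 + x + 3*x^2 + O(x^3))
%2 = [1, 1, 3]~
@eprog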

\syn{gtocol}{x}.

\subsecidx{List}$({x=[\,]})$: transforms a (row or column) vector $x$
into a list. The only other way to create a \typ{LIST} is to use the
function \kbd{listcreate}.

This is useless in library mode.

\subsecidx{Mat}$({x=[\,]})$: transforms the object $x$ into a matrix.
If $x$ is already a matrix, a copy of $x$ is created.
If $x$ is not a vector or a matrix, this creates a $1\times 1$ matrix.
If $x$ is a row (resp. column) vector, this creates a 1-row (resp.
1-column) matrix, \emph{unless} all elements are column (resp.~row) vectors
of the same length, in which case the vectors are concatenated sideways
and the associated big matrix is returned.

\bprog
  ? Mat(x + 1)
  %1 =
  [x + 1]
  ? Vec( matid(3) )
  %2 = [[1, 0, 0]~, [0, 1, 0]~, [0, 0, 1]~]
  ? Mat(%)
  %3 =
  [1 0 0]

  [0 1 0]

  [0 0 1]
  ? Col( [1,2; 3,4] )
  %4 = [[1, 2], [3, 4]]~
  ? Mat(%)
  %5 =
  [1 2]

  [3 4]
@eprog

\syn{gtomat}{x}.

\subsecidx{Mod}$(x,y,\{\fl=0\})$:\label{se:Mod} creates the PARI object
$(x \mod y)$, i.e.~an intmod or a polmod. $y$ must be an integer or a
polynomial. If $y$ is an integer, $x$ must be an integer, a rational
number, or a $p$-adic number compatible with the modulus $y$. If $y$ is a
polynomial, $x$ must be a scalar (which is not a polmod), a polynomial, a
rational function, or a power series.

This function is not the same as $x$ \kbd{\%} $y$, the result of which is an
integer or a polynomial.

$\fl$ is obsolete and should not be used.
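
For instance:
\bprog
? Mod(4, 7) + Mod(5, 7)
%1 = Mod(2, 7)
? Mod(x, x^2 + 1)^2
%2 = Mod(-1, x^2 + 1)
@eprog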

\syn{gmodulo}{x,y}.

\subsecidx{Pol}$(x,\{v=x\})$: transforms the object $x$ into a polynomial with
main variable $v$. If $x$ is a scalar, this gives a constant polynomial. If
$x$ is a power series, the effect is identical to \kbd{truncate} (see there),
i.e.~it chops off the $O(X^k)$. If $x$ is a vector, this function creates
the polynomial whose coefficients are given in $x$, with $x[1]$ being the
leading coefficient (which can be zero).

\misctitle{Warning:} this is \emph{not} a substitution function. It will not
transform an object containing variables of higher priority than~$v$.
\bprog
? Pol(x + y, y)
  *** Pol: variable must have higher priority in gtopoly.
@eprog

\syn{gtopoly}{x,v}, where $v$ is a variable number.

\subsecidx{Polrev}$(x,\{v=x\})$: transform the object $x$ into a polynomial
with main variable $v$. If $x$ is a scalar, this gives a constant polynomial.
If $x$ is a power series, the effect is identical to \kbd{truncate} (see
there), i.e.~it chops off the $O(X^k)$. If $x$ is a vector, this function
creates the polynomial whose coefficients are given in $x$, with $x[1]$ being
the constant term. Note that this is the reverse of \kbd{Pol} if $x$ is a
vector, otherwise it is identical to \kbd{Pol}.
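
For instance, on a vector \kbd{Pol} and \kbd{Polrev} differ as follows:
\bprog
? Pol([1, 2, 3])
%1 = x^2 + 2*x + 3
? Polrev([1, 2, 3])
%2 = 3*x^2 + 2*x + 1
@eprog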

\syn{gtopolyrev}{x,v}, where $v$ is a variable number.

\subsecidx{Qfb}$(a,b,c,\{D=0.\})$: creates the binary quadratic form
$ax^2+bxy+cy^2$. If $b^2-4ac>0$, initialize \idx{Shanks}' distance
function to $D$. Negative definite forms are not implemented,
use their positive definite counterpart instead.

\syn{Qfb0}{a,b,c,D,\var{prec}}. Also available are
\funs{qfi}{a,b,c} (when $b^2-4ac<0$), and
\funs{qfr}{a,b,c,d} (when $b^2-4ac>0$).\sidx{binary quadratic form}

\subsecidx{Ser}$(x,\{v=x\})$: transforms the object $x$ into a power series
with main variable $v$ ($x$ by default). If $x$ is a scalar, this gives a
constant power series with precision given by the default \kbd{serieslength}
(corresponding to the C global variable \kbd{precdl}). If $x$ is a
polynomial, the precision is the greatest of \kbd{precdl} and the degree of
the polynomial. If $x$ is a vector, the precision is similarly given, and the
coefficients of the vector are understood to be the coefficients of the power
series starting from the constant term (i.e.~the reverse of the function
\kbd{Pol}).

The warning given for \kbd{Pol} also applies here: this is not a substitution
function.

\syn{gtoser}{x,v}, where $v$ is a variable number (i.e.~a C integer).

\subsecidx{Set}$(\{x=[\,]\})$: converts $x$ into a set, i.e.~into a row
vector of character strings, with strictly increasing entries with respect to
lexicographic ordering. The components of $x$ are put in canonical form (type
\typ{STR}) so as to be easily sorted. To recover an ordinary \kbd{GEN} from
such an element, you can apply \tet{eval} to it.
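
For instance:
\bprog
? Set([3, 1, 1, 2])
%1 = ["1", "2", "3"]
? eval( Set([3, 1, 1, 2])[1] )
%2 = 1
@eprog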

\syn{gtoset}{x}.

\subsecidx{Str}$(\{x\}*)$: converts its argument list into a
single character string (type \typ{STR}, the empty string if $x$ is omitted).
To recover an ordinary \kbd{GEN} from a string, apply \kbd{eval} to it. The
arguments of \kbd{Str} are evaluated in string context, see \secref{se:strings}.

\bprog
? x2 = 0; i = 2; Str(x, i)
%1 = "x2"
? eval(%)
%2 = 0
@eprog\noindent
This function is mostly useless in library mode. Use the pair
\tet{strtoGEN}/\tet{GENtostr} to convert between \kbd{GEN} and \kbd{char*}.
The latter returns a malloced string, which should be freed after usage.

\subsecidx{Strchr}$(x)$: converts $x$ to a string, translating each integer
into a character.

\bprog
? Strchr(97)
%1 = "a"
? Vecsmall("hello world")
%2 = Vecsmall([104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100])
? Strchr(%)
%3 = "hello world"
@eprog

\subsecidx{Strexpand}$(\{x\}*)$: converts its argument list into a
single character string (type \typ{STR}, the empty string if $x$ is omitted).
Then performs \idx{environment expansion}, see \secref{se:envir}.
This feature can be used to read \idx{environment variable} values.

\bprog
? Strexpand("$HOME/doc")
%1 = "/home/pari/doc"
@eprog

The individual arguments are read in string context, see \secref{se:strings}.

\subsecidx{Strtex}$(\{x\}*)$: translates its arguments to TeX
format, and concatenates the results into a single character string (type
\typ{STR}, the empty string if $x$ is omitted).

The individual arguments are read in string context, see \secref{se:strings}.

\subsecidx{Vec}$({x=[\,]})$: transforms the object $x$ into a row vector.
The vector has a single component, except when $x$ is a
vector or a quadratic form (in which case the resulting vector is
simply the initial object considered as a row vector), a matrix
(the vector of columns comprising the matrix is returned), a character string
(a vector of individual characters is returned), but more importantly when
$x$ is a polynomial or a power series. In the case of a polynomial, the
coefficients of the vector start with the leading coefficient of the
polynomial, while for power series only the significant coefficients are
taken into account, but this time by increasing order of degree.

\syn{gtovec}{x}.

\subsecidx{Vecsmall}$({x=[\,]})$: transforms the object $x$ into a row
vector of type \typ{VECSMALL}. This acts as \kbd{Vec}, but only on a
limited set of objects (the result must be representable as a vector of small
integers). In particular, polynomials and power series are forbidden.
If $x$ is a character string, a vector of individual characters in ASCII
encoding is returned (\tet{Strchr} yields back the character string).

\syn{gtovecsmall}{x}.

\subsecidx{binary}$(x)$: outputs the vector of the binary digits of $|x|$.
Here $x$ can be an integer, a real number (in which case the result has two
components, one for the integer part, one for the fractional part) or a
vector/matrix.

\syn{binaire}{x}.

\subsecidx{bitand}$(x,y)$:\label{se:bitand} bitwise \tet{and}
\sidx{bitwise and}of two integers $x$ and $y$, that is the integer
$$\sum_i (x_i~\kbd{and}~y_i) 2^i$$

Negative numbers behave $2$-adically, i.e.~the result is the $2$-adic limit
of \kbd{bitand}$(x_n,y_n)$, where $x_n$ and $y_n$ are non-negative integers
tending to $x$ and $y$ respectively. (The result is an ordinary integer,
possibly negative.)

\bprog
? bitand(5, 3)
%1 = 1
? bitand(-5, 3)
%2 = 3
? bitand(-5, -3)
%3 = -7
@eprog

\syn{gbitand}{x,y}.

\subsecidx{bitneg}$(x,\{n=-1\})$: \idx{bitwise negation} of an integer $x$,
truncated to $n$ bits, that is the integer $$\sum_{i=0}^{n-1} \kbd{not}(x_i)
2^i$$ The special case $n=-1$ means no truncation: an infinite sequence of
leading $1$ is then represented as a negative number.

See \secref{se:bitand} for the behaviour for negative arguments.
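
For instance, since $5$ is \kbd{0101} in binary, negating it on $4$ bits gives
\kbd{1010} $= 10$:
\bprog
? bitneg(5)
%1 = -6
? bitneg(5, 4)
%2 = 10
@eprog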

\syn{gbitneg}{x}.

\subsecidx{bitnegimply}$(x,y)$: bitwise negated imply of two integers $x$ and
$y$ (or \kbd{not} $(x \Rightarrow y)$), that is the integer $$\sum
(x_i~\kbd{and not}(y_i)) 2^i$$

See \secref{se:bitand} for the behaviour for negative arguments.

\syn{gbitnegimply}{x,y}.

\subsecidx{bitor}$(x,y)$: \sidx{bitwise inclusive or}bitwise (inclusive)
\tet{or} of two integers $x$ and $y$, that is the integer $$\sum
(x_i~\kbd{or}~y_i) 2^i$$

See \secref{se:bitand} for the behaviour for negative arguments.

\syn{gbitor}{x,y}.

\subsecidx{bittest}$(x,n)$: outputs the $n^{\text{th}}$ bit of $|x|$ starting
from the right (i.e.~the coefficient of $2^n$ in the binary expansion of
$x$). The result is 0 or 1. To extract several bits at once as a vector, pass
a vector for $n$.

\syn{bittest}{x,n}, where $n$ and the result are \kbd{long}s.

\subsecidx{bitxor}$(x,y)$: bitwise (exclusive) \tet{or}
\sidx{bitwise exclusive or}of two integers $x$ and $y$, that is the integer
$$\sum (x_i~\kbd{xor}~y_i) 2^i$$

See \secref{se:bitand} for the behaviour for negative arguments.

\syn{gbitxor}{x,y}.

\subsecidx{ceil}$(x)$: ceiling of $x$. When $x$ is in $\R$, the result is the
smallest integer greater than or equal to $x$. Applied to a rational
function, $\kbd{ceil}(x)$ returns the Euclidean quotient of the numerator by
the denominator.

\syn{gceil}{x}.

\subsecidx{centerlift}$(x,\{v\})$: lifts an element $x=a \bmod n$ of $\Z/n\Z$
to $a$ in $\Z$, and similarly lifts a polmod to a polynomial. This is the
same as \tet{lift} except that in the particular case of elements of
$\Z/n\Z$, the lift $y$ is such that $-n/2<y\le n/2$. If $x$ is of type
fraction, complex, quadratic, polynomial, power series, rational function,
vector or matrix, the lift is done for each coefficient. Reals are forbidden.
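
For instance:
\bprog
? lift(Mod(7, 9))
%1 = 7
? centerlift(Mod(7, 9))
%2 = -2
@eprog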

\syn{centerlift0}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded
as $-1$. Also available is \funs{centerlift}{x} = \kbd{centerlift0($x$,-1)}.

\subsecidx{changevar}$(x,y)$: creates a copy of the object $x$ where its
variables are modified according to the permutation specified by the vector
$y$. For example, assume that the variables have been introduced in the
order \kbd{x}, \kbd{a}, \kbd{b}, \kbd{c}. Then, if $y$ is the vector
\kbd{[x,c,a,b]}, the variable \kbd{a} will be replaced by \kbd{c}, \kbd{b} by
\kbd{a}, and \kbd{c} by \kbd{b}, \kbd{x} being unchanged. Note that the
permutation must be completely specified, e.g.~\kbd{[c,a,b]} would not work,
since this would replace \kbd{x} by \kbd{c}, and leave \kbd{a} and \kbd{b}
unchanged (as well as \kbd{c} which is the fourth variable of the initial
list). In particular, the new variable names must be distinct.

\syn{changevar}{x,y}.

\subsec{components of a PARI object}:

There are essentially three ways to extract the \idx{components} from a PARI
object.

The first and most general, is the function \funs{component}{x,n} which
extracts the $n^{\text{th}}$-component of $x$. This is to be understood as
follows: every PARI type has one or two initial \idx{code words}. The
components are counted, starting at 1, after these code words. In particular
if $x$ is a vector, this is indeed the $n^{\text{th}}$-component of $x$, if
$x$ is a matrix, the $n^{\text{th}}$ column, if $x$ is a polynomial, the
$n^{\text{th}}$ coefficient (i.e.~of degree $n-1$), and for power series, the
$n^{\text{th}}$ significant coefficient. The use of the function
\kbd{component} implies the knowledge of the structure of the different PARI
types, which can be recalled by typing \b{t} under \kbd{gp}.

\syn{compo}{x,n}, where $n$ is a \kbd{long}.

The two other methods are more natural but more restricted. The function
\funs{polcoeff}{x,n} gives the coefficient of degree $n$ of the polynomial
or power series $x$, with respect to the main variable of $x$ (to check
variable ordering, or to change it, use the function \tet{reorder}, see
\secref{se:reorder}). In particular if $n$ is less than the valuation of
$x$ or in the case of a polynomial, greater than the degree, the result is
zero (contrary to \kbd{compo} which would send an error message). If $x$ is
a power series and $n$ is greater than the largest significant degree, then
an error message is issued.

For greater flexibility, vector or matrix types are also accepted for $x$,
and the meaning is then identical with that of \kbd{compo}.

Finally note that a scalar type is considered by \kbd{polcoeff} as a
polynomial of degree zero.
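
For instance, for the polynomial $x^2+2x+3$:
\bprog
? component(x^2 + 2*x + 3, 1)
%1 = 3
? polcoeff(x^2 + 2*x + 3, 2)
%2 = 1
? polcoeff(x^2 + 2*x + 3, 7)
%3 = 0
@eprog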

\syn{truecoeff}{x,n}.

The third method is specific to vectors or matrices in GP. If $x$ is a
(row or column) vector, then \tet{x[n]} represents the $n^{\text{th}}$
component of $x$, i.e.~\kbd{compo(x,n)}. It is more natural and shorter to
write. If $x$ is a matrix, \tet{x[m,n]} represents the coefficient of
row \kbd{m} and column \kbd{n} of the matrix, \tet{x[m,]} represents
the $m^{\text{th}}$ \emph{row} of $x$, and \tet{x[,n]} represents
the $n^{\text{th}}$ \emph{column} of $x$.

Finally note that in library mode, the macros \teb{gcoeff} and \teb{gmael}
are available as direct accessors to a \kbd{GEN component}. See Chapter 4 for
details.

\subsecidx{conj}$(x)$: conjugate of $x$. The meaning of this
is clear, except that for real quadratic numbers, it means conjugation in the
real quadratic field. This function has no effect on integers, reals,
intmods, fractions or $p$-adics. The only forbidden type is polmod
(see \kbd{conjvec} for this).

\syn{gconj}{x}.

\subsecidx{conjvec}$(x)$: conjugate vector representation of $x$. If $x$ is a
polmod, equal to \kbd{Mod}$(a,q)$, this gives a vector of length
$\text{degree}(q)$ containing the complex embeddings of the polmod if $q$ has
integral or rational coefficients, and the conjugates of the polmod if $q$
has some intmod coefficients. The order is the same as that of the
\kbd{polroots} functions. If $x$ is an integer or a rational number, the
result is~$x$. If $x$ is a (row or column) vector, the result is a matrix
whose columns are the conjugate vectors of the individual elements of $x$.

\syn{conjvec}{x,\var{prec}}.

\subsecidx{denominator}$(x)$: denominator of $x$. The meaning of this
is clear when $x$ is a rational number or function. If $x$ is an integer
or a polynomial, it is treated as a rational number or function,
respectively, and the result is equal to $1$. For polynomials, you
probably want to use 
\bprog
    denominator( content(x) )
@eprog\noindent
instead. As for modular objects, \typ{INTMOD} and \typ{PADIC} have
denominator $1$, and the denominator of a \typ{POLMOD} is the denominator
of its (minimal degree) polynomial representative.

If $x$ is a recursive structure, for instance a vector or matrix, the lcm
of the denominators of its components (a common denominator) is computed.
This also applies for \typ{COMPLEX}s and \typ{QUAD}s.
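
For instance:
\bprog
? denominator(x/2 + 2/3)    \\ a polynomial, hence denominator 1
%1 = 1
? denominator( content(x/2 + 2/3) )
%2 = 6
? denominator([1/2, 1/3, 1/4])
%3 = 12
@eprog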

\misctitle{Warning:} multivariate objects are created according to variable
priorities, with possibly surprising side effects ($x/y$ is a polynomial, but
$y/x$ is a rational function). See \secref{se:priority}.

\syn{denom}{x}.

\subsecidx{floor}$(x)$: floor of $x$. When $x$ is in $\R$, the result is the
largest integer smaller than or equal to $x$. Applied to a rational function,
$\kbd{floor}(x)$ returns the Euclidean quotient of the numerator by the
denominator.

\syn{gfloor}{x}.

\subsecidx{frac}$(x)$: fractional part of $x$. Identical to
$x-\text{floor}(x)$. If $x$ is real, the result is in $[0,1[$.

\syn{gfrac}{x}.

\subsecidx{imag}$(x)$: imaginary part of $x$. When
$x$ is a quadratic number, this is the coefficient of $\omega$ in
the ``canonical'' integral basis $(1,\omega)$.

\syn{gimag}{x}. This returns a copy of the imaginary part. The internal
routine \tet{imag_i} is faster, since it returns the pointer and skips the
copy.

\subsecidx{length}$(x)$: number of non-code words in $x$ really used
(i.e.~the effective length minus 2 for integers and polynomials). In
particular, the degree of a polynomial is equal to its length minus 1. If $x$
has type \typ{STR}, output number of letters.

\syn{glength}{x} and the result is a C long.

\subsecidx{lift}$(x,\{v\})$: lifts an element $x=a \bmod n$ of $\Z/n\Z$ to
$a$ in $\Z$, and similarly lifts a polmod to a polynomial if $v$ is omitted.
Otherwise, lifts only polmods whose modulus has main variable $v$ (if $v$
does not occur in $x$, lifts only intmods). If $x$ is of recursive (non
modular) type, the lift is done coefficientwise. For $p$-adics, this routine
acts as \tet{truncate}. It is not allowed to have $x$ of type \typ{REAL}.

\bprog
? lift(Mod(5,3))
%1 = 2
? lift(3 + O(3^9))
%2 = 3
? lift(Mod(x,x^2+1))
%3 = x
? lift(x * Mod(1,3) + Mod(2,3))
%4 = x + 2
? lift(x * Mod(y,y^2+1) + Mod(2,3))
%5 = y*x + Mod(2, 3)   \\@com do you understand this one ?
? lift(x * Mod(y,y^2+1) + Mod(2,3), x)
%6 = Mod(y, y^2+1) * x + Mod(2, y^2+1)
@eprog

\syn{lift0}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded as
$-1$. Also available is \funs{lift}{x} = \kbd{lift0($x$,-1)}.

\subsecidx{norm}$(x)$: algebraic norm of $x$, i.e.~the product of $x$ with
its conjugate (no square roots are taken), or conjugates for polmods. For
vectors and matrices, the norm is taken componentwise and hence is not the
$L^2$-norm (see \kbd{norml2}). Note that the norm of an element of
$\R$ is its square, so as to be compatible with the complex norm.
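
For instance:
\bprog
? norm(3 + 4*I)
%1 = 25
? norm(-3/2)
%2 = 9/4
? norm(Mod(x + 2, x^2 - 2))
%3 = 2
@eprog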

\syn{gnorm}{x}.

\subsecidx{norml2}$(x)$: square of the $L^2$-norm of $x$. More precisely,
if $x$ is a scalar, $\kbd{norml2}(x)$ is defined to be \kbd{$x$ * conj($x$)}.
If $x$ is a (row or column) vector or a matrix, \kbd{norml2($x$)} is
defined recursively as $\sum_i \kbd{norml2}(x_i)$, where $(x_i)$ run through
the components of $x$. In particular, this yields the usual $\sum |x_i|^2$
(resp.~$\sum |x_{i,j}|^2$) if $x$ is a vector (resp.~matrix) with complex
components.

\bprog
? norml2( [ 1, 2, 3 ] )      \\ vector
%1 = 14
? norml2( [ 1, 2; 3, 4] )   \\ matrix
%1 = 30
? norml2( I + x )
%3 = x^2 + 1
? norml2( [ [1,2], [3,4], 5, 6 ] )   \\ recursively defined
%4 = 91
@eprog

\syn{gnorml2}{x}.

\subsecidx{numerator}$(x)$: numerator of $x$. The meaning of this
is clear when $x$ is a rational number or function. If $x$ is an integer
or a polynomial, it is treated as a rational number or function,
respectively, and the result is $x$ itself. For polynomials, you
probably want to use 
\bprog
    numerator( content(x) )
@eprog\noindent
instead.

In other cases, \kbd{numerator(x)} is defined to be
\kbd{denominator(x)*x}. This is the case when $x$ is a vector or a
matrix, but also for \typ{COMPLEX} or \typ{QUAD}. In particular since a
\typ{PADIC} or \typ{INTMOD} has  denominator $1$, its numerator is
itself.

\misctitle{Warning:} multivariate objects are created according to variable
priorities, with possibly surprising side effects ($x/y$ is a polynomial, but
$y/x$ is a rational function). See \secref{se:priority}.

\syn{numer}{x}.

\subsecidx{numtoperm}$(n,k)$: generates the $k$-th permutation (as a
row vector of length $n$) of the numbers $1$ to $n$. The number $k$ is taken
modulo $n!\,$. This is the inverse function of \tet{permtonum}.

\syn{numtoperm}{n,k}, where $n$ is a \kbd{long}.

\subsecidx{padicprec}$(x,p)$: absolute $p$-adic precision of the object $x$.
This is the minimum precision of the components of $x$. The result is
\kbd{VERYBIGINT} ($2^{31}-1$ for 32-bit machines or $2^{63}-1$ for 64-bit
machines) if $x$ is an exact object.

\syn{padicprec}{x,p} and the result is a \kbd{long}
integer.

\subsecidx{permtonum}$(x)$: given a permutation $x$ on $n$ elements,
gives the number $k$ such that $x=\kbd{numtoperm(n,k)}$, i.e.~inverse
function of \tet{numtoperm}.

\syn{permtonum}{x}.

\subsecidx{precision}$(x,\{n\})$: gives the precision in decimal digits of the
PARI object $x$. If $x$ is an exact object, the largest single precision
integer is returned. If $n$ is not omitted, creates a new object equal to $x$
with a new precision $n$. This is to be understood as follows:

For exact types, no change. For $x$ a vector or a matrix, the operation
is done componentwise.

For real $x$, $n$ is the number of desired significant \emph{decimal} digits.
If $n$ is smaller than the precision of $x$, $x$ is truncated, otherwise $x$
is extended with zeros.

For $x$ a $p$-adic or a power series, $n$ is the desired number of
significant $p$-adic or $X$-adic digits, where $X$ is the main variable of
$x$.

Note that the function \kbd{precision} never changes the type of the result.
In particular it is not possible to use it to obtain a polynomial from a
power series. For that, see \kbd{truncate}.

\syn{precision0}{x,n}, where $n$ is a \kbd{long}. Also available are
\funs{ggprecision}{x} (result is a \kbd{GEN}) and \funs{gprec}{x,n}, where
$n$ is a \kbd{long}.

\subsecidx{random}$(\{N=2^{31}\})$: returns a random integer between $0$ and
$N-1$. $N$ is an integer, which can be arbitrarily large. This is an internal
PARI function and does not depend on the system's random number generator.

The resulting integer is obtained by means of linear congruences and will not
be well distributed in arithmetic progressions. The random seed may be
obtained via \tet{getrand}, and reset using \tet{setrand}.

Note that \kbd{random(2\pow31)} is \emph{not} equivalent to \kbd{random()},
although both return an integer between $0$ and $2^{31}-1$. In fact, calling
\kbd{random} with an argument generates a number of random words (32bit or
64bit depending on the architecture), rescaled to the desired interval.
The default uses directly a 31-bit generator.

\misctitle{Important technical note:} the implementation of this function
is incorrect unless $N$ is a power of $2$ (integers less than the bound are
not equally likely, some may not even occur). It is kept for backward
compatibility only, and has been rewritten from scratch in the 2.4.x unstable
series. Use the following script for a correct version:
\bprog
RANDOM(N) = 
{ local(n, L);

  L = 1; while (L < N, L <<= 1;);
  /* L/2 < N <= L, L power of 2 */
  until(n < N, n = random(L)); n
}
@eprog

\syn{genrand}{N}. Also available are \tet{pari_rand}$()$ which returns a
random \kbd{unsigned long} (32bit or 64bit depending on the architecture), and
\tet{pari_rand31}$()$ which returns a 31bit \kbd{long} integer.

\subsecidx{real}$(x)$: real part of $x$. In the case where $x$ is a quadratic
number, this is the coefficient of $1$ in the ``canonical'' integral basis
$(1,\omega)$.

\syn{greal}{x}. This returns a copy of the real part. The internal routine
\tet{real_i} is faster, since it returns the pointer and skips the copy.

\subsecidx{round}$(x,\{\&e\})$: If $x$ is in $\R$, rounds $x$ to the nearest
integer and sets $e$ to the number of error bits, that is the binary exponent
of the difference between the original and the rounded value (the
``fractional part''). If the exponent of $x$ is too large compared to its
precision (i.e.~$e>0$), the result is undefined and an error occurs if $e$
was not given.

\misctitle{Important remark:} note that, contrary to the other truncation
functions, this function operates on every coefficient at every level of a
PARI object. For example
$$\text{truncate}\left(\dfrac{2.4*X^2-1.7}{X}\right)=2.4*X,$$ whereas
$$\text{round}\left(\dfrac{2.4*X^2-1.7}{X}\right)=\dfrac{2*X^2-2}{X}.$$ An
important use of \kbd{round} is to get exact results after a long approximate
computation, when theory tells you that the coefficients must be integers.
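
For instance, with the rational function above:
\bprog
? round(2.7)
%1 = 3
? round((2.4*x^2 - 1.7)/x)
%2 = (2*x^2 - 2)/x
@eprog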

\syn{grndtoi}{x,\&e}, where $e$ is a \kbd{long} integer. Also available is
\funs{ground}{x}.

\subsecidx{simplify}$(x)$: this function simplifies $x$ as much as it can.
Specifically, a complex or quadratic number whose imaginary part is an exact
0 (i.e.~not an approximate one such as \kbd{O(3)} or \kbd{0.E-28}) is converted
to its real part, and a polynomial of degree $0$ is converted to its constant
term. Simplifications occur recursively.

This function is especially useful before using arithmetic functions,
which expect integer arguments:
\bprog
? x = 1 + y - y
%1 = 1
? divisors(x)
  *** divisors: not an integer argument in an arithmetic function
? type(x)
%2 = "t_POL"
? type(simplify(x))
%3 = "t_INT"
@eprog
Note that GP results are simplified as above before they are stored in the
history. (Unless you disable automatic simplification with \b{y}, that is.)
In particular
\bprog
? type(%1)
%4 = "t_INT"
@eprog

\syn{simplify}{x}.

\subsecidx{sizebyte}$(x)$: outputs the total number of bytes occupied by the
tree representing the PARI object $x$.

\syn{taille2}{x} which returns a \kbd{long}; \funs{taille}{x} returns the
number of \emph{words} instead.

\subsecidx{sizedigit}$(x)$: outputs a quick bound for the number of decimal
digits of (the components of) $x$, off by at most $1$. If you want the
exact value, you can use \kbd{\#Str(x)}, which is slower.

\syn{sizedigit}{x} which returns a \kbd{long}.

\subsecidx{truncate}$(x,\{\&e\})$: truncates $x$ and sets $e$ to the number of
error bits. When $x$ is in $\R$, this means that the part after the decimal
point is chopped away, $e$ is the binary exponent of the difference between
the original and the truncated value (the ``fractional part''). If the
exponent of $x$ is too large compared to its precision (i.e.~$e>0$), the
result is undefined and an error occurs if $e$ was not given. The function
applies componentwise on vector / matrices; $e$ is then the maximal number of
error bits. If $x$ is a rational function, the result is the ``integer part''
(Euclidean quotient of numerator by denominator) and $e$ is not set.

Note a very special use of \kbd{truncate}: when applied to a power series, it
transforms it into a polynomial or a rational function with denominator
a power of $X$, by chopping away the $O(X^k)$. Similarly, when applied to
a $p$-adic number, it transforms it into an integer or a rational number
by chopping away the $O(p^k)$.
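
For instance:
\bprog
? truncate(2.7)
%1 = 2
? truncate(1 - x^2 + O(x^4))
%2 = -x^2 + 1
? truncate(3 + 2*5 + O(5^2))
%3 = 13
@eprog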

\syn{gcvtoi}{x,\&e}, where $e$ is a \kbd{long} integer. Also available is
\funs{gtrunc}{x}.

\subsecidx{valuation}$(x,p)$:\label{se:valuation} computes the highest
exponent of $p$ dividing $x$. If $p$ is of type integer, $x$ must be an
integer, an intmod whose modulus is divisible by $p$, a fraction, a
$q$-adic number with $q=p$, or a polynomial or power series in which case the
valuation is the minimum of the valuation of the coefficients.

If $p$ is of type polynomial, $x$ must be of type polynomial or rational
function, and also a power series if $x$ is a monomial. Finally, the
valuation of a vector, complex or quadratic number is the minimum of the
component valuations.

If $x=0$, the result is \kbd{VERYBIGINT} ($2^{31}-1$ for 32-bit machines or
$2^{63}-1$ for 64-bit machines) if $x$ is an exact object. If $x$ is a
$p$-adic number or a power series, the result is the exponent of the zero.
Any other type combination gives an error.
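
For instance:
\bprog
? valuation(96, 2)
%1 = 5
? valuation(1/12, 2)
%2 = -2
? valuation(x^3 + 2*x^5, x)
%3 = 3
@eprog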

\syn{ggval}{x,p}, and the result is a \kbd{long}.

\subsecidx{variable}$(x)$: gives the main variable of the object $x$, and
$p$ if $x$ is a $p$-adic number. Gives an error if $x$ has no variable
associated to it. Note that this function is useful only in GP, since in
library mode the function \kbd{gvar} is more appropriate.

\syn{gpolvar}{x}. However, in library mode, this function should not be used.
Instead, test whether $x$ is a $p$-adic (type \typ{PADIC}), in which case $p$
is in $x[2]$, or call the function \funs{gvar}{x} which returns the variable
\emph{number} of $x$ if it exists, \kbd{BIGINT} otherwise.

\section{Transcendental functions}\label{se:trans}

As a general rule, which of course in some cases may have exceptions,
transcendental functions operate in the following way:

\item If the argument is either an integer, a real, a rational, a complex
or a quadratic number, it is, if necessary, first converted to a real (or
complex) number using the current \idx{precision} held in the default
\kbd{realprecision}. Note that only exact arguments are converted, while
inexact arguments such as reals are not.

In GP this is transparent to the user, but when programming in library
mode, care must be taken to supply a meaningful parameter \var{prec} as the
last argument of the function if the first argument is an exact object.
This parameter is ignored if the argument is inexact.

   Note that in library mode the precision argument \var{prec} is a word
count including codewords, i.e.~represents the length in words of a real
number, while under \kbd{gp} the precision (which is changed by the metacommand
\b{p} or using \kbd{default(realprecision,...)}) is the number of significant
decimal digits.

Note that some accuracies attainable on 32-bit machines cannot be attained
on 64-bit machines for parity reasons. For example the default \kbd{gp} accuracy
is 28 decimal digits on 32-bit machines, corresponding to \var{prec} having
the value 5, but this cannot be attained on 64-bit machines.\smallskip

After possible conversion, the function is computed. Note that even if the
argument is real, the result may be complex (e.g.~$\text{acos}(2.0)$ or
$\text{acosh}(0.0)$). Note also that the principal branch is always chosen.

\item If the argument is an intmod or a $p$-adic, at present only a
few functions like \kbd{sqrt} (square root), \kbd{sqr} (square), \kbd{log},
\kbd{exp}, powering, \kbd{teichmuller} (Teichm\"uller character) and
\kbd{agm} (arithmetic-geometric mean) are implemented.

Note that in the case of a $2$-adic number, $\kbd{sqr}(x)$ may not be
identical to $x*x$: for example if $x = 1+O(2^5)$ and $y = 1+O(2^5)$ then
$x*y = 1+O(2^5)$ while $\kbd{sqr}(x) = 1+O(2^6)$. Here, $x * x$ yields the
same result as $\kbd{sqr}(x)$ since the two operands are known to be
\emph{identical}. The same statement holds true for $p$-adics raised to the
power $n$, where $v_p(n) > 0$.

\misctitle{Remark:} note that if we wanted to be strictly consistent with
the PARI philosophy, we should have $x*y = (4 \mod 8)$ and $\kbd{sqr}(x) =
(4 \mod 32)$ when both $x$ and $y$ are congruent to $2$ modulo $4$.
However, since intmod is an exact object, PARI assumes that the modulus
must not change, and the result is hence $(0\, \mod\, 4)$ in both cases. On
the other hand, $p$-adics are not exact objects, hence are treated
differently.

\item If the argument is a polynomial, power series or rational function,
it is, if necessary, first converted to a power series using the current
precision held in the variable \tet{precdl}. Under \kbd{gp} this again is
transparent to the user. When programming in library mode, however, the
global variable \kbd{precdl} must be set before calling the function if the
argument has an exact type (i.e.~not a power series). Here \kbd{precdl} is
not an argument of the function, but a global variable.

Then the Taylor series expansion of the function around $X=0$ (where $X$ is
the main variable) is computed to a number of terms depending on the number
of terms of the argument and the function being computed.

\item If the argument is a vector or a matrix, the result is the
componentwise evaluation of the function. In particular, transcendental
functions on square matrices, which are not implemented in the present
version \vers, will have a different name if they are implemented some day.

\subseckbd{\pow}: If $y$ is not of type integer, \kbd{x\pow y} has the same
effect as \kbd{exp(y*log(x))}. It can be applied to $p$-adic numbers as well
as to the more usual types.\sidx{powering}

\syn{gpow}{x,y,\var{prec}}.

\subsecidx{Euler}: Euler's constant $\gamma=0.57721\cdots$. Note that
\kbd{Euler} is one of the few special reserved names which cannot be used for
variables (the others are \kbd{I} and \kbd{Pi}, as well as all function
names). \label{se:euler}

\syn{mpeuler}{\var{prec}} where $\var{prec}$ \emph{must} be given. Note that
this creates $\gamma$ on the PARI stack, but a copy is also created on the
heap for quicker computations next time the function is called.

\subsecidx{I}: the complex number $\sqrt{-1}$.

The library syntax is the global variable \kbd{gi} (of type \kbd{GEN}).

\subsecidx{Pi}: the constant $\pi$ ($3.14159\cdots$).\label{se:pi}

\syn{mppi}{\var{prec}} where $\var{prec}$ \emph{must} be given. Note that
this creates $\pi$ on the PARI stack, but a copy is also created on the heap
for quicker computations next time the function is called.

\subsecidx{abs}$(x)$: absolute value of $x$ (modulus if $x$ is complex).
Rational functions are not allowed. Contrary to most transcendental
functions, an exact argument is \emph{not} converted to a real number before
applying \kbd{abs} and an exact result is returned if possible.
\bprog
? abs(-1)
%1 = 1
? abs(3/7 + 4/7*I)
%2 = 5/7
? abs(1 + I)
%3 = 1.414213562373095048801688724
@eprog\noindent
If $x$ is a polynomial, returns $-x$ if the leading coefficient is
real and negative else returns $x$. For a power series, the constant
coefficient is considered instead.

\syn{gabs}{x,\var{prec}}.

\subsecidx{acos}$(x)$: principal branch of $\text{cos}^{-1}(x)$,
i.e.~such that $\text{Re(acos}(x))\in [0,\pi]$. If
$x\in \R$ and $|x|>1$, then $\text{acos}(x)$ is complex.

\syn{gacos}{x,\var{prec}}.

\subsecidx{acosh}$(x)$: principal branch of $\text{cosh}^{-1}(x)$,
i.e.~such that $\text{Im(acosh}(x))\in [0,\pi]$. If
$x\in \R$ and $x<1$, then $\text{acosh}(x)$ is complex.

\syn{gach}{x,\var{prec}}.

\subsecidx{agm}$(x,y)$: arithmetic-geometric mean of $x$ and $y$. In the
case of complex or negative numbers, the principal square root is always
chosen. $p$-adic or power series arguments are also allowed. Note that
a $p$-adic agm exists only if $x/y$ is congruent to 1 modulo $p$ (modulo
16 for $p=2$). $x$ and $y$ cannot both be vectors or matrices.

\syn{agm}{x,y,\var{prec}}.

\subsecidx{arg}$(x)$: argument of the complex number $x$, such that
$-\pi<\text{arg}(x)\le\pi$.

\syn{garg}{x,\var{prec}}.

\subsecidx{asin}$(x)$: principal branch of $\text{sin}^{-1}(x)$, i.e.~such
that $\text{Re(asin}(x))\in [-\pi/2,\pi/2]$. If $x\in \R$ and $|x|>1$ then
$\text{asin}(x)$ is complex.

\syn{gasin}{x,\var{prec}}.

\subsecidx{asinh}$(x)$: principal branch of $\text{sinh}^{-1}(x)$, i.e.~such
that $\text{Im(asinh}(x))\in [-\pi/2,\pi/2]$.

\syn{gash}{x,\var{prec}}.

\subsecidx{atan}$(x)$: principal branch of $\text{tan}^{-1}(x)$, i.e.~such
that $\text{Re(atan}(x))\in{} ]-\pi/2,\pi/2[$.

\syn{gatan}{x,\var{prec}}.

\subsecidx{atanh}$(x)$: principal branch of $\text{tanh}^{-1}(x)$, i.e.~such
that $\text{Im(atanh}(x))\in{} ]-\pi/2,\pi/2]$. If $x\in \R$ and $|x|>1$ then
$\text{atanh}(x)$ is complex.

\syn{gath}{x,\var{prec}}.

\subsecidx{bernfrac}$(x)$: Bernoulli number\sidx{Bernoulli numbers} $B_x$,
where $B_0=1$, $B_1=-1/2$, $B_2=1/6$,\dots, expressed as a rational number.
The argument $x$ should be of type integer.

\syn{bernfrac}{x}.

\subsecidx{bernreal}$(x)$: Bernoulli number\sidx{Bernoulli numbers}
$B_x$, as \kbd{bernfrac}, but $B_x$ is returned as a real number
(with the current precision).

\syn{bernreal}{x,\var{prec}}.

\subsecidx{bernvec}$(x)$: creates a vector containing, as rational numbers,
the \idx{Bernoulli numbers} $B_0$, $B_2$,\dots, $B_{2x}$.
This routine is obsolete. Use \kbd{bernfrac} instead each time you need a
Bernoulli number in exact form.

\misctitle{Note:} this routine is implemented using repeated independent
calls to \kbd{bernfrac}, which is faster than the standard recursion in exact
arithmetic; hence it offers no speed advantage over individual calls to
\kbd{bernfrac}. It is only kept for backward compatibility: its output uses a
lot of memory space, and coping with the index shift is awkward.

\syn{bernvec}{x}.

\subsecidx{besselh1}$(\var{nu},x)$: $H^1$-Bessel function of index \var{nu}
and argument $x$.

\syn{hbessel1}{\var{nu},x,\var{prec}}.

\subsecidx{besselh2}$(\var{nu},x)$: $H^2$-Bessel function of index \var{nu}
and argument $x$.

\syn{hbessel2}{\var{nu},x,\var{prec}}.

\subsecidx{besseli}$(\var{nu},x)$: $I$-Bessel function of index \var{nu} and
argument $x$. If $x$ converts to a power series, the initial factor
$(x/2)^\nu/\Gamma(\nu+1)$ is omitted (since it cannot be represented in PARI
when $\nu$ is not integral).

\syn{ibessel}{\var{nu},x,\var{prec}}.

\subsecidx{besselj}$(\var{nu},x)$: $J$-Bessel function of index \var{nu} and
argument $x$. If $x$ converts to a power series, the initial factor
$(x/2)^\nu/\Gamma(\nu+1)$ is omitted (since it cannot be represented in PARI
when $\nu$ is not integral).

\syn{jbessel}{\var{nu},x,\var{prec}}.

\subsecidx{besseljh}$(n,x)$: $J$-Bessel function of half integral index.
More precisely, $\kbd{besseljh}(n,x)$ computes $J_{n+1/2}(x)$ where $n$
must be of type integer, and $x$ is any element of $\C$. In the
present version \vers, this function is not very accurate when $x$ is
small.

\syn{jbesselh}{n,x,\var{prec}}.

\subsecidx{besselk}$(\var{nu},x,\{\fl=0\})$: $K$-Bessel function of index
\var{nu} (which can be complex) and argument $x$. Only real and positive
arguments $x$ are allowed in the present version \vers. If $\fl$ is equal to
1, uses another implementation of this function which is faster when $x\gg 1$.

\syn{kbessel}{\var{nu},x,\var{prec}} and
\funs{kbessel2}{\var{nu},x,\var{prec}} respectively.

\subsecidx{besseln}$(\var{nu},x)$: $N$-Bessel function of index \var{nu}
and argument $x$.

\syn{nbessel}{\var{nu},x,\var{prec}}.

\subsecidx{cos}$(x)$: cosine of $x$.

\syn{gcos}{x,\var{prec}}.

\subsecidx{cosh}$(x)$: hyperbolic cosine of $x$.

\syn{gch}{x,\var{prec}}.

\subsecidx{cotan}$(x)$: cotangent of $x$.

\syn{gcotan}{x,\var{prec}}.

\subsecidx{dilog}$(x)$: principal branch of the dilogarithm of $x$,
i.e.~analytic continuation of the power series
$\text{Li}_2(x)=\sum_{n\ge1}x^n/n^2$.

\syn{dilog}{x,\var{prec}}.

\subsecidx{eint1}$(x,\{n\})$: exponential integral
$\int_x^\infty \dfrac{e^{-t}}{t}\,dt$ ($x\in\R$).

If $n$ is present, outputs the $n$-dimensional vector
$[\kbd{eint1}(x),\dots,\kbd{eint1}(nx)]$ ($x \geq 0$). This is faster than
repeatedly calling \kbd{eint1($i$ * x)}.

\syn{veceint1}{x,n,prec}. Also available is \funs{eint1}{x,prec}.

\subsecidx{erfc}$(x)$: complementary error function
$(2/\sqrt\pi)\int_x^\infty e^{-t^2}\,dt$ ($x\in\R$).

\syn{erfc}{x,\var{prec}}.

\subsecidx{eta}$(x,\{\fl=0\})$: \idx{Dedekind}'s $\eta$ function, without the
$q^{1/24}$. This means the following: if $x$ is a complex number with positive
imaginary part, the result is $\prod_{n=1}^\infty(1-q^n)$, where
$q=e^{2i\pi x}$. If $x$ is a power series (or can be converted to a power
series) with positive valuation, the result is $\prod_{n=1}^\infty(1-x^n)$.

If $\fl=1$ and $x$ can be converted to a complex number (i.e.~is not a power
series), computes the true $\eta$ function, including the leading $q^{1/24}$.
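
For instance, in accordance with Euler's pentagonal number theorem, one
expects an expansion beginning as follows (illustrative session):
\bprog
? eta(x + O(x^6))
%1 = 1 - x - x^2 + x^5 + O(x^6)
@eprog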

\syn{eta}{x,\var{prec}}.

\subsecidx{exp}$(x)$: exponential of $x$.
$p$-adic arguments with positive valuation are accepted.
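
For instance, over $\Q_7$ (an illustrative session; the digits follow from
the Taylor series $\sum_{k\ge0} 7^k/k!$):
\bprog
? exp(7 + O(7^4))
%1 = 1 + 7 + 4*7^2 + 2*7^3 + O(7^4)
@eprog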

\syn{gexp}{x,\var{prec}}.

\subsecidx{gammah}$(x)$: gamma function evaluated at the argument $x+1/2$.

\syn{ggamd}{x,\var{prec}}.

\subsecidx{gamma}$(x)$: gamma function of $x$.

\syn{ggamma}{x,\var{prec}}.

\subsecidx{hyperu}$(a,b,x)$: $U$-confluent hypergeometric function with
parameters $a$ and $b$. The parameters $a$ and $b$ can be complex but
the present implementation requires $x$ to be positive.

\syn{hyperu}{a,b,x,\var{prec}}.

\subsecidx{incgam}$(s,x,\{y\})$: incomplete gamma function. The arguments $x$
and $s$ are complex numbers ($x$ must be a positive real number if $s = 0$).
The result returned is $\int_x^\infty e^{-t}t^{s-1}\,dt$. When $y$ is given,
assume (of course without checking!) that $y=\Gamma(s)$. For small $x$, this
will speed up the computation.

\syn{incgam}{s,x,\var{prec}} and \funs{incgam0}{s,x,y,prec},
respectively (an omitted $y$ is coded as \kbd{NULL}).

\subsecidx{incgamc}$(s,x)$: complementary incomplete gamma function.
The arguments $x$ and $s$ are complex numbers such that $s$ is not a pole of
$\Gamma$ and $|x|/(|s|+1)$ is not much larger than 1 (otherwise the
convergence is very slow). The result returned is $\int_0^x
e^{-t}t^{s-1}\,dt$.

\syn{incgamc}{s,x,\var{prec}}.

\subsecidx{log}$(x)$: principal branch of the natural logarithm of
$x$, i.e.~such that $\text{Im(log}(x))\in{} ]-\pi,\pi]$. The result is complex
(with imaginary part equal to $\pi$) if $x\in \R$ and $x < 0$. In general,
the algorithm uses the formula
$$\log(x) \approx {\pi\over 2\text{agm}(1, 4/s)} - m \log 2, $$
if $s = x 2^m$ is large enough. (The result is exact to $B$ bits provided
$s > 2^{B/2}$.) At low accuracies, the series expansion near $1$ is used.

$p$-adic arguments are also accepted for $x$, with the convention that
$\log(p)=0$. Hence in particular $\exp(\log(x))/x$ is not in general equal to
1 but to a $(p-1)$-th root of unity (or $\pm1$ if $p=2$) times a power of
$p$.

\syn{glog}{x,\var{prec}}.

\subsecidx{lngamma}$(x)$: principal branch of the logarithm of the gamma
function of $x$. This function is analytic on the complex plane with
non-positive integers removed. Can have much larger arguments than \kbd{gamma}
itself. The $p$-adic \kbd{lngamma} function is not implemented.

\syn{glngamma}{x,\var{prec}}.

\subsecidx{polylog}$(m,x,\{\fl=0\})$: one of the different polylogarithms,
depending on \fl:

If $\fl=0$ or is omitted: $m^\text{th}$ polylogarithm of $x$, i.e.~analytic
continuation of the power series $\text{Li}_m(x)=\sum_{n\ge1}x^n/n^m$
($|x| < 1$). Uses the functional equation linking the values at $x$ and $1/x$
to restrict to the case $|x|\leq 1$, then the power series when
$|x|^2\le1/2$, and the power series expansion in $\log(x)$ otherwise.

Using $\fl$, computes a modified $m^\text{th}$ polylogarithm of $x$.
We use Zagier's notations; let $\Re_m$ denote $\Re$ or $\Im$ depending on
whether $m$ is odd or even:

If $\fl=1$: compute $\tilde D_m(x)$, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1} \dfrac{(-\log|x|)^k}{k!}\text{Li}_{m-k}(x)
+\dfrac{(-\log|x|)^{m-1}}{m!}\log|1-x|\right).$$

If $\fl=2$: compute $D_m(x)$, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1}\dfrac{(-\log|x|)^k}{k!}\text{Li}_{m-k}(x)
-\dfrac{1}{2}\dfrac{(-\log|x|)^m}{m!}\right).$$

If $\fl=3$: compute $P_m(x)$, defined for $|x|\le1$ by
$$\Re_m\left(\sum_{k=0}^{m-1}\dfrac{2^kB_k}{k!}(\log|x|)^k\text{Li}_{m-k}(x)
-\dfrac{2^{m-1}B_m}{m!}(\log|x|)^m\right).$$

These three functions satisfy the functional equation
$f_m(1/x) = (-1)^{m-1}f_m(x)$.

\syn{polylog0}{m,x,\fl,\var{prec}}.

\subsecidx{psi}$(x)$: the $\psi$-function of $x$, i.e.~the
logarithmic derivative $\Gamma'(x)/\Gamma(x)$.

\syn{gpsi}{x,\var{prec}}.

\subsecidx{sin}$(x)$: sine of $x$.

\syn{gsin}{x,\var{prec}}.

\subsecidx{sinh}$(x)$: hyperbolic sine of $x$.

\syn{gsh}{x,\var{prec}}.

\subsecidx{sqr}$(x)$: square of $x$. This operation is not completely
straightforward, i.e.~not simply identical to $x * x$, since it can usually be
computed more efficiently (roughly one-half of the elementary
multiplications can be saved). Also, squaring a $2$-adic number increases
its precision. For example,
\bprog
? (1 + O(2^4))^2
%1 = 1 + O(2^5)
? (1 + O(2^4)) * (1 + O(2^4))
%2 = 1 + O(2^4)
@eprog\noindent
Note that this function is also called whenever one multiplies two objects
which are known to be \emph{identical}, e.g.~they are the value of the same
variable, or we are computing a power.
\bprog
? x = (1 + O(2^4)); x * x
%3 = 1 + O(2^5)
? (1 + O(2^4))^4
%4 = 1 + O(2^6)
@eprog\noindent
(note the difference between \kbd{\%2} and \kbd{\%3} above).

\syn{gsqr}{x}.

\subsecidx{sqrt}$(x)$: principal branch of the square root of $x$,
i.e.~such that $\text{Arg}(\text{sqrt}(x))\in{} ]-\pi/2, \pi/2]$, or in other
words such that $\Re(\text{sqrt}(x))>0$ or $\Re(\text{sqrt}(x))=0$ and
$\Im(\text{sqrt}(x))\ge 0$. If $x\in \R$ and $x<0$, then the result is
complex with positive imaginary part.

Intmod a prime and $p$-adics are allowed as arguments. In that case,
the square root (if it exists) which is returned is the one whose
first $p$-adic digit (or its unique $p$-adic digit in the case of
intmods) is in the interval $[0,p/2]$. When the argument is an
intmod a non-prime (or a non-prime-adic), the result is undefined.
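
For instance (an illustrative session; the roots shown are the ones whose
first digit lies in $[0,p/2]$, as explained above):
\bprog
? sqrt(Mod(2,7))
%1 = Mod(3, 7)
? sqrt(2 + O(7^3))
%2 = 3 + 7 + 2*7^2 + O(7^3)
@eprog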

\syn{gsqrt}{x,\var{prec}}.

\subsecidx{sqrtn}$(x,n,\{\&z\})$: principal branch of the $n$th root of $x$,
i.e.~such that $\text{Arg}(\text{sqrtn}(x,n))\in{} ]-\pi/n, \pi/n]$. Intmod
a prime and $p$-adics are allowed as arguments.

If $z$ is present, it is set to a suitable root of unity allowing one to
recover all the other roots. If this is not possible, $z$ is set to zero. If
this argument is present and no $n$-th root exists, $0$ is returned instead
of raising an error.
\bprog
? sqrtn(Mod(2,7), 2)
%1 = Mod(4, 7)
? sqrtn(Mod(2,7), 2, &z); z
%2 = Mod(6, 7)
? sqrtn(Mod(2,7), 3)
  *** sqrtn: nth-root does not exist in gsqrtn.
? sqrtn(Mod(2,7), 3,  &z)
%3 = 0
? z
%4 = 0
@eprog

The following script computes all roots in all possible cases:
\bprog
sqrtnall(x,n)=
{
  local(V,r,z,r2);
  r = sqrtn(x,n, &z);
  if (!z, error("Impossible case in sqrtn"));
  if (type(x) == "t_INTMOD" || type(x)=="t_PADIC" ,
    r2 = r*z; n = 1;
    while (r2!=r, r2*=z;n++));
  V = vector(n); V[1] = r;
  for(i=2, n, V[i] = V[i-1]*z);
  V
}
addhelp(sqrtnall,"sqrtnall(x,n):compute the vector of nth-roots of x");
@eprog\noindent

\syn{gsqrtn}{x,n,\&z,\var{prec}}.

\subsecidx{tan}$(x)$: tangent of $x$.

\syn{gtan}{x,\var{prec}}.

\subsecidx{tanh}$(x)$: hyperbolic tangent of $x$.

\syn{gth}{x,\var{prec}}.

\subsecidx{teichmuller}$(x)$: Teichm\"uller character of the $p$-adic number
$x$, i.e. the unique $(p-1)$-th root of unity congruent to $x / p^{v_p(x)}$
modulo $p$.

\syn{teich}{x}.

\subsecidx{theta}$(q,z)$: Jacobi sine theta-function.

\syn{theta}{q,z,\var{prec}}.

\subsecidx{thetanullk}$(q,k)$: $k$-th derivative at $z=0$ of
$\kbd{theta}(q,z)$.

\syn{thetanullk}{q,k,\var{prec}}, where $k$ is a \kbd{long}.

\subsecidx{weber}$(x,\{\fl=0\})$: one of Weber's three $f$ functions.
If $\fl=0$, returns
$$f(x)=\exp(-i\pi/24)\cdot\eta((x+1)/2)\,/\,\eta(x) \quad\hbox{such that}\quad
j=(f^{24}-16)^3/f^{24}\,,$$
where $j$ is the elliptic $j$-invariant  (see the function \kbd{ellj}).
If $\fl=1$, returns
$$f_1(x)=\eta(x/2)\,/\,\eta(x)\quad\hbox{such that}\quad
j=(f_1^{24}+16)^3/f_1^{24}\,.$$
Finally, if $\fl=2$, returns
$$f_2(x)=\sqrt{2}\eta(2x)\,/\,\eta(x)\quad\hbox{such that}\quad
j=(f_2^{24}+16)^3/f_2^{24}.$$
Note the identities $f^8=f_1^8+f_2^8$ and $ff_1f_2=\sqrt2$.

\syn{weber0}{x,\fl,prec}. Associated to the various values of \fl, the
following functions are also available: \funs{weberf}{x,prec},
\funs{weberf1}{x,prec} or \funs{weberf2}{x,prec}.

\subsecidx{zeta}$(s)$: For $s$ a complex number, Riemann's zeta
function \sidx{Riemann zeta-function} $\zeta(s)=\sum_{n\ge1}n^{-s}$,
computed using the \idx{Euler-Maclaurin} summation formula, except
when $s$ is of type integer, in which case it is computed using
Bernoulli numbers\sidx{Bernoulli numbers} for $s\le0$ or $s>0$ and
even, and using modular forms for $s>0$ and odd.

For $s$ a $p$-adic number, Kubota-Leopoldt zeta function at $s$, that
is the unique continuous $p$-adic function on the $p$-adic integers
that interpolates the values of $(1 - p^{-k}) \zeta(k)$ at negative
integers $k$ such that $k \equiv 1 \pmod{p-1}$ (resp. $k$ is odd) if
$p$ is odd (resp. $p = 2$).

\syn{gzeta}{s,\var{prec}}.

\section{Arithmetic functions}\label{se:arithmetic}

These functions are by definition functions whose natural domain of
definition is either $\Z$ (or $\Z_{>0}$), or sometimes polynomials
over a base ring. Functions which concern polynomials exclusively will be
explained in the next section. The way these functions are used is
completely different from transcendental functions: in general only the types
integer and polynomial are accepted as arguments. If a vector or matrix type
is given, the function will be applied on each coefficient independently.

In the present version \vers, all arithmetic functions in the narrow sense
of the word~--- Euler's totient\sidx{Euler totient function} function, the
\idx{Moebius} function, the sums over divisors or powers of divisors
etc.--- call, after trial division by small primes, the same versatile
factoring machinery described under \kbd{factorint}. It includes
\idx{Shanks SQUFOF}, \idx{Pollard Rho}, \idx{ECM} and \idx{MPQS} stages, and
has an early exit option for the functions \teb{moebius} and (the integer
function underlying) \teb{issquarefree}. Note that it relies on a (fairly
strong) probabilistic primality test, see \kbd{ispseudoprime}.

\bigskip
\subsecidx{addprimes}$(\{x=[\,]\})$: adds the integers contained in the
vector $x$ (or the single integer $x$) to a special table of
``user-defined primes'', and returns that table. Whenever \kbd{factor} is
subsequently called, it will trial divide by the elements in this table.
If $x$ is empty or omitted, just returns the current list of extra
primes.

The entries in $x$ are not checked for primality, and in fact they need
only be positive integers. The algorithm makes sure that all elements in
the table are pairwise coprime, so it may end up containing divisors
of the input integers. 

It is a useful trick to add known composite numbers, which the function
$\kbd{factor}(x,0)$ was not able to factor. In case the message
``impossible inverse modulo $\langle$\var{some INTMOD}$\rangle$'' shows
up afterwards, you have just stumbled over a non-trivial factor. Note
that the arithmetic functions in the narrow sense, like \teb{eulerphi},
do \emph{not} use this extra table.

To remove primes from the list use \kbd{removeprimes}.

\syn{addprimes}{x}.

\subsecidx{bestappr}$(x,A,\{B\})$: if $B$ is omitted, finds the best rational
approximation to $x\in\R$ (or $\R[X]$, or $\R^n$, \dots) with denominator at
most equal to $A$ using continued fractions.

If $B$ is present, $x$ is assumed to be of type \typ{INTMOD} modulo $M$ (or a
recursive combination of those), and the routine returns the unique fraction
$a/b$ in coprime integers $a\leq A$ and $b\leq B$ which is congruent to $x$
modulo $M$. If $M \leq 2AB$, uniqueness is not guaranteed and the function
fails with an error message. If rational reconstruction is not possible
(no such $a/b$ exists for at least one component of $x$), returns $-1$.
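
For instance (illustrative session):
\bprog
? bestappr(Pi, 10)
%1 = 22/7
? bestappr(Mod(34,101), 7, 7)   \\ 1/3 is congruent to 34 mod 101
%2 = 1/3
@eprog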

\syn{bestappr0}{x,A,B}. Also available is \funs{bestappr}{x,A} corresponding
to an omitted $B$.

\subsecidx{bezout}$(x,y)$: finds $u$ and $v$ minimal in a
natural sense such that $x*u+y*v=\gcd(x,y)$. The arguments
must be both integers or both polynomials, and the result is a
row vector with three components $u$, $v$, and $\gcd(x,y)$.
\sidx{extended gcd}
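
For instance (illustrative session):
\bprog
? bezout(4, 6)   \\ 4*(-1) + 6*1 = 2
%1 = [-1, 1, 2]
@eprog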

\syn{vecbezout}{x,y} to get the vector, or \funs{gbezout}{x,y, \&u, \&v}
which gives as result the address of the created gcd, and puts
the addresses of the corresponding created objects into $u$ and $v$.

\subsecidx{bezoutres}$(x,y)$: as \kbd{bezout}, with the resultant of $x$ and
$y$ replacing the gcd. \sidx{extended gcd} The algorithm used
(subresultant) assumes that the base ring is a domain.

\syn{vecbezoutres}{x,y} to get the vector, or \funs{subresext}{x,y, \&u, \&v}
which gives as result the address of the created gcd, and puts the
addresses of the corresponding created objects into $u$ and $v$.

\subsecidx{bigomega}$(x)$: number of prime divisors of $|x|$ counted with
multiplicity. $x$ must be an integer.

\syn{bigomega}{x}, the result is a \kbd{long}.

\subsecidx{binomial}$(x,y)$: \idx{binomial coefficient} $\binom{x}{y}$.
Here $y$ must be an integer, but $x$ can be any PARI object.
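
For instance, $x$ need not be an integer (illustrative session):
\bprog
? binomial(10, 4)
%1 = 210
? binomial(1/2, 3)
%2 = 1/16
@eprog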

\syn{binomial}{x,y}, where $y$ must be a \kbd{long}.

\subsecidx{chinese}$(x,\{y\})$: if $x$ and $y$ are both intmods or both
polmods, creates (with the same type) a $z$ in the same residue class
as $x$ and in the same residue class as $y$, if it is possible.

This function also allows vector and matrix arguments, in which case the
operation is recursively applied to each component of the vector or matrix.
For polynomial arguments, it is applied to each coefficient.

If $y$ is omitted, and $x$ is a vector, \kbd{chinese} is applied recursively
to the components of $x$, yielding a residue belonging to the same class as all
components of $x$.

Finally $\kbd{chinese}(x,x) = x$ regardless of the type of $x$; this allows
vector arguments to contain other data, so long as they are identical in both
vectors.
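
For instance (illustrative session):
\bprog
? chinese(Mod(1,3), Mod(2,5))
%1 = Mod(7, 15)
? chinese([Mod(1,3), Mod(2,5), Mod(3,7)])
%2 = Mod(52, 105)
@eprog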

\syn{chinese}{x,y}. Also available is \tet{chinese1}$(x)$, corresponding to an
omitted \kbd{y}.

\subsecidx{content}$(x)$: computes the gcd of all the coefficients of $x$,
when this gcd makes sense. This is the natural definition
if $x$ is a polynomial (and by extension a power series) or a
vector/matrix. This is in general a weaker notion than the \emph{ideal}
generated by the coefficients:
\bprog
    ? content(2*x+y)
    %1 = 1            \\ = gcd(2,y) over Q[y]
@eprog

If $x$ is a scalar, this simply returns the absolute value of $x$ if $x$ is
rational (\typ{INT} or \typ{FRAC}), and either $1$ (inexact input) or $x$
(exact input) otherwise; the result should be identical to \kbd{gcd(x, 0)}. 

The content of a rational function is the ratio of the contents of the
numerator and the denominator. In recursive structures, if a
matrix or vector \emph{coefficient} $x$ appears, the gcd is taken 
not with $x$, but with its content:
\bprog
    ? content([ [2], 4*matid(3) ])
    %1 = 2
@eprog

\syn{content}{x}.

\subsecidx{contfrac}$(x,\{b\},\{nmax\})$: creates the row vector whose
components are the partial quotients of the \idx{continued fraction}
expansion of $x$. That is, a result $[a_0,\dots,a_n]$ means that $x \approx
a_0+1/(a_1+\dots+1/a_n)\dots)$. The output is normalized so that $a_n \neq 1$
(unless we also have $n = 0$).

The number of partial quotients $n$ is limited to $nmax$. If $x$ is a real
number, the expansion stops at the last significant partial quotient if
$nmax$ is omitted. $x$ can also be a rational function or a power series.

If a vector $b$ is supplied, the numerators will be equal to the coefficients
of $b$ (instead of all equal to $1$ as above). The length of the result is
then equal to the length of $b$, unless a partial remainder is encountered
which is equal to zero, in which case the expansion stops. In the case of
real numbers, the stopping criterion is thus different from the one mentioned
above since, if $b$ is too long, some partial quotients may not be
significant.

If $b$ is an integer, the command is understood as \kbd{contfrac($x,nmax$)}.
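
For instance (illustrative session):
\bprog
? contfrac(22/7)
%1 = [3, 7]
@eprog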

\syn{contfrac0}{x,b,nmax}. Also available are
\funs{gboundcf}{x,nmax}, \funs{gcf}{x}, or \funs{gcf2}{b,x}, where $nmax$
is a C integer.

\subsecidx{contfracpnqn}$(x)$: when $x$ is a vector or a one-row matrix, $x$
is considered as the list of partial quotients $[a_0,a_1,\dots,a_n]$ of a
rational number, and the result is the 2 by 2 matrix
$[p_n,p_{n-1};q_n,q_{n-1}]$ in the standard notation of continued fractions,
so $p_n/q_n=a_0+1/(a_1+\dots+1/a_n)\dots)$. If $x$ is a matrix with two rows
$[b_0,b_1,\dots,b_n]$ and $[a_0,a_1,\dots,a_n]$, this is then considered as a
generalized continued fraction and we have similarly
$p_n/q_n=1/b_0(a_0+b_1/(a_1+\dots+b_n/a_n)\dots)$. Note that in this case one
usually has $b_0=1$.
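
For instance, starting from the expansion of $22/7$ shown under
\kbd{contfrac} (illustrative session):
\bprog
? contfracpnqn([3, 7])
%1 =
[22 3]

[7 1]
@eprog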

\syn{pnqn}{x}.

\subsecidx{core}$(n,\{\fl=0\})$: if $n$ is a non-zero integer written as
$n=df^2$ with $d$ squarefree, returns $d$. If $\fl$ is non-zero,
returns the two-element row vector $[d,f]$.
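
For instance, $54 = 6\cdot 3^2$ (illustrative session):
\bprog
? core(54)
%1 = 6
? core(54, 1)
%2 = [6, 3]
@eprog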

\syn{core0}{n,\fl}.
Also available are \funs{core}{n} (= \funs{core0}{n,0}) and \funs{core2}{n}
(= \funs{core0}{n,1}).

\subsecidx{coredisc}$(n,\{\fl\})$: if $n$ is a non-zero integer written as
$n=df^2$ with $d$ fundamental discriminant (including 1), returns $d$. If
$\fl$ is non-zero, returns the two-element row vector $[d,f]$. Note that if
$n$ is not congruent to 0 or 1 modulo 4, $f$ will be a half integer and not
an integer.
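
For instance, $-12 = -3\cdot 2^2$ and $3 = 12\cdot(1/2)^2$ (illustrative
session):
\bprog
? coredisc(-12)
%1 = -3
? coredisc(3, 1)
%2 = [12, 1/2]
@eprog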

\syn{coredisc0}{n,\fl}.
Also available are
\funs{coredisc}{n} (= \funs{coredisc}{n,0}) and
\funs{coredisc2}{n} (= \funs{coredisc}{n,1}).

\subsecidx{dirdiv}$(x,y)$: $x$ and $y$ being vectors of perhaps different
lengths but with $y[1]\neq 0$ considered as \idx{Dirichlet series}, computes
the quotient of $x$ by $y$, again as a vector.

\syn{dirdiv}{x,y}.

\subsecidx{direuler}$(p=a,b,\var{expr},\{c\})$: computes the
\idx{Dirichlet series} associated to the \idx{Euler product} of
expression \var{expr} as $p$ ranges through the primes from $a$ to $b$.
\var{expr} must be a polynomial or rational function in another variable
than $p$ (say $X$) and $\var{expr}(X)$ is understood as
the local factor $\var{expr}(p^{-s})$.

The series is output as a vector of coefficients. If $c$ is present, output
only the first $c$ coefficients in the series. The following command computes
the \teb{sigma} function, associated to $\zeta(s)\zeta(s-1)$:
\bprog
? direuler(p=2, 10, 1/((1-X)*(1-p*X)))
%1 = [1, 3, 4, 7, 6, 12, 8, 15, 13, 18]
@eprog

\synt{direuler}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN b}

\subsecidx{dirmul}$(x,y)$: $x$ and $y$ being vectors of perhaps different
lengths considered as \idx{Dirichlet series}, computes the product of
$x$ by $y$, again as a vector.
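
For instance, the Dirichlet square of the constant sequence $1$, i.e.~of
$\zeta(s)$, gives the number-of-divisors function (illustrative session):
\bprog
? dirmul(vector(8,n,1), vector(8,n,1))
%1 = [1, 2, 2, 3, 2, 4, 2, 4]
@eprog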

\syn{dirmul}{x,y}.

\subsecidx{divisors}$(x)$: creates a row vector whose components are the
divisors of $x$. The factorization of $x$ (as output by \tet{factor}) can
be used instead.

By definition, these divisors are the products of the irreducible
factors of $n$, as produced by \kbd{factor(n)}, raised to appropriate
powers (no negative exponent may occur in the factorization). If $n$ is
an integer, they are the positive divisors, in increasing order.

\syn{divisors}{x}.

\subsecidx{eulerphi}$(x)$: Euler's $\phi$
(totient)\sidx{Euler totient function} function of $|x|$, in other words
$|(\Z/x\Z)^*|$. $x$ must be of type integer.

\syn{phi}{x}.

\subsecidx{factor}$(x,\{\var{lim}=-1\})$: general factorization function.
If $x$ is of type integer, rational, polynomial or rational function, the
result is a two-column matrix, the first column being the irreducibles
dividing $x$ (prime numbers or polynomials), and the second the exponents.
If $x$ is a vector or a matrix, the factoring is done componentwise (hence
the result is a vector or matrix of two-column matrices). By definition,
$0$ is factored as $0^1$.

   If $x$ is of type integer or rational, the factors are \var{pseudoprimes}
(see \kbd{ispseudoprime}), and in general not rigorously proven primes. In
fact, any factor which is $\leq 10^{13}$ is a genuine prime number. Use
\kbd{isprime} to prove primality of other factors, as in
\bprog
fa = factor(2^2^7 +1)
isprime( fa[,1] )
@eprog\noindent
An argument \var{lim} can be added, meaning that we look only for prime
factors $p < \var{lim}$, or up to \kbd{primelimit}, whichever is lowest
(except when $\var{lim}=0$ where the effect is identical to setting
$\var{lim}=\kbd{primelimit}$). In this case, the remaining part may actually
be a proven composite! See \tet{factorint} for more information about the
algorithms used.

   The polynomials or rational functions to be factored must have scalar
coefficients. In particular PARI does \emph{not} know how to factor
multivariate polynomials. See \tet{factormod} and \tet{factorff} for the
algorithms used over finite fields, \tet{factornf} for the algorithms over
number fields. Over $\Q$, \idx{van Hoeij}'s method is used, which is able to
cope with hundreds of modular factors.

   Note that PARI tries to guess in a sensible way over which ring you want
to factor. Note also that factorization of polynomials is done up to
multiplication by a constant. In particular, the factors of rational
polynomials will have integer coefficients, and the content of a polynomial
or rational function is discarded and not included in the factorization. If
needed, you can always ask for the content explicitly:

\bprog
? factor(t^2 + 5/2*t + 1)
%1 =
[2*t + 1 1]

[t + 2 1]

? content(t^2 + 5/2*t + 1)
%2 = 1/2
@eprog\noindent
See also \tet{factornf} and \tet{nffactor}.

\syn{factor0}{x,\var{lim}}, where \var{lim} is a C integer.
Also available are
\funs{factor}{x} (= \funs{factor0}{x,-1}),
\funs{smallfact}{x} (= \funs{factor0}{x,0}).

\subsecidx{factorback}$(f,\{e\},\{nf\})$: gives back the factored object
corresponding to a factorization. The integer $1$ corresponds to the empty
factorization. If the last argument is of number field type (e.g.~created by
\kbd{nfinit}), assume we are dealing with an ideal factorization in the
number field. The resulting ideal product is given in HNF form.

If $e$ is present, $e$ and $f$ must be vectors of the same length ($e$ being
integral), and the corresponding factorization is the product of the
$f[i]^{e[i]}$.

If not, and $f$ is a vector, it is understood as in the preceding case with
$e$ a vector of 1's (the product of the $f[i]$ is returned). Finally, $f$ can be a
regular factorization, as produced with any \kbd{factor} command. A few
examples:
\bprog
? factorback([2,2; 3,1])
%1 = 12
? factorback([2,2], [3,1])
%2 = 12
? factorback([5,2,3])
%3 = 30
? factorback([2,2], [3,1], nfinit(x^3+2))
%4 =
[16 0 0]

[0 16 0]

[0 0 16]
? nf = nfinit(x^2+1); fa = idealfactor(nf, 10)
%5 =
[[2, [1, 1]~, 2, 1, [1, 1]~] 2]

[[5, [-2, 1]~, 1, 1, [2, 1]~] 1]

[[5, [2, 1]~, 1, 1, [-2, 1]~] 1]
? factorback(fa)
  ***   forbidden multiplication t_VEC * t_VEC.
? factorback(fa, nf)
%6 =
[10 0]

[0 10]

@eprog
In the fourth example, $2$ and $3$ are interpreted as principal ideals in a
cubic field. In the fifth one, \kbd{factorback(fa)} is meaningless since we
forgot to indicate the number field, and the entries in the first column of
\kbd{fa} can't be multiplied.

\syn{factorback0}{f,e,\var{nf}}, where an omitted
$\var{nf}$ or $e$ is entered as \kbd{NULL}. Also available is
\tet{factorback}$(f,\var{nf})$ (case $e = \kbd{NULL}$) where an omitted
$\var{nf}$ is entered as \kbd{NULL}.

\subsecidx{factorcantor}$(x,p)$: factors the polynomial $x$ modulo the
prime $p$, using distinct degree plus
\idx{Cantor-Zassenhaus}\sidx{Zassenhaus}. The coefficients of $x$ must be
operation-compatible with $\Z/p\Z$. The result is a two-column matrix, the
first column being the irreducible polynomials dividing $x$, and the second
the exponents. If you want only the \emph{degrees} of the irreducible
polynomials (for example for computing an $L$-function), use
$\kbd{factormod}(x,p,1)$. Note that the \kbd{factormod} algorithm is
usually faster than \kbd{factorcantor}.

\syn{factcantor}{x,p}.

\subsecidx{factorff}$(x,p,a)$: factors the polynomial $x$ in the field
$\F_q$ defined by the irreducible polynomial $a$ over $\F_p$. The
coefficients of $x$ must be operation-compatible with $\Z/p\Z$. The result
is a two-column matrix: the first column contains the irreducible factors of
$x$, and the second their exponents. If all the coefficients of $x$ are in
$\F_p$, a much faster algorithm is applied, using the computation of
isomorphisms between finite fields.

\syn{factorff}{x,p,a}.

\subsecidx{factorial}$(x)$ or $x!$: factorial of $x$. The expression $x!$
gives a result which is an integer, while $\kbd{factorial}(x)$ gives a real
number.

\syn{mpfact}{x} for $x!$ and
\funs{mpfactr}{x,prec} for $\kbd{factorial}(x)$. $x$ must be a \kbd{long}
integer and not a PARI integer.

\subsecidx{factorint}$(n,\{\fl=0\})$: factors the integer $n$ into a product of
pseudoprimes (see \kbd{ispseudoprime}), using a combination of the
\idx{Shanks SQUFOF} and \idx{Pollard Rho} method (with modifications due to
Brent), \idx{Lenstra}'s \idx{ECM} (with modifications by Montgomery), and
\idx{MPQS} (the latter adapted from the \idx{LiDIA} code with the kind
permission of the LiDIA maintainers), as well as a search for pure powers
with exponents $\le 10$. The output is a two-column matrix as for
\kbd{factor}. Use \kbd{isprime} on the result if you want to guarantee
primality.

This gives direct access to the integer factoring engine called by most
arithmetical functions. \fl\ is optional; its binary digits mean 1: avoid
MPQS, 2: skip first stage ECM (we may still fall back to it later), 4: avoid
Rho and SQUFOF, 8: don't run final ECM (as a result, a huge composite may be
declared to be prime). Note that a (strong) probabilistic primality test is
used; thus composites might (very rarely) not be detected.

You are invited to play with the flag settings and watch the internals at
work by using \kbd{gp}'s \tet{debuglevel} default parameter (level 3 shows
just the outline, 4 turns on time keeping, 5 and above show an increasing
amount of internal details). If you see anything funny happening, please let
us know.

\syn{factorint}{n,\fl}.

\subsecidx{factormod}$(x,p,\{\fl=0\})$: factors the polynomial $x$ modulo
the prime integer $p$, using \idx{Berlekamp}. The coefficients of $x$ must be
operation-compatible with $\Z/p\Z$. The result is a two-column matrix, the
first column being the irreducible polynomials dividing $x$, and the second
the exponents. If $\fl$ is non-zero, outputs only the \emph{degrees} of the
irreducible polynomials (for example, for computing an $L$-function). A
different algorithm for computing the mod $p$ factorization is
\kbd{factorcantor} which is sometimes faster.

\syn{factormod}{x,p,\fl}. Also available are
\funs{factmod}{x,p} (which is equivalent to \funs{factormod}{x,p,0}) and
\funs{simplefactmod}{x,p} (= \funs{factormod}{x,p,1}).

\subsecidx{fibonacci}$(x)$: $x^{\text{th}}$ Fibonacci number.

\syn{fibo}{x}. $x$ must be a \kbd{long}.

\subsecidx{ffinit}$(p,n,\{v=x\})$: computes a monic polynomial of degree
$n$ which is irreducible over $\F_p$. For instance if
\kbd{P = ffinit(3,2,y)}, you can represent elements in $\F_{3^2}$ as polmods
modulo \kbd{P}. This function uses a fast variant of Adleman-Lenstra's
algorithm.

\syn{ffinit}{p,n,v}, where $v$ is a variable number.

\subsecidx{gcd}$(x,\{y\})$: creates the greatest common divisor of $x$
and $y$. $x$ and $y$ can be of quite general types, for instance both
rational numbers. If $y$ is omitted and $x$ is a vector, returns the
$\text{gcd}$ of all components of $x$, i.e.~this is equivalent to
\kbd{content(x)}.


When $x$ and $y$ are both given and one of them is a vector/matrix type,
the GCD is again taken recursively on each component, but in a different way.
If $y$ is a vector, resp.~matrix, then the result has the same type as $y$,
and components equal to \kbd{gcd(x, y[i])}, resp.~\kbd{gcd(x, y[,i])}. Else
if $x$ is a vector/matrix the result has the same type as $x$ and an
analogous definition. Note that for these types, \kbd{gcd} is not
commutative.
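
For instance (an illustrative session for the recursive behaviour just
described):
\bprog
? gcd(6, [9, 10, 27])
%1 = [3, 2, 3]
? gcd([12, 18, 30])
%2 = 6
@eprog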

The algorithm used is a naive \idx{Euclid} except for the following inputs:

\item integers: use modified right-shift binary (``plus-minus''
variant).

\item univariate polynomials with coefficients in the same number
field (in particular rational): use modular gcd algorithm.

\item general polynomials: use the \idx{subresultant algorithm} if
coefficient explosion is likely (exact, non modular, coefficients).

\syn{ggcd}{x,y}. For general polynomial inputs, \funs{srgcd}{x,y} is also
available. For univariate \emph{rational} polynomials, one also has
\funs{modulargcd}{x,y}.

\subsecidx{hilbert}$(x,y,\{p\})$: \idx{Hilbert symbol} of $x$ and $y$ modulo
$p$. If $x$ and $y$ are of type integer or fraction, an explicit third
parameter $p$ must be supplied, $p=0$ meaning the place at infinity.
Otherwise, $p$ needs not be given, and $x$ and $y$ can be of compatible types
integer, fraction, real, intmod a prime (result is undefined if the
modulus is not prime), or $p$-adic.

\syn{hil}{x,y,p}.

\subsecidx{isfundamental}$(x)$: true (1) if $x$ is equal to 1 or to the
discriminant of a quadratic field, false (0) otherwise.

\syn{gisfundamental}{x}, but the simpler function \funs{isfundamental}{x}
which returns a \kbd{long} should be used if $x$ is known to be of type
integer.

\subsecidx{ispower}$(x,\{k\}, \{\&n\})$:
if $k$ is given, returns true (1) if $x$ is a $k$-th power, false
(0) if not. In this case, $x$ may be an integer or polynomial,
a rational number or function, or an intmod a prime or $p$-adic.

If $k$ is omitted, only integers and fractions are allowed and the
function returns the maximal $k \geq 2$ such that $x = n^k$ is a perfect
power, or 0 if no such $k$ exists; in particular \kbd{ispower(-1)},
\kbd{ispower(0)}, and \kbd{ispower(1)} all return $0$.

If a third argument $\&n$ is given and a $k$-th root was computed in the
process, then $n$ is set to that root.
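
For instance (illustrative session):
\bprog
? ispower(64)
%1 = 6
? ispower(27, 3, &n)
%2 = 1
? n
%3 = 3
@eprog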

\syn{ispower}{x, k, \&n}, the result is a \kbd{long}. Omitted $k$ or $n$
are coded as \kbd{NULL}.

\subsecidx{isprime}$(x,\{\fl=0\})$: true (1) if $x$ is a (proven) prime
number, false (0) otherwise. This can be very slow when $x$ is indeed
prime and has more than $1000$ digits, say. Use \tet{ispseudoprime} to
quickly check for pseudo primality. See also \kbd{factor}.

If $\fl=0$, use a combination of Baillie-PSW pseudo primality test (see
\tet{ispseudoprime}), Selfridge ``$p-1$'' test if $x-1$ is smooth enough, and
Adleman-Pomerance-Rumely-Cohen-Lenstra (APRCL) for general $x$.

If $\fl=1$, use Selfridge-Pocklington-Lehmer ``$p-1$'' test and output a
primality certificate as follows: return 0 if $x$ is composite, 1 if $x$ is
small enough that passing Baillie-PSW test guarantees its primality
(currently $x < 10^{13}$), $2$ if $x$ is a large prime whose primality could
only sensibly be proven (given the algorithms implemented in PARI) using the
APRCL test. Otherwise ($x$ is large and $x-1$ is smooth) output a three
column matrix as a primality certificate. The first column contains the prime
factors $p$ of $x-1$, the second the corresponding elements $a_p$ as in
Proposition~8.3.1 in GTM~138, and the third the output of \kbd{isprime}$(p,1)$. The
algorithm fails if one of the pseudo-prime factors is not prime, which is
exceedingly unlikely (and well worth a bug report).

If $\fl=2$, use APRCL.

\syn{gisprime}{x,\fl}, but the simpler function \funs{isprime}{x}
which returns a \kbd{long} should be used if $x$ is known to be of
type integer.


\subsecidx{ispseudoprime}$(x,\{\fl\})$: true (1) if $x$ is a strong pseudo
prime (see below), false (0) otherwise. If this function returns false, $x$
is not prime; if, on the other hand it returns true, it is only highly likely
that $x$ is a prime number. Use \tet{isprime} (which is of course much
slower) to prove that $x$ is indeed prime.

If $\fl = 0$, checks whether $x$ is a Baillie-Pomerance-Selfridge-Wagstaff
pseudo prime (strong Rabin-Miller pseudo prime for base $2$, followed by
strong Lucas test for the sequence $(P,-1)$, $P$ smallest positive integer
such that $P^2 - 4$ is not a square mod $x$).

There are no known composite numbers passing this test (in particular, all
composites $\leq 10^{13}$ are correctly detected), although it is expected
that infinitely many such numbers exist.

If $\fl > 0$, checks whether $x$ is a strong Miller-Rabin pseudo prime  for
$\fl$ randomly chosen bases (with end-matching to catch square roots of
$-1$).

\syn{gispseudoprime}{x,\fl}, but the simpler function \funs{ispseudoprime}{x}
which returns a \kbd{long} should be used if $x$ is known to be of type
integer.

\subsecidx{issquare}$(x,\{\&n\})$: true (1) if $x$ is a square, false (0)
if not. What ``being a square'' means depends on the type of $x$: all
\typ{COMPLEX} are squares, as well as all non-negative \typ{REAL}; for
exact types such as \typ{INT}, \typ{FRAC} and \typ{INTMOD}, squares are
numbers of the form $s^2$ with $s$ in $\Z$, $\Q$ and $\Z/N\Z$ respectively.
\bprog
    ? issquare(3)          \\ as an integer
    %1 = 0
    ? issquare(3.)         \\ as a real number
    %2 = 1
    ? issquare(Mod(7, 8))  \\ in Z/8Z
    %3 = 0
    ? issquare( 5 + O(13^4) )  \\ in Q_13
    %4 = 0
@eprog
If $n$ is given and an exact square root had to be computed in
the checking process, puts that square root in $n$. This is the case when
$x$ is a \typ{INT}, \typ{FRAC}, \typ{POL} or \typ{RFRAC} (or a vector of
such objects):
\bprog
    ? issquare(4, &n)
    %1 = 1
    ? n
    %2 = 2
    ? issquare([4, x^2], &n)
    %3 = [1, 1]  \\ both are squares
    ? n
    %4 = [2, x]  \\ the square roots
@eprog
This will \emph{not} work for \typ{INTMOD} (use quadratic reciprocity) or
\typ{SER} (only check the leading coefficient).

\syn{gissquarerem}{x,\&n}. Also available is \funs{gissquare}{x}.

\subsecidx{issquarefree}$(x)$: true (1) if $x$ is squarefree, false (0) if not.
Here $x$ can be an integer or a polynomial.

\syn{gissquarefree}{x}, but the simpler function \funs{issquarefree}{x}
which returns a \kbd{long} should be used if $x$ is known to be of type
integer. This \teb{issquarefree} is just the square of the \idx{Moebius}
function, and is computed as a multiplicative arithmetic function much like
the latter.

\subsecidx{kronecker}$(x,y)$:
\idx{Kronecker symbol} $(x|y)$, where $x$ and $y$ must be of type integer. By
definition, this is the extension of \idx{Legendre symbol} to $\Z \times \Z$
by total multiplicativity in both arguments with the following special rules
for $y = 0, -1$ or $2$:

\item $(x|0) = 1$ if $|x| = 1$ and $0$ otherwise.

\item $(x|-1) = 1$ if $x \geq 0$ and $-1$ otherwise.

\item $(x|2) = 0$ if $x$ is even and $1$ if $x = 1,-1 \mod 8$ and $-1$
if $x=3,-3 \mod 8$.
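
For instance (illustrative session):
\bprog
? kronecker(3, 5)
%1 = -1
? kronecker(2, 7)
%2 = 1
? kronecker(5, 12)
%3 = -1
@eprog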

\syn{kronecker}{x,y}, the result ($0$ or $\pm 1$) is a \kbd{long}.

\subsecidx{lcm}$(x,\{y\})$: least common multiple of $x$ and $y$, i.e.~such
that $\text{lcm}(x,y)*\gcd(x,y) = \text{abs}(x*y)$. If $y$ is omitted and $x$
is a vector, returns the $\text{lcm}$ of all components of $x$.

When $x$ and $y$ are both given and one of them is a vector/matrix type,
the LCM is again taken recursively on each component, but in a different way.
If $y$ is a vector, resp.~matrix, then the result has the same type as $y$,
and components equal to \kbd{lcm(x, y[i])}, resp.~\kbd{lcm(x, y[,i])}. Else
if $x$ is a vector/matrix the result has the same type as $x$ and an
analogous definition. Note that for these types, \kbd{lcm} is not
commutative.

Note that \kbd{lcm(v)} is quite different from 
\bprog
    l = v[1]; for (i = 1, #v, l = lcm(l, v[i]))
@eprog\noindent
Indeed, \kbd{lcm(v)} is a scalar, but \kbd{l} may not be (if one of
the \kbd{v[i]} is a vector/matrix). The computation uses a divide-conquer tree
and should be much more efficient, especially when using the GMP
multiprecision kernel (and more subquadratic algorithms become available):
\bprog
    ? v = vector(10^4, i, random);
    ? lcm(v);
    time = 323 ms.
    ? l = v[1]; for (i = 1, #v, l = lcm(l, v[i]))
    time = 833 ms.
@eprog

\syn{glcm}{x,y}.

\subsecidx{moebius}$(x)$: \idx{Moebius} $\mu$-function of $|x|$. $x$ must
be of type integer.

\syn{mu}{x}, the result ($0$ or $\pm 1$) is a \kbd{long}.

\subsecidx{nextprime}$(x)$: finds the smallest pseudoprime (see
\tet{ispseudoprime}) greater than or equal to $x$. $x$ can be of any real
type. Note that if $x$ is a pseudoprime, this function returns $x$ and not
the smallest pseudoprime strictly larger than $x$. To rigorously prove that
the result is prime, use \kbd{isprime}.

\syn{nextprime}{x}.

\subsecidx{numdiv}$(x)$: number of divisors of $|x|$. $x$ must be of type
integer.

\syn{numbdiv}{x}.

\subsecidx{numbpart}$(n)$: gives the number of unrestricted partitions of
$n$, usually called $p(n)$ in the literature; in other words the number of
nonnegative integer solutions to $a+2b+3c+\cdots=n$. $n$ must be of type
integer and $1\le n<10^{15}$. The algorithm uses the
Hardy-Ramanujan-Rademacher formula.

\syn{numbpart}{n}.

\subsecidx{omega}$(x)$: number of distinct prime divisors of $|x|$. $x$
must be of type integer.

\syn{omega}{x}, the result is a \kbd{long}.

\subsecidx{precprime}$(x)$: finds the largest pseudoprime (see
\tet{ispseudoprime}) less than or equal to $x$. $x$ can be of any real type.
Returns 0 if $x\le1$. Note that if $x$ is a prime, this function returns $x$
and not the largest prime strictly smaller than $x$. To rigorously prove that
the result is prime, use \kbd{isprime}.

\syn{precprime}{x}.

\subsecidx{prime}$(x)$: the $x^{\text{th}}$ prime number, which must be among
the precalculated primes.

\syn{prime}{x}. $x$ must be a \kbd{long}.

\subsecidx{primepi}$(x)$: the prime counting function. Returns the number of
primes $p$, $p \leq x$. Uses a naive algorithm so that $x$ must be less than
\kbd{primelimit}.

\syn{primepi}{x}.

\subsecidx{primes}$(x)$: creates a row vector whose components
are the first $x$ prime numbers, which must be among the precalculated primes.

\syn{primes}{x}. $x$ must be a \kbd{long}.

\subsecidx{qfbclassno}$(D,\{\fl=0\})$: ordinary class number of the quadratic
order of discriminant $D$. In the present version \vers, an $O(D^{1/2})$
algorithm is used for $D > 0$ (using Euler product and the functional
equation) so $D$ should not be too large, say $D < 10^8$, for the time to be
reasonable. On the other hand, for $D < 0$ one can reasonably compute
\kbd{qfbclassno($D$)} for $|D|<10^{25}$, since the routine uses
\idx{Shanks}'s method which is in $O(|D|^{1/4})$. For larger values of $|D|$,
see \kbd{quadclassunit}.

If $\fl=1$, compute the class number using \idx{Euler product}s and the
functional equation. However, it is in $O(|D|^{1/2})$.

\misctitle{Important warning.} For $D < 0$, this function may give incorrect
results when the class group has a low exponent (has many cyclic factors),
because implementing \idx{Shanks}'s method in full generality slows it down
immensely. It is therefore strongly recommended to double-check results using
either the version with $\fl = 1$ or the function \kbd{quadclassunit}.

\misctitle{Warning.} contrary to what its name implies, this routine does not
compute the number of classes of binary primitive forms of discriminant $D$,
which is equal to the \emph{narrow} class number. The two notions are the same
when $D < 0$ or the fundamental unit $\varepsilon$ has negative norm; when $D
> 0$ and $N\varepsilon > 0$, the number of classes of forms is twice the
ordinary class number. This is a problem which we cannot fix for backward
compatibility reasons. Use the following routine if you are only interested
in the number of classes of forms:
\bprog
QFBclassno(D) =
  qfbclassno(D) * if (D < 0 || norm(quadunit(D)) < 0, 1, 2)
@eprog\noindent
Here are a few examples:
\bprog
? qfbclassno(400000028)
time = 3,140 ms.
%1 = 1
? quadclassunit(400000028).no
time = 20 ms. \\@com{ much faster}
%2 = 1
? qfbclassno(-400000028)
time = 0 ms.
%3 = 7253 \\@com{ correct, and fast enough}
? quadclassunit(-400000028).no
time = 0 ms.
%4 = 7253
@eprog

\syn{qfbclassno0}{D,\fl}. Also available:
\funs{classno}{D} (= \funs{qfbclassno}{D}),
\funs{classno2}{D} (= \funs{qfbclassno}{D,1}), and finally
we have the function \funs{hclassno}{D} which computes the class number of
an imaginary quadratic field by counting reduced forms, an $O(|D|)$
algorithm. See also \kbd{qfbhclassno}.

\subsecidx{qfbcompraw}$(x,y)$ \idx{composition} of the binary quadratic forms
$x$ and $y$, without \idx{reduction} of the result. This is useful e.g.~to
compute a generating element of an ideal.

\syn{compraw}{x,y}.

\subsecidx{qfbhclassno}$(x)$: \idx{Hurwitz class number} of $x$, where
$x$ is non-negative and congruent to 0 or 3 modulo 4. For $x > 5\cdot
10^5$, we assume the GRH, and use \kbd{quadclassunit} with default
parameters.

\syn{hclassno}{x}.

\subsecidx{qfbnucomp}$(x,y,l)$: \idx{composition} of the primitive positive
definite binary quadratic forms $x$ and $y$ (type \typ{QFI}) using the NUCOMP
and NUDUPL algorithms of \idx{Shanks}, \`a la Atkin. $l$ is any positive
constant, but for optimal speed, one should take $l=|D|^{1/4}$, where $D$ is
the common discriminant of $x$ and $y$. When $x$ and $y$ do not have the same
discriminant, the result is undefined.

The current implementation is straightforward and in general \emph{slower}
than the generic routine (since the latter takes advantage of asymptotically
fast operations and careful optimizations).

\syn{nucomp}{x,y,l}. The auxiliary function \funs{nudupl}{x,l} can be
used when $x=y$.

\subsecidx{qfbnupow}$(x,n)$: $n$-th power of the primitive positive definite
binary quadratic form $x$ using \idx{Shanks}'s NUCOMP and NUDUPL algorithms
(see \kbd{qfbnucomp}, in particular the final warning).

\syn{nupow}{x,n}.

\subsecidx{qfbpowraw}$(x,n)$: $n$-th power of the binary quadratic form
$x$, computed without doing any \idx{reduction} (i.e.~using \kbd{qfbcompraw}).
Here $n$ must be non-negative and $n<2^{31}$.

\syn{powraw}{x,n} where $n$ must be a \kbd{long}
integer.

\subsecidx{qfbprimeform}$(x,p)$: prime binary quadratic form of discriminant
$x$ whose first coefficient is the prime number $p$. By abuse of notation,
$p = \pm 1$ is a valid special case which returns the unit form. Returns an
error if $x$ is not a quadratic residue mod $p$. In the case where $x>0$,
$p < 0$ is allowed, and the ``distance'' component of the form is set equal
to zero according to the current precision. (Note that negative definite
\typ{QFI} are not implemented.)

\syn{primeform}{x,p,\var{prec}}, where the third variable $\var{prec}$ is a
\kbd{long}, but is only taken into account when $x>0$.

\subsecidx{qfbred}$(x,\{\fl=0\},\{D\},\{\var{isqrtD}\},\{\var{sqrtD}\})$:
reduces the binary quadratic form $x$ (updating Shanks's distance function
if $x$ is indefinite). The binary digits of $\fl$ are toggles meaning

\quad 1: perform a single \idx{reduction} step

\quad 2: don't update \idx{Shanks}'s distance

  $D$, \var{isqrtD}, \var{sqrtD}, if present, supply the values of the
discriminant, $\floor{\sqrt{D}}$, and $\sqrt{D}$ respectively
(no checking is done of these facts). If $D<0$ these values are useless,
and all references to Shanks's distance are irrelevant.

\syn{qfbred0}{x,\fl,D,\var{isqrtD},\var{sqrtD}}. Use \kbd{NULL}
to omit any of $D$, \var{isqrtD}, \var{sqrtD}.

\noindent Also available are

\funs{redimag}{x} (= \funs{qfbred}{x} where $x$ is definite),

\noindent and for indefinite forms:

\funs{redreal}{x} (= \funs{qfbred}{x}),

\funs{rhoreal}{x} (= \funs{qfbred}{x,1}),

\funs{redrealnod}{x,sq} (= \funs{qfbred}{x,2,,isqrtD}),

\funs{rhorealnod}{x,sq} (= \funs{qfbred}{x,3,,isqrtD}).

\subsecidx{qfbsolve}$(Q,p)$: solves the equation $Q(x,y)=p$ over the integers,
where $Q$ is a binary quadratic form and $p$ a prime number.

Returns $[x,y]$ as a two-component vector, or zero if there is no solution.
Note that this function returns only one solution and not all the solutions.

Let $D = \disc Q$. The algorithm used runs in probabilistic polynomial time
in $p$ (through the computation of a square root of $D$ modulo $p$); it is
polynomial time in $D$ if $Q$ is imaginary, but exponential time if $Q$ is
real (through the computation of a full cycle of reduced forms). In the
latter case, note that \tet{bnfisprincipal} provides a solution in heuristic
subexponential time in $D$ assuming the GRH.

\syn{qfbsolve}{Q,n}.

\subsecidx{quadclassunit}$(D,\{\fl=0\},\{\var{tech}=[]\})$:
\idx{Buchmann-McCurley}'s sub-exponential algorithm for computing the class
group of a quadratic order of discriminant $D$.

This function should be used instead of \tet{qfbclassno} or \tet{quadregula}
when $D<-10^{25}$, $D>10^{10}$, or when the \emph{structure} is wanted. It
is a special case of \tet{bnfinit}, which is slower, but more robust.

If $\fl$ is non-zero \emph{and} $D>0$, computes the narrow class group and
regulator, instead of the ordinary (or wide) ones. In the current version
\vers, this does not work at all: use the general function \tet{bnfnarrow}.

Optional parameter \var{tech} is a row vector of the form $[c_1, c_2]$, where
$c_1 \leq c_2$ are positive real numbers which control the execution time and
the stack size. For a given $c_1$, set $c_2 = c_1$ to get maximum speed. To
get a rigorous result under \idx{GRH}, you must take $c_2\geq 6$. Reasonable
values for $c_1$ are between $0.1$ and $2$. More precisely, the algorithm will
\emph{assume} that prime ideals of norm less than $c_2 (\log |D|)^2$ generate
the class group, but the bulk of the work is done with prime ideals of norm
less than $c_1 (\log |D|)^2$. A larger $c_1$ means that relations are easier
to find, but more relations are needed and the linear algebra will be harder.
The default is $c_1 = c_2 = 0.2$, so the result is \emph{not} rigorously
proven.

The result is a vector $v$ with 3 components if $D<0$, and
$4$ otherwise. They correspond respectively to

\item $v[1]$: the class number

\item $v[2]$: a vector giving the structure of the class group as a
product of cyclic groups;

\item $v[3]$: a vector giving generators of those cyclic groups (as
binary quadratic forms).

\item $v[4]$: (omitted if $D < 0$) the regulator, computed to an
accuracy which is the maximum of an internal accuracy determined by the
program and the current default (note that once the regulator is known to a
small accuracy it is trivial to compute it to very high accuracy, see the
tutorial).

\syn{quadclassunit0}{D,\fl,tech}. Also available are
\funs{buchimag}{D,c_1,c_2} and \funs{buchreal}{D,\fl,c_1,c_2}.

\subsecidx{quaddisc}$(x)$: discriminant of the quadratic field
$\Q(\sqrt{x})$, where $x\in\Q$.

\syn{quaddisc}{x}.

\subsecidx{quadhilbert}$(D,\{pq\})$: relative equation defining the
\idx{Hilbert class field} of the quadratic field of discriminant $D$.

If $D < 0$, uses complex multiplication (\idx{Schertz}'s variant). The
technical component $pq$, if supplied, is a vector $[p,q]$ where $p$, $q$ are
the prime numbers needed for Schertz's method. More precisely, prime
ideals above $p$ and $q$ should be non-principal and coprime to all reduced
representatives of the class group. In addition, if one of these ideals has
order $2$ in the class group, they should have the same class. Finally, for
efficiency, $\gcd(24,(p-1)(q-1))$ should be as large as possible.
The routine returns $0$ if $[p,q]$ is not suitable.

If $D > 0$ \idx{Stark units} are used and (in rare cases) a
vector of extensions may be returned whose compositum is the requested class
field. See \kbd{bnrstark} for details.

\syn{quadhilbert}{D,pq,\var{prec}}.

\subsecidx{quadgen}$(D)$: \label{se:quadgen}creates the quadratic
number\sidx{omega} $\omega=(a+\sqrt{D})/2$ where $a=0$ if $D\equiv0\mod4$,
$a=1$ if $D\equiv1\mod4$, so that $(1,\omega)$ is an integral basis for the
quadratic order of discriminant $D$. $D$ must be an integer congruent to 0 or
1 modulo 4, which is not a square.
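
For instance, for $D = 5$ we get $\omega = (1+\sqrt{5})/2$, which is printed
as \kbd{w} and satisfies $\omega^2 = \omega + 1$ (illustrative session):
\bprog
? quadgen(5)
%1 = w
? quadgen(5)^2
%2 = w + 1
@eprog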

\syn{quadgen}{x}.

\subsecidx{quadpoly}$(D,\{v=x\})$: creates the ``canonical'' quadratic
polynomial (in the variable $v$) corresponding to the discriminant $D$,
i.e.~the minimal polynomial of $\kbd{quadgen}(D)$. $D$ must be an integer
congruent to 0 or 1 modulo 4, which is not a square.
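
For instance (illustrative session):
\bprog
? quadpoly(5)
%1 = x^2 - x - 1
? quadpoly(-4, y)
%2 = y^2 + 1
@eprog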

\syn{quadpoly0}{x,v}.

\subsecidx{quadray}$(D,f,\{\var{lambda}\})$: relative equation for the ray
class field of conductor $f$ for the quadratic field of discriminant $D$
using analytic methods. A \kbd{bnf} for $x^2 - D$ is also accepted in place
of $D$.

For $D < 0$, uses the $\sigma$ function. If supplied, \var{lambda} is the
technical element $\lambda$ of \kbd{bnf} necessary for Schertz's method. In
that case, returns 0 if $\lambda$ is not suitable.

For $D>0$, uses Stark's conjecture, and a vector of relative equations may be
returned. See \tet{bnrstark} for more details.

\syn{quadray}{D,f,lambda,prec}, where an omitted \kbd{lambda} is coded as
\kbd{NULL}.

\subsecidx{quadregulator}$(x)$: regulator of the quadratic field of
positive discriminant $x$. Returns an error if $x$ is not a discriminant
(fundamental or not) or if $x$ is a square. See also \kbd{quadclassunit} if
$x$ is large.

\syn{regula}{x,\var{prec}}.

\subsecidx{quadunit}$(D)$: fundamental unit\sidx{fundamental units} of the
real quadratic field $\Q(\sqrt D)$ where  $D$ is the positive discriminant
of the field. If $D$ is not a fundamental discriminant, this probably gives
the fundamental unit of the corresponding order. $D$ must be an integer
congruent to 0 or 1 modulo 4, which is not a square; the result is a
quadratic number (see \secref{se:quadgen}).

\syn{fundunit}{x}.

\subsecidx{removeprimes}$(\{x=[\,]\})$: removes the primes listed in $x$ from
the prime number table. In particular \kbd{removeprimes(addprimes)} empties
the extra prime table. $x$ can also be a single integer. List the current
extra primes if $x$ is omitted.

\syn{removeprimes}{x}.

\subsecidx{sigma}$(x,\{k=1\})$: sum of the $k^{\text{th}}$ powers of the
positive divisors of $|x|$. $x$ and $k$ must be of type integer.
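
For instance, for $x = 12$, whose divisors are $1,2,3,4,6,12$ (illustrative
session):
\bprog
? sigma(12)
%1 = 28
? sigma(12, 2)
%2 = 210
@eprog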

\syn{sumdiv}{x} (= \funs{sigma}{x}) or \funs{gsumdivk}{x,k} (=
\funs{sigma}{x,k}), where $k$ is a C long integer.

\subsecidx{sqrtint}$(x)$: integer square root of $x$, which must be a
non-negative integer. The result is non-negative and rounded towards zero.

\syn{sqrti}{x}. Also available is \tet{sqrtremi}$(x,\&r)$ which returns
$s$ such that $x = s^2 + r$, with $0 \leq r \leq 2s$.

\subsecidx{zncoppersmith}$(P, N, X, \{B=N\})$: finds all integers $x_0$ with
$|x_0| \leq X$ such that
$$\gcd(N, P(x_0)) \geq B.$$
If $N$ is prime or a prime power, \tet{polrootsmod} or \tet{polrootspadic}
will be much faster. $X$ must be smaller than $\exp(\log^2 B / (\deg(P) \log
N))$.

\syn{zncoppersmith}{P, N, X, B}, where an omitted $B$ is coded as \kbd{NULL}.

\subsecidx{znlog}$(x,g)$: $g$ must be a primitive root mod a prime $p$, and
the result is the discrete log of $x$ in the multiplicative group
$(\Z/p\Z)^*$. This function uses a simple-minded combination of
Pohlig-Hellman algorithm and Shanks baby-step/giant-step which requires
$O(\sqrt{q})$ storage, where $q$ is the largest prime factor of $p-1$. Hence
it cannot be used when the largest prime divisor of $p-1$ is greater than
about $10^{13}$.
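
For instance, $2$ is a primitive root modulo $11$ and $2^8 = 256 \equiv 3
\pmod{11}$ (illustrative session):
\bprog
? znlog(3, Mod(2,11))
%1 = 8
@eprog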

\syn{znlog}{x,g}.

\subsecidx{znorder}$(x,\{\var{o}\})$: $x$ must be an integer mod $n$, and the
result is the order of $x$ in the multiplicative group $(\Z/n\Z)^*$. Returns
an error if $x$ is not invertible. If optional parameter $o$ is given it is
assumed to be a multiple of the order (used to limit the search space).
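
For instance, $4 = 2^2$ has order $5$ modulo $11$ (illustrative session):
\bprog
? znorder(Mod(4,11))
%1 = 5
@eprog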

\syn{znorder}{x,o}, where an omitted $o$ is coded as \kbd{NULL}. Also
available is \funs{order}{x}.

\subsecidx{znprimroot}$(n)$: returns a primitive root (generator) of
$(\Z/n\Z)^*$, whenever this latter group is cyclic ($n = 4$ or $n = 2p^k$ or
$n = p^k$, where $p$ is an odd prime and $k \geq 0$).

\syn{gener}{x}.

\subsecidx{znstar}$(n)$: gives the structure of the multiplicative group
$(\Z/n\Z)^*$ as a 3-component row vector $v$, where $v[1]=\phi(n)$ is the
order of that group, $v[2]$ is a $k$-component row-vector $d$ of integers
$d[i]$ such that $d[i]>1$ and $d[i]\mid d[i-1]$ for $i \ge 2$ and
$(\Z/n\Z)^* \simeq \prod_{i=1}^k(\Z/d[i]\Z)$, and $v[3]$ is a $k$-component row
vector giving generators of the image of the cyclic groups $\Z/d[i]\Z$.

\syn{znstar}{n}.

\section{Functions related to elliptic curves}

We have implemented a number of functions which are useful for number
theorists working on elliptic curves. We always use \idx{Tate}'s notations.
The functions assume that the curve is given by a general Weierstrass
model\sidx{Weierstrass equation}
$$
  y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,
$$
where a priori the $a_i$ can be of any scalar type. This curve can be
considered as a five-component vector \kbd{E=[a1,a2,a3,a4,a6]}. Points on
\kbd{E} are represented as two-component vectors \kbd{[x,y]}, except for the
point at infinity, i.e.~the identity element of the group law, represented by
the one-component vector \kbd{[0]}.

  It is useful to have at one's disposal more information. This is given by
the function \tet{ellinit} (see there), which initializes and returns an
\tev{ell} structure by default. If a specific flag is added, a
shortened \tev{sell}, for small \tev{ell}, is returned, which is much
faster to compute but contains less information. The following \idx{member
functions} are available to deal with the output of \kbd{ellinit},
both \var{ell} and \var{sell}:
\settabs\+xxxxxxxxxxxxxxxxxx&: &\cr

\+ \kbd{a1}--\kbd{a6}, \kbd{b2}--\kbd{b8}, \kbd{c4}--\kbd{c6} &: &
coefficients of the elliptic curve.\cr

\+ \tet{area} &: &  volume of the complex lattice defining $E$.\cr

\+ \tet{disc} &: & discriminant of the curve.\cr

\+ \tet{j}    &: & $j$-invariant of the curve.\cr

\+ \tet{omega}&: & $[\omega_1,\omega_2]$, periods forming a basis of
the complex lattice defining $E$ ($\omega_1$ is the\cr

\+            &   & real period, and $\omega_2/\omega_1$ belongs to
Poincar\'e's half-plane).\cr

\+ \tet{eta}  &: & quasi-periods $[\eta_1, \eta_2]$, such that
$\eta_1\omega_2-\eta_2\omega_1=i\pi$.\cr

\+ \tet{roots}&: & roots of the associated Weierstrass equation.\cr

\+ \tet{tate} &: & $[u^2,u,v]$ in the notation of Tate.\cr

\+ \tet{w} &: & Mestre's $w$ (this is technical).\cr

\noindent
The member functions \kbd{area}, \kbd{eta} and \kbd{omega} are only available
for curves over $\Q$. Conversely, \kbd{tate} and \kbd{w} are only available
for curves defined over $\Q_p$. The use of member functions is best described
by an example:
\bprog
  ? E = ellinit([0,0,0,0,1]); \\@com The curve $y^2 = x^3 + 1$
  ? E.a6
  %2 = 1
  ? E.c6
  %3 = -864
  ? E.disc
  %4 = -432
@eprog
\smallskip

Some functions, in particular those relative to height computations (see
\kbd{ellheight}) require also that the curve be in minimal Weierstrass
form, which is duly stressed in their description below. This is achieved by
the function \kbd{ellminimalmodel}. \emph{Using a non-minimal model in such a
routine will yield a wrong result!}

All functions related to elliptic curves share the prefix \kbd{ell}, and the
precise curve we are interested in is always the first argument, in either
one of the three formats discussed above, unless otherwise specified. The
requirements are given as the \emph{minimal} ones: any richer structure may
replace the ones requested. For instance, in functions which have no use for
the extra information given by an \tev{ell} structure, the curve can be given
either as a five-component vector, as an \var{sell}, or as an \var{ell};
if an \var{sell} is requested, an \var{ell} may equally be given.

\subsecidx{elladd}$(E,z1,z2)$: sum of the points $z1$ and $z2$ on the
elliptic curve corresponding to $E$.
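
For instance, on the curve $y^2 = x^3 + 1$, the points $[0,1]$ and $[2,3]$
should add up to $[-1,0]$:
\bprog
? E = ellinit([0,0,0,0,1]);
? elladd(E, [0,1], [2,3])
%2 = [-1, 0]
@eprog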

\syn{addell}{E,z1,z2}.

\subsecidx{ellak}$(E,n)$: computes the coefficient $a_n$ of the
$L$-function of the elliptic curve $E$, i.e.~in principle coefficients of a
newform of weight 2 assuming \idx{Taniyama-Weil conjecture} (which is now
known to hold in full generality thanks to the work of \idx{Breuil},
\idx{Conrad}, \idx{Diamond}, \idx{Taylor} and \idx{Wiles}). $E$ must be an
\var{sell} as output by \kbd{ellinit}. For this function
to work for every $n$ and not just those prime to the conductor, $E$ must
be a minimal Weierstrass equation. If this is not the case, use the
function \kbd{ellminimalmodel} before using \kbd{ellak}.

\syn{akell}{E,n}.

\subsecidx{ellan}$(E,n)$: computes the vector of the first $n$ $a_k$
corresponding to the elliptic curve $E$. All comments in \kbd{ellak}
description remain valid.

\syn{anell}{E,n}, where $n$ is a C integer.

\subsecidx{ellap}$(E,p,\{\fl=0\})$: computes the $a_p$ corresponding to the
elliptic curve $E$ and the prime number $p$. These are defined by the
equation $\#E(\F_p) = p+1 - a_p$, where $\#E(\F_p)$ stands for the number
of points of the curve $E$ over the finite field $\F_p$. When $\fl$ is $0$,
this uses the baby-step giant-step method and a trick due to Mestre. This
runs in time $O(p^{1/4})$ and requires $O(p^{1/4})$ storage, hence becomes
unreasonable when $p$ has about 30 digits.

If $\fl$ is $1$, computes the $a_p$ as a sum of Legendre symbols. This is
slower than the previous method as soon as $p$ is greater than 100, say.

No checking is done that $p$ is indeed prime. $E$ must be an \var{sell} as
output by \kbd{ellinit}, defined over $\Q$, $\F_p$ or $\Q_p$. $E$ must be
given by a Weierstrass equation minimal at $p$.
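
As an illustration, for the curve given by the minimal equation
$y^2 + y = x^3 - x^2 - 10x - 20$ of conductor $11$, one expects
\bprog
? E = ellinit([0,-1,1,-10,-20]);
? ellap(E, 7)
%2 = -2
? ellan(E, 10)
%3 = [1, -2, -1, 2, 1, 2, -2, 0, -2, -2]
@eprog
\noindent in accordance with the $q$-expansion
$q - 2q^2 - q^3 + 2q^4 + q^5 + \cdots$ of the associated newform of level 11.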

\syn{ellap0}{E,p,\fl}. Also available are \funs{apell}{E,p}, corresponding
to $\fl=0$, and \funs{apell2}{E,p} ($\fl=1$).

\subsecidx{ellbil}$(E,z1,z2)$: if $z1$ and $z2$ are points on the elliptic
curve $E$, assumed to be integral given by a minimal model, this function
computes the value of the canonical bilinear form on $z1$, $z2$:
$$ ( h(E,z1\kbd{+}z2) - h(E,z1) - h(E,z2) ) / 2 $$
where \kbd{+} denotes of course addition on $E$. In addition, $z1$ or $z2$
(but not both) can be vectors or matrices.

\syn{bilhell}{E,z1,z2,\var{prec}}.

\subsecidx{ellchangecurve}$(E,v)$: changes the data for the elliptic curve $E$
by changing the coordinates using the vector \kbd{v=[u,r,s,t]}, i.e.~if $x'$
and $y'$ are the new coordinates, then $x=u^2x'+r$, $y=u^3y'+su^2x'+t$.
$E$ must be an \var{sell} as output by \kbd{ellinit}.

\syn{coordch}{E,v}.

\subsecidx{ellchangepoint}$(x,v)$: changes the coordinates of the point or
vector of points $x$ using the vector \kbd{v=[u,r,s,t]}, i.e.~if $x'$ and
$y'$ are the new coordinates, then $x=u^2x'+r$, $y=u^3y'+su^2x'+t$ (see also
\kbd{ellchangecurve}).

\syn{pointch}{x,v}.

\subsecidx{ellconvertname}$(\var{name})$: 
converts an elliptic curve name, as found in the \tet{elldata} database,
from a string to a triplet $[\var{conductor}, \var{isogeny class},
\var{index}]$. It will also convert a triplet back to a curve name.
Examples:
\bprog
? ellconvertname("123b1")
%1 = [123, 1, 1]
? ellconvertname(%)
%2 = "123b1"
@eprog

\syn{ellconvertname}{\var{name}}.

\subsecidx{elleisnum}$(E,k,\{\fl=0\})$: $E$ being an elliptic curve as
output by \kbd{ellinit} (or, alternatively, given by a 2-component vector
$[\omega_1,\omega_2]$ representing its periods), and $k$ being an even
positive integer, computes the numerical value of the Eisenstein series of
weight $k$ at $E$, namely
$$
(2i \pi/\omega_2)^k
  \Big(1 + 2/\zeta(1-k) \sum_{n\geq 0} n^{k-1}q^n / (1-q^n)\Big),
$$
where $q = e(\omega_1/\omega_2)$.

When \fl\ is non-zero and $k=4$ or 6, returns the elliptic invariants $g_2$
or $g_3$, such that
 $$y^2 = 4x^3 - g_2 x - g_3$$
is a Weierstrass equation for $E$.

\syn{elleisnum}{E,k,\fl}.

\subsecidx{elleta}$(om)$: returns the two-component row vector
$[\eta_1,\eta_2]$ of quasi-periods associated to $\kbd{om} = [\omega_1,
\omega_2]$.

\syn{elleta}{om, \var{prec}}.

\subsecidx{ellgenerators}$(E)$: returns a $\Z$-basis of the free part of the
\idx{Mordell-Weil group} associated to $E$.  This function depends on the
\tet{elldata} database being installed and referencing the curve, and so
is only available for curves over $\Z$ of small conductors.

\syn{ellgenerators}{E}.

\subsecidx{ellglobalred}$(E)$: calculates the arithmetic conductor, the global
minimal model of $E$ and the global \idx{Tamagawa number} $c$. 
$E$ must be an \var{sell} as output by \kbd{ellinit}, \emph{and is supposed
to have all its coefficients $a_i$ in} $\Q$. The result is a 3-component
vector $[N,v,c]$. $N$ is the arithmetic conductor of the curve. $v$ gives the
coordinate change for $E$ over $\Q$ to the minimal integral model (see
\tet{ellminimalmodel}). Finally $c$ is the product of the local Tamagawa
numbers $c_p$, a quantity which enters in the \idx{Birch and Swinnerton-Dyer
conjecture}.\sidx{minimal model}
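
For instance, for the minimal equation $y^2 + y = x^3 - x^2 - 10x - 20$ of
conductor $11$, the first component of the result should be $11$ and the
coordinate change trivial:
\bprog
? E = ellinit([0,-1,1,-10,-20]);
? ellglobalred(E)[1]
%2 = 11
@eprog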

\syn{ellglobalred}{E}.

\subsecidx{ellheight}$(E,z,\{\fl = 2\})$: global \idx{N\'eron-Tate height} of
the point $z$ on the elliptic curve $E$ (defined over $\Q$), given by a
standard minimal integral model. $E$ must be an \kbd{ell} as output by
\kbd{ellinit}. \fl selects the algorithm used to compute the archimedean
local height. If $\fl=0$, this computation is done using sigma and
theta-functions and a trick due to J. Silverman. If $\fl=1$, use Tate's $4^n$
algorithm. If $\fl=2$, use Mestre's AGM algorithm. The latter is much faster
than the other two, both in theory (converges quadratically) and in practice.

\syn{ellheight0}{E,z,\fl,\var{prec}}. Also available are
\funs{ghell}{E,z,\var{prec}} ($\fl=0$) and \funs{ghell2}{E,z,\var{prec}}
($\fl=1$).

\subsecidx{ellheightmatrix}$(E,x)$: $x$ being a vector of points, this
function outputs the Gram matrix of $x$ with respect to the N\'eron-Tate
height, in other words, the $(i,j)$ component of the matrix is equal to
\kbd{ellbil($E$,x[$i$],x[$j$])}. The rank of this matrix, at least in some
approximate sense, gives the rank of the set of points, and if $x$ is a
basis of the \idx{Mordell-Weil group} of $E$, its determinant is equal to
the regulator of $E$. Note that this matrix should be divided by 2 to be in
accordance with certain normalizations. $E$ is assumed to be integral,
given by a minimal model.

\syn{mathell}{E,x,\var{prec}}.

\subsecidx{ellidentify}$(E)$: look up the elliptic curve $E$ (over $\Z$)
in the \tet{elldata} database and return \kbd{[[N, M, G], C]} where $N$
is the name of the curve in J.~E.~Cremona's database, $M$ the minimal
model, $G$ a $\Z$-basis of the free part of the \idx{Mordell-Weil group}
of $E$ and $C$ the change of coordinates (see \kbd{ellchangecurve}).

\syn{ellidentify}{E}.

\subsecidx{ellinit}$(E,\{\fl=0\})$: initialize an \tet{ell} structure,
associated to the elliptic curve $E$. $E$ is a $5$-component
vector $[a_1,a_2,a_3,a_4,a_6]$ defining the elliptic curve with Weierstrass
equation
$$ Y^2 + a_1 XY + a_3 Y = X^3 + a_2 X^2 + a_4 X + a_6 $$
or a string; in this case, the coefficients of the curve with matching name
are looked up in the \tet{elldata} database, if available. For the time
being, only curves over a prime field $\F_p$ and over the $p$-adic or
real numbers (including rational numbers) are fully supported. Other
domains are only supported for very basic operations such as point
addition.

The result of \tet{ellinit} is an \tev{ell} structure by default, and
a shortened \tev{sell} if $\fl=1$. Both contain the following information in
their components:
%
$$ a_1,a_2,a_3,a_4,a_6,b_2,b_4,b_6,b_8,c_4,c_6,\Delta,j.$$
%
All are accessible via member functions. In particular, the discriminant is
\kbd{$E$.disc}, and the $j$-invariant is \kbd{$E$.j}.

The other six components are only present if $\fl$ is $0$ or omitted.
Their content depends on whether the curve is defined over $\R$ or not:
\smallskip
\item When $E$ is defined over $\R$, \kbd{$E$.roots} is a vector whose
three components contain the roots of the right hand side of the associated
Weierstrass equation.
$$ (y + a_1x/2 + a_3/2)^2 = g(x) $$
If the roots are all real, then they are ordered by decreasing value. If only
one is real, it is the first component.

Then $\omega_1 = $\kbd{$E$.omega[1]} is the real period of $E$ (integral of
$dx/(2y+a_1x+a_3)$ over the connected component of the identity element of
the real points of the curve), and $\omega_2 = $\kbd{$E$.omega[2]} is a
complex period. In other words, \kbd{$E$.omega} forms a basis of the
complex lattice defining $E$, with
$\tau=\dfrac{\omega_2}{\omega_1}$ having positive imaginary part.

\kbd{$E$.eta} is a row vector containing the corresponding values $\eta_1$
and $\eta_2$ such that $\eta_1\omega_2-\eta_2\omega_1=i\pi$.

Finally, \kbd{$E$.area} is the volume of the complex lattice defining
$E$.\smallskip

\item When $E$ is defined over $\Q_p$, the $p$-adic valuation of $j$
must be negative. Then \kbd{$E$.roots} is the vector with a single component
equal to the $p$-adic root of the associated Weierstrass equation
corresponding to $-1$ under the Tate parametrization.

\kbd{$E$.tate} yields the three-component vector $[u^2,u,q]$, in the
notations of Tate. If the $u$-component does not belong to $\Q_p$, it is set
to zero.

\kbd{$E$.w} is Mestre's $w$ (this is technical).

\smallskip For all other base fields or rings, the last six components are
arbitrarily set equal to zero. See also the description of member functions
related to elliptic curves at the beginning of this section.

\syn{ellinit0}{E,\fl,\var{prec}}. Also available are
\funs{initell}{E,\var{prec}} ($\fl=0$) and
\funs{smallinitell}{E,\var{prec}} ($\fl=1$).

\subsecidx{ellisoncurve}$(E,z)$: gives 1 (i.e.~true) if the point $z$ is on
the elliptic curve $E$, 0 otherwise. If $E$ or $z$ have imprecise coefficients,
an attempt is made to take this into account, i.e.~an imprecise equality is
checked, not a precise one. It is allowed for $z$ to be a vector of points
in which case a vector (of the same type) is returned.
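
For instance, on $y^2 = x^3 + 1$, the point $[2,3]$ lies on the curve whereas
$[1,1]$ does not:
\bprog
? E = ellinit([0,0,0,0,1]);
? ellisoncurve(E, [2,3])
%2 = 1
? ellisoncurve(E, [1,1])
%3 = 0
@eprog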

\syn{ellisoncurve}{E,z}. Also available is \funs{oncurve}{E,z}
which returns a \kbd{long} but does not accept vector of points.

\subsecidx{ellj}$(x)$: elliptic $j$-invariant. $x$ must be a complex number
with positive imaginary part, or convertible into a power series or a
$p$-adic number with positive valuation.
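
For instance, since $j(i) = 1728$,
\bprog
? ellj(I)
@eprog
\noindent should return $1728$ up to the current real precision.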

\syn{jell}{x,\var{prec}}.

\subsecidx{elllocalred}$(E,p)$: calculates the \idx{Kodaira} type of the
local fiber of the elliptic curve $E$ at the prime $p$.
$E$ must be an \var{sell} as output by \kbd{ellinit}, and is assumed to have
all its coefficients $a_i$ in $\Z$. The result is a 4-component vector
$[f,kod,v,c]$. Here $f$ is the exponent of $p$ in the arithmetic conductor of
$E$, and $kod$ is the Kodaira type which is coded as follows:

1 means good reduction (type I$_0$), 2, 3 and 4 mean types II, III and IV
respectively, $4+\nu$ with $\nu>0$ means type I$_\nu$;
finally the opposite values $-1$, $-2$, etc.~refer to the starred types
I$_0^*$, II$^*$, etc. The third component $v$ is itself a vector $[u,r,s,t]$
giving the coordinate changes done during the local reduction. Normally, this
has no use if $u$ is 1, that is, if the given equation was already minimal.
Finally, the last component $c$ is the local \idx{Tamagawa number} $c_p$.

\syn{elllocalred}{E,p}.

\subsecidx{elllseries}$(E,s,\{A=1\})$: $E$ being an \var{sell} as output by
\kbd{ellinit}, this computes the value of the L-series of $E$ at $s$. It is
assumed that $E$ is defined over $\Q$, not necessarily minimal. The optional
parameter $A$ is a cutoff point for the integral, which must be chosen close
to 1 for best speed. The result must be independent of $A$, so this allows
some internal checking of the function.

Note that if the conductor of the curve is large, say greater than $10^{12}$,
this function will take an unreasonable amount of time since it uses an
$O(N^{1/2})$ algorithm.

\syn{elllseries}{E,s,A,\var{prec}} where $\var{prec}$ is a \kbd{long} and an
omitted $A$ is coded as \kbd{NULL}.

\subsecidx{ellminimalmodel}$(E,\{\&v\})$:  return the standard minimal
integral model of the rational elliptic curve $E$. If present, sets $v$ to the
corresponding change of variables, which is a vector $[u,r,s,t]$ with
rational components. The return value is identical to that of
\kbd{ellchangecurve(E, v)}.

The resulting model has integral coefficients, is everywhere minimal, $a_1$
is 0 or 1, $a_2$ is 0, 1 or $-1$ and $a_3$ is 0 or 1. Such a model is unique,
and the vector $v$ is unique if we specify that $u$ is positive, which we do.
\sidx{minimal model}
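
As an illustrative check, the equation $y^2 = x^3 - 16x$ is not minimal: the
change of variables $[u,r,s,t] = [2,0,0,0]$ transforms it into the minimal
model $y^2 = x^3 - x$. One thus expects
\bprog
? E = ellinit([0,0,0,-16,0]);
? F = ellminimalmodel(E, &v);
? [F.a4, v]
%3 = [-1, [2, 0, 0, 0]]
@eprog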

\syn{ellminimalmodel}{E,\&v}, where an omitted $v$ is coded as \kbd{NULL}.

\subsecidx{ellorder}$(E,z)$: gives the order of the point $z$ on the elliptic
curve $E$ if it is a torsion point, zero otherwise. In the present version
\vers, this is implemented only for elliptic curves defined over $\Q$.
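
For instance, $[0,0]$ is a $2$-torsion point on the curve $y^2 = x^3 - x$:
\bprog
? E = ellinit([0,0,0,-1,0]);
? ellorder(E, [0,0])
%2 = 2
@eprog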

\syn{orderell}{E,z}.

\subsecidx{ellordinate}$(E,x)$: gives a 0, 1 or 2-component vector containing
the $y$-coordinates of the points of the curve $E$ having $x$ as
$x$-coordinate.

\syn{ordell}{E,x}.

\subsecidx{ellpointtoz}$(E,z)$: if $E$ is an elliptic curve with coefficients
in $\R$, this computes a complex number $t$ (modulo the lattice defining
$E$) corresponding to the point $z$, i.e.~such that, in the standard
Weierstrass model, $\wp(t)=z[1],\wp'(t)=z[2]$. In other words, this is the
inverse function of \kbd{ellztopoint}. More precisely, if $(w1,w2)$ are the
real and complex periods of $E$, $t$ is such that $0 \leq \Re(t) < w1$
and $0 \leq \Im(t) < \Im(w2)$.

If $E$ has coefficients in $\Q_p$, then either Tate's $u$ is in $\Q_p$, in
which case the output is a $p$-adic number $t$ corresponding to the point $z$
under the Tate parametrization, or only its square is, in which case the
output is $t+1/t$. $E$ must be an \var{ell} as output by \kbd{ellinit}.

\syn{zell}{E,z,\var{prec}}.

\subsecidx{ellpow}$(E,z,n)$: computes $n$ times the point $z$ for the
group law on the elliptic curve $E$. Here, $n$ can be in $\Z$, or $n$
can be a complex quadratic integer if the curve $E$ has complex multiplication
by $n$ (if not, an error message is issued).
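
For instance, doubling the point $[2,3]$ on $y^2 = x^3 + 1$ should give
$[0,1]$:
\bprog
? E = ellinit([0,0,0,0,1]);
? ellpow(E, [2,3], 2)
%2 = [0, 1]
@eprog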

\syn{powell}{E,z,n}.

\subsecidx{ellrootno}$(E,\{p=1\})$: $E$ being an \var{sell} as output by
\kbd{ellinit}, this computes the local (if $p\neq 1$) or global (if $p=1$)
root number of the L-series of the elliptic curve $E$. Note that the global
root number is the sign of the functional equation and conjecturally is the
parity of the rank of the \idx{Mordell-Weil group}. The equation for $E$ must
have coefficients in $\Q$ but need \emph{not} be minimal.

\syn{ellrootno}{E,p} and the result (equal to $\pm1$) is a \kbd{long}.

\subsecidx{ellsigma}$(E,z,\{\fl=0\})$: value of the Weierstrass $\sigma$
function of the lattice associated to $E$ as given by \kbd{ellinit}
(alternatively, $E$ can be given as a lattice $[\omega_1,\omega_2]$).

If $\fl=1$, computes an (arbitrary) determination of $\log(\sigma(z))$.

If $\fl=2,3$, same as $\fl=0,1$ respectively, using the product expansion
instead of the theta series.

\syn{ellsigma}{E,z,\fl}.

\subsecidx{ellsearch}$(N)$: if $N$ is an integer, it is taken as a conductor;
if $N$ is a string, it can be a curve name ("11a1"), an isogeny class
("11a") or a conductor ("11"). This function finds all curves in the
\tet{elldata} database with the given property.

If $N$ is a full curve name, the output format is $[N, [a_1,a_2,a_3,a_4,a_6],
G]$ where $[a_1,a_2,a_3,a_4,a_6]$ are the coefficients of the Weierstrass
equation of the curve and $G$ is a $\Z$-basis of the free part of the
\idx{Mordell-Weil group} associated to the curve.

If $N$ is not a full-curve name, the output is the list (as a vector) of all
matching curves in the above format.

\syn{ellsearch}{N}. Also available is \funs{ellsearchcurve}{N}, which only
accepts complete curve names.

\subsecidx{ellsub}$(E,z1,z2)$: difference of the points $z1$ and $z2$ on the
elliptic curve corresponding to $E$.

\syn{subell}{E,z1,z2}.

\subsecidx{elltaniyama}$(E)$: computes the modular parametrization of the
elliptic curve $E$, where $E$ is an \var{sell} as output by \kbd{ellinit}, in
the form of a two-component vector $[u,v]$ of power series, given to the
current default series precision. This vector is characterized by the
following two properties. First the point $(x,y)=(u,v)$ satisfies the
equation of the elliptic curve. Second, the differential $du/(2v+a_1u+a_3)$
is equal to $f(z)dz$, a differential form on $H/\Gamma_0(N)$ where $N$ is the
conductor of the curve. The variable used in the power series for $u$ and $v$
is $x$, which is implicitly understood to be equal to $\exp(2i\pi z)$. It is
assumed that the curve is a \emph{strong} \idx{Weil curve}, and that the
Manin constant is equal to 1. The equation of the curve $E$ must be minimal
(use \kbd{ellminimalmodel} to get a minimal equation).

\syn{elltaniyama}{E, prec}, and the precision of the result is determined by
\kbd{prec}.

\subsecidx{elltors}$(E,\{\fl=0\})$: if $E$ is an elliptic curve \emph{defined
over $\Q$}, outputs the torsion subgroup of $E$ as a 3-component vector
\kbd{[t,v1,v2]}, where \kbd{t} is the order of the torsion group, \kbd{v1}
gives the structure of the torsion group as a product of cyclic groups
(sorted by decreasing order), and \kbd{v2} gives generators for these cyclic
groups. $E$ must be an \var{ell} as output by \kbd{ellinit}.

\bprog
?  E = ellinit([0,0,0,-1,0]);
?  elltors(E)
%1 = [4, [2, 2], [[0, 0], [1, 0]]]
@eprog
Here, the torsion subgroup is isomorphic to $\Z/2\Z \times \Z/2\Z$, with
generators $[0,0]$ and $[1,0]$.

If $\fl = 0$, use Doud's algorithm: bound torsion by computing $\#E(\F_p)$
for small primes of good reduction, then look for torsion points using
Weierstrass parametrization (and Mazur's classification).

If $\fl = 1$, use Lutz-Nagell (\emph{much} slower), $E$ is allowed to be an
\var{sell}.

\syn{elltors0}{E,flag}.

\subsecidx{ellwp}$(E,\{z=x\},\{\fl=0\})$:

Computes the value at $z$ of the Weierstrass $\wp$ function attached to the
elliptic curve $E$ as given by \kbd{ellinit} (alternatively, $E$ can be
given as a lattice $[\omega_1,\omega_2]$).

If $z$ is omitted or is a simple variable, computes the \emph{power series}
expansion in $z$ (starting $z^{-2}+O(z^2)$). The number of terms to an
\emph{even} power in the expansion is the default serieslength in \kbd{gp}, and the
second argument (C long integer) in library mode.

Optional \fl\ is (for now) only taken into account when $z$ is numeric, and
means 0: compute only $\wp(z)$, 1: compute $[\wp(z),\wp'(z)]$.

\syn{ellwp0}{E,z,\fl,\var{prec},\var{precdl}}. Also available is
\funs{weipell}{E,\var{precdl}} for the power series.

\subsecidx{ellzeta}$(E,z)$: value of the Weierstrass $\zeta$ function of the
lattice associated to $E$ as given by \kbd{ellinit} (alternatively, $E$ can
be given as a lattice $[\omega_1,\omega_2]$).

\syn{ellzeta}{E,z}.

\subsecidx{ellztopoint}$(E,z)$: $E$ being an \var{ell} as output by
\kbd{ellinit}, computes the coordinates $[x,y]$ on the curve $E$
corresponding to the complex number $z$. Hence this is the inverse function
of \kbd{ellpointtoz}. In other words, if the curve is put in Weierstrass
form, $[x,y]$ represents the \idx{Weierstrass $\wp$-function} and its
derivative. If $z$ is in the lattice defining $E$ over $\C$, the result is
the point at infinity $[0]$.

\syn{pointell}{E,z,\var{prec}}.

\section{Functions related to general number fields}

In this section can be found functions which are used almost exclusively for
working in general number fields. Other less specific functions can be found
in the next section on polynomials. Functions related to quadratic number
fields are found in section \secref{se:arithmetic} (Arithmetic functions).

\subsec{Number field structures}

Let $K = \Q[X] / (T)$ be a number field, where $T\in\Z[X]$ is monic, and let
$\Z_K$ be its ring of integers. Three basic number field structures can be
associated to $K$ in
GP:

\item $\tev{nf}$ denotes a number field, i.e.~a data structure output by
\tet{nfinit}. This contains the basic arithmetic data associated to the
number field: signature, maximal order (given by a basis \kbd{nf.zk}),
discriminant, defining polynomial $T$, etc.

\item $\tev{bnf}$ denotes a ``Buchmann's number field'', i.e.~a
data structure output by \tet{bnfinit}. This contains
$\var{nf}$ and the deeper invariants of the field: units $U(K)$, class group
$\Cl(K)$, as well as technical data required to solve the two associated
discrete logarithm problems.

\item $\tev{bnr}$ denotes a ``ray number field'', i.e.~a data structure
output by \kbd{bnrinit}, corresponding to the ray class group structure of
the field, for some modulus $f$. It contains a \var{bnf}, the modulus
$f$, the ray class group $\Cl_f(K)$ and data associated to
the discrete logarithm problem therein.

\subsec{Algebraic numbers and ideals}

\noindent An \tev{algebraic number} belonging to $K = \Q[X]/(T)$ is given as

\item a \typ{INT}, \typ{FRAC} or \typ{POL} (implicitly modulo $T$), or

\item a \typ{POLMOD} (modulo $T$), or

\item a \typ{COL}~\kbd{v} of dimension $N = [K:\Q]$, representing
the element in terms of the computed integral basis, as
\kbd{sum(i = 1, N,~v[i] * nf.zk[i])}. Note that a \typ{VEC}
will not be recognized.
\medskip

\noindent An \tev{ideal} is given in any of the following ways:

\item an algebraic number in one of the above forms, defining a principal ideal.

\item a prime ideal, i.e.~a 5-component vector in the format output by
\kbd{idealprimedec}.

\item a \typ{MAT}, square and in Hermite Normal Form (or at least
upper triangular with non-negative coefficients), whose columns represent a
basis of the ideal.

One may use \kbd{idealhnf} to convert an ideal to the last (preferred) format.
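
For instance, in $\Q(i) = \Q[X]/(X^2+1)$ the principal ideal generated by
$2 + i$ has norm $5$; its HNF should be the matrix \kbd{[5, 2; 0, 1]}:
\bprog
? nf = nfinit(x^2 + 1);
? idealhnf(nf, 2 + x)  \\ upper triangular, columns [5,0]~ and [2,1]~
@eprog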

\misctitle{Note.} Some routines accept non-square matrices, but using this
format is strongly discouraged. Nevertheless, their behaviour is as follows:
If strictly less than $N = [K:\Q]$ generators are given, it is assumed they
form a $\Z_K$-basis. If $N$ or more are given, a $\Z$-basis is assumed. If
exactly $N$ are given, it is further assumed the matrix is in HNF. If any of
these assumptions is not correct the behaviour of the routine is undefined.
\medskip

\item an \tev{idele} is a 2-component vector, the first being an ideal as
above, the second being a $R_1+R_2$-component row vector giving Archimedean
information, as complex numbers.
\smallskip

\subsec{Finite abelian groups}

A finite abelian group $G$ in user-readable format is given by its Smith
Normal Form as a pair $[h,d]$ or triple $[h,d,g]$.
Here $h$ is the cardinality of $G$, $(d_i)$ is the vector of elementary
divisors, and $(g_i)$ is a vector of generators. In short,
$G = \oplus_{i\leq n} (\Z/d_i\Z) g_i$, with $d_n \mid \dots \mid d_2 \mid d_1$
and $\prod d_i = h$. This information can also be retrieved as
$G.\kbd{no}$, $G.\kbd{cyc}$ and $G.\kbd{gen}$.

\item a \tev{character} on the abelian group
$\oplus (\Z/d_i\Z) g_i$
is given by a row vector $\chi = [a_1,\ldots,a_n]$ such that
$\chi(\prod g_i^{n_i}) = \exp(2i\pi\sum a_i n_i / d_i)$.

\item given such a structure, a \tev{subgroup} $H$ is input as a square
matrix whose columns express generators of $H$ in terms of the given generators $g_i$.
Note that the absolute value of the determinant of that matrix is equal to
the index $(G:H)$.

\subsec{Relative extensions}

When defining a relative extension, the base field $\var{nf}$ must be defined
by a variable having a lower priority (see \secref{se:priority}) than the
variable defining the extension. For example, you may use the variable name
$y$ to define the base field, and $x$ to define the relative extension.

\item $\tev{rnf}$ denotes a relative number field, i.e.~a data structure
output by \kbd{rnfinit}.

\item A \emph{relative matrix} is a matrix whose entries are
elements of a (fixed) number field $\var{nf}$, always expressed as column
vectors on the integral basis \kbd{\var{nf}.zk}. Hence it is a matrix of
vectors.

\item An \tev{ideal list} is a row vector of (fractional)
ideals of the number field $\var{nf}$.

\item A \tev{pseudo-matrix} is a pair $(A,I)$ where $A$ is a
relative matrix and $I$ an ideal list whose length is the same as the number
of columns of $A$. This pair is represented by a 2-component row vector.

\item The \tev{projective module} generated by a pseudo-matrix $(A,I)$ is
the sum $\sum_i {\Bbb a}_j A_j$ where the ${\Bbb a}_j$ are the ideals of $I$
and $A_j$ is the $j$-th column of $A$.

\item A pseudo-matrix $(A,I)$ is a \tev{pseudo-basis} of the module
it generates if $A$ is a square matrix with non-zero determinant and all the
ideals of $I$ are non-zero. We say that it is in Hermite Normal
Form\sidx{Hermite normal form} (HNF) if it is upper triangular and all the
elements of the diagonal are equal to 1.

\item The \emph{determinant} of a pseudo-basis $(A,I)$ is the ideal
equal to the product of the determinant of $A$ by all the ideals of $I$. The
determinant of a pseudo-matrix is the determinant of any pseudo-basis of the
module it generates.

\subsec{Class field theory}

A $\tev{modulus}$, in the sense of class field theory, is a divisor supported
on the non-complex places of $K$. In PARI terms, this means either an
ordinary ideal $I$ as above (no archimedean component), or a pair $[I,a]$,
where $a$ is a vector with $r_1$ $\{0,1\}$-components, corresponding to the
infinite part of the divisor. More precisely, the $i$-th component of $a$
corresponds to the real embedding associated to the $i$-th real root of
\kbd{K.roots}. (That ordering is not canonical, but well defined once a
defining polynomial for $K$ is chosen.) For instance, \kbd{[1, [1,1]]} is a
modulus for a real quadratic field, allowing ramification at any of the two
places at infinity.

A \tev{bid} or ``big ideal'' is a structure output by \kbd{idealstar}
needed to compute in $(\Z_K/I)^*$, where $I$ is a modulus in the above sense.
It is a finite abelian group as described above, supplemented by
technical data needed to solve discrete logarithm problems.

Finally we explain how to input ray number fields (or \var{bnr}), using class
field theory. These are defined by a triple $a1$, $a2$, $a3$, where the
defining set $[a1,a2,a3]$ can have any of the following forms: $[\var{bnr}]$,
$[\var{bnr},\var{subgroup}]$, $[\var{bnf},\var{module}]$,
$[\var{bnf},\var{module},\var{subgroup}]$.

\item $\var{bnf}$ is as output by \kbd{bnfinit}, where units are mandatory
unless the modulus is trivial; \var{bnr} is as output by \kbd{bnrinit}. This
is the ground field $K$.

\item \emph{module} is a modulus $\goth{f}$, as described above.

\item \emph{subgroup} a subgroup of the ray class group modulo $\goth{f}$ of
$K$. As described above, this is input as a square matrix expressing
generators of a subgroup of the ray class group \kbd{\var{bnr}.clgp} on the
given generators.

The corresponding \var{bnr} is the subfield of the ray class field of $K$
modulo $\goth{f}$, fixed by the given subgroup.
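
A minimal sketch of the input of such data in \kbd{gp} (here over the real
quadratic field $\Q(\sqrt{5})$, allowing ramification at $7$ and at both
places at infinity):
\bprog
bnf = bnfinit(x^2 - 5);
bnr = bnrinit(bnf, [7, [1,1]], 1); \\ modulus 7 times both real places
@eprog
\noindent The resulting \var{bnr} (with trivial subgroup understood) can then
be fed to functions such as \kbd{bnrdisc} or \kbd{bnrconductor} below.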

\subsec{General use}

All the functions which are specific to relative extensions, number fields,
Buchmann's number fields, Buchmann's number rays, share the prefix \kbd{rnf},
\kbd{nf}, \kbd{bnf}, \kbd{bnr} respectively. They take as first argument a
number field of that precise type, respectively output by \kbd{rnfinit},
\kbd{nfinit}, \kbd{bnfinit}, and \kbd{bnrinit}.

However, and even though it may not be specified in the descriptions of the
functions below, it is permissible, if the function expects a $\var{nf}$, to
use a $\var{bnf}$ instead, which contains much more information. On the other
hand, if the function requires a \kbd{bnf}, it will \emph{not} launch
\kbd{bnfinit} for you, which is a costly operation. Instead, it will give you
a specific error message. In short, the types
$$ \kbd{nf} \leq \kbd{bnf} \leq \kbd{bnr}$$
are ordered, each function requires a minimal type to work properly, but you
may always substitute a larger type.

The data types corresponding to the structures described above are rather
complicated. Thus, as we already have seen it with elliptic curves, GP
provides ``member functions'' to retrieve data from these structures (once
they have been initialized of course). The relevant types of number fields
are indicated between parentheses: \smallskip

\sidx{member functions}
\settabs\+xxxxxxx&(\var{bnr},x&\var{bnf},x&nf\hskip2pt&)x&: &\cr
\+\tet{bid}    &(\var{bnr},&&&)&: & bid ideal structure.\cr

\+\tet{bnf}    &(\var{bnr},& \var{bnf}&&)&: & Buchmann's number field.\cr

\+\tet{clgp}  &(\var{bnr},& \var{bnf}&&)&: & classgroup. This one admits the
following three subclasses:\cr

\+      \quad \tet{cyc} &&&&&: & \quad cyclic decomposition
 (SNF)\sidx{Smith normal form}.\cr

\+      \quad \kbd{gen}\sidx{gen (member function)} &&&&&: &
 \quad generators.\cr

\+      \quad \tet{no}  &&&&&: & \quad number of elements.\cr

\+\tet{diff}  &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the different ideal.\cr

\+\tet{codiff}&(\var{bnr},& \var{bnf},& \var{nf}&)&: & the codifferent
(inverse of the different in the ideal group).\cr

\+\tet{disc} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & discriminant.\cr

\+\tet{fu}   &(\var{bnr},& \var{bnf},& \var{nf}&)&: &
 \idx{fundamental units}.\cr

\+\tet{index}   &(\var{bnr},& \var{bnf},& \var{nf}&)&: &
 \idx{index} of the power order in the ring of integers.\cr

\+\tet{nf}   &(\var{bnr},& \var{bnf},& \var{nf}&)&: & number field.\cr

\+\tet{r1} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the number
of real embeddings.\cr

\+\tet{r2} &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the number
of pairs of complex embeddings.\cr

\+\tet{reg}  &(\var{bnr},& \var{bnf},&&)&: & regulator.\cr

\+\tet{roots}&(\var{bnr},& \var{bnf},& \var{nf}&)&: & roots of the
polynomial generating the field.\cr

\+\tet{t2}   &(\var{bnr},& \var{bnf},& \var{nf}&)&: & the T2 matrix (see
\kbd{nfinit}).\cr

\+\tet{tu}   &(\var{bnr},& \var{bnf},&&)&: & a generator for the torsion
units.\cr

\+\tet{tufu} &(\var{bnr},& \var{bnf},&&)&: &
 $[w,u_1,...,u_r]$, $(u_i)$ is a vector of
fundamental units, $w$ generates the torsion units.\cr

\+\tet{zk}   &(\var{bnr},& \var{bnf},& \var{nf}&)&: & integral basis, i.e.~a
$\Z$-basis of the maximal order.\cr

  For instance, assume that $\var{bnf} = \kbd{bnfinit}(\var{pol})$, for some
polynomial. Then \kbd{\var{bnf}.clgp} retrieves the class group, and
\kbd{\var{bnf}.clgp.no} the class number. If we had set $\var{bnf} =
\kbd{nfinit}(\var{pol})$, both would have output an error message. All these
functions are completely recursive, thus for instance
\kbd{\var{bnr}.bnf.nf.zk} will yield the maximal order of \var{bnr}, which
you could get directly with a simple \kbd{\var{bnr}.zk}.
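
For instance, since $\Q(\sqrt{-23})$ has class number $3$, one expects:
\bprog
? bnf = bnfinit(x^2 + 23);
? bnf.clgp.no
%2 = 3
? bnf.clgp.cyc
%3 = [3]
@eprog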

\subsec{Class group, units, and the GRH}\label{se:GRHbnf}

Some of the functions starting with \kbd{bnf} are implementations of the
sub-exponential algorithms for finding class and unit groups under \idx{GRH},
due to Hafner-McCurley, \idx{Buchmann} and Cohen-Diaz-Olivier. The general
call to the functions concerning class groups of general number fields
(i.e.~excluding \kbd{quadclassunit}) involves a polynomial $P$ and a
technical vector
$$\var{tech} = [c, c2, \var{nrpid} ],$$
where the parameters are to be understood as follows:

$P$ is the defining polynomial for the number field, which must be in
$\Z[X]$, irreducible and monic. In fact, if you supply a non-monic polynomial
at this point, \kbd{gp} issues a warning, then \emph{transforms your
polynomial} so that it becomes monic. The \kbd{nfinit} routine
will return a different result in this case: instead of \kbd{res}, you get a
vector \kbd{[res,Mod(a,Q)]}, where \kbd{Mod(a,Q) = Mod(X,P)} gives the change
of variables. In all other routines, the variable change is simply lost.

The numbers $c \leq c_2$ are positive real numbers which control the
execution time and the stack size. For a given $c$, set
$c_2 = c$ to get maximum speed. To get a rigorous result under \idx{GRH} you
must take $c_2\geq 12$ (or $c_2\geq 6$ if $P$ is quadratic). Reasonable values
for $c$ are between $0.1$ and $2$. The default is $c = c_2 = 0.3$.

$\var{nrpid}$ is the maximal number of small norm relations associated to each
ideal in the factor base. Set it to $0$ to disable the search for small norm
relations. Otherwise, reasonable values are between 4 and 20. The default is
4.

\misctitle{Warning.} Make sure you understand the above! By default, most of
the \kbd{bnf} routines depend on the correctness of a heuristic assumption
which is stronger than the GRH. In particular, any of the class number, class
group structure, class group generators, regulator and fundamental units may
be wrong, independently of each other. Any result computed from such a
\kbd{bnf} may be wrong. The only guarantee is that the units given generate a
subgroup of finite index in the full unit group. In practice, very few
counter-examples are known, requiring unlucky random seeds. No
counter-example has been reported for $c_2 = 0.5$ (which should be almost as
fast as $c_2 = 0.3$, and shall very probably become the default). If you use
$c_2 = 12$, then everything is correct assuming the GRH holds. You can
use \kbd{bnfcertify} to certify the computations unconditionally.

\misctitle{Remarks.}

Apart from the polynomial $P$, you do not need to supply the technical
parameters (under the library you still need to send at least an empty
vector, coded as \kbd{NULL}). However, should you choose to set some of them,
they \emph{must} be given in the requested order. For example, if you want to
specify a given value of 
\var{nrpid}, you must give some values as well for $c$
and $c_2$, and provide a vector $[c,c_2,\var{nrpid}]$.

Note also that you can use an $\var{nf}$ instead of $P$, which avoids
recomputing the integral basis and analogous quantities.

\smallskip
\subsecidx{bnfcertify}$(\var{bnf})$: $\var{bnf}$ being as output by
\kbd{bnfinit}, checks whether the result is correct, i.e.~whether it is
possible to remove the assumption of the Generalized Riemann
Hypothesis\sidx{GRH}. It is correct if and only if the answer is 1. If it is
incorrect, the program may output some error message, or loop indefinitely.
You can check its progress by increasing the debug level.
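
For instance, certifying the result for a small field such as $\Q(\sqrt{-23})$
should quickly return $1$:
\bprog
? bnf = bnfinit(x^2 + 23);
? bnfcertify(bnf)
%2 = 1
@eprog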

\syn{certifybuchall}{\var{bnf}}, and the result is a C long.

\subsecidx{bnfclassunit}$(P,\{\fl=0\},\{\var{tech}=[\,]\})$: \emph{this function
is DEPRECATED, use \kbd{bnfinit}}.

\idx{Buchmann}'s sub-exponential algorithm for computing the class group, the
regulator and a system of \idx{fundamental units} of the general algebraic
number field $K$ defined by the irreducible polynomial $P$ with integer
coefficients.

The result of this function is a vector $v$ with many components, which for
ease of presentation is in fact output as a one column matrix. It is
\emph{not} a $\var{bnf}$, you need \kbd{bnfinit} for that. First we describe
the default behaviour ($\fl=0$):

 $v[1]$ is equal to the polynomial $P$.

 $v[2]$ is the 2-component vector $[r1,r2]$, where $r1$ and $r2$ are as usual
the number of real and half the number of complex embeddings of the number
field $K$.

 $v[3]$ is the 2-component vector containing the field discriminant and the
index.

 $v[4]$ is an integral basis in Hermite normal form.

 $v[5]$ (\kbd{$v$.clgp}) is a 3-component vector containing the class number
(\kbd{$v$.clgp.no}), the structure of the class group as a product of cyclic
groups of order $n_i$ (\kbd{$v$.clgp.cyc}), and the corresponding generators
of the class group of respective orders $n_i$ (\kbd{$v$.clgp.gen}).

 $v[6]$ (\kbd{$v$.reg}) is the regulator computed to an accuracy which is the
maximum of an internally determined accuracy and of the default.

 $v[7]$ is deprecated, maintained for backward compatibility and always equal
to $1$.

 $v[8]$ (\kbd{$v$.tu}) a vector with 2 components, the first being the number
$w$ of roots of unity in $K$ and the second a primitive $w$-th root of unity
expressed as a polynomial.

 $v[9]$ (\kbd{$v$.fu}) is a system of fundamental units also expressed as
polynomials.

If $\fl=1$, and the precision happens to be insufficient for obtaining the
fundamental units, the internal precision is doubled and the computation
redone, until the exact results are obtained. Be warned that this can take a
very long time when the coefficients of the fundamental units on the integral
basis are very large, for example in large real quadratic fields.
For this case, there are alternate compact representations for algebraic
numbers, implemented in PARI but currently not available in GP.

If $\fl=2$, the fundamental units and roots of unity are not computed.
Hence the result has only 7 components, the first seven ones.

\syn{bnfclassunit0}{P,\fl,\var{tech},\var{prec}}.

\subsecidx{bnfclgp}$(P,\{\var{tech}=[\,]\})$: as \kbd{bnfinit}, but only
outputs \kbd{bnf.clgp}, i.e.~the class group.

\syn{classgrouponly}{P,\var{tech},\var{prec}}, where \var{tech}
is as described under \kbd{bnfinit}.

\subsecidx{bnfdecodemodule}$(\var{nf},m)$: if $m$ is a module as output in the
first component of an extension given by \kbd{bnrdisclist}, outputs the
true module.

\syn{decodemodule}{\var{nf},m}.

\subsecidx{bnfinit}$(P,\{\fl=0\},\{\var{tech}=[\,]\})$: initializes a
\var{bnf} structure. Used in programs such as \kbd{bnfisprincipal},
\kbd{bnfisunit} or \kbd{bnfnarrow}. By default, the results are conditional
on a heuristic strengthening of the GRH, see \ref{se:GRHbnf}. The result is a
10-component vector \var{bnf}.

This implements \idx{Buchmann}'s sub-exponential algorithm for computing the
class group, the regulator and a system of \idx{fundamental units} of the
general algebraic number field $K$ defined by the irreducible polynomial $P$
with integer coefficients.

If the precision becomes insufficient, \kbd{gp} outputs a warning
(\kbd{fundamental units too large, not given}) and does not strive to compute
the units by default ($\fl=0$).

   When $\fl=1$, we insist on finding the fundamental units exactly. Be
warned that this can take a very long time when the coefficients of the
fundamental units on the integral basis are very large. If the fundamental
units are simply too large to be represented in this form, an error message
is issued. They could be obtained using the so-called compact representation
of algebraic numbers as a formal product of algebraic integers. The latter is
implemented internally but not publicly accessible yet.

   When $\fl=2$, on the contrary, it is initially agreed that units are not
computed. Note that the resulting \var{bnf} will not be suitable for
\kbd{bnrinit}, and that this flag provides negligible time savings
compared to the default. In short, it is deprecated.

   When $\fl=3$, computes a very small version of \kbd{bnfinit}, a ``small
Buchmann's number field'' (or \var{sbnf} for short) which contains enough
information to recover the full $\var{bnf}$ vector very rapidly, but which is
much smaller and hence easy to store and print. It is supposed to be used in
conjunction with \kbd{bnfmake}.

$\var{tech}$ is a technical vector (empty by default, see \ref{se:GRHbnf}).
Careful use of this parameter may speed up your computations considerably.

\smallskip

The components of a \var{bnf} or \var{sbnf} are technical and never used by
the casual user. In fact: \emph{never access a component directly, always use
a proper member function.} However, for the sake of completeness and internal
documentation, their description is as follows. We use the notations
explained in the book by H. Cohen, \emph{A Course in Computational Algebraic
Number Theory}, Graduate Texts in Maths \key{138}, Springer-Verlag, 1993,
Section 6.5, and subsection 6.5.5 in particular.

$\var{bnf}[1]$ contains the matrix $W$, i.e.~the matrix in Hermite normal
form giving relations for the class group on prime ideal generators
$(\wp_i)_{1\le i\le r}$.

$\var{bnf}[2]$ contains the matrix $B$, i.e.~the matrix containing the
expressions of the prime ideal factorbase in terms of the $\wp_i$. It is an
$r\times c$ matrix.

$\var{bnf}[3]$ contains the complex logarithmic embeddings of the system of
fundamental units which has been found. It is an $(r_1+r_2)\times(r_1+r_2-1)$
matrix.

$\var{bnf}[4]$ contains the matrix $M''_C$ of Archimedean components of the
relations of the matrix $(W|B)$.

$\var{bnf}[5]$ contains the prime factor base, i.e.~the list of prime
ideals used in finding the relations.

$\var{bnf}[6]$ used to contain a permutation of the prime factor base, but
has been obsoleted. It contains a dummy $0$.

$\var{bnf}[7]$ or \kbd{\var{bnf}.nf} is equal to the number field data
$\var{nf}$ as would be given by \kbd{nfinit}.

$\var{bnf}[8]$ is a vector containing the classgroup \kbd{\var{bnf}.clgp}
as a finite abelian group, the regulator \kbd{\var{bnf}.reg}, a $1$ (used to
contain an obsolete ``check number''), the number of roots of unity and a
generator \kbd{\var{bnf}.tu}, the fundamental units \kbd{\var{bnf}.fu}.

$\var{bnf}[9]$ is a 3-element row vector used in \tet{bnfisprincipal} only
and obtained as follows. Let $D = U W V$ obtained by applying the
\idx{Smith normal form} algorithm to the matrix $W$ (= $\var{bnf}[1]$) and
let $U_r$ be the reduction of $U$ modulo $D$. The first elements of the
factorbase are given (in terms of \kbd{bnf.gen}) by the columns of $U_r$,
with Archimedean component $g_a$; let also $GD_a$ be the Archimedean
components of the generators of the (principal) ideals defined by the
\kbd{bnf.gen[i]\pow bnf.cyc[i]}. Then $\var{bnf}[9]=[U_r, g_a, GD_a]$.

$\var{bnf}[10]$ is by default unused and set equal to 0. This field is used
to store further information about the field as it becomes available, which
is rarely needed, hence would be too expensive to compute during the initial
\kbd{bnfinit} call. For instance, the generators of the principal ideals
\kbd{bnf.gen[i]\pow bnf.cyc[i]} (during a call to \tet{bnrisprincipal}), or
those corresponding to the relations in $W$ and $B$ (when the \kbd{bnf}
internal precision needs to be increased). \smallskip

An \var{sbnf} is a 12 component vector $v$, as follows. Let $\var{bnf}$ be
the result of a full \kbd{bnfinit}, complete with units. Then $v[1]$ is the
polynomial $P$, $v[2]$ is the number of real embeddings $r_1$, $v[3]$ is the
field discriminant, $v[4]$ is the integral basis, $v[5]$ is the list of roots
as in the sixth component of \kbd{nfinit}, $v[6]$ is the matrix $MD$ of
\kbd{nfinit} giving a $\Z$-basis of the different, $v[7]$ is the matrix
$\kbd{W} = \var{bnf}[1]$, $v[8]$ is the matrix $\kbd{matalpha}=\var{bnf}[2]$,
$v[9]$ is the prime ideal factor base $\var{bnf}[5]$ coded in a compact way,
and ordered according to the permutation $\var{bnf}[6]$, $v[10]$ is the
2-component vector giving the number of roots of unity and a generator,
expressed on the integral basis, $v[11]$ is the list of fundamental units,
expressed on the integral basis, $v[12]$ is a vector containing the algebraic
numbers alpha corresponding to the columns of the matrix \kbd{matalpha},
expressed on the integral basis.

   Note that all the components are exact (integral or rational), except for
the roots in $v[5]$. Note also that member functions will \emph{not} work on
\var{sbnf}, you have to use \kbd{bnfmake} explicitly first.

\syn{bnfinit0}{P,\fl,\var{tech},\var{prec}}.

\subsecidx{bnfisintnorm}$(\var{bnf},x)$: computes a complete system of
solutions (modulo units of positive norm) of the absolute norm equation
$\Norm(a)=x$,
where $a$ is an integer in $\var{bnf}$. If $\var{bnf}$ has not been certified,
the correctness of the result depends on the validity of \idx{GRH}.

See also \tet{bnfisnorm}.

\syn{bnfisintnorm}{\var{bnf},x}.

\subsecidx{bnfisnorm}$(\var{bnf},x,\{\fl=1\})$: tries to tell whether the
rational number $x$ is the norm of some element $y$ in $\var{bnf}$. Returns a
vector $[a,b]$ where $x=\Norm(a)\cdot b$. Looks for a solution which is an
$S$-unit, with $S$ a certain set of prime ideals containing (among others) all
primes dividing $x$. If $\var{bnf}$ is known to be \idx{Galois}, set $\fl=0$
(in this case, $x$ is a norm iff $b=1$). If $\fl$ is non-zero, the program adds
to $S$ the following prime ideals, depending on the sign of $\fl$: if $\fl>0$,
the ideals of norm less than $\fl$; if $\fl<0$, the ideals dividing $\fl$.

Assuming \idx{GRH}, the answer is guaranteed (i.e.~$x$ is a norm iff $b=1$),
if $S$ contains all primes less than $12\log(\disc(\var{Bnf}))^2$, where
$\var{Bnf}$ is the Galois closure of $\var{bnf}$.

See also \tet{bnfisintnorm}.

\syn{bnfisnorm}{\var{bnf},x,\fl,\var{prec}}, where $\fl$ and
$\var{prec}$ are \kbd{long}s.

\subsecidx{bnfissunit}$(\var{bnf},\var{sfu},x)$: $\var{bnf}$ being output by
\kbd{bnfinit}, \var{sfu} by \kbd{bnfsunit}, gives the column vector of
exponents of $x$ on the fundamental $S$-units and the roots of unity.
If $x$ is not a unit, outputs an empty vector.

\syn{bnfissunit}{\var{bnf},\var{sfu},x}.

\subsecidx{bnfisprincipal}$(\var{bnf},x,\{\fl=1\})$: $\var{bnf}$ being the
number field data output by \kbd{bnfinit}, and $x$ being either a $\Z$-basis
of an ideal in the number field (not necessarily in HNF) or a prime ideal in
the format output by the function \kbd{idealprimedec}, this function tests
whether the ideal is principal or not. The result is more complete than a
simple true/false answer: it gives a row vector $[v_1,v_2]$, where

 $v_1$ is the vector of components $c_i$ of the class of the ideal $x$ in the
class group, expressed on the generators $g_i$ given by \kbd{bnfinit}
(specifically \kbd{\var{bnf}.gen}). The $c_i$ are chosen so that $0\le c_i<n_i$
where $n_i$ is the order of $g_i$ (the vector of $n_i$ being \kbd{\var{bnf}.cyc}).

 $v_2$ gives on the integral basis the components of $\alpha$ such that
$x=\alpha\prod_ig_i^{c_i}$. In particular, $x$ is principal if and only if
$v_1$ is equal to the zero vector. In the latter case, $x = \alpha\Z_K$ where
$\alpha$ is given by $v_2$. Note that if $\alpha$ is too large to be given, a
warning message will be printed and $v_2$ will be set equal to the empty
vector.

If $\fl=0$, outputs only $v_1$, which is much easier to compute.

If $\fl=2$, does as if $\fl$ were $0$, but doubles the precision until a
result is obtained.

If $\fl=3$, as in the default behaviour ($\fl=1$), but doubles the precision
until a result is obtained.

The user is warned that these last two settings may induce \emph{very} lengthy
computations.
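
For instance, the class group of $\Q(\sqrt{-5})$ is cyclic of order $2$ and
the prime ideal above $2$ is not principal, so its class should be the
non-trivial element:
\bprog
? bnf = bnfinit(x^2 + 5);
? P = idealprimedec(bnf, 2)[1];
? bnfisprincipal(bnf, P, 0)
%3 = [1]~
@eprog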

\syn{isprincipalall}{\var{bnf},x,\fl}.

\subsecidx{bnfisunit}$(\var{bnf},x)$: $\var{bnf}$ being the number field data
output by \kbd{bnfinit} and $x$ being an algebraic number (type integer,
rational or polmod), this outputs the decomposition of $x$ on the fundamental
units and the roots of unity if $x$ is a unit, the empty vector otherwise.
More precisely, if $u_1$,\dots,$u_r$ are the fundamental units, and $\zeta$
is the generator of the group of roots of unity (\kbd{bnf.tu}), the output is
a vector $[x_1,\dots,x_r,x_{r+1}]$ such that $x=u_1^{x_1}\cdots
u_r^{x_r}\cdot\zeta^{x_{r+1}}$. The $x_i$ are integers for $i\le r$, and
$x_{r+1}$ is an integer modulo the order of $\zeta$.
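
For instance, if $u$ is the stored fundamental unit of a real quadratic field,
its cube should decompose with exponent $3$ and trivial torsion part:
\bprog
? bnf = bnfinit(x^2 - 2);
? u = bnf.fu[1];
? bnfisunit(bnf, u^3)
%3 = [3, Mod(0, 2)]~
@eprog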

\syn{isunit}{\var{bnf},x}.

\subsecidx{bnfmake}$(\var{sbnf})$: \var{sbnf} being a ``small $\var{bnf}$''
as output by \kbd{bnfinit}$(x,3)$, computes the complete \kbd{bnfinit}
information. The result is \emph{not} identical to what \kbd{bnfinit} would
yield, but is functionally identical. The execution time is very small
compared to a complete \kbd{bnfinit}. Note that if the default precision in
\kbd{gp} (or $\var{prec}$ in library mode) is greater than the precision of the
roots $\var{sbnf}[5]$, these are recomputed so as to get a result with
greater accuracy.

Note that the member functions are \emph{not} available for \var{sbnf}, you
have to use \kbd{bnfmake} explicitly first.
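
A minimal sketch of the intended workflow:
\bprog
? sbnf = bnfinit(x^2 + 23, 3);  \\ small version, easy to store
? bnf = bnfmake(sbnf);          \\ recover a full bnf
? bnf.clgp.no
%3 = 3
@eprog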

\syn{makebigbnf}{\var{sbnf},\var{prec}}, where $\var{prec}$ is a
C long integer.

\subsecidx{bnfnarrow}$(\var{bnf})$: $\var{bnf}$ being as output by
\kbd{bnfinit}, computes the narrow class group of $\var{bnf}$. The output is
a 3-component row vector $v$ analogous to the corresponding class group
component \kbd{\var{bnf}.clgp} (\kbd{\var{bnf}[8][1]}): the first component
is the narrow class number \kbd{$v$.no}, the second component is a vector
containing the SNF\sidx{Smith normal form} cyclic components \kbd{$v$.cyc} of
the narrow class group, and the third is a vector giving the generators of
the corresponding \kbd{$v$.gen} cyclic groups. Note that this function is a
special case of \kbd{bnrinit}.

\syn{buchnarrow}{\var{bnf}}.

\subsecidx{bnfsignunit}$(\var{bnf})$: $\var{bnf}$ being as output by
\kbd{bnfinit}, this computes an $r_1\times(r_1+r_2-1)$ matrix having $\pm1$
components, giving the signs of the real embeddings of the fundamental units.
The following functions compute generators for the totally positive units:

\bprog
/* exponents of totally positive units generators on bnf.tufu */
tpuexpo(bnf)=
{ local(S,d,K);

  S = bnfsignunit(bnf); d = matsize(S);
  S = matrix(d[1],d[2], i,j, if (S[i,j] < 0, 1,0));
  S = concat(vectorv(d[1],i,1), S);   \\ add sign(-1)
  K = lift(matker(S * Mod(1,2)));
  if (K, mathnfmodid(K, 2), 2*matid(d[1]))
}

/* totally positive units */
tpu(bnf)=
{ local(vu = bnf.tufu, ex = tpuexpo(bnf));

  vector(#ex-1, i, factorback(vu, ex[,i+1]))  \\ ex[,1] is 1
}
@eprog

\syn{signunits}{\var{bnf}}.

\subsecidx{bnfreg}$(\var{bnf})$: $\var{bnf}$ being as output by
\kbd{bnfinit}, computes its regulator.

\syn{regulator}{\var{bnf},\var{tech},\var{prec}}, where \var{tech} is as in
\kbd{bnfinit}.

\subsecidx{bnfsunit}$(\var{bnf},S)$: computes the fundamental $S$-units of the
number field $\var{bnf}$ (output by \kbd{bnfinit}), where $S$ is a list of
prime ideals (output by \kbd{idealprimedec}). The output is a vector $v$ with
6 components.

$v[1]$ gives a minimal system of (integral) generators of the $S$-unit group
modulo the unit group.

$v[2]$ contains technical data needed by \kbd{bnfissunit}.

$v[3]$ is an empty vector (used to give the logarithmic embeddings of the
generators in $v[1]$ in version 2.0.16).

$v[4]$ is the $S$-regulator (this is the product of the regulator, the
determinant of $v[2]$ and the natural logarithms of the norms of the ideals
in $S$).

$v[5]$ gives the $S$-class group structure, in the usual format
(a row vector whose three components give in order the $S$-class number,
the cyclic components and the generators).

$v[6]$ is a copy of $S$.

\syn{bnfsunit}{\var{bnf},S,\var{prec}}.

\subsecidx{bnfunit}$(\var{bnf})$: $\var{bnf}$ being as output by
\kbd{bnfinit}, outputs the vector of fundamental units of the number field.

This function is mostly useless, since it will only succeed if
\var{bnf} contains the units, in which case \kbd{bnf.fu} is recommended
instead, or \var{bnf} was produced with \kbd{bnfinit(,,2)}, which is itself
deprecated.

\syn{buchfu}{\var{bnf}}.

\subsecidx{bnrL1}$(\var{bnr},\{\var{subgroup}\},\{\fl=0\})$: \var{bnr} being
the number field data which is output by \kbd{bnrinit(,,1)} and
\var{subgroup} being a square matrix defining a congruence subgroup of the
ray class group corresponding to \var{bnr} (the trivial congruence subgroup
if omitted), returns for each \idx{character} $\chi$ of the ray class group
which is trivial on this subgroup, the value at $s = 1$ (or $s = 0$) of the
abelian $L$-function associated to $\chi$. For the value at $s = 0$, the
function returns in fact for each character $\chi$ a vector $[r_\chi ,
c_\chi]$ where $r_\chi$ is the order of $L(s, \chi)$ at $s = 0$ and $c_\chi$
the first non-zero term in the expansion of $L(s, \chi)$ at $s = 0$; in other
words
%
$$L(s, \chi) = c_\chi \cdot s^{r_\chi} + O(s^{r_\chi + 1})$$
%
\noindent near $0$. \fl\ is optional, default value is 0; its binary digits
mean 1: compute at $s = 1$ if set to 1 or $s = 0$ if set to 0, 2: compute the
primitive $L$-functions associated to $\chi$ if set to 0 or the $L$-function
with Euler factors at prime ideals dividing the modulus of \var{bnr} removed
if set to 1 (this is the so-called $L_S(s, \chi)$ function where $S$ is the
set of infinite places of the number field together with the finite prime
ideals dividing the modulus of \var{bnr}, see the example below), 3: returns
also the character. Example:
\bprog
bnf = bnfinit(x^2 - 229);
bnr = bnrinit(bnf,1,1);
bnrL1(bnr)
@eprog\noindent
returns the order and the first non-zero term of the abelian
$L$-functions $L(s, \chi)$ at $s = 0$ where $\chi$ runs through the
characters of the class group of $\Q(\sqrt{229})$. Then
\bprog
bnr2 = bnrinit(bnf,2,1);
bnrL1(bnr2,,2)
@eprog\noindent
returns the order and the first non-zero terms of the abelian
$L$-functions $L_S(s, \chi)$ at $s = 0$ where $\chi$ runs through the
characters of the class group of $\Q(\sqrt{229})$ and $S$ is the set
of infinite places of $\Q(\sqrt{229})$ together with the finite prime
$2$. Note that the ray class group modulo $2$ is in fact the class
group, so \kbd{bnrL1(bnr2,0)} returns exactly the same answer as
\kbd{bnrL1(bnr,0)}.

\syn{bnrL1}{\var{bnr},\var{subgroup},\fl,\var{prec}}, where an omitted
\var{subgroup} is coded as \kbd{NULL}.

\subsecidx{bnrclass}$(\var{bnf},\var{ideal},\{\fl=0\})$: \emph{this function
is DEPRECATED, use \kbd{bnrinit}}.

$\var{bnf}$ being as output by \kbd{bnfinit} (the units are mandatory unless
the ideal is trivial), and \var{ideal} being a modulus, computes the ray
class group of the number field for the modulus \var{ideal}, as a
finite abelian group.

\syn{bnrclass0}{\var{bnf},\var{ideal},\fl}.

\subsecidx{bnrclassno}$(\var{bnf},I)$: $\var{bnf}$ being as output by
\kbd{bnfinit} (units are mandatory unless the ideal is trivial), and $I$
being a modulus, computes the ray class number of the number field for the
modulus $I$. This is faster than \kbd{bnrinit} and should be used if only the
ray class number is desired. See \tet{bnrclassnolist} if you need ray class
numbers for all moduli less than some bound.

\syn{bnrclassno}{\var{bnf},I}.

\subsecidx{bnrclassnolist}$(\var{bnf},\var{list})$: $\var{bnf}$ being as
output by \kbd{bnfinit}, and \var{list} being a list of moduli (with units) as
output by \kbd{ideallist} or \kbd{ideallistarch}, outputs the list of the
class numbers of the corresponding ray class groups. To compute a single
class number, \tet{bnrclassno} is more efficient.

\bprog
? bnf = bnfinit(x^2 - 2);
? L = ideallist(bnf, 100, 2);
? H = bnrclassnolist(bnf, L);
? H[98]
%4 = [1, 3, 1]
? l = L[1][98]; ids = vector(#l, i, l[i].mod[1])
%5 = [[98, 88; 0, 1], [14, 0; 0, 7], [98, 10; 0, 1]]
@eprog
The weird \kbd{l[i].mod[1]} is the first component of \kbd{l[i].mod}, i.e.
the finite part of the conductor. (This is cosmetic: since by construction
the archimedean part is trivial, we do not want to see it.) This tells us that
the ray class groups modulo the ideals of norm 98 (printed as \kbd{\%5}) have
respective orders $1$, $3$ and $1$. Indeed, we may check directly:
\bprog
? bnrclassno(bnf, ids[2])
%6 = 3
@eprog

\syn{bnrclassnolist}{\var{bnf},\var{list}}.

\subsecidx{bnrconductor}$(a_1,\{a_2\},\{a_3\}, \{\fl=0\})$: conductor $f$ of
the subfield of a ray class field as defined by $[a_1,a_2,a_3]$ (see
\kbd{bnr} at the beginning of this section).

  If $\fl = 0$, returns $f$.

  If $\fl = 1$, returns $[f, Cl_f, H]$, where $Cl_f$ is the ray class group
modulo $f$, as a finite abelian group; finally $H$ is the subgroup of $Cl_f$
defining the extension.

  If $\fl = 2$, returns $[f, \var{bnr}(f), H]$, as above except $Cl_f$ is
replaced by a \kbd{bnr} structure, as output by $\tet{bnrinit}(,f,1)$.

\syn{conductor}{\var{bnr}, \var{subgroup}, \fl}, where an omitted subgroup
(trivial subgroup, i.e.~ray class field) is input as \kbd{NULL}, and $\fl$ is
a C long.

\subsecidx{bnrconductorofchar}$(\var{bnr},\var{chi})$: \var{bnr} being a big
ray number field as output by \kbd{bnrinit}, and \var{chi} being a row vector
representing a \idx{character} as expressed on the generators of the ray
class group, gives the conductor of this character as a modulus.
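
As a minimal sketch (the trivial character is used purely for illustration;
its conductor is of course the trivial modulus):
\bprog
bnf = bnfinit(x^2 - 5);
bnr = bnrinit(bnf, 12, 1);
chi = vector(#bnr.cyc, i, 0);  \\ the trivial character
bnrconductorofchar(bnr, chi)
@eprog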

\syn{bnrconductorofchar}{\var{bnr},\var{chi}}.

\subsecidx{bnrdisc}$(a1,\{a2\},\{a3\},\{\fl=0\})$: $a1$, $a2$, $a3$
defining a big ray number field $L$ over a ground field $K$ (see \kbd{bnr}
at the beginning of this section for the
meaning of $a1$, $a2$, $a3$), outputs a 3-component row vector $[N,R_1,D]$,
where $N$ is the (absolute) degree of $L$, $R_1$ the number of real places of
$L$, and $D$ the discriminant of $L/\Q$, including sign (if $\fl=0$).

   If $\fl=1$, as above but outputs relative data. $N$ is now the degree of
$L/K$, $R_1$ is the number of real places of $K$ unramified in $L$ (so that
the number of real places of $L$ is equal to $R_1$ times the relative degree
$N$), and $D$ is the relative discriminant ideal of $L/K$.

   If $\fl=2$, as the default case, except that if the modulus is not the
exact conductor corresponding to $L$, no data is computed and the result
is $0$.

   If $\fl=3$, as case 2, but outputs relative data.

\syn{bnrdisc0}{a1,a2,a3,\fl}.

\subsecidx{bnrdisclist}$(\var{bnf},\var{bound},\{\var{arch}\})$:
$\var{bnf}$ being as output by \kbd{bnfinit} (with units), computes a list of
discriminants of Abelian extensions of the number field by increasing modulus
norm up to bound \var{bound}. The ramified Archimedean places are given by
\var{arch}; all possible values are taken if \var{arch} is omitted.

The alternative syntax $\kbd{bnrdisclist}(\var{bnf},\var{list})$ is
supported, where \var{list} is as output by \kbd{ideallist} or
\kbd{ideallistarch} (with units), in which case \var{arch} is disregarded.

The output $v$ is a vector of vectors, where $v[i][j]$ is understood to be in
fact $V[2^{15}(i-1)+j]$ of a unique big vector $V$. (This awkward scheme
allows for larger vectors than could be otherwise represented.)

$V[k]$ is itself a vector $W$, whose length is the number of ideals of norm
$k$. We consider first the case where \var{arch} was specified. Each
component of $W$ corresponds to an ideal $m$ of norm $k$, and
gives invariants associated to the ray class field $L$ of $\var{bnf}$ of
conductor $[m, \var{arch}]$. Namely, each contains a vector $[m,d,r,D]$ with
the following meaning: $m$ is the prime ideal factorization of the modulus,
$d = [L:\Q]$ is the absolute degree of $L$, $r$ is the number of real places
of $L$, and $D$ is the factorization of its absolute discriminant. We set $d
= r = D = 0$ if $m$ is not the finite part of a conductor.

If \var{arch} was omitted, all $t = 2^{r_1}$ possible values are taken and a
component of $W$ has the form $[m, [[d_1,r_1,D_1], \dots, [d_t,r_t,D_t]]]$,
where $m$ is the finite part of the conductor as above, and
$[d_i,r_i,D_i]$ are the invariants of the ray class field of conductor
$[m,v_i]$, where $v_i$ is the $i$-th archimedean component, ordered by
inverse lexicographic order; so $v_1 = [0,\dots,0]$, $v_2 = [1,0,\dots,0]$,
etc. Again, we set $d_i = r_i = D_i = 0$ if $[m,v_i]$ is not a conductor.

Finally, each prime ideal $pr = [p,\alpha,e,f,\beta]$ in the prime
factorization $m$ is coded as the integer $p\cdot n^2+(f-1)\cdot n+(j-1)$,
where $n$ is the degree of the base field and $j$ is such that

\kbd{pr = idealprimedec(\var{nf},p)[j]}.

\noindent $m$ can be decoded using \tet{bnfdecodemodule}.


Note that to compute such data for a single field, either \tet{bnrclassno}
or \tet{bnrdisc} is more efficient.

\syn{bnrdisclist0}{bnf,\var{bound},\var{arch}}.

\subsecidx{bnrinit}$(\var{bnf},f,\{\fl=0\})$: $\var{bnf}$ is as
output by \kbd{bnfinit}, $f$ is a modulus, initializes data linked to
the ray class group structure corresponding to this modulus, a so-called
\var{bnr} structure. The following member functions are available
on the result: \kbd{.bnf} is the underlying \var{bnf},
\kbd{.mod} the modulus, \kbd{.bid} the \var{bid} structure associated to the
modulus; finally, \kbd{.clgp}, \kbd{.no}, \kbd{.cyc}, \kbd{.gen} refer to the
ray class group (as a finite abelian group), its cardinality, its elementary
divisors, its generators.

The last group of functions are different from the members of the underlying
\var{bnf}, which refer to the class group; use \kbd{\var{bnr}.bnf.\var{xxx}}
to access these, e.g.~\kbd{\var{bnr}.bnf.cyc} to get the cyclic decomposition
of the class group.

They are also different from the members of the underlying \var{bid}, which
refer to $(\O_K/f)^*$; use \kbd{\var{bnr}.bid.\var{xxx}} to access these,
e.g.~\kbd{\var{bnr}.bid.no} to get $\phi(f)$.

If $\fl=0$ (default), the generators of the ray class group are not computed,
which saves time. Hence \kbd{\var{bnr}.gen} would produce an error.

If $\fl=1$, as the default, except that generators are computed.

\syn{bnrinit0}{\var{bnf},f,\fl}.

\subsecidx{bnrisconductor}$(a1,\{a2\},\{a3\})$: $a1$, $a2$, $a3$ represent
an extension of the base field, given by class field theory for some modulus
encoded in the parameters. Outputs 1 if this modulus is the conductor, and 0
otherwise. This is slightly faster than \kbd{bnrconductor}.

\syn{bnrisconductor}{a1,a2,a3} and the result is a \kbd{long}.

\subsecidx{bnrisprincipal}$(\var{bnr},x,\{\fl=1\})$: \var{bnr} being the
number field data output by \kbd{bnrinit}$(,,1)$ and $x$ being an
ideal in any form, outputs the components of $x$ on the ray class group
generators in a way similar to \kbd{bnfisprincipal}. That is, a 2-component
vector $v$ where $v[1]$ is the vector of components of $x$ on the ray class
group generators, and $v[2]$ gives, on the integral basis, an element $\alpha$
such that $x=\alpha\prod_ig_i^{x_i}$.

If $\fl=0$, outputs only $v[1]$. In that case, \var{bnr} need not contain the
ray class group generators, i.e.~it may be created with \kbd{bnrinit}$(,,0)$.

\syn{bnrisprincipal}{\var{bnr},x,\fl}.

\subsecidx{bnrrootnumber}$(\var{bnr},\var{chi},\{\fl=0\})$:
if $\chi=\var{chi}$ is a (not necessarily primitive)
\idx{character} over \var{bnr}, let
$L(s,\chi) = \sum_{id} \chi(id) N(id)^{-s}$ be the associated
\idx{Artin L-function}. Returns the so-called \idx{Artin root number}, i.e.~the
complex number $W(\chi)$ of modulus 1 such that
%
$$\Lambda(1-s,\chi) = W(\chi) \Lambda(s,\overline{\chi})$$
%
\noindent where $\Lambda(s,\chi) = A(\chi)^{s/2}\gamma_\chi(s) L(s,\chi)$ is
the enlarged $L$-function associated to $L(s,\chi)$.

The generators of the ray class group are needed, and you can set $\fl=1$ if
the character is known to be primitive. Example:

\bprog
bnf = bnfinit(x^2 - 145);
bnr = bnrinit(bnf,7,1);
bnrrootnumber(bnr, [5])
@eprog\noindent
returns the root number of the character $\chi$ of $\Cl_7(\Q(\sqrt{145}))$
such that $\chi(g) = \zeta^5$, where $g$ is the generator of the ray class
group and $\zeta = e^{2i\pi/N}$ where $N$ is the order of $g$ ($N=12$ as
\kbd{bnr.cyc} readily tells us).

\syn{bnrrootnumber}{\var{bnr},\var{chi},\fl}.

\subsecidx{bnrstark}${(\var{bnr},\{\var{subgroup}\})}$: \var{bnr}
being as output by \kbd{bnrinit(,,1)}, finds a relative equation for the
class field corresponding to the modulus in \var{bnr} and the given
congruence subgroup (as usual, omit $\var{subgroup}$ if you want the whole
ray class group).

The routine uses \idx{Stark units} and needs to find a suitable auxiliary
conductor, which may not exist when the class field is not cyclic over the
base. In this case \kbd{bnrstark} is allowed to return a vector of
polynomials defining \emph{independent} relative extensions, whose compositum
is the requested class field. It was decided that it was more useful
to keep the extra information thus made available, hence the user has to take
the compositum herself.

The main variable of \var{bnr} must not be $x$, and the ground field and the
class field must be totally real. When the base field is $\Q$, the vastly
simpler \tet{galoissubcyclo} is used instead. Here is an example:
\bprog
bnf = bnfinit(y^2 - 3);
bnr = bnrinit(bnf, 5, 1);
pol = bnrstark(bnr)
@eprog\noindent
returns the ray class field of $\Q(\sqrt{3})$ modulo $5$. Usually, one wants
to apply to the result one of
\bprog
rnfpolredabs(bnf, pol, 16)     \\@com compute a reduced relative polynomial
rnfpolredabs(bnf, pol, 16 + 2) \\@com compute a reduced absolute polynomial
@eprog

\syn{bnrstark}{\var{bnr},\var{subgroup}}, where an omitted \var{subgroup}
is coded by \kbd{NULL}.

\subsecidx{dirzetak}$(\var{nf},b)$: gives as a vector the first $b$
coefficients of the \idx{Dedekind} zeta function of the number field $\var{nf}$
considered as a \idx{Dirichlet series}.
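
The $n$-th coefficient is the number of integral ideals of norm $n$. A short
illustration (the bound $10$ is arbitrary; e.g.~there are two ideals of norm
$5$ in $\Z[i]$):
\bprog
nf = nfinit(x^2 + 1);
dirzetak(nf, 10)
@eprog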

\syn{dirzetak}{\var{nf},b}.

\subsecidx{factornf}$(x,t)$: factorization of the univariate polynomial $x$
over the number field defined by the (univariate) polynomial $t$. $x$ may
have coefficients in $\Q$ or in the number field. The algorithm reduces to
factorization over $\Q$ (\idx{Trager}'s trick). The direct approach of
\tet{nffactor}, which uses \idx{van Hoeij}'s method in a relative setting, is
in general faster.

The main variable of $t$ must be of \emph{lower} priority than that of $x$
(see \secref{se:priority}). However if non-rational number field elements
occur (as polmods or polynomials) as coefficients of $x$, the variable of
these polmods \emph{must} be the same as the main variable of $t$. For
example

\bprog
? factornf(x^2 + Mod(y, y^2+1), y^2+1);
? factornf(x^2 + y, y^2+1); \\@com these two are OK
? factornf(x^2 + Mod(z,z^2+1), y^2+1)
  *** factornf: inconsistent data in rnf function.
? factornf(x^2 + z, y^2+1)
  *** factornf: incorrect variable in rnf function.
@eprog

\syn{polfnf}{x,t}.

\subsecidx{galoisexport}$(\var{gal},\{\fl=0\})$:
\var{gal} being a Galois field as output by \tet{galoisinit},
exports the underlying permutation group as a string suitable
for (no flags or $\fl=0$) GAP or ($\fl=1$) Magma. The following example
computes the index of the underlying abstract group in the GAP library:
\bprog
? G = galoisinit(x^6+108);
? s = galoisexport(G)
%2 = "Group((1, 2, 3)(4, 5, 6), (1, 4)(2, 6)(3, 5))"
? extern("echo \"IdGroup("s");\" | gap -q")
%3 = [6, 1]
? galoisidentify(G)
%4 = [6, 1]
@eprog

This command also accepts subgroups returned by \kbd{galoissubgroups}.

\syn{galoisexport}{\var{gal},\fl}.

\subsecidx{galoisfixedfield}$(\var{gal},\var{perm},\{\fl=0\},\{v=y\})$:
\var{gal} being a Galois field as output by \tet{galoisinit} and
\var{perm} an element of $\var{gal}.group$ or a vector of such elements,
computes the fixed field of \var{gal} by the automorphism defined by the
permutations \var{perm} of the roots $\var{gal}.roots$. The defining
polynomial $P$ returned below is guaranteed to be squarefree modulo
$\var{gal}.p$.

If no flags or $\fl=0$, output format is the same as for \tet{nfsubfields},
returning $[P,x]$ such that $P$ is a polynomial defining the fixed field, and
$x$ is a root of $P$ expressed as a polmod in $\var{gal}.pol$.

If $\fl=1$ return only the polynomial $P$.

If $\fl=2$ return $[P,x,F]$ where $P$ and $x$ are as above and $F$ is the
factorization of $\var{gal}.pol$ over the field defined by $P$, where
variable $v$ ($y$ by default) stands for a root of $P$. The priority of $v$
must be less than the priority of the variable of $\var{gal}.pol$ (see
\secref{se:priority}). Example:

\bprog
? G = galoisinit(x^4+1);
? galoisfixedfield(G,G.group[2],2)
%2 = [x^2 + 2, Mod(x^3 + x, x^4 + 1), [x^2 - y*x - 1, x^2 + y*x - 1]]
@eprog\noindent
computes the factorization $x^4+1=(x^2-\sqrt{-2}x-1)(x^2+\sqrt{-2}x-1)$.

\syn{galoisfixedfield}{\var{gal},\var{perm},\fl,$v$}, where $v$ is a variable
number, an omitted $v$ being coded by $-1$.

\subsecidx{galoisidentify}$(\var{gal})$:
\var{gal} being a Galois field as output by \tet{galoisinit},
outputs the isomorphism class of the underlying abstract group as a
two-component vector $[o,i]$, where $o$ is the group order, and $i$ is the
group index in the GAP4 Small Group library, by Hans Ulrich Besche, Bettina
Eick and Eamonn O'Brien.

This command also accepts subgroups returned by \kbd{galoissubgroups}.

The current implementation is limited to degree less than or equal to $127$.
Some larger ``easy'' orders are also supported.

The output is similar to the output of the function \kbd{IdGroup} in GAP4.
Note that GAP4 \kbd{IdGroup} handles all groups of order less than $2000$
except $1024$, so you can use \tet{galoisexport} and GAP4 to identify large
Galois groups.

\syn{galoisidentify}{\var{gal}}.

\subsecidx{galoisinit}$(\var{pol},\{den\})$: computes the Galois group
and all necessary information for computing the fixed fields of the
Galois extension $K/\Q$ where $K$ is the number field defined by
$\var{pol}$ (monic irreducible polynomial in $\Z[X]$ or
a number field as output by \tet{nfinit}). The extension $K/\Q$ must be
Galois with Galois group ``weakly'' super-solvable (see \tet{nfgaloisconj}).

This is a prerequisite for most of the \kbd{galois}$xxx$ routines. For
instance:
\bprog
  P = x^6 + 108;
  G = galoisinit(P);
  L = galoissubgroups(G);
  vector(#L, i, galoisisabelian(L[i],1))
  vector(#L, i, galoisidentify(L[i]))
@eprog

The output is an 8-component vector \var{gal}.

 $\var{gal}[1]$ contains the polynomial \var{pol}
 (\kbd{\var{gal}.pol}).

 $\var{gal}[2]$ is a three-component vector $[p,e,q]$ where $p$ is a
 prime number (\kbd{\var{gal}.p}) such that \var{pol} splits totally
 modulo $p$, $e$ is an integer and $q=p^e$ (\kbd{\var{gal}.mod}) is the
 modulus of the roots in \kbd{\var{gal}.roots}.

 $\var{gal}[3]$ is a vector $L$ containing the $p$-adic roots of
 \var{pol} as integers implicitly modulo \kbd{\var{gal}.mod}.
 (\kbd{\var{gal}.roots}).

 $\var{gal}[4]$ is the inverse of the Vandermonde matrix of the
 $p$-adic roots of \var{pol}, multiplied by $\var{gal}[5]$.

 $\var{gal}[5]$ is a multiple of the least common denominator of the
 automorphisms expressed as polynomials in a root of \var{pol}.

 $\var{gal}[6]$ is the Galois group $G$ expressed as a vector of
 permutations of $L$ (\kbd{\var{gal}.group}).

 $\var{gal}[7]$ is a generating subset $S=[s_1,\ldots,s_g]$ of $G$
 expressed as a vector of permutations of $L$ (\kbd{\var{gal}.gen}).

 $\var{gal}[8]$ contains the relative orders $[o_1,\ldots,o_g]$ of
 the generators of $S$ (\kbd{\var{gal}.orders}).

Let $H$ be the maximal normal supersolvable subgroup of $G$; we have the
following properties:

\quad\item if $G/H\simeq A_4$ then $[o_1,\ldots,o_g]$ ends with
$[2,2,3]$.

\quad\item if $G/H\simeq S_4$ then $[o_1,\ldots,o_g]$ ends with
$[2,2,3,2]$.

\quad\item else $G$ is super-solvable.

\quad\item for $1\leq i \leq g$ the subgroup of $G$ generated by
$[s_1,\ldots,s_i]$ is normal, with the exception of $i=g-2$ in the
second case and of $i=g-3$ in the third.

\quad\item the relative order $o_i$ of $s_i$ is its order in the
quotient group $G/\langle s_1,\ldots,s_{i-1}\rangle$, with the same
exceptions.

\quad\item for any $x\in G$ there exists a unique family
$[e_1,\ldots,e_g]$ such that (no exceptions):

-- for $1\leq i \leq g$ we have $0\leq e_i<o_i$

-- $x=s_1^{e_1}s_2^{e_2}\cdots s_g^{e_g}$

If present, \var{den} must be a suitable value for $\var{gal}[5]$.

\syn{galoisinit}{\var{pol},\var{den}}.

\subsecidx{galoisisabelian}$(\var{gal},\{\fl=0\})$: \var{gal} being as output
by \kbd{galoisinit}, returns $0$ if \var{gal} is not an abelian group, and
otherwise the HNF matrix of \var{gal} over \kbd{gal.gen} if $\fl=0$, or $1$
if $\fl=1$.

This command also accepts subgroups returned by \kbd{galoissubgroups}.

\syn{galoisisabelian}{\var{gal},\var{fl}} where \var{fl} is a C long integer.

\subsecidx{galoispermtopol}$(\var{gal},\var{perm})$: \var{gal} being a
Galois field as output by \kbd{galoisinit} and \var{perm} an element of
$\var{gal}.group$, returns the polynomial defining the Galois
automorphism, as output by \kbd{nfgaloisconj}, associated with the
permutation \var{perm} of the roots $\var{gal}.roots$. \var{perm} can
also be a vector or matrix, in which case \kbd{galoispermtopol} is
applied recursively to all components.

\noindent Note that
\bprog
G = galoisinit(pol);
galoispermtopol(G, G[6])~
@eprog\noindent
is equivalent to \kbd{nfgaloisconj(pol)}, if the degree of \var{pol} is at
least $2$.

\syn{galoispermtopol}{\var{gal},\var{perm}}.

\subsecidx{galoissubcyclo}$(N,H,\{fl=0\},\{v\})$: computes the subextension
of $\Q(\zeta_n)$ fixed by the subgroup $H \subset (\Z/n\Z)^*$. By the
Kronecker-Weber theorem, all abelian number fields can be generated in this
way (uniquely if $n$ is taken to be minimal).

\noindent The pair $(n, H)$ is deduced from the parameters $(N, H)$ as follows:

\item $N$ an integer: then $n = N$; $H$ is a generator, i.e. an
integer or an integer modulo $n$; or a vector of generators.

\item $N$ the output of \kbd{znstar($n$)}. $H$ as in the first case
above, or a matrix, taken to be a HNF left divisor of the SNF for $(\Z/n\Z)^*$
(of type \kbd{$N$.cyc}), giving the generators of $H$ in terms of \kbd{$N$.gen}.

\item $N$ the output of \kbd{bnrinit(bnfinit(y), $m$, 1)} where $m$ is a
modulus. $H$ as in the first case, or a matrix taken to be a HNF left
divisor of the SNF for the ray class group modulo $m$
(of type \kbd{$N$.cyc}), giving the generators of $H$ in terms of \kbd{$N$.gen}.

In this last case, beware that $H$ is understood relatively to $N$; in
particular, if the infinite place does not divide the modulus, e.g.~if $m$ is
an integer, then it is not a subgroup of $(\Z/n\Z)^*$, but of its quotient by
$\{\pm 1\}$.

If $fl=0$, compute a polynomial (in the variable \var{v}) defining the
subfield of $\Q(\zeta_n)$ fixed by the subgroup \var{H} of $(\Z/n\Z)^*$.

If $fl=1$, compute only the conductor of the abelian extension, as a modulus.

If $fl=2$, output $[pol, N]$, where $pol$ is the polynomial as output when
$fl=0$ and $N$ the conductor as output when $fl=1$.

The following function can be used to compute all subfields of
$\Q(\zeta_n)$ (of exact degree \kbd{d}, if \kbd{d} is set):
\bprog
subcyclo(n, d = -1)=
{
  local(bnr,L,IndexBound);
  IndexBound = if (d < 0, n, [d]);
  bnr = bnrinit(bnfinit(y), [n,[1]], 1);
  L = subgrouplist(bnr, IndexBound, 1);
  vector(#L,i, galoissubcyclo(bnr,L[i]));
}
@eprog\noindent
Setting \kbd{L = subgrouplist(bnr, IndexBound)} would produce subfields of exact
conductor $n\infty$.

\syn{galoissubcyclo}{N,H,fl,v} where \var{fl} is a C long integer, and
\var{v} a variable number.

\subsecidx{galoissubfields}$(G,\{fl=0\},\{v\})$: outputs all the subfields of
the number field attached to the Galois group \var{G}, as a vector.
This works by applying \kbd{galoisfixedfield} to all subgroups. The meaning of
the flag \var{fl} is the same as for \kbd{galoisfixedfield}.

\syn{galoissubfields}{\var{G},fl,v}, where \var{fl} is a long and \var{v} a
variable number.

\subsecidx{galoissubgroups}$(gal)$: outputs all the subgroups of the Galois
group \kbd{gal}. A subgroup is a vector [\var{gen}, \var{orders}], with the
same meaning as for $\var{gal}.gen$ and $\var{gal}.orders$. Hence \var{gen}
is a vector of permutations generating the subgroup, and \var{orders} is the
vector of relative orders of the generators. The cardinality of a subgroup is
the product of the relative orders. Such a subgroup can be used instead of a
Galois group in the following commands: \kbd{galoisisabelian},
\kbd{galoissubgroups}, \kbd{galoisexport} and \kbd{galoisidentify}.

To get the subfield fixed by a subgroup \var{sub} of \var{gal}, use
\bprog
galoisfixedfield(gal,sub[1])
@eprog

\syn{galoissubgroups}{\var{gal}}.

\subsecidx{idealadd}$(\var{nf},x,y)$: sum of the two ideals $x$ and $y$ in the
number field $\var{nf}$. When $x$ and $y$ are given by $\Z$-bases, this does
not depend on $\var{nf}$ and can be used to compute the sum of any two
$\Z$-modules. The result is given in HNF.
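
For instance (a minimal sketch in $\Q(i)$):
\bprog
nf = nfinit(x^2 + 1);
idealadd(nf, 2, x + 1)   \\ the prime ideal above 2, in HNF
@eprog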

\syn{idealadd}{\var{nf},x,y}.

\subsecidx{idealaddtoone}$(\var{nf},x,\{y\})$: $x$ and $y$ being two co-prime
integral ideals (given in any form), this gives a two-component row vector
$[a,b]$ such that $a\in x$, $b\in y$ and $a+b=1$.

The alternative syntax $\kbd{idealaddtoone}(\var{nf},v)$, is supported, where
$v$ is a $k$-component vector of ideals (given in any form) which sum to
$\Z_K$. This outputs a $k$-component vector $e$ such that $e[i]\in x[i]$ for
$1\le i\le k$ and $\sum_{1\le i\le k}e[i]=1$.
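
A minimal sketch, with the coprime ideals $(2)$ and $(2+i)$ of $\Z[i]$:
\bprog
nf = nfinit(x^2 + 1);
idealaddtoone(nf, 2, x + 2)   \\ [a,b] with a in (2), b in (2+i), a + b = 1
@eprog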

\syn{idealaddtoone0}{\var{nf},x,y}, where an omitted $y$ is coded as
\kbd{NULL}.

\subsecidx{idealappr}$(\var{nf},x,\{\fl=0\})$: if $x$ is a fractional ideal
(given in any form), gives an element $\alpha$ in $\var{nf}$ such that for
all prime ideals $\wp$ such that the valuation of $x$ at $\wp$ is non-zero, we
have $v_{\wp}(\alpha)=v_{\wp}(x)$, and $v_{\wp}(\alpha)\ge0$ for all other
${\wp}$.

If $\fl$ is non-zero, $x$ must be given as a prime ideal factorization, as
output by \kbd{idealfactor}, but possibly with zero or negative exponents.
This yields an element $\alpha$ such that for all prime ideals $\wp$ occurring
in $x$, $v_{\wp}(\alpha)$ is equal to the exponent of $\wp$ in $x$, and for all
other prime ideals, $v_{\wp}(\alpha)\ge0$. This generalizes
$\kbd{idealappr}(\var{nf},x,0)$ since zero exponents are allowed. Note that
the algorithm used is slightly different, so that
\kbd{idealappr(\var{nf},idealfactor(\var{nf},x))} may not be the same as
\kbd{idealappr(\var{nf},x,1)}.

\syn{idealappr0}{\var{nf},x,\fl}.

\subsecidx{idealchinese}$(\var{nf},x,y)$: $x$ being a prime ideal factorization
(i.e.~a two-column matrix whose first column contains prime ideals and whose
second column contains integral exponents), $y$ a vector of elements in
$\var{nf}$ indexed by the ideals in $x$, computes an element $b$ such that

$v_\wp(b - y_\wp) \geq v_\wp(x)$ for all prime ideals in $x$ and $v_\wp(b)\geq 0$
for all other $\wp$.
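
A minimal sketch, combining congruences at the two primes above $5$ in
$\Q(i)$:
\bprog
nf = nfinit(x^2 + 1);
F = idealfactor(nf, 5);       \\ the two primes above 5, each with exponent 1
idealchinese(nf, F, [1, -1])  \\ b = 1 mod the first prime, -1 mod the second
@eprog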

\syn{idealchinese}{\var{nf},x,y}.

\subsecidx{idealcoprime}$(\var{nf},x,y)$: given two integral ideals $x$ and $y$
in the number field $\var{nf}$, finds a $\beta$ in the field, expressed on the
integral basis $\var{nf}[7]$, such that $\beta\cdot x$ is an integral ideal
coprime to $y$.

\syn{idealcoprime}{\var{nf},x,y}.

\subsecidx{idealdiv}$(\var{nf},x,y,\{\fl=0\})$: quotient $x\cdot y^{-1}$ of the
two ideals $x$ and $y$ in the number field $\var{nf}$. The result is given in
HNF.

If $\fl$ is non-zero, the quotient $x \cdot y^{-1}$ is assumed to be an
integral ideal. This can be much faster when the norm of the quotient is
small even though the norms of $x$ and $y$ are large.

\syn{idealdiv0}{\var{nf},x,y,\fl}. Also available
are \funs{idealdiv}{\var{nf},x,y} ($\fl=0$) and
\funs{idealdivexact}{\var{nf},x,y} ($\fl=1$).

\subsecidx{idealfactor}$(\var{nf},x)$: factors into prime ideal powers the
ideal $x$ in the number field $\var{nf}$. The output format is similar to the
\kbd{factor} function, and the prime ideals are represented in the form
output by the \kbd{idealprimedec} function, i.e.~as 5-element vectors.
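
For instance (a minimal sketch in $\Q(i)$, where $2$ is ramified and $5$
splits):
\bprog
nf = nfinit(x^2 + 1);
idealfactor(nf, 10)   \\ (1+i)^2 times the two primes above 5
@eprog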

\syn{idealfactor}{\var{nf},x}.

\subsecidx{idealhnf}$(\var{nf},a,\{b\})$: gives the \idx{Hermite normal form}
matrix of the ideal $a$. The ideal can be given in any form whatsoever
(typically by an algebraic number if it is principal, by a $\Z_K$-system of
generators, as a prime ideal as given by \kbd{idealprimedec}, or by a
$\Z$-basis).

If $b$ is not omitted, assume the ideal given was $a\Z_K+b\Z_K$, where $a$
and $b$ are elements of $K$ given either as vectors on the integral basis
$\var{nf}[7]$ or as algebraic numbers.

\syn{idealhnf0}{\var{nf},a,b} where an omitted $b$ is coded as \kbd{NULL}.
Also available is \funs{idealhermite}{\var{nf},a} ($b$ omitted).

\subsecidx{idealintersect}$(\var{nf},A,B)$: intersection of the two ideals
$A$ and $B$ in the number field $\var{nf}$. The result is given in HNF.
\bprog
    ? nf = nfinit(x^2+1);
    ? idealintersect(nf, 2, x+1)
    %2 = 
    [2 0]

    [0 2]
@eprog

This function does not apply to general $\Z$-modules, e.g.~orders, since its
arguments are replaced by the ideals they generate. The following script
intersects $\Z$-modules $A$ and $B$ given by matrices of compatible
dimensions with integer coefficients:
\bprog
    ZM_intersect(A,B) =
    { local( Ker = matkerint(concat(A,B)) );
      mathnf(A * vecextract(Ker, Str("..", #A), ".."))
    }
@eprog

\syn{idealintersect}{\var{nf},A,B}.

\subsecidx{idealinv}$(\var{nf},x)$: inverse of the ideal $x$ in the
number field $\var{nf}$. The result is the Hermite normal form of the
inverse of the ideal, together with the opposite of the Archimedean
information if it is given.

\syn{idealinv}{\var{nf},x}.

\subsecidx{ideallist}$(\var{nf},\var{bound},\{\fl=4\})$: computes the list
of all ideals of norm less or equal to \var{bound} in the number field
\var{nf}. The result is a row vector with exactly \var{bound} components.
Each component is itself a row vector containing the information about
ideals of a given norm, in no specific order.

The possible values of $\fl$ are:

\quad 0: give the \var{bid} associated to the ideals, without generators.

\quad 1: as 0, but include the generators in the \var{bid}.

\quad 2: in this case, \var{nf} must be a \var{bnf} with units. Each
component is of the form $[\var{bid},U]$, where \var{bid} is as case 0
and $U$ is a vector of discrete logarithms of the units. More precisely, it
gives the \kbd{ideallog}s with respect to \var{bid} of \kbd{bnf.tufu}.
This structure is technical, and only meant to be used in conjunction with
\tet{bnrclassnolist} or \tet{bnrdisclist}.

\quad 3: as 2, but include the generators in the \var{bid}.

\quad 4: give only the HNF of the ideal.

\bprog
? nf = nfinit(x^2+1);
? L = ideallist(nf, 100);
? L[1]
%3 = [[1, 0; 0, 1]]  \\@com A single ideal of norm 1
? #L[65]
%4 = 4               \\@com There are 4 ideals of norm 65 in $\Z[i]$
@eprog
If one wants more information, one could do instead:
\bprog
? nf = nfinit(x^2+1);
? L = ideallist(nf, 100, 0);
? l = L[25]; vector(#l, i, l[i].clgp)
%3 = [[20, [20]], [16, [4, 4]], [20, [20]]]
? l[1].mod
%4 = [[25, 18; 0, 1], []]
? l[2].mod
%5 = [[5, 0; 0, 5], []]
? l[3].mod
%6 = [[25, 7; 0, 1], []]
@eprog\noindent where we ask for the structures of the $(\Z[i]/I)^*$ for all
three ideals of norm $25$. In fact, for all moduli with finite part of norm
$25$ and trivial archimedean part, as the last 3 commands show. See
\tet{ideallistarch} to treat general moduli.

\syn{ideallist0}{\var{nf},\var{bound},\fl}, where \var{bound} must
be a C long integer. Also available is \funs{ideallist}{\var{nf},\var{bound}},
corresponding to the case $\fl=4$.

\subsecidx{ideallistarch}$(\var{nf},\var{list},\var{arch})$:
\var{list} is a vector of vectors of bid's, as output by \tet{ideallist} with
flag $0$ to $3$. Return a vector of vectors with the same number of
components as the original \var{list}. The leaves give information about
moduli whose finite part is as in the original list, in the same order, and
whose archimedean part is now \var{arch} (it was originally trivial). The
information contained is of the same kind as was present in the input; see
\tet{ideallist}, in particular the meaning of \fl.

\bprog
? bnf = bnfinit(x^2-2);
? bnf.sign  
%2 = [2, 0]                         \\@com two places at infinity
? L = ideallist(bnf, 100, 0);
? l = L[98]; vector(#l, i, l[i].clgp)
%4 = [[42, [42]], [36, [6, 6]], [42, [42]]]
? La = ideallistarch(bnf, L, [1,1]); \\@com add them to the modulus
? l = La[98]; vector(#l, i, l[i].clgp)
%6 = [[168, [42, 2, 2]], [144, [6, 6, 2, 2]], [168, [42, 2, 2]]]
@eprog
Of course, the results above are obvious: adding $t$ places at infinity will
add $t$ copies of $\Z/2\Z$ to the ray class group. The following application
is more typical:
\bprog
? L = ideallist(bnf, 100, 2);        \\@com units are required now
? La = ideallistarch(bnf, L, [1,1]);
? H = bnrclassnolist(bnf, La);
? H[98]
%6 = [2, 12, 2]
@eprog

\syn{ideallistarch}{\var{nf},\var{list},\var{arch}}.

\subsecidx{ideallog}$(\var{nf},x,\var{bid})$: $\var{nf}$ is a number field,
\var{bid} a ``big ideal'' as output by \kbd{idealstar} and $x$ a
not necessarily integral element of \var{nf} which must have valuation
equal to 0 at all prime ideals dividing $I=\var{bid}[1]$. This function
computes the ``discrete logarithm'' of $x$ on the generators given in
$\var{bid}[2]$. In other words, if $g_i$ are these generators, of orders
$d_i$ respectively, the result is a column vector of integers $(x_i)$ such
that $0\le x_i<d_i$ and
$$x\equiv\prod_ig_i^{x_i}\pmod{\ ^*I}\enspace.$$
Note that when $I$ is a modulus, this also implies sign conditions on the
embeddings.
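
A minimal sketch: take $I = 5$ in $\Q(i)$ and compute the discrete logarithm
of the unit $i$:
\bprog
nf  = nfinit(x^2 + 1);
bid = idealstar(nf, 5, 2);   \\ flag 2: generators are included
ideallog(nf, x, bid)         \\ x, i.e. i, is a unit modulo 5
@eprog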

\syn{zideallog}{\var{nf},x,\var{bid}}.

\subsecidx{idealmin}$(\var{nf},x,\{\var{vdir}\})$: computes a minimum of
the ideal $x$ in the direction \var{vdir} in the number field \var{nf}.

\syn{minideal}{\var{nf},x,\var{vdir},\var{prec}}, where an omitted
\var{vdir} is coded as \kbd{NULL}.

\subsecidx{idealmul}$(\var{nf},x,y,\{\fl=0\})$: ideal multiplication of the
ideals $x$ and $y$ in the number field \var{nf}. The result is a generating
set for the ideal product with at most $n$ elements, where $n$ is the degree
of the field, and is in Hermite normal form if either $x$ or $y$ is in HNF or
is a prime ideal as output by
\kbd{idealprimedec}, and this is given together with the sum of the
Archimedean information in $x$ and $y$ if both are given.

If $\fl$ is non-zero, reduce the result using \kbd{idealred}.

\syn{idealmul}{\var{nf},x,y} ($\fl=0$) or
\funs{idealmulred}{\var{nf},x,y,\var{prec}} ($\fl\neq0$), where as usual,
$\var{prec}$ is a C long integer representing the precision.

\subsecidx{idealnorm}$(\var{nf},x)$: computes the norm of the ideal~$x$
in the number field~$\var{nf}$.

\syn{idealnorm}{\var{nf}, x}.

\subsecidx{idealpow}$(\var{nf},x,k,\{\fl=0\})$: computes the $k$-th power of
the ideal $x$ in the number field $\var{nf}$. $k$ can be positive, negative
or zero. The result is NOT reduced, it is really the $k$-th ideal power, and
is given in HNF.

If $\fl$ is non-zero, reduce the result using \kbd{idealred}. Note however
that this is NOT the same as $\kbd{idealpow}(\var{nf},x,k)$ followed by
reduction, since the reduction is performed throughout the powering process.

The library syntax corresponding to $\fl=0$ is
\funs{idealpow}{\var{nf},x,k}. If $k$ is a \kbd{long}, you can use
\funs{idealpows}{\var{nf},x,k}. Corresponding to $\fl=1$ is
\funs{idealpowred}{\var{nf},vp,k,\var{prec}}, where $\var{prec}$ is a
\kbd{long}.

\subsecidx{idealprimedec}$(\var{nf},p)$: computes the prime ideal
decomposition of the prime number $p$ in the number field $\var{nf}$. $p$
must be a (positive) prime number. Note that the fact that $p$ is prime is
not checked, so if a non-prime $p$ is given the result is undefined.

The result is a vector of \tev{pr} structures, each representing one of the
prime ideals above $p$ in the number field $\var{nf}$. The representation
$P=[p,a,e,f,b]$ of a prime ideal means the following. The prime ideal is
equal to $p\Z_K+\alpha\Z_K$ where $\Z_K$ is the ring of integers of the field
and $\alpha=\sum_i a_i\omega_i$ where the $\omega_i$ form the integral basis
\kbd{\var{nf}.zk}, $e$ is the ramification index, $f$ is the residual degree,
and $b$ represents a $\beta\in\Z_K$ such that $P^{-1}=\Z_K+\beta/p\Z_K$ which
will be useful for computing valuations, but which the user can ignore. The
number $\alpha$ is guaranteed to have a valuation equal to 1 at the prime
ideal (this is automatic if $e>1$).

The components of \kbd{P} should be accessed by member functions: \kbd{P.p},
\kbd{P.e}, \kbd{P.f}, and \kbd{P.gen} (returns the vector $[p,a]$).
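
For instance (a minimal sketch in $\Q(i)$, where $5$ splits into two primes
of residual degree $1$):
\bprog
nf = nfinit(x^2 + 1);
P  = idealprimedec(nf, 5);
[P[1].e, P[1].f]   \\ ramification index and residual degree, here [1, 1]
@eprog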

\syn{primedec}{\var{nf},p}.

\subsecidx{idealprincipal}$(\var{nf},x)$: creates the principal ideal
generated by the algebraic number $x$ (which must be of type integer,
rational or polmod) in the number field $\var{nf}$. The result is a
one-column matrix.

\syn{principalideal}{\var{nf},x}.

\subsecidx{idealred}$(\var{nf},I,\{\var{vdir}=0\})$: \idx{LLL} reduction of
the ideal $I$ in the number field \var{nf}, along the direction \var{vdir}.
If \var{vdir} is present, it must be an $r1+r2$-component vector ($r1$ and
$r2$ number of real and complex places of \var{nf} as usual).

This function finds a ``small'' $a$ in $I$ (it is an LLL pseudo-minimum
along direction \var{vdir}). The result is the Hermite normal form of
the LLL-reduced ideal $r I/a$, where $r$ is a rational number such that the
resulting ideal is integral and primitive. This is often, but not always, a
reduced ideal in the sense of \idx{Buchmann}. If $I$ is an idele, the
logarithmic embeddings of $a$ are subtracted from the Archimedean part.

More often than not, a \idx{principal ideal} will yield the identity
matrix. This is a quick and dirty way to check if ideals are principal
without computing a full \kbd{bnf} structure, but it is not a necessary
condition: a principal ideal need not reduce to the identity, so a
non-trivial result does not prove that the ideal is non-trivial in the
class group.

Note that this is \emph{not} the same as the LLL reduction of the lattice
$I$ since ideal operations are involved.

\syn{ideallllred}{\var{nf},x,\var{vdir},\var{prec}}, where an omitted
\var{vdir} is coded as \kbd{NULL}.

\subsecidx{idealstar}$(\var{nf},I,\{\fl=1\})$: outputs a \var{bid} structure,
necessary for computing in the finite abelian group $G = (\Z_K/I)^*$. Here,
\var{nf} is a number field and $I$ is a \var{modulus}: either an ideal in any
form, or a row vector whose first component is an ideal and whose second
component is a row vector of $r_1$ 0 or 1.

This \var{bid} is used in \tet{ideallog} to compute discrete logarithms. It
also contains useful information which can be conveniently retrieved as
\kbd{\var{bid}.mod} (the modulus),
\kbd{\var{bid}.clgp} ($G$ as a finite abelian group),
\kbd{\var{bid}.no} (the cardinality of $G$),
\kbd{\var{bid}.cyc} (elementary divisors) and
\kbd{\var{bid}.gen} (generators).

If $\fl=1$ (default), the result is a \var{bid} structure without
generators.

If $\fl=2$, as $\fl=1$, but including generators, which wastes some time.

If $\fl=0$, \emph{deprecated}. Only outputs $(\Z_K/I)^*$ as an abelian group,
i.e.~as a 3-component vector $[h,d,g]$: $h$ is the order, $d$ is the vector of
SNF\sidx{Smith normal form} cyclic components and $g$ the corresponding
generators. This flag is deprecated: it is in fact slightly faster
to compute a true \var{bid} structure, which contains much more information.

\syn{idealstar0}{\var{nf},I,\fl}.

\subsecidx{idealtwoelt}$(\var{nf},x,\{a\})$: computes a two-element
representation of the ideal $x$ in the number field $\var{nf}$, using a
straightforward (exponential time) search. $x$ can be an ideal in any form
(including perhaps an Archimedean part, which is ignored) and the result is a
row vector $[a,\alpha]$ with two components such that $x=a\Z_K+\alpha\Z_K$
and $a\in\Z$, where $a$ is the one passed as argument if any. If $x$ is given
by at least two generators, $a$ is chosen to be the positive generator of
$x\cap\Z$.

Note that when an explicit $a$ is given, we use an asymptotically faster
method; however, in practice it is usually slower.
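
A minimal sketch, recovering a two-element form of a prime ideal:
\bprog
nf = nfinit(x^2 + 1);
pr = idealprimedec(nf, 5)[1];
idealtwoelt(nf, pr)   \\ a two-element form [a, alpha] with pr = a*Z_K + alpha*Z_K
@eprog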

\syn{ideal_two_elt0}{\var{nf},x,a}, where an omitted $a$ is entered as
\kbd{NULL}.

\subsecidx{idealval}$(\var{nf},x,\var{vp})$: gives the valuation of the
ideal $x$ at the prime ideal \var{vp} in the number field $\var{nf}$,
where \var{vp} must be a
5-component vector as given by \kbd{idealprimedec}.
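
For instance (a minimal sketch: $2$ is ramified in $\Q(i)$, so the principal
ideal $(10)$ has valuation $2$ at the prime above $2$):
\bprog
nf = nfinit(x^2 + 1);
pr = idealprimedec(nf, 2)[1];
idealval(nf, 10, pr)   \\ returns 2
@eprog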

\syn{idealval}{\var{nf},x,\var{vp}}, and the result is a \kbd{long}
integer.

\subsecidx{ideleprincipal}$(\var{nf},x)$: creates the principal idele
generated by the algebraic number $x$ (which must be of type integer,
rational or polmod) in the number field $\var{nf}$. The result is a
two-component vector, the first being a one-column matrix representing the
corresponding principal ideal, and the second being the vector with $r_1+r_2$
components giving the complex logarithmic embedding of $x$.

\syn{principalidele}{\var{nf},x}.

\subsecidx{matalgtobasis}$(\var{nf},x)$: $\var{nf}$ being a number field in
\kbd{nfinit} format, and $x$ a matrix whose coefficients are expressed as
polmods in $\var{nf}$, transforms this matrix into a matrix whose
coefficients are expressed on the integral basis of $\var{nf}$. This is the
same as applying \kbd{nfalgtobasis} to each entry, but it would be dangerous
to use the same name.

\syn{matalgtobasis}{\var{nf},x}.

\subsecidx{matbasistoalg}$(\var{nf},x)$: $\var{nf}$ being a number field in
\kbd{nfinit} format, and $x$ a matrix whose coefficients are expressed as
column vectors on the integral basis of $\var{nf}$, transforms this matrix
into a matrix whose coefficients are algebraic numbers expressed as
polmods. This is the same as applying \kbd{nfbasistoalg} to each entry, but
it would be dangerous to use the same name.

\syn{matbasistoalg}{\var{nf},x}.

\subsecidx{modreverse}$(a)$: $a$ being a polmod $A(X)$ modulo $T(X)$, finds
the ``reverse polmod'' $B(X)$ modulo $Q(X)$, where $Q$ is the minimal
polynomial of $a$, whose degree must be equal to that of $T$, and such that if
$\theta$ is a root of $T$ then $\theta=B(\alpha)$ for a certain root $\alpha$
of $Q$.

This is very useful when one changes the generating element in algebraic
extensions.
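
A minimal sketch: take $a = \theta^2$ where $\theta^3 = 2$; then $Q(X) =
X^3 - 4$ and $\theta = a^2/2$, which is what \kbd{modreverse} recovers:
\bprog
a = Mod(x^2, x^3 - 2);
modreverse(a)   \\ a polmod modulo x^3 - 4 expressing theta in terms of a
@eprog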

\syn{polmodrecip}{x}.

\subsecidx{newtonpoly}$(x,p)$: gives the vector of the slopes of the Newton
polygon of the polynomial $x$ with respect to the prime number $p$. The $n$
components of the vector are in decreasing order, where $n$ is equal to the
degree of $x$. Vertical slopes occur iff the constant coefficient of $x$ is
zero and are denoted by \kbd{VERYBIGINT}, the biggest single precision
integer representable on the machine ($2^{31}-1$ (resp.~$2^{63}-1$) on 32-bit
(resp.~64-bit) machines), see \secref{se:valuation}.

\syn{newtonpoly}{x,p}.

\subsecidx{nfalgtobasis}$(\var{nf},x)$: this is the inverse function of
\kbd{nfbasistoalg}. Given an object $x$ whose entries are expressed as
algebraic numbers in the number field $\var{nf}$, transforms it so that the
entries are expressed as a column vector on the integral basis
\kbd{\var{nf}.zk}.
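
A minimal sketch of the round trip with \kbd{nfbasistoalg}, in $\Q(i)$ whose
integral basis is $[1,x]$:
\bprog
nf = nfinit(x^2 + 1);
v  = nfalgtobasis(nf, Mod(x + 3, x^2 + 1));   \\ the column vector [3, 1]~
nfbasistoalg(nf, v)                           \\ back to Mod(x + 3, x^2 + 1)
@eprog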

\syn{algtobasis}{\var{nf},x}.

\subsecidx{nfbasis}$(x,\{\fl=0\},\{\var{fa}\})$: \idx{integral basis} of the number
field defined by the irreducible, preferably monic, polynomial $x$, using a
modified version of the \idx{round 4} algorithm by default, due to David
\idx{Ford}, Sebastian \idx{Pauli} and Xavier \idx{Roblot}. The binary digits
of $\fl$ have the following meaning:

1: assume that no square of a prime greater than the default \kbd{primelimit}
divides the discriminant of $x$, i.e.~that the index of $x$ has only small
prime divisors.

2: use \idx{round 2} algorithm. For small degrees and coefficient size, this
is sometimes a little faster. (This program is the translation into C of a
program written by David \idx{Ford} in Algeb.)

Thus for instance, if $\fl=3$, this uses the round 2 algorithm and outputs
an order which will be maximal at all the small primes.

If \var{fa} is present, we assume (without checking!) that it is the two-column
matrix of the factorization of the discriminant of the polynomial $x$. Note
that it does \emph{not} have to be a complete factorization. This is
especially useful if only a local integral basis for some small set of places
is desired: only factors with exponents greater than or equal to 2 will be
considered.
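
For instance (compare with the \kbd{nfinit} example below, which exhibits the
same integral basis):
\bprog
nfbasis(x^3 - 12)   \\ [1, x, 1/2*x^2]
@eprog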

\syn{nfbasis0}{x,\fl,\var{fa}}. An extended version is
\funs{nfbasis}{x,\&d,\fl,\var{fa}}, where $d$ receives the discriminant of the
number field (\emph{not} of the polynomial $x$), and an omitted \var{fa} is input
as \kbd{NULL}. Also available are \funs{base}{x,\&d} ($\fl=0$),
\funs{base2}{x,\&d} ($\fl=2$) and \funs{factoredbase}{x,\var{fa},\&d}.

\subsecidx{nfbasistoalg}$(\var{nf},x)$: this is the inverse function of
\kbd{nfalgtobasis}. Given an object $x$ whose entries are expressed on the
integral basis \kbd{\var{nf}.zk}, transforms it into an object whose entries
are algebraic numbers (i.e.~polmods).

\syn{basistoalg}{\var{nf},x}.

\subsecidx{nfdetint}$(\var{nf},x)$: given a pseudo-matrix $x$, computes a
non-zero ideal contained in (i.e.~multiple of) the determinant of $x$. This
is particularly useful in conjunction with \kbd{nfhnfmod}.

\syn{nfdetint}{\var{nf},x}.

\subsecidx{nfdisc}$(x,\{\fl=0\},\{fa\})$: \idx{field discriminant} of the
number field defined by the integral, preferably monic, irreducible
polynomial $x$. $\fl$ and $fa$ are exactly as in \kbd{nfbasis}. That is, $fa$
provides the matrix of a partial factorization of the discriminant of $x$,
and binary digits of $\fl$ are as follows:

1: assume that no square of a prime greater than \kbd{primelimit}
divides the discriminant.

2: use the round 2 algorithm, instead of the default \idx{round 4}. This
should be slower except maybe for polynomials of small degree and
coefficients.
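
For instance (in agreement with \kbd{nf.disc} in the \kbd{nfinit} example
below):
\bprog
nfdisc(x^3 - 12)   \\ -972
@eprog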

\syn{nfdiscf0}{x,\fl,fa} where an omitted $fa$ is input as \kbd{NULL}. You
can also use \funs{discf}{x} ($\fl=0$).

\subsecidx{nfeltdiv}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes their quotient $x/y$ in the number field $\var{nf}$.

\syn{element_div}{\var{nf},x,y}.

\subsecidx{nfeltdiveuc}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes an algebraic integer $q$ in the number field $\var{nf}$
such that the components of $x-qy$ are reasonably small. In fact, this is
functionally identical to \kbd{round(nfeltdiv(\var{nf},x,y))}.

\syn{nfdiveuc}{\var{nf},x,y}.

\subsecidx{nfeltdivmodpr}$(\var{nf},x,y,\var{pr})$: given two elements $x$
and $y$ in \var{nf} and \var{pr} a prime ideal in \kbd{modpr} format (see
\tet{nfmodprinit}), computes their quotient $x / y$ modulo the prime ideal
\var{pr}.

\syn{element_divmodpr}{\var{nf},x,y,\var{pr}}.

\subsecidx{nfeltdivrem}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, gives a two-element row vector $[q,r]$ such that $x=qy+r$, $q$ is
an algebraic integer in $\var{nf}$, and the components of $r$ are
reasonably small.

\syn{nfdivrem}{\var{nf},x,y}.

\subsecidx{nfeltmod}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes an element $r$ of $\var{nf}$ of the form $r=x-qy$ with
$q$ an algebraic integer, and such that $r$ is small. This is functionally
identical to
$$\kbd{x - nfeltmul(\var{nf},round(nfeltdiv(\var{nf},x,y)),y)}.$$

\syn{nfmod}{\var{nf},x,y}.

\subsecidx{nfeltmul}$(\var{nf},x,y)$: given two elements $x$ and $y$ in
\var{nf}, computes their product $x*y$ in the number field $\var{nf}$.

\syn{element_mul}{\var{nf},x,y}.

\subsecidx{nfeltmulmodpr}$(\var{nf},x,y,\var{pr})$: given two elements $x$ and
$y$ in \var{nf} and \var{pr} a prime ideal in \kbd{modpr} format (see
\tet{nfmodprinit}), computes their product $x*y$ modulo the prime ideal
\var{pr}.

\syn{element_mulmodpr}{\var{nf},x,y,\var{pr}}.

\subsecidx{nfeltpow}$(\var{nf},x,k)$: given an element $x$ in \var{nf},
and a positive or negative integer $k$, computes $x^k$ in the number field
$\var{nf}$.

\syn{element_pow}{\var{nf},x,k}.

\subsecidx{nfeltpowmodpr}$(\var{nf},x,k,\var{pr})$: given an element $x$ in
\var{nf}, an integer $k$ and a prime ideal \var{pr} in \kbd{modpr} format
(see \tet{nfmodprinit}), computes $x^k$ modulo the prime ideal \var{pr}.

\syn{element_powmodpr}{\var{nf},x,k,\var{pr}}.

\subsecidx{nfeltreduce}$(\var{nf},x,\var{ideal})$: given an ideal in
Hermite normal form and an element $x$ of the number field $\var{nf}$,
finds an element $r$ in $\var{nf}$ such that $x-r$ belongs to the ideal
and $r$ is small.

\syn{element_reduce}{\var{nf},x,\var{ideal}}.

\subsecidx{nfeltreducemodpr}$(\var{nf},x,\var{pr})$: given
an element $x$ of the number field $\var{nf}$ and a prime ideal \var{pr} in
\kbd{modpr} format compute a canonical representative for the class of $x$
modulo \var{pr}.

\syn{nfreducemodpr}{\var{nf},x,\var{pr}}.

\subsecidx{nfeltval}$(\var{nf},x,\var{pr})$: given an element $x$ in
\var{nf} and a prime ideal \var{pr} in the format output by
\kbd{idealprimedec}, computes the valuation at \var{pr} of the
element $x$. The same result could be obtained using
\kbd{idealval(\var{nf},x,\var{pr})} (since $x$ would then be converted to a
principal ideal), but it would be less efficient.
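
For instance (a minimal sketch: $1+i$ generates the ramified prime above $2$
in $\Q(i)$):
\bprog
nf = nfinit(x^2 + 1);
pr = idealprimedec(nf, 2)[1];
nfeltval(nf, x + 1, pr)   \\ returns 1
@eprog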

\syn{element_val}{\var{nf},x,\var{pr}}, and the result is a \kbd{long}.

\subsecidx{nffactor}$(\var{nf},x)$: factorization of the univariate
polynomial $x$ over the number field $\var{nf}$ given by \kbd{nfinit}. $x$
has coefficients in $\var{nf}$ (i.e.~either scalar, polmod, polynomial or
column vector). The main variable of $\var{nf}$ must be of \emph{lower}
priority than that of $x$ (see \secref{se:priority}). However if
the polynomial defining the number field occurs explicitly in the
coefficients of $x$ (as modulus of a \typ{POLMOD}), its main variable must be
\emph{the same} as the main variable of $\var{nf}$. For example,
\bprog
? nf = nfinit(y^2 + 1);
? nffactor(nf, x^2 + y); \\@com OK
? nffactor(nf, x^2 + Mod(y, y^2+1)); \\ @com OK
? nffactor(nf, x^2 + Mod(z, z^2+1)); \\ @com WRONG
@eprog

\syn{nffactor}{\var{nf},x}.

\subsecidx{nffactormod}$(\var{nf},x,\var{pr})$: factorization of the
univariate polynomial $x$ modulo the prime ideal \var{pr} in the number
field $\var{nf}$. $x$ can have coefficients in the number field (scalar,
polmod, polynomial, column vector) or modulo the prime ideal (intmod
modulo the rational prime under \var{pr}, polmod or polynomial with
intmod coefficients, column vector of intmod). The prime ideal
\var{pr} \emph{must} be in the format output by \kbd{idealprimedec}. The
main variable of $\var{nf}$ must be of lower priority than that of $x$
(see \secref{se:priority}). However if the coefficients of the number
field occur explicitly (as polmods) as coefficients of $x$, the variable of
these polmods \emph{must} be the same as the main variable of \var{nf} (see
\kbd{nffactor}).

\syn{nffactormod}{\var{nf},x,\var{pr}}.

\subsecidx{nfgaloisapply}$(\var{nf},\var{aut},x)$: $\var{nf}$ being a
number field as output by \kbd{nfinit}, and \var{aut} being a \idx{Galois}
automorphism of $\var{nf}$ expressed either as a polynomial or a polmod
(such automorphisms being found using for example one of the variants of
\kbd{nfgaloisconj}), computes the action of the automorphism \var{aut} on
the object $x$ in the number field. $x$ can be an element (scalar, polmod,
polynomial or column vector) of the number field, an ideal (either given by
$\Z_K$-generators or by a $\Z$-basis), a prime ideal (given as a 5-element
row vector) or an idele (given as a 2-element row vector). Because of
possible confusion with elements and ideals, other vector or matrix
arguments are forbidden.

\syn{galoisapply}{\var{nf},\var{aut},x}.

\subsecidx{nfgaloisconj}$(\var{nf},\{\fl=0\},\{d\})$: $\var{nf}$ being a
number field as output by \kbd{nfinit}, computes the conjugates of a root
$r$ of the non-constant polynomial $x=\var{nf}[1]$ expressed as
polynomials in $r$. This can be used even if the number field $\var{nf}$ is
not \idx{Galois} since some conjugates may lie in the field.

$\var{nf}$ can simply be a polynomial if $\fl\neq 1$.

If no flags or $\fl=0$: if $\var{nf}$ is a number field, use a
combination of flags $4$ and $1$, and the result is always complete;
otherwise use a combination of flags $4$ and $2$, and the result is subject
to the restriction of $\fl=2$, but a warning is issued when it is not
proven complete.

If $\fl=1$, use \kbd{nfroots} (require a number field).

If $\fl=2$, use complex approximations to the roots and an integral
\idx{LLL}. The result is not guaranteed to be complete: some
conjugates may be missing (no warning issued), especially so if the
corresponding polynomial has a huge index. In that case, increasing
the default precision may help.

If $\fl=4$, use Allombert's algorithm and permutation testing. If the
field is Galois with ``weakly'' super solvable Galois group, return
the complete list of automorphisms, else only the identity element. If
present, $d$ is assumed to be a multiple of the least common
denominator of the conjugates expressed as polynomials in a root of
\var{pol}.

A group $G$ is ``weakly'' super solvable (WKSS) if it contains a super
solvable normal subgroup $H$ such that $G=H$, or $G/H \simeq A_4$, or
$G/H \simeq S_4$. Abelian and nilpotent groups are WKSS. In practice, almost
all groups of small order are WKSS, the exceptions having order 36
(1 exception), 48 (2), 56 (1), 60 (1), 72 (5), 75 (1), 80 (1), 96 (10) and
$\geq 108$.

Hence $\fl = 4$ makes it possible to check quickly whether a polynomial of
degree strictly less than $36$ is Galois or not. This method is much faster
than \kbd{nfroots} and can be applied to polynomials of degree larger
than $50$.

This routine can only compute $\Q$-automorphisms, but it may be used to get
$K$-automorphism for any base field $K$ as follows:
\bprog
  rnfgaloisconj(nfK, R) = \\ K-automorphisms of L = K[X] / (R)
  { local(polabs, N, H);
    R *= Mod(1, nfK.pol);             \\ convert coeffs to polmod elts of K
    polabs = rnfequation(nfK, R);
    N = nfgaloisconj(polabs) % R;     \\ Q-automorphisms of L
    H = [];
    for(i=1, #N,                      \\ select the ones that fix K
      if (subst(R, variable(R), Mod(N[i],R)) == 0,
        H = concat(H,N[i])
      )
    ); H
  }
  K  = nfinit(y^2 + 7);
  polL = x^4 - y*x^3 - 3*x^2 + y*x + 1;
  rnfgaloisconj(K, polL)             \\ K-automorphisms of L
@eprog

\syn{galoisconj0}{\var{nf},\fl,d,\var{prec}}. Also available are
\funs{galoisconj}{\var{nf}} for $\fl=0$,
\funs{galoisconj2}{\var{nf},n,\var{prec}} for $\fl=2$ where $n$ is a bound
on the number of conjugates, and  \funs{galoisconj4}{\var{nf},d}
corresponding to $\fl=4$.

\subsecidx{nfhilbert}$(\var{nf},a,b,\{\var{pr}\})$: if \var{pr} is omitted,
compute the global \idx{Hilbert symbol} $(a,b)$ in $\var{nf}$, that is $1$
if $x^2 - a y^2 - b z^2$ has a non-trivial solution $(x,y,z)$ in $\var{nf}$,
and $-1$ otherwise. Otherwise compute the local symbol modulo the prime ideal
\var{pr} (as output by \kbd{idealprimedec}).

\syn{nfhilbert}{\var{nf},a,b,\var{pr}}, where an omitted \var{pr} is coded
as \kbd{NULL}.

\subsecidx{nfhnf}$(\var{nf},x)$: given a pseudo-matrix $(A,I)$, finds a
pseudo-basis in \idx{Hermite normal form} of the module it generates.

\syn{nfhermite}{\var{nf},x}.

\subsecidx{nfhnfmod}$(\var{nf},x,\var{detx})$: given a pseudo-matrix $(A,I)$
and an ideal \var{detx} which is contained in (read integral multiple of) the
determinant of $(A,I)$, finds a pseudo-basis in \idx{Hermite normal form}
of the module generated by $(A,I)$. This avoids coefficient explosion.
\var{detx} can be computed using the function \kbd{nfdetint}.

\syn{nfhermitemod}{\var{nf},x,\var{detx}}.

\subsecidx{nfinit}$(\var{pol},\{\fl=0\})$: \var{pol} being a non-constant,
preferably monic, irreducible polynomial in $\Z[X]$, initializes a
\emph{number field} structure (\kbd{nf}) associated to the field $K$ defined
by \var{pol}. As such, it's a technical object passed as the first argument
to most \kbd{nf}\var{xxx} functions, but it contains some information which
may be directly useful. Access to this information via \emph{member
functions} is preferred since the specific data organization specified below
may change in the future. Currently, \kbd{nf} is a row vector with 9
components:

$\var{nf}[1]$ contains the polynomial \var{pol} (\kbd{\var{nf}.pol}).

$\var{nf}[2]$ contains $[r1,r2]$ (\kbd{\var{nf}.sign}, \kbd{\var{nf}.r1},
\kbd{\var{nf}.r2}), the number of real and complex places of $K$.

$\var{nf}[3]$ contains the discriminant $d(K)$ (\kbd{\var{nf}.disc}) of $K$.

$\var{nf}[4]$ contains the index of $\var{nf}[1]$ (\kbd{\var{nf}.index}),
i.e.~$[\Z_K : \Z[\theta]]$, where $\theta$ is any root of $\var{nf}[1]$.

$\var{nf}[5]$ is a vector containing 7 matrices $M$, $G$, $T2$, $T$,
$MD$, $TI$, $MDI$ useful for certain computations in the number field $K$.

\quad\item $M$ is the $(r1+r2)\times n$ matrix whose columns represent
the numerical values of the conjugates of the elements of the integral
basis.

\quad\item $G$ is such that $T2 = {}^t G G$, where $T2$ is the quadratic
form $T_2(x) = \sum |\sigma(x)|^2$, $\sigma$ running over the embeddings of
$K$ into $\C$.

\quad\item The $T2$ component is deprecated and currently unused.

\quad\item $T$ is the $n\times n$ matrix whose coefficients are
$\text{Tr}(\omega_i\omega_j)$ where the $\omega_i$ are the elements of the
integral basis. Note also that $\det(T)$ is equal to the discriminant of the
field $K$.

\quad\item The columns of $MD$ (\kbd{\var{nf}.diff}) express a $\Z$-basis
of the different of $K$ on the integral basis.

\quad\item $TI$ is equal to $d(K)T^{-1}$, which has integral
coefficients. Note that, understood as an ideal, the matrix $T^{-1}$
generates the codifferent ideal.

\quad\item Finally, $MDI$ is a two-element representation (for faster
ideal product) of $d(K)$ times the codifferent ideal
(\kbd{\var{nf}.disc$*$\var{nf}.codiff}, which is an integral ideal). $MDI$
is only used in \tet{idealinv}.

$\var{nf}[6]$ is the vector containing the $r1+r2$ roots
(\kbd{\var{nf}.roots}) of $\var{nf}[1]$ corresponding to the $r1+r2$
embeddings of the number field into $\C$ (the first $r1$ components are real,
the next $r2$ have positive imaginary part).

$\var{nf}[7]$ is an integral basis for $\Z_K$ (\kbd{\var{nf}.zk}) expressed
on the powers of~$\theta$. Its first element is guaranteed to be $1$. This
basis is LLL-reduced with respect to $T_2$ (strictly speaking, it is a
permutation of such a basis, due to the condition that the first element be
$1$).

$\var{nf}[8]$ is the $n\times n$ integral matrix expressing the power
basis in terms of the integral basis, and finally

$\var{nf}[9]$ is the $n\times n^2$ matrix giving the multiplication table
of the integral basis.

If a non-monic polynomial is input, \kbd{nfinit} will transform it into a
monic one, then reduce it (see $\fl=3$). It is allowed, though not very
useful given the existence of \tet{nfnewprec}, to input a \kbd{nf} or a
\kbd{bnf} instead of a polynomial.

\bprog
  ? nf = nfinit(x^3 - 12); \\ initialize number field Q[X] / (X^3 - 12)
  ? nf.pol   \\ defining polynomial
  %2 = x^3 - 12
  ? nf.disc  \\ field discriminant
  %3 = -972
  ? nf.index \\ index of power basis order in maximal order
  %4 = 2
  ? nf.zk    \\ integer basis, lifted to Q[X]
  %5 = [1, x, 1/2*x^2]
  ? nf.sign  \\ signature
  %6 = [1, 1]
  ? factor(abs(nf.disc ))  \\ determines ramified primes
  %7 =
  [2 2]

  [3 5]
  ? idealfactor(nf, 2)
  %8 =
  [[2, [0, 0, -1]~, 3, 1, [0, 1, 0]~] 3]  \\ @com $\goth{P}_2^3$
@eprog

In case \var{pol} has a huge discriminant which is difficult to factor,
the special input format $[\var{pol},B]$ is also accepted where \var{pol} is a
polynomial as above and $B$ is the integer basis, as would be computed by
\tet{nfbasis}. This is useful if the integer basis is known in advance,
or was computed conditionally.
\bprog
  ? pol = polcompositum(x^5 - 101, polcyclo(7))[1];
  ? B = nfbasis(pol, 1);   \\ faster than nfbasis(pol), but conditional
  ? nf = nfinit( [pol, B] );
  ? factor( abs(nf.disc) )
  [5 18]

  [7 25]

  [101 24]
@eprog
\kbd{B} is conditional when its discriminant, which is \kbd{nf.disc}, can't be
factored. In this example, the above factorization proves the correctness of
the computation.
\medskip

If $\fl=2$: \var{pol} is changed into another polynomial $P$ defining the same
number field, which is as simple as can easily be found using the \kbd{polred}
algorithm, and all the subsequent computations are done using this new
polynomial. In particular, the first component of the result is the modified
polynomial.

If $\fl=3$, does a \kbd{polred} as in case 2, but outputs
$[\var{nf},\kbd{Mod}(a,P)]$, where $\var{nf}$ is as before and
$\kbd{Mod}(a,P)=\kbd{Mod}(x,\var{pol})$ gives the change of
variables. This is implicit when \var{pol} is not monic: first a linear change
of variables is performed, to get a monic polynomial, then a \kbd{polred}
reduction.

If $\fl=4$, as $2$ but uses a partial \kbd{polred}.

If $\fl=5$, as $3$ using a partial \kbd{polred}.

\syn{nfinit0}{x,\fl,\var{prec}}.

\subsecidx{nfisideal}$(\var{nf},x)$: returns 1 if $x$ is an ideal in
the number field $\var{nf}$, 0 otherwise.

\syn{isideal}{x}.

\subsecidx{nfisincl}$(x,y)$: tests whether the number field $K$ defined
by the polynomial $x$ is conjugate to a subfield of the field $L$ defined
by $y$ (where $x$ and $y$ must be in $\Q[X]$). If they are not, the output
is the number 0. If they are, the output is a vector of polynomials, each
polynomial $a$ representing an embedding of $K$ into $L$, i.e.~being such
that $y\mid x\circ a$.

If $y$ is a number field (\var{nf}), a much faster algorithm is used
(factoring $x$ over $y$ using \tet{nffactor}). Before version 2.0.14, this
wasn't guaranteed to return all the embeddings, hence was triggered by a
special flag. This is no longer the case.

\syn{nfisincl}{x,y,\fl}.

\subsecidx{nfisisom}$(x,y)$: as \tet{nfisincl}, but tests
for isomorphism. If either $x$ or $y$ is a number field, a much faster
algorithm will be used.
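
For instance, one can check a subfield relation or a field isomorphism as
follows (a small sketch; the exact representation of the embeddings in the
output may vary):
\bprog
? nfisincl(x^2 - 2, x^4 - 2)  \\ Q(sqrt(2)) embeds into Q(2^(1/4))
? nfisincl(x^2 - 3, x^4 - 2)  \\ no embedding: expect 0
? nfisisom(x^2 - 2, x^2 - 8)  \\ both define Q(sqrt(2)): expect a non-zero answer
@eprog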

\syn{nfisisom}{x,y,\fl}.

\subsecidx{nfnewprec}$(\var{nf})$: transforms the number field $\var{nf}$
into the corresponding data using current (usually larger) precision. This
function works as expected if $\var{nf}$ is in fact a $\var{bnf}$ (update
$\var{bnf}$ to current precision) but may be quite slow (many generators of
principal ideals have to be computed).

\syn{nfnewprec}{\var{nf},\var{prec}}.

\subsecidx{nfkermodpr}$(\var{nf},a,\var{pr})$: kernel of the matrix $a$ in
$\Z_K/\var{pr}$, where \var{pr} is in \key{modpr} format
(see \kbd{nfmodprinit}).

\syn{nfkermodpr}{\var{nf},a,\var{pr}}.

\subsecidx{nfmodprinit}$(\var{nf},\var{pr})$: transforms the prime ideal
\var{pr} into \tet{modpr} format necessary for all operations modulo
\var{pr} in the number field \var{nf}.\label{se:nfmodprinit}

\syn{nfmodprinit}{\var{nf},\var{pr}}.
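
As a quick illustration, one may combine \kbd{nfmodprinit} with
\kbd{nfkermodpr} (a sketch only; the outputs are not shown):
\bprog
? nf = nfinit(x^2 + 1);
? pr = idealprimedec(nf, 5)[1];       \\ a prime above 5
? modP = nfmodprinit(nf, pr);
? nfkermodpr(nf, [1, 1; 1, 1], modP)  \\ kernel of a singular matrix mod pr
@eprog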

\subsecidx{nfsubfields}$(\var{pol},\{d=0\})$: finds all subfields of degree
$d$ of the number field defined by the (monic, integral) polynomial
\var{pol} (all subfields if $d$ is zero or omitted). The result is a vector
of subfields, each being given by $[g,h]$, where $g$ is an absolute equation
and $h$ expresses one of the roots of $g$ in terms of the root $x$ of the
polynomial defining $\var{nf}$. This routine uses J.~Kl\"uners's algorithm
in the general case, and B.~Allombert's \tet{galoissubfields} when \var{nf}
is Galois (with weakly supersolvable Galois group).\sidx{Galois}\sidx{subfield}
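
For instance, for the field $\Q(\zeta_8)$ defined by $x^4+1$ (a sketch; each
entry of the result is a pair $[g,h]$ as described above):
\bprog
? #nfsubfields(x^4 + 1)    \\ expect 5: Q, three quadratic subfields, the field itself
? nfsubfields(x^4 + 1, 2)  \\ the three quadratic subfields only
@eprog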

\syn{subfields}{\var{nf},d}.

\subsecidx{nfroots}$(\{\var{nf}\},x)$: roots of the polynomial $x$ in the
number field $\var{nf}$ given by \kbd{nfinit} without multiplicity (in $\Q$
if $\var{nf}$ is omitted). $x$ has coefficients in the number field (scalar,
polmod, polynomial, column vector). The main variable of $\var{nf}$ must be
of lower priority than that of $x$ (see \secref{se:priority}). However if the
coefficients of the number field occur explicitly (as polmods) as
coefficients of $x$, the variable of these polmods \emph{must} be the same as
the main variable of $\var{nf}$ (see \kbd{nffactor}).
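
For instance (a sketch; the roots are returned as elements of the field,
typically as polmods modulo \kbd{\var{nf}.pol}):
\bprog
? nf = nfinit(y^2 - 2);  \\ y has lower priority than x, as required
? nfroots(nf, x^2 - 2)   \\ the two square roots of 2, i.e. y and -y
? nfroots(, x^2 - 2)     \\ over Q: no roots, expect []
@eprog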

\syn{nfroots}{\var{nf},x}.

\subsecidx{nfrootsof1}$(\var{nf})$: computes the number of roots of unity
$w$ and a primitive $w$-th root of unity (expressed on the integral basis)
belonging to the number field $\var{nf}$. The result is a two-component
vector $[w,z]$ where $z$ is a column vector expressing a primitive $w$-th
root of unity on the integral basis \kbd{\var{nf}.zk}.
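
For example, in the cyclotomic field $\Q(\zeta_5)$ one expects $w = 10$
(a sketch):
\bprog
? nf = nfinit(polcyclo(5));
? nfrootsof1(nf)   \\ expect [10, z], z a primitive 10th root of unity on nf.zk
@eprog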

\syn{rootsof1}{\var{nf}}.

\subsecidx{nfsnf}$(\var{nf},x)$: given a torsion module $x$ as a 3-component
row
vector $[A,I,J]$ where $A$ is a square invertible $n\times n$ matrix, $I$ and
$J$ are two ideal lists, outputs an ideal list $d_1,\dots,d_n$ which is the
\idx{Smith normal form} of $x$. In other words, $x$ is isomorphic to
$\Z_K/d_1\oplus\cdots\oplus\Z_K/d_n$ and $d_i$ divides $d_{i-1}$ for $i\ge2$.
The link between $x$ and $[A,I,J]$ is as follows: if $e_i$ is the canonical
basis of $K^n$, $I=[b_1,\dots,b_n]$ and $J=[a_1,\dots,a_n]$, then $x$ is
isomorphic to
$$ (b_1e_1\oplus\cdots\oplus b_ne_n) / (a_1A_1\oplus\cdots\oplus a_nA_n)
\enspace, $$
where the $A_j$ are the columns of the matrix $A$. Note that every finitely
generated torsion module can be given in this way, and even with $b_i=\Z_K$
for all $i$.

\syn{nfsmith}{\var{nf},x}.

\subsecidx{nfsolvemodpr}$(\var{nf},a,b,\var{pr})$: solution of $a\cdot x = b$
in $\Z_K/\var{pr}$, where $a$ is a matrix and $b$ a column vector, and where
\var{pr} is in \key{modpr} format (see \kbd{nfmodprinit}).

\syn{nfsolvemodpr}{\var{nf},a,b,\var{pr}}.

\subsecidx{polcompositum}$(P,Q,\{\fl=0\})$:\sidx{compositum} $P$ and $Q$
being squarefree polynomials in $\Z[X]$ in the same variable, outputs
the simple factors of the \'etale $\Q$-algebra $A = \Q(X, Y) / (P(X), Q(Y))$.
The factors are given by a list of polynomials $R$ in $\Z[X]$, each associated
to the number field $\Q[X]/(R)$, and sorted by increasing degree (with respect
to lexicographic ordering for factors of equal degrees). Returns an error if
one of the polynomials is not squarefree.

Note that it is more efficient to reduce to the case where $P$ and $Q$ are
irreducible first. The routine will not perform this for you, since it may be
expensive, and the inputs are irreducible in most applications anyway.
Assuming $P$ is irreducible (of smaller degree than $Q$ for efficiency), it
is in general \emph{much} faster to proceed as follows
\bprog
   nf = nfinit(P); L = nffactor(nf, Q)[,1];
   vector(#L, i, rnfequation(nf, L[i]))
@eprog\noindent
to obtain the same result. If you are only interested in the degrees of the
simple factors, the \kbd{rnfequation} instruction can be replaced by a
trivial \kbd{poldegree(P) * poldegree(L[i])}.

If $\fl=1$, outputs a vector of 4-component vectors $[R,a,b,k]$, where $R$
ranges through the list of all possible compositums as above, and $a$
(resp. $b$) expresses the root of $P$ (resp. $Q$) as an element of
$\Q(X)/(R)$. Finally, $k$ is a small integer such that $b + ka = X$ modulo
$R$.

A compositum is quite often defined by a complicated polynomial, which it is
advisable to reduce before further work. Here is a simple example involving
the field $\Q(\zeta_5, 5^{1/5})$:
\bprog
? z = polcompositum(x^5 - 5, polcyclo(5), 1)[1];
? pol = z[1]                 \\@com \kbd{pol} defines the compositum
%2 = x^20 + 5*x^19 + 15*x^18 + 35*x^17 + 70*x^16 + 141*x^15 + 260*x^14 \
  + 355*x^13 + 95*x^12 - 1460*x^11 - 3279*x^10 - 3660*x^9 - 2005*x^8    \
  + 705*x^7 + 9210*x^6 + 13506*x^5 + 7145*x^4 - 2740*x^3 + 1040*x^2     \
  - 320*x + 256
? a = z[2]; a^5 - 5          \\@com \kbd{a} is a fifth root of $5$
%3 = 0
? z = polredabs(pol, 1);     \\@com look for a simpler polynomial
? pol = z[1]
%5 = x^20 + 25*x^10 + 5
? a = subst(a.pol, x, z[2])  \\@com \kbd{a} in the new coordinates
%6 = Mod(-5/22*x^19 + 1/22*x^14 - 123/22*x^9 + 9/11*x^4, x^20 + 25*x^10 + 5)
@eprog

\syn{polcompositum0}{P,Q,\fl}.

\subsecidx{polgalois}$(x)$: \idx{Galois} group of the non-constant
polynomial $x\in\Q[X]$. In the present version \vers, $x$ must be irreducible
and the degree of $x$ must be less than or equal to 7. On certain versions for
which the data file of Galois resolvents has been installed (available in the
Unix distribution as a separate package), degrees 8, 9, 10 and 11 are also
implemented.

The output is a 4-component vector $[n,s,k,name]$ with the
following meaning: $n$ is the cardinality of the group, $s$ is its signature
($s=1$ if the group is a subgroup of the alternating group $A_n$, $s=-1$
otherwise) and name is a character string containing the name of the
transitive group according to the GAP 4 transitive groups library by Alexander
Hulpke.

$k$ is more arbitrary and the choice made up to version~2.2.3 of PARI is rather
unfortunate: for $n > 7$, $k$ is the numbering of the group among all
transitive subgroups of $S_n$, as given in ``The transitive groups of degree up
to eleven'', G.~Butler and J.~McKay, \emph{Communications in Algebra}, vol.~11,
1983,
pp.~863--911 (group $k$ is denoted $T_k$ there). And for $n \leq 7$, it was ad
hoc, so as to ensure that a given triple would designate a unique group.
Specifically, for polynomials of degree $\leq 7$, the groups are coded as
follows, using standard notation:
\smallskip
In degree 1: $S_1=[1,1,1]$.
\smallskip
In degree 2: $S_2=[2,-1,1]$.
\smallskip
In degree 3: $A_3=C_3=[3,1,1]$, $S_3=[6,-1,1]$.
\smallskip
In degree 4: $C_4=[4,-1,1]$, $V_4=[4,1,1]$, $D_4=[8,-1,1]$, $A_4=[12,1,1]$,
$S_4=[24,-1,1]$.
\smallskip
In degree 5: $C_5=[5,1,1]$, $D_5=[10,1,1]$, $M_{20}=[20,-1,1]$,
 $A_5=[60,1,1]$, $S_5=[120,-1,1]$.
\smallskip
In degree 6: $C_6=[6,-1,1]$, $S_3=[6,-1,2]$, $D_6=[12,-1,1]$, $A_4=[12,1,1]$,
$G_{18}=[18,-1,1]$, $S_4^-=[24,-1,1]$, $A_4\times C_2=[24,-1,2]$,
$S_4^+=[24,1,1]$, $G_{36}^-=[36,-1,1]$, $G_{36}^+=[36,1,1]$,
$S_4\times C_2=[48,-1,1]$, $A_5=PSL_2(5)=[60,1,1]$, $G_{72}=[72,-1,1]$,
$S_5=PGL_2(5)=[120,-1,1]$, $A_6=[360,1,1]$, $S_6=[720,-1,1]$.
\smallskip
In degree 7: $C_7=[7,1,1]$, $D_7=[14,-1,1]$, $M_{21}=[21,1,1]$,
$M_{42}=[42,-1,1]$, $PSL_2(7)=PSL_3(2)=[168,1,1]$, $A_7=[2520,1,1]$,
$S_7=[5040,-1,1]$.
\smallskip
This is deprecated and obsolete, but for reasons of backward compatibility,
we cannot change this behaviour yet. So you can use the default
\tet{new_galois_format} to switch to a consistent naming scheme, namely $k$ is
always the standard numbering of the group among all transitive subgroups of
$S_n$. If this default is in effect, the above groups will be coded as:
\smallskip
In degree 1: $S_1=[1,1,1]$.
\smallskip
In degree 2: $S_2=[2,-1,1]$.
\smallskip
In degree 3: $A_3=C_3=[3,1,1]$, $S_3=[6,-1,2]$.
\smallskip
In degree 4: $C_4=[4,-1,1]$, $V_4=[4,1,2]$, $D_4=[8,-1,3]$, $A_4=[12,1,4]$,
$S_4=[24,-1,5]$.
\smallskip
In degree 5: $C_5=[5,1,1]$, $D_5=[10,1,2]$, $M_{20}=[20,-1,3]$,
 $A_5=[60,1,4]$, $S_5=[120,-1,5]$.
\smallskip
In degree 6: $C_6=[6,-1,1]$, $S_3=[6,-1,2]$, $D_6=[12,-1,3]$, $A_4=[12,1,4]$,
$G_{18}=[18,-1,5]$, $A_4\times C_2=[24,-1,6]$, $S_4^+=[24,1,7]$,
$S_4^-=[24,-1,8]$, $G_{36}^-=[36,-1,9]$, $G_{36}^+=[36,1,10]$,
$S_4\times C_2=[48,-1,11]$, $A_5=PSL_2(5)=[60,1,12]$, $G_{72}=[72,-1,13]$,
$S_5=PGL_2(5)=[120,-1,14]$, $A_6=[360,1,15]$, $S_6=[720,-1,16]$.
\smallskip
In degree 7: $C_7=[7,1,1]$, $D_7=[14,-1,2]$, $M_{21}=[21,1,3]$,
$M_{42}=[42,-1,4]$, $PSL_2(7)=PSL_3(2)=[168,1,5]$, $A_7=[2520,1,6]$,
$S_7=[5040,-1,7]$.
\smallskip
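
For instance (a sketch; the group name string may differ slightly depending on
the version and on whether the resolvent data files are installed):
\bprog
? polgalois(x^4 - 2)      \\ dihedral of order 8: expect [8, -1, ...]
? polgalois(x^5 - x - 1)  \\ generic quintic: expect the full S_5, of order 120
@eprog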

\misctitle{Warning:} The method used is that of resolvent polynomials and is
sensitive to the current precision. The precision is updated internally but,
in very rare cases, a wrong result may be returned if the initial precision
was not sufficient.

\syn{polgalois}{x,\var{prec}}. To enable the new format in library mode,
set the global variable \tet{new_galois_format} to $1$.

\subsecidx{polred}$(x,\{\fl=0\},\{fa\})$: finds polynomials with reasonably
small coefficients defining subfields of the number field defined by $x$.
One of the polynomials always defines $\Q$ (hence is equal to $x-1$),
and another always defines the same number field as $x$ if $x$ is irreducible.
All $x$ accepted by \tet{nfinit} are also allowed here (e.g. non-monic
polynomials, \kbd{nf}, \kbd{bnf}, \kbd{[x,Z\_K\_basis]}).

The following binary digits of $\fl$ are significant:

1: possibly use a suborder of the maximal order. The primes dividing the
index of the order chosen are larger than \tet{primelimit} or divide integers
stored in the \tet{addprimes} table.

2: gives also elements. The result is a two-column matrix, the first column
giving the elements defining these subfields, the second giving the
corresponding minimal polynomials.

If $fa$ is given, it is assumed that it is the two-column matrix of the
factorization of the discriminant of the polynomial $x$.
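
A small example (a sketch; the exact list returned may depend on the version):
\bprog
? polred(x^2 + 20)     \\ expect x - 1 and a simpler quadratic such as x^2 + 5
? polred(x^4 - 12, 2)  \\ two-column matrix: elements and their minimal polynomials
@eprog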

\syn{polred0}{x,\fl,fa}, where an omitted $fa$ is coded by \kbd{NULL}. Also
available are \funs{polred}{x} and \funs{factoredpolred}{x,fa}, both
corresponding to $\fl=0$.

\subsecidx{polredabs}$(x,\{\fl=0\})$: finds one of the polynomials defining
the same number field as the one defined by $x$, and such that the sum of the
squares of the modulus of the roots (i.e.~the $T_2$-norm) is minimal.
All $x$ accepted by \tet{nfinit} are also allowed here (e.g. non-monic
polynomials, \kbd{nf}, \kbd{bnf}, \kbd{[x,Z\_K\_basis]}).

\misctitle{Warning:} this routine uses an exponential-time algorithm to
enumerate all potential generators, and may be exceedingly slow when the
number field has many subfields, hence a lot of elements of small $T_2$-norm.
E.g. do not try it on the compositum of many quadratic fields, use
\tet{polred} instead.

The binary digits of $\fl$ mean

1: outputs a two-component row vector $[P,a]$, where $P$ is the default
output and $a$ is an element expressed on a root of the polynomial $P$,
whose minimal polynomial is equal to $x$.

4: gives \emph{all} polynomials of minimal $T_2$ norm (of the two polynomials
$P(x)$ and $P(-x)$, only one is given).

16: possibly use a suborder of the maximal order. The primes dividing the
index of the order chosen are larger than \tet{primelimit} or divide integers
stored in the \tet{addprimes} table. In that case it may happen that the
output polynomial does not have minimal $T_2$ norm.\label{se:polredabs}
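
For example (a sketch, with the expected results indicated in the comments):
\bprog
? polredabs(x^2 - 12)     \\ expect x^2 - 3
? polredabs(x^2 - 12, 1)  \\ expect [x^2 - 3, a], a with minimal polynomial x^2 - 12
@eprog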

\syn{polredabs0}{x,\fl}.

\subsecidx{polredord}$(x)$: finds polynomials with reasonably small
coefficients and of the same degree as that of $x$ defining suborders of the
order defined by $x$. One of the polynomials always defines $\Q$ (hence
is equal to $(x-1)^n$, where $n$ is the degree), and another always defines
the same order as $x$ if $x$ is irreducible.

\syn{ordred}{x}.

\subsecidx{poltschirnhaus}$(x)$:  applies a random Tschirnhausen
transformation to the polynomial $x$, which is assumed to be non-constant
and separable, so as to obtain a new equation for the \'etale algebra
defined by $x$. This is for instance useful when computing resolvents,
hence is used by the \kbd{polgalois} function.

\syn{tschirnhaus}{x}.

\subsecidx{rnfalgtobasis}$(\var{rnf},x)$:  expresses $x$ on the relative
integral basis. Here, $\var{rnf}$ is a relative number field extension $L/K$
as output by \kbd{rnfinit}, and $x$ an element of $L$ in absolute form, i.e.
expressed as a polynomial or polmod with polmod coefficients, \emph{not} on
the relative integral basis.

\syn{rnfalgtobasis}{\var{rnf},x}.

\subsecidx{rnfbasis}$(\var{bnf}, M)$: let $K$ be the field represented by
\var{bnf}, as output by \kbd{bnfinit}. $M$ is a projective $\Z_K$-module
given by a pseudo-basis, as output by \kbd{rnfhnfbasis}. The routine returns
either a true $\Z_K$-basis of $M$ if it exists, or an $n+1$-element
generating set of $M$ if not, where $n$ is the rank of $M$ over $K$.
(Note that $n$ is the size of the pseudo-basis.)

It is allowed to use a polynomial $P$ with coefficients in $K$ instead of $M$,
in which case, $M$ is defined as the ring of integers of $K[X]/(P)$
($P$ is assumed irreducible over $K$), viewed as a $\Z_K$-module.

\syn{rnfbasis}{\var{bnf},x}.

\subsecidx{rnfbasistoalg}$(\var{rnf},x)$: computes the representation of $x$
as a polmod with polmod coefficients. Here, $\var{rnf}$ is a relative number
field extension $L/K$ as output by \kbd{rnfinit}, and $x$ an element of
$L$ expressed on the relative integral basis.

\syn{rnfbasistoalg}{\var{rnf},x}.

\subsecidx{rnfcharpoly}$(\var{nf},T,a,\{v=x\})$: characteristic polynomial of
$a$ over $\var{nf}$, where $a$ belongs to the algebra defined by $T$ over
$\var{nf}$, i.e.~$\var{nf}[X]/(T)$. Returns a polynomial in variable $v$
($x$ by default).
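
For instance, over $K = \Q(\sqrt 2)$ (a sketch; the coefficients of the result
are given as polmods modulo \kbd{\var{nf}.pol}):
\bprog
? nf = nfinit(y^2 - 2);
? rnfcharpoly(nf, x^2 - y, x + 1)  \\ expect x^2 - 2*x + (1 - y)
@eprog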

\syn{rnfcharpoly}{\var{nf},T,a,v}, where $v$ is a variable number.

\subsecidx{rnfconductor}$(\var{bnf},\var{pol},\{\fl=0\})$: given $\var{bnf}$
as output by \kbd{bnfinit}, and \var{pol} a relative polynomial defining an
\idx{Abelian extension}, computes the class field theory conductor of this
Abelian extension. The result is a 3-component vector
$[\var{conductor},\var{rayclgp},\var{subgroup}]$, where \var{conductor} is
the conductor of the extension given as a 2-component row vector
$[f_0,f_\infty]$, \var{rayclgp} is the full ray class group corresponding to
the conductor given as a 3-component vector [h,cyc,gen] as usual for a group,
and \var{subgroup} is a matrix in HNF defining the subgroup of the ray class
group on the given generators gen. If $\fl$ is non-zero, check that \var{pol}
indeed defines an Abelian extension, and return 0 if it does not.

\syn{rnfconductor}{\var{rnf},\var{pol},\fl}.

\subsecidx{rnfdedekind}$(\var{nf},\var{pol},\var{pr})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} and a polynomial \var{pol} with
coefficients in $\var{nf}$ defining a relative extension $L$ of $\var{nf}$,
evaluates the relative \idx{Dedekind} criterion over the order defined by a
root of \var{pol} for the prime ideal \var{pr} and outputs a 3-component
vector as the result. The first component is a flag equal to 1 if the
enlarged order could be proven to be \var{pr}-maximal and to 0 otherwise (it
may be maximal in the latter case if \var{pr} is ramified in $L$), the second
component is a pseudo-basis of the enlarged order and the third component is
the valuation at \var{pr} of the order discriminant.

\syn{rnfdedekind}{\var{nf},\var{pol},\var{pr}}.

\subsecidx{rnfdet}$(\var{nf},M)$: given a pseudo-matrix $M$ over the maximal
order of $\var{nf}$, computes its determinant.

\syn{rnfdet}{\var{nf},M}.

\subsecidx{rnfdisc}$(\var{nf},\var{pol})$: given a number field $\var{nf}$ as
output by \kbd{nfinit} and a polynomial \var{pol} with coefficients in
$\var{nf}$ defining a relative extension $L$ of $\var{nf}$, computes the
relative discriminant of $L$. This is a two-element row vector $[D,d]$, where
$D$ is the relative ideal discriminant and $d$ is the relative discriminant
considered as an element of $\var{nf}^*/{\var{nf}^*}^2$. The main variable of
$\var{nf}$ \emph{must} be of lower priority than that of \var{pol}, see
\secref{se:priority}.

\syn{rnfdiscf}{\var{bnf},\var{pol}}.

\subsecidx{rnfeltabstorel}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an
element of $L$ expressed as a polynomial modulo the absolute equation
\kbd{\var{rnf}.pol}, computes $x$ as an element of the relative extension
$L/K$ as a polmod with polmod coefficients.

\syn{rnfelementabstorel}{\var{rnf},x}.

\subsecidx{rnfeltdown}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$L$ expressed as a polynomial or polmod with polmod coefficients, computes
$x$ as an element of $K$ as a polmod, assuming $x$ is in $K$ (otherwise an
error will occur). If $x$ is given on the relative integral basis, apply
\kbd{rnfbasistoalg} first, otherwise PARI will believe you are dealing with a
vector.

\syn{rnfelementdown}{\var{rnf},x}.

\subsecidx{rnfeltreltoabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an
element of $L$ expressed as a polynomial or polmod with polmod
coefficients, computes $x$ as an element of the absolute extension $L/\Q$ as
a polynomial modulo the absolute equation \kbd{\var{rnf}.pol}. If $x$ is
given on the relative integral basis, apply \kbd{rnfbasistoalg} first,
otherwise PARI will believe you are dealing with a vector.

\syn{rnfelementreltoabs}{\var{rnf},x}.

\subsecidx{rnfeltup}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an element of
$K$ expressed as a polynomial or polmod, computes $x$ as an element of the
absolute extension $L/\Q$ as a polynomial modulo the absolute equation
\kbd{\var{rnf}.pol}. If $x$ is given on the integral basis of $K$, apply
\kbd{nfbasistoalg} first, otherwise PARI will believe you are dealing with a
vector.

\syn{rnfelementup}{\var{rnf},x}.

\subsecidx{rnfequation}$(\var{nf},\var{pol},\{\fl=0\})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} (or simply a polynomial) and a
polynomial \var{pol} with coefficients in $\var{nf}$ defining a relative
extension $L$ of $\var{nf}$, computes the absolute equation of $L$ over
$\Q$.

  If $\fl$ is non-zero, outputs a 3-component row vector $[z,a,k]$, where
$z$ is the absolute equation of $L$ over $\Q$, as in the default behaviour,
$a$ expresses as an element of $L$ a root $\alpha$ of the polynomial
defining the base field $\var{nf}$, and $k$ is a small integer such that
$\theta = \beta+k\alpha$ where $\theta$ is a root of $z$ and $\beta$ a root
of $\var{pol}$.

  The main variable of $\var{nf}$ \emph{must} be of lower priority than that
of \var{pol} (see \secref{se:priority}). Note that for efficiency, this does
not check whether the relative equation is irreducible over $\var{nf}$, but
only if it is squarefree. If it is reducible but squarefree, the result will
be the absolute equation of the \'etale algebra defined by \var{pol}. If
\var{pol} is not squarefree, an error message will be issued.
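
For instance (a sketch, with the expected results indicated in the comments):
\bprog
? rnfequation(y^2 - 2, x^2 - y)     \\ Q(2^(1/4)) over Q(sqrt(2)): expect x^4 - 2
? rnfequation(y^2 - 2, x^2 - y, 1)  \\ expect [x^4 - 2, Mod(x^2, x^4 - 2), 0]
@eprog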

\syn{rnfequation0}{\var{nf},\var{pol},\fl}.

\subsecidx{rnfhnfbasis}$(\var{bnf},x)$: given $\var{bnf}$ as output by
\kbd{bnfinit}, and either a polynomial $x$ with coefficients in $\var{bnf}$
defining a relative extension $L$ of $\var{bnf}$, or a pseudo-basis $x$ of
such an extension, gives a true $\var{bnf}$-basis of $L$ in upper
triangular Hermite normal form if it exists, and returns $0$ otherwise.

\syn{rnfhnfbasis}{\var{nf},x}.

\subsecidx{rnfidealabstorel}$(\var{rnf},x)$: let $\var{rnf}$ be a relative
number field extension $L/K$ as output by \kbd{rnfinit}, and $x$ an ideal of
the absolute extension $L/\Q$ given by a $\Z$-basis of elements of $L$.
Returns the relative pseudo-matrix in HNF giving the ideal $x$ considered as
an ideal of the relative extension $L/K$.

If $x$ is an ideal in HNF form, associated to an \var{nf} structure, for
instance as output by $\tet{idealhnf}(\var{nf},\dots)$,
use \kbd{rnfidealabstorel(rnf, nf.zk * x)} to convert it to a relative ideal.

\syn{rnfidealabstorel}{\var{rnf},x}.\label{se:rnfidealabstorel}

\subsecidx{rnfidealdown}$(\var{rnf},x)$: let $\var{rnf}$ be a relative number
field extension $L/K$ as output by \kbd{rnfinit}, and $x$ an ideal of
$L$, given either in relative form or by a $\Z$-basis of elements of $L$
(see \secref{se:rnfidealabstorel}), returns the ideal of $K$ below $x$,
i.e.~the intersection of $x$ with $K$.

\syn{rnfidealdown}{\var{rnf},x}.

\subsecidx{rnfidealhnf}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a relative
ideal (which can be, as in the absolute case, of many different types,
including of course elements), computes the HNF pseudo-matrix associated to
$x$, viewed as a $\Z_K$-module.

\syn{rnfidealhermite}{\var{rnf},x}.

\subsecidx{rnfidealmul}$(\var{rnf},x,y)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ and $y$ being ideals
of the relative extension $L/K$ given by pseudo-matrices, outputs the ideal
product, again as a relative ideal.

\syn{rnfidealmul}{\var{rnf},x,y}.

\subsecidx{rnfidealnormabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a
relative ideal (which can be, as in the absolute case, of many different
types, including of course elements), computes the norm of the ideal $x$
considered as an ideal of the absolute extension $L/\Q$. This is identical to
\kbd{idealnorm(rnfidealnormrel(\var{rnf},x))}, but faster.

\syn{rnfidealnormabs}{\var{rnf},x}.

\subsecidx{rnfidealnormrel}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a
relative ideal (which can be, as in the absolute case, of many different
types, including of course elements), computes the relative norm of $x$ as an
ideal of $K$ in HNF.

\syn{rnfidealnormrel}{\var{rnf},x}.

\subsecidx{rnfidealreltoabs}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being a
relative ideal, gives the ideal $x\Z_L$ as an absolute ideal of $L/\Q$, in
the form of a $\Z$-basis, given by a vector of polynomials (modulo
\kbd{rnf.pol}).
The following routine might be useful:
\bprog
    \\ return y = rnfidealreltoabs(rnf,...) as an ideal in HNF form
    \\ associated to nf = nfinit( rnf.pol );
    idealgentoHNF(nf, y) = mathnf( Mat( nfalgtobasis(nf, y) ) );
@eprog

\syn{rnfidealreltoabs}{\var{rnf},x}.

\subsecidx{rnfidealtwoelt}$(\var{rnf},x)$: $\var{rnf}$ being a relative
number field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an
ideal of the relative extension $L/K$ given by a pseudo-matrix, gives a
vector of two generators of $x$ over $\Z_L$ expressed as polmods with polmod
coefficients.

\syn{rnfidealtwoelement}{\var{rnf},x}.

\subsecidx{rnfidealup}$(\var{rnf},x)$: $\var{rnf}$ being a relative number
field extension $L/K$ as output by \kbd{rnfinit} and $x$ being an ideal of
$K$, gives the ideal $x\Z_L$ as an absolute ideal of $L/\Q$, in the form of a
$\Z$-basis, given by a vector of polynomials (modulo \kbd{rnf.pol}).
The following routine might be useful:
\bprog
    \\ return y = rnfidealup(rnf,...) as an ideal in HNF form
    \\ associated to nf = nfinit( rnf.pol );
    idealgentoHNF(nf, y) = mathnf( Mat( nfalgtobasis(nf, y) ) );
@eprog

\syn{rnfidealup}{\var{rnf},x}.

\subsecidx{rnfinit}$(\var{nf},\var{pol})$: $\var{nf}$ being a number field in
\kbd{nfinit}
format considered as base field, and \var{pol} a polynomial defining a relative
extension over $\var{nf}$, this computes all the necessary data to work in the
relative extension. The main variable of \var{pol} must be of higher priority
(see \secref{se:priority}) than that of $\var{nf}$, and the coefficients of
\var{pol} must be in $\var{nf}$.
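
For instance, to build $L = K(2^{1/4})$ over $K = \Q(\sqrt 2)$ (a sketch):
\bprog
? nf  = nfinit(y^2 - 2);       \\ base field K, in variable y
? rnf = rnfinit(nf, x^2 - y);  \\ x has higher priority than y, as required
? rnf.pol                      \\ absolute equation: expect x^4 - 2
@eprog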

The result is a row vector, whose components are technical. In the following
description, we let $K$ be the base field defined by $\var{nf}$, $m$ the
degree of the base field, $n$ the relative degree, $L$ the large field (of
relative degree $n$ or absolute degree $nm$), $r_1$ and $r_2$ the number of
real and complex places of $K$.

$\var{rnf}[1]$ contains the relative polynomial \var{pol}.

$\var{rnf}[2]$ is currently unused.

$\var{rnf}[3]$ is a two-component row vector $[\goth{d}(L/K),s]$ where
$\goth{d}(L/K)$ is the relative ideal discriminant of $L/K$ and $s$ is the
discriminant of $L/K$ viewed as an element of $K^*/(K^*)^2$, in other words
it is the output of \kbd{rnfdisc}.

$\var{rnf}[4]$ is the ideal index $\goth{f}$, i.e.~such that
$d(pol)\Z_K=\goth{f}^2\goth{d}(L/K)$.

$\var{rnf}[5]$ is currently unused.

$\var{rnf}[6]$ is currently unused.

$\var{rnf}[7]$ is a two-component row vector, where the first component is
the relative integral pseudo-basis expressed as polynomials (in the variable of
\var{pol}) with polmod coefficients in $\var{nf}$, and the second component is
the ideal list of the pseudo-basis in HNF.

$\var{rnf}[8]$ is the inverse matrix of the integral basis matrix, with
coefficients polmods in $\var{nf}$.

$\var{rnf}[9]$ is currently unused.

$\var{rnf}[10]$ is $\var{nf}$.

$\var{rnf}[11]$ is the output of \kbd{rnfequation(nf, pol, 1)}. Namely, a
vector \var{vabs} with 3 entries describing the \emph{absolute} extension
$L/\Q$. $\var{vabs}[1]$ is an absolute equation, more conveniently obtained
as \kbd{rnf.pol}. $\var{vabs}[2]$ expresses the generator $\alpha$ of the
number field $\var{nf}$ as a polynomial modulo the absolute equation
$\var{vabs}[1]$. $\var{vabs}[3]$ is a small integer $k$ such that, if $\beta$
is an abstract root of \var{pol} and $\alpha$ the generator of $\var{nf}$,
the generator of the absolute extension (a root of $\var{vabs}[1]$) is
$\beta + k \alpha$. Note that one must be very careful if $k\neq0$ when
dealing simultaneously with absolute and relative quantities since the
generator chosen for the absolute extension is not the same as for the
relative one. If this happens, one can of course go on working, but we
strongly advise changing the relative polynomial so that its root becomes
$\beta + k \alpha$. Typically, the GP instruction would be

\kbd{pol = subst(pol, x, x - k*Mod(y,\var{nf}.pol))}

$\var{rnf}[12]$ is by default unused and set equal to 0. This
field is used to store further information about the field as it becomes
available (which is rarely needed, hence would be too expensive to compute
during the initial \kbd{rnfinit} call).

\syn{rnfinitalg}{\var{nf},\var{pol},\var{prec}}.

\subsecidx{rnfisfree}$(\var{bnf},x)$: given $\var{bnf}$ as output by
\kbd{bnfinit}, and either a polynomial $x$ with coefficients in $\var{bnf}$
defining a relative extension $L$ of $\var{bnf}$, or a pseudo-basis $x$ of
such an extension, returns true (1) if $L/\var{bnf}$ is free, false (0) if
not.

\syn{rnfisfree}{\var{bnf},x}, and the result is a \kbd{long}.

\subsecidx{rnfisnorm}$(T,a,\{\fl=0\})$: similar to
\kbd{bnfisnorm} but in the relative case. $T$ is as output by
\tet{rnfisnorminit} applied to the extension $L/K$. This tries to decide
whether the element $a$ in $K$ is the norm of some $x$ in the extension
$L/K$.

The output is a vector $[x,q]$, where $a = \Norm(x)*q$. The
algorithm looks for a solution $x$ which is an $S$-integer, with $S$ a list
of places of $K$ containing at least the ramified primes, the generators of
the class group of $L$, as well as those primes dividing $a$. If $L/K$ is
Galois, then this is enough; otherwise, $\fl$ is used to add more primes to
$S$: all the places above the primes $p \leq \fl$ (resp.~$p|\fl$) if $\fl>0$
(resp.~$\fl<0$).

The answer is guaranteed (i.e.~$a$ is a norm iff $q = 1$) if the field is
Galois, or, under \idx{GRH}, if $S$ contains all primes less than
$12\log^2\left|\disc(M)\right|$, where $M$ is the normal
closure of $L/K$.

If \tet{rnfisnorminit} has determined (or was told) that $L/K$ is
\idx{Galois}, and $\fl \neq 0$, a Warning is issued (so that you can set
$\fl = 1$ to check whether $L/K$ is known to be Galois, according to $T$).
Example:

\bprog
bnf = bnfinit(y^3 + y^2 - 2*y - 1);
p = x^2 + Mod(y^2 + 2*y + 1, bnf.pol);
T = rnfisnorminit(bnf, p);
rnfisnorm(T, 17)
@eprog\noindent
checks whether $17$ is a norm in the Galois extension $\Q(\beta) /
\Q(\alpha)$, where $\alpha^3 + \alpha^2 - 2\alpha - 1 = 0$ and $\beta^2 +
\alpha^2 + 2\alpha + 1 = 0$ (it is).

\syn{rnfisnorm}{\var{T},x,\fl}.

\subsecidx{rnfisnorminit}$(\var{pol},\var{polrel},\{\fl=2\})$:
let $K$ be defined by a root of \var{pol}, and $L/K$ the extension defined by
the polynomial \var{polrel}. As usual, \var{pol} can in fact be an \var{nf},
or \var{bnf}, etc; if \var{pol} has degree $1$ (the base field is $\Q$),
\var{polrel} is also allowed to be an \var{nf}, etc. Computes technical data needed
by \tet{rnfisnorm} to solve norm equations $Nx = a$, for $x$ in $L$, and $a$
in $K$.

If $\fl = 0$, do not care whether $L/K$ is Galois or not.

If $\fl = 1$, $L/K$ is assumed to be Galois (unchecked), which speeds up
\tet{rnfisnorm}.

If $\fl = 2$, let the routine determine whether $L/K$ is Galois.

\syn{rnfisnorminit}{\var{pol},\var{polrel},\fl}.

\subsecidx{rnfkummer}$(\var{bnr},\{\var{subgroup}\},\{deg=0\})$: \var{bnr}
being as output by \kbd{bnrinit}, finds a relative equation for the
class field corresponding to the module in \var{bnr} and the given
congruence subgroup (the full ray class field if \var{subgroup} is omitted).
If \var{deg} is positive, outputs the list of all relative equations of
degree \var{deg} contained in the ray class field defined by \var{bnr}, with
the \emph{same} conductor as $(\var{bnr}, \var{subgroup})$.

\misctitle{Warning:} this routine only works for subgroups of prime index. It
uses Kummer theory, adjoining necessary roots of unity (it needs to compute a
tough \kbd{bnfinit} here), and finds a generator via Hecke's characterization
of ramification in Kummer extensions of prime degree. If your extension does
not have prime degree, for the time being, you have to split it by hand as a
tower / compositum of such extensions.

\syn{rnfkummer}{\var{bnr},\var{subgroup},\var{deg},\var{prec}}, where
\var{deg} is a \kbd{long} and an omitted \var{subgroup} is coded as
\kbd{NULL}.

\subsecidx{rnflllgram}$(\var{nf},\var{pol},\var{order})$: given a polynomial
\var{pol} with coefficients in \var{nf} defining a relative extension $L$ and
a suborder \var{order} of $L$ (of maximal rank), as output by
\kbd{rnfpseudobasis}$(\var{nf},\var{pol})$ or similar, gives
$[[\var{neworder}],U]$, where \var{neworder} is a reduced order and $U$ is
the unimodular transformation matrix.

\syn{rnflllgram}{\var{nf},\var{pol},\var{order},\var{prec}}.

\subsecidx{rnfnormgroup}$(\var{bnr},\var{pol})$: \var{bnr} being a big ray
class field as output by \kbd{bnrinit} and \var{pol} a relative polynomial
defining an \idx{Abelian extension}, computes the norm group (alias Artin
or Takagi group) corresponding to the Abelian extension of $\var{bnf}=bnr[1]$
defined by \var{pol}, where the module corresponding to \var{bnr} is assumed
to be a multiple of the conductor (i.e.~\var{pol} defines a subextension of
bnr). The result is the HNF defining the norm group on the given generators
of $\var{bnr}[5][3]$. Note that neither the fact that \var{pol} defines an
Abelian extension nor the fact that the module is a multiple of the conductor
is checked. The result is undefined if the assumption is not correct.

\syn{rnfnormgroup}{\var{bnr},\var{pol}}.

\subsecidx{rnfpolred}$(\var{nf},\var{pol})$: relative version of \kbd{polred}.
Given a monic polynomial \var{pol} with coefficients in $\var{nf}$, finds a
list of relative polynomials defining some subfields, hopefully simpler and
containing the original field. In the present version \vers, this is slower
and less efficient than \kbd{rnfpolredabs}.

\syn{rnfpolred}{\var{nf},\var{pol},\var{prec}}.

\subsecidx{rnfpolredabs}$(\var{nf},\var{pol},\{\fl=0\})$: relative version of
\kbd{polredabs}. Given a monic polynomial \var{pol} with coefficients in
$\var{nf}$, finds a simpler relative polynomial defining the same field. The
binary digits of $\fl$ mean

1: returns $[P,a]$ where $P$ is the default output and $a$ is an
element expressed on a root of $P$ whose characteristic polynomial is
\var{pol}.

2: returns an absolute polynomial (same as
{\tt rnfequation(\var{nf},rnfpolredabs(\var{nf},\var{pol}))}
but faster).

16: possibly use a suborder of the maximal order. This is slower than the
default when the relative discriminant is smooth, and much faster otherwise.
See \secref{se:polredabs}.

\misctitle{Remark.} In the present implementation, this is both faster and
much more efficient than \kbd{rnfpolred}, the difference being more
dramatic than in the absolute case. This is because the implementation of
\kbd{rnfpolred} is based on (a partial implementation of) an incomplete
reduction theory of lattices over number fields, the function
\kbd{rnflllgram}, which deserves to be improved.

\syn{rnfpolredabs}{\var{nf},\var{pol},\fl,\var{prec}}.

\subsecidx{rnfpseudobasis}$(\var{nf},\var{pol})$: given a number field
$\var{nf}$ as output by \kbd{nfinit} and a polynomial \var{pol} with
coefficients in $\var{nf}$ defining a relative extension $L$ of $\var{nf}$,
computes a pseudo-basis $(A,I)$ for the maximal order $\Z_L$ viewed as a
$\Z_K$-module, and the relative discriminant of $L$. This is output as a
four-element row vector $[A,I,D,d]$, where $D$ is the relative ideal
discriminant and $d$ is the relative discriminant considered as an element of
$\var{nf}^*/{\var{nf}^*}^2$.

\syn{rnfpseudobasis}{\var{nf},\var{pol}}.

\subsecidx{rnfsteinitz}$(\var{nf},x)$: given a number field $\var{nf}$ as
output by \kbd{nfinit} and either a polynomial $x$ with coefficients in
$\var{nf}$ defining a relative extension $L$ of $\var{nf}$, or a pseudo-basis
$x$ of such an extension as output for example by \kbd{rnfpseudobasis},
computes another pseudo-basis $(A,I)$ (not in HNF in general) such that all
the ideals of $I$ except perhaps the last one are equal to the ring of
integers of $\var{nf}$, and outputs the four-component row vector $[A,I,D,d]$
as in \kbd{rnfpseudobasis}. The name of this function comes from the fact
that the ideal class of the last ideal of $I$, which is well defined, is the
\idx{Steinitz class} of the $\Z_K$-module $\Z_L$ (its image in $SK_0(\Z_K)$).

\syn{rnfsteinitz}{\var{nf},x}.

\subsecidx{subgrouplist}$(\var{bnr},\{\var{bound}\},\{\fl=0\})$:
\var{bnr} being as output by \kbd{bnrinit} or a list of cyclic components
of a finite Abelian group $G$, outputs the list of subgroups of $G$. Subgroups
are given as HNF left divisors of the SNF matrix corresponding to $G$.

\misctitle{Warning:} the present implementation cannot treat a group $G$
where any cyclic factor has more than $2^{31}$, resp.~$2^{63}$ elements on a
$32$-bit, resp.~$64$-bit architecture. \tet{forsubgroup} is a bit more
general and can handle $G$ if all $p$-Sylow subgroups of $G$ satisfy the
condition above.

If $\fl=0$ (default) and \var{bnr} is as output by \kbd{bnrinit}, gives
only the subgroups whose modulus is the conductor. Otherwise, the modulus is
not taken into account.

If \var{bound} is present, and is a positive integer, restrict the output to
subgroups of index less than \var{bound}. If \var{bound} is a vector
containing a single positive integer $B$, then only subgroups of index
exactly equal to $B$ are computed. For instance
\bprog
? subgrouplist([6,2])
%1 = [[6, 0; 0, 2], [2, 0; 0, 2], [6, 3; 0, 1], [2, 1; 0, 1], [3, 0; 0, 2],
      [1, 0; 0, 2], [6, 0; 0, 1], [2, 0; 0, 1], [3, 0; 0, 1], [1, 0; 0, 1]]
? subgrouplist([6,2],3)    \\@com index less than 3
%2 = [[2, 1; 0, 1], [1, 0; 0, 2], [2, 0; 0, 1], [3, 0; 0, 1], [1, 0; 0, 1]]
? subgrouplist([6,2],[3])  \\@com index 3
%3 = [[3, 0; 0, 1]]
? bnr = bnrinit(bnfinit(x), [120,[1]], 1);
? L = subgrouplist(bnr, [8]);
@eprog\noindent
In the last example, $L$ corresponds to the 24 subfields of
$\Q(\zeta_{120})$, of degree $8$ and conductor $120\infty$ (by setting \fl,
we see there are a total of $43$ subgroups of index $8$).
\bprog
? vector(#L, i, galoissubcyclo(bnr, L[i]))
@eprog\noindent
will produce their equations. (For a general base field, you would
have to rely on \tet{bnrstark}, or \tet{rnfkummer}.)

\syn{subgrouplist0}{\var{bnr},\var{bound},\fl}, where $\fl$
is a long integer, and an omitted \var{bound} is coded by \kbd{NULL}.

\subsecidx{zetak}$(\var{znf},x,\{\fl=0\})$: \var{znf} being a number
field initialized by \kbd{zetakinit} (\emph{not} by \kbd{nfinit}),
computes the value of the \idx{Dedekind} zeta function of the number
field at the complex number $x$. If $\fl=1$ computes Dedekind $\Lambda$
function instead (i.e.~the product of the Dedekind zeta function by its gamma
and exponential factors).

\misctitle{CAVEAT.} This implementation is not satisfactory and must be
rewritten. In particular

$\bullet$ The accuracy of the result depends in an essential way on the
accuracy of both the \kbd{zetakinit} program and the current accuracy.
Be wary in particular that $x$ of large imaginary part or, on the
contrary, very close to an ordinary integer will suffer from precision
loss, yielding fewer significant digits than expected. Computing with 28
digits of relative accuracy, we have

\bprog
? zeta(3)
    %1 = 1.202056903159594285399738161
    ? zeta(3-1e-20)
    %2 = 1.202056903159594285401719424
    ? zetak(zetakinit(x), 3-1e-20)
    %3 = 1.2020569031595952919  \\ 5 digits are wrong
    ? zetak(zetakinit(x), 3-1e-28)
    %4 = -25.33411749           \\ junk
@eprog

$\bullet$ As the precision increases, results become unexpectedly
completely wrong:
\bprog
    ? \p100
    ? zetak(zetakinit(x^2-5), -1) - 1/30 
    %1 = 7.26691813 E-108    \\ perfect
    ? \p150
    ? zetak(zetakinit(x^2-5), -1) - 1/30 
    %2 = -2.486113578 E-156  \\ perfect
    ? \p200
    ? zetak(zetakinit(x^2-5), -1) - 1/30
    %3 = 4.47... E-75        \\ more than half of the digits are wrong
    ? \p250
    ? zetak(zetakinit(x^2-5), -1) - 1/30
    %4 = 1.6 E43             \\ junk
@eprog

\syn{glambdak}{\var{znf},x,\var{prec}} or
\funs{gzetak}{\var{znf},x,\var{prec}}.

\subsecidx{zetakinit}$(x)$: computes the initialization data
concerning the number field defined by the polynomial $x$ so as to be able
to compute the \idx{Dedekind} zeta and lambda functions (respectively
$\kbd{zetak}(x)$ and $\kbd{zetak}(x,1)$). This function calls in particular
the \kbd{bnfinit} program. The result is a 9-component vector $v$ whose
components are very technical and cannot really be used by the user except
through the \kbd{zetak} function. The only component of independent use is
$v[1][4]$, which is the result of the underlying \kbd{bnfinit} call (and may
be reused if such a \kbd{bnf} has not been computed already).

This function is very inefficient and should be rewritten. It needs to
compute millions of coefficients of the corresponding Dirichlet series when
the precision is large. Unless the discriminant is small it will not be able
to handle more than 9 digits of relative precision. For instance,
\kbd{zetakinit(x\pow 8 - 2)} needs 440MB of memory at default precision.

\syn{initzeta}{x}.

\section{Polynomials and power series}

We group here all functions which are specific to polynomials or power
series. Many other functions which can be applied on these objects are
described in the other sections. Also, some of the functions described here
can be applied to other types.

\subsecidx{O}$(p\hbox{\kbd{\pow}}e)$: if $p$ is an integer
greater than or equal to $2$, returns a $p$-adic $0$ of precision $e$. In all other
cases, returns a power series zero with precision given by $e v$, where $v$
is the $X$-adic valuation of $p$ with respect to its main variable.
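
For instance (a sketch):
\bprog
? O(3^4)              \\ a 3-adic zero of precision 4
? O(x^5)              \\ a power series zero of precision 5
? 1/(1 - x) + O(x^4)  \\ truncate an expansion: 1 + x + x^2 + x^3 + O(x^4)
@eprog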

\syn{zeropadic}{p,e} for a $p$-adic and \funs{zeroser}{v,e} for a
power series zero in variable $v$, which is a \kbd{long}. The precision $e$
is a \kbd{long}.

\subsecidx{deriv}$(x,\{v\})$: derivative of $x$ with respect to the main
variable if $v$ is omitted, and with respect to $v$ otherwise. The derivative
of a scalar type is zero, and the derivative of a vector or matrix is done
componentwise. One can use $x'$ as a shortcut if the derivative is with
respect to the main variable of $x$.

By definition, the main variable of a \typ{POLMOD} is the main variable among
the coefficients from its two polynomial components (representative and
modulus); in other words, assuming a polmod represents an element of
$R[X]/(T(X))$, the variable $X$ is a mute variable and the derivative is
taken with respect to the main variable used in the base ring $R$.

\syn{deriv}{x,v}, where $v$ is a \kbd{long}, and an omitted $v$ is coded as
$-1$. When $x$ is a \typ{POL}, $\tet{derivpol}(x)$ is a shortcut for
$\kbd{deriv}(x, -1)$.

\subsecidx{eval}$(x)$: replaces in $x$ the formal variables by the values that
have been assigned to them after the creation of $x$. This is mainly useful
in GP, and not in library mode. Do not confuse this with substitution (see
\kbd{subst}).

If $x$ is a character string, \kbd{eval($x$)} executes $x$ as a GP
command, as if directly input from the keyboard, and returns its
output.\label{se:eval} For convenience, $x$ is evaluated as if
\kbd{strictmatch} was off. In particular, unused characters at the end of
$x$ do not prevent its evaluation:
\bprog
    ? eval("1a")
    %1 = 1
@eprog

\syn{geval}{x}. The more basic functions \funs{poleval}{q,x},
\funs{qfeval}{q,x}, and \funs{hqfeval}{q,x} evaluate $q$ at $x$, where $q$
is respectively assumed to be a polynomial, a quadratic form (a symmetric
matrix), or an Hermitian form (an Hermitian complex matrix).

\subsecidx{factorpadic}$(\var{pol},p,r,\{\fl=0\})$: $p$-adic factorization
of the polynomial \var{pol} to precision $r$, the result being a
two-column matrix as in \kbd{factor}. The factors are normalized so that
their leading coefficient is a power of $p$. $r$ must be strictly larger than
the $p$-adic valuation of the discriminant of \var{pol} for the result to
make any sense. The method used is a modified version of the \idx{round 4}
algorithm of \idx{Zassenhaus}.

If $\fl=1$, use an algorithm due to \idx{Buchmann} and \idx{Lenstra}, which is
usually less efficient.
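
For instance, $x^2+1$ splits over $\Q_5$ but not over $\Q_7$ (a sketch):
\bprog
? factorpadic(x^2 + 1, 5, 6)  \\ two linear factors with 5-adic coefficients
? factorpadic(x^2 + 1, 7, 6)  \\ -1 is not a square mod 7: stays irreducible
@eprog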

\syn{factorpadic4}{\var{pol},p,r}, where $r$ is a \kbd{long} integer.

\subsecidx{intformal}$(x,\{v\})$: \idx{formal integration} of $x$ with
respect to the main variable if $v$ is omitted, with respect to the variable
$v$ otherwise. Since PARI does not know about ``abstract'' logarithms (they
are immediately evaluated, if only to a power series), logarithmic terms in
the result will yield an error. $x$ can be of any type. When $x$ is a
rational function, it is assumed that the base ring is an integral domain of
characteristic zero.
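
A quick example (a sketch):
\bprog
? intformal(x^2 + y)     \\ with respect to x: 1/3*x^3 + y*x
? intformal(x^2 + y, y)  \\ with respect to y: y*x^2 + 1/2*y^2
\\ intformal(1/x) would raise an error: the result involves log(x)
@eprog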

\syn{integ}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded
as $-1$.

\subsecidx{padicappr}$(\var{pol},a)$: vector of $p$-adic roots of the
polynomial $pol$ congruent to the $p$-adic number $a$ modulo $p$, and with
the same $p$-adic precision as $a$. The number $a$ can be an ordinary
$p$-adic number (type \typ{PADIC}, i.e.~an element of $\Z_p$) or can be an
integral element of a finite extension of $\Q_p$, given as a \typ{POLMOD}
at least one of whose coefficients is a \typ{PADIC}. In this case, the result
is the vector of roots belonging to the same extension of $\Q_p$ as $a$.
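
For instance (a sketch):
\bprog
? padicappr(x^2 + 1, 2 + O(5^5))  \\ the root of x^2 + 1 in Z_5 congruent to 2 mod 5
@eprog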

\syn{padicappr}{\var{pol},a}.

\subsecidx{polcoeff}$(x,s,\{v\})$: coefficient of degree $s$ of the
polynomial $x$, with respect to the main variable if $v$ is omitted, with
respect to $v$ otherwise. Also applies to power series, scalars (polynomial
of degree $0$), and to rational functions provided the denominator is a
monomial.
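
For example (a sketch, with expected values in the comments):
\bprog
? polcoeff(x^3 + 2*x + 5, 1)       \\ 2
? polcoeff((x + y)^2, 1, y)        \\ coefficient of y: 2*x
? polcoeff(1/(1 - x) + O(x^5), 3)  \\ also works on power series: 1
@eprog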

\syn{polcoeff0}{x,s,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded
as $-1$. Also available is \funs{truecoeff}{x,v}.

\subsecidx{poldegree}$(x,\{v\})$: degree of the polynomial $x$ in the main
variable if $v$ is omitted, in the variable $v$ otherwise.

The degree of $0$ is a fixed negative number, whose exact value should
not be used. The degree of a non-zero scalar is $0$. Finally, when $x$ is
a non-zero polynomial or rational function, returns the ordinary degree
of $x$. An error is raised otherwise.

\syn{poldegree}{x,v}, where $v$ and the result are \kbd{long}s (and an
omitted $v$ is coded as $-1$). Also available is \funs{degree}{x}, which is
equivalent to \kbd{poldegree($x$,-1)}.

\subsecidx{polcyclo}$(n,\{v=x\})$: $n$-th cyclotomic polynomial, in variable
$v$ ($x$ by default). The integer $n$ must be positive.

\syn{cyclo}{n,v}, where $n$ and $v$ are \kbd{long}
integers ($v$ is a variable number, usually obtained through \kbd{varn}).

\subsecidx{poldisc}$(\var{pol},\{v\})$: discriminant of the polynomial
\var{pol} in the main variable if $v$ is omitted, in $v$ otherwise. The
algorithm used is the \idx{subresultant algorithm}.

\syn{poldisc0}{x,v}. Also available is \funs{discsr}{x}, equivalent
to \kbd{poldisc0(x,-1)}.

\subsecidx{poldiscreduced}$(f)$: reduced discriminant vector of the
(integral, monic) polynomial $f$. This is the vector of elementary divisors
of $\Z[\alpha]/f'(\alpha)\Z[\alpha]$, where $\alpha$ is a root of the
polynomial $f$. The components of the result are all positive, and their
product is equal to the absolute value of the discriminant of~$f$.

\syn{reduceddiscsmith}{x}.

\subsecidx{polhensellift}$(x, y, p, e)$: given a prime $p$, an integral
polynomial $x$ whose leading coefficient is a $p$-unit, a vector $y$ of
integral polynomials that are pairwise relatively prime modulo $p$, and whose
product is congruent to $x$ modulo $p$, lift the elements of $y$ to
polynomials whose product is congruent to $x$ modulo $p^e$.
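
For instance, lifting the factorization $x^2+1 \equiv (x+2)(x-2) \pmod 5$ to
modulus $5^3$ (a sketch):
\bprog
? polhensellift(x^2 + 1, [x + 2, x - 2], 5, 3)
\\ expect [x + 57, x - 57]: indeed 57^2 = -1 modulo 125
@eprog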

\syn{polhensellift}{x,y,p,e} where $e$ must be a \kbd{long}.

\subsecidx{polinterpolate}$(xa,\{ya\},\{v=x\},\{\&e\})$: given the data vectors
$xa$ and $ya$ of the same length $n$ ($xa$ containing the $x$-coordinates,
and $ya$ the corresponding $y$-coordinates), this function finds the
\idx{interpolating polynomial} passing through these points and evaluates it
at~$v$. If $ya$ is omitted, return the polynomial interpolating the
$(i,xa[i])$. If present, $e$ will contain an error estimate on the returned
value.
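
For instance (a sketch, with expected values in the comments):
\bprog
? polinterpolate([0, 1, 2], [1, 2, 5])     \\ through (0,1), (1,2), (2,5): x^2 + 1
? polinterpolate([0, 1, 2], [1, 2, 5], 3)  \\ its value at 3: 10
@eprog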

\syn{polint}{xa,ya,v,\&e}, where $e$ will contain an error estimate on the
returned value.

\subsecidx{polisirreducible}$(\var{pol})$: \var{pol} being a polynomial
(univariate in the present version \vers), returns 1 if \var{pol} is
non-constant and irreducible, 0 otherwise. Irreducibility is checked over
the smallest base field over which \var{pol} seems to be defined.

\syn{gisirreducible}{\var{pol}}.

\subsecidx{pollead}$(x,\{v\})$: leading coefficient of the polynomial or
power series $x$. This is computed with respect to the main variable of $x$
if $v$ is omitted, with respect to the variable $v$ otherwise.

\syn{pollead}{x,v}, where $v$ is a \kbd{long} and an omitted $v$ is coded as
$-1$. Also available is \funs{leading_term}{x}.

\subsecidx{pollegendre}$(n,\{v=x\})$: creates the $n^{\text{th}}$
\idx{Legendre polynomial}, in variable $v$.

\syn{legendre}{n}, where $n$ is a \kbd{long}.

\subsecidx{polrecip}$(\var{pol})$: reciprocal polynomial of \var{pol},
i.e.~the coefficients are in reverse order. \var{pol} must be a polynomial.

\syn{polrecip}{x}.

\subsecidx{polresultant}$(x,y,\{v\},\{\fl=0\})$: resultant of the two
polynomials $x$ and $y$ with exact entries, with respect to the main
variables of $x$ and $y$ if $v$ is omitted, with respect to the variable $v$
otherwise. The algorithm assumes the base ring is a domain.

If $\fl=0$, uses the \idx{subresultant algorithm}.

If $\fl=1$, uses the determinant of Sylvester's matrix instead (here $x$ and
$y$ may have non-exact coefficients).

If $\fl=2$, uses Ducos's modified subresultant algorithm. It should be much
faster than the default if the coefficient ring is complicated
(e.g.~multivariate polynomials or huge coefficients), and slightly slower
multivariate polynomials or huge coefficients), and slightly slower
otherwise.
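
For instance (a sketch, with the expected results indicated in the comments):
\bprog
? polresultant(x^2 - 2, x^3 - 3)     \\ expect 1
? polresultant(x^2 - 2, x^2 + y, x)  \\ in the variable x: expect y^2 + 4*y + 4
@eprog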

\syn{polresultant0}{x,y,v,\fl}, where $v$ is a \kbd{long} and an omitted $v$
is coded as $-1$. Also available are \funs{subres}{x,y} ($\fl=0$) and
\funs{resultant2}{x,y} ($\fl=1$).

\subsecidx{polroots}$(\var{pol},\{\fl=0\})$: complex roots of the polynomial
\var{pol}, given as a column vector where each root is repeated according to
its multiplicity. The precision is given as for transcendental functions: in
GP it is kept in the variable \kbd{realprecision} and is transparent to the
user, but it must be explicitly given as a second argument in library mode.

The algorithm used is a modification of A.~Sch\"onhage\sidx{Sch\"onage}'s
root-finding algorithm, due to and implemented by X.~Gourdon. Barring bugs, it
is guaranteed to converge and to give the roots to the required accuracy.

If $\fl=1$, use a variant of the Newton-Raphson method, which is \emph{not}
guaranteed to converge, but is rather fast. If you get the messages ``too
many iterations in roots'' or ``INTERNAL ERROR: incorrect result in roots'',
use the default algorithm. This used to be the default root-finding function in
PARI until version 1.39.06.

\syn{roots}{\var{pol},\var{prec}} or \funs{rootsold}{\var{pol},\var{prec}}.

\subsecidx{polrootsmod}$(\var{pol},p,\{\fl=0\})$: row vector of roots modulo
$p$ of the polynomial \var{pol}. The particular non-prime value $p=4$ is
accepted, mainly for $2$-adic computations. Multiple roots are \emph{not}
repeated.

If $p$ is very small, you may try setting $\fl=1$, which uses a naive search.
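
For instance (a sketch):
\bprog
? polrootsmod(x^2 + 1, 5)  \\ expect the two roots 2 and 3 modulo 5
? polrootsmod(x^2 + 1, 7)  \\ no roots: expect the empty vector
@eprog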

\syn{rootmod}{\var{pol},p} ($\fl=0$) or
\funs{rootmod2}{\var{pol},p} ($\fl=1$).

\subsecidx{polrootspadic}$(\var{pol},p,r)$: row vector of $p$-adic roots of
the polynomial \var{pol}, given to $p$-adic precision $r$. Multiple roots are
\emph{not} repeated. $p$ is assumed to be a prime, and \var{pol} to be
non-zero modulo $p$. Note that this is not the same as the roots in
$\Z/p^r\Z$, rather it gives approximations in $\Z/p^r\Z$ of the true
roots living in $\Q_p$.

If \var{pol} has inexact \typ{PADIC} coefficients, this is not always
well-defined; in this case, the equation is first made integral, then lifted
to $\Z$. Hence the roots given are approximations of the roots of a
polynomial which is $p$-adically close to the input.
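
A small example (a sketch):
\bprog
? polrootspadic(x^2 - 2, 7, 5)  \\ the two square roots of 2 in Z_7, modulo 7^5
@eprog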

\syn{rootpadic}{\var{pol},p,r}, where $r$ is a \kbd{long}.

\subsecidx{polsturm}$(\var{pol},\{a\},\{b\})$: number of real roots of the real
polynomial \var{pol} in the interval $]a,b]$, using Sturm's algorithm. $a$
(resp.~$b$) is taken to be $-\infty$ (resp.~$+\infty$) if omitted.
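
For instance (a sketch, with expected counts in the comments):
\bprog
? polsturm(x^3 - 3*x + 1)        \\ expect 3: all roots are real
? polsturm(x^3 - 3*x + 1, 0, 2)  \\ expect 2: two of them lie in ]0,2]
@eprog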

\syn{sturmpart}{\var{pol},a,b}. Use \kbd{NULL} to omit an argument.
\funs{sturm}{\var{pol}} is equivalent to
\funs{sturmpart}{\var{pol},\kbd{NULL},\kbd{NULL}}. The result is a
\kbd{long}.

\subsecidx{polsubcyclo}$(n,d,\{v=x\})$: gives polynomials (in variable
$v$) defining the sub-Abelian extensions of degree $d$ of the cyclotomic
field $\Q(\zeta_n)$, where $d\mid \phi(n)$.

If there is exactly one such extension the output is a polynomial, else it is
a vector of polynomials, possibly empty.
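
For instance (a sketch; the exact defining polynomials may differ):
\bprog
? polsubcyclo(8, 2)  \\ the three quadratic subfields of Q(zeta_8): a vector
? polsubcyclo(8, 4)  \\ a single extension: a polynomial is returned
@eprog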

To be sure to get a vector, you can use \kbd{concat([],polsubcyclo(n,d))}.

The function \tet{galoissubcyclo} allows you to specify more precisely which
sub-Abelian extension should be computed.

\syn{polsubcyclo}{n,d,v}, where $n$, $d$ and $v$ are \kbd{long} and $v$ is a
variable number. When $(\Z/n\Z)^*$ is cyclic, you can use
\funs{subcyclo}{n,d,v}, where $n$, $d$ and $v$ are \kbd{long} and $v$ is a
variable number.

\subsecidx{polsylvestermatrix}$(x,y)$: forms the Sylvester matrix
corresponding to the two polynomials $x$ and $y$, where the coefficients of
the polynomials are put in the columns of the matrix (which is the natural
direction for solving equations afterwards). The use of this matrix can be
essential when dealing with polynomials with inexact entries, since
polynomial Euclidean division doesn't make much sense in this case.

\syn{sylvestermatrix}{x,y}.

\subsecidx{polsym}$(x,n)$: creates the vector of the \idx{symmetric powers}
of the roots of the polynomial $x$ up to power $n$, using Newton's
formula.
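
For instance, for $x^2-3x+2$, whose roots are $1$ and $2$ (a sketch):
\bprog
? polsym(x^2 - 3*x + 2, 4)  \\ expect [2, 3, 5, 9, 17]~, the power sums 1^k + 2^k
@eprog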

\syn{polsym}{x}.

\subsecidx{poltchebi}$(n,\{v=x\})$: creates the $n^{\text{th}}$
\idx{Chebyshev} polynomial~$T_n$ of the first kind in variable $v$.

\syn{tchebi}{n,v}, where $n$ and $v$ are \kbd{long}
integers ($v$ is a variable number).

\subsecidx{polzagier}$(n,m)$: creates Zagier's polynomial $P_n^{(m)}$ used in
the functions \kbd{sumalt} and \kbd{sumpos} (with $\fl=1$). One must have $m\le
n$. The exact definition can be found in ``Convergence acceleration of
alternating series'', Cohen et al., Experiment.~Math., vol.~9, 2000, pp.~3--12.

%@article {MR2001m:11222,
%    AUTHOR = {Cohen, Henri and Rodriguez Villegas, Fernando and Zagier, Don},
%     TITLE = {Convergence acceleration of alternating series},
%   JOURNAL = {Experiment. Math.},
%    VOLUME = {9},
%      YEAR = {2000},
%    NUMBER = {1},
%     PAGES = {3--12},

\syn{polzagreel}{n,m,\var{prec}} if the result is only wanted as a polynomial
with real coefficients to the precision $\var{prec}$, or \funs{polzag}{n,m}
if the result is wanted exactly, where $n$ and $m$ are \kbd{long}s.

\subsecidx{serconvol}$(x,y)$: convolution (or \idx{Hadamard product}) of the
two power series $x$ and $y$; in other words if $x=\sum a_k*X^k$ and $y=\sum
b_k*X^k$ then $\kbd{serconvol}(x,y)=\sum a_k*b_k*X^k$.

\syn{convol}{x,y}.

\subsecidx{serlaplace}$(x)$: $x$ must be a power series with non-negative
exponents. If $x=\sum (a_k/k!)*X^k$ then the result is $\sum a_k*X^k$.

\syn{laplace}{x}.

\subsecidx{serreverse}$(x)$: reverse power series (i.e.~$x^{-1}$, not $1/x$)
of $x$. $x$ must be a power series whose valuation is exactly equal to one.
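
For instance (a sketch):
\bprog
? s = serreverse(x + x^2 + O(x^4))  \\ expect x - x^2 + 2*x^3 + O(x^4)
? subst(x + x^2 + O(x^4), x, s)     \\ check: expect x + O(x^4)
@eprog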

\syn{recip}{x}.

\subsecidx{subst}$(x,y,z)$:
replace the simple variable $y$ by the argument $z$ in the ``polynomial''
expression $x$. Every type is allowed for $x$, but if it is not a genuine
polynomial (or power series, or rational function), the substitution will be
done as if the scalar components were polynomials of degree zero. In
particular, beware that:

\bprog
? subst(1, x, [1,2; 3,4])
%1 =
[1 0]

[0 1]

? subst(1, x, Mat([0,1]))
  ***   forbidden substitution by a non square matrix
@eprog\noindent
If $x$ is a power series, $z$ must be either a polynomial, a power
series, or a rational function.

\syn{gsubst}{x,y,z}, where $y$ is the variable number.

\subsecidx{substpol}$(x,y,z)$:
replace the ``variable'' $y$ by the argument $z$ in the ``polynomial''
expression $x$. Every type is allowed for $x$, and the same behaviour
as for \kbd{subst} above applies.

The difference with \kbd{subst} is that $y$ is allowed to be any polynomial
here. The substitution is done as per the following script:
\bprog
   subst_poly(pol, from, to) =
   { local(t = 'subst_poly_t, M = from - t);

     subst(lift(Mod(pol,M), variable(M)), t, to)
   }
@eprog\noindent
For instance
\bprog
? substpol(x^4 + x^2 + 1, x^2, y)
%1 = y^2 + y + 1
? substpol(x^4 + x^2 + 1, x^3, y)
%2 = x^2 + y*x + 1
? substpol(x^4 + x^2 + 1, (x+1)^2, y)
%3 = (-4*y - 6)*x + (y^2 + 3*y - 3)
@eprog

\syn{gsubstpol}{x,y,z}.

\subsecidx{substvec}$(x,v,w)$: $v$ being a vector of monomials (variables),
$w$ a vector of expressions of the same length, replace in the expression
$x$ all occurrences of $v_i$ by $w_i$. The substitutions are done
simultaneously; more precisely, the $v_i$ are first replaced by new
variables in $x$, then these are replaced by the $w_i$:

\bprog
? substvec([x,y], [x,y], [y,x])
%1 = [y, x]
? substvec([x,y], [x,y], [y,x+y])
%2 = [y, x + y]     \\ not [y, 2*y]
@eprog

\syn{gsubstvec}{x,v,w}.

\subsecidx{taylor}$(x,y)$: Taylor expansion around $0$ of $x$ with respect
to\label{se:taylor}
the simple variable $y$. $x$ can be of any reasonable type, for example a
rational function. The number of terms of the expansion is transparent to the
user in GP, but must be given as a second argument in library mode.
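
For instance, with \kbd{seriesprecision} set to $6$ one should obtain
\bprog
? \ps 6
? taylor(1/(1 + x), x)
%1 = 1 - x + x^2 - x^3 + x^4 - x^5 + O(x^6)
@eprog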

\syn{tayl}{x,y,n}, where the \kbd{long} integer $n$ is the desired number of
terms in the expansion.

\subsecidx{thue}$(\var{tnf},a,\{\var{sol}\})$: solves the equation
$P(x,y)=a$ in integers $x$ and $y$, where \var{tnf} was created with
$\kbd{thueinit}(P)$. \var{sol}, if present, contains the solutions of
$\Norm(x)=a$ modulo units of positive norm in the number field
defined by $P$ (as computed by \kbd{bnfisintnorm}). If the
result is conditional (on the GRH or some heuristic strengthening),
a Warning is printed. Otherwise, the result is unconditional, barring bugs.
For instance, here's how to solve the Thue equation $x^{13} - 5y^{13} = - 4$:

\bprog
? tnf = thueinit(x^13 - 5);
? thue(tnf, -4)
%1 = [[1, 1]]
@eprog
Hence, the only solution is $x = 1$, $y = 1$ and the result is
unconditional. On the other hand:

\bprog
? tnf = thueinit(x^3-2*x^2+3*x-17);
? thue(tnf, -15)
  *** thue: Warning: Non trivial conditional class group.
  *** May miss solutions of the norm equation.
%2 = [[1, 1]]
@eprog
This time the result is conditional. All results computed using this tnf
are likewise conditional, \emph{except} for a right-hand side of $\pm 1$.

\syn{thue}{\var{tnf},a,\var{sol}}, where an omitted \var{sol} is coded
as \kbd{NULL}.

\subsecidx{thueinit}$(P,\{\fl=0\})$: initializes the \var{tnf}
corresponding to $P$. It is meant to be used in conjunction with \tet{thue}
to solve Thue equations $P(x,y) = a$, where $a$ is an integer. If $\fl$ is
non-zero, certify the result unconditionally. Otherwise, assume \idx{GRH},
this being much faster of course.

\emph{If} the conditional computed class group is trivial \emph{or} you are
only interested in the case $a = \pm1$, then results are unconditional
anyway. So one should only use the flag if \kbd{thue} prints a Warning (see
the example there).

\syn{thueinit}{P,\fl,\var{prec}}.

\section{Vectors, matrices, linear algebra and sets}
\label{se:linear_algebra}

Note that most linear algebra functions operating on subspaces defined by
generating sets (such as \tet{mathnf}, \tet{qflll}, etc.) take matrices as
arguments. As usual, the generating vectors are taken to be the
\emph{columns} of the given matrix.

Since PARI does not have a strong typing system, scalars live in
unspecified commutative base rings. It is very difficult to write
robust linear algebra routines in such a general setting. The
developers' choice has been to assume the base ring is a domain
and work over its field of fractions. If the base ring is \emph{not}
a domain, one gets an error as soon as a non-zero pivot turns out to be
non-invertible. Some functions, e.g.~\kbd{mathnf} or \kbd{mathnfmod},
specifically assume the base ring is $\Z$.

\subsecidx{algdep}$(x,k,\{\fl=0\})$:\sidx{algebraic dependence}
\label{se:algdep} $x$ being real/complex, or $p$-adic, finds a polynomial of
degree at most $k$ with integer coefficients having $x$ as approximate root.
Note that the polynomial which is obtained is not necessarily the ``correct''
one. In fact it is not even guaranteed to be irreducible. One can check the
closeness either by a polynomial evaluation (use \tet{subst}), or by
computing the roots of the polynomial given by \kbd{algdep} (use 
\tet{polroots}).

Internally, \tet{lindep}$([1,x,\ldots,x^k], \fl)$ is used. If
\tet{lindep} is not able to find a relation and returns a lower bound for the
sup norm of the smallest relation, \tet{algdep} returns that bound instead.
A suitable non-zero value of $\fl$ may improve on the default behaviour:
\bprog
\\\\\\\\\ LLL
? \p200
? algdep(2^(1/6)+3^(1/5), 30);      \\ wrong in 3.8s
? algdep(2^(1/6)+3^(1/5), 30, 100); \\ wrong in 1s
? algdep(2^(1/6)+3^(1/5), 30, 170); \\ right in 3.3s
? algdep(2^(1/6)+3^(1/5), 30, 200); \\ wrong in 2.9s
? \p250
? algdep(2^(1/6)+3^(1/5), 30);      \\ right in 2.8s
? algdep(2^(1/6)+3^(1/5), 30, 200); \\ right in 3.4s
\\\\\\\\\ PSLQ
? \p200
? algdep(2^(1/6)+3^(1/5), 30, -3);  \\ failure in 14s.
? \p250
? algdep(2^(1/6)+3^(1/5), 30, -3);  \\ right in 18s
@eprog\noindent
Proceeding by increments of 5 digits of accuracy, \kbd{algdep} with default
flag produces its first correct result at 205 digits, and from then on a
steady stream of correct results. Interestingly enough, our PSLQ also
reliably succeeds from 205 digits on (and is 5 times slower at that
accuracy).

The above example is the test case studied in a 2000 paper by Borwein and
Lisonek, Applications of integer relation algorithms, \emph{Discrete Math.},
{\bf 217}, p.~65--82. The paper concludes that the PSLQ algorithm is
superior, which either shows that PARI's implementation of PSLQ is lacking,
or that its LLL is extremely good. The version of PARI tested there was
1.39, which succeeded reliably from precision 265 on, in about 60 times as much
time as the current version.

\syn{algdep0}{x,k,\fl,\var{prec}}, where $k$ and $\fl$ are \kbd{long}s.
Also available is \funs{algdep}{x,k,\var{prec}} ($\fl=0$).

\subsecidx{charpoly}$(A,\{v=x\},\{\fl=0\})$: \idx{characteristic polynomial}
of $A$ with respect to the variable $v$, i.e.~determinant of $v*I-A$ if $A$
is a square matrix. If $A$ is a scalar, in particular a polmod, it returns
the characteristic polynomial of the map ``multiplication by $A$''.
E.g.~\kbd{charpoly(I) = x\pow2+1}.

The value of $\fl$ is only significant for matrices.

If $\fl=0$, the method used is essentially the same as for computing the
adjoint matrix, i.e.~computing the traces of the powers of $A$.

If $\fl=1$, uses Lagrange interpolation which is almost always slower.

If $\fl=2$, uses the Hessenberg form. This is faster than the default when
the coefficients are intmod a prime or real numbers, but is usually
slower in other base rings.
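
For instance, one should obtain
\bprog
? charpoly([1,2; 3,4])
%1 = x^2 - 5*x - 2
? charpoly(Mod(y, y^3 - y - 1))
%2 = x^3 - x - 1
@eprog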

\syn{charpoly0}{A,v,\fl}, where $v$ is the variable number. Also available
are the functions \funs{caract}{A,v} ($\fl=1$), \funs{carhess}{A,v}
($\fl=2$), and \funs{caradj}{A,v,\var{pt}} where, in this last case,
\var{pt} is a \kbd{GEN*} which, if not equal to \kbd{NULL}, will receive
the address of the adjoint matrix of $A$ (see \kbd{matadjoint}), so both
can be obtained at once.

\subsecidx{concat}$(x,\{y\})$: concatenation of $x$ and $y$. If $x$ or $y$ is
not a vector or matrix, it is considered as a one-dimensional vector. All
types are allowed for $x$ and $y$, but the sizes must be compatible. Note
that matrices are concatenated horizontally, i.e.~the number of rows stays
the same. Using transpositions, it is easy to concatenate them vertically.

To concatenate vectors sideways (i.e.~to obtain a two-row or two-column
matrix), use \tet{Mat} instead (see the example there). Concatenating a row
vector to a matrix having the same number of columns will add the row to the
matrix (top row if the vector is $x$, i.e.~comes first, and bottom row
otherwise).

The empty matrix \kbd{[;]} is considered to have a number of rows compatible
with any operation, in particular concatenation. (Note that this is
definitely \emph{not} the case for empty vectors \kbd{[~]} or \kbd{[~]\til}.)

If $y$ is omitted, $x$ has to be a row vector or a list, in which case its
elements are concatenated, from left to right, using the above rules.

\bprog
? concat([1,2], [3,4])
%1 = [1, 2, 3, 4]
? a = [[1,2]~, [3,4]~]; concat(a)
%2 =
[1 3]

[2 4]

? concat([1,2; 3,4], [5,6]~)
%3 =
[1 2 5]

[3 4 6]
? concat([%, [7,8]~, [1,2,3,4]])
%5 =
[1 2 5 7]

[3 4 6 8]

[1 2 3 4]
@eprog

\syn{concat}{x,y}.

\subsecidx{lindep}$(x,\{\fl=0\})$:\sidx{linear dependence} $x$ being a
vector with $p$-adic or real/complex coefficients, finds a small integral
linear combination among these coefficients.

If $x$ is $p$-adic, $\fl$ is meaningless and the algorithm LLL-reduces a
suitable (dual) lattice.

Otherwise, the value of $\fl$ determines the algorithm used; in the current
version of PARI, we suggest using \emph{non-negative} values, since it is by
far the fastest and most robust implementation. See the detailed example in
\secref{se:algdep} (\kbd{algdep}).

If $\fl\geq 0$, uses a floating point (variable precision) LLL algorithm.
This is in general much faster than the other variants. 
If $\fl = 0$ the accuracy is chosen internally using a crude heuristic.
If $\fl > 0$ the computation is done with an accuracy of $\fl$ decimal digits.
In that case, the parameter $\fl$ should be between 0.6 and 0.9 times the
number of correct decimal digits in the input.

If $\fl=-1$, uses a variant of the \idx{LLL} algorithm due to Hastad,
Lagarias and Schnorr (STACS 1986). If the precision is too low, the routine
may enter an infinite loop.

If $\fl=-2$, $x$ is allowed to be (and in any case interpreted as) a matrix.
Returns a non trivial element of the kernel of $x$, or $0$ if $x$ has trivial
kernel. The element is defined over the field of coefficients of $x$, and is
in general not integral.

If $\fl=-3$, uses the PSLQ algorithm. This may return a real number $B$,
indicating that the input accuracy was exhausted and that no relation exists
whose sup norm is less than $B$.

If $\fl=-4$, uses an experimental 2-level PSLQ, which does not work at all.
(Should be rewritten.)

\syn{lindep0}{x,\fl,\var{prec}}. Also available is
\funs{lindep}{x,\var{prec}} ($\fl=0$).

\subsecidx{listcreate}$(n)$: creates an empty list of maximal length $n$.

This function is useless in library mode.

\subsecidx{listinsert}$(\var{list},x,n)$: inserts the object $x$ at
position $n$ in \var{list} (which must be of type \typ{LIST}). All the
remaining elements of \var{list} (from position $n+1$ onwards) are shifted
to the right. This and \kbd{listput} are the only commands which enable
you to increase a list's effective length (as long as it remains under
the maximal length specified at the time of the \kbd{listcreate}).

This function is useless in library mode.

\subsecidx{listkill}$(\var{list})$: kill \var{list}. This deletes all
elements from \var{list} and sets its effective length to $0$. The maximal
length is not affected.

This function is useless in library mode.

\subsecidx{listput}$(\var{list},x,\{n\})$: sets the $n$-th element of the list
\var{list} (which must be of type \typ{LIST}) equal to $x$. If $n$ is omitted,
or greater than the list's current effective length, just appends $x$. This and
\kbd{listinsert} are the only commands which enable you to increase a list's
effective length (as long as it remains under the maximal length specified at
the time of the \kbd{listcreate}).

If you want to put an element into an occupied cell, i.e.~if you don't want to
change the effective length, you can consider the list as a vector and use
the usual \kbd{list[n] = x} construct.
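
For instance, a small illustrative session:
\bprog
? L = listcreate(3);
? listput(L, 10); listput(L, 30);
? listinsert(L, 20, 2);
? L[2]
%4 = 20
@eprog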

This function is useless in library mode.

\subsecidx{listsort}$(\var{list},\{\fl=0\})$: sorts \var{list} (which must
be of type \typ{LIST}) in place. If $\fl$ is non-zero, suppresses all repeated
coefficients. This is much faster than the \kbd{vecsort} command since no
copy has to be made.

This function is useless in library mode.

\subsecidx{matadjoint}$(x)$: \idx{adjoint matrix} of $x$, i.e.~the matrix $y$
of cofactors of $x$, satisfying $x*y=\det(x)*\Id$. $x$ must be a
(not necessarily invertible) square matrix.
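
For instance, one should obtain
\bprog
? matadjoint([1,2; 3,4])
%1 =
[ 4 -2]

[-3  1]
@eprog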

\syn{adj}{x}.

\subsecidx{matcompanion}$(x)$: the left companion matrix to the polynomial $x$.

\syn{assmat}{x}.

\subsecidx{matdet}$(x,\{\fl=0\})$: determinant of $x$. $x$ must be a
square matrix.

If $\fl=0$, uses Gauss-Bareiss.

If $\fl=1$, uses classical Gaussian elimination, which is better when the
entries of the matrix are reals or integers for example, but usually much
worse for more complicated entries like multivariate polynomials.
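
For instance, one expects
\bprog
? matdet([1,2; 3,4])
%1 = -2
? matdet([x,1; 1,x])
%2 = x^2 - 1
@eprog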

\syn{det}{x} ($\fl=0$) and \funs{det2}{x} ($\fl=1$).

\subsecidx{matdetint}$(x)$: $x$ being an $m\times n$ matrix with integer
coefficients, this function computes a \emph{multiple} of the determinant of the
lattice generated by the columns of $x$ if it is of rank $m$, and returns
zero otherwise. This function can be useful in conjunction with the function
\kbd{mathnfmod} which needs to know such a multiple. To obtain the
exact determinant (assuming the rank is maximal), you can compute
\kbd{matdet(mathnfmod(x, matdetint(x)))}.

Note that as soon as one of the dimensions gets large ($m$ or $n$ is larger
than 20, say), it will often be much faster to use \kbd{mathnf(x, 1)} or
\kbd{mathnf(x, 4)} directly.
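
For instance, the following illustrative session computes the exact
determinant of the lattice generated by the columns of a $2\times 3$ matrix
of rank $2$:
\bprog
? x = [1,2,3; 4,5,6];
? matdet(mathnfmod(x, matdetint(x)))
%2 = 3
@eprog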

\syn{detint}{x}.

\subsecidx{matdiagonal}$(x)$: $x$ being a vector, creates the diagonal matrix
whose diagonal entries are those of $x$.

\syn{diagonal}{x}.

\subsecidx{mateigen}$(x)$: gives the eigenvectors of $x$ as columns of a
matrix.

\syn{eigen}{x}.

\subsecidx{matfrobenius}$(M,\{\fl=0\},\{v=x\})$: returns the Frobenius form of
the square matrix \kbd{M}. If $\fl=1$, returns only the elementary divisors as
a vector of polynomials in the variable \kbd{v}. If $\fl=2$, returns a
two-component vector [F,B] where \kbd{F} is the Frobenius form and \kbd{B} is
the basis change so that $M=B^{-1}FB$.

\syn{matfrobenius}{M,\fl,v}, where $v$ is the variable number.

\subsecidx{mathess}$(x)$: Hessenberg form of the square matrix $x$.

\syn{hess}{x}.

\subsecidx{mathilbert}$(x)$: $x$ being a \kbd{long}, creates the
\idx{Hilbert matrix} of order $x$, i.e.~the matrix whose coefficient
($i$,$j$) is $1/(i+j-1)$.

\syn{mathilbert}{x}.

\subsecidx{mathnf}$(x,\{\fl=0\})$: if $x$ is a (not necessarily square)
matrix with integer entries, finds the \emph{upper triangular}
\idx{Hermite normal form} of $x$. If the rank of $x$ is equal to its number
of rows, the result is a square matrix. In general, the columns of the result
form a basis of the lattice spanned by the columns of $x$.

If $\fl=0$, uses the naive algorithm. This should never be used if the
dimension is at all large (larger than 10, say). It is recommended to use
either \kbd{mathnfmod(x, matdetint(x))} (when $x$ has maximal rank) or
\kbd{mathnf(x, 1)}. Note that the latter is in general faster than
\kbd{mathnfmod}, and also provides a base change matrix.

If $\fl=1$, uses Batut's algorithm, which is much faster than the default.
Outputs a two-component row vector $[H,U]$, where $H$ is the \emph{upper
triangular} Hermite normal form of $x$ defined as above,  and $U$ is the
unimodular transformation matrix such that $xU=[0|H]$. $U$ has in general
huge coefficients, in particular when the kernel is large.

If $\fl=3$, uses Batut's algorithm, but outputs $[H,U,P]$, such that $H$ and
$U$ are as before and $P$ is a permutation of the rows such that $P$ applied
to $xU$ gives $H$. The matrix $U$ is smaller than with $\fl=1$, but may still
be large.

If $\fl=4$, as in case 1 above, but uses a heuristic variant of \idx{LLL}
reduction along the way. The matrix $U$ is in general close to optimal (in
terms of smallest $L_2$ norm), but the reduction is slower than in case $1$.
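
For instance, with the default flag one should obtain
\bprog
? mathnf([1,2; 3,4])
%1 =
[2 1]

[0 1]
@eprog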

\syn{mathnf0}{x,\fl}. Also available are \funs{hnf}{x} ($\fl=0$) and
\funs{hnfall}{x} ($\fl=1$). To reduce \emph{huge} (say $400 \times 400$ and
more) relation matrices (sparse with small entries), you can use the pair
\kbd{hnfspec} / \kbd{hnfadd}. Since this is rather technical and the
calling interface may change, they are not documented yet. Look at the code
in \kbd{basemath/alglin1.c}.

\subsecidx{mathnfmod}$(x,d)$: if $x$ is a (not necessarily square) matrix of
maximal rank with integer entries, and $d$ is a multiple of the (non-zero)
determinant of the lattice spanned by the columns of $x$, finds the
\emph{upper triangular} \idx{Hermite normal form} of $x$.

If the rank of $x$ is equal to its number of rows, the result is a square
matrix. In general, the columns of the result form a basis of the lattice
spanned by the columns of $x$. This is much faster than \kbd{mathnf} when $d$
is known.

\syn{hnfmod}{x,d}.

\subsecidx{mathnfmodid}$(x,d)$: outputs the (upper triangular)
\idx{Hermite normal form} of $x$ concatenated with $d$ times
the identity matrix. Assumes that $x$ has integer entries.

\syn{hnfmodid}{x,d}.

\subsecidx{matid}$(n)$: creates the $n\times n$ identity matrix.

\syn{matid}{n} where $n$ is a \kbd{long}.

Related functions are \funs{gscalmat}{x,n}, which creates $x$ times the
identity matrix ($x$ being a \kbd{GEN} and $n$ a \kbd{long}), and
\funs{gscalsmat}{x,n} which is the same when $x$ is a \kbd{long}.

\subsecidx{matimage}$(x,\{\fl=0\})$: gives a basis for the image of the
matrix $x$ as columns of a matrix. A priori the matrix can have entries of
any type. If $\fl=0$, use standard Gauss pivot. If $\fl=1$, use
\kbd{matsupplement}.

\syn{matimage0}{x,\fl}. Also available is \funs{image}{x} ($\fl=0$).

\subsecidx{matimagecompl}$(x)$: gives the vector of the column indices which
are not extracted by the function \kbd{matimage}. Hence the number of
components of \kbd{matimagecompl(x)} plus the number of columns of
\kbd{matimage(x)} is equal to the number of columns of the matrix $x$.

\syn{imagecompl}{x}.

\subsecidx{matindexrank}$(x)$: $x$ being a matrix of rank $r$, gives two
vectors $y$ and $z$ of length $r$ giving a list of rows and columns
respectively (starting from 1) such that the extracted matrix obtained from
these two vectors using $\tet{vecextract}(x,y,z)$ is invertible.

\syn{indexrank}{x}.

\subsecidx{matintersect}$(x,y)$: $x$ and $y$ being two matrices with the same
number of rows each of whose columns are independent, finds a basis of the
$\Q$-vector space equal to the intersection of the spaces spanned by the
columns of $x$ and $y$ respectively. See also the function
\tet{idealintersect}, which does the same for free $\Z$-modules.

\syn{intersect}{x,y}.

\subsecidx{matinverseimage}$(M,y)$: gives a column vector belonging to the
inverse image $z$ of the column vector or matrix $y$ by the matrix $M$ if one
exists (i.e.~such that $Mz = y$), the empty vector otherwise. To get the
complete inverse image, it suffices to add to the result any element of the
kernel of $M$, obtained for example by \kbd{matker}.
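
For instance, one should obtain
\bprog
? M = [1,2; 3,4; 5,6];
? matinverseimage(M, [1,1,1]~)
%2 = [-1, 1]~
? matinverseimage(M, [1,0,0]~)   \\ not in the image of M
%3 = []~
@eprog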

\syn{inverseimage}{x,y}.

\subsecidx{matisdiagonal}$(x)$: returns true (1) if $x$ is a diagonal matrix,
false (0) if not.

\syn{isdiagonal}{x}, and this returns a \kbd{long}
integer.

\subsecidx{matker}$(x,\{\fl=0\})$: gives a basis for the kernel of the
matrix $x$ as columns of a matrix. A priori the matrix can have entries of
any type.

If $x$ is known to have integral entries, set $\fl=1$.

\misctitle{Note:} The library function $\tet{FpM_ker}(x, p)$, where $x$ has
integer entries \emph{reduced mod p} and $p$ is prime, is equivalent to, but
orders of magnitude faster than, \kbd{matker(x*Mod(1,p))} and needs much
less stack space. To use it under \kbd{gp}, type \kbd{install(FpM\_ker, GG)} first.

\syn{matker0}{x,\fl}. Also available are \funs{ker}{x} ($\fl=0$),
\funs{keri}{x} ($\fl=1$).

\subsecidx{matkerint}$(x,\{\fl=0\})$: gives an \idx{LLL}-reduced $\Z$-basis
for the lattice equal to the kernel of the matrix $x$, as columns of a
matrix. $x$ must have integer entries (rational entries are not permitted).

If $\fl=0$, uses a modified integer LLL algorithm.

If $\fl=1$, uses $\kbd{matrixqz}(x,-2)$. If LLL reduction of the final result
is not desired, you can save time using \kbd{matrixqz(matker(x),-2)} instead.

\syn{matkerint0}{x,\fl}. Also available is
\funs{kerint}{x} ($\fl=0$).

\subsecidx{matmuldiagonal}$(x,d)$: product of the matrix $x$ by the diagonal
matrix whose diagonal entries are those of the vector $d$. Equivalent to,
but much faster than, $x*\kbd{matdiagonal}(d)$.

\syn{matmuldiagonal}{x,d}.

\subsecidx{matmultodiagonal}$(x,y)$: product of the matrices $x$ and $y$
assuming that the result is a diagonal matrix. Much faster than $x*y$ in
that case. The result is undefined if $x*y$ is not diagonal.

\syn{matmultodiagonal}{x,y}.

\subsecidx{matpascal}$(x,\{q\})$: creates as a matrix the lower triangular
\idx{Pascal triangle} of order $x+1$ (i.e.~with binomial coefficients
up to $x$). If $q$ is given, compute the $q$-Pascal triangle (i.e.~using
$q$-binomial coefficients).
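
For instance, one should obtain
\bprog
? matpascal(3)
%1 =
[1 0 0 0]

[1 1 0 0]

[1 2 1 0]

[1 3 3 1]
@eprog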

\syn{matqpascal}{x,q}, where $x$ is a \kbd{long} and $q=\kbd{NULL}$ is used
to omit $q$. Also available is \funs{matpascal}{x}.

\subsecidx{matrank}$(x)$: rank of the matrix $x$.

\syn{rank}{x}, and the result is a \kbd{long}.

\subsecidx{matrix}$(m,n,\{X\},\{Y\},\{\var{expr}=0\})$: creation of the
$m\times n$ matrix whose coefficients are given by the expression
\var{expr}. There are two formal parameters in \var{expr}, the first one
($X$) corresponding to the rows, the second ($Y$) to the columns, and $X$
goes from 1 to $m$, $Y$ goes from 1 to $n$. If one of the last 3 parameters
is omitted, fill the matrix with zeroes.
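
For instance, one expects
\bprog
? matrix(2, 3, i, j, i + j)
%1 =
[2 3 4]

[3 4 5]
@eprog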

\synt{matrice}{GEN nlig,GEN ncol,entree *e1,entree *e2,char *expr}.

\subsecidx{matrixqz}$(x,p)$: $x$ being an $m\times n$ matrix with $m\ge n$
with rational or integer entries, this function has varying behaviour
depending on the sign of $p$:

If $p\geq 0$, $x$ is assumed to be of maximal rank. This function returns a
matrix having only integral entries, having the same image as $x$, such that
the GCD of all its $n\times n$ subdeterminants is equal to 1 when $p$ is
equal to 0, or not divisible by $p$ otherwise. Here $p$ must be a prime
number (when it is non-zero). However, if the function is used when $p$ has
no small prime factors, it will either work or give the message ``impossible
inverse modulo'' and a non-trivial divisor of $p$.

If $p=-1$, this function returns a matrix whose columns form a basis of the
lattice equal to $\Z^n$ intersected with the lattice generated by the
columns of $x$.

If $p=-2$, returns a matrix whose columns form a basis of the lattice equal
to $\Z^n$ intersected with the $\Q$-vector space generated by the
columns of $x$.

\syn{matrixqz0}{x,p}.

\subsecidx{matsize}$(x)$: $x$ being a vector or matrix, returns a row vector
with two components, the first being the number of rows (1 for a row vector),
the second the number of columns (1 for a column vector).

\syn{matsize}{x}.

\subsecidx{matsnf}$(X,\{\fl=0\})$: if $X$ is a (singular or non-singular)
matrix outputs the vector of elementary divisors of $X$ (i.e.~the diagonal of
the \idx{Smith normal form} of $X$).

The binary digits of \fl\ mean:

1 (complete output): if set, outputs $[U,V,D]$, where $U$ and $V$ are two
unimodular matrices such that $UXV$ is the diagonal matrix $D$. Otherwise
output only the diagonal of $D$.

2 (generic input): if set, allows polynomial entries, in which case the
input matrix must be square. Otherwise, assume that $X$ has integer
coefficients with arbitrary shape.

4 (cleanup): if set, cleans up the output. This means that elementary
divisors equal to $1$ will be deleted, i.e.~outputs a shortened vector $D'$
instead of $D$. If complete output was required, returns $[U',V',D']$ so
that $U'XV' = D'$ holds. If this flag is set, $X$ is allowed to be of the
form $D$ or $[U,V,D]$ as would normally be output with the cleanup flag
unset.

\syn{matsnf0}{X,\fl}. Also available is \funs{smith}{X} ($\fl=0$).

\subsecidx{matsolve}$(x,y)$: $x$ being an invertible matrix and $y$ a column
vector, finds the solution $u$ of $x*u=y$, using Gaussian elimination. This
has the same effect as, but is a bit faster than, $x^{-1}*y$.

\syn{gauss}{x,y}.

\subsecidx{matsolvemod}$(m,d,y,\{\fl=0\})$: $m$ being any integral matrix,
$d$ a vector of positive integer moduli, and $y$ an integral
column vector, gives a small integer solution to the system of congruences
$\sum_j m_{i,j}x_j\equiv y_i\pmod{d_i}$ if one exists, otherwise returns
zero. Shorthand notation: $y$ (resp.~$d$) can be given as a single integer,
in which case all the $y_i$ (resp.~$d_i$) above are taken to be equal to $y$
(resp.~$d$).
\bprog
  ? m = [1,2;3,4];
  ? matsolvemod(m, [3,4], [1,2]~)
  %2 = [-2, 0]~
  ? matsolvemod(m, 3, 1) \\ m X = [1,1]~ over F_3
  %3 = [-1, 1]~
@eprog

If $\fl=1$, all solutions are returned in the form of a two-component row
vector $[x,u]$, where $x$ is a small integer solution to the system of
congruences and $u$ is a matrix whose columns give a basis of the homogeneous
system (so that all solutions can be obtained by adding $x$ to any linear
combination of columns of $u$). If no solution exists, returns zero.

\syn{matsolvemod0}{m,d,y,\fl}. Also available
are \funs{gaussmodulo}{m,d,y} ($\fl=0$)
and \funs{gaussmodulo2}{m,d,y} ($\fl=1$).

\subsecidx{matsupplement}$(x)$: assuming that the columns of the matrix $x$
are linearly independent (if they are not, an error message is issued), finds
a square invertible matrix whose first columns are the columns of $x$,
i.e.~supplement the columns of $x$ to a basis of the whole space.

\syn{suppl}{x}.

\subsecidx{mattranspose}$(x)$ or $x\til$: transpose of $x$.
This has an effect only on vectors and matrices.

\syn{gtrans}{x}.

\subsecidx{minpoly}$(A,\{v=x\},\{\fl=0\})$: \idx{minimal polynomial}
of $A$ with respect to the variable $v$, i.e.~the monic polynomial $P$
of minimal degree (in the variable $v$) such that $P(A) = 0$.
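
For instance, in contrast with \kbd{charpoly}, one should obtain
\bprog
? minpoly([1,1; 0,1])
%1 = x^2 - 2*x + 1
? minpoly(matid(3))
%2 = x - 1
@eprog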

\syn{minpoly}{A,v}, where $v$ is the variable number. 

\subsecidx{qfgaussred}$(q)$: \idx{decomposition into squares} of the
quadratic form represented by the symmetric matrix $q$. The result is a
matrix whose diagonal entries are the coefficients of the squares, and the
non-diagonal entries represent the bilinear forms. More precisely, if
$(a_{ij})$ denotes the output, one has
$$ q(x) = \sum_i a_{ii} (x_i + \sum_{j>i} a_{ij} x_j)^2 $$
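
For instance, for $q(x,y) = x^2 + 4xy + y^2 = (x+2y)^2 - 3y^2$, the relevant
entries of the output should be
\bprog
? A = qfgaussred([1,2; 2,1]);
? [A[1,1], A[1,2], A[2,2]]
%2 = [1, 2, -3]
@eprog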

\syn{sqred}{x}.

\subsecidx{qfjacobi}$(x)$: $x$ being a real symmetric matrix, this gives a
vector having two components: the first one is the vector of eigenvalues of
$x$, the second is the corresponding orthogonal matrix of eigenvectors of
$x$. The method used is Jacobi's method for symmetric matrices.

\syn{jacobi}{x}.

\subsecidx{qflll}$(x,\{\fl=0\})$: \idx{LLL} algorithm applied to the
\emph{columns} of the matrix $x$. The columns of $x$ must be linearly
independent, unless specified otherwise below. The result is a unimodular
transformation matrix $T$ such that $x \cdot T$ is an LLL-reduced basis of
the lattice generated by the column vectors of $x$.

If $\fl=0$ (default), the computations are done with floating point numbers,
using Householder matrices for orthogonalization. If $x$ has integral
entries, then computations are nonetheless approximate, with precision
varying as needed (Lehmer's trick, as generalized by Schnorr).

If $\fl=1$, it is assumed that $x$ is integral. The computation is done
entirely with integers. In this case, $x$ need not be of maximal rank, but
if it is not, $T$ will not be square. This is slower and no more
accurate than $\fl=0$ above if $x$ has small dimension (say $100$ or less).

If $\fl=2$, $x$ should be an integer matrix whose columns are linearly
independent. Returns a partially reduced basis for $x$, using an unpublished
algorithm by Peter Montgomery: a basis is said to be \emph{partially reduced}
if $|v_i \pm v_j| \geq |v_i|$ for any two distinct basis vectors $v_i, \,
v_j$.

This is significantly faster than $\fl=1$, esp. when one row is huge compared
to the other rows. Note that the resulting basis is \emph{not} LLL-reduced in
general.

If $\fl=4$, $x$ is assumed to have integral entries, but need not be of
maximal rank. The result is a two-component vector of matrices: the
columns of the first matrix represent a basis of the integer kernel of $x$
(not necessarily LLL-reduced) and the second matrix is the transformation
matrix $T$ such that $x\cdot T$ is an LLL-reduced $\Z$-basis of the image
of the matrix $x$.

If $\fl=5$, same as case $4$, but $x$ may have polynomial coefficients.

If $\fl=8$, same as case $0$, but $x$ may have polynomial coefficients.

\syn{qflll0}{x,\fl,\var{prec}}. Also available are
\funs{lll}{x,\var{prec}} ($\fl=0$), \funs{lllint}{x} ($\fl=1$), and
\funs{lllkerim}{x} ($\fl=4$).

\subsecidx{qflllgram}$(G,\{\fl=0\})$: same as \kbd{qflll}, except that the
matrix $G = \kbd{x\til * x}$ is the Gram matrix of some lattice vectors $x$,
and not the coordinates of the vectors themselves. In particular, $G$ must
now be a square symmetric real matrix, corresponding to a positive definite
quadratic form. The result is a unimodular transformation matrix $T$ such
that $x \cdot T$ is an LLL-reduced basis of the lattice generated by the
column vectors of $x$.

If $\fl=0$ (default): the computations are done with floating point numbers,
using Householder matrices for orthogonalization. If $G$ has integral
entries, then computations are nonetheless approximate, with precision
varying as needed (Lehmer's trick, as generalized by Schnorr).

If $\fl=1$: $G$ has integer entries, still positive but not necessarily
definite (i.e.~$x$ need not have maximal rank). The computations are all
done in integers and should be slower than the default, unless the latter
triggers accuracy problems.

$\fl=4$: $G$ has integer entries, gives the kernel and reduced image of $x$.

$\fl=5$: same as case $4$, but $G$ may have polynomial coefficients.

\syn{qflllgram0}{G,\fl,\var{prec}}. Also available are
\funs{lllgram}{G,\var{prec}} ($\fl=0$), \funs{lllgramint}{G} ($\fl=1$), and
\funs{lllgramkerim}{G} ($\fl=4$).

\subsecidx{qfminim}$(x,\{b\},\{m\},\{\fl=0\})$: $x$ being a square and symmetric
matrix representing a positive definite quadratic form, this function
deals with the vectors of $x$ whose norm is less than or equal to $b$,
enumerated using the Fincke-Pohst algorithm. The function searches for
the minimal non-zero vectors if $b$ is omitted. The precise behaviour
depends on $\fl$.

If $\fl=0$ (default), seeks at most $2m$ vectors. The result is a
three-component vector, the first component being the number of vectors
found, the second being the maximum norm found, and the last vector is a
matrix whose columns are the vectors found, only one being given for each
pair $\pm v$ (at most $m$ such pairs). The vectors are returned in no
particular order. In this variant, an explicit $m$ must be provided.

If $\fl=1$, ignores $m$ and returns the first vector whose norm is less
than $b$. In this variant, an explicit $b$ must be provided.

In both these cases, $x$ is assumed to have integral entries. The
implementation uses low precision floating point computations for maximal
speed, which gives incorrect result when $x$ has large entries. (The
condition is checked in the code and the routine will raise an error if
large rounding errors occur.) A more robust, but much slower,
implementation is chosen if the following flag is used:

If $\fl=2$, $x$ can have non integral real entries. In this case, if $b$
is omitted, the ``minimal'' vectors only have approximately the same norm.
If $b$ is omitted, $m$ is an upper bound for the number of vectors that
will be stored and returned, but all minimal vectors are nevertheless
enumerated. If $m$ is omitted, all vectors found are stored and returned;
note that this may be a huge vector! 
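
For instance, for the standard lattice $\Z^3$ the minimal vectors are the
six vectors $\pm e_i$, so one should obtain
\bprog
? v = qfminim(matid(3), 1, 10);
? [v[1], v[2]]   \\ 6 vectors of norm 1, i.e. 3 pairs
%2 = [6, 1]
@eprog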

\syn{qfminim0}{x,b,m,\fl,\var{prec}}, also available are \funs{minim}{x,b,m}
($\fl=0$), \funs{minim2}{x,b,m} ($\fl=1$). In all cases, an omitted $b$
or $m$ is coded as \kbd{NULL}.

\subsecidx{qfperfection}$(x)$: $x$ being a square and symmetric matrix with
integer entries representing a positive definite quadratic form, outputs the
perfection rank of the form. That is, gives the rank of the family of the $s$
symmetric matrices $v_iv_i^t$, where $s$ is half the number of minimal
vectors and the $v_i$ ($1\le i\le s$) are the minimal vectors.

As a side note to old-timers, this used to fail bluntly when $x$ had more
than $5000$ minimal vectors. Beware that the computations can now be very
lengthy when $x$ has many minimal vectors.

\syn{perf}{x}.

\subsecidx{qfrep}$(q, B, \{\fl = 0\})$: $q$ being a square and symmetric
matrix with integer entries representing a positive definite quadratic form,
outputs the vector whose $i$-th entry, $1 \leq i \leq B$, is half the number
of vectors $v$ such that $q(v) = i$. This routine uses a naive algorithm
based on \tet{qfminim}, and will fail if any entry becomes larger than
$2^{31}$.

\noindent The binary digits of \fl\ mean:

\item 1: count vectors of even norm from $1$ to $2B$.

\item 2: return a \typ{VECSMALL} instead of a \typ{GEN}

\syn{qfrep0}{q, B, \fl}.

\subsecidx{qfsign}$(x)$: signature of the quadratic form represented by the
symmetric matrix $x$. The result is a two-component vector.
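
For instance, one expects
\bprog
? qfsign([1,0; 0,-2])   \\ one positive, one negative eigenvalue
%1 = [1, 1]
@eprog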

\syn{signat}{x}.

\subsecidx{setintersect}$(x,y)$: intersection of the two sets $x$ and $y$.

\syn{setintersect}{x,y}.

\subsecidx{setisset}$(x)$: returns true (1) if $x$ is a set, false (0) if
not. In PARI, a set is simply a row vector whose entries are strictly
increasing. To convert any vector (and other objects) into a set, use the
function \kbd{Set}.

\syn{setisset}{x}, and this returns a \kbd{long}.

\subsecidx{setminus}$(x,y)$: difference of the two sets $x$ and $y$,
i.e.~set of elements of $x$ which do not belong to $y$.

\syn{setminus}{x,y}.

\subsecidx{setsearch}$(x,y,\{\fl=0\})$: searches if $y$ belongs to the set
$x$. If it does and $\fl$ is zero or omitted, returns the index $j$ such that
$x[j]=y$, otherwise returns 0. If $\fl$ is non-zero returns the index $j$
where $y$ should be inserted, and $0$ if it already belongs to $x$ (this is
meant to be used in conjunction with \kbd{listinsert}).

This function works also if $x$ is a \emph{sorted} list (see \kbd{listsort}).

\syn{setsearch}{x,y,\fl} which returns a \kbd{long}
integer.

\subsecidx{setunion}$(x,y)$: union of the two sets $x$ and $y$.

\syn{setunion}{x,y}.

\subsecidx{trace}$(x)$: this applies to quite general $x$. If $x$ is not a
matrix, it is equal to the sum of $x$ and its conjugate, except for polmods
where it is the trace as an algebraic number.

For $x$ a square matrix, it is the ordinary trace. If $x$ is a
non-square matrix (but not a vector), an error occurs.
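
For instance, one should obtain
\bprog
? trace(1 + 2*I)
%1 = 2
? trace(Mod(x, x^2 - 3*x + 1))
%2 = 3
? trace([1,2; 3,4])
%3 = 5
@eprog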

\syn{gtrace}{x}.

\subsecidx{vecextract}$(x,y,\{z\})$: extraction of components of the
vector or matrix $x$ according to $y$. In case $x$ is a matrix, its
components are as usual the \emph{columns} of $x$. The parameter $y$ is a
component specifier, which is either an integer, a string describing a
range, or a vector.

If $y$ is an integer, it is considered as a mask: the binary bits of $y$ are
read from right to left, but correspond to taking the components from left to
right. For example, if $y=13=(1101)_2$ then the components 1,3 and 4 are
extracted.

If $y$ is a vector, which must have integer entries, these entries correspond
to the component numbers to be extracted, in the order specified.

If $y$ is a string, it can be

\item a single (non-zero) index giving a component number (a negative
index means we start counting from the end).

\item a range of the form \kbd{"$a$..$b$"}, where $a$ and $b$ are
indexes as above. Any of $a$ and $b$ can be omitted; in this case, we take
as default values $a = 1$ and $b = -1$, i.e.~ the first and last components
respectively. We then extract all components in the interval $[a,b]$, in
reverse order if $b < a$.

In addition, if the first character in the string is \kbd{\pow}, the
complement of the given set of indices is taken.

If $z$ is not omitted, $x$ must be a matrix. $y$ is then the \emph{line}
specifier, and $z$ the \emph{column} specifier, where the component specifier
is as explained above.

\bprog
? v = [a, b, c, d, e];
? vecextract(v, 5)          \\@com mask
%1 = [a, c]
? vecextract(v, [4, 2, 1])  \\@com component list
%2 = [d, b, a]
? vecextract(v, "2..4")     \\@com interval
%3 = [b, c, d]
? vecextract(v, "-1..-3")   \\@com interval + reverse order
%4 = [e, d, c]
? vecextract(v, "^2")       \\@com complement
%5 = [a, c, d, e]
? vecextract(matid(3), "2..", "..")
%6 =
[0 1 0]

[0 0 1]
@eprog

\syn{extract}{x,y} or \funs{matextract}{x,y,z}.

\subsecidx{vecsort}$(x,\{k\},\{\fl=0\})$: sorts the vector $x$ in ascending
order, using a mergesort method. $x$ must be a vector, and its components
integers, reals, or fractions.

If $k$ is present and is an integer, sorts according to the value of the
$k$-th subcomponents of the components of~$x$. Note that mergesort is
stable, hence the initial ordering of ``equal'' entries (with respect to
the sorting criterion) is not changed.

$k$ can also be a vector, in which case the sorting is done lexicographically
according to the components listed in the vector $k$. For example, if
$k=[2,1,3]$, sorting will be done with respect to the second component, and
when these are equal, with respect to the first, and when these are equal,
with respect to the third.
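
For instance, sorting with respect to the first subcomponent, and observing
that the initial order of the two entries starting with $1$ is preserved,
one should obtain
\bprog
? vecsort([[2,3], [1,5], [1,2]], 1)
%1 = [[1, 5], [1, 2], [2, 3]]
@eprog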

\noindent The binary digits of \fl\ mean:

\item 1: indirect sorting of the vector $x$, i.e.~if $x$ is an
$n$-component vector, returns a permutation of $[1,2,\dots,n]$ which
applied to the components of $x$ sorts $x$ in increasing order.
For example, \kbd{vecextract(x, vecsort(x,,1))} is equivalent to
\kbd{vecsort(x)}.

\item 2: sorts $x$ by ascending lexicographic order (as per the
\kbd{lex} comparison function).

\item 4: use descending instead of ascending order.

\syn{vecsort0}{x,k,flag}. To omit $k$, use \kbd{NULL} instead. You can also
use the simpler functions

\funs{sort}{x} (= \funs{vecsort0}{x,\kbd{NULL},0}).

\funs{indexsort}{x} (= \funs{vecsort0}{x,\kbd{NULL},1}).

\funs{lexsort}{x} (= \funs{vecsort0}{x,\kbd{NULL},2}).

Also available are \funs{sindexsort}{x} and \funs{sindexlexsort}{x} which
return a \typ{VECSMALL} $v$, where $v[1]\dots v[n]$ contain the indices.

\subsecidx{vector}$(n,\{X\},\{\var{expr}=0\})$: creates a row vector (type
\typ{VEC}) with $n$ components whose components are the expression
\var{expr} evaluated at the integer points between 1 and $n$. If one of the
last two arguments is omitted, fill the vector with zeroes.

Avoid modifying $X$ within \var{expr}; if you do, the formal variable
still runs from $1$ to $n$. In particular, \kbd{vector(n,i,expr)} is not
equivalent to
\bprog
    v = vector(n)
    for (i = 1, n, v[i] = expr)
@eprog\noindent
as the following example shows:
\bprog
    n = 3
    v = vector(n); vector(n, i, i++)            ----> [2, 3, 4]
    v = vector(n); for (i = 1, n, v[i] = i++)   ----> [2, 0, 4]
@eprog\noindent


\synt{vecteur}{GEN nmax, entree *ep, char *expr}.

\subsecidx{vectorsmall}$(n,\{X\},\{\var{expr}=0\})$: creates a row vector of small integers (type
\typ{VECSMALL}) with $n$ components whose components are the expression
\var{expr} evaluated at the integer points between 1 and $n$. If one of the
last two arguments is omitted, fill the vector with zeroes.

\synt{vecteursmall}{GEN nmax, entree *ep, char *expr}.

\subsecidx{vectorv}$(n,X,\var{expr})$: as \tet{vector}, but returns a
column vector (type \typ{COL}).

\synt{vvecteur}{GEN nmax, entree *ep, char *expr}.

\section{Sums, products, integrals and similar functions}
\label{se:sums}

Although the \kbd{gp} calculator is programmable, it is useful to have
preprogrammed a number of loops, including sums, products, and a certain
number of recursions. Also, a number of functions from numerical analysis
like numerical integration and summation of series will be described here.

One of the parameters in these loops must be the control variable, hence a
simple variable name. In the descriptions, the letter $X$ will always denote
any simple variable name, and represents the formal parameter used in the
function. The expression to be summed, integrated, etc. is any legal PARI
expression, including of course expressions using loops.

\misctitle{Library mode.}
Since it is easier to program directly the loops in library mode, these
functions are mainly useful for GP programming. Using them in library mode is
tricky and we will not give any details, although the reader can try and figure
it out by himself by checking the example given for \tet{sum}.

On the other hand, numerical routines code a function (to be integrated,
summed, etc.) with two parameters named
\bprog
  GEN (*eval)(GEN,void*)
  void *E;
@eprog\noindent
The second is meant to contain all auxiliary data needed by your function.
The first is such that \kbd{eval(x, E)} returns your function evaluated at
\kbd{x}. For instance, one may code the family of functions
$f_t: x \to (x+t)^2$ via
\bprog
GEN f(GEN x, void *t) { return gsqr(gadd(x, (GEN)t)); }
@eprog\noindent
One can then integrate $f_1$ between $a$ and $b$ with the call
\bprog
intnum((void*)stoi(1), &f, a, b, NULL, prec);
@eprog\noindent
Since you can set \kbd{E} to a pointer to any \kbd{struct} (typecast to
\kbd{void*}) the above mechanism handles arbitrary functions. For simple
functions without extra parameters, you may set \kbd{E = NULL} and ignore
that argument in your function definition.

\misctitle{Numerical integration.}\sidx{numerical integration}
Starting with version 2.2.9 the powerful ``double exponential'' univariate
integration method is implemented in \tet{intnum} and its variants. Romberg
integration is still available under the name \kbd{intnumromb}, but
superseded. It is possible to compute numerically integrals to thousands of
decimal places in reasonable time, as long as the integrand is regular. It is
also reasonable to compute numerically integrals in several variables,
although more than two becomes lengthy. The integration domain may be
non-compact, and the integrand may have reasonable singularities at
endpoints. To use \kbd{intnum}, the user must split the integral into a sum
of subintegrals where the function has (possible) singularities only at the
endpoints. Polynomials in logarithms are not considered singular, and
neglecting these logs, singularities are assumed to be algebraic (in other
words asymptotic to $C(x-a)^{\alpha}$ for some $\alpha$ such that
$\alpha>-1$ when $x$ is close to $a$), or to correspond to simple
discontinuities of some (higher) derivative of the function. For instance,
the point $0$ is a singularity of $\text{abs}(x)$.

See also the discrete summation methods below (sharing the prefix \kbd{sum}).

\subsecidx{intcirc}$(X=a,R,\var{expr}, \{\var{tab}\})$: numerical
integration of \var{expr} with respect to $X$ on the circle $|X-a|=R$,
divided by $2i\pi$. In other words, when \var{expr} is a meromorphic
function, sum of the residues in the corresponding disk. \var{tab} is as in
\kbd{intnum}, except that if computed with \kbd{intnuminit} it should be with
the endpoints \kbd{[-1, 1]}.

\bprog
? \p105
? intcirc(s=1, 0.5, zeta(s)) - 1
time = 3,460 ms.
%1 = -2.40... E-104 - 2.7... E-106*I
@eprog

\synt{intcirc}{void *E, GEN (*eval)(GEN,void*), GEN a,GEN R,GEN tab, long prec}.

\subsecidx{intfouriercos}$(X=a,b,z,\var{expr},\{\var{tab}\})$: numerical
integration of $\var{expr}(X)\cos(2\pi zX)$ from $a$ to $b$, in other words
Fourier cosine transform (from $a$ to $b$) of the function represented by
\var{expr}. $a$ and $b$ are coded as in \kbd{intnum}, and are not necessarily
at infinity, but if they are, oscillations (i.e. $[[\pm1],\alpha I]$) are
forbidden.

\synt{intfouriercos}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN b, GEN z, GEN tab, long prec}.

\subsecidx{intfourierexp}$(X=a,b,z,\var{expr},\{\var{tab}\})$: numerical
integration of $\var{expr}(X)\exp(-2\pi zX)$ from $a$ to $b$, in other words
Fourier transform (from $a$ to $b$) of the function represented by
\var{expr}. Note the minus sign. $a$ and $b$ are coded as in \kbd{intnum},
and are not necessarily at infinity but if they are, oscillations (i.e.
$[[\pm1],\alpha I]$) are forbidden.

\synt{intfourierexp}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN b, GEN z, GEN tab, long prec}.

\subsecidx{intfouriersin}$(X=a,b,z,\var{expr},\{\var{tab}\})$: numerical
integration of $\var{expr}(X)\sin(2\pi zX)$ from $a$ to $b$, in other words
Fourier sine transform (from $a$ to $b$) of the function represented by
\var{expr}. $a$ and $b$ are coded as in \kbd{intnum}, and are not necessarily
at infinity but if they are, oscillations (i.e. $[[\pm1],\alpha I]$) are
forbidden.

\synt{intfouriersin}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN b, GEN z, GEN tab, long prec}.

\subsecidx{intfuncinit}$(X=a,b,\var{expr},\{\fl=0\},\{m=0\})$:
initialize tables for use with integral transforms such as \kbd{intmellininv},
etc., where $a$ and $b$ are coded as in \kbd{intnum}, $\var{expr}$ is the
function $s(X)$ to which the integral transform is to be applied (which will
multiply the weights of integration) and $m$ is as in \kbd{intnuminit}. If
$\fl$ is nonzero, assumes that $s(-X)=\overline{s(X)}$, which makes the
computation twice as fast. See \kbd{intmellininvshort} for examples of the
use of this function, which is particularly useful when the function $s(X)$
is lengthy to compute, such as a gamma product.

\synt{intfuncinit}{void *E, GEN (*eval)(GEN,void*), GEN a,GEN b,long m, long flag, long prec}.
Note that the order of $m$ and $\fl$ is reversed compared to the \kbd{GP}
syntax.

\subsecidx{intlaplaceinv}$(X=sig,z,\var{expr},\{\var{tab}\})$:
numerical integration of $\var{expr}(X)e^{Xz}$ with respect to $X$ on the line
$\Re(X)=sig$, divided by $2i\pi$, in other words, inverse Laplace transform
of the function corresponding to \var{expr} at the value $z$.

$sig$ is coded as follows. Either it is a real number $\sigma$, equal to the
abscissa of integration, and then the function to be integrated is assumed to
be slowly decreasing when the imaginary part of the variable tends to
$\pm\infty$. Or it is a two component vector $[\sigma,\alpha]$, where
$\sigma$ is as before, and either $\alpha=0$ for slowly decreasing functions,
or $\alpha>0$ for functions decreasing like $\exp(-\alpha t)$. Note that it
is not necessary to choose the exact value of $\alpha$. \var{tab} is as in
\kbd{intnum}.

It is often a good idea to use this function with a value of $m$ one or two
higher than the one chosen by default (which can be viewed thanks to the
function \kbd{intnumstep}), or to increase the abscissa of integration
$\sigma$. For example:

\bprog
? \p 105
? intlaplaceinv(x=2, 1, 1/x) - 1
time = 350 ms.
%1 = 7.37... E-55 + 1.72... E-54*I \\@com not so good
? m = intnumstep()
%2 = 7
? intlaplaceinv(x=2, 1, 1/x, m+1) - 1
time = 700 ms.
%3 = 3.95... E-97 + 4.76... E-98*I \\@com better
? intlaplaceinv(x=2, 1, 1/x, m+2) - 1
time = 1400 ms.
%4 = 0.E-105 + 0.E-106*I \\@com perfect but slow.
? intlaplaceinv(x=5, 1, 1/x) - 1
time = 340 ms.
%5 = -5.98... E-85 + 8.08... E-85*I \\@com better than \%1
? intlaplaceinv(x=5, 1, 1/x, m+1) - 1
time = 680 ms.
%6 = -1.09... E-106 + 0.E-104*I \\@com perfect, fast.
? intlaplaceinv(x=10, 1, 1/x) - 1
time = 340 ms.
%7 = -4.36... E-106 + 0.E-102*I \\@com perfect, fastest, but why $sig=10$?
? intlaplaceinv(x=100, 1, 1/x) - 1
time = 330 ms.
%7 = 1.07... E-72 + 3.2... E-72*I \\@com too far now...
@eprog

\synt{intlaplaceinv}{void *E, GEN (*eval)(GEN,void*), GEN sig,GEN z, GEN tab, long prec}.

\subsecidx{intmellininv}$(X=sig,z,\var{expr},\{\var{tab}\})$: numerical
integration of $\var{expr}(X)z^{-X}$ with respect to $X$ on the line
$\Re(X)=sig$, divided by $2i\pi$, in other words, inverse Mellin transform of
the function corresponding to \var{expr} at the value $z$.

$sig$ is coded as follows. Either it is a real number $\sigma$, equal to the
abscissa of integration, and then the function to be integrated is assumed to
decrease exponentially fast, of the order of $\exp(-t)$ when the imaginary
part of the variable tends to $\pm\infty$. Or it is a two component vector
$[\sigma,\alpha]$, where $\sigma$ is as before, and either $\alpha=0$ for
slowly decreasing functions, or $\alpha>0$ for functions decreasing like
$\exp(-\alpha t)$, such as gamma products. Note that it is not necessary to
choose the exact value of $\alpha$, and that $\alpha=1$ (equivalent to $sig$
alone) is usually sufficient. \var{tab} is as in \kbd{intnum}.

As all similar functions, this function is provided for the convenience of
the user, who could use \kbd{intnum} directly. However it is in general
better to use \kbd{intmellininvshort}.

\bprog
? \p 105
? intmellininv(s=2,4, gamma(s)^3);
time = 1,190 ms. \\@com reasonable.
? \p 308
? intmellininv(s=2,4, gamma(s)^3);
time = 51,300 ms. \\@com slow because of $\Gamma(s)^3$.
@eprog\noindent

\synt{intmellininv}{void *E, GEN (*eval)(GEN,void*), GEN sig, GEN z, GEN tab, long prec}.

\subsecidx{intmellininvshort}$(sig,z,tab)$: numerical integration
of $s(X)z^{-X}$ with respect to $X$ on the line $\Re(X)=sig$, divided by
$2i\pi$, in other words, inverse Mellin transform of $s(X)$ at the value $z$.
Here $s(X)$ is implicitly contained in \var{tab} in \kbd{intfuncinit} format,
typically
\bprog
  tab = intfuncinit(T = [-1], [1], s(sig + I*T))
@eprog\noindent
or similar commands. Take the example of the inverse Mellin transform of
$\Gamma(s)^3$ given in \kbd{intmellininv}:

\bprog
? \p 105
? oo = [1]; \\@com for clarity
? A = intmellininv(s=2,4, gamma(s)^3);
time = 2,500 ms. \\@com not too fast because of $\Gamma(s)^3$.
\\ @com function of real type, decreasing as $\exp(-3\pi/2\cdot |t|)$
? tab = intfuncinit(t=[-oo, 3*Pi/2],[oo, 3*Pi/2], gamma(2+I*t)^3, 1);
time = 1,370 ms.
? intmellininvshort(2,4, tab) - A
time = 50 ms.
%4 = -1.26... - 3.25...E-109*I \\@com 50 times faster than \kbd{A} and perfect.
? tab2 = intfuncinit(t=-oo, oo, gamma(2+I*t)^3, 1);
? intmellininvshort(2,4, tab2)
%6 = -1.2...E-42 - 3.2...E-109*I  \\@com 63 digits lost
@eprog\noindent
In the computation of \var{tab}, it was not essential to include the
\emph{exact} exponential decrease of $\Gamma(2+it)^3$. But as the last
example shows, a rough indication \emph{must} be given, otherwise slow
decrease is assumed, resulting in catastrophic loss of accuracy.

\synt{intmellininvshort}{GEN sig, GEN z, GEN tab, long prec}.

\subsecidx{intnum}$(X=a,b,\var{expr},\{\var{tab}\})$: numerical integration
of \var{expr} on $[a,b]$ (possibly infinite interval) with respect to $X$,
where $a$ and $b$ are coded as explained below. The integrand may have values
belonging to a vector space over the real numbers; in particular, it can be
complex-valued or vector-valued.

If \var{tab} is omitted, necessary integration tables are computed using
\kbd{intnuminit} according to the current precision. It may be a positive
integer $m$, and tables are computed assuming the integration step is
$1/2^m$. Finally \var{tab} can be a table output by \kbd{intnuminit}, in
which case it is used directly. This is important if several integrations of
the same type are performed (on the same kind of interval and functions, and
the same accuracy), since it saves expensive precomputations.

If \var{tab} is omitted the algorithm guesses a reasonable value for $m$
depending on the current precision. That value may be obtained as
\bprog
  intnumstep()
@eprog\noindent
However this value may be off from the optimal one, and this is important
since the integration time is roughly proportional to $2^m$. One may try
consecutive values of $m$ until they give the same value up to an accepted
error.

The endpoints $a$ and $b$ are coded as follows. If $a$ is not at $\pm\infty$,
it is either coded as a scalar (real or complex), or as a two component vector
$[a,\alpha]$, where the function is assumed to have a singularity of the
form $(x-a)^{\alpha+\epsilon}$ at $a$, where $\epsilon$ indicates that powers
of logarithms are neglected. In particular, $[a,\alpha]$ with $\alpha\ge 0$
is equivalent to $a$. If a wrong singularity exponent is used, the result
will lose a catastrophic number of decimals, for instance approximately half
the number of digits will be correct if $\alpha=-1/2$ is omitted.

The endpoints of integration can be $\pm\infty$, which is coded as
$[\pm 1]$ or as $[[\pm1],\alpha]$. Here $\alpha$ codes the behaviour of the
function at $\pm\infty$ as follows.

\item $\alpha=0$ (or no $\alpha$ at all, i.e. simply $[\pm1]$) assumes that the
function to be integrated tends to zero, but not exponentially fast, and not
oscillating such as $\sin(x)/x$.

\item $\alpha>0$ assumes that the function tends to zero exponentially fast
approximately as $\exp(-\alpha x)$, including reasonably oscillating
functions such as $\exp(-x)\sin(x)$. The precise choice of $\alpha$, while
useful in extreme cases, is not critical, and may be off by a \emph{factor}
of $10$ or more from the correct value.

\item $\alpha<-1$ assumes that the function tends to $0$ slowly, like
$x^{\alpha}$. Here it is essential to give the correct $\alpha$, if possible,
but on the other hand $\alpha\le -2$ is equivalent to $\alpha=0$, in other
words to no $\alpha$ at all.

\smallskip The last two codes are reserved for oscillating functions.
Let $k > 0$ real, and $g(x)$ a nonoscillating function tending to $0$, then

\item $\alpha=k I$ assumes that the function behaves like $\cos(kx)g(x)$.

\item $\alpha=-kI$ assumes that the function behaves like $\sin(kx)g(x)$.

\noindent Here it is critical to give the exact value of $k$. If the
oscillating part is not a pure sine or cosine, one must expand it into a
Fourier series, use the above codings, and sum the resulting contributions.
Otherwise you will get nonsense. Note that $\cos(kx)$ (and similarly
$\sin(kx)$) means that very function, and not a translated version such as
$\cos(kx+a)$.

If for instance $f(x)=\cos(kx)g(x)$ where $g(x)$ tends to zero exponentially
fast as $\exp(-\alpha x)$, it is up to the user to choose between
$[[\pm1],\alpha]$ and $[[\pm1],kI]$, but a good rule of thumb is that if the
oscillations are much weaker than the exponential decrease, choose
$[[\pm1],\alpha]$, otherwise choose $[[\pm1],kI]$, although the latter can
reasonably be used in all cases, while the former cannot. To take a specific
example, in the inverse Mellin transform, the function to be integrated is
almost always exponentially decreasing times oscillating. If we choose the
oscillating type of integral we perhaps obtain the best results, at the
expense of having to recompute our functions for a different value of the
variable $z$ giving the transform, preventing us from using a function such as
\kbd{intmellininvshort}. On the other hand using the exponential type of
integral, we obtain less accurate results, but we skip expensive
recomputations. See \kbd{intmellininvshort} and \kbd{intfuncinit} for more
explanations.

\smallskip
\misctitle{Note.} If you do not like the code $[\pm1]$ for $\pm\infty$, you
are welcome to set, e.g.~\kbd{oo = [1]} or \kbd{INFINITY = [1]}, then
using \kbd{+oo}, \kbd{-oo}, \kbd{-INFINITY}, etc. will have the expected
behaviour.

We shall now see many examples to get a feeling for what the various
parameters achieve. All examples below assume precision is set to $105$
decimal digits. We first type
\bprog
? \p 105
? oo = [1]  \\@com for clarity
@eprog

\misctitle{Apparent singularities.} Even if the function $f(x)$ represented
by \var{expr} has no singularities, it may be important to define the
function differently near special points. For instance, if $f(x) = 1
/(\exp(x)-1) - \exp(-x)/x$, then $\int_0^\infty f(x)\,dx=\gamma$, Euler's
constant \kbd{Euler}. But

\bprog
? f(x) = 1/(exp(x)-1) - exp(-x)/x
? intnum(x = 0, [oo,1],  f(x)) - Euler
%1 = 6.00... E-67
@eprog\noindent
thus only correct to $66$ decimal digits. This is because close to $0$ the
function $f$ is computed with an enormous loss of accuracy.
 A better solution is

\bprog
? f(x) = 1/(exp(x)-1)-exp(-x)/x
? F = truncate( f(t + O(t^7)) ); \\@com expansion around t = 0
? g(x) = if (x > 1e-18, f(x), subst(F,t,x))  \\@com note that $6 \cdot 18 > 105$
? intnum(x = 0, [oo,1],  g(x)) - Euler
%2 = 0.E-106 \\@com perfect
@eprog\noindent
It is up to the user to determine constants such as the $10^{-18}$ and $7$
used above.

\misctitle{True singularities.} With true singularities the result is much
worse. For instance

\bprog
? intnum(x = 0, 1,  1/sqrt(x)) - 2
%1 = -1.92... E-59 \\@com only $59$ correct decimals

? intnum(x = [0,-1/2], 1,  1/sqrt(x)) - 2
%2 = 0.E-105 \\@com better
@eprog

\misctitle{Oscillating functions.}

\bprog
? intnum(x = 0, oo, sin(x) / x) - Pi/2
%1 = 20.78.. \\@com nonsense
? intnum(x = 0, [oo,1], sin(x)/x) - Pi/2
%2 = 0.004.. \\@com bad
? intnum(x = 0, [oo,-I], sin(x)/x) - Pi/2
%3 = 0.E-105 \\@com perfect
? intnum(x = 0, [oo,-I], sin(2*x)/x) - Pi/2  \\@com oops, wrong $k$
%4 = 0.07...
? intnum(x = 0, [oo,-2*I], sin(2*x)/x) - Pi/2
%5 = 0.E-105 \\@com perfect

? intnum(x = 0, [oo,-I], sin(x)^3/x) - Pi/4
%6 = 0.0092... \\@com bad
? sin(x)^3 - (3*sin(x)-sin(3*x))/4
%7 = O(x^17)
@eprog\noindent
We may use the above linearization and compute two oscillating integrals with
``infinite endpoints'' \kbd{[oo, -I]} and \kbd{[oo, -3*I]} respectively, or
notice the obvious change of variable, and reduce to the single integral
${1\over 2}\int_0^\infty \sin(x)/x\,dx$. We finish with some more complicated
examples:

\bprog
? intnum(x = 0, [oo,-I], (1-cos(x))/x^2) - Pi/2
%1 = -0.0004... \\@com bad
? intnum(x = 0, 1, (1-cos(x))/x^2) \
+ intnum(x = 1, oo, 1/x^2) - intnum(x = 1, [oo,I], cos(x)/x^2) - Pi/2
%2 = -2.18... E-106 \\@com OK

? intnum(x = 0, [oo, 1], sin(x)^3*exp(-x)) - 0.3
%3 = 5.45... E-107 \\@com OK
? intnum(x = 0, [oo,-I], sin(x)^3*exp(-x)) - 0.3
%4 = -1.33... E-89 \\@com lost 16 decimals. Try higher $m$:
? m = intnumstep()
%5 = 7 \\@com the value of $m$ actually used above.
? tab = intnuminit(0,[oo,-I], m+1); \\@com try $m$ one higher.
? intnum(x = 0, oo, sin(x)^3*exp(-x), tab) - 0.3
%6 = 5.45... E-107 \\@com OK this time.
@eprog

\misctitle{Warning.} Like \tet{sumalt}, \kbd{intnum} often assigns a
reasonable value to diverging integrals. Use these values at your own risk!
For example:

\bprog
? intnum(x = 0, [oo, -I], x^2*sin(x))
%1 = -2.0000000000...
@eprog\noindent
Note the formula
$$ \int_0^\infty \sin(x)/x^s\,dx = \cos(\pi s/2) \Gamma(1-s)\;, $$
a priori valid only for $0 < \Re(s) < 2$, but the right hand side provides an
analytic continuation which may be evaluated at $s = -2$\dots

\misctitle{Multivariate integration.}
Using successive univariate integration with respect to different formal
parameters, it is straightforward to perform naive multivariate integration. But it is
important to use a suitable \kbd{intnuminit} to precompute data for the
\emph{internal} integrations at least!

For example, to compute the double integral on the unit disc $x^2+y^2\le1$
of the function $x^2+y^2$, we can write
\bprog
? tab = intnuminit(-1,1);
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab)
@eprog\noindent
The first \var{tab} is essential, the second optional. Compare:

\bprog
? tab = intnuminit(-1,1);
time = 30 ms.
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2));
time = 54,410 ms. \\@com slow
? intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2, tab), tab);
time = 7,210 ms.  \\@com faster
@eprog\noindent
However, the \kbd{intnuminit} program is usually pessimistic when it comes to
choosing the integration step $2^{-m}$. It is often possible to improve the
speed by trial and error. Continuing the above example:
\bprog
? test(M) =
{
  tab = intnuminit(-1,1, M);
  intnum(x=-1,1, intnum(y=-sqrt(1-x^2),sqrt(1-x^2), x^2+y^2,tab), tab) - Pi/2
}
? m = intnumstep() \\@com what value of $m$ did it take ?
%1 = 7
? test(m - 1)
time = 1,790 ms.
%2 = -2.05... E-104 \\@com $4 = 2^2$ times faster and still OK.
? test(m - 2)
time = 430 ms.
%3 = -1.11... E-104 \\@com $16 = 2^4$ times faster and still OK.
? test(m - 3)
time = 120 ms.
%4 = -7.23... E-60 \\@com $64 = 2^6$ times faster, lost $45$ decimals.
@eprog

\synt{intnum}{void *E, GEN (*eval)(GEN,void*), GEN a,GEN b,GEN tab, long prec},
where an omitted \var{tab} is coded as \kbd{NULL}.

\subsecidx{intnuminit}$(a,b,\{m=0\})$: initialize tables for integration from
$a$ to $b$, where $a$ and $b$ are coded as in \kbd{intnum}. Only the
compactness, the possible existence of singularities, the speed of decrease
or the oscillations at infinity are taken into account, and not the values.
For instance {\tt intnuminit(-1,1)} is equivalent to {\tt intnuminit(0,Pi)},
and {\tt intnuminit([0,-1/2],[1])} is equivalent to {\tt
intnuminit([-1],[-1,-1/2])}. If $m$ is not given, it is computed according to
the current precision. Otherwise the integration step is $1/2^m$. Reasonable
values of $m$ are $m=6$ or $m=7$ for $100$ decimal digits, and $m=9$ for
$1000$ decimal digits.
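
For instance, a table computed for one compact interval with no endpoint
singularity may be reused for any other such interval (an illustrative
sketch; both differences below should be negligible at the current precision):
\bprog
? tab = intnuminit(0, 1);
? intnum(x = 0, Pi, sin(x), tab) - 2
? intnum(x = -1, 1, 1/(1 + x^2), tab) - Pi/2
@eprog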

The result is technical, but in some cases it is useful to know the output.
Let $x=\phi(t)$ be the change of variable which is used. \var{tab}[1] contains
the integer $m$ as above, either given by the user or computed from the default
precision, and can be recomputed directly using the function \kbd{intnumstep}.
\var{tab}[2] and \var{tab}[3] contain respectively the abscissa and weight
corresponding to $t=0$ ($\phi(0)$ and $\phi'(0)$). \var{tab}[4] and
\var{tab}[5] contain the abscissas and weights corresponding to positive
$t=nh$ for $1\le n\le N$ and $h=1/2^m$ ($\phi(nh)$ and $\phi'(nh)$). Finally
\var{tab}[6] and \var{tab}[7] contain the abscissas and weights
corresponding to negative $t=nh$ for $-N\le n\le -1$; they may (but need not)
be empty when $\phi(t)$ is an odd function, in which case implicitly
$\var{tab}[6]=-\var{tab}[4]$ and $\var{tab}[7]=\var{tab}[5]$.

\synt{intnuminit}{GEN a, GEN b, long m, long prec}.

\subsecidx{intnumromb}$(X=a,b,\var{expr},\{\fl=0\})$: numerical integration of
\var{expr} (smooth in $]a,b[$), with respect to $X$. This function is
deprecated, use \tet{intnum} instead.

Set $\fl=0$ (or omit it altogether) when $a$ and $b$ are not too large, the
function is smooth, and can be evaluated exactly everywhere on the interval
$[a,b]$.

If $\fl=1$, uses a general driver routine for doing numerical integration,
making no particular assumption (slow).

$\fl=2$ is tailored for being used when $a$ or $b$ are infinite. One
\emph{must} have $ab>0$, and in fact if for example $b=+\infty$, then it is
preferable to have $a$ as large as possible, at least $a\ge1$.

If $\fl=3$, the function is allowed to be undefined (but continuous) at $a$
or $b$, for example the function $\sin(x)/x$ at $x=0$.

The user should not require too much accuracy: 18 or 28 decimal digits is OK,
but not much more. In addition, analytical cleanup of the integral must have
been done: there must be no singularities in the interval or at the
boundaries. In practice this can be accomplished with a simple change of
variable. Furthermore, for improper integrals, where one or both of the
limits of integration are plus or minus infinity, the function must decrease
sufficiently rapidly at infinity. This can often be accomplished through
integration by parts. Finally, the function to be integrated should not be
very small (compared to the current precision) on the entire interval. This
can of course be accomplished by just multiplying by an appropriate constant.

Note that \idx{infinity} can be represented with essentially no loss of
accuracy by 1e1000. However beware of real underflow when dealing with
rapidly decreasing functions. For example, if one wants to compute
$\int_0^\infty e^{-x^2}\,dx$ to 28 decimal digits, then one should set
infinity equal to 10, say, and certainly not to 1e1000.
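
For instance, following this advice, one might compute
$\int_0^\infty e^{-x^2}\,dx = \sqrt{\pi}/2$ at 28 digits as follows (an
illustrative sketch; the error should be close to the requested accuracy):
\bprog
? \p 28
? intnumromb(x = 0, 10, exp(-x^2)) - sqrt(Pi)/2  \\@com infinity replaced by 10
@eprog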

\synt{intnumromb}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN b, long flag, long prec},
where $\kbd{eval}(x, E)$ returns the value of the function at $x$.
You may store any additional information required by \kbd{eval} in $E$, or set
it to \kbd{NULL}.

\subsecidx{intnumstep}$()$: give the value of $m$ used in all the
\kbd{intnum} and \kbd{sumnum} programs, hence such that the integration
step is equal to $1/2^m$.

\synt{intnumstep}{long prec}.

\subsecidx{prod}$(X=a,b,\var{expr},\{x=1\})$: product of expression
\var{expr}, initialized at $x$, the formal parameter $X$ going from $a$ to
$b$. As for \kbd{sum}, the main purpose of the initialization parameter $x$
is to force the type of the operations being performed. For example if it is
set equal to the integer 1, operations will start being done exactly. If it
is set equal to the real $1.$, they will be done using real numbers having
the default precision. If it is set equal to the power series $1+O(X^k)$ for
a certain $k$, they will be done using power series of precision at most $k$.
These are the three most common initializations.

\noindent As an extreme example, compare

\bprog
? prod(i=1, 100, 1 - X^i);  \\@com this has degree $5050$ !!
time = 3,335 ms.
? prod(i=1, 100, 1 - X^i, 1 + O(X^101))
time = 43 ms.
%2 = 1 - X - X^2 + X^5 + X^7 - X^12 - X^15 + X^22 + X^26 - X^35 - X^40 + \
  X^51 + X^57 - X^70 - X^77 + X^92 + X^100 + O(X^101)
@eprog

\synt{produit}{entree *ep, GEN a, GEN b, char *expr, GEN x}.

\subsecidx{prodeuler}$(X=a,b,\var{expr})$: product of expression \var{expr},
initialized at 1. (i.e.~to a \emph{real} number equal to 1 to the current
\kbd{realprecision}), the formal parameter $X$ ranging over the prime numbers
between $a$ and $b$.\sidx{Euler product}
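
For instance, the truncated Euler product for $\zeta(2)$ (an illustrative
sketch; since the product is cut off at $10^5$, only a few decimals are
correct):
\bprog
? prodeuler(p = 2, 10^5, 1/(1 - 1/p^2)) - Pi^2/6
@eprog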

\synt{prodeuler}{void *E, GEN (*eval)(GEN,void*), GEN a,GEN b, long prec}.

\subsecidx{prodinf}$(X=a,\var{expr},\{\fl=0\})$: \idx{infinite product} of
expression \var{expr}, the formal parameter $X$ starting at $a$. The evaluation
stops when the relative error of the expression minus 1 is less than the
default precision. The expressions must always evaluate to an element of
$\C$.

If $\fl=1$, do the product of the ($1+\var{expr}$) instead.
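
For instance, computing $\prod_{n\ge1}(1 - 2^{-n})$ both ways (an
illustrative sketch; the difference should be negligible at the current
precision):
\bprog
? a = prodinf(n = 1, 1 - 2^(-n));
? a - prodinf(n = 1, -2^(-n), 1)  \\@com same product, using $\fl=1$
@eprog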

\synt{prodinf}{void *E, GEN (*eval)(GEN, void*), GEN a, long prec}
($\fl=0$), or \teb{prodinf1} with the same arguments ($\fl=1$).

\subsecidx{solve}$(X=a,b,\var{expr})$: find a real root of expression
\var{expr} between $a$ and $b$, under the condition
$\var{expr}(X=a) * \var{expr}(X=b) \le 0$.
This routine uses Brent's method and can fail miserably if \var{expr} is
not defined in the whole of $[a,b]$ (try \kbd{solve(x=1, 2, tan(x))}).
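
For instance (an illustrative sketch; the difference should be essentially
$0$ at the current precision):
\bprog
? solve(x = 1, 4, cos(x)) - Pi/2  \\@com the root of $\cos$ in $[1,4]$ is $\pi/2$
@eprog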

\synt{zbrent}{void *E,GEN (*eval)(GEN,void*),GEN a,GEN b,long prec}.

\subsecidx{sum}$(X=a,b,\var{expr},\{x=0\})$: sum of expression \var{expr},
initialized at $x$, the formal parameter going from $a$ to $b$. As for
\kbd{prod}, the initialization parameter $x$ may be given to force the type
of the operations being performed.

\noindent As an extreme example, compare

\bprog
? sum(i=1, 5000, 1/i); \\@com rational number: denominator has $2166$ digits.
time = 1,241 ms.
? sum(i=1, 5000, 1/i, 0.)
time = 158 ms.
%2 = 9.094508852984436967261245533
@eprog

\synt{somme}{entree *ep, GEN a, GEN b, char *expr, GEN x}. This is to be
used as follows: \kbd{ep} represents the dummy variable used in the
expression \kbd{expr}
\bprog
/* compute a^2 + @dots + b^2 (illustrative wrapper, name arbitrary) */
GEN
sum_of_squares(GEN a, GEN b)
{
  /* define the dummy variable "i" */
  entree *ep = is_entry("i");
  /* sum for a <= i <= b */
  return somme(ep, a, b, "i^2", gen_0);
}
@eprog

\subsecidx{sumalt}$(X=a,\var{expr},\{\fl=0\})$: numerical summation of the
series \var{expr}, which should be an \idx{alternating series}, the formal
variable $X$ starting at $a$. This uses an algorithm of F.~Villegas as
modified by D.~Zagier, which improves on the \idx{Euler}-\idx{Van Wijngaarden}
method.

If $\fl=1$, use a variant with slightly different polynomials. Sometimes
faster.

Divergent alternating series can sometimes be summed by this method, as well
as series which are not exactly alternating (see for example
\secref{se:user_defined}). If the series already converges geometrically,
\tet{suminf} is often a better choice:
\bprog
? \p28
? sumalt(i = 1, -(-1)^i / i)  - log(2)
time = 0 ms.
%1 = -2.524354897 E-29
? suminf(i = 1, -(-1)^i / i)
  *** suminf: user interrupt after 10min, 20,100 ms.
? \p1000
? sumalt(i = 1, -(-1)^i / i)  - log(2)
time = 90 ms.
%2 = 4.459597722 E-1002

? sumalt(i = 0, (-1)^i / i!) - exp(-1)
time = 670 ms.
%3 = -4.03698781490633483156497361352190615794353338591897830587 E-944
? suminf(i = 0, (-1)^i / i!) - exp(-1)
time = 110 ms.
%4 = -8.39147638 E-1000   \\ @com faster and more accurate
@eprog

\synt{sumalt}{void *E, GEN (*eval)(GEN,void*),GEN a,long prec}. Also
available is \tet{sumalt2} with the same arguments ($\fl = 1$).

\subsecidx{sumdiv}$(n,X,\var{expr})$: sum of expression \var{expr} over
the positive divisors of $n$.

Arithmetic functions like \tet{sigma} use the multiplicativity of the
underlying expression to speed up the computation. In the present version
\vers, there is no way to indicate that \var{expr} is multiplicative in
$n$, hence specialized functions should be preferred whenever possible.
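
For instance, comparing with the specialized function \kbd{sigma} (a small
illustration):
\bprog
? sumdiv(12, d, d) - sigma(12)
%1 = 0
? sumdiv(12, d, d^2) - sigma(12, 2)
%2 = 0
@eprog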

\synt{divsum}{entree *ep, GEN num, char *expr}.

\subsecidx{suminf}$(X=a,\var{expr})$: \idx{infinite sum} of expression
\var{expr}, the formal parameter $X$ starting at $a$. The evaluation stops
when the relative error of the expression is less than the default precision
for 3 consecutive evaluations. The expressions must always evaluate to a
complex number.

If the series converges slowly, make sure \kbd{realprecision} is low (even 28
digits may be too much). In this case, if the series is alternating or the
terms have a constant sign, \tet{sumalt} and \tet{sumpos} should be used
instead.

\bprog
? \p28
? suminf(i = 1, -(-1)^i / i)
  *** suminf: user interrupt after 10min, 20,100 ms.
? sumalt(i = 1, -(-1)^i / i) - log(2)
time = 0 ms.
%1 = -2.524354897 E-29
@eprog

\synt{suminf}{void *E, GEN (*eval)(GEN,void*), GEN a, long prec}.

\subsecidx{sumnum}$(X=a,sig,\var{expr},\{\var{tab}\},\{\fl=0\})$: numerical
summation of \var{expr}, the variable $X$ taking integer values from ceiling
of $a$ to $+\infty$, where \var{expr} is assumed to be a holomorphic function
$f(X)$ for $\Re(X)\ge \sigma$.

The parameter $\sigma\in\R$ is coded in the argument \kbd{sig} as follows: it
is either

\item a real number $\sigma$. Then the function $f$ is assumed to
decrease at least as $1/X^2$ at infinity, but not exponentially;

\item a two-component vector $[\sigma,\alpha]$, where $\sigma$ is as
before, $\alpha < -1$. The function $f$ is assumed to decrease like
$X^{\alpha}$. In particular, $\alpha\le-2$ is equivalent to no $\alpha$ at all.

\item a two-component vector $[\sigma,\alpha]$, where $\sigma$ is as
before, $\alpha > 0$. The function $f$ is assumed to decrease like
$\exp(-\alpha X)$. In this case it is essential that $\alpha$ be exactly the
rate of exponential decrease, and it is usually a good idea to increase
the default value of $m$ used for the integration step. In practice, if
the function is exponentially decreasing, \kbd{sumnum} is slower and less
accurate than \kbd{sumpos} or \kbd{suminf}, and so should not be used.

The function uses the \tet{intnum} routines and integration on the line
$\Re(s) = \sigma$. The optional argument \var{tab} is as in intnum, except it
must be initialized with \kbd{sumnuminit} instead of \kbd{intnuminit}.

When \var{tab} is not precomputed, \kbd{sumnum} can be slower than
\kbd{sumpos}, when the latter is applicable. It is in general faster for
slowly decreasing functions.


Finally, if $\fl$ is nonzero, we assume that the function $f$ to be summed is
of real type, i.e. satisfies $\overline{f(z)}=f(\overline{z})$, which
speeds up the computation.

\bprog
? \p 308
? a = sumpos(n=1, 1/(n^3+n+1));
time = 1,410 ms.
? tab = sumnuminit(2);
time = 1,620 ms. \\@com slower but done once and for all.
? b = sumnum(n=1, 2, 1/(n^3+n+1), tab);
time = 460 ms. \\@com 3 times as fast as \kbd{sumpos}
? a - b
%4 = -1.0... E-306 + 0.E-320*I \\@com perfect.
? sumnum(n=1, 2, 1/(n^3+n+1), tab, 1) - a; \\@com function of real type
time = 240 ms.
%2 = -1.0... E-306 \\@com twice as fast, no imaginary part.
? c = sumnum(n=1, 2, 1/(n^2+1), tab, 1);
time = 170 ms. \\@com fast
? d = sumpos(n=1, 1 / (n^2+1));
time = 2,700 ms. \\@com slow.
? d - c
time = 0 ms.
%5 = 1.97... E-306 \\@com perfect.
@eprog

For slowly decreasing functions, we must indicate the singularity at infinity
(i.e.~the rate of decrease):
\bprog
? \p 308
? a = sumnum(n=1, 2, n^(-4/3));
time = 9,930 ms. \\@com slow because of the computation of $n^{-4/3}$.
? a - zeta(4/3)
time = 110 ms.
%1 = -2.42... E-107 \\@com lost 200 decimals because of singularity at $\infty$
? b = sumnum(n=1, [2,-4/3], n^(-4/3), /*omitted*/, 1); \\@com of real type
time = 12,210 ms.
? b - zeta(4/3)
%3 = 1.05... E-300 \\@com better
@eprog

Since the \emph{complex} values of the function are used, beware of
determination problems. For instance:
\bprog
? \p 308
? tab = sumnuminit([2,-3/2]);
time = 1,870 ms.
? sumnum(n=1,[2,-3/2], 1/(n*sqrt(n)), tab,1) - zeta(3/2)
time = 690 ms.
%1 = -1.19... E-305 \\@com fast and correct
? sumnum(n=1,[2,-3/2], 1/sqrt(n^3), tab,1) - zeta(3/2)
time = 730 ms.
%2 = -1.55... \\@com nonsense. However
? sumnum(n=1,[2,-3/2], 1/n^(3/2), tab,1) - zeta(3/2)
time = 8,990 ms.
%3 = -1.19... E-305 \\@com perfect, as $1/(n*\sqrt{n})$ above but much slower
@eprog

For exponentially decreasing functions, \kbd{sumnum} is given for
completeness, but one of \tet{suminf} or \tet{sumpos} should always be
preferred. If you experiment with such functions and \kbd{sumnum} anyway,
indicate the exact rate of decrease and increase $m$ by $1$ or $2$:

\bprog
? suminf(n=1, 2^(-n)) - 1
time = 10 ms.
%1 = -1.11... E-308 \\@com fast and perfect
? sumpos(n=1, 2^(-n)) - 1
time = 10 ms.
%2 = -2.78... E-308 \\@com also fast and perfect
? sumnum(n=1,2, 2^(-n)) - 1
   *** sumnum: precision too low in mpsc1 \\@com nonsense
? sumnum(n=1, [2,log(2)], 2^(-n), /*omitted*/, 1) - 1 \\@com of real type
time = 5,860 ms.
%3 = -1.5... E-236 \\@com slow and lost $70$ decimals
? m = intnumstep()
%4 = 9
? sumnum(n=1,[2,log(2)], 2^(-n), m+1, 1) - 1
time = 11,770 ms.
%5 = -1.9... E-305 \\@com now perfect, but slow.
@eprog

\synt{sumnum}{void *E, GEN (*eval)(GEN,void*), GEN a,GEN sig,GEN tab,long flag, long prec}.

\subsecidx{sumnumalt}$(X=a,sig,\var{expr},\{\var{tab}\},\{\fl=0\})$: numerical
summation of $(-1)^X\var{expr}(X)$, the variable $X$ taking integer values from
ceiling of $a$ to $+\infty$, where \var{expr} is assumed to be a holomorphic
function for $\Re(X)\ge sig$ (or $sig[1]$).

\misctitle{Warning.} This function uses the \kbd{intnum} routines and is
orders of magnitude slower than \kbd{sumalt}. It is only given for
completeness and should not be used in practice.

\misctitle{Warning2.} The expression \var{expr} must \emph{not} include the
$(-1)^X$ coefficient. Thus $\kbd{sumalt}(n=a,(-1)^nf(n))$ is (approximately)
equal to $\kbd{sumnumalt}(n=a,sig,f(n))$.

$sig$ is coded as in \kbd{sumnum}. However for slowly decreasing functions
(where $sig$ is coded as $[\sigma,\alpha]$ with $\alpha<-1$), it is not
really important to indicate $\alpha$. In fact, as for \kbd{sumalt}, the
program will often give meaningful results (usually analytic continuations)
even for divergent series. On the other hand the exponential decrease must be
indicated.

\var{tab} is as in \kbd{intnum}, but if used must be initialized with
\kbd{sumnuminit}. If $\fl$ is nonzero, assume that the function $f$ to be
summed is of real type, i.e. satisfies $\overline{f(z)}=f(\overline{z})$; the
computation is then roughly twice as fast when \var{tab} is precomputed.

\bprog
? \p 308
? tab = sumnuminit(2, /*omitted*/, -1); \\@com abscissa $\sigma=2$, alternating sums.
time = 1,620 ms. \\@com slow, but done once and for all.
? a = sumnumalt(n=1, 2, 1/(n^3+n+1), tab, 1);
time = 230 ms. \\@com similar speed to \kbd{sumnum}
? b = sumalt(n=1, (-1)^n/(n^3+n+1));
time = 0 ms. \\@com infinitely faster!
? a - b
time = 0 ms.
%1 = -1.66... E-308 \\@com perfect
@eprog

\synt{sumnumalt}{void *E, GEN (*eval)(GEN,void*), GEN a, GEN sig, GEN tab, long flag, long prec}.

\subsecidx{sumnuminit}$(sig,\{m=0\},\{sgn=1\})$: initialize tables for numerical
summation using \kbd{sumnum} (with $\var{sgn}=1$) or \kbd{sumnumalt} (with
$\var{sgn}=-1$). $sig$ is the abscissa of integration coded as in \kbd{sumnum},
and $m$ is as in \kbd{intnuminit}.

\synt{sumnuminit}{GEN sig, long m, long sgn, long prec}.

\subsecidx{sumpos}$(X=a,\var{expr},\{\fl=0\})$: numerical summation of the
series \var{expr}, which must be a series of terms having the same sign,
the formal
variable $X$ starting at $a$. The algorithm used is Van Wijngaarden's trick
for converting such a series into an alternating one, and is quite slow. For
regular functions, the function \kbd{sumnum} is in general much faster once the
initializations have been made using \kbd{sumnuminit}.

If $\fl=1$, use slightly different polynomials. Sometimes faster.
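
For instance (an illustrative sketch; the difference should be negligible at
the current precision):
\bprog
? sumpos(n = 1, 1/n^2) - zeta(2)
@eprog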

\synt{sumpos}{void *E, GEN (*eval)(GEN,void*),GEN a,long prec}. Also
available is \tet{sumpos2} with the same arguments ($\fl = 1$).

\section{Plotting functions}

  Although plotting is not even a side purpose of PARI, a number of plotting
functions are provided. Moreover, a lot of people suggested ideas or submitted
patches for this section of the code. Among these, special thanks go to
Klaus-Peter Nischke who suggested the recursive plotting and the
forking/resizing stuff under X11, and Ilya Zakharevich who undertook a
complete rewrite of the graphic code, so that most of it is now
platform-independent and should be easy to port or expand. There are three
types of graphic functions.

\subsec{High-level plotting functions} (all the functions starting with \kbd{ploth}) in which the user has little to
do but explain what type of plot he wants, and whose syntax is similar to the
one used in the preceding section.

\subsec{Low-level plotting functions} (called \var{rectplot} functions,
sharing the prefix \kbd{plot}), where every drawing primitive (point, line,
box, etc.) is specified by the user. These low-level functions work as
follows. You have at your disposal 16 virtual windows which are filled
independently, and can then be physically ORed on a single window at
user-defined positions. These windows are numbered from 0 to 15, and must be
initialized before being used by the function \kbd{plotinit}, which specifies
the height and width of the virtual window (called a \var{rectwindow} in the
sequel). At all times, a virtual cursor (initialized at $[0,0]$) is associated
to the window, and its current value can be obtained using the function
\kbd{plotcursor}.

A number of primitive graphic objects (called \var{rect} objects) can then
be drawn in these windows, using a default color associated to that window
(which can be changed under X11, using the \kbd{plotcolor} function, black
otherwise) and only the part of the object which is inside the window will be
drawn, with the exception of polygons and strings which are drawn entirely.
The ones sharing the prefix \kbd{plotr} draw relatively to the current
position of the virtual cursor, the others use absolute coordinates. Those
having the prefix \kbd{plotrecth} put in the rectwindow a large batch of rect
objects corresponding to the output of the related \kbd{ploth} function.

   Finally, the actual physical drawing is done using the function
\kbd{plotdraw}. The rectwindows are preserved so that further drawings
using the same windows at different positions or different windows can be
done without extra work. To erase a window (and free the corresponding
memory), use the function \kbd{plotkill}. It is not possible to partially
erase a window. Erase it completely, initialize it again and then fill it with
the graphic objects that you want to keep.

   In addition to initializing the window, you may use a scaled
window to avoid unnecessary conversions. For this, use the function
\kbd{plotscale} below. As long as this function is not called, the scaling is
simply the number of pixels, the origin being at the upper left and the
$y$-coordinates going downwards.
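
As a minimal sketch of this low-level workflow (assuming a graphic driver
such as X11 is enabled; the window number and sizes are arbitrary):
\bprog
plotinit(0, 400, 300);             \\ virtual window 0, 400 x 300 pixels
plotrecth(0, X = 0, 2*Pi, sin(X)); \\ fill it with the curve y = sin(X)
plotdraw([0, 0, 0]);               \\ display rectwindow 0 at position (0,0)
@eprog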

   Note that in the present version \vers\ all plotting functions (both low
and high level) are written for the X11-window system (hence also for GUI's
based on X11 such as Openwindows and Motif) only, though little code
remains which is actually platform-dependent. It is also possible to compile
\kbd{gp} with either of the Qt or FLTK graphical libraries. A
Suntools/Sunview, Macintosh, and an Atari/Gem port were provided for previous
versions, but are now obsolete.

   Under X11, the physical window (opened by \kbd{plotdraw} or any of the
\kbd{ploth*} functions) is completely separated from \kbd{gp} (technically, a
\kbd{fork} is done, and the non-graphical memory is immediately freed in the
child process), which means you can go on working in the current \kbd{gp}
session, without having to kill the window first. Under X11, this window can
be closed, enlarged or reduced using the standard window manager functions.
No zooming procedure is implemented though (yet).

\subsec{Functions for PostScript output:} in the same way that \kbd{printtex} allows you to have a \TeX\ output
corresponding to printed results, the functions starting with \kbd{ps} allow
you to have \tet{PostScript} output of the plots. This will not be absolutely
identical with the screen output, but will be sufficiently close. Note that
you can use PostScript output even if you do not have the plotting routines
enabled. The PostScript output is written in a file whose name is derived from
the \tet{psfile} default (\kbd{./pari.ps} if you did not tamper with it). Each
time a new PostScript output is asked for, the PostScript output is appended
to that file. Hence you probably want to remove this file, or change the value
of \kbd{psfile}, in between plots. On the other hand, in this manner, as many
plots as desired can be kept in a single file. \smallskip
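
For instance, one might direct the next plot to a fresh file (an illustrative
sketch; the file name is arbitrary):
\bprog
default(psfile, "mygraph.ps");
psploth(X = 0, 2*Pi, sin(X));  \\ appended to mygraph.ps
@eprog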

\subsec{And library mode ?} \emph{None of the graphic functions are available
within the PARI library, you must be under \kbd{gp} to use them}. The reason
for that is that you really should not use PARI for heavy-duty graphical work,
there are better specialized alternatives around. This whole set of routines
was only meant as a convenient, but simple-minded, visual aid. If you really
insist on using these in your program (we warned you), the source
(\kbd{plot*.c}) should be readable enough for you to achieve something.

\subsecidx{plot}$(X=a,b,\var{expr},\{\var{Ymin}\},\{\var{Ymax}\})$: crude
ASCII plot of the function represented by expression \var{expr} from
$a$ to $b$, with \var{Y} ranging from \var{Ymin} to \var{Ymax}. If
\var{Ymin} (resp. \var{Ymax}) is not given, the minima (resp. the
maxima) of the computed values of the expression is used instead.
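
For instance, the following produces a rough ASCII rendering of a sine wave
in the terminal:
\bprog
? plot(x = 0, 2*Pi, sin(x))
@eprog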

\subsecidx{plotbox}$(w,x2,y2)$: let $(x1,y1)$ be the current position of the
virtual cursor. Draw in the rectwindow $w$ the outline of the rectangle which
is such that the points $(x1,y1)$ and $(x2,y2)$ are opposite corners. Only
the part of the rectangle which is in $w$ is drawn. The virtual cursor does
\emph{not} move.

\subsecidx{plotclip}$(w)$: `clips' the content of rectwindow $w$, i.e.~removes
all parts of the drawing that would not be visible on the screen.
Together with \tet{plotcopy} this function enables you to draw on a
scratchpad before committing the part you're interested in to the final
picture.

\subsecidx{plotcolor}$(w,c)$: set default color to $c$ in rectwindow $w$.
In the present version \vers, this is only implemented for the X11 window
system, and you only have the following palette to choose from:

1=black, 2=blue, 3=sienna, 4=red, 5=green, 6=grey, 7=gainsborough.

Note that it should be fairly easy for you to hardwire some more colors by
tweaking the files \kbd{rect.h} and \kbd{plotX.c}. User-defined
colormaps would be nice, and \emph{may} be available in future versions.

\subsecidx{plotcopy}$(w1,w2,dx,dy)$: copy the contents of rectwindow
$w1$ to rectwindow $w2$, with offset $(dx,dy)$.

\subsecidx{plotcursor}$(w)$: give as a 2-component vector the current
(scaled) position of the virtual cursor corresponding to the rectwindow $w$.

\subsecidx{plotdraw}$(list)$: physically draw the rectwindows given in $list$
which must be a vector whose number of components is divisible by 3. If
$list=[w1,x1,y1,w2,x2,y2,\dots]$, the windows $w1$, $w2$, etc.~are
physically placed with their upper left corner at physical position
$(x1,y1)$, $(x2,y2)$,\dots\ respectively, and are then drawn together.
Overlapping regions will thus be drawn twice, and the windows are considered
transparent. Then display the whole drawing in a special window on your
screen.

\subsecidx{ploth}$(X=a,b,\var{expr},\{\fl=0\},\{n=0\})$: high precision
plot of the function $y=f(x)$ represented by the expression \var{expr}, $x$
going from $a$ to $b$. This opens a specific window (which is killed
whenever you click on it), and returns a four-component vector giving the
coordinates of the bounding box in the form
$[\var{xmin},\var{xmax},\var{ymin},\var{ymax}]$.

\misctitle{Important note}: Since this may involve a lot of function calls,
it is advised to keep the current precision to a minimum (e.g.~9) before
calling this function.

$n$ specifies the number of reference points on the graph (0 means use the
hardwired default values, that is: 1000 for general plots, 1500 for
parametric plots, and 15 for recursive plots).

If no $\fl$ is given, \var{expr} is either a scalar expression $f(X)$, in which
case the plane curve $y=f(X)$ will be drawn, or a vector
$[f_1(X),\dots,f_k(X)]$, and then all the curves $y=f_i(X)$ will be drawn in
the same window.

\noindent The binary digits of $\fl$ mean:

\item $1 = \kbd{Parametric}$: \tev{parametric plot}. Here \var{expr} must
be a vector with an even number of components. Successive pairs are then
understood as the parametric coordinates of a plane curve. Each of these are
then drawn.

For instance:

\kbd{ploth(X=0,2*Pi,[sin(X),cos(X)],1)} will draw a circle.

\kbd{ploth(X=0,2*Pi,[sin(X),cos(X)])} will draw two entwined sinusoidal
curves.

\kbd{ploth(X=0,2*Pi,[X,X,sin(X),cos(X)],1)} will draw a circle and the line
$y=x$.

\item $2 = \kbd{Recursive}$: \tev{recursive plot}. If this flag is set,
only \emph{one} curve can be drawn at a time, i.e.~\var{expr} must be either a
two-component vector (for a single parametric curve, and the parametric flag
\emph{has} to be set), or a scalar function. The idea is to choose pairs of
successive reference points, and if their middle point is not too far away
from the segment joining them, draw this as a local approximation to the
curve. Otherwise, add the middle point to the reference points. This is
fast, and usually more precise than the default plot. Compare the results of
$$\kbd{ploth(X=-1,1,sin(1/X),2)}\quad
 \text{and}\quad\kbd{ploth(X=-1,1,sin(1/X))}$$
for instance. But beware that if you are extremely unlucky, or choose too few
reference points, you may draw some nice polygon bearing little resemblance
to the original curve. For instance you should \emph{never} plot recursively
an odd function in a symmetric interval around 0. Try
\bprog
  ploth(x = -20, 20, sin(x), 2)
@eprog\noindent
to see why. Hence, it's usually a good idea to try and plot the same curve
with slightly different parameters.

The other values toggle various display options:

\item $4 = \kbd{no\_Rescale}$: do not rescale plot according to the
computed extrema. This is meant to be used when graphing multiple functions
on a rectwindow (as a \tet{plotrecth} call), in conjunction with
\tet{plotscale}.

\item $8 = \kbd{no\_X\_axis}$: do not print the $x$-axis.

\item $16 = \kbd{no\_Y\_axis}$: do not print the $y$-axis.

\item $32 = \kbd{no\_Frame}$: do not print frame.

\item $64 = \kbd{no\_Lines}$: only plot reference points, do not join them.

\item $128 = \kbd{Points\_too}$: plot both lines and points.

\item $256 = \kbd{Splines}$: use splines to interpolate the points.

\item $512 = \kbd{no\_X\_ticks}$: plot no $x$-ticks.

\item $1024 = \kbd{no\_Y\_ticks}$: plot no $y$-ticks.

\item $2048 = \kbd{Same\_ticks}$: plot all ticks with the same length.

\subsecidx{plothraw}$(\var{listx},\var{listy},\{\fl=0\})$: given
\var{listx} and \var{listy} two vectors of equal length, plots (in high
precision) the points whose $(x,y)$-coordinates are given in \var{listx}
and \var{listy}. Automatic positioning and scaling is done, but with the
same scaling factor on $x$ and $y$. If $\fl$ is 1, join the points; other
non-zero flags toggle display options and should be combinations of bits
$2^k$, $k \geq 3$, as in \kbd{ploth}.
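
For instance, to plot $100$ points on the parabola $y=x^2$ and join them (an
illustrative sketch):
\bprog
? vx = vector(100, k, (k - 50)/50.0);  \\ abscissas in [-0.98, 1]
? vy = vector(100, k, vx[k]^2);
? plothraw(vx, vy, 1)                  \\ flag 1: join the points
@eprog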

\subsecidx{plothsizes}$()$: return data corresponding to the output window
in the form of a 6-component vector: window width and height, sizes for ticks
in horizontal and vertical directions (this is intended for the \kbd{gnuplot}
interface and is currently not significant), width and height of characters.

\subsecidx{plotinit}$(w,x,y,\{\fl\})$: initialize the rectwindow $w$,
destroying any rect objects you may have already drawn in $w$. The virtual
cursor is set to $(0,0)$. The rectwindow size is set to width $x$ and height
$y$. If $\fl=0$, $x$ and $y$ represent pixel units. Otherwise, $x$ and $y$
are understood as fractions of the size of the current output device (hence
must be between $0$ and $1$) and internally converted to pixels.

The plotting device imposes an upper bound for $x$ and $y$, for instance the
number of pixels for screen output. These bounds are available through the
\tet{plothsizes} function. The following sequence initializes in a portable
way (i.e.~independent of the output device) a window of maximal size, accessed
through coordinates in the $[0,1000] \times [0,1000]$ range:

\bprog
s = plothsizes();
plotinit(0, s[1]-1, s[2]-1);
plotscale(0, 0,1000, 0,1000);
@eprog

\subsecidx{plotkill}$(w)$: erase rectwindow $w$ and free the corresponding
memory. Note that if you want to use the rectwindow $w$ again, you have to
use \kbd{plotinit} first to specify the new size. So it's better in this case
to use \kbd{plotinit} directly as this throws away any previous work in the
given rectwindow.

\subsecidx{plotlines}$(w,X,Y,\{\fl=0\})$: draw on the rectwindow $w$
the polygon such that the (x,y)-coordinates of the vertices are in the
vectors of equal length $X$ and $Y$. For simplicity, the whole
polygon is drawn, not only the part of the polygon which is inside the
rectwindow. If $\fl$ is non-zero, close the polygon. In any case, the
virtual cursor does not move.

$X$ and $Y$ are allowed to be scalars (in this case, both must be). Then a
single segment is drawn, between the current position of the virtual cursor
and the point $(X,Y)$, and only the part thereof which actually lies within
the boundary of $w$. The virtual cursor is then \emph{moved} to $(X,Y)$, even
if it is outside the window. If you want to draw a line from $(x1,y1)$ to
$(x2,y2)$ where $(x1,y1)$ is not necessarily the position of the virtual
cursor, use \kbd{plotmove(w,x1,y1)} before using this function.
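
A minimal sketch of the scalar usage (window number and coordinates are
arbitrary):
\bprog
plotinit(0, 100, 100);
plotmove(0, 10, 10);   \\ set the virtual cursor
plotlines(0, 90, 90);  \\ segment from (10,10) to (90,90); cursor moves to (90,90)
plotdraw([0, 0, 0]);
@eprog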

\subsecidx{plotlinetype}$(w,\var{type})$: change the type of lines
subsequently plotted in rectwindow $w$. \var{type} $-2$ corresponds to
frames, $-1$ to axes, larger values may correspond to something else. $w =
-1$ changes highlevel plotting. This is only taken into account by the
\kbd{gnuplot} interface.

\subsecidx{plotmove}$(w,x,y)$: move the virtual cursor of the rectwindow $w$
to position $(x,y)$.

\subsecidx{plotpoints}$(w,X,Y)$: draw on the rectwindow $w$ the
points whose $(x,y)$-coordinates are in the vectors of equal length $X$ and
$Y$ and which are inside $w$. The virtual cursor does \emph{not} move. This
is basically the same function as \kbd{plothraw}, but either with no scaling
factor or with a scale chosen using the function \kbd{plotscale}.

As was the case with the \kbd{plotlines} function, $X$ and $Y$ are allowed to
be (simultaneously) scalar. In this case, draw the single point $(X,Y)$ on
the rectwindow $w$ (if it is actually inside $w$), and in any case
\emph{move} the virtual cursor to position $(X,Y)$.

\subsecidx{plotpointsize}$(w,size)$: changes the ``size'' of following
points in rectwindow $w$. If $w = -1$, change it in all rectwindows.
This only works in the \kbd{gnuplot} interface.

\subsecidx{plotpointtype}$(w,\var{type})$:  change the type of
points subsequently plotted in rectwindow $w$. $\var{type} = -1$
corresponds to a dot, larger values may correspond to something else. $w = -1$
changes highlevel plotting. This is only taken into account by the
\kbd{gnuplot} interface.

\subsecidx{plotrbox}$(w,dx,dy)$: draw in the rectwindow $w$ the outline of
the rectangle which is such that the points $(x1,y1)$ and $(x1+dx,y1+dy)$ are
opposite corners, where $(x1,y1)$ is the current position of the cursor.
Only the part of the rectangle which is in $w$ is drawn. The virtual cursor
does \emph{not} move.

\subsecidx{plotrecth}$(w,X=a,b,\var{expr},\{\fl=0\},\{n=0\})$: writes to
rectwindow $w$ the curve output of \kbd{ploth}$(w,X=a,b,\var{expr},\fl,n)$.

\subsecidx{plotrecthraw}$(w,\var{data},\{\fl=0\})$: plot graph(s) for
\var{data} in rectwindow $w$. $\fl$ has the same significance here as in
\kbd{ploth}, except that recursive plot is no longer meaningful.

\var{data} is a vector of vectors, each corresponding to a list of coordinates.
If parametric plot is set, there must be an even number of vectors, each
successive pair corresponding to a curve. Otherwise, the first one contains
the $x$ coordinates, and the other ones contain the $y$-coordinates
of curves to plot.

\subsecidx{plotrline}$(w,dx,dy)$: draw in the rectwindow $w$ the part of the
segment $(x1,y1)-(x1+dx,y1+dy)$ which is inside $w$, where $(x1,y1)$ is the
current position of the virtual cursor, and move the virtual cursor to
$(x1+dx,y1+dy)$ (even if it is outside the window).

\subsecidx{plotrmove}$(w,dx,dy)$: move the virtual cursor of the rectwindow
$w$ to position $(x1+dx,y1+dy)$, where $(x1,y1)$ is the initial position of
the cursor (i.e.~to position $(dx,dy)$ relative to the initial cursor).

\subsecidx{plotrpoint}$(w,dx,dy)$: draw the point $(x1+dx,y1+dy)$ on the
rectwindow $w$ (if it is inside $w$), where $(x1,y1)$ is the current position
of the cursor, and in any case move the virtual cursor to position
$(x1+dx,y1+dy)$.

\subsecidx{plotscale}$(w,x1,x2,y1,y2)$: scale the local coordinates of the
rectwindow $w$ so that $x$ goes from $x1$ to $x2$ and $y$ goes from $y1$ to
$y2$ ($x2<x1$ and $y2<y1$ being allowed). Initially, after the initialization
of the rectwindow $w$ using the function \kbd{plotinit}, the default scaling
is the graphic pixel count, and in particular the $y$ axis is oriented
downwards since the origin is at the upper left. The function \kbd{plotscale}
allows one to change all these defaults, and should be used whenever functions
are graphed.

\subsecidx{plotstring}$(w,x,\{\fl=0\})$: draw on the rectwindow $w$ the
String $x$ (see \secref{se:strings}), at the current position of the cursor.

\fl\ is used for justification: bits 1 and 2 regulate horizontal alignment:
left if 0, right if 2, center if 1. Bits 4 and 8 regulate vertical
alignment: bottom if 0, top if 8, v-center if 4. An additional small gap
can be inserted between the point and the string: horizontal if bit 16 is
set, vertical if bit 32 is set (see the tutorial for an example).
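
For instance, to write a label centered on the current cursor position (an
illustrative sketch; window number and coordinates are arbitrary):
\bprog
plotinit(0, 100, 100);
plotmove(0, 50, 50);
plotstring(0, "label", 1 + 4);  \\ 1: h-center, 4: v-center
plotdraw([0, 0, 0]);
@eprog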

\subsecidx{psdraw}$(\var{list})$: same as \kbd{plotdraw}, except that the
output is a PostScript program appended to the \kbd{psfile}.

\subsecidx{psploth}$(X=a,b,\var{expr})$: same as \kbd{ploth}, except that the
output is a PostScript program appended to the \kbd{psfile}.

\subsecidx{psplothraw}$(\var{listx},\var{listy})$: same as \kbd{plothraw},
except that the output is a PostScript program appended to the \kbd{psfile}.

\section{Programming in GP}
\sidx{programming}\label{se:programming}
\subsecidx{Control statements}.

  A number of control statements are available in GP. They are simpler and
have a syntax slightly different from their C counterparts, but are quite
powerful enough to write any kind of program. Some of them are specific to
GP, since they are made for number theorists. As usual, $X$ will denote any
simple variable name, and \var{seq} will always denote a sequence of
expressions, including the empty sequence.

\misctitle{Caveat:} in constructs like
\bprog
    for (X = a,b, seq)
@eprog\noindent
the variable \kbd{X} is considered local to the loop, leading to possibly
unexpected behaviour:
\bprog
    n = 5;
    for (n = 1, 10,
      if (something_nice(), break);
    );
    \\ @com at this point \kbd{n} is 5 !
@eprog\noindent
If the sequence \kbd{seq} modifies the loop index, then the loop
is modified accordingly:
\bprog
    ? for (n = 1, 10, n += 2; print(n))
    3
    6
    9
    12
@eprog

\subsubsecidx{break}$(\{n=1\})$: interrupts execution of current \var{seq}, and
immediately exits from the $n$ innermost enclosing loops, within the
current function call (or the top level loop). $n$ must be positive.
If $n$ is greater than the number of enclosing loops, all enclosing loops
are exited.
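
For instance:
\bprog
? for (i = 1, 3, for (j = 1, 3, if (j == 2, break(2)); print(i, " ", j)))
1 1
@eprog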

\subsubsecidx{for}$(X=a,b,\var{seq})$: evaluates \var{seq}, where 
the formal variable $X$ goes from $a$ to $b$. Nothing is done if $a>b$.
$a$ and $b$ must be in $\R$.

\subsubsecidx{fordiv}$(n,X,\var{seq})$: evaluates \var{seq}, where
the formal variable $X$ ranges through the divisors of $n$
(see \tet{divisors}, which is used as a subroutine). It is assumed that
\kbd{factor} can handle $n$, without negative exponents. Instead of $n$,
it is possible to input a factorization matrix, i.e. the output of
\kbd{factor(n)}.

This routine uses \kbd{divisors} as a subroutine, then loops over the
divisors. In particular, if $n$ is an integer, divisors are sorted by
increasing size.

To avoid storing all divisors, possibly using a lot of memory, the following
(much slower) routine loops over the divisors using essentially constant
space:
\bprog
    FORDIV(N)=
    { local(P, E);
   
      P = factor(N); E = P[,2]; P = P[,1]; 
      forvec( v = vector(#E, i, [0,E[i]]),
        X = factorback(P, v)
        \\ ...
      );
    }
    ? for(i=1,10^5, FORDIV(i))
    time = 3,445 ms.
    ? for(i=1,10^5, fordiv(i, d, ))
    time = 490 ms.
@eprog

\subsubsecidx{forell}$(E,a,b,\var{seq})$: evaluates \var{seq}, where
the formal variable $E$ ranges through all elliptic curves of conductors from
$a$ to $b$. The \tet{elldata} database must be installed and contain data for
the specified conductors.

\subsubsecidx{forprime}$(X=a,b,\var{seq})$: evaluates \var{seq},
where the formal variable $X$ ranges over the prime numbers between $a$ and
$b$ (including $a$ and $b$ if they are prime). More precisely, the value of
$X$ is incremented to the smallest prime strictly larger than $X$ at the end
of each iteration. Nothing is done if $a>b$. Note that $a$ and $b$ must be in
$\R$.

\bprog
? { forprime(p = 2, 12,
      print(p);
      if (p == 3, p = 6);
    )
  }
2
3
7
11
@eprog

\subsubsecidx{forstep}$(X=a,b,s,\var{seq})$: evaluates \var{seq},
where the formal variable $X$ goes from $a$ to $b$, in increments of $s$.
Nothing is done if $s>0$ and $a>b$ or if $s<0$ and $a<b$. $s$ must be in
$\R^*$ or a vector of steps $[s_1,\dots,s_n]$. In the latter case, the
successive steps are used in the order they appear in $s$.

\bprog
? forstep(x=5, 20, [2,4], print(x))
5
7
11
13
17
19
@eprog

\subsubsecidx{forsubgroup}$(H=G,\{B\},\var{seq})$: evaluates \var{seq} for
each subgroup $H$ of the \emph{abelian} group $G$ (given in
SNF\sidx{Smith normal form} form or as a vector of elementary divisors),
whose index is bounded by $B$. The subgroups are not ordered in any
obvious way, unless $G$ is a $p$-group in which case Birkhoff's algorithm
produces them by decreasing index. A \idx{subgroup} is given as a matrix
whose columns give its generators on the implicit generators of $G$. For
example, the following prints all subgroups of index less than 2 in $G =
\Z/2\Z g_1 \times \Z/2\Z g_2$:

\bprog
? G = [2,2]; forsubgroup(H=G, 2, print(H))
[1; 1]
[1; 2]
[2; 1]
[1, 0; 1, 1]
@eprog\noindent
The last one, for instance is generated by $(g_1, g_1 + g_2)$. This
routine is intended to treat huge groups, when \tet{subgrouplist} is not an
option due to the sheer size of the output.

For maximal speed the subgroups have been left as produced by the algorithm.
To print them in canonical form (as left divisors of $G$ in HNF form), one
can for instance use
\bprog
? G = matdiagonal([2,2]); forsubgroup(H=G, 2, print(mathnf(concat(G,H))))
[2, 1; 0, 1]
[1, 0; 0, 2]
[2, 0; 0, 1]
[1, 0; 0, 1]
@eprog\noindent
Note that in this last representation, the index $[G:H]$ is given by the
determinant. See \tet{galoissubcyclo} and \tet{galoisfixedfield} for
\tet{nfsubfields} applications to \idx{Galois} theory.

\misctitle{Warning:} the present implementation cannot treat a group $G$, if
one of its $p$-Sylow subgroups has a cyclic factor with more than $2^{31}$,
resp.~$2^{63}$ elements on a $32$-bit, resp.~$64$-bit architecture.

\subsubsecidx{forvec}$(X=v,\var{seq},\{\fl=0\})$: Let $v$ be an $n$-component
vector (where $n$ is arbitrary) of two-component vectors $[a_i,b_i]$
for $1\le i\le n$. This routine evaluates \var{seq}, where the formal
variables $X[1],\dots, X[n]$ go from $a_1$ to $b_1$,\dots, from $a_n$ to
$b_n$, i.e.~$X$ goes from $[a_1,\dots,a_n]$ to $[b_1,\dots,b_n]$ with respect
to the lexicographic ordering. (The formal variable with the highest index
moves the fastest.) If $\fl=1$, generate only nondecreasing vectors $X$, and
if $\fl=2$, generate only strictly increasing vectors $X$.
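
For example:
\bprog
? forvec (v = [[1,2],[1,3]], print(v))
[1, 1]
[1, 2]
[1, 3]
[2, 1]
[2, 2]
[2, 3]
? forvec (v = [[1,3],[1,3]], print(v), 2)  \\ strictly increasing
[1, 2]
[1, 3]
[2, 3]
@eprog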

\subsubsecidx{if}$(a,\{\var{seq1}\},\{\var{seq2}\})$:
evaluates the expression sequence \var{seq1} if $a$ is non-zero, otherwise
the expression \var{seq2}. Of course, \var{seq1} or \var{seq2} may be empty:

\kbd{if ($a$,\var{seq})} evaluates \var{seq} if $a$ is not equal to zero
(you don't have to write the second comma), and does nothing otherwise,

\kbd{if ($a$,,\var{seq})} evaluates \var{seq} if $a$ is equal to zero, and
does nothing otherwise. You could get the same result using the \kbd{!}
(\kbd{not}) operator: \kbd{if (!$a$,\var{seq})}.

Note that the boolean operators \kbd{\&\&} and \kbd{||} are evaluated
according to operator precedence as explained in \secref{se:operators}, but
that, contrary to other operators, the evaluation of the arguments is stopped
as soon as the final truth value has been determined. For instance
\bprog
    if (reallydoit && longcomplicatedfunction(), ...)%
@eprog
\noindent is a perfectly safe statement.

Recall that functions such as \kbd{break} and \kbd{next} operate on
\emph{loops} (such as \kbd{for$xxx$}, \kbd{while}, \kbd{until}). The \kbd{if}
statement is \emph{not} a loop (obviously!).

\subsubsecidx{next}$(\{n=1\})$: interrupts execution of current $seq$,
resume the next iteration of the innermost enclosing loop, within the
current function call (or top level loop). If $n$ is specified, resume at
the $n$-th enclosing loop. If $n$ is bigger than the number of enclosing
loops, all enclosing loops are exited.
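
For instance:
\bprog
? for (i = 1, 5, if (i%2 == 0, next); print(i))
1
3
5
@eprog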

\subsubsecidx{return}$(\{x=0\})$: returns from current subroutine, with
result $x$. If $x$ is omitted, return the \kbd{(void)} value (return no
result, like \kbd{print}).

\subsubsecidx{until}$(a,\var{seq})$: evaluates \var{seq} until $a$ is not
equal to 0 (i.e.~until $a$ is true). If $a$ is initially not equal to 0,
\var{seq} is evaluated once (more generally, the condition on $a$ is tested
\emph{after} execution of the \var{seq}, not before as in \kbd{while}).

\subsubsecidx{while}$(a,\var{seq})$: while $a$ is non-zero, evaluates the
expression sequence \var{seq}. The test is made \emph{before} evaluating
the $seq$, hence in particular if $a$ is initially equal to zero the
\var{seq} will not be evaluated at all.

\subsec{Specific functions used in GP programming}.
\label{se:gp_program}

  In addition to the general PARI functions, it is necessary to have some
functions which will be of use specifically for \kbd{gp}, though a few of these can
be accessed under library mode. Before we start describing these, we recall
the difference between \emph{strings} and \emph{keywords} (see
\secref{se:strings}): the latter don't get expanded at all, and you can type
them without any enclosing quotes. The former are dynamic objects, where
everything outside quotes gets immediately expanded.

\subsubsecidx{addhelp}$(S,\var{str})$:\label{se:addhelp} changes the help
message for the symbol $S$. The string \var{str} is expanded on the spot
and stored as the online help for $S$. If $S$ is a function \emph{you} have
defined, its definition will still be printed before the message \var{str}.
It is recommended that you document global variables and user functions in
this way. Of course \kbd{gp} will not protest if you skip this.
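
For example (an illustrative sketch; the function name is arbitrary):
\bprog
? square(x) = x^2;
? addhelp(square, "square(x): returns the square of x");
@eprog\noindent
After this, \kbd{?square} displays the definition followed by this message.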

Nothing prevents you from modifying the help of built-in PARI
functions. (But if you do, we would like to hear why you needed to do it!)

\subsubsecidx{alias}$(\var{newkey},\var{key})$: defines the keyword
\var{newkey} as an alias for keyword \var{key}. \var{key} must correspond
to an existing \emph{function} name. This is different from the general user
macros in that alias expansion takes place immediately upon execution,
without having to look up any function code, and is thus much faster. A
sample alias file \kbd{misc/gpalias} is provided with the standard
distribution. Alias commands are meant to be read upon startup from the
\kbd{.gprc} file, to cope with function names you are dissatisfied with, and
should be useless in interactive usage.

\subsubsecidx{allocatemem}$(\{x=0\})$: this is a very special operation which
allows the user to change the stack size \emph{after} initialization. $x$
must be a non-negative integer. If $x \neq 0$, a new stack of size
$16*\ceil{x/16}$ bytes is allocated, all the PARI data on the old stack is
moved to the new one, and the old stack is discarded. If $x=0$, the size of
the new stack is twice the size of the old one.

Although it is a function, \kbd{allocatemem} cannot be used in loop-like
constructs, or as part of a larger expression, e.g.~\kbd{2 + allocatemem()}.
Such an attempt will raise an error. The technical reason is that this
routine usually moves the stack, so objects from the current expression may
no longer be correct, e.g.~loop indexes.

\syn{allocatemoremem}{x}, where $x$ is an unsigned long, and the return type
is void. \kbd{gp} uses a variant which makes sure it was not called within a
loop.

\subsubsecidx{default}$(\{\var{key}\},\{\var{val}\})$: returns
the default corresponding to keyword \var{key}. If \var{val} is present,
sets the default to \var{val} first (which is subject to string
expansion). Typing \kbd{default()} (or \b{d}) yields the complete default
list as well as their current values.\label{se:default}
See \secref{se:defaults} for a list of available defaults, and
\secref{se:meta} for some shortcut alternatives. Note that the shortcuts
are meant for interactive use and usually display more information than
\kbd{default}.

\syn{gp_default}{key, val}, where \var{key} and \var{val} are
\kbd{char *}.

\subsubsecidx{error}$(\{\var{str}\}*)$: outputs its argument list (each of
them interpreted as a string), then interrupts the running \kbd{gp} program,
returning to the input prompt. For instance
\bprog
error("n = ", n, " is not squarefree !")
@eprog\noindent

\subsubsecidxunix{extern}$(\var{str})$: the string \var{str} is the name
of an external command (i.e.~one you would type from your UNIX shell prompt).
This command is immediately run and its output fed into \kbd{gp}, just as if
read from a file.
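
For instance, on a Unix system (an illustrative sketch):
\bprog
? extern("echo 2+2")
%1 = 4
@eprog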

\syn{extern0}{str}, where \var{str} is a \kbd{char *}.

\subsubsecidx{getheap}$()$: returns a two-component row vector giving the
number of objects on the heap and the amount of memory they occupy in long
words. Useful mainly for debugging purposes.

\syn{getheap}{}.

\subsubsecidx{getrand}$()$: returns the current value of the random number
seed. Useful mainly for debugging purposes.

\syn{getrand}{}, returns a C long.

\subsubsecidx{getstack}$()$: returns the current value of
$\kbd{top}-\kbd{avma}$, i.e.~the number of bytes used up to now on the stack.
Should be equal to $0$ in between commands. Useful mainly for debugging
purposes.

\syn{getstack}{}, returns a C long.

\subsubsecidx{gettime}$()$: returns the time (in milliseconds) elapsed since
either the last call to \kbd{gettime}, or to the beginning of the containing
GP instruction (if inside \kbd{gp}), whichever came last.
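
A common idiom for timing a computation (an illustrative sketch):
\bprog
? gettime(); v = vector(10^5, k, k^2); print("elapsed: ", gettime(), " ms");
@eprog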

\syn{gettime}{}, returns a C long.

\subsubsecidx{global}$(\var{list of variables})$: \label{se:global}
declares the corresponding variables to be global. From now on, you will be
forbidden to use them as formal parameters for function definitions or as
loop indexes. This is especially useful when patching together various
scripts, possibly written with different naming conventions. For instance the
following situation is dangerous:
%
\bprog
p = 3   \\@com fix characteristic
...
forprime(p = 2, N, ...)
f(p) = ...
@eprog\noindent
since within the loop or within the function's body (even worse: in the
subroutines called in that scope), the true global value of \kbd{p} will be
hidden. If the statement \kbd{global(p = 3)} appears at the beginning of
the script, then both expressions will trigger syntax errors.

Calling \kbd{global} without arguments prints the list of global variables in
use. In particular, \kbd{eval(global)} will output the values of all global
variables.

\subsubsecidx{input}$()$: reads a string, interpreted as a GP expression,
from the input file, usually standard input (i.e.~the keyboard). If a
sequence of expressions is given, the result is the result of the last
expression of the sequence. When using this instruction, it is useful to
prompt for the string by using the \kbd{print1} function. Note that in the
present version 2.19 of \kbd{pari.el}, when using \kbd{gp} under GNU Emacs (see
\secref{se:emacs}) one \emph{must} prompt for the string, with a string
which ends with the same prompt as any of the previous ones (a \kbd{"? "}
will do for instance).

\subsubsecidxunix{install}$(\var{name},\var{code},\{\var{gpname}\},\{\var{lib}\})$:
loads from dynamic library \var{lib} the function \var{name}. Assigns to it
the name \var{gpname} in this \kbd{gp} session, with argument code \var{code}
(see the Libpari Manual for an explanation of those). If \var{lib} is
omitted, uses \kbd{libpari.so}. If \var{gpname} is omitted, uses
\var{name}.\label{se:install}

This function is useful for adding custom functions to the \kbd{gp} interpreter,
or picking useful functions from unrelated libraries. For instance, it
makes the function \tet{system} obsolete:

\bprog
? install(system, vs, sys, "libc.so")
? sys("ls gp*")
gp.c            gp.h            gp_rl.c
@eprog

But it also gives you access to all (non static) functions defined in the
PARI library. For instance, the function \kbd{GEN addii(GEN x, GEN y)} adds
two PARI integers, and is not directly accessible under \kbd{gp} (it's eventually
called by the \kbd{+} operator of course):

\bprog
? install("addii", "GG")
? addii(1, 2)
%1 = 3
@eprog\noindent
Re-installing a function will print a Warning, and update the prototype code
if needed, but will not reload the symbol from the library, even if the latter
has been recompiled.

\misctitle{Caution:} This function may not work on all systems, especially
when \kbd{gp} has been compiled statically. In that case, the first use of an
installed function will provoke a Segmentation Fault, i.e.~a major internal
blunder (this should never happen with a dynamically linked executable).
Hence, if you intend to use this function, please check first on some
harmless example such as the ones above that it works properly on your
machine.

\subsubsecidx{kill}$(s)$:\label{se:kill} kills the present value of the
variable, alias or user-defined function $s$. The corresponding identifier
can now be used to name any GP object (variable or function). This is the
only way to replace a variable by a function having the same name (or the
other way round), as in the following example:

\bprog
? f = 1
%1 = 1
? f(x) = 0
  ***   unused characters: f(x)=0
                            ^----
? kill(f)
? f(x) = 0
? f()
%2 = 0
@eprog

  When you kill a variable, all objects that used it become invalid. You
can still display them, even though the killed variable will be printed in a
funny way. For example:

\bprog
? a^2 + 1
%1 = a^2 + 1
? kill(a)
? %1
%2 = #<1>^2 + 1
@eprog

If you simply want to restore a variable to its ``undefined'' value
(monomial of degree one), use the \idx{quote} operator: \kbd{a = 'a}.
Predefined symbols (\kbd{x} and GP function names) cannot be killed.

\subsubsecidx{print}$(\{\var{str}\}*)$: outputs its (string) arguments in raw
format, ending with a newline.

\subsubsecidx{print1}$(\{\var{str}\}*)$: outputs its (string) arguments in raw
format, without ending with a newline (note that you can still embed newlines
within your strings, using the \b{n} notation~!).

\subsubsecidx{printp}$(\{\var{str}\}*)$: outputs its (string) arguments in
prettyprint (beautified) format, ending with a newline.

\subsubsecidx{printp1}$(\{\var{str}\}*)$: outputs its (string) arguments in
prettyprint (beautified) format, without ending with a newline.

\subsubsecidx{printtex}$(\{\var{str}\}*)$: outputs its (string) arguments in
\TeX\ format. This output can then be used in a \TeX\ manuscript.
The printing is done on the standard output. If you want to print it to a
file you should use \kbd{writetex} (see there).

Another possibility is to enable the \tet{log} default
(see~\secref{se:defaults}).
You could for instance do:\sidx{logfile}
%
\bprog
default(logfile, "new.tex");
default(log, 1);
printtex(result);
@eprog

\subsubsecidx{quit}$()$: exits \kbd{gp}.\label{se:quit}

\subsubsecidx{read}$(\{\var{filename}\})$: reads in the file
\var{filename} (subject to string expansion). If \var{filename} is
omitted, re-reads the last file that was fed into \kbd{gp}. The return
value is the result of the last expression evaluated.\label{se:read}

If a GP \tet{binary file} is read using this command (see
\secref{se:writebin}), the file is loaded and the last object in the file
is returned.

\subsubsecidx{readvec}$(\{\var{filename}\})$: reads in the file
\var{filename} (subject to string expansion). If \var{filename} is
omitted, re-reads the last file that was fed into \kbd{gp}. The return
value is a vector whose components are the evaluation of all sequences
of instructions contained in the file. For instance, if \var{file} contains
\bprog
  1
  2
  3
@eprog\noindent
then we will get:
\bprog
  ? \r a
  %1 = 1
  %2 = 2
  %3 = 3
  ? read(a)
  %4 = 3
  ? readvec(a)
  %5 = [1, 2, 3]
@eprog
In general a sequence is just a single line, but as usual braces and
\kbd{\bs\bs} may be used to enter multiline sequences.

\subsubsecidx{reorder}$(\{x=[\,]\})$: $x$ must be a vector. If $x$ is the
empty vector, this gives the vector whose components are the existing
variables in increasing order (i.e.~in decreasing importance). Killed
variables (see \kbd{kill}) will be shown as \kbd{0}. If $x$ is
non-empty, it must be a permutation of variable names, and this permutation
gives a new order of importance of the variables, \emph{for output only}. For
example, if the existing order is \kbd{[x,y,z]}, then after
\kbd{reorder([z,x])} the order of importance of the variables, with respect
to output, will be \kbd{[z,y,x]}. The internal representation is unaffected.
\label{se:reorder}
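
A brief illustrative session (assuming a fresh session in which only the
variables \kbd{x}, \kbd{y} and \kbd{z} have been used):
\bprog
? y; z;               \\ make sure y and z exist
? reorder()
%1 = [x, y, z]
? reorder([z, x]);    \\ output order of importance becomes [z, y, x]
@eprog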

\subsubsecidx{setrand}$(n)$: reseeds the random number generator to the value
$n$. The initial seed is $n=1$.
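For example, reseeding with the same value reproduces the same pseudo-random
numbers:
\bprog
? setrand(42); a = random(10^6);
? setrand(42); b = random(10^6);
? a == b
%1 = 1
@eprog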

\syn{setrand}{n}, where $n$ is a \kbd{long}. Returns $n$.

\subsubsecidxunix{system}$(\var{str})$: \var{str} is a string representing
a system command. This command is executed, its output written to the
standard output (this won't get into your logfile), and control returns
to the PARI system. This simply calls the C \kbd{system} command.
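For instance, on a Unix-like machine (the available commands of course depend
on your environment):
\bprog
? system("uname -s")   \\ the command's output goes to standard output
@eprog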

\subsubsecidx{trap}$(\{e\}, \{\var{rec}\}, \{\var{seq}\})$: tries to
evaluate \var{seq}, trapping error $e$, that is, effectively preventing it
from aborting computations in the usual way; the recovery sequence
\var{rec} is executed if the error occurs, and the evaluation of \var{rec}
becomes the result of the command. If $e$ is omitted, all exceptions are
trapped. Note in particular that hitting \kbd{\pow C} (Control-C) raises an
exception. See \secref{se:errorrec} for an introduction to error recovery
under \kbd{gp}.

\bprog
? \\@com trap division by 0
? inv(x) = trap (gdiver, INFINITY, 1/x)
? inv(2)
%1 = 1/2
? inv(0)
%2 = INFINITY
@eprog

If \var{seq} is omitted, defines \var{rec} as a default action when
catching exception $e$, provided no other trap as above intercepts it first.
The error message is printed, as well as the result of the evaluation of
\var{rec}, and control is given back to the \kbd{gp} prompt. In particular, the
current computation is then lost.

The following error handler prints the list of all user variables, then
stores in a file their name and their values:\kbdsidx{writebin}
\bprog
? { trap( ,
      print(reorder);
      writebin("crash")) }
@eprog

If no recovery code is given (\var{rec} is omitted) a \tev{break loop} will
be started (see \secref{se:breakloop}). In particular
\bprog
? trap()
@eprog\noindent
by itself installs a default error handler, that will start a break
loop whenever an exception is raised.

If \var{rec} is the empty string \kbd{""} the default handler (for that error
if $e$ is present) is disabled.
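
For example, one may install a default action for division by $0$, then remove
it again (a small illustrative sketch):
\bprog
? trap(gdiver, print("caught a division by 0"))  \\ install default action
? trap(gdiver, "")                               \\ disable it again
@eprog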

\misctitle{Note:} The interface is currently not adequate for trapping
individual exceptions. In the current version \vers, the following keywords
are recognized, but the name list will be expanded and changed in the
future (all library mode errors can be trapped: it's a matter of defining
the keywords to \kbd{gp}, and there are currently far too many useless ones):

\kbd{accurer}: accuracy problem

\kbd{archer}: not available on this architecture or operating system

\kbd{errpile}: the PARI stack overflows

\kbd{gdiver}: division by 0

\kbd{invmoder}: impossible inverse modulo

\kbd{siginter}: SIGINT received (usually from Control-C)

\kbd{talker}: miscellaneous error

\kbd{typeer}: wrong type

\kbd{user}: user error (from the \kbd{error} function)

\subsubsecidx{type}$(x)$: this is useful only under \kbd{gp}. Returns the
internal type name of the PARI object $x$ as a  string. Check out
existing type names with the metacommand \b{t}.\label{se:gptype}
For example \kbd{type(1)} will return "\typ{INT}".
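A few more illustrative calls:
\bprog
? type(1/2)
%1 = "t_FRAC"
? type(x)
%2 = "t_POL"
? type("hello")
%3 = "t_STR"
@eprog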

\syn{type0}{\var{x}}, though the macro \kbd{typ} is usually simpler to use
since it returns an integer that can easily be matched with the symbols
\typ{*}. The name \kbd{type} was avoided because \kbd{type} is a reserved
identifier for some C(++) compilers.

\subsubsecidx{version}$()$: Returns the current version number as a \typ{VEC}
with three integer components: major version number, minor version number and
patchlevel. To check against a particular version number, you can use:
\bprog
   if (lex(version(), [2,2,0]) >= 0,
     \\ code to be executed if we are running 2.2.0 or more recent.
   ,
     \\ compatibility code
   );
@eprog

\subsubsecidx{whatnow}$(\var{key})$: if keyword \var{key} is the name
of a function that was present in GP version 1.39.15 or lower, outputs
the new function name and syntax, if it changed at all ($387$ out of $560$
did).\label{se:whatnow}

\subsubsecidx{write}$(\var{filename},\{\var{str}\}*)$: writes (appends)
to \var{filename} the remaining arguments, and appends a newline (same output
as \kbd{print}).\label{se:write}
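
A small example, using a hypothetical file name \kbd{tmp.gp} (assumed not to
exist yet, since \kbd{write} appends):
\bprog
? write("tmp.gp", "x = 17")    \\ appends the line  x = 17  to tmp.gp
? read("tmp.gp")
%1 = 17
@eprog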

\subsubsecidx{write1}$(\var{filename},\{\var{str}\}*)$: writes (appends) to
\var{filename} the remaining arguments without a trailing newline
(same output as \kbd{print1}).

\subsubsecidx{writebin}$(\var{filename},\{x\})$: writes (appends) to
\var{filename} the object $x$ in binary format. This format is not human
readable, but contains the exact internal structure of $x$, and is much
faster to save/load than a string expression, as would be produced by
\tet{write}. The binary file format includes a magic number, so that such a
file can be recognized and correctly input by the regular \tet{read} or \b{r}
function. If saved objects refer to (polynomial) variables that are not
defined in the new session, they will be displayed in a funny way (see
\secref{se:kill}).

If $x$ is omitted, saves all user variables from the session, together with
their names. Reading such a ``named object'' back in a \kbd{gp} session will set
the corresponding user variable to the saved value. E.g.~after
\bprog
x = 1; writebin("log")
@eprog\noindent
reading \kbd{log} into a clean session will set \kbd{x} to $1$.
The relative priorities (see \secref{se:priority}) of new variables
set in this way remain the same (preset variables retain their former
priority, but are set to the new value). In particular, reading such a
session log into a clean session will restore all variables exactly as they
were in the original one.

User functions, installed functions and history objects cannot be saved via
this function. Just as a regular input file, a binary file can be compressed
using \tet{gzip}, provided the file name has the standard \kbd{.gz}
extension. \label{se:writebin}\sidx{binary file}

In the present implementation, the binary files are architecture dependent
and compatibility with future versions of \kbd{gp} is not guaranteed. Hence
binary files should not be used for long term storage (also, they are
larger and harder to compress than text files).

\subsubsecidx{writetex}$(\var{filename},\{\var{str}\}*)$: as \kbd{write},
in \TeX\ format.\label{se:writetex}

\vfill\eject