User-defined Aggregates
Aggregate functions in PostgreSQL
are defined in terms of state values
and state transition functions.
That is, an aggregate operates using a state value that is updated
as each successive input row is processed.
To define a new aggregate
function, one selects a data type for the state value,
an initial value for the state, and a state transition
function. The state transition function takes the previous state
value and the aggregate's input value(s) for the current row, and
returns a new state value.
A final function
can also be specified, in case the desired result of the aggregate
is different from the data that needs to be kept in the running
state value. The final function takes the last state value
and returns whatever is wanted as the aggregate result.
In principle, the transition and final functions are just ordinary
functions that could also be used outside the context of the
aggregate. (In practice, it's often helpful for performance reasons
to create specialized transition functions that can only work when
called as part of an aggregate.)
Thus, in addition to the argument and result data types seen by a user
of the aggregate, there is an internal state-value data type that
might be different from both the argument and result types.
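As a concrete illustration of these pieces, here is a minimal sketch of a
count-like aggregate in which the input type (text) differs from the state
and result type (bigint); the names my_count and my_count_step are
hypothetical, not built-ins:
CREATE FUNCTION my_count_step(state bigint, x text) RETURNS bigint AS $$
    -- ignore the input value and just advance the running count
    -- (because this function is not strict, null inputs are counted too)
    SELECT state + 1;
$$ LANGUAGE SQL IMMUTABLE;

CREATE AGGREGATE my_count (text)
(
    sfunc = my_count_step,
    stype = bigint,
    initcond = '0'
);
No final function is needed here, because the running state value (the count
so far) is already the desired result.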
If we define an aggregate that does not use a final function,
we have an aggregate that computes a running function of
the column values from each row. sum is an
example of this kind of aggregate. sum starts at
zero and always adds the current row's value to
its running total. For example, if we want to make a sum
aggregate work on a data type for complex numbers,
we only need the addition function for that data type.
The aggregate definition would be:
CREATE AGGREGATE sum (complex)
(
    sfunc = complex_add,
    stype = complex,
    initcond = '(0,0)'
);
which we might use like this:
SELECT sum(a) FROM test_complex;
sum
-----------
(34,53.9)
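This definition presupposes that the complex data type and a complex_add
function already exist. Purely for illustration, here is a minimal sketch of
both, assuming the complex type is a simple composite type:
CREATE TYPE complex AS (r double precision, i double precision);

CREATE FUNCTION complex_add(a complex, b complex) RETURNS complex AS $$
    -- add the real and imaginary parts component-wise
    SELECT ROW(a.r + b.r, a.i + b.i)::complex;
$$ LANGUAGE SQL IMMUTABLE STRICT;
With this composite type, the initcond value '(0,0)' is simply the literal
for a complex zero.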
(Notice that we are relying on function overloading: there is more than
one aggregate named sum, but
PostgreSQL can figure out which kind
of sum applies to a column of type complex.)
The above definition of sum will return zero
(the initial state value) if there are no nonnull input values.
Perhaps we want to return null in that case instead — the SQL standard
expects sum to behave that way. We can do this simply by
omitting the initcond phrase, so that the initial state
value is null. Ordinarily this would mean that the sfunc
would need to check for a null state-value input. But for
sum and some other simple aggregates like
max and min,
it is sufficient to insert the first nonnull input value into
the state variable and then start applying the transition function
at the second nonnull input value. PostgreSQL
will do that automatically if the initial state value is null and
the transition function is marked strict (i.e., not to be called
for null inputs).
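For example, if the earlier sum(complex) aggregate is dropped and re-created
without an initcond, and the hypothetical complex_add function sketched above
is marked strict, this null-handling behavior takes effect automatically:
CREATE AGGREGATE sum (complex)
(
    sfunc = complex_add,
    stype = complex
);
This works because the state type is the same as the input type, so the first
nonnull input value can simply be stored as the initial state.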
Another bit of default behavior for a strict transition function
is that the previous state value is retained unchanged whenever a
null input value is encountered. Thus, null values are ignored. If you
need some other behavior for null inputs, do not declare your
transition function as strict; instead code it to test for null inputs and
do whatever is needed.
avg (average) is a more complex example of an aggregate.
It requires
two pieces of running state: the sum of the inputs and the count
of the number of inputs. The final result is obtained by dividing
these quantities. Average is typically implemented by using an
array as the state value. For example,
the built-in implementation of avg(float8)
looks like:
CREATE AGGREGATE avg (float8)
(
    sfunc = float8_accum,
    stype = float8[],
    finalfunc = float8_avg,
    initcond = '{0,0,0}'
);
float8_accum requires a three-element array, not just
two elements, because it accumulates the sum of squares as well as
the sum and count of the inputs. This is so that it can be used for
some other aggregates besides avg.
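For illustration, here is a minimal sketch of the same idea pared down to the
two state values that avg itself needs, using hypothetical SQL-language
support functions (the built-in version uses C functions and keeps a third
element, as noted above):
CREATE FUNCTION my_avg_accum(state numeric[], x numeric) RETURNS numeric[] AS $$
    -- state[1] is the running sum, state[2] is the running count
    SELECT ARRAY[state[1] + x, state[2] + 1];
$$ LANGUAGE SQL STRICT;

CREATE FUNCTION my_avg_final(state numeric[]) RETURNS numeric AS $$
    -- return null if no nonnull inputs were seen, otherwise sum/count
    SELECT CASE WHEN state[2] = 0 THEN NULL ELSE state[1] / state[2] END;
$$ LANGUAGE SQL STRICT;

CREATE AGGREGATE my_avg (numeric)
(
    sfunc = my_avg_accum,
    stype = numeric[],
    finalfunc = my_avg_final,
    initcond = '{0,0}'
);
Because my_avg_accum is strict, null inputs are ignored (the previous state
is retained), matching the behavior described earlier.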
Aggregate functions can use polymorphic
state transition functions or final functions, so that the same functions
can be used to implement multiple aggregates.
See the documentation on polymorphic types
for an explanation of polymorphic functions.
Going a step further, the aggregate function itself can be specified
with polymorphic input type(s) and state type, allowing a single
aggregate definition to serve for multiple input data types.
Here is an example of a polymorphic aggregate:
CREATE AGGREGATE array_accum (anyelement)
(
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);
Here, the actual state type for any given aggregate call is the array type
having the actual input type as elements. The behavior of the aggregate
is to concatenate all the inputs into an array of that type.
(Note: the built-in aggregate array_agg provides similar
functionality, with better performance than this definition would have.)
Here's the output using two different actual data types as arguments:
SELECT attrelid::regclass, array_accum(attname)
FROM pg_attribute
WHERE attnum > 0 AND attrelid = 'pg_tablespace'::regclass
GROUP BY attrelid;
attrelid | array_accum
---------------+---------------------------------------
pg_tablespace | {spcname,spcowner,spcacl,spcoptions}
(1 row)
SELECT attrelid::regclass, array_accum(atttypid::regtype)
FROM pg_attribute
WHERE attnum > 0 AND attrelid = 'pg_tablespace'::regclass
GROUP BY attrelid;
attrelid | array_accum
---------------+---------------------------
pg_tablespace | {name,oid,aclitem[],text[]}
(1 row)
An aggregate function can be made to accept a varying number of arguments
by declaring its last argument as a VARIADIC array, in much
the same fashion as for regular functions; see the description of
variadic functions. The aggregate's transition
function must have the same array type as its last argument. The
transition function typically would also be marked VARIADIC,
but this is not strictly required.
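As a sketch (with hypothetical names), a variadic aggregate that collects all
of its arguments from every row into a single text array could be declared
like this:
CREATE FUNCTION concat_all_accum(state text[], VARIADIC vals text[]) RETURNS text[] AS $$
    -- append this row's argument values (gathered into vals) to the state array
    SELECT state || vals;
$$ LANGUAGE SQL IMMUTABLE;

CREATE AGGREGATE concat_all (VARIADIC text[])
(
    sfunc = concat_all_accum,
    stype = text[],
    initcond = '{}'
);
Note that the transition function's last argument has the same array type
(text[]) as the aggregate's variadic argument, and here it is also marked
VARIADIC, though that is optional.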
Variadic aggregates are easily misused in connection with
the ORDER BY option (see the description of aggregate expressions),
since the parser cannot tell whether the wrong number of actual arguments
has been given in such a combination. Keep in mind that everything to
the right of ORDER BY is a sort key, not an argument to the
aggregate. For example, in
SELECT myaggregate(a ORDER BY a, b, c) FROM ...
the parser will see this as a single aggregate function argument and
three sort keys. However, the user might have intended
SELECT myaggregate(a, b, c ORDER BY a) FROM ...
If myaggregate is variadic, both these calls could be
perfectly valid.
For the same reason, it's wise to think twice before creating aggregate
functions with the same names and different numbers of regular arguments.
Aggregate function calls in SQL allow DISTINCT
and ORDER BY options that control which rows are fed
to the aggregate's transition function and in what order. These
options are implemented behind the scenes and are not the concern
of the aggregate's support functions.
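For example, with the array_accum aggregate defined earlier, duplicate
elimination and sorting happen before the transition function sees any rows:
SELECT array_accum(DISTINCT attname ORDER BY attname)
FROM pg_attribute
WHERE attnum > 0 AND attrelid = 'pg_tablespace'::regclass;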
The aggregates we have been describing so far are normal
aggregates. PostgreSQL also
supports ordered-set aggregates, which differ from
normal aggregates in two key ways. First, in addition to ordinary
aggregated arguments that are evaluated once per input row, an
ordered-set aggregate can have direct arguments that are
evaluated only once per aggregation operation. Second, the syntax
for the ordinary aggregated arguments specifies a sort ordering
for them explicitly. An ordered-set aggregate is usually
used to implement a computation that depends on a specific row
ordering, for instance rank or percentile, so that the sort ordering
is a required aspect of any call. For example, the built-in
definition of percentile_disc is equivalent to:
CREATE AGGREGATE percentile_disc (float8 ORDER BY anyelement)
(
    sfunc = ordered_set_transition,
    stype = internal,
    finalfunc = percentile_disc_final,
    finalfunc_extra
);
which could be used to obtain a median household income like this:
SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households;
percentile_disc
-----------------
50489
Here, 0.5 is a direct argument; it would make no sense
for the percentile fraction to be a value varying across rows.
Unlike the case for normal aggregates, the sorting of input rows for
an ordered-set aggregate is not done behind the scenes,
but is the responsibility of the aggregate's support functions.
The typical implementation approach is to keep a reference to
a tuplesort object in the aggregate's state value, feed the
incoming rows into that object, and then complete the sorting and
read out the data in the final function. This design allows the
final function to perform special operations such as injecting
additional "hypothetical" rows into the data to be sorted.
While normal aggregates can often be implemented with support
functions written in PL/pgSQL or another
PL language, ordered-set aggregates generally have to be written in
C, since their state values aren't definable as any SQL data type.
(In the above example, notice that the state value is declared as
type internal — this is typical.)
The state transition function for an ordered-set aggregate receives
the current state value plus the aggregated input values for
each row, and returns the updated state value. This is the
same definition as for normal aggregates, but note that the direct
arguments (if any) are not provided. The final function receives
the last state value, the values of the direct arguments if any,
and null values corresponding to the aggregated input(s). While the
null values seem useless at first sight, they are important because
they make it possible to include the data types of the aggregated
input(s) in the final function's signature, which may be necessary
to resolve the output type of a polymorphic aggregate. For example,
the built-in mode() ordered-set aggregate takes a
single aggregated column of any sortable data type and returns a
value of that same type. This is possible because the final function
is declared as mode_final(internal, anyelement) returns
anyelement, with the anyelement parameter corresponding
to the dummy null argument that represents the aggregated column.
The actual data is conveyed in the internal-type state
value, but type resolution needs a parse-time indication of what the
result data type will be, and the dummy argument provides that.
In the example of percentile_disc, the support functions
are respectively declared as
ordered_set_transition(internal, "any") returns internal
and percentile_disc_final(internal, float8, anyelement)
returns anyelement.
A function written in C can detect that it is being called as an
aggregate transition or final function by calling
AggCheckCallContext, for example:
if (AggCheckCallContext(fcinfo, NULL))
One reason for checking this is that when it is true for a transition
function, the first input
must be a temporary state value and can therefore safely be modified
in-place rather than allocating a new copy.
See int8inc() for an example.
(This is the only
case where it is safe for a function to modify a pass-by-reference input.
In particular, final functions for normal aggregates must not
modify their inputs in any case, because in some cases they will be
re-executed on the same final state value.)
Another support routine available to aggregate functions written in C
is AggGetAggref, which returns the Aggref
parse node that defines the aggregate call. This is mainly useful
for ordered-set aggregates, which can inspect the substructure of
the Aggref node to find out what sort ordering they are
supposed to implement. Examples can be found
in orderedsetaggs.c in the PostgreSQL
source code.
For further details see the
CREATE AGGREGATE command.