Quentin F. Stout

Computer Science and Engineering, University of Michigan

**Abstract:**
This paper gives algorithms for determining weighted
L_{∞} isotonic regressions satisfying order constraints
on a partially ordered set specified via a directed acyclic graph (dag).
Except for degenerate cases, L_{∞} isotonic regression is not unique, and some regressions have undesirable properties.
Hence we examine the properties of the regressions an algorithm produces, in addition to its running time.
Note that the L_{∞} metric is also called the uniform metric, Chebyshev metric, and minimax metric, and isotonic regression is also known as monotonic regression.
Isotonic regression is a form of nonparametric shape-constrained regression.
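To fix notation (a standard formulation, not quoted from the paper): each vertex v of the dag has a data value y(v) and a positive weight w(v); a function f on the vertices is isotonic if f(u) ≤ f(v) for every edge (u,v); and an L_{∞} isotonic regression is an isotonic f minimizing the weighted error max_v w(v)·|f(v) − y(v)|.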

For a dag with *n* vertices and *m* edges we give an
algorithm taking Θ(m log n) time.
Unfortunately it is based on parametric search, which is completely infeasible in practice, so we also give a simple, fast, randomized algorithm with the same expected running time.
While that is not a worst-case guarantee, the algorithm achieves this time with high probability.
Further, its approach can easily replace parametric search in several other problems, such as approximation by step functions.
In practice it should be significantly faster than algorithms with guaranteed worst-case performance.
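For intuition about how such search-based algorithms work, here is a minimal sketch (an illustration of the underlying decision subroutine, not the paper's algorithm; all names are ours), assuming vertices 0..n−1 with adjacency lists: a weighted isotonic fit with error at most ε exists iff the intervals [y(v) − ε/w(v), y(v) + ε/w(v)] admit an isotonic selection, which a single topological sweep can check.

```python
from collections import deque

def topo_order(n, adj):
    """Kahn's algorithm: a topological order of vertices 0..n-1,
    where adj[u] lists the dag edges u -> v."""
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order

def feasible(n, adj, y, w, eps):
    """True iff some isotonic f has max_v w[v]*|f(v) - y[v]| <= eps,
    i.e. the intervals [y[v] - eps/w[v], y[v] + eps/w[v]] admit an
    isotonic selection.  One forward sweep suffices: by the time
    vertex u is reached, every predecessor has already pushed its
    lower bound into lo[u]."""
    lo = [y[v] - eps / w[v] for v in range(n)]
    hi = [y[v] + eps / w[v] for v in range(n)]
    for u in topo_order(n, adj):
        if lo[u] > hi[u]:              # interval emptied: eps too small
            return False
        for v in adj[u]:
            lo[v] = max(lo[v], lo[u])  # isotonicity forces f(v) >= f(u)
    return True

# Tiny check on the path 0 -> 1 -> 2 with decreasing data y = (3, 2, 1):
# the optimal error is 1.0, forced by the pair (0, 2).
adj = [[1], [2], []]
y, w = [3.0, 2.0, 1.0], [1.0, 1.0, 1.0]
assert feasible(3, adj, y, w, 1.0)
assert not feasible(3, adj, y, w, 0.5)
```

The optimal error ε* is the smallest feasible ε; it is attained at the weighted mean error w(u)w(v)(y(u) − y(v))/(w(u) + w(v)) of some violating pair u ⪯ v, and parametric search, or the randomized strategy, locates it in this implicit candidate set without enumerating all pairs.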

Algorithms are also given to determine unimodal regression on an undirected tree, isotonic regressions where the regression values must come from a specified set, and isotonic regressions where the number of different weights, or the number of different values, is much smaller than the number of vertices. An algorithm is also given for isotonic regression on a rooted tree where the value of a parent must be the largest value of its children. We call this "river isotonic regression" since it is similar to the naming convention for rivers.

The parametric search and randomized algorithms mentioned above can produce Max, the regression
which is pointwise the largest of any L_{∞} isotonic
regression, and Min, the pointwise minimum.
Other regressions considered
are Basic, Prefix, and Strict. Basic is the most commonly
studied, but it is rarely acknowledged that it
is nontrivial to compute for weighted data;
Prefix is somewhat easier to compute.
Strict is discussed more thoroughly in
Strict L_{∞} isotonic regression.
Strict regression is the "best best" L_{∞} isotonic regression.
It is the limit, as p → ∞, of L_{p} isotonic regression.
It has a strong property concerning the number of vertices with large
regression errors (see the paper for a precise statement).
Prefix and Basic have a somewhat weaker property in that they minimize the
set of vertices with maximal error.
At the other end of the spectrum, at any vertex, one or both of
Max and Min have the worst error among all isotonic regressions.
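To make Max and Min concrete, here is a hedged sketch using a standard characterization (our formulation, not code from the paper): once the optimal error ε* is known, Max(v) = min{y(u) + ε*/w(u) : v ⪯ u} and Min(v) = max{y(u) − ε*/w(u) : u ⪯ v}, each computable by one sweep in topological order, reusing topo_order from the sketch above.

```python
def max_min_regressions(n, adj, y, w, eps_star):
    """Given the optimal error eps_star, compute the pointwise largest
    (Max) and smallest (Min) optimal L_inf isotonic regressions via
        Max(v) = min{ y[u] + eps_star/w[u] : v precedes-or-equals u }
        Min(v) = max{ y[u] - eps_star/w[u] : u precedes-or-equals v },
    each in one Theta(n + m)-time sweep (topo_order as defined above)."""
    order = topo_order(n, adj)
    mx = [y[v] + eps_star / w[v] for v in range(n)]
    for u in reversed(order):            # pull minima back from successors
        for v in adj[u]:
            mx[u] = min(mx[u], mx[v])
    mn = [y[v] - eps_star / w[v] for v in range(n)]
    for u in order:                      # push maxima forward from predecessors
        for v in adj[u]:
            mn[v] = max(mn[v], mn[u])
    return mx, mn
```

For example, on the path 0 → 1 → 2 → 3 with unweighted data y = (3, 2, 1, 5) we have ε* = 1, Max = (2, 2, 2, 6), and Min = (2, 2, 2, 4); the two regressions differ only at vertex 3, where any value in [4, 6] is optimal, illustrating the non-uniqueness noted in the abstract.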

**Complete paper**.
This appears in *J. Computer Sys. and Sci.* 91 (2018), pp. 69-81. This version is a major revision of a much longer version from 2008.
That version included an approach for multidimensional data, which was moved to
Isotonic Regression for Multiple Independent Variables, where the approach was also applied to L_{1} and
L_{2} isotonic regressions.
That paper was published a few years ago, and now, after a sequence of absurdly long delays, including ones caused by the death of an editor, this version is finally published.

**Related work:**
Here is information about the fastest
isotonic regression algorithms known to date, for a variety of orderings and metrics.

Here are some new, faster, but more complex, algorithms for L_{∞} isotonic regression on linear, multidimensional, and tree orders, and algorithms for approximation by step functions with a specified number of steps.
Here are papers on L_{1} isotonic regression (also known as isotonic median regression),
isotonic regression for multiple independent variables,
isotonic regression with a fixed number of level sets, and
unimodal regression.
The paper on L_{1} isotonic regression also includes new algorithms
for L_{p} isotonic regression, 1 < p < ∞.
These are based on using a sequence of L_{1} binary-valued isotonic
regressions.
For L_{2}, several of these improve upon prior results, which is
somewhat surprising since that is the most thoroughly studied case.

Copyright © 2008-2018 Quentin F. Stout.