
Adaptive Thinning for Terrain Modelling and Image Compression
Laurent Demaret¹, Nira Dyn², Michael S. Floater³, and Armin Iske¹

¹ Zentrum Mathematik, Technische Universität München, Munich, Germany, {demaret,iske}@ma.tum.de
² School of Mathematical Sciences, Tel-Aviv University, Israel, niradyn@post.tau.ac.il
³ Computer Science Department, Oslo University, Norway, michaelf@ifi.uio.no

Summary. Adaptive thinning algorithms are greedy point removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion. Each subset of points, together with its function values, defines a linear spline over its Delaunay triangulation. The basic criterion for the removal of the next point is to minimize the error between the resulting linear spline at the bivariate data points and the original function values. This leads to a hierarchy of linear splines of coarser and coarser resolutions. This paper surveys the various removal strategies developed in our earlier papers, and the application of adaptive thinning to terrain modelling and to image compression. In our image test examples, we found that our thinning scheme, adapted to diminish the least squares error, combined with a postprocessing least squares optimization and a customized coding scheme, often gives results better than or comparable to those of the wavelet-based scheme SPIHT.

1 Introduction
This paper concerns the construction of multiresolution approximations to bivariate functions from irregular point samples. These approximations are linear splines over decremental Delaunay triangulations, generated by adaptive thinning algorithms. A thinning algorithm is a scheme which recursively removes points from a set of scattered data, according to some specific criterion. By recursively removing points, one at a time, a thinning algorithm yields an ordering of the points, which in turn yields a data hierarchy of the input point set. In adaptive thinning algorithms, the criterion for the removal of points depends on both the locations of the given points and the sampled function values at the points. This is in contrast to non-adaptive thinning, where the


criterion for removal depends only on the geometry of the given planar point set [10].

Linear splines over Delaunay triangulations are used in order to compute multiresolution approximations to the sampled function. Each subset of points defines a unique Delaunay triangulation (up to co-circularity) and a corresponding linear spline function. The resulting sequence of linear splines constitutes the multiresolution approximations of the sampled function. For every point in the current subset, we maintain an anticipated error, which provides a local estimate for the true error incurred by its removal. The error is the deviation of the spline function at the 2D data points from the given function values, measured in some specific norm. The point with minimal anticipated error is considered to be the least significant in the current situation, and is removed. In order to obtain good multiresolution approximations to the sampled function, the choice of removal criterion requires care. We customize our removal strategies according to the application.

The idea of thinning scattered data is closely related to mesh simplification methods. Indeed, thinning combined with linear splines over Delaunay triangulations is only one of several mesh simplification methods. One of the earliest methods for mesh simplification is the incremental algorithm of Fowler and Little [11]. An early survey paper is that of Lee [16]. De Floriani, Puppo, and co-workers have proposed several algorithms for mesh simplification and developed good data structures for hierarchical triangulations, see [4] and the references therein. Heckbert and Garland [13] give an extensive survey of simplification methods both for terrain models (triangulated scattered data in the plane) and free form models (manifold surfaces represented by 3D triangle meshes). Specific mesh simplification algorithms include techniques like edge collapse, half-edge collapse, and vertex collapse.
For a more recent survey of these methods, see the tutorial [12].

Adaptive thinning algorithms are useful for both model simplification and data compression, and have been applied to hierarchical terrain modelling and to image compression. This paper starts with a generic introduction to adaptive thinning algorithms, before going on to various application-specific measures of anticipated errors, and ends with numerical examples of the algorithms applied to terrain modelling and image compression.

In the image compression application, adaptive thinning constructs a hierarchy of most significant pixel positions in a digital image. We use a thinning scheme, whose anticipated error is measured in the ℓ2-norm, combined with a post-processing step which optimizes the luminances at the most significant pixels. The positions of the pixels in the set of most significant pixels, along with their optimized luminance values, are then converted into a bitstream, using a customized coding scheme for scattered data, developed in our previous paper [5]. At the decoder, the transmitted data is used for the image reconstruction, by evaluating the linear spline over the Delaunay triangulation of the transmitted pixels, interpolating the optimized luminance values at the vertices.


The result is a novel image compression scheme, AT₂*, which, in our numerical examples, often gives compression rates better than or comparable to those of the well-established wavelet-based compression method SPIHT (Set Partitioning Into Hierarchical Trees).

2 Generic Formulation of Thinning
This section provides a generic introduction to the basic features and concepts of adaptive thinning algorithms. Let X = {x1, . . . , xN} ⊂ R² denote a finite scattered point set in R², and let f_X = (f(x1), . . . , f(xN))ᵀ ∈ Rᴺ denote a corresponding data vector containing point samples taken from an unknown function f : R² → R at the points of X. Thinning is a recursive point removal scheme for bivariate scattered data, whose generic formulation is given by the following algorithm, where n is the number of removals.

Algorithm 1 (Thinning).
(1) Let X_N = X;
(2) FOR k = 1, . . . , n
    (2a) Locate a removable point x ∈ X_{N−k+1};
    (2b) Let X_{N−k} = X_{N−k+1} \ x;

Note that thinning constructs, for given data (X, f_X), a nested sequence

  X_{N−n} ⊂ · · · ⊂ X_{N−1} ⊂ X_N = X   (1)

of subsets of X, where the size |X_k| of any subset X_k in (1) is k, N−n ≤ k ≤ N. Two consecutive subsets in (1) differ only by one point. In order to select a specific thinning strategy, it remains to give a definition of a removable point in step (2a) above. Before we propose several different preferred removal strategies, let us first discuss our motivation for the construction of the data hierarchy in (1).

Our intention is to use the data hierarchy (1) in order to create a multiresolution approximation of the sampled function f from the given data f_X. The multiresolution approximation of f combines the data hierarchy (1) with linear splines. Recall that a linear spline is a continuous function, which is piecewise linear over a partitioning of its domain Ω ⊂ R². In the setting of this paper, we let the domain Ω coincide with the convex hull [X] of the input point set X. This makes sense if the convex hull [Y] of any subset Y ⊂ X, constructed by thinning, coincides with the convex hull of X. We ensure this by not removing extremal points from X.

A convenient choice for the partitioning of Ω is the Delaunay triangulation D(Y) of Y. Although we assume that the reader is familiar with Delaunay triangulation methods, let us recall some of their basic properties, which are relevant to the construction of adaptive thinning algorithms. For a comprehensive discussion of triangulation


methods, in particular Delaunay triangulations, we refer to the textbook [17] and the paper [19]. Firstly, we remark that a Delaunay triangulation D(Y) of a finite planar point set Y is one such that, for any triangle in D(Y), its circumcircle does not contain any point of Y in its interior. This property is termed the Delaunay property. Moreover, there is a unique triangulation of Y with the Delaunay property, provided that no four points in Y are co-circular [17]. We assume this condition on the given set X in order to avoid lengthy but immaterial discussions concerning the non-uniqueness of D(Y), for Y ⊂ X. Secondly, note that the removal of one point y from Y requires an update of D(Y) in order to obtain the Delaunay triangulation D(Y \ y). Due to the Delaunay property, this update of D(Y) is local. Indeed, the retriangulation incurred by the removal of the vertex y in D(Y) can be performed by retriangulating its cell C(y). Recall that the cell C(y) of y is the union of all triangles in D(Y) which contain y as a vertex. Figure 1 shows, for a vertex y in a Delaunay triangulation, the retriangulation of its cell C(y).


Fig. 1. Removal of the vertex y, and retriangulation of its cell. The five triangles of the cell in (a) are replaced by the three triangles in (b).

For any Y ⊂ X, let

  S_Y = { s : s ∈ C([Y]) and s|_T linear for all T ∈ D(Y) }

be the spline space containing all continuous functions over [Y] whose restriction to any triangle in D(Y) is linear. Any element of S_Y is referred to as a linear spline over D(Y). For given function values f_Y, there is a unique linear spline L(Y; f) ∈ S_Y which interpolates f at the points of Y,

  L(Y; f)(y) = f(y),  for all y ∈ Y.


The aim of adaptive thinning is to construct a data hierarchy of the form (1) from the given data (X, f_X), such that for any subset Y = X_k ⊂ X in (1) the interpolatory linear spline L(Y; f) ∈ S_Y is close at X to the given function values f_X ∈ Rᴺ. In order to establish this, we require that, for some norm ‖·‖ on Rᴺ, the approximation error

  η(Y; X) = ‖L(Y; f)|_X − f_X‖   (2)

is small. Note that η(Y; X) in (2) depends also on the input values f_X, but for notational simplicity we omit this. For the discrete ℓ∞-norm, the approximation error η(Y; X) becomes

  η∞(Y; X) = max_{x∈X} |L(Y; f)(x) − f(x)|,   (3)

whereas for the discrete ℓ2-norm, we obtain

  η₂(Y; X) = ( Σ_{x∈X} |L(Y; f)(x) − f(x)|² )^{1/2}.   (4)
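The two discrete error measures can be computed directly from any reconstruction s and the samples f evaluated at the points of X. A minimal Python sketch with made-up example functions and points (none of the data is from the paper):

```python
# Discrete error measures (3) and (4); s and f are plain callables here.

def eta_inf(s, f, X):
    """Discrete l-infinity error (3): maximal deviation over X."""
    return max(abs(s(x) - f(x)) for x in X)

def eta_2(s, f, X):
    """Discrete l2 error (4): root of the sum of squared deviations."""
    return sum((s(x) - f(x)) ** 2 for x in X) ** 0.5

# Made-up example: a reconstruction with a constant offset of 0.1.
X = [(0, 0), (1, 0), (0, 1), (1, 1)]
f = lambda x: x[0] + x[1]            # sampled function
s = lambda x: x[0] + x[1] + 0.1      # reconstruction
print(eta_inf(s, f, X))   # about 0.1
print(eta_2(s, f, X))     # about 0.1 * sqrt(4) = 0.2
```

Note that eta_2 returns the root of the sum of squares, matching (4); dividing its square by N gives the mean square error used later in (6).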

Ideally, for any k, N−n < k < N, we would like to locate a subset Y ⊂ X which minimizes the error in (2) among all subsets of X of equal size |Y| = k. We remark, however, that the problem of finding an algorithm which outputs, for any possible input (X, f_X, k), N−n < k < |X|, such an optimal subset Y* of size |Y*| = k satisfying

  η(Y*; X) = min_{Y⊂X, |Y|=k} η(Y; X)   (5)

is closely related to the NP-hard k-center problem. For a comprehensive discussion of the k-center problem we refer to the textbook [14, Section 9.4.1] and the survey [20]. To overcome this difficulty, thinning algorithms are based on a greedy removal strategy. The application of greedy algorithms to the k-center problem is developed in [15]. Greedy algorithms are in general known as efficient and effective methods for solving optimization problems approximately. They typically go through a sequence of steps, where at each step a choice is made that looks best at the moment. For a general introduction to greedy algorithms we recommend the textbook [2, Chapter 16].

For the thinning Algorithm 1, the most natural removal criterion in step (2a) for approximately solving (5) by a greedy algorithm is

Definition 1. (Removal Criterion AT) For Y ⊂ X, a point y* ∈ Y is said to be removable from Y, iff it satisfies

  η(Y \ y*; X) = min_{y∈Y} η(Y \ y; X).
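The greedy loop of Algorithm 1 with a removal criterion of this kind can be sketched in a few lines of Python. This sketch is schematic, not the authors' implementation: it uses a univariate analogue in which the linear spline degenerates to interpolation between the two x-axis neighbours of a point, and it recomputes the anticipated errors from scratch at every step (efficient local updates are described in [8]).

```python
# Schematic sketch of the greedy thinning loop (Algorithm 1), on a
# univariate analogue of the bivariate setting described in the text.

def thin(points, values, n_removals, anticipated_error):
    """Greedily remove n_removals points; return the nested hierarchy."""
    current = list(points)
    hierarchy = [list(current)]
    for _ in range(n_removals):
        # Step (2a): locate a removable (least significant) point;
        # the extremal points (here: the two endpoints) are kept.
        candidates = current[1:-1]
        y = min(candidates, key=lambda p: anticipated_error(p, current, values))
        # Step (2b): remove it.
        current.remove(y)
        hierarchy.append(list(current))
    return hierarchy

def interp_error(p, current, values):
    """Anticipated error of p: deviation of f(p) from the linear
    interpolant through its left and right neighbours in `current`."""
    i = current.index(p)
    xl, xr = current[i - 1], current[i + 1]
    t = (p - xl) / (xr - xl)
    predicted = (1 - t) * values[xl] + t * values[xr]
    return abs(predicted - values[p])

# Made-up, nearly linear data: the two points that deviate least from
# a straight line are removed first.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
fs = {0.0: 0.0, 1.0: 1.0, 2.0: 2.05, 3.0: 3.0, 4.0: 4.0}
hier = thin(xs, fs, 2, interp_error)
print(hier[-1])   # [0.0, 2.0, 4.0]
```

Keeping the endpoints mirrors the convex hull constraint of Section 2, where extremal points of X are never removed.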


We refer to the adaptive thinning algorithm resulting from this removal criterion in Algorithm 1 as AT. Let us make a few remarks on the idea behind this particular definition of a removable point. When using the removal criterion AT, we assign to each current point y ∈ Y an anticipated error

  e(y) = η(Y \ y; X),

which is the error incurred by the removal of y. Moreover, we interpret the value e(y) as the significance of the point y in the current subset Y. In this sense, a point y* whose removal gives the least anticipated error e(y*) is considered least significant in the current situation, and so it is removed from Y. For the implementation of the thinning algorithm, we use a priority queue of the points, ordered according to their significances. This priority queue has a least significant point at its head, and is updated after each removal of its head. For more details concerning the efficient maintenance of the priority queue of the scattered points, we refer to our paper [8].

In the remainder of this section, we propose different removal criteria, which are adapted to terrain modelling and to image compression. Let us briefly explain the differences between these two applications. In terrain modelling, it is of primary importance to keep the maximal deviation η∞(Y; X) between the linear spline interpolant L(Y; f) and the given point samples f_X as small as possible. This is in contrast to applications in image compression, where the quality measure relies on the mean square error

  η̄₂²(Y; X) = η₂²(Y; X)/N.   (6)
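The priority queue of significances mentioned above can be maintained with a binary heap and lazy invalidation, so that a significance update costs O(log N) and stale entries are simply skipped on pop. The following is an illustrative sketch only; the class and method names are ours, and the actual data structures are those of [8].

```python
import heapq

# Illustrative significance priority queue with lazy invalidation:
# updated entries are re-pushed, and outdated heap entries are
# recognized by a per-point version number and discarded on pop.

class SignificanceQueue:
    def __init__(self):
        self._heap = []        # entries: (significance, version, point)
        self._version = {}     # current version number per point

    def update(self, point, significance):
        v = self._version.get(point, 0) + 1
        self._version[point] = v
        heapq.heappush(self._heap, (significance, v, point))

    def pop_least_significant(self):
        while self._heap:
            sig, v, point = heapq.heappop(self._heap)
            if self._version.get(point) == v:   # entry still current?
                del self._version[point]
                return point, sig
        raise KeyError("queue is empty")

q = SignificanceQueue()
q.update("a", 3.0)
q.update("b", 1.0)
q.update("b", 5.0)   # b's significance grows after a neighbour's removal
print(q.pop_least_significant())   # ('a', 3.0)
```

After a point is removed, only its former neighbours need a call to `update`, which matches the locality of the cell retriangulation discussed in Section 2.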

We develop two classes of customized adaptive thinning criteria. Those for terrain modelling work with the error measure η∞ in (3), whereas the anticipated error measures for image compression rely on the discrete ℓ2-error η₂ in (4). Accordingly, we denote by AT∞ the adaptive thinning algorithm AT which works with the ℓ∞-norm, whereas AT₂ is the algorithm AT for the choice of the ℓ2-norm. The removal criterion for AT∞ cannot be computed locally, but alternative local removal criteria are suggested in the next subsection.

3.1 Anticipated Errors for Terrain Modelling

In this subsection, we propose three locally computable, alternative removal criteria, AT1, AT2 and AT3, which reduce the computational cost of the resulting thinning algorithm in comparison with AT∞. The removal criteria AT1 and AT2 require the retriangulation of Y \ y for the computation of the anticipated error of any y in Y. Due to the Delaunay property, only the retriangulation of the cell C(y) is required. The removal criterion AT3 does not require the retriangulation of the cell C(y) for the computation of the anticipated error of y in Y.


The first alternative, AT1, measures the anticipated error of a point y only in its cell C(y):

  e₁(y) = η∞(Y \ y; X ∩ C(y)).

Definition 2. (Removal Criterion AT1) For Y ⊂ X, a point y* ∈ Y is said to be removable from Y, iff it satisfies

  e₁(y*) = min_{y∈Y} e₁(y).

We remark that the adaptive thinning algorithm AT∞ is not equivalent to AT1. This is confirmed by the following counter-example.

Example 1. (AT∞ ≠ AT1) Consider the eight data points X = {x1, . . . , x8} and the values f_X given as follows.

  i       1      2      3      4      5      6      7      8
  x_i     (1,0)  (2,0)  (3,0)  (4,0)  (5,0)  (6,0)  (7,0)  (1,1)
  f(x_i)  5      −1     0      −3     0      −1.1   2.5    0

In this case, the extremal points of X are x1, x7 and x8, so that only the five points x_i, i = 2, . . . , 6, can be removed. It is easy to see that both AT∞ and AT1 first remove the point x3 = (3, 0), and then x5 = (5, 0). In the third step, however, the algorithm AT1 removes x6 = (6, 0), whereas the algorithm AT∞ removes x4 = (4, 0).

Now let us turn to the adaptive thinning algorithm AT2, a simplification of the previous AT1. In order to further reduce the computational cost, the removal criterion AT2 depends only on the sample values f_Y at the points of the current subset Y ⊂ X. This is in contrast to both AT∞ and AT1, which depend on points of X removed in previous steps. The anticipated error of AT2 is, for any y ∈ Y, given by

  e₂(y) = η∞(Y \ y; Y).

Note that this expression can be rewritten as

  e₂(y) = |L(Y \ y; f)(y) − f(y)|.

Definition 3. (Removal Criterion AT2) For Y ⊂ X, a point y* ∈ Y is said to be removable from Y, iff it satisfies

  e₂(y*) = min_{y∈Y} e₂(y).

We have also explored an adaptive thinning algorithm, AT3, which is faster than AT2. The algorithm AT3 not only ignores the points already


removed, but also computes an anticipated error for each point without needing to temporarily retriangulate its cell. The basic idea behind AT3 is to define this anticipated error e₃(y) as the maximum of the directional anticipated errors at y over a certain sample of directions. For each neighbouring vertex z of y in D(Y), we consider the unique point p (other than z itself) lying at the intersection of the boundary of C(y) and the straight line passing through z and y. Such a point exists, since C(y) is a star-shaped polygon. The point p is either a vertex of the cell's boundary ∂C(y) or a point on one of its sides. In either case, p lies on at least one edge of ∂C(y). Let us denote such an edge by [z2, z3]; see Figure 2. Then the triangle T_z = [z, z2, z3], with vertices in Y \ y, contains y. We call T_z a directional triangle of y. We then let

  e₃^z(y) = |L(T_z; f)(y) − f(y)|

be the directional anticipated error of y in the direction z − y, where L(T_z; f) is the unique linear function which interpolates f at the vertices of T_z. Finally, we let

  e₃(y) = max_{z∈V_y} e₃^z(y)

be the anticipated error of the adaptive thinning algorithm AT3, where V_y is the set of all neighbouring vertices of y in D(Y).

Fig. 2. Directional triangle of y.


Definition 4. (Removal Criterion AT3) For Y ⊂ X, a point y* ∈ Y is said to be removable from Y, iff it satisfies

  e₃(y*) = min_{y∈Y} e₃(y).
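Each directional anticipated error is simply the error of linear interpolation on a single triangle, which can be evaluated via barycentric coordinates without any retriangulation. A small self-contained sketch; the triangle, the point y and the values f below are made-up illustrations, not data from the paper:

```python
# Evaluating a directional anticipated error e3^z(y) = |L(T_z; f)(y) - f(y)|
# via barycentric coordinates.

def barycentric(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle [a, b, c]."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    l1 = ((b[0] - p[0]) * (c[1] - p[1]) - (c[0] - p[0]) * (b[1] - p[1])) / det
    l2 = ((c[0] - p[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (c[1] - p[1])) / det
    return l1, l2, 1.0 - l1 - l2

def directional_error(y, f, tri):
    """|L(T_z; f)(y) - f(y)| for the directional triangle tri = (z, z2, z3)."""
    z, z2, z3 = tri
    l1, l2, l3 = barycentric(y, z, z2, z3)
    return abs(l1 * f[z] + l2 * f[z2] + l3 * f[z3] - f[y])

# y = (1, 1) is the centroid of the triangle below, so the linear
# interpolant takes the mean of the three vertex values there.
f = {(0, 0): 0.0, (3, 0): 3.0, (0, 3): 3.0, (1, 1): 5.0}
print(directional_error((1, 1), f, ((0, 0), (3, 0), (0, 3))))   # 3.0
```

AT3 then takes e₃(y) as the maximum of these values over all neighbours z ∈ V_y.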

A detailed analysis of the complexity of the adaptive thinning algorithms AT∞, AT1, AT2, and AT3 can be found in [8]. It is shown in [8] that the asymptotic computational cost of each of these algorithms is


O(N log N), but with different constants. For further illustration, we refer to the numerical examples in Section 4, where these algorithms are applied to one selected test case from terrain modelling.

3.2 Anticipated Errors for Image Compression

In this subsection, we propose one customized removal criterion for image compression. In this application, the quality of image compression schemes is usually measured in Peak Signal to Noise Ratio (PSNR) (7); see the discussion in Section 5. It is sufficient for the moment to say that PSNR is an equivalent measure to the reciprocal of the mean square error η̄₂²(Y; X) in (6). The aim of adaptive thinning, when applied to image compression, is to keep the mean square error as small as possible. This is accomplished by using adaptive thinning algorithms which generate subsets Y ⊂ X in (1) whose square error η₂²(Y; X) is, among subsets of equal size, small. Therefore, we work with the discrete ℓ2-norm η₂ in (4). Let us now discuss the adaptive thinning algorithm AT₂, whose anticipated error is given by
  e(y) = η₂²(Y \ y; X),  for y ∈ Y.

By the additivity of η₂², by the splitting X = (X \ C(y)) ∪ (X ∩ C(y)), and since η₂²(Y \ y; X \ C(y)) = η₂²(Y; X \ C(y)) for any y ∈ Y, we get

  η₂²(Y \ y; X) = η₂²(Y \ y; X \ C(y)) + η₂²(Y \ y; X ∩ C(y))
               = η₂²(Y; X \ C(y)) + η₂²(Y \ y; X ∩ C(y))
               = η₂²(Y; X) + η₂²(Y \ y; X ∩ C(y)) − η₂²(Y; X ∩ C(y)).

Hence, for any Y ⊂ X, the minimization of η₂²(Y \ y; X) over y is equivalent to minimizing the difference

  e_δ(y) = η₂²(Y \ y; X ∩ C(y)) − η₂²(Y; X ∩ C(y)),  for y ∈ Y,

where C(y) is the cell of y in D(Y).

Definition 5. (Removal Criterion AT₂) For Y ⊂ X, a point y* ∈ Y is said to be removable from Y, iff it satisfies

  e_δ(y*) = min_{y∈Y} e_δ(y).
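The practical consequence of this identity is that only data points inside the cell C(y) contribute to e_δ(y), so the significance of y can be recomputed locally. The following toy bookkeeping check uses made-up per-point squared errors (not an actual spline computation) to illustrate the decomposition:

```python
# Squared errors of each data point in X under the spline on Y, and
# under the spline on Y \ y; only points inside C(y) change (here: p2).
sq_err_Y     = {"p1": 0.0, "p2": 1.0, "p3": 0.25, "p4": 4.0}
sq_err_Y_del = {"p1": 0.0, "p2": 2.5, "p3": 0.25, "p4": 4.0}
cell = {"p2"}   # data points of X lying in C(y)

eta2_sq_Y     = sum(sq_err_Y.values())
eta2_sq_Y_del = sum(sq_err_Y_del.values())
e_delta = sum(sq_err_Y_del[p] for p in cell) - sum(sq_err_Y[p] for p in cell)

# The global error after removal equals the (constant) global error
# before removal plus the locally computable difference e_delta(y).
assert abs(eta2_sq_Y_del - (eta2_sq_Y + e_delta)) < 1e-12
print(e_delta)   # 1.5
```

Since η₂²(Y; X) is the same constant for every candidate y, ranking the points by e_δ is the same as ranking them by the full error η₂²(Y \ y; X).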

We remark that the complexity O(N log N) can also be established for the adaptive thinning algorithm AT₂, by following the lines of the analysis in [8]. This, however, is beyond the scope of this survey.


4 Adaptive Thinning in Terrain Modelling
We have implemented the thinning algorithms AT1, AT2, and AT3, together with one non-adaptive thinning algorithm, called NAT [8]. The algorithm NAT, proposed in [10], ignores the given samples f_X, and favours evenly distributed subsets of points Y ⊂ X. We refrained from implementing the algorithm AT∞, since it requires significantly more computation [8], and, due to our experience in the univariate setting [7], we do not expect AT∞ to be significantly better than AT1.

In this section, we compare the performance of the four algorithms AT1, AT2, AT3, and NAT in terms of both approximation quality and computational cost on one specific example from terrain modelling. The corresponding data set, Hurrungane, contains 23092 data points. Each data point is of the form (x, f(x)), where f(x) denotes the terrain's height value sampled at the location x ∈ R². This data set is displayed in Figure 3 (a) (2D view) and in Figure 3 (b) (3D view).

Fig. 3. Hurrungane: (a) 2D view and (b) 3D view.

For all four thinning algorithms, we have recorded both the required seconds of CPU time (not counting the cost of building the initial data structures, such as the Delaunay triangulation) and the sequence of approximation errors η∞(Y; X) after the removal of n = 1000, 2000, . . . , 22000 points from X. Not surprisingly, we found that NAT is the fastest method but also the worst one in terms of its approximation error. For example, for n = 22000 the algorithm AT1 takes 247.53 seconds of CPU time, whereas NAT takes only 11.37 seconds. On the other hand, in this particular example we obtain η∞(Y; X) = 278.61 for NAT, but only η∞(Y; X) = 30.09 when using AT1. The two corresponding triangulations D(Y) output by NAT and AT1 are


displayed in Figure 4 (a) and (b) (2D view), and in Figure 5 (a) and (b) (3D view).

Fig. 4. Thinned Hurrungane with 1092 points, 2D view. (a) NAT and (b) AT1.

Fig. 5. Thinned Hurrungane with 1092 points, 3D view. (a) NAT and (b) AT1.

In Figure 6 (a) and in Figure 7 (a), the approximation error η∞(Y; X) is plotted as a function of the number of removed points for the four different thinning algorithms. In Figure 6 (b) and in Figure 7 (b), the corresponding seconds of CPU time are displayed. The graphs show that, with respect to approximation error, the three adaptive thinning algorithms AT1, AT2, and AT3 are much better than NAT. Among the three adaptive thinning algorithms, AT1 is the best, followed by AT3, and AT2 is the worst. Note that, by definition, AT3 can only be inferior to AT2 after one removal. In the numerical example, AT3 has continued to

Fig. 6. Hurrungane: comparison between NAT (dash-dot line), AT1 (solid), AT2 (dashed), and AT3 (dotted), (a) approximation error and (b) seconds of CPU time.
Fig. 7. Hurrungane: comparison between AT1 (solid line), AT2 (dashed), and AT3 (dotted), (a) approximation error and (b) seconds of CPU time.

be inferior for about 50 removal steps, after which its approximation error becomes smaller than that of AT2. As to the computational costs of the adaptive thinning algorithms, AT3 is the fastest, and AT1 the slowest; see Figure 6 (b) and Figure 7 (b). Our conclusion is that AT1 is our recommended thinning algorithm, but if computational time is a critical issue, AT3 is a good alternative.

5 Adaptive Thinning in Image Compression
This section reviews our recent research on the application of adaptive thinning to image compression. The information reduction and efficient coding of digital images are essential for fast transmission across an information channel,


such as the internet. For a comprehensive introduction to image compression and coding, we recommend the textbook [22]. Many of the well-established techniques in image compression, including JPEG2000 [22], are based on wavelets and related techniques; see [3] for a recent survey of wavelet-based image coding. When working with wavelets, digital images are represented by using rectangular grids of wavelet coefficients. This is in contrast to adaptive thinning, which works with scattered data. Adaptive thinning constructs a hierarchy of sets of most significant pixels, where for each set the image is approximated by the linear spline over the Delaunay triangulation of the pixels in the set. The idea of approximating an image by first identifying significant pixels is not new (see e.g. [9]). In this section we go further and so obtain a competitive compression scheme, based on adaptive thinning.

Any compression scheme is mainly concerned with the following sequence of tasks.

(1) data reduction;
(2) encoding of the reduced data at the sender;
(3) transmission of the encoded data from the sender to the receiver;
(4) decoding of the transmitted data at the receiver;
(5) data reconstruction.

Adaptive thinning is mainly used in the above step (1), the data reduction. In the following discussion, we first explain how adaptive thinning works in image data reduction, before we show how the reconstruction step (5) is accomplished. For a discussion of scattered data coding, required in steps (2) and (4), we refer to our paper [5].

Before we explain any details of our compression scheme, we wish to make a few preliminary remarks concerning our image model. Many images contain discontinuities. It might therefore seem like a good idea to approximate such images by discontinuous functions over triangulations, instead of keeping to the continuous piecewise linear functions of Algorithm 1. One possibility would be to use piecewise constant functions over triangulations. This, however, reduces the approximation quality of the model, because piecewise constants have only O(h) approximation order, rather than the O(h²) of piecewise linear functions. Another possibility would be to use piecewise linear functions that are not necessarily globally continuous. This, however, requires assigning several height values to each vertex, so in this approach there is significantly more data attached to each triangulation, which leads to higher coding costs. Unless one works with a very sophisticated coding scheme, which makes fundamental use of the statistical correlation of the data around the image's discontinuities, either of these two discontinuous models leads, in our experience, to inferior compression ratios. Therefore, we prefer to work with


a continuous image model, such as the one using continuous piecewise linear functions in Algorithm 1.

5.1 Adaptive Thinning and Image Reduction and Reconstruction

A digital image is a rectangular grid of pixels, each bearing a colour value or greyscale luminance. For the sake of simplicity, we restrict the following discussion to greyscale images. The image can be viewed as a matrix F = (f(i, j))_{i,j}, whose entries f(i, j) are the luminance values at the pixels. The pixel positions (i, j) ∈ X are pairs of non-negative integers i and j, whose range is often of the form [0..2^p − 1] × [0..2^q − 1], for some positive integers p, q, where we let [0..n] = [0, n] ∩ Z for any non-negative integer n. In this case, the size of the pixel set X is 2^p × 2^q. Likewise, the entries f(i, j) of F are non-negative integers whose range is typically [0..2^r − 1], for some positive integer r. In the test images below, we work with 256 greyscale luminances in [0..255], so that r = 8.

A well-known quality measure for the evaluation of image compression schemes is the Peak Signal to Noise Ratio (PSNR),

  PSNR = 10 · log₁₀( (2^r × 2^r) / η̄₂²(Y; X) ),   (7)

which is an equivalent measure to the reciprocal of the mean square error η̄₂²(Y; X) in (6). The PSNR is expressed in dB (decibels). Good image compressions typically have PSNR values of 30 dB or more [22] for the reconstructed image. The popularity of PSNR as a measure of image distortion derives partly from the ease with which it may be calculated, and partly from the tractability of linear optimization problems involving squared error metrics. More appropriate measures of visual distortion are discussed in [22].

Adaptive thinning, when applied to digital images, recursively removes pixels using the thinning Algorithm 1, in combination with the adaptive removal criterion AT₂ of Section 3. In other words, the pixel positions form the initial point set X on which the adaptive thinning algorithm is applied. At each step of the algorithm, a removable pixel is removed from the image. The output of adaptive thinning is a set Y ⊂ X of pixels together with their corresponding luminances F_Y. However, due to the regular distribution of the pixel positions, the Delaunay triangulations of X, and of its subsets Y ⊂ X, may be non-unique. To avoid this ambiguity, we apply a small perturbation to the pixels in X and apply the thinning algorithm to the perturbed pixels.

As a postprocess to the thinning, we further minimize the mean square error by least squares approximation [1]. More precisely, we compute from the output set Y and the values F the unique best ℓ2-approximation L*(Y; F) ∈ S_Y satisfying

  Σ_{(i,j)∈X} |L*(Y; F)(i, j) − f(i, j)|² = min_{s∈S_Y} Σ_{(i,j)∈X} |s(i, j) − f(i, j)|².   (8)


Such a unique solution exists since Y ⊂ X. The compressed information to be transferred consists of the output set Y and the corresponding optimized luminances {f*(i, j) = L*(Y; F)(i, j) : (i, j) ∈ Y}. Following the lines of our papers [5, 6], we apply a uniform quantization to these optimized luminances. This yields the quantized symbols {Q(f*(i, j)) : (i, j) ∈ Y}, corresponding to the quantized luminance values {f̃(i, j) : (i, j) ∈ Y}, where f̃(i, j) ≈ f*(i, j) for (i, j) ∈ Y. The elements of the set {(i, j, Q(f̃(i, j))) : (i, j) ∈ Y} are coded by using the customized scattered data coding scheme of [5].

At the receiver, the reconstruction of the image F (step (5)) is then accomplished as follows. The unique Delaunay triangulation D(Y) of the pixel positions Y is computed at the decoder, using the same perturbation rules applied previously at the encoder. This defines, in combination with the decoded luminance values F̃_Y = {f̃(i, j) : (i, j) ∈ Y}, the unique linear spline L(Y; F̃_Y) ∈ S_Y satisfying L(Y; F̃_Y)(i, j) = f̃(i, j) for every (i, j) ∈ Y. Finally, the reconstruction of the image is given by the image matrix F̃ = (L(Y; F̃_Y)(i, j))_{(i,j)∈X}. We denote the novel image compression scheme presented in this subsection by AT₂*.

5.2 Comparison between AT₂* and SPIHT

In this subsection we compare the performance of our compression scheme AT₂* with that of the wavelet-based compression scheme Set Partitioning Into Hierarchical Trees (SPIHT) [18] on two test images. We work with greyscale luminances f(i, j) in [0..255], i.e., r = 8. In the test examples below, we use the range [0..31] for the quantized symbols Q(f*(i, j)), (i, j) ∈ Y. In each test case, the compression rate, measured in bits per pixel (bpp), is fixed. The quality of the resulting reconstructions is then evaluated by comparing the differences in PSNR and in visual quality.
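For reference, the PSNR in (7) is straightforward to compute from a reconstruction. A minimal sketch for r = 8 with made-up luminance vectors (note that (7) uses 2^r × 2^r in the numerator rather than the more common (2^r − 1)²):

```python
import math

def psnr(original, reconstructed, r=8):
    """PSNR as in (7), in dB, with the mean square error as in (6)."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    return 10.0 * math.log10((2 ** r) * (2 ** r) / mse)

orig_lum = [100, 120, 140, 160]   # made-up original luminances
rec_lum = [101, 119, 142, 158]    # made-up reconstruction
print(round(psnr(orig_lum, rec_lum), 2))
```

Halving the mean square error raises the PSNR by about 3 dB, which is why small PSNR differences between schemes are still meaningful.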
We remark that the good compression rate of SPIHT is, at low bit rates, comparable with that of the powerful method EBCOT [21], which is the basis algorithm of the standard JPEG2000 [22].

A Geometric Test Image

We first consider one artificial test image, Reflex, of small size 128 × 128 (p = q = 7). This geometric test image is displayed in Figure 8 (a). The purpose of this test case is to demonstrate the good performance of our compression scheme AT₂* on texture-free images with sharp edges. In this test case, we fix the compression rate to 0.251 bpp. The resulting reconstructions corresponding to AT₂* and to SPIHT are displayed in Figure 8 (b),(d). Our compression scheme AT₂* yields the PSNR value 41.73 dB, whereas SPIHT provides the inferior PSNR value 30.42 dB. Hence, with respect to this quality measure, our compression method AT₂* is much better.


L. Demaret, N. Dyn, M.S. Floater, A. Iske


Fig. 8. Reflex. (a) Original image of size 128 × 128. Compression at 0.251 bpp and reconstruction by (b) SPIHT with PSNR 30.42 dB, (d) AT₂* with PSNR 41.73 dB. (c) The Delaunay triangulation of the 384 most significant pixels output by AT₂.

Moreover, the reconstruction by AT₂* also provides a visual quality superior to that of the image reconstructed by SPIHT, see Figures 8 (b),(d). Indeed, our compression scheme AT₂* manages to localize the sharp edges of the test image Reflex. Moreover, it avoids undesired oscillations around the edges, unlike SPIHT. This is due to the well-adapted distribution of the 384 most significant pixels, output by the adaptive thinning algorithm AT₂, whose Delaunay triangulation is displayed in Figure 8 (c). We have recorded the results of this example, along with those of the following test case, in Table 1.

A Popular Test Case of a Real Image

We also applied our compression scheme AT₂* to one popular test case of a real image, called Fruits, which is also used as a standard test case in the textbook [22]. The original image Fruits, of size 512 × 512, is displayed in Figure 9.


It is remarkable that our compression scheme AT₂* is, at low bitrates, quite competitive with SPIHT. This is confirmed by the following comparison between SPIHT and AT₂* at the bitrate 0.185 bpp. The different PSNR values are shown in the second row of Table 1. Note that the PSNR obtained by AT₂* is only slightly smaller than that obtained by SPIHT.

Now let us turn to the visual quality of the reconstructions. The reconstruction by SPIHT is shown in Figure 11 (a), whereas Figure 11 (b) shows the reconstruction by AT₂*. The set Y of most significant pixel positions obtained by AT₂, along with its Delaunay triangulation D(Y), is displayed in Figure 10. Note that through the distribution of the most significant pixels, the main features of the image, such as sharp edges and silhouettes, are captured very well. Moreover, our compression scheme AT₂* manages to denoise the test image Fruits quite successfully, in contrast to SPIHT.

On balance, in terms of the visual quality of the two reconstructions of Fruits, we feel that our compression scheme AT₂* is at least as good as SPIHT.
                        Peak Signal to Noise Ratio (PSNR)
Test Case    bpp        SPIHT           AT₂*
Reflex       0.251      30.42 dB        41.73 dB
Fruits       0.185      32.33 dB        31.85 dB

Table 1. Comparison between the compression schemes SPIHT and AT₂*.
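The fixed rates in Table 1 simply relate the size of the coded output to the number of pixels. As a small illustrative check (the byte count 514 below is hypothetical, chosen to match the Reflex rate):

```python
def bits_per_pixel(coded_bytes, width, height):
    """Compression rate in bpp: total number of coded bits divided by
    the number of pixels of the image."""
    return coded_bytes * 8.0 / (width * height)
```

For the 128 × 128 image Reflex, a coded size of about 514 bytes corresponds to roughly 0.251 bpp, the rate fixed in the first test case.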

Acknowledgment
The assistance of Konstantinos Panagiotou, Bertolt Meier, Eugene Rudoy, Georgi Todorov and Rumen Traykov with the implementation of the compression schemes and the preparation of the numerical examples is gratefully acknowledged. The authors were partly supported by the European Union within the project MINGLE (Multiresolution in Geometric Modelling), HPRN-CT-1999-00117.


Fig. 9. Fruits. Original image of size 512 × 512.


Fig. 10. Fruits. (a) 4044 most significant pixels output by AT₂ and (b) their Delaunay triangulation.


Fig. 11. Fruits. Compression at 0.185 bpp and reconstruction by (a) SPIHT with PSNR 32.33 dB and (b) AT₂* with PSNR 31.85 dB.


References
1. Å. Björck (1996) Numerical Methods for Least Squares Problems. SIAM, Philadelphia.
2. T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein (2001) Introduction to Algorithms, 2nd edition. MIT Press, Cambridge, Massachusetts.
3. G.M. Davis and A. Nosratinia (1999) Wavelet-based image coding: an overview. Appl. Comp. Control, Signals & Circuits, B.N. Datta (ed.), Birkhäuser, 205–269.
4. L. De Floriani, P. Magillo, and E. Puppo (1997) Building and traversing a surface at variable resolution. Proceedings of IEEE Visualization '97, 103–110.
5. L. Demaret and A. Iske (2003) Scattered data coding in digital image compression. Curve and Surface Fitting: Saint-Malo 2002, A. Cohen, J.-L. Merrien, and L.L. Schumaker (eds.), Nashboro Press, Brentwood, 107–117.
6. L. Demaret and A. Iske (2003) Advances in digital image compression by adaptive thinning. To appear in the MCFA Annals, Volume III.
7. N. Dyn, M.S. Floater, and A. Iske (2001) Univariate adaptive thinning. Mathematical Methods for Curves and Surfaces: Oslo 2000, T. Lyche and L.L. Schumaker (eds.), Vanderbilt University Press, Nashville, 123–134.
8. N. Dyn, M.S. Floater, and A. Iske (2002) Adaptive thinning for bivariate scattered data. J. Comput. Appl. Math. 145(2), 505–517.
9. Y. Eldar, M. Lindenbaum, M. Porat, and Y.Y. Zeevi (1997) The farthest point strategy for progressive image sampling. IEEE Transactions on Image Processing 6(9), 1305–1315.
10. M.S. Floater and A. Iske (1998) Thinning algorithms for scattered data interpolation. BIT 38, 705–720.
11. R.J. Fowler and J.J. Little (1979) Automatic extraction of irregular network digital terrain models. Computer Graphics 13, 199–207.
12. C. Gotsman, S. Gumhold, and L. Kobbelt (2002) Simplification and compression of 3D meshes. Tutorials on Multiresolution in Geometric Modelling, A. Iske, E. Quak, and M.S. Floater (eds.), Springer-Verlag, Heidelberg, 319–361.
13. P.S. Heckbert and M. Garland (1997) Survey of surface simplification algorithms. Technical Report, Computer Science Dept., Carnegie Mellon University.
14. D.S. Hochbaum (1997) Approximation Algorithms for NP-hard Problems. PWS Publishing Company, Boston.
15. A. Iske (2003) Progressive scattered data filtering. J. Comput. Appl. Math. 158(2), 297–316.
16. J. Lee (1991) Comparison of existing methods for building triangular irregular network models of terrain from grid digital elevation models. Int. J. of Geographical Information Systems 5(3), 267–285.
17. F.P. Preparata and M.I. Shamos (1988) Computational Geometry. Springer, New York.
18. A. Said and W.A. Pearlman (1996) A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circuits and Systems for Video Technology 6, 243–250.
19. L.L. Schumaker (1987) Triangulation methods. Topics in Multivariate Approximation, C.K. Chui, L.L. Schumaker, and F. Utreras (eds.), Academic Press, New York, 219–232.
20. D.B. Shmoys (1995) Computing near-optimal solutions to combinatorial optimization problems. DIMACS, Ser. Discrete Math. Theor. Comput. Sci. 20, 355–397.