IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 1, JANUARY 2006

Robust Photometric Invariant Features From the Color Tensor
Joost van de Weijer, Theo Gevers, Member, IEEE, and Arnold W. M. Smeulders
Abstract—Luminance-based features are widely used as low-level input for computer vision applications, even when color data is available. The extension of feature detection to the color domain prevents information loss due to isoluminance and allows us to exploit the photometric information. To fully exploit the extra information in the color data, the vector nature of color data has to be taken into account and a sound framework is needed to combine feature and photometric invariance theory. In this paper, we focus on the structure tensor, or color tensor, which adequately handles the vector nature of color images. Further, we combine the features based on the color tensor with photometric invariant derivatives to arrive at photometric invariant features. We circumvent the drawback of unstable photometric invariants by deriving an uncertainty measure to accompany the photometric invariant derivatives. The uncertainty is incorporated in the color tensor, hereby allowing the computation of robust photometric invariant features. The combination of the photometric invariance theory and tensor-based features allows for detection of a variety of features such as photometric invariant edges, corners, optical flow, and curvature. The proposed features are tested for noise characteristics and robustness to photometric changes. Experiments show that the proposed features are robust to scene incidental events and that the proposed uncertainty measure improves the applicability of full invariants.

Index Terms—Color image processing, edge and corner detection, optical flow, photometric invariance.

I. INTRODUCTION

DIFFERENTIAL-based features, such as edges, corners, and salient points, are used abundantly in a variety of applications such as matching, object recognition, and object tracking [12], [21], [23]. We distinguish between feature detection and feature extraction. Feature detection aims at finding the position of features in the images, whereas for feature extraction, a position in the images is described by a set of features, which characterize the local neighborhood. Although the majority of images is recorded in color format nowadays, computer vision research is still mostly restricted to luminance-based feature detection and extraction. In this paper, we focus on color information to detect and extract features.

In the basic approach to color images, the gradient is computed from the derivatives of the separate channels. The derivatives of a single edge can point in opposing directions for the separate channels. DiZenzo [5] argues that a simple summation
Manuscript received July 12, 2004; revised November 11, 2004. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Luca Lucchese. The authors are with the Intelligent Sensory Information Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands (e-mail: joostw@science.uva.nl; gevers@science.uva.nl; smeulders@science.uva.nl). Digital Object Identifier 10.1109/TIP.2005.860343

of the derivatives ignores the correlation between the channels. This also happens by converting the color image to luminance values. In the case of isoluminance of adjacent color regions, it will lead to cancellation of the edge. As a solution to the opposing vector problem, DiZenzo proposes the color tensor for color gradient computation.

The same problem that occurs for color image derivatives also exists for oriented patterns (e.g., fingerprint images). Due to the high-frequency nature of oriented patterns, opposing derivative vectors occur in a small neighborhood. The same solution which was found for color image features is used to compute features for oriented patterns. Kass and Witkin [15] derived orientation estimation from the structure tensor. Adaptations of the tensor lead to a variety of features, such as circle detectors and curvature estimation [3], [4], [11], [26]. Lee and Medioni [18] apply the structure tensor within the context of perceptual grouping.

A step forward in the understanding of color images was made by the dichromatic reflection model by Shafer [22]. The model describes how photometric changes, such as shadows and specularities, affect the RGB values. On the basis of this model, others provided algorithms invariant to various photometric events such as shadows and specularities [8], [16]. The extension to differential photometric invariance was investigated by Geusebroek et al. [7]. Recently, van de Weijer et al. [25] introduced the photometric quasiinvariants, which are a set of photometric invariant derivatives with better noise and stability characteristics compared to existing photometric invariants. Combining photometric quasiinvariants with derivative-based feature detectors leads to features which can identify various physical causes, e.g., shadow corners and object corners. A drawback of the quasiinvariants is that they can only be applied for feature detection. In the case of feature extraction, where the values of multiple frames are compared, full invariance is necessary.

We propose a framework to combine the differential-based features with the photometric invariance theory. The framework is designed according to the following criteria.
1) Features must target the photometric variation needed for their application, so that accidental physical events, such as shadows and specularities, do not influence the results.
2) Features must be robust against noise and should not contain instabilities. Especially for the photometric invariant features, instabilities must be dissolved.
3) Physically meaningful features should be independent of the accidental choice of the color coordinate frame.
Next to satisfying the criteria, the framework should also be generally applicable to existing features. To meet these criteria, we start from the observation that tensors are well suited to combine first order derivatives for color images. The first

Fig. 1. (a) Subspace of measured light in the Hilbert space of possible spectra. (b) RGB coordinate system and an alternative orthonormal color coordinate system which spans the same subspace. (Color version available online at http://ieeexplore.ieee.org.)

contribution is a novel framework that combines tensor-based features with photometric derivatives for photometric invariant feature detection and extraction. The second contribution is that, for feature extraction applications, for which quasiinvariants are unsuited, we propose a new uncertainty measure which robustifies the feature extraction. The third contribution is that the proposed features are proven to be invariant with respect to color coordinate transformations.

The paper is organized as follows. In Section II, the prerequisites for color feature detection from tensors are discussed. In Section III, an uncertainty measure is proposed. Based on this uncertainty measure, robust photometric feature extraction is derived. In Section IV, an overview of tensor-based features is given. Section V provides several experiments, and Section VI contains the concluding remarks.

II. TENSOR-BASED FEATURES FOR COLOR IMAGES

The extension of differential-based operations to color images can be done in various ways. The main challenge to color feature detection is how to transform the three-dimensional color differential structure to a representation of the presence of a feature. In this section, we ensure that the transformation agrees with the criteria mentioned in the introduction. In Section II-A, the invariance with respect to color coordinate transformation is discussed. In Section II-B, the transformation is written in tensor mathematics, which links it with a set of tensor-based features, thereby ensuring generality. In Section II-C, the photometric invariance of the transformation is discussed.

A. Invariance to Color Coordinate Transformations

From a physical point of view, only features make sense which are invariant to rotation of the coordinate axes. This starting point has been applied in the design of image geometry features, resulting in, for example, gradient and Laplace operators [6]. For the design of physically meaningful color features, not only the invariance with respect to spatial coordinate changes is desired, but also the invariance with respect to

color coordinate system rotations. Features based on different measurement devices which measure the same spectral space should yield the same results. For color images, values are represented in the RGB coordinate system. In fact, the infinite-dimensional Hilbert space is sampled with three probes, which results in the red, green, and blue channels (see Fig. 1). For operations on the color coordinate system to be physically meaningful, they should be independent of orthonormal transformation of the three axes in Hilbert space. An example of an orthonormal color coordinate system is the opponent color space [see Fig. 1(b)]. The opponent color space spans the same subspace as the subspace defined by the RGB axes, and, hence, both subspaces should yield the same features.

B. Color Tensor

Simply summing the differential structure of various color channels may result in cancellation even when evident structure exists in the image [5]. Rather than adding the direction information of the channels, it is more appropriate to sum their orientation information. Such a method is provided by tensor mathematics, for which vectors in opposite directions reinforce one another. Tensors describe the local orientation rather than the direction. More precisely, the tensor of a vector and its 180° rotated counterpart vector are equal. It is for that reason that we use the tensor as a basis for color feature detection.

Given an image $f$, the structure tensor is given by [4]

$$G = \begin{pmatrix} \overline{f_x^2} & \overline{f_x f_y} \\ \overline{f_x f_y} & \overline{f_y^2} \end{pmatrix} \tag{1}$$

where the subscripts indicate spatial derivatives and the bar indicates convolution with a Gaussian filter. Note that there are two scales involved in the computation of the structure tensor: first, the scale at which the derivatives are computed and, second, the tensor scale, which is the scale at which the spatial derivatives are averaged. The structure tensor describes the local differential structure of images and is suited to find

features such as edges and corners [3], [5], [11]. For a multichannel image $\mathbf{f} = (f^1, f^2, \ldots, f^n)^T$, the structure tensor is given by

$$G = \begin{pmatrix} \overline{\mathbf{f}_x \cdot \mathbf{f}_x} & \overline{\mathbf{f}_x \cdot \mathbf{f}_y} \\ \overline{\mathbf{f}_x \cdot \mathbf{f}_y} & \overline{\mathbf{f}_y \cdot \mathbf{f}_y} \end{pmatrix}. \tag{2}$$

In the case that $\mathbf{f} = (R, G, B)^T$, (2) is the color tensor. For derivatives which are accompanied by weighting functions, $w_x$ and $w_y$, which appoint a weight to every measurement in $\mathbf{f}_x$ and $\mathbf{f}_y$, the structure tensor is defined by

$$G = \begin{pmatrix} \dfrac{\overline{w_x^2\,\mathbf{f}_x \cdot \mathbf{f}_x}}{\overline{w_x^2}} & \dfrac{\overline{w_x w_y\,\mathbf{f}_x \cdot \mathbf{f}_y}}{\overline{w_x w_y}} \\[2mm] \dfrac{\overline{w_x w_y\,\mathbf{f}_x \cdot \mathbf{f}_y}}{\overline{w_x w_y}} & \dfrac{\overline{w_y^2\,\mathbf{f}_y \cdot \mathbf{f}_y}}{\overline{w_y^2}} \end{pmatrix}. \tag{3}$$
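For concreteness, a minimal NumPy/SciPy sketch of the color tensor (2) is given below. The function name, the default scales, and the per-channel filtering arrangement are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_tensor(f, sigma_d=1.0, sigma_t=3.0):
    """Color structure tensor (2) of an RGB image f with shape (H, W, 3).

    sigma_d: scale of the Gaussian derivative filters.
    sigma_t: tensor scale at which the products are averaged (the bar).
    Both defaults are illustrative, not the paper's settings.
    """
    # Gaussian derivatives per channel; order=(0, 1) differentiates
    # along the x (column) axis, order=(1, 0) along the y (row) axis.
    fx = np.stack([gaussian_filter(f[..., c], sigma_d, order=(0, 1))
                   for c in range(f.shape[-1])], axis=-1)
    fy = np.stack([gaussian_filter(f[..., c], sigma_d, order=(1, 0))
                   for c in range(f.shape[-1])], axis=-1)
    # Inner products over the color axis, then averaging at tensor scale.
    gxx = gaussian_filter((fx * fx).sum(axis=-1), sigma_t)
    gxy = gaussian_filter((fx * fy).sum(axis=-1), sigma_t)
    gyy = gaussian_filter((fy * fy).sum(axis=-1), sigma_t)
    return gxx, gxy, gyy
```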

In Section II-A, we discussed that physically meaningful features should be invariant with respect to rotation of the color coordinate axes. The elements of the tensor are known to be invariant under rotation and translation of the spatial axes. To prove the invariance with respect to color coordinate rotations, we use the fact that $R^T R = I$, where $R$ is a rotation operator, so that

$$\overline{(R\mathbf{f}_x) \cdot (R\mathbf{f}_y)} = \overline{\mathbf{f}_x^T R^T R\,\mathbf{f}_y} = \overline{\mathbf{f}_x \cdot \mathbf{f}_y} \tag{4}$$

where we have rewritten the inner product according to $\mathbf{f}_x \cdot \mathbf{f}_y = \mathbf{f}_x^T \mathbf{f}_y$.

C. Photometric Invariant Derivatives

A good motivation for using color images is that photometric information can be exploited to understand the physical nature of features. For example, pixels can be classified as being of the same color but having different intensities, which is possibly caused by a shadow or shading change in the image. Further, pixel differences can also indicate specular reflection. For many applications, it is important to distinguish the scene incidental information from material edges. When color images are converted to luminance, this photometric information is lost [8].

The incorporation of photometric invariance in (2) can be obtained by using invariant derivatives to compute the structure tensor. In [25], we derived photometric quasiinvariant derivatives and full invariant derivatives. Quasiinvariants differ from full invariants by the fact that they are variant with respect to a physical parameter. Full invariants can be computed from quasiinvariants by normalization with a signal dependent scalar. The quasiinvariants have the advantage that they do not exhibit the instabilities common to full photometric invariants. However, the applicability of the quasiinvariants is restricted to photometric invariant feature detection. For feature extraction, full photometric invariance is desired.

The dichromatic model divides the reflection into the interface (specular) and body (diffuse) reflection components for optically inhomogeneous materials [22]. We assume white illumination, i.e., a smooth spectrum of nearly equal energy at all wavelengths, and neutral interface reflection. For the validity of the photometric assumptions, see [7] and [22]. The RGB vector

can be seen as a weighted summation of two vectors

$$\mathbf{f} = e(m^b \mathbf{c}^b + m^i \mathbf{c}^i) \tag{5}$$

in which $\mathbf{c}^b$ is the color of the body reflectance, $\mathbf{c}^i$ the color of the interface reflectance (i.e., specularities or highlights), $m^b$ and $m^i$ are scalars representing the corresponding magnitudes of reflection, and $e$ is the intensity of the light source. For matte surfaces, there is no interface reflection and the model further simplifies to

$$\mathbf{f} = e m^b \mathbf{c}^b. \tag{6}$$
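The matte case (6) implies that shading and shadows only scale the RGB vector without changing its direction, which is what the shadow-shading invariants below exploit. A tiny numerical check, with made-up reflectance values and an assumed white light source:

```python
import numpy as np

# Dichromatic model (5): f = e * (m_b * c_b + m_i * c_i).
# The reflectance values below are made-up examples.
c_b = np.array([0.9, 0.4, 0.2])
c_b = c_b / np.linalg.norm(c_b)          # body reflectance color
c_i = np.ones(3) / np.sqrt(3.0)          # assumed white light source

def dichromatic(e, m_b, m_i=0.0):
    """RGB value predicted by (5) for given photometric parameters."""
    return e * (m_b * c_b + m_i * c_i)

f_lit    = dichromatic(e=1.0, m_b=0.8)   # matte pixel, model (6)
f_shadow = dichromatic(e=0.3, m_b=0.8)   # same surface under less light

# Shading only scales f, so its direction (f-hat) is unchanged:
assert np.allclose(f_lit / np.linalg.norm(f_lit),
                   f_shadow / np.linalg.norm(f_shadow))
```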

The photometric derivative structure of the image can be computed by taking the spatial derivative of (5)

$$\mathbf{f}_x = e m^b \mathbf{c}^b_x + (e_x m^b + e m^b_x)\mathbf{c}^b + (e_x m^i + e m^i_x)\mathbf{c}^i. \tag{7}$$

The spatial derivative is a summation of three weighted vectors, successively caused by body reflectance, shading-shadow, and specular changes. From (6), it follows that for matte surfaces the shadow-shading direction is parallel to the RGB vector, $\hat{\mathbf{f}}$. The specular direction follows from the assumption that the color of the light source is known.

For matte surfaces (i.e., $m^i = 0$), the projection of the spatial derivative on the shadow-shading axis yields the shadow-shading variant, $\mathbf{S}_x = (\mathbf{f}_x \cdot \hat{\mathbf{f}})\hat{\mathbf{f}}$, containing all energy which could be explained by changes due to shadow and shading. Subtraction of the shadow-shading variant from the total derivative results in the shadow-shading quasiinvariant


$$\mathbf{S}^c_x = \mathbf{f}_x - \mathbf{S}_x \tag{8}$$

which does not contain derivative energy caused by shadows and shading. The hat, $\hat{\cdot}$, is used to denote unit vectors. The full shadow-shading invariant results from normalizing the quasiinvariant $\mathbf{S}^c_x$ by the intensity magnitude $|\mathbf{f}|$

$$\mathbf{s}_x = \frac{\mathbf{S}^c_x}{|\mathbf{f}|} \tag{9}$$

which is invariant for $m^b$.

For the construction of the shadow-shading-specular quasiinvariant, we introduce the hue direction, which is perpendicular to the light source direction and the shadow-shading direction

$$\hat{\mathbf{b}} = \frac{\hat{\mathbf{f}} \times \hat{\mathbf{c}}^i}{\left|\hat{\mathbf{f}} \times \hat{\mathbf{c}}^i\right|}. \tag{10}$$

Projection of the derivative on the hue direction results in the shadow-shading-specular quasiinvariant

$$\mathbf{H}^c_x = (\mathbf{f}_x \cdot \hat{\mathbf{b}})\hat{\mathbf{b}}. \tag{11}$$
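A sketch of the two quasiinvariant projections (8) and (11) follows; the white light source direction $\hat{\mathbf{c}}^i = (1,1,1)^T/\sqrt{3}$ and the helper names are assumptions for illustration.

```python
import numpy as np

def unit(v, eps=1e-10):
    """Unit vectors along the last axis (the hat notation)."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def shadow_shading_quasi(f, fx):
    """Quasiinvariant (8): remove from f_x the component along f-hat."""
    fhat = unit(f)
    s_var = (fx * fhat).sum(axis=-1, keepdims=True) * fhat  # variant S_x
    return fx - s_var                                        # S^c_x

def hue_quasi(f, fx, c_i=None):
    """Quasiinvariant (11): keep only the component along the hue
    direction b-hat of (10), perpendicular to f-hat and c_i-hat."""
    if c_i is None:
        c_i = np.ones(3) / np.sqrt(3.0)  # assumed white light source
    bhat = unit(np.cross(unit(f), c_i))
    # Near the grey axis the cross product vanishes, which is exactly
    # the instability addressed in Section III.
    return (fx * bhat).sum(axis=-1, keepdims=True) * bhat    # H^c_x
```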

The second part of this equation is zero if we assume that shadow-shading changes do not occur within a specularity, since then either the shadow-shading or the specular contribution in (7) vanishes locally. Subtraction of the quasiinvariant from the spatial derivative results in the shadow-shading-specular variant

$$\mathbf{H}_x = \mathbf{f}_x - \mathbf{H}^c_x. \tag{12}$$

The full shadow-shading-specular invariant is computed by dividing the quasiinvariant by the saturation. The saturation is equal to the norm of the color vector after the projection on the plane perpendicular to the light source direction (which is equal to subtraction of the part in the light source direction)

TABLE I APPLICABILITY OF THE DIFFERENT INVARIANTS FOR FEATURE DETECTION AND EXTRACTION

$$\mathbf{h}_x = \frac{\mathbf{H}^c_x}{s}, \qquad s = \left|\mathbf{f} - (\mathbf{f} \cdot \hat{\mathbf{c}}^i)\hat{\mathbf{c}}^i\right|. \tag{13}$$

The expression is invariant for both $m^b$ and $m^i$.

By projecting the local spatial derivative on three photometric axes in the RGB cube, we have derived the photometric quasiinvariants. These can be combined with the structure tensor (2) for photometric quasiinvariant feature detection. As discussed in Section II-A, we would like features to be independent of the accidental choice of the color coordinate frame. As a consequence, a rotation of the color coordinates should result in the same rotation of the quasiinvariant derivatives. For example, for the shadow-shading quasiinvariant $\mathbf{S}^c_x$, this can be proven by

$$R\mathbf{f}_x - \left(R\mathbf{f}_x \cdot \widehat{R\mathbf{f}}\right)\widehat{R\mathbf{f}} = R\left(\mathbf{f}_x - (\mathbf{f}_x \cdot \hat{\mathbf{f}})\hat{\mathbf{f}}\right) = R\,\mathbf{S}^c_x. \tag{14}$$

Similar proofs hold for the other photometric variants and quasiinvariants. The invariance with respect to color coordinate transformations of the shadow-shading full invariant follows from the fact that $|R\mathbf{f}| = |\mathbf{f}|$. For the shadow-shading-specular full invariant, the invariance is proven by the fact that the inner product between two vectors remains the same under rotations, and, therefore, $(R\mathbf{f}_x) \cdot (R\hat{\mathbf{b}}) = \mathbf{f}_x \cdot \hat{\mathbf{b}}$. Since the elements of the structure tensor are also invariant for color coordinate transformations [see (4)], the combination of the quasiinvariants and the structure tensor into a quasiinvariant structure tensor is also invariant for color coordinate transformations.

III. ROBUST FULL PHOTOMETRIC INVARIANCE

In Section II-C, the quasi- and full-invariant derivatives are described. The quasiinvariants outperform the full invariants on discriminative power and are more robust to noise [25]. However, the quasiinvariants are not suited for applications which require feature extraction. These applications compare the photometric invariant values between various images and need full photometric invariance (see Table I). A disadvantage of full photometric invariants is that they are unstable in certain areas of the RGB cube. For example, the invariants for shadow-shading and specularities are unstable near the gray axis. These instabilities greatly reduce the applicability of the invariant derivatives, for

which a small deviation of the original pixel color value may result in a large deviation of the invariant derivative. In this section, we propose a measure which describes the uncertainty of the photometric invariant derivatives, thereby allowing for robust full photometric invariant feature detection.

We first derive the uncertainty for the shadow-shading full invariant from its relation to the quasiinvariant. We assume additive uncorrelated uniform Gaussian noise. Due to the high-pass nature of differentiation, we assume the noise of the zero order signal $\mathbf{f}$ to be negligible compared to the noise $\boldsymbol{\sigma}$ on the first order signal $\mathbf{f}_x$. In Section II-C, the quasiinvariant has been derived by a linear projection of the derivative $\mathbf{f}_x$ on the plane perpendicular to the shadow-shading direction. Therefore, uniform noise in $\mathbf{f}_x$ will result in uniform noise in $\mathbf{S}^c_x$. The noise in the full invariant can be written as

$$\tilde{\mathbf{s}}_x = \frac{\mathbf{S}^c_x + \boldsymbol{\sigma}}{|\mathbf{f}|} = \mathbf{s}_x + \frac{\boldsymbol{\sigma}}{|\mathbf{f}|}. \tag{15}$$

The uncertainty of the measurement of $\mathbf{s}_x$ depends on the magnitude of $|\mathbf{f}|$. For small $|\mathbf{f}|$, the error increases proportionally. Therefore, we propose to weight the full shadow-shading invariant with the function $w = |\mathbf{f}|$ to robustify the color tensor based on the chromatic invariant. For shadow-shading invariance, examples of the equations used to compute the color tensor are given in Table I.

For the shadow-shading-specular invariant, the weighting function should be proportional to the saturation, since

$$\tilde{\mathbf{h}}_x = \frac{\mathbf{H}^c_x + \boldsymbol{\sigma}}{s} = \mathbf{h}_x + \frac{\boldsymbol{\sigma}}{s}. \tag{16}$$

This leads us to propose the saturation $s$ as the weighting function of the hue derivative (see Fig. 2). In places where there is an edge, the saturation drops, and with the saturation, the certainty of the hue measurement. The quasiinvariant [see Fig. 2(d)], which is equal to the weighted hue derivative, is more stable than the full invariant derivative due to the incorporation of the certainty in the measurements. With the derived weighting functions, we can compute the robust photometric invariant tensor (3).

The uncertainties of the full invariant by way of error propagation have also been investigated by Gevers and Stokman [9]. Our assumption of uniform noise in the RGB channels, together with the choice of invariants based on orthogonal color space transformations, leads to a simplification of the uncertainty measure. It also connects with the intuitive notion that the uncertainty of the hue depends on the saturation, and the uncertainty of the chromaticity (shadow-shading invariant) on the intensity.
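In code, the robustification amounts to carrying the weight $w$ alongside the invariant derivatives when the tensor is assembled. A sketch follows, where the normalization by the smoothed squared weight is an assumption modeled on (3) and the white light source is again assumed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saturation(f, c_i=None):
    """Norm of f after removing the component along the light source
    direction; a white light source is assumed."""
    if c_i is None:
        c_i = np.ones(3) / np.sqrt(3.0)
    proj = (f * c_i).sum(axis=-1, keepdims=True) * c_i
    return np.linalg.norm(f - proj, axis=-1)

def weighted_tensor(dx, dy, w, sigma_t=3.0, eps=1e-10):
    """Weighted color tensor in the spirit of (3). dx, dy are full
    invariant derivatives of shape (H, W, 3); w is the per-pixel
    certainty, e.g., w = |f| for the shadow-shading invariant or
    w = saturation(f) for the hue derivative."""
    w2 = w * w
    norm = gaussian_filter(w2, sigma_t) + eps
    gxx = gaussian_filter(w2 * (dx * dx).sum(axis=-1), sigma_t) / norm
    gxy = gaussian_filter(w2 * (dx * dy).sum(axis=-1), sigma_t) / norm
    gyy = gaussian_filter(w2 * (dy * dy).sum(axis=-1), sigma_t) / norm
    return gxx, gxy, gyy
```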

Fig. 2. (a) Test image. (b) Hue derivative. (c) Saturation. (d) Quasiinvariant. (Color version available online at http://ieeexplore.ieee.org.)

IV. COLOR TENSOR-BASED FEATURES

In this section, we show the generality of the proposed method by summarizing the features which can be derived from the color tensor. In Sections II-C and III, we described how to compute invariant derivatives. Dependent on the task at hand, we proposed to use either quasiinvariants for detection or robust full invariants for extraction. The features in this section will be derived for the RGB derivatives $\mathbf{f}_x$ and $\mathbf{f}_y$. By replacing the inner product $\mathbf{f}_x \cdot \mathbf{f}_y$ by one of the quasiinvariant alternatives

$$\mathbf{S}^c_x \cdot \mathbf{S}^c_y \quad \text{or} \quad \mathbf{H}^c_x \cdot \mathbf{H}^c_y \tag{17}$$

(or by their robustly weighted full invariant counterparts of Section III), the photometric invariant features are attained. In Section IV-A, we describe features derived from the eigenvalues of the tensor. In Section IV-B, we describe features which are derived from an adapted version of the structure tensor, and, in Section IV-C, we describe color optical flow.

A. Eigenvalue-Based Features

Eigenvalue analysis of the tensor leads to two eigenvalues, which are defined by

$$\lambda_{1,2} = \frac{1}{2}\left(\overline{\mathbf{f}_x \cdot \mathbf{f}_x} + \overline{\mathbf{f}_y \cdot \mathbf{f}_y} \pm \sqrt{\left(\overline{\mathbf{f}_x \cdot \mathbf{f}_x} - \overline{\mathbf{f}_y \cdot \mathbf{f}_y}\right)^2 + 4\,\overline{\mathbf{f}_x \cdot \mathbf{f}_y}^{\,2}}\right). \tag{18}$$

The direction of $\lambda_1$ indicates the prominent local orientation

$$\theta = \frac{1}{2}\arctan\left(\frac{2\,\overline{\mathbf{f}_x \cdot \mathbf{f}_y}}{\overline{\mathbf{f}_x \cdot \mathbf{f}_x} - \overline{\mathbf{f}_y \cdot \mathbf{f}_y}}\right). \tag{19}$$

The $\lambda$'s can be combined to give the following local descriptors:
- $\lambda_1 + \lambda_2$ describes the total local derivative energy;
- $\lambda_1$ is the derivative energy in the most prominent direction;
- $\lambda_1 - \lambda_2$ describes the line energy (see [20]); the derivative energy in the prominent orientation is corrected for the energy contributed by the noise $\lambda_2$;
- $\lambda_2$ describes the amount of derivative energy perpendicular to the prominent local orientation, which is used to select features for tracking [23].
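Given the three smoothed tensor entries, the eigenvalues (18) and orientation (19) reduce to a few array operations; using arctan2 is a minor numerical-robustness deviation from the printed formula:

```python
import numpy as np

def tensor_eigen(gxx, gxy, gyy):
    """Eigenvalues (18) and prominent orientation (19) of the tensor."""
    trace = gxx + gyy
    disc = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    lam1 = 0.5 * (trace + disc)   # energy along the prominent orientation
    lam2 = 0.5 * (trace - disc)   # energy perpendicular to it
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    return lam1, lam2, theta
```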

An often applied feature detector is the Harris corner detector [13]. The color Harris operator can be written as a function of the eigenvalues of the structure tensor

$$H = \overline{\mathbf{f}_x \cdot \mathbf{f}_x}\;\overline{\mathbf{f}_y \cdot \mathbf{f}_y} - \overline{\mathbf{f}_x \cdot \mathbf{f}_y}^{\,2} - k\left(\overline{\mathbf{f}_x \cdot \mathbf{f}_x} + \overline{\mathbf{f}_y \cdot \mathbf{f}_y}\right)^2 = \lambda_1 \lambda_2 - k(\lambda_1 + \lambda_2)^2. \tag{20}$$
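The Harris energy (20) then follows directly from the tensor entries; k = 0.04 is the customary empirical constant, not a value prescribed by the paper:

```python
def color_harris(gxx, gxy, gyy, k=0.04):
    """Color Harris energy (20) computed from the tensor entries."""
    return gxx * gyy - gxy ** 2 - k * (gxx + gyy) ** 2
```

Corner candidates are local maxima of this energy; the photometric invariant versions follow by building gxx, gxy, gyy from quasiinvariant or robust full invariant derivatives instead of the plain RGB derivatives.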

B. Adaptations of the Color Tensor

The same equations as DiZenzo's equations for orientation estimation were found by Kass and Witkin [15]. They studied orientation estimation for oriented patterns (e.g., fingerprint images). Oriented patterns are defined as patterns with a dominant orientation everywhere. For oriented patterns, other mathematics are needed than for regular object images. The local structure of object images is described by a step edge, whereas for oriented patterns, the local structure is described as a set of lines (roof edges). Lines generate opposing vectors on a small scale. Hence, for geometric operations on oriented patterns, methods are needed for which opposing vectors enforce one another. This is the same problem as encountered for color images, where the opposing vector problem does not only occur for oriented patterns, but also for step edges, for which the opposing vectors occur in the different channels. Hence, similar equations were found in both fields.

Next to orientation estimation, a number of other estimators were proposed by oriented pattern research [3], [11], [26]. These operations are based on adaptations of the structure tensor and can also be applied to the color tensor. The structure tensor of (2) can also be seen as a local projection of the derivative energy on two perpendicular axes, namely $\mathbf{u}$ and $\mathbf{v}$

$$G(\mathbf{u}, \mathbf{v}) = \begin{pmatrix} \overline{(u_1\mathbf{f}_x + u_2\mathbf{f}_y) \cdot (u_1\mathbf{f}_x + u_2\mathbf{f}_y)} & \overline{(u_1\mathbf{f}_x + u_2\mathbf{f}_y) \cdot (v_1\mathbf{f}_x + v_2\mathbf{f}_y)} \\ \overline{(u_1\mathbf{f}_x + u_2\mathbf{f}_y) \cdot (v_1\mathbf{f}_x + v_2\mathbf{f}_y)} & \overline{(v_1\mathbf{f}_x + v_2\mathbf{f}_y) \cdot (v_1\mathbf{f}_x + v_2\mathbf{f}_y)} \end{pmatrix} \tag{21}$$

in which $\mathbf{u} = (u_1, u_2)^T$ and $\mathbf{v} = (v_1, v_2)^T$ are perpendicular unit vector fields; for (2), they coincide with the $x$- and $y$-axes. From the Lie group of transformations, several other choices of perpendicular projections can be derived [3], [11]. They include feature extraction for circle, spiral, and star-like structures. The star and circle detector is given as an example. It is based on $\mathbf{u} = \frac{1}{\sqrt{x^2+y^2}}(x, y)^T$, which coincides with the derivative pattern of circular patterns, and $\mathbf{v} = \frac{1}{\sqrt{x^2+y^2}}(-y, x)^T$

, which denotes the perpendicular vector field, which coincides with the derivative pattern of starlike patterns. These vectors can be used to compute the adapted structure tensor with (21). Only the elements on the diagonal have nonzero entries and are equal to (22), shown at the bottom of the page. Here, $\Lambda_1$ describes the amount of derivative energy contributing to circular structures and $\Lambda_2$ the derivative energy which describes a starlike structure. Similar to the proof given in (4), the elements of (22) can be proven to be invariant under transformations of the RGB space.

Curvature is another feature which can be derived from an adaptation of the structure tensor [26]. The fit between the local differential structure and a parabolic model function can be written as a function of the curvature. Finding the optimum of this function yields an estimation of the local curvature. For vector data, the equation for the curvature is given by (23), shown at the bottom of the page, in which $\mathbf{f}_v$ and $\mathbf{f}_w$ are the derivatives in gauge coordinates.

C. Color Optical Flow

Optical flow can also be computed from the structure tensor. This was originally proposed by Simoncelli [24] and has been extended to color in [2] and [10]. The vector of a multichannel point over time stays constant [14], [19]

$$\frac{d\mathbf{f}}{dt} = 0. \tag{24}$$

Differentiating yields the following set of equations:

$$\mathbf{f}_x u + \mathbf{f}_y v + \mathbf{f}_t = \mathbf{0} \tag{25}$$

with $(u, v)^T$ the optical flow. To solve the singularity problem and to robustify the optical flow computation, we follow Simoncelli [24] and assume a constant flow within a Gaussian window. Solving (25) leads to the following optical flow equation:

$$\begin{pmatrix} u \\ v \end{pmatrix} = -G^{-1}\mathbf{b} \tag{26}$$

with

$$G = \begin{pmatrix} \overline{\mathbf{f}_x \cdot \mathbf{f}_x} & \overline{\mathbf{f}_x \cdot \mathbf{f}_y} \\ \overline{\mathbf{f}_x \cdot \mathbf{f}_y} & \overline{\mathbf{f}_y \cdot \mathbf{f}_y} \end{pmatrix} \tag{27}$$

and

$$\mathbf{b} = \begin{pmatrix} \overline{\mathbf{f}_x \cdot \mathbf{f}_t} \\ \overline{\mathbf{f}_y \cdot \mathbf{f}_t} \end{pmatrix}. \tag{28}$$
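Per pixel, (26)-(28) reduce to a 2x2 linear solve. The sketch below inverts G in closed form and assumes the spatio-temporal products have been Gaussian-averaged like the spatial ones; the sign convention is an assumption consistent with (25):

```python
def tensor_flow(gxx, gxy, gyy, gxt, gyt, eps=1e-10):
    """Closed-form per-pixel solve of (26)-(28): (u, v) = -G^{-1} b.

    gxt, gyt: Gaussian-averaged products f_x . f_t and f_y . f_t.
    eps regularizes near-singular tensors (low-gradient areas).
    """
    det = gxx * gyy - gxy ** 2 + eps
    u = -(gyy * gxt - gxy * gyt) / det
    v = -(gxx * gyt - gxy * gxt) / det
    return u, v
```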

The assumption of color optical flow based on RGB is that the RGB pixel values remain constant over time [see (24)]. A change of brightness introduced by a shadow, or by a light source with fluctuating brightness such as the sun, results in falsely detected optical flow. This problem can be overcome by assuming constant chromaticity over time. For photometric invariant optical flow, full invariance is necessary, since the optical flow estimation is based upon comparing the (extracted) edge response of multiple frames. Consequently, photometric invariant optical flow can be attained by replacing the inner products of (27) and (28) by one of the full invariant alternatives (29).

V. EXPERIMENTS

The experiments test the features on the required criteria of our framework: 1) photometric invariance and 2) robustness. The third criterion, i.e., invariance with respect to color coordinate transformations, has already been proven theoretically. In this section, we aim to demonstrate invariance by experiment and illustrate the generality of the experiments by the variety of examples. For all experiments, the derivatives are computed with a Gaussian derivative filter and averaged at the color tensor scale, except when mentioned otherwise. The experiments are performed using a Sony 3CCD color camera XC-003P, a Matrox Corona frame-grabber, and two Osram 18 Watt "Lumilux deLuxe daylight" fluorescent light sources.

A. Photometric Invariant Harris Point Detection

Robustness with respect to photometric changes, stability of the invariants, and robustness to noise are tested. Further, the ability of the invariants to detect and extract features is examined (see also Table I). The experiment is performed with the photometric invariant Harris corner detector (20) and is executed on the Soil47 multi object set [17], which comprises 23 images [see Fig. 3(a)].

First, the feature detection accuracy of the invariants is tested. For each image and invariant, the 20 most prominent Harris points are extracted. Next, Gaussian uncorrelated noise is added to the data, and the Harris point detection is computed ten times per image. The percentage of points which do not correspond to the Harris points in the noiseless case is given in Table II. The Harris point detector based on the quasiinvariant outperforms

$$\Lambda_1 = \overline{(u_1\mathbf{f}_x + u_2\mathbf{f}_y) \cdot (u_1\mathbf{f}_x + u_2\mathbf{f}_y)}, \qquad \Lambda_2 = \overline{(v_1\mathbf{f}_x + v_2\mathbf{f}_y) \cdot (v_1\mathbf{f}_x + v_2\mathbf{f}_y)} \tag{22}$$

(23)

Fig. 3. (a) An example from the Soil-47 image set. (b) Shadow-shading distortion with the shadow-shading quasiinvariant Harris points superimposed. (c) Specular distortion and the shadow-shading-specular Harris points superimposed. (Color version available online at http://ieeexplore.ieee.org.)

TABLE II PERCENTAGE OF FALSELY DETECTED POINTS AND PERCENTAGE OF WRONGLY CLASSIFIED POINTS. CLASSIFICATION IS BASED ON THE EXTRACTION OF INVARIANT INFORMATION. UNCORRELATED GAUSSIAN NOISE IS ADDED WITH STANDARD DEVIATION 5 AND 20

the alternatives. The instability within the full invariant can be partially repaired by the robust full invariant; however, for detection purposes, the quasiinvariants remain the best choice.

Next, the feature extraction for the invariants is tested. Again, the 20 most prominent Harris points are detected in the noise-free image. For these points, the photometric invariant derivative energy is extracted by $\sqrt{\lambda_1 + \lambda_2 - 2\varepsilon}$, where $\varepsilon$ is an estimation of the noise which contributes to the energy in both $\lambda_1$ and $\lambda_2$. To imitate photometric variations of images, we apply the following photometric distortion to the images [compare with (5)]:

$$\mathbf{f}' = \alpha\,\mathbf{f} + \beta\,\hat{\mathbf{c}}^i + \boldsymbol{\eta} \tag{30}$$

where $\alpha$ is a smooth function resembling variation similar to shading and shadow effects, $\beta$ is a smooth function which imitates specular reflections, and $\boldsymbol{\eta}$ is Gaussian noise. To test the shadow-shading extraction, $\alpha$ is chosen to vary between 0 and 1, and $\beta$ is 0. To test the shadow-shading-specular invariants, $\alpha$ was chosen constant at 0.7 and $\beta$ varied between 0 and 50. After the photometric distortion, the derivative energy is extracted at the same twenty points. The extraction is considered correct if the deviation of the derivative energy between the distorted and the noise-free case is less than 10%. The results are given in Table II. Quasiinvariants, which are not suited for extraction, have a hundred percent error. The full invariants have better results, but with worsening signal-to-noise ratio their

performance drops drastically. In accordance with the theory in Section III, the robust full invariants successfully improve the performance.

B. Color Optical Flow

Robustness of the full photometric invariance features is tested on photometric invariant optical flow estimation. The optical flow is estimated on a synthetic image sequence with constant optical flow. We use the robust full photometric structure tensor for the estimation of optical flow and compare it with "classical" photometric optical flow as proposed by [10]. Derivatives are computed with a Gaussian derivative filter and averaged at the color tensor scale.

The shadow-shading photometric optical flow is tested on an image with decreasing intensity [see Fig. 4(a)] which is shifted one pixel per frame. Uncorrelated Gaussian noise is added to the sequence. In Fig. 4(b) and (c), the mean and the standard deviation of the optical flow along the axis of Fig. 4(a) are depicted. Similarly, the shadow-shading-specular invariant optical flow is tested on a sequence with increasing achromaticity along the axes [see Fig. 4(d)-(f)]. The results show that the robust invariant methods (red lines) outperform the standard photometric optical flow (blue lines). The gained robustness becomes apparent for the measurements around the unstable regions, which are the black area for the shadow-shading invariant, and the achromatic, grey area for the shadow-shading-specular invariant optical flow.

As an example of a real-world scene, multiple frames are taken from static objects while the light source position is changed. This results in a violation of the brightness constraint by changing shading and moving shadows. Since both the camera and the objects did not move, the ground truth optical flow is zero. The violation of the brightness constraint disturbs the optical flow estimation based on the RGB values [Fig. 5(b)]. The shadow-shading invariant optical flow estimation is much less disturbed by the violation of the brightness constraint [Fig. 5(c)]. However, the flow estimation is still unstable around some of the edges. The robust shadow-shading invariant optical flow has the best results and is only unstable in low-gradient areas [Fig. 5(d)].

C. Color Canny Edge Detection

We illustrate the use of eigenvalue-based features by adapting the Canny edge detection algorithm to allow for vectorial input data. The algorithm consists of the following steps.

Fig. 4. (a), (d) Frames from the test sequences with constant optical flow of one pixel per frame. (b), (c) Mean and relative standard deviation of the optical flow based on (black line) RGB, (dark grey line) shadow-shading invariant, and (light grey line) robust shadow-shading invariant. (e), (f) Mean and relative standard deviation of the optical flow based on (black line) RGB, (dark grey line) shadow-shading-specular invariant, and (light grey line) robust shadow-shading-specular invariant. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 5. (a) Frame 1 of the object scene with the filter size superimposed on it. (b) RGB gradient optical flow. (c) Shadow-shading invariant optical flow. (d) Robust shadow-shading invariant optical flow. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 6. (a) Input image, with Canny edge detection based on, successively: (b) the luminance derivative; (c) the RGB derivatives; (d) the shadow-shading quasiinvariant; and (e) the shadow-shading-specular quasiinvariant. (Color version available online at http://ieeexplore.ieee.org.)

1) Compute the spatial derivatives and combine them, if desired, into a quasiinvariant [(8) or (11)].
2) Compute the maximum eigenvalue (18) and its orientation (19).
3) Apply nonmaximum suppression on $\lambda_1$ in the prominent direction, as in the sketch below.
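A direct, unoptimized rendering of the suppression step, comparing each pixel's $\lambda_1$ against its two neighbors along the prominent orientation; rounding the orientation to the nearest 8-neighbor offset is an implementation choice, not prescribed by the paper:

```python
import numpy as np

def nonmax_suppress(lam1, theta):
    """Step 3: keep lam1 only where it is maximal along theta."""
    h, w = lam1.shape
    edges = np.zeros_like(lam1)
    ox = np.rint(np.cos(theta)).astype(int)  # neighbor offset in x
    oy = np.rint(np.sin(theta)).astype(int)  # neighbor offset in y
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy, dx = oy[y, x], ox[y, x]
            if (lam1[y, x] >= lam1[y + dy, x + dx] and
                    lam1[y, x] >= lam1[y - dy, x - dx]):
                edges[y, x] = lam1[y, x]
    return edges
```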

In Fig. 6, the results of color Canny edge detection for several photometric quasiinvariants are shown. The results show that the luminance-based Canny [Fig. 6(b)] misses several edges which are correctly found by the RGB-based method [Fig. 6(c)]. Also, the removal of spurious edges by photometric invariance is demonstrated. In Fig. 6(d), the edge detection is robust to shadow and shading changes and only detects material and specular edges. In Fig. 6(e), only the material edges are depicted.

D. Circular Object Detection

The use of photometric invariant orientation and curvature estimation is demonstrated on a circle detection example. Unlike in the previous experiments, these images have been recorded

Fig. 7. (a) Detected circles based on luminance. (b) Detected circles based on shadow-shading-specular quasiinvariant. (c) Detected circles based on shadow-shading-specular quasiinvariant. (Color version available online at http://ieeexplore.ieee.org.)

Fig. 8. (a) Input image. (b) The circularity coefficient C. (c) The detected circles. (Color version available online at http://ieeexplore.ieee.org.)

by the Nikon Coolpix 950, a commercial digital camera of average quality. The images have size 267 × 200 pixels with JPEG compression. The digitization was done in 8 bits per color. Circular object recognition is complicated due to shadow, shading, and specular events which influence the feature extraction. We apply the following algorithm for circle detection.
1) Compute the spatial derivatives and combine them, if desired, into a quasiinvariant [(8) or (11)].
2) Compute the local orientation (19) and curvature (23).
3) Compute the hough space [1] $H(x_c, y_c, r)$, where $r$ is the radius of the circle and $x_c$ and $y_c$ indicate the center of the circle. The computation of the orientation and curvature reduces the number of votes per pixel to one. Namely, for a pixel at position

$\mathbf{x}$, the radius follows from the curvature as $r = 1/\kappa$, and the center lies at distance $r$ from $\mathbf{x}$ along the local orientation (31). Every pixel votes with its derivative energy.
4) Compute the maxima in the hough space. These maxima indicate the circle centers and the radii of the circles.

In Fig. 7, the results of the circle detection are given. The luminance-based circle detection is corrupted by the photometric variation in the image. Nine circles had to be detected before the five balls were found. For the shadow-shading-specular quasiinvariant-based method, the five most prominent peaks in the hough space coincide with reasonable estimates of the radii

and center points of the circles. Note that, although the recordings do not fulfill the assumptions on which the dichromatic model is based, such as a white light source, no saturated pixels, and a linear camera response, the invariants still improve performance by partially suppressing scene incidental events, such as shadows and specularities. In Fig. 7, an outdoor example with a shadow partially covering the objects is given.

E. Local Color Symmetry Detector

The applicability of the features derived from an adaptation of the structure tensor (Section IV-B) is illustrated here for a symmetry detector. We apply the circle detector to an image containing Lego blocks (Fig. 8). Because we know that the color within the blocks remains the same, the circle detection is done on the shadow-shading-specular variant (12). The shadow-shading-specular variant contains all the derivative energy except for the energy which can only be caused by a material edge. With the shadow-shading-specular variant, the circular energy $\Lambda_1$ and the starlike energy $\Lambda_2$ are computed according to (22). Dividing the circular energy by the total energy yields a descriptor of local circularity [see Fig. 8(b)]

$$C = \frac{\Lambda_1}{\Lambda_1 + \Lambda_2}. \tag{32}$$

The superimposed maxima of $C$ [Fig. 8(c)] give a good estimation of the circle centers.
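For a single local window, the circular and starlike energies of (22) and the circularity descriptor (32) can be sketched as follows; the radial and rotational field definitions restate the $\mathbf{u}$, $\mathbf{v}$ choice of Section IV-B, and the single-window evaluation is a simplifying assumption:

```python
import numpy as np

def circularity(fx, fy, eps=1e-10):
    """Circularity (32) for one square window of derivatives fx, fy
    of shape (N, N, 3), using the projections of (21)-(22)."""
    n = fx.shape[0]
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    r = np.sqrt(x ** 2 + y ** 2) + eps
    u1, u2 = x / r, y / r            # radial field: circular patterns
    v1, v2 = -y / r, x / r           # rotational field: starlike patterns
    f_u = u1[..., None] * fx + u2[..., None] * fy
    f_v = v1[..., None] * fx + v2[..., None] * fy
    e_circ = (f_u ** 2).sum()        # Lambda_1: circular energy
    e_star = (f_v ** 2).sum()        # Lambda_2: starlike energy
    return e_circ / (e_circ + e_star + eps)
```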

VI. CONCLUSION

In this paper, we proposed a framework to combine tensor-based features and photometric invariance theory. The tensor basis of these features ensures that opposing vectors in different

channels do not cancel out, but instead reinforce each other. To overcome the instability caused by the transformation to a full photometric invariant, we propose an uncertainty measure to accompany the full invariant. This uncertainty measure is incorporated in the color tensor to generate robust photometric invariant features. Experiments show that: 1) the color-based features outperform their luminance counterparts; 2) the quasiinvariants give stable detection; and 3) the robust invariants give better extraction results.

REFERENCES
[1] D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognit., vol. 12, no. 2, pp. 111–122, 1981.
[2] J. Barron and R. Klette, "Quantitative color optical flow," in Proc. Int. Conf. Pattern Recognition, Vancouver, BC, Canada, 2002, pp. 251–255.
[3] J. Bigun, "Pattern recognition in images by symmetry and coordinate transformations," Comput. Vis. Image Understand., vol. 68, no. 3, pp. 290–307, 1997.
[4] J. Bigun, G. Granlund, and J. Wiklund, "Multidimensional orientation estimation with applications to texture analysis and optical flow," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 8, pp. 775–790, Aug. 1991.
[5] S. Di Zenzo, "A note on the gradient of a multi-image," Comput. Vis., Graph., Image Process., vol. 33, no. 1, pp. 116–125, 1986.
[6] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever, "Scale and the differential structure of images," Image Vis. Comput., vol. 10, no. 6, pp. 376–388, Jul./Aug. 1992.
[7] J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts, "Color invariance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 12, pp. 1338–1350, Dec. 2001.
[8] T. Gevers and A. Smeulders, "Color based object recognition," Pattern Recognit., vol. 32, pp. 453–464, Mar. 1999.
[9] T. Gevers and H. M. G. Stokman, "Classification of color edges in video into shadow-geometry, highlight, or material transitions," IEEE Trans. Multimedia, vol. 5, no. 2, pp. 237–243, Jun. 2003.
[10] P. Golland and A. M. Bruckstein, "Motion from color," Comput. Vis. Image Understand., vol. 68, no. 3, pp. 346–362, Dec. 1997.
[11] O. Hansen and J. Bigun, "Local symmetry modeling in multi-dimensional images," Pattern Recognit. Lett., vol. 13, pp. 253–262, 1992.
[12] R. M. Haralick and L. G. Shapiro, Computer and Robot Vision, vol. II. Reading, MA: Addison-Wesley, 1992.
[13] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vision Conf., vol. 15, 1988, pp. 147–151.
[14] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17, pp. 185–203, 1981.
[15] M. Kass and A. Witkin, "Analyzing oriented patterns," Comput. Vis., Graph., Image Process., vol. 37, pp. 362–385, 1987.
[16] G. J. Klinker and S. A. Shafer, "A physical approach to color image understanding," Int. J. Comput. Vis., vol. 4, pp. 7–38, 1990.
[17] D. Koubaroulis, J. Matas, and J. Kittler, "Evaluating color-based object recognition algorithms using the SOIL-47 database," presented at the Asian Conf. Computer Vision, 2002.
[18] M. S. Lee and G. Medioni, "Grouping into regions, curves, and junctions," Comput. Vis. Image Understand., vol. 76, no. 1, pp. 54–69, 1999.
[19] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. DARPA Image Understanding Workshop, 1981, pp. 121–130.
[20] G. Sapiro and D. Ringach, "Anisotropic diffusion of multivalued images with applications to color filtering," IEEE Trans. Image Process., vol. 5, no. 10, pp. 1582–1586, Oct. 1996.
[21] C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of interest point detectors," Int. J. Comput. Vis., vol. 37, no. 2, pp. 151–172, 2000.
[22] S. A. Shafer, "Using color to separate reflection components," Color Res. Appl., vol. 10, no. 4, pp. 210–218, 1985.
[23] J. Shi and C. Tomasi, "Good features to track," presented at the IEEE Conf. Computer Vision and Pattern Recognition, 1994.
[24] E. P. Simoncelli, E. H. Adelson, and D. J. Heeger, "Probability distributions of optical flow," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1991, pp. 310–315.
[25] J. van de Weijer, T. Gevers, and J. M. Geusebroek, "Color edge detection by photometric quasiinvariants," in Proc. Int. Conf. Computer Vision, Nice, France, 2003, pp. 1520–1526.
[26] J. van de Weijer, L. J. van Vliet, P. W. Verbeek, and M. van Ginkel, "Curvature estimation in oriented patterns using curvilinear models applied to gradient vector fields," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 9, pp. 1035–1042, Sep. 2001.

Joost van de Weijer received the M.Sc. degree in applied physics from the Delft University of Technology, Delft, The Netherlands, in 1998. He is currently pursuing the Ph.D. degree in the ISIS Group at the University of Amsterdam, Amsterdam, The Netherlands. His current research interests include filter theory, color image filtering, saliency detection, and photometric invariance.

Theo Gevers (M’01) is an Associate Professor of computer science at the University of Amsterdam, Amsterdam, The Netherlands. His main research interests are in the fundamentals of image database system design, image retrieval by content, theoretical foundations of geometric and photometric invariants, and color image processing.

Arnold W. M. Smeulders is a Full Professor of multimedia information analysis with a special interest in content-based image retrieval systems as well as systems for the analysis of video. He heads the ISIS Group at the University of Amsterdam, Amsterdam, The Netherlands, which concentrates on theory, practice, and implementation of image retrieval and computer vision. The group has an extensive record in co-operations with Dutch industry in the area of multimedia and video analysis. He received a Fulbright Grant from Yale University, New Haven, CT, in 1987, and a visiting professorship at the City University Hong Kong, Hong Kong, in 1996, and ETL Tsukuba, Japan, in 1998. His current research interest is in image retrieval, especially perceptual similarity, material recognition, and the connection between pictures and language. Dr. Smeulders was elected fellow of the International Association of Pattern Recognition. He was an Associate Editor of the IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE.

