Subscription Review Article

Review on CBIR Image Based on Color, Texture & Shape Features of Biomedical Image Applications

by Kommu Naveen and RMS. Parvathi
Volume: 11 | Issue: 01 | Received: November 7, 2023 | Accepted: December 21, 2023 | Published: April 3, 2024
DOI: 10.37591

[This article belongs to Journal of Image Processing & Pattern Recognition Progress (joipprp)]

Keywords

Image enhancement, Biomedical, nuclear medicine, image pixel, Gamma-ray imaging

Abstract

This study examines how different image enhancement methods affect the sensitivity of contrast-based textural measures and morphological traits derived from high-resolution satellite data (three-band SPOT-5). The built-up/non-built-up detection framework is the backbone of every biomedical application. Using supervised learning while working with a low-resolution reference layer reduces uncertainty and indirectly improves the quality of the reference layer. The image's histogram is recalculated based on contrast in order to determine textural and morphological features in light of the revised label assignments for each class. In this case study, we compare the effectiveness of several image enhancement procedures, such as linear and decorrelation stretching, by measuring their outputs against actual floor plans. Experiments show that the contrast of grayscale images is largely determined by the mix of spectral bands. Adjusting the contrast of an image (either before or after combining and merging the bands) greatly aids the extraction of useful features from an otherwise low-contrast image, whereas it yields only marginal benefit for an already well-contrasted one.
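
As a minimal illustration of the enhancement-versus-texture comparison described above (not the authors' implementation), the sketch below applies a linear percentile stretch and a PCA-based decorrelation stretch to a multiband image and reports a contrast-based GLCM texture measure for one band of each result. It uses NumPy and scikit-image; the file name spot5_subset.tif, the percentile limits, and the GLCM parameters are illustrative assumptions rather than values taken from the study.

```python
# Illustrative sketch: compare how two enhancement methods shift a
# contrast-based GLCM texture measure. Assumes a 3-band raster readable
# by scikit-image; all parameters here are placeholders, not the study's.
import numpy as np
from skimage import io, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops


def linear_stretch(band, low_pct=2, high_pct=98):
    """Map the chosen intensity percentiles of one band onto 0..255."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scaled = np.clip((band.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return img_as_ubyte(scaled)


def decorrelation_stretch(bands):
    """PCA-based decorrelation stretch for a (rows, cols, nbands) array."""
    flat = bands.reshape(-1, bands.shape[-1]).astype(float)
    mean = flat.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat - mean, rowvar=False))
    # Equalize variances in the eigenbasis, rotate back, rescale to 0..1.
    whiten = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-12)) @ eigvecs.T
    out = (flat - mean) @ whiten
    out = (out - out.min(axis=0)) / (np.ptp(out, axis=0) + 1e-12)
    return img_as_ubyte(out.reshape(bands.shape))


def glcm_contrast(band_u8, distance=1, angle=0.0):
    """Gray-level co-occurrence matrix contrast of an 8-bit band."""
    glcm = graycomatrix(band_u8, distances=[distance], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]


if __name__ == "__main__":
    img = io.imread("spot5_subset.tif")        # hypothetical 3-band subset
    raw_band = img_as_ubyte(img[..., 0] / img[..., 0].max())
    print("raw band contrast:      ", glcm_contrast(raw_band))
    print("linear-stretch contrast:", glcm_contrast(linear_stretch(img[..., 0])))
    print("decorr-stretch contrast:", glcm_contrast(decorrelation_stretch(img)[..., 0]))
```

Comparing the three printed values gives a quick, band-level indication of how much each stretch shifts the contrast statistic on which the textural features depend.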
