Open Access
Kommu Naveen, RMS. Parvathi
Ph.D. Scholar, Department of Electronics & Communication Engineering, Anna University, Chennai, Tamil Nadu, India; Professor & HOD, Department of Computer Science Engineering, Sri Rama Krishna Institute of Technology, Perur Chettipalayam, Pachapalayam, Coimbatore, Tamil Nadu, India
Abstract
This study examines how different image enhancement methods affect the sensitivity of contrast-based textural measures and morphological traits derived from high-resolution satellite data (three-band SPOT-5). The built-up/non-built-up detection framework is the backbone of every biomedical application. Supervised learning with a low-resolution reference layer reduces uncertainty and indirectly improves the reference layer's quality. The image's histogram is recalculated based on contrast in order to determine textural and morphological features in light of the revised label assignments for each class. In this case study, we compare the effectiveness of several image enhancement procedures, such as linear and decorrelation stretching, by measuring their outputs against actual floor plans. Experiments show that the contrast of a grayscale image is largely determined by the mix of spectral bands used to produce it. Adjusting the contrast of an image (either before or after combining and merging the bands) greatly aids the extraction of useful characteristics from an otherwise low-contrast image, whereas it yields only marginal benefits for a well-contrasted one.
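The two enhancement procedures the abstract names, linear stretching and decorrelation stretching, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the percentile clip points and the (height, width, bands) array layout are assumptions, and the decorrelation stretch is the standard eigen-decomposition (ZCA-style) formulation.

```python
import numpy as np

def linear_stretch(band, low=2, high=98):
    """Linear contrast stretch: map the [low, high] percentile
    range of a single band onto the full [0, 255] display range."""
    lo, hi = np.percentile(band, [low, high])
    out = (band.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(out * 255, 0, 255).astype(np.uint8)

def decorrelation_stretch(img):
    """Decorrelation stretch for a (H, W, B) multiband image:
    decorrelate the bands via the eigenvectors of their covariance,
    equalise the variances, then rescale each band to [0, 255]."""
    h, w, b = img.shape
    flat = img.reshape(-1, b).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # Whitening transform expressed in band space (rotate, scale, rotate back).
    stretch = eigvec @ np.diag(1.0 / np.sqrt(np.maximum(eigval, 1e-9))) @ eigvec.T
    out = (flat - mean) @ stretch
    # Rescale each decorrelated band to the display range.
    out = (out - out.min(axis=0)) / np.ptp(out, axis=0)
    return (out * 255).reshape(h, w, b).astype(np.uint8)
```

Applied before band mixing, the linear stretch expands a low-contrast band's usable range; the decorrelation stretch additionally removes inter-band correlation, which is what exaggerates colour differences in the merged composite.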
Keywords: Image enhancement, Biomedical, nuclear medicine, image pixel, Gamma-ray imaging
This article belongs to Journal of Image Processing & Pattern Recognition Progress (joipprp)
Journal of Image Processing & Pattern Recognition Progress
Volume: 11
Issue: 01
Received: November 7, 2023
Accepted: December 21, 2023
Published: April 3, 2024