Open Access
Anushri Kulkarni, Raj Shaikh, Suraj Shinde, Yashraj Deshpande
Student, Student, Student, Assistant Professor, Smt. Kashibai Navale College of Engineering, Pune, Maharashtra, India
Abstract
This paper presents a surveillance car system that leverages the ESP32-CAM module and incorporates advanced image processing through a Generative Adversarial Network (GAN) model to redefine mobile surveillance. The ESP32-CAM serves as the core hardware platform, offering a compact design and wireless capabilities for real-time image capture and remote monitoring. The system’s innovation lies in the integration of a GAN model for image processing, which strengthens its ability to intelligently analyze captured data. This combination of cutting-edge hardware and artificial intelligence aims to provide a sophisticated, context-aware surveillance solution. The GAN model enables the identification of anomalies and specific objects within surveillance frames, improving adaptability to dynamic environments. With potential applications in security, law enforcement, and smart city initiatives, this paper underscores the synergy between the ESP32-CAM and GAN-based image processing in creating an intelligent, adaptable surveillance system for contemporary needs.
Keywords: Surveillance, ESP32 CAM, GAN Model, Smart Phone, Camera, Motor Driver
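The pipeline the abstract describes, capturing a frame from the ESP32-CAM over Wi-Fi and passing it through a GAN-based enhancement stage, can be sketched as below. This is a minimal illustration, not the authors' implementation: the capture URL is a hypothetical default for the module's HTTP endpoint, and a simple gamma curve stands in for the trained GAN generator that would perform the actual low-light enhancement.

```python
# Sketch of the capture -> enhance pipeline. ESP32_CAM_URL is a
# hypothetical address; enhance_frame() is a gamma-curve stand-in
# for the GAN generator described in the paper.
import urllib.request

ESP32_CAM_URL = "http://192.168.4.1/capture"  # assumed AP-mode address


def fetch_frame(url: str = ESP32_CAM_URL) -> bytes:
    """Grab one JPEG frame from the ESP32-CAM's HTTP capture endpoint."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()


def enhance_frame(pixels: list[list[int]], gamma: float = 0.5) -> list[list[int]]:
    """Stand-in for the GAN enhancement stage.

    Brightens an 8-bit grayscale frame with a gamma curve; a trained
    generator network would replace this function in the real system.
    """
    return [
        [round(255 * (p / 255) ** gamma) for p in row]
        for row in pixels
    ]


# Example: a dark 2x2 frame is brightened while 0 and 255 are preserved.
dark = [[16, 64], [128, 255]]
bright = enhance_frame(dark)
```

In a deployment, `fetch_frame` would run in a loop and the enhanced frames would feed the anomaly- and object-detection step before being streamed to the monitoring client.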
This article belongs to the Journal of VLSI Design Tools and Technology (jovdtt).
| Volume | 14 |
| Issue | 02 |
| Received | July 18, 2024 |
| Accepted | July 25, 2024 |
| Published | August 7, 2024 |