Open Access
Vaishnavi Bhapkar, Sankita Salvi, Jankar Tejaswini, Bandal Rutuja, K.S. Khamkar
Students and Professor, RDTC's Shri Chhatrapati Shivajiraje College of Engineering, Dhangwadi, Bhor, Pune, Maharashtra, India
Abstract
Object detection systems are essential tools for identifying and locating objects within images or videos. When integrated into spectacles or other wearable devices, these systems provide users with real-time information about objects in their surroundings. This functionality serves diverse purposes, such as assisting visually impaired individuals in navigating their environment or offering augmented reality data to workers during tasks. Region-based Convolutional Neural Networks (RCNN) are a prominent machine learning model used extensively for object detection. The RCNN model operates in two main stages: it first employs a convolutional neural network (CNN) to extract distinctive features from the input image, then applies a region proposal algorithm to pinpoint potential object locations within the image. These proposed regions are then processed by a second CNN, which classifies each one as an object or background. The RCNN model has demonstrated its capability to detect a broad spectrum of objects across various types of images and videos. Its proficiency lies in leveraging deep learning techniques to accurately identify and categorize objects, making it a versatile tool for applications ranging from enhancing accessibility for the visually impaired to improving productivity through augmented reality in industrial settings.
Keywords: Moving Object Detection Systems, Machine Learning, Region-based CNN, Spectacles, Algorithm
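The two-stage pipeline described in the abstract (propose candidate regions, then classify each region as an object or background) can be sketched in plain Python. This is a minimal illustrative stand-in, not the authors' implementation: the sliding-window proposal generator and the rule-based classifier below are hypothetical placeholders for the region proposal algorithm and the second CNN.

```python
# Hypothetical sketch of the two-stage RCNN pipeline from the abstract:
# stage 1 proposes candidate regions, stage 2 classifies each region as an
# object class or background. The proposal generator and classifier here
# are trivial stand-ins, not trained networks.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class Detection:
    box: Box
    label: str
    score: float

def propose_regions(image_size: Tuple[int, int], stride: int = 32) -> List[Box]:
    """Stage 1 stand-in: a sliding-window proposal generator.
    Real RCNN variants use selective search or a Region Proposal Network."""
    w, h = image_size
    return [(x, y, stride, stride)
            for y in range(0, h - stride + 1, stride)
            for x in range(0, w - stride + 1, stride)]

def classify_region(box: Box) -> Tuple[str, float]:
    """Stage 2 stand-in: label one region with a confidence score.
    A real RCNN runs a CNN over the cropped (or RoI-pooled) features;
    this placeholder rule pretends objects sit in the upper-left corner."""
    x, y, _, _ = box
    return ("object", 0.9) if x < 64 and y < 64 else ("background", 0.1)

def detect(image_size: Tuple[int, int], threshold: float = 0.5) -> List[Detection]:
    """Run both stages and keep confident non-background detections."""
    detections = []
    for box in propose_regions(image_size):
        label, score = classify_region(box)
        if label != "background" and score >= threshold:
            detections.append(Detection(box, label, score))
    return detections

if __name__ == "__main__":
    for d in detect((128, 128)):
        print(d)
```

On a 128x128 input with a 32-pixel stride, the sketch proposes 16 regions and, under the placeholder rule, keeps the four in the upper-left quadrant. In a real wearable system the per-region classification would run on GPU-accelerated CNN features rather than cropped pixels.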
This article belongs to the International Journal of Optical Innovations & Research (IJOIR).
| Volume | 02 |
| Issue | 01 |
| Received | May 31, 2024 |
| Accepted | June 27, 2024 |
| Published | August 7, 2024 |