Comparative Analysis of Deep Learning-Based Object Detection Models for Their Application in Autonomous Vehicles


Year: 2023 | Volume: 8 | Issue: 1 | Pages: 2-6
By

Umar Farooq

Abdur Rehman

Tabish Imtiaz

M. Saad Alam

  1. Student, Department of Electrical Engineering, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, Uttar Pradesh, India
  2. PhD Scholar, Department of Electrical Engineering, Zakir Husain College of Engineering and Technology, Aligarh Muslim University, Aligarh, Uttar Pradesh, India

Abstract

In this work, we compare the detection accuracy and speed of several state-of-the-art models on the task of detecting red and green traffic lights. Specifically, we compare the detection performance and speed of YOLOv4, Scaled-YOLOv4, and YOLOR, all of which are single-stage object detection models. Two-stage models achieve good detection accuracy but are slower than single-stage detectors; single-stage detectors are faster while still offering good detection accuracy, which makes them reliable for real-time object detection. We discuss the object detection models and the evaluation metric used to score them, and then present the results of our work.
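The abstract does not spell out the evaluation metric; as a point of reference, the sketch below shows the intersection-over-union (IoU) criterion that underlies the mAP-style scoring commonly used for detectors such as YOLOv4, Scaled-YOLOv4, and YOLOR. This is a minimal illustration only: the (x1, y1, x2, y2) box format, the 0.5 threshold, and the sample coordinates are assumptions for the example, not details taken from the paper.

# Minimal sketch (assumed, not from the paper): scoring a predicted box
# against a ground-truth box with IoU, the overlap criterion behind
# mAP-style detection metrics. Boxes are (x1, y1, x2, y2) corners.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero area if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted traffic-light box counts as a true positive when its IoU with a
# ground-truth box of the same class (red/green) meets a threshold, e.g. 0.5.
pred = (100, 50, 120, 90)   # hypothetical detection
gt   = (102, 48, 121, 92)   # hypothetical ground truth
print(f"IoU = {iou(pred, gt):.2f}, match = {iou(pred, gt) >= 0.5}")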

Keywords: Object Detection, Self-Driving Cars, Deep Learning, Traffic Light Detection

[This article belongs to International Journal of Analog Integrated Circuits (IJAIC)]

How to cite this article: Umar Farooq, Abdur Rehman, Tabish Imtiaz, M. Saad Alam. Comparative Analysis of Deep Learning-Based Object Detection Models for Their Application in Autonomous Vehicles. International Journal of Analog Integrated Circuits. 2023; 8(1):2-6.
How to cite this URL: Umar Farooq, Abdur Rehman, Tabish Imtiaz, M. Saad Alam. Comparative Analysis of Deep Learning-Based Object Detection Models for Their Application in Autonomous Vehicles. International Journal of Analog Integrated Circuits. 2023; 8(1):2-6. Available from: https://journals.stmjournals.com/ijaic/article=2023/view=90461



References

1. J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, L. Fei-Fei, ImageNet large scale visual recognition competition 2012 (ILSVRC2012), see image-net.org/challenges/LSVRC (2012).
2. R. Girshick, J. Donahue, T. Darrell, J. Malik, Region-based convolutional networks for accurate object detection and segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (1) (2015) 142–158.
3. M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., End-to-end learning for self-driving cars, arXiv preprint arXiv:1604.07316 (2016).
4. R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
5. R. Kulkarni, S. Dhavalikar, S. Bangar, Traffic light detection and recognition for self-driving cars using deep learning, in: 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), IEEE, 2018, pp. 1–4.
6. S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, Advances in Neural Information Processing Systems 28 (2015).
7. R. Girshick, Fast R-CNN, in: 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448. doi:10.1109/ICCV.2015.169.
8. J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779–788.
9. J. Redmon, A. Farhadi, YOLO9000: Better, faster, stronger, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271.
10. J. Redmon, A. Farhadi, YOLOv3: An incremental improvement, arXiv preprint arXiv:1804.02767 (2018).
11. T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, P. Dollár, Focal loss for dense object detection, CoRR abs/1708.02002 (2017). arXiv:1708.02002. URL: http://arxiv.org/abs/1708.02002
12. A. Bochkovskiy, C.-Y. Wang, H.-Y. M. Liao, YOLOv4: Optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934 (2020).
13. C.-Y. Wang, A. Bochkovskiy, H.-Y. M. Liao, Scaled-YOLOv4: Scaling cross stage partial network, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 13029–13038.
14. C.-Y. Wang, I.-H. Yeh, H.-Y. M. Liao, You only learn one representation: Unified network for multiple tasks, arXiv preprint arXiv:2105.04206 (2021).
15. A. Groener, G. Chern, M. Pritt, A comparison of deep learning object detection models for satellite imagery, in: 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), IEEE, 2019, pp. 1–10.
16. D. Misra, Mish: A self-regularized non-monotonic activation function, arXiv preprint arXiv:1908.08681 (2019).
17. Z. Ge, S. Liu, F. Wang, Z. Li, J. Sun, YOLOX: Exceeding YOLO series in 2021, arXiv preprint arXiv:2107.08430 (2021).
18. M. Tan, R. Pang, Q. V. Le, EfficientDet: Scalable and efficient object detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10781–10790.


Regular Issue | Open Access Article
Volume 8, Issue 1
Received: June 16, 2022
Accepted: June 23, 2022
Published: January 23, 2023