Open Access
Divya K.K., Muhammad Rafnas K.M., Muhammed Ismail P., Muhammed Minshad C.
Professor and Students, Department of Computer Science and Engineering, P A College of Engineering, Mangalore, Karnataka, India
Abstract
This paper presents a deep learning system for detecting deepfake videos, a widespread form of fabricated media. Using convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the system reliably distinguishes authentic videos from manipulated ones. It analyzes both the visual frames and the audio track of a video for signs of deepfake manipulation: video frames and audio are processed, features are extracted with CNNs and RNNs, and these features are combined to decide whether a video is real or fake. To ensure reliability, the system was trained and evaluated on large datasets of both genuine and forged videos. This work helps combat misinformation and protect the authenticity of digital content.
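The pipeline the abstract describes (per-frame feature extraction, temporal aggregation, final real/fake decision) can be sketched in plain Python. This is a minimal illustrative sketch, not the authors' implementation: `extract_frame_features` stands in for the ResNeXt CNN, `aggregate_temporal` for the LSTM, and `classify` for the final classifier; all three names and the normalize/average/threshold logic are hypothetical stubs.

```python
# Minimal sketch of the deepfake-detection pipeline described in the abstract.
# All functions are hypothetical stand-ins: extract_frame_features plays the
# role of the CNN (ResNeXt), aggregate_temporal the role of the LSTM.

from typing import List


def extract_frame_features(frame: List[float]) -> List[float]:
    """Stub for CNN feature extraction: normalize values by the frame peak."""
    peak = max(frame) or 1.0
    return [v / peak for v in frame]


def aggregate_temporal(features: List[List[float]]) -> List[float]:
    """Stub for LSTM aggregation: average each feature across all frames."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]


def classify(video_vec: List[float], threshold: float = 0.5) -> str:
    """Stub classifier: mean activation at or above the threshold -> 'real'."""
    score = sum(video_vec) / len(video_vec)
    return "real" if score >= threshold else "fake"


if __name__ == "__main__":
    frames = [[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]]  # toy stand-in for video frames
    per_frame = [extract_frame_features(f) for f in frames]
    video_vec = aggregate_temporal(per_frame)
    print(classify(video_vec))
```

In the real system the per-frame CNN features and the audio features would be learned vectors fed to an LSTM; the structure above only mirrors the data flow (frames in, one video-level vector out, one label out).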
n
Keywords: Deepfake, ResNeXt, machine learning, deep learning, LSTM
This article belongs to the Journal of Instrumentation Technology & Innovations (JoITI).
References
- Bietti LM, Baker MJ. Collaborative remembering at work. Interaction Studies. 2018;19(3):459–86.
- Kidd J, Rees AJ. A museum of deepfakes? In: Emerging Technologies and Museums: Mediating Difficult Heritage. 2022. p. 218.
- Zhou Y, Lim SN. Joint audio-visual deepfake detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision; 2021. p. 14800–9.
- Güera D, Delp EJ. Deepfake video detection using recurrent neural networks. In: 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS); 2018. p. 1–6.
- Chen M, Liao X, Wu M. PulseEdit: editing physiological signals in facial videos for privacy protection. IEEE Transactions on Information Forensics and Security. 2022;17:457–71.
- Akter T, Ali MH, Khan MI, Satu MS, Uddin MJ, Alyami SA, Ali S, Azad AK, Moni MA. Improved transfer-learning-based facial recognition framework to detect autistic children at an early stage. Brain Sciences. 2021;11(6):734.
- Dolhansky B, Bitton J, Pflaum B, Lu J, Howes R, Wang M, Ferrer CC. The DeepFake Detection Challenge (DFDC) dataset. arXiv preprint arXiv:2006.07397. 2020.
- Yan Y. Deep dive into deepfakes: safeguarding our digital identity. Brooklyn Journal of International Law. 2022;48:767.
- Li Y, Yang X, Sun P, Qi H, Lyu S. Celeb-DF: a large-scale challenging dataset for deepfake forensics. arXiv preprint arXiv:1909.12962. 2019.
- Li Y, Lyu S. Exposing DeepFake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656. 2018.
- Li Y, Chang MC, Lyu S. Exposing AI created fake videos by detecting eye blinking. arXiv preprint arXiv:1806.02877. 2018.
- Nguyen HH, Yamagishi J, Echizen I. Using capsule networks to detect forged images and videos. arXiv preprint arXiv:1810.11215. 2018.
- Laptev I, Marszalek M, Schmid C, Rozenfeld B. Learning realistic human actions from movies. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2008. p. 1–8.
Journal of Instrumentation Technology & Innovations
| Received | June 9, 2024 |
| Accepted | June 28, 2024 |
| Published | August 14, 2024 |