A Deep Learning Model For Crime Surveillance In Phone Calls.



EOI: 10.11242/viva-tech.01.04.159




Citation

Mr. Abhishek Khaire, Mr. Pranav Mahadeokar, Mr. Praveen Kumar Prajapati, Prof. Sunita Naik, "A Deep Learning Model For Crime Surveillance In Phone Calls.", VIVA-IJRI Volume 1, Issue 4, Article 159, pp. 1-6, 2021. Published by Computer Engineering Department, VIVA Institute of Technology, Virar, India.

Abstract

Public surveillance systems are making a ground-breaking impact on securing lives and ensuring public safety, through interventions that curb crime and improve the well-being of communities. Law enforcement agencies deploy tools such as CCTV for video surveillance in banks, residential societies, shopping malls, markets, roads, and almost everywhere else, to establish where an event such as a robbery, rough driving, insolence, or murder took place and who was responsible. Another surveillance tool, phone tapping, is used deliberately by authorities to catch a suspect who is threatening an innocent person or conspiring a violent activity over a phone call. Analysing both scenarios: with CCTV, the crime (robbery, murder) has already occurred, and with phone tapping, the authorities must have advance information about the suspect who is going to commit a crime. To overcome these limitations, a system is proposed that identifies in advance who is suspect and what conspiracy is taking place over phone calls, detects it automatically, and reports it to law enforcement authorities.
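The pipeline the abstract describes, taking call audio, extracting acoustic features, and classifying segments as suspicious, can be sketched minimally. Everything below is illustrative: the short-time energy and zero-crossing features and the simple threshold classifier are hypothetical stand-ins for the paper's deep learning model, written in pure Python so the sketch is self-contained.

```python
# Hypothetical sketch of an audio-surveillance pipeline: frame the call
# audio, extract per-frame features, flag candidate frames for review.
# The threshold classifier is a toy stand-in for a trained DNN.
import math

def frame_signal(samples, frame_len=400, hop=200):
    """Split a waveform (list of floats) into overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def short_time_energy(frame):
    """Mean squared amplitude of one frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def extract_features(samples):
    return [(short_time_energy(f), zero_crossing_rate(f))
            for f in frame_signal(samples)]

def flag_suspicious(features, energy_thresh=0.25):
    """Toy classifier: flag high-energy frames (e.g. shouting) as
    candidates for closer analysis and possible reporting."""
    return [i for i, (energy, _) in enumerate(features)
            if energy > energy_thresh]

# Synthetic 1-second "call" at 8 kHz: a quiet tone, then a loud burst.
sr = 8000
quiet = [0.1 * math.sin(2 * math.pi * 220 * t / sr) for t in range(sr // 2)]
loud = [0.9 * math.sin(2 * math.pi * 220 * t / sr) for t in range(sr // 2)]
feats = extract_features(quiet + loud)
suspicious = flag_suspicious(feats)
print(suspicious)  # indices of frames in the loud half
```

In a real system the hand-written features would be replaced by learned representations (e.g. spectrogram input to a CNN, as in the emotion-recognition work cited below), and the flagged segments would be forwarded for transcription and threat analysis rather than reported directly.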

Keywords

Deep learning, Phone calls, Audio analysis, Surveillance Systems, Threats.

References

  1. Philip Duncal, Duraid Mohammed, "Audio information extraction from arbitrary sound recordings," Audio Engineering Society (AES), 22nd International Congress on Sound and Vibration (ICSV22) and AES 136th Convention, 2014.
  2. Eduard Franti, Monica Dascalu, "Voice Based Emotion Recognition with Convolutional Neural Networks for Companion Robots," Romanian Journal of Information Science and Technology, Volume 20, Number 3, 2017, pp. 222–240.
  3. Regunathan Radhakrishnan, Ajay Divakaran, "Audio Analysis for surveillance applications for elevators," Mitsubishi Electric Research Labs, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005.
  4. Arpit Shah, Shivani Firodiya, "Audio Sentiment Analysis after a Single-Channel Multiple Source Separation," Indiana University Bloomington, 2019.
  5. Cynthia Van Hee, Gilles Jacobs, "Automatic Detection of Cyberbullying in Social Media Text," AMiCA (Automatic Monitoring of Cyberspace Applications), 2018, arXiv:1801.05617v1 [cs.CL].
  6. Hao Hu, Ming-Xing Xu, and Wei Wu, "GMM Supervector Based SVM With Spectral Features for Speech Emotion Recognition," International Conference on Acoustics, Speech, and Signal Processing (ICASSP), IEEE, 2007, 1-4244-0728-1/07.
  7. Radim Burget and Malay Kishore Dutta, "Speech Emotion Recognition with Deep Learning," 4th International Conference on Signal Processing and Integrated Networks (SPIN), IEEE, 2017, 978-1-5090-2797-2/17.
  8. Xavier Anguera Miro and Nicholas Evans, "Speaker Diarization: A Review of Recent Research," IEEE Transactions on Audio, Speech, and Language Processing, Vol. 20, No. 2, February 2012, 1558-7916.
  9. James Albert Cornel, Carl Christian Pablo, "Cyberbullying Detection for Online Games Chat Logs using Deep Learning," IEEE, 2019, 978-1-7281-3044.
  10. Alexandru Lucian Georgescu, Horia Cucu, "Automatic Annotation of Speech Corpora using Complementary GMM and DNN Acoustic Models," IEEE/ACM Transactions on Audio, Speech, and Language Processing, IEEE, 2018, 978-1-5386-4695.
  11. Fayek, H. M., M. Lech, and L. Cavedon, "Towards real-time speech emotion recognition using deep neural networks," Signal Processing and Communication Systems (ICSPCS), IEEE, 2015, 978-1-4673-8118.
  12. Eesung Kim and Jong Won Shin, "DNN-based emotion recognition based on bottleneck acoustic features and lexical features," ICASSP, IEEE, 2019, 978-1-5386-4658.
  13. Starlet Ben Alex and Ben P. Bab, "Utterance Syllable Level Prosodic Features for Automatic Emotion Recognition," Recent Advances in Intelligent Computational Systems (RAICS), IEEE, 2018.