2025 • In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 3, pp. 847-854
Forest Fire Detection; Deep Learning; CNN Networks; Vision Transformers; Edge AI; XAI
Abstract :
[en] Forests are vital natural resources but are highly vulnerable to disasters, both natural (e.g., lightning strikes) and human-induced. Early and automated detection of forest fires and smoke is critical for mitigating damage. The main challenge for this kind of application is to provide accurate, explainable, real-time and lightweight solutions that can be easily deployed by and for users such as firefighters. This paper presents an embedded and explainable artificial intelligence “Edge AI” system for real-time forest fire and smoke detection using compressed Deep Learning (DL) models. Our model compression approach yields lightweight models suited to Edge AI deployment. Experimental evaluation on a preprocessed dataset of 1500 images demonstrated a test accuracy of 98% with a lightweight model running in real time on a Jetson Xavier Edge AI device. The compression methods preserved this accuracy while accelerating computation (3× to 18× speedup), reducing memory consumption (3.8× to 10.6×), and reducing energy consumption (3.5× to 6.3×).
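As an illustration of the kind of model compression summarized above, the following minimal Python sketch applies post-training dynamic quantization to a placeholder image classifier. The backbone (convnext_tiny), toolchain (PyTorch), and file names are assumptions for illustration only; the paper does not specify its compression pipeline, and deployment on a Jetson Xavier would typically go through an inference engine such as TensorRT.

```python
import torch
import torchvision

# Illustrative sketch only: the paper does not disclose its compression toolchain.
# We assume a PyTorch classifier (here a placeholder ConvNeXt-Tiny backbone) and
# apply post-training dynamic quantization, one standard compression technique
# alongside pruning and knowledge distillation.

model = torchvision.models.convnext_tiny(weights="DEFAULT")  # placeholder backbone
model.eval()

# Quantize linear layers to int8: weights are stored in int8 and activations are
# quantized on the fly, reducing model size and speeding up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Rough size comparison: serialize both models and compare file sizes on disk.
torch.save(model.state_dict(), "fp32.pt")
torch.save(quantized.state_dict(), "int8.pt")
```

In practice, the reported speedup, memory, and energy gains would be measured on the target edge device rather than inferred from file sizes alone.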
Disciplines :
Computer science
Author, co-author :
Mahmoudi, Sidi ; Université de Mons - UMONS > Faculté Polytechnique > Service Informatique, Logiciel et Intelligence artificielle
Gloesener, Maxime ; Université de Mons - UMONS > Faculté Polytechnique > Service Informatique, Logiciel et Intelligence artificielle
Benkedadra, Mohamed ; Université de Mons - UMONS > Faculté Polytechnique > Service Informatique, Logiciel et Intelligence artificielle
Lerat, Jean-Sébastien ; Université de Mons - UMONS > Faculté Polytechnique > Service Informatique, Logiciel et Intelligence artificielle
Language :
English
Title :
Edge AI System for Real-Time and Explainable Forest Fire Detection Using Compressed Deep Learning Models
Publication date :
2025
Journal title :
In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
ISSN :
2184-4321
Publisher :
ScitePress Digital Library, Porto, Portugal
Volume :
3
Pages :
847-854
Peer reviewed :
Peer reviewed
Research unit :
F114 - Informatique, Logiciel et Intelligence artificielle
Research institute :
R300 - Institut de Recherche en Technologies de l'Information et Sciences de l'Informatique
R450 - Institut NUMEDIART pour les Technologies des Arts Numériques