Research on Student Behavior Pattern Recognition in Physical Education Based on Adaptive Algorithm
DOI: https://doi.org/10.70917/ijcisim-2025-0203

Keywords: behavior recognition; adaptive algorithm; physical education; graph convolutional network; multi-scale features

Abstract
To address the need for accurate recognition of student behaviors in physical education, this paper studies student behavior pattern recognition based on adaptive algorithms. First, a human-skeleton spatio-temporal motion model is used to transform continuous actions into structured data. Multi-agent deep reinforcement learning is then used to realize parallel recognition of, and decision-making over, multiple concurrent behaviors. On this basis, a multi-scale spatio-temporal feature encoder is built to capture behavioral features at different scales. Finally, the features are fused and optimized with an adaptive graph convolutional reinforcement network (DT-AGCN) to complete adaptive, reinforced recognition of students' behavioral patterns. Experimental results show that on the KTH dataset the model achieves an average recognition accuracy of 98.12% across ten types of basic sports actions, and accuracy remains above 93.11% even in complex scenes with partial occlusion. On the more challenging UCF dataset, the model still reaches an overall recognition accuracy of 95.84% for nine categories of teaching activities in real classrooms with complex scenes. In a 12-week empirical study, students' sports participation increased from 58.51% at the beginning of the period to 98.93%, the skill attainment rate rose to 97.54%, the physical fitness pass rate rose from 62.83% to 99.03%, and the number of classroom interactions increased to 72, strong evidence that teaching with this model can substantially enhance the effectiveness of teaching and learning.
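The abstract does not give implementation details of DT-AGCN, but two of the ingredients it names, an adaptive graph convolution over the skeleton joints and a multi-scale spatio-temporal feature encoder, are standard building blocks in skeleton-based action recognition. The PyTorch sketch below illustrates one plausible form of each; all class names, the joint count, the placeholder adjacency, and the kernel sizes are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn


class AdaptiveGraphConv(nn.Module):
    """Spatial graph convolution over skeleton joints with a learnable
    (adaptive) adjacency added to the fixed skeletal adjacency."""

    def __init__(self, in_channels, out_channels, num_joints, adjacency):
        super().__init__()
        # Fixed adjacency from the skeleton topology (not learned).
        self.register_buffer("A", adjacency)                 # (V, V)
        # Learnable residual adjacency, initialized at zero so training
        # starts from the physical skeleton graph.
        self.B = nn.Parameter(torch.zeros(num_joints, num_joints))
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                                    # x: (N, C, T, V)
        A = self.A + self.B                                  # adaptive graph
        x = torch.einsum("nctv,vw->nctw", x, A)              # aggregate joints
        return self.proj(x)


class MultiScaleTemporalConv(nn.Module):
    """Parallel temporal convolutions with different kernel sizes to
    capture behavioral cues at several time scales."""

    def __init__(self, channels, kernel_sizes=(3, 5, 9)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=(k, 1),
                      padding=(k // 2, 0))                   # keeps T length
            for k in kernel_sizes
        )

    def forward(self, x):                                    # x: (N, C, T, V)
        return sum(branch(x) for branch in self.branches) / len(self.branches)


if __name__ == "__main__":
    V = 18                                  # hypothetical number of joints
    A = torch.eye(V)                        # placeholder skeleton adjacency
    gcn = AdaptiveGraphConv(3, 64, V, A)
    tcn = MultiScaleTemporalConv(64)
    x = torch.randn(2, 3, 100, V)           # (batch, xyz, frames, joints)
    print(tcn(gcn(x)).shape)                # torch.Size([2, 64, 100, 18])
```

In this reading, the learnable matrix B lets the network strengthen joint pairs that matter for a behavior even when they are not physically connected, while the parallel temporal kernels cover short and long motion patterns, matching the multi-scale feature capture described in the abstract.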
License
Copyright (c) 2025 Xiaolu Li, Junyi Yang, Xiaoguang Yang

This work is licensed under a Creative Commons Attribution 4.0 International License.