Machine Perception and Robotics Group
Chubu University


Generating a Time Shrunk Lecture Video by Event Detection

Authors
Takao Yokoi, Hironobu Fujiyoshi
Publication
IEEE International Conference on Multimedia & Expo (ICME), 2006

Download: PDF (English)

Streaming lecture video over the Internet is important for e-learning. We have developed a system that generates a lecture video using virtual camerawork based on the shooting techniques of broadcast cameramen. However, viewing a full-length video takes considerable time for students. In this paper, we propose a method for generating a time-shrunk lecture video by event detection. We detect two kinds of events: speech periods and chalkboard-writing periods. Speech periods are found by voice activity detection with LPC cepstrum features and classified as speech or non-speech using the Mahalanobis distance. To detect chalkboard-writing periods, we use a graph-cuts technique to segment a precise region of interest, such as the instructor. By deleting content-free periods, i.e., periods with neither speech nor writing events, and fast-forwarding writing periods, our method generates a time-shrunk lecture video automatically. The generated video is about 20% to 40% shorter in time than the original, which is almost the same as the result of manual editing by a human operator.
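The speech/non-speech decision described above can be sketched as follows: each audio frame yields a feature vector (LPC cepstrum coefficients in the paper), and the frame is assigned to whichever class model lies at the smaller Mahalanobis distance. This is a minimal illustration, not the authors' implementation; the 2-D toy models, function names, and thresold-free nearest-class rule are assumptions for the sketch.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    # Mahalanobis distance of feature vector x from a class model
    # given by (mean, inverse covariance).
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def classify_frame(x, speech_model, nonspeech_model):
    # Assign the frame to the class whose model is closer in
    # Mahalanobis distance (nearest-class rule, assumed for this sketch).
    d_speech = mahalanobis(x, *speech_model)
    d_nonspeech = mahalanobis(x, *nonspeech_model)
    return "speech" if d_speech < d_nonspeech else "non-speech"

# Toy 2-D (mean, inverse covariance) models standing in for
# LPC-cepstrum class statistics estimated from training audio.
speech_model = (np.array([1.0, 1.0]), np.eye(2))
nonspeech_model = (np.array([-1.0, -1.0]), np.eye(2))

print(classify_frame(np.array([0.8, 1.2]), speech_model, nonspeech_model))   # speech
print(classify_frame(np.array([-0.9, -0.7]), speech_model, nonspeech_model)) # non-speech
```

In practice the per-frame labels would be smoothed over time to form the speech periods that the video editor keeps, while frames belonging to neither event class mark the content-free periods that are deleted.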
