Lecture recording is a common and practical aid for examination preparation. In many cases, however, the recordings turn out to be monotonous, regardless of how engaging the original session was. Our basic idea for addressing this problem is to organize the recording in the same way a camera team handles a live television production. Highly abstracted, the behaviour of a real camera team is a set of reactions to the environment, based on cinematographic rules and on the experience of each member of the team.
Virtual cameramen and a virtual director imitate this behaviour. Each cameraman continuously controls its own image, waits for orders from the director, and reports back any problems or actions it detects. In addition, sensors inform the director of events that cannot easily be detected by image recognition, e.g., the position of a questioner in the audience or a questioner raising his or her hand. The director is based on a finite state machine: among all transitions leaving the active state, it selects the one that best fits the incoming messages from the cameramen and sensors. Neither the finite state machine nor the transition constraints are fixed in the source code; both are described in an XML file. As a result, the director's decisions are mostly similar across sessions, but seldom identical.
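The selection mechanism described above can be sketched as follows. This is a minimal illustration only: the XML element names, state names, message names, and the weight-based scoring are assumptions for the example, not the system's actual schema or selection logic.

```python
# Hypothetical sketch: the director's state machine and transition
# constraints are loaded from an XML description, and each step picks
# the outgoing transition that best matches the incoming messages.
import random
import xml.etree.ElementTree as ET

# Assumed XML schema for illustration; the real configuration differs.
FSM_XML = """
<fsm start="OverviewShot">
  <state name="OverviewShot">
    <transition to="SpeakerCloseUp" on="speaker_moves" weight="2"/>
    <transition to="AudienceShot" on="hand_raised" weight="3"/>
  </state>
  <state name="SpeakerCloseUp">
    <transition to="OverviewShot" on="tracking_lost" weight="3"/>
    <transition to="AudienceShot" on="hand_raised" weight="2"/>
  </state>
  <state name="AudienceShot">
    <transition to="OverviewShot" on="question_answered" weight="1"/>
  </state>
</fsm>
"""

class Director:
    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        self.state = root.get("start")
        # Per state: list of (target, triggering message, weight).
        self.transitions = {
            s.get("name"): [
                (t.get("to"), t.get("on"), int(t.get("weight")))
                for t in s.findall("transition")
            ]
            for s in root.findall("state")
        }

    def step(self, messages):
        """Select the outgoing transition that best fits the incoming
        messages from cameramen and sensors; stay put if none match."""
        candidates = [
            (weight, to)
            for to, on, weight in self.transitions[self.state]
            if on in messages
        ]
        if not candidates:
            return self.state
        best = max(w for w, _ in candidates)
        # Ties are broken randomly, so repeated runs on similar input
        # produce similar but seldom identical cut sequences.
        self.state = random.choice([to for w, to in candidates if w == best])
        return self.state

director = Director(FSM_XML)
director.step({"hand_raised"})        # cut to the audience
director.step({"question_answered"})  # cut back to the overview
```

Because the machine is read from XML rather than hard-coded, the repertoire of shots and cutting rules can be changed without touching the director's source.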