Endoscopic examinations play an important role in the diagnosis of head and neck tumors because they provide information that complements tomographic imaging, especially regarding tissue composition and surface structure. The goal of this project is to reconstruct a 3D model from the image data acquired during an endoscopy and to make it intuitively and efficiently explorable using virtual endoscopy techniques. In this way, the examination results are documented in a form that matches the nature of the examination itself: they become reproducible and reusable in many ways. The examiner can use the results to prepare for a surgical procedure, for patient education, and for training; they also directly enable telemedical examinations. In the event of a legal dispute, they help the physician describe the planned procedure comprehensibly.
Achieving this goal requires solving several technically challenging tasks. In particular, real-time, three-dimensional virtual exploration of the abundant high-resolution data demands state-of-the-art visualization and interaction techniques. The reconstructed 3D model must be textured at high quality so that the virtual exploration does not suffer; because the surface has no regular shape, a largely distortion-free texture mapping is difficult to achieve.
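To make the distortion problem concrete, one common way to quantify texture-mapping distortion is to look at the singular values of the affine map from each 2D texture triangle to its 3D surface triangle: both values equal 1 for an isometric (distortion-free) mapping. The following is a minimal sketch of this per-triangle measure (the function name and the NumPy-based formulation are illustrative, not part of the project):

```python
import numpy as np

def triangle_stretch(q, p):
    """Singular values (max, min) of the affine map from a 2D texture
    triangle p (3x2 array) to a 3D surface triangle q (3x3 array).
    A return value of (1.0, 1.0) indicates an isometric, i.e.
    distortion-free, mapping of that triangle."""
    (s1, t1), (s2, t2), (s3, t3) = p
    # Signed area of the texture-space triangle.
    A = ((s2 - s1) * (t3 - t1) - (s3 - s1) * (t2 - t1)) / 2.0
    # Partial derivatives of the surface position w.r.t. texture coords.
    Ss = (q[0] * (t2 - t3) + q[1] * (t3 - t1) + q[2] * (t1 - t2)) / (2 * A)
    St = (q[0] * (s3 - s2) + q[1] * (s1 - s3) + q[2] * (s2 - s1)) / (2 * A)
    a, b, c = Ss @ Ss, Ss @ St, St @ St
    root = np.sqrt(max((a - c) ** 2 + 4 * b ** 2, 0.0))
    return np.sqrt((a + c + root) / 2), np.sqrt(max((a + c - root) / 2, 0.0))

# Example: mapping a unit right triangle onto itself is isometric.
q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(triangle_stretch(q, p))  # -> (1.0, 1.0)
```

Summing or averaging such per-triangle values over the mesh gives a global distortion score that a parameterization method can then minimize.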
The automatically generated 3D model of the target region should be explorable in a virtual endoscopy. Interaction techniques and input devices will be tested and evaluated for their suitability for flexible and efficient navigation. In particular, a study will compare the current 3D input devices available at the Chair of Visualization (SpacePilot, Phantom) with 2D input devices (pen, mouse). As in virtual colonoscopy, videos showing a fly-through of the target region will be generated automatically.
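An automatic fly-through of this kind is typically built by fitting a smooth camera path through centerline points of the anatomical cavity and pairing each camera position with a look-at target slightly ahead on the path. The sketch below illustrates this idea with a Catmull-Rom spline; the centerline coordinates and function names are hypothetical placeholders, not the project's actual data or pipeline:

```python
import numpy as np

def catmull_rom(points, samples_per_segment=20):
    """Smoothly interpolate a camera path through the given centerline
    points using a Catmull-Rom spline. The path passes through every
    input point, which keeps the camera inside the cavity."""
    pts = np.asarray(points, dtype=float)
    # Duplicate the endpoints so the spline covers the full polyline.
    pts = np.vstack([pts[0], pts, pts[-1]])
    path = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for t in np.linspace(0, 1, samples_per_segment, endpoint=False):
            path.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                        + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    path.append(pts[-2])  # end exactly at the last centerline point
    return np.array(path)

def camera_keyframes(path):
    """Pair each sampled position with a look-at target one step ahead,
    so the virtual camera always faces along the fly-through direction."""
    return [(path[i], path[min(i + 1, len(path) - 1)])
            for i in range(len(path))]

# Hypothetical centerline of a reconstructed cavity (arbitrary units).
centerline = [(0, 0, 0), (10, 2, 1), (20, -1, 3), (30, 0, 5)]
frames = camera_keyframes(catmull_rom(centerline))
```

Rendering one image per keyframe and encoding the sequence then yields the automatically generated fly-through video; the same keyframes can also serve as the starting pose for interactive exploration.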