Ultrasound is reported to be difficult for medical personnel to interpret, and is nearly impossible for non-expert observers to understand. This project aims to develop technology that effectively addresses this interpretation challenge, enabling better intra-clinical and doctor-to-patient communication. The potential outcomes are (a) more precise doctor-to-doctor communication, and consequently better diagnosis and treatment, and (b) patients who are better informed about their medical condition. We see both of these outcomes as highly beneficial for modern society.

We plan to research techniques for transferring available semantic information (findings, anatomy, physiology, pathologies), obtained from the 3D modalities used in earlier diagnostic stages, to the ultrasound examination. This abstracted high-level information will be overlaid on the live ultrasound scan, or transferred from the live ultrasound 'back' to the 3D scan. To enable this inter-modality semantic transfer, we will research suitable enabling technologies, in particular segmentation, registration, and tracking approaches.

We will research techniques for guided navigation in ultrasound sequences that satisfy high-level user requests such as "show me structure X in the ultrasound cineloop". Furthermore, we will visualize 3D ultrasound with computationally intensive illumination models to display the inspected structures in high quality and allow better perception of the inspected anatomical regions. Finally, we will translate the visualization technology to the clinical environment and perform a thorough evaluation of the effectiveness of the newly developed methods.
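To make the registration component concrete, the following is a minimal illustrative sketch, not the project's actual method: a brute-force, intensity-based rigid registration that recovers the integer translation aligning a "moving" image to a "fixed" reference under normalized cross-correlation. The function names (`ncc`, `register_translation`), the pure-translation motion model, and the exhaustive search are simplifying assumptions for illustration; real 3D-to-ultrasound registration involves richer transforms and optimization.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally shaped images.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def register_translation(fixed, moving, search=5):
    """Brute-force search for the integer (dy, dx) shift that best
    aligns `moving` to `fixed` under normalized cross-correlation.
    Purely illustrative; assumes circular (wrap-around) shifts."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# Demo: a bright square displaced by (-2, +3) should be recovered
# by the inverse shift (+2, -3).
fixed = np.zeros((32, 32))
fixed[10:20, 10:20] = 1.0
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)
print(register_translation(fixed, moving))  # → (2, -3)
```

The same score-and-search pattern, with a richer transform model and a gradient-based optimizer, underlies most intensity-based registration pipelines.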