ICCR & MCMA 2019: INTERNATIONAL CONFERENCE ON THE USE OF COMPUTERS IN RADIOTHERAPY AND THE INTERNATIONAL CONFERENCE ON MONTE CARLO TECHNIQUES FOR MEDICAL APPLICATIONS
PROGRAM FOR WEDNESDAY, JUNE 19TH

07:30-08:20 Session 13A: Educational Session III
07:30
Educational Lecture: Radiomics: the Image Biomarker Standardisation Initiative (IBSI)

ABSTRACT. Lecture overview:

It is recognized that intratumoral heterogeneity is associated with more aggressive tumor phenotypes leading to poor patient outcomes. Medical imaging plays a central role in related investigations, as radiological images are routinely acquired during cancer management (PET, CT, MRI, etc.). Nowadays, the rise of computational power allows for the exploitation of a large number of quantitative imaging features and has led to a new incarnation of computer-aided diagnosis: “radiomics”.

Better standardization, transparency and sharing practices are however required in the radiomics community to improve the quality and reproducibility of published studies and to achieve faster clinical translation. In this course, emphasis will be put on the presentation of the standardized radiomics workflow defined by the Image Biomarker Standardisation Initiative (IBSI), a group of more than 67 researchers from 25 institutions in 8 countries. Since 2016, the IBSI has put efforts into standardizing both the computation of radiomics features and the image processing steps required before feature extraction. The workflow for computing radiomics features is in fact complex and involves many steps such as image interpolation, re-segmentation and discretization. Overall, the standardized workflow of the IBSI along with consensual benchmark values could serve as a calibration tool for radiomics investigations.
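
As a concrete illustration of two of the workflow steps named above (intensity discretization followed by feature computation), the short Python sketch below shows a fixed-bin-width discretization and a first-order entropy feature. The function names, bin width and toy ROI values are illustrative assumptions and are not taken from the IBSI reference implementation.

```python
# Hypothetical sketch of two IBSI-style pre-processing/feature steps:
# fixed-bin-width discretization and a first-order feature. Names and
# parameter choices are illustrative, not the IBSI reference code.
import numpy as np

def discretize_fixed_bin_width(roi_values, bin_width=25.0, min_value=None):
    """Map ROI intensities to integer bin labels (1..N), fixed-bin-width style."""
    roi_values = np.asarray(roi_values, dtype=float)
    lo = roi_values.min() if min_value is None else min_value
    return np.floor((roi_values - lo) / bin_width).astype(int) + 1

def intensity_entropy(discretized):
    """First-order (Shannon) entropy of the discretized intensity histogram."""
    _, counts = np.unique(discretized, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Toy example: a flat list of voxel values inside a segmented tumour ROI.
roi = np.random.default_rng(0).normal(loc=40.0, scale=20.0, size=500)
bins = discretize_fixed_bin_width(roi, bin_width=25.0)
print(intensity_entropy(bins))
```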

Learning objectives:

  1. Introduce attendees to the field of radiomics and its potential in supporting clinical decision-making.
  2. Describe in detail the radiomics workflow defined by the IBSI.
  3. Provide attendees with the necessary knowledge for benchmarking their own software according to IBSI standards.

About the speaker:

Martin Vallières completed his undergraduate studies (Bachelor in Engineering Physics) at École Polytechnique de Montréal in 2010, and his graduate studies (M.Sc. in Medical Radiation Physics, Ph.D. in Physics) at the Medical Physics Unit of McGill University, Montreal, in 2017. He began his postdoctoral training at the Laboratoire de Traitement de l'Information Médicale (LaTIM) in Brest, France, from July 2017 to July 2018. Since July 2018, Dr. Vallières has continued his postdoctoral training jointly between the Medical Physics Unit of McGill University and the University of California San Francisco (UCSF).

Dr. Vallières’ current research in oncology is focused on the following topics: (i) Prediction of tumor outcomes via radiomics analysis of medical images and advanced machine learning; (ii) Standardization of radiomics computations and analyses; (iii) Combination of radiomics and deep learning; (iv) Distributed learning in oncology; and (v) Integration of multi-omics data (radiomics, genomics, etc.) for a better personalization of cancer treatments.

07:30-08:20 Session 13B: Educational Session IV
07:30
Educational Lecture: Deep Learning–A Brief Introduction

ABSTRACT. Lecture overview:

Deep learning is playing an increasingly prominent role within healthcare, and radiation oncology is no exception. In this course, deep learning and the primary concepts that drive its methodologies will be introduced. First, deep learning will be put into context in comparison to other machine learning algorithms. Next, the quintessential building block of most deep learning algorithms, the neural network, will be introduced. This will be followed by a description of the convolutional neural network, a deep learning algorithm built specifically for analyzing images. The session will conclude with a brief look at the history of deep learning, specifically the recent results that led to its renaissance, and speculation as to its future.
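
For attendees who prefer a concrete picture of the "building block" mentioned above, the minimal NumPy sketch below stacks two fully connected layers with a ReLU non-linearity. All sizes, weights and inputs are made-up toy values rather than any network discussed in the lecture.

```python
# Minimal NumPy sketch of the basic building block: a fully connected layer
# followed by a non-linearity, stacked into a tiny two-layer network.
import numpy as np

rng = np.random.default_rng(42)

def dense(x, w, b):
    return x @ w + b

def relu(x):
    return np.maximum(0.0, x)

# Two-layer network mapping a 16-dimensional input to a single output.
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(4, 16))          # a batch of 4 example inputs
hidden = relu(dense(x, w1, b1))       # learned intermediate representation
output = dense(hidden, w2, b2)        # e.g. a score or regression value
print(output.shape)                   # (4, 1)
```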

Learning objectives:

  1. Introduce attendees to deep learning and the theoretical concepts required to understand it.
  2. Motivate attendees to consider if/how deep learning could be applied to their own research.

About the speaker:

André Diamant completed his undergraduate studies (Bachelor in Honours Physics) at McGill University in 2014. He then entered the graduate program at the Medical Physics Unit of McGill University the same year. In 2016, he completed his M.Sc. in Medical Radiation Physics and continued on to his Ph.D. in Physics.

Mr. Diamant's current research focuses on predicting oncological outcomes using advanced machine learning algorithms (deep learning) to analyze pre-treatment medical images.

08:30-10:10 Session 14: ICCR Rising Stars Competition
Location: Opera A+B
08:30
ICCR Rising Stars 3: Modeling Radiation Dose to Circulating Lymphocytes for different radiotherapy modalities
PRESENTER: Harald Paganetti

ABSTRACT. Multimodal therapeutic approaches combining radiotherapy and immunotherapy are under investigation in clinical trials in a range of indications, with expected improvements in patient prognosis. However, radiation is known to decrease lymphocyte counts, particularly in hyperfractionated radiation therapy. To understand the interaction of the two therapeutic modalities, accurate methods are required to calculate the radiation dose to circulating lymphoid cells.

In this work we have developed a technique to simulate the dose deposited in the blood circulating in the patient during intracranial irradiation. Based on 3D magnetic resonance imaging patient data of the head, generic trajectories of individual blood particles in the different lobes of the brain have been generated. Furthermore, a blood flow model of the whole body has been developed, using the dynamic distribution of the cardiac output, normal values of organ blood volumes and hemodynamic information. The blood particles of the cardiac output are assumed to be uniformly distributed as they pass through the brain and to re-enter the beam path multiple times during the delivery of a single beam. The accumulated dose of each individual blood particle circulating through the radiation fields is calculated using the associated local dose grid of the therapy plan, considering the fractionation of the course of the radiotherapy. The rest of the body is modeled with a combination of a discrete Markov chain approach and queueing models, enabling multiple irradiations to be scored after passing through the body's circulation.

Quantifying the dose delivered to the blood has shown that the circulating blood pool receives only a small fraction of the delivered dose. The accumulated dose over all fractions, however, differs between proton therapy and IMRT. The difference may be explained by the intrinsic physical characteristics of the deposited dose of both modalities.

08:45
ICCR Rising Stars 1: Closed-form modeling of biological uncertainties in carbon ion therapy

ABSTRACT. Particle therapy is prone to uncertainties. In this context, not only the location of dose deposition is uncertain but also the induced effectiveness. Here, we present efficient closed-form expressions to quantify carbon ion treatment plan uncertainties by means of the expectation value and standard deviation of the RBE-weighted dose. The proposed analytical probabilistic methodology does not rely on discrete dose scenarios and accounts not only for biological uncertainties but also for setup and range uncertainties in a fractionated manner. The additional consideration of biological uncertainties did not increase the computational complexity and, for three-dimensional carbon ion treatment plans, resulted mainly in absolute mis-estimations of the RBE-weighted dose. Further, we observed that biological uncertainties are mitigated in treatment plans applying multiple fields due to averaging effects caused by many thousands of carbon ion pencil beams. The benefit and impact of including biological uncertainties in probabilistic optimization is part of ongoing research.

09:00
ICCR Rising Stars 2: Generalized Feature Analysis for Radiotherapy: Application to Head and Neck Outcome Predictions
PRESENTER: Mattea Welch

ABSTRACT. PURPOSE: Radiomics has seen increased interest in recent years and represents a movement towards automated information generation. In this work, we augment traditional radiomics pipelines to include interventional features extracted from radiation therapy (RT) plans (RTx-omic features). We built a fully automated pipeline for the extraction and analysis of clinical, imaging and interventional information. As a proof of concept, we tested the ability of RTx-omic and radiomic features to distinguish between positive and negative locoregional failures (LRF) in H&N patients. METHODS: CT images and planned RT dose volumes from 64 H&N oropharyngeal patients were quantified using a custom Pyradiomics module. Radiomic, RTx-omic and clinical features were used to predict LRF at 3 years with Random Forests (RF), Logistic Regression with Recursive Feature Elimination (LOG) and Isolation Forests (IF). Training and validation were repeated 100 times using stratified subsampled portions of the data, 75% and 25% respectively, without replacement. The mode prediction for each patient across the 100 subsampled fittings was used to calculate the area under the curve (AUC), and confidence intervals (CI) were calculated with bootstrapping. RESULTS & DISCUSSION: Clinical features with LOG modelling had the highest AUC of 0.80 (0.65-0.96) for LRF prediction at 3 years. However, the large CIs prevent us from definitively saying one model is better than another, and indicate that too few LRF events are present in our dataset. These results are not to say that imaging and interventional features do not provide additional information important to the prediction of LRF, only that the current data and features do not allow immediate conclusions to be drawn. These results do demonstrate the potential to apply automated information generation and Big Data methods to data not previously used in prognosis generation.
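
The following scikit-learn sketch illustrates the general idea of the repeated stratified subsampling, mode prediction and AUC evaluation described above, using the LOG (logistic regression with recursive feature elimination) variant. The toy feature matrix, labels and parameter choices are placeholders, not the authors' actual pipeline or data.

```python
# Hedged sketch of the subsampled-training / mode-prediction idea, with
# invented toy data standing in for the radiomic/RTx-omic/clinical features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 20))            # 64 patients, 20 features (toy data)
y = rng.integers(0, 2, size=64)          # locoregional failure at 3 years (toy)

preds = np.full((100, len(y)), -1, dtype=int)
for i in range(100):
    idx_tr, idx_te = train_test_split(
        np.arange(len(y)), test_size=0.25, stratify=y, random_state=i)
    model = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
    model.fit(X[idx_tr], y[idx_tr])
    preds[i, idx_te] = model.predict(X[idx_te])

# Mode prediction per patient over the repeats in which it was held out.
mode_pred = np.array([np.bincount(col[col >= 0]).argmax() for col in preds.T])
print("AUC:", roc_auc_score(y, mode_pred))
```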

09:15
ICCR Rising Stars 4: The importance of evaluating the complete knowledge-based automated planning pipeline
PRESENTER: Aaron Babier

ABSTRACT. Purpose: To determine how knowledge-based planning (KBP) methods combine with optimization methods in two-stage knowledge-based automated planning (KBAP) pipelines to produce better radiation therapy treatment plans.

Methods: We trained two KBP methods, a generative adversarial network (GAN) and a random forest (RF) algorithm with the same 130 treatment plans. We used the models to predict the dose distribution for 87 out-of-sample patients which we then used as input to two optimization models. The first optimization model, reverse weight optimization (RWO), estimates weights for structure-based objectives from a predicted dose distribution and generates new plans using conventional inverse planning. The second optimization model, dose mimicking (DM), minimizes the sum of quadratic penalties between the predictions and the generated plans using several voxel- and structure-based objectives. Altogether, four KBAP pipelines were constructed and subsequently benchmarked against the corresponding clinical plans using clinical criteria to quantify plan quality and the gamma index (5%/5mm) to quantify spatial differences.
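
As a rough illustration of the dose-mimicking (DM) idea described above, reduced to voxel-wise quadratic penalties only, the sketch below finds non-negative beamlet weights whose resulting dose best matches a predicted dose in the least-squares sense. The influence matrix, predicted dose and dimensions are invented toy quantities, not the authors' optimization model.

```python
# Minimal sketch of a dose-mimicking objective: find non-negative beamlet
# weights w so that the achievable dose A @ w matches a KBP-predicted dose
# via a sum of quadratic voxel penalties. All quantities are toy values.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_voxels, n_beamlets = 200, 30
A = rng.uniform(size=(n_voxels, n_beamlets))     # dose-influence matrix (toy)
d_pred = rng.uniform(40.0, 60.0, size=n_voxels)  # KBP-predicted voxel doses

# Non-negative least squares == minimizing sum_i ((A w - d_pred)_i)^2, w >= 0.
w, residual = nnls(A, d_pred)
d_plan = A @ w
print("mean |d_plan - d_pred|:", np.abs(d_plan - d_pred).mean())
```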

Results: GAN+RWO plans satisfied the same criteria as their corresponding clinical plans (78%) more often than any other KBAP pipeline. However, the GAN model did not necessarily provide the best prediction for the second-stage optimization models. Specifically, the RF+RWO and RF+DM plans satisfied all clinical criteria 25 and 15 percentage points more often than GAN+DM plans, respectively. When analysing spatial differences, the best plans were generated using RF predictions as input to either optimization method (γ = 0.86); this was 0.06 and 0.04 better than the GAN+RWO and GAN+DM plans, respectively.

Conclusion: Many papers in the literature focus solely on KBP prediction and do not perform post-optimization plan comparisons. We find that state-of-the-art KBP techniques may produce treatment plans with considerable quality variation when paired with different optimization algorithms. Thus, it is critical that future KBAP research report results from the full pipeline.

09:30
ICCR Rising Stars 5: Liver-ultrasound based motion model for lung tumour tracking in PBS proton therapy
PRESENTER: Miriam Krieger

ABSTRACT. Introduction: Mobile tumours are a challenge for pencil beam scanned (PBS) proton therapy due to potential geometric target miss and interplay effects. One solution is to track the tumour with the proton beam using on-line predictions of target motion. In this study, the potential of 2D ultrasound (US) imaging of the liver as a motion surrogate for lung tumour motion has been investigated. Materials & Methods: Simultaneous 2DUS of the liver and 4DMRI of the lung were acquired for two volunteers. From these, pseudo-4DCTs of two lung patients were generated by warping the end-exhale phase CT with deformation vector fields (DVF) derived from 4DMRI. Using a statistical motion model, features in the US images were correlated to DVFs in the lung using Gaussian process regression, with principal component analysis for dimensionality reduction. To test the model, PBS treatment plans were optimised on the single-phase CT, with 4D dose distributions being calculated for both ground truth (the motion extracted from 4DMRI) and liver-US-predicted lung motion. Tracking was simulated using both predicted tracking, in which the pencil beam position was adapted according to the predicted motion, and ideal tracking, in which the ground truth motion adapted the beam positions. All calculations were repeated with 9x rescanning (re-tracking). Resulting dose differences in the CTV between modelled and ground truth motion were analysed. Results: From the 4D dose distributions, only minor uncertainties resulted from the use of the liver-US-modelled DVF when using re-tracking: for 75% of the scenarios, VDiff>10% to the CTV was <10% and for 56% it was <5%. Discussion & Conclusions: It is possible to predict 4D dose distributions in lungs using 2D abdominal ultrasound images with acceptable accuracy for re-tracking, which is a promising technique for online adaptation of PBS proton treatments with no additional radiation dose to the patient.
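
The statistical motion model described above combines dimensionality reduction with regression; a minimal sketch of that combination, assuming flattened DVFs, scikit-learn PCA and Gaussian process regression, is given below with entirely synthetic arrays standing in for the ultrasound features and lung DVFs.

```python
# Illustrative sketch: reduce deformation vector fields (DVFs) with PCA and
# regress the PCA scores on ultrasound-derived surrogate features with
# Gaussian process regression. All arrays and dimensions are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
n_frames, n_dvf, n_us = 80, 3000, 10
dvfs = rng.normal(size=(n_frames, n_dvf))      # flattened lung DVFs per frame
us_feats = rng.normal(size=(n_frames, n_us))   # features from 2D liver US

pca = PCA(n_components=3).fit(dvfs)            # dimensionality reduction
scores = pca.transform(dvfs)

gpr = GaussianProcessRegressor().fit(us_feats, scores)

# At "treatment" time: predict a full DVF for a new ultrasound frame.
new_us = rng.normal(size=(1, n_us))
predicted_dvf = pca.inverse_transform(gpr.predict(new_us))
print(predicted_dvf.shape)                     # (1, 3000)
```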

09:45
ICCR Rising Stars 6: Optimising methodology for 4DCT radiomic feature extraction to predict distant failure in NSCLC patients treated with SBRT
PRESENTER: Angela Davey

ABSTRACT. Purpose: Predictors of distant failure (DF) are unknown for non-small-cell lung cancer patients treated with stereotactic body radiotherapy. For these patients, few radiomic studies have been performed, partly due to 4DCT planning, as an internal gross tumour volume (iGTV) is defined instead of a GTV. Additionally, no studies have investigated how to optimally extract features from 4DCT. In this abstract, we develop and propose a framework to optimally extract radiomic features from 4DCT, investigating tumour and peritumoural features as predictors of DF. Method: A GTV was generated on the reference phase (GTV50) for 275 patients using a novel methodology. Local rigid registration was used to map GTV50 to all phases. The peritumoural border was defined as 3mm inwards and outwards from GTV50. Statistical and texture features were extracted from the tumour and peritumoural border on all phases, before and after edge enhancement (Laplacian of Gaussian). Three methods were used to combine feature values from all phases: 1) average values, 2) values from exhale, 3) values from the optimum phase per patient. The optimum phase had the minimum absolute difference in feature value compared to its two neighbouring phases. For all methods, redundant features were removed. Feature selection was performed (backward selection (BS)/LASSO) to build a multivariate model per method. Results: The optimal phase varied across patients. No clinical variables were found to be significant in univariate analysis or correlated with radiomic features. LASSO and BS detected the same significant variables per method. The personalised phase model showed the best performance (concordance index = 0.77) compared to exhale (0.72) and mean (0.68), with additional tumour features selected. No peritumoural features were significant in multivariate analysis. Conclusion: This work presents a framework for extracting radiomic features directly from 4DCT. Utilising a personalised approach led to improved model performance and identified more tumour features as prognostic for DF, outperforming clinical variables.
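
A small sketch of combination method (3) above, selecting per patient the phase whose feature value differs least from its two neighbouring phases, is given below; the cyclic treatment of the phases and the toy feature values are illustrative assumptions.

```python
# Sketch of the "optimum phase" rule: per patient, pick the respiratory phase
# whose feature value has the smallest absolute difference to its neighbours.
import numpy as np

def optimum_phase_value(feature_per_phase):
    """feature_per_phase: 1D array of one feature over the breathing phases."""
    f = np.asarray(feature_per_phase, dtype=float)
    # Sum of absolute differences to the previous and next phase (cyclic).
    diffs = np.abs(f - np.roll(f, 1)) + np.abs(f - np.roll(f, -1))
    return f[np.argmin(diffs)]

features = np.array([[1.0, 1.1, 1.5, 1.2, 1.05],    # patient 1, 5 phases (toy)
                     [2.0, 2.4, 2.5, 2.45, 2.1]])   # patient 2
print([optimum_phase_value(row) for row in features])
```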

10:40-12:30 Session 15A: Motion, Deformation and Tracking
Location: Opera A
10:40
Deformable Registration and Dose Accumulation in Radiation Therapy

ABSTRACT. Deformable image registration (DIR) has been an active area of research over the past two decades. In this time, DIR tools have translated from research tools under development to clinically integrated algorithms that are now demonstrating the potential for clinical impact in treatment planning, image-guided delivery, and treatment assessment. DIR algorithms have advanced, demonstrating voxel-level accuracy in integrating multi-modality and physiological imaging to form a comprehensive model of the patient. The application of DIR for dose accumulation to enable evidence-driven adaptive radiotherapy and improved correlations with outcomes is an area of exciting clinical translational research. This lecture will discuss the current status of DIR in translational research for dose accumulation, adaptive radiotherapy, and improved assessment of treatment response.

11:10
Multiparametric MRI assessment of gross tumour volume and hippocampal changes over the course of brain cancer radiotherapy
PRESENTER: Michael Jameson

ABSTRACT. Heterogeneity in tumour activity, as well as in response to standard therapies, remains a challenging clinical problem in brain cancer. Most primary brain cancer patients recur locally, either at the original site of disease or at the margin, following radical radiotherapy (RT). The aim of this project is to describe functional changes in the target volume and organs at risk over the course of radiotherapy. This is an ethical-review-board-approved prospective pilot study aiming to recruit both primary and secondary brain cancer patients. The study schedule involved imaging at RT planning (i.e. baseline, BL), mid-way through treatment (for patients receiving 30 fractions) (FU1), at the end of treatment (FU2) and four weeks post treatment (FU3). The MRI sequences employed were anatomical T1- and T2-weighted, diffusion weighted imaging (DWI), dynamic contrast enhancement (DCE), susceptibility weighted imaging (SWI) and arterial spin labelling (ASL) on a 3T Siemens Skyra. All image segmentation and analysis was performed in MiM and Matlab. There was a reduction in gross tumour volume between BL and FU3, with mean values of 43.06 cm3 (σ=32.68) and 35.07 cm3 (σ=26.42), respectively. Similarly for the hippocampus, with a mean BL volume of 2.34 cm3 (σ=0.68) and FU3 volume of 1.95 cm3 (σ=0.58). An increase in apparent diffusion coefficient was seen throughout the course of treatment for the gross tumour volume: 1266.45, 1429.09, 1549.90 and 1527.91 ×10-6 mm2s-1 for BL, FU1, FU2 and FU3, respectively. Linking functional information of this kind to patient outcomes is key to the development of personalised adaptive RT for brain cancer. The limitations of this study include the relatively small patient numbers, heterogeneous cohort, varying prescription (e.g. standard versus hypofractionated RT techniques) and reliance on manual image segmentation. The results of this pilot study will form the basis for a second phase focusing on glioblastoma with multi-institution recruitment.

11:20
Predicting inter-fractional anatomical change in head and neck cancer patients using diffeomorphic image registration
PRESENTER: Megan Zoe Wilson

ABSTRACT. Purpose/objectives: Modelling inter-fractional anatomical changes is important in the context of developing robust radiotherapy treatment planning and adaptive radiotherapy strategies. We propose a method for predicting anatomical changes during radiotherapy for head and neck (HN) cancers. The method requires generation of a population-based model of inter-fractional deformations. The model is used to generate virtual CTs representing likely anatomical changes of a new patient from their planning CT alone.

Materials/Methods: Planning CTs and six repeat-CTs (rCTs), recorded at weekly intervals, were analysed retrospectively for 20 HN patients undergoing radiotherapy. Intra-patient anatomical deformations between the planning CT and rCTs were transformed to an average-shape atlas. The population-based mean model was then built and evaluated using a leave-one-out cross-validation for all 20 patients. An open-source diffeomorphic image registration algorithm, NiftyReg, was used and statistics were computed in the Log-Euclidean framework. A novel efficient approach to transforming the patient-specific deformations to the atlas space was adopted. This approach ensures that the underlying topologies of the transformations are preserved.

Results: The predicted virtual CTs were evaluated using quantitative metrics that compared predicted contours of organs-at-risk (OAR) with contours drawn manually on the rCTs. For all OAR considered, the predictive model gave OAR contours that, from the third week of treatment onwards, were more accurate than assuming no anatomy changes after planning. For example, the difference from the end-of-treatment rCT volume of the parotid glands was (31+/-9)% using the predictive model, compared with (76+/-10)% assuming the planning anatomy.

Conclusions: An inter-fraction predictive model has been developed for HN cancer patients, which allows more accurate predictions of anatomical changes than assuming planning CT anatomy. We aim to apply this prediction framework to a larger dataset of HN patients using various image guidance protocols, and to assess the dosimetric implications.

11:30
Geometric accuracy of surrogate-driven respiratory motion models for MR-guided lung radiotherapy
PRESENTER: Björn Eiben

ABSTRACT. MR-Linacs offer unprecedented motion monitoring potential during treatment with excellent soft tissue contrast, but high-quality 3D images cannot currently be acquired fast enough to image respiratory motion. 2D cine-MR images facilitate 2D lung tumour monitoring, but do not provide information outside the imaging plane, preventing downstream adaptation methods that rely on temporally resolved volumetric patient information. Surrogate-driven motion models (SDMMs) can provide this information. Our method uses multi-slice 2D images to build an SDMM and generate a motion-compensated super-resolution reconstruction (MCSR) of the anatomy. We quantify the SDMM's geometric accuracy using the XCAT anthropomorphic phantom. An XCAT patient anatomy with a tumour in the lower right lung was animated with a volunteer's breathing trace, and an MR-like image and ground-truth deformation vector field (DVF) were generated for every time point. An acquisition pattern of interleaved motion and surrogate slices was simulated. Motion slices capture the anatomy in sagittal and coronal orientations and overlap by 8mm to facilitate a super-resolution reconstruction. Each motion slice was acquired three times. From the surrogate slices the skin and diaphragm motion was measured to generate surrogate signals. An SDMM was fitted to the data and an MCSR was generated using our motion modelling methodology. Treatment delivery was simulated on a later part of the breathing trace. Surrogate signals were calculated and used as input to the SDMM to generate estimated DVFs. Representative instances were selected and evaluated in terms of deformation field error (DFE) and tumour centre of mass (COM) error against the ground truth simulation. Results were weighted according to the relative occurrence of each instance during beam-on time. The mean DFE/COM error was reduced by the SDMM from 3.1mm/3.9mm to 1.1mm/0.7mm, below the voxel size, highlighting the SDMM's potential to produce volumetric patient information of high spatial and temporal resolution.

11:40
Deformable Image Registration using Structure Guidance for Dose Accumulation
PRESENTER: Marian Himstedt

ABSTRACT. With the rise of modern machine learning algorithms that allow for high-quality organ delineation on planning CTs as well as CBCTs, adaptive radiotherapy is becoming more and more feasible. However, to warp the administered dose for treatment monitoring, a deformable image registration (DIR) step between the daily CBCT and the planning CT is necessary. Common DIR algorithms such as [1] struggle in the presence of large deformations, as induced by e.g. the bladder or rectum, and give rise to errors in dose accumulation. We propose an extended DIR approach that tackles this issue by incorporating corresponding delineated structures on planning CT scans and daily CBCT scans to guide the deformation direction. We provide quantitative results obtained for uterine cancer cases. Our algorithm reduces the differences to gold-standard values by up to 3-5 times compared with common DIR algorithms on clinical goals for the PTV and OARs.

11:50
In silico validation of motion-including dose reconstruction for MR-guided lung SBRT using a patient specific motion model
PRESENTER: Jenny Bertholet

ABSTRACT. Motion-including dose reconstruction (MIDR) aims at reconstructing the dose actually delivered to the moving anatomy during radiotherapy. Patient-specific motion models (PSMM) can be used to determine the time-resolved anatomy during treatment delivery on an MR-linac for MIDR. In this study, PSMM-based MIDR was validated for MR-guided lung SBRT. The digital XCAT phantom was used to generate a ground truth moving anatomy (GT-XCAT) based on in-vivo measured motion. Using the first 10 minutes of the motion trace, GT-XCAT volumes were subsampled to simulate pre-treatment interleaved sagittal/coronal MR acquisition with a sagittal navigator slice for breathing signal extraction. A PSMM was fitted and a motion-compensated super-resolution image (MCSRI) was reconstructed simultaneously. An MR-linac treatment plan for 3-fraction lung SBRT was designed on a reference GT-XCAT. GT-XCATs were generated for the remainder of the motion trace. The intra-treatment time-resolved anatomy was estimated via MCSRI deformation using the PSMM and the breathing signals extracted from navigator slices sub-sampled from GT-XCATs. Treatment delivery was simulated in our in-house emulator. The treatment fluence was discretized into sub-beams, each associated with the GT or deformed-MCSRI anatomy that it was delivered to. The dose was accumulated onto the reference anatomy. For comparison, shift-MIDR was calculated emulating tumour motion as sub-beam isocenter shifts on the static reference GT-XCAT anatomy. For the plan dose, GT-MIDR, PSMM-MIDR and shift-MIDR respectively: GTV-D98% was 70.8Gy, 67.7Gy, 69.0Gy and 67.4Gy; GTV-D50% was 77.7Gy, 75.2Gy, 75.5Gy and 76.0Gy; heart-V30Gy was 48.4cc, 55.6cc, 53.0cc and 64.7cc; Oesophagus-V2% was 22.6Gy, 21.7Gy, 21.7Gy and 23.1Gy. Evaluated against GT-MIDR, PSMM-MIDR was more accurate than shift-MIDR for organ at risk (OAR) dose estimation and similar for target dose estimation. The MR-based PSMM was shown to be suitable for MIDR of the target and OAR. Shift-MIDR is not intended to correctly estimate OAR dose but may be used for target dose estimation.

12:00
Respiratory motion models built using MR-derived signals and different amounts of MR image data from multi-slice acquisitions
PRESENTER: Elena Huong Tran

ABSTRACT. MR-Linacs provide 2D cine-MR images capturing respiratory motion before and during radiotherapy treatment. Surrogate-driven respiratory motion models can estimate the 3D motion of the tumour and organs-at-risk with high spatio-temporal resolution using surrogate signals extracted from 2D cine-MR images. Our motion modelling framework fits the model to unsorted 2D images, producing a correspondence model and a motion-compensated super-resolution reconstruction (MCSR). This study investigates the effect of the training data size used to build the respiratory motion models, since long acquisition and processing times limit their application for MR-guided radiotherapy. Four volunteers were scanned on a 1.5T MR scanner with a 3-minute interleaved multi-slice acquisition of 2x2x10mm3 surrogate and motion images, repeated 10 times: sagittal surrogate images from a fixed location, and sagittal and coronal motion images covering the thorax. Two surrogate signals were generated by applying principal component analysis to the deformation fields obtained from registering the surrogate images. For each volunteer we built motion models using data from 1, 3, 5, and 10 repetitions, generating a 2x2x2mm3 MCSR. We also reconstructed super-resolution images without motion compensation (no-MCSR) to show the improvement obtained with the models. Visual assessment showed plausible estimated respiratory motion with breath-to-breath variations. We computed intensity profiles along the boundary between diaphragm and lung to assess the image quality of the super-resolution reconstructions. We calculated the mean absolute difference (MAD) between the training motion images and the corresponding model-simulated images, averaged over all images and volunteers. The MCSRs presented sharper intensity profiles than the no-MCSRs, indicating successful motion compensation. The MAD increased with the number of repetitions and was larger without motion compensation (MCSR/no-MCSR: 2.10/2.40 with 1 repetition, 2.33/2.56 with 10 repetitions). Computational times to build the models without GPU implementation ranged from ~30 minutes (1 repetition) to ~380 minutes (10 repetitions). These promising results indicate the feasibility of short acquisition and processing times.

12:10
Respiratory motion model derived from CBCT projection data

ABSTRACT. Respiratory motion can be a source of errors and uncertainties when delivering radiotherapy treatment. Precise knowledge of the respiratory-induced anatomical motion may lead to more accurate and effective treatments. 4DCT can be used to account for respiratory motion during planning, but this may not give a good representation of the motion at treatment time due to inter-fraction variations in the motion and anatomy. 4D-CBCT can be acquired just prior to treatment to provide a better estimate of the motion at treatment time. However, 4D-CBCT can suffer from poor image quality due to the assumption of regular breathing and the need to bin the projection data. Another solution is to use surrogate-driven respiratory motion models to estimate the motion. Typically these models are built in two stages: 1) image registration is used to determine the motion of the internal anatomy; 2) a correspondence model is fitted that relates the motion to the surrogate signal(s). In this work we have utilised a recently developed generalised framework that unifies image registration and correspondence model fitting into a single optimisation. This enables the model to be fitted directly to unsorted/unreconstructed data. This work presents the first application of this framework to CBCT projection data. Since evaluation of the model on real data is difficult because the ground truth motion is unknown, we have used an anthropomorphic software phantom to simulate CBCT projection data and evaluate the generated motion model. Results from the generated model were assessed both quantitatively and qualitatively. We compared the results of the motion model to the ground truth motion using the sum of squared differences, the Dice coefficient and the centre of mass of the tumour in the volumes. All the results obtained indicated that the model generated with the CBCT projection data was able to estimate the ground truth motion well.

12:20
Hybrid 2D/4D MRI for motion management in liver radiotherapy
PRESENTER: Martin Fast

ABSTRACT. To facilitate daily pre-beam plan adaptation on the MR-linac, we previously developed a multi-slice coronal 4D-MRI sequence. We now propose a novel hybrid 2D/4D MRI method that continuously acquires 4D-MRI data during beam-on, while also monitoring the tumour position in real-time by relying on a liver motion model intrinsic to the 4D-MRI reconstruction. For this study, we acquired 4D-MRIs on the Elekta Unity MR-linac (Elekta AB, Stockholm, Sweden) for one patient with a liver metastasis and 5 healthy volunteers. A stack of 25 coronal slices was positioned to capture the moving liver. The acquisition of each image stack (dynamic) was repeated 120 times, resulting in a total acquisition time of 16 min. Retrospectively, each dataset was split into two parts. Pre-beam: dynamics 1-30 were used to create a liver motion model. To enable the self-sorted 4D-MRI reconstruction, the motion model converts cranial-caudal diaphragm motion measured across the liver to diaphragm motion at the position of the tumour slice. In addition, linear regression was used to correlate diaphragm motion with tumour motion. Beam-on: dynamics 31-120 were used to determine the real-time diaphragm motion per slice location. Next, the pre-beam motion model was applied to estimate the cranial-caudal diaphragm motion in the tumour slice in real-time. Additionally, tumour motion was predicted by the pre-beam linear regression motion model. Good spatial agreement was observed when comparing the real-time motion with the self-sorting motion. Averaged over all datasets, the median (RMS) difference between the two trajectories was 1.1 (2.3) mm. The median (RMS) difference between predicted and true tumour motion, as determined by rigid registrations on the tumour mask, was -0.8 (2.2) mm. In future, the use of the 4D motion model could be expanded by also correlating the real-time diaphragm motion with left-right and anterior-posterior tumour motion.
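
A toy sketch of the pre-beam correlation step described above, fitting a linear regression of tumour position against diaphragm position and then applying it to "beam-on" diaphragm motion, is shown below; the synthetic sinusoidal traces and coefficients are placeholders rather than patient data.

```python
# Toy sketch: fit a linear diaphragm-to-tumour motion model on pre-beam data,
# then predict tumour motion from newly measured diaphragm motion.
import numpy as np

t = np.linspace(0.0, 60.0, 600)                    # 60 s of "pre-beam" data
diaphragm = 10.0 * np.sin(2 * np.pi * t / 4.0)     # cranio-caudal motion, mm
tumour = 0.6 * diaphragm + 1.5                     # correlated tumour motion (toy)

slope, intercept = np.polyfit(diaphragm, tumour, deg=1)

# "Beam-on": predict tumour position from the real-time diaphragm signal.
diaphragm_live = 10.0 * np.sin(2 * np.pi * np.linspace(60, 62, 20) / 4.0)
tumour_pred = slope * diaphragm_live + intercept
print(slope, intercept, tumour_pred[:3])
```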

10:40-12:30 Session 15B: Auto-segmentation
Location: Opera C
10:40
Highlight talk: Using Federated Data Sources and Varian Learning Portal Framework to Train Neural Network for Automatic Organ Segmentation
PRESENTER: Petr Jordan

ABSTRACT. The radiation treatment planning process involves several steps where data-driven approaches, such as machine learning (ML), could be used to improve the quality and reproducibility of patient care while decreasing the variability in treating standard cases. Typical examples of such steps include organ segmentation, tumor identification, or 3D dose prediction. Training high-performance ML models requires access to a large set of related data, which could be obtained by combining data sources from several clinics. However, sharing personal health information (PHI) among clinics located in different countries is a highly regulated process imposing strict restrictions on data transfer.

Data anonymization is one of the methods used to protect patients' privacy while also enabling data sharing. However, due to large variations in the data types, the anonymization process needs to be planned for each individual case separately, which requires additional expertise and leads to increased costs. The data-sharing problem becomes even more challenging when the data needed for training the ML models contain identifiable elements which cannot be removed without decreasing data quality and thus the model's prediction power.

An alternative approach is to use a distributed training process for the ML algorithm. In a data-parallel distributed training scheme, each clinic has a copy of the model, which is trained using only its own dataset. The training results are periodically aggregated across the participating sites during the training process.
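
The sketch below illustrates this synchronized data-parallel idea in its simplest form: periodic averaging of locally updated weights ("federated averaging" style). The per-site update is faked with a random step; this is a conceptual illustration, not the Varian Learning Portal implementation.

```python
# Conceptual sketch of data-parallel training with periodic weight averaging
# across sites. Each "site" is reduced to a numpy weight vector and a fake
# local update; only the weights, never the data, leave the sites.
import numpy as np

n_sites, n_weights = 3, 5
global_w = np.zeros(n_weights)

def local_update(w, seed):
    """Stand-in for a few epochs of training on one clinic's private data."""
    local_rng = np.random.default_rng(seed)
    return w - 0.1 * local_rng.normal(size=w.shape)   # fake gradient step

for round_idx in range(10):                            # aggregation rounds
    site_weights = [local_update(global_w.copy(), round_idx * n_sites + s)
                    for s in range(n_sites)]
    global_w = np.mean(site_weights, axis=0)           # synchronized averaging
print(global_w)
```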

We illustrate here a case study to train a deep neural network organ segmentation model using the synchronized data-distributed framework implemented in the Varian Learning Portal, where PHI data stay inside the institutions. The prediction performance of the trained model was on the same level as that of a model trained in a centralized manner, where all training data were pooled in one center.

10:55
Highlight talk: Using a Bayesian neural network approximation to quantify the uncertainty in segmentation prediction on prostate cancer
PRESENTER: Dan Nguyen

ABSTRACT. Accurate and robust segmentation of the tumor in the radiation oncology workflow is pivotal to the success of treatment planning and, ultimately, patient outcome. With typically only one ground truth segmentation per image, many modern neural network implementations are unable to produce an uncertainty estimate. Because the test set is limited, once the network is deployed it may encounter new edge cases and predict poorly. Unfortunately, the model will give no indication that a poor prediction was made. Bayesian methods can quantify uncertainty with respect to data precision and internal error, but have historically been limited by high computational cost. Recently, Gal and Ghahramani demonstrated an efficient Bayesian approximation for neural networks. Implementing this method, we used dropout and Monte Carlo (MC) estimation during the training and prediction phases of the model, respectively. Utilizing 127 training and 42 validation prostate cancer patients, and 26 patients for a hold-out test set, we localized the prostate in a 96 x 96 x 48 array with a voxel size of 1.17 mm x 1.17 mm x 2 mm. We then trained a U-net with ResNeXt blocks to learn the prostate segmentation. To evaluate the uncertainty, we developed a score function that rewards high uncertainty in mislabeled voxels and low uncertainty in correctly labeled voxels, and penalizes the opposite. The neural network segmentation predictions achieved a high average Dice coefficient of 0.87 on the test data set. On average we achieved a positive uncertainty score of 0.0979 ± 0.0407, indicating that, overall, the high and low uncertainties are correctly aligned to the mislabeled and correctly labeled voxels, respectively. Achieving both a high Dice coefficient and the capability to quantify uncertainty makes the framework highly suitable for clinical deployment.
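
A hedged PyTorch sketch of the Monte Carlo dropout procedure, keeping dropout active at prediction time, running repeated stochastic forward passes and reading the per-voxel variance as an uncertainty map, is shown below. The tiny convolutional model and random input volume are stand-ins for the U-net/ResNeXt segmentation network described above.

```python
# MC dropout sketch: dropout stays stochastic at inference, and disagreement
# across repeated forward passes is interpreted as per-voxel uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout3d(p=0.2),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
    nn.Sigmoid(),
)

x = torch.randn(1, 1, 48, 96, 96)      # one toy CT volume (batch, channel, z, y, x)

model.train()                          # keep dropout active at prediction time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(20)])   # 20 MC passes

mean_seg = samples.mean(dim=0)         # soft segmentation prediction
uncertainty = samples.var(dim=0)       # high where the MC passes disagree
print(mean_seg.shape, uncertainty.shape)
```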

11:10
On the impact of the training dataset on a deep fully convolutional neural network (DFCNN) for automatic segmentation of the prostate gland on CT images
PRESENTER: Chang Liu

ABSTRACT. Introduction: Deep fully convolutional neural networks (DFCNN) have been reported to be effective for image segmentation. The purpose of this work was to determine how a DFCNN learns from training data for automatic contouring of the prostate gland. Materials & Methods: Planning CT images of prostate cancer patients (N=972), on which prostate glands were contoured, were retrospectively recruited. All images and contours were cropped around the center of the prostate to a uniform size of 128x128x64 / 1x1x1.5mm and used to train a DFCNN. Of the 972 total datasets, 777 were used for training. Models were chosen based on validation data from the remaining 20% (195 datasets). To analyze the impact of gradual intensity variations, we built several different training data sets using principal component analysis (PCA). All images were projected onto the first 1, 5, 10, 20, 40, 60, 80, and 100-900 principal components (PCs) to produce 16 perturbed training data sets. Seventeen DFCNN models were built. Training error was reported in terms of the Dice coefficient (DSC) between the predicted and physician-delineated contours. We also trained a baseline model using Gaussian white noise images as input. Results: The model trained with Gaussian white noise images showed DSC=0.73±0.20. The training data set reconstructed using only the first principal component showed DSC=0.76±0.1, while that using all PCs had DSC=0.86±0.04. As the number of PCs increases, the mean DSC increases and the standard deviation decreases. Discussion & Conclusions: Our experiments show that it is generally more difficult (larger training error) for a DFCNN to learn the contour of the prostate when some intensity variations are removed from the training images. The piece-wise linear error curve indicates that the DFCNN is more sensitive to certain variations, e.g., in the direction of the 100th-600th principal components. The DFCNN was able to learn, as opposed to just memorizing the inputs.
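
The PCA-based perturbation of the training images described above can be sketched as projecting flattened images onto the first k principal components and reconstructing them, as below; the image dimensions, number of components and random data are toy assumptions.

```python
# Sketch of the perturbation idea: reconstruct training images from their
# first k principal components, removing the remaining intensity variation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.normal(size=(100, 32 * 32))       # 100 flattened toy "CT slices"

def reconstruct_with_k_components(data, k):
    pca = PCA(n_components=k).fit(data)
    return pca.inverse_transform(pca.transform(data))

train_set_k1 = reconstruct_with_k_components(images, 1)    # most variation removed
train_set_k20 = reconstruct_with_k_components(images, 20)  # closer to the originals
print(np.abs(images - train_set_k1).mean(),
      np.abs(images - train_set_k20).mean())
```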

11:20
Transfer Learning for Sarcopenia Segmentation
PRESENTER: Andrew Green

ABSTRACT. Image segmentation to define structures in routine imaging using neural networks is an emerging technique in radiotherapy. So far, networks have been trained from scratch to segment defined structures. However, this approach is limiting as it requires a large amount of high-quality training data. In machine learning, the concept of transfer learning has been developed; this enables knowledge to be transferred to accomplish a different task. Transfer learning has previously been applied in image classification, where a network can be re-trained to classify images not present in the original training data. Such an approach would enable large-scale adoption in radiotherapy as it minimises the required training data. In this work, we show the feasibility of transfer learning to segment skeletal muscle in medical images for the first time. A VGG-16 network was modified to perform segmentation by adding transpose convolutions and skip connections in the last levels. The modified network is trained to reproduce clinician segmentations of the skeletal muscle in a single slice at the L3 vertebral level. The training set consisted of 160 image-segmentation pairs, with 30 reserved for validation and 10 for testing. Network segmentations are further refined using a conditional random field (CRF) post-processing step. A five-fold validation was used to assess the accuracy of the network. Segmentations are compared to the ground truth using the Dice similarity coefficient and Hausdorff distance. The produced segmentations were visually similar to the clinician segmentations. This was confirmed by the mean Dice score (≥0.9) and mean Hausdorff distance (<5mm) for all validation splits. In three out of five cases, the addition of the CRF improved segmentation accuracy assessed by either Dice or Hausdorff distance. This work has demonstrated the feasibility of applying transfer learning for segmentation in routine medical images utilising only a relatively small training dataset.
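
For reference, the two evaluation metrics quoted above can be computed on binary masks as in the sketch below (Dice similarity coefficient and symmetric Hausdorff distance via SciPy's directed_hausdorff on voxel coordinates); the toy 2D masks are illustrative only.

```python
# Small reference sketch of the Dice similarity coefficient and the (maximum,
# symmetric) Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa, pb = np.argwhere(a), np.argwhere(b)   # masks -> point sets of coordinates
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
pred = np.zeros_like(truth)
pred[22:42, 21:41] = True
print("DSC:", dice(truth, pred), "HD (voxels):", hausdorff(truth, pred))
```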

11:30
Improving accuracy and robustness of deep convolutional neural network based thoracic OAR segmentation by training with data from local institution
PRESENTER: Quan Chen

ABSTRACT. Purpose: Deep convolutional neural networks (DCNNs) have demonstrated superiority over traditional methods for OAR segmentation in open challenges. However, we have observed that a high-performing challenge model performs much worse on patient cases from our institution. It is hypothesized that institution-specific factors produce subtle changes in the CT that create problems for the DCNN, and that this image difference can be learned by the DCNN. The purpose of this study is to investigate whether adding cases from our institution to the training can improve model performance at our institution. Materials & Methods: Our DCNN model, which achieved good performance in the 2017 AAPM Thoracic Auto-segmentation Challenge, was used in this study. 45 thoracic patient cases were randomly selected from our clinical practice. The contouring of each case was reviewed and adjusted if necessary by experts to ensure conformance to the same contouring guideline. Of the 45 cases collected, 30 were added to the training dataset to produce an improved model. The performance on the remaining 15 validation cases was evaluated using Dice scores, mean surface distance (MSD) and 95% Hausdorff distance (HD95). Results: The baseline model produced several outlier cases with significant mis-segmentation of the heart and esophagus. Overall performance on the heart and esophagus was: Dice (0.835, 0.667), MSD (7.4mm, 4.5mm) and HD95 (22mm, 16mm). After training with local cases, the performance improved to: Dice (0.921, 0.804), MSD (2.4mm, 1.3mm) and HD95 (7.1mm, 4.8mm). These metrics are very close to human expert performance. Conclusions: Adding cases from the local institution to the training dataset can improve the accuracy and robustness of the DCNN OAR segmentation model on cases from that institution.

11:40
Stepping-in segmentation of cardiac substructures with deep learning
PRESENTER: Huaizhi Geng

ABSTRACT. Purpose: We report a novel convolutional neural network (CNN) model for fast and consistent auto-segmentation of cardiac substructures. Methods: We applied the cascaded atrous convolution (CAC) and spatial pyramid pooling (SPP) modules to a conventional CNN model to capture multi-scale and high-resolution features from the images. We used data from 81 esophageal cancer patients undergoing radiotherapy. The cardiac substructures were manually contoured on the planning CT and reviewed by a cardiologist. We randomly chose 66 cases as the training set and used the remaining 15 cases for testing. The pericardium and great vessels were first segmented using the model; the pericardium segmentation was then fed to the model as prior knowledge for chamber segmentation; lastly, the left ventricle was used as prior knowledge for segmenting the left ventricle walls. The Dice similarity coefficient (DSC) and the mean distance to agreement (MDA) were calculated to quantify the segmentation accuracy for the testing data set. Results: Pericardium auto-segmentation demonstrated the highest accuracy (DSC: 0.91±0.03, MDA: 2.45±0.64 mm); followed by the aorta (0.83±0.04, 2.49±0.92 mm), left ventricle (0.81±0.07, 3.69±1.69 mm), right ventricle (0.77±0.07, 3.25±1.09 mm) and right atrium (0.72±0.08, 3.37±1.08 mm); while the left atrium (0.63±0.22, 6.58±8.61 mm), the pulmonary vessels (left: 0.65±0.15, 2.57±1.29 mm; right: 0.58±0.17, 3.63±2.53 mm; main: 0.67±0.11, 2.96±0.79 mm), superior vena cava (0.57±0.23, 3.28±2.32 mm), inferior vena cava (0.40±0.12, 4.33±1.21 mm), LV_Inferior (0.68±0.03, 0.96±0.17 mm), LV_Lateral (0.72±0.06, 0.87±0.27 mm), LV_Apical (0.70±0.05, 1.01±0.48 mm) and septum (0.79±0.05, 0.61±0.16 mm) showed reasonable accuracy. Conclusions: The CNN method appeared to be effective for auto-segmentation of the pericardium. Given the limited contrast of heart substructures on CT, when the pericardium is used as prior knowledge, the chambers of the heart can be auto-segmented with satisfactory accuracy. Using the left ventricle as prior knowledge also made auto-segmentation of the left ventricle walls feasible.

11:50
Automated segmentation of pulmonary fibrosis on CT images for radiation therapy applications

ABSTRACT. Pulmonary fibrosis (PF) is a relative contraindication for radiation therapy treatments due to the increased risks of pulmonary toxicity. In this preliminary study, an assisting tool to automatically segment PF on CT images is being developed for applications in radiation therapy. Applications include assisting clinicians in screening fibrotic patients before treatment planning and the assessment of PF progression over a radiation therapy treatment.

The tool is based on a deep fully convolutional neural network, which takes lung CT images and produces corresponding PF label maps. The proposed network includes dilated convolutions with small kernels, allowing the receptive field and the depth of the network to be increased without incurring any extra cost. Training was done in a 5-fold cross-validation (CV) scheme with 3 classes (healthy, ground-glass opacities and fibrosis). The dataset consists of high-resolution CT images of 66 patients, sparsely annotated by two radiologists, provided by a publicly available database. The ground truth is a partial segmentation, where only a few pixels per image are annotated.

Within the 5-fold CV, a global accuracy of 89.4% and individual accuracies of 97.2% (Dice similarity coefficient (DSC) 0.94), 75.1% (DSC 0.84) and 96.7% (DSC 0.91) were obtained for the healthy, ground-glass opacity and fibrosis classes, respectively. A fully annotated test patient was segmented with an accuracy of 75.8%. For that patient, 77.8% of the lungs were classified as fibrosis, while the radiation oncologist's evaluation was 53.7%. Most false positives are due to misassignments around the bronchovascular tree.

The efficient segmentation of healthy and PF patterns suggests that the proposed network could be used in a fibrotic patient screening application. This study is a work in progress, which includes the annotation of a radiation oncology database using planning CT images. An application to evaluate PF progression, based on the implemented network, is also being developed.

12:00
Physics-Based Data Augmentation for Deep Learning Organ Segmentation
PRESENTER: Petr Jordan

ABSTRACT. High-speed ring gantry systems with kV CBCT imaging capability enable single breath-hold CBCT acquisition protocols and provide significant image quality improvement in abdominal and thoracic applications. These recent advances are especially relevant in the context of automatic organ segmentation for IGRT and adaptive radiotherapy.

Current state-of-the-art automatic segmentation methods are based on encoder-decoder convolutional neural network architectures that require a substantial number of informative training images. To cope with the challenge of limited patient dataset size, researchers typically rely on online and offline data augmentation strategies, such as random intensity perturbations, spatial transformations, and deformations.

In this study, a physics-based data augmentation method is proposed for generating more realistic and diverse training datasets, and its ability to improve autosegmentation accuracy on clinical CBCT scans using a fully convolutional network (FCN) is shown.

This study demonstrates improved autosegmentation accuracy of state-of-the-art FCN models trained on physics-based augmented datasets, evaluated on half-fan high-speed CBCT scans. Furthermore, the feasibility of clinically acceptable organ segmentation accuracy in abdominal breath-hold CBCT scans, in the absence of a large CBCT training dataset, is demonstrated.

12:10
Sigmoid Segmentation via a Human-Like Deep Learning Approach
PRESENTER: Yesenia Gonzalez

ABSTRACT. Purpose: Recent advancements in deep learning have led to many successes achieving high accuracy in organ segmentation. However, one structure that still poses challenges is the sigmoid colon, due to its complex 3D shape and large inter- and intra-patient variations. A standard deep-learning approach that simply applies a neural network cannot achieve high accuracy. In this study, we propose a novel iterative deep-learning approach that segments the sigmoid colon in a human-like fashion.

Methods: We developed a U-Net structure that takes as input a CT slice, the corresponding sigmoid contour, and an adjacent CT slice, with the goal of predicting the sigmoid contour in the adjacent slice. After predicting the contour at a slice, the predicted contour was used as the input for the next prediction. This process continued along the superior-inferior and inferior-superior directions for multiple iterations to capture the entire organ. We further considered contours of other nearby organs as prior information, which was used to remove incorrect segmentations at each slice. The method was motivated by the behavior of a human, who segments the sigmoid slice by slice while considering relationships of organ contours between neighboring slices and the locations of other known organs. We collected 27 training CT volumes and five testing volumes. Results were evaluated using the Dice similarity coefficient (DSC) with manual contours as ground truth.

Results: The five testing volumes were completely segmented within four iterations. The method achieved an average DSC of 0.75 without using other organ contours and an average DSC of 0.85 with other organ contours considered.

Conclusion: We have developed a novel iterative deep-learning approach to segment the complex sigmoid colon. Incorporating other organ contours as prior information in the segmentation process improved segmentation accuracy.

12:20
Transfer learning from CT to MRI: auto-segmentation of the parotid glands using a deep convolutional neural network approach

ABSTRACT. Delineation of organs at risk is a time-consuming task in radiotherapy treatment planning and will have to be repeated with the introduction of daily treatment adaptations as envisioned, for instance, with MR-guided radiotherapy systems. Therefore, we aim to automate delineation of the parotid glands with a deep convolutional neural network (CNN). To compensate for the lack of annotated MRI data, we propose to make use of the wealth of publicly available contoured head and neck (H&N) CT images to pre-train a 2D CNN. Subsequently, we fine-tuned the learned model by training the network on the limited set of MR images available, using transfer learning. Imaging data consisted of CT and MR databases of H&N cancer patients. The MR database contained 27 T2-weighted pre-treatment images and the CT database 202 downloaded images, each with manual segmentations of the parotid glands. After preprocessing all images (resampling to a lower resolution and mapping to a common intensity range), we first fed the CT images as 2D axial slices into a network using the U-Net architecture. The CT network was trained for 70 epochs, optimising a Dice loss function with the Adam optimiser and a learning rate of 1e-5. With the pre-trained weights as a starting point, we fine-tuned the network by feeding the MR images at the same resolution to the network (60 epochs, learning rate 1e-4). With an average Dice similarity coefficient (DSC) of 0.81 ± 0.06 and an average mean surface distance (MSD) of 2.07 ± 1.69 mm, the accuracy was close to the inter-expert variability of 0.84 ± 0.06 (DSC) and 1.50 ± 0.77 mm (MSD), respectively. We believe this technique can be of great value for daily replanning in MR-guided radiotherapy, where the negligible decrease in performance compared to manual segmentation is greatly compensated for by the gain in speed and decrease in workload.
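
A hedged PyTorch sketch of the two-stage strategy described above, pre-training on CT with a soft Dice loss and the Adam optimiser at a low learning rate and then fine-tuning the same weights on MR at a higher learning rate, is given below. The one-layer stand-in network, random tensors and epoch counts are placeholders for the U-Net and the real CT/MR databases.

```python
# Transfer-learning sketch: the same weights are trained first on (toy) CT
# data and then fine-tuned on a smaller (toy) MR set with a higher learning
# rate, using a soft Dice loss.
import torch
import torch.nn as nn

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

net = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())

def train(images, masks, lr, epochs):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = soft_dice_loss(net(images), masks)
        loss.backward()
        opt.step()

ct_imgs, ct_masks = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64).round()
mr_imgs, mr_masks = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64).round()

train(ct_imgs, ct_masks, lr=1e-5, epochs=5)   # stage 1: pre-train on CT
train(mr_imgs, mr_masks, lr=1e-4, epochs=5)   # stage 2: fine-tune on MR
```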

10:45-12:30 Session 16: MCMA Rising Stars Competition
Location: Opera B
10:45
MCMA Rising Stars 1: Monte Carlo evaluation of proton range uncertainties using photon-counting CT
PRESENTER: Arthur Lalonde

ABSTRACT. Purpose: To evaluate the potential of photon counting CT (PCCT) to reduce proton beam range uncertainties with a full Monte Carlo (MC) environment.

Methods: A comprehensive simulation framework is used to simulate CT image acquisition and calculate proton dose distributions in a controlled ground truth virtual patient geometry. The virtual patient is created from real pelvic patient images, using contours made by an expert and tabulated tissue data. CT images of the virtual patient are created and reconstructed using the MIRT toolbox, while the spectral response of the PCCT is modelled following the distortion model of Schlomka et al. (2008). Single- and dual-energy CT (SECT and DECT) are used to benchmark the performance of PCCT. Simulated DECT and PCCT images are analyzed using the eigentissue decomposition (ETD) method developed in our group (Lalonde and Bouchard 2016, Lalonde et al. 2017). Fourteen mono-energetic proton pencil beams are simulated from different angles around the patient using the Monte Carlo code TOPAS/GEANT4. Lateral distributions and percentage depth dose curves are compared to the ground truth to evaluate the performance of all methods.

Results: In the presence of imaging artifacts, PCCT used with ETD noticeably reduces the error on proton range compared to both SECT and DECT. Root mean square errors of 2.03 mm, 1.38 mm and 0.86 mm are obtained for SECT, DECT and PCCT respectively. Lateral dose distributions are less affected by the imaging modality used, all three methods yielding sub-millimetric agreement with the reference dose distribution.

Conclusion: The potential of PCCT used with the ETD method to reduce proton range uncertainties in the context of MC dose calculation is demonstrated. MC dose calculation of fourteen proton pencil beams using simulated CT images suggests that the benefits of PCCT over DECT and SECT can be important in the presence of imaging artifacts.

11:00
MCMA Rising Stars 2: Intrinsic energy response of microDiamond detectors at kilovoltage energy photon beams
PRESENTER: Vaiva Kaveckyte

ABSTRACT. Purpose: An important characteristic of dosimetry detectors is their energy response, which consists of the absorbed-dose and intrinsic energy responses, and its dependence on beam quality. The former can be characterized using Monte Carlo (MC) simulations, whereas the latter (i.e., detector signal per absorbed dose to detector) is extracted from experimental data. The purpose of our study was to design and validate a method that predicts the intrinsic energy response of the microDiamond detector (PTW 60019, Freiburg, Germany) in kilovoltage photon beams, with a possible application in brachytherapy.

Material and Methods: Three microDiamond detectors and, for comparison, two silicon diodes (PTW 60017) were calibrated in terms of air kerma in six x-ray beam qualities (from 25 to 250 kV) and in terms of absorbed dose to water in a 60Co beam at the national metrology laboratory in Sweden. The PENELOPE/penEasy MC radiation transport code was used to decouple the absorbed-dose and intrinsic energy dependences of the detectors, which were modeled based on blueprints. The determined intrinsic energy response was verified using ophthalmic BT 125I seeds (effective photon energy of 28 keV) and applying MC-calculated correction factors.

Results: The intrinsic energy response of the microDiamond increased from approximately 1 nC/Gy in the 60Co beam to 2 nC/Gy in the 25 and 50 kV beams. The difference gradually decreased as the effective kV beam energy increased. The intrinsic energy response of the silicon diodes varied within 10%, although they have a more pronounced absorbed-dose energy dependence. Experimentally determined absorbed dose to water around the 125I sources using both types of detectors agreed well with TG-43 formalism-based calculations.

Conclusion: A combination of MC simulations and experimental data allowed us to determine a notable variation in the intrinsic energy response of microDiamond in kilovoltage photon beams and account for it in non-relative dosimetry measurements when detectors were calibrated in a different beam quality.

11:15
MCMA Rising Stars 3: Towards multiscale simulations with EGSnrc: tests on cellular length scales
PRESENTER: Martin Martinov

ABSTRACT. Purpose: Modelling radiation transport across length scales from patient tumours to subcellular structures is needed to advance knowledge of biological response and to support the development of novel treatment strategies for radiotherapy. This study investigates EGSnrc cell-scale simulations towards its application to the computationally-intensive multiscale problem of modeling microscopic cellular detail on a macroscopic scale.

Methods: Several test simulations are performed using a custom version of egs_chamber; two subsets are outlined subsequently and results are compared to published data. Cells containing monoenergetic (3-100 keV) electron sources in cellular sub-compartments are modelled to calculate S-values (dose to cellular compartment per unit activity) in a Medical Internal Radiation Dose scenario. Ratios of dose to tissue in a microscopic cavity containing 25 nm radius Gold NanoParticles (GNPs) divided by the dose scored in a corresponding cavity filled with a homogeneous gold/tissue mixture are calculated for 20-50 keV incident photons. Self-consistency tests, including the Fano cavity test of electron transport, are carried out for simulation geometries including cells and GNPs (i.e., models combining the previous simulations) for energies of 0.02-1 MeV.

Results: Subcellular doses generally have sub-1% deviations from published S-values computed with Geant4-DNA. Dose ratios for the GNP scenarios agree well with corresponding published results from PENELOPE, with differences reaching 2 standard deviations of statistical uncertainties only for the highest concentrations. The Fano cavity test results agree with expected dose in all cases (statistical uncertainties range from 0.1-1.1% in the slowest case).

Conclusion: For the scenarios considered, EGSnrc results agree with other Monte Carlo codes typically used for cell simulations, and its electron transport algorithm remains consistent considering sub-micron length scales. These results complement EGSnrc’s extensive benchmarking and testing in other contexts, and demonstrate that EGSnrc holds promise for the development of a reliable and efficient framework for multiscale modelling.

11:30
MCMA Rising Stars 4: Cherenkov emission-based dosimetry: Detection angle and aperture study
PRESENTER: Yana Zlateva

ABSTRACT. Purpose: Cherenkov emission (CE) has shown promise for in-water perturbation-free micrometer-resolution dosimetry of photon and electron beams, and small photon fields. Here, we investigate the angle dependence of the CE-to-dose conversion with the aim to motivate and inform CE-based dosimeter+phantom development.

Methods: Monte Carlo simulations are performed with a modified version of the EGSnrc code SPRRZnrc for 6-22 MeV, 10×10 cm2, validated electron BEAMnrc models. Angular distributions and CE-to-dose conversion data, for CE generated at polar angle θ (relative to beam) and within δθ, are scored. The considered θ are: 90° (minimal reflectance loss with conventional phantoms) and 45° (near 42° CE angle of relativistic electrons in water). The considered δθ are 5°, 45°, and 90° (4π).

Results: The CE angular distribution is peaked at 42° near the surface and broadens at depth and with decreasing beam quality, in accordance with theory. These effects are less pronounced with photon beams. Detection at 90° appears to be not optimal in terms of signal intensity; however, we have previously shown that CE at 90° is readily detectable with photons, electrons, and small photon fields. Furthermore, detection at 90° is less sensitive to angle deviations than detection near 42° and we have found, through both Monte Carlo and experiment, that the uncertainty due to 0.01° achievable angle tolerances is ≤0.1% at 90°±5° detection. We also find that the reduced depth and beam quality dependence of 4π may be achievable at θ between 90° and 45° and with δθ between 0 and 4π, avoiding the challenges involved in 4π detection.

Conclusions: Our findings indicate that CE-based in-water dosimetry system design should focus on 4π detection or 3D microscopy techniques (e.g., optical sectioning) involving detection polar angle between 45° and 90° and large aperture. The optimal detection configuration will be explored further in subsequent work.

11:45
MCMA Rising Stars 5: Cellular response to proton irradiation: a simulation study with TOPAS-nBio
PRESENTER: Hongyu Zhu

ABSTRACT. The cellular response to ionizing radiation continues to be of significant research interest in cancer radiotherapy. To improve understanding of the characteristics of radiation-induced DNA damage, DNA damage repair, and chromosome aberration formation after proton exposure, simulations were conducted with TOPAS-nBio. First, a human fibroblast nucleus was modeled in the G0/G1 phase and filled with 6×109 base pairs (6 Gbp) of DNA representing the entire human genome. The DNA structure was organized in a hierarchical pattern in the order of DNA double helix, nucleotide pairs, chromatin fibers, chromatin fiber loops, and whole chromosomes. DNA damage was quantified by the yields of single strand breaks (SSBs) and double strand breaks (DSBs) after proton irradiation through detailed physics and chemistry simulation within the nucleus with TOPAS-nBio. The resulting SSB and DSB yields will be compared with published simulated and experimental data. Furthermore, the DNA damage results were incorporated via the SDD (Standard for DNA Damage) data format with a mechanistic repair model that implements high-level characterizations of DNA repair through different pathways, cell cycle effects, and cell death processes to predict mis-repair and chromosome aberration formation. The results of micronucleus formation, predicted by the repair model, will be compared with published experimental data. This work presents an integrated study of cellular response from proton irradiation and such results could provide a valuable reference for clinical treatment.

12:00
MCMA Rising Stars 6: Anatomical changes vs. calculation approximations: Which causes larger dose distortions for proton therapy patients?
PRESENTER: Lena Nenoff

ABSTRACT. Due to limitations in computational resources and time, analytical dose calculations (ADC) are currently the standard in proton therapy. However, to improve dose accuracy in tissues with strongly heterogeneous density, the use of more time-consuming Monte Carlo (MC) calculations is increasingly being proposed. On the other hand, anatomical changes in the patient also have a substantial effect on the delivered dose distribution, which can only be compensated for by rapid plan adaptation (e.g. within 5-10 minutes). As adaptation can be much faster with ADCs than with MC, we investigated here which of these uncertainties (ADC approximations or anatomical changes) is more clinically relevant. Five non-small cell lung cancer (NSCLC) patients with up to 9 on-treatment CTs, and five paranasal (HN) patients with 10 simulated on-treatment CTs with changes in sinus fillings, were analysed. On the initial planning CTs, treatment plans were optimized and calculated with our standard ADC and recalculated with MC. Additionally, all plans were recalculated (non-adapted), as well as being fully re-optimized (adapted), on each on-treatment CT. For both NSCLC and HN patients, agreement between the ADC and MC dose distributions was high, with more than 93% and 97% of the voxels, respectively, having differences within +/-5%. In addition, CTV V95% differences (mean [range]) were <= -1.6% [-4.9 to 0.8] for both treatment sites. In contrast, for non-adapted plans the CTV V95% degraded significantly, with differences of -2.8% [-5.2 to -2.1] and -15% [-34.5 to -0.9] for NSCLC and HN patients respectively. Plan adaptation always restored the target coverage (difference < 0.5%) and, in some cases, improved the doses to OARs. In conclusion, dose uncertainties for NSCLC and HN patients caused by anatomical changes are substantially larger than those caused by the ADC. We would therefore caution against overuse of MC-based planning procedures if the resulting time overhead compromises the ability to rapidly adapt to anatomical changes.

12:15
MCMA Rising Stars 7: Benchmarking a GATE/Geant4 Monte Carlo model to support treatment planning towards MRI guided ion beam therapy

ABSTRACT. Introduction: Magnetic resonance imaging (MRI) is a promising candidate for real-time image guidance during particle therapy (PT). However, accurate determination of, and compensation for, the bent beam trajectories is strictly required to guarantee proper treatment planning. This work aims to develop a Monte Carlo (MC) model describing a clinical proton beam in the presence of realistic magnetic fields up to 1T and benchmark it against experimental dosimetric data. Material & Methods: A clinical proton beam (62.4 – 252.7 MeV) passing through a dedicated research magnet was simulated using the GATE8.0/Geant4 MC toolkit. The beam model was benchmarked against measured lateral beam deflections, spot sizes and dose distributions. Measurements were carried out using a horizontal pencil beam scanning proton beam with a dipole magnet (B=0-1T) positioned at the room isocenter. Dosimetric measurements were done in a PMMA phantom (200×120×300mm3) using a Roos ionization chamber and Gafchromic EBT3 films in two beam modalities: single energy fields and spread-out Bragg peaks (SOBP). Results: Measured beam deflections and spot sizes agreed very well (within 1mm) with the MC-predicted trajectory and beam scattering in air. The overall agreement between simulated and measured data was also good for longitudinal and lateral dose profiles in PMMA. Range and dose-weighted average differences were below 0.5 mm and 2.1% respectively for all irradiations. Simulated central beam positions and widths differed from the measurements in the EBT3 films by less than 1 mm and 0.5 mm, respectively. Conclusion: The MC model was successfully benchmarked against experimental data and will be used to generate reliable basic input datasets for a treatment planning system, accounting and compensating for beam deflections due to magnetic fields.

13:30-15:00 Session 17: Keynote & Plenary Speakers: Machine Learning and Monte Carlo I
Location: Opera A+B
13:30
Improving health-care: challenges and opportunities for machine learning

ABSTRACT. Machine learning offers a powerful paradigm for automatically discovering and optimizing sequential medical treatments. In this talk I will review some of the most recent advances in AI, including deep learning, reinforcement learning and generative models. I will also examine promising methods to improve treatment planning using AI. Examples will be drawn from several ongoing research projects on developing new treatment strategies for chronic and life-threatening diseases, including epilepsy and cancer.

14:30
Monte Carlo based Treatment Planning in Brachytherapy

15:15-16:00 Session 18A: Poster Session III - Rising Stars
15:15
P075: Dosimetric calibration and validation of spectral CT-based stopping power prediction for particle therapy planning

ABSTRACT. The study aimed to establish and validate a novel stopping power ratio relative to water (SPRw) prediction method for improved proton beam range determination for proton treatment planning. Calibration measurements were acquired using phantoms with various tissue-equivalent inserts (Gammex CT Phantom 467) on the Philips IQon Spectral CT system to derive three-dimensional maps of SPRw based on effective atomic number and electron density relative to water. Comparisons of experimentally determined SPRw against SPRw derived using spectral CT data were investigated, evaluating the accuracy of the methodology introduced. Validation in a clinical-like setting was performed using a half-head Alderson RANDO phantom attached to a water tank. After conventional planning based on a standard CT (Siemens SOMATOM Confidence® RT Pro), optimization was performed with a Monte Carlo treatment planning platform. A target structure (6 cm × 6 cm × 6 cm, centered at 5 cm depth) was delineated for spread-out Bragg peak (SOBP) optimization. Dose calculation was performed on both the conventional planning CT and spectral CT. Absolute dosimetry was performed in the experimental beam room using a PinPoint ionization chamber block for range verification. For the tissue substitutes, SPRw values predicted from spectral CT images were within a mean accuracy of <1 % compared to measured SPRw and showed superior agreement with measured data compared to standard HU-calibration. Precision in SPRw determination was unaffected by the tested scan settings, reconstruction parameters and phantom size. Mean deviation in range between the two SOBP calculations using the half-head anthropomorphic phantom was about 1.3(±2.6) mm. Here, calibration and validation of a spectral CT-based SPRw prediction was performed to demonstrate the improved range estimates in particle therapy from using advanced imaging techniques. Preliminary results for SPRw prediction with spectral CT data in anthropomorphic phantoms show promise over conventional means to improve high-precision radiotherapy dose calculation.

15:20
P076: A mathematical optimization framework for spatial adjustments of dose distributions in high dose-rate brachytherapy
PRESENTER: Björn Morén

ABSTRACT. High dose-rate brachytherapy is a method of radiation therapy for cancer treatment in which a radiation source is placed within the body. The aim of the treatment is to give a high enough dose to the tumour while sparing nearby healthy tissue and organs at risk. The most common evaluation criteria for dose distributions are dosimetric indices, derived from dose-volume histograms. For the tumour, a dosimetric index is the portion of the volume that receives at least a specified dose level, such as the prescription dose, while for organs at risk it is rather the portion of the volume that receives at most a specified dose level that is of interest. Dosimetric indices are aggregate dose measures without spatial information, although spatial aspects of the dose distribution are of clinical relevance. Further, there is a lack of both established clinical evaluation criteria and optimization models considering spatial aspects of a dose distribution. We propose a mathematical optimization framework to reduce the prevalence of contiguous volumes with a too high or too low dose (“hot spots” and “cold spots”, respectively). The objective function considers pairs of dose points and gives a penalty if the dose is either too high or too low at both dose points; the penalty is larger the shorter the distance between the dose points. The model also contains constraints that maintain acceptable levels of aggregate dose criteria such as dosimetric indices. In clinical practice, a dose distribution is commonly adjusted manually to also take spatial aspects into account. The purpose of our framework is to automate this adjustment step in the planning process. For this adjustment we solve large-scale optimization models and show reductions in the prevalence of hot spots and cold spots.
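A minimal sketch of the pairwise hot/cold-spot penalty described above, assuming a simple inverse-distance weighting; the authors' exact objective, dose thresholds and constraint handling are not given in the abstract.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform


def spatial_penalty(points, dose, low, high, eps=1.0):
    """Penalise pairs of dose points that are both too hot (> high) or both
    too cold (< low), with a larger penalty for pairs that are close together.
    points: (n, 3) coordinates in mm; dose: (n,) doses in Gy.
    The 1/(distance + eps) weight is an illustrative choice only."""
    dist = squareform(pdist(points))
    both_hot = np.outer(dose > high, dose > high)
    both_cold = np.outer(dose < low, dose < low)
    flagged = (both_hot | both_cold).astype(float)
    np.fill_diagonal(flagged, 0.0)                 # a point is not paired with itself
    return 0.5 * np.sum(flagged / (dist + eps))    # each unordered pair counted once
```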

15:25
P077: Multi-criterial MLC segmentation to minimize plan quality loss

ABSTRACT. Fluence map optimized (FMO) dose requires a segmentation phase to convert the dose distribution into a deliverable plan. Due to machine limitations of the linear accelerator, the segmentation phase generally results in degraded plan quality, risking that the segmented plan becomes clinically unacceptable. To steer towards clinically acceptable segmented plans of high quality, a novel multi-criterial MLC segmentation algorithm is proposed for step-and-shoot IMRT.

The proposed method generates segments while considering all treatment beams together. It reconstructs the 3-dimensional FMO dose distribution rather than the 2-dimensional fluences separately, as other MLC segmentation methods do. The algorithm features a multi-criterial mechanism, designed to reproduce the plan parameters obtained in the FMO treatment plan while explicitly taking into account the priorities assigned to the clinical objectives. The performance of the segmentation algorithm was evaluated for 20 prostate, 12 liver and 15 head-and-neck cancer cases. The prostate and liver cases were 25-beam SBRT plans; the head-and-neck cases were 9-beam IMRT plans.

For all cases, Pareto-optimal FMO dose distributions were generated by automated multi-criterial treatment planning and subsequently segmented using the proposed multi-criterial method. All 47 segmented plans were clinically acceptable with highly similar 3D dose distributions. If the multi-criterial component was disabled, only 30/47 plans were clinically acceptable. The average numbers of segments generated were 59, 65 and 77 for the prostate, liver, and head-and-neck cases, respectively.

Conclusion: The proposed multi-criterial MLC segmentation was able to segment FMO-based dose-distributions, generating clinically deliverable plans within an acceptable number of segments while minimizing plan quality loss. The multi-criterial component was essential to maintain clinical acceptability of plans.

15:30
P078: Streamlining the radiotherapy working environment with an in-house developed web application
PRESENTER: Samuel Peet

ABSTRACT. Our radiotherapy clinic recently completed a review of our working environment with an eye to modernising out-of-date or inefficient practices. Many issues were noted, such as protocols and reference documents being spread over duplicated folders on network drives; accelerator control rooms covered in out-of-date and uncontrolled printouts of contacts and machine service information; and paper forms being heavily relied on for many tasks in the clinical workflow. Additionally, physics data analysis scripts could only be run on certain workstations with the correct environment and software setup. These issues were addressed by developing an in-house web app to streamline communication between professional groups in the clinic, increase productivity, and improve staff quality of life. Developed and supported by the physics team, the app is driven by a Flask backend and an HTML5/CSS/JS frontend, interfaces with a PostgreSQL database, and is served behind an Nginx reverse proxy. The full stack is deployed on a virtual server using Docker containers. The app caters for all of the stakeholders in the clinic: for patients it displays timing information in the waiting room; for physicists it logs PSQA results, creates Monte Carlo jobs, and is a frontend for data analysis scripts; for radiotherapists it features upcoming service days, assigned contacts, and finishing time estimates; and for radiation oncologists it includes several electronic forms and a knowledge base of protocol information, and is fully responsive so it can be used on mobile devices outside of the clinic. The app was launched in early 2019 to a very warm reception from all stakeholders.
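A minimal sketch of the kind of Flask route the abstract describes; the endpoint name, returned data and database layer are placeholders, not details of the clinic's actual application.

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/machines/<machine>/service-days")
def service_days(machine):
    # In the described app this would query the PostgreSQL database
    # (behind the Nginx reverse proxy); a static placeholder is returned here.
    return jsonify(machine=machine, next_service="2019-07-01")


if __name__ == "__main__":
    app.run(port=5000)
```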

15:35
P079: Bayesian network structure for predicting two-year survival in patients diagnosed with lung cancer

ABSTRACT. Introduction: The incidence of lung cancer has been increasing. Healthcare providers are trying to acquire more knowledge of the disease’s biology to treat their patients better. However, the information available is more than humans can comprehend. Predictive models such as Bayesian networks (BN), which can intricately represent causal relations between variables, are suitable structures to model this information. In this study, we have developed and validated a Bayesian network structure to predict two-year survival in patients diagnosed with lung cancer and primarily treated with radiation therapy.

Methods: A Bayesian network structure was developed on 1250 lung cancer patients treated primarily with radiotherapy from the Netherlands and validated on a cohort of 250 patients from the United Kingdom (UK). All continuous variables were binned before learning the structure. The causal relationship (arcs) between the variables in the data was determined using the hill-climbing algorithm with domain experts’ restrictions over a thousand bootstrap runs. The final structure consists of nodes and arcs, where arcs (relationships) are present in at least 50% of all bootstrap samples.

Results: The type of chemotherapy administered and the WHO performance score variables were removed from the final Bayesian network structure since there was no parental connection to two-year survival. The structure's performance in predicting two-year survival is above chance level. The mean AUCs and confidence intervals on the training and validation data are 0.74 (0.71-0.76) and 0.80 (0.74-0.87), respectively.

Conclusions: We have developed a Bayesian network structure from routine clinical data for predicting two-year survival in lung cancer patients treated with (chemo)radiotherapy.
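One possible implementation of the bootstrap structure-learning procedure, sketched with the pgmpy library; the scoring function, the handling of the expert restrictions and all variable names are assumptions rather than details taken from the study.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore


def bootstrap_structure(data: pd.DataFrame, n_boot=1000, threshold=0.5, seed=0):
    """Learn a DAG with hill climbing on each bootstrap resample and keep the
    arcs that appear in at least `threshold` of the runs."""
    edge_counts = {}
    for b in range(n_boot):
        sample = data.sample(frac=1.0, replace=True, random_state=seed + b)
        dag = HillClimbSearch(sample).estimate(scoring_method=BicScore(sample))
        for edge in dag.edges():
            edge_counts[edge] = edge_counts.get(edge, 0) + 1
    return {e: c / n_boot for e, c in edge_counts.items()
            if c / n_boot >= threshold}
```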

15:40
P080: Evaluation of Synthesized Computed Tomography (sCT) Generated from Cone-Beam Computed Tomography (CBCT) by Cycle Generative Adversarial Network (CycleGAN)
PRESENTER: Xiao Liang

ABSTRACT. Inter-fractional anatomical change occurs in many cancer patients during radiotherapy and at many tumor sites, such as head and neck (H&N) cancer. Relying only on computed tomography (CT) images acquired before treatment may increase the risk of tumor underdose and organs-at-risk (OARs) overdose. Cone beam computed tomography (CBCT) images are frequently taken during the treatment course; thus, using CBCT in adaptive radiation therapy (ART) is more practical and efficient than CT-on-rails. However, CBCT images need to be corrected before use in ART as they contain many artifacts and inaccurate Hounsfield Unit (HU) values. Currently, a deformable image registration (DIR) method is often used for this purpose, although anatomical accuracy might be a concern. Recently, deep learning has achieved great success in image-to-image translation tasks. A cycle generative adversarial network (CycleGAN) has been developed and tested to convert CBCT images to synthesized CT (sCT) images with most artifacts removed, HU values corrected and CBCT anatomy preserved. This network is an unsupervised learning method and does not require paired CBCT and CT images with exactly matching anatomy for training. Dose calculation accuracy using sCT images was improved over the original CBCT images when compared against deformed planning CT (dpCT) images, with the average Gamma Index passing rate increased from 95.4% to 97.4% for the 1 mm/1% criteria and from 97.90% to 98.85% for the 2 mm/2% criteria. A deformable phantom study was conducted and demonstrated better anatomical accuracy for sCT over dpCT, with the mean absolute error (MAE) decreasing from 6.98 HU to 4.66 HU and the structural similarity index (SSIM) increasing from 0.91 to 0.95.
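The unpaired training relies on a cycle-consistency constraint; a minimal PyTorch sketch of that term is shown below, with the two generator networks passed in as placeholders (a full CycleGAN also includes adversarial and identity losses not shown here, and the weight lam is an illustrative value).

```python
import torch.nn as nn

l1 = nn.L1Loss()


def cycle_consistency_loss(G_cbct2ct, G_ct2cbct, real_cbct, real_ct, lam=10.0):
    """Translating CBCT -> sCT -> CBCT (and CT -> sCBCT -> CT) should recover
    the input, which removes the need for anatomically paired training images."""
    fake_ct = G_cbct2ct(real_cbct)
    fake_cbct = G_ct2cbct(real_ct)
    recon_cbct = G_ct2cbct(fake_ct)
    recon_ct = G_cbct2ct(fake_cbct)
    return lam * (l1(recon_cbct, real_cbct) + l1(recon_ct, real_ct))
```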

15:45
P074: Improving treatment plan dose accuracy using a deep learning-based dose conversion scheme
PRESENTER: Dan Nguyen

ABSTRACT. Purpose: To develop a deep convolutional neural network to automatically convert doses from the analytical anisotropic algorithm (AAA) to doses of the Acuros XB algorithm (AXB) to improve dose accuracy.

Materials & Methods: AAA calculates accurate doses in homogeneous regions but falls short in inhomogeneities. In contrast, the AXB algorithm provides accurate dose calculation in both situations, while its clinical usage is currently limited. We proposed a hierarchically-dense U-net (HD U-net) for automatic AAA-to-AXB dose conversion to improve dose reporting accuracy. Patient-specific CTs and lower-accuracy AAA doses are input into the network, and a higher-accuracy AXB dose is output. The network contained multiple layers of varying feature sizes to learn both local and global features and maximize the conversion accuracy. AAA and AXB doses were calculated in pairs for 120 lung patients planned in Eclipse. The network was trained on 72 randomly selected sets and validated on another 18 sets during training. The network was evaluated on the remaining 30 sets. The mean squared errors (MSEs) and gamma pass rates (2mm/2% & 1mm/1%) were calculated between the AAA-converted and true AXB dose distributions for quantitative evaluation.

Results: The volume-of-interest for MSE calculation and gamma analysis is defined by the 20% isodose line of the maximum dose on the AXB dose map. The AAA-converted AXB doses demonstrated substantially improved match to the true AXB doses, with average(±s.d.) gamma-pass-rate(1mm/1%) 98.3%(±1.7%), compared to 86.0%(± 9.8%) of the AAA dose. The corresponding average MSE was 0.16(± 0.10) vs 0.52(± 0.26).

Conclusion: The deep learning-based dose conversion scheme has substantially improved the dose accuracy of AAA. The inaccurate radiation transport model of the AAA algorithm in inhomogeneous regions, especially around the lung-tumor interface, has been successfully corrected after the conversion. The developed network enables automatic and fast AXB dose generation from AAA doses, allowing more informed plan evaluation, fine-tuning and selection.

15:50
P081: Deep Learning, Medical Physics and Cargo Cult Science.
PRESENTER: Gilmer Valdes

ABSTRACT. Purpose: While deep learning algorithms are increasingly popular in medical physics applications, available datasets are significantly smaller than those found in other fields. Here we elucidate the dependence of deep learning algorithm performance on dataset size.

Method: A large-scale dataset, ChestX-ray14, with 112,120 frontal-view X-ray images was used in our analysis. Two different tasks were studied: unbalanced multi-label classification of 14 diseases and balanced binary classification (pneumonia vs non-pneumonia). The dataset was randomly split into training, validation and testing (69%, 8%, 23%). Using PyTorch, a convolutional neural network (CNN) was trained using different samples and numbers of data points for both tasks (N=50 to N=1558 and N=77,880 for tasks A and B respectively). The architecture selected was a DenseNet121. Area under the curve (AUC) and balanced accuracy on the test set were reported. CNN results were compared with traditional methods by training logistic regression and random forest algorithms on 79 radiomics texture features generated from each image.
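A minimal PyTorch/torchvision sketch of the multi-label setup described above; only the DenseNet121 backbone and the 14 disease labels come from the abstract, while the learning rate, loss choice and input pipeline below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained DenseNet121 with a 14-label classification head.
model = models.densenet121(pretrained=True)
model.classifier = nn.Linear(model.classifier.in_features, 14)

criterion = nn.BCEWithLogitsLoss()                 # one sigmoid per disease label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)


def train_step(images, labels):
    """images: (batch, 3, H, W); labels: (batch, 14) multi-hot targets."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```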

Results: When the number of images was smaller than 1600, CNN performed worse on average than the best radiomics-based traditional approach in the balanced binary classification task. In the unbalanced multi-label classification task, CNN performed better on average than the best radiomics-based traditional approach, though the performance of both was essentially random. Overall, CNNs outperformed logistic regression and random forest only when the dataset size exceeded 10,000 images.

Conclusions: Although current CNNs have proven to be powerful algorithms, given the limited size of datasets, this performance is unlikely to transfer to applications in medical physics and radiation oncology. It will serve our community well if we adopt a more critical view of the advantages and limitations of these algorithms compared to traditional methods, in order to avoid common pitfalls found in cargo cult science.

15:15-16:00 Session 18B: Poster Session III - Auto-segmentation
15:15
P082: Quantitative versus qualitative evaluation of automatic segmentation
PRESENTER: Jennifer Pursley

ABSTRACT. Automatic segmentation of anatomic regions in medical images has the potential to improve treatment efficiency for image-guided interventions. While many algorithms for automatic segmentation have been developed, evaluation of their clinical usability is largely limited to quantitative metrics such as measures of region overlap (DICE coefficient) or surface distance (Hausdorff distance). However, quantitative metrics may only tell part of the story; an auto-segmented contour of a small volume organ may have a low DICE coefficient but still be clinically acceptable, while a large volume organ may have a high DICE coefficient but be clinically unacceptable. The goal of this work is to explore the use of a qualitative evaluation system for rating the clinical acceptability of auto-segmented contours, and establish the relation between quantitative and qualitative metrics. A strong correlation between quantitative metrics and qualitative scores would establish a scientific basis for the use of quantitative metrics in the evaluation of medical image segmentation. The qualitative system was designed with 5 scores ranging from “clinically acceptable” to “completely unacceptable” and evaluated by multiple expert observers for pelvic and abdominal auto-segmented structures, which were generated from user-defined atlases in the MimVista software. Four quantitative metrics (Hausdorff distance, Mean Distance to Agreement, DICE and Jaccard coefficients) were calculated by comparing to physician contours. Overall results were varied. For bony anatomy with good agreement between auto-segmented and physician contours, quantitative metrics failed to identify discrepancies that reviewers considered clinically significant and there was no correlation between metrics and scores. For soft-tissue organs with more variability between auto-segmented and physician contours, quantitative metrics tended to correlate with qualitative scores although the degree of correlation varied with the organ size. Results indicate that while quantitative metrics do have merit in evaluation of auto-segmentation, their applicability may vary with organ properties.
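For reference, the region-overlap metrics discussed above can be computed directly from binary masks; the NumPy sketch below is generic and not tied to any particular software used in the study.

```python
import numpy as np


def overlap_metrics(auto_mask, ref_mask):
    """Dice and Jaccard coefficients between an auto-segmented and a
    reference (physician) binary mask of the same shape."""
    auto_mask = auto_mask.astype(bool)
    ref_mask = ref_mask.astype(bool)
    intersection = np.logical_and(auto_mask, ref_mask).sum()
    union = np.logical_or(auto_mask, ref_mask).sum()
    dice = 2.0 * intersection / (auto_mask.sum() + ref_mask.sum())
    jaccard = intersection / union
    return dice, jaccard
```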

15:20
P083: Evaluation of MRI atlas-based automatic segmentations for organs at risk in the brain

ABSTRACT. Introduction: National consensus guidelines for the definition of organs at risk (OAR) have been agreed upon within the Danish Neuro Oncology Group (DNOG), thus streamlining the reporting of radiotherapy doses to these critical organs. In support of this, we aim to develop an MRI atlas-based automatic segmentation (ABAS) workflow and evaluate its performance on previously treated patients with brain cancer. Materials and methods: Brain, Brainstem, Optic tracts, Chiasm, Hippocampi, Pituitary gland and Temporal lobes were manually delineated on a T1w MRI with contrast for 25 patients. A basis library (atlas-v1) was built upon 15 atlases. An extended library (atlas-v2) was created using five more patients (20 patients in total). The atlases were built in the MIM software (v6.7.12). New segmentations were generated with the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. The performance of the atlases was evaluated on five patients using the Hausdorff distance (HD) and Dice Similarity Coefficient (DSC) between manual and automatic segmentations for each of the OARs. Results: The ABAS workflow performed better for larger volumes, where the median DSC was ≥ 0.8 for brain, brainstem and temporal lobes. No statistically significant difference was observed between the two atlases. The DSC was smaller for the smaller structures; however, the HD was less than 1 cm in all cases, indicating that the ABAS workflow predicts the structures to be in the correct locations of the brain. For the smaller structures, the DSC was highest for the hippocampi. Conclusion: Both atlases are based on delineations and MRI scans of patients with brain tumors, which will deform OARs in the brain to some extent. Hence, large differences are observed for some structures. This may explain the poor performance of the atlases. A larger cohort of patients is needed for a clinically relevant ABAS workflow and we are currently expanding the library with more patients.

15:25
P084: Clinical evaluation of deep learning methods for brain tumor contouring

ABSTRACT. We aimed to investigate the feasibility of using brain tumor contours generated by a convolutional neural network (CNN) in radiosurgery treatment planning. Ten patients with different diagnoses (four cases of meningioma, two cases of vestibular schwannoma and four cases of multiple brain metastases) were selected from routine clinical practice. Both the axial contrast-enhanced T1-weighted and axial T2-weighted MR images were used for the manual tumor delineation and adjustment of the CNN contours. The axial contrast-enhanced T1-weighted MR images were used to contour the multiple metastases. The tumors were segmented by four experts. We compared the times needed for two contouring techniques: manual delineation of the tumors and user adjustment of the CNN-generated contours of the tumors. The time spent on each task was recorded. To quantify the quality of the CNN-generated contours, we assessed the similarity between the CNN tumor contour adjusted by the user and the reference contour using the Dice coefficient. To investigate the differences in the Dice scores and to measure the time reduction, we performed the Sign test and the Wilcoxon test, respectively. P-values smaller than 0.05 were assumed to be statistically significant. The use of the developed algorithm demonstrates a significant time reduction and decreased inter-rater variability. The automatically generated contours are a promising tool for standardization of tumor delineation.

15:30
P085: Automated Generation of Optimization Structures for Treatment Planning in Eclipse using Python
PRESENTER: Theodore Mutanga

ABSTRACT. For VMAT/IMRT inverse planning, the use of optimization structures created to guide the dose distribution is widely applied as an effective way to generate good-quality plans. Manual creation of optimization structures is time consuming and error prone. For treatment planning systems (TPS) without scripting or with read-only scripting, finding an alternative way to automate the process is necessary for improving efficiency and reducing manual errors.

The current version of our TPS, Eclipse V13.6 (Varian Medical Systems, Palo Alto), is capable of scripting but does not allow creating and saving new structures via the scripting interface. A standalone Python program (Contouring Assistant) was therefore designed to automate the process following predefined steps in the treatment planning protocol. The Contouring Assistant is based on simulating mouse clicks and keyboard inputs to drive the TPS contouring tools using open-source tools (Pywinauto & PyAutoGUI) for GUI automation. When compared to manual creation of optimization structures for different genitourinary (GU) treatment protocols at our center, the time savings obtained with the Contouring Assistant were of the order of 19 to 28 minutes depending on the case.
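A heavily simplified sketch of the GUI-automation idea using PyAutoGUI; the screen coordinates, shortcut keys and naming convention below are hypothetical placeholders that would have to be recorded for the local Eclipse workstation, and do not reproduce the actual Contouring Assistant steps.

```python
import time

import pyautogui  # simulated mouse clicks and keyboard input


def create_optimization_structure(base_name, margin_mm):
    """Drive the TPS contouring tools through the GUI for one scripted step."""
    pyautogui.click(150, 220)                       # select the source structure
    pyautogui.hotkey("ctrl", "c")                   # hypothetical copy shortcut
    pyautogui.typewrite(f"{base_name}_opt{margin_mm}mm", interval=0.05)
    pyautogui.press("enter")
    time.sleep(0.5)                                 # let the TPS GUI update
```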

Contouring Assistant has been implemented in our clinical practice for genitourinary (GU) prostate treatment planning. The tool has been demonstrated to save time while ensuring consistency during treatment planning.

15:35
P086: Head and Neck Deformable Registration by Physical Modeling of Patient Posture Change
PRESENTER: Bingqi Guo

ABSTRACT. Purpose: To develop an automatic motion modeling technique for head and neck patients to measure the deformation caused by head, neck and shoulder posture change. Methods: Constructing a patient-specific physical motion model takes the following steps: segmenting the bony voxels from CT using a Hounsfield unit (HU) threshold; clustering the bony voxels and assigning the clusters to individual bones by template matching; dividing the spine into vertebrae by searching for the intervertebral discs; rigidly registering individual bones from PET/CT to planning CT using the iterative closest point algorithm; segmenting soft tissue voxels using an HU threshold; clustering soft tissue voxels and attaching soft tissue clusters to adjacent bones; and deforming soft tissue using physical modeling. The physical motion model was used to evaluate the accuracy of multiple rigid registrations (best of three rigid registrations) and four commercial deformable image registration (DIR) software packages for four head and neck patients. Results: The physical modeling technique was able to deform the PET/CT to the planning CT for all patients, including those with large posture changes (arms up in PET/CT and arms down in planning CT). Overall, the accuracy of the different DIR software packages was comparable with each other and with multiple rigid registrations: mean errors for the head, jaw, spine, and sternum were on the order of millimeters, and mean errors for bones in the shoulder region ranged from 1 cm to 5 cm. For the cervical spine, multiple rigid registrations outperformed all DIR software. Conclusion: A patient-specific motion modeling technique capable of measuring deformation caused by head, neck and shoulder posture change was developed. The physical motion model can be used for deformable registration of head and neck patients and quality assurance of commercial deformable image registration software.

15:40
P087: Feature based quality assurance of image segmentation for clinical trials
PRESENTER: Lois Holloway

ABSTRACT. Consistency of radiotherapy contouring is essential within a radiotherapy clinical trial to ensure that the clinical trial question is effectively addressed. Achieving this currently requires a manual review of contours, ideally prior to the commencement of treatment. As this is time critical with large resource requirements, manual review is often only completed for a limited number of patients within a given study. An automated approach to contouring review within a clinical trials framework could improve the accuracy of clinical trial outcomes as well as reducing the resource overheads required during the study. In this investigation we have considered two different approaches to undertaking a manual contour review for a prostate radiotherapy trial (PROMETHEUS ACTRN12615000223538).

A benchmarking dataset consisting of 10 patient datasets with 5 observer contours each for prostate, rectum and bladder was generated, as well as the trial dataset (n=93) consisting of submitted contours that were manually reviewed. Augmented data were also generated. Within the trial, rectal stabilisation was required using either the Rectafix device or SpaceOAR; datasets for each of these options were considered separately. A random forest classifier using 45 radiomics features generated with pyradiomics was developed and trained on i) just the benchmark data, ii) the benchmark and augmented data, with validation on the trial data, and iii) a subset of the trial data, validated with k-fold cross-validation.
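A minimal scikit-learn sketch of the classifier and cross-validation step; the feature extraction with pyradiomics and the exact hyperparameters are not specified in the abstract, so the settings below are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def evaluate_contour_classifier(X, y, n_folds=5):
    """X: (n_contours, 45) pyradiomics feature matrix;
    y: binary labels (acceptable vs. needs review) from the manual review."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=n_folds, scoring="accuracy")
    return scores.mean(), scores.std()
```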

The classifier trained on the subset of the trial data, demonstrated the highest accuracy (0.85 vs 0.76 and 0.28 for benchmark and benchmark including augmented data) suggesting that the classifier could be strengthened over the course of a trial. The feasibility of this approach in a clinical trial setting was demonstrated.

15:45
P089: Automatic thoracic OAR segmentation from CT scans using patient-specific deep convolutional neural networks
PRESENTER: Quan Chen

ABSTRACT. Introduction: Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning. Deep convolutional neural networks (DCNNs) have demonstrated superiority over the traditional atlas-based method in OAR segmentation. However, suboptimal results still exist for patients with abnormal anatomy not represented in the training dataset. In clinical practice, many patients have prior CT scans with validated OAR segmentations. Therefore, we developed a patient-specific DCNN that takes advantage of the validated OAR contours in prior scans to improve its performance on new scans of the same patient using transfer learning. Methods: A generic model based on a cascaded 3D U-Net structure was first trained on 36 cases obtained from the 2017 AAPM Thoracic Auto-segmentation challenge and 30 cases obtained from clinical practice at the University of Kentucky (UK). Another 15 patients, each with two scans, collected from UK were used for validation. To train a patient-specific DCNN, transfer learning was used to fine-tune the generic model for each patient using only the prior scan of the same patient. The updated model was then applied to the second scan of the same patient. As a comparison, the generic model was directly deployed on the second scan. The Dice scores, mean surface distance (MSD) and 95% Hausdorff distance (HD95) were used to evaluate performance. Results: On the second scan, the patient-specific model yielded statistically significant improvements in Dice scores for the spinal cord (0.884 to 0.893) and left lung (0.982 to 0.984) compared with the generic DCNN model. The heart also showed improvement (0.921 to 0.928). Conclusion: A patient-specific DCNN can incorporate features unique to a patient and yield improved segmentation performance on subsequent scans. Transfer learning is an effective method that can rapidly learn such features without introducing significant deviations from the generic model.

15:15-16:00 Session 18C: Poster Session III - Motion Deformation and Tracking II
15:15
P090: Optimization of an unsupervised patient classification system from daily EPID images

ABSTRACT. The goal of this work is to make a systematic comparison and optimization of different unsupervised clustering algorithms to classify daily images from electronic portal imaging devices (EPID), in order to rapidly and automatically identify potential clinical problems such as tumor regression or frequent gas pockets. EPID images were collected for every treatment fraction for 180 patients, treated in equal numbers for prostate, head and neck, or lung cancer. A subset of 40 cases for each anatomical site was used for training the clustering algorithms, while 20 cases per site were used for validation. Four unsupervised clustering algorithms were compared: k-means clustering, hierarchical clustering, Gaussian mixture models and spectral clustering. For each algorithm, the impact of different meta-parameters was studied. A systematic grid search of the types and number of features used in the clustering process, as well as the total number of clusters, was performed. All tested algorithms could be used consistently across all anatomical sites considered. Hierarchical clustering provided the finest categorization of features linked with high gamma values, which are most likely associated with problematic cases. On the other hand, spectral clustering provided finer categories with smaller gamma values, which may be of reduced clinical interest. Retrospective analysis of the clusters could be linked with a range of clinical behaviors. We have demonstrated how the unsupervised classification of EPID images can be retrospectively associated with clinical behavior such as weight loss, reduction of dose coverage or the presence of rectal gas pockets. Deploying this classification system clinically could act as a virtual safety net that systematically analyzes all daily images and flags potentially problematic cases for human review or triggers an adaptive radiotherapy process.
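The four algorithms compared are all available in scikit-learn; a minimal sketch of running them on one feature matrix is shown below (feature extraction from the EPID images and the meta-parameter grid search are omitted, and the cluster count is an arbitrary example).

```python
from sklearn.cluster import AgglomerativeClustering, KMeans, SpectralClustering
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler


def cluster_epid_features(features, n_clusters=5):
    """features: (n_images, n_features) array, e.g. per-fraction gamma statistics."""
    X = StandardScaler().fit_transform(features)
    return {
        "k-means": KMeans(n_clusters=n_clusters, random_state=0).fit_predict(X),
        "hierarchical": AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X),
        "gaussian mixture": GaussianMixture(n_components=n_clusters,
                                            random_state=0).fit_predict(X),
        "spectral": SpectralClustering(n_clusters=n_clusters,
                                       random_state=0).fit_predict(X),
    }
```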

15:20
P091: Line of Response (LOR)-based real-time tumor tracking for emission-guided radiotherapy
PRESENTER: Weiguo Lu

ABSTRACT. Purpose We propose a real-time tumor-tracking method that directly updates the tumor's center of mass (COM) position based on lines of response (LOR), without reconstructing positron emission tomography (PET) images, for emission-guided radiotherapy.

Method We first prove that the average of the LOR midpoints for a point source lies half-way between the detector ring center and the source transaxially; therefore, the COM of any source distribution can be estimated by doubling the average of the LOR midpoints. In the axial direction, the COM can be estimated by averaging the midpoints of LORs whose angles are within a given threshold of the transverse planes, based on the activity region. The statistical uncertainty of this COM estimation can be modeled using multinomial and uniformly distributed random variables for the event counts and bin positions, respectively. We verified the COM estimation and the statistical model using simulated point sources and 3D emission distributions.
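The transaxial claim is easy to check numerically: for a chord through the source with direction u, the chord midpoint is the foot of the perpendicular from the ring centre, and averaging over uniformly distributed directions gives half the source position. The sketch below verifies this with a toy 2D simulation; the source position and event count are arbitrary examples, and detector binning, noise and the axial direction are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)
source = np.array([60.0, -25.0])           # transaxial point-source position (mm)

n_events = 3000
theta = rng.uniform(0.0, np.pi, n_events)  # random LOR directions through the source
u = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Chord midpoint = foot of the perpendicular from the ring centre:
#   m = source - (source . u) u
midpoints = source - (u @ source)[:, None] * u

com_estimate = 2.0 * midpoints.mean(axis=0)        # double the average midpoint
print(com_estimate, np.linalg.norm(com_estimate - source))  # error of order 1-2 mm
```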

Results The dominant uncertainty of the COM estimation is the square of the source distance divided by the number of events. This is also verified by simulations, as the errors followed the theoretically established curves: the inverse square root of the number of events. For a typical injected FDG tracer dose of 18 kBq/ml, roughly 3000 events can be registered in 200 ms, thus allowing tumor tracking at the 200 ms scale with less than 2% error or, equivalently, less than 2 mm error when the true COM is within 10 cm of the detector center.

Conclusion Unlike PET image reconstruction that requires several minutes of data acquisition to reach adequate signal statistics, this COM estimation only requires sub-seconds of data and minimal calculation. Thus, this COM estimation method is feasible in real-time, enabling precise tumor tracking and motion adaptation in emission-guided radiotherapy treatment. Potential applications of this COM estimation also include online alignment and adaptive planning, and motion-corrected PET reconstruction.

15:25
P092: Measuring Registration and Motion Uncertainties in Brain Fractionated Stereotactic Radiation Therapy Patients Treated on a 3DOF vs a 6DOF Couch: Does a New 6DOF Couch Justify Zero PTV Margins?

ABSTRACT. New computer-based technologies have enabled replacing palliative whole-brain radiation therapy by brain Fractionated Stereotactic Radiation Therapy (FSRT), where a high dose of 30 to 35Gy is delivered in 5 fractions to one or more secondary brain metastases. New robotic linac couches capable of 6 degrees of freedom (6DOF) adjustments have also become commercially available over traditional 3DOF+rotation couches. Such 6DOF couches are expected to be advantageous in brain FSRT by enabling online 6DOF rigid registration of the patient to the planning CT (via a cone-beam CT on-board image). Consequently, some clinics have already reported shrinking the planning target volumes (PTV) of the brain FSRT plans from 3mm to zero, after commissioning a new 6DOF couch. In this study, we measure and analyze two sources of geometrical target uncertainties in brain FSRT deliveries on 6DOF vs 3DOF couches, which include the MR-CT registration required prior to tumor delineation at the planning stage, and the CBCT-CT registration at each fraction of the delivery stage.

All patient datasets (including 54 MR-CT and 177 CBCT-CT registrations) registered in the Eclipse treatment planning software were exported to VelocityAI, an independent image-analysis software. Registration uncertainties were measured by comparing shifts/rotations predicted by each software, while patient motion during each treatment fraction was estimated via a pre-treatment and a post-treatment CBCT.

Target-registration errors in MR-CT registrations were within 1.3mm in either Eclipse or Velocity, so long as direct registrations, as opposed to “chained registrations” (applied across multiple MRI datasets within a common exam), were employed. For the CBCT-CT registrations, the combined uncertainty (including motion and registration) decreased from a root-mean-square error of 1.41mm/1.6° for a 3DOF couch to 0.93mm/0.48° on a 6DOF couch. Our results demonstrate that a 6DOF couch can significantly reduce target uncertainties, but cannot justify shrinking PTV margins below 2mm.

15:30
P093: Assessment of non-respiratory abdominal displacements using breathing-compensated 4D imaging of the gastrointestinal tract
PRESENTER: Adam Johansson

ABSTRACT. Introduction

Peristalsis and segmental contractions in the gastrointestinal tract can induce changes in the shape and position of the stomach and intestines with respect to surrounding organs during a radiotherapy treatment session. These GI deformations are concealed by conventional 4D-MRI techniques, which were developed to visualize respiratory motion by binning acquired data into respiratory motion states without considering the phases of GI motion. We present a method to reconstruct breathing-compensated images showing the phases of periodic gastric motion and report on the effect of GI motion on surrounding structures.

Methods

67 DCE-MRI examinations were performed with a golden-angle stack-of-stars sequence that collected 2000 radial spokes over 5 min. The collected data were reconstructed, using a method with integrated respiratory motion correction, into a time series of 3D image volumes without visible breathing motion. From this series, a gastric motion signal was extracted. Using this motion signal, breathing-corrected back-projection images were sorted according to gastric phase and reconstructed into 21 gastric motion state images showing the phases of gastric motion.

Results

Reconstructed image volumes clearly showed the varying gastric states with no visible breathing motion or related artifacts. Gastric 4D MRIs showing periodic gastric motion were successfully reconstructed for 61 of 67 examinations. The frequency of periodic gastric motion varied among the 61 successfully reconstructed examinations, with a mean of 3 cycles/min. For the whole liver, spleen and liver GTVs, maximum displacements were found at the surface of the organs, with smaller displacements inside the organs and larger displacements outside.

Discussion

Periodic gastrointestinal motion can be visualized without confounding respiratory motion using the reported GI 4D MRI technique. Therefore, non-respiratory abdominal displacements measured using this technique, could help define internal target volumes for treatment planning or support modelling of gastrointestinal motion during development of tracking strategies for MR-guided radiotherapy.

15:35
P094: An automated, quantitative and patient specific quality assurance tool for evaluation of deformable image registration in clinical head and neck cases

ABSTRACT. Deformable Image Registration (DIR) supports clinical decisions, which requires the quantification of the registration error for each patient. The AAPM Task Group 132 (TG132) recommends assessing and reporting the uncertainties associated with each clinical registration, i.e. on a per-patient basis. The Jacobian Determinant, Harmonic Energy and Mutual Information metrics were identified as having potential to quantify the registration error. It is unclear from the previous literature whether a single quantitative metric alone is sensitive and adequate enough to identify clinically relevant errors, or whether a combination of these metrics should be used to assess clinical DIR accuracy. Also, the computation of some of these metrics is not currently available as a standard feature within the commercial software MIM (MIM Software Inc., Cleveland, OH) and therefore requires an external plugin in order to compute these quantitative metrics for clinical cases. An in-house QA tool was developed as an external plug-in to MIM to compute the quantitative metrics and generate a report for clinical cases. These metrics, as well as the total registration error and its distribution, are provided in the generated report.

ROIs were placed at high-dose PTV regions for ten clinical head and neck cases. The DIR accuracy of these cases was determined by a clinical team of Radiation Oncologists (RO), Radiation Therapists (RT) and Medical Physicists, and then compared to the metrics computed within the ROIs. All three quantitative metrics evaluated could track the DIR errors, with some outliers, and the information from each of the metrics complements the others for DIR assessment. Further evaluation of the sensitivity of these metrics requires assessment of large patient cohorts, of clinical sites other than HN, and between multimodal image datasets.
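Of the three metrics, the Jacobian determinant is the most straightforward to reproduce outside MIM; a NumPy sketch for a displacement vector field is given below (the array layout and voxel spacing are assumptions, and harmonic energy and mutual information are not shown).

```python
import numpy as np


def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the transform x -> x + u(x).
    dvf: displacement field of shape (3, nz, ny, nx) in the same units as
    `spacing`; values < 0 indicate folding, values != 1 local volume change."""
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # grads[i][j] = du_i/dx_j
    jac = np.zeros(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(jac)
```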

15:40
P096: Experimental validation of an MLC tracking treatment simulator with dose reconstruction
PRESENTER: Thomas Ravkilde

ABSTRACT. Purpose Current QA for MLC tracking involves pre-treatment delivery of the treatment plan to a moving dosimeter to determine its suitability for MLC tracking with an assumed motion. This manual and time-consuming process may instead be replaced by dose delivery simulations. Here, we perform such simulations and validate them by comparison with measurements. Methods MLC tracking experiments were performed on a TrueBeam linear accelerator (Varian). A Delta4 dosimeter (Scandidos) reproduced tumor motion from five previously treated liver SBRT patients and measured delivered doses at 72 Hz. Five VMAT fields per patient were delivered both with and without MLC tracking. Time-resolved motion-induced 3%/3mm gamma failure rates (GFRs) were determined for each field delivery by comparing measured cumulative dose distributions with a measured static reference. For computer simulation of the experiments, two in-house developed programs for 1) treatment delivery simulation and 2) dose reconstruction (DoseTracker) were combined. The treatment simulator took as input the DICOM-RT treatment plan and motion trajectories and generated a file with synchronized target positions and simulated linac parameters. This file was used by DoseTracker to reconstruct time-resolved delivered doses and calculate GFRs comparing simulated cumulative dose distributions with simulated static references. Finally, the time-resolved GFRs of simulations and experiments were compared and the root-mean-square deviation (RMSD) calculated. Results The simulated gamma failure rates agreed well with the measurements throughout beam delivery for both MLC tracking and standard non-tracking treatments, with an RMSD of 2.0 percentage points. Conclusions End-to-end simulations of advanced radiotherapy delivery, from treatment plan to delivered dose distributions, were demonstrated and experimentally validated. The simulator accurately predicted motion-induced dose errors for VMAT liver SBRT to a moving target throughout both MLC tracking and standard non-tracking deliveries. The tracking simulator with dose evaluation can eliminate the need for time-consuming experiments and QA measurements for MLC tracking.

15:15-16:00 Session 18D: Poster Session III - Outcomes and Radiobiology
15:15
P098: High doses to the heart affect overall survival in stage III lung cancer patients treated with conventionally fractionated radiation therapy.
PRESENTER: Mirek Fatyga

ABSTRACT. Background Recent studies have suggested that high radiation dose to the heart, when delivered to lung cancer patients undergoing radiation therapy, may adversely affect their overall survival (OS). A more detailed understanding of the effect of heart doses on OS could lead to heart avoidance strategies with better clinical outcomes. Methods 134 stage III NSCLC patients treated with radiation therapy were selected for this study. The OS for all patients was obtained from the institutional tumor registry. We used a multivariate Cox regression model to systematically search for dosimetric indices in the cumulative dose-volume histogram (DVH) of the heart that may be predictive of OS. In the multivariate analysis, we used a single DVH index combined with chemotherapy, age, dose prescription, mean lung dose, lung V20, tumor site, laterality, and stage. To further investigate the potential sensitivity of heart substructures to radiation damage, we digitally subdivided each heart into four equal substructures (superior-right, superior-left, inferior-right, inferior-left) and repeated the same analysis for each substructure separately. Results Among the 134 patients, 52 patients were alive at the last follow-up while 82 were not. In the multivariate analysis of the whole-heart DVH, age before RT (p=0.02), stage of the cancer (IIIA/IIIB) (p=0.02), chemotherapy (p=0.0005) and the cumulative DVH indices V[%]_55Gy (p=0.011) and V[%]_60Gy (p=0.039) were found to be predictive of OS. In the analysis which used the DVHs of the four heart segments, the same patient variables were found to be significant, while the V[%]_55Gy (p=0.014) and V[%]_60Gy (p=0.024) indices were found to be significant only in the superior-right segment of the heart. Conclusions High doses to the heart in radiation therapy for stage III NSCLC lung cancer were associated with decreased OS, especially doses to the superior-right part of the heart.
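A minimal sketch of a multivariate Cox model using the lifelines library; the column names and the choice of library are assumptions, not those of the study.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, with e.g. 'os_months', 'death_observed', 'heart_v55',
# 'age', 'chemo', 'stage_iiib', 'mean_lung_dose', ... (illustrative names only).


def fit_cox_model(df: pd.DataFrame):
    cph = CoxPHFitter()
    cph.fit(df, duration_col="os_months", event_col="death_observed")
    cph.print_summary()      # hazard ratios and p-values for each covariate
    return cph
```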

15:20
P099: Comparison of automated and pathologist assessment of radiation induced pulmonary fibrosis for Masson’s trichrome
PRESENTER: Li Ming Wang

ABSTRACT. Purpose - Radiation Induced Pulmonary Fibrosis (RIPF) is a chronic, adverse side-effect characterized by increased collagen deposition and disrupted interstitial structures, leading to a thickening of the alveolar walls. The methods, criteria and scoring schemes currently used by pathologists for RIPF are subjective and lack reproducibility. This study seeks to validate a novel quantitative method for automated analysis.

Methods - Twenty-five rats were separated into five groups: CG - sham-irradiation control, CR - irradiated control, DR - drug treated using granulocyte-macrophage colony stimulating factor, IV - intravascularly administered bone-marrow derived cell therapy, and IT - intratracheally administered bone-marrow derived cell therapy. All groups were imaged via cone-beam computed tomography, planned for treatment and given one fraction of 18 Gy (6 MV photon beam, Novalis Tx linear accelerator) to the right lung. The DR, IV and IT groups were given their respective treatments immediately after irradiation. The CG and CR groups received no treatment to ameliorate radiation damage. Rats were then sacrificed twenty-four weeks post-irradiation, and their lungs fixed, paraffin embedded and stained with Masson’s Trichrome. Stained samples were anonymized and then scored by a certified pathologist using the modified Ashcroft scale. For the automated analysis, samples were digitized and analyzed using colour thresholding to isolate and quantify areas of aniline blue, with large airways and vessels excluded from the analysis.
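A minimal sketch of the colour-thresholding idea in Python with scikit-image (the study's actual software and thresholds are not reproduced here); the file name, hue/saturation cut-offs and background handling are assumptions, and the exclusion of large airways and vessels is not implemented.

```python
import numpy as np
from skimage import io, color

# Hypothetical digitized Masson's trichrome section (RGB); the path is illustrative
img = io.imread("trichrome_section.png")[:, :, :3] / 255.0
hsv = color.rgb2hsv(img)
h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]

# Rough aniline-blue mask: blue hues with sufficient saturation; thresholds are illustrative
blue_mask = (h > 0.5) & (h < 0.75) & (s > 0.2) & (v > 0.15)
tissue_mask = v < 0.95            # crude exclusion of the white background

fibrosis_fraction = blue_mask.sum() / max(tissue_mask.sum(), 1)
print(f"Fraction of tissue area stained blue: {fibrosis_fraction:.3f}")
```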

Results - The Spearman’s correlation (rho) between the pathologist and automated scoring across all samples was rho_all=0.7147 (p-value_all=7.69e-26). Only the CG group showed no correlation (rho_CG=0.0611, p-value_CG=0.7194), while correlations were strong for CR (rho_CR=0.8720, p-value_CR=9.10e-12), IV (rho_IV=0.8141, p-value_IV=1.3e-06), DR (rho_DR=0.7392, p-value_DR=1.74e-07) and IT (rho_IT=0.7031, p-value_IT=1.27e-04).

Conclusion – An automated histological analysis quantifying the amount of a color of interest can be used as a surrogate for pathologist scoring. In its current form, our automated analysis is only effective for scoring the extent, and not the density, of RIPF.

15:25
P100: Comparing the efficacy of SBRT and IMRT in reducing radiation induced pulmonary fibrosis
PRESENTER: Li Ming Wang

ABSTRACT. Purpose - Sparing healthy lung tissue during radiotherapy (RT) has been shown to reduce the severity of radiation induced pulmonary fibrosis (RIPF). Stereotactic body radiation therapy (SBRT) and intensity modulated radiation therapy (IMRT) offer the possibility of reducing dose to healthy tissue through better dose distribution and more conformal fields. In this study, we attempt to validate the benefit of SBRT in reducing RIPF severity by comparing outcomes assessed via traditional physician scoring and via a novel automated analysis based on changes in radiodensity.

Methods - RT patients (84 total) were assigned to Conventional RT (41 patients), IMRT (13 patients) or SBRT (30 patients) groups based on their treatment protocols. Patients were scored for RIPF severity at six months post-treatment on computed tomography (CT) images by a group of physicians, using a five-grade scale, and by an automated algorithm detecting radiodensity changes between the pre-treatment and six-month post-treatment CT images. A two-sample t-test was then performed to test for significant differences between grade distributions.

Results - The physician scoring indicated that SBRT (mean grade of 1.400) is significantly (p<0.05) associated with reduced RIPF severity compared to Conventional RT and IMRT (mean grades of 2.0244 and 2.0000, respectively). The automated analysis produced contradictory results, with IMRT being associated with significantly (p<0.05) better RIPF outcomes (mean score of -0.045) compared with SBRT and Conventional RT (mean scores of -0.024 and 2.8e-4, respectively). The correlations between the two scoring modalities were 0.71, 0.59, 0.68 and 0.64 for the Conventional, IMRT and SBRT groups and all groups combined, respectively.

Conclusions - Ultimately, SBRT demonstrates the greatest reduction in RIPF severity. While the results derived from the automated analysis are promising, they contradict the physician appraisals and highlight the need to improve our automated approach to assessing RIPF, as radiodensity change alone is not a sufficient feature.
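For reference, the two-sample comparison of grade distributions mentioned in the Methods can be sketched as follows (Welch's variant shown); the grade values are invented for illustration and are not the study data.

```python
from scipy import stats

# Invented physician RIPF grades for two treatment groups (illustration only)
grades_sbrt = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]
grades_conventional = [2, 3, 2, 2, 1, 3, 2, 2, 3, 2]

# Two-sample t-test; equal_var=False gives Welch's test, which does not assume equal variances
t_stat, p_value = stats.ttest_ind(grades_sbrt, grades_conventional, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```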

15:30
P101: Development of indigenous tool for analysis of large dataset of dose volume histograms
PRESENTER: Gaganpreet Singh

ABSTRACT. In radiotherapy, dose volume histograms (DVHs) are the most widely used tool for plan evaluation. Varian's Eclipse treatment planning system offers a plan uncertainty module which can simulate patient setup uncertainties in a patient's treatment plan and produce a large dataset of DVHs associated with these uncertainties. Many tools are available for radiotherapy plan analysis using DVH datasets, but no tool is available to analyze the large DVH datasets obtained from the Eclipse TPS. In this work, an in-house "DVH Analyzer" program was written in MATLAB® (R2011b); it extracts a wide range of dosimetric parameters from the large DVH dataset and integrates plan quality and radiobiological parameters (TCP/NTCP) to assess the clinical impact of these uncertainties on patients. A carcinoma of the cervix case was used for demonstration. All results obtained from the "DVH Analyzer" were verified against the TPS DVH statistics and found to be in good agreement. Plan quality and radiobiological parameters were tested against manual calculations and found to be identical. The tool is compatible with the DVHs generated by the Eclipse plan uncertainty module and provides the processed output of all DVHs (generated with the uncertainty module) for each contoured structure in Excel file format. In conclusion, the "DVH Analyzer" has the potential to analyze large DVH datasets and can be utilized in radiotherapy clinics for a better understanding of uncertainties and their clinical impact.
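A minimal Python sketch of the kind of DVH metric extraction such a tool performs (the actual DVH Analyzer is written in MATLAB and is not reproduced here); the metric list and the toy DVH curve are assumptions.

```python
import numpy as np

def dvh_metrics(dose_axis, cum_volume_pct):
    """Extract simple metrics from one cumulative DVH curve.

    dose_axis      : dose bin values in Gy (increasing)
    cum_volume_pct : % of the structure volume receiving at least each dose
    """
    metrics = {}
    # Vx: % of the volume receiving at least x Gy
    for x in (20, 40, 50):
        metrics[f"V{x}Gy[%]"] = float(np.interp(x, dose_axis, cum_volume_pct))
    # Dx%: minimum dose received by the hottest x% of the volume
    for x in (2, 50, 95):
        # np.interp needs an increasing x-axis, so the decreasing DVH is reversed
        metrics[f"D{x}%[Gy]"] = float(np.interp(x, cum_volume_pct[::-1], dose_axis[::-1]))
    return metrics

# Toy cumulative DVH for one structure under one simulated uncertainty scenario
dose = np.linspace(0, 60, 121)
volume = 100 * np.clip(1 - dose / 55, 0, 1)
print(dvh_metrics(dose, volume))
```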

15:35
P102: A Prediction Model Incorporating PET Radiomics for Lymph Node Metastases in Esophageal Cancer Patients
PRESENTER: Zhenwei Shi

ABSTRACT. Objective: The primary objective is to develop and validate a prediction model for pathological lymph node metastases (LNMs) in esophageal cancer (EC) patients using clinical and PET radiomic features that improves on current staging methods. Materials and Methods: In total, 203 EC patients were included in the development (n=130, STAGE cohort) and external validation (n=73, CROSS cohort) cohorts. Seven clinical variables (age, sex, tumour location, histological cell type, clinical T-stage, type of neoadjuvant treatment and tumour regression grade (Mandard score)) were considered for inclusion. Then, 154 radiomic features were extracted from the GTV in compliance with the Image Biomarker Standardization Initiative (IBSI). The following feature reduction methods were used. Firstly, Recursive Feature Elimination (RFE) was used to select the optimal clinical variable combination, using the area under the curve (AUC) as the selection metric. Secondly, the pair-wise Pearson correlation of radiomic features and the Kolmogorov-Smirnov statistic were calculated to reduce the feature dimension. The LASSO method was then used to find the most frequently selected radiomic features. The high-frequency features were fitted into an RFE with 5-fold cross validation to select the feature combination. Multivariable logistic regression analysis was performed to develop the prediction model. Results: Four clinical variables (age, clinical T-stage, neoadjuvant therapy and Mandard score) were selected using the RFE method optimized by AUC. These were also the four most frequently selected features through LASSO performed in a 500-iteration loop with randomly split training and test sets. Conclusion: These results show that a prediction model based on clinical variables has moderate accuracy to predict LNMs and demonstrates transferability between international centres. However, the addition of PET radiomics added little to the predictive model performance. This model requires further external validation, but has potential clinical utility in EC patients who are restaged after neo-adjuvant treatment by influencing the surgical decision to operate, or to offer alternative adjuvant therapy.
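The RFE step around a logistic regression base model can be sketched with scikit-learn as below; the synthetic feature matrix stands in for the clinical/radiomic features and is not the study data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the feature matrix (X) and pathological LNM labels (y)
X, y = make_classification(n_samples=130, n_features=20, n_informative=5, random_state=0)

# Recursive feature elimination around a logistic regression base model
base = LogisticRegression(max_iter=1000)
rfe = RFE(estimator=base, n_features_to_select=4)
rfe.fit(X, y)
selected = np.flatnonzero(rfe.support_)
print("Selected feature indices:", selected)

# Cross-validated AUC of the model restricted to the selected features
auc = cross_val_score(base, X[:, selected], y, cv=5, scoring="roc_auc").mean()
print(f"5-fold CV AUC: {auc:.3f}")
```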

15:40
P103: Machine learning techniques in microdosimetry: application to energy deposition within cell populations
PRESENTER: Iymad Mansour

ABSTRACT. Studies of cellular radiation response traditionally use experimental and Monte Carlo (MC) methods, both presenting diverse challenges. This work investigates applications of Machine Learning (ML) techniques to advance studies of cellular radiation response, focusing on the prediction of non-uniform energy deposition within cell populations. Logistic regression, random forest and neural network algorithms are trained using MC-generated datasets of specific energy (energy imparted per unit mass) scored in cellular compartments for different homogenized tissue volume (macroscopic) doses. These MC simulations of microscopic tissue structure involve >1500 explicitly-modelled cells (radii 5 to 10 microns) with representative non-water elemental compositions irradiated by 20-370 keV and 60Co photon beams. The trained algorithms are able to use solely experimentally-measurable metrics to predict both the mean and standard deviation of the specific energy, and are compared based on prediction error (relative to the MC value) and computational requirements. MC simulations demonstrate that specific energy distributions within cell populations are sensitive to incident photon energy, macroscopic dose level, and tissue elemental composition. For example, at a macroscopic dose of 0.025 Gy there is a microdosimetric spread (standard deviation of the specific energy distribution normalized by the mean) >50% for all cell size and beam quality variations. The most accurate ML algorithm is the random forest, which is able to predict the mean and standard deviation of the specific energy within the cell population with mean absolute errors of 2% and 10%, respectively, at 0.025 Gy (all model variations), >10^4 times faster than MC simulation. ML algorithms show promise in predicting variations in energy deposition within cell populations, providing an efficient alternative to computationally-intensive MC simulations. Ongoing work is exploring ML feature sensitivity and a larger parameter space, as well as connecting experimental results of cellular radiation response with computational models.
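A scikit-learn sketch of the random forest idea, predicting the mean and standard deviation of the specific energy from measurable inputs; the input features, toy relationships and data are assumptions and carry no real physics.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the MC-generated dataset:
# inputs (photon energy [keV], macroscopic dose [Gy], cell radius [um]) ->
# targets (mean, standard deviation) of the specific energy distribution
rng = np.random.default_rng(0)
X = rng.uniform([20, 0.005, 5], [370, 0.5, 10], size=(2000, 3))
y_mean = X[:, 1] * (1 + 50 / X[:, 0])                  # toy relationship, not real physics
y_std = y_mean * 0.5 * np.sqrt(0.025 / X[:, 1])
y = np.column_stack([y_mean, y_std])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)   # handles multi-output targets
rf.fit(X_tr, y_tr)

rel_err = np.abs(rf.predict(X_te) - y_te) / y_te
print("Mean relative error (mean, std of specific energy):", rel_err.mean(axis=0))
```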

15:45
P105: Deep-Mining lncRNA transcriptomics of breast cancer by TCGA RNA-Seq data
PRESENTER: Xiaoping Su

ABSTRACT. Introduction Breast cancer is a heterogeneous disease that can be classified in 4 subgroups using transcriptional profiling. The role of lncRNA transcriptomics in human breast cancer biology, prognosis and molecular classification remains unknown.

Methods & Results Using an integrative comprehensive analysis of lncRNA, mRNA and DNA methylation in 900 breast cancer patients from The Cancer Genome Atlas (TCGA) project, we unraveled the molecular portraits of 1,700 expressed lncRNAs. Some of these lncRNAs (e.g., HOTAIR) have been previously reported, while others are novel (e.g., HOTAIRM1, MAPT-AS1). The lncRNA classification correlated well with the PAM50 classification for the basal-like, Her-2 enriched and luminal B subgroups, in contrast to the luminal A subgroup, which behaved differently. Importantly, Estrogen Receptor (ESR1) expression was associated with distinct lncRNA networks in lncRNA clusters III and IV. Gene set enrichment analysis for cis- and trans-acting lncRNAs showed enrichment for breast cancer signatures driven by breast cancer master regulators. Almost two thirds of these lncRNAs were marked by enhancer chromatin modifications (i.e., H3K27ac), suggesting that lncRNA expression may result in increased activity of neighboring genes. Differential analysis of gene expression profiling data showed that the lncRNA HOTAIRM1 was significantly down-regulated in the basal-like subtype, and DNA methylation profiling data showed that HOTAIRM1 was highly methylated in the basal-like subtype. Thus, our integrative analysis of gene expression and DNA methylation strongly suggests that the lncRNA HOTAIRM1 may act as a tumor suppressor in the basal-like subtype.

Conclusion & Significance Our study depicts the first lncRNA molecular portrait of breast cancer and shows that lncRNA HOTAIRM1 might be a novel tumor suppressor.

15:15-16:00 Session 18E: Poster Session III - Workflow and QA III
15:15
P106: Implementation of the MOSAIQ Oncology Information System at CHUM - Going Paperless and Smart

ABSTRACT. As part of their migration to a new integrated center, CHUM's Medical Oncology (MO) and Radiation-Oncology (RO) Departments decided to move to a specialized information system in order to improve their processes, eliminate paper from the clinic and pave the way for a better infrastructure to gather and use data for clinical quality control, research and development.

The implementation process for MOSAIQ was a 3 year long journey that involved many participants and steps, and still has some ongoing improvements being deployed. This article will present the different steps and challenges of implementing such a system in a very large University Hospital, as well as provide some insights into the automation and controls that were achieved due to the new information system.

The implementation of an information system within two large departments of a University Hospital is a very complex and long endeavor. It requires the participation and collaboration of the departments’ staff, as well as the hospital’s administration and IT department, and finally of the information system provider. The desire to move from a semi-paperless process to an almost completely paperless process only adds to this already complex project. But the gains are valuable, though not necessarily immediately visible to end users.

Automation is an important part of the success of such a deployment and helps in converting staff to become users of the new information system and abandon paper.

Finally, implementing a quality control process using the data found in the information system not only ensures the system is properly used, but also ensures that patients are properly taken care of and allows hospital staff to catch errors early in the process, improving overall patient care.

15:20
P107: Improving development practice for creating and maintaining clinical, in-house developed software tools.
PRESENTER: Matthew Jennings

ABSTRACT. Like many departments worldwide, the Royal Adelaide Hospital relies heavily on in-house developed software. Despite its benefits, in-house development presents challenges regarding software maintainability, dependability, quality and usability. We demonstrate an example of our more complex in-house software, LogAnalysis, along with our efforts to address these challenges by adopting better coding practices. New practices adopted include: comprehensive, automated testing; a standardised coding style; requirements for both user and developer documentation; formalised code review; and development under version control using Git and the Git Flow workflow. Several releases of LogAnalysis have been made since its initial release in July 2018 to improve code efficiency, intuitiveness and aesthetics (usability); fix bugs (quality, dependability); and refactor code (maintainability). The introduction of automated testing has reduced the release workload from up to a few hours to less than five minutes. Deliberate code review has prevented multiple errors from entering production software, including an error that would have produced false-negative QA results. Code review also ensured that multiple developers were intimately familiar with the latest code, which increased the availability of effective troubleshooters to software users. Improved coding style and the availability of documentation have enabled faster code comprehension and accelerated troubleshooting. Maintaining a clean Git history has allowed efficient fall-back to previous stable versions when unforeseen errors were introduced by a new release, and has facilitated faster diagnosis of errors compared with other projects. Actively adopting better coding practices has improved the quality, maintainability, dependability and usability of LogAnalysis.
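As an illustration of the kind of automated test that can gate a release, here is a minimal pytest sketch; the helper function, data and tolerance are hypothetical and are not taken from LogAnalysis.

```python
# test_mlc_stats.py -- run with `pytest`
import numpy as np

def max_mlc_deviation(planned, delivered):
    """Hypothetical helper: largest absolute leaf position error in mm."""
    return float(np.max(np.abs(np.asarray(delivered) - np.asarray(planned))))

def test_max_mlc_deviation_flags_large_error():
    planned = [10.0, 12.0, 15.0]
    delivered = [10.1, 12.0, 17.5]      # one leaf off by 2.5 mm
    assert max_mlc_deviation(planned, delivered) == 2.5

def test_identical_positions_give_zero_deviation():
    positions = [0.0, 5.0, -5.0]
    assert max_mlc_deviation(positions, positions) == 0.0
```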

15:25
P108: A Method for Serving Linac Log File Data On-Demand for Analysis Using a GraphQL Application Programming Interface

ABSTRACT. Linac log file analysis is a useful tool for performing quality assurance (QA) testing on IMRT/VMAT treatment plans by calculating the delivered dose and running a variety of other linac performance checks. Cancer centres can develop analysis programs to perform these types of QA tests and a standardized way to access the log file data, regardless of file type, makes it easier to develop more efficient, automated QA software. One such method has been developed in Python to collect and store log file data and then serve the data using a web-based GraphQL Application Programming Interface (API).

This API provides a standardized query syntax for clients, typically analysis scripts or web-based applications, to fetch treatment log data from a database. Logs of two Varian file types (Dynalog and Trajectory) can be imported and stored in a standardized format within the API's database. Logs can then be queried and filtered by patient ID, treatment unit, and date range. The query can specify a subset of the log data which improves load times since the entire file does not need to be parsed to find the data.

Serving log data via this API resulted in several benefits. Dynalog file data could be compressed to 41.6% of its original size when stored in the API's database. The time to load log data fetched via the API scaled linearly with the number of measurements in the specified log, and large collections of log data could be returned efficiently by loading up to 100 log files per query. Compared with a script that read data directly from the log files, the API showed a significant improvement in efficiency, processing over 1400 logs/minute compared to 110 logs/minute for the direct-read method.
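To illustrate how a client script might consume such an API, here is a hedged Python sketch using the requests library; the endpoint URL, query fields and variable names are invented and do not reflect the actual schema.

```python
import requests

# Hypothetical endpoint and schema; real field names depend on the API implementation
GRAPHQL_URL = "http://qa-server.local/graphql"

query = """
query ($patientId: String!, $start: String!, $end: String!) {
  trajectoryLogs(patientId: $patientId, dateRange: {start: $start, end: $end}) {
    beamName
    mlcPositions { timestamp leafIndex expected actual }
  }
}
"""
variables = {"patientId": "ANON001", "start": "2019-01-01", "end": "2019-06-01"}

resp = requests.post(GRAPHQL_URL, json={"query": query, "variables": variables})
resp.raise_for_status()
logs = resp.json()["data"]["trajectoryLogs"]
print(f"Fetched {len(logs)} trajectory logs")
```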

15:30
P109: Audit Data Tool for the DcmCollab Dose Plan Bank
PRESENTER: Simon Krogh

ABSTRACT. Introduction The DcmCollab dose plan bank is a DICOM based system designed to receive, analyze and share DICOM RT data nationwide. It has proven its worth in many respects, but for audit trials in which a number of sites perform a prescribed task using the same original DICOM data set, some issues have occurred. The aim of this study was to develop a tool, embedded in DcmCollab, which addresses these issues by facilitating and managing contouring and treatment planning audits applicable to various cancer sites.

Materials & Methods The DcmCollab dose plan bank was developed locally using Microsoft SQL and .Net. For DICOM support the ClearCanvas SDK was used. The system consists of a DICOM SCP for receiving data, services processing the imported data, and a website for the users to view and interact with the data. Libraries from a locally developed tool were used for anonymization as prescribed in the DICOM standard to avoid UID collisions when the data is collected from all participating centers. To send the generated data to the participating institutions, the audit tool uses the existing export tools in DcmCollab which perform standard DICOM transmissions.

Results The audit tool was implemented as a new page in DcmCollab. The user is prompted to select the basis patient data set, some basic options, and where to send the processed datasets. The user selections are stored as a job in the database, which is picked up by an underlying service. The service performs the anonymization and export of the processed datasets.

Discussion & Conclusions The implemented audit tool allows the user to easily generate data fully compatible with the DcmCollab system. In the future, tools to evaluate and visualize the returned audit data sets, including calculation of contour similarities and treatment plan variations, will be developed.
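The consistent UID remapping idea can be sketched in Python with pydicom as below (the actual DcmCollab tools are built on .NET and the ClearCanvas SDK, so this is only a conceptual illustration); the tag selection and file paths are assumptions.

```python
import pydicom
from pydicom.uid import generate_uid

uid_map = {}   # original UID -> replacement UID, shared across the whole exported data set

def remap_uid(original_uid):
    """Give each original UID a single consistent replacement to avoid collisions."""
    if original_uid not in uid_map:
        uid_map[original_uid] = generate_uid()
    return uid_map[original_uid]

def anonymise(path_in, path_out, pseudo_id):
    ds = pydicom.dcmread(path_in)
    ds.PatientName = pseudo_id
    ds.PatientID = pseudo_id
    ds.PatientBirthDate = ""
    # Remap the UIDs that link studies, series and instances together
    for keyword in ("StudyInstanceUID", "SeriesInstanceUID",
                    "SOPInstanceUID", "FrameOfReferenceUID"):
        if keyword in ds:
            setattr(ds, keyword, remap_uid(getattr(ds, keyword)))
    ds.save_as(path_out)

# anonymise("CT.original.dcm", "out/CT_anon.dcm", "AUDIT_CASE_01")
```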

15:35
P110: Performing QA with hybrid devices: automation and centralization of results
PRESENTER: Alain Sottiaux

ABSTRACT. Introduction Linacs and other equipment used in radiotherapy require appropriate periodic quality controls. Many devices, software packages and methods are used for that purpose, and obtaining an overview of the status of all controls requires checking many data sources (paper sheets, Excel sheets and dedicated software). Our department recently purchased TrackIt (PTW, Freiburg) in order to centralize all control results in one single place. Methods Data collection and analysis are automated (one-click operations) whenever possible. Various in-house scripts collect all results, using either manual data input or the XML file input interface of TrackIt. Our solution combines VBA macros to transfer Excel sheet data, C# Eclipse scripting to transfer Portal Dosimetry results, and a Python-based web application to collect and transfer results from external databases, text files or DICOM files. DICOM images are analyzed with the Pylinac package (integrated in our web application). Results 17 quality controls (daily, weekly, monthly and quarterly) for two TrueBeam linacs, one HDR brachytherapy afterloader and an iodine waste tank are now centralized in TrackIt and available through a web interface. They were previously scattered across many locations (paper sheets, Excel, dedicated software). Some additional quantitative analyses now replace former visual inspections. Conclusions TrackIt allows everyone in our team to have a quick and easy overview of all quality controls in one single place. With some development effort, including various in-house scripts and a web application, data collection and analysis from various sources are fast, easy and paperless.

15:40
P111: Data pipelines in radiation oncology: lessons from software engineering
PRESENTER: Gabriel Couture

ABSTRACT. Process automation in clinical environments is highly desirable as it reduces the workload and eliminates error-prone tasks. Extract/transform/load (ETL) operations, for instance, are often used to move data from one IT system to another. At our institution, ETL operations are used to federate prostate cancer patient data in a research database used to evaluate survival and toxicities. In these data pipelines, laboratory results and dosimetric indices are pulled from hospital IT systems and treatment planning systems (TPS), respectively. Maintaining these pipelines is not trivial, however, as several external factors can break them: new firewall rules, software updates, changes in the vocabulary used, etc. It is therefore important to design the pipelines so that they can cope elegantly with frequent changes. Lessons learned from software engineering guided us towards the use of the factory design pattern and data structures such as patient tree representations to handle the inherent complexity of ETL tasks in an ever-changing IT ecosystem. Memory usage by the pipeline is also a concern, which was handled by the virtual proxy design pattern. This methodology will be presented through a case study where dosimetric indices are generated from DICOM-RT files derived from brachytherapy treatment planning. The pipeline first queries a research-dedicated PACS server, retrieves the required files, performs consistency checks, and calculates dosimetric indices. Results are written back into the DICOM files, using the appropriate tags defined by the standard for this purpose. Any exception raised during the process is contextualized and sent to the developer for potential action. Implementation details and software engineering guidelines will be presented, as well as avenues for improvement, including the use of tools such as Apache Airflow to schedule and monitor ETL processes in a clinical setting.
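A schematic Python sketch of the factory design pattern applied to ETL extractors; the class names and returned values are hypothetical and do not describe the institution's actual implementation.

```python
from abc import ABC, abstractmethod

class Extractor(ABC):
    """Common interface for every data source feeding the research database."""
    @abstractmethod
    def extract(self, patient_id: str) -> dict: ...

class LabResultExtractor(Extractor):
    def extract(self, patient_id):
        # placeholder for a query against the hospital laboratory system
        return {"psa": 4.2}

class DoseIndexExtractor(Extractor):
    def extract(self, patient_id):
        # placeholder for dosimetric indices pulled from the TPS / DICOM-RT files
        return {"rectum_d2cc": 62.1}

class ExtractorFactory:
    _registry = {"labs": LabResultExtractor, "dose": DoseIndexExtractor}

    @classmethod
    def create(cls, source: str) -> Extractor:
        # Swapping or adding a source touches only the registry, not the pipeline logic
        return cls._registry[source]()

record = {}
for source in ("labs", "dose"):
    record.update(ExtractorFactory.create(source).extract("PATIENT-001"))
print(record)
```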

15:45
P112: Secure-DICOM-Uploader: A platform for anonymising and transferring imaging data from hospital sites to remote repositories
PRESENTER: Daniel Beasley

ABSTRACT. For a large multi-site project, imaging (DICOM and other formats) and non-imaging data from different hospitals are anonymised and sent to a central repository for analysis. To provide secure data transfer from different hospital sites to a central repository, we have developed a user-friendly solution based on software components designed for and established in large-scale clinical trials. It was designed primarily for quality assurance in radiotherapy clinical trials; however, it can also be used for general multi-center clinical trials and research projects.

XNAT is a platform that provides for easy data management, image viewing and synchronisation of data. We have developed a customised XNAT-based workflow and placed this within a Docker service for easy distribution, reliability and control.

The Uploader consists of two separate XNAT servers running within the same Docker service, one hosting non-anonymised data, the other anonymised. DICOM images are pushed from PACS or placed in a network folder which is then automatically imported into the non-anonymised XNAT server. The local user can log into the XNAT server and view session details and visualise the images via the OHIF (Open Health Imaging Foundation) viewer. Data is anonymised either automatically, where subject and session IDs are generated using hashing, or manually assigned for clinical trials. Clinical Trial protocols are used to ensure data conforms to expectations (ROI labels, structures) before anonymisation. When data is anonymised, a link is created between anonymised and identifiable data to ensure all data can be identified by the clinical trial management.
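The automatic hashing of subject and session IDs can be sketched as follows; the salt handling, ID format and inputs are illustrative assumptions rather than the Uploader's actual scheme.

```python
import hashlib

SITE_SALT = "example-site-secret"   # illustrative; a real deployment must protect its salt

def pseudonymise(identifier: str, prefix: str, length: int = 10) -> str:
    """Derive a stable, non-reversible ID from an identifiable string."""
    digest = hashlib.sha256((SITE_SALT + identifier).encode("utf-8")).hexdigest()
    return f"{prefix}_{digest[:length].upper()}"

subject_id = pseudonymise("HOSPITAL-MRN-123456", "SUBJ")
session_id = pseudonymise("HOSPITAL-MRN-123456/CT/2019-06-19", "SESS")
print(subject_id, session_id)   # identical inputs always map to the same pseudonymous IDs
```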

The Secure DICOM Uploader greatly improves the experience and reliability of data transfer for clinical research. The user-friendly software requires minimal training while providing a number of checks to prevent identifiable information being transferred. It is planned to distribute the software for upcoming clinical trials.

15:50
P113: The Functional Paradigm in RT Computing: Features & Realization
PRESENTER: Nicolas Depauw

ABSTRACT. Algorithms have been the focus of computing in RT. System architectures and data models have not received comparable attention, and current systems continue dated paradigms that cannot meet the requirements of adaptive radiotherapy or of temporal and dynamic state management in RT. We present a data-driven architecture, derived from the functional programming paradigm (FP), and demonstrate its implementation. In FP, pure functions operate on their input parameters only, have no side effects and produce predictable and immutable output. The DICOM standard uses immutable data and requires that data cannot be changed once produced. Without immutability, data cannot be presumed correct or safe for use. Our computational unit is the triplet of input data, function and output data, and is over-specified in that the inputs and the function always produce the expected output data. In our architecture, we tag each triplet with a unique ID (akin to the DICOM UID) that identifies the execution of that triplet in a particular context. Triplets form associative networks because the output from one triplet is used by another triplet. An immutable data instance is the product of triplet execution. The network of triplet IDs, when passed to the computational framework, allows the computational reconstruction of the immutable data without explicit storage requirements; a feature that goes beyond the DICOM definition of realized data and allows a description of the end-to-end computational process (say, the creation of a treatment plan) by the immutable inputs (i.e. CT) and the networks of triplet IDs. Computational states of an action are documented in the network, and storage is minimized to inputs and network descriptions (in JSON). Features of this architecture include “memory”, as networks re-execute upon change of input; support for service-oriented architectures and for distributed and cloud computing, because triplets execute independently; and automatic versioning and traceability of any data construct.
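A toy Python sketch of the triplet concept (immutable inputs, function and parameters identified by a deterministic ID); this illustrates the idea only and is not the presented system.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen -> instances are immutable, like the data they describe
class Triplet:
    function_name: str
    input_ids: tuple           # IDs of immutable inputs (e.g. a CT data instance)
    parameters: dict

    @property
    def triplet_id(self) -> str:
        """Deterministic ID: the same inputs and function always yield the same ID."""
        payload = json.dumps(
            {"f": self.function_name, "in": list(self.input_ids), "p": self.parameters},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

# The output never needs explicit storage: the network of triplet IDs plus the
# immutable inputs is enough to re-execute and reconstruct it on demand.
dose_calc = Triplet("compute_dose", ("CT_1.2.840.999.1",), {"algorithm": "MC", "histories": 1e7})
print(dose_calc.triplet_id)
```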

15:30-16:10 Session 19: Machine Learning and Monte Carlo II
Location: Opera C
15:30
GaGa: GAN for GATE
PRESENTER: David Sarrut

ABSTRACT. Phase-Space (PHSP) files are of great interest in Monte-Carlo (MC) simulations for various medical physics tasks, such as the description of a linac photon source or the storage of particles exiting a voxelised patient image to simulate an imaging process. However, PHSP files are generally large, from a few to several tens of GB, and inconvenient to use efficiently. In this work, we propose a generic virtual source model of Monte-Carlo PHSP using Generative Adversarial Nets (GAN). GANs were recently proposed as deep neural network architectures that learn to mimic a data distribution. In this work, we model a PHSP with a GAN, and the resulting generator neural network (G), obtained after reaching a Nash equilibrium, can be used as a source of particles reproducing the statistical properties of the original PHSP. All methods were implemented in the open-source GATE platform and will be made available to the community. We applied this concept to a linac PHSP and to the MC simulation of single photon emission CT (SPECT), where the particles exiting a patient CT image were recorded in a PHSP. At the end of the training process, the PHSP files were represented by neural networks with about 0.5 million weights (around 4 MB) instead of the few GB of the initial PHSP. Linac simulations using particles from the reference PHSP or generated by the GAN led to similar depth dose profiles (2-4 %). For the SPECT example, images generated with the PHSP or by the generator network G were overall in good agreement. Detailed statistical properties and limitations of the particles generated from the GAN are currently under investigation, but the proposed method is already promising and may potentially be applied to a wide range of simulations.
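A minimal PyTorch sketch of the generator-as-particle-source idea; the network size, latent dimension and phase-space layout are illustrative assumptions and do not reproduce the GaGa architecture or its training.

```python
import torch
import torch.nn as nn

LATENT_DIM = 8
PHSP_DIM = 7   # e.g. (E, x, y, z, dx, dy, dz) per particle; layout is illustrative

# A small fully connected generator; trained weights would stand in for the PHSP file
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, PHSP_DIM),
)

# "Sampling particles": draw latent noise and map it through the generator
with torch.no_grad():
    z = torch.randn(10_000, LATENT_DIM)
    particles = generator(z)   # shape (10000, 7); untrained here, so the values are meaningless
print(particles.shape)
```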

15:40
Preliminary results in using Deep Learning to emulate BLOB, a nuclear interaction model

ABSTRACT. Monte Carlo (MC) simulations are of utmost importance in ion therapy, and for such applications the nuclear interaction models are crucial. Geant4 is one of the most widely used MC toolkits, including for ion-therapy simulations. However, recent literature has highlighted the limitations of its models in reproducing the secondary yields measured in ion interactions below 100 MeV/n. To mitigate this shortcoming, we interfaced BLOB (“Boltzmann-Langevin One Body”), a model dedicated to these reactions, with Geant4, obtaining promising results. The drawback of BLOB is its computation time: it takes several minutes to simulate one interaction, which is too long for any practical application. Therefore, we trained a Deep Learning algorithm, a Variational Auto-Encoder (VAE), to emulate the BLOB final states. The double-differential cross sections of fragment production in the interaction of a 12C beam at 62 MeV/n with a thin carbon target obtained by interfacing the VAE with Geant4 are comparable with those obtained by coupling BLOB with Geant4. In this way we achieve a precision in the nuclear interactions similar to that of the full BLOB model without its computational overhead.

15:50
Development and association of new metrics of dose and image quality for optimizing protocols in CT imaging

ABSTRACT. Nowadays, although Computed Tomography (CT) examinations correspond to only a small portion of medical imaging procedures (typically 10 % in France), they are credited with about 70 % of the total imaging collective dose. Reducing the dose due to CT examinations is therefore a major issue. However, decreasing the dose in CT imaging cannot be achieved at the expense of the image quality (IQ) needed to ensure a correct diagnosis. CT imaging therefore needs to be described simultaneously using reliable metrics for delivered dose and IQ. Such metrics are still lacking, especially for IQ, and we decided to develop novel ones. For IQ evaluation, the Non Pre-Whitening Eye filter mathematical model observer (MO) was implemented and validated through a clinical study involving a dozen experienced radiologists. The MO calculated the Percentage of Correct answers (PC) on CT images acquired under various irradiation and reconstruction conditions on a home-made dedicated phantom, linked to lesion detection and discrimination clinical tasks. For dose estimation, a complete Monte Carlo model of the GE Discovery CT750 HD scanner was developed with the PENELOPE code. All model elements were determined from physical measurements. The modelling was validated with measurements in CTDI and anthropomorphic phantoms using ion chambers and Optically Stimulated Luminescence dosimeters. Finally, the dose in the dedicated phantom was simulated for the various clinical study conditions and linked to the corresponding PC calculated by the MO. Some of the scanner's standard protocols were placed on the curves after regression and compared from the double point of view of dose and IQ, in particular protocols that use a dual-energy mode. This method paves the way for a standardized methodology enabling clinical physicists and radiologists to optimize protocols for defined clinical tasks while keeping the dose as low as possible.

16:00
Machine learning and the glandular dose estimation on homogeneous breasts in mammography
PRESENTER: Rodrigo Massera

ABSTRACT. Mean glandular dose (MGD) is the standard quantity employed in mammography dosimetry for risk assessment. Since this quantity cannot be measured directly, Monte Carlo (MC) simulations are used for this purpose. In practical situations, the MGD is obtained by multiplying the incident air kerma at the entrance surface of the breast by conversion factors provided in the literature, named normalized glandular dose (DgN) coefficients. DgN depends on breast composition, thickness and x-ray beam quality, and its values are usually provided in tables or parametric equations. However, depending on the parameters considered, this becomes a high-dimensional problem, making tabulated or parametric methods cumbersome. In this work we propose the use of machine learning for MGD and DgN regression from MC-generated data. The PENELOPE (v. 2014) MC code with penEasy (v. 2015) was used to generate dosimetric data for a wide range of parameters: breast composition, thickness, radius, skin thickness and photon energy. A Python script was used to build and train the neural networks (NN) using Keras (v. 2.2.4) and scikit-learn (v. 0.19.1). To avoid overfitting, we trained an ensemble of 10 NNs using cross-validation with 80% of the dataset and tested the NN performance with the remaining 20%, calculating the mean and standard deviation of the predictions, with the optimal parameters determined by the best-performing model. The relative errors between the predicted values and the validation dataset were below 2.1%, with a mean of 0.2%. The results indicate that the use of NNs to estimate DgN and MGD values could be an alternative to tables and parametric equations and could be useful in quality control and in dose estimation from DICOM headers.
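A sketch of an NN regressor for DgN along the lines described, written with tf.keras (the study used standalone Keras 2.2.4) and entirely synthetic data; the toy response has no physical meaning.

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for the MC dataset:
# inputs = (glandularity, thickness [cm], radius [cm], skin thickness [mm], energy [keV])
rng = np.random.default_rng(1)
X = rng.uniform([0.0, 2.0, 4.0, 1.0, 8.0], [1.0, 9.0, 13.0, 3.0, 50.0], size=(5000, 5))
y = 0.4 * np.exp(-X[:, 1] / 5.0) * (X[:, 4] / 30.0)   # toy DgN-like response, not real physics

X_train, X_test = X[:4000], X[4000:]
y_train, y_test = y[:4000], y[4000:]

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(5,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=20, batch_size=64, verbose=0)

print("Held-out MSE:", model.evaluate(X_test, y_test, verbose=0))
```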

16:00-17:10 Session 20A: Outcomes and Radiobiological Applications I
Location: Opera A
16:00
External validation of predictive models of toxicity

ABSTRACT. The final purpose of any predictive model in the oncological domain is to provide valid outcome predictions for new patients. Essentially, the dataset used to develop a model is not of interest other than to learn for the future. Validation hence is a crucial aspect in the process of predictive modelling.

Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. It is a process that accumulates evidence of a model's correctness or accuracy for specific scenarios, with external validation providing a measure of “generalizability” and “transportability” of the prediction model to populations that are “plausibly related”.

“Plausibly related” populations can be defined as cohorts that could be slightly different from the one used for model development, e.g. treated at different hospitals, at different dose levels, with different RT techniques, in different countries or in different time frames. Generalizability and transportability are desired properties from both a scientific and practical perspective.

Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions.

The more often a model is externally validated and the more diverse these settings are, the more confidence we can gain in use of the model for prospective decision-making and its possible use in interventional trials.

Focus will be on external validation of models for the prediction of side effects after radiotherapy for prostate cancer.

16:30
Exploring the Relationship of Radiation Dose Exposed to the Length of Esophagus and Weight Loss in Patients with NSCLC
PRESENTER: Peijin Han

ABSTRACT. Purpose: To investigate the relationship between an esophageal dose-length parameter and radiation therapy (RT) treatment-related weight loss, defined as ≥ 5% weight loss during treatment. Methods: Lung cancer patients treated with conventionally-fractionated RT with curative intent were included. We segmented contours of the esophagus based on CT-simulation slices, calculated the full- (D90) and partial-circumferential (D50) RT dose for each segment, and further calculated the full- and partial-circumferential absolute length at each dose level. A classification and regression tree (CART) model was used to visualize the relative importance of individual dose-length parameters together with the clinical variables. Multivariate logistic regression, with corrections for multiple comparisons, was used to examine the associations between individual dose-length parameters and weight loss. Lastly, ridge logistic regression models were used to compare the performance of weight loss prediction models constructed using dose-length parameters and dose-volume parameters separately. Results: Among the 214 patients identified, CART demonstrated that an esophagus receiving a high full-circumferential dose of 55 Gy over ≥ 4.3 cm indicated a high probability of significant weight loss. Among patients receiving a full-circumferential esophageal dose of 55 Gy over < 4.3 cm, patients aged < 78 years were likely to lose weight if they received a high partial-circumferential esophageal dose of 65 Gy over ≥ 0.45 cm, while patients aged ≥ 78 years were likely to lose weight if their pre-treatment BMI was < 25 kg/m2. After adjusting for clinical variables, esophagus lengths receiving high (50-60 Gy) full-circumferential doses and high (60-65 Gy) partial-circumferential doses were significantly associated with weight loss. Dose-length parameters had comparable performance to dose-volume parameters in predicting weight loss. Conclusions: Esophageal dose-length parameters are an efficient way of visualizing and interpreting complex esophageal dose parameters in relation to weight-loss toxicity outcomes among lung cancer patients receiving definitive RT.
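The CART step can be sketched with scikit-learn as follows; the feature names, toy data and thresholds are invented for illustration and are not the study data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented features: full-circumferential length at 55 Gy [cm], age [y], pre-treatment BMI
rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(0, 10, 214),     # L_full_55Gy
    rng.uniform(45, 90, 214),    # age
    rng.uniform(18, 35, 214),    # BMI
])
# Toy outcome: weight loss more likely when >= 4.3 cm of esophagus receives a full 55 Gy
y = ((X[:, 0] >= 4.3) | (rng.uniform(size=214) < 0.2)).astype(int)

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
cart.fit(X, y)
print(export_text(cart, feature_names=["L_full_55Gy", "age", "BMI"]))
```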

16:40
Voxel dose pattern for dysphagia among head and neck cancer patients receiving definitive radiotherapy
PRESENTER: Todd McNutt

ABSTRACT. Introduction Acute dysphagia (swallowing difficulty) during head and neck cancer (HNC) chemoradiation (CRT) is a significant complication. The purpose of this study is to explore whether the spatial dose pattern in swallowing- and salivary-related structures can help us better model acute dysphagia in HNC patients treated with CRT. Materials & Methods HNC patients treated with intensity-modulated CRT from 2007-2018 at our institution were evaluated. The outcome was CTCAE dysphagia grade ≥2 at 3 months post-radiotherapy. Variables included clinical data and spatial voxelized dose data for the anterior/posterior digastric, geniohyoid, hyoglossus, hyoid, mylohyoid, ipsi/contralateral masticatory muscles, ipsi/contralateral parotid, ipsi/contralateral submandibular gland, soft palate, larynx, cricopharyngeus, superior/middle/inferior constrictor muscles, thyroid, and esophagus. Each patient’s CT was spatially normalized to a common coordinate system (CCS) with nonrigid registration. The obtained deformation fields were used to map the dose of each patient to the CCS. The dose map was sampled at 3132 points in a raster pattern across the composite region of all of the ROIs. Ridge logistic regression with regularization was used to investigate the influence of the dose voxel patterns in each ROI on dysphagia. Results Among the 447 patients, 70 (15.6%) reported severe dysphagia (grade ≥2). The dose voxel importance analysis shows that the superior portion of the contralateral parotid gland, the anterior digastric muscle, and the larynx were the most influential regions regarding the dose effect on dysphagia. Discussion & Conclusions Our analysis demonstrated the feasibility of applying a spatial dose voxel analysis on a registered CCS with an atlas of swallowing-related ROIs. More importantly, we demonstrated that there are visually apparent dose patterns associated with acute dysphagia; in particular, important irradiated regions include the contralateral parotid. These results also support the intuitive relationship between salivary function and dysphagia, through a spatial description of the pattern of influence. Future directions incorporating more refined measures of dysphagia may help improve the prediction characteristics.

16:50
Quantifying parotid compartment importance for post-radiotherapy function
PRESENTER: Haley Clark

ABSTRACT. Introduction: Loss of salivary function and xerostomia are common and debilitating sequelae for head-and-neck cancer patients. Identification of confined regions within the parotid that are critical to post-treatment function has recently been reported, but questions about the specific location, extent, and magnitude of criticality remain outstanding. In particular, it is uncertain to what extent non-critical regions contribute to sequelae incidence. The aim of this work was to quantify the relative importance of entire parotid glands for late salivary function after radiotherapy using a large (n=332) prospective clinical cohort of head-and-neck cancer patients.

Methods: Parotids were divided into isovolumetric compartments. Baseline-normalized whole-mouth saliva was analyzed using two complementary approaches: model-based techniques (sensitivity analysis, dispersion importance, and information-theoretic model ranking) and machine learning techniques (random forests and conditional inference trees with permutation-derived importance estimation). Compartment-specific models and a whole-gland model were considered. A similar analysis was also performed using patient-reported xerostomia rather than saliva measurements.

Results: In the model-based approach, parotids were divided into three compartments. The caudal-specific model ranked 2-10x higher than other models, had larger or equivalent response parameters, and had dispersion importance >1.4x that of other compartments. For the machine learning methods, parotids were further divided into 18 isovolumetric compartments. For both random forests and conditional inference trees the most important compartments for hypofunction were caudal, having 1.47-2.74x the importance of a hypothetically homogeneous parotid. For xerostomia, the most important compartment was again caudal; it had an elevated importance (1.20x), and all six caudal compartments were uniformly more important than any non-caudal compartments.

Conclusions: A large clinical cohort was used to quantify the relative importance of parotid gland interior for post-radiotherapy late salivary function. Dose to the caudal aspect was found to be most important for patient outcomes.

17:00
Introducing information on gut microbiota into toxicity modelling: preliminary results from a clinical trial
PRESENTER: Tiziana Rancati

ABSTRACT. Introduction

We focus on introduction of information on gut microbiota into a normal tissue complication probability model (NTCP) for acute gastrointestinal toxicity after prostate cancer radiotherapy.

Materials & Methods

130 consecutive patients were enrolled. Microbiota bacterial 16S ribosomal-RNA reads were analysed and pooled into Operational Taxonomic Units (OTUs). The bioinformatics pipeline included: metagenome identification, demultiplexing, clustering into OTUs, summarizing of communities by taxonomic composition at multiple levels (Phylum, Class, Order, Family and Genus), and alpha and beta diversity calculations. Grade 2 acute gastrointestinal toxicity was the primary endpoint. For this preliminary evaluation 20 patients were selected: 10 with Grade 0 and 10 with Grade 2 toxicity. Unsupervised clustering (fuzzy c-means algorithm) was used to separate patients into 2 microbiota clusters, based on the relative abundance of OTUs at the class level in the pre-radiotherapy microbiota. Information on microbiota clustering was introduced as a dose-modifying factor into a logit NTCP model. Mean dose to the rectum was chosen as the dosimetric predictor.

Results

Unsupervised clustering identified 13 patients in a first cluster (A) and 7 in a second cluster (B). 4/13 (31%) and 6/7 (86%) patients with toxicity were found in clusters A and B, respectively (p=0.019). Microbiota clustering resulted in an AUC of 0.75 (95%CI=0.51-0.91) for toxicity discrimination. The NTCP model including only mean rectal dose had D50=49 Gy, k=16 (AUC=0.85, 95%CI=0.62-0.97). When clustering was introduced, k=20.5, with D50=42 Gy for cluster A vs D50=32 Gy for cluster B. The microbiota clustering dose-modifying factor (B vs A) was 0.76 (AUC=0.87, 95%CI=0.65-0.98), with a significant improvement in goodness-of-fit and calibration.
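For reference, a common log-logistic ("logit") parameterization consistent with the reported D50 and k values is sketched below, with the microbiota cluster entering as a dose-modifying factor (DMF) on D50; the authors' exact formulation may differ.

```latex
\mathrm{NTCP}(D) \;=\; \frac{1}{1 + \left( \dfrac{D_{50}}{D} \right)^{k}},
\qquad
D_{50}^{\text{cluster B}} \;=\; \mathrm{DMF}\cdot D_{50}^{\text{cluster A}}
```

With the reported DMF of 0.76, D50 for cluster B is 0.76 × 42 Gy ≈ 32 Gy, matching the fitted values above.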

Discussion & Conclusions

This preliminary study demonstrates the possibility of introducing patient-specific microbiota information into NTCP models through use of unsupervised clustering to exploit the whole microbiota information (176 classes) without dramatically increasing the number of features. Results obtained in a small sample of patients seem promising in indicating that patients with/without radio-induced acute toxicity have different constitutional gut microbiota profiles.

16:00-17:10 Session 20B: Big Data III and Panel Discussion
Location: Opera B
16:00
Leveraging Clinical Big Data to Bring AI into Clinical Practice

ABSTRACT. The combination of big data with machine learning (ML) and artificial intelligence (AI) algorithms promises to fundamentally change how we develop, test and apply clinical insights in healthcare. By creating a high-volume big data analytics resource system to integrate “real world” clinical experience into a standardized ontological framework, and then using it to feed these algorithms, we are creating a pathway in our clinic toward an increased ability to apply observational data for driving discovery and informing decision frameworks. Enabling this shift requires physicists to work on several fronts, including: clinical practice standardization, development of databases and ontologies, modeling and profiling using statistical and machine learning methods, and treatment team coordination. Using recent examples, including the use of an AI toxicity model to refine DVH constraints and ML profiling of factors affecting treatment and imaging times, we will highlight the several roles physicists must play to enable this translation of AI into clinical practice.

16:15-17:15 Session 21: Parallel Monte Carlo Implementations
Location: Opera C
16:15
Parallel Monte Carlo Implementations

ABSTRACT. The presentation will examine the requirements on a Monte Carlo (MC) based Dose Computation Engine (DCE) for use in Radiation Therapy (RT). The focus will be on RT with external photon beams in the context of (i) Offline treatment plan preparation, (ii) Online plan adaptation before treatment delivery, (iii) Real-time dose accumulation, and (iv) Real-time plan adaptation during treatment delivery. To satisfy the requirements of (i) and (ii), a DCE must provide the ability to a) Compute the dose of a treatment plan, b) Compute the dose for each segment in a treatment plan and c) Compute the dose from discretized fluence elements of the beams involved in the treatment plan, on typically static representations of the patient anatomy. To accommodate (iii) and (iv), the DCE must also be capable of performing a-c) on time-dependent and rapidly changing patient anatomies, in addition to being able to quickly pause and resume simulations to allow for the execution of other computationally intensive tasks such as real-time tissue tracking, deformable image registration, dose accumulation, and treatment plan optimization. The merits of parallel implementations on modern multi-core CPUs and GPUs will be discussed and specific examples illustrating the respective performance will be presented.

16:45
gPET: An efficient and accurate simulation tool for PET via GPU-based Monte Carlo
PRESENTER: Youfang Lai

ABSTRACT. Monte Carlo (MC) simulation is widely used across many facets of modern medical physics. There are emerging applications of MC in positron emission tomography (PET) systems for hardware/prototype design, image reconstruction, artifact reduction and applications in hadron therapy. Existing packages suffer from low computational efficiency. In this work, we present our recent development of a Graphics Processing Unit (GPU)-based, highly efficient and accurate MC tool, gPET. gPET was built on the NVidia CUDA platform. The simulation process was modularized into three functional parts: 1) gamma pair generation, including positron decay, transport and annihilation, 2) gamma transport inside the voxelized phantom, and 3) signal detection and processing inside the parametrized detector. The detector was modelled at three levels: panel, module, and crystal scintillator. A user can selectively tally multiple quantities as outputs, such as the intermediate photon phase space file (PSF), hits, singles and coincidences. The package was evaluated via benchmark comparison with GATE 8.0 in three comprehensive test cases. 1) Two million positron histories from C11 and F18 were simulated; the differences in the average positron range were 0.2 mm and 0.08 mm, respectively. 2) A gamma PSF was recorded outside a 5 cm radius water phantom with 5e6 photon histories; the yield difference for the 511 keV energy peak was 0.34%. 3) 2e7 histories were simulated for an eight-panel small-animal PET case. The differences in the energy, panel, module and crystal distributions of hits were 2.44%, 2.26%, 2.26% and 2.27%, respectively. Those for singles and coincidences were 1.26%, 0.51%, 0.51%, 1.12% and 1.84%, 1.75%, 1.74%, 1.82%, respectively. The average simulation time for the three test cases was reduced by a factor of 780 with gPET compared to GATE. gPET 1.0 is an accurate and efficient tool for complex PET simulations. The package is open to the research community at https://github.com/UTAChiLab/gPET.git.

16:55
Ines: a portable and parallel Monte-Carlo simulation code of class II for electromagnetic showers

ABSTRACT. Monte Carlo simulation is considered by the medical physics community to be the most reliable method for estimating the doses deposited during the various medical procedures that make use of radiation.

Ines (INteraction of Electronic Showers) is a class II Monte-Carlo simulation code containing the models implemented in Penelope for the multiple scattering of charged particles. The choice of these models follows the successful experimental validation of Penelope, which is therefore the reference point of Ines.

From the physical point of view, Ines is developed by ensuring the agreement of the implemented models with those of Penelope. Regarding code development, Ines is written in C++14 and can be built by any compiler complying with this standard (GCC, Visual, etc.). In addition, modularity of the code has been sought to facilitate the introduction of new features; thus the main elements involved in the transport loop have an interface in the form of a virtual base class.

Despite its position as a reference method, Monte Carlo simulation converges slowly. In order to partially overcome this problem, Ines provides several variance reduction techniques and makes it easier to introduce new algorithms thanks to the modularity of the code. In addition, Ines can be compiled with MPI to run in parallel.

When introducing parallel execution in Ines, particular attention was paid to scalability (execution on a heterogeneous cluster), numerical stability (hybrid computation of averages and associated variances) and the generation and use of a single sequence of pseudorandom numbers (the random number sequence generated from a single seed is distributed over the set of showers to be simulated).

Additional features can also be added using extensions loaded from dynamic library modules (DLL, [D]SO). This allows, among other things, replacement of the physics models, the particle source or the geometry.

17:05
Cloud based Monte Carlo independent dose calculation tool for Varian Linacs
PRESENTER: Quan Chen

ABSTRACT. Purpose Complex leaf motions and small MLC apertures in modern intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) may lead to decreased dose calculation accuracy. Monte Carlo (MC) simulation is regarded as the most accurate method for dose calculation. However, its clinical adoption has been limited by the steep learning curve and the limited availability of computing resources. The purpose of this study is to develop a solution that addresses these issues. Materials & Methods The MC package PENELOPE was modified to use the Message Passing Interface (MPI) for parallel computing. A phase space file (PSF) from Varian was used. To reduce the latent variance in the PSF, a rotational augmentation was performed. Transport in the jaws and MLCs was modeled with a first-order approximation. The approximation was examined with a dosimetric leaf gap (DLG) simulation. An Amazon Machine Image was created for easy deployment and sharing. Client software was created using MATLAB; it automates the entire independent dose calculation process, including the request of cloud instances and the report generation. 21 VMAT plans were used to evaluate the calculation time at a 2% statistical uncertainty setting. Results The DLG and MLC transmission computed with MC agree well with measurement. For clinical plans, MC dose with 2% statistical uncertainty can be computed in around 5.5±2.6 minutes on c5n.18xlarge instances. The initialization of cloud instances, data preprocessing/transfer and report generation took an additional 1.5, 0.7 and 1.6 minutes, respectively. The process is fully automated, so the only user interaction is to export the plan via DICOM. With spot instance requests, the average cloud computing cost is less than $0.2 per plan. Conclusions A cloud-based MC dose calculation tool was developed. It can be easily deployed to different clinics with little cost for computing resources.

17:15-18:45 Industry seminar supported by Varian


Wednesday, June 19, 5:15 pm - 6:45 pm

Grand Salon A, DoubleTree by Hilton Montreal (Conference venue)

Smarter Cancer Care: Connecting people and technology

Attend the Varian Symposium to hear the experts’ view on AI technologies. There will be an interactive panel discussion covering various topics around the developments, challenges and the future of AI in radiation oncology and beyond. Focused topics:

 

Artificial Intelligence in Radiotherapy: Challenges and Opportunities for Research and Clinical Practice

Steve Jiang, PhD, DABR

University of Texas Southwestern Medical Center, Dallas, TX

 

Artificial Intelligence for Better Radiation Oncology

Andre Dekker, PhD, Medical Physicist

MAASTRO Clinic, Maastricht University, The Netherlands

 

AI-based Decision Support Systems for Precision Medicine

Philippe Lambin, MD, PhD

The D-Lab, Dept. of Precision Medicine, Maastricht University, The Netherlands

 

MEDomics: A Framework for the Development of AI in Oncology

Olivier Morin, PhD

University of California - San Francisco, San Francisco, CA

 

Digital Medicine and (Radiation) Oncology

Nicola Dinapoli, MD, PhD

Radioterapia, Fondazione Policlinico A. Gemelli IRCCS, L.go A. Gemelli, Roma, Italy

 

Light refreshments will be served during this program.

This symposium is open to ICCR registrants only. Space is limited. Seats are on a first come, first served basis. Please register here.

Location: Opera A