Intermittent patient-specific quality assurance failures in proton therapy attributable to detector calibration drift
Original Article


Yuki Tominaga1,2, Caroline Bamberger3, Yushi Wakisaka1,2, Masaaki Takashina4, Xing Li5, Robabeh Rahimi6

1Medical Co. Hakuhokai, Osaka Proton Therapy Clinic, Osaka, Japan; 2Medical Physics Laboratory, Division of Health Science, Graduate School of Medicine, The University of Osaka, Osaka, Japan; 3Department of Radiation Oncology, Inova Health System, Fairfax, VA, USA; 4Department of Medical Physics, Osaka Heavy Ion Therapy Center, Osaka, Japan; 5Department of Radiation Oncology, Loyola University Chicago Stritch School of Medicine, Maywood, IL, USA; 6Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, USA

Contributions: (I) Conception and design: R Rahimi; (II) Administrative support: Y Tominaga, Y Wakisaka, X Li; (III) Provision of study materials or patients: R Rahimi; (IV) Collection and assembly of data: C Bamberger; (V) Data analysis and interpretation: All authors; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Robabeh Rahimi, PhD, DABR. Department of Radiation Oncology, University of Maryland School of Medicine, 850 W. Baltimore Street, Baltimore, MD 21201, USA. Email: rrahimi@som.umaryland.edu.

Background: In proton therapy, an out-of-calibration measurement device can intermittently affect patient-specific quality assurance (PSQA) results, and such failures can occur even when routine quality assurance (QA) tests, such as daily and monthly output checks, continue to pass. We retrospectively analyzed PSQA failures in pencil-beam scanning proton therapy to identify their root causes. Repeating the measurements with updated device calibration, including output and uniformity, improved the gamma passing rates, confirming detector calibration as the root cause.

Methods: We reviewed two clinical cases from 2022 that failed PSQA, defined as <90% gamma passing rate (3%/2 mm). Both involved superficial chest wall targets requiring a range shifter, with large, low-energy fields (70–80 MeV). Measurements were performed with an IBA MatriXX PT detector array in 2022. Standard machine QA and treatment planning system (TPS) dose checks showed no abnormalities at the time. In 2024, we repeated PSQA for the same plans using updated device calibration files from 2023 and 2024, while maintaining identical setup conditions. Gamma analysis was repeated for all beams.

Results: The original 2022 measurements showed higher dose readings than the TPS predictions, leading to widespread gamma failures (76–91% passing rates). Repeated measurements with the 2023 and 2024 calibration files yielded high passing rates (97–100%). All plausible causes were systematically evaluated at the time of the failures; no evidence of machine output drift or TPS error was identified, confirming detector calibration as the root cause. The daily and monthly QA procedures performed with the same device during the failure period did not detect the calibration error because they used mid-range energies and small fields that did not sufficiently engage the outer ion chambers of the detector.

Conclusions: Despite limited machine time in a busy clinic, we recommend periodic functionality checks of the QA detector across energy levels and field sizes, ensuring that both central and peripheral detectors are adequately evaluated.

Keywords: Proton therapy; patient-specific quality assurance (PSQA); detector calibration; Monte Carlo dose calculation (MC dose calculation); pencil beam scanning


Received: 14 September 2025; Accepted: 26 January 2026; Published online: 25 March 2026.

doi: 10.21037/tro-25-41


Highlight box

Key findings

• Intermittent patient-specific quality assurance (PSQA) failures were traced to calibration-related non-uniformity in a 2D ionization chamber array.

• Updated device calibration restored passing gamma results, confirming detector calibration drift as the root cause.

What is known and what is new?

• It is known that detector arrays require regular calibration and that routine quality assurance (QA) may not test all detector regions or energy conditions.

• This study newly demonstrates that calibration drift in some chambers can selectively affect results while leaving routine QA and most clinical plans unaffected.

What is the implication, and what should change now?

• Centers using 2D arrays for proton PSQA should incorporate periodic functionality checks at clinical energies and field sizes encompassing the detector active area to ensure uniform detector response.

• Routine QA procedures should be expanded to include field conditions that are otherwise not tested but are clinically relevant for target geometries.


Introduction

Patient-specific quality assurance (PSQA) is a critical component of the radiotherapy workflow, designed to identify any discrepancies between the dose distribution calculated by the treatment planning system (TPS) and the dose actually delivered by the treatment machine. This step ensures the safety and accuracy of patient treatments, especially for highly complex or modulated plans. The American Association of Physicists in Medicine Task Group 218 recommends rigorous PSQA for intensity-modulated radiation therapy (IMRT), and by extension, this is applied to intensity-modulated proton therapy (IMPT) as well (1).

Proton therapy with pencil-beam scanning (PBS) is a highly advanced modality in cancer treatment, offering precise dose placement to tumors while sparing healthy tissues (2,3). The adoption of Monte Carlo (MC) dose calculation algorithms in proton TPS has significantly improved the accuracy of dose predictions, particularly in heterogeneous media and at depth. Despite these advancements in planning accuracy, the role of PSQA in PBS proton therapy remains indispensable (4). Each treatment plan’s accuracy must be verified prior to patient delivery, as even small discrepancies can have clinical significance given the sharp dose gradients and range-specific effects in proton therapy. In this context, any PSQA failure demands a thorough investigation to identify its root cause before treatment proceeds. Known sources of discrepancy include TPS calculation limitations, quality assurance (QA) device malfunctions, and setup or measurement errors. MC proton TPS algorithms, for instance, have known challenges at very shallow depths (near the surface) and in scenarios involving range shifters. PSQA failures due to setup or measurement errors, such as QA device malfunction, are case-dependent and must be investigated individually for each occurrence.

Previous studies have investigated the frequency and causes of PSQA failures in PBS proton therapy. Ricci et al. conducted a root cause analysis of failed PBS PSQA measurements and identified several common contributors to failure, including incorrect measurement device setup, steep dose gradient regions causing detector volume-averaging errors, and issues related to the finite detector size and spot spacing of PBS beams (5). Trnková et al. examined factors influencing PSQA performance for PBS IMPT fields and highlighted key elements such as the characteristics of the delivery system, the detector configuration and resolution, measurement uncertainties, treatment plan complexity, and the importance of regular calibration and QA protocols for detectors (6).

Studies have shown that large air gaps and thick range shifters in the beam line can introduce additional scatter and modeling uncertainties, which might lead the TPS to misestimate the dose in superficial regions (7,8). Specifically, for shallow-target proton fields—where low-energy proton layers are used—the TPS may slightly under-predict the dose due to resolution limitations. The TPS calculates the average dose deposited within each voxel, but for low-energy proton fields, the Bragg peak can be so sharp that even the highest available calculation dose grid resolution (typically 0.1 cm) may be insufficient to accurately capture its shape. This can lead to a systematic underestimation of the computed dose relative to the actual delivered dose, potentially resulting in treatment plans that deliver a higher dose than expected at shallow depths (9). TPS uncertainties of this kind are inspected and accounted for at the time of commissioning and validation, using externally calibrated detectors and independent audits.
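As a toy illustration of this resolution effect (the peak shape and widths below are hypothetical and not based on any clinical beam data), the following Python sketch averages a narrow peak over 1.0 mm voxels versus a much finer grid, showing that the coarse grid under-reports the peak value:

```python
import numpy as np

def peak_dose(z_mm, z0=15.0, sigma=0.6):
    """Toy depth-dose curve: a narrow Gaussian stand-in for a
    low-energy Bragg peak (hypothetical parameters, not beam data)."""
    return np.exp(-0.5 * ((z_mm - z0) / sigma) ** 2)

def voxel_averaged_max(grid_mm, z_max=30.0):
    """Average the curve over voxels of width grid_mm and return the
    maximum voxel value, i.e., what a dose grid of that size reports."""
    edges = np.arange(0.0, z_max, grid_mm)
    fine = np.linspace(0.0, z_max, 30001)   # 1 micron sampling
    d = peak_dose(fine)
    vals = [d[(fine >= a) & (fine < a + grid_mm)].mean() for a in edges]
    return max(vals)

coarse = voxel_averaged_max(1.0)   # 0.1 cm grid, the finest typical TPS setting
fine = voxel_averaged_max(0.1)     # much finer grid for reference
print(f"1.0 mm grid peak: {coarse:.3f}")
print(f"0.1 mm grid peak: {fine:.3f}")
# The 1.0 mm voxel average sits well below the true peak height of 1.0,
# because the peak is narrower than the voxel.
```

The same mechanism applies in reverse at delivery: the machine delivers the true peak, so a plan normalized to the voxel-averaged value can deposit more dose at shallow depths than the TPS reports.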

In this study, we report a retrospective analysis of intermittent PSQA failures that occurred in our clinic, primarily for plans with shallow target volumes and large lateral field sizes in a PBS proton therapy system. Over the past years, out of hundreds of patient plans verified, very few patient plans (each consisting of two beams) did not meet our PSQA passing criteria. These plans all required the use of a range shifter. The failing PSQA results showed measured planar doses that were higher than the TPS-calculated doses. Such failures prompted comprehensive troubleshooting after each occurrence. Standard investigation procedures included verifying the accelerator output, checking the measurement device setup and functionality, confirming the accuracy of environmental condition corrections (temperature-pressure) applied to the detector, and examining for any potential errors in TPS dose calculation or beam delivery. We also considered factors like high dose gradients (to see if the gamma failure could be caused by the detector’s finite sampling) and any irregularities in beam scanning or spot size. Because the PSQA failures persisted and the underlying cause remained unresolved at the time, the corresponding treatment plans were revised in the TPS, after which the revised plans passed PSQA and were cleared for clinical use.

Subsequently, in 2024, we revisited the failing cases from multiple perspectives. Ultimately, we repeated the PSQA measurements from 2022 using updated detector calibration files from 2023 and 2024, while keeping all other setup conditions unchanged. Among the small number of affected plans, two representative cases with complete retrievable data were selected for detailed re-evaluation, as they most clearly demonstrated the calibration-related failure mechanism. With the updated calibrations, all beams achieved gamma passing rates of 97–100% using the 3%/2 mm global criterion, which exceeds our institutional acceptance threshold of ≥90% (with ≥95% considered clinically robust). Additionally, many QA plans used routinely for daily and monthly QA in 2022 were remeasured during subsequent years, and their gamma passing rates remained consistently high. Because these routine QA plans and most clinical plans primarily irradiated central detector regions, they did not sample the peripheral ion chambers affected by the calibration drift. This led us to hypothesize that calibration issues with the QA detector may have selectively impacted the failed PSQA cases. The following sections describe the detector, the calibration procedures, and the results of the repeated measurements, which ultimately identified the source of error and prompted changes to our QA practice.


Methods

PSQA device and beam configuration

For planar dose measurements, we used a commercially available 2D ionization chamber array, the IBA MatriXX PT (IBA Dosimetry, Schwarzenbruck, Germany) (10,11). The MatriXX PT consists of 1020 vented parallel-plate ionization chambers arranged in a 32×32 grid with a chamber spacing of 7.62 mm, covering an area of 244 mm × 244 mm. Each chamber has an active diameter of 4.2 mm and a thickness (height) of 2.0 mm, with an effective point of measurement approximately 6 mm below its surface. Appropriate depth corrections were applied in the software for different measurement depths to align the effective measurement plane with the intended depth in the phantom.
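Using only the geometry stated above (32×32 grid, 7.62 mm pitch), a minimal Python sketch can estimate what fraction of chamber centers a centered square field covers; the field sizes are illustrative, and the model ignores the four missing corner chambers of the real 1020-chamber array:

```python
import numpy as np

PITCH_MM = 7.62          # chamber spacing, from the detector specification
N = 32                   # 32x32 grid (real array omits 4 corner chambers)

# Chamber center coordinates, centered on the array axis.
coords = (np.arange(N) - (N - 1) / 2) * PITCH_MM
x, y = np.meshgrid(coords, coords)

def fraction_covered(field_mm):
    """Fraction of chamber centers lying inside a centered square field."""
    half = field_mm / 2
    inside = (np.abs(x) <= half) & (np.abs(y) <= half)
    return inside.sum() / (N * N)

# A 100x100 mm field (the size of the calibration cube plan) engages
# only the central chambers; a near-full-area field samples the periphery.
print(f"100 mm field: {fraction_covered(100):.0%} of chambers")
print(f"240 mm field: {fraction_covered(240):.0%} of chambers")
```

Under this simplified model, a 100×100 mm field covers fewer than a quarter of the chambers, which is consistent with the report's observation that small routine-QA fields leave the peripheral chambers untested.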

The effectiveness and reliability of such 2D ionization chamber arrays for proton beam QA have been demonstrated in previous studies, highlighting their utility in routine QA procedures (12). Before clinical implementation, the MatriXX PT underwent a full acceptance procedure including signal-stability testing, dose linearity, and comparison with a reference ion chamber across multiple energies, and uniformity evaluation according to the vendor’s commissioning guidelines. Following acceptance, the device was integrated into the routine QA workflow with routine checks and calibrations.

Clinical treatment plans were created using the RayStation TPS (version 11A, RaySearch Laboratories, Stockholm, Sweden) with the MC dose calculation algorithm. All plans discussed in this report are IMPT cases delivered using an IBA Proteus Plus PBS system, which provides proton beam energies from approximately 70 to 225 MeV. Because the targets in the investigated cases were superficial (i.e., located near the patient surface), a range shifter with a water-equivalent thickness (WET) of 40 mm was used in the beamline to enable dose coverage at shallow depths. The range shifter was mounted on the gantry and inserted into the snout, a component of the gantry nozzle designed to hold range shifters. During PSQA measurements, we replicated the clinical setup by using the same range shifter, and the snout was extended from its fully retracted position so that the air gap (the distance from the range shifter to the detector surface) closely matched the clinical geometry within practical constraints.

Calibration of the MatriXX detector

In our clinic, it is standard internal policy to recalibrate each QA measurement device following the annual machine QA. Although the proton therapy machine typically requires minimal output adjustment during this QA, detector recalibration is nonetheless performed to account for potential gradual changes in detector response over time. For the MatriXX PT, calibration involves delivering a known reference field and adjusting the detector’s calibration factors so that its measured dose matches the expected dose from the TPS. Importantly, this calibration is conducted only after verifying that the machine output is accurate. On the same day as the annual QA, a reference dosimeter (ion chamber) placed under identical beam conditions is confirmed to read within 1% of the TPS-calculated dose for the reference plan. The MatriXX calibration is therefore effectively traceable to the absolute dose measured by the reference detector, under the assumption that both the TPS and machine output have been independently validated immediately prior to calibration.

In practice, this is achieved by delivering a simple QA plan (commonly referred to as a ‘cube plan’) consisting of a single uniform field with a 140 mm range, 100 mm modulation, and a 100×100 mm2 field size. This plan, traditionally referred to as R14M10 (R: range; M: modulation), produces a uniform dose distribution within a 100×100×100 mm3 cube and delivers 200 cGy to the central region of the MatriXX PT. A 9 cm solid water buildup phantom composed of RW3 white polystyrene slabs (IBA Dosimetry; density 1.045 g/cm3) is placed so that the detector’s measurement plane sits at the midpoint of the spread-out Bragg peak. Such a plan is well suited for QA purposes because slight misplacement of the detector does not significantly affect the measured readings (13). In addition, the vendor performs an annual uniformity check, so all ion chambers are rescaled consistently even though the reference field does not cover the full active area. As a best practice, vendor-recommended procedures, including proper detector warm-up and environmental stabilization, are followed during calibration to ensure measurement consistency and reliability. The calibration procedures and beam parameters were identical across annual calibrations, using the same cube-plan configuration, energy selection, buildup material, and vendor uniformity workflow.

The raw reading from the device is recorded. Then myQA software (IBA Dosimetry) generates a calibration file by comparing the measured values to the TPS-calculated dose at corresponding positions. This calibration file represents a sensitivity adjustment that aligns the MatriXX measurement with the TPS expectation. Each calibration is timestamped and stored in the myQA system as a separate file (referred to by year for simplicity). The most recent calibration file is subsequently applied to all patient QA measurements until the next recalibration is performed.
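Conceptually, the calibration file acts as a per-chamber sensitivity map. The following sketch uses invented sensitivity numbers (not actual myQA data or file formats) to illustrate how factors derived from a uniform reference field renormalize the readings, and how a later sensitivity drift combined with stale factors produces a systematic over-reading of the kind described in this report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw chamber readings under the uniform 200 cGy cube plan:
# each chamber has a slightly different sensitivity (invented 1% spread).
tps_reference_cGy = 200.0
true_sensitivity = rng.normal(1.0, 0.01, size=(32, 32))
raw_reading = tps_reference_cGy * true_sensitivity

# Calibration: per-chamber factors that map raw readings back to the TPS dose.
cal_factors = tps_reference_cGy / raw_reading
calibrated = raw_reading * cal_factors          # recovers 200 cGy everywhere

# If sensitivities later drift upward by ~4% but the old (stale) factors
# are still applied, the reported dose is systematically high, mirroring
# the 3-5% over-readings seen in the 2022 failures.
drifted_reading = raw_reading * 1.04
stale_result = drifted_reading * cal_factors
print(f"mean reported dose with stale factors: {stale_result.mean():.1f} cGy")
```

Applying a fresh calibration (recomputing the factors against the drifted readings) would restore agreement, which is exactly the behavior observed when the 2023 and 2024 files were used.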

Retrospective re-measurement of failing plans

We identified patient plans from 2022 that had failed PSQA, which we defined as having a gamma index passing rate below 90% using a 3%/2 mm dose difference/distance-to-agreement criterion (14). In our center, a passing rate below 90% triggers further investigation, while a rate between 90% and 95% is considered borderline and warrants review. These thresholds align with common clinical practice and AAPM TG-218 recommendations for tolerance levels (1).

Two treatment plans (each consisting of two beams) met the failure criterion in 2022, both involving chest wall targets at shallow depth. For each failed plan, we performed new PSQA measurements in early 2024. The same MatriXX PT device was used, but we sequentially applied two different calibration files: first, the calibration file from 2023 (which was in clinical use throughout that year), followed by the 2024 calibration file (obtained after the 2024 annual QA).

All measurements were conducted by the same personnel and followed the same setup protocol as the original QA to eliminate variability due to technique or positioning. The MatriXX was positioned at the same depth in the phantom as in 2022 (approximately 1 cm detector depth), corresponding to the shallow target depth in the patient. Each 2D measured dose distribution was compared to the TPS-calculated planar dose using gamma index analysis within the IBA myQA platform. To maintain consistency, the same gamma analysis criteria (3%/2 mm, global normalization) were used for all measurements. We recorded the gamma passing rates for each beam under each calibration condition.
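For readers unfamiliar with the metric, a simplified one-dimensional version of the gamma analysis (Low et al.) with the 3%/2 mm global criterion can be sketched as follows; this is an illustrative implementation, not the myQA algorithm, and the flat-field doses are hypothetical:

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd_pct=3.0, dta_mm=2.0):
    """Simplified 1D global gamma: for each measured point, search all
    reference points for the minimum combined dose-difference /
    distance-to-agreement metric. The dose criterion is global,
    i.e., dd_pct of the reference maximum."""
    x = np.arange(len(ref)) * spacing_mm
    dd = dd_pct / 100.0 * ref.max()
    passes = 0
    for xi, mi in zip(x, meas):
        gamma_sq = ((ref - mi) / dd) ** 2 + ((x - xi) / dta_mm) ** 2
        if gamma_sq.min() <= 1.0:
            passes += 1
    return passes / len(meas)

# Hypothetical flat 200 cGy field sampled at the 7.62 mm chamber pitch.
ref = np.full(30, 200.0)
ok = gamma_pass_rate(ref, ref * 1.01, 7.62)    # 1% offset: within 3%
bad = gamma_pass_rate(ref, ref * 1.05, 7.62)   # 5% offset: fails everywhere
print(f"1% offset pass rate: {ok:.0%}")
print(f"5% offset pass rate: {bad:.0%}")
```

Note that in a flat region the distance-to-agreement term cannot rescue a uniform dose offset, which is why the near-constant 3–5% over-reading in the failed cases produced gamma failures across the whole field rather than only at gradients.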

Statistical analysis

Data analysis in this study was descriptive. Gamma passing rates obtained for each beam were summarized and compared across different detector calibration files (2022, 2023, and 2024) for the same treatment plans. PSQA results were evaluated relative to the institutional acceptance threshold described above. Because this study focused on retrospective root-cause analysis of a small number of cases, no formal hypothesis testing or inferential statistical analyses were performed. All data evaluation was conducted using the IBA myQA software platform.

Ethical considerations

This work analyzed routine PSQA data generated as part of standard clinical care and did not involve human subjects research; IRB review and informed consent were not required.


Results

Observation of initial failures (2022)

The failing PSQA cases in 2022 exhibited similar behavior: the measured dose across the irradiation field was higher than the TPS-predicted dose by roughly 3–5% on average. Consequently, the gamma index analysis showed failure across the field (illustrated in Figure 1). Standard machine and beam checks at the time did not reveal any issues: daily output was within tolerance, and beam profile measurements appeared normal. These failures were scattered over time and did not cluster immediately after the detector calibration associated with annual QA. Notably, they occurred for low-energy beams (70–80 MeV) used for superficial targets, with large field sizes covering most of the MatriXX detector area. No failures were observed during the same period for deeper targets or smaller fields.

Figure 1 Example of a failed PSQA result. The dose distribution measured by the IBA MatriXX PT detector (A) was uniformly higher than the TPS-predicted dose distribution (B), leading to widespread gamma-index failures (γ-index result shown in C). The profile comparison (D; x-direction profile) illustrates the measured dose (green curve) exceeding the calculated dose (orange curve) by a nearly constant factor across the field.

Improvement with detector recalibration

In 2024, we repeated the QA measurements for those failed plans using the updated calibration files. The results are summarized in Table 1, which lists the gamma passing rates for each beam of the two cases, comparing the original 2022 measurement (with the old calibration) to the measurements made with the 2023 and 2024 calibrations. Each plan had two beams (Beam 1 and Beam 2), and both were remeasured.

Table 1

Gamma passing rates (3%/2 mm criteria) for the two clinical cases (each with two beams) under different detector calibrations

Calibration year   Case I, Beam 1 (%)   Case I, Beam 2 (%)   Case II, Beam 1 (%)   Case II, Beam 2 (%)
2022               88.9                 85.0                 91.4                  76.2
2023               99.3                 100.0                97.5                  97.7
2024               98.0                 98.6                 97.5                  98.5

The 2022 values are the original PSQA measurements performed in 2022 using the 2022 calibration file (failed); the 2023 and 2024 values are measurements repeated in 2024 using the 2023 and 2024 calibration files, respectively.

As shown in Table 1, the initial gamma passing rates in 2022 ranged from 76.2% to 91.4%. After the detector was recalibrated using the standard procedure, incorporating the vendor-performed annual uniformity maintenance, the same plans and beams passed comfortably, with gamma scores generally in the 97–100% range. The 2023 calibration file alone restored the passing rates to acceptable levels for all beams, and the 2024 calibration yielded similarly high passing rates, confirming that recalibrating the detector resolved the issue observed in 2022. These results indicate that an erroneous calibration of the MatriXX detector was the primary factor responsible for the original PSQA failures.

Additionally, as a sanity check, we reviewed the machine output logs and routine QA data from 2022 to 2024. No substantial changes in beam output were recorded that could explain a true delivered dose difference of the magnitude observed.


Discussion

Most proton therapy plans in our clinic achieve PSQA passing rates well above tolerance. However, a single-digit number of cases had unacceptable PSQA results. This local experience is consistent with PSQA failure rates reported in the literature, where studies such as Ricci et al. and Trnková et al. documented failure incidences on the order of a few percent, particularly for complex or shallow, low-energy proton fields (5,6). In these cases, the measured dose distribution was higher than the TPS-calculated dose by a few percent, leading to gamma index failures across the field. Each failure triggered a careful troubleshooting process at the time. Machine output was verified to be within calibration (typically using a standard ion chamber at reference conditions). Detector functionality was checked to confirm that the MatriXX device was operational (using daily and monthly QA tests) and that the chamber pressure/temperature corrections were applied correctly. Device setup was reviewed to ensure that the range shifter, detector depth, and lateral positioning replicated the plan and that no offsets or misalignments were present. Finally, the dose calculation was reviewed to examine whether the TPS could be underestimating the dose for these fields, for example, due to limitations of the modeling at shallow depths or in the presence of the range shifter.
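The pressure/temperature check mentioned above corresponds to the standard air-density correction applied to vented ionization chambers. A minimal sketch is shown below, assuming the common 22 °C / 101.33 kPa reference conditions (the clinic's actual protocol values are not stated in this report):

```python
def k_tp(temp_c, pressure_kpa, temp_ref_c=22.0, pressure_ref_kpa=101.33):
    """Air-density correction factor for a vented ionization chamber.
    Reference conditions (22 degC, 101.33 kPa) follow common practice;
    they are an assumption, not taken from this clinic's protocol."""
    return ((273.2 + temp_c) / (273.2 + temp_ref_c)) * (pressure_ref_kpa / pressure_kpa)

# At reference conditions the correction is unity.
print(f"{k_tp(22.0, 101.33):.4f}")   # 1.0000
# A warm, low-pressure room means less air in the cavity, so the
# correction rises above unity to compensate the lower reading.
print(f"{k_tp(25.0, 99.0):.4f}")
```

A mis-set or forgotten correction of this kind would shift all chambers by the same factor, which is one reason it was checked and excluded before the calibration file itself was suspected.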

Our investigation identified the MatriXX detector calibration as the source of the discrepancy. The fact that repeating the measurements with updated calibration files, including output and uniformity, resolved the issue suggests that the detector’s response in 2022 may have been erroneous. When the device was recalibrated, the chamber-by-chamber response was effectively renormalized under the vendor’s uniformity profile, which in turn corrected a peripheral bias that routine QA had not revealed.

Importantly, the detector’s acceptable performance in daily QA measurements taken on the same day as the failing PSQAs should not be taken as evidence against detector malfunction. A QA device can remain “in calibration” for the specific test conditions matched to its calibration factors (thus passing routine daily or monthly QA tests) yet be out of calibration under beam conditions not typically included in routine QA. This phenomenon has been previously observed and discussed in the literature, including the AAPM TG-142 report and AAPM Report 24, which caution that devices calibrated under limited beam conditions may not yield accurate results under different clinical scenarios involving variations in energy, field size, or dose rate. In our case, the daily and monthly QA, along with the annual output calibration procedures, involved relatively moderate field sizes and energies (e.g., output checks performed using a small uniform field at mid-range energy). Such testing conditions were insufficient to detect the detector’s calibration drift at the lower-energy range (15,16).

It became clear that more energy- and field-size-specific QA checks of the detector needed to be implemented. For example, to catch an issue like this in the future, a routine QA procedure could expose the detector to a low-energy, large-area field (encompassing most of the array) and compare the resulting 2D dose against expectation. Such a test would be more sensitive to a broad calibration error.

PSQA failure analyses have primarily attributed discrepancies to detector setup inaccuracies, steep dose gradients, plan complexity, or detector resolution limitations (5,6). In contrast, our findings indicate that a calibration-related sensitivity drift in certain chambers can also contribute to PSQA failures. Although deliberately reproducing the calibration error in a controlled setting would have provided additional mechanistic confirmation, such reproduction was not feasible because the chamber-specific sensitivity drift observed in 2022 could not be replicated without access to proprietary detector configuration controls. Synthetic calibration drift would not have reproduced the specific behavior seen clinically; therefore, our conclusions relied on exclusion of alternative causes and the restoration of agreement using updated calibration files.

It is important to emphasize that our findings do not imply that TPS calculations are infallible. Proton TPS dose calculations, even with MC algorithms, carry uncertainties. As noted, shallow-depth dose calculations can be challenging, and factors such as spot size, spot positioning accuracy, and nuclear interactions in the range shifter could all contribute to dose discrepancies in certain situations, especially at low energies. This possibility, however, was ruled out through cross-checks with absolute dose measurements from a secondary device and a third-party audit. At the time of this investigation, no alternative MC algorithm or independent dose calculation platform was available in our clinic to further validate the findings, which we acknowledge as a limitation of the study.

Finally, one practical constraint to acknowledge is machine time. Our clinic, like many others, has a busy treatment schedule, and dedicating extensive time to additional QA measurements (especially across many energies and field sizes) can be challenging. However, the cost of missing a latent issue like an erroneous detector can be far greater. Therefore, it is recommended to integrate quick checks that cover the extremes as part of monthly QA or after any recalibration. Even a small investment of time in these checks can save significant time (and ensure safety) later by preventing false alarms or last-minute plan recalculations.


Conclusions

Our retrospective analysis of intermittent PSQA failures in PBS proton therapy highlights the critical impact of detector array calibration, including output and uniformity correction, on QA outcomes. A small number of proton plans failed PSQA despite all routine machine and QA device checks showing normal results. In these cases, the measured dose was higher than predicted, while daily and monthly QA procedures showed no failing results. However, by repeating the measurements for the failing PSQA cases with the updated MatriXX PT calibration, we achieved passing results. Because the routine QA procedure failed to detect the device mis-calibration, we conclude that in addition to maintaining a schedule of regular detector calibration, it is advisable to implement routine detector QA that covers the full range of clinically relevant conditions, particularly low-energy beams and large field sizes, to confirm consistent detector response. Calibration integrity is now explicitly included in our PSQA root-cause analysis workflow when common sources of PSQA failure have been excluded.


Acknowledgments

The authors wish to thank the staff of Inova Health System for their cooperation in obtaining the experimental data for this study.


Footnote

Data Sharing Statement: Available at https://tro.amegroups.com/article/view/10.21037/tro-25-41/dss

Peer Review File: Available at https://tro.amegroups.com/article/view/10.21037/tro-25-41/prf

Funding: None.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://tro.amegroups.com/article/view/10.21037/tro-25-41/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. This work analyzed routine PSQA data generated as part of standard clinical care and did not involve human subjects research; IRB review and informed consent were not required.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Miften M, Olch A, Mihailidis D, et al. Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218. Med Phys 2018;45:e53-83. [Crossref] [PubMed]
  2. Hrinivich WT, Li H, Tran A, et al. Clinical Characterization of a Table Mounted Range Shifter Board for Synchrotron-Based Intensity Modulated Proton Therapy for Pediatric Craniospinal Irradiation. Cancers (Basel) 2023;15:2882. [Crossref] [PubMed]
  3. Sisin NNT, Zamri N, Abdullah R, et al. Radiation Attenuation Evaluation of Different Density of Polylactic Acid (PLA) and Tough PLA as Tissue Equivalent Materials for Radiotherapy Phantom. In: Proceedings of the 19th Asian Workshop on Polymer Processing (AWPP 2022). Springer; 2023:99-109.
  4. Zhu XR, Li Y, Mackin D, et al. Towards effective and efficient patient-specific quality assurance for spot scanning proton therapy. Cancers (Basel) 2015;7:631-47. [Crossref] [PubMed]
  5. Ricci JC, Hsi WC, Su Z, et al. The root cause analysis on failed patient-specific measurements of pencil beam scanning protons using a 2D detection array with finite size ionization chambers. J Appl Clin Med Phys 2021;22:175-90. [Crossref] [PubMed]
  6. Trnková P, Bolsi A, Albertini F, et al. Factors influencing the performance of patient specific quality assurance for pencil beam scanning IMPT fields. Med Phys 2016;43:5998. [Crossref] [PubMed]
  7. Shirey RJ, Wu HT. Quantifying the effect of air gap, depth, and range shifter thickness on TPS dosimetric accuracy in superficial PBS proton therapy. J Appl Clin Med Phys 2018;19:164-73. [Crossref] [PubMed]
  8. Rana S, Samuel EJJ. Measurements of in-air spot size of pencil proton beam for various air gaps in conjunction with a range shifter on a ProteusPLUS PBS dedicated machine and comparison to the proton dose calculation algorithms. Australas Phys Eng Sci Med 2019;42:853-62. [Crossref] [PubMed]
  9. RaySearch Laboratories. RayStation 11A User Manual, Chapter 7.2. Stockholm, Sweden: RaySearch Laboratories; 2021.
  10. Brodbek L, Kretschmer J, Willborn K, et al. Analysis of the applicability of two-dimensional detector arrays in terms of sampling rate and detector size to verify scanned intensity-modulated proton therapy plans. Med Phys 2020;47:4589-601. [Crossref] [PubMed]
  11. Togno M, Wilkens JJ, Menichelli D, et al. Development and clinical evaluation of an ionization chamber array with 3.5 mm pixel pitch for quality assurance in advanced radiotherapy techniques. Med Phys 2016;43:2283. [Crossref] [PubMed]
  12. Arjomandy B, Sahoo N, Ding X, et al. Use of a two-dimensional ionization chamber array for proton therapy beam quality assurance. Med Phys 2008;35:3889-94. [Crossref] [PubMed]
  13. Arjomandy B, Taylor P, Ainsley C, et al. AAPM task group 224: Comprehensive proton therapy machine quality assurance. Med Phys 2019;46:e678-e705. [Crossref] [PubMed]
  14. Low DA, Harms WB, Mutic S, et al. A technique for the quantitative evaluation of dose distributions. Med Phys 1998;25:656-61. [Crossref] [PubMed]
  15. Klein EE, Hanley J, Bayouth J, et al. Task Group 142 report: quality assurance of medical accelerators. Med Phys 2009;36:4197-212. [Crossref] [PubMed]
  16. Svensson GK, Baily NA, Loevinger R, et al. Physical Aspects of Quality Assurance in Radiation Therapy. AAPM Report 24. New York: American Association of Physicists in Medicine; 1994.
Cite this article as: Tominaga Y, Bamberger C, Wakisaka Y, Takashina M, Li X, Rahimi R. Intermittent patient-specific quality assurance failures in proton therapy attributable to detector calibration drift. Ther Radiol Oncol 2026;10:2.
