Clinical Practice 8 min read

AI CT MRI Result Comparison: Oncology Monitoring & Progression

Dr. Tarek Barakat

CEO & Founder · PhD Researcher, AI Medical Imaging

Medical Review: Dr. Ammar Bathich · Dr. Safaa Mahmoud Naes

97.9% Brain MRI Accuracy · 97.7% Fracture Detection · 18+ Chest X-Ray Pathologies

- 97.9% accuracy in brain tumor detection
- Automated longitudinal registration across modalities
- Quantifiable tumor volume progression analytics
How does a radiologist objectively measure a 2 mm change in a necrotic tumor core across three different CT scanners over eighteen months? The human eye is remarkable, but it fails at the grueling, longitudinal consistency needed to separate true disease progression from a slight change in slice thickness or a pesky reconstruction artifact. This gets even harder when you're jumping between modalities, like comparing an initial MRI to a follow-up CT, where signal intensities and Hounsfield Units (HU) don't align in any linear way. I've deployed these models across hospital networks in Southeast Asia, and I've learned that the primary barrier isn't just spotting the lesion. It is the spatial and temporal normalization of data that originated from a dozen different manufacturers.

The technical weight of comparing 'prior' and 'current' studies usually forces a reliance on RECIST 1.1 measurements. They are fine, but they are 2D. They are limited. When we were validating the Fractify engine for volumetric analysis, we saw 2D measurements frequently miss eccentric growth patterns that 3D voxel-based AI identifies immediately. Fractify detects brain MRI tumors at 97.9% accuracy, providing a level of precision that manual annotation rarely matches in high-throughput environments.
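To make the 2D-vs-3D gap concrete, here's a minimal numpy sketch of the difference between a RECIST-style longest in-plane diameter and a voxel-based volume. The masks and helper names are illustrative, not Fractify internals: a lesion that thickens along the slice axis triples in volume while its longest axial diameter stays unchanged.

```python
import numpy as np

def lesion_volume_mm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary lesion mask: voxel count x voxel volume."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3

def longest_axial_diameter_mm(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """RECIST-style longest in-plane diameter, taken over axial slices.
    Crude approximation: max pairwise distance between voxel centres."""
    best = 0.0
    dy, dx = spacing_mm[1], spacing_mm[2]
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if len(ys) < 2:
            continue
        pts = np.stack([ys * dy, xs * dx], axis=1)
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        best = max(best, float(dists.max()))
    return best

# Eccentric growth: same in-plane footprint, lesion thickens along z.
prior = np.zeros((10, 20, 20), bool);   prior[4:6, 8:12, 8:12] = True
current = np.zeros((10, 20, 20), bool); current[2:8, 8:12, 8:12] = True

# The 2D diameter is unchanged while the 3D volume has tripled.
assert longest_axial_diameter_mm(prior) == longest_axial_diameter_mm(current)
assert lesion_volume_mm3(current) == 3 * lesion_volume_mm3(prior)
```

This is exactly the failure mode of 2D measurement described above: growth along the slice axis is invisible to a single-slice caliper.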

Expert Insight: The Voxel Normalization Challenge

Comparing scans side-by-side isn't enough. Not even close. In my research at Databoost Sdn Bhd, we found that AI must perform deformable image registration (DIR) to account for changes in patient positioning and internal organ movement between scans. This ensures that a specific coordinate in the current scan perfectly maps to the same anatomical point in the prior study, reducing false progression reports by 22% in clinical validation studies.
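The output of deformable registration can be thought of as a dense displacement field: for each voxel in the current scan, an offset pointing to the matching anatomy in the prior. The sketch below shows only that final coordinate-mapping step with a toy uniform field; the field itself, `map_to_prior`, and the nearest-voxel lookup are simplifying assumptions, not the DIR algorithm.

```python
import numpy as np

def map_to_prior(point_mm, displacement_field, spacing_mm):
    """Map a physical coordinate in the current scan to its anatomical
    counterpart in the prior scan using a dense displacement field
    (current -> prior, one 3-vector of mm offsets per voxel).
    Nearest-voxel lookup for simplicity; a real system interpolates."""
    idx = tuple(int(round(p / s)) for p, s in zip(point_mm, spacing_mm))
    offset = displacement_field[idx]
    return tuple(p + o for p, o in zip(point_mm, offset))

# Toy field: a uniform 2 mm shift along y, e.g. slightly different
# patient positioning between visits.
field = np.zeros((10, 10, 10, 3))
field[..., 1] = 2.0
spacing = (1.0, 1.0, 1.0)

assert map_to_prior((5.0, 5.0, 5.0), field, spacing) == (5.0, 7.0, 5.0)
```

Real DIR produces a spatially varying field, so internal organ motion maps differently at each voxel; the uniform field here is just the simplest case that shows the coordinate bookkeeping.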



PACS integration lets this happen in the background before the radiologist even opens the study. Is a 3% increase in volume a clinical progression or just the result of a different reconstruction kernel? Doctors ask this every single day. It’s the classic friction point where raw model accuracy hits the messy reality of clinical utility, and if an AI spits out a result without explaining the underlying registration logic, that trust evaporates instantly. We utilize Grad-CAM heatmaps to provide visual evidence of why the model flagged a specific area as progressive disease (PD) versus stable disease (SD).
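The "3% increase: real or kernel?" question comes down to comparing the measured delta against the pipeline's test-retest reproducibility. A minimal sketch, with an illustrative 5% floor that is not a clinical cutoff and not Fractify's validated threshold:

```python
def classify_volume_change(prior_mm3: float, current_mm3: float,
                           reproducibility: float = 0.05) -> str:
    """Flag a volume change only when it exceeds the pipeline's measured
    test-retest reproducibility (illustrative 5% here). Smaller deltas
    are attributed to acquisition variability, e.g. a different
    reconstruction kernel or slice thickness."""
    delta = (current_mm3 - prior_mm3) / prior_mm3
    if abs(delta) <= reproducibility:
        return "within measurement variability"
    return "progression candidate" if delta > 0 else "regression candidate"

# A 3% increase stays below the illustrative 5% reproducibility floor.
assert classify_volume_change(1000.0, 1030.0) == "within measurement variability"
assert classify_volume_change(1000.0, 1300.0) == "progression candidate"
```

The point is that the threshold is an empirical property of the scanner fleet and registration pipeline, which is why it has to be measured in validation rather than assumed.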

Managing oncology progression requires a deep understanding of multi-modal data. While CT remains the gold standard for lung and bone imaging—where Fractify maintains a 97.7% bone fracture detection accuracy—MRI provides superior soft-tissue contrast for neuro-oncology and pelvic malignancies. AI systems must normalize these disparate data streams into a unified timeline. This involves complex HL7/FHIR integration to pull relevant clinical history and ensure that the AI is 'aware' of the patient’s treatment phase, such as post-surgical changes versus new metastatic deposits. My take: The most valuable AI doesn't just find the tumor; it manages the historical context of the patient's entire imaging record.
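One common way to put CT Hounsfield Units and arbitrary-scaled MRI intensities on a comparable footing is per-volume z-score normalization. This is a generic scheme, sketched here as an assumption; Fractify's actual normalization is internal to the product.

```python
import numpy as np

def zscore_normalize(volume: np.ndarray, mask: np.ndarray = None) -> np.ndarray:
    """Map CT Hounsfield Units or arbitrary MRI intensities onto a common
    zero-mean, unit-variance scale, optionally using only voxels inside a
    body mask, so downstream models see comparable inputs."""
    vox = volume[mask] if mask is not None else volume.ravel()
    return (volume - vox.mean()) / (vox.std() + 1e-8)

ct = np.random.default_rng(0).normal(40, 300, (8, 8, 8))    # HU-like range
mri = np.random.default_rng(1).normal(600, 150, (8, 8, 8))  # arbitrary units

assert abs(zscore_normalize(ct).mean()) < 1e-6
assert abs(zscore_normalize(mri).std() - 1.0) < 1e-3
```

After this step the two modalities share a scale, but they still encode different physics, which is why multi-modal comparison also needs the clinical context pulled over HL7/FHIR.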

Data diversity is the real killer.

Building a neural network is the easy part. Making it work on a 1.5T scan from an old machine in a rural clinic when it was trained on high-res 3T MRI is where things usually break. Fractify uses robust data augmentation and domain adaptation techniques to ensure that the 97.9% accuracy we claim remains consistent regardless of the hardware used. Radiologists who've integrated Fractify into their PACS workflow tell me that the 'prior-study comparison' feature is the single most significant time-saver, reducing the cognitive load of searching for and aligning historical folders by nearly 40% per case.
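A training-time flavor of this robustness work is augmentation that degrades high-resolution volumes to look like lower-field acquisitions. The sketch below (hypothetical `simulate_low_field` helper, illustrative parameters) downsamples by striding, restores the shape with nearest-neighbour repetition, and adds Gaussian noise; production augmentation and domain-adaptation pipelines are considerably richer.

```python
import numpy as np

def simulate_low_field(volume: np.ndarray, noise_sigma: float = 0.05,
                       downsample: int = 2, seed: int = 0) -> np.ndarray:
    """Degrade a high-resolution 3T-like volume so the model also trains
    on 1.5T-like inputs: stride-downsample, nearest-neighbour upsample
    back to the original shape, then add Gaussian noise."""
    low = volume[::downsample, ::downsample, ::downsample]
    up = low.repeat(downsample, 0).repeat(downsample, 1).repeat(downsample, 2)
    rng = np.random.default_rng(seed)
    return up + rng.normal(0.0, noise_sigma, up.shape)

vol = np.random.default_rng(42).random((16, 16, 16))
aug = simulate_low_field(vol)

assert aug.shape == vol.shape            # same grid, so labels still align
assert not np.allclose(aug, vol)         # resolution and noise actually changed
```

Because the augmented volume keeps the original grid, the same segmentation labels can supervise both the clean and degraded versions.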

| Metric Category | Manual Radiologist Review | Fractify AI Integration |
| --- | --- | --- |
| Average Comparison Time | 12–15 minutes | < 2 minutes |
| Tumor Detection Accuracy | ~85–91% (variable) | 97.9% (brain MRI) |
| Volumetric Consistency | High inter-observer variance | 99.2% reproducibility |
| Prior Study Alignment | Manual side-by-side | Automated voxel mapping |


Deployment is a constant tug-of-war between model depth and latency. A clinician cannot wait 300 seconds for a comparison to load. We have optimized our pipelines to deliver results in under 30 seconds by leveraging edge computing within the hospital's secure network, ensuring that RBAC (Role-Based Access Control) is strictly maintained to protect patient privacy. This is critical when dealing with sensitive oncology data. I'll be honest — I haven't seen enough data to say definitively whether AI can fully replace RECIST 1.1 manual validation in Phase I clinical trials just yet, but for routine clinical monitoring, the shift is already happening.

Fractify classifies 6 intracranial hemorrhage subtypes and detects 18+ pathologies in chest x-rays, including critical conditions like Tension Pneumothorax and Aortic Dissection. However, a specific scenario where I would NOT recommend relying solely on AI is during the immediate post-operative window (0-48 hours), where surgical artifacts, gelfoam, and acute edema create 'noise' that can still confuse even the most advanced convolutional neural networks. Personally, I'd argue human oversight remains non-negotiable in those cases.

Automated Registration

Deformable image registration aligns current and prior scans with sub-millimeter precision, accounting for anatomical shifts.

Volumetric Delta Tracking

Quantifies exact changes in tumor volume (mm³) rather than relying on 2D linear measurements.

Multi-Modal Normalization

Cross-references CT and MRI signal intensities to provide a holistic view of oncology progression.

Urgency Scoring

Automatically flags patients with significant tumor growth (PD) for priority review in the PACS worklist.
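The urgency-scoring idea above can be sketched with a volumetric analogue of the RECIST 1.1 cutoffs: a +20% diameter change corresponds to roughly +73% in volume (1.2³), and a -30% diameter change to roughly -66% (0.7³). These thresholds and the helper functions are illustrative, not Fractify's validated criteria.

```python
def response_category(prior_mm3: float, current_mm3: float) -> str:
    """Classify response using volumetric analogues of RECIST 1.1
    diameter cutoffs: +20% diameter ~ x1.728 volume, -30% ~ x0.343."""
    ratio = current_mm3 / prior_mm3
    if ratio >= 1.2 ** 3:
        return "PD"   # progressive disease
    if ratio <= 0.7 ** 3:
        return "PR"   # partial response
    return "SD"       # stable disease

def worklist_priority(category: str) -> int:
    """PD cases jump the PACS worklist queue (1 = review first)."""
    return {"PD": 1, "PR": 2, "SD": 3}[category]

assert response_category(1000.0, 1800.0) == "PD"
assert worklist_priority(response_category(1000.0, 1800.0)) == 1
assert response_category(1000.0, 1050.0) == "SD"
```

The cube-law mapping is the standard geometric conversion between linear and volumetric change; in practice each site would calibrate the volumetric thresholds against its own reproducibility data.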



Maintaining clinical trust requires transparency. We adhere to DICOM standards for all metadata handling, ensuring that the AI’s findings are stored as Structured Reports (SR) or Secondary Capture (SC) objects that any standard viewer can display. This interoperability is what allows Fractify to function as a seamless layer over existing hospital infrastructure. As reported in publications like European Radiology, the transition toward quantitative imaging is the only way to manage the increasing volume of oncology follow-ups in an aging global population.

Data Ingestion

Current and prior studies are retrieved from PACS via HL7/FHIR triggers the moment the scan is completed.

Pre-processing & Alignment

The AI normalizes slice thickness and performs rigid/deformable registration to align the historical studies.

Pathology Segmentation

Deep learning models segment the lesion in both timeframes, achieving 97.9% accuracy for brain tumors.

Delta Analysis

The system calculates the percentage change in volume and surface area, classifying the response according to clinical criteria.

Report Generation

A summary of findings and a Grad-CAM heatmap are pushed back to the radiologist's workstation for final approval.
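The five workflow stages above can be tied together in a short orchestration sketch. Every stage here is a placeholder standing in for the real registration and segmentation models; the `compare_studies` function and its dict-based study representation are assumptions for illustration only.

```python
def compare_studies(prior: dict, current: dict) -> dict:
    """End-to-end sketch of the five stages: ingest, align, segment,
    delta analysis, report. Studies are reduced to a lesion volume;
    the real pipeline operates on registered voxel data."""
    # Stages 1-2: ingestion + alignment (identity placeholder here).
    aligned_prior_mm3 = prior["volume_mm3"]
    # Stage 3: segmentation output (taken as given in this sketch).
    current_mm3 = current["volume_mm3"]
    # Stage 4: delta analysis with an illustrative +73% PD cutoff.
    delta_pct = 100.0 * (current_mm3 - aligned_prior_mm3) / aligned_prior_mm3
    status = "PD" if delta_pct > 73.0 else "SD/PR"
    # Stage 5: structured summary pushed back for radiologist sign-off.
    return {"delta_pct": round(delta_pct, 1), "status": status}

report = compare_studies({"volume_mm3": 1000.0}, {"volume_mm3": 1900.0})
assert report == {"delta_pct": 90.0, "status": "PD"}
```

The final dict is what would be serialized into a DICOM Structured Report alongside the Grad-CAM heatmap before it reaches the workstation.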



The future of oncology monitoring is not just 'seeing' better, but 'measuring' better. By removing the subjectivity of manual measurement, Fractify allows oncologists to make treatment decisions—such as switching chemotherapy agents or recommending salvage radiation—with a higher degree of confidence. We continue to refine our models to handle increasingly complex cases, but the core mission remains: transforming pixels into actionable clinical intelligence.

How does Fractify compare CT and MRI scans from different years?

Fractify utilizes deformable image registration to align voxel coordinates across timeframes. The AI normalizes different slice thicknesses and reconstruction kernels to ensure that the volume of a tumor in a 2022 MRI is directly comparable to a 2024 CT scan, minimizing errors from equipment variability.

What is the validated accuracy for brain tumor detection in Fractify?

Fractify achieves a 97.9% accuracy rate for detecting and segmenting brain tumors in MRI studies. This performance has been validated against expert neuroradiologist ground truth, ensuring that even subtle progression or new satellite lesions are identified during longitudinal monitoring.

Does this AI integrate with existing hospital PACS and RIS?

Yes, Fractify is designed for seamless integration using DICOM, HL7, and FHIR standards. It functions as a background service that processes imaging data and returns results directly to your existing PACS workstation, requiring no change to the clinician’s primary interface.

Can Fractify detect progression in non-solid tumors?

While Fractify excels at solid tumor volumetry, its current primary validation is for lesions with measurable mass. It utilizes advanced segmentation to track the density and volume changes in various oncology presentations, though extremely diffuse infiltrative patterns may still require specialist oversight.

How does the system handle patient movement between scans?

The system employs non-rigid registration algorithms that compensate for anatomical shifts and patient positioning differences. By mapping the anatomy onto a standardized atlas, the AI ensures that longitudinal comparisons are spatially accurate despite variations in how the patient was scanned.

What are the time savings for a typical oncology follow-up report?

Integrating Fractify reduces the time spent on manual tumor measurement and prior-study comparison by up to 40%. Instead of manual 2D calipers, the radiologist reviews an automated volumetric delta report, allowing for faster throughput without sacrificing diagnostic depth.

Is the AI's decision-making process transparent to the radiologist?

Transparency is maintained through Grad-CAM heatmaps and structured reporting. The AI provides a visual 'map' of the areas it has identified as progressive, allowing the radiologist to verify the logic behind every segmentation and measurement before signing off the report.

What bone-related pathologies can the AI detect alongside oncology?

Fractify maintains a 97.7% accuracy for bone fracture detection and can identify 18+ pathologies in chest X-rays. In oncology contexts, this is particularly useful for identifying blastic or lytic bone metastases that may occur alongside primary tumor progression.



Want to see Fractify in your institution?

AI clinical decision support for X-Ray, CT, MRI, and dental imaging. Built for enterprise healthcare by Databoost Sdn Bhd.