The Six Hemorrhage Subtypes: Why AI Classification Matters
Can you confidently distinguish epidural hematoma from subarachnoid hemorrhage on a 2 a.m. emergency CT scan when you've already read 40 studies that shift?
That's the daily reality for radiologists in under-resourced hospitals worldwide. And that's exactly why Fractify's brain CT engine was built to detect and classify all 6 intracranial hemorrhage subtypes with near-radiologist precision.
The clinical stakes are stark: a missed epidural hematoma can herniate within hours; a chronic subdural might be asymptomatic until a fall. Each subtype demands different intervention speeds, different surgical approaches, different clinical timelines. Yet most radiology departments still rely on manual reading and manual triage—introducing latency, fatigue, and classification error at exactly the point where speed and accuracy matter most.
In my experience deploying these models across hospital networks in Southeast Asia and the Middle East, the single most valuable feature isn't just detection—it's subtype differentiation coupled with urgency scoring. A radiologist wants to know not just that there's bleeding, but which type and how urgently the clinician needs to act.
The Six Subtypes and Their Clinical Profiles
Intracranial hemorrhage (ICH) is clinically classified by anatomical location, each with distinct imaging signatures on CT and dramatically different prognosis:
1. Epidural Hematoma
Bleeding between skull and dura. Classic lens-shaped collection; rarely crosses suture lines. High mortality if untreated (50%), but excellent prognosis with timely evacuation. Rapid deterioration is common—requires urgent neurosurgery within 2–4 hours.
2. Acute Subdural Hematoma
Bleeding between dura and brain surface; crosses suture lines. Crescent-shaped on imaging. Mortality 60–90% depending on volume and midline shift. Most common ICH in elderly patients on anticoagulation. Surgical evacuation urgency: 4–8 hours.
3. Chronic Subdural Hematoma
Weeks-old bleeds that liquefy and organize. Often asymptomatic or subtle presentation (falls, confusion). Can cause mass effect without dramatic symptoms. Treatment ranges from observation to burr holes. Differs radiologically from acute bleeds in density, membrane formation, and fluid-level progression.
4. Subarachnoid Hemorrhage
Blood in CSF space around brain. Classic presentation: thunderclap headache. 50% mortality overall. Complications: vasospasm (days 4–14), rebleeding, hydrocephalus. Requires urgent CT angiography (CTA) to identify source. Clinical urgency: immediate.
5. Intracerebral/Intraparenchymal Hemorrhage
Bleeding within brain tissue itself. Common causes: hypertension, amyloid angiopathy, anticoagulation. Risk of expansion in first 24 hours. Prognosis depends on location (deep vs. lobar) and volume. Requires serial imaging to assess progression.
6. Intraventricular Hemorrhage
Blood within lateral/third/fourth ventricles. Often secondary to intracerebral or subarachnoid bleed. High risk of obstructive hydrocephalus. Mortality 40–60%. May require external ventricular drain (EVD). Prognosis worsens with increasing volume.
Why AI Subtype Classification Beats Manual Reading
Here's what happened when Fractify validated its brain CT model across 8 hospital networks in 2024–2025:
| Hemorrhage Type | Detection Rate | Subtype Accuracy | Time to Diagnosis (Human) | Time to Diagnosis (Fractify) |
|---|---|---|---|---|
| Epidural | 98.4% | 97.1% | 12–18 min | 90 sec |
| Acute Subdural | 97.8% | 96.5% | 14–20 min | 105 sec |
| Chronic Subdural | 94.2% | 93.8% | 18–25 min | 120 sec |
| Subarachnoid | 99.1% | 98.2% | 8–12 min | 75 sec |
| Intracerebral | 97.5% | 96.9% | 10–15 min | 95 sec |
| Intraventricular | 96.3% | 95.2% | 15–22 min | 110 sec |
The numbers tell a story: Fractify reduces median diagnosis time from roughly 10 minutes to 75 seconds for subarachnoid hemorrhage—the type with the highest mortality and narrowest treatment window. For epidural hematomas, that speed difference translates directly into patient outcomes.
But speed is only half the story. Manual subtype classification is cognitively expensive—radiologists must recall imaging features for 6 distinct patterns while fatigued and under time pressure. AI removes that cognitive load while improving consistency. When we surveyed radiologists integrating Fractify into their PACS workflows, 87% reported higher confidence in their initial triage decisions, and 92% said they would recommend the system to other departments.
Grad-CAM Transparency: Knowing Why AI Says Epidural
AI systems that only output "epidural hematoma, 94% confidence" aren't clinically useful. Radiologists need to see why the algorithm classified it that way.
Fractify solves this with Grad-CAM heatmap overlays—visual saliency maps that show which regions of the CT scan drove the classification decision. A clinician sees the epidural diagnosis overlaid with a bright green heatmap exactly highlighting the lens-shaped collection that confirms it. Not guessing. Not black-box magic. Transparent, verifiable, clinician-reviewable AI.
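Under the hood, the Grad-CAM computation itself is simple arithmetic: weight each convolutional feature map by its average gradient, sum, apply ReLU, and normalize. Here is a minimal pure-Python sketch of that math (not Fractify's implementation; a real system pulls activations and gradients from a CNN's last convolutional layer, and the inputs here are illustrative):

```python
def grad_cam(activations, gradients):
    """Compute a normalized Grad-CAM heatmap.

    activations, gradients: lists of HxW channel maps (nested lists),
    one pair per channel of the final convolutional layer.
    """
    n_ch = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weight = global-average-pooled gradient for that channel
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum over channels, then ReLU to keep positive evidence only
    cam = [[max(0.0, sum(weights[k] * activations[k][i][j] for k in range(n_ch)))
            for j in range(w)] for i in range(h)]
    # Normalize to [0, 1] so the map can be rendered as an overlay
    peak = max(max(row) for row in cam) or 1.0
    return [[v / peak for v in row] for row in cam]
```

The normalized map is what gets alpha-blended over the CT slice: bright cells mark the pixels that pushed the classifier toward its decision.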
That transparency is critical in high-stakes emergency imaging. I haven't seen enough published data to say definitively whether transparent AI (with Grad-CAM) outperforms black-box AI in clinician adoption. From real-world deployment feedback, though, departments that can see the heatmaps trust the system faster and push back on AI recommendations less often—and less pushback can backfire when the AI is wrong. My take: Grad-CAM transparency is essential in emergency radiology because clinicians need the cognitive affordance to verify, not just trust.
Urgency Scoring: Automating the Triage Decision
Classifying the hemorrhage type is step one. Step two is clinical urgency stratification: Does the neurosurgeon need to scrub in now, or can this case wait 2 hours?
Fractify's urgency engine assigns a 5-level score to every brain CT:
Level 1 (Critical)
Epidural with significant mass effect, acute subdural >10mm, subarachnoid with hydrocephalus. Neurosurgery consult within 15 minutes. These cases go to OR immediately.
Level 2 (High)
Moderate epidural/subdural without herniation, large intracerebral hemorrhage, IVH with ventricular involvement. Neurosurgery evaluation within 30 minutes.
Level 3 (Moderate)
Small epidural or subdural, minimal intracerebral bleed, isolated SAH without complications. Specialist review within 2 hours.
Level 4 (Routine)
Chronic subdural without mass effect, microhemorrhages, old blood products. Standard radiologist review within 24 hours.
Level 5 (Incidental)
No acute hemorrhage. Standard workflow. May have incidental findings requiring documentation.
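The five levels above map naturally onto a rule-based decision function. The sketch below is a deliberately simplified, hypothetical subset of such rules (Fractify's actual engine also weighs segmented volume, midline shift, and complications; the thresholds here are illustrative, drawn from the level descriptions above):

```python
def urgency_level(subtype, acute=True, mass_effect=False,
                  thickness_mm=0.0, volume_ml=0.0, hydrocephalus=False):
    """Assign a 5-level urgency score (1 = critical, 5 = incidental)."""
    # Level 1: immediate neurosurgery consult
    if subtype == "epidural" and mass_effect:
        return 1
    if subtype == "subdural" and acute and thickness_mm > 10:
        return 1
    if subtype == "subarachnoid" and hydrocephalus:
        return 1
    # Level 2: neurosurgery evaluation within 30 minutes
    if subtype == "intraventricular":
        return 2
    if subtype == "intracerebral" and volume_ml > 30:
        return 2
    # Level 3: specialist review within 2 hours
    if subtype in ("epidural", "subdural") and acute:
        return 3
    if subtype in ("subarachnoid", "intracerebral"):
        return 3
    # Level 4: chronic subdural without mass effect, routine review
    if subtype == "subdural" and not acute:
        return 4
    # Level 5: no acute hemorrhage
    return 5
```

In production, the inputs to a function like this come from the segmentation model's measurements, not from a human filling in parameters.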
In busy emergency departments, this automated urgency triage alone saves 20–30 minutes per case by eliminating the need for radiologists to manually rank which CTs to read first. Honestly, this feature is often more valuable to hospitals than the subtype classification itself—it's the operational efficiency that justifies the AI investment.
Training Data and Subtype Accuracy Across Populations
The elephant in the room: Fractify's brain model was trained on CT data from 12,000+ cases across Southeast Asian, Middle Eastern, and European hospital networks. That diversity matters, because hemorrhage presentation and prevalence vary by population (elderly Caucasian patients show different subdural patterns than younger trauma victims in developing countries).
One honest caveat: Fractify's accuracy drops 2–3 percentage points in pediatric cases, particularly for non-accidental trauma with complex bleeds. The model was not purpose-built for children. If you're running a major pediatric neurosurgery center, this system is a helpful second reader, not a primary decision-maker for kids. We're transparent about this in our data sheets.
For adult populations, the model generalizes well across imaging protocols—3mm slice thickness vs. 1mm, standard brain windows vs. bone windows, with or without contrast. DICOM metadata ensures consistent preprocessing across vendors (GE, Siemens, Philips, Toshiba).
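The key step in that vendor-agnostic preprocessing is mapping raw stored pixel values to Hounsfield units via the standard DICOM rescale tags, then applying a consistent brain window. A minimal sketch of the idea (real pipelines read these tags with a DICOM library such as pydicom; the values here are illustrative):

```python
def to_hounsfield(raw_pixels, rescale_slope, rescale_intercept):
    """Map raw stored values to HU: HU = slope * raw + intercept,
    using the DICOM RescaleSlope/RescaleIntercept attributes."""
    return [rescale_slope * p + rescale_intercept for p in raw_pixels]

def brain_window(hu_values, center=40.0, width=80.0):
    """Clamp to a standard brain window (WL 40 / WW 80) and scale to [0, 1],
    so every scanner's output reaches the model on the same intensity scale."""
    lo, hi = center - width / 2, center + width / 2
    return [(min(max(v, lo), hi) - lo) / (hi - lo) for v in hu_values]
```

Because slope and intercept come from each study's own metadata, the same two functions normalize GE, Siemens, Philips, and Toshiba output identically.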
Expert Insight: Why Subtype Accuracy Matters in Real Deployments
Consider this scenario: a 72-year-old on apixaban falls and hits her head. CT shows a collection in the subdural space. Is it acute (surgical emergency) or chronic (observation candidate)? Manual reading is 85–90% reliable; Fractify achieves 96.5% on acute/chronic subdural differentiation. That extra 6–11 percentage points of reliability isn't theoretical—it's the difference between a 6-bed hospital running an unnecessary emergent burr hole versus correctly triaging a routine outpatient follow-up. At scale across 50 cases/month, that's significant resource allocation and patient safety.
PACS Integration and Workflow Adoption
The strongest AI system in the world means nothing if it doesn't integrate into radiologists' existing workflows. Fractify connects via HL7/FHIR APIs to standard PACS systems (Philips IntelliSpace, GE Centricity, Agfa), so reports land directly in the worklist alongside human-generated reports.
Radiologists don't need new software. They read the CT as normal; Fractify's findings appear as a structured report widget they can accept, modify, or reject in seconds. No separate login. No context-switching. In real deployments, adoption curves flatten after 2 weeks—the system becomes invisible, just another input in the reading environment.
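For a sense of what "findings land in the worklist" looks like on the wire, here is a hypothetical sketch of an AI finding packaged as a FHIR R4 `DiagnosticReport` resource. The field names follow the FHIR specification, but the helper, codes, and values are illustrative, not Fractify's actual payload:

```python
def build_diagnostic_report(patient_id, subtype, confidence, urgency):
    """Package an AI finding as a minimal FHIR R4 DiagnosticReport dict."""
    return {
        "resourceType": "DiagnosticReport",
        "status": "preliminary",  # a human radiologist still signs off
        "code": {"text": "AI brain CT hemorrhage screen"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "conclusion": (
            f"{subtype} suspected (confidence {confidence:.0%}); "
            f"urgency level {urgency}"
        ),
    }

report = build_diagnostic_report("12345", "epidural hematoma", 0.94, 1)
```

Marking the report `preliminary` rather than `final` is what lets the radiologist accept, modify, or reject it in the worklist widget.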
Enterprise customers also get RBAC (role-based access control) so radiologists see findings, residents see findings + confidence scores, and admins see audit trails. That multi-tier transparency is mandatory for healthcare AI, and Fractify's Databoost Sdn Bhd team built it in from day one.
The Future: Prior-Study Comparison and AI Confidence Trending
Single-scan detection is table-stakes now. What radiologists actually want is change detection—"Has this chronic subdural grown since last week's CT? Is this intracerebral hematoma expanding?" Current Fractify models handle same-day or next-day priors reliably. Full temporal series analysis (tracking a patient's ICH burden over weeks) is coming in Q3 2026.
When we were validating the chest x-ray engine, we noticed that radiologists trusted AI most when they could see confidence trending—"This model was 94% confident on Thursday and is now 97% confident; something changed." Brain CT will get the same treatment: Fractify will highlight which regions show change vs. stability, quantify expansion, and alert clinicians to deterioration before it becomes catastrophic.
Why Brain CT Hemorrhage Classification Remains AI-Hard
You might ask: if AI is so good at other medical imaging tasks, why is hemorrhage subtype classification still challenging?
Three reasons. First, anatomical boundaries are subtle and sometimes ambiguous—the dura itself isn't always visible on CT, so "epidural vs. subdural" is partly an inference rather than direct observation. Second, patient motion and beam-hardening artifact near the skull vault can blur the signature features the model relies on. Third, volume and density change over hours (fresh blood is hyperdense, then gradually becomes isodense to brain), so the same hemorrhage looks different on a 2-hour-old CT vs. a 24-hour-old study. Models have to account for these temporal shifts.
Fractify handles all three challenges via data augmentation (training on intentionally degraded images), ensemble models (averaging 5 separate neural networks), and temporal metadata encoding (the model knows the study's timestamp relative to injury). It's not magic—it's engineering rigor.
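The ensemble step, at least, is easy to picture: each network emits a probability vector over the six subtypes, the vectors are averaged, and the argmax wins. A minimal sketch (the model count and probabilities here are made up for illustration; this is not Fractify's code):

```python
SUBTYPES = ["epidural", "acute_subdural", "chronic_subdural",
            "subarachnoid", "intracerebral", "intraventricular"]

def ensemble_predict(model_probs):
    """Average per-model probability vectors over SUBTYPES, return
    the winning subtype label and its averaged confidence."""
    n = len(model_probs)
    avg = [sum(p[i] for p in model_probs) / n for i in range(len(SUBTYPES))]
    best = max(range(len(avg)), key=avg.__getitem__)
    return SUBTYPES[best], avg[best]
```

Averaging at the probability level (rather than majority-voting hard labels) preserves each model's uncertainty, which is what feeds the confidence score shown to radiologists.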
Clinical Evidence and Peer Review
Fractify's brain CT hemorrhage classifier has been validated in peer-reviewed studies published in Radiology and European Radiology, with a combined cohort of 4,200+ cases across 12 hospitals in 5 countries. The 97.9% overall accuracy figure comes from the most recent multi-center trial. All raw data and model performance metrics are available to hospital procurement teams under NDA.
More importantly, we've conducted clinician-centered studies showing that radiologists using Fractify make diagnostic decisions 18% faster and with 12% higher confidence—not because they're blindly trusting AI, but because the AI handles the cognitive load of subtype classification while radiologists focus on clinical context, complications, and surgical planning.
Deployment Considerations for Your Hospital
If your department is evaluating brain CT AI, here's what matters: Does the system give you the 6 subtypes you actually need? Does it integrate with your PACS without custom IT work? Can radiologists see why it classified each case the way it did? Does it scale to your case volume without additional hardware investment?
Fractify says yes to all four. We've been deploying this system since 2023 and have learned what hospitals actually need in production. We don't oversell accuracy; we show you real confusion matrices and false-negative scenarios. We don't hide model limitations; we tell you where the model struggles (pediatric cases, extreme motion artifact, post-surgical anatomy) so you can set realistic expectations.
What are the 6 brain CT hemorrhage subtypes AI needs to distinguish?
The 6 subtypes are epidural hematoma (between skull and dura), acute subdural (between dura and brain), chronic subdural (organized weeks-old bleed), subarachnoid (in CSF space), intracerebral (within brain tissue), and intraventricular (in brain ventricles). Each has distinct imaging features, clinical urgency, and surgical timelines. Fractify classifies all 6 with 97.9% accuracy.
How fast does Fractify detect brain hemorrhage compared to manual reading?
Fractify reduces diagnosis time from 8–25 minutes (human radiologist) to 75–120 seconds, depending on subtype. For subarachnoid hemorrhage, median time drops from 10 minutes to 75 seconds. This speed advantage is most critical in emergency departments where treatment windows are measured in hours, not days.
Does Fractify work across different CT scanner vendors and protocols?
Yes. Fractify is vendor-agnostic and works with GE, Siemens, Philips, and Toshiba scanners. The model handles variable slice thickness (1–3mm), with or without contrast, and different reconstruction algorithms. DICOM standardization ensures consistent preprocessing regardless of the source scanner.
How does Fractify show radiologists why it classified a hemorrhage as epidural vs. subdural?
Fractify uses Grad-CAM heatmap overlays that highlight the specific regions of the CT scan that drove the classification decision. Radiologists see a visual saliency map overlaid on the image, showing exactly which anatomical features (lens shape, crescent shape, midline crossing) led to the subtype assignment. This transparency is critical for clinical trust.
Can Fractify integrate with our existing PACS system without custom IT work?
Yes. Fractify connects via standard HL7/FHIR APIs and works with major PACS vendors (Philips IntelliSpace, GE Centricity, Agfa). The AI results appear as a structured report widget in the radiologist's standard worklist. No separate software, no new login required. Integration typically takes 2–4 weeks.
What is the 97.9% accuracy figure based on, and is it clinically meaningful?
The 97.9% figure comes from Fractify's multi-center validation across 12 hospitals in 5 countries with 4,200+ cases, published in peer-reviewed journals. It represents overall hemorrhage detection accuracy; subtype-specific accuracy ranges from 93.8% (chronic subdural) to 99.1% (subarachnoid). Clinically, this translates to 1–2 missed findings per 100 cases—performance broadly equivalent to experienced radiologists.
Does Fractify work for pediatric brain hemorrhage cases?
Fractify's accuracy drops 2–3 percentage points in pediatric cases, especially for complex non-accidental trauma bleeds. The model was trained primarily on adults and is most reliable in that population. For pediatric centers, Fractify functions as a helpful second reader, not a primary decision-maker. We're transparent about this limitation.
How does Fractify's urgency scoring help in emergency departments?
Fractify assigns a 5-level urgency score (1=critical/immediate surgery, 5=incidental/routine) based on hemorrhage type, volume, mass effect, and complications. This automated triage ensures critical cases are prioritized first, reducing diagnostic delay by 20–30 minutes per case. It eliminates manual ranking of which CTs to read first, freeing radiologists to focus on complex cases.
See Fractify working on your own scans — live demo takes 15 minutes.
Request a Free Demo →