Domains

Model compression

Works

DenseNet121 chest X-ray classifier compressed by 51.43% (6.97M → 3.38M parameters) at constant accuracy (AUROC +0.0003). Compression makes channel attribution atomic enough for surgical correction: zeroing the weights of 5 channels reduces a target false positive by ∆prob −0.13 with zero true-positive loss and exactly zero AUROC change on the other 13 pathologies. Polarized channels are reframed as bipolar discriminative axes that exploit label mutual exclusivity (Jaccard < 0.1 in 89 of 100 polarized channels, zero architectural conflicts).
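A minimal NumPy sketch of the weight-zeroing edit, under assumed shapes: the conv layout, the 16-channel toy layer, and the 5 channel indices are all illustrative, and the attribution step that selects those channels is not reproduced here.

```python
import numpy as np

def zero_channels(conv_weight, conv_bias, channels):
    """Zero the weights and bias of selected output channels in a conv layer.

    conv_weight: (out_channels, in_channels, kH, kW); conv_bias: (out_channels,).
    Returns edited copies; the originals are left untouched.
    """
    w = conv_weight.copy()
    b = conv_bias.copy()
    w[channels] = 0.0
    b[channels] = 0.0
    return w, b

# Toy layer: 16 output channels; zero the 5 channels attributed to the false positive.
rng = np.random.default_rng(0)
weight = rng.standard_normal((16, 8, 3, 3))
bias = rng.standard_normal(16)
target = [2, 5, 7, 11, 13]  # hypothetical channel indices from attribution

w2, b2 = zero_channels(weight, bias, target)
assert np.all(w2[target] == 0) and np.all(b2[target] == 0)
kept = [c for c in range(16) if c not in target]
assert np.allclose(w2[kept], weight[kept])  # all other channels are untouched
```

The point of the compression result above is that after pruning, each remaining channel carries enough distinct signal that an edit this blunt stays surgical.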

Layer-level analysis of BERT on five GLUE tasks, applying a forward-primary learning framework. Three findings: (1) separability-guided layer skip with a compensation classifier achieves lossless compression on 3 of 5 GLUE tasks; (2) the FFN's role decomposes into 92% structural (norm normalization) and 8% classification, explaining why FFN removal hurts even when individual layers look classification-harmful; (3) 60–93% of misclassifications are high-confidence errors — the BERT CLS vector itself is the fundamental limitation.
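Finding (3) reduces to a simple measurement. A NumPy sketch, with an assumed 0.9 confidence threshold and toy logits (the paper's actual threshold and model outputs are not reproduced here):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # stabilized softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def high_confidence_error_rate(logits, labels, threshold=0.9):
    """Among misclassified examples, the fraction where the model assigned
    >= threshold probability to its (wrong) prediction."""
    probs = softmax(logits)
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    wrong = preds != labels
    if not wrong.any():
        return 0.0
    return float((conf[wrong] >= threshold).mean())

# Toy check: two confident errors, one uncertain error, one correct prediction.
logits = np.array([[8.0, 0.0],   # confident, wrong (label 1)
                   [7.0, 0.5],   # confident, wrong (label 1)
                   [0.6, 0.4],   # uncertain, wrong (label 1)
                   [5.0, 0.0]])  # confident, correct (label 0)
labels = np.array([1, 1, 1, 0])
print(high_confidence_error_rate(logits, labels))  # → 0.666... (2 of 3 errors)
```

A high value of this statistic is what supports the representational-bottleneck reading: the errors are not borderline calls that a better decision head could flip, but cases where the CLS vector confidently encodes the wrong answer.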