Why Collapse Tests Are Insufficient for Quantization Quality Assessment

February 2026 · Black Sheep AI Research

Three different quantization variants all scored 15/15 on automated quality tests. One couldn't translate a sentence into Spanish.

Introduction

When you quantize a large language model, the first thing you check is: "did it collapse?" Model collapse, where the model produces garbage, repetitive text, or empty responses, is the most dramatic failure mode of quantization. It's easy to detect and obviously unacceptable.

The problem is that collapse detection has become the de facto quality bar for quantization. If the model doesn't collapse, it ships. We fell into this trap, and it took manual inspection of individual responses to discover that one of our quantized models was producing factual hallucinations and language contamination, failures that are arguably worse than collapse because they're harder to detect.

The Standard Collapse Test

Our collapse test suite (representative of what's commonly used in the community) sends 15 diverse prompts to the model and checks each response for:

- Empty output
- Repetitive text loops
- Length below a minimum threshold
- Missing expected keywords (where configured)

The 15 prompts span coding, math, reasoning, creative writing, knowledge, multilingual, and general conversation.
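To make the checks above concrete, here is a minimal sketch of the kind of per-response gate such a suite applies. The function name, thresholds, and repetition heuristic are illustrative assumptions, not our actual harness:

```python
def check_collapse(response: str, min_length: int = 20,
                   max_repeat_ratio: float = 0.5) -> str:
    """Return 'fail', 'warn', or 'pass' for a single model response.

    Illustrative thresholds; a real harness would tune these per prompt.
    """
    text = response.strip()
    if not text:
        return "fail"  # empty response
    if len(text) < min_length:
        return "fail"  # extremely short response
    # Crude repetition check: how often does the most common
    # 4-word window recur relative to all windows?
    words = text.split()
    windows = [" ".join(words[i:i + 4]) for i in range(max(1, len(words) - 3))]
    most_common = max(windows.count(w) for w in set(windows))
    if most_common / len(windows) > max_repeat_ratio:
        return "fail"  # repetitive text loop
    if len(text) < 2 * min_length:
        return "warn"  # suspiciously short, but not a hard failure
    return "pass"
```

Note what is absent: nothing here inspects *what* the response says, which is exactly the gap the rest of this article is about.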

The Results That Gave Us False Confidence

We tested three quantized variants of a large MoE model:

Variant                                     Pass  Warn  Fail  Avg Time  Avg Length
Quantized variant A (custom kernels)          15     0     0     47.3s  1,111 chars
Quantized variant B (standard 4-bit)          15     1     0      8.2s  1,139 chars
Quantized variant C (with expert pruning)     15     0     0      7.7s    973 chars

All three pass. Variant C, with experts pruned, even has fewer warnings than the unpruned variant B. By every automated metric, it's the best.

What Manual Inspection Revealed

When we actually read the responses, the story changed completely.

Test 9: Spanish Translation

Prompt: "Translate the following to Spanish: 'The weather is beautiful today and I plan to go hiking in the mountains.'"

Unpruned baseline:

El clima está hermoso hoy y planeo ir de caminata a las montañas.

Correct, natural Spanish. ✓

Pruned quantized variant:

El clima está hermoso hoy y我 plane to go hiking in the mountains.

A Chinese character (我, meaning "I") appears mid-sentence. The second half isn't even translated. ✗

Why the test passed: The collapse test for this prompt only checks minimum response length with no content validation. Any response longer than a few characters passes, regardless of content.

Test 14: Functional Programming Concepts

Prompt: "Explain what a monad is in functional programming, with a practical example in Haskell or Python."

Unpruned baseline:

This is a great question about functional programming concepts. Let me explain monads clearly with practical examples. A monad is a design pattern in functional programming that allows for sequencing computations...

Correct explanation with examples. ✓

Pruned quantized variant:

Actually, there no such term as "monad" in functional programming. You may be confusing it with one of the following similar concepts...

The model confidently denies the existence of one of the most fundamental concepts in FP. ✗

Why the test passed: No keyword verification was configured for this prompt. Any non-empty, non-repetitive response of sufficient length passes.

Test 5: River Crossing Puzzle

Prompt: "A farmer needs to cross a river with a fox, a chicken, and a bag of corn..."

Unpruned baseline:

1,119 characters with complete step-by-step solution

Pruned quantized variant:

206 characters, mostly restates the problem without solving it

Why the test passed: The keyword check looks for "chicken," which appears in the restated problem. The response exceeds the minimum length threshold. Neither check verifies that the puzzle was actually solved.

The Taxonomy of Undetected Failures

These failures fall into categories that standard collapse tests don't address:

1. Semantic Correctness

The model produces fluent, well-formatted text that is factually wrong. "There's no such term as 'monad'" is grammatically perfect and stylistically appropriate; it's just completely false.

Detection requires: Domain-specific fact-checking, or at minimum, keyword checks for expected concepts (e.g., checking that the response actually references "monad" would have caught this).
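A minimal version of that keyword check might look like the sketch below. The function name and denial-marker list are assumptions for illustration; the point is that even this crude check would have flagged the monad denial:

```python
def concept_acknowledged(response: str, concept: str) -> bool:
    """Check that the response engages with the concept rather than
    omitting it or confidently denying its existence."""
    text = response.lower()
    if concept.lower() not in text:
        return False  # concept never mentioned at all
    # Flag confident denials such as "there is no such term as 'monad'".
    denial_markers = ["no such term", "no such thing",
                      "does not exist", "doesn't exist"]
    return not any(marker in text for marker in denial_markers)
```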

2. Language Contamination

The model mixes languages or scripts inappropriately. Chinese characters in a Spanish translation are obvious to a human reader but invisible to a length/repetition checker.

Detection requires: Script detection (checking that the response uses the expected character set) or reference-based translation quality metrics.
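A workable script check needs nothing beyond the standard library. The sketch below uses Unicode character names as a proxy for script membership (the stdlib exposes no true script property, so this is a heuristic); the function name and "LATIN" default are illustrative:

```python
import unicodedata


def foreign_script_chars(text: str, expected: str = "LATIN") -> list[str]:
    """Return alphabetic characters whose Unicode name does not mention
    the expected script. Heuristic: names like 'LATIN SMALL LETTER A'
    vs 'CJK UNIFIED IDEOGRAPH-6211' make contamination easy to spot."""
    suspicious = []
    for ch in text:
        if not ch.isalpha():
            continue  # punctuation, digits, whitespace are script-neutral
        name = unicodedata.name(ch, "")
        if expected not in name:
            suspicious.append(ch)
    return suspicious
```

Run against the contaminated Spanish output above, this flags 我 immediately, something no length or repetition check can do.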

3. Task Abandonment

The model partially addresses the prompt but doesn't complete the task. Restating a puzzle without solving it is a sophisticated form of failure that length thresholds can't catch.

Detection requires: Task-specific completion checks (e.g., for a puzzle, verify the response contains a sequence of steps; for code, verify it compiles/runs).
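For the puzzle case, a rough completion heuristic can check that the response contains an enumerated sequence of steps rather than a restatement. This is a sketch under that assumption; the pattern and threshold are illustrative and would need per-task tuning:

```python
import re


def looks_like_solution(response: str, min_steps: int = 3) -> bool:
    """Heuristic: a puzzle solution should contain an enumerated step
    sequence ('1.', '2)', 'Step 3'), not just restate the problem."""
    steps = re.findall(r"^\s*(?:step\s*\d+|\d+[.)])", response,
                       re.IGNORECASE | re.MULTILINE)
    return len(steps) >= min_steps
```

The baseline's step-by-step answer passes; a 206-character restatement of the farmer, fox, and chicken does not.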

4. Quality Degradation

Responses are shorter, less detailed, or less nuanced. The pruned quantized variant averaged 973 characters vs the baseline's 1,139 characters, a 15% reduction. Each individual response passes the minimum length check, but the aggregate tells a story.

Detection requires: Statistical comparison of response distributions across model variants.
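A simple distribution-level check, sketched below with an assumed 10% tolerance, catches exactly the regression described above even though every individual response clears the absolute minimum:

```python
import statistics


def length_regression(baseline: list[str], candidate: list[str],
                      max_drop: float = 0.10) -> bool:
    """Flag a variant whose mean response length dropped more than
    max_drop relative to the baseline, even when every individual
    response passes the absolute length check."""
    base_mean = statistics.mean(len(r) for r in baseline)
    cand_mean = statistics.mean(len(r) for r in candidate)
    return (base_mean - cand_mean) / base_mean > max_drop
```

With the numbers from our table (1,139 vs 973 characters, a ~15% drop), this check fires while each per-response check stays green.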

Why This Matters for Quantization Research

The MoE quantization literature relies heavily on perplexity and downstream task accuracy to evaluate quality. But most papers use aggregate metrics, a single accuracy number across hundreds of test examples. Aggregate metrics can hide individual catastrophic failures.

Consider: if a model answers 98% of questions correctly but confidently denies the existence of monads, produces Chinese in Spanish text, and can't solve logic puzzles, it would score ~97% on a general benchmark. That 97% looks fine in a paper. It's not fine in production.

The Iceberg Problem


What collapse tests catch:
▓▓▓▓▓▓▓▓ Complete model collapse (garbage output)
▓▓▓▓▓▓▓  Repetitive text loops
▓▓▓▓▓▓   Empty responses
▓▓▓▓▓    Extremely short responses

What collapse tests miss:
░░░░░░░░ Factual hallucinations
░░░░░░░  Language contamination
░░░░░░   Task abandonment
░░░░░    Knowledge domain gaps
░░░░     Reasoning quality degradation
░░░      Nuance and detail reduction
░░       Style and tone changes
░        Subtle instruction following failures

The visible tip of the iceberg (what tests catch) is small compared to the submerged portion (what tests miss).

A Better Evaluation Protocol

Based on our experience, here's a minimum evaluation protocol for quantized models:

Level 1: Collapse Detection (automated, fast)

Standard collapse tests as a first pass. If the model collapses, nothing else matters.

Time: ~2 minutes for 15 prompts

Level 2: Functional Probes (automated, medium)

Task-specific correctness checks that go beyond keyword matching. For example: verifying that translations don't contain characters from unrelated scripts, checking that responses to factual questions acknowledge the concept being asked about rather than denying its existence, and confirming that puzzle-solving prompts produce step-by-step solutions rather than mere restatements of the problem.

Time: ~5 minutes for 20-30 probes

Level 3: Academic Benchmarks (automated, slow)

Standard benchmarks (MMLU-Pro, ARC-Challenge, GSM8K, HumanEval) with stratified sampling. Compare against the unpruned/unquantized baseline using identical evaluation protocol.

Critical: Run the same benchmark on both quantized and unquantized models. Absolute scores are less meaningful than the delta.

Time: 30-90 minutes depending on model speed and sample count

Level 4: Perplexity Evaluation (automated, slow)

Measure perplexity on diverse held-out text. Perplexity is more sensitive than downstream accuracy to quantization damage because it measures every token prediction, not just the final answer.
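The underlying arithmetic is small enough to state directly: perplexity is the exponential of the mean negative log-likelihood over every token. Given per-token log-probabilities from any inference stack, the computation is:

```python
import math


def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(mean negative log-likelihood). Because it
    aggregates every token prediction, not just the final answer,
    it surfaces quantization damage that accuracy metrics average away."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)
```

A model assigning probability 0.5 to every token scores a perplexity of exactly 2; small systematic drops in per-token probability move this number long before they flip a benchmark answer.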

Use text from multiple domains: code, technical prose, creative writing, multilingual text, and general conversation, rather than a single corpus.

Time: 30-60 minutes

Level 5: Manual Spot-Checks (human, slow)

Read 20-30 responses manually, focusing on factual correctness, language consistency, task completion, and level of detail relative to the baseline.

This is the most time-consuming but also the most sensitive evaluation. It's how we caught the Chinese-in-Spanish and monad-denial failures.

Time: 1-2 hours

Recommendations

- Treat collapse detection as a prerequisite, not a verdict: a 15/15 pass means only that the model produced something.
- Add functional probes with content-level checks (script detection, concept keywords, task-completion heuristics) for every prompt category.
- Always run quantized and baseline models under identical evaluation protocols and report the delta, not just absolute scores.
- Budget time for manual spot-checks; they remain the most sensitive detector of subtle failures.

Conclusion

Our experience with three quantization variants of a large MoE model demonstrates that standard collapse tests create a false sense of quality assurance. A model can:

- pass all 15 automated collapse tests with zero failures,
- confidently deny that monads exist,
- inject Chinese characters into a Spanish translation, and
- restate a logic puzzle instead of solving it

...all at the same time. The collapse tests check for the model's ability to produce something. They don't check whether that something is correct.

For the MoE quantization research community, we recommend treating collapse tests as the floor, not the ceiling, of quality evaluation. Invest in functional probes, perplexity measurement, and manual spot-checks. The quality issues that collapse tests miss are exactly the ones that matter most in production.


This is the final article in the MoE quantization series. For the full technical details, code, and evaluation data, see our research repository.

Read the Full Paper

The complete MoE expert quantization paper, including expert activation profiling, per-expert mixed-bit allocation, and evaluation across 512-expert architectures, is available on our HuggingFace:

MoE Expert Quantization: Per-Expert Mixed-Precision for Mixture-of-Experts Models, Full Paper

huggingface.co/spaces/baa-ai/MoE-Expert-Quantization

Licensed under CC BY-NC-ND 4.0

