AI Readers

Scoring criteria

3 min read

When you see "Overall 7.3," you'll naturally wonder: how is 7.3 different from 8.1? What would an 8 look like? This article explains how the scores work.

Slima AI Editor's 5-dimension scoring view: each dimension shows a 0–10 score plus a brief summary

What 0–10 means

Range   Meaning
9–10    Rare; close to submission-ready
7–9     Strong, with a few patches
5–7     Mid-range; several areas to fix
3–5     Structural issues; major revision needed
0–3     Very early draft

Most first drafts land in the 5–7 range; that's normal, not failure.
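As a rough illustration, the band lookup can be sketched in a few lines. The function name and boundary handling (a boundary score falls into the higher band) are assumptions for this sketch, not part of Slima's product:

```python
def score_band(score: float) -> str:
    """Map a 0-10 overall score to the rough band from the table above.

    Hypothetical helper for illustration only; a boundary score such as
    7.0 is placed in the higher band.
    """
    if score >= 9:
        return "Rare; close to submission-ready"
    if score >= 7:
        return "Strong, with a few patches"
    if score >= 5:
        return "Mid-range; several areas to fix"
    if score >= 3:
        return "Structural issues; major revision needed"
    return "Very early draft"

print(score_band(7.3))
```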

How scores are computed

Each dimension has 4–6 sub-criteria. The AI scores each sub-criterion 0–10, and a weighted average yields the dimension score.

Example: "Structure" (5 sub-criteria)

  • Act break clarity (25%)
  • Turning point impact (25%)
  • Chapter ordering coherence (15%)
  • Opening setup (20%)
  • Ending completeness (15%)

Each sub has its own rubric.
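The arithmetic behind a dimension score can be sketched as follows. The weights mirror the "Structure" example above; the individual scores are invented for illustration, and this is not Slima's actual code:

```python
# Sub-criterion scores (0-10) and weights for the "Structure" dimension.
# Weights mirror the example above; the scores themselves are made up.
subs = {
    "Act break clarity":          (7.0, 0.25),
    "Turning point impact":       (6.0, 0.25),
    "Chapter ordering coherence": (8.0, 0.15),
    "Opening setup":              (7.5, 0.20),
    "Ending completeness":        (8.0, 0.15),
}

# Weighted average: sum(score * weight) / sum(weights).
# The weights here sum to 1.0, so the division is a safety net.
total_weight = sum(w for _, w in subs.values())
dimension_score = sum(s * w for s, w in subs.values()) / total_weight

print(f"Structure: {dimension_score:.1f}")
```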

Full rubrics are in the report's "Detailed scoring criteria" expandable section.

Why scores are directional, not absolute

A few reasons:

1 · LLM nature

Different model versions or temperature settings can shift the same book's score by around 0.5 points.

2 · Persona choice

A strict persona and a generous persona will give the same book different scores.

3 · Your category

A "7" in literary fiction and a "7" in popular fiction don't mean the same thing.

What's meaningful is comparing runs of the same book

Revising and rerunning is more useful than any single absolute score:

  • Run 1: 6.2
  • Run 2 (after revision): 7.5
  • → your revisions are working

See: Reading history & revisit

Alignment with external reviews

If you also have scores from a human beta reader, you'll usually find they align directionally with Slima's, though not always on the same number.

E.g. a human reader gives an 8 and Slima gives a 7.5: the matching direction is the signal; the absolute numbers aren't.
