Slima MCP

Script Studio tool-specific limits


Two MCP tools have Script-Studio-specific limits.


search_content: structured files excluded by default

Calling search_content on a Script Studio book searches only plain markdown files by default; structured files such as .scene and .character are excluded.

Why

Structured files are JSON — text hits usually land on internal schema field names rather than real content. For example, searching "name" would match the name field in every *.character file.
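The default exclusion can be pictured as an extension filter applied before matching — a minimal TypeScript sketch with an assumed BookFile shape and extension list, not the actual server logic:

```typescript
// Assumed shape for illustration; the real file model may differ.
type BookFile = { path: string; text: string };

// Assumed list of structured extensions.
const STRUCTURED_EXTS = [".scene", ".character"];

function isStructured(path: string): boolean {
  return STRUCTURED_EXTS.some((ext) => path.endsWith(ext));
}

// Default search skips structured files so JSON schema keys
// (e.g. the "name" field in every .character file) don't flood results.
function searchContent(
  files: BookFile[],
  query: string,
  includeStructured = false
): string[] {
  return files
    .filter((f) => includeStructured || !isStructured(f.path))
    .filter((f) => f.text.includes(query))
    .map((f) => f.path);
}
```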

Workaround

To include structured files:

```typescript
search_content({
  book_token: "...",
  query: "lighthouse",
  include_structured: true
})
```

With include_structured: true, the search also scans the text properties inside structured files.

analyze_chapter: doesn't accept .scene

analyze_chapter on a .scene file → 400 UNSUPPORTED_FILE_TYPE.

Why

analyze_chapter is built for narrative prose (novel chapters). A scene JSON file isn't a chapter — pacing, prose, and dialogue-beat concepts don't map onto it.

Workarounds

Two:

1 · Use episode-level analysis

.scene is a scene; multiple scenes form an episode. Episode-level analysis is a separate tool (on the roadmap as analyze_episode).

2 · Convert scene to prose

If you want "emotional pacing of this scene":

  1. Ask an AI to rewrite the scene JSON as prose markdown
  2. Run analyze_chapter on the prose

Two steps, but achieves the goal.
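Step 1 can be sketched deterministically — a hypothetical sceneToProse helper that flattens an assumed scene shape (not the real .scene schema) into prose markdown, which step 2 would then feed to analyze_chapter:

```typescript
// Assumed scene shape for illustration only; the real .scene
// schema is not documented here.
type Scene = {
  heading: string;
  action: string;
  dialogue: { speaker: string; line: string }[];
};

// Deterministic stand-in for the AI rewrite: flatten the scene's
// fields into a markdown passage that analyze_chapter can accept.
function sceneToProse(scene: Scene): string {
  const dialogue = scene.dialogue
    .map((d) => `"${d.line}" said ${d.speaker}.`)
    .join(" ");
  return `## ${scene.heading}\n\n${scene.action} ${dialogue}\n`;
}
```

An AI rewrite will produce richer prose than this flattening, but either way the result is plain markdown, which is the only thing analyze_chapter needs.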

Token limits per file

Script files can be long (a storyline can run tens of thousands of words) — read_file has per-file caps:

  • Plain markdown: ~100K tokens
  • Structured files: ~50K tokens (JSON is more token-heavy)

Above cap → 416 FILE_TOO_LARGE + chunking suggestion.
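A rough client-side pre-flight check before calling read_file, assuming the common ~4-characters-per-token heuristic (the server's real tokenizer will count differently):

```typescript
// Per-file caps from the docs: ~100K tokens for markdown,
// ~50K for structured files.
const CAPS = { markdown: 100_000, structured: 50_000 };

// Crude estimate: ~4 characters per token (an assumption, not
// the server's actual tokenizer).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Returns false when a read would likely hit 416 FILE_TOO_LARGE.
function fitsCap(text: string, kind: "markdown" | "structured"): boolean {
  return estimateTokens(text) <= CAPS[kind];
}
```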

