The Complete Guide to Slima MCP: Let AI Read Your Book Directly
Most writers assume AI needs more intelligence to be useful for novels. Better models, bigger context windows, fancier reasoning. But the actual bottleneck has nothing to do with intelligence. It is plumbing.
Think about what happens in a typical session. Open the character profile in Slima -- select all, copy. Switch to Claude's chat window -- paste. Open chapter eight -- select all, copy. Switch back -- paste. Type a single sentence: "Compare these two for contradictions." The AI responds, but it missed the foreshadowing in chapter three because nobody fed it chapter three. It has no idea the worldbuilding document even exists. So back to Slima. Another file. Another copy. Another paste.
An afternoon vanishes. Not to writing -- to moving text between windows.
The fix is not a smarter AI. The fix is a pipe. Slima MCP -- three minutes of setup, and AI tools can reach directly into Slima to read, search, and edit manuscripts on their own. No clipboard required.
What Is MCP? The Short Version
MCP stands for Model Context Protocol, an open standard from Anthropic. Sounds technical. The concept fits in one sentence: it lets AI tools exchange data directly with external services.
Without MCP, the AI is blind. It sees only what lands in the chat box -- whatever fragments get manually pasted in. It does not know character profiles exist. It cannot check earlier chapters. It cannot reference worldbuilding notes. The writer becomes a courier, hauling text back and forth.
With MCP, a single request does the job. "Check whether the protagonist's actions in chapter eight contradict his character profile." The AI retrieves both files on its own, compares them, reports back. No copying. No pasting. No lost context.
If that needs a metaphor -- MCP is USB. Before USB, every manufacturer had a proprietary connector. USB gave everyone a common interface: plug any device into any computer, it works. Same principle. Claude Desktop, Cursor, Gemini CLI, VS Code -- any tool that supports MCP can connect to Slima and work with manuscripts directly.
Slima MCP is an npm package (slima-mcp). Install it, configure it, and AI tools gain real-time access to books on Slima. Reading, editing, searching -- all live, all instant. No exports, no imports.
Installation and Authentication
Two prerequisites, nothing more: Node.js on the machine and a Slima account.
Step 1: Authenticate
Open a terminal. One command:
npx slima-mcp auth
A browser window opens, walks through Slima login. Done.
The authentication token gets stored in the operating system's native credential manager -- Keychain on macOS, Credential Manager on Windows, libsecret on Linux. Not a plain text file tucked in some folder. OS-level security -- the same mechanism guarding saved passwords and encryption keys. Even if someone combed through every directory on the hard drive, there would be no token file to steal.
Tokens refresh automatically. Re-authentication is almost never needed. To check status:
npx slima-mcp auth --status
To log out:
npx slima-mcp auth --logout
That covers authentication. Next step.
Platform Configuration
After authenticating, the AI tool needs to know where to find Slima MCP. Config files differ slightly across platforms, but the core is identical: tell the tool to run slima-mcp via npx.
Claude Desktop
Locate claude_desktop_config.json. On macOS it lives at ~/Library/Application Support/Claude/, on Windows at %APPDATA%\Claude\. Add:
{
  "mcpServers": {
    "slima": {
      "command": "npx",
      "args": ["slima-mcp"]
    }
  }
}
Restart Claude Desktop. A tool icon appears beside the chat input -- Slima MCP is connected.
Cursor
Create .cursor/mcp.json in the project root:
{
  "mcpServers": {
    "slima": {
      "command": "npx",
      "args": ["slima-mcp"]
    }
  }
}
Cursor detects this automatically and launches Slima MCP on demand.
Gemini CLI
Edit ~/.gemini/settings.json:
{
  "mcpServers": {
    "slima": {
      "command": "npx",
      "args": ["slima-mcp"]
    }
  }
}
VS Code
Create .vscode/mcp.json in the project:
{
  "servers": {
    "slima": {
      "command": "npx",
      "args": ["slima-mcp"]
    }
  }
}
Watch out -- VS Code uses servers, not mcpServers. This trips up more people than any other configuration detail. If the connection is not working, check that key first.
Once configured, any of these tools can interact with Slima books directly. No extra steps.
Fifteen Tools in Three Categories
Slima MCP ships fifteen tools across three groups: book management, file operations, AI Beta Reader. Rather than a dry feature list, here is what each one sounds like in actual conversation with an AI.
Book Management
create_book -- Spin up a new book. Fresh idea? "Create a new book in Slima called The Last Bookshop." The AI builds it, then can immediately scaffold folders for characters, worldbuilding, and outline.
list_books -- See everything. "What books do I have right now?" One sentence, full inventory, pick which project to work on.
get_book -- Pull up a book's details. Title, author, description, creation date -- the basics at a glance.
get_book_structure -- Retrieve the complete file tree. This is the tool that gives AI a bird's-eye view. "Show me the full structure of this novel." The AI sees every chapter, every character file, every folder of notes. With that map in hand, every subsequent request lands more precisely.
get_writing_stats -- Writing statistics. "How's my progress looking?" The AI breaks down total word count, per-chapter counts, update frequency. Numbers do not lie about momentum.
File Operations
read_file / get_chapter -- The workhorse tools. "Read chapter five." "Open the protagonist's profile." "Pull up the magic system notes." Once the AI reads actual text, its feedback is grounded in what exists on the page -- not generic advice floating in a vacuum.
write_file -- Overwrite an entire file. For major rewrites. "Replace chapter three with the version we just worked out." The whole file gets swapped.
edit_file -- Surgical search-and-replace. More precise than write_file. "Change every instance of Xiaoming to Zixuan in chapter one." Exact substitution, nothing else touched.
create_file -- Build new files. "Create a villain character card in the characters folder." The AI creates the file and populates it with initial content.
delete_file -- Remove what is no longer needed. Old brainstorming notes, deprecated outlines, temporary drafts -- clean up without leaving the conversation.
append_to_file -- Add to the end of an existing file without overwriting. Perfect for writing journals and session logs. "Add today's progress notes to the end of my writing journal."
search_content -- Full-text search across every file in a book. The killer feature. "Search the entire novel for every mention of 'blue eyes.'" Or: "Find all paragraphs where Professor Chen appears." It combs through everything and returns all matches. For maintaining character consistency and plot continuity across a long manuscript -- this tool is priceless.
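Under the hood, every one of these requests becomes an MCP tool call: the client sends a JSON-RPC 2.0 message naming the tool and its arguments, and the server replies with the result. A sketch of what a search_content call might look like on the wire -- the argument names (book_id, query) are illustrative assumptions, not Slima's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_content",
    "arguments": {
      "book_id": "example-book",
      "query": "blue eyes"
    }
  }
}
```

The writer never sees this layer. The AI client builds the message from a plain-language request and unpacks the response into conversation.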
AI Beta Reader
list_personas -- Browse available virtual reader personas. Slima's AI Beta Readers are not generic feedback machines. Different personas carry different specializations -- story structure, character development, pacing analysis, market viability. "Which Beta Reader personas are available?" Check the options, then pick the one that fits the current need.
analyze_chapter -- Send a chapter to a specific persona for in-depth analysis. "Use the structure analyst persona to review chapter three." The persona reads the chapter and delivers professional-grade feedback: pacing, tension curves, character arcs, potential weak spots. Like having a beta reader on call -- except results come in minutes, not weeks.
Four Workflows That Actually Matter
Knowing the tools is one thing. Knowing when to combine them -- that is what rewires a writing process.
Starting a Writing Day
Sit down. Open the AI tool. First message:
"Show me the book structure, then read the last section of whatever chapter I worked on yesterday."
The AI fires get_book_structure to map the project, then read_file to pull up the latest chapter. It might say: "Yesterday ended mid-scene -- the protagonist just entered the ruins and found the first clue. According to the outline, the next beat is the encounter with the Guardian. Ready?"
No file hunting. No re-reading the outline. No trying to remember where things left off. Context is already loaded. Writing begins immediately.
Checking Character Consistency
Chapter fifteen. A nagging question: what exactly did that secondary character say back in chapter three? In a long novel, this happens constantly.
"Search for every passage where the old butler appears across the entire book."
The AI runs search_content, gathers every mention, organizes the butler's words and actions chapter by chapter. Maybe it surfaces a contradiction -- he swore in chapter three that he never lies, but in chapter ten he hints at concealing something. Deliberate foreshadowing? Accidental slip? Either way, the evidence is laid out in seconds. No manual scrolling through dozens of pages.
Running a Revision Pass
First draft done. Time for revision. Chapter by chapter, feed each one to an AI Beta Reader:
"Analyze chapter one with the pacing analyst persona."
analyze_chapter runs, returns feedback -- where the rhythm drags, where transitions feel abrupt, where a reader might disengage. Review the notes, decide what to change, then:
"The scene description in the third paragraph of chapter one runs too long. Trim it -- keep the core imagery but cut the length roughly in half."
The AI uses edit_file to modify that specific passage. Nothing else in the chapter changes. No full rewrite. No hunting for the paragraph manually.
Consolidating Worldbuilding
A novel with an elaborate magic system. Rules scattered across chapters, side notes, character profiles. Even the author cannot remember which details live where.
"Search the entire book for all passages describing magic system rules, then consolidate them into a single reference document in the worldbuilding folder."
The AI chains search_content to locate fragments, read_file to pull full context, then create_file to produce a clean, organized reference. Scattered pieces from twenty files become one authoritative guide. This kind of multi-file orchestration is where MCP shines brightest -- the AI does not just answer questions. It executes entire workflows.
Advanced Tips
Environment Variables
For troubleshooting, add environment variables to the config to get detailed logs from Slima MCP:
{
  "mcpServers": {
    "slima": {
      "command": "npx",
      "args": ["slima-mcp"],
      "env": {
        "SLIMA_LOG_LEVEL": "debug"
      }
    }
  }
}
Setting the level to debug outputs detailed information for every API call. Once the problem is resolved, remove the variable to keep the logs clean.
Remote MCP
By default, Slima MCP uses stdio transport, running locally on the machine. But it also supports remote HTTP transport: one server running MCP, multiple devices connecting to it. Desktop, laptop, tablet -- remote mode means no Node.js installation on every machine.
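As a rough sketch, a remote connection usually swaps the command/args pair for a URL. Exact key names differ between clients (some require an additional type or transport field), and the endpoint below is a placeholder, not Slima's actual server address -- check the client's MCP documentation for the precise shape:

```json
{
  "mcpServers": {
    "slima": {
      "url": "https://example.com/mcp"
    }
  }
}
```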
Matching AI Platforms to Tasks
Multiple platforms, all connected to the same Slima account, same books. Pick based on the task:
- Claude -- Long-context understanding at its strongest. Analyzing full chapters, checking character consistency, deep discussions about plot logic -- this is where it excels.
- Cursor -- Built for precise batch operations. Search-and-replace, formatting adjustments, large-scale text edits. Maximum efficiency.
- Gemini -- Multilingual capabilities stand out. Translation, cross-language comparison, information consolidation -- the natural fit.
Same account, same books, switch anytime.
Frequently Asked Questions
Q: Does MCP let anyone else see my content?
No. Slima MCP accesses only the authenticated account's books. Token permissions are scoped to a single account. The AI reads or modifies content only when instructed -- it will not browse the library on its own, and no data reaches other users.
Q: Can I restrict AI to read-only access?
Yes. MCP tools split into read operations (read_file, search_content) and write operations (write_file, edit_file). AI platforms typically confirm before executing writes. Starting a conversation with "read only today -- no edits" is enough. The AI sticks to read-only tools for the session.
Q: The AI tool is not responding after setup. What now?
Most common cause: malformed JSON in the config file. One extra comma, one missing brace, one full-width quotation mark -- any of these cause silent failure. Validate the JSON first. Then run npx slima-mcp auth --status to confirm authentication. Those two steps catch the vast majority of issues.
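For the validation step, no extra tooling is needed: python3 ships a JSON linter in its standard library. A minimal sketch -- point CONFIG at the client's actual config file; this example writes a temporary copy so it runs anywhere:

```shell
# Quick JSON sanity check before restarting the AI tool.
# Replace CONFIG with the real path, e.g.
# "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
CONFIG="${TMPDIR:-/tmp}/claude_desktop_config.json"
printf '{"mcpServers": {"slima": {"command": "npx", "args": ["slima-mcp"]}}}' > "$CONFIG"

# json.tool exits nonzero on malformed input -- stray commas,
# missing braces, and full-width quotes all get caught here.
if python3 -m json.tool "$CONFIG" > /dev/null 2>&1; then
  echo "JSON is valid"
else
  echo "JSON is malformed -- check for stray commas, braces, or smart quotes"
fi
```

A file that fails this check will also fail silently inside the AI tool, so it is the fastest first test.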
Q: Does the authentication token expire?
Auto-refreshes in the background. Under normal use, expiration is virtually nonexistent. If it does happen, re-run npx slima-mcp auth -- thirty seconds and it is resolved.
Three Minutes. Permanent Shift.
Here is what this article covered.
MCP is a standardized protocol that lets AI read and write external service data directly -- no more manual copy-paste. Setup: npx slima-mcp auth to authenticate, a few lines of JSON in the AI tool's config. Three minutes. Fifteen tools span book management, file operations, and AI Beta Reader -- from reading chapters to searching entire manuscripts to having virtual readers analyze the work. All through natural language.
One sentence for what MCP actually changes: it turns AI from a tool that waits to be fed data into a partner that can look things up on its own.
Time should go toward writing the story. Not toward shuttling text between windows.
Open a terminal. Run npx slima-mcp auth. Go back to the AI tool and say: "Show me where I left off yesterday."
From here on, the focus stays on the writing.