A quick guide to what the AI is good at—and what it isn’t.
The Manuscript AI Chat is designed to help you work more efficiently during early editorial assessment. To get the most out of it, use this guide to understand the kinds of questions it answers well and where it may produce unreliable or misleading results.
✅ What the AI Chat Is Good At
1. Explaining terminology or technical concepts
The chat can clarify unfamiliar methods, acronyms, or domain terms drawn directly from the manuscript.
Ask:
- “What does this term mean in context?”
- “Explain this equation in simple language.”
2. Pointing to where content appears in the manuscript
You can ask the AI to locate claims, methods, or details within the PDF.
Ask:
- “Where does the manuscript describe this technique?”
- “Which section discusses this assumption?”
3. Providing high-level summaries
The general and advanced summaries can give a quick scaffold of the paper’s structure.
Ask:
- “Summarize the main contributions.”
- “Give me a high-level overview of the results.”
4. Checking local internal consistency
The AI can help identify contradictions, repeated information, and inconsistencies within the manuscript.
Ask:
- “Does this conclusion contradict earlier statements?”
- “Are variables defined consistently?”
❌ What the AI Chat Is Not Good At
1. Finding missing citations or doing literature search
The AI may hallucinate nonexistent papers or DOIs.
Avoid asking:
- “What important papers are missing?”
- “What related work should be added?”
2. Judging novelty or scientific correctness
The AI cannot reliably evaluate whether the work is new, significant, or methodologically sound.
Avoid asking:
- “Is this work novel?”
- “Are the results correct?”
3. Suggesting reviewers
In pilot testing, reviewer suggestions were often inaccurate, including nonexistent people or the manuscript’s own authors.
Avoid asking:
- “Who should review this paper?”
- “Give me emails of possible reviewers.”
4. Assessing journal scope or fit
Scope matching requires context beyond the manuscript.
Avoid asking:
- “Is this suitable for our journal?”
5. Detecting fraud, paper mills, or manipulation
Fraud detection requires specialized tools and human judgment.
Avoid asking:
- “Does this contain fabricated data?”
- “Is this image manipulated?”
6. Producing field-accurate method classification
The AI may misidentify theoretical frameworks or trivial steps as “methods.”
Avoid asking:
- “List the methods used in this paper” (if accuracy is critical).
⭐ Best Practices
- Keep questions grounded in the manuscript itself.
The AI performs best when answering questions anchored directly to the PDF.
- Be specific.
Instead of “Explain the methods,” try: “Explain how PCA is used in Section 3.”
- Use the AI as a helper, not a decision-maker.
Editors report that the chat is most valuable as a support tool, not as a source of formal assessment.