Alchemist Review FAQ

General Use and Workflow

Q: How quickly are manuscripts processed and available in the dashboard?
A: Manuscripts appear in the Alchemist Review dashboard within approximately one hour after submission to the journal’s system (e.g., ScholarOne). The pipeline runs hourly, and all AI analyses (digest, citation, chat, etc.) are typically available within 24 hours.

Q: Can specific or problematic manuscripts be submitted for analysis outside the random sample?
A: Yes, though it’s handled manually. Hum can run special analyses (e.g., weekly) on flagged or problematic submissions not included in the automated feed. These are arranged case-by-case with the client lead.

Q: Does Alchemist Review integrate with our submission system?
A: The platform receives an automated feed from ScholarOne, EJP or other manuscript systems. It’s a standalone dashboard, meaning that while it syncs with submission data, editorial activity (flagging, notes, etc.) occurs outside of the Alchemist interface.
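
For illustration only, a record arriving over such a feed might be represented as below; every field name here is hypothetical and does not reflect the actual ScholarOne or EJP schema.

```python
# Hypothetical example of a synced submission record; the field names are
# illustrative and do not reflect the actual ScholarOne/EJP feed schema.
submission = {
    "manuscript_id": "MS-2024-0042",   # ID assigned by the submission system
    "source_system": "ScholarOne",     # originating manuscript system
    "pdf_url": "https://example.org/manuscripts/MS-2024-0042.pdf",
    "received_at": "2024-06-01T09:15:00Z",
    "status": "pending_analysis",      # analyses typically complete within 24 hours
}
```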

Q: What should we do if PDFs contain extra content (e.g., proof versions or attachments)?
A: Alchemist Review is built to handle “noisy” PDFs, including proof versions and attachments, but missing or altered content can affect results.

Dashboard and Features

Q: What does the “severity” color coding mean in Citation Evaluation?
A:

  • Red (Critical): Issues that require editorial attention (e.g., missing DOI, invalid reference, or known retraction). 
  • Yellow (Low): Potential issues, such as questionable relevance or weak source reliability. 
  • Gray (Neutral): Informational flags like self-citations or missing in-text citations; not inherently problematic but worth review. 
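
For a concrete view of how these levels map to editorial action, here is an illustrative sketch; the structure and helper function are hypothetical, not the dashboard’s internal representation.

```python
# Illustrative mapping of Citation Evaluation severity levels to the
# editorial response they suggest; the structure is hypothetical.
SEVERITY_LEVELS = {
    "red":    {"label": "Critical", "action": "requires editorial attention"},
    "yellow": {"label": "Low",      "action": "potential issue; review as needed"},
    "gray":   {"label": "Neutral",  "action": "informational; review at discretion"},
}

def needs_attention(severity: str) -> bool:
    """Only red (Critical) flags require editorial attention before review."""
    return severity == "red"
```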

Q: What checks are performed in Citation Evaluation?
A: Five background checks are currently run via Grounded Ai (a conceptual sketch follows the list):

  1. Source validity: resolves a reference against external sources and guards against the inclusion of fake, missing, or improperly cited works.
  2. Relevance to the manuscript: assesses topical alignment and guards against the inclusion of irrelevant or weakly related citations.
  3. Editorial notices: identifies editorial status and guards against the inadvertent inclusion of retracted references.
  4. Self-citation detection: detects whether the reference cites document authors and guards against excessive or undisclosed self-citation.
  5. In-text citation consistency: determines whether the full reference is cited inline and guards against the inclusion of unused or uncited references.
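
For intuition, the sketch below shows how checks like these might be applied to a single parsed reference; all function names, fields, and thresholds are hypothetical rather than Grounded Ai’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    check: str     # which background check raised the flag
    severity: str  # "red", "yellow", or "gray", per the severity levels above
    detail: str

def evaluate_reference(ref: dict, manuscript_authors: set) -> list:
    """Run the five background checks against a single parsed reference."""
    flags = []
    # 1. Source validity: the reference should resolve to a real work.
    if not ref.get("doi"):
        flags.append(Flag("source_validity", "red", "missing or unresolvable DOI"))
    # 2. Relevance: flag weak topical alignment (threshold is hypothetical).
    if ref.get("relevance_score", 1.0) < 0.3:
        flags.append(Flag("relevance", "yellow", "weak topical alignment"))
    # 3. Editorial notices: guard against retracted references.
    if ref.get("retracted"):
        flags.append(Flag("editorial_notices", "red", "known retraction"))
    # 4. Self-citation: the reference shares an author with the manuscript.
    if manuscript_authors & set(ref.get("authors", [])):
        flags.append(Flag("self_citation", "gray", "cites a manuscript author"))
    # 5. In-text consistency: the reference should be cited somewhere inline.
    if not ref.get("cited_in_text", True):
        flags.append(Flag("in_text_consistency", "gray", "never cited in text"))
    return flags
```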

Q: Does Alchemist Review detect retractions and other types of corrections?
A: Currently, the system detects formal retractions. Other types of corrections, such as errata and expressions of concern, are not yet detected.

Q: How is the AI Chat trained—does it only use the manuscript?
A: The AI Chat leverages Open Scholar, which means it references both the manuscript and external scholarly sources to answer queries and provide context. It’s not limited to text within the manuscript.

Q: What assets are being sent to the AI Chat?
A: The original manuscript and the manuscript digest sections.
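
As a conceptual sketch, that context could be assembled roughly as follows; the function and formatting below are hypothetical and do not reflect Open Scholar’s actual API.

```python
def build_chat_context(manuscript_text, digest_sections, retrieved_sources):
    """Assemble the material sent with a chat query.

    Per the FAQ, the original manuscript and the digest sections are sent;
    Open Scholar contributes external scholarly sources. This assembly is
    a hypothetical illustration, not the real API.
    """
    digest = "\n".join(f"## {name}\n{body}" for name, body in digest_sections.items())
    external = "\n".join(retrieved_sources)
    return (f"# Manuscript\n{manuscript_text}\n\n"
            f"# Digest\n{digest}\n\n"
            f"# External sources (via retrieval)\n{external}")
```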

Q: Can I ask the AI chat questions about the platform?
A: No. The AI chat is intended to answer questions about the manuscript, not the platform itself.

Q: Can I flag a manuscript as reviewed or add notes in the dashboard?
A: Not yet. Currently, review tracking and notes are handled in the publisher’s manuscript submission system. 

Q: What does the Author Track Record feature show?
A: It highlights authors with prior retractions or questionable records. Editors can use this as a risk indicator when screening manuscripts.

Q: How does the tool handle self-citations and author representation?
A: The Citation Evaluation tab includes an Author Representation section that calculates self-citation ratios and visualizes citation concentration by author and journal.
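
As a rough illustration, a self-citation ratio can be computed as the share of references that name at least one manuscript author; the formulation below is hypothetical, and the dashboard’s exact metric may differ.

```python
def self_citation_ratio(reference_authors, manuscript_authors):
    """Share of references naming at least one manuscript author.

    Hypothetical formulation; the dashboard's exact metric may differ
    (e.g., per-author breakdowns or journal-level concentration).
    """
    if not reference_authors:
        return 0.0
    hits = sum(1 for authors in reference_authors if authors & manuscript_authors)
    return hits / len(reference_authors)

# Example: 2 of 4 references cite a manuscript author -> ratio of 0.5.
refs = [{"A. Smith"}, {"C. Wu"}, {"B. Jones", "D. Lee"}, {"E. Chen"}]
print(self_citation_ratio(refs, {"A. Smith", "B. Jones"}))  # 0.5
```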

Editorial Checks and Automation

Q: Can Alchemist Review detect AI-generated text?
A: No. Hum does not attempt to detect AI-generated text, as current detection tools (including OpenAI’s) are unreliable and prone to false positives. Instead, the system flags “hallucinated” references—a strong indicator of AI misuse.
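
A common way to surface hallucinated references is to check whether each cited work actually resolves in a bibliographic registry such as Crossref. The sketch below shows the general idea; it is not Hum’s actual implementation.

```python
import requests

def doi_exists(doi):
    """Return True if the DOI resolves in the Crossref registry.

    Illustrative only; Hum's actual checks may use other registries and
    fuzzier matching (e.g., comparing titles and authors, not just DOIs).
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# A reference whose identifier resolves nowhere is a candidate
# "hallucinated" flag for editorial review.
print(doi_exists("10.1038/nature12373"))  # True for a real DOI
```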

Q: How do you evaluate manuscripts for Writing Quality and Readability?
A: We evaluate manuscripts with the goal of identifying major issues that would hinder an editor’s ability to review a manuscript. Quality is judged by how well the manuscript communicates its findings in accordance with common peer review checklists and author guidelines; we avoid stylistic judgments, since writing norms vary by discipline and region.

User Feedback and Product Development

Q: How does Hum collect user feedback?
A: Hum follows a structured engagement plan unique to each client, which generally includes the following:

  • Bi-monthly user interviews (a single user with 2 members of the Hum Alchemist Review team and 1 member of the Grounded Ai team) 
  • In-dashboard feedback (“thumbs up/down”) 
  • End-of-phase surveys