I work with research students to make scientific reasoning explicit in writing, helping them align claims, methods, and interpretation in ways that meet supervisory and review expectations.
Scientific writing is shaped by a series of small but consequential decisions: what to claim, how strongly to state it, what to justify, and what to leave uncertain. When these decisions are made implicitly or inconsistently, rigor erodes and interpretation drifts.
I train students to recognize and make these micro-decisions deliberately, so scientific reasoning, judgment, and evidence boundaries remain clear on the page.
Challenges in student writing are rarely uniform. They vary across individuals, training backgrounds, and stages of development.
Some students struggle with clarity or concision. Others have difficulty constructing a narrative that accurately reflects the logic of the experiment. Still others find it challenging to discuss results in a way that integrates prior literature without overstating conclusions or losing focus. In some cases, weaknesses in study design or analytical reasoning surface directly in the writing itself.
What supervisors recognize immediately but often struggle to name is that these issues converge on the same underlying problem: scientific thinking has not yet been fully externalized on the page.
Common signals include:
Implicit assumptions left unstated or unexamined
Missing causal or logical links between observations, analysis, and claims
Difficulty shaping a narrative that reflects the experimental design and its constraints
Incomplete or unfocused integration of results with the existing literature
Misalignment between the research question, chosen methods, and resulting interpretation
Overconfident AI-generated phrasing that masks uncertainty, limitations, or evidentiary boundaries
These are not surface-level writing errors. They are indicators that reasoning, judgment, and evidentiary constraints have not yet been made explicit in writing.
Addressing them means training a set of deliberate evidentiary habits:
Confirming that cited sources explicitly support the stated claim
Ensuring conclusions reflect what the evidence actually allows
Treating evidence-checking as an ongoing reasoning loop
Weighing convergent and divergent findings without bias
Making assumptions, trade-offs, and uncertainty visible in writing
Adjusting claims and framing when evidence is mixed or conditional
Supervisors typically bring me in at points where student work begins to carry higher stakes—when manuscripts, proposals, or theses move closer to review, and familiar feedback cycles no longer seem sufficient. At this stage, progress often slows not because students lack effort or intelligence, but because reasoning, evidence, and interpretation are no longer aligning cleanly in writing.
My role is to help students translate the thinking they are already doing into writing that meets supervisory and review expectations, without forcing premature conclusions or overconfident claims.
The training begins by restoring the conditions that allow rigorous thinking to become visible on the page. Rather than correcting language or polishing drafts, the focus is on helping students surface assumptions, clarify judgment calls, and make reasoning steps explicit in relation to the task at hand.
Training is adapted to:
The specific writing context (manuscript, proposal, report, or thesis)
The recurring patterns you see in your students’ drafts
The disciplinary norms and standards that guide evaluation in your field
As part of this process, AI is treated as a tool, not a stand-in for scientific judgment. Students learn to interrogate AI outputs, verify claims and citations against primary sources, and recognize when automation begins to erode reasoning—particularly in interpretation, framing, and expressions of certainty.
The goal is not to standardize writing, but to help students produce work where judgment, evidence, and uncertainty are aligned in ways that supervisors and reviewers can readily follow.