How I Structure Every Research Note (And Why Most Investment Research Is Theater)
Most investment research is written for engagement, not accountability.
A chart with arrows. A ten-tweet thread. Numbers specific enough to sound credible. Then the market moves against it and the post disappears. No follow-up. No post-mortem.
This is not research. It's a prayer disguised as research.
The tells are always the same. No stated conviction level. No explicit kill conditions. No position disclosures (or worse, an advertisement dressed up as research). No mechanism for closing the loop when the thesis fails. The author reserves the right to be right in hindsight and wrong in silence. The reader has no way to distinguish a high-conviction bet from a speculative guess because every post reads the same.
It is structurally dishonest. So I built a different system.
Conviction Is Not a Feeling, It Is a Label
Every research note I write opens with one of three words: HIGH, MEDIUM, or SPECULATIVE. That label is the first line. Not buried in paragraph four. Not implied by tone. Written before the thesis begins.
HIGH conviction means the thesis is tested. Multiple confirming signals exist. I am positioned accordingly. If I cannot name three specific scenarios that would invalidate the call, I do not have HIGH conviction. I have wishful thinking.
MEDIUM means I am building a position. One or two signals have confirmed. I am staged in and watching for the third. The risk/reward justifies owning it, not fully sizing it.
SPECULATIVE means the thesis is forming. I may have a starter position or nothing at all. The purpose of a SPECULATIVE note is to document the logic early and state exactly what would upgrade it. If I cannot say what moves it to MEDIUM, I have not thought clearly enough to publish.
The practical difference matters. A HIGH conviction note on NVDA would require five data-anchored thesis points, a specific catalyst with a date, and three named kill conditions. A SPECULATIVE note on ARM would state explicitly: it will not upgrade until royalty revenue beats by more than 10% and v9 architecture penetration guidance moves above 35%. Same sector, different epistemic states. Same format, different labels.
Most research platforms publish both types identically. The audience cannot tell which is which until the outcome reveals it. That is not analysis. That is a lottery where the ticket looks the same regardless of odds.
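The labeling rules above are mechanical enough to enforce in code. This is a hypothetical sketch, not my actual tooling; the class and field names are illustrative assumptions, but the rules match the ones stated above: HIGH requires three named kill conditions, SPECULATIVE requires an explicit upgrade trigger.

```python
from dataclasses import dataclass, field

CONVICTION_LEVELS = ("HIGH", "MEDIUM", "SPECULATIVE")

@dataclass
class ResearchNote:
    ticker: str
    conviction: str                 # the first line of every note
    thesis_points: list = field(default_factory=list)
    kill_conditions: list = field(default_factory=list)
    upgrade_trigger: str = ""       # what moves SPECULATIVE to MEDIUM

    def validate(self):
        """Refuse to publish a note that violates the labeling rules."""
        if self.conviction not in CONVICTION_LEVELS:
            raise ValueError(f"unknown conviction label: {self.conviction}")
        if self.conviction == "HIGH" and len(self.kill_conditions) < 3:
            raise ValueError("HIGH conviction requires three named kill conditions")
        if self.conviction == "SPECULATIVE" and not self.upgrade_trigger:
            raise ValueError("SPECULATIVE must state what upgrades it to MEDIUM")
        return True
```

The point of the validator is that the label cannot be vibes: a note that claims HIGH without three kill conditions simply does not ship.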
A Thesis Is Not a Prediction
A prediction says: I think this goes up.
A thesis says: here is the mechanism, here is what would break it, and here is what I own.
The kill section is what separates them.
Every HIGH conviction note I write includes it. The format is three specific scenarios that, if they materialize, mean the thesis is wrong and the position is closed. Not vague scenarios. “Hyperscaler CapEx guidance cut more than 15% in any Q2 earnings call” is a kill condition. “Macro deteriorates” is not.
This is not hedging. Hedging is writing “risks include macro uncertainty” and leaving the reader to interpret it. A kill condition is a commitment device. It states in advance what would change my mind. When that event occurs, I am not permitted to rationalize around it. The condition was pre-defined. The discipline is mechanical.
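A kill condition only works as a commitment device if it is checkable against data rather than open to interpretation. One way to sketch that, assuming hypothetical field names for the incoming data: each condition is a label paired with a predicate, and the exit decision is whatever the predicate says.

```python
# Sketch of kill conditions as pre-committed predicates over observable
# data. The first condition is the CapEx example from the text; the
# second is a hypothetical addition for illustration.
kill_conditions = [
    ("Hyperscaler CapEx guidance cut more than 15% in a Q2 call",
     lambda d: d.get("capex_guidance_cut_pct", 0.0) > 15.0),
    ("Data-center revenue growth turns negative quarter-over-quarter",
     lambda d: d.get("dc_revenue_qoq_pct", 0.0) < 0.0),
]

def first_triggered_kill(data, conditions=kill_conditions):
    """Return the label of the first kill condition that fires, else None.

    If this returns a label, the position is closed. No rationalizing."""
    for label, predicate in conditions:
        if predicate(data):
            return label
    return None
```

Note what cannot be expressed in this form: "macro deteriorates" has no predicate. If you cannot write the lambda, you have not written a kill condition.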
Losses Should Be Visible
When a thesis is killed, I publish a post-mortem in the thesis graveyard. Every one. No exceptions.
The post-mortem answers three questions: what did I call, what actually happened, and what was the root cause of the miss. The root cause is assigned to one of six categories: narrative risk, timing error, data error, regime change, execution failure, or unknowable at the time of entry.
Over time, the distribution of categories tells you something specific about your analytical process. If most kills are timing errors, your entry framework is too early. If most are narrative risk, you are overweighting fundamentals relative to market structure. The graveyard is a diagnostic tool, not a confession box.
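The diagnostic reading of the graveyard can be made concrete. A minimal sketch, assuming post-mortems are recorded as a list of root-cause strings: tally the six categories and surface a dominant one if it accounts for more than half of all kills.

```python
from collections import Counter

ROOT_CAUSES = {
    "narrative risk", "timing error", "data error",
    "regime change", "execution failure", "unknowable",
}

def dominant_root_cause(postmortems, threshold=0.5):
    """Return the root-cause category covering more than `threshold`
    of all kills, or None if no single category dominates."""
    counts = Counter(p for p in postmortems if p in ROOT_CAUSES)
    total = sum(counts.values())
    if total == 0:
        return None
    cause, n = counts.most_common(1)[0]
    return cause if n / total > threshold else None
```

A dominant "timing error" says the entry framework fires too early; a dominant "narrative risk" says fundamentals are being overweighted relative to market structure. The threshold is an arbitrary illustrative choice.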
The “unknowable” category is narrow. Most misses are explainable in retrospect. My ai16z position lost 95% before I exited. The thesis was right; agentic AI infrastructure matters. The bet was wrong. I sized into a team, not a thesis. The infighting and internal breakdown that killed the project were not in my kill section because I was too attached to the position to write honest kill conditions. That is execution failure compounded by attachment bias. The post-mortem said so. Attachment to a position is itself a kill condition. I know that now.
Most research platforms never close this loop. Kills are silent. The track record is constructed from hits. The audience ends up with a biased sample and no ability to assess actual edge.
The Framework in Practice
Every note I publish carries a conviction label. Every HIGH conviction note has a kill section. Every closed thesis gets a post-mortem. Every data point is a number. Every position is disclosed at entry, not after the outcome.
The live data infrastructure is at ai-tracker-sigma.vercel.app. It covers the AI supply chain across equities and tokens. The tracker is the data layer behind every note.
Over time this expands. More dashboards. More tracked theses. Full transparency on every position I hold and every one I exit. The track record builds in public, not in hindsight.
The research Discord built on this framework launches soon. It will house every thesis, call, post-mortem, and position update in one place. A living record, not a highlight reel.
If you’re curious to join, follow ROCH Labs or subscribe to be the first to know.

