v7.0 RELEASE · EDUCATIONAL RESEARCH PUBLICATION · RUNS IN YOUR OWN LLM · OPTIMIZED FOR CLAUDE

It's not more opinions you need. It's a research framework.

Harvest Protocol is a paid educational research publication and methodology. It is not investment advice and not a registered investment adviser.

Most investors have the same information. The difference isn't insight—it's process. HARVEST PROTOCOL is a complete educational investment research framework — a quantitative scoring methodology, deployment-gate logic, and behavioral circuit breakers — delivered as a structured publication you run inside your own LLM. Optimized for Claude; runs at high fidelity on any frontier model. No Harvest server and no account — nothing about your research ever leaves your hands.

Educational research framework · Not investment advice · Not individual recommendations

1.5–2% documented avg behavior gap (Morningstar/DALBAR)
45 min monthly cycle
9 exit triggers
13% EV hurdle
monthly_research_session --allocate $9,200
 
SESSION: 2026-04 · PORTFOLIO: $312,400
MACRO: VIX 14.2 · CAPE 32.1 · GATE OPEN
 
DEPLOYMENT QUEUE — 3 PASSES, 2 REJECTS
 
COST composite 82 · EV 16.1% · size $3,800
VEEV composite 79 · EV 14.4% · size $2,600
VOO composite 78 · EV 14.3% · size $2,800
SMCI composite 64 · EV 2.8% · EV_REJECT
PFE composite 58 · EPS↓ · QUALITY_REJECT
 
ALLOCATION: Roth 401k $3,800 · Taxable $5,400
EXITS ARMED: 9 triggers per position
 
→ RESEARCH_PASS: 3 NAMES · 2 BLOCKED · 38 MIN ELAPSED
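The gate readout in the session view above (VIX 14.2 · CAPE 32.1 · GATE OPEN) is, at heart, a simple conditional check. The sketch below illustrates only the shape of that logic — the thresholds and the function name are illustrative placeholders, not the published methodology's values:

```python
def deployment_gate(vix: float, cape: float) -> str:
    """Illustrative deployment-gate check.

    Thresholds here are hypothetical placeholders; the published
    methodology defines its own gate conditions.
    """
    if vix >= 30:                  # panic regime: block new deployment
        return "GATE CLOSED"
    if cape >= 35 and vix < 15:    # froth regime: index funds only
        return "INDEX ONLY"
    return "GATE OPEN"

# With the session readout above: VIX 14.2, CAPE 32.1
print(deployment_gate(14.2, 32.1))  # GATE OPEN
```

The point is not these particular numbers — it's that the decision is pre-committed code-like logic, not an in-the-moment judgment call.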
THE COST OF NO SYSTEM

1.5–2% a year. Vanishing quietly.

Behavioral finance research documents an average 1.5–2% annual return loss from preventable decision errors — selling at panic lows, deploying at FOMO peaks, holding positions whose thesis broke months ago.

Morningstar Mind the Gap 2024; DALBAR QAIB 2024 — investor returns lagged fund returns by 1.68% annualized over 10 years.

Over a decade, a 1.5–2% annual drag compounds into a meaningful share of total return — not from bad picks, but from bad process. (Published Morningstar/DALBAR research averages — context for why a disciplined process matters, not a projection of any individual's outcome.)

PANIC EXIT — MARCH 2020
The classic pattern — selling into a sharp drawdown, then rebuying higher into the recovery. Behavioral-finance research identifies this as the single largest contributor to the documented behavior gap.
BEHAVIORAL COST INCURRED
FOMO ENTRY — NOVEMBER 2021
Deploying into high-growth names at an elevated CAPE. On these inputs the Deployment Gate returns INDEX ONLY — the methodology's structural defense against buying into froth.
BEHAVIORAL COST INCURRED
WITH HARVEST PROTOCOL
Gate fires. Capital holds. Process executes. The system's value isn't the research candidates it surfaces — it's what it tells you not to do. Behavioral costs avoided; no dollar outcome implied.
PROCESS DISCIPLINE APPLIED
THE REJECTION PROOF

Not what we bought. What we didn't.

A walkthrough of how the methodology's rejection logic works, applied retrospectively to widely known names using data from the dates shown. This is an educational illustration of the screening criteria — not investment performance, not a track record, and not a complete or representative list of names the criteria would flag. The product had no buyers running it on these dates. Subsequent price action is a matter of public record and is intentionally omitted here.

SCREENING CRITERIA · MAR-2024 DATA (RETROSPECTIVE)
SMCI
Super Micro Computer
REJECTION TRIGGERS
• RSI: 86 → overbought flag (−2 pts)
• EV%: 2.8% vs 13% hurdle → EV_REJECT
• Beneish M-Score: −1.38 → quality flag (−5 pts)
METHODOLOGY RESULT
RESEARCH_HOLD
SCREENING CRITERIA · JAN-2024 DATA (RETROSPECTIVE)
LULU
Lululemon Athletica
REJECTION TRIGGERS
• Expectation Gap: UNFAVORABLE → EXPGAP_REJECT
• EV%: 3.1% vs 13% hurdle → EV_REJECT
• Estimate revision cut: −11% in 60 days (−3 pts)
METHODOLOGY RESULT
RESEARCH_HOLD
SCREENING CRITERIA · JAN-2024 DATA (RETROSPECTIVE)
PFE
Pfizer Inc.
REJECTION TRIGGERS
• EPS declining YoY: −77% FY23 vs FY22 (−5 pts)
• Oct 2023 guidance cut: revenue −$9B vs prior (−5 pts)
• Piotroski F-Score: 3/9 → flagged (−3 pts)
METHODOLOGY RESULT
RESEARCH_HOLD
SCREENING CRITERIA · Q1-2024 DATA (RETROSPECTIVE)
WBD
Warner Bros. Discovery
REJECTION TRIGGERS
• Altman Z-Score: 0.94 → distress zone (−5 pts)
• EPS declining YoY — losses widening (−5 pts)
• U/D ratio: 1.1 with VP < 2 → UD_REJECT
METHODOLOGY RESULT
RESEARCH_HOLD
SCREENING CRITERIA · JAN-2024 DATA (RETROSPECTIVE)
NKLA
Nikola Corporation
REJECTION TRIGGERS
• Altman Z-Score: −4.2 → pre-filter rejection
• EV%: negative — no FCF path → EV_REJECT
• Short interest: 28% of float (−2 pts)
METHODOLOGY RESULT
RESEARCH_HOLD
Public record: Nikola Corporation filed for Chapter 11 bankruptcy in February 2025.
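The trigger logic in these cards reduces to mechanical threshold checks. In the sketch below, the Altman cutoff (Z < 1.81 = distress zone) and Beneish cutoff (M > −1.78 = manipulation flag) are the standard published values, and the 13% hurdle matches the cards above; the function name and return format are illustrative, not the methodology's own identifiers:

```python
def pre_filter(ev_pct: float, z_score: float, m_score: float,
               hurdle: float = 13.0) -> list[str]:
    """Illustrative rejection pre-filter; returns the triggers that fire."""
    triggers = []
    if ev_pct < hurdle:
        triggers.append("EV_REJECT")      # expected value below hurdle
    if z_score < 1.81:
        triggers.append("Z_DISTRESS")     # Altman distress zone
    if m_score > -1.78:
        triggers.append("M_SCORE_FLAG")   # Beneish manipulation flag
    return triggers

# SMCI card above: EV 2.8%, M-Score -1.38 (Z not shown; a passing value is used)
print(pre_filter(2.8, 3.0, -1.38))  # ['EV_REJECT', 'M_SCORE_FLAG']
```

Any single fired trigger is enough to keep a name out of the deployment queue.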
THE QUESTION WE GET MOST

"Why not just ask Claude directly?"

THE AI IS THE REASONING ENGINE. THE PROTOCOL IS THE PRODUCT.

HARVEST PROTOCOL gives Claude a 500-page protocol—with a Deployment Gate, 10-factor scoring, Expected Value Framework, and 9 exit triggers—that converts analysis into a sized, routed, behaviorally enforced research output. The AI is the reasoning engine; the protocol is the product.

CAPABILITY | AI ALONE | WITH PROTOCOL
Analysis quality | High—but unfocused | 40+ quantitative checks, structured
Position sizing | None | EV-weighted, risk-adjusted
Market timing gate | None | VIX/CAPE/sentiment deployment gate
Exit criteria | None | 9 pre-committed triggers
Behavioral circuit breakers | None | VP disconfirmation, EV decay
THE HONEST COMPARISON

How it stacks up.

FEATURE HARVEST Automated Services Tip / Recommendation Services Asking Claude/GPT Directly
Deployment Gate (VIX/CAPE)
9 pre-committed exit triggers Auto rebal only
EV-weighted position sizing Fixed %
40+ quantitative checks Varies Unstructured
Behavioral circuit breakers
FIRE / tax routing integration
Local — your data stays yours Full account access Cloud Cloud
Cost $127 once (Starter) 0.25% AUM/yr $99–299/yr $0–20/mo
Decisions or just analysis? Sized + routed framework output Managed decisions Ideas only Essays only
You stay the operator Partial
THREE LAYERS

The complete investment research framework.

HARVEST PROTOCOL is a paid educational research publication and methodology — not a tip service. Its three integrated components work together to convert analysis into repeatable, behaviorally enforced research outputs — all applied by Claude inside your own session, following the published method.

LAYER 01

Investor Playbook

30 chapters. Knowledge layer spanning Graham to Dalio. Compiled mental models, frameworks, and decision heuristics from 8 decades of investment literature.

LAYER 02

Analysis Methodology

10-sweep BEST research methodology with 40+ quantitative checks. Composite scoring, expectation gap analysis, Beneish M-Score, Piotroski F-Score, and VP modeling.

LAYER 03

FinPlan Framework

FIRE planning integration, research cycle management, tax-advantaged routing, and market-condition deployment gates that prevent bad timing.

WHAT'S IN THE STARTER EDITION

A structured methodology you run in your own LLM — optimized for Claude. No Harvest server, no account.

14-FILE STRUCTURED BUNDLE — The full methodology as ready-to-use markdown files for a Claude Project — Investment Engine, FinPlan Framework, Project Primer, Mobile Workflows, Model Routing, QC Protocol, reference docs, and 5 sample profiles.
30-CHAPTER INVESTOR PLAYBOOK — The knowledge layer — compiled mental models and decision heuristics spanning eight decades of investment literature, Graham to Dalio.
10-SWEEP BEST METHODOLOGY — 40+ quantitative checks — composite scoring, expectation-gap analysis, Beneish M-Score, Piotroski F-Score, VP modeling — applied by Claude following the published method. No external engine.
READY-TO-PASTE WORKFLOW PROMPTS — Quick Screen, Monthly BEST Cycle, Quarterly Review and more — paste into your Claude Project and run. No coding, no setup beyond a file upload.
ANTI-HALLUCINATION QC PROTOCOL — The bundled QC_PROTOCOL.md forces calculation scratchpads, NULL-on-unsourced data, and source citations on every workflow — so you can audit every number.
RUNS IN YOUR OWN LLM (OPTIMIZED FOR CLAUDE) — Your session, your data. Nothing is sent to us — no Harvest server, no account. The publication is the product; it runs at high fidelity on any frontier model.
BRING YOUR OWN FREE DATA CONNECTORS — For accurate inputs, connect your LLM to free public data sources — SEC EDGAR, Alpha Vantage, FMP, Tavily. These are your own connectors, registered with those providers (not with us); accurate data materially improves every workflow.
WHAT'S IN THE BOX

This is what a single-stock deep-dive produces.

Not a stock tip. Not a generic analysis. A complete mechanical research output — every number sourced, every score traceable, every decision pre-committed. Here's the output of a real "analyze MSFT" run.

10-FACTOR SCORECARD — MSFT
Business Quality (35%) — 8.75 / 10
  Moat width & trajectory — 9.0
  ROIC vs WACC spread — 8.5
Financial Strength (25%) — 8.1 / 10
  Revenue growth & quality — 7.0
  Profitability & margin — 8.0
  FCF conversion — 8.5
  Balance sheet health — 9.0
Valuation (25%) — 6.5 / 10
  Absolute margin of safety — 6.0
  Relative vs history & peers — 7.0
Management (10%) — 8.0 / 10
Risk (5%) — 9.0 / 10
ENGINE SCORE — 79.7 / 100
COMPOSITE SCORE
Composite = (Engine 79.7 × 0.70) + (Wall St 72 × 0.15) + (Upside 68 × 0.15)
          = 55.8 + 10.8 + 10.2 = 76.8 / 100
EXPECTED VALUE FRAMEWORK
Bull (25%) — +22% → 5.5%
Base (55%) — +11% → 6.1%
Bear (20%) — −8% → −1.6%
Expected Return — 10.0%
vs 13% Hurdle — BELOW — RESEARCH_HOLD
SYSTEM VERDICT
MSFT: Engine score 79.7 (screening pass) but EV 10.0% below 13% hurdle.
RESULT: Framework signals RESEARCH_HOLD. No additional allocation indicated under current parameters.
The framework doesn't just surface research candidates. It tells you when not to allocate — even quality companies at the wrong price.
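The scorecard arithmetic above can be reproduced line for line — the weights, scores, and scenario probabilities below are taken directly from the MSFT output shown; only the function names are ours, not the methodology's identifiers:

```python
def composite(engine: float, wall_st: float, upside: float) -> float:
    # 70/15/15 source weighting, as in the composite formula above
    return engine * 0.70 + wall_st * 0.15 + upside * 0.15

def expected_return(scenarios: list[tuple[float, float]]) -> float:
    # probability-weighted sum of scenario returns (in percent)
    return sum(p * r for p, r in scenarios)

score = composite(79.7, 72, 68)                             # engine, Wall St, upside
ev = expected_return([(0.25, 22), (0.55, 11), (0.20, -8)])  # bull, base, bear
verdict = "RESEARCH_HOLD" if ev < 13.0 else "PASS"
print(round(score, 1), round(ev, 2), verdict)  # 76.8 9.95 RESEARCH_HOLD
```

The expected return works out to 9.95%, which the output above reports as 10.0% after rounding — either way, below the 13% hurdle, hence RESEARCH_HOLD.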

Framework output shown for educational illustration. Scores reflect methodology application, not investment recommendations. Not financial advice.

Get a free Quick Score for any ticker

A free, standalone preview of the scoring methodology on any ticker — 3 per day, no account. (This preview is a separate hosted tool; the Starter Edition itself runs entirely in your own LLM, optimized for Claude — see below.)

PROCESS VS. NO PROCESS

What a structured process changes.

An illustration of how the methodology's design differs from unstructured investing. It describes the process, not a promised personal result — individual outcomes depend on your own execution and market conditions.

WITHOUT A STRUCTURED PROCESS

4–6 hours/month of unstructured research
Selling on fear, buying on momentum
No structured record of past decisions
Deploys capital when market is overvalued
Can't weight conflicting analyst opinions
Averages down on broken thesis
Exit decisions driven by P&L anxiety

WITH THE FRAMEWORK'S PROCESS

A single 45–90 minute monthly cycle
Exit criteria pre-committed in advance, not set in the moment
Calibration log with decision attribution
Deployment Gate flags overvalued conditions
Composite formula with source weighting
VP disconfirmation check is a required step each cycle
Exit triggers defined by thesis, not by price movement
TESTIMONIALS

What changes after 90 days.


Verified buyer testimonials will appear here after launch. As a matter of policy, Harvest Protocol does not publish fabricated, composite, or "representative" quotes, and does not publish performance, return, or dollar figures in any testimonial — see Disclosures.

Harvest Protocol — Testimonial Policy

THE MATH

The math is simple. The discipline isn't.

Behavioral-finance research (Morningstar Mind the Gap 2024; DALBAR QAIB 2024) documents an average 1.5–2% annual return drag from preventable decision errors. That published research is the reason a disciplined process matters — it is context, not a benefit promised to any buyer. The Starter edition costs $127 once.

What you are buying is process discipline — fewer panic exits and FOMO entries, and a repeatable monthly cycle. Whether that is worth $127 to you is your decision; we make no projection of any dollar outcome.

ONE-TIME COST
▸ HARVEST PROTOCOL Starter $127 (once)
▸ Claude Pro (optional) $240/yr if upgraded
WHAT YOU'RE BUYING
▸ A repeatable research process methodology
▸ Pre-committed exit + deployment rules discipline
▸ 45-min monthly cycle vs. 4–6 hrs ad hoc time structure
WHAT WE PROMISE — a process, not a return

The 1.5–2% behavior-gap figure is published third-party research (Morningstar Mind the Gap 2024; DALBAR QAIB 2024), cited as context for why process matters — not a guarantee, projection, or representation of any individual outcome. Not financial advice. Not a registered investment adviser.

ONE EDITION

The Starter Edition.

Buy-side research desk: $2,400+/mo · Seeking Alpha Premium: $299/yr · Automated services on $250K AUM: $625/yr · HARVEST PROTOCOL STARTER: $127 once

YOUR DATA STAYS YOURS

The Starter Edition is a set of files that run entirely inside your own LLM (optimized for Claude). There is no Harvest server and no Harvest account — so nothing about your research ever reaches us. Any free data connectors you add (SEC EDGAR, Alpha Vantage, FMP, Tavily) are your own, configured in your LLM and registered with those providers, not with us. We never see your tickers, your dollar amounts, or your portfolio composition. Full disclosure in the privacy policy.

No Claude subscription required to start. Begin with a free Claude account. Upgrade to Pro ($20/month) when your workflow demands it.

30-day money-back guarantee · Personal use license · Not financial advice

QUESTIONS

Everything you need to know.

Do I need to be technical to set this up?
No. Setup is ~10 minutes: upload the 14 Starter markdown files to a Claude Project (Claude is the optimized environment; any frontier LLM works), then run any of the 14 ready-to-paste workflow prompts — Morning Watchdog, Quick Screen, Monthly BEST Cycle, Quarterly Review. For accurate inputs we recommend connecting a few free public data sources to your LLM — SEC EDGAR, Alpha Vantage, FMP, Tavily (free accounts, ~5 minutes; these are your own connectors, not a Harvest service). There is no Harvest server, and nothing to install beyond those optional free connectors. If you can upload a file and paste a prompt, you can run the system.
What about hallucinations? Can I trust the numbers?
Hallucination risk is real, and the methodology is designed around it — not in spite of it. The bundled QC_PROTOCOL.md file forces three behaviors on every workflow: (1) Claude must show a calculation_scratchpad for any numerical claim — named operands, source-tagged values, visible arithmetic; (2) when a number can't be sourced, Claude must mark it NULL and apply the DATA_DEFICIENT protocol rather than estimate; (3) every figure must cite its source (web search URL, broker statement, EDGAR filing). The methodology constrains how Claude works the numbers and requires every figure to trace to a primary source. You remain the final auditor against those primary sources (10-K, 10-Q, earnings transcripts).
Which Claude plan or model does it need?
The Starter edition runs inside a Claude Project (Claude.ai free or Pro, or Claude Code) — Claude is the optimized environment, but because the methodology is plain markdown it runs at high fidelity on any frontier LLM. The bundled MODEL_ROUTING.md tells you which model fits which task: most daily workflows run on Sonnet; the adversarial workflows (Bear Case Stress Test, VP Self-Audit, full Monthly BEST Cycle) benefit from Opus's deeper judgment. Everything runs in your own session — there is no Harvest engine or server to depend on.
How do I verify the math isn't a black box?
There's no black box — the methodology forces the math into the open. The bundled QC_PROTOCOL.md requires the model to show its work — every F-Score, Z-Score, M-Score, EV calculation, and Composite Score appears inside a calculation_scratchpad block with named operands, source-tagged values, and visible arithmetic. If Claude skips the scratchpad, the protocol says you reject the output. Each workflow prompt includes verification cues — specific things to spot-check (e.g., "F-Score total should equal sum of 9 sub-checks", "EV must equal weighted sum of bull/base/bear returns"). When something looks off, the protocol provides recovery prompts: "Re-derive that score from these named operands and cite the source for each input."
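Those spot-checks are mechanical, so you can automate them outside the LLM entirely. A minimal sketch — the function names and tolerance are ours, and the F-Score example values are hypothetical (only the 3/9 total comes from the PFE card earlier on this page):

```python
def check_ev(reported_ev: float, scenarios: list[tuple[float, float]],
             tol: float = 0.1) -> bool:
    """EV must equal the probability-weighted sum of bull/base/bear returns."""
    return abs(reported_ev - sum(p * r for p, r in scenarios)) <= tol

def check_f_score(reported_total: int, sub_checks: list[int]) -> bool:
    """F-Score total should equal the sum of its 9 binary sub-checks."""
    return (len(sub_checks) == 9
            and all(c in (0, 1) for c in sub_checks)
            and reported_total == sum(sub_checks))

print(check_ev(10.0, [(0.25, 22), (0.55, 11), (0.20, -8)]))  # True
print(check_f_score(3, [1, 0, 0, 1, 0, 0, 1, 0, 0]))         # True
```

If either check fails, that is exactly the situation the recovery prompts are written for.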
What should I do first after buying?
Three steps in your first 30 minutes: (1) upload the 14 markdown files from your purchase email to a new Claude Project (or your LLM of choice); (2) connect the recommended free data sources to your LLM — SEC EDGAR, Alpha Vantage, FMP, Tavily (free, ~5 min; accurate data materially improves every workflow); (3) open tab 1 of the bundled userguide.html — the 30-minute Quick Start — and run a quick screen on any ticker to see your first verified output. The week-1 calendar lays out your daily and weekly routines (Morning Watchdog, Sunday Evening Brief, mid-week scan).
Why not just ask Claude directly?
Because unstructured AI gives you a balanced essay that decides nothing. The protocol gives Claude a 500-page system — Deployment Gate, 10-factor composite scoring, Expected Value Framework with a dynamic hurdle rate, Half-Kelly position sizing, and 9 pre-committed exit-review signals. The framework output is not "here are some things to consider." It surfaces structured research candidates with sizing references and exit-review signals — the buyer decides whether to act. The AI is the reasoning engine; the protocol is the educational research framework.
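That answer mentions Half-Kelly position sizing. For a binary bet, the textbook Kelly fraction is f = p − q/b (p = win probability, q = 1 − p, b = win/loss payoff ratio), and Half-Kelly halves it for robustness against estimation error. The sketch below implements only that textbook form — the methodology's actual EV-weighted, risk-adjusted sizing rule is more involved and is not reproduced here, and the inputs are hypothetical:

```python
def half_kelly(p_win: float, win_pct: float, loss_pct: float) -> float:
    """Textbook Half-Kelly fraction for a binary outcome.

    p_win:    probability the position works out
    win_pct:  gain if it works (e.g. 0.22 for +22%)
    loss_pct: loss if it doesn't (positive number, e.g. 0.08 for -8%)
    """
    b = win_pct / loss_pct           # payoff odds
    kelly = p_win - (1 - p_win) / b  # full Kelly fraction
    return max(0.0, kelly / 2)       # half, floored at zero

# Hypothetical inputs: 60% win odds, +22% upside, -8% downside
print(round(half_kelly(0.60, 0.22, 0.08), 3))  # 0.227
```

Note the floor at zero: when the edge is negative, the sized position is no position at all — the same "tells you when not to allocate" behavior the framework enforces elsewhere.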
Is my portfolio too small for this?
The methodology works at any portfolio size. The dollar value of behavioral guardrails scales with your account, but the habit value is largest when the portfolio is small — you build the discipline before your decisions involve six figures of capital. The same Deployment Gate, the same EV check, the same exit triggers fire on a $20K position as on a $200K position.
How long before I see results?
Meaningful calibration data — the kind that lets you measure your own decisions against a VOO benchmark over time — requires roughly 10–15 research candidates with 12-month outcomes. Plan on 12–15 months for that. Before then, the value shows up in two places: (1) process discipline — you stop making panic exits and FOMO entries, and (2) time — a 45-minute monthly cycle replaces 4–6 hours of unstructured research.
How private is my research?
It's as private as it gets: nothing about your research ever reaches us. The Starter Edition is a set of files that run entirely inside your own LLM (optimized for Claude) — there is no Harvest server, no license-validation call, and no Harvest account. Any data connectors you add (SEC EDGAR, Alpha Vantage, FMP, Tavily) are your own, configured in your LLM and registered with those providers — not with us. We never receive your tickers, your dollar amounts, or your portfolio composition. Full disclosure in the privacy policy.
What happens when the AI models change?
The methodology is model-agnostic by design. It is documented in plain markdown and runs at ~85% fidelity on any frontier model — Claude, GPT-4/5, Gemini, or successor systems. Because the product is the published methodology itself — not a hosted service tied to any AI vendor — it keeps working when models change. Starter buyers receive all v7.x methodology updates included in the purchase price.

The next market drop will reveal whether you have a system.

When markets fall 25%, most investors panic. The ones with a Deployment Gate, pre-committed exit-review criteria, and a VP disconfirmation signal don't panic—they execute their own framework.

THE GUARANTEE

Run one full monthly BEST cycle. If it doesn't change how you approach your next investment decision — full refund, no questions, you keep the Playbook.

That's the deal. We don't promise returns. We promise the methodology will change how you research — or we don't keep your money.

Get the Starter Edition — $127 →

30-day money-back · Personal use license · All future v7.x updates included · Not financial advice. All investments involve risk.