Changelog

Every product change to Webhound — what you can now do, and how to use it.

Looking for the long-form posts? Read the news →

April 2026

·UI

Going back to a composer-first home

We added an agent chat in front of the home composer earlier this year because we thought a copilot would help. It mostly added friction. Power users said the original direct flow was better. So we put it back.

A few months ago we made the home page a chat with an agent. The bet was that some people would want a copilot to plan a run, ask clarifying questions, and chase follow-ups in conversation.

In practice it was friction. For people who already knew what they wanted to research, the chat sat between them and the run. Several of you told us the original direct composer was faster and you missed it.

So today the home page is a composer again. Click the Output pill (Report, Dataset, Chain, or Ask), type the brief, set a budget, hit Send. Mode, model, Deep read, and budget are pill controls under the textarea — same controls as before, just always visible instead of hidden behind a chat.

The agent isn't gone. It still runs every session as Planner → Executor → Verifier. It still asks Plan-mode questions when you opt into Plan. It still drafts chain steps inside the chain builder. We just stopped putting it in front of the start of every run.

How: same as it always was. Pick what to make, type, send. Carry on.

Read the announcement
·Reports

LaTeX math and DOI lookup for open access in research

Research HTML can include KaTeX-friendly math spans. The research agent can also query Unpaywall by DOI for open-access metadata and links.

Equations are stored as katex-source spans with raw TeX inside so the report reader can render them with KaTeX.

For a known DOI, the agent can run an Unpaywall lookup and work from structured JSON instead of guessing from search snippets alone.

How: start a research session as usual; the executor uses these when your brief calls for math or citable papers.

Read the announcement
·Pricing

New session minimums for Pro vs Flash

Pro reports start at $10 and Pro datasets at $5. Flash reports start at $2 and Flash datasets at $1.

When you start a session, the budget you enter has to clear a minimum that depends on the model. Pro is built for thorough work that needs more room to breathe; Flash is for quick, cheap runs.
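The rule is just a two-key lookup on model and output type. A sketch with the values listed above (the lowercase identifiers here are illustrative, not API values):

```python
# Session budget minimums in USD, per the pricing above.
MINIMUMS = {
    ("pro", "report"): 10,
    ("pro", "dataset"): 5,
    ("flash", "report"): 2,
    ("flash", "dataset"): 1,
}

def budget_ok(model: str, output: str, budget: float) -> bool:
    """True if the entered budget clears the minimum for this model/output pair."""
    return budget >= MINIMUMS[(model.lower(), output.lower())]

print(budget_ok("Flash", "report", 2))   # minimum exactly met
print(budget_ok("Pro", "dataset", 3))    # below the $5 Pro dataset floor
```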

How: in the budget field on a new session, the minimum updates as you switch between Flash and Pro. If you want the cheapest possible run, pick Flash; if you want the most thorough run, pick Pro.

Read the announcement
·Trust

Click any fact or cell to see the tools that produced it

Every fact in a report and every cell in a dataset is now clickable down to the raw tool calls — searches, page visits, code runs — with full inputs and outputs.

Before, you had to trust the output. Now you can audit any single claim down to the raw work behind it.

Click a claim pill in a report (or the small icon next to a dataset cell) and a popover opens with the evidence (the verbatim passage from the source), the method (which search → which page → which section), the sources, and the confidence level.

From there, click "View tool chain →" in the top-right of the popover. A modal opens with each tool call rendered in full — searches show the query and the results, page visits show the URL and content, code runs show the source and the output.

This works the same in your own workspace, in published reports anyone reads anonymously, and in copies of those publications.

Read the announcement
·Reports

Redirect a running session, roll back to any version, and reach 14 platform scrapers

Send a message from the Activity tab to redirect a running session, browse old versions of any output report, get full LinkedIn profiles, and reach 14 dedicated platform scrapers.

You don't have to wait for a session to finish to course-correct. While it's running, open the Activity tab and send a message ("focus more on pricing", "skip the historical context"). The executor stops, in-flight tasks are cancelled, and the planner re-plans with your guidance — anything already written is preserved.

Every time the assembler builds the final output, that assembly is saved as a version. A version timeline at the bottom of the document lets you browse and switch between past versions of the same report.

Webhound has dedicated scrapers for 14 platforms now: Google Maps, Amazon, Zillow, Indeed, TripAdvisor, eBay, Yelp, Telegram, Crunchbase, Airbnb, Pinterest, PitchBook, Google Flights, and Expedia Hotels. Several support direct search ("search Indeed for X", "search PitchBook for investors matching Y"). LinkedIn profile visits also now return full profile data (education, work history, featured content) instead of just four fields.

Plus: the planner now creates 3-5 substantial tasks per cycle instead of one tiny one, the executor can grep over pages it already visited (useful for verifying a specific stat), and inline source indicators are now a single confidence-colored pill with the source hostname rather than a number plus a colored dot.

How: open the Activity tab on a running session and chat. The version timeline is at the bottom of any output document. Platform scrapers are picked automatically when relevant.

Read the announcement
·UI

Find recent work fast, and let the agent read further into long uploaded files

A new Recents tab in the workspace shows your most recently touched sessions. The agent can now keep reading further into long files you upload instead of stopping at the first page.

Recents is a new filter at the top of the workspace tree (next to All / Reports / Datasets). It shows your sessions in order of when you last touched them — running sessions float to the top.

On the agent side: when you upload a long file (a big PDF, say), the agent can now keep reading further into it during research, instead of being stuck on whatever fits in the first read.

How: in the workspace sidebar, click the Recents tab to switch out of the folder view. For long files, just attach as usual — the agent calls "Continue Reading File" when it needs more.

·Agent

Save API keys and let the agent use them

Save API keys, tokens, and other credentials in your account once. The agent then uses them with code execution to call any service you have a key for — Notion, Dropbox, your own internal API, anything with an HTTP endpoint.

Code execution got way more powerful: combined with stored credentials, the agent can now write Python that calls real APIs you authenticate against. Ask it to "pull my Notion database X and add a row per company in this dataset", "upload this report to my Dropbox", "post the top 5 findings to our internal Slack webhook" — it writes the code, pulls your saved key, and runs it.

There are two ways to add a key. From chat: paste it in and tell the agent what it is ("here's my Notion token"); the agent saves it under a label of your choosing and remembers it across conversations. From the UI: open Settings → Account → Secrets to add, view, edit, or delete keys yourself.

Keys are stored per-user with row-level security (only you can read your own) and are only loaded when the agent explicitly fetches one to use in code.
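For a feel of the kind of code involved, here is a hand-written sketch in the same spirit (ours, not the agent's actual output): posting findings to a Slack incoming webhook, where the webhook URL would come from your stored secrets.

```python
import json
import urllib.request

def slack_payload(findings: list[str]) -> bytes:
    """Format a list of findings as a Slack incoming-webhook message body."""
    lines = [f"{i}. {f}" for i, f in enumerate(findings, 1)]
    return json.dumps({"text": "Top findings:\n" + "\n".join(lines)}).encode()

def post_to_slack(webhook_url: str, findings: list[str]) -> None:
    """Fire the HTTP POST; webhook_url is fetched from stored secrets."""
    req = urllib.request.Request(
        webhook_url,
        data=slack_payload(findings),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

print(slack_payload(["Flash minimums dropped", "New audit pass"]).decode())
```

Anything with an HTTP endpoint works the same way: build the request, attach the stored credential, send.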

How: in chat, paste a credential and tell the agent what to do with it. To manage keys directly, open Settings → Account and scroll to Secrets.

·Publishing

Publish your research as a permanent link, and copy anyone else's into your workspace

Turn any session or folder into a permanent public URL with a license you choose. Browse public work in Explore, copy what's relevant into your own workspace with attribution, and share from mobile.

You can now share a session or a whole folder as a permanent published artifact at /p/{slug}. Pick one of four licenses: View Only, Attribution (copy into your workspace, no republishing the copy), Open (copy and republish under the same license), and Open + Commercial.

When you find someone else's public publication that's relevant, click Copy to workspace and the whole thing — sessions, documents, datasets, sources — lands in a new folder in your workspace so you can keep researching from there. Lineage and attribution back to the original are tracked automatically.

Each republish creates a new immutable version (/p/{slug}/v/1, /v/2, …) with an update note. Authors get a profile page at /author/{id} with their publications and aggregate stats; you can follow authors and watch specific publications for updates.

Shared links unfurl with rich previews on Slack, X, etc. Sharing also works on mobile.

How: in any session or folder, click Share → Publish. Open Explore in the top header to browse public publications. On any public publication, click Copy to workspace.

Read the announcement
·Trust

A single research run now produces multiple documents, and every fact is traced back to its source

A research run now produces several working documents and assembles a final output. Every factual claim carries a structured trace (evidence, method, sources, confidence). Three new API endpoints expose all of it.

Research runs now produce several working documents — one per topic cluster — and an assembled final output. The executor writes into the working docs; the assembler synthesizes them into the output you read. Switch between them from the Document tab.

Every factual statement in a report gets a structured trace recording the type of claim, the verbatim evidence, the research method, the confidence level, the importance to the argument, and which other claims it depends on. The Claims tab lists every traced claim with filters for type, confidence, importance, and document.

Inline citation numbers are generated automatically from the trace's source URLs — no more mismatched or duplicate citations.

New API endpoints: GET /research/:id/documents (all docs with stats), GET /research/:id/claims (every traced claim deduplicated), and a doc_type parameter on the existing document endpoint (output | working | all).
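Assuming bearer-token auth and a placeholder API host (check the API Docs page for the real base URL and auth details), building requests against these routes looks roughly like:

```python
from urllib.parse import urlencode

API_BASE = "https://api.webhound.example"  # placeholder host for illustration

def documents_url(research_id: str, doc_type: str = "all") -> str:
    """URL for the documents endpoint; doc_type is one of output | working | all."""
    return f"{API_BASE}/research/{research_id}/documents?{urlencode({'doc_type': doc_type})}"

def claims_url(research_id: str) -> str:
    """URL for the deduplicated traced-claims endpoint."""
    return f"{API_BASE}/research/{research_id}/claims"

print(documents_url("abc123", doc_type="working"))
print(claims_url("abc123"))
```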

How: open the Document tab strip above any report to switch documents. Open the Claims tab to scan every traced claim with its sources.

Read the announcement
·Trust

Run a separate audit pass to fact-check any finished report

Once a report is done, hand it to a separate audit pass that re-checks claims against fresh sources. The toolbar shows research and audit budgets separately.

You used to have to either trust a finished report or read every source yourself. Now you can run an audit on any completed report — Webhound finds additional sources, re-checks the most important claims against them, and regenerates the output with the corrections.

Research and audit budgets are tracked separately in the toolbar so you always see what each pass is spending. The budget you give the audit determines how many claims get verified.

How: open any finished report and click the Audit Claims button in the toolbar (it shows just "Audit" if claims have already been extracted). Pick a budget and click Start audit.

March 2026

·Agent

Chat with Webhound and let it run multi-step research pipelines for you

A conversational mode where Webhound takes a higher-level goal, proposes a multi-step pipeline, runs it in the background, and remembers your preferences across sessions.

Webhound can now string multiple sessions into a pipeline. Describe a sequence — for example, "research the best CRMs, then extract 30 of them with pricing, then write a buying recommendation" — and the agent proposes a plan with editable budgets per step. You approve before anything runs, and you can stop the pipeline at any point.

Sessions run in the background while you keep chatting. You're notified when each one finishes, and the agent summarizes what completed.

The agent can also read and analyze sessions you've already done, run Python over your actual datasets, and remember your research interests, budget preferences, and working patterns across conversations (Memories).

How: switch to Agent in the session-type selector under the prompt box. Set a working folder in the chat's top bar to scope the agent's context to one project. The agent runs while you keep chatting.

Read the announcement
·API

Pull every document, claim, and trace into your own systems via API v2

API v2 with new endpoints and auth. Structured access to every document, every claim with its trace, and the publication lifecycle (publish, copy, version).

If you want to fold Webhound output into your own product, dashboard, or pipeline, v2 gives you structured access to documents, individual claims with traces, and the full publication lifecycle (POST /publications, copy, watch, attribution graph).

How: open the API Docs page from the app menu for the full v2 reference and auth setup.

·Reports

Webhound can write and run Python during a session

Webhound writes and runs Python during the run for calculations, data transformations, charts, and file generation — and it can call any API you give it a key for (see Apr 3).

Some questions are easier to answer with code than with searching — "what's the CAGR over these 10 years", "deduplicate this list of 800 companies", "plot revenue against headcount". Webhound now writes and executes Python during the run and drops the result (number, table, chart) into the document.
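The CAGR question is the kind of thing a few lines of generated Python settles exactly. A hand-written illustration (not the agent's actual output):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Doubling over 10 years works out to roughly 7.2% per year.
print(round(cagr(100, 200, 10), 4))  # ≈ 0.0718
```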

Works in reports, datasets, enrichment, and Ask mode. The agent picks code automatically when it's the right tool for that step — including running over your actual datasets, not made-up data.

Combined with the secrets feature added Apr 3, code execution is also how the agent calls external APIs you provide credentials for (Notion, Dropbox, your own services, etc.).

Read the announcement
·Agent

Drag a past session into a new run to dig deeper

Attach a previous session to a new run and ask Webhound to go deeper on a specific part of it. Plan mode is now a real conversation, and you can split a research budget across phases.

When something interesting shows up in an old session, you don't have to start over — you can hand that whole session to a new run as context. Try things like: "Take the section on pricing from this attached session and dig deeper into competitor pricing." Webhound reads what was already done and only re-searches what's actually new.

Three ways to attach a past session: drag it from the workspace sidebar onto the chat box, type @ in the chat box and pick it by name, or click + → Attach Sessions. Folders work the same way. You can also just describe what you want — "look at my last 5 sessions on competitor analysis" — and Webhound will find and read them itself.

Plan mode is now a back-and-forth conversation about scope before the run starts, instead of a list of yes/no questions. Discuss what to focus on, what to skip, then kick off when you're happy with the plan.

Research phases let you tell Webhound how to spend its budget across the run — e.g. "spend 90% searching broadly, then 10% deciding" — so it doesn't lock onto an answer early and spend the rest of the budget confirming it.

How: drag a session from the left workspace onto the chat box, or type @ to pick one. To use phases, just say how you want the time split in your prompt, or refine it in Plan mode.

Read the announcement

February 2026

·UI

Ask follow-up questions over a finished report or dataset without starting a new run

A new Ask mode lets you query a report or dataset you've already built, with optional web lookups when needed. Guided was renamed to Plan and One shot to Agent.

Once a session is done, you don't have to start a new full run for one follow-up question. Switch to Ask and ask things like "give me a one-paragraph summary", "is the second recommendation on page 3 still valid?", or "how many rows have no email?" — Webhound answers from the existing document and only goes to the web if it actually needs to.

Mode names also changed: Guided is now Plan, One shot is now Agent. Behavior is the same; only the labels changed.

How: pick Ask from the mode dropdown (Plan | Agent | Ask) on any session you've already finished.

Read the announcement

November 2025

·Pricing

Invite friends to earn credits

Get $2 in credits when a friend signs up with your code, plus matching credits up to $50 when they make their first purchase.

You and the friend you invite both get rewarded — $2 each on signup with your code, plus matching credits up to $50 when they buy their first credits.

How: open Settings → Referrals to grab your personal invite link.

Read the announcement
·Pricing

Pay only for what you use, no subscriptions

Webhound switched off subscriptions. Buy credits when you want, pay only what each session actually costs.

No more monthly tiers, no more wasted seats. Top up credits whenever you like; Webhound charges only what each run actually costs.

How: open Settings → Billing to add credits.

Read the announcement