Honcho: Use Cases

Real-world examples of using Honcho through the built-in copilot or an AI assistant connected via MCP. The copilot is the primary interface for most users—a chat experience with full access to search, research, collaboration, and content management. External AI assistants connect via the MCP server for programmatic access.
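Connecting an external assistant typically means registering Honcho's MCP server in the assistant's client configuration. A hypothetical sketch in the common `mcpServers` config shape — the command name, endpoint, and environment variable below are placeholders, not Honcho's actual launch syntax:

```json
{
  "mcpServers": {
    "honcho": {
      "command": "honcho-mcp",
      "args": ["--endpoint", "https://honcho.example.com"],
      "env": { "HONCHO_API_KEY": "<your-api-key>" }
    }
  }
}
```

Once registered, the assistant can call Honcho's search, research, and content-management tools directly.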

Deep Analysis

Cross-topic synthesis

“How have central bank responses to inflation differed between the 2008 financial crisis and the post-COVID period? Draw on everything in my feeds.”

The assistant searches across years of indexed financial commentary, pulls relevant analysis from multiple sources, and synthesizes a comparison—threading together perspectives that would take hours to cross-reference manually. Every claim links back to a specific article, author, and date.

Track how expert opinion has shifted

“How has coverage of remote work evolved from the pandemic emergency through the return-to-office pushback? What shifted in the argument?”

The corpus preserves chronology. The assistant can trace a narrative arc—early emergency analysis, the productivity debate, the cultural backlash—and show how the same authors changed their positions over time. This is temporal reasoning that traditional search can’t do.

Rapid briefing assembly

“I have a board meeting tomorrow about AI regulation. Build me a briefing from my sources covering the EU approach, the U.S. approach, and the China approach.”

The assistant pulls articles across regions and regulatory frameworks, identifies the key analysts and their positions, and assembles a structured briefing with attributed quotes and contrasting viewpoints—in minutes instead of hours.

Pattern recognition across domains

“What patterns do my sources reveal about how different industries are adopting AI? Compare coverage of healthcare, finance, and education.”

The assistant searches across all indexed content by domain, identifies recurring themes—regulatory friction, workforce displacement, productivity claims—and surfaces cross-domain connections that no single-industry analyst would make.

Surface contrarian and minority views

“Find articles in my feeds where analysts disagreed with the consensus on interest rate direction. What arguments did they make, and how did things play out?”

The assistant searches for dissenting perspectives, maps them against subsequent events, and assesses which contrarian calls were right—a uniquely valuable capability when the corpus preserves both the prediction and the outcome.

Institutional memory

“Based on how the Financial Times has covered previous tech bubbles, what framework would their analysts likely apply to the current AI investment cycle?”

By synthesizing patterns across years of indexed analysis from a single source, the assistant can construct a plausible analytical framework for new situations—not prediction, but pattern-informed reasoning grounded in the source’s own editorial tradition.

Build a presentation from your research

“Design a presentation outline on the history of cryptocurrency regulation using only articles from my index.”

The assistant sequences your collected articles into a pedagogically sound structure—chronological narrative, key turning points, competing viewpoints—with each section grounded in specific sources you’ve already curated.

Network and influence mapping

“Which analysts appear most frequently in my feeds when the topic is semiconductor policy? What institutions do they represent?”

The assistant maps the expert network across your indexed content—identifying who covers what, which institutions are cited most, and how perspectives cluster. This is institutional knowledge that exists nowhere else in structured form.

Search Connectors

Query SEC filings on demand

“@sec-edgar Show me Apple's latest 10-K filing.”

The @ prefix targets a specific data connector. SEC EDGAR returns corporate filings—10-K, 10-Q, 8-K, proxy statements—fetched live at query time. Results are indexed so you can save, tag, and search them alongside your library content.
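A connector like this can be sketched against EDGAR's public JSON API. The helper below builds the real `data.sec.gov` submissions URL for a company's CIK; the function name and the filtering step are illustrative, not Honcho's implementation:

```python
def edgar_submissions_url(cik: int) -> str:
    """Build the SEC EDGAR submissions URL for a company.

    EDGAR expects the CIK zero-padded to ten digits.
    """
    return f"https://data.sec.gov/submissions/CIK{cik:010d}.json"

# Apple's CIK is 320193. Fetching this URL (with a descriptive
# User-Agent header, which EDGAR requires) returns the company's
# recent filings, which can then be filtered for form type "10-K".
url = edgar_submissions_url(320193)
```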

Search court opinions

“@courtlistener First Amendment cases involving social media platforms”

CourtListener provides access to U.S. court opinions, from the Supreme Court down to the district courts. Combine it with your library content to research legal topics across both primary sources and commentary.

Cross-reference library content with live data

“Compare what my sources say about unemployment trends with the latest FRED data.”

The copilot searches your library for commentary, queries the FRED connector for current Federal Reserve economic data, and synthesizes both into a single briefing that contrasts expert analysis with the actual numbers.
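The live-data half of that briefing can be sketched with the St. Louis Fed's public API. `UNRATE` is FRED's real series id for the civilian unemployment rate; the helper name and parameter choices are illustrative:

```python
from urllib.parse import urlencode

def fred_latest_observation_url(series_id: str, api_key: str) -> str:
    """Build a FRED observations URL that returns only the newest data point."""
    params = {
        "series_id": series_id,
        "api_key": api_key,
        "file_type": "json",
        "sort_order": "desc",  # newest observation first
        "limit": 1,
    }
    return "https://api.stlouisfed.org/fred/series/observations?" + urlencode(params)

url = fred_latest_observation_url("UNRATE", "your-api-key")
```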

Multi-source search

“@sec-edgar @courtlistener corporate liability for AI-generated content”

Stack multiple @ mentions to search several connectors in parallel. Results from each source are labeled so you can see where information came from.

Academic research

“@openalex recent papers on CRISPR gene therapy for sickle cell disease”

OpenAlex provides access to 270M+ scholarly works. arXiv covers preprints in science, math, and computer science. Both return structured results with authors, abstracts, and links to full text.
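An OpenAlex query like the one above maps roughly onto its REST `works` endpoint; `search` and `per-page` are real OpenAlex parameters, while the wrapper itself is a sketch rather than Honcho's connector code:

```python
from urllib.parse import urlencode

def openalex_works_url(query: str, per_page: int = 5) -> str:
    """Build an OpenAlex full-text search URL for scholarly works."""
    params = {"search": query, "per-page": per_page}
    return "https://api.openalex.org/works?" + urlencode(params)

url = openalex_works_url("CRISPR gene therapy sickle cell disease")
```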

Multi-Source Analysis

Build a dossier

“Build a dossier on SpaceX.”

The dynamic planner auto-executes a multi-step analysis: searches your library, queries relevant connectors, searches the web, and assembles a comprehensive entity profile with source attribution. Each section cites the entries it drew from.

Compare sources

“Compare CFR's Iran coverage to what's in the news this week.”

The planner searches the CFR group for institutional analysis, searches news connectors for current coverage, and produces a side-by-side comparison with source attribution showing where each perspective came from.

Find blind spots

“What am I missing on AI policy? Where are the gaps in my coverage?”

Gap analysis: the planner searches your library for what you have, searches the web and connectors for what’s being discussed elsewhere, and highlights topics and perspectives your sources don’t cover.

Track trends over time

“@plan How has climate coverage changed this quarter vs. last?”

The planner compares two time periods, identifying shifts in framing, new voices, and topics that gained or lost attention. Use the @plan prefix to invoke the planner explicitly.

Fact triangulation

“Verify: US GDP grew 3% in Q4 2025. What do my sources say?”

The planner searches your library, economic data connectors, and the web to triangulate a claim across multiple sources. The result includes a confidence rating and shows which sources agree, disagree, or add nuance.

Editions

Automated daily briefing

Configure a daily edition in the admin console with your key collections and sources.

Honcho generates a newspaper-style summary page every day, synthesizing the last 24 hours of content into a hero story, themed sections, and a source bibliography. The edition is published at a public URL—share the link and your team has a daily briefing without anyone writing it.

Weekly executive summary

Create a weekly edition that summarizes the past week’s daily editions.

Weekly editions synthesize from daily output rather than raw entries, producing broader narrative arcs and week-in-review analysis. Useful for executive summaries or team newsletters.

Branded publications

Add custom branding: logo, accent color, masthead, and footer.

Each edition carries its own visual identity. Add a style guide with editorial instructions for the AI writer’s tone and voice. The result looks like a curated publication, not a generated report.

Share externally

Create a share link with 30-day expiration.

Share links provide direct access to specific editions without requiring authentication. Set expiration (7, 30, or 60 days) and track open counts. Useful for sharing with clients, stakeholders, or subscribers.

Daily Workflow

Morning briefing

“Give me my morning briefing.”

The copilot pulls your personalized digest—entries from your favorited authors, sources, hosts, and saved searches—then summarizes them by topic. Your digest is shaped entirely by your favorites, so it gets more useful the more you follow.

Research a topic across your feeds

“What have I saved about tariffs?”

The copilot searches your indexed content and summarizes the results—pulling together perspectives from different sources you follow rather than doing a generic web search. Natural language queries are automatically translated to structured Lucene syntax—no need to learn query operators.

The admin UI has the same capability: toggle AI search on, type your question in plain English, and click Summarize to get a concise synthesis of what your results contain.
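The translation step can be sketched as escaping Lucene's reserved operators in user terms before assembling a fielded query. The escaping rules are standard Lucene syntax; the field names (`content`, `tag`) are hypothetical, not Honcho's actual schema:

```python
import re

# Characters and operators with special meaning in Lucene query syntax.
_LUCENE_SPECIALS = re.compile(r'([+\-!(){}\[\]^"~*?:\\/]|&&|\|\|)')

def escape_lucene(term: str) -> str:
    """Backslash-escape Lucene operators in a user-supplied term."""
    return _LUCENE_SPECIALS.sub(r"\\\1", term)

# "What have I saved about tariffs?" might become a fielded query like:
query = f'content:{escape_lucene("tariffs")} AND tag:saved'
```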

Compare your reading to the latest news

“What's the latest on the Supreme Court tariff case? How does it compare to what I've been reading?”

The assistant searches Honcho for your saved articles on the topic, does a web search for the latest developments, and gives you a combined briefing that highlights what's new vs. what you've already seen.

Save a web article with notes

“Save this article and tag it as 'research'—the key insight is that ocean acidification is accelerating faster than predicted.”

The copilot fetches the URL, extracts the content, indexes it with your tags, and includes your annotation as a note. The article becomes searchable alongside everything else in your library.

Informed writing

“Draft a blog post about AI regulation using what I've collected.”

The assistant searches your Honcho index for everything tagged or related to AI regulation, combines it with current web research, and drafts something grounded in the sources you've already curated.

The search → favorite → digest loop

“Search for articles about AI safety.”
(reviews results)
“Good stuff. Add that as a favorite so I see new articles in my briefings.”
(later)
“Morning briefing please.”

The core workflow: search to find content you care about, favorite the sources and topics, then get a personalized briefing whenever you want.

Group Collaboration

Post analysis to the team

“Write up a summary of the Q4 earnings and post it to the research group.”

The copilot drafts the summary from your indexed content and publishes it to the shared group feed. Teammates see it immediately in their own copilot or admin interface.

Share an article with commentary

“Share that Reuters article about the Fed with the research group, and add a note that it's relevant to our rate analysis.”

The copilot cross-posts the entry to the group. The original author is preserved, your note is prepended, and provenance metadata tracks who shared it and from where.

Catch up after time away

“I've been out for a week. Summarize what's happened in the research group.”

The copilot pulls recent group entries, reads through them, and gives you a synthesized briefing rather than just a list of titles.

Messaging

“Message the research group: the client meeting has been moved to Thursday.”

The copilot posts to the group’s shared feed. Messages, shared articles, and analysis all live in the same searchable stream.

Feed Management

Discover and add sources

“Find RSS feeds for The Verge and add them.”

The copilot discovers available feeds from the site URL, shows you what it found, and adds the ones you choose. Crawling starts automatically.
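Feed discovery conventionally works by scanning a page's `<head>` for RSS/Atom autodiscovery links. A minimal standard-library sketch — the parsing here is deliberately simplified (e.g. it ignores multi-valued `rel` attributes) and is not Honcho's crawler:

```python
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    """Collect hrefs of <link rel="alternate"> feed tags in a page."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in FEED_TYPES and a.get("href")):
            self.feeds.append(a["href"])

html = '<head><link rel="alternate" type="application/rss+xml" href="/rss/index.xml"></head>'
finder = FeedLinkFinder()
finder.feed(html)
# finder.feeds now holds the discovered feed URLs
```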

AI memory

“Remember that the client prefers weekly reports on Mondays.”

The copilot stores the note and can recall it in any future conversation. Memories persist across sessions—useful for preferences, project context, and recurring instructions.

Feedback and bug reports

“I have feedback: it would be great to have a dark mode option in the web UI.”

The copilot captures your feedback and routes it to the team. Use @bug to report issues—session context is included automatically so the team can diagnose without you having to explain every step.