Honcho

A content intelligence platform that gives AI assistants secure access to your organization’s private documents and internal APIs. Ingest documents, feeds, and archives into a single searchable index, and connect AI tools to the systems your team actually uses.

The Problem

AI assistants have no access to your organization’s private documents or internal systems. Out of the box, they can’t search your research archive, your contracts, your internal wikis, or the data sitting behind your own APIs. The most useful information for your team is exactly the information the model has never seen.

Honcho closes that gap. Ingest your organization’s documents and connect to internal APIs through pluggable connectors, and AI assistants can search, retrieve, and cite real source material—with answers that link back to the originals.

Honcho gives AI assistants a secure, searchable view of the documents and systems your organization already runs on.
  • Private documents: Internal reports, archives, knowledge bases, and contracts indexed for search and retrieval.
  • Internal APIs: Pluggable connectors expose your own services to AI assistants alongside indexed content.
  • Provenance: Every answer cites the actual entry. You can always check the original.

The Short Version

Honcho is a content intelligence platform that indexes large collections of documents—research archives, institutional publications, news corpora, internal knowledge bases—and makes them searchable through a built-in AI chat interface. Import tens of thousands of articles from a CMS, bulk-load a document archive, or crawl hundreds of feeds over time. Everything lands in the same full-text index.

Honcho ships with Copilot, a built-in AI assistant that turns the entire index into a conversation. Ask “how have expert views on China’s economy evolved over the past decade?” and Copilot synthesizes across thousands of indexed documents—threading together analysis, tracking how positions shifted, surfacing connections that no keyword search would find. Combine results from your library with live web content and internal APIs to compare, verify, and fill gaps. External agents and tools like Claude can integrate over MCP for the same access.

Content flows in from many directions: bulk importers for CMS migrations and document archives, an extract API with declarative rules for structured data sources, pull replication from WordPress and other CMS platforms, feed crawling for ongoing sources, pluggable connectors for custom integrations, and AI assistants that save articles and notes on your behalf.

It supports multiple users and groups, so a team can share a common pool of content and collaborate through their AI assistants. An analyst can summarize a report and share it to the team’s feed. A writer can research across the entire corpus and send findings to colleagues. A shared knowledge base builds organically without anyone copying and pasting links around.

Honcho is also a persistence layer for AI workflows. Agent apps that summarize articles, monitor topics, or produce analysis can write their output back to Honcho, where it becomes searchable alongside everything else. The content you curate and the content your tools produce all live in one place.

Most importantly, Honcho gets smarter as you use it. Every entry you save, every note you add, every article you mark as important becomes a curated relevance signal that shapes future answers. Rather than stuffing the AI’s context window with whatever matches the keywords, Honcho prioritizes your own research and annotations—so the AI answers in the context of what you already know and think, not just what’s in the library.
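The curation loop above can be sketched as a simple re-ranking pass over search hits. This is an illustrative model only; the signal names and weights below are assumptions, not Honcho's actual scoring.

```python
# Illustrative re-ranking: curated engagement signals boost raw search scores.
# Signal names and weights are hypothetical, not Honcho's actual formula.

SIGNAL_WEIGHTS = {"saved": 2.0, "annotated": 1.5, "marked_important": 3.0}

def rerank(results):
    """Sort search hits so curated entries outrank plain keyword matches."""
    def curated_score(hit):
        boost = sum(SIGNAL_WEIGHTS[s] for s in hit.get("signals", []))
        return hit["score"] + boost
    return sorted(results, key=curated_score, reverse=True)

hits = [
    {"id": "a", "score": 4.0, "signals": []},
    {"id": "b", "score": 2.5, "signals": ["saved", "annotated"]},
]
# Entry "b" scores lower on keywords alone, but its saved/annotated
# signals lift it above "a" after re-ranking.
ranked = rerank(hits)
```

The point of the sketch: the boost is pre-computed from user behavior, so at answer time the AI's context is filled with what the user has already judged relevant rather than whatever matched the keywords.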

What does this look like in practice?

Cross-topic synthesis across years of financial commentary. Briefings assembled from dozens of sources in minutes. Expert opinion tracked over time. Pattern recognition across industries.

See real use cases →

What It Does

Most content platforms are assembled from separate services—a CMS for storage, Elasticsearch for search, a feed reader for ingestion, S3 for assets, a custom API layer to glue it all together. Honcho replaces that stack with a single integrated system deployed on your infrastructure.

  • Aggregate: Crawl feeds and HTML sources on configurable schedules. A declarative rules engine lets you define per-site extraction logic—CSS selectors, field mappings, timestamp parsing, fallback chains—in a config file instead of code. Built-in deduplication and change detection keep the index clean.
  • Ingest: Content doesn’t just come from feeds. Pull replication syncs from WordPress and other CMS platforms with incremental updates. A pluggable connector API supports custom Java connectors dropped in as JARs. AI assistants save articles and notes via MCP. A save-URL endpoint supports bookmarklets and mobile shortcuts. An extract API accepts raw JSON, HTML, or XML and runs it through extraction rules. Bulk importers handle CMS migrations. Everything lands in the same index.
  • Index: Full-text search powered by Lucene with a configurable text analysis pipeline. Boolean queries, time-range filters, tag and topic facets, custom numeric fields, and configurable relevance scoring.
  • Serve: GraphQL, REST, and MCP (Model Context Protocol) endpoints for search, retrieval, and content distribution. No rendering opinions—bring your own frontend, feed reader, mobile app, or AI assistant.
  • Replicate: Push and pull replication between instances and from external CMS platforms. WordPress connector with incremental sync, pluggable connector API for custom sources. Distribute content across organizational boundaries with topic-based routing.
Content flows from crawl to index to API to replication as a single transaction with a single data model.
Concern     Typical Stack               Honcho
Storage     PostgreSQL / DynamoDB       Built-in
Search      Algolia / Elasticsearch     Built-in
Crawling    Scrapy / custom crawlers    Built-in
Ingestion   Custom importers / ETL      Built-in (MCP, save URL, extract API, bulk import)
Assets      S3 / cloud storage          Built-in
API         Custom REST / GraphQL       Built-in (GraphQL + REST + MCP)
Sync        Custom ETL / webhooks       Built-in
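The declarative extraction rules described under Aggregate and Ingest can be sketched in miniature. The rule shape below (dotted source paths with ordered fallback chains) is a hypothetical illustration of the idea, not Honcho's actual rules DSL.

```python
# Illustrative sketch of declarative extraction with fallback chains.
# The rule format here is hypothetical; Honcho's actual DSL may differ.
from datetime import datetime

# Each field maps to an ordered chain of source paths; the first path that
# resolves wins, so per-site quirks live in config rather than code.
RULES = {
    "title":     ["headline", "meta.og_title"],
    "published": ["published_at", "meta.date"],
}

def lookup(doc, dotted_path):
    """Resolve a dotted path like 'meta.date' against a nested dict."""
    for key in dotted_path.split("."):
        if not isinstance(doc, dict) or key not in doc:
            return None
        doc = doc[key]
    return doc

def extract(doc, rules):
    """Apply each field's fallback chain, then parse the timestamp."""
    entry = {}
    for field, chain in rules.items():
        entry[field] = next(
            (v for path in chain if (v := lookup(doc, path)) is not None), None
        )
    if entry.get("published"):
        entry["published"] = datetime.fromisoformat(entry["published"])
    return entry

# A source that lacks the preferred fields falls through to the backups.
raw = {"meta": {"og_title": "Q3 Earnings", "date": "2024-10-01T09:00:00"}}
entry = extract(raw, RULES)
```

The same fallback idea extends to CSS selectors over HTML and XPath over XML; the value of the declarative form is that adding a new source means adding a rules entry, not writing an importer.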

What Makes It Different

  • Search is built in, not bolted on: The Lucene index is the primary read path. Content is indexed at write time with no sync lag. The query API exposes the full power of Lucene: phrase queries, field-scoped search, boolean composition, range-boosted relevance, and custom numeric fields for domain-specific ranking.
  • Structured content fragments: Content is modeled as typed fragments—paragraphs, headings, pull quotes, captions, recipe steps. Each fragment type is indexed separately, so you can search within specific block types: find all entries where a heading contains “earnings guidance,” or where a pull quote mentions “supply chain.”
  • Declarative content extraction: A rules DSL lets you define how to extract entries from any JSON, HTML, or XML source—selectors, extractors, transforms, timestamp parsers—without writing code. Rules compose via config layering: define a base for a feed or a WordPress connector, then override only the fields that differ per source. You can develop rules conversationally through an AI assistant—paste in your raw content, iterate on the rules until the extraction is right, then save them.
  • AI-powered search, analysis & publishing: Ask questions in plain English and the search engine does the right thing. Natural language queries are automatically translated to structured Lucene syntax. Search connectors extend your reach to live external data—SEC filings, court opinions, academic papers, economic indicators, global news—queried on demand alongside your library. A dynamic planner auto-executes multi-step analysis: dossiers, source comparisons, gap analysis, trend tracking, and fact triangulation across all available sources. Editions turn your content pipeline into an automated publication—AI-curated daily and weekly briefings with custom branding, scheduling, and shareable URLs. Click Summarize to synthesize any set of search results. Each user configures their own API key and model preference; usage is tracked per-user with full transparency.
  • Curated context, not brute-force retrieval: Standard RAG pipelines start from zero every query—keyword match, top-K results, hope for the best. Honcho builds a curated relevance layer from user engagement: every save, annotation, and boost is a pre-computed signal that no retrieval-time ranking can replicate. The AI answers in the context of what the user already knows and thinks—their notes, their perspective, their research trail—not just whatever matched the search terms.
  • One system, not six: Each service in the typical stack is another deployment, another set of credentials, another failure mode, another thing to keep in sync. Honcho’s tight integration eliminates the boundaries where things break.
Most content platforms are assembled from parts that weren’t designed to work together. Honcho was built as one system from the start—search, storage, ingestion, and API share the same data model, the same transaction, and the same deployment. Your data stays on your infrastructure.
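Fragment-scoped search, as described above, amounts to filtering over typed blocks rather than whole documents. A minimal sketch; the field names (`type`, `text`, `fragments`) are illustrative assumptions, not Honcho's actual schema.

```python
# Illustrative sketch of fragment-scoped search: each block of an entry is
# typed, so a query can target a specific block type. Schema is hypothetical.

entries = [
    {"id": "e1", "fragments": [
        {"type": "heading", "text": "Earnings guidance raised for FY25"},
        {"type": "paragraph", "text": "Margins improved across segments."},
    ]},
    {"id": "e2", "fragments": [
        {"type": "paragraph", "text": "Earnings guidance was unchanged."},
    ]},
]

def search_fragments(entries, fragment_type, phrase):
    """Return IDs of entries with a fragment of the given type containing phrase."""
    return [
        e["id"]
        for e in entries
        if any(
            f["type"] == fragment_type and phrase.lower() in f["text"].lower()
            for f in e["fragments"]
        )
    ]

# Only e1 matches: e2 mentions the phrase, but in a paragraph, not a heading.
matches = search_fragments(entries, "heading", "earnings guidance")
```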

Architecture

Built on standard enterprise infrastructure that any Java team can deploy and maintain. No exotic dependencies, no cloud-specific lock-in, no operational surprises.

  • Runtime: Java on Jetty with Jakarta Servlet.
  • Search engine: Lucene with unified numeric fields, near-real-time search with searcher warm-up, and proper FILTER vs MUST clause handling.
  • API layer: GraphQL for flexible queries, REST for JSON, Feed, and Sitemap output, and MCP (Model Context Protocol) for AI assistant integration.
  • Data model: Protocol Buffers as the internal data model and wire format, with a purpose-built JSON encoder for browser and API clients.
  • Instrumentation: Dropwizard Metrics on every significant operation—search latency, indexing throughput, crawl rates, storage I/O.
Every dependency is production-grade open source with a permissive license—Apache 2.0, MIT, or BSD. Deploy anywhere, license freely, and know exactly what you’re running.
Java · Jetty · Jakarta Servlet · Lucene · Protocol Buffers · GraphQL · Java MCP SDK · MariaDB · Maven · Dropwizard Metrics

Built-in AI Assistant

Honcho includes Copilot, a built-in AI chat interface that gives your team instant access to the entire index through natural conversation. Search, get briefings, save and organize research, collaborate with teammates—all without leaving the browser.

  • Natural language search: Ask questions in plain English. The copilot translates them to structured Lucene queries and understands tags, topics, authors, date ranges, and type filters. Use @ to target a specific group or data connector.
  • Search connectors: Extend your reach beyond the library to live external data sources: SEC EDGAR corporate filings, CourtListener court opinions, OpenAlex scholarly works, arXiv preprints, FRED economic data, GDELT global news, Google News, Bing News, World Bank, WHO, NASA, and more. Connectors fetch live data at query time.
  • Deep research: For complex questions, the copilot searches your library, the web, and data connectors, then synthesizes a cited briefing. Results can be saved as library entries.
  • Multi-source analysis: The dynamic planner auto-executes multi-step analysis across sources: dossiers, source comparisons, gap analysis, trend tracking, briefings, and fact triangulation. Every analysis includes source attribution.
  • Personalized digest: Follow authors, sources, hosts, and search queries. Ask “give me my briefing” for a curated feed of what matters to you, updated in real time.
  • Save, annotate, organize: Save URLs, create entries, add notes, tag and organize—all through conversation. Everything you save becomes context for future answers.
  • Document upload: Upload documents directly into the copilot for analysis, summarization, and indexing.
  • Chat history: Conversations are saved and resumable. Fork a conversation to explore a tangent without losing your place.
  • Style guide: Administrators set a system-wide editorial style guide that shapes all AI-generated text—tone, voice, formatting—so output matches your organization’s standards.
The copilot isn’t a chatbot bolted onto a search API. It has full read-write access to the index, understands your research history, and prioritizes your own notes and annotations when building answers. The more you use it, the better it gets.
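The @-targeting syntax mentioned under natural language search might be handled along these lines. The grammar here (a single leading @token selecting a group or connector) is an assumption for illustration, not Honcho's documented syntax.

```python
# Hypothetical parse of the copilot's "@" targeting syntax: an optional
# leading "@name" token selects a group or connector; the rest is the query.

def parse_message(message):
    """Split an optional leading @target from the free-text query."""
    target = None
    if message.startswith("@"):
        head, _, rest = message.partition(" ")
        target, message = head[1:], rest
    return {"target": target, "query": message.strip()}

parsed = parse_message("@research how did guidance change this quarter?")
# parsed["target"] is "research"; the rest is passed on as the query.
```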

Editions

AI-curated briefings that synthesize your indexed content into newspaper-style summary pages. Configure which sources to draw from, set a schedule, and Honcho generates a branded publication automatically.

  • Daily & weekly: Daily editions synthesize entries from the last 24 hours into a hero story, themed sections, and source bibliography. Weekly editions summarize a week of daily output into broader narrative arcs.
  • Scheduled generation: Set editions to generate automatically on a daily or weekly schedule, or trigger manually.
  • Custom branding: Each edition can carry its own masthead, logo, favicon, accent color, and footer. Style guides provide editorial instructions for the AI writer’s tone and voice.
  • Public URLs & sharing: Each edition has a public URL at /edition/{slug} with a browsable archive index. Create expirable share links (7, 30, or 60 days) for external distribution.
  • API access: JSON and HTML export endpoints for integration with email newsletters, websites, or other systems.
Editions turn your content pipeline into a publication pipeline. Instead of asking “what happened today?” every morning, the answer is already written, formatted, and waiting at a URL you can share.

MCP Server

Honcho includes a built-in Model Context Protocol server, so AI assistants like Claude can search, retrieve, and create content directly. Point any MCP client at the /mcp endpoint and the entire index becomes conversational. See practical use cases →

Tool            What It Does
search_content  Full-text search with host, author, date range, tag, topic, type, and sort filters. Natural language queries are automatically enhanced to Lucene syntax when LLM integration is enabled.
get_entry       Retrieve a single entry by UID, or the most recent entry.
get_entries     Retrieve multiple entries by UID in a single call.
create_entry    Create a new entry with title, content, tags, topics, type, author, and metadata.
update_entry    Update fields on an existing entry—replace or append tags/topics, merge metadata.
delete_entry    Soft-delete an entry from the database and search index.
tag_entries     Add or remove tags from entries matching a search query.
get_status      Account overview—sources, entries, groups, and favorites.
save_memory     Save a note that can be recalled in future conversations.
recall_memory   Search or list saved memories.
send_feedback   Report bugs, request features, or provide feedback.

The MCP server exposes a focused set of core tools. Source management, favorites, digest, feed discovery, group collaboration, and other operational features are available through the built-in copilot and admin UI.
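An MCP client invokes these tools as JSON-RPC 2.0 `tools/call` requests, per the Model Context Protocol. A sketch of what a `search_content` call might look like on the wire; the argument spellings inside `arguments` are assumptions based on the table above.

```python
# Sketch of an MCP tools/call request for search_content, built as the
# JSON-RPC 2.0 payload an MCP client would send. Argument names assumed.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_content",
        "arguments": {
            "query": "supply chain",
            "tags": ["earnings"],
            "sort": "newest",
        },
    },
}

# Serialize for transport over the MCP endpoint (e.g. streamable HTTP).
wire = json.dumps(request)
```

In practice an MCP client library (such as the official SDKs) builds this envelope for you; the point is that each row in the tool table is addressable by name with structured arguments.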

  • Read and write: AI assistants can search and retrieve content, but also create entries, update metadata, manage tags, and curate the index—all through the same MCP interface.
  • Structured retrieval: Content fragments let MCP tools return specific block types—code examples, definitions, key paragraphs—rather than dumping entire documents into the context window.
  • Multi-source aggregation: Hundreds of curated sources through one interface—industry news, internal docs, regulatory updates—without the model needing to know where each piece lives.
  • Real-time content: Continuous crawling and near-real-time indexing. Content is searchable within seconds of arrival.
  • AI memory: Assistants can save and recall notes across conversations. “Remember that the client prefers weekly reports on Mondays”—and it’s there next time you ask.
  • Multi-user isolation: Each authenticated user sees only their assigned sources. OAuth with DB-backed tokens provides secure, persistent access.

Use Cases

  • Research & analysis: Analysts search across the organization’s entire corpus, combine internal documents with web sources, and share synthesized findings with their team. The AI grounds every answer in actual source material—no hallucination, no guesswork.
  • Team knowledge base: Groups organize teams and content. An analyst summarizes a report and shares it to the research feed. A writer pulls from the archive and sends findings to colleagues. Shared knowledge builds organically through AI-mediated collaboration.
  • Content aggregation and monitoring: Crawl and normalize hundreds of sources—industry publications, competitor blogs, wire services, regulatory feeds—into a single searchable interface. Automated digest delivers briefings from the sources and topics that matter to each user.
  • CMS integration & replication: Pull content from WordPress and other CMS platforms with incremental sync. Pluggable connector API for custom sources. Bulk importers for migrations. Everything lands in the same searchable index.
  • Headless search API: Lucene-quality full-text search as a service. Boolean queries, faceted filtering, configurable relevance, time-range constraints, and custom ranking signals—via GraphQL, REST, and MCP endpoints.
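The boolean, field-scoped, and range query surface described in the headless search use case maps onto standard Lucene query syntax. A sketch of composing such a query string from structured filters; the field names (`tag`, `published`) are assumptions, not Honcho's actual schema.

```python
# Sketch: compose a Lucene query string from structured filters. Field names
# (tag, published) are illustrative, not Honcho's actual index schema.

def build_query(phrase=None, field_terms=None, date_range=None):
    """Assemble standard Lucene query syntax from structured inputs."""
    clauses = []
    if phrase:
        clauses.append(f'"{phrase}"')                      # phrase query
    for field, term in (field_terms or {}).items():
        clauses.append(f"{field}:{term}")                  # field-scoped term
    if date_range:
        start, end = date_range
        clauses.append(f"published:[{start} TO {end}]")    # range filter
    return " AND ".join(clauses)

q = build_query(
    phrase="supply chain",
    field_terms={"tag": "earnings"},
    date_range=("2024-01-01", "2024-12-31"),
)
# q: '"supply chain" AND tag:earnings AND published:[2024-01-01 TO 2024-12-31]'
```

This is the kind of query string a natural-language question would be translated into before hitting the index, whether it arrives via GraphQL, REST, or MCP.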

Status

Honcho is actively developed and available for licensing, collaboration, or investment.

Designed for on-premise and private cloud deployment—your documents, indexes, and user data stay on your infrastructure. Encrypted backups with cloud KMS integration support compliance and retention requirements. No external dependencies for core functionality; AI features use your organization’s own API keys with the provider of your choice. If you are interested in licensing, collaboration, or investment, please get in touch.