# AI Query Assistant

`rfx ask` lets you query your codebase using natural language. It combines Reflex’s search capabilities with an LLM to answer questions about your code.

Configure your LLM provider:

```sh
rfx llm config
```

This interactive wizard sets up your API key, provider, and model. Configuration is stored in `~/.reflex/config.toml`.
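The exact schema of the generated file isn’t documented here; a plausible `~/.reflex/config.toml` might look like the sketch below. All keys and values are illustrative assumptions, not the confirmed format — run `rfx llm config` rather than writing the file by hand.

```toml
# Hypothetical layout — the real file is generated by `rfx llm config`
[llm]
provider = "anthropic"    # or "openai", "openrouter"
model = "claude-sonnet"   # provider-specific model name
api_key = "sk-..."        # stored by the wizard
```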

Check your current setup:

```sh
rfx llm status
```

The supported providers and models:

| Provider | Models |
| --- | --- |
| Anthropic | Claude Sonnet, Claude Opus, Claude Haiku |
| OpenAI | GPT-4o, GPT-4o-mini |
| OpenRouter | Any model available on OpenRouter |

Start a conversation about your code:

```sh
rfx ask
```

This opens an interactive session where you can ask follow-up questions with full conversation history. The assistant searches your codebase to ground its answers.

```
>> How does authentication work in this project?
>> What files would I need to change to add OAuth support?
>> Show me the error handling pattern used in the API handlers.
```

Ask a single question from the command line:

```sh
rfx ask "What does the handleRequest function do?"
```

The `--answer` flag returns a direct answer grounded in your code:

```sh
rfx ask "What database does this project use?" --answer
```

The `--agentic` flag enables multi-step reasoning, letting the assistant run multiple searches to build a comprehensive answer:

```sh
rfx ask "How would I add a new API endpoint?" --agentic
```

In agentic mode, the assistant iteratively searches your codebase, follows dependency chains, and synthesizes a thorough answer.

The `--execute` flag runs the query and returns raw search results without LLM interpretation:

```sh
rfx ask "Find all TODO comments" --execute
```

Override the configured provider for a single query:

```sh
rfx ask "Explain the auth flow" --provider anthropic
rfx ask "Explain the auth flow" --provider openai
```
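For scripting, the flag combinations above can be assembled programmatically. A minimal sketch in Python, using only the flags documented on this page (`--answer`, `--agentic`, `--execute`, `--provider`); `build_ask_command` is a hypothetical helper for illustration, not part of Reflex:

```python
import subprocess

def build_ask_command(question, provider=None, mode=None):
    """Assemble an `rfx ask` argv list.

    mode: None, "answer", "agentic", or "execute" — the flags documented above.
    """
    cmd = ["rfx", "ask", question]
    if mode is not None:
        cmd.append(f"--{mode}")
    if provider is not None:
        cmd += ["--provider", provider]
    return cmd

# e.g. subprocess.run(build_ask_command("Explain the auth flow", provider="openai"))
```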

If your project has a `REFLEX.md` file in the root, `rfx ask` automatically includes it as context. This helps the assistant understand your project’s architecture, conventions, and terminology. See Configuration for details.
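The page doesn’t prescribe a format for `REFLEX.md`; a sketch of the kind of context that helps the assistant, with headings and content that are purely illustrative:

```md
# Project context

## Architecture
Monorepo: `api/` holds the Go HTTP handlers, `web/` the TypeScript frontend.

## Conventions
- Errors are wrapped, never swallowed, in the API handlers.
- "Tenant" in this codebase means a billing account, not an individual user.
```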