Sends a prompt to an LLM and returns the response content as a string.
Supports OpenAI, Anthropic, and any OpenAI-compatible endpoint (Ollama, Azure, vLLM, Groq, Together, etc.).
When tools is provided, uses native tool calling for OpenAI/Anthropic, or a JSON-based fallback for custom endpoints.
query_llm(
prompt,
system_prompt = "You are a data anonymization assistant.",
provider = c("openai", "anthropic", "custom"),
model = NULL,
api_key = NULL,
base_url = NULL,
temperature = 0,
tools = NULL
)

Arguments:

prompt: Character string with the user message.
system_prompt: Character string with the system message.
provider: One of "openai", "anthropic", or "custom". "custom" uses the
OpenAI-compatible format with a caller-supplied base_url.
model: Model identifier. If NULL, defaults to "gpt-4.1" for OpenAI and
"claude-sonnet-4-20250514" for Anthropic.
api_key: API key. If NULL, auto-detected from the environment variables
OPENAI_API_KEY, ANTHROPIC_API_KEY, or LLM_API_KEY.
base_url: Base URL for the API. Defaults per provider; required when
provider = "custom".
temperature: Sampling temperature (default 0 for deterministic output).
tools: Optional list of tool schemas (from get_tool_schemas()). When
provided, enables tool calling, and the function returns a structured
list instead of a string.
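A minimal usage sketch, assuming the package exporting query_llm() is attached and the relevant API key is set in the environment. The prompts, the Ollama model name, and the local URL are illustrative assumptions, not values prescribed by this help page:

```r
# Default OpenAI call; OPENAI_API_KEY is read from the environment
# and model defaults to "gpt-4.1".
answer <- query_llm(
  prompt   = "Replace all person names with [NAME]: Alice met Bob.",
  provider = "openai"
)

# OpenAI-compatible local endpoint (e.g. Ollama); base_url is required
# when provider = "custom". URL and model name are illustrative.
local_answer <- query_llm(
  prompt   = "Replace all person names with [NAME]: Alice met Bob.",
  provider = "custom",
  model    = "llama3.1",
  base_url = "http://localhost:11434/v1",
  api_key  = "ollama"  # many local servers accept any non-empty token
)
```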
Value:

When tools = NULL, the LLM response content as a character string.
When tools is provided, a list with $tool_calls (a list of normalized
tool calls, each with $tool and its parameters) and $content (text
content or NULL).
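When tools is supplied, the caller must branch on the structured return value described above. A dispatch sketch over the documented $tool_calls / $content shape; the result object is a hand-built mock (in real use it would come from query_llm(..., tools = get_tool_schemas())), and the tool names and parameter fields are illustrative:

```r
# Mock of the documented return structure: $tool_calls is a list of
# normalized calls, each with $tool plus its parameters; $content is
# text or NULL.
result <- list(
  tool_calls = list(
    list(tool = "redact_names",  text = "Alice met Bob."),
    list(tool = "redact_emails", text = "Contact: a@b.example")
  ),
  content = NULL
)

dispatched <- character(0)
if (length(result$tool_calls) > 0) {
  for (call in result$tool_calls) {
    # call$tool names the requested tool; remaining fields are its arguments
    dispatched <- c(dispatched, call$tool)
  }
} else {
  # No tool calls: fall back to the plain text content
  print(result$content)
}
```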