Sends a prompt to an LLM and returns the response content as a string. Supports OpenAI, Anthropic, and any OpenAI-compatible endpoint (Ollama, Azure, vLLM, Groq, Together, etc.). When tools is provided, the function uses native tool calling for the OpenAI and Anthropic providers, falls back to a JSON-based protocol for custom providers, and returns a structured list instead of a string.

query_llm(
  prompt,
  system_prompt = "You are a data anonymization assistant.",
  provider = c("openai", "anthropic", "custom"),
  model = NULL,
  api_key = NULL,
  base_url = NULL,
  temperature = 0,
  tools = NULL
)

Arguments

prompt

Character string with the user message.

system_prompt

Character string with the system message.

provider

One of "openai", "anthropic", or "custom". "custom" uses the OpenAI-compatible format with a custom base_url.

model

Model identifier. If NULL, defaults to "gpt-4.1" for OpenAI, "claude-sonnet-4-20250514" for Anthropic.

api_key

API key. If NULL, auto-detected from environment variables: OPENAI_API_KEY, ANTHROPIC_API_KEY, or LLM_API_KEY.

base_url

Base URL for the API. Defaults per provider; required for "custom".

temperature

Sampling temperature. Defaults to 0 for the most deterministic output available (note that temperature 0 reduces but does not guarantee run-to-run reproducibility).

tools

Optional list of tool schemas (from get_tool_schemas()). When provided, enables tool calling and returns a structured list instead of a string.

Value

When tools = NULL, the LLM response content as a character string. When tools is provided, a list with two elements: $tool_calls, a list of normalized tool calls (each itself a list containing $tool, the tool name, plus the parameters the model supplied), and $content, any accompanying text content (NULL if none).
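
Examples

A minimal usage sketch. It assumes valid API keys are set in the environment, and that get_tool_schemas() (referenced under the tools argument) is available; the local model name and base_url in the "custom" call are illustrative placeholders, not defaults.

  # Basic call with the OpenAI defaults; key read from OPENAI_API_KEY
  answer <- query_llm("Replace all person names in: 'Alice met Bob.'")

  # Anthropic with an explicit model
  answer <- query_llm(
    "Summarize this record.",
    provider = "anthropic",
    model    = "claude-sonnet-4-20250514"
  )

  # OpenAI-compatible local endpoint (e.g. Ollama); base_url is
  # required when provider = "custom"
  answer <- query_llm(
    "Redact email addresses in this text.",
    provider = "custom",
    model    = "llama3.1",                      # placeholder model name
    base_url = "http://localhost:11434/v1",     # placeholder endpoint
    api_key  = "ollama"   # many local servers accept any non-empty key
  )

  # Tool calling: the return value is a list, not a string
  res <- query_llm(
    "Anonymize the attached text.",
    tools = get_tool_schemas()
  )
  if (length(res$tool_calls) > 0) {
    first <- res$tool_calls[[1]]
    first$tool        # name of the tool the model chose to call
  } else {
    res$content       # plain text reply when no tool was called
  }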