| Function | Description |
|---|---|
| add_msg_to_chat_history | Add a message to a chat history |
| add_text | Add text to a tidyprompt |
| answer_as_boolean | Make LLM answer as a boolean (TRUE or FALSE) |
| answer_as_integer | Make LLM answer as an integer (between min and max) |
| answer_as_json | Make LLM answer as JSON (with optional schema) |
| answer_as_key_value | Make LLM answer as a list of key-value pairs |
| answer_as_list | Make LLM answer as a list of items |
| answer_as_named_list | Make LLM answer as a named list |
| answer_as_regex_match | Make LLM answer match a specific regex |
| answer_as_text | Make LLM answer as a constrained text response |
| answer_by_chain_of_thought | Set chain-of-thought mode for a prompt |
| answer_by_react | Set ReAct mode for a prompt |
| answer_using_r | Enable LLM to draft and execute R code |
| answer_using_sql | Enable LLM to draft and execute SQL queries on a database |
| answer_using_tools | Enable LLM to call R functions |
| chat_history | Create or validate a 'chat_history' object |
| construct_prompt_text | Construct prompt text from a tidyprompt object |
| df_to_string | Convert a data frame to a string representation |
| extract_from_return_list | Extract a specific element from a list |
| get_chat_history | Get the chat history of a tidyprompt object |
| get_prompt_wraps | Get prompt wraps from a tidyprompt object |
| is_tidyprompt | Check if object is a tidyprompt object |
| llm_break | Create an 'llm_break' object |
| llm_feedback | Create an 'llm_feedback' object |
| llm_provider-class | LlmProvider R6 Class |
| llm_provider_google_gemini | Create a new Google Gemini LLM provider |
| llm_provider_groq | Create a new Groq LLM provider |
| llm_provider_mistral | Create a new Mistral LLM provider |
| llm_provider_ollama | Create a new Ollama LLM provider |
| llm_provider_openai | Create a new OpenAI LLM provider |
| llm_provider_openrouter | Create a new OpenRouter LLM provider |
| llm_provider_xai | Create a new xAI (Grok) LLM provider |
| llm_verify | Have LLM check the result of a prompt (LLM-in-the-loop) |
| persistent_chat-class | PersistentChat R6 Class |
| prompt_wrap | Wrap a prompt with functions that modify it and handle the LLM response |
| quit_if | Make evaluation of a prompt stop if LLM gives a specific response |
| r_json_schema_to_example | Generate an example object from a JSON schema |
| send_prompt | Send a prompt to an LLM provider |
| set_chat_history | Set the chat history of a tidyprompt object |
| set_system_prompt | Set system prompt of a tidyprompt object |
| skim_with_labels_and_levels | Skim a data frame and include labels and levels |
| tidyprompt | Create a tidyprompt object |
| tidyprompt-class | Tidyprompt R6 Class |
| tools_add_docs | Add tidyprompt function documentation to a function |
| tools_get_docs | Extract documentation from a function |
| user_verify | Have user check the result of a prompt (human-in-the-loop) |
| vector_list_to_string | Convert a named or unnamed list/vector to a string representation |
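
As a quick orientation to how these functions compose, below is a minimal sketch: a character prompt is wrapped with answer_as_integer() and evaluated against a provider via send_prompt(). The bounds (0 and 100), the example question, and the bare llm_provider_ollama() call (relying on whatever defaults the provider constructor supplies) are illustrative assumptions, not package defaults.

```r
library(tidyprompt)

# Create a provider (assumes a local Ollama server with default settings)
provider <- llm_provider_ollama()

# Wrap the prompt so the LLM must answer with an integer between 0 and 100,
# then send it to the provider and keep the validated result
answer <- "What is 2 + 2?" |>
  answer_as_integer(min = 0, max = 100) |>
  send_prompt(provider)

answer
```

The same pattern applies to the other answer_as_*() and answer_by_*() wraps: each adds instructions and validation to the prompt, and send_prompt() handles the retry loop with the provider until a valid response is obtained or evaluation stops.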