Milestone 3 - Template Iteration
Templates and how an LLM is prompted have a deep impact on response quality. Given their large impact, there should be tooling around prompt iteration and the quality of the resulting responses. The first step is to capture as much of this data as possible - notably the prompts embedded in frameworks like LlamaIndex and LangChain, as well as prompt-serving layers like PromptLayer.
As an AI engineer, I don't know what prompts are being used by the frameworks. I want to be able to adjust the baked-in templates to improve generation.
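As a concrete illustration, the sketch below shows one way an engineer might surface and override a framework's baked-in template. It assumes a LlamaIndex version whose query engines expose `get_prompts()` / `update_prompts()`; import paths and prompt key names vary across releases, so treat the specifics as assumptions.

```python
# Sketch only: assumes a LlamaIndex release with get_prompts()/update_prompts();
# import paths and prompt keys differ across versions. Requires an OpenAI API key
# and a local "data" directory to actually run.
from llama_index import SimpleDirectoryReader, VectorStoreIndex
from llama_index.prompts import PromptTemplate

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())
query_engine = index.as_query_engine()

# Inspect the baked-in templates the framework will actually send to the LLM.
for name, prompt in query_engine.get_prompts().items():
    print(name, "->", prompt)

# Swap in an adjusted template for the QA step (the prompt key name may vary).
custom_qa = PromptTemplate(
    "Context:\n{context_str}\n\n"
    "Answer the question concisely and cite the context.\n"
    "Question: {query_str}\n"
)
query_engine.update_prompts({"response_synthesizer:text_qa_template": custom_qa})
```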
A clear demarcation between the prompt template and its variables will enable a new class of analytics, notably statistics on and drift of variable values.
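To make that idea concrete, here is a minimal sketch of capturing the template and its variables as separate fields, and of the kind of per-variable drift statistic that separation enables. The `PromptRecord` structure and attribute names are hypothetical, not an existing schema.

```python
# Minimal sketch: PromptRecord is a hypothetical structure, not an existing schema.
from dataclasses import dataclass, field

@dataclass
class PromptRecord:
    template: str                                   # e.g. "Summarize: {doc}"
    variables: dict = field(default_factory=dict)   # e.g. {"doc": "..."}

    def render(self) -> str:
        # The fully rendered prompt is derived, not the primary stored artifact.
        return self.template.format(**self.variables)

def variable_length_drift(records: list, var: str) -> float:
    """Toy drift signal: change in the average length of a variable's value
    between the first and second half of the records."""
    lengths = [len(r.variables.get(var, "")) for r in records]
    mid = len(lengths) // 2
    first, second = lengths[:mid], lengths[mid:]
    if not first or not second:
        return 0.0
    return abs(sum(second) / len(second) - sum(first) / len(first))

# Because template and variables are stored separately, per-variable statistics
# (value counts, lengths, drift over time) can be computed without re-parsing prompts.
records = [
    PromptRecord("Summarize: {doc}", {"doc": "short text"}),
    PromptRecord("Summarize: {doc}", {"doc": "a much longer document " * 20}),
]
print(variable_length_drift(records, "doc"))
```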
It's critical to capture all parameters that impact the generation. While these are readily available for OpenAI models, we will have to better understand the rest of the ecosystem. At a minimum we should support OpenAI, Anthropic, Cohere, OpenAI (Azure), and Llama 2.
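As a sketch of what capturing invocation parameters across providers could look like, the snippet below normalizes OpenAI-style chat-completion kwargs into a common record. The field names and the mapping are assumptions for illustration, not a finalized schema.

```python
# Sketch only: field names and per-provider mappings are assumptions, not a final schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InvocationParameters:
    provider: str                  # e.g. "openai", "anthropic", "cohere", "azure_openai", "llama_2"
    model: str
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    max_tokens: Optional[int] = None
    stop: Optional[list] = None
    extra: Optional[dict] = None   # provider-specific knobs we don't yet normalize

def from_openai(kwargs: dict) -> InvocationParameters:
    """Map OpenAI-style chat-completion kwargs into the common record."""
    known = {"model", "temperature", "top_p", "max_tokens", "stop"}
    return InvocationParameters(
        provider="openai",
        model=kwargs.get("model", ""),
        temperature=kwargs.get("temperature"),
        top_p=kwargs.get("top_p"),
        max_tokens=kwargs.get("max_tokens"),
        stop=kwargs.get("stop"),
        extra={k: v for k, v in kwargs.items() if k not in known},
    )

params = from_openai({"model": "gpt-4", "temperature": 0.2, "max_tokens": 256, "n": 1})
print(params)
```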