edited playground & llm settings articles (#1110)

evintunador authored Dec 16, 2023
1 parent 8e8ac3b commit cc81b4b
Showing 6 changed files with 45 additions and 12 deletions.
Binary file modified docs/assets/basics/openai_mode.webp
Binary file added docs/assets/basics/openai_system_prompt.webp
24 changes: 21 additions & 3 deletions docs/basics/configuration_hyperparameters.md
import Max from '@site/docs/assets/basics/openai_maximum_length.webp';

<div className="flex flex-col sm:flex-row justify-between">
<div>
The maximum length is the total number of tokens the AI is allowed to generate. This setting is useful because it lets you manage the length of the model's response, preventing overly long or irrelevant output. The length is shared between the <code>USER</code> input in the Playground box and the <code>ASSISTANT</code> generated response. Notice how, with a limit of 256 tokens, our PirateGPT from earlier is forced to cut its story short mid-sentence.
</div>
<div className="mt-4 sm:mt-0 sm:ml-auto">
<img src={Max} className="img-docs w-20 sm:w-auto" />
</div>
</div>

import max_length_example from '@site/docs/assets/basics/openai_maximum_length_example.webp';

<br/>
<div style={{textAlign: 'center'}}>
<img src={max_length_example} className="img-docs" style={{width: "80%"}}/>
</div>

:::note
This also helps control cost if you're paying for use of the model through the API rather than using the Playground.
:::
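To make the cutoff concrete, here is a toy sketch of how a response gets truncated once the token budget runs out. This is not OpenAI's real tokenizer (which splits text into subword pieces, not words) and the function name is ours; it just splits on whitespace for illustration:

```python
# Toy illustration of a maximum-length cutoff (NOT OpenAI's tokenizer:
# we naively treat each whitespace-separated word as one token).

def truncate_to_max_tokens(text: str, max_tokens: int) -> str:
    """Keep only the first `max_tokens` whitespace-separated tokens."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

story = "Arr matey once upon a time a brave pirate sailed the seven seas"
print(truncate_to_max_tokens(story, 5))  # → "Arr matey once upon a"
```

Just like PirateGPT's story above, the output simply stops mid-sentence once the budget is spent.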

## Other LLM Settings

There are many other settings that can affect language model output, such as stop sequences and frequency and presence penalties.

### Stop Sequences

import Stop from '@site/docs/assets/basics/openai_stop_sequences.webp';

<div className="flex flex-col sm:flex-row justify-between">
<div>
Stop sequences tell the model when to cease output generation, which allows you to control content length and structure. If you are prompting the AI to write an email, setting "Best regards," or "Sincerely," as the stop sequence ensures the model stops before the closing salutation, keeping the email short and to the point. Stop sequences are useful for output you expect in a structured format, such as an email, a numbered list, or dialogue.
</div>
<div className="mt-4 sm:mt-0 sm:ml-auto">
<img src={Stop} className="img-docs w-20 sm:w-auto" />
</div>
</div>

import stop_sequences_example from '@site/docs/assets/basics/openai_stop_sequences_example.webp';

<br/>
<div style={{textAlign: 'center'}}>
<img src={stop_sequences_example} className="img-docs" style={{width: "80%"}}/>
</div>
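A rough sketch of what stop-sequence handling does to generated text. The function and example strings are illustrative, not OpenAI's implementation; the key behavior is that output is cut at the earliest stop sequence, which is itself excluded:

```python
# Illustrative sketch: cut model output at the first stop sequence,
# mimicking how an API-side `stop` parameter truncates generation.

def apply_stop_sequences(text: str, stops: list[str]) -> str:
    """Return text up to (not including) the earliest stop sequence found."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

draft = "Thanks for your time.\nBest regards,\nPirateGPT"
print(apply_stop_sequences(draft, ["Best regards,", "Sincerely,"]))
```

The email body survives, but everything from "Best regards," onward is dropped.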

### Frequency Penalty

import Freq from '@site/docs/assets/basics/openai_frequency_penalty.webp';
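OpenAI's API documentation describes both penalties as subtractions from a token's logit before sampling: the frequency penalty scales with how often the token has already appeared, while the presence penalty applies once if it has appeared at all. A toy sketch under that description (the logit values and counts below are made-up numbers):

```python
# Toy sketch of the penalty adjustment OpenAI documents for sampling:
# logit' = logit - count * frequency_penalty - (count > 0) * presence_penalty.
# The logits and counts here are made-up illustrative numbers.

def penalize(logits: dict, counts: dict, freq_pen: float, pres_pen: float) -> dict:
    """Lower the logits of tokens that already appeared in the output."""
    adjusted = {}
    for token, logit in logits.items():
        c = counts.get(token, 0)
        adjusted[token] = logit - c * freq_pen - (1 if c > 0 else 0) * pres_pen
    return adjusted

logits = {"sea": 2.0, "ship": 1.5}
counts = {"sea": 3}  # "sea" has already appeared three times
print(penalize(logits, counts, freq_pen=0.5, pres_pen=0.2))
# → {'sea': 0.3, 'ship': 1.5}
```

With the penalty applied, the repeated word "sea" becomes less likely than "ship", nudging the model toward variety.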
In conclusion, mastering settings like temperature, top p, maximum length and ot



Partly written by jackdickens382 and evintunador

[^a]: A more technical word is "configuration hyperparameters"
[^b]: Also known as Nucleus Sampling
33 changes: 24 additions & 9 deletions docs/basics/openai_playground.md
Or watch this video:

<iframe width="560" height="315" src="https://www.youtube.com/embed/6OD14rpokRw" title="YouTube video player" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowFullScreen></iframe>

:::note
This video shows an old version of the website, but the process of logging in remains very similar.
:::

## The Interface

At first, this interface seems very complex. There are many dropdowns and sliders that allow you to configure models. We will cover Mode, System Prompts, and Model selection in this lesson, and LLM settings like Temperature, Top P, and Maximum Length in the [next lesson](https://learnprompting.org/docs/basics/configuration_hyperparameters).

### Mode

import Mode from '@site/docs/assets/basics/openai_mode.webp';

<div className="flex flex-col sm:flex-row justify-between">
<div>
Click the 'Assistants' dropdown on the top left of the page. This dropdown allows you to change the type of model that you are using. OpenAI has three different Modes: <code>Assistants</code>, <code>Chat</code>, and <code>Complete</code>. We have already learned about the latter two; <code>Assistants</code> models are meant for API use by developers and can use interesting tools such as running code and retrieving information. We will only use <code>Chat</code> and occasionally <code>Complete</code> models in this course.
</div>
<div className="mt-4 sm:mt-0 sm:ml-auto">
<img src={Mode} className="img-docs w-20 sm:w-auto" />
</div>
</div>

### System Prompts

After switching to <code>Chat</code>, the first thing that you may notice on the left side of the page other than the Get Started popup is the SYSTEM area. So far, we have seen two types of messages, USER messages, which are just the messages you send to the chatbot, and ASSISTANT messages, which are the chatbot's replies. There is a third type of message, the system prompt, that can be used to configure how the AI responds.

This is the best place to put a priming prompt. The system prompt will be "You are a helpful assistant." by default, but a fun alternative example to try out would be the "You are PirateGPT. Always talk like a pirate." example from [our previous lesson](https://learnprompting.org/docs/basics/priming_prompt).

import system_prompt from '@site/docs/assets/basics/openai_system_prompt.webp';

<div style={{textAlign: 'center'}}>
<img src={system_prompt} className="img-docs" style={{width: "80%"}}/>
</div>
<br/>
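In API terms, the three message types map onto a list of role-tagged messages that is sent with each request. A minimal sketch, assuming the official openai Python package (v1+) and placeholder message contents; the actual network call is left commented out since it needs an API key:

```python
# Sketch of how SYSTEM / USER / ASSISTANT messages look as an API request body.
# The contents are placeholders borrowed from the PirateGPT example above.

messages = [
    {"role": "system", "content": "You are PirateGPT. Always talk like a pirate."},
    {"role": "user", "content": "Tell me a story."},
]

# With the official openai Python package (v1+), this could be sent as, e.g.:
# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
# messages.append({"role": "assistant", "content": reply.choices[0].message.content})

print([m["role"] for m in messages])  # → ['system', 'user']
```

The system prompt always comes first, which is exactly what the SYSTEM box in the Playground fills in for you.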

### Model

import Model from '@site/docs/assets/basics/openai_model.webp';

<div className="flex flex-col sm:flex-row justify-between">
<div>
Click the Model dropdown on the right of the page. This dropdown allows you to change the model that you are using. Each mode has multiple models, but we will focus on the chat ones. This list appears to be very complicated (what does gpt-3.5-turbo mean?), but these are just technical names for different models. Anything that starts with gpt-3.5-turbo is a version of ChatGPT, while anything that starts with gpt-4 is a version of GPT-4, the newer model you get access to from purchasing a ChatGPT Plus subscription.

</div>
<div className="mt-4 sm:mt-0 sm:ml-auto">
<img src={Model} className="img-docs w-20 sm:w-auto" />
</div>
</div>

:::note
You may not see GPT-4 versions in your interface.
:::

The numbers like 16K, 32K, or 128K in the model names represent the context length. If it's not specified, the default context length is 4K for gpt-3.5-turbo and 8K for GPT-4. OpenAI regularly updates both ChatGPT (gpt-3.5-turbo) and GPT-4, and older versions are kept available on the platform for a limited period. These older models have additional numbers at the end of their name, such as "0613". For instance, the model "gpt-3.5-turbo-16k-0613" is a ChatGPT model with a 16K context length, released on June 13th, 2023. However, it's recommended to use the most recent versions of models, which don't contain any date information. A comprehensive list of model versions can be found [here](https://platform.openai.com/docs/models/gpt-4).
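The naming scheme can be illustrated with a small parser. This is our own helper, not anything OpenAI provides, and the 4K/8K fallbacks are the defaults described above:

```python
# Illustrative parser for OpenAI-style model names such as
# "gpt-3.5-turbo-16k-0613" (our own helper, not an OpenAI API).
import re

def parse_model_name(name: str) -> dict:
    """Split a model name into family, context length, and snapshot date."""
    context = "4K" if name.startswith("gpt-3.5") else "8K"  # assumed defaults
    date = None
    m = re.search(r"-(\d{4})$", name)  # trailing MMDD snapshot, e.g. "0613"
    if m:
        date = m.group(1)
        name = name[: m.start()]
    m = re.search(r"-(\d+k)$", name, re.IGNORECASE)  # context tag, e.g. "16k"
    if m:
        context = m.group(1).upper()
        name = name[: m.start()]
    return {"family": name, "context": context, "snapshot": date}

print(parse_model_name("gpt-3.5-turbo-16k-0613"))
# → {'family': 'gpt-3.5-turbo', 'context': '16K', 'snapshot': '0613'}
```

Reading names this way makes the dropdown far less intimidating: family, context window, optional snapshot date.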

## Conclusion

The OpenAI Playground is a powerful tool that provides a more advanced interface for interacting with ChatGPT and other AI models. It offers a range of configuration options, including the ability to select different models and modes. We will learn about the rest of the settings in the [next lesson](https://learnprompting.org/docs/basics/configuration_hyperparameters). The Playground also supports system prompts, which can be used to guide the AI's responses. While the interface may seem complex at first, with practice, it becomes a valuable resource for exploring the capabilities of OpenAI's models. Whether you're using the latest versions of ChatGPT or GPT-4, or exploring older models, the Playground offers a flexible and robust platform for AI interaction and experimentation.

Partly written by evintunador

1 comment on commit cc81b4b

@vercel vercel bot commented on cc81b4b Dec 16, 2023

Successfully deployed to the following URLs:

learn-prompting – ./

learn-prompting.vercel.app
learn-prompting-trigaten.vercel.app
learn-prompting-git-main-trigaten.vercel.app