Commit: prosified documentation content, removed inlined br's

sethjuarez committed Oct 30, 2024
1 parent 221a717 · commit 09f35e8
Showing 24 changed files with 27 additions and 110 deletions.
4 changes: 0 additions & 4 deletions web/docs/contributing/page.mdx
@@ -10,15 +10,12 @@ tags:
- contributing
---

-<br/>
## About Prompty
[Prompty](https://github.com/microsoft/prompty) is an open-source project from Microsoft that makes it easy for developers to _create, manage, debug, and evaluate_ LLM prompts for generative AI applications. We welcome contributions from the community that can help make the technology more useful and usable by developers from all backgrounds. Before you get started, review this page for contributor guidelines.

-<br/>
## Code Of Conduct
Read the project's [Code of Conduct](https://github.com/microsoft/prompty/blob/main/CODE_OF_CONDUCT.md) and adhere to it. The project is also governed by the Microsoft Open Source Code of Conduct - [read their FAQ](https://opensource.microsoft.com/codeofconduct/faq/) to learn why the CoC matters and how you can raise concerns or provide feedback.

-<br/>
## Providing feedback

Feedback can come in several forms:
@@ -28,6 +25,5 @@ Feedback can come in several forms:

The easiest way to give us feedback is by [filing an issue](https://github.com/microsoft/prompty/issues/new?template=Blank+issue). **Please check previously logged issues (open and closed) to make sure the topic or bug has not already been raised.** If it does exist, weigh in on that discussion thread to add any additional context of value.

-<br/>
## Contributor guidelines
The repository contains both the code and the documentation for the project. Each requires a different set of tools and processes to build and preview outcomes. We hope to document these soon - so check back for **contributor guidelines** that will cover the requirements in more detail.
12 changes: 0 additions & 12 deletions web/docs/getting-started/concepts/page.mdx
@@ -16,16 +16,13 @@ index: 2

_In this section, we cover the core building blocks of Prompty (specification, tooling, and runtime) and walk you through the developer flow and mindset for going from "prompt" to "prototype"_.

-<br/>


## 1. Prompty Components

The Prompty implementation consists of three core components - the _specification_ (file format), the _tooling_ (developer experience) and the _runtime_ (executable code). Let's review these briefly.
-<br/>

![What is Prompty?](01-what-is-prompty.png)
-<br/>


### 1.1 The Prompty Specification
@@ -67,7 +64,6 @@ The [Prompty specification](https://github.com/microsoft/prompty/blob/main/Promp
```

</details>
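
For illustration, a minimal asset in this format might look like the sketch below (the field values are ours, assuming an Azure OpenAI deployment; the specification linked above is the authoritative schema):

```yaml
---
name: Shakespearean Writer
description: Rewrites a short message in Shakespearean English
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
sample:
  question: Please invite friends to a Game Night
---
system:
You are an assistant that answers in the style of William Shakespeare.

user:
{{question}}
```
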
-<br/>

### 1.2 The Prompty Tooling

@@ -86,7 +82,6 @@ The [Prompty Visual Studio Code Extension](https://marketplace.visualstudio.com/
- View the "runs" history, and drill down into a run with a built-in trace viewer.

</details>
-<br/>

### 1.3 The Prompty Runtime

@@ -98,15 +93,13 @@ The Prompty Runtime helps you make the transition from _static asset_ (`.prompty
Core runtimes provide the base package needed to run the Prompty asset with code. Prompty currently has two core runtimes, with more support coming.
* [Prompty Core (python)](https://pypi.org/project/prompty/) → Available _in preview_.
* Prompty Core (csharp) → In _active development_.
-<br/>

Enhanced runtimes add support for orchestration frameworks, enabling complex workflows with Prompty assets:
* [Prompt flow](https://microsoft.github.io/promptflow/) → Python core
* [LangChain (python)](https://pypi.org/project/langchain-prompty/) → Python core (_experimental_)
* [Semantic Kernel](https://learn.microsoft.com/semantic-kernel/) → C# core

</details>
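
To make this concrete, here is a minimal sketch of executing an asset with the core Python runtime (the asset path and input name are ours, used only for illustration):

```python
import prompty
import prompty.azure  # registers the Azure OpenAI invoker

# Load, render, and execute the asset in a single call.
response = prompty.execute(
    "shakespeare.prompty",
    inputs={"question": "Please invite friends to a Game Night"},
)
print(response)
```
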
-<br/>


## 2. Developer Workflow
@@ -117,9 +110,7 @@ Prompty is ideal for rapid prototyping and iteration of a new generative AI appl
2. **Develop** by iterating on config & content, using tracing to debug
3. **Evaluate** prompts with AI assistance, saved locally or to the cloud

-<br/>
![How do we use prompty?](02-build-with-prompty.png)
-<br/>


## 3. Developer Mindset
@@ -131,10 +122,7 @@ Think of it as a **micro-orchestrator focused on a single LLM invocation** putti
- _engineer_ the prompt (system, user, context, instructions) for that request
- _shape_ the data used to "render" the template on execution by the runtime

-<br/>
![Where does this fit?](03-micro-orchestrator-mindset.png)

-<br/>
---
-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
23 changes: 6 additions & 17 deletions web/docs/getting-started/debugging-prompty/page.mdx
@@ -1,5 +1,5 @@
---
-title: Debuggging Prompty
+title: Debugging Prompty
authors:
- bethanyjep
- nitya
@@ -13,7 +13,6 @@ index: 6

_In the last section, we converted our Prompty asset into code and successfully executed the application. In this section, we will cover how we can use Observability in Prompty to debug our application._

-<br/>

## 1. What we will cover

@@ -24,7 +23,6 @@ For observability in Prompty, we will use the tracer to visualize and debug the
- Understand how observability works in your code
- Analyze the trace output to debug and fix the bug

-<br/>

## 2. Understanding Observability in Prompty

@@ -35,14 +33,12 @@ In Prompty, you can easily trace and visualize flow, which helps you to understa
- **Console Tracer**: This tracer logs the output to the console.
- **Prompty Tracer**: This tracer logs the output to a JSON file.

-<br/>

## 3. Modify our Prompty
In our `shakespeare.prompty` asset, we will update the prompt to request different variations of the same message. The new prompt will be: `"Can you create 5 different versions of a short message inviting friends to a Game Night?"`. Additionally, change the `max_tokens:` value from `3000` to `150`.

Head over to the `shakespeare.py` file as well and update the question to: `"Can you create 5 different versions of a short message inviting friends to a Game Night?"`.

-<br/>

<details>
<summary>☑ **Function that executes the Prompty asset** (click to expand)</summary>
@@ -82,7 +78,6 @@ user:
```
</details>

-<br/>

## 4. Adding observability to your code
To add a tracer, we have the following in our previously generated code snippet:
@@ -107,14 +102,12 @@ def run(
```python
# ... (top of the snippet is collapsed in the diff view)
    }
)
```
-<br/>

-- **Tracer.add("console", console_tracer)**: logs tracing information to the console, useful for real-time debugging.
-- **json_tracer = PromptyTracer()**: Creates an instance of the PromptyTracer class, which is a custom tracer.
-- **Tracer.add("PromptyTracer", json_tracer.tracer)**: logs tracing in a `.tracy` JSON file for more detailed inspection after runs, providing you with an interactive UI.
-- **@trace**: Decorator that traces the execution of the run function.
+- **`Tracer.add("console", console_tracer)`**: logs tracing information to the console, useful for real-time debugging.
+- **`json_tracer = PromptyTracer()`**: Creates an instance of the PromptyTracer class, which is a custom tracer.
+- **`Tracer.add("PromptyTracer", json_tracer.tracer)`**: logs tracing in a `.tracy` JSON file for more detailed inspection after runs, providing you with an interactive UI.
+- **`@trace`**: Decorator that traces the execution of the run function.
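
Putting the pieces together, the traced program looks roughly like the sketch below (reconstructed around the collapsed snippet above; the asset path and inputs are ours):

```python
import prompty
import prompty.azure
from prompty.tracer import trace, Tracer, console_tracer, PromptyTracer

# Register both tracers: console for real-time output, JSON for later inspection.
Tracer.add("console", console_tracer)
json_tracer = PromptyTracer()
Tracer.add("PromptyTracer", json_tracer.tracer)

@trace
def run(question: str):
    # Tracing captures the load, prepare, and run phases of this call.
    return prompty.execute("shakespeare.prompty", inputs={"question": question})

if __name__ == "__main__":
    print(run("Can you create 5 different versions of a short message inviting friends to a Game Night?"))
```
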

-<br/>

## 5. Analyzing and debugging the trace output

@@ -126,31 +119,27 @@ The trace output is divided into three: _load, prepare_ and _run_. Load refers t
![Trace Output](trace-output.png)

-<br/>

From the trace output, you can see the inputs, outputs, and metrics such as the time taken to execute the prompt and the tokens used. This information can be used to debug and fix any issues in your code. For example, we can see the output has been truncated and the `Completion Tokens` count is less than 1000, which might not be sufficient for the prompt to generate different outputs. We can increase the `max_tokens` in our Prompty to 1000 to generate more tokens. Once done, run the code again and confirm you get 5 examples of the short message inviting friends to a Game Night.
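
In the asset frontmatter, that fix is a one-line change (a sketch; the surrounding fields are omitted):

```yaml
model:
  parameters:
    max_tokens: 1000   # was 150; leaves room for all 5 variations
```
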

![updated trace output](trace-bug-fixed.png)

You can continue experimenting with different parameters such as `temperature` and observe how they affect the model outputs.

-<br/>

## 6. Using observability for Model Selection

Another way to make the most of observability is in model selection. You can switch between models and observe their performance (completion tokens, latency, and accuracy) on different tasks. For example, you can switch between the `gpt-4o` and `gpt-35-turbo` models, and you can also leverage GitHub Models, Azure OpenAI, and other providers in the comparison. Below is a comparison of the trace output for the `gpt-4o` and `gpt-35-turbo` models:

![gpt-35-turbo output](gpt-35-turbo-trace.png)

-<br/>

From the output, you can see the difference in the completion tokens and the time taken to execute the prompt. This information can be used to select the best model for your use case.
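
Switching the model under test is typically a one-line frontmatter change as well (a sketch, assuming the Azure OpenAI configuration used earlier):

```yaml
model:
  configuration:
    type: azure_openai
    azure_deployment: gpt-35-turbo   # swap in gpt-4o to compare traces
```
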


## 7. Building a Custom Tracer in Prompty

-In the guides section, we will provide a deep dive into [Observability in Prompty](docs/guides/prompty-observability) and how you can create your own tracer.
+In the guides section, we will provide a deep dive into [Observability in Prompty](/docs/guides/prompty-observability) and how you can create your own tracer.
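
As a preview, a custom tracer is a context manager that yields a `(key, value)` callback, mirroring the built-in `console_tracer`. The sketch below (the hook name is ours; see the guide for the full contract) logs each span to the console:

```python
from contextlib import contextmanager
from prompty.tracer import Tracer

@contextmanager
def simple_tracer(name: str):
    # Entered once per traced span (load, prepare, run, ...).
    print(f"start: {name}")
    try:
        yield lambda key, value: print(f"  {name}.{key} = {value!r}")
    finally:
        print(f"end: {name}")

Tracer.add("simple", simple_tracer)
```
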

---
-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
13 changes: 0 additions & 13 deletions web/docs/getting-started/first-prompty/page.mdx
@@ -21,13 +21,11 @@ To _run_ the Prompty, you will need a valid Large Language Model deployed endpoi
1. **Azure OpenAI** - requires the endpoint and uses keyless auth (with login)
1. **Serverless** - requires a GitHub Personal Access Token, uses [Marketplace models](https://github.com/marketplace/models)

-<br/>
For our first Prompty, we'll focus on the Azure OpenAI option.
- we assume you've deployed an _Azure OpenAI_ model
- we assume you've retrieved its Endpoint URL information
- we assume you've installed the Prompty extension in VS Code

-<br/>
## 2. Create a Prompty asset

Open the Visual Studio Code editor, then click the `File Explorer` icon to view your project filesystem. Select a destination folder (e.g., the repository root) and _right-click_ to get a drop-down menu. Look for the `New Prompty` option and click it.
@@ -79,7 +77,6 @@ user:
```
</details>

-<br/>
## 3. Update the default Prompty

You can now update the file contents as shown below. Here, we have made three changes:
@@ -128,14 +125,12 @@ user:
```
</details>

-<br/>
## 4. Run the Prompty
You can now run the Prompty by clicking the `Play` button (top right) in the editor pane of your `.prompty` file.

1. You will see a pop-up asking you to authenticate with Azure. **Sign in**
1. You will see the VS Code terminal switch to the `Outputs` tab. **View output**

The first step ensures that we use Azure managed identity to authenticate with the specified Azure OpenAI endpoint - and don't need to use explicitly defined keys. You only need to authenticate once. You can then iterate rapidly on prompty content ("prompt engineering") and run it for instant responses. We recommend clearing the output terminal after each run, for clarity.
-<br/>
<details>
<summary> ☑️ **This is a sample response from one prompty run**. (click to expand) </summary>

Expand All @@ -153,7 +148,6 @@ Yours in fellowship,
</details>


-<br/>
## 5. How Prompty assets work

The `.prompty` file is an example of a Prompty _asset_ that respects the schema defined in the Prompty specification. The asset class is language-agnostic (not tied to any language or framework), using a _markdown format with YAML_ to specify metadata ("frontmatter") and content ("template") for a _single prompt-based interaction_ with a Large Language Model. By doing this, it **unifies the prompt content and its execution context in a single asset package**, making it easy for developers to rapidly iterate on prompts for prototyping.
@@ -164,7 +158,6 @@ The asset is then _activated_ by a Prompty runtime as follows:
1. The file asset is **loaded**, converting it into an executable function.
1. The asset is now **rendered**, using function parameters to fill in the template data.
1. The asset is then **executed**, invoking the configured model with the rendered template.
-<br/>
The returned result can then be displayed to the caller (single node) or can be passed as the input to a different Prompty asset (chained flow) to support more complex orchestration.
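
The three phases are also exposed individually by the core Python runtime, sketched below (function names follow the `prompty` package; the asset and inputs are ours):

```python
import prompty
import prompty.azure

p = prompty.load("shakespeare.prompty")                          # loaded: file -> executable object
content = prompty.prepare(p, {"question": "Invite my friends"})  # rendered: template + data -> messages
result = prompty.run(p, content)                                 # executed: messages -> model response
```
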

## 6. How Models Are Configured
@@ -174,7 +167,6 @@ Prompty assets must be configured with a _model_ that is the target for the prom
1. The Visual Studio Code environment defines a _default_ configuration that you can view by clicking on the `Prompty default` tab in the bottom toolbar. If a Prompty asset does not specify an explicit model configuration, the invocation will use the default model.
1. When we convert a Prompty asset to code, you may see a `prompty.json` file with a default configuration. This is equivalent to the Visual Studio Code default, but applied to the case when we execute the Prompty from code (vs. VS Code editor).
1. The Prompty file can itself define model configuration in the _frontmatter_ as seen in our example Prompty (see snippet below). When specified, this value will override other defaults.
-<br/>
In our example asset (snippet below), the Prompty file **explicitly defines** model configuration properties, giving it precedence. Note also that property values can be specified as constants (`gpt-4`) or reference environment variables (`${env:AZURE_OPENAI_ENDPOINT}`). The latter is the recommended approach, ensuring that secrets don't get checked into version control with asset file updates.

@@ -194,15 +186,13 @@ model:
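
The `model` block itself is collapsed in this diff view; a representative sketch (the values are ours, mirroring the properties described above) would be:

```yaml
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
  parameters:
    max_tokens: 3000
```
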

**Tip 3: Use Environment Variables**. As shown above, property values can be defined using environment variables in the format ``${env:ENVAR_NAME}``. By default, the Visual Studio Code extension will look for a `.env` file in the root folder of the repository containing Prompty assets - create and update that file (and ensure it is .gitignore-d by default). _If you use GitHub Codespaces, you can also store environment variables as Codespaces secrets that get automatically injected into the runtime at launch_.

-<br/>
## 7. How To Observe Output

By default, executing the Prompty will open the _Output_ tab on the Visual Studio Code terminal and show a _brief response_ with the model output. But what if you want more detail? Prompty provides two features that can help.

1. **Terminal Options** - Look for the `Prompty Output (verbose)` option in a drop-down menu in the Visual Studio Code terminal (at top left of terminal). Selecting this option gives you verbose output which includes the _request_ details and _response_ details, including useful information like token usage for execution.
1. **Code Options** - When assets are converted to code, you can take advantage of _Prompty Tracer_ features to log execution traces to the console, or to a JSON file, that can then be visualized for richer analysis of the flow steps and performance.

-<br/>
## 8. How To Generate Code

In this section, we focused on Prompty asset creation and execution from the Visual Studio Code editor (no coding involved). Here, the Visual Studio Code extension acts as the default runtime, loading the asset, rendering the template, and executing the model invocation transparently. But this approach will not work when we need to **orchestrate** complex flows with multiple assets, or when we need to **automate** execution in CI/CD pipelines.
@@ -212,11 +202,8 @@ This is where the _Prompty Runtime_ comes in. The runtime converts the Prompty a
- **Core Runtimes** - generate basic code in the target language. Examples: Python, C#
- **Framework-Enabled** - generate code for a specific framework. Examples: LangChain, Semantic Kernel

-<br/>
*In the next section, we'll explore how to go from Prompty To Code, using the core Python runtime*.


-<br/>
---
-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
3 changes: 0 additions & 3 deletions web/docs/getting-started/page.mdx
@@ -19,17 +19,14 @@ _In this section we take you from core concepts to code, covering the following
- **First App**: Convert your Prompty to code (with SDK) and execute it.
- **Debugging**: Use Observability in Prompty to debug your application

-<br/>


## Next Steps

Start with the **[Core Concepts](/docs/getting-started/concepts)** section to learn about the basic building blocks of Prompty.

-<br/>

---

-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Guidance coming soon_.

7 changes: 0 additions & 7 deletions web/docs/getting-started/prompty-to-code/page.mdx
@@ -20,15 +20,13 @@ To convert a Prompty asset to code and execute your first app, you need to have
- [Python 3.10 or higher](https://www.python.org/downloads/)
- [Prompty Package (Python library)](https://pypi.org/project/prompty/)

-<br/>

For our first app, we will focus on Azure OpenAI and cover the following steps:
- Create code from a Prompty asset in VS Code
- Install Prompty Package (Python library)
- Configure code (use environment variables)
- Execute code (from command line or VS Code)

-<br/>

## 2. Generate Code from Prompty Asset
Open the `File Explorer` in Visual Studio Code and open the Prompty asset we created earlier. Right-click on the file name and, in the options, select `add code`, then `add Prompty code`. A new file will be created with the Python code generated from the Prompty asset.
@@ -79,7 +77,6 @@ if __name__ == "__main__":
```
</details>

-<br/>
## 3. Install Prompty Runtime
When you run the generated code, you will receive the error ``ModuleNotFoundError: No module named 'prompty'``. To resolve this, you need to install the Prompty runtime. The runtime supports different invokers that you can customize based on your needs. In this example, we are using the Azure OpenAI API, so we will install the ``azure`` invoker. Run the following command in your terminal:
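
The command itself is collapsed in this diff view; it is presumably the `azure` extra of the PyPI package:

```bash
pip install prompty[azure]
```
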

@@ -96,7 +93,6 @@ In the code generated, we will need to load our environment variables to connect
```python
from dotenv import load_dotenv
load_dotenv()
```
-<br/>

## 5. Execute the code
You can now run the code by either clicking the ``run`` button in VS Code or executing the following command in your terminal:
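
The exact command is collapsed in this diff view; assuming the generated file was saved as `shakespeare.py`, it would be:

```bash
python shakespeare.py
```
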
@@ -122,7 +118,6 @@ Faithfully thine,
```
</details>

-<br/>

## 6. How Python code works

@@ -205,7 +200,6 @@ if __name__ == "__main__":



-<br/>



@@ -215,5 +209,4 @@ The Prompty runtime supports additional runtimes, including frameworks such as [


---
-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
6 changes: 0 additions & 6 deletions web/docs/getting-started/setup/page.mdx
@@ -21,7 +21,6 @@ To create your first Prompty (using the VS Code Extension), you will need:
- Access to the [GitHub Models Marketplace](https://github.com/marketplace/models)
- A computer with the Visual Studio Code IDE installed.

-<br/>
## Developer Tools

The Prompty project has three tools to support your prompt engineering and rapid prototyping needs:
@@ -32,15 +31,13 @@ The Prompty project has three tools to support your prompt engineering and rapid

Let's start with the Visual Studio Code extension.

-<br/>
## Install Prompty Extension

The easiest way to get started with Prompty is to use the Visual Studio Code Extension. Launch Visual Studio Code, then install the extension using one of these two options:

1. Visit the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty) in the browser. Click to install, and you should see the install complete in your IDE.
1. Click the Extensions explorer icon in the Visual Studio Code sidebar (left) and search for "Prompty". Install directly into VS Code.

-<br/>
## Explore Prompty Extension

Once installed, you should see a stylized "P" (resembling the Prompty logo) in the VS Code sidebar, as seen in the figure below (left). Click the extension and you should see the _Prompty_ panel slide out at left.
@@ -53,14 +50,11 @@ With this, you see four Prompty-related features in the frame:
1. **Edit Settings** - shows the "Prompty default" tab on toolbar that links to settings.
1. **Prompty Asset** - the editor shows a `.prompty` file, giving you a first look at this asset.

-<br/>
In the next section, we'll create our first Prompty and make use of the identified features to run it and observe the results.

![VS Code Extension](./prompty-vscode.png)



-<br/>
---
-<br/>
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
_(The diff for the remaining changed files was not loaded in this view.)_