diff --git a/web/docs/contributing/page.mdx b/web/docs/contributing/page.mdx
index 078a00f..bfb8b81 100644
--- a/web/docs/contributing/page.mdx
+++ b/web/docs/contributing/page.mdx
@@ -10,15 +10,12 @@ tags:
- contributing
---
-
## About Prompty
[Prompty](https://github.com/microsoft/prompty) is an open-source project from Microsoft that makes it easy for developers to _create, manage, debug, and evaluate_ LLM prompts for generative AI applications. We welcome contributions from the community that help make the technology more useful and usable for developers from all backgrounds. Before you get started, review this page for contributor guidelines.
-
## Code Of Conduct
Read the project's [Code of Conduct](https://github.com/microsoft/prompty/blob/main/CODE_OF_CONDUCT.md) and adhere to it. The project is also governed by the Microsoft Open Source Code of Conduct - [Read their FAQ](https://opensource.microsoft.com/codeofconduct/faq/) to learn why the CoC matters and how you can raise concerns or provide feedback.
-
## Providing feedback
Feedback can come in several forms:
@@ -28,6 +25,5 @@ Feedback can come in several forms:
The easiest way to give us feedback is by [filing an issue](https://github.com/microsoft/prompty/issues/new?template=Blank+issue). **Please check previously logged issues (open and closed) to make sure the topic or bug has not already been raised.** If it does exist, weigh in on that discussion thread to add any additional context of value.
-
## Contributor guidelines
The repository contains both the code and the documentation for the project. Each requires a different set of tools and processes to build and preview outcomes. We hope to document these soon - so check back for **contributor guidelines** that will cover the requirements in more detail.
\ No newline at end of file
diff --git a/web/docs/getting-started/concepts/page.mdx b/web/docs/getting-started/concepts/page.mdx
index 84d1dd1..d046b3d 100644
--- a/web/docs/getting-started/concepts/page.mdx
+++ b/web/docs/getting-started/concepts/page.mdx
@@ -16,16 +16,13 @@ index: 2
_In this section, we cover the core building blocks of Prompty (specification, tooling, and runtime) and walk you through the developer flow and mindset for going from "prompt" to "prototype"_.
-
## 1. Prompty Components
The Prompty implementation consists of three core components - the _specification_ (file format), the _tooling_ (developer experience) and the _runtime_ (executable code). Let's review these briefly.
-
![What is Prompty?](01-what-is-prompty.png)
-
### 1.1 The Prompty Specification
@@ -67,7 +64,6 @@ The [Prompty specification](https://github.com/microsoft/prompty/blob/main/Promp
```
-
### 1.2 The Prompty Tooling
@@ -86,7 +82,6 @@ The [Prompty Visual Studio Code Extension](https://marketplace.visualstudio.com/
- View the "runs" history, and drill down into a run with a built-in trace viewer.
-
### 1.3 The Prompty Runtime
@@ -98,7 +93,6 @@ The Prompty Runtime helps you make the transition from _static asset_ (`.prompty
Core runtimes provide the base package needed to run the Prompty asset with code. Prompty currently has two core runtimes, with more support coming.
* [Prompty Core (python)](https://pypi.org/project/prompty/) → Available _in preview_.
* Prompty Core (csharp) → In _active development_.
-
Enhanced runtimes add support for orchestration frameworks, enabling complex workflows with Prompty assets:
* [Prompt flow](https://microsoft.github.io/promptflow/) → Python core
@@ -106,7 +100,6 @@ Enhanced runtimes add support for orchestration frameworks, enabling complex wor
* [Semantic Kernel](https://learn.microsoft.com/semantic-kernel/) → C# core
-
## 2. Developer Workflow
@@ -117,9 +110,7 @@ Prompty is ideal for rapid prototyping and iteration of a new generative AI appl
2. **Develop** by iterating config & content, use tracing to debug
3. **Evaluate** prompts with AI assistance, saved locally or to cloud
-
![How do we use prompty?](02-build-with-prompty.png)
-
## 3. Developer Mindset
@@ -131,10 +122,7 @@ Think of it as a **micro-orchestrator focused on a single LLM invocation** putti
- _engineer_ the prompt (system, user, context, instructions) for that request
- _shape_ the data used to "render" the template on execution by the runtime
-
![Where does this fit?](03-micro-orchestrator-mindset.png)
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/getting-started/debugging-prompty/page.mdx b/web/docs/getting-started/debugging-prompty/page.mdx
index e8c88a0..4e56412 100644
--- a/web/docs/getting-started/debugging-prompty/page.mdx
+++ b/web/docs/getting-started/debugging-prompty/page.mdx
@@ -1,5 +1,5 @@
---
-title: Debuggging Prompty
+title: Debugging Prompty
authors:
- bethanyjep
- nitya
@@ -13,7 +13,6 @@ index: 6
_In the last section, we converted our Prompty asset into code and successfully executed the application. In this section, we will cover how we can use Observability in Prompty to debug our application._
-
## 1. What we will cover
@@ -24,7 +23,6 @@ For observability in Prompty, we will use the tracer to visualize and debug the
- Understand how observability works in your code
- Analyze the trace output to debug and fix the bug
-
## 2. Understanding Observability in Prompty
@@ -35,14 +33,12 @@ In Prompty, you can easily trace and visualize flow, which helps you to understa
- **Console Tracer**: This tracer logs the output to the console.
- **Prompty Tracer**: This tracer logs the output to a JSON file.
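Registering these tracers takes only a few lines. Here is a minimal sketch using the `prompty.tracer` helpers (the same calls appear in the generated code we examine below):

```python
from prompty.tracer import Tracer, console_tracer, PromptyTracer

# log trace events to the console, for real-time debugging
Tracer.add("console", console_tracer)

# log each run to a .tracy JSON file, for interactive inspection after runs
json_tracer = PromptyTracer()
Tracer.add("PromptyTracer", json_tracer.tracer)
```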
-
## 3. Modify our Prompty
In our `shakespeare.prompty` asset we will update the prompt to request different variations of the same message. The new prompt will be: `"Can you create 5 different versions of a short message inviting friends to a Game Night?"`. Additionally, change the `max_tokens:` value from `3000` to `150`.
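For reference, the frontmatter change is a one-line edit under the model parameters (a sketch showing only the field we touch; the rest of the asset stays unchanged):

```yaml
model:
  parameters:
    max_tokens: 150
```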
Head over to the `shakespeare.py` file as well and update the question to: `"Can you create 5 different versions of a short message inviting friends to a Game Night?"`.
-
☑ **Function that executes the Prompty asset** (click to expand)
@@ -82,7 +78,6 @@ user:
```
## 4. Adding observability to your code
To add a tracer, our previously generated code snippet already includes the following:
@@ -107,14 +102,12 @@ def run(
}
)
```
-
-- **Tracer.add("console", console_tracer)**: logs tracing information to the console, useful for real-time debugging.
-- **json_tracer = PromptyTracer()**: Creates an instance of the PromptyTracer class, which is a custom tracer.
-- **Tracer.add("PromptyTracer", json_tracer.tracer)**: logs tracing in a `.tracy` JSON file for more detailed inspection after runs, providing you with an interactive UI.
-- **@trace**: Decorator that traces the execution of the run function.
+- **`Tracer.add("console", console_tracer)`**: logs tracing information to the console, useful for real-time debugging.
+- **`json_tracer = PromptyTracer()`**: Creates an instance of the PromptyTracer class, which is a custom tracer.
+- **`Tracer.add("PromptyTracer", json_tracer.tracer)`**: logs tracing in a `.tracy` JSON file for more detailed inspection after runs, providing you with an interactive UI.
+- **`@trace`**: Decorator that traces the execution of the run function.
-
## 5: Analyzing and debugging the trace output
@@ -126,7 +119,6 @@ The trace output is divided into three: _load, prepare_ and _run_. Load refers t
![Trace Output](trace-output.png)
-
From the trace output, you can see the inputs, outputs and metrics such as the time to execute the prompt and token counts. This information can be used to debug and fix any issues in your code. For example, we can see the output has been truncated and the `Completion Tokens` count is less than 1000, which might not be sufficient for the prompt to generate different outputs. We can increase the `max_tokens` in our Prompty to 1000 to generate more tokens. Once done, run the code again and confirm you get 5 examples of the short message inviting friends to a Game Night.
@@ -134,7 +126,6 @@ From the trace output, you can see the inputs, outputs and metrics such as time
You can continue experimenting with different parameters such as `temperature` and observe how they affect the model outputs.
-
## 6. Using observability for Model Selection
@@ -142,15 +133,13 @@ Another way to make the most of observability is in Model Selection. You can swi
![gpt-35-turbo output](gpt-35-turbo-trace.png)
-
From the output, you can see the difference in the completion tokens and the time taken to execute the prompt. This information can be used to select the best model for your use case.
## 7. Building a Custom Tracer in Prompty
-In the guides section, we will provide a deep dive into [Observability in Prompty](docs/guides/prompty-observability) and how you can create your own tracer.
+In the guides section, we will provide a deep dive into [Observability in Prompty](/docs/guides/prompty-observability) and how you can create your own tracer.
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/getting-started/first-prompty/page.mdx b/web/docs/getting-started/first-prompty/page.mdx
index ce99730..d635ac1 100644
--- a/web/docs/getting-started/first-prompty/page.mdx
+++ b/web/docs/getting-started/first-prompty/page.mdx
@@ -21,13 +21,11 @@ To _run_ the Prompty, you will need a valid Large Language Model deployed endpoi
1. **Azure OpenAI** - requires the endpoint and uses keyless auth (with login)
1. **Serverless** - requires a GitHub Personal Access Token, uses [Marketplace models](https://github.com/marketplace/models)
-
For our first Prompty, we'll focus on the Azure OpenAI option.
- we assume you've deployed an _Azure OpenAI_ model
- we assume you've retrieved its Endpoint URL information
- we assume you've installed the Prompty extension in VS Code
-
## 2. Create a Prompty asset
Open the Visual Studio Code editor, then click the `File Explorer` icon to view your project filesystem. Select a destination folder (e.g., the repository root) and _right-click_ to get a drop-down menu. Look for the `New Prompty` option and click it.
@@ -79,7 +77,6 @@ user:
```
-
## 3. Update the default Prompty
You can now update the file contents as shown below. Here, we have made three changes:
@@ -128,14 +125,12 @@ user:
```
-
## 4. Run the Prompty
You can now run the Prompty by clicking the `Play` button (top right) in the editor pane of your `.prompty` file.
1. You will see a pop-up asking you to authenticate with Azure. **Sign in**
1. You will see the VS Code terminal switch to the `Output` tab. **View output**
The first step ensures that we use Azure managed identity to authenticate with the specified Azure OpenAI endpoint - and don't need to use explicitly defined keys. You only need to authenticate once. You can then iterate rapidly on prompty content ("prompt engineering") and run it for instant responses. We recommend clearing the output terminal after each run, for clarity.
-
☑️ **This is a sample response from one prompty run**. (click to expand)
@@ -153,7 +148,6 @@ Yours in fellowship,
## 5. How Prompty assets work
The `.prompty` file is an example of a Prompty _asset_ that respects the schema defined in the Prompty specification. The asset class is language-agnostic (not tied to any language or framework), using a _markdown format with YAML_ to specify metadata ("frontmatter") and content ("template") for a _single prompt-based interaction_ with a Large Language Model. By doing this, it **unifies the prompt content and its execution context in a single asset package**, making it easy for developers to rapidly iterate on prompts for prototyping.
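For illustration, here is a minimal sketch of such an asset (placeholder values, echoing the Shakespeare example used in this guide):

```
---
name: Shakespeare Writing Assistant
description: Answers a question in a Shakespearean writing style
model:
  api: chat
sample:
  question: Please write a short text inviting friends to a Game Night.
---
system:
You are a Shakespearean writing assistant who writes in the style of William Shakespeare.

user:
{{question}}
```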
@@ -164,7 +158,6 @@ The asset is then _activated_ by a Prompty runtime as follows:
1. The file asset is **loaded**, converting it into an executable function.
1. The asset is now **rendered**, using function parameters to fill in the template data.
1. The asset is then **executed**, invoking the configured model with the rendered template.
-
The returned result can then be displayed to the caller (single node) or can be passed as the input to a different Prompty asset (chained flow) to support more complex orchestration.
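In code, that load-render-execute cycle can be driven by the core Python runtime. A minimal sketch (assuming the `prompty` package and the Shakespeare asset from this guide):

```python
import prompty
import prompty.azure  # registers the azure invoker, needed for Azure OpenAI models

# execute() wraps the three steps above: load the asset,
# render the template with these inputs, and invoke the model
result = prompty.execute(
    "shakespeare.prompty",
    inputs={"question": "Please write a short text inviting friends to a Game Night."},
)
print(result)
```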
## 6. How Models Are Configured
@@ -174,7 +167,6 @@ Prompty assets must be configured with a _model_ that is the target for the prom
1. The Visual Studio Code environment defines a _default_ configuration that you can view by clicking on the `Prompty default` tab in the bottom toolbar. If a Prompty asset does not specify an explicit model configuration, the invocation will use the default model.
1. When we convert a Prompty asset to code, you may see a `prompty.json` file with a default configuration. This is equivalent to the Visual Studio Code default, but applied to the case when we execute the Prompty from code (vs. VS Code editor).
1. The Prompty file can itself define model configuration in the _frontmatter_ as seen in our example Prompty (see snippet below). When specified, this value will override other defaults.
-
In our example asset (snippet below), the Prompty file **explicitly defines** model configuration properties, giving it precedence. Note also that property values can be specified as constants (`gpt-4`) or reference environment variables (`${env:AZURE_OPENAI_ENDPOINT}`). The latter is the recommended approach, ensuring that secrets don't get checked into version control with asset file updates.
```yaml
@@ -194,7 +186,6 @@ model:
**Tip 3: Use Environment Variables**. As shown above, property values can be defined using environment variables in the format ``${env:ENVAR_NAME}``. By default, the Visual Studio Code extension will look for a `.env` file in the root folder of the repository containing Prompty assets - create and update that file (and ensure it is .gitignore-d by default). _If you use GitHub Codespaces, you can also store environment variables as Codespaces secrets that get automatically injected into the runtime at launch_.
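For example, a minimal `.env` for the asset above might contain just the endpoint (placeholder value shown):

```bash
# .env - keep this file .gitignore-d so the endpoint stays out of version control
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com
```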
-
## 7. How To Observe Output
By default, executing the Prompty will open the _Output_ tab on the Visual Studio Code terminal and show a _brief response_ with the model output. But what if you want more detail? Prompty provides two features that can help.
@@ -202,7 +193,6 @@ By default, executing the Prompty will open the _Output_ tab on the Visual Studi
1. **Terminal Options** - Look for the `Prompty Output (verbose)` option in a drop-down menu in the Visual Studio Code terminal (at top left of terminal). Selecting this option gives you verbose output which includes the _request_ details and _response_ details, including useful information like token usage for execution.
1. **Code Options** - When assets are converted to code, you can take advantage of _Prompty Tracer_ features to log execution traces to the console, or to a JSON file, that can then be visualized for richer analysis of the flow steps and performance.
-
## 8. How To Generate Code
In this section, we focused on Prompty asset creation and execution from the Visual Studio Code editor (no coding involved). Here, the Visual Studio Code extension acts as the default runtime, loading the asset, rendering the template, and executing the model invocation transparently. But this approach will not work when we need to **orchestrate** complex flows with multiple assets, or when we need to **automate** execution in CI/CD pipelines.
@@ -212,11 +202,8 @@ This is where the _Prompty Runtime_ comes in. The runtime converts the Prompty a
- **Core Runtimes** - generate basic code in the target language. Examples: Python, C#
- **Framework-Enabled** - generate code for a specific framework. Examples: LangChain, Semantic Kernel
-
*In the next section, we'll explore how to go from Prompty To Code, using the core Python runtime*.
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/getting-started/page.mdx b/web/docs/getting-started/page.mdx
index 21b8ba7..84c22f8 100644
--- a/web/docs/getting-started/page.mdx
+++ b/web/docs/getting-started/page.mdx
@@ -19,17 +19,14 @@ _In this section we take you from core concepts to code, covering the following
- **First App**: Convert your Prompty to code (with SDK) and execute it.
- **Debugging**: Use Observability in Prompty to debug your application
-
## Next Steps
Start with the **[Core Concepts](/docs/getting-started/concepts)** section to learn about the basic building blocks of Prompty.
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Guidance coming soon_.
diff --git a/web/docs/getting-started/prompty-to-code/page.mdx b/web/docs/getting-started/prompty-to-code/page.mdx
index 3032094..0f9f2aa 100644
--- a/web/docs/getting-started/prompty-to-code/page.mdx
+++ b/web/docs/getting-started/prompty-to-code/page.mdx
@@ -20,7 +20,6 @@ To convert a Prompty asset to code and execute your first app, you need to have
- [Python 3.10 or higher](https://www.python.org/downloads/)
- [Prompty Package (Python library)](https://pypi.org/project/prompty/)
-
For our first app, we will focus on Azure OpenAI and cover the following steps:
- Create code from Prompty asset in VS Code
@@ -28,7 +27,6 @@ For our first app, we will focus on Azure Open AI and cover the following steps:
- Configure code (use environment variables)
- Execute code (from command line or VS Code)
-
## 2. Generate Code from Prompty Asset
Open the `File Explorer` in Visual Studio Code and open the Prompty asset we created earlier. Right-click on the file name and, in the options, select `add code`, then `add Prompty code`. A new file will be created with the Python code generated from the Prompty asset.
@@ -79,7 +77,6 @@ if __name__ == "__main__":
```
-
## 3. Install Prompty Runtime
When you run the generated code, you will receive the error ``ModuleNotFoundError: No module named 'prompty'``. To resolve this, you need to install the Prompty runtime. The runtime supports different invokers that you can customize based on your needs. In this example, we are using the Azure OpenAI API, so we need to install the ``azure`` invoker. Run the following command in your terminal:
@@ -96,7 +93,6 @@ In the code generated, we will need to load our environment variables to connect
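# load variables from a local .env file into the process environment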
from dotenv import load_dotenv
load_dotenv()
```
-
## 5. Execute the code
You can now run the code by either clicking the ``run`` button in VS Code or by executing the following command in your terminal:
@@ -122,7 +118,6 @@ Faithfully thine,
```
-
## 6. How Python code works
@@ -205,7 +200,6 @@ if __name__ == "__main__":
-
@@ -215,5 +209,4 @@ The Prompty runtime supports additional runtimes, including frameworks such as [
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/getting-started/setup/page.mdx b/web/docs/getting-started/setup/page.mdx
index f43a472..5d2afbb 100644
--- a/web/docs/getting-started/setup/page.mdx
+++ b/web/docs/getting-started/setup/page.mdx
@@ -21,7 +21,6 @@ To create your first Prompty (using the VS Code Extension), you will need:
- Access to the [GitHub Models Marketplace](https://github.com/marketplace/models)
- A computer with the Visual Studio Code IDE installed.
-
## Developer Tools
The Prompty project has three tools to support your prompt engineering and rapid prototyping needs:
@@ -32,7 +31,6 @@ The Prompty project has three tools to support your prompt engineering and rapid
Let's start with the Visual Studio Code extension.
-
## Install Prompty Extension
The easiest way to get started with Prompty is to use the Visual Studio Code Extension. Launch Visual Studio Code, then install the extension using one of these two options:
@@ -40,7 +38,6 @@ The easiest way to get started with Prompty, is to use the Visual Studio Code Ex
1. Visit the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty) in the browser. Click to install, and you should see the install complete in your IDE.
1. Click the Extensions explorer icon in the Visual Studio Code sidebar (left) and search for "Prompty". Install directly into VS Code.
-
## Explore Prompty Extension
Once installed, you should see a stylized "P" (resembling the Prompty logo) in the VS Code sidebar, as seen in the figure below (left). Click the extension icon and you should see the _Prompty_ panel slide out on the left.
@@ -53,14 +50,11 @@ With this, you see four Prompty-related features in the frame:
1. **Edit Settings** - shows the "Prompty default" tab on toolbar that links to settings.
1. **Prompty Asset** - the editor shows a `.prompty` file, giving you a first look at this asset.
-
In the next section, we'll create our first prompty and make use of the identified features to run it and observe the results.
![VS Code Extension](./prompty-vscode.png)
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/guides/page.mdx b/web/docs/guides/page.mdx
index fbce9c3..e5785f9 100644
--- a/web/docs/guides/page.mdx
+++ b/web/docs/guides/page.mdx
@@ -14,7 +14,5 @@ index: 4
TODO
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/guides/prompty-extension/page.mdx b/web/docs/guides/prompty-extension/page.mdx
index d95e6b7..34af33b 100644
--- a/web/docs/guides/prompty-extension/page.mdx
+++ b/web/docs/guides/prompty-extension/page.mdx
@@ -13,7 +13,5 @@ index: 3
TODO: How does Prompty runtime work for no-code execution in VS Code
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/guides/prompty-invoker/page.mdx b/web/docs/guides/prompty-invoker/page.mdx
index 84240f8..551126f 100644
--- a/web/docs/guides/prompty-invoker/page.mdx
+++ b/web/docs/guides/prompty-invoker/page.mdx
@@ -23,11 +23,8 @@ Invokers trigger a call to the different models and return their output, ensuring
2. **openai**: Invokes the OpenAI API
3. **serverless**: Invokes serverless models (e.g. GitHub Models) using the Azure AI Inference client library (currently only key-based authentication is supported, with managed identity support coming soon)
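Which invoker runs your asset is determined by the `type` field in the asset's model configuration. A minimal sketch (field names follow the Prompty specification; values are placeholders):

```yaml
model:
  api: chat
  configuration:
    type: azure_openai   # routes execution to the azure invoker
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: gpt-4
```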
-
TODO: Explain how invokers work and how to build a custom invoker
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/guides/prompty-observability/page.mdx b/web/docs/guides/prompty-observability/page.mdx
index 0acaa3b..16241be 100644
--- a/web/docs/guides/prompty-observability/page.mdx
+++ b/web/docs/guides/prompty-observability/page.mdx
@@ -15,7 +15,5 @@ Get started with Observability at [debugging Prompty](/docs/getting-started/debu
TODO: Explain how to trace Prompty execution from input to response
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/guides/prompty-runtime/page.mdx b/web/docs/guides/prompty-runtime/page.mdx
index c1eb417..a70ea1e 100644
--- a/web/docs/guides/prompty-runtime/page.mdx
+++ b/web/docs/guides/prompty-runtime/page.mdx
@@ -13,7 +13,5 @@ index: 2
TODO: Explain how runtimes work and how to build a custom runtime
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/page.mdx b/web/docs/page.mdx
index c236409..6194088 100644
--- a/web/docs/page.mdx
+++ b/web/docs/page.mdx
@@ -15,16 +15,13 @@ date: 2024-10-02
## 1. Introduction
-
[Prompty](https://github.com/microsoft/prompty) is an asset class and format for LLM prompts designed to enhance observability, understandability, and portability for developers. The primary goal is to accelerate the developer inner loop of _prompt engineering_ and _prompt source management_ in a cross-language and cross-platform implementation.
The implementation currently supports popular runtimes (Python, C#) and frameworks (LangChain, Semantic Kernel, Prompt flow) with plans to add more.
The project is [open source](https://github.com/microsoft/prompty) and we encourage developers to extend the capabilities to new runtimes and tooling _and contribute those back to the core_ for use by the community.
-
_Watch the [**Microsoft Build 2024 Breakout Session**](https://build.microsoft.com/sessions/86e41e8b-1fd2-40fa-a608-6f99a28d4a61?source=sessions)_ for an in-depth introduction.
-
## 2. Developer Resources
@@ -50,13 +47,10 @@ This sample implements a retail copilot solution for Contoso Outdoor that uses a
This is a curated set of Azure AI Templates for use with the Azure Developer CLI, released initially at Microsoft Build 2024. The collection showcases complete end-to-end solutions for diverse application scenarios, languages, and frameworks - using Prompty and Azure AI Studio. Deploy the solution with one command, then customize it to your needs to learn by experimentation.
-
## Next Steps
Start with the **[Getting Started](/docs/getting-started)** section to validate your development setup and build your first Prompty sample.
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/prompty-specification/page.mdx b/web/docs/prompty-specification/page.mdx
index 374f50e..c64ae43 100644
--- a/web/docs/prompty-specification/page.mdx
+++ b/web/docs/prompty-specification/page.mdx
@@ -258,7 +258,6 @@ definitions:
```
-
The Prompty yaml file spec can be found [here](https://github.com/microsoft/prompty/blob/main/Prompty.yaml). Below you can find a brief description of each section and the attributes within it.
@@ -476,7 +475,5 @@ The Prompty yaml file spec can be found [here](https://github.com/microsoft/prom
additionalProperties: false
```
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/prompty-specification/spec-definitions/page.mdx b/web/docs/prompty-specification/spec-definitions/page.mdx
index 04e3739..e53aa9d 100644
--- a/web/docs/prompty-specification/spec-definitions/page.mdx
+++ b/web/docs/prompty-specification/spec-definitions/page.mdx
@@ -10,7 +10,6 @@ tags:
index: 1
---
-
+TODO - Add details on the definitions of the Prompty file spec.
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/prompty-specification/spec-properties/page.mdx b/web/docs/prompty-specification/spec-properties/page.mdx
index 43f5547..b906440 100644
--- a/web/docs/prompty-specification/spec-properties/page.mdx
+++ b/web/docs/prompty-specification/spec-properties/page.mdx
@@ -10,8 +10,6 @@ tags:
index: 2
---
-
-
+TODO - Add details on the properties of the Prompty file spec.
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/tutorials/add-observability/page.mdx b/web/docs/tutorials/add-observability/page.mdx
index a18781e..67650b9 100644
--- a/web/docs/tutorials/add-observability/page.mdx
+++ b/web/docs/tutorials/add-observability/page.mdx
@@ -12,7 +12,5 @@ index: 1
TODO: Add observability to see run traces
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/tutorials/page.mdx b/web/docs/tutorials/page.mdx
index e28143e..b6ac5dd 100644
--- a/web/docs/tutorials/page.mdx
+++ b/web/docs/tutorials/page.mdx
@@ -12,7 +12,5 @@ index: 2
TODO
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/tutorials/using-langchain/page.mdx b/web/docs/tutorials/using-langchain/page.mdx
index 2b32bb4..9a930a9 100644
--- a/web/docs/tutorials/using-langchain/page.mdx
+++ b/web/docs/tutorials/using-langchain/page.mdx
@@ -12,7 +12,5 @@ index: 2
TODO: Convert Prompty asset to LangChain
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/docs/tutorials/using-semantic-kernel/page.mdx b/web/docs/tutorials/using-semantic-kernel/page.mdx
index 89e0f17..eea422b 100644
--- a/web/docs/tutorials/using-semantic-kernel/page.mdx
+++ b/web/docs/tutorials/using-semantic-kernel/page.mdx
@@ -231,7 +231,5 @@ public class PromptyExample
Prompty allows you to define detailed, reusable prompt templates for use in the Semantic Kernel. By following the steps in this guide, you can quickly integrate Prompty files into your Semantic Kernel-based applications, making your AI-powered interactions more dynamic and flexible.
-
---
-
[Want to Contribute To the Project?](/docs/contributing/) - _Updated Guidance Coming Soon_.
diff --git a/web/package-lock.json b/web/package-lock.json
index a62cda3..a717b84 100644
--- a/web/package-lock.json
+++ b/web/package-lock.json
@@ -27,7 +27,7 @@
},
"devDependencies": {
"@babel/plugin-transform-logical-assignment-operators": "^7.24.7",
- "@tailwindcss/typography": "^0.5.13",
+ "@tailwindcss/typography": "^0.5.15",
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",
@@ -1470,9 +1470,9 @@
}
},
"node_modules/@tailwindcss/typography": {
- "version": "0.5.13",
- "resolved": "https://registry.npmjs.org/@tailwindcss/typography/-/typography-0.5.13.tgz",
- "integrity": "sha512-ADGcJ8dX21dVVHIwTRgzrcunY6YY9uSlAHHGVKvkA+vLc5qLwEszvKts40lx7z0qc4clpjclwLeK5rVCV2P/uw==",
+ "version": "0.5.15",
+ "resolved": "https://registry.npmjs.org/@tailwindcss/typography/-/typography-0.5.15.tgz",
+ "integrity": "sha512-AqhlCXl+8grUz8uqExv5OTtgpjuVIwFTSXTrh8y9/pw6q2ek7fJ+Y8ZEVw7EB2DCcuCOtEjf9w3+J3rzts01uA==",
"dev": true,
"dependencies": {
"lodash.castarray": "^4.4.0",
@@ -1481,7 +1481,7 @@
"postcss-selector-parser": "6.0.10"
},
"peerDependencies": {
- "tailwindcss": ">=3.0.0 || insiders"
+ "tailwindcss": ">=3.0.0 || insiders || >=4.0.0-alpha.20"
}
},
"node_modules/@tailwindcss/typography/node_modules/postcss-selector-parser": {
diff --git a/web/package.json b/web/package.json
index 4939c94..cba76d7 100644
--- a/web/package.json
+++ b/web/package.json
@@ -30,7 +30,7 @@
},
"devDependencies": {
"@babel/plugin-transform-logical-assignment-operators": "^7.24.7",
- "@tailwindcss/typography": "^0.5.13",
+ "@tailwindcss/typography": "^0.5.15",
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",
diff --git a/web/src/app/docs/[[...slug]]/page.tsx b/web/src/app/docs/[[...slug]]/page.tsx
index fe8d2ea..c02efa6 100644
--- a/web/src/app/docs/[[...slug]]/page.tsx
+++ b/web/src/app/docs/[[...slug]]/page.tsx
@@ -220,7 +220,7 @@ export default async function Page({ params }: Props) {
)}
-