From 79bc8d58fef7e14644f3d0fd86e48264f62e5b08 Mon Sep 17 00:00:00 2001
From: Austin Parker
Date: Mon, 28 Oct 2024 09:26:04 -0400
Subject: [PATCH] address feedback

---
 guides/contributor/genai.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/guides/contributor/genai.md b/guides/contributor/genai.md
index bd295a8a..1fc9241c 100644
--- a/guides/contributor/genai.md
+++ b/guides/contributor/genai.md
@@ -29,7 +29,7 @@ understanding of the project to evaluate the LLM output, and to know when to
 accept or reject it. Therefore, we ask that contributors do not rely on LLM
 output as the sole basis for their contributions.
 
-Examples of this include: -
+Examples of this include:
 
 - Copying and pasting LLM output into issues or pull requests without any
   additional context or explanation.
@@ -73,3 +73,11 @@ _Q: How do I address contributors who are making consistent, low-effort contribu
 If an individual contributor continues to engage in low-effort PRs or issues,
 _and_ you have exhausted other avenues of communication, please escalate the
 situation to the OpenTelemetry Governance Committee.
+
+_Q: Can I use LLM or Generative AI tooling to assist in my own work as a maintainer?_
+In general, you should evaluate the output of LLMs -- regardless of how you use
+them -- in the same way you'd evaluate the output of a human contributor or
+non-AI tool. For example, tools like [Dosu](https://dosu.dev/) are being used in
+certain repositories to aid in code review and issue management. Remember that
+these tools can make mistakes, and use your best judgement when evaluating their
+output.