Make improvements in pandas-test reporting (#15485)
This PR fixes an issue where `listJobsForWorkflowRun` returns only 30 job details by default, so we need to paginate and load all of the remaining job details in order to filter jobs by name.
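For context (not part of this commit), a minimal sketch of the difference inside an `actions/github-script` step, assuming the same environment as the workflow change below (the injected `github` and `context` objects, plus the `WORKFLOW_RUN_ID` and `JOB_NAME` values that step defines):

```js
// Single-page call: returns at most one page of results (30 jobs by default),
// so a lookup by name can miss jobs on large workflow runs.
const firstPage = await github.rest.actions.listJobsForWorkflowRun({
  owner: context.repo.owner,
  repo: context.repo.repo,
  run_id: process.env.WORKFLOW_RUN_ID,
});
console.log(`first page only: ${firstPage.data.jobs.length} jobs`);

// Paginated call: github.paginate requests every page and returns one flat
// array of job objects, so find() sees the complete job list.
const allJobs = await github.paginate(
  github.rest.actions.listJobsForWorkflowRun,
  {
    owner: context.repo.owner,
    repo: context.repo.repo,
    run_id: process.env.WORKFLOW_RUN_ID,
  }
);
const job = allJobs.find((j) => j.name === JOB_NAME);
console.log(job ? `found job id ${job.id}` : 'job not found');
```

Because `github.paginate` flattens the per-page `jobs` arrays into a single array, the new workflow code can index `jobs.find(...)` directly instead of `jobs.data.jobs.find(...)`.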

This PR also addresses review comments in #15369.

Authors:
  - GALI PREM SAGAR (https://github.com/galipremsagar)

Approvers:
  - Ray Douglass (https://github.com/raydouglass)

URL: #15485
galipremsagar authored Apr 10, 2024
1 parent 15c148d commit b06536d
Showing 3 changed files with 15 additions and 9 deletions.
13 changes: 9 additions & 4 deletions .github/workflows/status.yaml
@@ -85,13 +85,18 @@ jobs:
   state: CUSTOM_STATE = 'success'
 } = contentJSON;
-// Fetch the first job ID from the workflow run
-const jobs = await github.rest.actions.listJobsForWorkflowRun({
+// Fetch all jobs using pagination
+const jobs = await github.paginate(
+  github.rest.actions.listJobsForWorkflowRun,
+  {
   owner: context.repo.owner,
   repo: context.repo.repo,
   run_id: process.env.WORKFLOW_RUN_ID,
-});
-const job = jobs.data.jobs.find(job => job.name === JOB_NAME);
+  }
+);
+// Fetch the first job ID from the workflow run
+const job = jobs.find(job => job.name === JOB_NAME);
 const JOB_ID = job ? job.id : null;
 // Set default target URL if not defined
2 changes: 1 addition & 1 deletion .github/workflows/test.yaml
@@ -130,7 +130,7 @@ jobs:
   secrets: inherit
   uses: rapidsai/shared-workflows/.github/workflows/[email protected]
   with:
-    matrix_filter: map(select(.ARCH == "amd64")) | group_by(.CUDA_VER|split(".")|map(tonumber)|.[0]) | map(max_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]))
+    matrix_filter: map(select(.ARCH == "amd64")) | group_by(.CUDA_VER|split(".")|map(tonumber)|.[0]) | map(min_by([(.PY_VER|split(".")|map(tonumber)), (.CUDA_VER|split(".")|map(tonumber))]))
   build_type: nightly
   branch: ${{ inputs.branch }}
   date: ${{ inputs.date }}
9 changes: 5 additions & 4 deletions ci/cudf_pandas_scripts/pandas-tests/diff.sh
@@ -10,12 +10,13 @@
 GH_JOB_NAME="pandas-tests-diff / build"
 rapids-logger "Github job name: ${GH_JOB_NAME}"

-MAIN_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py310.main-results.json
-PR_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py39.pr-results.json
+PY_VER="39"
+MAIN_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py${PY_VER}.main-results.json
+PR_ARTIFACT=$(rapids-s3-path)cuda12_$(arch)_py${PY_VER}.pr-results.json

 rapids-logger "Fetching latest available results from nightly"
-aws s3api list-objects-v2 --bucket rapids-downloads --prefix "nightly/" --query "sort_by(Contents[?ends_with(Key, '.main-results.json')], &LastModified)[::-1].[Key]" --output text > s3_output.txt
-cat s3_output.txt
+aws s3api list-objects-v2 --bucket rapids-downloads --prefix "nightly/" --query "sort_by(Contents[?ends_with(Key, '_py${PY_VER}.main-results.json')], &LastModified)[::-1].[Key]" --output text > s3_output.txt

 read -r COMPARE_ENV < s3_output.txt
 export COMPARE_ENV
 rapids-logger "Latest available results from nightly: ${COMPARE_ENV}"