
update job tool to support specific ENV vars #115

Open
wants to merge 1 commit into base: master
Conversation

jianzhangbjz
Contributor

As the title says. @LiZhang19817, could you help check whether https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739203704738811904 really used OMR_IMAGE=openshift-mirror-registry-rhel8:v1.3.8-2? Thanks!

MacBook-Pro:job jianzhang$ job run --envs OMR_IMAGE=openshift-mirror-registry-rhel8:v1.3.8-2 periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased 
Debug mode is off
{'job_execution_type': '1', 'pod_spec_options': {'envs': {'OMR_IMAGE': 'openshift-mirror-registry-rhel8:v1.3.8-2'}}}
Returned job id: 02a12b66-9a64-42db-8c28-bf785bbca501
periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased None 02a12b66-9a64-42db-8c28-bf785bbca501 2023-12-25T08:37:30Z https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739203704738811904
Done.
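For reference, the `--envs KEY=VALUE` flag is what produces the `pod_spec_options.envs` map shown in the debug line above. A minimal sketch of that translation (the `build_payload` helper is hypothetical, not part of the job tool; only the payload shape is taken from the debug output):

```shell
# Hypothetical helper: turn one KEY=VALUE pair from --envs into the
# payload shape printed by the tool's debug output.
build_payload() {
  local pair="$1"
  local key="${pair%%=*}"    # text before the first '='
  local value="${pair#*=}"   # text after the first '='
  printf '{"job_execution_type": "1", "pod_spec_options": {"envs": {"%s": "%s"}}}\n' \
    "$key" "$value"
}

build_payload OMR_IMAGE=openshift-mirror-registry-rhel8:v1.3.8-2
```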


openshift-ci bot commented Dec 25, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from jianzhangbjz. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jianzhangbjz jianzhangbjz changed the title support specific ENVs update job tool to support specific ENV vars Dec 25, 2023
@jianzhangbjz
Contributor Author

/assign @LiZhang19817


openshift-ci bot commented Dec 25, 2023

@jianzhangbjz: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@LiZhang19817

LiZhang19817 commented Dec 25, 2023

@jianzhangbjz The new Job is still using the default value: "aws_instance.quaybuilder (remote-exec): Trying to pull brew.registry.redhat.io/rh-osbs/openshift-mirror-registry-rhel8:v1.3.10-2..."

you can find the log here: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739203704738811904/artifacts/quay-omr-tests-omr-ocp415-disconnected-unreleased/quay-tests-provisioning-omr-disconnected/build-log.txt

@jianzhangbjz
Contributor Author

jianzhangbjz commented Dec 26, 2023

Hi @LiZhang19817, the env here can only be used via the dependencies, such as https://github.com/openshift/release/blob/master/ci-operator/config/openshift/openshift-tests-private/openshift-openshift-tests-private-release-4.15__arm64-nightly-4.15-upgrade-from-stable-4.13.yaml#L98-L102

      dependencies:
      - env: RELEASE_IMAGE_INTERMEDIATE414
        name: release:intermediate414
      - env: RELEASE_IMAGE_ARM64_TARGET
        name: release:arm64-target

Not

    env:
      EXTRACT_MANIFEST_INCLUDED: "true"
      OMR_IMAGE: openshift-mirror-registry-rhel8:v1.3.10-2
      OMR_RELEASE: "false"

But the dependencies are used for pulling the ImageStream, not any other image.
I'm investigating a way to set the env; for now, you can submit a PR to trigger the relevant test, such as openshift/release#47101.
/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 26, 2023
@LiZhang19817

> But, the dependencies used for pulling the ImageStream, not any other image. I'm investigating a way to call the env, you can use the PR submitting to trigger the relevant test. /hold

OK

@jianzhangbjz
Contributor Author

jianzhangbjz commented Dec 27, 2023

MacBook-Pro:release-tests jianzhang$ job run --envs OMR_IMAGE=openshift-mirror-registry-rhel8:v1.3.8-2 periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased 
Debug mode is off
{'job_execution_type': '1', 'pod_spec_options': {'envs': {'OMR_IMAGE': 'openshift-mirror-registry-rhel8:v1.3.8-2'}}}
Returned job id: 5b9d2cf6-459b-4b66-b02b-438aa06ebff1
periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased None 5b9d2cf6-459b-4b66-b02b-438aa06ebff1 2023-12-27T02:38:15Z https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739838069793624064
Done.

After checking that prow job's metadata, I found the env was created correctly, so the OMR_IMAGE value has already been passed to the job container. The reason it still uses the old value is that the env configured in the step overrides the job's env. So I submitted openshift/release#47102 to try it.
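The override behavior is easy to illustrate with plain shell semantics (illustration only, not ci-operator code): whichever env assignment is applied last wins, and the step-level env is applied after the job-level one.

```shell
# Illustration only: assignments applied later override earlier ones,
# mirroring how the step's env overrides the env injected into the job pod.
OMR_IMAGE="openshift-mirror-registry-rhel8:v1.3.8-2"   # job-level, from --envs
OMR_IMAGE="openshift-mirror-registry-rhel8:v1.3.10-2"  # step-level, applied later
echo "$OMR_IMAGE"
```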

MacBook-Pro:~ jianzhang$ curl -X GET -H "Authorization: Bearer ${PROW_TOKEN}" https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/prowjob?prowjob=5b9d2cf6-459b-4b66-b02b-438aa06ebff1
metadata:
  annotations:
    executor: gangway
    prow.k8s.io/context: ""
    prow.k8s.io/job: periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased
  creationTimestamp: "2023-12-27T02:38:15Z"
  generation: 4
  labels:
    ci-operator.openshift.io/cloud: aws
    ci-operator.openshift.io/cloud-cluster-profile: aws-qe
    ci-operator.openshift.io/variant: omr-ocp415-unreleased
    ci.openshift.io/generator: prowgen
    created-by-prow: "true"
    job-release: "4.15"
    pj-rehearse.openshift.io/can-be-rehearsed: "true"
    prow.k8s.io/build-id: "1739838069793624064"
    prow.k8s.io/context: ""
    prow.k8s.io/id: 5b9d2cf6-459b-4b66-b02b-438aa06ebff1
    prow.k8s.io/job: periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-o
    prow.k8s.io/refs.base_ref: master
    prow.k8s.io/refs.org: quay
    prow.k8s.io/refs.repo: quay-tests
    prow.k8s.io/type: periodic
  name: 5b9d2cf6-459b-4b66-b02b-438aa06ebff1
  namespace: ci
  resourceVersion: "3619378294"
  uid: aca62a05-16d2-4cb0-b459-12015a9cbe58
spec:
  agent: kubernetes
  cluster: build03
  decoration_config:
    censor_secrets: true
    gcs_configuration:
      bucket: origin-ci-test
      default_org: openshift
      default_repo: origin
      mediaTypes:
        log: text/plain
      path_strategy: single
    gcs_credentials_secret: gce-sa-credentials-gcs-publisher
    grace_period: 1h0m0s
    resources:
      clonerefs:
        limits:
          memory: 3Gi
        requests:
          cpu: 100m
          memory: 500Mi
      initupload:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 50Mi
      place_entrypoint:
        limits:
          memory: 100Mi
        requests:
          cpu: 100m
          memory: 25Mi
      sidecar:
        limits:
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 250Mi
    skip_cloning: true
    timeout: 8h0m0s
    utility_images:
      clonerefs: gcr.io/k8s-prow/clonerefs:v20231206-f5c8e5872b
      entrypoint: gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b
      initupload: gcr.io/k8s-prow/initupload:v20231206-f5c8e5872b
      sidecar: gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b
  extra_refs:
  - base_ref: master
    org: quay
    repo: quay-tests
  job: periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased
  namespace: ci
  pod_spec:
    containers:
    - args:
      - --gcs-upload-secret=/secrets/gcs/service-account.json
      - --image-import-pull-secret=/etc/pull-secret/.dockerconfigjson
      - --lease-server-credentials-file=/etc/boskos/credentials
      - --oauth-token-path=/usr/local/github-credentials/oauth
      - --report-credentials-file=/etc/report/credentials
      - --secret-dir=/secrets/ci-pull-credentials
      - --secret-dir=/usr/local/quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile
      - --target=quay-omr-tests-omr-ocp415-disconnected-unreleased
      - --variant=omr-ocp415-unreleased
      command:
      - ci-operator
      env:
      - name: OMR_IMAGE
        value: openshift-mirror-registry-rhel8:v1.3.8-2
      image: ci-operator:latest
      imagePullPolicy: Always
      name: ""
      resources:
        requests:
          cpu: 10m
      volumeMounts:
      - mountPath: /etc/boskos
        name: boskos
        readOnly: true
      - mountPath: /secrets/ci-pull-credentials
        name: ci-pull-credentials
        readOnly: true
      - mountPath: /usr/local/quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile
        name: cluster-profile
      - mountPath: /secrets/gcs
        name: gcs-credentials
        readOnly: true
      - mountPath: /usr/local/github-credentials
        name: github-credentials-openshift-ci-robot-private-git-cloner
        readOnly: true
      - mountPath: /secrets/manifest-tool
        name: manifest-tool-local-pusher
        readOnly: true
      - mountPath: /etc/pull-secret
        name: pull-secret
        readOnly: true
      - mountPath: /etc/report
        name: result-aggregator
        readOnly: true
    serviceAccountName: ci-operator
    volumes:
    - name: boskos
      secret:
        items:
        - key: credentials
          path: credentials
        secretName: boskos-credentials
    - name: ci-pull-credentials
      secret:
        secretName: ci-pull-credentials
    - name: cluster-profile
      secret:
        secretName: cluster-secrets-aws-qe
    - name: github-credentials-openshift-ci-robot-private-git-cloner
      secret:
        secretName: github-credentials-openshift-ci-robot-private-git-cloner
    - name: manifest-tool-local-pusher
      secret:
        secretName: manifest-tool-local-pusher
    - name: pull-secret
      secret:
        secretName: registry-pull-credentials
    - name: result-aggregator
      secret:
        secretName: result-aggregator
  prowjob_defaults:
    tenant_id: GlobalDefaultID
  report: true
  reporter_config:
    slack:
      channel: '#quay-qe'
      job_states_to_report:
      - success
      - failure
      - error
      report: true
      report_template: '{{if eq .Status.State "success"}} :rainbow: Job *{{.Spec.Job}}*
        ended with *{{.Status.State}}*. <{{.Status.URL}}|View logs> :rainbow: {{else}}
        :volcano: Job *{{.Spec.Job}}* ended with *{{.Status.State}}*. <{{.Status.URL}}|View
        logs> :volcano: {{end}}'
  type: periodic
status:
  build_id: "1739838069793624064"
  description: Job triggered.
  pendingTime: "2023-12-27T02:38:15Z"
  pod_name: 5b9d2cf6-459b-4b66-b02b-438aa06ebff1
  prev_report_states:
    gcsk8sreporter: pending
    gcsreporter: pending
  startTime: "2023-12-27T02:38:15Z"
  state: pending
  url: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739838069793624064

@jianzhangbjz
Contributor Author

Rerun, waiting for https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739936446791290880

MacBook-Pro:~ jianzhang$ job run --envs OMR_IMAGE_ENV=openshift-mirror-registry-rhel8:v1.3.8-2 periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased
Debug mode is off
{'job_execution_type': '1', 'pod_spec_options': {'envs': {'OMR_IMAGE_ENV': 'openshift-mirror-registry-rhel8:v1.3.8-2'}}}
Returned job id: 112f6ccb-38bb-4754-8d64-916e5f3e8f32
periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased None 112f6ccb-38bb-4754-8d64-916e5f3e8f32 2023-12-27T09:09:10Z https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739936446791290880
Done.

@jianzhangbjz
Contributor Author

After checking https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased/1739936446791290880/artifacts/quay-omr-tests-omr-ocp415-disconnected-unreleased/quay-tests-provisioning-omr-disconnected/build-log.txt, I found the OMR_IMAGE_TAG still uses the default openshift-mirror-registry-rhel8:v1.3.10-2. The --envs OMR_IMAGE_ENV=openshift-mirror-registry-rhel8:v1.3.8-2 doesn't work.

aws_instance.quaybuilder (remote-exec): brew.registry.redhat.io/rh-osbs/openshift-mirror-registry-rhel8:v1.3.10-2
aws_instance.quaybuilder (remote-exec): Trying to pull brew.registry.redhat.io/rh-osbs/openshift-mirror-registry-rhel8:v1.3.10-2...

I also checked that step's pod and found that OMR_IMAGE_ENV is not set as an env var there. Trying another way.
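The step script embedded in the pod spec below selects the image with a `${OMR_IMAGE_ENV+x}` guard, which distinguishes "unset" from "set but empty". A self-contained sketch of that logic (the `pick_image` function name is mine, for illustration):

```shell
# Mirrors the guard in the step script: ${VAR+x} expands to "x" whenever VAR
# is set at all, so -z on it is true only when OMR_IMAGE_ENV is unset and we
# must fall back to the default OMR_IMAGE.
pick_image() {
  if [ -z "${OMR_IMAGE_ENV+x}" ]; then
    echo "brew.registry.redhat.io/rh-osbs/${OMR_IMAGE}"
  else
    echo "brew.registry.redhat.io/rh-osbs/${OMR_IMAGE_ENV}"
  fi
}

OMR_IMAGE="openshift-mirror-registry-rhel8:v1.3.10-2"
unset OMR_IMAGE_ENV
pick_image    # default image is used

OMR_IMAGE_ENV="openshift-mirror-registry-rhel8:v1.3.8-2"
pick_image    # override wins
```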

[cloud-user@preserve-olm-env2 jian]$ oc get pods quay-omr-tests-omr-ocp415-disconnected-unreleased-quay-tests-provisioning-omr-disconnected -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    ci-operator.openshift.io/container-sub-tests: test
    ci-operator.openshift.io/save-container-logs: "true"
    ci.openshift.io/job-spec: '{"type":"presubmit","job":"rehearse-47102-periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased","buildid":"1739903460821700608","prowjobid":"b2899b96-37a9-4641-a908-64dbbb81a76a","refs":{"org":"openshift","repo":"release","base_ref":"master","base_sha":"9bd7ef0f9407c1bceeea34990741352363e89533","pulls":[{"number":47102,"author":"jianzhangbjz","sha":"cba0845e060328fc93224d8d75cea8920b21adc0","title":"use
      env for quay test","link":"https://github.com/openshift/release/pull/47102"}]},"extra_refs":[{"org":"quay","repo":"quay-tests","base_ref":"master","workdir":true}],"decoration_config":{"timeout":"8h0m0s","grace_period":"1h0m0s","utility_images":{"clonerefs":"gcr.io/k8s-prow/clonerefs:v20231206-f5c8e5872b","initupload":"gcr.io/k8s-prow/initupload:v20231206-f5c8e5872b","entrypoint":"gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b","sidecar":"gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b"},"resources":{"clonerefs":{"limits":{"memory":"3Gi"},"requests":{"cpu":"100m","memory":"500Mi"}},"initupload":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"place_entrypoint":{"limits":{"memory":"100Mi"},"requests":{"cpu":"100m","memory":"25Mi"}},"sidecar":{"limits":{"memory":"2Gi"},"requests":{"cpu":"100m","memory":"250Mi"}}},"gcs_configuration":{"bucket":"origin-ci-test","path_strategy":"single","default_org":"openshift","default_repo":"origin","mediaTypes":{"log":"text/plain"},"job_url_prefix":"https://prow.ci.openshift.org/view/"},"gcs_credentials_secret":"gce-sa-credentials-gcs-publisher","skip_cloning":true,"censor_secrets":true}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.129.47.90"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: restricted-v2
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2023-12-27T07:11:17Z"
  labels:
    OPENSHIFT_CI: "true"
    ci-workload: tests
    ci-workload-namespace: ci-op-vm71ljpg
    ci.openshift.io/metadata.branch: master
    ci.openshift.io/metadata.org: quay
    ci.openshift.io/metadata.repo: quay-tests
    ci.openshift.io/metadata.step: quay-tests-provisioning-omr-disconnected
    ci.openshift.io/metadata.target: quay-omr-tests-omr-ocp415-disconnected-unreleased
    ci.openshift.io/metadata.variant: omr-ocp415-unreleased
    ci.openshift.io/multi-stage-test: quay-omr-tests-omr-ocp415-disconnected-unreleased
    created-by-ci: "true"
  name: quay-omr-tests-omr-ocp415-disconnected-unreleased-quay-tests-provisioning-omr-disconnected
  namespace: ci-op-vm71ljpg
  ownerReferences:
  - apiVersion: image.openshift.io/v1
    kind: ImageStream
    name: pipeline
    uid: b84271c2-5410-46fd-a84b-48a213c06c70
  resourceVersion: "3318463590"
  uid: d1c8f311-6984-4173-9357-8edcfa6ad589
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - ip-10-0-233-22.ec2.internal
  containers:
  - args:
    - /tools/entrypoint
    command:
    - /tmp/entrypoint-wrapper/entrypoint-wrapper
    env:
    - name: BUILD_ID
      value: "1739903460821700608"
    - name: CI
      value: "true"
    - name: JOB_NAME
      value: rehearse-47102-periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased
    - name: JOB_SPEC
      value: '{"type":"presubmit","job":"rehearse-47102-periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased","buildid":"1739903460821700608","prowjobid":"b2899b96-37a9-4641-a908-64dbbb81a76a","refs":{"org":"openshift","repo":"release","base_ref":"master","base_sha":"9bd7ef0f9407c1bceeea34990741352363e89533","pulls":[{"number":47102,"author":"jianzhangbjz","sha":"cba0845e060328fc93224d8d75cea8920b21adc0","title":"use
        env for quay test","link":"https://github.com/openshift/release/pull/47102"}]},"extra_refs":[{"org":"quay","repo":"quay-tests","base_ref":"master","workdir":true}],"decoration_config":{"timeout":"2h0m0s","grace_period":"15s","utility_images":{"clonerefs":"gcr.io/k8s-prow/clonerefs:v20231206-f5c8e5872b","initupload":"gcr.io/k8s-prow/initupload:v20231206-f5c8e5872b","entrypoint":"gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b","sidecar":"gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b"},"resources":{"clonerefs":{"limits":{"memory":"3Gi"},"requests":{"cpu":"100m","memory":"500Mi"}},"initupload":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"place_entrypoint":{"limits":{"memory":"100Mi"},"requests":{"cpu":"100m","memory":"25Mi"}},"sidecar":{"limits":{"memory":"2Gi"},"requests":{"cpu":"100m","memory":"250Mi"}}},"gcs_configuration":{"bucket":"origin-ci-test","path_strategy":"single","default_org":"openshift","default_repo":"origin","mediaTypes":{"log":"text/plain"},"job_url_prefix":"https://prow.ci.openshift.org/view/"},"gcs_credentials_secret":"gce-sa-credentials-gcs-publisher","skip_cloning":true,"censor_secrets":true}}'
    - name: JOB_TYPE
      value: presubmit
    - name: OPENSHIFT_CI
      value: "true"
    - name: PROW_JOB_ID
      value: b2899b96-37a9-4641-a908-64dbbb81a76a
    - name: PULL_BASE_REF
      value: master
    - name: PULL_BASE_SHA
      value: 9bd7ef0f9407c1bceeea34990741352363e89533
    - name: PULL_HEAD_REF
    - name: PULL_NUMBER
      value: "47102"
    - name: PULL_PULL_SHA
      value: cba0845e060328fc93224d8d75cea8920b21adc0
    - name: PULL_REFS
      value: master:9bd7ef0f9407c1bceeea34990741352363e89533,47102:cba0845e060328fc93224d8d75cea8920b21adc0
    - name: PULL_TITLE
      value: use env for quay test
    - name: REPO_NAME
      value: release
    - name: REPO_OWNER
      value: openshift
    - name: GIT_CONFIG_COUNT
      value: "1"
    - name: GIT_CONFIG_KEY_0
      value: safe.directory
    - name: GIT_CONFIG_VALUE_0
      value: '*'
    - name: ENTRYPOINT_OPTIONS
      value: '{"timeout":7200000000000,"grace_period":15000000000,"artifact_dir":"/logs/artifacts","args":["/bin/bash","-c","#!/bin/bash\nset
        -eu\n#!/bin/bash\n\nset -o nounset\nset -o errexit\nset -o pipefail\n\n#Check
        podman and skopeo version\npodman -v\nskopeo -v\nHOME_PATH=$(pwd) \u0026\u0026
        echo $HOME_PATH\n\n#Create new AWS EC2 Instatnce to deploy Quay OMR\nOMR_AWS_ACCESS_KEY=$(cat
        /var/run/quay-qe-omr-secret/access_key)\nOMR_AWS_SECRET_KEY=$(cat /var/run/quay-qe-omr-secret/secret_key)\n\n#Retrieve
        the Credentials of image registry \"brew.registry.redhat.io\"\nOMR_BREW_USERNAME=$(cat
        /var/run/quay-qe-brew-secret/username)\nOMR_BREW_PASSWORD=$(cat /var/run/quay-qe-brew-secret/password)\nif
        [ -z \"${OMR_IMAGE_ENV+x}\" ]; then\n    OMR_IMAGE_TAG=\"brew.registry.redhat.io/rh-osbs/${OMR_IMAGE}\"\nelse\n   OMR_IMAGE_TAG=\"brew.registry.redhat.io/rh-osbs/${OMR_IMAGE_ENV}\"\nfi\nOMR_RELEASED_TEST=\"${OMR_RELEASE}\"\nOMR_CI_NAME=\"omrprowci$RANDOM\"\n\n####################\n#
        get vpc id and public subnet from disconnected AWS VPC\nVpcId=$(cat \"${SHARED_DIR}/vpc_id\")\necho
        \"VpcId: $VpcId\"\n\nPublicSubnet=$(cat \"${SHARED_DIR}/public_subnet_ids\"
        | yq ''.[0]'')\necho \"PublicSubnet: $PublicSubnet\"\n\n# get AWS region\nREGION=\"${LEASED_RESOURCE}\"\necho
        \"REGION: $REGION\"\n####################\n\ncat \u003e\u003eomr-ami-images.json
        \u003c\u003cEOF\n{\n  \"images\": {\n    \"aws\": {\n      \"regions\": {\n        \"us-east-1\":
        {\n          \"release\": \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-02e0bb36c61bb9715\"\n        },\n        \"us-east-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0b2e47f3b2e23d235\"\n        },\n        \"us-west-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-054965c6cd7c6e462\"\n        },\n        \"us-west-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0b28dfc7adc325ef4\"\n        },\n        \"ap-northeast-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0cf31bd68732fb0e2\"\n        },\n        \"ap-southeast-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-016461ac55b16fd05\"\n        },\n        \"ap-northeast-3\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-08daa4649f61b8684\"\n        },\n        \"ap-southeast-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0d6ba217f554f6137\"\n        },\n        \"ap-northeast-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0bb1758bf5a69ca5c\"\n        }\n      }\n    }\n  }\n}\nEOF\n\nami_id=$(jq
        -r .images.aws.regions[\\\"${REGION}\\\"].image \u003comr-ami-images.json)\n\nmkdir
        -p terraform_omr \u0026\u0026 cd terraform_omr\n\ncat \u003e\u003evariables.tf
        \u003c\u003cEOF\nvariable \"quay_build_worker_key\" {\n}\nvariable \"quay_build_worker_security_group\"
        {\n}\nvariable \"quay_build_instance_name\" {\n}\nEOF\n\ncat \u003e\u003ecreate_aws_ec2.tf
        \u003c\u003cEOF\nprovider \"aws\" {\n  region = \"${REGION}\"\n  access_key
        = \"${OMR_AWS_ACCESS_KEY}\"\n  secret_key = \"${OMR_AWS_SECRET_KEY}\"\n}\nresource
        \"aws_key_pair\" \"quaybuilder0710\" {\n  key_name   = var.quay_build_worker_key\n  public_key
        = file(\"./quaybuilder.pub\")\n}\nresource \"aws_security_group\" \"quaybuilder\"
        {\n  name        = var.quay_build_worker_security_group\n  description = \"Allow
        all inbound traffic\"\n  vpc_id      = \"${VpcId}\"\n  ingress {\n    description
        = \"traffic into quaybuilder VPC\"\n    from_port   = 0\n    to_port     =
        0\n    protocol    = \"-1\"\n    cidr_blocks = [\"0.0.0.0/0\"]\n  }\n  egress
        {\n    from_port   = 0\n    to_port     = 0\n    protocol    = \"-1\"\n    cidr_blocks
        = [\"0.0.0.0/0\"]\n  }\n}\nresource \"aws_instance\" \"quaybuilder\" {\n  key_name      =
        aws_key_pair.quaybuilder0710.key_name\n  ami           = \"${ami_id}\"\n  instance_type
        = \"m4.xlarge\"\n  associate_public_ip_address = true\n  vpc_security_group_ids
        = [aws_security_group.quaybuilder.id]\n  subnet_id = \"${PublicSubnet}\"\n  \n  ebs_block_device
        {\n    device_name = \"/dev/sda1\"\n    volume_size = 200\n  }\n  provisioner
        \"remote-exec\" {\n    inline = [\n      \"sudo yum install podman openssl
        -y\",\n      \"podman login brew.registry.redhat.io -u ''${OMR_BREW_USERNAME}''
        -p ${OMR_BREW_PASSWORD}\",\n      \"echo ${OMR_IMAGE_TAG}\",\n      \"if [
        ${OMR_RELEASED_TEST} = false ]; then podman cp \\$(podman create --rm ${OMR_IMAGE_TAG}):/mirror-registry.tar.gz
        .; fi\",\n      \"if [ ${OMR_RELEASED_TEST} = true ]; then curl -L -o mirror-registry.tar.gz
        https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz
        --retry 12; fi\",\n      \"tar -xzvf mirror-registry.tar.gz\",\n      \"./mirror-registry
        --version\",\n      \"./mirror-registry install --quayHostname \\${aws_instance.quaybuilder.public_dns}
        --initPassword password --initUser quay -v\"\n    ]\n  }\n  connection {\n    type        =
        \"ssh\"\n    host        = self.public_ip\n    user        = \"ec2-user\"\n    private_key
        = file(\"./quaybuilder\")\n  }\n  tags = {\n    Name = var.quay_build_instance_name\n  }\n}\noutput
        \"instance_public_dns\" {\n  value = aws_instance.quaybuilder.public_dns\n}\nEOF\n\ncp
        /var/run/quay-qe-omr-secret/quaybuilder . \u0026\u0026 cp /var/run/quay-qe-omr-secret/quaybuilder.pub
        .\nchmod 600 ./quaybuilder \u0026\u0026 chmod 600 ./quaybuilder.pub \u0026\u0026
        echo \"\" \u003e\u003equaybuilder\n\nexport TF_VAR_quay_build_instance_name=\"${OMR_CI_NAME}\"\nexport
        TF_VAR_quay_build_worker_key=\"${OMR_CI_NAME}\"\nexport TF_VAR_quay_build_worker_security_group=\"${OMR_CI_NAME}\"\nterraform
        init\nterraform apply -auto-approve\n\n#Share the OMR HOSTNAME, Terraform
        Var and Terraform Directory\ntar -cvzf terraform.tgz --exclude=\".terraform\"
        *\ncp terraform.tgz ${SHARED_DIR}\n\n#Use Terraform to output the Public DNS
        Name of Quay OMR\nOMR_HOST_NAME=$(terraform output instance_public_dns | tr
        -d ''\"'')\necho \"OMR HOST NAME is $OMR_HOST_NAME\"\n\necho \"${OMR_HOST_NAME}\"
        \u003e${SHARED_DIR}/OMR_HOST_NAME\necho \"${OMR_CI_NAME}\" \u003e${SHARED_DIR}/OMR_CI_NAME\n\n#Share
        the CA Cert of Quay OMR\nscp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/tmp/ssh_known_hosts
        -o VerifyHostKeyDNS=no -o ConnectionAttempts=3 -i quaybuilder ec2-user@\"${OMR_HOST_NAME}\":/home/ec2-user/quay-install/quay-rootCA/rootCA.pem
        ${SHARED_DIR} || true\n\n#Test OMR by push image\nskopeo copy docker://docker.io/fedora@sha256:895cdfba5eb6a009a26576cb2a8bc199823ca7158519e36e4d9effcc8b951b47
        docker://\"${OMR_HOST_NAME}\":8443/quaytest/test:latest --dest-tls-verify=false
        --dest-creds quay:password || true\n"],"container_name":"test","process_log":"/logs/process-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.json"}'
    - name: ARTIFACT_DIR
      value: /logs/artifacts
    - name: NAMESPACE
      value: ci-op-vm71ljpg
    - name: JOB_NAME_SAFE
      value: quay-omr-tests-omr-ocp415-disconnected-unreleased
    - name: JOB_NAME_HASH
      value: 2dc1d
    - name: UNIQUE_HASH
      value: 2dc1d
    - name: LEASED_RESOURCE
      value: us-east-1
    - name: RELEASE_IMAGE_LATEST
      value: registry.build03.ci.openshift.org/ci-op-vm71ljpg/release@sha256:1bdf18a4b55d005ff97f4eec64e4166db7ea8ce61469a62758a6d91c73f27123
    - name: IMAGE_FORMAT
    - name: OMR_RELEASE
      value: "false"
    - name: OMR_IMAGE
      value: openshift-mirror-registry-rhel8:v1.3.10-2
    - name: KUBECONFIG
      value: /var/run/secrets/ci.openshift.io/multi-stage/kubeconfig
    - name: KUBECONFIGMINIMAL
      value: /var/run/secrets/ci.openshift.io/multi-stage/kubeconfig-minimal
    - name: KUBEADMIN_PASSWORD_FILE
      value: /var/run/secrets/ci.openshift.io/multi-stage/kubeadmin-password
    - name: CLUSTER_PROFILE_NAME
      value: aws-qe
    - name: CLUSTER_TYPE
      value: aws
    - name: CLUSTER_PROFILE_DIR
      value: /var/run/secrets/ci.openshift.io/cluster-profile
    - name: CLI_DIR
      value: /cli
    - name: SHARED_DIR
      value: /var/run/secrets/ci.openshift.io/multi-stage
    image: image-registry.openshift-image-registry.svc:5000/ci-op-vm71ljpg/pipeline@sha256:e45d1ceecb8d04f952c2d1fe96e8fd2166378bea4eab57185350c84396f4da18
    imagePullPolicy: IfNotPresent
    name: test
    resources:
      requests:
        cpu: 10m
        memory: 100Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /logs
      name: logs
    - mountPath: /tools
      name: tools
    - mountPath: /alabama
      name: home
    - mountPath: /tmp/entrypoint-wrapper
      name: entrypoint-wrapper
    - mountPath: /var/run/secrets/ci.openshift.io/cluster-profile
      name: cluster-profile
    - mountPath: /cli
      name: cli
    - mountPath: /var/run/secrets/ci.openshift.io/multi-stage
      name: quay-omr-tests-omr-ocp415-disconnected-unreleased
    - mountPath: /var/run/quay-qe-omr-secret
      name: test-credentials-quay-qe-omr-secret
    - mountPath: /var/run/quay-qe-brew-secret
      name: test-credentials-quay-qe-brew-secret
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  - env:
    - name: JOB_SPEC
      value: '{"type":"presubmit","job":"rehearse-47102-periodic-ci-quay-quay-tests-master-omr-ocp415-unreleased-quay-omr-tests-omr-ocp415-disconnected-unreleased","buildid":"1739903460821700608","prowjobid":"b2899b96-37a9-4641-a908-64dbbb81a76a","refs":{"org":"openshift","repo":"release","base_ref":"master","base_sha":"9bd7ef0f9407c1bceeea34990741352363e89533","pulls":[{"number":47102,"author":"jianzhangbjz","sha":"cba0845e060328fc93224d8d75cea8920b21adc0","title":"use
        env for quay test","link":"https://github.com/openshift/release/pull/47102"}]},"extra_refs":[{"org":"quay","repo":"quay-tests","base_ref":"master","workdir":true}],"decoration_config":{"timeout":"8h0m0s","grace_period":"1h0m0s","utility_images":{"clonerefs":"gcr.io/k8s-prow/clonerefs:v20231206-f5c8e5872b","initupload":"gcr.io/k8s-prow/initupload:v20231206-f5c8e5872b","entrypoint":"gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b","sidecar":"gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b"},"resources":{"clonerefs":{"limits":{"memory":"3Gi"},"requests":{"cpu":"100m","memory":"500Mi"}},"initupload":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"50Mi"}},"place_entrypoint":{"limits":{"memory":"100Mi"},"requests":{"cpu":"100m","memory":"25Mi"}},"sidecar":{"limits":{"memory":"2Gi"},"requests":{"cpu":"100m","memory":"250Mi"}}},"gcs_configuration":{"bucket":"origin-ci-test","path_strategy":"single","default_org":"openshift","default_repo":"origin","mediaTypes":{"log":"text/plain"},"job_url_prefix":"https://prow.ci.openshift.org/view/"},"gcs_credentials_secret":"gce-sa-credentials-gcs-publisher","skip_cloning":true,"censor_secrets":true}}'
    - name: SIDECAR_OPTIONS
      value: '{"gcs_options":{"items":["/logs/artifacts"],"sub_dir":"artifacts/quay-omr-tests-omr-ocp415-disconnected-unreleased/quay-tests-provisioning-omr-disconnected","bucket":"origin-ci-test","path_strategy":"single","default_org":"openshift","default_repo":"origin","mediaTypes":{"log":"text/plain"},"job_url_prefix":"https://prow.ci.openshift.org/view/","gcs_credentials_file":"/secrets/gcs/service-account.json","dry_run":false},"entries":[{"args":["/bin/bash","-c","#!/bin/bash\nset
        -eu\n#!/bin/bash\n\nset -o nounset\nset -o errexit\nset -o pipefail\n\n#Check
        podman and skopeo version\npodman -v\nskopeo -v\nHOME_PATH=$(pwd) \u0026\u0026
        echo $HOME_PATH\n\n#Create new AWS EC2 Instatnce to deploy Quay OMR\nOMR_AWS_ACCESS_KEY=$(cat
        /var/run/quay-qe-omr-secret/access_key)\nOMR_AWS_SECRET_KEY=$(cat /var/run/quay-qe-omr-secret/secret_key)\n\n#Retrieve
        the Credentials of image registry \"brew.registry.redhat.io\"\nOMR_BREW_USERNAME=$(cat
        /var/run/quay-qe-brew-secret/username)\nOMR_BREW_PASSWORD=$(cat /var/run/quay-qe-brew-secret/password)\nif
        [ -z \"${OMR_IMAGE_ENV+x}\" ]; then\n    OMR_IMAGE_TAG=\"brew.registry.redhat.io/rh-osbs/${OMR_IMAGE}\"\nelse\n   OMR_IMAGE_TAG=\"brew.registry.redhat.io/rh-osbs/${OMR_IMAGE_ENV}\"\nfi\nOMR_RELEASED_TEST=\"${OMR_RELEASE}\"\nOMR_CI_NAME=\"omrprowci$RANDOM\"\n\n####################\n#
        get vpc id and public subnet from disconnected AWS VPC\nVpcId=$(cat \"${SHARED_DIR}/vpc_id\")\necho
        \"VpcId: $VpcId\"\n\nPublicSubnet=$(cat \"${SHARED_DIR}/public_subnet_ids\"
        | yq ''.[0]'')\necho \"PublicSubnet: $PublicSubnet\"\n\n# get AWS region\nREGION=\"${LEASED_RESOURCE}\"\necho
        \"REGION: $REGION\"\n####################\n\ncat \u003e\u003eomr-ami-images.json
        \u003c\u003cEOF\n{\n  \"images\": {\n    \"aws\": {\n      \"regions\": {\n        \"us-east-1\":
        {\n          \"release\": \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-02e0bb36c61bb9715\"\n        },\n        \"us-east-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0b2e47f3b2e23d235\"\n        },\n        \"us-west-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-054965c6cd7c6e462\"\n        },\n        \"us-west-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0b28dfc7adc325ef4\"\n        },\n        \"ap-northeast-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0cf31bd68732fb0e2\"\n        },\n        \"ap-southeast-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-016461ac55b16fd05\"\n        },\n        \"ap-northeast-3\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-08daa4649f61b8684\"\n        },\n        \"ap-southeast-1\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0d6ba217f554f6137\"\n        },\n        \"ap-northeast-2\": {\n          \"release\":
        \"RHEL_HA-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2\",\n          \"image\":
        \"ami-0bb1758bf5a69ca5c\"\n        }\n      }\n    }\n  }\n}\nEOF\n\nami_id=$(jq
        -r .images.aws.regions[\\\"${REGION}\\\"].image \u003comr-ami-images.json)\n\nmkdir
        -p terraform_omr \u0026\u0026 cd terraform_omr\n\ncat \u003e\u003evariables.tf
        \u003c\u003cEOF\nvariable \"quay_build_worker_key\" {\n}\nvariable \"quay_build_worker_security_group\"
        {\n}\nvariable \"quay_build_instance_name\" {\n}\nEOF\n\ncat \u003e\u003ecreate_aws_ec2.tf
        \u003c\u003cEOF\nprovider \"aws\" {\n  region = \"${REGION}\"\n  access_key
        = \"${OMR_AWS_ACCESS_KEY}\"\n  secret_key = \"${OMR_AWS_SECRET_KEY}\"\n}\nresource
        \"aws_key_pair\" \"quaybuilder0710\" {\n  key_name   = var.quay_build_worker_key\n  public_key
        = file(\"./quaybuilder.pub\")\n}\nresource \"aws_security_group\" \"quaybuilder\"
        {\n  name        = var.quay_build_worker_security_group\n  description = \"Allow
        all inbound traffic\"\n  vpc_id      = \"${VpcId}\"\n  ingress {\n    description
        = \"traffic into quaybuilder VPC\"\n    from_port   = 0\n    to_port     =
        0\n    protocol    = \"-1\"\n    cidr_blocks = [\"0.0.0.0/0\"]\n  }\n  egress
        {\n    from_port   = 0\n    to_port     = 0\n    protocol    = \"-1\"\n    cidr_blocks
        = [\"0.0.0.0/0\"]\n  }\n}\nresource \"aws_instance\" \"quaybuilder\" {\n  key_name      =
        aws_key_pair.quaybuilder0710.key_name\n  ami           = \"${ami_id}\"\n  instance_type
        = \"m4.xlarge\"\n  associate_public_ip_address = true\n  vpc_security_group_ids
        = [aws_security_group.quaybuilder.id]\n  subnet_id = \"${PublicSubnet}\"\n  \n  ebs_block_device
        {\n    device_name = \"/dev/sda1\"\n    volume_size = 200\n  }\n  provisioner
        \"remote-exec\" {\n    inline = [\n      \"sudo yum install podman openssl
        -y\",\n      \"podman login brew.registry.redhat.io -u ''${OMR_BREW_USERNAME}''
        -p ${OMR_BREW_PASSWORD}\",\n      \"echo ${OMR_IMAGE_TAG}\",\n      \"if [
        ${OMR_RELEASED_TEST} = false ]; then podman cp \\$(podman create --rm ${OMR_IMAGE_TAG}):/mirror-registry.tar.gz
        .; fi\",\n      \"if [ ${OMR_RELEASED_TEST} = true ]; then curl -L -o mirror-registry.tar.gz
        https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/mirror-registry/latest/mirror-registry.tar.gz
        --retry 12; fi\",\n      \"tar -xzvf mirror-registry.tar.gz\",\n      \"./mirror-registry
        --version\",\n      \"./mirror-registry install --quayHostname \\${aws_instance.quaybuilder.public_dns}
        --initPassword password --initUser quay -v\"\n    ]\n  }\n  connection {\n    type        =
        \"ssh\"\n    host        = self.public_ip\n    user        = \"ec2-user\"\n    private_key
        = file(\"./quaybuilder\")\n  }\n  tags = {\n    Name = var.quay_build_instance_name\n  }\n}\noutput
        \"instance_public_dns\" {\n  value = aws_instance.quaybuilder.public_dns\n}\nEOF\n\ncp
        /var/run/quay-qe-omr-secret/quaybuilder . \u0026\u0026 cp /var/run/quay-qe-omr-secret/quaybuilder.pub
        .\nchmod 600 ./quaybuilder \u0026\u0026 chmod 600 ./quaybuilder.pub \u0026\u0026
        echo \"\" \u003e\u003equaybuilder\n\nexport TF_VAR_quay_build_instance_name=\"${OMR_CI_NAME}\"\nexport
        TF_VAR_quay_build_worker_key=\"${OMR_CI_NAME}\"\nexport TF_VAR_quay_build_worker_security_group=\"${OMR_CI_NAME}\"\nterraform
        init\nterraform apply -auto-approve\n\n#Share the OMR HOSTNAME, Terraform
        Var and Terraform Directory\ntar -cvzf terraform.tgz --exclude=\".terraform\"
        *\ncp terraform.tgz ${SHARED_DIR}\n\n#Use Terraform to output the Public DNS
        Name of Quay OMR\nOMR_HOST_NAME=$(terraform output instance_public_dns | tr
        -d ''\"'')\necho \"OMR HOST NAME is $OMR_HOST_NAME\"\n\necho \"${OMR_HOST_NAME}\"
        \u003e${SHARED_DIR}/OMR_HOST_NAME\necho \"${OMR_CI_NAME}\" \u003e${SHARED_DIR}/OMR_CI_NAME\n\n#Share
        the CA Cert of Quay OMR\nscp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/tmp/ssh_known_hosts
        -o VerifyHostKeyDNS=no -o ConnectionAttempts=3 -i quaybuilder ec2-user@\"${OMR_HOST_NAME}\":/home/ec2-user/quay-install/quay-rootCA/rootCA.pem
        ${SHARED_DIR} || true\n\n#Test OMR by push image\nskopeo copy docker://docker.io/fedora@sha256:895cdfba5eb6a009a26576cb2a8bc199823ca7158519e36e4d9effcc8b951b47
        docker://\"${OMR_HOST_NAME}\":8443/quaytest/test:latest --dest-tls-verify=false
        --dest-creds quay:password || true\n"],"container_name":"test","process_log":"/logs/process-log.txt","marker_file":"/logs/marker-file.txt","metadata_file":"/logs/artifacts/metadata.json"}],"ignore_interrupts":true,"censoring_options":{"secret_directories":["/secrets/ci-pull-credentials","/secrets/gce-sa-credentials-gcs-publisher","/secrets/oauth-3sqr2x86","/secrets/quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile","/secrets/registry-pull-credentials","/secrets/test-credentials-ci-ibmcloud8","/secrets/test-credentials-devqe-secrets","/secrets/test-credentials-openshift-custom-mirror-registry","/secrets/test-credentials-qe-proxy-creds","/secrets/test-credentials-quay-qe-brew-secret","/secrets/test-credentials-quay-qe-omr-secret"]}}'
    image: gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b
    imagePullPolicy: IfNotPresent
    name: sidecar
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 250Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /logs
      name: logs
    - mountPath: /secrets/gcs
      name: gcs-credentials
    - mountPath: /secrets/ci-pull-credentials
      name: censor-4
    - mountPath: /secrets/gce-sa-credentials-gcs-publisher
      name: censor-9
    - mountPath: /secrets/oauth-3sqr2x86
      name: censor-10
    - mountPath: /secrets/quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile
      name: censor-13
    - mountPath: /secrets/registry-pull-credentials
      name: censor-15
    - mountPath: /secrets/test-credentials-ci-ibmcloud8
      name: censor-16
    - mountPath: /secrets/test-credentials-devqe-secrets
      name: censor-17
    - mountPath: /secrets/test-credentials-openshift-custom-mirror-registry
      name: censor-18
    - mountPath: /secrets/test-credentials-qe-proxy-creds
      name: censor-19
    - mountPath: /secrets/test-credentials-quay-qe-brew-secret
      name: censor-20
    - mountPath: /secrets/test-credentials-quay-qe-omr-secret
      name: censor-21
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: registry-pull-credentials
  - name: quay-omr-tests-omr-ocp415-disconnected-1a0ad78f-dockercfg-fhwx8
  initContainers:
  - command:
    - /bin/sh
    - -c
    - declare -i T; until [[ "$ret" == "0" ]] || [[ "$T" -gt "120" ]]; do curl https://github.com
      > /dev/null; ret=$?; sleep 1; let "T+=1"; done
    image: registry.access.redhat.com/ubi8
    imagePullPolicy: Always
    name: ci-scheduling-dns-wait
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  - args:
    - --copy-mode-only
    image: gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b
    imagePullPolicy: IfNotPresent
    name: place-entrypoint
    resources:
      limits:
        memory: 100Mi
      requests:
        cpu: 100m
        memory: 25Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tools
      name: tools
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  - args:
    - /bin/entrypoint-wrapper
    - /tmp/entrypoint-wrapper/entrypoint-wrapper
    command:
    - cp
    image: registry.ci.openshift.org/ci/entrypoint-wrapper:latest
    imagePullPolicy: Always
    name: cp-entrypoint-wrapper
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /tmp/entrypoint-wrapper
      name: entrypoint-wrapper
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  - args:
    - /usr/bin/oc
    - /cli
    command:
    - /bin/cp
    image: image-registry.openshift-image-registry.svc:5000/ci-op-vm71ljpg/stable@sha256:58d00e9059d9c7c9bccd047df0818d86dadd9ad64e53b622c7e82ba7f2bb58ea
    imagePullPolicy: IfNotPresent
    name: inject-cli
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      runAsUser: 1005090000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /cli
      name: cli
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5jm7n
      readOnly: true
  nodeName: ip-10-0-194-139.ec2.internal
  nodeSelector:
    ci-workload: tests
  overhead:
    cpu: 300m
    memory: 600Mi
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  runtimeClassName: ci-scheduler-runtime-tests
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1005090000
    seLinuxOptions:
      level: s0:c71,c60
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: quay-omr-tests-omr-ocp415-disconnected-unreleased
  serviceAccountName: quay-omr-tests-omr-ocp415-disconnected-unreleased
  terminationGracePeriodSeconds: 18
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node-role.kubernetes.io/ci-tests-worker
    operator: Exists
  volumes:
  - emptyDir: {}
    name: logs
  - emptyDir: {}
    name: tools
  - name: gcs-credentials
    secret:
      defaultMode: 420
      secretName: gce-sa-credentials-gcs-publisher
  - emptyDir: {}
    name: home
  - name: censor-4
    secret:
      defaultMode: 420
      secretName: ci-pull-credentials
  - name: censor-9
    secret:
      defaultMode: 420
      secretName: gce-sa-credentials-gcs-publisher
  - name: censor-10
    secret:
      defaultMode: 420
      secretName: oauth-3sqr2x86
  - name: censor-13
    secret:
      defaultMode: 420
      secretName: quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile
  - name: censor-15
    secret:
      defaultMode: 420
      secretName: registry-pull-credentials
  - name: censor-16
    secret:
      defaultMode: 420
      secretName: test-credentials-ci-ibmcloud8
  - name: censor-17
    secret:
      defaultMode: 420
      secretName: test-credentials-devqe-secrets
  - name: censor-18
    secret:
      defaultMode: 420
      secretName: test-credentials-openshift-custom-mirror-registry
  - name: censor-19
    secret:
      defaultMode: 420
      secretName: test-credentials-qe-proxy-creds
  - name: censor-20
    secret:
      defaultMode: 420
      secretName: test-credentials-quay-qe-brew-secret
  - name: censor-21
    secret:
      defaultMode: 420
      secretName: test-credentials-quay-qe-omr-secret
  - emptyDir: {}
    name: entrypoint-wrapper
  - name: cluster-profile
    secret:
      defaultMode: 420
      secretName: quay-omr-tests-omr-ocp415-disconnected-unreleased-cluster-profile
  - emptyDir: {}
    name: cli
  - name: quay-omr-tests-omr-ocp415-disconnected-unreleased
    secret:
      defaultMode: 420
      secretName: quay-omr-tests-omr-ocp415-disconnected-unreleased
  - name: test-credentials-quay-qe-omr-secret
    secret:
      defaultMode: 420
      secretName: test-credentials-quay-qe-omr-secret
  - name: test-credentials-quay-qe-brew-secret
    secret:
      defaultMode: 420
      secretName: test-credentials-quay-qe-brew-secret
  - name: kube-api-access-5jm7n
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-12-27T07:11:24Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-12-27T07:20:19Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-12-27T07:20:19Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-12-27T07:11:17Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://42cae5bb9d9c2dba86997cd35a8a7d5d929b4389f905b5a58ca3e4b107e65912
    image: gcr.io/k8s-prow/sidecar:v20231206-f5c8e5872b
    imageID: gcr.io/k8s-prow/sidecar@sha256:15ca24349ea553e5b98ab9e0081996fb9511368d419ed8f09aef54558bd7cb09
    lastState: {}
    name: sidecar
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://42cae5bb9d9c2dba86997cd35a8a7d5d929b4389f905b5a58ca3e4b107e65912
        exitCode: 0
        finishedAt: "2023-12-27T07:20:19Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:25Z"
  - containerID: cri-o://c57eac6a40c72a2c8cfe52b4eaee085bf0a23fb957ca47892b5fa866601f3741
    image: image-registry.openshift-image-registry.svc:5000/ci-op-vm71ljpg/pipeline@sha256:e45d1ceecb8d04f952c2d1fe96e8fd2166378bea4eab57185350c84396f4da18
    imageID: image-registry.openshift-image-registry.svc:5000/ci-op-c9ph5650/pipeline@sha256:99ea1fe9bbed5b5ca9a838d248587f0c9354aff412a297687a01778da3b5be20
    lastState: {}
    name: test
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: cri-o://c57eac6a40c72a2c8cfe52b4eaee085bf0a23fb957ca47892b5fa866601f3741
        exitCode: 0
        finishedAt: "2023-12-27T07:20:19Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:25Z"
  hostIP: 10.0.194.139
  initContainerStatuses:
  - containerID: cri-o://606de142c1dec0130cf4d76f239d6920de2e74fedc111121d3b0b5c711650349
    image: registry.access.redhat.com/ubi8:latest
    imageID: registry.access.redhat.com/ubi8@sha256:449da7f8f2ef6285a8445a1e31af57a97b9dae5dcf009b1629c59742c89c68c3
    lastState: {}
    name: ci-scheduling-dns-wait
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://606de142c1dec0130cf4d76f239d6920de2e74fedc111121d3b0b5c711650349
        exitCode: 0
        finishedAt: "2023-12-27T07:11:20Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:19Z"
  - containerID: cri-o://eaa3f66f9e3c872cbd52ef7d7c45ebb94bbbc04cb9a4467cd335dc468b580607
    image: gcr.io/k8s-prow/entrypoint:v20231206-f5c8e5872b
    imageID: gcr.io/k8s-prow/entrypoint@sha256:911c9ef6e1eafe6a5b18357636e97191a4d9e1f8e23a2b4be9cdd45199a83ae5
    lastState: {}
    name: place-entrypoint
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://eaa3f66f9e3c872cbd52ef7d7c45ebb94bbbc04cb9a4467cd335dc468b580607
        exitCode: 0
        finishedAt: "2023-12-27T07:11:22Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:22Z"
  - containerID: cri-o://7173786e88da32bbc25e0adaf5d9fca4fd429a0fdbb1fb4fd201d0ffbc5ca903
    image: registry.ci.openshift.org/ci/entrypoint-wrapper:latest
    imageID: registry.ci.openshift.org/ci/entrypoint-wrapper@sha256:45f7f9c94d6141a13fa268a97464706302df3717e24a21881010330e76f41091
    lastState: {}
    name: cp-entrypoint-wrapper
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://7173786e88da32bbc25e0adaf5d9fca4fd429a0fdbb1fb4fd201d0ffbc5ca903
        exitCode: 0
        finishedAt: "2023-12-27T07:11:23Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:23Z"
  - containerID: cri-o://832f187b3716ed3739bed253aa18e5cd668a8151e1b048b93683f87003e8d0bc
    image: image-registry.openshift-image-registry.svc:5000/ci-op-vm71ljpg/stable@sha256:58d00e9059d9c7c9bccd047df0818d86dadd9ad64e53b622c7e82ba7f2bb58ea
    imageID: image-registry.openshift-image-registry.svc:5000/ci-op-3k2gs5ch/stable@sha256:3afa2e00e832b5e7e56b89ba927a29bdeb3b1810f90dbc7f2c2ed4441677fee5
    lastState: {}
    name: inject-cli
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: cri-o://832f187b3716ed3739bed253aa18e5cd668a8151e1b048b93683f87003e8d0bc
        exitCode: 0
        finishedAt: "2023-12-27T07:11:24Z"
        reason: Completed
        startedAt: "2023-12-27T07:11:24Z"
  phase: Succeeded
  podIP: 10.129.47.90
  podIPs:
  - ip: 10.129.47.90
  qosClass: Burstable
  startTime: "2023-12-27T07:11:17Z"

@jianzhangbjz
Contributor Author

I guess we have to update the step container generation logic to pick up the prow job's env vars, so I've requested an enhancement from the DPTP team: https://issues.redhat.com/browse/DPTP-3802
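The fallback logic in the step script embedded in the pod spec above can be sketched as a standalone snippet. The override variable name (`OMR_IMAGE_ENV`) and the tag prefix come from the step script; the default value of `OMR_IMAGE` here is illustrative, since in CI it is supplied by the step's environment:

```shell
set -euo pipefail

# Step default (illustrative value; in CI this comes from the step's env).
OMR_IMAGE="openshift-mirror-registry-rhel8:v1.3.8-1"

# If the prow job injected OMR_IMAGE_ENV into the pod spec (e.g. via the
# job tool's --envs flag), it overrides the step default; otherwise the
# default OMR_IMAGE is used. ${VAR+x} expands to "x" only when VAR is set,
# so this distinguishes "unset" from "set but empty".
if [ -z "${OMR_IMAGE_ENV+x}" ]; then
    OMR_IMAGE_TAG="brew.registry.redhat.io/rh-osbs/${OMR_IMAGE}"
else
    OMR_IMAGE_TAG="brew.registry.redhat.io/rh-osbs/${OMR_IMAGE_ENV}"
fi
echo "${OMR_IMAGE_TAG}"
```

Note that for the override branch to fire, the injected variable must actually land in the test container's environment, which is what the linked enhancement request is about.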

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 28, 2024
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 28, 2024
@openshift-merge-robot

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 27, 2024
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this May 28, 2024

openshift-ci bot commented May 28, 2024

@openshift-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jianzhangbjz
Contributor Author

/remove-lifecycle stale

@jianzhangbjz
Contributor Author

/reopen

@openshift-ci openshift-ci bot reopened this Sep 25, 2024

openshift-ci bot commented Sep 25, 2024

@jianzhangbjz: Reopened this PR.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@jianzhangbjz
Contributor Author

cc: @huweihua-redhat ^^

@jianzhangbjz
Contributor Author

/remove-lifecycle rotten

@openshift-ci openshift-ci bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 25, 2024