diff --git a/.github/workflows/deploy-site.yml b/.github/workflows/deploy-site.yml index 752850d..b22d6a8 100644 --- a/.github/workflows/deploy-site.yml +++ b/.github/workflows/deploy-site.yml @@ -20,7 +20,7 @@ # name: Deploy documentation to GitHub Pages -on: +on: # yamllint disable-line rule:truthy push: branches: ["main"] # @@ -59,7 +59,7 @@ jobs: uses: actions/upload-pages-artifact@v3 if: github.ref == 'refs/heads/main' with: - path: 'docs/build/public' + path: "docs/build/public" - name: Deploy to GitHub Pages if: github.ref == 'refs/heads/main' uses: actions/deploy-pages@v4 diff --git a/projects/dataflow-gcs-to-alloydb/docs/index.md b/projects/dataflow-gcs-to-alloydb/docs/index.md index 063eee7..b4787fa 100644 --- a/projects/dataflow-gcs-to-alloydb/docs/index.md +++ b/projects/dataflow-gcs-to-alloydb/docs/index.md @@ -27,4 +27,4 @@ Cloud Storage to AlloyDB or PostgreSQL server in Cloud SQL. ## Quickstart 1. [Configure](./configuration.md) the pipeline parameters. -2. [Build and run](./build_and_run_pipeline.md) the Dataflow Flex pipeline. +1. [Build and run](./build_and_run_pipeline.md) the Dataflow Flex pipeline. diff --git a/projects/dataproc-trino-autoscaler/docs/demo.md b/projects/dataproc-trino-autoscaler/docs/demo.md index b2ded7c..d7cf863 100644 --- a/projects/dataproc-trino-autoscaler/docs/demo.md +++ b/projects/dataproc-trino-autoscaler/docs/demo.md @@ -185,8 +185,8 @@ in this tutorial, you can delete the project: 1. In the Cloud Console, go to the [**Manage resources** page](https://console.cloud.google.com/iam-admin/projects). -2. In the project list, select the project that you want to delete and then +1. In the project list, select the project that you want to delete and then click **Delete**. -3. In the dialog, type the project ID and then click **Shut down** to delete +1. In the dialog, type the project ID and then click **Shut down** to delete the project. diff --git a/projects/imagen-object-changer/README.md b/projects/imagen-object-changer/README.md index 5e81b67..408ba9f 100644 --- a/projects/imagen-object-changer/README.md +++ b/projects/imagen-object-changer/README.md @@ -15,12 +15,12 @@ The solution works as follows: 1. The user provides an object label such as ‘car’, the source image file, and a GenAI Imagen prompt such as ‘red plastic toy car’. In this case, to replace cars with toy cars -2. Query Vision API with the source image and object label -3. Present list of found labels -4. If the user’s label was found in the list of detected labels, generate a +1. Query Vision API with the source image and object label +1. Present list of found labels +1. If the user’s label was found in the list of detected labels, generate a mask image based on the bounding boxes for the objects -5. Query Imagen with the original image, mask image, and GenAI prompt -6. Save the resulting generated images +1. Query Imagen with the original image, mask image, and GenAI prompt +1. Save the resulting generated images ## Example @@ -63,9 +63,9 @@ replace the background instead: 1. With your browser, navigate to the [Google Cloud Console](https://console.cloud.google.com/home/dashboard) and select the right Project -2. Use the Console to Enable the +1. Use the Console to Enable the [Cloud Vision API in the API Library](https://console.cloud.google.com/apis/library/vision.googleapis.com) -3. Check that you can access +1. 
Check that you can access
   [Generative AI Studio Vision](https://console.cloud.google.com/vertex-ai/generative/vision)

### Installing the Application

@@ -82,20 +82,20 @@ replace the background instead:
   isolated environment for this application and its imported modules. Please
   refer to the links in this step for more information on virtual environment

-2. Open a terminal on your computer, and execute the following gcloud command
+1. Open a terminal on your computer, and execute the following gcloud command
   to authenticate the application for Google Cloud Platform:

   ```shell
   gcloud auth application-default login
   ```

-3. Install the Python dependencies with:
+1. Install the Python dependencies with:

   ```shell
   pip3 install -r requirements.txt --require-hashes
   ```

-4. Test that the application can start without errors, by executing the
+1. Test that the application can start without errors, by executing the
   following:

   ```shell
diff --git a/projects/imagen-voice-captioning/README.md b/projects/imagen-voice-captioning/README.md
index 46073f5..12c0ce4 100644
--- a/projects/imagen-voice-captioning/README.md
+++ b/projects/imagen-voice-captioning/README.md
@@ -10,13 +10,13 @@ and [Text-to-Speech AI](https://cloud.google.com/text-to-speech).
The solution works as follows:

1. The user runs the application and selects a connected camera as the input
-2. The applications shows a live view of the camera
-3. The user triggers captioning
-4. The camera’s image is sent to Imagen image captioning, which returns a
+1. The application shows a live view of the camera
+1. The user triggers captioning
+1. The camera’s image is sent to Imagen image captioning, which returns a
   textual description of the scene
-5. The text is sent to Text-to-Speech AI, which returns an audio file
+1. The text is sent to Text-to-Speech AI, which returns an audio file
   containing the caption synthesized as speech
-6. The application plays back the audio, describing the scene verbally
+1. The application plays back the audio, describing the scene verbally

## Example

@@ -46,11 +46,11 @@ in a grassy field behind a fence_”

1. With your browser, navigate to the
   [Google Cloud Console](https://console.cloud.google.com/home/dashboard) and
   select the right Project
-2. Use the Console to Enable the
+1. Use the Console to Enable the
   [Cloud Vision Text to Speech API](https://console.cloud.google.com/apis/library/texttospeech.googleapis.com)
-3. Use the Console to Enable the
+1. Use the Console to Enable the
   [Cloud Vision API in the API Library](https://console.cloud.google.com/apis/library/vision.googleapis.com)
-4. Check that you can access
+1. Check that you can access
   [Generative AI Studio Vision](https://console.cloud.google.com/vertex-ai/generative/vision)

### Creating the Service Account and Key

@@ -59,8 +59,8 @@ in a grassy field behind a fence_”
   to:

   1. create a new service account in your GCP Project
-   2. create and download a JSON key for it
-   3. Bind the Cloud Storage object viewer role to it
+   1. create and download a JSON key for it
+   1. bind the Cloud Storage object viewer role to it

   This service account will be used to access Cloud Vision API. Execute the
   following, replacing `[your-gcp-project]` with your project ID:

@@ -81,7 +81,7 @@ in a grassy field behind a fence_”

   gcloud services enable vision.googleapis.com
   ```

-2. The service account key was downloaded as a file called `credentials.json`
+1. The service account key was downloaded as a file called `credentials.json`
   in your current working directory. 
Note the location of the file, as you will need it later. @@ -101,20 +101,20 @@ in a grassy field behind a fence_” isolated environment for this application and its imported modules. Please refer to the links in this step for more information on virtual environment -2. Install the Python dependencies with: +1. Install the Python dependencies with: ```shell pip3 install -r requirements.txt ``` -3. Copy the service account key credentials.json that you downloaded earlier to +1. Copy the service account key credentials.json that you downloaded earlier to this same directory with e.g: ```shell cp ~/credentials.json . ``` -4. Test that the application can start without errors, by executing the +1. Test that the application can start without errors, by executing the following: ```shell @@ -139,13 +139,13 @@ in a grassy field behind a fence_” 1. Connect a camera such as a USB webcam to your computer. Test that the camera works on your computer -2. Start the application and provide the following parameter values: +1. Start the application and provide the following parameter values: 1. `--input`: <the device number of the connected cameras>. Start with 0 and increase the number until you see the correct camera’s output on the screen - 2. `--credentials`: <path to the credentials.json you prepared earlier> - 3. `--project_id`: <your GCP project ID> + 1. `--credentials`: <path to the credentials.json you prepared earlier> + 1. `--project_id`: <your GCP project ID> Example command: @@ -153,7 +153,7 @@ in a grassy field behind a fence_” python3 imagen-voice-captioning.py --input 1 --credentials credentials.json --project_id myproject ``` -3. If successful, the application will output the following on the console: +1. If successful, the application will output the following on the console: ```text {'input': '1', 'credentials': 'credentials.json', 'project_id': 'myproject''} @@ -164,10 +164,10 @@ in a grassy field behind a fence_” And open a window where it displays the camera’s live view on your desktop. -4. To caption the camera view, select the camera display window and press +1. To caption the camera view, select the camera display window and press <SPACE> -5. If successful, the application will output: +1. If successful, the application will output: ```text Querying Imagen captioning...OK diff --git a/projects/sa-tools/common/java-common/README.md b/projects/sa-tools/common/java-common/README.md index d56f180..c0a25f0 100644 --- a/projects/sa-tools/common/java-common/README.md +++ b/projects/sa-tools/common/java-common/README.md @@ -17,7 +17,7 @@ This folder contains reusable Java components: includeBuild("path/to/common/java-common") ``` -2. Use appropriate module in your `build.gradle`: Example to use utils and +1. Use appropriate module in your `build.gradle`: Example to use utils and testing module: ```groovy diff --git a/projects/sa-tools/common/ui-tests/src/CommonFunctions.test.js b/projects/sa-tools/common/ui-tests/src/CommonFunctions.test.js index 0c69174..e563a1f 100644 --- a/projects/sa-tools/common/ui-tests/src/CommonFunctions.test.js +++ b/projects/sa-tools/common/ui-tests/src/CommonFunctions.test.js @@ -14,7 +14,7 @@ * limitations under the License. 
*/ -import { describe, expect, it } from 'vitest'; +import {describe, expect, it} from 'vitest'; import * as common from './Common/CommonFunctions'; const commonFunctionsTestSuite = () => { diff --git a/projects/sa-tools/common/ui-tests/src/SaToolsDate.test.js b/projects/sa-tools/common/ui-tests/src/SaToolsDate.test.js index 9f8ad8e..c78a9fb 100644 --- a/projects/sa-tools/common/ui-tests/src/SaToolsDate.test.js +++ b/projects/sa-tools/common/ui-tests/src/SaToolsDate.test.js @@ -14,8 +14,8 @@ * limitations under the License. */ -import { describe, expect, it } from 'vitest'; -import { SaToolsDate } from './Common/SaToolsDate'; +import {describe, expect, it} from 'vitest'; +import {SaToolsDate} from './Common/SaToolsDate'; const satoolsDateTestSuite = () => { it(`Given N number of substracted minutes should return new Date object of @@ -42,7 +42,7 @@ const satoolsDateTestSuite = () => { ago`, () => { const testDate = new SaToolsDate(new Date('2022-12-01 11:00:00')); expect(testDate.formattedDateDiff(new Date('2022-12-01 10:00:00'))).toBe( - '1 hour(s) ago' + '1 hour(s) ago', ); }); @@ -106,7 +106,7 @@ const satoolsDateTestSuite = () => { it(`Given a datetime in 50 mins ago should return 50 minute(s) ago`, () => { const testDate = new SaToolsDate(new Date('2022-12-01 10:10:00')); expect( - testDate.formattedDateDiffFrom(new Date('2022-12-01 11:00:00')) + testDate.formattedDateDiffFrom(new Date('2022-12-01 11:00:00')), ).toBe('50 minute(s) ago'); }); diff --git a/projects/sa-tools/common/ui-tests/src/StringSet.test.js b/projects/sa-tools/common/ui-tests/src/StringSet.test.js index 4d75ce8..6a80370 100644 --- a/projects/sa-tools/common/ui-tests/src/StringSet.test.js +++ b/projects/sa-tools/common/ui-tests/src/StringSet.test.js @@ -14,8 +14,8 @@ * limitations under the License. */ -import { describe, expect, it } from 'vitest'; -import { StringSet } from './Common/StringSet'; +import {describe, expect, it} from 'vitest'; +import {StringSet} from './Common/StringSet'; /** * Assert StringSet equality to an array. diff --git a/projects/sa-tools/common/ui-tests/vite.config.js b/projects/sa-tools/common/ui-tests/vite.config.js index 3354c4c..65ed838 100644 --- a/projects/sa-tools/common/ui-tests/vite.config.js +++ b/projects/sa-tools/common/ui-tests/vite.config.js @@ -14,7 +14,7 @@ * limitations under the License. */ -import { defineConfig } from 'vite'; +import {defineConfig} from 'vite'; export default defineConfig({ plugins: [], diff --git a/projects/sa-tools/common/ui/SaToolsDate.js b/projects/sa-tools/common/ui/SaToolsDate.js index 79be1e8..033210d 100644 --- a/projects/sa-tools/common/ui/SaToolsDate.js +++ b/projects/sa-tools/common/ui/SaToolsDate.js @@ -122,4 +122,4 @@ class SaToolsDate { } } -export { SaToolsDate }; +export {SaToolsDate}; diff --git a/projects/sa-tools/common/ui/StringSet.js b/projects/sa-tools/common/ui/StringSet.js index 0e47e0d..220fa45 100644 --- a/projects/sa-tools/common/ui/StringSet.js +++ b/projects/sa-tools/common/ui/StringSet.js @@ -117,4 +117,4 @@ class StringSet { } } -export { StringSet }; +export {StringSet}; diff --git a/projects/sa-tools/perf-benchmark/README.md b/projects/sa-tools/perf-benchmark/README.md index 493f123..7bb7ac9 100644 --- a/projects/sa-tools/perf-benchmark/README.md +++ b/projects/sa-tools/perf-benchmark/README.md @@ -5,13 +5,13 @@ Consists of 3 modules: 1. API Server: Java SpringBoot based server providing RESTful API for creating, monitoring and cancelling BenchmarkJobs using the Benchmark Runner. -2. 
Benchmark Runner: Templated launch scripts on fork of Google's Perfkit
+1. Benchmark Runner: Templated launch scripts on a fork of Google's Perfkit
   benchmark maintained by @prakhargautam at The runner executes a
   parameterized Cloud Build jobs with job specific parameters passed as input
   and writes the results to Google BigQuery table.

-3. Daily Perfkit benchmarks runner cron-based runner for standard shapes.
+1. Daily Perfkit benchmarks runner: a cron-based runner for standard shapes.

## Prerequisites

@@ -130,25 +130,25 @@ generated and open it in the browser:

   gcloud firestore databases create --type=datastore-mode --project "${PROJECT_ID}" --location "nam5"
   ```

-2. If facing this error: "Unable to Sign-In using Google Sign-In. An Error has
+1. If you see this error: "Unable to Sign-In using Google Sign-In. An Error has
   occured. Only accounts from the organization can access this site."

   Make sure the Authorized Domains section under OAuth Screen having domain
   name of user's Identiy Platform, ex: example.com Which means login will be
   allowed.

-3. If facing this error: "Storage object. Permission 'storage.objects.get'
+1. If you see this error: "Storage object. Permission 'storage.objects.get'
   denied on resource (or it may not exist)., forbidden" or "denied: Permission
   "artifactregistry.repositories.uploadArtifacts" denied on resource"

   Make sure that @cloudbuild.gserviceaccount.com exist, and having Artifact
   Registry Writer and Storage Object Creator permissions.

-4. If facing this error: "message":"Failed to open popup
+1. If you see this error: "message":"Failed to open popup
   window","stack":"Error: Failed to open popup window\n at new ..."

   Make sure the allow pop-up from the Cloud Run's URL domain is allowed from
   Browser's address bar.

-5. Error: Members belonging to the external domain cannot be added as domain
+1. Error: Members belonging to the external domain cannot be added as domain
   restricted sharing is enforced by the organization policy

   Depending on organization policy being used, some organization will have
   constraints/iam.allowedPolicyMemberDomains to be restricted to be on the

@@ -165,13 +165,13 @@ generated and open it in the browser:
   gcloud beta emulators datastore start --no-store-on-disk &
   ```

-2. Set Environment variables
+1. Set Environment variables

   ```shell
   $(gcloud beta emulators datastore env-init)
   ```

-3. Run Local server
+1. Run Local server

   ```shell
   cd api/
diff --git a/projects/sa-tools/perf-benchmark/ui/vite.config.js b/projects/sa-tools/perf-benchmark/ui/vite.config.js
index fc73f08..9aafaac 100644
--- a/projects/sa-tools/perf-benchmark/ui/vite.config.js
+++ b/projects/sa-tools/perf-benchmark/ui/vite.config.js
@@ -14,7 +14,7 @@
 * limitations under the License.
 */
/* eslint-disable comma-dangle, semi */
-import { defineConfig } from 'vite';
+import {defineConfig} from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
diff --git a/projects/sa-tools/performance-testing/README.md b/projects/sa-tools/performance-testing/README.md
index d52593b..b6dc4e6 100644
--- a/projects/sa-tools/performance-testing/README.md
+++ b/projects/sa-tools/performance-testing/README.md
@@ -42,7 +42,7 @@ The deployment approach is by deploying two copies of pt-admin component:

1. To the target GCP Project. Create a new GCP Project, or use an existing GCP
   Project where the deployer user having permissions as Owner or Editor.
-2. To run pt-admin container locally (hosting the UI and part of the backend)
+1. 
To run pt-admin container locally (hosting the UI and part of the backend)

The less pre-requisite dependencies to run this setup is by doing it from the
Cloud Shell. Make sure there's enough disk space to run the commands below.
diff --git a/projects/sa-tools/performance-testing/pt-operator/README.md b/projects/sa-tools/performance-testing/pt-operator/README.md
index e82d613..727dc42 100644
--- a/projects/sa-tools/performance-testing/pt-operator/README.md
+++ b/projects/sa-tools/performance-testing/pt-operator/README.md
@@ -22,13 +22,13 @@ cluster-info` shows).

   kubectl apply -f config/samples/
   ```

-2. Build and push your image to the location specified by `IMG`:
+1. Build and push your image to the location specified by `IMG`:

   ```sh
   make docker-build docker-push IMG=/pt-operator:tag
   ```

-3. Deploy the controller to the cluster with the image specified by `IMG`:
+1. Deploy the controller to the cluster with the image specified by `IMG`:

   ```sh
   make deploy IMG=/pt-operator:tag
@@ -68,7 +68,7 @@ the desired state is reached on the cluster.

   make install
   ```

-2. Run your controller (this will run in the foreground, so switch to a new
+1. Run your controller (this will run in the foreground, so switch to a new
   terminal if you want to leave it running):

   ```sh
diff --git a/projects/serverless-event-processing/docs/index.md b/projects/serverless-event-processing/docs/index.md
index e84d3fa..cc058f0 100644
--- a/projects/serverless-event-processing/docs/index.md
+++ b/projects/serverless-event-processing/docs/index.md
@@ -241,11 +241,11 @@ processing results, you might encounter an issue that causes the event
processing pipeline to be stuck in an endless loop. For example:

1. You create an object in the Cloud Storage bucket.
-2. Eventarc sends an event about the newly created object.
-3. The pipeline receives the event, processes the object, and saves the result
+1. Eventarc sends an event about the newly created object.
+1. The pipeline receives the event, processes the object, and saves the result
   in the same bucket.
-4. Eventarc sends an event about the newly created object.
-5. The pipeline receives the event, processes the object, and saves the result
+1. Eventarc sends an event about the newly created object.
+1. The pipeline receives the event, processes the object, and saves the result
   in the same bucket.

And so on.

@@ -426,7 +426,7 @@ To migrate this reference architecture from AWS to Google Cloud, we recommend
that you:

1. Design a migration strategy that takes into account the whole architecture.
-2. Refine the migration strategy to consider each component of the architecture.
+1. Refine the migration strategy to consider each component of the architecture.

### Design the overall migration strategy

@@ -511,7 +511,7 @@ By starting with migrating result data, you:

   1. Refactor the AWS Lambda workload to store the results both in Amazon
      DynamoDB, Firestore, and Cloud Storage.
-   2. Once the migration proceeds, refactor the event-processing workload to
+   1. As the migration proceeds, refactor the event-processing workload to
      store the results in Firestore and Cloud Storage only, along with other
      modifications that you need to migrate the event processing workload
      from AWS Lambda to Cloud Run.

@@ -542,11 +542,11 @@ By starting with migrating the event-processing workload, you:

   1. Refactor the AWS Lambda workload to migrate the event-processing workload
      to Cloud Run, but keep using Amazon S3 and Amazon SQS as sources, and
      Amazon DynamoDB to store results.
-   2. 
As the migration proceeds, refactor the event-processing workload to get
+   1. As the migration proceeds, refactor the event-processing workload to get
      source data from Cloud Storage and Eventarc, and Amazon S3 and Amazon
      SQS, and store the results in both Amazon DynamoDB, Firestore and Cloud
      Storage.
-   3. Once you approach the cutover from your AWS environment, refactor the
+   1. As you approach the cutover from your AWS environment, refactor the
      event-processing workload to get source data from Cloud Storage and
      Eventarc only, and store the results in Firestore and Cloud Storage only.

@@ -587,8 +587,8 @@ To migrate from Amazon DynamoDB to Firestore, you can design and implement an
automated process as follows:

1. Export data from Amazon DynamoDB to Amazon S3.
-2. Migrate the exported data from Amazon S3 to Cloud Storage.
-3. Implement a Dataflow batch job to
+1. Migrate the exported data from Amazon S3 to Cloud Storage.
+1. Implement a Dataflow batch job to
   [load results from Cloud Storage to Firestore](https://cloud.google.com/dataflow/docs/guides/templates/provided/cloud-storage-to-firestore).

When you design this process, we recommend that you design it as a gradual