diff --git a/docs/source/api-client-python/getting-started-windows.rst b/docs/source/api-client-python/getting-started-windows.rst
index aa7cbd0..47a74cb 100644
--- a/docs/source/api-client-python/getting-started-windows.rst
+++ b/docs/source/api-client-python/getting-started-windows.rst
@@ -75,7 +75,7 @@ If you want to pull in data from Google Genomics API you will need to set
   https://cloud.google.com/genomics/

 * Then create a project in the
-  `Google Developers Console`_
+  `Google Cloud Platform Console`_
   or select an existing one.

 * On the **APIs & auth** tab, select APIs and turn the Genomics API to ON
diff --git a/docs/source/conf.py b/docs/source/conf.py
index ceefddd..954502b 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -143,8 +143,6 @@
 .. _Contact us: google-genomics-contact@googlegroups.com

 .. ### Google Guide Links
-.. _Google Developers Console Guide: https://developers.google.com/console/help/new/
-.. _Google Identity Platform Guide: https://developers.google.com/identity/protocols/OAuth2
 .. _Application Default Credentials: https://developers.google.com/identity/protocols/application-default-credentials

 .. ### Google Product Links
@@ -152,16 +150,15 @@
 .. _Google Cloud Dataflow: https://cloud.google.com/dataflow/
 .. _Google Cloud Storage: https://cloud.google.com/storage/
 .. _Google Compute Engine: https://cloud.google.com/compute/
-.. _Google Developers Console: https://console.developers.google.com/
+.. _Google Cloud Platform Console: https://console.cloud.google.com/
 .. _Google Genomics: https://cloud.google.com/genomics/
 .. _Google Cloud Datalab: https://cloud.google.com/datalab/
 .. _Google Cloud Dataproc: https://cloud.google.com/dataproc/

 .. ### Deep links into the Developers Console
-.. _Project List: https://console.developers.google.com/project
-.. _click-to-deploy NCBI BLAST: https://console.developers.google.com/project/_/launcher/details/click-to-deploy-images/ncbiblast
-.. _click-to-deploy Bioconductor: https://console.developers.google.com/project/_/mc/template/bioconductor
-.. _Deployments: https://console.developers.google.com/project/_/deployments
+.. _click-to-deploy NCBI BLAST: https://console.cloud.google.com/project/_/launcher/details/click-to-deploy-images/ncbiblast
+.. _click-to-deploy Bioconductor: https://console.cloud.google.com/project/_/mc/template/bioconductor
+.. _Deployments: https://console.cloud.google.com/project/_/deployments

 .. ### Deep links into cloud.google.com documentation
 .. _Compute Engine resource quota: https://cloud.google.com/compute/docs/resource-quotas
diff --git a/docs/source/includes/c2d_deployment_teardown.rst b/docs/source/includes/c2d_deployment_teardown.rst
index 627f5d2..97d3d05 100644
--- a/docs/source/includes/c2d_deployment_teardown.rst
+++ b/docs/source/includes/c2d_deployment_teardown.rst
@@ -1,6 +1,6 @@
 If you would like to pause your VM when not using it:

-1. Go to the Google Developers Console and select your project: https://console.developers.google.com/project/_/compute/instances
+1. Go to the Google Cloud Platform Console and select your project: https://console.cloud.google.com/project/_/compute/instances
 2. Click on the checkbox next to your VM.
 3. Click on *Stop* to pause your VM.
 4. When you are ready to use it again, *Start* your VM. For more detail, see: https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance
diff --git a/docs/source/includes/create_project.rst b/docs/source/includes/create_project.rst
index fe0a8d8..60dabf8 100644
--- a/docs/source/includes/create_project.rst
+++ b/docs/source/includes/create_project.rst
@@ -1 +1 @@
-If you do not yet have a cloud project, `create a Genomics and Cloud Storage enabled project via the Google Developers Console `_.
+If you do not yet have a cloud project, `create a Genomics and Cloud Storage enabled project via the Google Cloud Platform Console `_.
diff --git a/docs/source/includes/dataflow_on_gce_setup.rst b/docs/source/includes/dataflow_on_gce_setup.rst
index 7294213..a167cce 100644
--- a/docs/source/includes/dataflow_on_gce_setup.rst
+++ b/docs/source/includes/dataflow_on_gce_setup.rst
@@ -1,8 +1,8 @@
 If you do not have Java on your local machine, you can set up Java 7 on a `Google Compute Engine`_ instance. The following setup instructions will allow you to *launch* Dataflow jobs from a Compute Engine instance:

-#. If you have not already enabled the Google Cloud Platform APIs used by `Google Cloud Dataflow`_, click `here `_ to do so.
+#. If you have not already enabled the Google Cloud Platform APIs used by `Google Cloud Dataflow`_, click `here `_ to do so.

-#. Use the `Google Developers Console`_ to spin up a `Google Compute Engine`_ instance and ssh into it. If you have not done this before, see the `step-by-step instructions `_.
+#. Use the `Google Cloud Platform Console`_ to spin up a `Google Compute Engine`_ instance and ssh into it. If you have not done this before, see the `step-by-step instructions `_.

 #. Run the following command from your local machine to copy the **runnable** jar to the Compute Engine instance. You can download the latest GoogleGenomics dataflow **runnable** jar from the `Maven Central Repository `_.
diff --git a/docs/source/includes/gcp_signup.rst b/docs/source/includes/gcp_signup.rst
index 1bd7b94..933cd31 100644
--- a/docs/source/includes/gcp_signup.rst
+++ b/docs/source/includes/gcp_signup.rst
@@ -2,4 +2,4 @@
 If you already have a Google Cloud Platform project, this link will take you to your list of projects.

-Sign up for Google Cloud Platform by clicking on this link: https://console.developers.google.com/billing/freetrial
+Sign up for Google Cloud Platform by clicking on this link: https://console.cloud.google.com/billing/freetrial
diff --git a/docs/source/includes/get_client_secrets_steps.rst b/docs/source/includes/get_client_secrets_steps.rst
index 1de2ade..8e75ca7 100644
--- a/docs/source/includes/get_client_secrets_steps.rst
+++ b/docs/source/includes/get_client_secrets_steps.rst
@@ -1,4 +1,4 @@
- https://console.developers.google.com/project/_/apiui/credential
+ https://console.cloud.google.com/project/_/apiui/credential

 After you select your Google Cloud project, this link will automatically take you to the Credentials tab under the API Manager.
diff --git a/docs/source/includes/grid-computing-tools-steps-sizing-disks.rst b/docs/source/includes/grid-computing-tools-steps-sizing-disks.rst
index c67c4a0..6b521f2 100644
--- a/docs/source/includes/grid-computing-tools-steps-sizing-disks.rst
+++ b/docs/source/includes/grid-computing-tools-steps-sizing-disks.rst
@@ -27,9 +27,9 @@ b. Verify or increase quota

       gcloud compute regions describe *region*

-   or in ``Developers Console``:
+   or in ``Cloud Platform Console``:

-   https://console.developers.google.com/project/_/compute/quotas
+   https://console.cloud.google.com/project/_/compute/quotas

 Important quota limits include ``CPUs``, ``in-use IP addresses``, and ``disk size``.
diff --git a/docs/source/includes/spark_setup.rst b/docs/source/includes/spark_setup.rst
index b6276bc..a37078b 100644
--- a/docs/source/includes/spark_setup.rst
+++ b/docs/source/includes/spark_setup.rst
@@ -1,4 +1,4 @@
-* Deploy your Spark cluster using `Google Cloud Dataproc`_. This can be done using the `Cloud Console `__ or the following ``gcloud`` command:
+* Deploy your Spark cluster using `Google Cloud Dataproc`_. This can be done using the `Cloud Platform Console `__ or the following ``gcloud`` command:

   .. code-block:: shell
diff --git a/docs/source/includes/tute_data.rst b/docs/source/includes/tute_data.rst
index 4c1f345..0e430bf 100644
--- a/docs/source/includes/tute_data.rst
+++ b/docs/source/includes/tute_data.rst
@@ -8,5 +8,5 @@ See `Tute's documentation`_ for more details about:

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://tute_db `_
+* Google Cloud Storage folder `gs://tute_db `_
 * Google BigQuery Dataset ID `silver-wall-555:TuteTable.hg19 `_
diff --git a/docs/source/use_cases/analyze_reads/calculate_coverage.rst b/docs/source/use_cases/analyze_reads/calculate_coverage.rst
index 62a8031..26fee90 100644
--- a/docs/source/use_cases/analyze_reads/calculate_coverage.rst
+++ b/docs/source/use_cases/analyze_reads/calculate_coverage.rst
@@ -47,8 +47,8 @@ Create Output Dataset

 In order to run this pipeline, you must have a Google Genomics dataset to which the pipeline can output its AnnotationSet and Annotations.

-* If you already have a dataset in which you have write access, you may use it. Click here to see your datasets: https://console.developers.google.com/project/_/genomics/datasets
-* If not, you can click on the following link to use the Developers Control to create one: https://console.developers.google.com/project/_/genomics/datasets/create.
+* If you already have a dataset in which you have write access, you may use it. Click here to see your datasets: https://console.cloud.google.com/project/_/genomics/datasets
+* If not, you can click on the following link to use the Cloud Platform Console to create one: https://console.cloud.google.com/project/_/genomics/datasets/create.

 In either case, the ``ID`` of the dataset is the output dataset id you should use when running the pipeline.
diff --git a/docs/source/use_cases/discover_public_data/1000_genomes.rst b/docs/source/use_cases/discover_public_data/1000_genomes.rst
index db6ccc3..17a7df4 100644
--- a/docs/source/use_cases/discover_public_data/1000_genomes.rst
+++ b/docs/source/use_cases/discover_public_data/1000_genomes.rst
@@ -54,9 +54,9 @@ Google Cloud Platform data locations

 * Google Cloud Storage folders
   * These files were loaded into Google Genomics datasets:
-    * `gs://genomics-public-data/1000-genomes `_
-    * `gs://genomics-public-data/1000-genomes-phase-3 `_
-    * A full mirror of http://ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `gs://genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `_.
+    * `gs://genomics-public-data/1000-genomes `_
+    * `gs://genomics-public-data/1000-genomes-phase-3 `_
+    * A full mirror of http://ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `gs://genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `_.

 * Google Genomics Dataset IDs
   * Dataset Id `10473108253681171589 `_
diff --git a/docs/source/use_cases/discover_public_data/clinvar_annotations.rst b/docs/source/use_cases/discover_public_data/clinvar_annotations.rst
index 5ff5047..744a54d 100644
--- a/docs/source/use_cases/discover_public_data/clinvar_annotations.rst
+++ b/docs/source/use_cases/discover_public_data/clinvar_annotations.rst
@@ -22,7 +22,7 @@ Annotations from `ClinVar`_ were loaded into Google Genomics for use in sample a

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://genomics-public-data/clinvar/ `_
+* Google Cloud Storage folder `gs://genomics-public-data/clinvar/ `_
 * Google Genomics `annotation sets `_

 Provenance
diff --git a/docs/source/use_cases/discover_public_data/dream_smc_dna.rst b/docs/source/use_cases/discover_public_data/dream_smc_dna.rst
index cc03576..e369a9f 100644
--- a/docs/source/use_cases/discover_public_data/dream_smc_dna.rst
+++ b/docs/source/use_cases/discover_public_data/dream_smc_dna.rst
@@ -31,7 +31,7 @@ This dataset comprises the three public synthetic tumor/normal pairs created for

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://public-dream-data/ `_
+* Google Cloud Storage folder `gs://public-dream-data/ `_
 * Google Genomics dataset `337315832689 `_.

 Provenance
diff --git a/docs/source/use_cases/discover_public_data/pgp_public_data.rst b/docs/source/use_cases/discover_public_data/pgp_public_data.rst
index 0169092..8bba00e 100644
--- a/docs/source/use_cases/discover_public_data/pgp_public_data.rst
+++ b/docs/source/use_cases/discover_public_data/pgp_public_data.rst
@@ -28,7 +28,7 @@ This dataset comprises roughly 180 Complete Genomics genomes. See the `Personal

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://pgp-harvard-data-public `_
+* Google Cloud Storage folder `gs://pgp-harvard-data-public `_
 * Google Genomics Dataset ID `9170389916365079788 `_
 * Google BigQuery Dataset IDs
   * `google.com:biggene:pgp_20150205.genome_calls `_
@@ -38,7 +38,7 @@ Provenance

 Google Genomics variant set for dataset ``pgp_20150205``: `9170389916365079788 `_ contains:

-* the Complete Genomics datasets from `gs://pgp-harvard-data-public/**/masterVar*bz2 `_
+* the Complete Genomics datasets from `gs://pgp-harvard-data-public/**/masterVar*bz2 `_

 Appendix
 --------
diff --git a/docs/source/use_cases/discover_public_data/platinum_genomes.rst b/docs/source/use_cases/discover_public_data/platinum_genomes.rst
index 1ca3cc8..e5720af 100644
--- a/docs/source/use_cases/discover_public_data/platinum_genomes.rst
+++ b/docs/source/use_cases/discover_public_data/platinum_genomes.rst
@@ -22,7 +22,7 @@ This dataset comprises the 17 member CEPH pedigree 1463. See http://www.illumin

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://genomics-public-data/platinum-genomes `_
+* Google Cloud Storage folder `gs://genomics-public-data/platinum-genomes `_
 * Google Genomics Dataset ID `3049512673186936334 `_
 * `ReadGroupSet IDs `_
diff --git a/docs/source/use_cases/discover_public_data/reference_genomes.rst b/docs/source/use_cases/discover_public_data/reference_genomes.rst
index 6e63b4d..d0fc6cd 100644
--- a/docs/source/use_cases/discover_public_data/reference_genomes.rst
+++ b/docs/source/use_cases/discover_public_data/reference_genomes.rst
@@ -22,7 +22,7 @@ Reference Genomes such as GRCh37, GRCh37lite, GRCh38, hg19, hs37d5, and b37 are

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://genomics-public-data/references `_
+* Google Cloud Storage folder `gs://genomics-public-data/references `_
 * Google Genomics `reference sets `_

 Provenance
diff --git a/docs/source/use_cases/discover_public_data/ucsc_annotations.rst b/docs/source/use_cases/discover_public_data/ucsc_annotations.rst
index ad1ef0b..26a2b47 100644
--- a/docs/source/use_cases/discover_public_data/ucsc_annotations.rst
+++ b/docs/source/use_cases/discover_public_data/ucsc_annotations.rst
@@ -22,7 +22,7 @@ __ RenderedVersion_

 Google Cloud Platform data locations
 ------------------------------------
-* Google Cloud Storage folder `gs://genomics-public-data/ucsc/ `_
+* Google Cloud Storage folder `gs://genomics-public-data/ucsc/ `_
 * Google Genomics `annotation sets `_

 Provenance
diff --git a/docs/source/use_cases/run_familiar_tools/bioconductor.rst b/docs/source/use_cases/run_familiar_tools/bioconductor.rst
index fd5128b..4599f3a 100644
--- a/docs/source/use_cases/run_familiar_tools/bioconductor.rst
+++ b/docs/source/use_cases/run_familiar_tools/bioconductor.rst
@@ -21,7 +21,7 @@ __ RenderedVersion_

 Bioconductor maintains Docker containers with R, Bioconductor packages, and RStudio Server all ready to go! Its a great way to set up your R environment quickly and start working. The instructions to deploy it to Google Compute Engine are below but if you want to learn more about these containers, see http://www.bioconductor.org/help/docker/.

-1. Click on `click-to-deploy Bioconductor`_ to navigate to the launcher page on the Developers Console.
+1. Click on `click-to-deploy Bioconductor`_ to navigate to the launcher page on the Cloud Platform Console.

   1. Optional: change the *Machine type* if you would like to deploy a machine with more CPU cores or RAM.
   2. Optional: change the *Data disk size (GB)* if you would like to use a larger persistent disk for your own files.
diff --git a/docs/source/use_cases/run_picard_and_gatk/index.rst b/docs/source/use_cases/run_picard_and_gatk/index.rst
index bd56e20..406fe5f 100644
--- a/docs/source/use_cases/run_picard_and_gatk/index.rst
+++ b/docs/source/use_cases/run_picard_and_gatk/index.rst
@@ -122,7 +122,7 @@ This command uses an older, slower REST based API. To run using GRPC API impleme

 For Java 7 (as opposed to 8) use *alpn-boot-7.1.3.v20150130.jar*.

-We use a test readset here from `genomics-test-data `_ project.
+We use a test readset here from `genomics-test-data `_ project.

 Specifying a genomics region to use from the readset
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -143,7 +143,7 @@ Timing the reading speed from the cloud

 You can run `gatk-tools-java/src/main/scripts/example.sh `_ with and without "grpc" command line parameter to see the difference in reading speed. The timing statistics are dumped to the terminal. We benchmarked **x11** speed improvements with GRPC compared to REST, giving **~12,000 reads/second**.
-The tests were done on `Platinum Genomes NA12877_S1.bam dataset `_, please see the `detailed writeup of the test procedure and results `_ if you want to repeat the test.
+The tests were done on `Platinum Genomes NA12877_S1.bam dataset `_, please see the `detailed writeup of the test procedure and results `_ if you want to repeat the test.

 We therefore recommend running GRPC variants of command line.
@@ -170,7 +170,7 @@ How do you find an ID of the readset from the :doc:`/use_cases/discover_public_

 We will do it step by step using the command line API client.

-* Lets say we want to use `Platinum Genomes NA12877_S1.bam readgroupset `_ from :doc:`/use_cases/discover_public_data/1000_genomes` project.
+* Lets say we want to use `Platinum Genomes NA12877_S1.bam readgroupset `_ from :doc:`/use_cases/discover_public_data/1000_genomes` project.

 * The `documentation `_ page states that the dataset id for this set of files is **10473108253681171589**.
@@ -190,7 +190,7 @@ We will do it step by step using the command line API client.

 Now lets suppose we are not looking for one of the readgroupsets form the genomics public data but instead want to use one from our own project. In this case we need to figure out the *dataset id* for our files first, before we can use "readgroupsets list" command to list the individual readgroupsets.

-* Lets say we want to figure out which dataset ids are present under `genomics test data `_ project.
+* Lets say we want to figure out which dataset ids are present under `genomics test data `_ project.

 * First we need to set the project id for subsequent commands to be our project using
diff --git a/docs/source/workshops/bioc-2015.rst b/docs/source/workshops/bioc-2015.rst
index 1c87371..af17bb2 100644
--- a/docs/source/workshops/bioc-2015.rst
+++ b/docs/source/workshops/bioc-2015.rst
@@ -40,7 +40,7 @@ Create a Google Cloud Platform project

 Enable APIs
 ^^^^^^^^^^^

-Enable all the Google Cloud Platform APIs we will use in this workshop by clicking on this `link `_.
+Enable all the Google Cloud Platform APIs we will use in this workshop by clicking on this `link `_.

 Install gcloud
 ^^^^^^^^^^^^^^

@@ -56,7 +56,7 @@ To further the goals of `reproducibility, ease of use, and convenience
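
Taken together, the hunks above perform one mechanical rename: ``Google Developers Console`` becomes ``Google Cloud Platform Console`` and ``console.developers.google.com`` becomes ``console.cloud.google.com``. A sketch of reproducing the bulk of this change in one pass (assuming GNU ``grep``/``sed`` and the ``docs/source`` layout shown in the file paths above; BSD ``sed`` would need ``-i ''`` instead of ``-i``):

```shell
# Find every file under docs/source mentioning the old console name or host,
# then rewrite both the hostname and the product name in place.
grep -rlE 'console\.developers\.google\.com|Google Developers Console' docs/source \
  | xargs sed -i \
      -e 's/console\.developers\.google\.com/console.cloud.google.com/g' \
      -e 's/Google Developers Console/Google Cloud Platform Console/g'
```

Note that a blanket rewrite like this would not cover every hunk: the diff also hand-edits a few variants (``Developers Console`` without "Google", the "Developers Control" typo) and drops two obsolete link targets from ``conf.py``, so the result would still need review.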