This repository has been archived by the owner on Oct 29, 2023. It is now read-only.

Developers Console is now "Cloud Platform Console" #118

Merged Feb 2, 2016 (3 commits)
2 changes: 1 addition & 1 deletion docs/source/api-client-python/getting-started-windows.rst
@@ -75,7 +75,7 @@ If you want to pull in data from Google Genomics API you will need to set
https://cloud.google.com/genomics/

* Then create a project in the
`Google Developers Console`_
`Google Cloud Platform Console`_
or select an existing one.

* On the **APIs & auth** tab, select APIs and turn the Genomics API to ON
11 changes: 4 additions & 7 deletions docs/source/conf.py
@@ -143,25 +143,22 @@
.. _Contact us: [email protected]

.. ### Google Guide Links
.. _Google Developers Console Guide: https://developers.google.com/console/help/new/
.. _Google Identity Platform Guide: https://developers.google.com/identity/protocols/OAuth2
.. _Application Default Credentials: https://developers.google.com/identity/protocols/application-default-credentials

.. ### Google Product Links
.. _Google BigQuery: https://cloud.google.com/bigquery/
.. _Google Cloud Dataflow: https://cloud.google.com/dataflow/
.. _Google Cloud Storage: https://cloud.google.com/storage/
.. _Google Compute Engine: https://cloud.google.com/compute/
.. _Google Developers Console: https://console.developers.google.com/
.. _Google Cloud Platform Console: https://console.cloud.google.com/
.. _Google Genomics: https://cloud.google.com/genomics/
.. _Google Cloud Datalab: https://cloud.google.com/datalab/
.. _Google Cloud Dataproc: https://cloud.google.com/dataproc/

.. ### Deep links into the Developers Console
.. _Project List: https://console.developers.google.com/project
.. _click-to-deploy NCBI BLAST: https://console.developers.google.com/project/_/launcher/details/click-to-deploy-images/ncbiblast
.. _click-to-deploy Bioconductor: https://console.developers.google.com/project/_/mc/template/bioconductor
.. _Deployments: https://console.developers.google.com/project/_/deployments
.. _click-to-deploy NCBI BLAST: https://console.cloud.google.com/project/_/launcher/details/click-to-deploy-images/ncbiblast
.. _click-to-deploy Bioconductor: https://console.cloud.google.com/project/_/mc/template/bioconductor
.. _Deployments: https://console.cloud.google.com/project/_/deployments

.. ### Deep links into cloud.google.com documentation
.. _Compute Engine resource quota: https://cloud.google.com/compute/docs/resource-quotas
2 changes: 1 addition & 1 deletion docs/source/includes/c2d_deployment_teardown.rst
@@ -1,6 +1,6 @@
If you would like to pause your VM when not using it:

1. Go to the Google Developers Console and select your project: https://console.developers.google.com/project/_/compute/instances
1. Go to the Google Cloud Platform Console and select your project: https://console.cloud.google.com/project/_/compute/instances
2. Click on the checkbox next to your VM.
3. Click on *Stop* to pause your VM.
4. When you are ready to use it again, *Start* your VM. For more detail, see: https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance
2 changes: 1 addition & 1 deletion docs/source/includes/create_project.rst
@@ -1 +1 @@
If you do not yet have a cloud project, `create a Genomics and Cloud Storage enabled project via the Google Developers Console <https://console.developers.google.com/start/api?id=genomics,storage_api>`_.
If you do not yet have a cloud project, `create a Genomics and Cloud Storage enabled project via the Google Cloud Platform Console <https://console.cloud.google.com/start/api?id=genomics,storage_api>`_.
4 changes: 2 additions & 2 deletions docs/source/includes/dataflow_on_gce_setup.rst
@@ -1,8 +1,8 @@
If you do not have Java on your local machine, you can set up Java 7 on a `Google Compute Engine`_ instance. The following setup instructions will allow you to *launch* Dataflow jobs from a Compute Engine instance:

#. If you have not already enabled the Google Cloud Platform APIs used by `Google Cloud Dataflow`_, click `here <https://console.developers.google.com/flows/enableapi?apiid=dataflow,compute_component,logging,storage_component,storage_api,bigquery,pubsub,datastore&_ga=1.38537760.2067798380.1406160784>`_ to do so.
#. If you have not already enabled the Google Cloud Platform APIs used by `Google Cloud Dataflow`_, click `here <https://console.cloud.google.com/flows/enableapi?apiid=dataflow,compute_component,logging,storage_component,storage_api,bigquery,pubsub,datastore&_ga=1.38537760.2067798380.1406160784>`_ to do so.

#. Use the `Google Developers Console`_ to spin up a `Google Compute Engine`_ instance and ssh into it. If you have not done this before, see the `step-by-step instructions <https://cloud.google.com/compute/docs/quickstart-developer-console>`_.
#. Use the `Google Cloud Platform Console`_ to spin up a `Google Compute Engine`_ instance and ssh into it. If you have not done this before, see the `step-by-step instructions <https://cloud.google.com/compute/docs/quickstart-developer-console>`_.

#. Run the following command from your local machine to copy the **runnable** jar to the Compute Engine instance. You can download the latest GoogleGenomics dataflow **runnable** jar from the `Maven Central Repository <https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.google.cloud.genomics%22%20AND%20a%3A%22google-genomics-dataflow%22>`_.

2 changes: 1 addition & 1 deletion docs/source/includes/gcp_signup.rst
@@ -2,4 +2,4 @@

If you already have a Google Cloud Platform project, this link will take you to your list of projects.

Sign up for Google Cloud Platform by clicking on this link: https://console.developers.google.com/billing/freetrial
Sign up for Google Cloud Platform by clicking on this link: https://console.cloud.google.com/billing/freetrial
2 changes: 1 addition & 1 deletion docs/source/includes/get_client_secrets_steps.rst
@@ -1,4 +1,4 @@
https://console.developers.google.com/project/_/apiui/credential
https://console.cloud.google.com/project/_/apiui/credential

After you select your Google Cloud project, this link will
automatically take you to the Credentials tab under the API Manager.
@@ -27,9 +27,9 @@ b. Verify or increase quota

gcloud compute regions describe *region*

or in ``Developers Console``:
or in ``Cloud Platform Console``:

https://console.developers.google.com/project/_/compute/quotas
https://console.cloud.google.com/project/_/compute/quotas

Important quota limits include ``CPUs``, ``in-use IP addresses``,
and ``disk size``.
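The quota check above can be scripted. Below is a minimal sketch that flags quotas near their limit, assuming JSON output in the shape produced by a command such as ``gcloud compute regions describe us-central1 --format=json`` (the exact field names are an assumption to verify against your gcloud version):

```python
# Sketch: flag Compute Engine region quotas whose usage is close to the limit.
# Assumes gcloud-style JSON with a "quotas" list of {metric, usage, limit}.
import json

def near_limit_quotas(region_json, threshold=0.8):
    """Return (metric, usage, limit) tuples with usage/limit >= threshold."""
    region = json.loads(region_json)
    flagged = []
    for q in region.get("quotas", []):
        limit = q.get("limit", 0)
        if limit and q.get("usage", 0) / limit >= threshold:
            flagged.append((q["metric"], q["usage"], q["limit"]))
    return flagged

if __name__ == "__main__":
    # Hypothetical sample payload for illustration only.
    sample = json.dumps({
        "name": "us-central1",
        "quotas": [
            {"metric": "CPUS", "usage": 22.0, "limit": 24.0},
            {"metric": "IN_USE_ADDRESSES", "usage": 1.0, "limit": 23.0},
            {"metric": "DISKS_TOTAL_GB", "usage": 500.0, "limit": 10240.0},
        ],
    })
    for metric, usage, limit in near_limit_quotas(sample):
        print("%s: %.0f/%.0f" % (metric, usage, limit))  # prints "CPUS: 22/24"
```

Piping real ``gcloud`` output through such a filter makes it easy to spot the ``CPUs``, ``in-use IP addresses``, and ``disk size`` limits called out above before a deployment fails.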
2 changes: 1 addition & 1 deletion docs/source/includes/spark_setup.rst
@@ -1,4 +1,4 @@
* Deploy your Spark cluster using `Google Cloud Dataproc`_. This can be done using the `Cloud Console <https://console.developers.google.com/project/_/dataproc/clustersAdd>`__ or the following ``gcloud`` command:
* Deploy your Spark cluster using `Google Cloud Dataproc`_. This can be done using the `Cloud Platform Console <https://console.cloud.google.com/project/_/dataproc/clustersAdd>`__ or the following ``gcloud`` command:

.. code-block:: shell

2 changes: 1 addition & 1 deletion docs/source/includes/tute_data.rst
@@ -8,5 +8,5 @@ See `Tute's documentation`_ for more details about:
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://tute_db <https://console.developers.google.com/storage/tute_db>`_
* Google Cloud Storage folder `gs://tute_db <https://console.cloud.google.com/storage/tute_db>`_
* Google BigQuery Dataset ID `silver-wall-555:TuteTable.hg19 <https://bigquery.cloud.google.com/table/silver-wall-555:TuteTable.hg19>`_

BigQuery works publicly, but accounts seem to have access to the files on gs:// disabled. I'm getting the following error:

The account for bucket "tute_db" has been disabled.

It might be good to have it automated so that gs:// also is accessible.

Contributor Author


Hi Paul - can you explain further? I see no problem accessing the tute_db bucket from either the Cloud Console or using "gsutil ls". Where are you receiving this error message?


Hi Matt,

I see the same message for both. So below are the steps of what I perform and what I see:

  1. Via gsutil:
$ gsutil ls gs://tute_db | head -n 3
gs://tute_db/hg19allsnp.split1.ggapidb.txt.gz
gs://tute_db/hg19allsnp.split10.ggapidb.txt.gz
gs://tute_db/hg19allsnp.split100.ggapidb.txt.gz
$
$ gsutil cat gs://tute_db/hg19allsnp.split100.ggapidb.txt.gz | gunzip - | head
Failure: The account for bucket "tute_db" has been disabled..

gzip: stdin: unexpected end of file
$
  2. Next I perform the same thing by trying it through the browser and clicking on the following link:

https://console.cloud.google.com/m/cloudstorage/b/tute_db/o/hg19allsnp.split100.ggapidb.txt.gz

The response I get from the browser is the following:

The account for bucket "tute_db" has been disabled.

It seemed to have worked before, since I performed an analysis through R on the files, and posted it at the following location a while back:

https://groups.google.com/forum/#!topic/tute-genomics/wW5ubyPDV-Y

Thank you for helping me out.

Paul
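For readers reproducing the browser test above: a ``gs://`` URI maps to a plain HTTPS endpoint on ``storage.googleapis.com``. The sketch below builds that URL from a ``gs://`` path; it does not make the object accessible — a disabled or private bucket still returns an error, as in Paul's report:

```python
# Sketch: map a gs:// URI to its HTTPS endpoint on storage.googleapis.com.
# Only a convenience for browser checks; access control is unchanged.

def gs_to_https(gs_uri):
    prefix = "gs://"
    if not gs_uri.startswith(prefix):
        raise ValueError("not a gs:// URI: %s" % gs_uri)
    return "https://storage.googleapis.com/" + gs_uri[len(prefix):]

print(gs_to_https("gs://tute_db/hg19allsnp.split100.ggapidb.txt.gz"))
# https://storage.googleapis.com/tute_db/hg19allsnp.split100.ggapidb.txt.gz
```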

4 changes: 2 additions & 2 deletions docs/source/use_cases/analyze_reads/calculate_coverage.rst
@@ -47,8 +47,8 @@ Create Output Dataset

In order to run this pipeline, you must have a Google Genomics dataset to which the pipeline
can output its AnnotationSet and Annotations.
* If you already have a dataset in which you have write access, you may use it. Click here to see your datasets: https://console.developers.google.com/project/_/genomics/datasets
* If not, you can click on the following link to use the Developers Control to create one: https://console.developers.google.com/project/_/genomics/datasets/create.
* If you already have a dataset in which you have write access, you may use it. Click here to see your datasets: https://console.cloud.google.com/project/_/genomics/datasets
* If not, you can click on the following link to use the Cloud Platform Console to create one: https://console.cloud.google.com/project/_/genomics/datasets/create.

In either case, the ``ID`` of the dataset is the output dataset id you should use when running
the pipeline.
6 changes: 3 additions & 3 deletions docs/source/use_cases/discover_public_data/1000_genomes.rst
@@ -54,9 +54,9 @@ Google Cloud Platform data locations

* Google Cloud Storage folders
* These files were loaded into Google Genomics datasets:
* `gs://genomics-public-data/1000-genomes <https://console.developers.google.com/storage/genomics-public-data/1000-genomes/>`_
* `gs://genomics-public-data/1000-genomes-phase-3 <https://console.developers.google.com/storage/genomics-public-data/1000-genomes-phase-3/>`_
* A full mirror of http://ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `gs://genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/ <https://console.developers.google.com/storage/browser/genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/>`_.
* `gs://genomics-public-data/1000-genomes <https://console.cloud.google.com/storage/genomics-public-data/1000-genomes/>`_
* `gs://genomics-public-data/1000-genomes-phase-3 <https://console.cloud.google.com/storage/genomics-public-data/1000-genomes-phase-3/>`_

The following file still points to developers:

https://console.cloud.google.com/m/cloudstorage/b/genomics-public-data/o/1000-genomes/SEE_ALSO_THE_FULL_MIRROR_OF_1%2C000_GENOMES

When one clicks on it, it will show the following:

For more information on the data in this bucket see:
http://cloud.google.com/genomics/public-data

Note that we have a full mirror of 1,000 Genomes in Google Cloud Storage underneath this path:

  https://console.developers.google.com/storage/browser/genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp

  gs://genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp


* A full mirror of http://ftp-trace.ncbi.nih.gov/1000genomes/ftp/ `gs://genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/ <https://console.cloud.google.com/storage/browser/genomics-public-data/ftp-trace.ncbi.nih.gov/1000genomes/ftp/>`_.
* Google Genomics Dataset IDs
* Dataset Id `10473108253681171589 <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.datasets.get?datasetId=10473108253681171589>`_

s/v1/v1beta2

Contributor Author


v1 is the correct version


Yup :)


@@ -22,7 +22,7 @@ Annotations from `ClinVar`_ were loaded into Google Genomics for use in sample a
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://genomics-public-data/clinvar/ <https://console.developers.google.com/storage/browser/genomics-public-data/clinvar/>`_
* Google Cloud Storage folder `gs://genomics-public-data/clinvar/ <https://console.cloud.google.com/storage/browser/genomics-public-data/clinvar/>`_
* Google Genomics `annotation sets <https://developers.google.com/apis-explorer/?#p/genomics/v1beta2/genomics.annotationSets.search?_h=3&resource=%257B%250A++%2522datasetIds%2522%253A+%250A++%255B%25222259180486797191426%2522%250A++%255D%250A%257D&>`_

Provenance
@@ -31,7 +31,7 @@ This dataset comprises the three public synthetic tumor/normal pairs created for

Google Cloud Platform data locations
------------------------------------
* Google Cloud Storage folder `gs://public-dream-data/ <https://console.developers.google.com/storage/browser/public-dream-data/>`_
* Google Cloud Storage folder `gs://public-dream-data/ <https://console.cloud.google.com/storage/browser/public-dream-data/>`_
* Google Genomics dataset `337315832689 <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.datasets.get?datasetId=337315832689>`_.

This one seems to still point to v1 - is that still okay to use? I thought v1beta2 was the proper version.

Contributor Author


v1 has superseded v1beta2. Version v1beta2 of the API is deprecated.


Oh sorry, I totally missed Jonathan's post here - and thought that v1 was related to v1beta:

https://groups.google.com/forum/#!topic/google-genomics-discuss/SsuPecO29WA

Thanks for the clarification.

~p


Provenance
@@ -28,7 +28,7 @@ This dataset comprises roughly 180 Complete Genomics genomes. See the `Personal
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://pgp-harvard-data-public <https://console.developers.google.com/storage/pgp-harvard-data-public>`_
* Google Cloud Storage folder `gs://pgp-harvard-data-public <https://console.cloud.google.com/storage/pgp-harvard-data-public>`_
* Google Genomics Dataset ID `9170389916365079788 <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.datasets.get?datasetId=9170389916365079788>`_

same thing as above - this one points to v1 instead of v1beta2 - is that okay?

Contributor Author


Yes. We are in the process of upgrading all samples to v1 from v1beta2.
See #114.


Ah, perfect!

Thanks,
~p

* Google BigQuery Dataset IDs
* `google.com:biggene:pgp_20150205.genome_calls <https://bigquery.cloud.google.com/table/google.com:biggene:pgp_20150205.genome_calls>`_
@@ -38,7 +38,7 @@ Provenance

Google Genomics variant set for dataset ``pgp_20150205``: `9170389916365079788 <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.datasets.get?datasetId=9170389916365079788>`_ contains:

* the Complete Genomics datasets from `gs://pgp-harvard-data-public/**/masterVar*bz2 <https://console.developers.google.com/storage/pgp-harvard-data-public>`_
* the Complete Genomics datasets from `gs://pgp-harvard-data-public/**/masterVar*bz2 <https://console.cloud.google.com/storage/pgp-harvard-data-public>`_

Appendix
--------
@@ -22,7 +22,7 @@ This dataset comprises the 17 member CEPH pedigree 1463. See http://www.illumin
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://genomics-public-data/platinum-genomes <https://console.developers.google.com/storage/genomics-public-data/platinum-genomes/>`_
* Google Cloud Storage folder `gs://genomics-public-data/platinum-genomes <https://console.cloud.google.com/storage/genomics-public-data/platinum-genomes/>`_
* Google Genomics Dataset ID `3049512673186936334 <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.datasets.get?datasetId=3049512673186936334>`_

s/v1/v1beta2

Contributor Author


v1 is the correct version


Understood :)


* `ReadGroupSet IDs <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.readgroupsets.search?fields=readGroupSets(id%252Cname)&_h=5&resource=%257B%250A++%2522datasetIds%2522%253A+%250A++%255B%25223049512673186936334%2522%250A++%255D%250A%257D&>`_
@@ -22,7 +22,7 @@ Reference Genomes such as GRCh37, GRCh37lite, GRCh38, hg19, hs37d5, and b37 are
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://genomics-public-data/references <https://console.developers.google.com/storage/genomics-public-data/references/>`_
* Google Cloud Storage folder `gs://genomics-public-data/references <https://console.cloud.google.com/storage/genomics-public-data/references/>`_
* Google Genomics `reference sets <https://developers.google.com/apis-explorer/#p/genomics/v1/genomics.referencesets.search>`_

s/v1/v1beta2

Contributor Author


v1 is the correct version


Yup, my mistake.

Thanks,
`p


Provenance
@@ -22,7 +22,7 @@ __ RenderedVersion_
Google Cloud Platform data locations
------------------------------------

* Google Cloud Storage folder `gs://genomics-public-data/ucsc/ <https://console.developers.google.com/storage/browser/genomics-public-data/ucsc/>`_
* Google Cloud Storage folder `gs://genomics-public-data/ucsc/ <https://console.cloud.google.com/storage/browser/genomics-public-data/ucsc/>`_
* Google Genomics `annotation sets <https://developers.google.com/apis-explorer/?#p/genomics/v1beta2/genomics.annotationSets.search?_h=11&resource=%257B%250A++%2522datasetIds%2522%253A+%250A++%255B%252210673227266162962312%2522%250A++%255D%250A%257D&>`_

Provenance
2 changes: 1 addition & 1 deletion docs/source/use_cases/run_familiar_tools/bioconductor.rst
@@ -21,7 +21,7 @@ __ RenderedVersion_

Bioconductor maintains Docker containers with R, Bioconductor packages, and RStudio Server all ready to go! Its a great way to set up your R environment quickly and start working. The instructions to deploy it to Google Compute Engine are below but if you want to learn more about these containers, see http://www.bioconductor.org/help/docker/.

1. Click on `click-to-deploy Bioconductor`_ to navigate to the launcher page on the Developers Console.
1. Click on `click-to-deploy Bioconductor`_ to navigate to the launcher page on the Cloud Platform Console.

1. Optional: change the *Machine type* if you would like to deploy a machine with more CPU cores or RAM.
2. Optional: change the *Data disk size (GB)* if you would like to use a larger persistent disk for your own files.
8 changes: 4 additions & 4 deletions docs/source/use_cases/run_picard_and_gatk/index.rst
@@ -122,7 +122,7 @@ This command uses an older, slower REST based API. To run using GRPC API impleme

For Java 7 (as opposed to 8) use *alpn-boot-7.1.3.v20150130.jar*.

We use a test readset here from `genomics-test-data <https://console.developers.google.com/project/genomics-test-data/storage/browser/gatk-tools-java/>`_ project.
We use a test readset here from `genomics-test-data <https://console.cloud.google.com/project/genomics-test-data/storage/browser/gatk-tools-java/>`_ project.

Specifying a genomics region to use from the readset
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -143,7 +143,7 @@ Timing the reading speed from the cloud
You can run `gatk-tools-java/src/main/scripts/example.sh <https://github.com/googlegenomics/gatk-tools-java/blob/master/src/main/scripts/example.sh>`_ with and without "grpc" command line parameter to see the difference in reading speed. The timing statistics are dumped to the terminal.
We benchmarked **x11** speed improvements with GRPC compared to REST, giving **~12,000 reads/second**.

The tests were done on `Platinum Genomes NA12877_S1.bam dataset <https://console.developers.google.com/storage/browser/genomics-public-data/platinum-genomes/bam/?_ga=1.197206447.160385476.1431305548>`_, please see the `detailed writeup of the test procedure and results <https://docs.google.com/document/d/1Br7RMSbAChNpG6pi2teujf-YthczF-rAM1afSgDoCgQ/edit#>`_ if you want to repeat the test.
The tests were done on `Platinum Genomes NA12877_S1.bam dataset <https://console.cloud.google.com/storage/browser/genomics-public-data/platinum-genomes/bam/?_ga=1.197206447.160385476.1431305548>`_, please see the `detailed writeup of the test procedure and results <https://docs.google.com/document/d/1Br7RMSbAChNpG6pi2teujf-YthczF-rAM1afSgDoCgQ/edit#>`_ if you want to repeat the test.

The Google Doc requires access permission, is that a prerequisite or should it be public?


We therefore recommend running GRPC variants of command line.

@@ -170,7 +170,7 @@ How do you find an ID of the readset from the :doc:`/use_cases/discover_public_

We will do it step by step using the command line API client.

* Lets say we want to use `Platinum Genomes NA12877_S1.bam readgroupset <https://console.developers.google.com/storage/browser/genomics-public-data/platinum-genomes/bam/?_ga=1.197206447.160385476.1431305548>`_ from :doc:`/use_cases/discover_public_data/1000_genomes` project.
* Lets say we want to use `Platinum Genomes NA12877_S1.bam readgroupset <https://console.cloud.google.com/storage/browser/genomics-public-data/platinum-genomes/bam/?_ga=1.197206447.160385476.1431305548>`_ from :doc:`/use_cases/discover_public_data/1000_genomes` project.

* The `documentation <https://cloud.google.com/genomics/data/1000-genomes?hl=en>`_ page states that the dataset id for this set of files is **10473108253681171589**.

@@ -190,7 +190,7 @@ We will do it step by step using the command line API client.
Now lets suppose we are not looking for one of the readgroupsets form the genomics public data but instead want to use one from our own project.
In this case we need to figure out the *dataset id* for our files first, before we can use "readgroupsets list" command to list the individual readgroupsets.

* Lets say we want to figure out which dataset ids are present under `genomics test data <https://console.developers.google.com/project/genomics-test-data/storage/browser>`_ project.
* Lets say we want to figure out which dataset ids are present under `genomics test data <https://console.cloud.google.com/project/genomics-test-data/storage/browser>`_ project.

* First we need to set the project id for subsequent commands to be our project using
