From 323805a4f74d82600a4093eb36548ea12144be2e Mon Sep 17 00:00:00 2001
From: thomasyu888
Date: Wed, 30 Oct 2019 22:09:24 -0700
Subject: [PATCH 1/8] Add submission.md

---
 articles/submissions.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 articles/submissions.md

diff --git a/articles/submissions.md b/articles/submissions.md
new file mode 100644
index 00000000..f8e3055c
--- /dev/null
+++ b/articles/submissions.md
@@ -0,0 +1,14 @@
+---
+title: Submission
+layout: article
+excerpt: Explore the features of submissions made to evaluation queues
+category: howto
+---
+
+
+Every submission you make to an Evaluation queue has a unique id. This id should not be confused with Synapse ids, which start with `syn...`. All submissions have a `Submission` and `SubmissionStatus` object. Annotations can be added to a `SubmissionStatus` to be displayed on leaderboards. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has `Can Score` or `Admin` permissions on the evaluation queue. Public annotations can be viewed by any Team or user that at least has `Can View` permissions. See more about [evaluation queues](evaluation_queues.md).

From 6d7f61839dcdcae93f960fa82251e9750962a543 Mon Sep 17 00:00:00 2001
From: thomasyu888
Date: Thu, 14 Nov 2019 10:48:33 -0800
Subject: [PATCH 2/8] Merge submission text into evaluation queue

---
 articles/evaluation_queues.md |  9 +++++++--
 articles/submissions.md       | 14 --------------
 2 files changed, 7 insertions(+), 16 deletions(-)
 delete mode 100644 articles/submissions.md

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index b0e4d86c..ea825509 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -161,10 +161,15 @@ submission <- synSubmit(
   name = "My Submission",  # An arbitrary name for your submission
   team = "My Team Name")   # Optional, can also pass a Team object or id
 ```
+## Submissions
 
-## View Submissions of an Evaluation Queue
+Every submission you make to an Evaluation queue has a unique id. This id should not be confused with Synapse ids, which start with `syn...`. All submissions have a `Submission` and `SubmissionStatus` object.
 
-All submissions of an Evaluation queue can be views through the through the use of a leaderboard. To learn how to create a wiki page, please visit [here](wikis.md). Below are instructions on how to set up a leaderboard. You must know the **evaluation Id** to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the evaluation Id.
+### Viewing Submissions
+
+All Submissions to an Evaluation queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that at least has **Can View** permissions.
+
+To learn how to create a wiki page, please visit [here](wikis.md). Below are instructions on how to set up a leaderboard. You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.
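+
+As a rough sketch (this assumes the `synapseclient` Python package; the queue ID below is a placeholder), submissions and their statuses can also be listed programmatically:
+
+```python
+import synapseclient
+
+syn = synapseclient.login()
+evaluation = syn.getEvaluation(9610091)  # replace with your own Evaluation queue ID
+# getSubmissionBundles yields (Submission, SubmissionStatus) pairs for the queue
+for submission, status in syn.getSubmissionBundles(evaluation):
+    print(submission.id, submission.name, status.status)
+```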
 ### Adding Leaderboard Widget

diff --git a/articles/submissions.md b/articles/submissions.md
deleted file mode 100644
index f8e3055c..00000000
--- a/articles/submissions.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Submission
-layout: article
-excerpt: Explore the features of submissions made to evaluation queues
-category: howto
----
-
-
-
-Every submission you make to an Evaluation queue has a unique id. This id should not be confused with Synapse ids, which start with `syn...`. All submissions have a `Submission` and `SubmissionStatus` object. Annotations can be added to a `SubmissionStatus` to be displayed on leaderboards. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has `Can Score` or `Admin` permissions on the evaluation queue. Public annotations can be viewed by any Team or user that at least has `Can View` permissions. See more about [evaluation queues](evaluation_queues.md).

From 45fb91215f6480c80c62903fb1299653fb00c3d4 Mon Sep 17 00:00:00 2001
From: Kelsey Montgomery <40647130+kelshmo@users.noreply.github.com>
Date: Fri, 6 Dec 2019 08:57:44 -0800
Subject: [PATCH 3/8] Update evaluation_queues.md

grammatical, style and ordering edits
---
 articles/evaluation_queues.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index 844ca157..b1b579fd 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -168,8 +168,7 @@ submission <- synSubmit(
 ```
 
 ## Submissions
-
-Every submission you make to an Evaluation queue has a unique id. This id should not be confused with Synapse ids, which start with `syn...`. All submissions have a `Submission` and `SubmissionStatus` object.
+Every submission you make to an Evaluation queue has a unique ID. This ID should not be confused with Synapse IDs, which start with `syn`. All submissions have a `Submission` and `SubmissionStatus` object.
 
 ##### Web
 
 Select **Submit To Challenge** to pick the challenge for your submission. Follow the provided steps to complete your submission.
-
-
 ## View Submissions of an Evaluation Queue
-
 ### Viewing Submissions
 
-All Submissions to an Evaluation queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that at least has **Can View** permissions.
+All Submissions to an Evaluation queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
 
-To learn how to create a wiki page, please visit [here](wikis.md). Below are instructions on how to set up a leaderboard. You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.
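+
+As a quick sketch (again assuming the `synapseclient` Python package; the submission ID below is a placeholder), a `Submission` and its `SubmissionStatus` can be retrieved using the submission's unique ID:
+
+```python
+import synapseclient
+
+syn = synapseclient.login()
+submission = syn.getSubmission(9691654)    # a submission ID, not a Synapse ID
+status = syn.getSubmissionStatus(9691654)  # holds the annotations shown on leaderboards
+print(status.status)                       # e.g. RECEIVED or SCORED
+```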
+You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.
 
 ### Adding Leaderboard Widget
@@ -208,3 +204,7 @@
 Clicking **Refresh Columns** will add these default columns.
 
 If you are happy with your leaderboard configurations, save both the configurations and the wiki page to see the Leaderboard.
+
+## See Also
+
+To learn how to create a Wiki page, please visit [the Wikis article](wikis.md).

From 728d53c470e865c6b6ed868be00b715bc0d80c13 Mon Sep 17 00:00:00 2001
From: Thomas Yu
Date: Mon, 9 Dec 2019 14:40:00 -0800
Subject: [PATCH 4/8] Update articles/evaluation_queues.md

Co-Authored-By: Kelsey Montgomery <40647130+kelshmo@users.noreply.github.com>
---
 articles/evaluation_queues.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index b1b579fd..150e1451 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -181,7 +181,7 @@ Select **Submit To Challenge** to pick the challenge for your submission. Follow
 
 ### Viewing Submissions
 
-All Submissions to an Evaluation queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
+All submissions to an Evaluation Queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus object to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
 
 You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.

From f6fd83deb74243707c69633ec0db61f92071485a Mon Sep 17 00:00:00 2001
From: Kelsey Montgomery <40647130+kelshmo@users.noreply.github.com>
Date: Mon, 9 Dec 2019 14:49:54 -0800
Subject: [PATCH 5/8] Update evaluation_queues.md

add in section link
---
 articles/evaluation_queues.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index 150e1451..2b47639b 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -183,7 +183,7 @@ Select **Submit To Challenge** to pick the challenge for your submission. Follow
 
 All submissions to an Evaluation Queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus object to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
 
-You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.
+You must know the Evaluation Queue ID to find the leaderboard. See the section ["Configure an Evaluation Queue"](#configure-an-evaluation-queue) for instructions on finding the ID.
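+
+As a supplementary sketch (assuming the `synapseclient` Python package; `syn12345` is a placeholder project ID), the queues attached to a project, and therefore their IDs, can also be listed programmatically:
+
+```python
+import synapseclient
+
+syn = synapseclient.login()
+# List every Evaluation queue attached to a project, with its ID and name
+for evaluation in syn.getEvaluationByContentSource('syn12345'):
+    print(evaluation.id, evaluation.name)
+```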
 
 ### Adding Leaderboard Widget

From 32bc7adc5f599cec2ad14470db91b42be2f0c8b4 Mon Sep 17 00:00:00 2001
From: Thomas Yu
Date: Mon, 9 Dec 2019 14:51:25 -0800
Subject: [PATCH 6/8] Edit leaderboard configuration

---
 articles/evaluation_queues.md | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index 150e1451..bfc4e6dd 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -179,11 +179,7 @@ Select **Submit To Challenge** to pick the challenge for your submission. Follow
 
 ## View Submissions of an Evaluation Queue
 
-### Viewing Submissions
-
-All submissions to an Evaluation Queue can be viewed through a leaderboard. Annotations can be added to a SubmissionStatus object to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
-
-You must know the Evaluation queue ID to do so; see the section on how to "Configure an Evaluation Queue" for instructions on finding the ID.
+Submissions can be viewed through Leaderboard widgets on wikipages.
 
 ### Adding Leaderboard Widget
 
 
 
 ### Configuring Leaderboard Widget
 
-Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own evaluation Id. To view all the columns available, click **Refresh Columns**.
+Submission annotations can be added to a SubmissionStatus object to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
+
+Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own evaluation queue ID. To view all the columns available, click **Refresh Columns**.

From 1480aa3e66a260a7d3904e44926740825f0feae4 Mon Sep 17 00:00:00 2001
From: Kelsey Montgomery <40647130+kelshmo@users.noreply.github.com>
Date: Mon, 9 Dec 2019 15:41:03 -0800
Subject: [PATCH 7/8] Update evaluation_queues.md

copy edits, reordered, modified style
---
 articles/evaluation_queues.md | 46 +++++++++++++++++------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index bfc4e6dd..bc7aa957 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -6,11 +6,11 @@
 excerpt: A queue accepts submissions of Synapse entities for evaluation.
 category: howto
 ---
 
-An Evaluation queue allows for people to submit Synapse Files, Docker images, etc. for evaluation. They are designed to support open-access data analysis and modeling challenges in Synapse. This framework provides tools for administrators to collect and analyze data models from Synapse users created for a specific goal or purpose.
+An Evaluation queue allows people to submit Synapse `Files`, `Docker` images, etc. for evaluation. They are designed to support open-access data analysis and modeling challenges in Synapse. This framework provides tools for administrators to collect and analyze models created by Synapse users for a specific goal or purpose.
 
 ## Create an Evaluation Queue
 
-To create a queue, you must first create a Synapse project. To learn how to do so, please follow instructions [here](getting_started.md#project-and-data-management-on-synapse). An Evaluation Queue can take several parameters that you can use to fine tune it to your preferences. The minimum requirements to create a queue are:
+To create a queue, you must first create a Synapse `Project`. To create a Synapse Project, follow the instructions on the [Project and Data Management](getting_started.md#making-and-managing-projects-in-synapse) page. An Evaluation queue can take several parameters that you can use to customize the queue to your preferences. The minimum requirements to create a queue are:
 
 * name – Unique name of the evaluation
 * description – A short description of the evaluation
@@ -18,7 +18,7 @@ To create a queue, you must first create a Synaps
 * submissionReceiptMessage – Message to display to users upon submission
 * submissionInstructionsMessage – Message to display to users detailing acceptable formatting for submissions.
 
-Additionally, you can pass in an optional **quota** parameter using the R, Python, or web clients. It can be configured with the following terms:
+Optionally, you can restrict submissions by setting a quota. An Evaluation queue can only have one quota. If you set a queue duration, the start date (firstRoundStart), round duration (roundDurationMillis), and number of rounds (numberOfRounds) are required parameters. Setting a submission limit (submissionLimit) is optional.
 
 * firstRoundStart - The date/time at which the first round begins in UTC
 * roundDurationMillis - The duration of each round in milliseconds
@@ -27,7 +27,7 @@ Additionally, you can pass in an optional **quota** parameter using the R, Pytho
 
 {% include note.html content="The name of your evaluation queue MUST be unique, otherwise the queue will not be created." %}
 
-The example below shows how to create a queue using all of the parameters described:
+The examples below show how to create a queue and set the quota using all of the parameters in Python, R, and the web client:
 
 ##### Python
 
@@ -73,19 +73,21 @@ evaluation <- Evaluation(name="My Unique Example Challenge Name",
 synStore(evaluation)
 ```
 
-You can create Evaluation queues on the web by navigating to your challenge site by adding `/admin` to the url (E.g. www.synapse.org/#!Synapse:syn12345/admin). Click **Tools** on the right corner and **Add Evaluation Queue** and follow the prompts.
+##### Web
+
+Navigate to your challenge site and add `/admin` to the url (e.g. www.synapse.org/#!Synapse:syn12345/admin). Click **Tools** in the right corner and select **Add Evaluation Queue**.
 
 ![Create evaluation queue](../assets/images/create_evaluation_queues.png)
 
-In the web client, the quota can be modified under the **Challenge** tab by clicking `Edit` for the Evaluations that require a quota.
+In the web client, the quota can be modified under the **Challenge** tab by clicking `Edit`.
 
-## Configure an Evaluation Queue
+## Set a Quota on an Existing Evaluation Queue
 
-An Evaluation Queue can have limits. Submission "rounds" (start date, round duration, and number of rounds) with an optional submission quota (maximum submissions per participant or team) can be defined for each queue. There is no way to configure the round or quota settings of an Evaluation Queue from the web. The Evaluation ID can be found under the **Challenge** tab of your project. Please note that a Challenge tab will not appear on your project until you have created a challenge (**Tools > Run Challenge**). In the case below, the evaluation queue id is `9610091`.
+The Evaluation ID can be found under the **Challenge** tab of your project. Please note that a Challenge tab will not appear on your project until you have created a Challenge (**Tools > Run Challenge**). In the case below, the Evaluation queue ID is `9610091`.
 
-Using this value, we can configure the `quota` parameters of this evaluation queue with the R or Python client.
+Using the Evaluation ID, we can configure the `quota` parameters of this evaluation queue with the R or Python client.
 
 ##### Python
@@ -113,14 +115,14 @@ synStore(evaluation)
 
 ## Share an Evaluation Queue
 
-Each Evaluation has its own sharing settings, which limit who can interact with the Evaluation and in what way:
+Each Evaluation has sharing settings, which limit who can interact with the Evaluation.
 
-* "Administrator" sharing should be tightly restricted, as it includes authority to delete the entire Evaluation queue with all its contents. These users also have the ability to download all the submissions.
-* "Can score" allows for individuals to download all the submissions
-* "Can submit" allows for Teams or individuals to submit to the Evaluation, but doesn't have access to any of the submissions.
-* "Can view" allows for Teams or individuals to view the submissions on a leaderboard.
+* **Administrator** sharing should be tightly restricted, as it includes authority to delete the entire Evaluation queue and its contents. These users also have the ability to download all of the submissions.
+* **Can Score** allows individuals to download all of the submissions.
+* **Can Submit** allows teams or individuals to submit to the Evaluation, but does not grant access to any of the submissions.
+* **Can View** allows teams or individuals to view the submissions on a leaderboard.
 
-To set the sharing setting, go to the **Challenge** tab and see your list of Evaluations. Click on the `Share` button per Evaluation and share it with the Teams or individuals you would like.
+To set the sharing settings, go to the **Challenge** tab to view your list of Evaluations. Click the **Share** button for each Evaluation and share it with the teams or individuals you would like.
 
 {% include important.html content="When someone submits to an Evaluation, a copy of the submission is made, so a person with Administrator or Can score access will be able to download the submission even if the submitter deletes the entity." %}
 
 ## Submit to an Evaluation Queue
 
 Any Synapse entity may be submitted to an Evaluation Queue. In the R and Python examples, you need to know the ID of the evaluation queue. This ID must be provided to you by administrators of the queue.
 
-The submission function takes **two optional parameters**: `name` and `team`. Name can be provided to customize the submission. The submission name is often used by participants to identify their submissions. If a name is not provided, the name of the entity being submitted will be used. As an example, if you submit a File named testfile.txt, and the name of the submission isn't specified, it will default to testfile.txt. Team names can be provided to recognize a group of contributors.
-
+The submission function takes **two optional parameters**: `name` and `team`. Name can be provided to customize the submission. The submission name is often used by participants to identify their submissions. If a name is not provided, the name of the entity being submitted will be used. As an example, if you submit a File named testfile.txt, and the name of the submission isn't specified, it will default to testfile.txt. Team names can be provided to recognize a group of contributors.
 
 ##### Python
@@ -172,14 +173,13 @@ submission <- synSubmit(
 ```
 
 ## Submissions
 Every submission you make to an Evaluation queue has a unique ID. This ID should not be confused with Synapse IDs, which start with `syn`. All submissions have a `Submission` and `SubmissionStatus` object.
 
 ##### Web
 
-Navigate to a file in Synapse and click on **Tools** in the upper right-hand corner.
-Select **Submit To Challenge** to pick the challenge for your submission. Follow the provided steps to complete your submission.
+Navigate to a file in Synapse and click on **Tools** in the upper right-hand corner. Select **Submit To Challenge** to pick the challenge for your submission. Follow the provided steps to complete your submission.
 
 ## View Submissions of an Evaluation Queue
 
-Submissions can be viewed through Leaderboard widgets on wikipages.
+Submissions can be viewed through leaderboard widgets on Wiki pages.
 
 ### Adding Leaderboard Widget
 
 
 
 ### Configuring Leaderboard Widget
 
-Submission annotations can be added to a SubmissionStatus object to be displayed. Each of these added annotations can be set to either public or private. Private annotations cannot be read by people on a leaderboard unless the Team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any Team or user that has **Can View** permissions.
+Submission annotations can be added to a SubmissionStatus object to be displayed. Each of these annotations can be set to either public or private. Private annotations are not visible unless the team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any team or user that has **Can View** permissions.
 
-Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own evaluation queue ID. To view all the columns available, click **Refresh Columns**.
+Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own Evaluation queue ID. To view all the columns available, click **Refresh Columns**.
 
@@ -199,7 +199,7 @@ Clicking **Refresh Columns** will add these default columns.
 
 ### Saving Leaderboard Widget
 
-If you are happy with your leaderboard configurations, save both the configurations and the wiki page to see the Leaderboard.
+If you are happy with your leaderboard configurations, save both the configurations and the Wiki page to visualize these updates.
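+
+As a rough, optional sketch (this assumes the `synapseclient` Python package, the evaluation submission query REST endpoint `/evaluation/submission/query`, and the same placeholder queue ID; the response shape is an assumption as well), the leaderboard query can also be run programmatically:
+
+```python
+import urllib.parse
+
+import synapseclient
+
+syn = synapseclient.login()
+query = "select * from evaluation_9610091"  # replace 9610091 with your Evaluation queue ID
+# Calls the evaluation submission query service; rows contain submission annotations
+results = syn.restGET("/evaluation/submission/query?query=" + urllib.parse.quote(query))
+print(results.get("headers"))
+```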

From 18fa18ebf9daf0608622b2567e228ac1ec7fdccf Mon Sep 17 00:00:00 2001
From: Kelsey Montgomery <40647130+kelshmo@users.noreply.github.com>
Date: Mon, 9 Dec 2019 15:48:54 -0800
Subject: [PATCH 8/8] Update evaluation_queues.md

fix spaces and language around quota definition
---
 articles/evaluation_queues.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/articles/evaluation_queues.md b/articles/evaluation_queues.md
index bc7aa957..d509a310 100644
--- a/articles/evaluation_queues.md
+++ b/articles/evaluation_queues.md
@@ -6,7 +6,7 @@
 excerpt: A queue accepts submissions of Synapse entities for evaluation.
 category: howto
 ---
 
-An Evaluation queue allows people to submit Synapse `Files`, `Docker` images, etc. for evaluation. They are designed to support open-access data analysis and modeling challenges in Synapse. This framework provides tools for administrators to collect and analyze models created by Synapse users for a specific goal or purpose. 
+An Evaluation queue allows people to submit Synapse `Files`, `Docker` images, etc. for evaluation. They are designed to support open-access data analysis and modeling challenges in Synapse. This framework provides tools for administrators to collect and analyze models created by Synapse users for a specific goal or purpose.
 
 ## Create an Evaluation Queue
 
@@ -18,11 +18,11 @@ To create a queue, you must first create a Synaps
 * submissionReceiptMessage – Message to display to users upon submission
 * submissionInstructionsMessage – Message to display to users detailing acceptable formatting for submissions.
 
-Optionally, you can restrict submissions by setting a quota. An Evaluation queue can only have one quota. If you set a queue duration, the start date (firstRoundStart), round duration (roundDurationMillis), and number of rounds (numberOfRounds) are required parameters. Setting a submission limit (submissionLimit) is optional.
+Optionally, you can restrict submissions by setting a quota. An Evaluation queue can only have one quota. If you want to change how long the queue is open, the start date (firstRoundStart), round duration (roundDurationMillis), and number of rounds (numberOfRounds) are required parameters. Setting a submission limit (submissionLimit) is optional.
 
 * firstRoundStart - The date/time at which the first round begins in UTC.
 * roundDurationMillis - The duration of each round in milliseconds.
 * numberOfRounds - The number of rounds, or null if there is no limit to set.
 * submissionLimit - The maximum number of submissions per team/participant per round. Please keep in mind that the system will prevent additional submissions by a user/team once they have hit this number of submissions.
 
 {% include note.html content="The name of your evaluation queue MUST be unique, otherwise the queue will not be created." %}
 
@@ -75,7 +75,7 @@ synStore(evaluation)
 
 ##### Web
 
-Navigate to your challenge site and add `/admin` to the url (e.g. www.synapse.org/#!Synapse:syn12345/admin). Click **Tools** in the right corner and select **Add Evaluation Queue**. 
+Navigate to your challenge site and add `/admin` to the url (e.g. www.synapse.org/#!Synapse:syn12345/admin). Click **Tools** in the right corner and select **Add Evaluation Queue**.
 
 ![Create evaluation queue](../assets/images/create_evaluation_queues.png)
 
@@ -83,11 +83,11 @@ In the web client, the quota can be modified under the **Challenge** tab by clic
 
 ## Set a Quota on an Existing Evaluation Queue
 
-The Evaluation ID can be found under the **Challenge** tab of your project. Please note that a Challenge tab will not appear on your project until you have created a Challenge (**Tools > Run Challenge**). In the case below, the Evaluation queue ID is `9610091`. 
+The Evaluation ID can be found under the **Challenge** tab of your project. Please note that a Challenge tab will not appear on your project until you have created a Challenge (**Tools > Run Challenge**). In the case below, the Evaluation queue ID is `9610091`.
 
 
 
-Using the Evaluation ID, we can configure the `quota` parameters of this evaluation queue with the R or Python client. 
+Using the Evaluation ID, we can configure the `quota` parameters of this evaluation queue with the R or Python client.
 
 ##### Python
 
@@ -169,7 +169,7 @@ submission <- synSubmit(
 ```
 
 ## Submissions
-Every submission you make to an Evaluation queue has a unique ID. This ID should not be confused with Synapse IDs, which start with `syn`. All submissions have a `Submission` and `SubmissionStatus` object. 
+Every submission you make to an Evaluation queue has a unique ID. This ID should not be confused with Synapse IDs, which start with `syn`. All submissions have a `Submission` and `SubmissionStatus` object.
 
 ##### Web
 
@@ -187,9 +187,9 @@ Submissions can be viewed through leaderboard widgets on Wiki pages.
 
 ### Configuring Leaderboard Widget
 
-Submission annotations can be added to a SubmissionStatus object to be displayed. Each of these annotations can be set to either public or private. Private annotations are not visible unless the team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any team or user that has **Can View** permissions. 
+Submission annotations can be added to a SubmissionStatus object to be displayed. Each of these annotations can be set to either public or private. Private annotations are not visible unless the team or Synapse user has **Can Score** or **Admin** permissions on the Evaluation queue. Public annotations can be viewed by any team or user that has **Can View** permissions.
 
-Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own Evaluation queue ID. To view all the columns available, click **Refresh Columns**. 
+Once you click on **Leaderboard**, you will have to input your own query statement such as `select * from evaluation_9610091`. Remember, 9610091 should be replaced with your own Evaluation queue ID. To view all the columns available, click **Refresh Columns**.
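+
+Finally, a rough sketch (assuming the `synapseclient` Python package; the submission ID and annotation values below are made up) of how annotations end up on a `SubmissionStatus`, and therefore on a leaderboard:
+
+```python
+import synapseclient
+from synapseclient.annotations import to_submission_status_annotations
+
+syn = synapseclient.login()
+status = syn.getSubmissionStatus(9691654)  # a submission ID, not a syn ID
+# is_private=False marks these annotations as public (visible with Can View)
+annots = {"auc": 0.92, "rank": 1}
+status.annotations = to_submission_status_annotations(annots, is_private=False)
+status = syn.store(status)
+```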