
add method for comparison to remote datasets #2

Open
freeman-lab opened this issue Aug 10, 2016 · 2 comments

Comments

freeman-lab (Member) commented Aug 10, 2016

Currently the evaluate method compares two local results to each other, which is useful. But as suggested by @marius10p, sometimes we want the evaluation to incorporate metadata from the "standard" ground truth datasets.

So one idea is to add an extra method, maybe called benchmark or evaluate-remote, that takes as input a single set of results plus the name of a ground truth dataset, fetches both the remote regions and the metadata, and returns the scores.

In other words, we'll have both

neurofinder evaluate a.json b.json

and

neurofinder benchmark 01.00 a.json
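
For concreteness, here's a minimal sketch of what `benchmark` could look like. The ground-truth URL, `fetch_ground_truth`, and the injected `compare` callable are all placeholders, not anything that exists in the package yet:

```python
# Minimal sketch only; the URL pattern, fetch_ground_truth, and the injected
# `compare` callable are placeholders, not part of the current package.
import json
from urllib.request import urlopen

# Assumed location of hosted ground-truth regions for the training datasets
GROUND_TRUTH_URL = 'https://example.com/neurofinder/{name}/regions.json'

def fetch_ground_truth(name):
    """Download the ground-truth regions for a named training dataset, e.g. '01.00'."""
    with urlopen(GROUND_TRUTH_URL.format(name=name)) as response:
        return json.loads(response.read().decode('utf-8'))

def benchmark(name, results_file, compare):
    """Score one local results file against the remote ground truth for `name`.

    `compare` stands in for whatever two-result scoring function `evaluate`
    already uses, so the comparison logic itself stays unchanged.
    """
    truth = fetch_ground_truth(name)
    with open(results_file) as f:
        results = json.load(f)
    return compare(truth, results)
```

Keeping the download separate from the scoring would let `evaluate` and `benchmark` share the exact same comparison code.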

Thoughts?

cc @syncrostone

marius10p commented

Sounds good, but maybe do not make it possible to obtain results on the test datasets; otherwise people can easily overfit to them (and we won't know).

I have been using "neurofinder evaluate a.json b.json" on the training datasets, just to get an overall idea of how many ROIs to output.

freeman-lab (Member, Author) commented

Yes, oops, I definitely meant only having this for the training data 😄
