
fix(lib-classifier): record classification times between the first and last annotation #6449

Open

wants to merge 5 commits into zooniverse:master from eatyourgreens:test-classification-timestamps
Conversation

@eatyourgreens (Contributor) commented Nov 7, 2024

Please request review from the @zooniverse/frontend team or an individual member of that team.

Package

  • lib-classifier

Linked Issue and/or Talk Post

How to Review

classification.metadata.started_at should now record the time that you started a classification, proxied as the time that you first annotate a subject. You should be able to test by waiting between downloading a subject and starting a classification.

It should fix cases, mentioned by Peter Mason on Talk, where project teams are finding classifications that seem to have taken hours or days, but really only took minutes.

I have seen differences in the start time (started classifying by the volunteer) and finished times (completed classification received by zooniverse) of many hours, even days.

The new tests for classification start time and duration should fail when run with the old code for classification.metadata.started_at.
https://github.com/zooniverse/front-end-monorepo/actions/runs/11720164440/job/32644906423#step:9:5027

Logs from a failing test for classification start time, where the expected time is one hour different from the recorded time.

I've got a staging project with a variety of workflows and tasks, including drawing tasks, which now record started_at when you first draw on the subject:
https://localhost:8080/?project=eatyourgreens/-project-testing-ground&workflow=3370
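To make the intended behaviour concrete, here is a minimal plain-JavaScript sketch of the timing logic. The class and method names (`ClassificationTimes`, `annotationStarted`, `complete`) are illustrative assumptions, not the actual lib-classifier store API; the point is that `started_at` is recorded on the first annotation rather than on subject load, so idle time before classifying is excluded from the duration.

```javascript
// Minimal plain-JS sketch of the timing logic (class and method names are
// illustrative assumptions, not the actual lib-classifier store API).
class ClassificationTimes {
  constructor () {
    this.startedAt = null
    this.finishedAt = null
  }

  // Called whenever annotations change; only the first call records startedAt.
  annotationStarted (timestamp) {
    if (!this.startedAt) this.startedAt = timestamp
  }

  // Called when the classification is submitted.
  complete (timestamp) {
    this.finishedAt = timestamp
  }

  get durationMs () {
    return new Date(this.finishedAt) - new Date(this.startedAt)
  }
}

const times = new ClassificationTimes()
// Subject downloaded at 09:00, but the volunteer starts annotating at 10:00,
// so the hour of idle time is excluded from the classification duration.
times.annotationStarted('2024-11-07T10:00:00Z')
times.annotationStarted('2024-11-07T10:00:30Z') // ignored: already started
times.complete('2024-11-07T10:02:00Z')
console.log(times.durationMs / 1000) // 120 seconds, not over an hour
```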

Checklist

PR Creator - Please tailor the checklist to fit the review needed for your code changes.
PR Reviewer - Use the checklist during your review. Each point should be checkmarked or discussed before PR approval.

General

  • Tests are passing locally and on Github
  • Documentation is up to date and changelog has been updated if appropriate
  • You can yarn panic && yarn bootstrap or docker-compose up --build and FEM works as expected
  • FEM works in all major desktop browsers: Firefox, Chrome, Edge, Safari (Use Browserstack account as needed)
  • FEM works in a mobile browser

General UX

Example Staging Project: i-fancy-cats

  • All pages of a FEM project load: Home Page, Classify Page, and About Pages
  • Can submit a classification
  • Can sign-in and sign-out
  • The component is accessible

Bug Fix

  • The PR creator has listed user actions to use when testing if bug is fixed
  • The bug is fixed
  • Unit tests are added or updated

Comment on lines +39 to +42
const userLanguage = getRoot(self)?.locale
if (userLanguage) {
  self.update({ userLanguage })
}
@eatyourgreens (Contributor, Author) commented Nov 7, 2024

This isn't related to the timestamps bug but fixes the store error in #6448.

@eatyourgreens marked this pull request as ready for review November 7, 2024 09:48
@coveralls commented Nov 7, 2024

Coverage Status

coverage: 77.854% (+0.02%) from 77.838%
when pulling 8bed133 on eatyourgreens:test-classification-timestamps
into 0e8655e on zooniverse:master.

@eatyourgreens changed the title from test(lib-classifier): test classification timestamps to fix(lib-classifier): record classification times between the first and last annotation Nov 7, 2024
@eatyourgreens marked this pull request as draft November 7, 2024 12:37
@eatyourgreens (Contributor, Author) commented:

Tests are passing but started_at is still wrong when I check this with I Fancy Cats. Setting it back to draft while I look at that.

@eatyourgreens (Contributor, Author) commented Nov 7, 2024

Figured it out: a new, empty annotation is created when the subject first loads, so the code needs to wait until annotation._inProgress is true before setting the classification start time. That happens when someone first interacts with a task, e.g. selecting an answer to a question.

EDIT: for a survey task, annotation starts when annotation._choiceInProgress is true. All other task annotations start when annotation._inProgress is true.
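That check can be sketched in plain JavaScript. The `_inProgress` and `_choiceInProgress` field names come from the diff excerpts in this PR; the function name and surrounding code are illustrative, not the actual store implementation.

```javascript
// Plain-JS sketch of the in-progress check (annotation field names are
// taken from the diff; everything else here is illustrative).
function classificationInProgress (annotations) {
  // Survey tasks set _choiceInProgress; all other tasks set _inProgress.
  return annotations.some(
    annotation => annotation._inProgress || annotation._choiceInProgress
  )
}

// The empty annotation created on subject load does not start the clock...
const onSubjectLoad = [{ _inProgress: false }]
// ...but answering a question, or picking a survey choice, does.
const afterAnswer = [{ _inProgress: true }]
const afterSurveyChoice = [{ _choiceInProgress: true }]

console.log(classificationInProgress(onSubjectLoad))     // false
console.log(classificationInProgress(afterAnswer))       // true
console.log(classificationInProgress(afterSurveyChoice)) // true
```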

@eatyourgreens marked this pull request as ready for review November 7, 2024 13:21
function _onAnnotationsChange () {
  // set startedAt when inProgress changes from false to true
  if (self.inProgress) {
    self.setStartedAt()
  }
}
@eatyourgreens (Contributor, Author) commented:
If you comment this out, the tests should fail.

get inProgress() {
  let inProgress = false
  self.annotations.forEach(annotation => {
    inProgress ||= annotation._inProgress || annotation._choiceInProgress
  })
  return inProgress
}
@eatyourgreens (Contributor, Author) commented:
_choiceInProgress is used by the survey task. All other tasks use _inProgress.

- [x] Add tests for classification `started_at` and `finished_at` timestamps.
- [ ] Fix the failing test for classification start times.
  A new, empty annotation is created when the subject first loads, so wait until `annotation._inProgress` is true before setting the classification start time.
Labels: None yet
Projects: None yet
3 participants