ignore_fail to work on non-sequence tasks #256

Open
Levelleor opened this issue Nov 15, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@Levelleor

This isn't a bug, I'm assuming, since ignore_fail is probably only meant to work for sequence tasks. Would it be possible for ignore_fail to also work on other task types? I know pytest, for example, fails when there are no tests in the project and returns exit code 5. I don't see any way to make it fail gracefully other than running "pytest || true".

It would be great if I could add ignore_fail to any task directly, without wrapping it in a sequence, so the task is still treated as successful if it fails.
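
For context, here's roughly what I have to do today, as far as I understand the config (a rough sketch of pyproject.toml; the exact inline-item syntax may differ from the docs):

```toml
# Today: as far as I can tell, ignore_fail is only accepted on sequence tasks,
# so a single command has to be wrapped in a one-item sequence.
[tool.poe.tasks.test]
ignore_fail = true
sequence = [{ cmd = "pytest" }]

# The alternative workaround mentioned above: swallow the failure in the shell itself.
[tool.poe.tasks.test-loose]
shell = "pytest || true"
```

What I'm asking for is being able to put ignore_fail directly on a plain cmd/shell task instead of the wrapping above.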

@nat-n
Owner

nat-n commented Nov 19, 2024

hi @Levelleor, thanks for the feedback.

I'm not sure I get it. You want to be able to configure a task to always return 0 no matter what? Could you tell me more about why you would want that?

That said, maybe being able to swallow specific error codes from a command could be useful, so you could specifically ignore a certain failure mode as you describe. But if you want to ignore all failures this seems more like a concern for how you invoke the task from some other process. Does this make sense for your use case?
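
To make that idea concrete, I'm imagining something vaguely along these lines (purely hypothetical syntax; no such option exists today and the final shape would need design):

```toml
# Hypothetical: ignore only specific exit codes from a task.
# Neither this key nor this value form exists in poethepoet today.
[tool.poe.tasks.test]
cmd = "pytest"
ignore_fail = { return_codes = [5] }
```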

@Levelleor
Author

I know in Jest (for JS) there's an option for "passWithNoTests": https://jestjs.io/docs/cli#--passwithnotests

This is a minor feature that I use in CI/CD when I don't have any tests yet but just want the test run to succeed quickly. Pytest doesn't provide a flag like that, at least to my knowledge, and when no tests are found it always returns exit code 5, which in turn fails the command in the terminal and thus the CI run as well.

It's not something I couldn't overcome with a bit more time and coding by writing an actual example test, but I thought it would still be a valid suggestion that would slightly improve my experience when I have nothing at hand yet. Other cases might include informational commands where I only care about the output: if the info server is down, I don't care about the exit status, I'd just like to skip that non-essential info and proceed with CI. It would be great to suppress the error there, because the command isn't critical to the run; it's more of a supplemental script.

> That said, maybe being able to swallow specific error codes from a command could be useful, so you could specifically ignore a certain failure mode as you describe. But if you want to ignore all failures this seems more like a concern for how you invoke the task from some other process. Does this make sense for your use case?

This sounds like a great idea, actually. If I could at least ignore exit code 5, that would fully resolve my case.
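
In the meantime, I think I can approximate it with a shell task, something like the sketch below (unverified; it assumes the task is executed by a bash-compatible shell, which I believe the interpreter option allows):

```toml
# Workaround sketch: treat pytest's "no tests collected" exit code (5) as success,
# but let any other failure propagate. Assumes a bash-compatible shell runs the task.
[tool.poe.tasks.test]
interpreter = "bash"
shell = """
pytest
code=$?
if [ "$code" -eq 5 ]; then exit 0; fi
exit $code
"""
```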

@nat-n added the enhancement (New feature or request) label Nov 24, 2024