crates.io: Trusted Publishing Support #3691
Thank you so much for your hard work authoring this @mdtro! It was my honor and pleasure to be able to help. As one of the people who built the equivalent Trusted Publishing feature on PyPI, I'm more than happy to answer any technical or policy questions the Rust community has, as well as offer insight into PyPI's experience (which IMO has been extremely successful) over the past 18 months of having Trusted Publishing deployed.
I think we should try to support third-party websites that have their own GitLab/Forgejo/Gitea/etc. instances, so that e.g. gitlab.example.com could publish to whatever crates they own even though they aren't using gitlab.com's CI and are instead running their own CI infrastructure. This could perhaps be done by having crates.io, when CI asks it for an OIDC token, use OAuth/OIDC to check that gitlab.example.com grants permission for CI uploads through a token provided to crates.io by CI.
It's ultimately up to each index to decide a subjective cutoff for IdP "popularity," but I would caution against this: the main security benefit of trusted publishing versus an API token is that large CI/CD providers have dedicated OIDC IdP maintenance and operation teams that handle the burden of maintaining an OIDC PKI. For one-off instances, the benefits of a PKI versus ordinary API tokens are marginal and may even invert, since maintaining a PKI is significantly more operationally complicated than securing a single API token. (For PyPI, this is one of the reasons we started with GitHub, and then moved to support GitLab, Google Cloud Build, etc., but haven't yet moved to support third-party instances of GH or GL.)
+1 to what @woodruffw said. Also to add to this: PyPI has a notion of "organizations", and one thing we are considering for PyPI is to permit self-hosted IdPs 1:1 with organizations on a case-by-case basis.
I think it should really be implemented so that you need BOTH a crates.io API token AND the OpenID Connect identity token. Otherwise, if there is a bug in the OpenID Connect implementation by GitHub/Google/etc., someone exploiting it could take over all crates using it without having to take over the actual developer machines or CI systems where the API token would be stored. This also guarantees that security is strictly improved: even if the OpenID Connect implementation on crates.io's side were totally broken, it would still be as secure as the current system. The best way to do this seems to be to change the crates.io API token creation UI to have the option to also require an OpenID Connect identity to be provided to accept requests using that token.
Could you elaborate on the threat model you're envisioning here? We considered similar scenarios when building out the threat model for trusted publishing on PyPI, and ultimately came to the conclusion that an attacker who is sufficiently powerful to control a major OIDC IdP (like Google's or GitHub's) would almost certainly also have sufficient power to control CI-side user-configured credentials. Or in other words: we couldn't think of an internally coherent threat model in which an attacker is simultaneously strong enough to take over a major OIDC IdP but too weak to compromise an individual CI process on that IdP's platform (and thereby exfiltrate a manually-configured crates.io API token). (More broadly, I think Trusted Publishing's security and usability benefits become moot if it requires two credentials - one manual - instead of just an automatic one: the goal is to remove error-prone manual steps and opportunities for over-scoping/accidental disclosure, both of which would still exist if the user still needed to configure a crates.io API token.)
good work @mdtro and everyone involved! I'm excited for this to land in crates.io :)
> - Should crate owners be able to configure the allowed token scopes for a Trusted Publisher configuration?
>   - We could default to `publish-new` and `publish-update`, but maybe it's best to allow this to be configurable?
> - How long should an access token derived from the ID token exchange be valid for?
Do we know what the other registries have chosen and the reasons for their choices?
I guess an "issue" is that cargo by default builds the crate before it is sent to crates.io. Depending on how long this build process takes, the token may have already expired by the time cargo sends the crate to crates.io.
I wonder how viable it would be to provide a config option for the token lifetime.
Alternatively: cargo provides support for other authentication methods these days, e.g. to integrate with 1Password. Would it be viable to integrate the authentication flow through that? It would presumably allow us to request the token from crates.io just in time before the upload happens. Admittedly I don't know at what point cargo talks to the auth provider process. It might happen before the build process too. Might be worth exploring though.
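For context, cargo's pluggable credential providers are configured in `.cargo/config.toml`, and a future OIDC-exchange provider could presumably be wired into the same chain. A minimal sketch: `cargo:token` (the built-in token store) and `cargo-credential-1password` exist today, while `cargo-credential-oidc` is a hypothetical name used only for illustration.

```toml
# .cargo/config.toml
# Providers listed here form the chain cargo consults when it needs
# registry credentials. "cargo:token" is cargo's built-in token store,
# "cargo-credential-1password" is an existing external provider, and
# "cargo-credential-oidc" is a hypothetical OIDC-exchange provider.
[registry]
global-credential-providers = ["cargo:token", "cargo-credential-1password", "cargo-credential-oidc"]
```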
It seems like integrating this into Cargo's default set of auth providers also avoids the separate "auth" repository/action maintenance, which feels like more of a "new thing" in our release processes etc. It also seems like native integration in Cargo is probably better for users -- the risk of using the wrong repository (or an outdated version) is probably reduced.
I guess it's up to the cargo team whether they would want to make this a built-in thing. It looks like they have quite a long to-do list already though 😅
I don't think the action would have to be integrated with the Rust release process though. All it needs is a repository with some git tags, which shouldn't be too hard to set up and maintain.
For PyPI, the token is valid for 15 minutes from the time that it is minted. Our reasoning for this is that 15 minutes is generally enough time to perform any uploading needed, and anyone that needed more time could always re-request a token.
> I guess an "issue" is that cargo by default builds the crate before it is sent to crates.io. Depending on how long this build process takes, the token may have already expired by the time cargo sends the crate to crates.io.
I'm a little unfamiliar with the general workflow here, but it might not be necessary to exchange an OIDC token for an upload token prior to the build, i.e. this could happen after a build completes, but just before an upload will happen instead, so your only concern is the length of time of the upload.
> 15 minutes is generally enough time to perform any uploading needed
If I understand correctly, this kind of originates in PyPI allowing file uploads after a version was created? (vs. crates.io where you need to upload your one crate file upfront)
> this could happen after a build completes, but just before an upload will happen instead, so your only concern is the length of time of the upload.
The short version is: `cargo publish` performs both the build and the upload, and there isn't really a step in between that we could hook into from the outside (except maybe via the auth providers that I mentioned above). The uploads are currently limited to 30 seconds anyway due to hosting provider limitations.
> If I understand correctly, this kind of originates in PyPI allowing file uploads after a version was created? (vs. crates.io where you need to upload your one crate file upfront)
Yea, kind of; it's more that it originates from a PyPI release consisting of multiple files, which are uploaded in multiple separate requests rather than a single request, and some of those files can be huge, which takes some time to upload.
> The short version is: `cargo publish` performs both the build and the upload, and there isn't really a step in between that we could hook into from the outside
I guess the question is whether the OIDC token exchange needs to happen externally to (and prior to) `cargo publish` being invoked, or whether the token exchange is the responsibility of `cargo publish`, in which case it could happen between the build & publish steps.
For PyPI, we chose to do the exchange in our canonical upload workflow prior to invoking `twine upload`, but we have plans to make `twine upload` support handling the exchange directly: pypa/twine#999. In either case, the build has already happened though, so we can keep this window relatively small.
I think having `cargo` do the token exchange is a great idea. It'd allow us to limit the lifetime of the token to the minimum, but I don't think we should take this on as a prerequisite for this work. Could this possibly be implemented as a cargo plugin/external command?
Instead, I'll move forward with provisioning a long enough token lifetime -- possibly configurable in the Trusted Publisher Configuration settings (up to a sane maximum allowed value).
At the end of the `rust-lang/crates-io-auth-action` run, we can make a call to a revoke endpoint to invalidate the token as soon as the workflow is finished.
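To make the flow concrete, here's a minimal sketch of what a publish workflow might look like, assuming the proposed `rust-lang/crates-io-auth-action` performs the token exchange (and the revocation in its cleanup step) and exposes the short-lived token as an output named `token` (the version tag and output name are assumptions, not a published interface):

```yaml
name: Publish
on:
  push:
    tags: ["v*"]

permissions:
  id-token: write  # allows the job to request an OIDC token from GitHub

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # hypothetical tag; exchanges the runner's OIDC token for a
      # short-lived crates.io token and revokes it when the job ends
      - id: auth
        uses: rust-lang/crates-io-auth-action@v1
      - run: cargo publish
        env:
          CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}
```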
> 1. (required) The owning GitHub username or organization
> 2. (required) The repository name
> 3. (required) The workflow file name (must be located in `.github/workflows/`)
> 4. (optional) The [GitHub Actions environment](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment) name
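For reference, these four fields correspond to claims that GitHub embeds in the OIDC token a workflow presents to crates.io. A sketch of the relevant claims with illustrative values (the actual token is a JWT; the claim names below are GitHub's):

```yaml
repository_owner: octo-org         # 1. owning username or organization
repository: octo-org/octo-crate    # 2. repository (owner/name)
# 3. workflow file, pinned to the ref the run was triggered from:
job_workflow_ref: octo-org/octo-crate/.github/workflows/release.yml@refs/tags/v1.0.0
environment: release               # 4. (optional) Actions environment name
actor: octocat                     # the user who triggered the workflow run
```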
Would it make sense to treat an empty environment string as mandating that publishing is not done from an environment? This may help users who use environments on their CI remember to fill in which environment allows publishing, rather than forgetting it, leaving the field implicitly acting as a wildcard and thus accidentally allowing publishing outside of an environment.
On PyPI we treat "no environment specified" as "any environment is allowed," since we observed that users would configure an environment on the CI/CD side and then forget to include it on the Trusted Publishing side. They'd then get confused as to why their publisher wasn't working, since the error would strongly suggest that their configuration was correct (the only difference being the environment).
That being said, YMMV! PyPI's approach was pretty experimental, and it's possible that you can avoid this user confusion through better documentation and error messaging.
> They'd then get confused as to why their publisher wasn't working, since the error would strongly suggest that their configuration was correct (the only difference being the environment).
Would it have been possible for the error message to suggest changing the publish settings on the PyPI side to allow the environment that was used?
> Would it have been possible for the error message to suggest changing the publish settings on the PyPI side to allow the environment that was used?
Yep, that's pretty much what we do now -- we didn't do that originally because we were wary of users blindly trusting the error message to tell them what to configure, but in practice I think it's a reasonable tradeoff.
I'm not sure I have a strong opinion on this either way. Environments are a newish feature on GitHub Actions, and I generally don't see them used much. Treating the empty `environment` field as an "any" felt natural to me.
GitHub Action Environments are generally used to increase the security around a workflow, so it feels weird to explicitly not want one. 🤔
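For illustration, binding a job to an environment is a one-line addition on the workflow side; under the semantics discussed above, the optional environment field in the Trusted Publisher configuration would then need to match it (the name `release` is illustrative):

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: release  # must match the configured (optional) environment
    steps:
      - run: echo "publish steps go here"
```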
> ## Unresolved questions
> [unresolved-questions]: #unresolved-questions
One minor unresolved question that I don't think is a blocker for the RFC though: how do we deal with user-related things in the publish endpoint? e.g. our publish rate limits (and overrides) are currently implemented per-user, but when publishing via OIDC there is no user. We also have a `version.published_by` column that refers to a `users.id`, and other user-related columns that we might need to rethink when we implement this.
GitHub Actions at least has an `actor` field in its token pointing to the user initiating the workflow.
Interesting, I guess that could be sufficient for now. But once we start to support a) different CI providers and/or b) different auth providers, we will have to think about this anyway 😅
IMHO this is ready now! :) @rfcbot fcp merge
Team member @Turbo87 has proposed to merge this. The next step is review by the rest of the tagged team members: No concerns currently listed. Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See this document for info about what commands tagged team members can give me.
/cc @rust-lang/crates-io
A big thank you to @woodruffw for co-authoring, providing prior art through PyPI's implementation, and all of the expert advice. 🙏
Rendered