This change makes it easier to understand misconfigurations caused
by issuers with extraneous trailing slashes.
Signed-off-by: Mo Khan <mok@vmware.com>
The admin kubeconfigs we have on EKS clusters are a bit different from others, because there is no certificate/key (EKS does not use certificate auth).
This code didn't quite work correctly in that case. The fix is to allow the case where `tlsConfig.GetClientCertificate` is non-nil, but returns a value with no certificates.
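A minimal sketch of the relaxed check, assuming the kubeconfig has been loaded into a `*tls.Config`; the helper name and wiring are illustrative, not the exact production code:

```go
// imports: "crypto/tls"

// hasClientCert reports whether the loaded kubeconfig actually carries a client
// certificate. A non-nil GetClientCertificate callback is not enough: on EKS it is
// set but returns a tls.Certificate with an empty certificate chain.
func hasClientCert(tlsConfig *tls.Config) bool {
	if tlsConfig == nil || tlsConfig.GetClientCertificate == nil {
		return false
	}
	cert, err := tlsConfig.GetClientCertificate(&tls.CertificateRequestInfo{})
	return err == nil && cert != nil && len(cert.Certificate) > 0
}
```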
Signed-off-by: Matt Moyer <moyerm@vmware.com>
to avoid garbage collection breaking the refresh flow
Also changed the access token lifetime to be 2 minutes instead of 15
since we now have cert caching.
- Use `nickname` claim as an example, which means we only need the `openid` scope.
This is also more stable since emails can change over time.
- Put the OIDCIdentityProvider and Secret into one YAML blob, since they will likely be copy-pasted together anyway.
- Add a separate section for using alternate claims.
- Add a separate section for using a private GitLab instance.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Some minor edits I came across while reviewing this:
- Capitalize "GitLab" the way they do.
- Use `{{< ref "xyz" >}}` references when linking internally. The advantage of these is that they're "type checked" by Hugo when the site is rendered, so we'll know if we ever break one.
- Add links to the GitLab docs about creating an OAuth client. These also cover adding a group-level or instance-wide application.
- Re-wrap the YAML lines to fit a bit more naturally.
- Add a `namespace` to the YAML examples, so they're more likely to work without tweaks.
- Use "gitlab" instead of "my-oidc-identity-provider" as the example name, for clarity.
- Re-word a few small bits. These are 100% subjective but hopefully an improvement?
Signed-off-by: Matt Moyer <moyerm@vmware.com>
The supervisor treats all events the same, hence it must use a
singleton queue.
Updated the integration test to remove the data race caused by
calling methods on testing.T outside of the main test goroutine.
Signed-off-by: Monis Khan <mok@vmware.com>
Followup on the previous comment to split apart the ServiceAccount of the kube-cert-agent and the main concierge pod. This is a bit cleaner and ensures that in testing our main Concierge pod never requires any privileged permissions.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Since 0dfb3e95c5, we no longer directly create the kube-cert-agent Pod, so our "use"
permission on PodSecurityPolicies no longer has the intended effect. Since the deployments controller is now the
one creating pods for us, we need to get the permission on the PodSpec of the target pod instead, which we do somewhat
simply by using the same service account as the main Concierge pods.
We still set `automountServiceAccountToken: false`, so this should not actually give any useful permissions to the
agent pod when running.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This change updates the impersonator logic to pass through requests
that authenticated via a bearer token that asserts a UID. This
allows us to support service account tokens (as well as any other
form of token based authentication).
Signed-off-by: Monis Khan <mok@vmware.com>
- The Linux base64 command behaves differently, so avoid using it at all.
  On Linux the default is to split the output into multiple lines,
  which messes up the integration-test-env file. The flag used to
  disable this behavior on Linux ("-w0") does not exist in macOS's
  base64.
- On Debian Linux, the latest version of Docker from apt-get still
  requires DOCKER_BUILDKIT=1 or else it fails.
This controller is responsible for cleaning up kube-cert-agent pods that were deployed by previous versions.
They are easily identified because they use a different `kube-cert-agent.pinniped.dev` label compared to the new agent pods (`true` vs. `v2`).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is a relatively large rewrite of much of the kube-cert-agent controllers. Instead of managing raw Pod objects, they now create a single Deployment and let the builtin k8s controller handle it from there.
This reduces the amount of code we need and should handle a number of edge cases better, especially those where a Pod becomes "wedged" and needs to be recreated.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Now that we have the fix from https://github.com/kubernetes/kubernetes/pull/97693, we no longer need these sleeps.
The underlying authenticator initialization is still asynchronous, but should happen within a few milliseconds.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This change updates the impersonator logic to use the delegated
authorizer for all non-rest verbs such as impersonate. This allows
it to correctly perform authorization checks for incoming requests
that set impersonation headers while not performing unnecessary
checks that are already handled by KAS.
The audit layer is enabled to track the original user who made the
request. This information is then included in a reserved extra
field original-user-info.impersonation-proxy.concierge.pinniped.dev
as a JSON blob.
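A rough sketch of how the original user could be recorded in that reserved extra field; the key comes from the description above, while the surrounding function and types are illustrative:

```go
// imports: "encoding/json", authenticationv1 "k8s.io/api/authentication/v1"

// originalUserInfoExtra sketches how the original (pre-impersonation) user info
// could be serialized as a JSON blob under the reserved extra key.
func originalUserInfoExtra(originalUser authenticationv1.UserInfo) (map[string][]string, error) {
	blob, err := json.Marshal(originalUser)
	if err != nil {
		return nil, err
	}
	return map[string][]string{
		"original-user-info.impersonation-proxy.concierge.pinniped.dev": {string(blob)},
	}, nil
}
```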
Signed-off-by: Monis Khan <mok@vmware.com>
$PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_ISSUER_CA_BUNDLE was recently
changed to be a base64 encoded value, so this script does not need to
base64 encode the value itself anymore.
Avoid them because they can't be used in GoLand for running integration
tests in the UI, like running in the debugger.
Also adds optional PINNIPED_TEST_TOOLS_NAMESPACE because we need it
on the LDAP feature branch where we are developing the upcoming LDAP
support for the Supervisor.
I don't believe this is used by any tests or docs. I think it was for some initial local testing of the impersonation proxy?
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This isn't strictly necessary because we currently always have the concierge endpoint and CA as CLI flags, but it doesn't hurt and it's better to err on the side of _not_ reusing a cache entry.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We have some nice normalization code in this package to remove expired or otherwise malformed cache entries, but we weren't calling it in the appropriate place.
Added calls to normalize the cache data structure before and after each transaction, and added test cases to ensure that it's being called.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Rename the test/deploy/dex directory to test/deploy/tools
- Rename the dex namespace to tools
- Add a new ytt value called `pinny_ldap_password` for the tools
ytt templates
- This new value is not used on main at this time. We intend to use
it in the forthcoming ldap branch. We're defining it on main so
that the CI scripts can use it across all branches and PRs.
Signed-off-by: Ryan Richard <richardry@vmware.com>
Before this change, the "context", "cluster", and "user" fields in generated kubeconfig YAML were always hardcoded to "pinniped". This could be confusing if you generated many kubeconfigs for different clusters.
After this change, the fields will be copied from their names in the original kubeconfig, suffixed with "-pinniped". This suffix can be overridden by setting the new `--generated-name-suffix` CLI flag.
The goal of this change is that you can distinguish between kubeconfigs generated for different clusters, as well as being able to distinguish between the Pinniped and original (admin) kubeconfigs for a cluster.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
A demo of running the Supervisor and Concierge on
a kind cluster. Can be used to quickly set up an
environment for manual testing.
Also added some missing copyright headers to other
hack scripts.
These commits include security fixes (CVE-2021-3121) for code generated by github.com/gogo/protobuf.
We expect this fix to also land in v1.20.6, but we don't want to wait for it.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This test could flake if the load balancer hostname was provisioned but is not yet resolving in DNS from the test process.
The fix is to retry this step for up to 5 minutes.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This test could fail when the cluster was under heavy load. This could cause kubectl to emit "Throttling request took [...]" logs that triggered a failure in the test.
The fix is to ignore these innocuous warnings.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We had this code that printed out pod logs when certain tests failed, but it is a bit cumbersome. We're removing it because we added a CI task that exports all pod logs after every CI run, which accomplishes the same thing and provides us a bunch more data.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This allows setting `$PINNIPED_TEST_CLI` to point at an existing `pinniped` CLI binary instead of having the test build one on-the-fly. This is more efficient when you're running the tests across many clusters as we do in CI.
Building the CLI from scratch in our CI environment takes 1.5-2 minutes, so this change should save nearly that much time on every test job.
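A sketch of how the test helper might pick up a prebuilt binary; both helper names here are assumptions, not the actual code:

```go
// imports: "os", "testing"

// pinnipedCLIPath prefers a prebuilt CLI binary when the new env var is set.
func pinnipedCLIPath(t *testing.T) string {
	if path := os.Getenv("PINNIPED_TEST_CLI"); path != "" {
		t.Log("using prebuilt pinniped CLI from $PINNIPED_TEST_CLI")
		return path
	}
	return buildPinnipedCLI(t) // hypothetical fallback that builds the CLI on the fly
}
```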
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We've seen some test flakes caused by this test. Some small changes:
- Use a 30s timeout for each iteration of the test loop (so each iteration needs to check or fail more quickly).
- Log a bit more during the checks so we can diagnose what's going on.
- Increase the overall timeout from one minute to five minutes.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- a credential that is understood by -> a credential that can be used to
authenticate to
- This wording is neutral as to whether it's going directly to the Kubernetes
API server or through the impersonation proxy
Instead of using the LongRunningFunc to determine if we can safely
use http2, follow the same logic as the aggregation proxy and only
use http2 when the request is not an upgrade.
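A sketch of the "is this an upgrade request?" check; the actual wiring into the proxy transport differs:

```go
// imports: "net/http", "strings"

// isUpgradeRequest reports whether a request is a protocol upgrade (exec, attach,
// port-forward). Upgrade requests carry "Connection: Upgrade" and must not be
// sent over HTTP/2.
func isUpgradeRequest(req *http.Request) bool {
	for _, v := range req.Header.Values("Connection") {
		if strings.Contains(strings.ToLower(v), "upgrade") {
			return true
		}
	}
	return false
}
```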
Signed-off-by: Monis Khan <mok@vmware.com>
In the case where we are using middleware (e.g., when the API group is
different) in our kubeclient, these error messages have a "...middleware request
for..." bit in the middle.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
When the frontend connection to our proxy is closed, the proxy falls through to
a panic(), which means the HTTP handler goroutine is killed, so we were not
seeing this log statement.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This is probably a good idea regardless, but it also avoids an infinite recursion from IntegrationEnv() -> assertNoRestartsDuringTest() -> NewKubeclient() -> IntegrationEnv() -> ...
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This test could flake in some rare scenarios. This change adds a bunch of retries, improves the debugging output if the tests fail, and puts all of the subtests in parallel which saves ~10s on my local machine.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This test has occasionally flaked because it only waited for the APIService GET to finish, but did not wait for the controller to successfully update the target object.
The new code should be more patient and allow the controller up to 10s to perform the expected action.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This new capability describes whether a cluster is expected to allow anonymous requests (most do since k8s 1.6.x, but AKS has it disabled).
This commit also contains new capability YAML files for AKS and EKS, mostly to document publicly how we expect our tests to function in those environments.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
It takes a lot of manual steps to get ready to manually test the
impersonation proxy on a kind cluster, which makes it error prone,
so encapsulate them into a script to make it easier.
At the end of the test, wait for the KubeClusterSigningCertificate
strategy on the CredentialIssuer to go back to being healthy, to avoid
polluting other integration tests which follow this one.
This is the configuration for https://pre-commit.com/, which now also runs golangci-lint using the same version as CI (currently v1.33.0).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We were previously issuing both client certs and server certs with
both extended key usages included. Split the Issue*() methods into
separate methods for issuing server certs versus client certs so
they can have different extended key usages tailored for each use
case.
Also took the opportunity to clean up the parameters of the Issue*()
methods and New() methods to more closely match how we prefer to call
them. We were always only passing the common name part of the
pkix.Name to New(), so now the New() method just takes the common name
as a string. When making a server cert, we don't need to set the
deprecated common name field, so remove that param. When making a client
cert, we're always making it in the format expected by the Kube API
server, so just accept the username and group as parameters directly.
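A minimal sketch of the difference between the two kinds of certs; the real Issue*() methods also handle key generation, signing, and TTLs, and their signatures may differ:

```go
// imports: "crypto/x509", "crypto/x509/pkix", "net"

func serverCertTemplate(dnsNames []string, ips []net.IP) *x509.Certificate {
	return &x509.Certificate{
		DNSNames:    dnsNames,
		IPAddresses: ips,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, // serving only
	}
}

func clientCertTemplate(username string, groups []string) *x509.Certificate {
	return &x509.Certificate{
		// The Kube API server reads the username from CN and the groups from O.
		Subject:     pkix.Name{CommonName: username, Organization: groups},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth}, // client auth only
	}
}
```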
I'm kinda surprised this is working with our current implementation of the
impersonator, but regardless this seems like a step forward.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The impersonator_test.go unit test now starts the impersonation
server and makes real HTTP requests against it using client-go.
It is backed by a fake Kube API server.
The CA IssuePEM() method was missing the argument to allow a slice
of IP addresses to be passed in.
These tests occasionally flake because of a conflict error such as:
```
supervisor_discovery_test.go:105:
Error Trace: supervisor_discovery_test.go:587
supervisor_discovery_test.go:105
Error: Received unexpected error:
Operation cannot be fulfilled on federationdomains.config.supervisor.pinniped.dev "test-oidc-provider-lvjfw": the object has been modified; please apply your changes to the latest version and try again
Test: TestSupervisorOIDCDiscovery
```
These retries should improve the reliability of the tests.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Also make each t.Run use its own namespace to slightly reduce the
interdependency between them.
Use t.Cleanup instead of defer in whoami_test.go just to be consistent
with other integration tests.
The same coverage that was supplied by
TestCredentialRequest_OtherwiseValidRequestWithRealTokenShouldFailWhenTheClusterIsNotCapable
is now provided by an assertion at the end of TestImpersonationProxy,
so delete the duplicate test which was failing on GKE because the
impersonation proxy is now active by default on GKE.
When testing that the impersonation proxy port was closed, there
is no need to include credentials in the request. At the point when
we want to test that the impersonation proxy port is closed, it is
possible that we cannot perform a TokenCredentialRequest to get a
credential either.
Also add a new assertion that the TokenCredentialRequest stops handing
out credentials on clusters which have no successful strategies.
Signed-off-by: Monis Khan <mok@vmware.com>
To make an impersonation request, first make a TokenCredentialRequest
to get a certificate. That cert will either be issued by the Kube
API server's CA or by a new CA specific to the impersonator. Either
way, you can then make a request to the impersonator and present
that client cert for auth and the impersonator will accept it and
make the impersonation call on your behalf.
The impersonator http handler now borrows some Kube library code
to handle request processing. This will allow us to more closely
mimic the behavior of a real API server, e.g. the client cert
auth will work exactly like the real API server.
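A sketch of the client-side flow described above, using client-go; the parameter plumbing and function name are illustrative:

```go
// imports: "k8s.io/client-go/kubernetes", "k8s.io/client-go/rest"

// newImpersonationProxyClient builds a client that talks to the impersonator using
// the client cert returned by a TokenCredentialRequest.
func newImpersonationProxyClient(endpoint string, caBundle, clientCertPEM, clientKeyPEM []byte) (kubernetes.Interface, error) {
	return kubernetes.NewForConfig(&rest.Config{
		Host: endpoint, // the impersonation proxy's external address
		TLSClientConfig: rest.TLSClientConfig{
			CAData:   caBundle,      // CA that signed the impersonator's serving cert
			CertData: clientCertPEM, // client cert from the TokenCredentialRequest response
			KeyData:  clientKeyPEM,
		},
	})
}
```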
Signed-off-by: Monis Khan <mok@vmware.com>
It also works on the slightly older macOS Catalina.
This script is only used on development laptops, so hopefully
this will work for more laptop OSes now.
Signed-off-by: Ryan Richard <richardry@vmware.com>
This adds two new flags to "pinniped get kubeconfig": --skip-validation and --timeout.
By default, at the end of the kubeconfig generation process, we validate that we can reach the configured cluster. In the future this might also validate that the TokenCredentialRequest API is running, but for now it just verifies that the DNS name resolves and that a TLS connection can be established on the given port.
If there is an error during this check, we block and retry for up to 10 minutes. This duration can be changed with --timeout, and the entire process can be skipped with --skip-validation.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This makes output that's easier to copy-paste into the test. We could also make it ignore the order of key/value pairs in the future.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
The thing we're waiting for is mostly that DNS is resolving, the ELB is listening, and connections are making it to the proxy.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
All controller unit tests were accidentally using a timeout context
for the informers, instead of a cancel context which stays alive until
each test is completely finished. There is no reason to risk
unpredictable behavior of a timeout being reached during an individual
test, even though with the previous 3 second timeout it could only be
reached on a machine which is running orders of magnitude slower than
usual, since each test usually runs in about 100-300 ms. Unfortunately,
sometimes our CI workers might get that slow.
This sparked a review of other usages of timeout contexts in other
tests, and all of them were increased to a minimum value of 1 minute,
under the rule of thumb that our tests will be more reliable on slow
machines if they "pass fast and fail slow".
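The pattern now used in the controller unit tests, as a fragment from a test body (names are illustrative):

```go
// A cancel context stays alive until the test is completely finished, instead of
// a short timeout that a slow CI worker could hit mid-test.
ctx, cancel := context.WithCancel(context.Background())
t.Cleanup(cancel) // stops informers and the controller once the test ends
```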
In impersonator_config_test.go, instead of waiting for the resource
version to appear in the informers, wait for the actual object to
appear.
This is an attempt to resolve flaky failures that only happen in CI,
but it also cleans up the test a bit by avoiding inventing fake resource
version numbers all over the test.
Signed-off-by: Monis Khan <mok@vmware.com>
- Use `Eventually` when making TLS connections because the production
code's handling of starting and stopping the TLS server port
has some async behavior.
- Don't use resource version "0" because that has special meaning
in the informer libraries.
This time, don't use the Squid proxy if the cluster supports real external load balancers (as in EKS/GKE/AKS).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Because otherwise `go test` will panic/crash your test if it takes
longer than 10 minutes, which is an annoying way for an integration
test to fail since it skips all of the t.Cleanup's.
This updates our issuerconfig.UpdateStrategy to sort strategies according to a weighted preference.
The TokenCredentialRequest API strategy is preferred, followed by impersonation proxy, followed by any other unknown types.
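A sketch of the weighted ordering; the type and constant names are approximations of the CredentialIssuer API and may not match the production code exactly:

```go
// imports: "sort", plus the CredentialIssuer API types (shown here as v1alpha1)

weight := func(t v1alpha1.StrategyType) int {
	switch t {
	case v1alpha1.KubeClusterSigningCertificateStrategyType: // TokenCredentialRequest API
		return 2
	case v1alpha1.ImpersonationProxyStrategyType:
		return 1
	default: // unknown strategy types sort last
		return 0
	}
}
sort.SliceStable(strategies, func(i, j int) bool {
	return weight(strategies[i].Type) > weight(strategies[j].Type)
})
```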
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- This commit does not include the updates that we plan to make to
the `status.strategies[].frontend` field of the CredentialIssuer.
That will come in a future commit.
This is more than an automatic merge. It also includes a rewrite of the CredentialIssuer API impersonation proxy fields using the new structure, and updates to the CLI to account for that new API.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I think this is another aspect of the test flakes we're trying to fix. This matters especially for the "Multiple Pinnipeds" test environment where two copies of the test suite are running concurrently.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
If the test is run immediately after the Concierge is installed, the API server can still have broken discovery data and return an error on the first call.
This commit adds a retry loop to attempt this first kubectl command for up to 60s before declaring failure.
The subsequent tests should be covered by this as well since they are not run in parallel.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
These controllers were a bit inconsistent. There were cases where the controllers ran out of the expected order and the custom labels might not have been applied.
We should still plan to remove this label handling or move responsibility into the middleware layer, but this avoids any regression.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This field is a new tagged-union style field that describes how clients can connect using each successful strategy.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We don't support using the impersonate headers through the impersonation
proxy yet, so this integration test is a negative test which asserts
that we get an error.
- The CA cert will end up in the end user's kubeconfig on their client
machine, so if it changes they would need to fetch the new one and
update their kubeconfig. Therefore, we should avoid changing it as
much as possible.
- Now the controller writes the CA to a different Secret. It writes both
the cert and the key so it can reuse them to create more TLS
certificates in the future.
- For now, it only needs to make more TLS certificates if the old
TLS cert Secret gets deleted or updated to be invalid. This allows
for manual rotation of the TLS certs by simply deleting the Secret.
In the future, we may want to implement some kind of auto rotation.
- For now, rotation of both the CA and TLS certs will also happen if
you manually delete the CA Secret. However, this would cause the end
users to immediately need to get the new CA into their kubeconfig,
so this is not as elegant as a normal rotation flow where you would
have a window of time where you have more than one CA.
Also fixes our sitemap to have correct `lastmod` times when built locally (it was already correct on Netlify).
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Should work on clusters which have:
- no load balancer support, but a Squid proxy (e.g. kind)
- load balancer support and a Squid proxy (e.g. EKS)
- load balancer support but no Squid proxy (e.g. GKE)
When testing with a load balancer, call the impersonation proxy through
the load balancer.
Also, added a new library.RequireNeverWithoutError() helper.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This flag selects a CredentialIssuer to use when detecting what mode the Concierge is in on a cluster. If not specified, the command will look for a single CredentialIssuer. If there are multiple, then the flag is required.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Also update concierge_impersonation_proxy_test.go integration test
to use real TLS when calling the impersonator.
Signed-off-by: Ryan Richard <richardry@vmware.com>
These are prone to breaking when stdr is upgraded because they rely on the exact ordering of keys in the log message. If we have more problems we can rewrite the assertions to be more robust, but for now I'm just fixing them to match the new output.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
The login commands now expect either `--concierge-mode ImpersonationProxy` or `--concierge-mode TokenCredentialRequestAPI` (the default).
This is partly a style choice, but I also think it helps in case we need to add a third major mode of operation at some point.
I also cleaned up some other minor style items in the help text.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Adds a new optional `spec.impersonationProxyInfo` field to hold the URL and CA data for the impersonation proxy, as well as some additional status condition constants for describing the current status of the impersonation proxy.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
These are some more changes that came up when Pablo and I were reviewing the previous docs PR.
In no particular order:
- Fix "related posts" on the blog section, and hide the section if there are none.
- Minor style changes to several pages (guided by various style guides).
- Redirect the root of get.pinniped.dev to our main page (shouldn't really be hit, but it's nice to do something).
- Add more mobile-friendly CSS for our docs.
- Reword the "getting started" CTA, and hide it on the docs pages (you're already there).
- Fix the "Learn how Pinniped provides identity services to Kubernetes" link on the landing page.
- Add a date to our blog post cards.
- Rewrite the hero text on the landing page.
- Fix the docs link for the "Get Started with Pinniped" button on the landing page.
- Rework the landing page grid text.
- Add Margo and Nanci to the team section and sort it alphabetically.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Also:
- Shut down the informer correctly in
concierge_impersonation_proxy_test.go
- Remove the t.Failed() checks which avoid cleaning up after failed
tests. This was inconsistent with how most of the tests work, and
left cruft on clusters when a test failed.
Signed-off-by: Ryan Richard <richardry@vmware.com>
Also:
- Changed base64 encoding of impersonator bearer tokens to use
`base64.StdEncoding` to make it easier for users to manually
create a token using the unix `base64` command
- Test the headers which are and are not passed through to the Kube API
by the impersonator more carefully in the unit tests
- More WIP on concierge_impersonation_proxy_test.go
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This change adds a new virtual aggregated API that can be used by
any user to echo back who they are currently authenticated as. This
has general utility to end users and can be used in tests to
validate if authentication was successful.
Signed-off-by: Monis Khan <mok@vmware.com>
- Because the impersonation proxy config controller needs to be able
to delete the load balancer which it created
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This is a more reliable way to determine whether the load balancer
is already running.
Also added more unit tests for the load balancer.
Signed-off-by: Ryan Richard <richardry@vmware.com>
Makes most of the fonts a bit bigger, increases contrast, fixes some nits about the spacing in numbered/bulleted lists, and adds some image alt texts.
Overall this improves our Lighthouse accessibility score from 71 to 95 and I think it's subjectively more readable.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This wasn't needed before because the other code wasn't in the main module and golangci-lint won't cross a module boundary.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
If someone has already set impersonation headers in their request, then
we should fail loudly so the client knows that its existing impersonation
headers will not work.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I think we were assuming the name of our Concierge app, and getting lucky
because it was the name we use when testing locally (but not in CI).
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Watch a configmap to read the configuration of the impersonation
proxy and reconcile it.
- Implements "auto" mode by querying the API for control plane nodes.
- WIP: does not create a load balancer or proper TLS certificates yet.
Those will come in future commits.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
This is a partial revert of 288d9c999e. For some reason it didn't occur to me
that we could do it this way earlier. Whoops.
This also contains a middleware update: mutation funcs can return an error now
and short-circuit the rest of the request/response flow. The idea here is that
if someone is configuring their kubeclient to use middleware, they are agreeing
to a narrower client contract by doing so (e.g., their TokenCredentialRequests
must have a Spec.Authenticator.APIGroup set).
I also updated some internal/groupsuffix tests to be more realistic.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I think the reason we were seeing flakes here is because the kube cert agent
pods had not reached a steady state even though our test assertions passed, so
the test would proceed immediately and run more assertions on top of a weird
state of the kube cert agent pods.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I added that test helper to create an http.Request since I wanted to properly
initialize the http.Request's context.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This allows us to keep all of our resources in the pinniped category
while not having kubectl return errors for calls such as:
kubectl get pinniped -A
Signed-off-by: Monis Khan <mok@vmware.com>
As of upgrading to Kubernetes 1.20, our aggregated API server now runs some
controllers for the two flowcontrol.apiserver.k8s.io resources in the title of
this commit, so it needs RBAC to read them.
This should get rid of the following error messages in our Concierge logs:
Failed to watch *v1beta1.FlowSchema: failed to list *v1beta1.FlowSchema: flowschemas.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "flowschemas" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope
Failed to watch *v1beta1.PriorityLevelConfiguration: failed to list *v1beta1.PriorityLevelConfiguration: prioritylevelconfigurations.flowcontrol.apiserver.k8s.io is forbidden: User "system:serviceaccount:concierge:concierge" cannot list resource "prioritylevelconfigurations" in API group "flowcontrol.apiserver.k8s.io" at the cluster scope
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I messed this up before because the ordering of the path components is a bit different than in the specific version case.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
When the Pinniped server has been installed with the `api_group_suffix`
option, for example using `mysuffix.com`, then clients who would like to
submit a `TokenCredentialRequest` to the server should set the
`Spec.Authenticator.APIGroup` field as `authentication.concierge.mysuffix.com`.
This makes more sense from the client's point of view than using the
default `authentication.concierge.pinniped.dev` because
`authentication.concierge.mysuffix.com` is the name of the API group
that they can observe on their cluster, and `authentication.concierge.pinniped.dev`
does not exist as an API group on their cluster.
This commit includes both the client and server-side changes to make
this work, as well as integration test updates.
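A sketch of what such a client sends when the server was installed with `api_group_suffix` set to `mysuffix.com`; the field names follow the Concierge login API, while the token variable and authenticator name are placeholders:

```go
// imports: corev1 "k8s.io/api/core/v1", plus the Concierge login API types (shown here as loginv1alpha1)

apiGroup := "authentication.concierge.mysuffix.com"
req := &loginv1alpha1.TokenCredentialRequest{
	Spec: loginv1alpha1.TokenCredentialRequestSpec{
		Token: upstreamToken, // placeholder
		Authenticator: corev1.TypedLocalObjectReference{
			APIGroup: &apiGroup, // the suffixed group, as observable on the cluster
			Kind:     "JWTAuthenticator",
			Name:     "my-authenticator", // placeholder
		},
	},
}
```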
Co-authored-by: Andrew Keesler <akeesler@vmware.com>
Co-authored-by: Ryan Richard <richardry@vmware.com>
Co-authored-by: Margo Crawford <margaretc@vmware.com>
Makes it easy to deploy Pinniped under a different API group for manual
testing and iterating on integration tests on your laptop.
Signed-off-by: Ryan Richard <richardry@vmware.com>
Because it is a test of the conciergeclient package, and the naming
convention for integration test files is supervisor_*_test.go,
concierge_*_test.go, or cli_*_test.go to identify which component
the test is primarily covering.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This makes sure that if our clients ever send types with the wrong
group, the server will refuse to decode it.
Signed-off-by: Monis Khan <mok@vmware.com>
- I realized that the hardcoded fakekubeapi 404 not found response was invalid,
so we were getting a default error message. I fixed it so the tests follow a
higher fidelity code path.
- I caved and added a test for making sure the request body was always closed,
and believe it or not, we were double closing a body. I don't *think* this will
matter in production, since client-go will pass us ioutil.NopCloser()'s, but
at least we know now.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Yes, this is a huge commit.
The middleware allows you to customize the API groups of all of the
*.pinniped.dev API groups.
Some notes about other small things in this commit:
- We removed the internal/client package in favor of pkg/conciergeclient. The
two packages do basically the same thing. I don't think we use the former
anymore.
- We re-enabled cluster-scoped owner assertions in the integration tests.
This code was added in internal/ownerref. See a0546942 for when this
assertion was removed.
- Note: the middleware code is in charge of restoring the GV of a request object,
so we should never need to write mutations that do that.
- We updated the supervisor secret generation to no longer manually set an owner
reference to the deployment since the middleware code now does this. I think we
still need some way to make an initial event for the secret generator
controller, which involves knowing the namespace and the name of the generated
secret, so I still wired the deployment through. We could use a namespace/name
tuple here, but I was lazy.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Co-authored-by: Ryan Richard <richardry@vmware.com>
This was generated via `hugo gen chromastyles --style=monokailight > ./site/themes/pinniped/assets/scss/_syntax.css`.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We have these redirects set up to make the `kubectl apply -f [...]` commands cleaner, but we never went back and fixed up the documentation to use them until now.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This project overwrote the v1.0.0 tag with a different commit ID, which has caused issues with the Go module sum DB (which accurately detected the issue).
This has been one of the reasons why Dependabot is not updating our Go dependencies.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This optimizes our image in a few different ways:
- It adds a bunch of files and directories to the `.dockerignore` file.
This lets us have a single `COPY . .` but still be very aggressive about pruning what files end up in the build context.
- It adds build-time cache mounts to the `go build` commands using BuildKit's `--mount=type=cache` flag.
This requires BuildKit-capable Docker, but means that our Go builds can all be incremental builds.
This replaces the previous flow we had where we needed to split out `go mod download`.
- Instead of letting the full `apt-get install ca-certificates` layer end up in our final image, we copy just the single file we need.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We need this in CI when we want to configure Dex with the redirect URI for both
primary and secondary deploys at one time (since we only stand up Dex once).
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I didn't advertise this feature in the deploy READMEs since (hopefully) not
many people will want to use it?
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Previously, when triggering a Tilt reload via a *.go file change, a reload would
take ~13 seconds and we would see this error message in the Tilt logs for each
component.
Live Update failed with unexpected error:
command terminated with exit code 2
Falling back to a full image build + deploy
Now, Tilt should reload images a lot faster (~3 seconds) since we are running
the images as root.
Note! Reloading the Concierge component still takes ~13 seconds because there
are 2 containers running in the Concierge namespace that use the Concierge
image: the main Concierge app and the kube cert agent pod. Tilt can't live
reload both of these at once, so the reload takes longer and we see this error
message.
Will not perform Live Update because:
Error retrieving container info: can only get container info for a single pod; image target image:image/concierge has 2 pods
Falling back to a full image build + deploy
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The group claims read from the session cache file are loaded as `[]interface{}` (slice of empty interfaces) so when we previously did a `groups, _ := idTokenClaims[oidc.DownstreamGroupsClaim].([]string)`, then `groups` would always end up nil.
The solution I tried here was to convert the expected value to also be `[]interface{}` so that `require.Equal(t, ...)` does the right thing.
This bug only showed up in our acceptance environment against Okta, since we don't have any other integration test coverage with IDPs that pass a groups claim.
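A sketch of the mismatch and the workaround, as a fragment from the test; the claim key is shown literally here rather than via the constant used in the real code:

```go
// imports: "github.com/stretchr/testify/require"

groupsFromSession, _ := idTokenClaims["groups"].([]interface{}) // what we actually get back
_, ok := idTokenClaims["groups"].([]string)                     // ok is always false

// Convert the expected []string into []interface{} so require.Equal compares like with like.
expected := make([]interface{}, 0, len(expectedGroups))
for _, g := range expectedGroups {
	expected = append(expected, g)
}
require.False(t, ok)
require.Equal(t, expected, groupsFromSession)
```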
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We were seeing a race in this test code since the require.NoError() and
require.Eventually() would write to the same testing.T state on separate
goroutines. Hopefully this helper function should cover the cases when we want
to require.NoError() inside a require.Eventually() without causing a race.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Co-authored-by: Margo Crawford <margaretc@vmware.com>
Co-authored-by: Monis Khan <i@monis.app>
This reverts commit 4a28d1f800.
This commit was originally made to fix a bug that caused TokenCredentialRequest
to become slow when the server was idle for an extended period of time. This was
to address a Kubernetes issue that was fixed in 1.19.5 and onward. We are now
running with Kubernetes 1.20, so we should be able to pick up this fix.
This change updates our clients to always set an owner ref when:
1. The operation is a create
2. The object does not already have an owner ref set
Signed-off-by: Monis Khan <mok@vmware.com>
See comment. This is at least a first step to make our GKE acceptance
environment greener. Previously, this test assumed that the Pinniped-under-test
had been deployed in (roughly) the last 10 minutes, which is not an assumption
that we make anywhere else in the integration test suite.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- JWKSWriterController
- JWKSObserverController
- FederationDomainSecretsController for HMAC keys
- FederationDomainSecretsController for state signature key
- FederationDomainSecretsController for state encryption key
Signed-off-by: Ryan Richard <richardry@vmware.com>
- Only sync on add/update of secrets in the same namespace which
  have the "storage.pinniped.dev/garbage-collect-after" annotation (see
  the sketch after this list), and also during a full resync of the
  informer whenever secrets in the same namespace with that annotation
  exist.
- Ignore deleted secrets to avoid having this controller trigger itself
unnecessarily when it deletes a secret. This controller is never
interested in deleted secrets, since its only job is to delete
existing secrets.
- No change to the self-imposed rate limit logic. That still applies
because secrets with this annotation will be created and updated
regularly while the system is running (not just during rare system
configuration steps).
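The sketch referenced in the first bullet: a filter predicate on the annotation, with the informer wiring omitted and the function name illustrative.

```go
// imports: metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// hasGarbageCollectAnnotation decides whether an add/update event should trigger a sync.
// Delete events never trigger a sync, since this controller only deletes existing secrets.
func hasGarbageCollectAnnotation(obj metav1.Object) bool {
	_, ok := obj.GetAnnotations()["storage.pinniped.dev/garbage-collect-after"]
	return ok
}
```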
I'm not sure if these docs are used anywhere in our website, but I don't think
that they are. I'm assuming someone or something will yell if these should not
be deleted. These docs also live at the root of the repo, and the duplicate
versions are already drifting out of sync from one another.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
We stared at this very carefully and we don't think there are any structural changes. Maybe something small happened to get the RNG off by one?
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This implementation is janky because I wanted to make the smallest change
possible to try to get the code back to stable so we can release.
Also deep copy an object so we aren't mutating the cache.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This is a bit more clear. We're changing this now because it is a non-backwards-compatible change that we can make now since none of this RFC8693 token exchange stuff has been released yet.
There is also a small typo fix in some flag usages (s/RF8693/RFC8693/)
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- The overall timeout for logins is increased to 90 minutes.
- The timeout for token refresh is increased from 30 seconds to 60 seconds to be a bit more tolerant of extremely slow networks.
- A new, matching timeout of 60 seconds has been added for the OIDC discovery, auth code exchange, and RFC8693 token exchange operations.
The new code uses the `http.Client.Timeout` field rather than managing contexts on individual requests. This is easier because the OIDC package stores a context at creation time and tries to use it later when performing key refresh operations.
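A sketch of the shared client; the go-oidc wiring shown here is an assumption about how the client is plumbed, and `issuerURL` is a placeholder:

```go
// imports: "context", "net/http", "time", "github.com/coreos/go-oidc"

httpClient := &http.Client{
	Timeout: 60 * time.Second, // applies to discovery, auth code exchange, token exchange, and refresh
}
ctx := oidc.ClientContext(context.Background(), httpClient) // go-oidc uses the client stored in ctx
provider, err := oidc.NewProvider(ctx, issuerURL)           // later key refreshes reuse this ctx
```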
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Fosite overrides the `Cache-Control` header we set, which is basically fine even though it's not exactly what we want.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
It would be great to do this for the supervisor's callback endpoint as well, but it's difficult to get at those since the request happens inside the spawned browser.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
From RFC2616 (https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2):
> It MUST be possible to combine the multiple header fields into one "field-name: field-value" pair,
> without changing the semantics of the message, by appending each subsequent field-value to the first,
> each separated by a comma.
This was correct before, but this simplifies things a bit and shaves off a few bytes from the response.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
The bug itself has to do with when headers are streamed to the client. Once a wrapped handler has sent any bytes to the `http.ResponseWriter`, the value of the map returned from `w.Header()` no longer matters for the response. The fix is fairly trivial, which is to add those response headers before invoking the wrapped handler.
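A sketch of the fix, with an illustrative wrapper name and an example header rather than the full set:

```go
// imports: "net/http"

// withSecurityHeaders sets response headers before handing off, because once the
// wrapped handler writes the status line, later edits to w.Header() are silently ignored.
func withSecurityHeaders(wrapped http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Content-Type-Options", "nosniff") // example header only
		wrapped.ServeHTTP(w, r)
	})
}
```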
The existing unit test didn't catch this due to limitations in `httptest.NewRecorder()`. It is now replaced with a new test that runs a full HTTP test server, which catches the previous bug.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Because the library that we are using, which returns that error,
formats the timestamp in local time, which is LMT when running
on a laptop but UTC when running in CI.
Signed-off-by: Ryan Richard <richardry@vmware.com>
This reverts commit be4e34d0c0.
Roll back this change that was supposed to make the test more robust. If we
retry multiple token exchanges with the same auth code, of course we are going
to get failures on the second try onwards because the auth code was invalidated
on the first try.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
The value is correctly validated as `secrets.pinniped.dev/oidc-client` elsewhere, only this comment was wrong.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Refactor the test to avoid testing a private method and instead
always test the results of running the controller.
- Also remove the `if testing.Short()` check because it will always
be short when running unit tests. This prevented the unit test
from ever running, both locally and in CI.
Signed-off-by: Ryan Richard <richardry@vmware.com>
It now tests both the deprecated `pinniped get-kubeconfig` and the new `pinniped get kubeconfig --static-token` flows.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is extracted from library.NewClientsetForKubeConfig(). It is useful so you can assert properties of the loaded, parsed kubeconfig.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Adds two new subcommands: `pinniped get kubeconfig` and `pinniped login static`
- Adds concierge support to `pinniped login oidc`.
- Adds back wrapper commands for the now deprecated `pinniped get-kubeconfig` and `pinniped exchange-credential` commands. These now wrap `pinniped get kubeconfig` and `pinniped login static` respectively.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is a slightly evolved version of our previous client package, exported to be public and refactored to use functional options for API maintainability.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I hope this will make TestSupervisorLogin less flaky. There are some instances
where the front half of the OIDC login flow happens so fast that the JWKS
controller doesn't have time to properly generate an asymmetric key.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
I saw this message in our CI logs, which led me to this fix.
could not update status: OIDCProvider.config.supervisor.pinniped.dev "acceptance-provider" is invalid: status.status: Unsupported value: "SameIssuerHostMustUseSameSecret": supported values: "Success", "Duplicate", "Invalid"
Also - correct an integration test error message that was misleading.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
We believe this API is more forwards compatible with future secrets management
use cases. The implementation is a cry for help, but I was trying to follow the
previously established pattern of encapsulating the secret generation
functionality to a single group of packages.
This commit makes a breaking change to the current OIDCProvider API, but that
OIDCProvider API was added after the latest release, so it is technically still
in development until we release, and therefore we can continue to thrash on it.
I also took this opportunity to make some things private that didn't need to be
public.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This forced us to add labels to the CSRF cookie secret, just as we do
for other Supervisor secrets. Yay tests.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- AudienceMatchingStrategy: we want to use the default matcher from
fosite, so remove that line
- AllowedPromptValues: We can use the default if we add a small
change to the auth_handler.go to account for it (in a future commit)
- MinParameterEntropy: Use the fosite default to make it more likely
that off-the-shelf OIDC clients can work with the supervisor
Signed-off-by: Ryan Richard <richardry@vmware.com>
- Also add more log statements to the controller
- Also have the controller apply a rate limit to itself, to avoid
having a very chatty controller that runs way more often than is
needed.
- Also add an integration test for the controller's behavior.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
When we try to decode with the wrong decryption key, we could get any number of
error messages, depending on what failure mode we are in (couldn't authenticate
plaintext after decryption, couldn't deserialize, etc.). This change makes the
test weaker, but at least we know we will get an error message in the case where
the decryption key is wrong.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This also sets the CSRF cookie Secret's OwnerReference to the Pod's grandparent
Deployment so that when the Deployment is cleaned up, then the Secret is as
well.
Obviously this controller implementation has a lot of issues, but it will at
least get us started.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
There is still a test failing, but I am sure it is a simple fix hiding in the
code. I think this is the general shape of the controller that we want.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Note that we don't cache the securecookie.SecureCookie that we use in our
implementation. This was purely because of laziness. We should think about
caching this value in the future.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Make it more likely that the end user will get the more specific error
message saying that their refresh token has expired the first time
that they try to use an expired refresh token
Signed-off-by: Ryan Richard <richardry@vmware.com>
- This struct represents the configuration of all timeouts. These
timeouts are all interrelated, so declare them all in one place.
This should also make it easier to allow the user to override
our defaults if we would like to implement such a feature in the
future.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
Before this, we weren't properly parsing the `Content-Type` header. This broke integration with the Supervisor, since it sends an extra encoding parameter like `application/json;charset=UTF-8`.
This change switches to properly parsing with the `mime.ParseMediaType` function, and adds test cases to match the supervisor behavior.
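A sketch of the stricter parsing; a plain string comparison fails once the Supervisor appends `;charset=UTF-8`, while `mime.ParseMediaType` handles the parameter:

```go
// imports: "fmt", "mime", "net/http"

mediaType, _, err := mime.ParseMediaType(resp.Header.Get("Content-Type"))
if err != nil || mediaType != "application/json" {
	return fmt.Errorf("unexpected content type: %q", resp.Header.Get("Content-Type"))
}
```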
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This default matches the static client we have defined in the supervisor, which will be the correct value in most cases.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I think this should be more correct. In the server we're authenticating the request primarily via the `subject_token` parameter anyway, and Fosite needs the `client_id` to be set.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
kubectl 1.20 prints "Kubernetes control plane" instead of "Kubernetes master".
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- This is to make it easier for the token exchange branch to also edit
this test without causing a lot of merge conflicts with the
refresh token branch, to enable parallel development of closely
related stories.
- This refactor will allow us to add new test tables for the
refresh and token exchange requests, which both must come after
an initial successful authcode exchange has already happened
Signed-off-by: Margo Crawford <margaretc@vmware.com>
We decided that we don't really need these in every case, since we'll be returning username and groups in a custom claim.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This refactors the `UpstreamOIDCIdentityProviderI` interface and its implementations to pass ID token claims through a `*oidctypes.Token` return parameter rather than as a third return parameter.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This is just a more convenient copy of these values, which are already stored inside the ID token. This will save us from having to pass them around separately or re-parse them later.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
I'm worried that these errors are going to be really buried from the user, so
add some log statements to try to make them a tiny bit more observable.
Also follow some of our error message conventions by using lowercase error
messages.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This CSRF cookie needs to be included on the request to the callback endpoint triggered by the redirect from the OIDC upstream provider. This is not allowed by `Same-Site=Strict` but is allowed by `Same-Site=Lax` because it is a "cross-site top-level navigation" [1].
We didn't catch this earlier with our Dex-based tests because the upstream and downstream issuers were on the same parent domain `*.svc.cluster.local` so the cookie was allowed even with `Strict` mode.
[1]: https://tools.ietf.org/html/draft-ietf-httpbis-cookie-same-site-00#section-3.2
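A sketch of the cookie attributes after this change; the cookie name and value variable are assumed, not the actual code:

```go
// imports: "net/http"

http.SetCookie(w, &http.Cookie{
	Name:     "pinniped-csrf", // assumed cookie name
	Value:    signedCSRFValue, // placeholder for the signed CSRF value
	SameSite: http.SameSiteLaxMode, // Lax permits the cross-site top-level redirect back from the upstream IDP
	Secure:   true,
	HttpOnly: true,
	Path:     "/",
})
```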
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This commit includes a failing test (amongst other compiler failures) for the
dynamic signing key fetcher that we will inject into fosite. We are checking it
in so that we can pass the WIP off.
Signed-off-by: Margo Crawford <margaretc@vmware.com>
We are currently using EC keys to sign ID tokens, so we should reflect that in
our OIDC discovery metadata.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This fixes a regression introduced by 24c4bc0dd4. It could occasionally cause the tests to fail when run on a machine with an IPv6 localhost interface. As a fix I added a wrapper for the new Go 1.15 `LookupIP()` method, and created a partially-functional backport for Go 1.14. This should be easy to delete in the future.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This adds a few new "create test object" helpers and extends `CreateTestOIDCProvider()` to optionally wait for the created OIDCProvider to enter some expected status condition.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This allows the token exchange request to be performed with the correct TLS configuration.
We go to a bit of extra work to make sure the `http.Client` object is cached between reconcile operations so that connection pooling works as expected.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Note that this WIP commit includes a failing unit test, which will
be addressed in the next commit
Signed-off-by: Ryan Richard <richardry@vmware.com>
Mainly, avoid using some `testing` helpers that were added in 1.14, as well as a couple of other niceties we can live without.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
We were assuming that env.SupervisorHTTPAddress was set, but it might not be
depending on the environment on which the integration tests are being run. For
example, in our acceptance environments, we don't currently set
env.SupervisorHTTPAddress.
I tried to follow the pattern from TestSupervisorOIDCDiscovery here.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Generate a new cookie for the user and move on as if they had not sent
a bad cookie. Hopefully this will make the user experience better if,
for example, the server rotated cookie signing keys and then a user
submitted a very old cookie.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Also use ConstantTimeCompare() to compare CSRF tokens to prevent
leaking any information in how quickly we reject bad tokens.
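A sketch of the constant-time check; variable names and the returned error are illustrative:

```go
// imports: "crypto/subtle", "errors"

// ConstantTimeCompare returns 1 only when both slices have the same length and contents,
// and takes time independent of where the first mismatch occurs.
if subtle.ConstantTimeCompare([]byte(csrfFromCookie), []byte(csrfFromState)) != 1 {
	return errors.New("csrf value from cookie does not match csrf value from state")
}
```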
Signed-off-by: Ryan Richard <richardry@vmware.com>
This is much nicer UX for an administrator installing a UpstreamOIDCProvider
CRD. They don't have to guess as hard at what the callback endpoint path should
be for their UpstreamOIDCProvider.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Also aggressively refactor for readability:
- Make helper validations functions for each type of storage
- Try to label symbols based on their downstream/upstream use and group them
accordingly
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Prior to this we re-used the CLI testing client to test the authorize flow of the supervisor, but they really need to be separate upstream clients. For example, the supervisor client should be a non-public client with a client secret and a different callback endpoint.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- Also handle several more error cases
- Move RequireTimeInDelta to shared testutils package so other tests
can also use it
- Move all of the oidc test helpers into a new oidc/oidctestutils
package to break a circular import dependency. The shared testutil
package can't depend on any of our other packages or else we
end up with circular dependencies.
- Lots more assertions about what was stored at the end of the
request to build confidence that we are going to pass all of the
right settings over to the token endpoint through the storage, and
also to avoid accidental regressions in that area in the future
Signed-off-by: Ryan Richard <richardry@vmware.com>
Also refactor to get rid of duplicate test structs.
Also also don't default groups ID token claim because there is no standard one.
Also also also add some logging that will hopefully help us in debugging in the
future.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Because we want it to implement an AuthcodeExchanger interface and
do it in a way that will be more unit test-friendly than the underlying
library that we intend to use inside its implementation.
This will allow it to be imported by Go code outside of our repository, which was something we have planned for since this code was written.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Before, we did this in an init container, which meant if the Dex pod restarted we would have fresh certs, but our Tilt/bash setup didn't account for this.
Now, the certs are generated by a Job which runs once and saves the generated files into a Secret. This should be a bit more stable.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This change deploys a small Squid-based proxy into the `dex` namespace in our integration test environment. This lets us use the cluster-local DNS name (`http://dex.dex.svc.cluster.local/dex`) as the OIDC issuer. It will make generating certificates easier, and most importantly it will mean that our CLI can see Dex at the same name/URL as the supervisor running inside the cluster.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
- To better support having multiple downstream providers configured,
the authorize endpoint will share a CSRF cookie between all
downstream providers' authorize endpoints. The first time a
user's browser hits the authorize endpoint of any downstream
provider, that endpoint will set the cookie. Then if the user
starts an authorize flow with that same downstream provider or with
any other downstream provider which shares the same domain name
(i.e. differentiated by issuer path), then the same cookie will be
submitted and respected.
- Just in case we are sharing the domain name with some other app,
we sign the value of any new CSRF cookie and check the signature
when we receive the cookie. This wasn't strictly necessary since
we probably won't share a domain name with other apps, but it
wasn't hard to add this cookie signing.
Signed-off-by: Ryan Richard <richardry@vmware.com>
We want to have our APIs respond to `kubectl get pinniped`, and we shouldn't use `all` because we don't think most average users should have permission to see our API types, which means if we put our types there, they would get an error from `kubectl get all`.
I also added some tests to assert these properties on all `*.pinniped.dev` API resources.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
This check predates the API renaming we did. Now that our API groups have `concierge`/`supervisor` in the name, we don't need to maintain a specific set of `cp` commands and keep them in sync, so we don't really need this check.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
These only really make sense for aggregated API types where we need `conversion-gen` to do version conversion.
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Our unit tests are gonna touch a lot more corner cases than our
integration tests, so let's make them run as close to the real
implementation as possible.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
This commit is meant to be reverted when we are unblocked and
ready to start working on this integration test again. Temporarily
remove it so we can merge this PR to main.
Note: I had tried using t.Skip() in the test, but then that caused lint
failures, so decided to just remove it for now.
We want to run all of the fosite validations in the authorize
endpoint, but we don't need to store anything yet because
we are storing what we need for later in the upstream state
parameter.
Signed-off-by: Ryan Richard <richardry@vmware.com>
This is helpful for us, amongst other users, because we want to enable "debug"
logging whenever we deploy components for testing.
See a5643e3 for addition of log level.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
- Add a new helper method to plog to make a consistent way to log
expected errors at the info level (as opposed to unexpected
system errors that would be logged using plog.Error)
Signed-off-by: Ryan Richard <richardry@vmware.com>
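A rough sketch of what such an info-level helper for expected errors might look like follows; the name `infoErr` and its signature are hypothetical and only illustrate the distinction from error-level logging, they are not the actual plog API.

```go
package main

import (
    "fmt"
    "log"
)

// infoErr logs an error that is expected during normal operation (for example,
// a user misconfiguration surfaced back to them) at an informational level,
// reserving error-level logging for unexpected system failures.
func infoErr(msg string, err error, keysAndValues ...interface{}) {
    log.Printf("INFO: %s: %v %v", msg, err, keysAndValues)
}

func main() {
    err := fmt.Errorf("issuer URL has a trailing slash")
    infoErr("OIDC provider validation failed", err, "issuer", "https://example.com/issuer/")
}
```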
This is awaiting the new upstream OIDC provider CRD in order
to pass, however hopefully this is a starting point for us.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Also move definition of our oauth client and the general fosite
configuration to a helper so we can use the same config to construct
the handler for both test and production code.
Signed-off-by: Ryan Richard <richardry@vmware.com>
This prevents unnecessary sync loop runs when the controller is
running with a single worker. When the controller is running with
more than one worker, it prevents subtle bugs that can cause the
controller to go "back in time."
Signed-off-by: Monis Khan <mok@vmware.com>
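The "back in time" problem comes from processing a stale event for an object after a newer event has already been handled. A deduplicating workqueue keyed by object avoids queueing the same item twice. A small sketch using client-go's `workqueue` package is below; the key is illustrative.

```go
package main

import (
    "fmt"

    "k8s.io/client-go/util/workqueue"
)

func main() {
    q := workqueue.New()
    defer q.ShutDown()

    // Two events for the same object collapse into one queue entry, so a
    // worker handles the latest state once instead of replaying an older,
    // already-superseded event.
    q.Add("supervisor-config/some-federation-domain")
    q.Add("supervisor-config/some-federation-domain")

    fmt.Println("queue length:", q.Len()) // prints 1, not 2

    key, _ := q.Get()
    fmt.Println("processing:", key)
    q.Done(key)
}
```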
Signed-off-by: Matt Moyer <moyerm@vmware.com>
Previously we were injecting the whole oauth handler chain into this function,
which meant we were essentially writing unit tests to test our tests. Let's push
some of this logic into the source code.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Does not validate incoming request parameters yet. Also is not
served on the http/https ports yet. Those will come in future commits.
Signed-off-by: Andrew Keesler <akeesler@vmware.com>
Below is a list of solutions where Pinniped is being used as a component.
**[Kubeapps](https://kubeapps.com/)**
Kubeapps uses Pinniped to [enable SSO authentication](https://github.com/kubeapps/kubeapps/blob/master/docs/user/using-an-OIDC-provider-with-pinniped.md) when running on clusters where SSO cannot be configured for the cluster API server.
**[VMware Tanzu Kubernetes Grid (TKG)](https://tanzu.vmware.com/kubernetes-grid)**
TKG uses Pinniped to provide a seamless SSO experience across management and workload clusters.
**[VMware Tanzu Mission Control (TMC)](https://tanzu.vmware.com/mission-control)**
TMC uses Pinniped to provide a uniform authentication experience across all attached clusters.
## Adding your organization to the list of adopters
If you are using Pinniped and would like to be included in the list of Pinniped Adopters, add an SVG version of your logo that is less than 150 KB to
the [img directory](https://github.com/vmware-tanzu/pinniped/tree/main/site/themes/pinniped/static/img) in this repo and submit a pull request with your change including 1-2 sentences describing how your organization is using Pinniped. Name the image file something that
reflects your company (e.g., if your company is called Acme, name the image acme.svg). Please feel free to send us a message in [#pinniped](https://kubernetes.slack.com/archives/C01BW364RJA) with any questions you may have.
Please see the [Code of Conduct](./CODE_OF_CONDUCT.md).
## Project Scope
See [SCOPE.md](./SCOPE.md) for some guidelines about what we consider in and out of scope for Pinniped.
## Community Meetings
The maintainers aspire to hold a video conference every other week with the Pinniped community.
Any community member may request to add topics to the agenda by contacting a [maintainer](MAINTAINERS.md)
in advance, or by attending and raising the topic during time remaining after the agenda is covered.
Typical agenda items include topics regarding the roadmap, feature requests, bug reports, pull requests, etc.
A [public document](https://docs.google.com/document/d/1qYA35wZV-6bxcH5375vOnIGkNBo7e4OROgsV4Sj8WjQ)
tracks the agendas and notes for these meetings.
Pinniped is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online community meetings, occurring every first and third Thursday of the month at 9 AM PT / 12 PM PT. Use [this Zoom Link](https://vmware.zoom.us/j/93798188973?pwd=T3pIMWxReEQvcWljNm1admRoZTFSZz09) to attend and add any agenda items you wish to discuss to [the notes document](https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?view). Join our [Google Group](https://groups.google.com/u/1/g/project-pinniped) to receive invites to this meeting.
These meetings are currently scheduled for the first and third Thursday mornings of each month
at 9 AM Pacific Time, using this [Zoom meeting](https://VMware.zoom.us/j/94638309756?pwd=V3NvRXJIdDg5QVc0TUdFM2dYRzgrUT09).
If the meeting day falls on a US holiday, please consider that occurrence of the meeting to be canceled.
## Discussion
Got a question, comment, or idea? Please don't hesitate to reach out via the GitHub [Discussions](https://github.com/vmware-tanzu/pinniped/discussions) tab at the top of this page or reach out in Kubernetes Slack Workspace within the [#pinniped channel](https://kubernetes.slack.com/archives/C01BW364RJA).
To learn more, see [architecture](https://pinniped.dev/docs/background/architecture/).
## Getting started with Pinniped
Care to kick the tires? It's easy to [install and try Pinniped](https://pinniped.dev/docs/).
## Community meetings
Pinniped is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online community meetings, occurring every first and third Thursday of the month at 9 AM PT / 12 PM PT. Use [this Zoom Link](https://vmware.zoom.us/j/93798188973?pwd=T3pIMWxReEQvcWljNm1admRoZTFSZz09) to attend and add any agenda items you wish to discuss to [the notes document](https://hackmd.io/rd_kVJhjQfOvfAWzK8A3tQ?view). Join our [Google Group](https://groups.google.com/g/project-pinniped) to receive invites to this meeting.
If the meeting day falls on a US holiday, please consider that occurrence of the meeting to be canceled.
## Discussion
Got a question, comment, or idea? Please don't hesitate to reach out via the GitHub [Discussions](https://github.com/vmware-tanzu/pinniped/discussions) tab at the top of this page or reach out in Kubernetes Slack Workspace within the [#pinniped channel](https://kubernetes.slack.com/archives/C01BW364RJA).
## Contributions
Contributions are welcome. Before contributing, please see the [contributing guide](CONTRIBUTING.md).
## Reporting security vulnerabilities
Please follow the procedure described in [SECURITY.md](SECURITY.md).
This document provides a link to the [Pinniped Project issues](https://github.com/vmware-tanzu/pinniped/issues) list that serves as the up-to-date description of items that are in the Pinniped release pipeline. Most items are gathered from the community or include a feedback loop with the community. This should serve as a reference point for Pinniped users and contributors to understand where the project is heading, and help determine if a contribution could be conflicting with a longer term plan.
### **How to help?**
Discussion on the roadmap can take place in threads under [Issues](https://github.com/vmware-tanzu/pinniped/issues) or in [community meetings](https://github.com/vmware-tanzu/pinniped/blob/main/CONTRIBUTING.md#meeting-with-the-maintainers). Please open and comment on an issue if you want to provide suggestions and feedback to an item in the roadmap. Please review the roadmap to avoid potential duplicated effort.
### **Need an idea for a contribution?**
We’ve created an [Opportunity Areas](https://github.com/vmware-tanzu/pinniped/discussions/483) discussion thread that outlines some areas we believe are excellent starting points for the community to get involved. In that discussion we’ve included specific work items that one might consider that also support the high-level items presented in our roadmap.
### **How to add an item to the roadmap?**
Please open an issue to track any initiative on the roadmap of Pinniped (usually driven by new feature requests). We will work with and rely on our community to focus our efforts to improve Pinniped.
### **Current Roadmap**
The following table includes the current roadmap for Pinniped. If you have any questions or would like to contribute to Pinniped, please attend a [community meeting](https://github.com/vmware-tanzu/pinniped/blob/main/CONTRIBUTING.md#meeting-with-the-maintainers) to discuss with our team. If you don't know where to start, we are always looking for contributors that will help us reduce technical, automation, and documentation debt. Please take the timelines & dates as proposals and goals. Priorities and requirements change based on community feedback, roadblocks encountered, community contributions, etc. If you depend on a specific item, we encourage you to attend community meetings to get updated status information, or help us deliver that feature by contributing to Pinniped.
|Theme|Description|Timeline|
|---|---|---|
|Improved Documentation|Reorganizing and improving Pinniped docs; new how-to guides and tutorials|May 2021|
|Multiple IDPs|Support for multiple upstream IDPs to be configured simultaneously|Jun 2021|
|Wider Concierge cluster support|Support for more cluster types in the Concierge|Jul 2021|
|Improving Security Posture|Offer the best security posture for Kubernetes cluster authentication|Exploring/Ongoing|
|Improve our CI/CD systems|Upgrade tests; make Kind more efficient and reliable for CI; Windows tests; performance tests; scale tests; soak tests|Exploring/Ongoing|
|CLI Improvements|Improving CLI UX for setting up Supervisor IDPs|Exploring/Ongoing|
|Telemetry|Adding some useful phone home metrics as well as some vanity metrics|Exploring/Ongoing|
|Observability|Expose Pinniped metrics through Prometheus Integration|Exploring/Ongoing|
|Device Code Flow|Add support for OAuth 2.0 Device Authorization Grant in the Pinniped CLI and Supervisor|Exploring/Ongoing|
Pinniped development is sponsored by VMware, and the Pinniped team encourages users
who become aware of a security vulnerability in Pinniped to report any potential
vulnerabilities found to security@vmware.com. If possible, please include a description
of the effects of the vulnerability, reproduction steps, and the version of Pinniped
or its dependencies in which the vulnerability was discovered.
The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
Pinniped provides identity services for Kubernetes clusters. The community has adopted this security disclosure and response policy to ensure we responsibly handle critical issues.
The Pinniped team hopes that users encountering a new vulnerability will contact
us privately as it is in the best interests of our users that the Pinniped team has
an opportunity to investigate and confirm a suspected vulnerability before it becomes public knowledge.
## Supported Versions
As of right now, only the latest version of Pinniped is supported.
## Reporting a Vulnerability - Private Disclosure Process
Security is of the highest importance and all security vulnerabilities or suspected security vulnerabilities should be reported to Pinniped privately, to minimize attacks against current users of Pinniped before they are fixed. Vulnerabilities will be investigated and patched on the next patch (or minor) release as soon as possible. This information could be kept entirely internal to the project.
If you know of a publicly disclosed security vulnerability for Pinniped, please **IMMEDIATELY** contact the VMware Security Team (security@vmware.com). The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
**IMPORTANT: Do not file public issues on GitHub for security vulnerabilities**
To report a vulnerability or a security-related issue, please contact the VMware email address with the details of the vulnerability. The email will be fielded by the VMware Security Team and then shared with the Pinniped maintainers who have committer and release permissions. Emails will be addressed within 3 business days, including a detailed plan to investigate the issue and any potential workarounds to perform in the meantime. Do not report non-security-impacting bugs through this channel. Use [GitHub issues](https://github.com/vmware-tanzu/pinniped/issues/new/choose) instead.
## Proposed Email Content
Provide a descriptive subject line and in the body of the email include the following information:
* Basic identity information, such as your name and your affiliation or company.
* Detailed steps to reproduce the vulnerability (POC scripts, screenshots, and logs are all helpful to us).
* Description of the effects of the vulnerability on Pinniped and the related hardware and software configurations, so that the VMware Security Team can reproduce it.
* How the vulnerability affects Pinniped usage and an estimation of the attack surface, if there is one.
* List other projects or dependencies that were used in conjunction with Pinniped to produce the vulnerability.
## When to report a vulnerability
* When you think Pinniped has a potential security vulnerability.
* When you suspect a potential vulnerability but you are unsure that it impacts Pinniped.
* When you know of or suspect a potential vulnerability on another project that is used by Pinniped.
## Patch, Release, and Disclosure
The VMware Security Team will respond to vulnerability reports as follows:
1. The Security Team will investigate the vulnerability and determine its effects and criticality.
2. If the issue is not deemed to be a vulnerability, the Security Team will follow up with a detailed reason for rejection.
3. The Security Team will initiate a conversation with the reporter within 3 business days.
4. If a vulnerability is acknowledged and the timeline for a fix is determined, the Security Team will work on a plan to communicate with the appropriate community, including identifying mitigating steps that affected users can take to protect themselves until the fix is rolled out.
5. The Security Team will also create a [CVSS](https://www.first.org/cvss/specification-document) score using the [CVSS Calculator](https://www.first.org/cvss/calculator/3.0). The Security Team makes the final call on the calculated CVSS; it is better to move quickly than to make the CVSS perfect. Issues may also be reported to [Mitre](https://cve.mitre.org/) using this [scoring calculator](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The CVE will initially be set to private.
6. The Security Team will work on fixing the vulnerability and perform internal testing before preparing to roll out the fix.
7. The Security Team will provide early disclosure of the vulnerability by emailing the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list. Distributors can initially plan for the vulnerability patch ahead of the fix, and later can test the fix and provide feedback to the Pinniped team. See the section **Early Disclosure to Pinniped Distributors List** for details about how to join this mailing list.
8. A public disclosure date is negotiated by the VMware Security Team, the bug submitter, and the distributors list. We prefer to fully disclose the bug as soon as possible once a user mitigation or patch is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for distributor coordination. The timeframe for disclosure is from immediate (especially if it’s already publicly known) to a few weeks. For a critical vulnerability with a straightforward mitigation, we expect the time from report to public disclosure to be on the order of 14 business days. The VMware Security Team holds the final say when setting a public disclosure date.
9. Once the fix is confirmed, the Security Team will patch the vulnerability in the next patch or minor release, and backport a patch release into all earlier supported releases. Upon release of the patched version of Pinniped, we will follow the **Public Disclosure Process**.
## Public Disclosure Process
The Security Team publishes a [public advisory](https://github.com/vmware-tanzu/pinniped/security/advisories) to the Pinniped community via GitHub. In most cases, additional communication via Slack, Twitter, mailing lists, blog and other channels will assist in educating Pinniped users and rolling out the patched release to affected users.
The Security Team will also publish any mitigating steps users can take until the fix can be applied to their Pinniped instances. Pinniped distributors will handle creating and publishing their own security advisories.
## Mailing lists
* Use security@vmware.com to report security concerns to the VMware Security Team, who uses the list to privately discuss security issues and fixes prior to disclosure. The use of encrypted email is encouraged. The public PGP key can be found at https://kb.vmware.com/kb/1055.
* Join the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list for early private information and vulnerability disclosure. Early disclosure may include mitigating steps and additional information on security patch releases. See below for information on how Pinniped distributors or vendors can apply to join this list.
## Early Disclosure to Pinniped Distributors List
The private list is intended to be used primarily to provide actionable information to multiple distributor projects at once. This list is not intended to inform individuals about security issues.
## Membership Criteria
To be eligible to join the [Pinniped Distributors](https://groups.google.com/g/project-pinniped-distributors) mailing list, you should:
1. Be an active distributor of Pinniped.
2. Have a user base that is not limited to your own organization.
3. Have a publicly verifiable track record up to the present day of fixing security issues.
4. Not be a downstream or rebuild of another distributor.
5. Be a participant and active contributor in the Pinniped community.
6. Accept the Embargo Policy that is outlined below.
7. Have someone who is already on the list vouch for the person requesting membership on behalf of your distribution.
**The terms and conditions of the Embargo Policy apply to all members of this mailing list. A request for membership represents your acceptance to the terms and conditions of the Embargo Policy.**
## Embargo Policy
The information that members receive on the Pinniped Distributors mailing list must not be made public, shared, or even hinted at anywhere beyond those who need to know within your specific team, unless you receive explicit approval to do so from the VMware Security Team. This remains true until the public disclosure date/time agreed upon by the list. Members of the list and others cannot use the information for any reason other than to get the issue fixed for your respective distribution's users.
Before you share any information from the list with members of your team who are required to fix the issue, these team members must agree to the same terms, and only be provided with information on a need-to-know basis.
In the unfortunate event that you share information beyond what is permitted by this policy, you must urgently inform the VMware Security Team (security@vmware.com) of exactly what information was leaked and to whom. If you continue to leak information and break the policy outlined here, you will be permanently removed from the list.
## Requesting to Join
Send new membership requests to https://groups.google.com/g/project-pinniped-distributors. In the body of your request please specify how you qualify for membership and fulfill each criterion listed in the Membership Criteria section above.
## Confidentiality, integrity and availability
We consider vulnerabilities leading to the compromise of data confidentiality, elevation of privilege, or integrity to be our highest priority concerns. Availability, in particular in areas relating to DoS and resource exhaustion, is also a serious security concern. The VMware Security Team takes all vulnerabilities, potential vulnerabilities, and suspected vulnerabilities seriously and will investigate them in an urgent and expeditious manner.
_, err := warningsWriter.Write([]byte("WARNING: Server and certificate authority did not match between local kubeconfig and Pinniped's CredentialIssuer on the cluster. Using local kubeconfig values.\n"))
if err != nil {
    return fmt.Errorf("output write error: %w", err)
}
}
return nil
}
type noAuthenticatorError struct{ Namespace string }
func (e noAuthenticatorError) Error() string {
    return fmt.Sprintf(`no authenticators were found in namespace %q`, e.Namespace)
wantStderr: `WARNING: Server and certificate authority did not match between local kubeconfig and Pinniped's CredentialIssuer on the cluster. Using local kubeconfig values.`,
Short:"Generate a Pinniped-based kubeconfig for a cluster",
SilenceUsage:true,
}
flagsgetKubeconfigParams
namespacestring// unused now
)
f:=cmd.Flags()
f.StringVar(&flags.staticToken,"static-token","","Instead of doing an OIDC-based login, specify a static token")
f.StringVar(&flags.staticTokenEnvName,"static-token-env","","Instead of doing an OIDC-based login, read a static token from the environment")
f.BoolVar(&flags.concierge.disabled,"no-concierge",false,"Generate a configuration which does not use the Concierge, but sends the credential to the cluster directly")
f.StringVar(&namespace,"concierge-namespace","pinniped-concierge","Namespace in which the Concierge was installed")
f.StringVar(&flags.concierge.credentialIssuer,"concierge-credential-issuer","","Concierge CredentialIssuer object to use for autodiscovery (default: autodiscover)")
f.StringVar(&flags.concierge.authenticatorType,"concierge-authenticator-type","","Concierge authenticator type (e.g., 'webhook', 'jwt') (default: autodiscover)")
f.StringVar(&flags.concierge.authenticatorName,"concierge-authenticator-name","","Concierge authenticator name (default: autodiscover)")
f.StringVar(&flags.concierge.apiGroupSuffix,"concierge-api-group-suffix",groupsuffix.PinnipedDefaultSuffix,"Concierge API group suffix")
f.BoolVar(&flags.concierge.skipWait,"concierge-skip-wait",false,"Skip waiting for any pending Concierge strategies to become ready (default: false)")
f.Var(&flags.concierge.caBundle,"concierge-ca-bundle","Path to TLS certificate authority bundle (PEM format, optional, can be repeated) to use when connecting to the Concierge")
f.StringVar(&flags.concierge.endpoint,"concierge-endpoint","","API base for the Concierge endpoint")
f.Var(&flags.concierge.mode,"concierge-mode","Concierge mode of operation")
f.StringVar(&flags.oidc.clientID,"oidc-client-id","pinniped-cli","OpenID Connect client ID (default: autodiscover)")
f.Uint16Var(&flags.oidc.listenPort,"oidc-listen-port",0,"TCP port for localhost listener (authorization code flow only)")
f.StringSliceVar(&flags.oidc.scopes,"oidc-scopes",[]string{oidc.ScopeOfflineAccess,oidc.ScopeOpenID,"pinniped:request-audience"},"OpenID Connect scopes to request during login")
f.BoolVar(&flags.oidc.skipBrowser,"oidc-skip-browser",false,"During OpenID Connect login, skip opening the browser (just print the URL)")
f.StringVar(&flags.oidc.sessionCachePath,"oidc-session-cache","","Path to OpenID Connect session cache file")
f.Var(&flags.oidc.caBundle,"oidc-ca-bundle","Path to TLS certificate authority bundle (PEM format, optional, can be repeated)")
f.BoolVar(&flags.oidc.debugSessionCache,"oidc-debug-session-cache",false,"Print debug logs related to the OpenID Connect session cache")
f.StringVar(&flags.oidc.requestAudience,"oidc-request-audience","","Request a token with an alternate audience using RFC8693 token exchange")
f.StringVar(&flags.kubeconfigPath,"kubeconfig",os.Getenv("KUBECONFIG"),"Path to kubeconfig file")
f.StringVar(&flags.kubeconfigContextOverride,"kubeconfig-context","","Kubeconfig context name (default: current active context)")
f.BoolVar(&flags.skipValidate,"skip-validation",false,"Skip final validation of the kubeconfig (default: false)")
f.DurationVar(&flags.timeout,"timeout",10*time.Minute,"Timeout for autodiscovery and validation")
returnnil,fmt.Errorf("multiple authenticators were found, so the --concierge-authenticator-type/--concierge-authenticator-name flags must be specified")
cmd.Flags().Uint16Var(&flags.listenPort, "listen-port", 0, "TCP port for localhost listener (authorization code flow only)")
cmd.Flags().StringSliceVar(&flags.scopes, "scopes", []string{oidc.ScopeOfflineAccess, oidc.ScopeOpenID, "pinniped:request-audience"}, "OIDC scopes to request during login")
cmd.Flags().BoolVar(&flags.skipBrowser, "skip-browser", false, "Skip opening the browser (just print the URL)")
cmd.Flags().StringVar(&flags.sessionCachePath, "session-cache", filepath.Join(mustGetConfigDir(), "sessions.yaml"), "Path to session cache file")
cmd.Flags().StringSliceVar(&flags.caBundlePaths, "ca-bundle", nil, "Path to TLS certificate authority bundle (PEM format, optional, can be repeated)")
cmd.Flags().StringSliceVar(&flags.caBundleData, "ca-bundle-data", nil, "Base64 encoded TLS certificate authority bundle (base64 encoded PEM format, optional, can be repeated)")
cmd.Flags().BoolVar(&flags.debugSessionCache, "debug-session-cache", false, "Print debug logs related to the session cache")
cmd.Flags().StringVar(&flags.requestAudience, "request-audience", "", "Request a token with an alternate audience using RFC8693 token exchange")
cmd.Flags().BoolVar(&flags.conciergeEnabled, "enable-concierge", false, "Use the Concierge to login")
cmd.Flags().StringVar(&conciergeNamespace, "concierge-namespace", "pinniped-concierge", "Namespace in which the Concierge was installed")
cmd.Flags().StringVar(&flags.conciergeAuthenticatorType, "concierge-authenticator-type", "", "Concierge authenticator type (e.g., 'webhook', 'jwt')")
cmd.Flags().StringVar(&flags.conciergeEndpoint, "concierge-endpoint", "", "API base for the Concierge endpoint")
cmd.Flags().StringVar(&flags.conciergeCABundle, "concierge-ca-bundle-data", "", "CA bundle to use when connecting to the Concierge")
cmd.Flags().StringVar(&flags.conciergeAPIGroupSuffix, "concierge-api-group-suffix", groupsuffix.PinnipedDefaultSuffix, "Concierge API group suffix")
cmd.Flags().StringVar(&flags.credentialCachePath, "credential-cache", filepath.Join(mustGetConfigDir(), "credentials.yaml"), "Path to cluster-specific credentials cache (\"\" disables the cache)")
Error: invalid Concierge parameters: invalid API group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
`),
},
{
    name: "login error",
    args: []string{
        "--client-id", "test-client-id",
        "--issuer", "test-issuer",
    },
    loginErr:         fmt.Errorf("some login error"),
    wantOptionsCount: 4,
    wantError:        true,
    wantStderr: here.Doc(`
        Error: could not complete Pinniped login: some login error
cmd.Flags().StringVar(&flags.conciergeEndpoint, "concierge-endpoint", "", "API base for the Concierge endpoint")
cmd.Flags().StringVar(&flags.conciergeCABundle, "concierge-ca-bundle-data", "", "CA bundle to use when connecting to the Concierge")
cmd.Flags().StringVar(&flags.conciergeAPIGroupSuffix, "concierge-api-group-suffix", groupsuffix.PinnipedDefaultSuffix, "Concierge API group suffix")
cmd.Flags().StringVar(&flags.credentialCachePath, "credential-cache", filepath.Join(mustGetConfigDir(), "credentials.yaml"), "Path to cluster-specific credentials cache (\"\" disables the cache)")
Error: invalid Concierge parameters: invalid API group suffix: a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
wantStderr: "Error: could not complete WhoAmIRequest (is the Pinniped WhoAmI API running and healthy?): whoamirequests.identity.concierge.pinniped.dev \"whatever\" not found\n",
description:"JWTAuthenticator describes the configuration of a JWT authenticator.
\n Upon receiving a signed JWT, a JWTAuthenticator will performs some validation
on it (e.g., valid signature, existence of claims, etc.) and extract the
username and groups from the token."
properties:
  apiVersion:
    description: 'APIVersion defines the versioned schema of this representation
      of an object. Servers should convert recognized schemas to the latest
      internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
    type: string
  kind:
    description: 'Kind is a string value representing the REST resource this
      object represents. Servers may infer this from the endpoint the client
      submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
    type: string
  metadata:
    type: object
  spec:
    description: Spec for configuring the authenticator.
    properties:
      audience:
        description: Audience is the required value of the "aud" JWT claim.
        minLength: 1
        type: string
      claims:
        description: Claims allows customization of the claims that will be
          mapped to user identity for Kubernetes access.
        properties:
          groups:
            description: Groups is the name of the claim which should be read
              to extract the user's group membership from the JWT token. When
              not specified, it will default to "groups".
            type: string
          username:
            description: Username is the name of the claim which should be
              read to extract the username from the JWT token. When not specified,
              it will default to "username".
            type: string
        type: object
      issuer:
        description: Issuer is the OIDC issuer URL that will be used to discover
          public signing keys. Issuer is also used to validate the "iss" JWT
          claim.
        minLength: 1
        pattern: ^https://
        type: string
      tls:
        description: TLS configuration for communicating with the OIDC provider.
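The JWTAuthenticator schema above boils down to three checks: discover the issuer's signing keys, require the configured audience, and read the configured username/groups claims. A hedged sketch of those steps using the coreos/go-oidc library follows; the issuer URL, audience, claim names, and token value are example values, and this is not Pinniped's actual authenticator code.

```go
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/coreos/go-oidc/v3/oidc"
)

func main() {
    ctx := context.Background()

    // Discover the issuer's public signing keys via
    // <issuer>/.well-known/openid-configuration.
    provider, err := oidc.NewProvider(ctx, "https://issuer.example.com")
    if err != nil {
        log.Fatal(err)
    }

    // Require the configured audience to appear in the token's "aud" claim.
    verifier := provider.Verifier(&oidc.Config{ClientID: "my-cluster-audience"})

    rawJWT := "eyJ..." // placeholder for a signed JWT presented by a client
    idToken, err := verifier.Verify(ctx, rawJWT)
    if err != nil {
        log.Fatal(err)
    }

    // Read the configured username and groups claims (the defaults are shown).
    var claims struct {
        Username string   `json:"username"`
        Groups   []string `json:"groups"`
    }
    if err := idToken.Claims(&claims); err != nil {
        log.Fatal(err)
    }
    fmt.Println(claims.Username, claims.Groups)
}
```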
image_digest: #! e.g. sha256:f3c4fdfd3ef865d4b97a1fd295d94acc3f0c654c46b6f27ffad5cf80216903c8
image_tag: latest
#! Typically the value would be the output of: kubectl create secret docker-registry x --docker-server=https://example.io --docker-username="USERNAME" --docker-password="PASSWORD" --dry-run=client -o json | jq -r '.data[".dockerconfigjson"]'
#! Optional.
image_pull_dockerconfigjson: #! e.g. {"auths":{"https://registry.example.com":{"username":"USERNAME","password":"PASSWORD","auth":"BASE64_ENCODED_USERNAME_COLON_PASSWORD"}}}
run_as_user: 1001 #! run_as_user specifies the user ID that will own the process
run_as_group: 1001 #! run_as_group specifies the group ID that will own the process
description: OIDCIdentityProvider describes the configuration of an upstream
  OpenID Connect identity provider.
properties:
  apiVersion:
    description: 'APIVersion defines the versioned schema of this representation
      of an object. Servers should convert recognized schemas to the latest
      internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
    type: string
  kind:
    description: 'Kind is a string value representing the REST resource this
      object represents. Servers may infer this from the endpoint the client
      submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
    type: string
  metadata:
    type: object
  spec:
    description: Spec for configuring the identity provider.
    properties:
      authorizationConfig:
        description: AuthorizationConfig holds information about how to form
          the OAuth2 authorization request parameters to be used with this
          OIDC identity provider.
        properties:
          additionalScopes:
            description: AdditionalScopes are the scopes in addition to "openid"
              that will be requested as part of the authorization request
              flow with an OIDC identity provider. By default only the "openid"
              scope will be requested.
            items:
              type: string
            type: array
        type: object
      claims:
        description: Claims provides the names of token claims that will be
          used when inspecting an identity from this OIDC identity provider.
        properties:
          groups:
            description: Groups provides the name of the token claim that
              will be used to ascertain the groups to which an identity belongs.
            type: string
          username:
            description: Username provides the name of the token claim that
              will be used to ascertain an identity's username.
            type: string
        type: object
      client:
        description: OIDCClient contains OIDC client information to be used
          with this OIDC identity provider.
        properties:
          secretName:
            description: SecretName contains the name of a namespace-local
              Secret object that provides the clientID and clientSecret for
              an OIDC client. If only the SecretName is specified in an OIDCClient
              struct, then it is expected that the Secret is of type "secrets.pinniped.dev/oidc-client"
              with keys "clientID" and "clientSecret".
            type: string
        required:
        - secretName
        type: object
      issuer:
        description: Issuer is the issuer URL of this OIDC identity provider,
          i.e., where to fetch /.well-known/openid-configuration.
        minLength: 1
        pattern: ^https://
        type: string
      tls:
        description: TLS configuration for discovery/JWKS requests to the
If you are using macOS, you may get an error dialog that says
`“pinniped” cannot be opened because the developer cannot be verified`. Cancel this dialog, open System Preferences,
click on Security & Privacy, and click the Allow Anyway button next to the Pinniped message.
Run the above command again and another dialog will appear saying
`macOS cannot verify the developer of “pinniped”. Are you sure you want to open it?`.
Click Open to allow the command to proceed.
Note that the above command will print a warning to the screen. You can ignore this warning.
Pinniped tries to auto-discover the URL for the Kubernetes API server, but it is not able
to do so on kind clusters. The warning is just letting you know that the Pinniped CLI decided
to ignore the auto-discovery URL and instead use the URL from your existing kubeconfig.
1. Try using the generated kubeconfig to issue arbitrary `kubectl` commands as
the `pinny-the-seal` user.
```bash
kubectl --kubeconfig /tmp/pinniped-kubeconfig get pods -n pinniped
```
Because this user has no RBAC permissions on this cluster, the previous command
results in the error `Error from server (Forbidden): pods is forbidden: User "pinny-the-seal" cannot list resource "pods" in API group "" in the namespace "pinniped"`.
However, this does prove that you are authenticated and acting as the `pinny-the-seal` user.
1. As the admin user, create RBAC rules for the test user to give them permissions to perform actions on the cluster.
For example, grant the test user permission to view all cluster resources.
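The demo itself would typically do this with a one-line `kubectl create clusterrolebinding` command. As a hedged illustration, the same grant via client-go in Go is sketched below; the binding name, user name, and kubeconfig path are examples and not part of the original demo.

```go
package main

import (
    "context"
    "log"

    rbacv1 "k8s.io/api/rbac/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load an admin kubeconfig (the path is illustrative).
    config, err := clientcmd.BuildConfigFromFlags("", "/tmp/admin-kubeconfig")
    if err != nil {
        log.Fatal(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // Bind the built-in "view" ClusterRole to the test user; roughly equivalent to:
    //   kubectl create clusterrolebinding pinny-can-view --clusterrole=view --user=pinny-the-seal
    binding := &rbacv1.ClusterRoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "pinny-can-view"},
        Subjects: []rbacv1.Subject{
            {Kind: rbacv1.UserKind, APIGroup: rbacv1.GroupName, Name: "pinny-the-seal"},
        },
        RoleRef: rbacv1.RoleRef{APIGroup: rbacv1.GroupName, Kind: "ClusterRole", Name: "view"},
    }
    if _, err := client.RbacV1().ClusterRoleBindings().Create(context.Background(), binding, metav1.CreateOptions{}); err != nil {
        log.Fatal(err)
    }
    log.Println("created ClusterRoleBinding pinny-can-view")
}
```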
The Pinniped project is guided by the following principles.
* Pinniped lets you plug any external identity providers into
Kubernetes. These integrations follow enterprise-grade security principles.
* Pinniped is easy to install and use on any Kubernetes cluster via
distribution-specific integration mechanisms.
* Pinniped uses a declarative configuration via Kubernetes APIs.
* Pinniped provides optimal user experience when authenticating to many
clusters at one time.
* Pinniped provides enterprise-grade security posture via secure defaults and
revocable or very short-lived credentials.
* Where possible, Pinniped will contribute ideas and code to upstream
Kubernetes.
When contributing to Pinniped, please consider whether your contribution follows
these guiding principles.
## Out Of Scope
The following items are out of scope for the Pinniped project.
* Authorization.
* Standalone identity provider for general use.
* Machine-to-machine (service) identity.
* Running outside of Kubernetes.
## Roadmap
More details coming soon!
For more details on proposing features and bugs, check out our
[contributing](../CONTRIBUTING.md) doc.