[BUG] Can't change default admin. #867

Closed
mvtab opened this issue Aug 1, 2024 · 4 comments
Labels
bug Something isn't working

Comments


mvtab commented Aug 1, 2024

Bug description

The default admin password cannot be changed.
Related to #409

Reproduction steps

I have an Ansible setup and would like to provision an OpenSearch cluster with custom credentials. These are the steps I followed:

  1. Choose a username and password,
  2. Encode the username and password with echo -n <value> | base64 and create a secret with the values,
  3. Create a hash with python -c 'import bcrypt; print(bcrypt.hashpw("<password>".encode("utf-8"), bcrypt.gensalt(12, prefix=b"2a")).decode("utf-8"))' and put it in the example securityconfig,
  4. Set the secrets in the OpenSearch cluster YAML (shell sketch of steps 1-3 below).
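
For reference, steps 1-3 as a shell sketch (values are placeholders; note the -n, since a plain echo would bake a trailing newline into the base64-encoded values):

# Steps 1-2: choose the credentials and base64-encode them for the secret.
OPENSEARCH_USERNAME='ltb-admin'
OPENSEARCH_PASSWORD='<password>'
echo -n "$OPENSEARCH_USERNAME" | base64   # -> data.username below
echo -n "$OPENSEARCH_PASSWORD" | base64   # -> data.password below

# Step 3: bcrypt hash for internal_users.yml (needs the python bcrypt package;
# assumes the password contains no single quotes).
python -c "import bcrypt; print(bcrypt.hashpw('$OPENSEARCH_PASSWORD'.encode('utf-8'), bcrypt.gensalt(12, prefix=b'2a')).decode('utf-8'))"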

All together in a file:

---
apiVersion: v1
kind: Secret
metadata:
  name: admin-credentials-secret
  namespace: logging
type: Opaque
data:
  username: {{ opensearch_b64_username }}
  password: {{ opensearch_b64_password }}
...
---
apiVersion: v1
kind: Secret
metadata:
  name: securityconfig-secret
  namespace: logging
type: Opaque
stringData:
  action_groups.yml: |-
     _meta:
       type: "actiongroups"
       config_version: 2
  internal_users.yml: |-
    _meta:
      type: "internalusers"
      config_version: 2
    {{ opensearch_username }}:
      hash: "{{ opensearch_password_hash }}"
      reserved: true
      backend_roles:
      - "admin"
      description: "Main admin user."
    dashboarduser:
      hash: "$2a$12$4AcgAt3xwOWadA5s5blL6ev39OXDNhmOesEoo33eZtrq2N0YrU3H."
      reserved: true
      description: "Dashboards user."
  nodes_dn.yml: |-
# 1:1 copy paste from example securityconfig continues here.
...
---
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: opensearch-fluentd
  namespace: logging
spec:
  security:
    config:
      adminCredentialsSecret:
        name: admin-credentials-secret
      securityConfigSecret:
        name: securityconfig-secret
    tls:
      http:
        generate: True
      transport:
        generate: True
        perNode: True
  general:
    httpPort: 9200
    serviceName: opensearch-fluentd
    version: {{ opensearch_version }}
    pluginsList: []
    vendor: opensearch
  dashboards:
    tls:
      enable: False
    version: {{ opensearch_version }}
    enable: True
    opensearchCredentialsSecret:
      name: admin-credentials-secret
    replicas: 1
    resources:
      requests:
        memory: "512Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "200m"
  nodePools:
  - component: masters
    replicas: 3
    resources:
      requests:
        memory: "4Gi"
        cpu: "1000m"
      limits:
        memory: "4Gi"
        cpu: "1000m"
    roles:
    - "data"
    - "cluster_manager"
    persistence:
      pvc:
        accessModes:
        - ReadWriteOnce
...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opensearch-fluentd
  namespace: logging
  annotations:
    "cert-manager.io/cluster-issuer": "kube-ca"
    # ... more annotations
spec:
  tls:
  - hosts:
    - opensearch-fluentd.kube.local
    secretName: opensearch-fluentd-ui-tls
  rules:
  - host: opensearch-fluentd.kube.local
    http:
      paths:
      - path: "/(.*)"
        pathType: ImplementationSpecific
        backend:
          service:
            name: opensearch-fluentd-dashboards
            port:
              number: 5601
...

Expected behavior

I would expect a working cluster to be bootstrapped with the new admin credentials.

Actual behavior

The cluster does not bootstrap at all, showing this error on all OpenSearch nodes:

[2024-08-01T08:02:36,639][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:60356
[2024-08-01T08:03:06,625][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:53988
[2024-08-01T08:03:23,532][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:46680
[2024-08-01T08:03:36,627][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:59044
[2024-08-01T08:04:06,640][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:56384
[2024-08-01T08:04:26,150][INFO ][o.o.j.s.JobSweeper       ] [opensearch-fluentd-masters-0] Running full sweep
[2024-08-01T08:04:35,531][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:32984
[2024-08-01T08:04:36,640][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from [::1]:33000

opensearch-fluentd-securityconfig-update logs:

Waiting to connect to the cluster
OpenSearch Security not initialized.
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to opensearch-fluentd.logging.svc.cluster.local:9200 ... done
Connected as "CN=admin,OU=opensearch-fluentd"
OpenSearch Version: 2.15.0
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Cannot retrieve cluster state due to: 30,000 milliseconds timeout on connection http-outgoing-53 [ACTIVE]. This is not an error, will keep on trying ...
  Root cause: java.net.SocketTimeoutException: 30,000 milliseconds timeout on connection http-outgoing-53 [ACTIVE] (java.net.SocketTimeoutException/java.net.SocketTimeoutException)
   * Try running securityadmin.sh with -icl (but no -cl) and -nhnv (If that works you need to check your clustername as well as hostnames in your TLS certificates)
   * Make sure that your keystore or PEM certificate is a client certificate (not a node certificate) and configured properly in opensearch.yml
   * If this is not working, try running securityadmin.sh with --diagnose and see diagnose trace log file)
   * Add --accept-red-cluster to allow securityadmin to operate on a red cluster.
...
# same error repeats many times

Last logs on the bootstrap node:

[2024-08-01T08:15:39,633][ERROR][o.o.s.c.ConfigurationLoaderSecurity7] [opensearch-fluentd-bootstrap-0] Failure no such index [.opendistro_security] retrieving configuration for [ACTIONGROUPS, ALLOWLIST, AUDIT, CONFIG, INTERNALUSERS, NODESDN, ROLES, ROLESMAPPING, TENANTS, WHITELIST] (index=.opendistro_security)
# same error repeats many times

Environment

Kubernetes operating system: opensuse-leap-15.6
Container environment:

  • cni_plugins 1.5.1
  • containerd 1.7.20
  • runc 1.1.13

Kubernetes version: 1.30.3
OpenSearch version: 2.15.0

@mvtab added the bug and untriaged labels Aug 1, 2024

mvtab commented Aug 1, 2024

This was my fault: researching further, I found that I hadn't deleted the PVCs from the initial cluster.
After deleting the PVCs, it's working.
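
For anyone hitting the same thing, a cleanup sketch (namespace follows the manifests above; the PVC names are assumptions, check kubectl get pvc for the real ones):

# Find the PVCs left over from the previous cluster, then delete them so the
# security index gets re-bootstrapped with the new internal_users.yml.
kubectl -n logging get pvc
kubectl -n logging delete pvc data-opensearch-fluentd-masters-0 \
  data-opensearch-fluentd-masters-1 data-opensearch-fluentd-masters-2   # names are assumptions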

@mvtab closed this as completed Aug 1, 2024

mvtab commented Aug 1, 2024

The masters now bootstrap, but the dashboards pod won't:

Dashboard pod logs:

{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"dataSourceManagement\" has been disabled since the following direct or transitive dependencies are missing or disabled: [dataSource]"}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"applicationConfig\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"cspHandler\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"dataSource\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"visTypeXy\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"workspace\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpu.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuPath\""}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpuacct.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuAcctPath\""}
[agentkeepalive:deprecated] options.freeSocketKeepAliveTimeout is deprecated, please use options.freeSocketTimeout instead
{"type":"log","@timestamp":"2024-08-01T09:05:29Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [52] plugins: [usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,legacyExport,embeddable,expressions,data,securityAnalyticsDashboards,savedObjects,home,apmOss,reportsDashboards,searchRelevanceDashboards,dashboard,mlCommonsDashboards,assistantDashboards,visualizations,visTypeVega,visTypeTimeline,visTypeTable,visTypeMarkdown,visBuilder,visAugmenter,anomalyDetectionDashboards,alertingDashboards,tileMap,regionMap,customImportMapDashboards,inputControlVis,ganttChartDashboards,visualize,indexManagementDashboards,notificationsDashboards,management,indexPatternManagement,advancedSettings,console,dataExplorer,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement,securityDashboards,observabilityDashboards,queryWorkbenchDashboards,bfetch]"}
# agentkeepalive deprecation warning repeats many more times
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from OpenSearch nodes."}
{"type":"log","@timestamp":"2024-08-01T09:05:33Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:36Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:38Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:41Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:43Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:46Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:48Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:51Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:53Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:56Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:58Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:01Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:03Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:06Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:08Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:11Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}

Node log:

[2024-08-01T09:27:26,272][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:39630
[2024-08-01T09:27:36,899][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:43884
[2024-08-01T09:27:37,464][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:43890

Namespace overview:

NAME                                                 READY   STATUS      RESTARTS       AGE
pod/opensearch-controller-manager-76d984bff-bb5vc    2/2     Running     0              96m
pod/opensearch-fluentd-dashboards-788d986f54-dzrwm   0/1     Running     2 (103s ago)   8m23s
pod/opensearch-fluentd-masters-0                     1/1     Running     0              8m24s
pod/opensearch-fluentd-masters-1                     1/1     Running     0              5m50s
pod/opensearch-fluentd-masters-2                     1/1     Running     0              4m15s
pod/opensearch-fluentd-securityconfig-update-nj295   0/1     Completed   0              8m24s

I tried giving the same credentials as the admin user:

dashboards:
  opensearchCredentialsSecret:
    name: admin-credentials-secret

I tried creating new credentials and using those instead; I also tried giving the dashboards no special credentials at all. It's simply not working.

EDIT:
Reading the documentation, I see it mentions "By default Dashboards is configured to use the demo admin user." Where? How? And why is there a dashboarduser in the security config with the password kibanaserver?
Could the documentation be clearer on this subject?
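
A quick sanity check from inside a master pod (a sketch) shows which credentials the cluster actually accepts; ltb-admin is the custom admin from above, and dashboarduser/kibanaserver is the demo Dashboards service user:

# Custom admin user:
kubectl -n logging exec opensearch-fluentd-masters-0 -- \
  curl -sk -u 'ltb-admin:<password>' https://localhost:9200/_cluster/health
# Demo dashboards service user from the example securityconfig:
kubectl -n logging exec opensearch-fluentd-masters-0 -- \
  curl -sk -u 'dashboarduser:kibanaserver' https://localhost:9200/_cluster/health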

UPDATE:
I completely removed the dashboards and the nodes themselves work; I can query them, but the operator can't:

Operator logs:

{"level":"info","ts":"2024-08-01T10:21:05.777Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","interface":"transport"}
{"level":"info","ts":"2024-08-01T10:21:05.779Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","interface":"http"}
{"level":"error","ts":"2024-08-01T10:21:06.784Z","msg":"Failed to get OpenSearch health status","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","error":"get error cluster health failed: [401 Unauthorized] Unauthorized","stacktrace":"github.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers/util.GetClusterHealth\n\t/workspace/pkg/reconcilers/util/util.go:298\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).UpdateClusterStatus\n\t/workspace/pkg/reconcilers/cluster.go:479\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).Reconcile\n\t/workspace/pkg/reconcilers/cluster.go:128\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).reconcilePhaseRunning\n\t/workspace/controllers/opensearchController.go:328\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).Reconcile\n\t/workspace/controllers/opensearchController.go:143\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:226"}
{"level":"info","ts":"2024-08-01T10:21:16.789Z","msg":"Reconciling OpenSearchCluster","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","cluster":{"name":"opensearch-fluentd","namespace":"logging"}}
{"level":"info","ts":"2024-08-01T10:21:16.802Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","interface":"transport"}
{"level":"info","ts":"2024-08-01T10:21:16.803Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","interface":"http"}
{"level":"error","ts":"2024-08-01T10:21:17.766Z","msg":"Failed to get OpenSearch health status","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","error":"get error cluster health failed: [401 Unauthorized] Unauthorized","stacktrace":"github.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers/util.GetClusterHealth\n\t/workspace/pkg/reconcilers/util/util.go:298\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).UpdateClusterStatus\n\t/workspace/pkg/reconcilers/cluster.go:479\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).Reconcile\n\t/workspace/pkg/reconcilers/cluster.go:128\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).reconcilePhaseRunning\n\t/workspace/controllers/opensearchController.go:328\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).Reconcile\n\t/workspace/controllers/opensearchController.go:143\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:226"}

Node logs:

[2024-08-01T10:21:17,501][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:48316
[2024-08-01T10:21:28,140][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:33674
[2024-08-01T10:21:28,686][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:33688
[2024-08-01T10:21:39,366][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:57430
[2024-08-01T10:21:39,685][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:57440
[2024-08-01T10:21:47,713][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:47386
[2024-08-01T10:21:47,999][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:47392
[2024-08-01T10:21:48,366][WARN ][o.o.s.a.BackendRegistry  ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin

Curl test:

[opensearch@opensearch-fluentd-masters-0 ~]$ curl https://localhost:9200 -k -u ltb-admin:<pass>
{
  "name" : "opensearch-fluentd-masters-0",
  "cluster_name" : "opensearch-fluentd",
  "cluster_uuid" : "48TVUw9LSqOxlxUqCD2SAg",
  "version" : {
    "distribution" : "opensearch",
    "number" : "2.15.0",
    "build_type" : "tar",
    "build_hash" : "61dbcd0795c9bfe9b81e5762175414bc38bbcadf",
    "build_date" : "2024-06-20T03:26:49.193630411Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}

@mvtab reopened this Aug 1, 2024
@gaiksaya removed the untriaged label Aug 15, 2024

digitalray commented Aug 16, 2024

Hey @mvtab. I was able to change the default admin fine. I did not use Python libs to generate the hash, though. Here is the script I used to generate it, executed on Ubuntu 22.04:

opensearch_pass=$(openssl rand -base64 24)   # random password
echo "$opensearch_pass"
# htpasswd -B emits a bcrypt hash; grep extracts just the $2y$... hash string
htpasswd -bnBC 8 "" "$opensearch_pass" | grep -oP '\$2[ayb]\$.{56}'
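
I created the secrets via Helm (diff shown below), but as a plain-kubectl sketch the equivalent would be (file names match the securityconfig secret that follows; the username is an example):

kubectl -n logging create secret generic admin-credentials-secret \
  --from-literal=username=admin \
  --from-literal=password="$opensearch_pass"

kubectl -n logging create secret generic securityconfig-secret \
  --from-file=action_groups.yml --from-file=config.yml \
  --from-file=internal_users.yml --from-file=nodes_dn.yml \
  --from-file=roles.yml --from-file=roles_mapping.yml \
  --from-file=tenants.yml --from-file=whitelist.yml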

Here is my OpenSearchCluster CRD:

apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  annotations:
    meta.helm.sh/release-name: opensearch-cluster
    meta.helm.sh/release-namespace: logging
  creationTimestamp: "2024-08-16T21:31:30Z"
  finalizers:
  - Opster
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: opensearch-cluster
  namespace: logging
  resourceVersion: "144312069"
  uid: 5da5873b-9705-4ff0-8a48-bb05cb914edb
spec:
  bootstrap:
    resources: {}
  confMgmt: {}
  dashboards:
    enable: true
    opensearchCredentialsSecret:
      name: admin-credentials-secret
    replicas: 1
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
    service:
      type: ClusterIP
    version: 2.3.0
  general:
    drainDataNodes: true
    httpPort: 9200
    monitoring: {}
    pluginsList:
    - repository-s3
    serviceName: opensearch-cluster
    setVMMaxMapCount: true
    vendor: opensearch
    version: 2.3.0
  initHelper:
    resources: {}
  nodePools:
  - component: masters
    diskSize: 30Gi
    replicas: 3
    resources:
      limits:
        cpu: 500m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 2Gi
    roles:
    - master
    - data
  security:
    config:
      adminCredentialsSecret:
        name: admin-credentials-secret
      adminSecret: {}
      securityConfigSecret:
        name: securityconfig-secret
      updateJob:
        resources: {}
    tls:
      http:
        caSecret: {}
        generate: true
        secret: {}
      transport:
        caSecret: {}
        generate: true
        secret: {}
status:
  availableNodes: 3
  componentsStatus:
  - component: Restarter
    status: Finished
  health: green
  initialized: true
  phase: RUNNING
  version: 2.3.0

The two secrets that were added:

+ # Source: secret/templates/secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+   labels:
+     app: securityconfig-secret
+     chart: secret
+     heritage: Helm
+     release: securityconfig-secret
+   name: securityconfig-secret
+ data:
+   action_groups.yml: '++++++++ # (49 bytes)'
+   config.yml: '++++++++ # (364 bytes)'
+   internal_users.yml: '++++++++ # (1689 bytes)'
+   nodes_dn.yml: '++++++++ # (44 bytes)'
+   roles.yml: '++++++++ # (6287 bytes)'
+   roles_mapping.yml: '++++++++ # (464 bytes)'
+   tenants.yml: '++++++++ # (44 bytes)'
+   whitelist.yml: '++++++++ # (46 bytes)'
+ type: Opaque
+ # Source: secret/templates/secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+   labels:
+     app: admin-credentials-secret
+     chart: secret
+     heritage: Helm
+     release: admin-credentials-secret
+   name: admin-credentials-secret
+ data:
+   password: '++++++++ # (32 bytes)'
+   username: '++++++++ # (5 bytes)'
+ type: Opaque

One last thing: I am using OpenSearch operator version 2.6.0.

Hope this helps


mvtab commented Aug 29, 2024

I really don't understand how, but apparently I was using an extremely old version of the chart: 2.3.0. The current version is 2.23.1.
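
For anyone else, a sketch of checking the installed operator chart version against upstream (the release name and repo alias are assumptions):

helm list -A | grep -i opensearch-operator   # what is installed
helm repo add opensearch-operator https://opensearch-project.github.io/opensearch-k8s-operator/
helm repo update
helm search repo opensearch-operator --versions | head   # what is available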

Closing this.

@mvtab closed this as completed Aug 29, 2024