diff --git a/CHANGELOG.md b/CHANGELOG.md
index b8ce616..c6df532 100755
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,11 @@ All changes to the log analytics integration will be documented in this file.
+## [1.0.5] - July 22, 2024
+* FluentD sidecar version bumped to 4.5, to upgrade base image to bitnami/fluentd 1.17.0
+* Fixing metrics documentation and general readme issues
+* Removing Elasticsearch FluentD plugins from docker images
+
 ## [1.0.4] - June6, 2024
 * [BREAKING] Adding deprecation notice for partnership-pts-observability.jfrog.io docker registry
 * FluentD sidecar version bumped to 4.3, to upgrade base image to bitnami/fluentd 1.16.5
diff --git a/README.md b/README.md
index e7d45c9..a06c568 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ This integration is last tested with Artifactory 7.71.11 and Xray 3.88.12 versio
 `Note! You must follow the order of the steps throughout Splunk Configuration`
 1. [Splunk Setup](#splunk-setup)
 2. [JFrog Metrics Setup](#jfrog-metrics-setup)
-3. [Fluentd Installation](#fluentd-installation)
+3. [FluentD Installation](#fluentd-installation)
    * [OS / Virtual Machine](#os--virtual-machine)
    * [Docker](#docker)
    * [Kubernetes Deployment with Helm](#kubernetes-deployment-with-helm)
@@ -22,7 +22,7 @@ This integration is last tested with Artifactory 7.71.11 and Xray 3.88.12 versio
 ### Splunkbase App
 Install the `JFrog Log Analytics Platform` app from Splunkbase [here!](https://splunkbase.splunk.com/app/5023/)
-````text
+``` text
 1. Download file from Splunkbase
 2. Open Splunk web console as administrator
 3. From homepage click on settings wheel in top right of Apps section
 5. Select download file from Splunkbase on your computer
 6. Click upgrade
 7. Click upload
-````
+```
 Restart Splunk post installation of App.
-````text
-1. Open Splunk web console as adminstrator
+``` text
+1. Open Splunk web console as administrator
 2. Click on Settings then Server Controls
 3. Click on Restart
-````
+```
 Login to Splunk after the restart completes.
@@ -50,26 +50,26 @@ Our integration uses the [Splunk HEC](https://dev.splunk.com/enterprise/docs/dat
 Users will need to configure the HEC to accept data (enabled) and also create a new token. Steps are below.
 #### Create index jfrog_splunk
-````text
+``` text
 1. Open Splunk web console as administrator
 2. Click on "Settings" in dropdown select "Indexes"
 3. Click on "New Index"
 4. Enter Index name as jfrog_splunk
 5. Click "Save"
-````
+```
 #### Create index jfrog_splunk_metrics
-````text
+``` text
 1. Open Splunk web console as administrator
 2. Click on "Settings" in dropdown select "Indexes"
 3. Click on "New Index"
 4. Enter Index name as jfrog_splunk_metrics
 5. Select Index Data Type as Metrics
 6. Click "Save"
-````
+```
 #### Configure new HEC token to receive Logs
-````text
+``` text
 1. Open Splunk web console as administrator
 2. Click on "Settings" in dropdown select "Data inputs"
 3. Click on "HTTP Event Collector"
 9. Click on the green "Review" button
 10. If good, Click on the green "Done" button
 11. Save the generated token value
-````
+```
 #### Configure new HEC token to receive Metrics
-````text
+``` text
 1. Open Splunk web console as administrator
 2. Click on "Settings" in dropdown select "Data inputs"
 3. Click on "HTTP Event Collector"
@@ -96,21 +96,23 @@ Users will need to configure the HEC to accept data (enabled) and also create a
 9. Click on the green "Review" button
 10. If good, Click on the green "Done" button
 11. Save the generated token value
-````
+```
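+
+As an optional sanity check (the host, port, and token below are placeholders; 8088 and HTTPS are Splunk's HEC defaults, and `-k` is only needed for self-signed certificates), you can send a test event with the logs token created above and confirm it is accepted into the `jfrog_splunk` index:
+
+```bash
+curl -k "https://<splunk_hec_host>:8088/services/collector/event" \
+  -H "Authorization: Splunk <SPLUNK_HEC_TOKEN>" \
+  -d '{"event": "jfrog hec connectivity test", "index": "jfrog_splunk"}'
+```
+
+A response of `{"text":"Success","code":0}` indicates the token and index are configured correctly.
+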
 ## JFrog Metrics Setup
-To enable metrics in Artifactory, make the following configuration changes to the [Artifactory System YAML](https://www.jfrog.com/confluence/display/JFROG/Artifactory+System+YAML)
+For non-Kubernetes-based installations, to enable metrics in Artifactory, make the following configuration changes to the [Artifactory System YAML](https://www.jfrog.com/confluence/display/JFROG/Artifactory+System+YAML)
 ```yaml
-artifactory:
+shared:
     metrics:
         enabled: true
-        openMetrics:
+
+artifactory:
+    metrics:
         enabled: true
 ```
 Once this configuration is done and the application is restarted, metrics will be available in Open Metrics Format
 Metrics are enabled by default in Xray.
-For kubernetes based installs, openMetrics are enabled in the helm install commands listed below
+For Kubernetes-based installations, openMetrics is enabled in the helm install commands listed below
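+
+To spot-check that the endpoint responds after the restart (illustrative command; substitute your own JPD base URL and an admin-scoped access token), query Artifactory's OpenMetrics endpoint directly. Xray exposes its metrics on the equivalent `/xray/api/v1/metrics` path:
+
+```bash
+curl -s -H "Authorization: Bearer <JFROG_ADMIN_TOKEN>" "<JPD_URL>/artifactory/api/v1/metrics" | head
+```
+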
 ## Fluentd Installation
@@ -146,14 +148,14 @@ Ensure you have access to the Internet from VM. Recommended install is through f
 After FluentD is successfully installed, the below plugins are required to be installed
-````shell
+```bash
 gem install fluent-plugin-concat
 gem install fluent-plugin-splunk-hec
 gem install fluent-plugin-jfrog-siem
 gem install fluent-plugin-jfrog-metrics
-````
+```
-#### Configure Fluentd
+#### Configure FluentD
 We rely heavily on environment variables so that the correct log files are streamed to your observability dashboards. Ensure that you fill in the .env file with correct values. Download the .env file from [here](https://raw.githubusercontent.com/jfrog/log-analytics-splunk/master/jfrog.env)
 * **JF_PRODUCT_DATA_INTERNAL**: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to the correct location. For each JFrog service you will find its active log files in the `$JFROG_HOME//var/log` directory
@@ -170,14 +172,14 @@ We rely heavily on environment variables so that the correct log files are strea
 Apply the .env files and then run the fluentd wrapper with one argument pointed to the `fluent.conf.*` file configured.
-````shell
+```bash
 source jfrog.env
 ./fluentd $JF_PRODUCT_DATA_INTERNAL/fluent.conf.
-````
+```
 ### Docker
-`Note! These steps were not tested to work out of the box on MAC`
-In order to run fluentd as a docker image to send the logs, violations and metrics data to splunk, the following commands needs to be executed on the host that runs the docker.
+`Note! These steps were not tested to work out of the box on MAC`
+In order to run FluentD as a Docker image to send the logs, violations and metrics data to Splunk, the following commands need to be executed on the host that runs Docker.
 1. Check the docker installation is functional, execute command 'docker version' and 'docker ps'.
@@ -189,42 +191,47 @@ In order to run fluentd as a docker image to send the logs, violations and metri
 * Download docker.env from [here](https://raw.githubusercontent.com/jfrog/log-analytics-splunk/master/docker-build/docker.env) to the directory where the docker file was downloaded.
-```text
-
-For Splunk as the observability platform, execute these commands to setup the docker container running the fluentd installation
+For Splunk as the observability platform, execute these commands to set up the docker container running the FluentD installation
-1. Execute 'docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t .'
+1. Execute
+   ```bash
+   docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t .
+   ```
 Command example - 'docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t jfrog/fluentd-splunk-rt .'
+   ```bash
+   docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t jfrog/fluentd-splunk-rt .
+   ```
 The above command will build the docker image.
 2. Fill the necessary information in the docker.env file
-   JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to the correct location. For each JFrog service you will find its active log files in the `$JFROG_HOME//var/log` directory
-   SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
-   SPLUNK_HEC_HOST: Splunk Instance URL
-   SPLUNK_HEC_PORT: Splunk HEC configured port
-   SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
-   SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
-   SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
-   JPD_URL: Artifactory JPD URL of the format `http://`
-   JPD_ADMIN_USERNAME: Artifactory username for authentication
-   JFROG_ADMIN_TOKEN: Artifactory [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) for authentication
-   COMMON_JPD: This flag should be set as true only for non-kubernetes installations or installations where JPD base URL is same to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)
-
-3. Execute 'docker run -it --name jfrog-fluentd-splunk-rt -v :/var/opt/jfrog/artifactory --env-file docker.env '
+   **JF_PRODUCT_DATA_INTERNAL**: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to the correct location. For each JFrog service you will find its active log files in the `$JFROG_HOME//var/log` directory
+   **SPLUNK_COM_PROTOCOL**: HTTP Scheme, http or https
+   **SPLUNK_HEC_HOST**: Splunk Instance URL
+   **SPLUNK_HEC_PORT**: Splunk HEC configured port
+   **SPLUNK_HEC_TOKEN**: Splunk HEC Token for sending logs to Splunk
+   **SPLUNK_METRICS_HEC_TOKEN**: Splunk HEC Token for sending metrics to Splunk
+   **SPLUNK_INSECURE_SSL**: false for test environments only or if http scheme
+   **JPD_URL**: Artifactory JPD URL of the format `http://`
+   **JPD_ADMIN_USERNAME**: Artifactory username for authentication
+   **JFROG_ADMIN_TOKEN**: Artifactory [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) for authentication
+   **COMMON_JPD**: This flag should be set as true only for non-kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)
+
+3. Execute
+   ```bash
+   docker run -it --name jfrog-fluentd-splunk-rt -v :/var/opt/jfrog/artifactory --env-file docker.env
+   ```
 The should be an absolute path where the Jfrog Artifactory Logs folder resides, i.e for an Docker based Artifactory Installation, ex: /var/opt/jfrog/artifactory/var/logs on the docker host.
 Command example - 'docker run -it --name jfrog-fluentd-splunk-rt -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory --env-file docker.env jfrog/fluentd-splunk-rt'
-
-
-```
+   ```bash
+   docker run -it --name jfrog-fluentd-splunk-rt -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory --env-file docker.env jfrog/fluentd-splunk-rt
+   ```
 ### Kubernetes Deployment with Helm
 Recommended installation for Kubernetes is to utilize the helm chart with the associated values.yaml in this repo.
@@ -241,33 +248,49 @@ Recommended installation for Kubernetes is to utilize the helm chart with the as
 Add JFrog Helm repository:
-```shell
+```bash
 helm repo add jfrog https://charts.jfrog.io
 helm repo update
 ```
-Replace placeholders with your ``masterKey`` and ``joinKey``. To generate each of them, use the command
-``openssl rand -hex 32``
+
+Throughout the example helm installations below, we'll use `jfrog-splunk` as the namespace. That said, you can use a different or existing namespace instead by setting the following environment variable
+
+```bash
+export INST_NAMESPACE=jfrog-splunk
+```
+
+If you don't have an existing namespace for the deployment, create it and set the kubectl context to use this namespace
+
+```bash
+kubectl create namespace $INST_NAMESPACE
+kubectl config set-context --current --namespace=$INST_NAMESPACE
+```
+
+Generate ``masterKey`` and ``joinKey`` for the installation
+
+```bash
+export JOIN_KEY=$(openssl rand -hex 32)
+export MASTER_KEY=$(openssl rand -hex 32)
+```
 #### Artifactory ⎈:
 1. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below
-   ```shell
-   helm upgrade --install artifactory jfrog/artifactory \
-     --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
-     --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
-     --set artifactory.license.secret=artifactory-license \
-     --set artifactory.license.dataKey=artifactory.cluster.license \
-     --set artifactory.metrics.enabled=true \
-     --set artifactory.openMetrics.enabled=true
+   ```bash
+   helm upgrade --install artifactory jfrog/artifactory \
+     --set artifactory.masterKey=$MASTER_KEY \
+     --set artifactory.joinKey=$JOIN_KEY \
+     --set artifactory.openMetrics.enabled=true \
+     -n $INST_NAMESPACE --create-namespace
    ```
 2. Create a secret for JFrog's admin token - [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) using any of the following methods
-   ```shell
+   ```bash
    kubectl create secret generic jfrog-admin-token --from-file=token=
    OR
-   kubectl create secret generic jfrog-admin-token --from-literal=token=
+   kubectl create secret generic jfrog-admin-token --from-literal=token=
    ```
 3. For Artifactory installation, download the .env file from [here](https://github.com/jfrog/log-analytics-splunk/raw/master/helm/jfrog_helm.env). Fill in the jfrog_helm.env file with correct values.
@@ -283,21 +306,20 @@ Replace placeholders with your ``masterKey`` and ``joinKey``. To generate each o
    Apply the .env files using the helm command below
-   ````shell
+   ```bash
    source jfrog_helm.env
-   ````
+   ```
 4. Postgres password is required to upgrade Artifactory. Run the following command to get the current password
-   ```shell
+   ```bash
    POSTGRES_PASSWORD=$(kubectl get secret artifactory-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
    ```
 5. Upgrade Artifactory installation using the command below
-   ```shell
+   ```bash
    helm upgrade --install artifactory jfrog/artifactory \
-     --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
-     --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
-     --set artifactory.metrics.enabled=true --set artifactory.openMetrics.enabled=true \
+     --set artifactory.joinKey=$JOIN_KEY \
+     --set artifactory.openMetrics.enabled=true \
      --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
     --set splunk.host=$SPLUNK_HEC_HOST \
     --set splunk.port=$SPLUNK_HEC_PORT \
     --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
     --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
     --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
     --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
@@ -308,31 +330,32 @@ Replace placeholders with your ``masterKey`` and ``joinKey``. To generate each o
     --set jfrog.observability.jpd_url=$JPD_URL \
     --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
     --set jfrog.observability.common_jpd=$COMMON_JPD \
-     -f helm/artifactory-values.yaml
+     -f helm/artifactory-values.yaml \
+     -n $INST_NAMESPACE --create-namespace
    ```
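+
+   Once the upgrade completes, you can optionally verify that the fluentd sidecar was added to the Artifactory pod. As an illustrative check (pod names vary by cluster), list the pods and their containers and look for `artifactory-fluentd-sidecar`:
+   ```bash
+   kubectl get pods -n $INST_NAMESPACE
+   kubectl get pods -n $INST_NAMESPACE -o jsonpath='{.items[*].spec.containers[*].name}'
+   ```
+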
 #### Artifactory-HA ⎈:
 1. For HA installation, please create a license secret on your cluster prior to installation.
-   ```shell
-   kubectl create secret generic artifactory-license --from-file=artifactory.cluster.license
+   ```bash
+   kubectl create secret generic artifactory-license --from-file=
+   ```
 2. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below
-   ```shell
+   ```bash
    helm upgrade --install artifactory-ha jfrog/artifactory-ha \
-     --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
-     --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
+     --set artifactory.masterKey=$MASTER_KEY \
+     --set artifactory.joinKey=$JOIN_KEY \
      --set artifactory.license.secret=artifactory-license \
      --set artifactory.license.dataKey=artifactory.cluster.license \
-     --set artifactory.metrics.enabled=true \
-     --set artifactory.openMetrics.enabled=true
+     --set artifactory.openMetrics.enabled=true \
+     -n $INST_NAMESPACE --create-namespace
    ```
 3. Create a secret for JFrog's admin token - [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) using any of the following methods
-   ```shell
+   ```bash
    kubectl create secret generic jfrog-admin-token --from-file=token=
    OR
-   kubectl create secret generic jfrog-admin-token --from-literal=token=
+   kubectl create secret generic jfrog-admin-token --from-literal=token=
    ```
 4. Download the .env file from [here](https://github.com/jfrog/log-analytics-splunk/raw/master/helm/jfrog_helm.env). Fill in the jfrog_helm.env file with correct values.
@@ -348,41 +371,41 @@ Replace placeholders with your ``masterKey`` and ``joinKey``. To generate each o
    Apply the .env files and then run the helm command below
-   ````shell
+   ```bash
    source jfrog_helm.env
-   ````
+   ```
 5. Postgres password is required to upgrade Artifactory. Run the following command to get the current password
-   ```shell
+   ```bash
    POSTGRES_PASSWORD=$(kubectl get secret artifactory-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
    ```
 6. Upgrade Artifactory HA installation using the command below
-   ```text
-   helm upgrade --install artifactory-ha jfrog/artifactory-ha \
-     --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
-     --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE --set artifactory.replicaCount=0 \
-     --set artifactory.metrics.enabled=true --set artifactory.openMetrics.enabled=true \
-     --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
-     --set splunk.host=$SPLUNK_HEC_HOST \
-     --set splunk.port=$SPLUNK_HEC_PORT \
-     --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
-     --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
-     --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
-     --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
-     --set jfrog.observability.jpd_url=$JPD_URL \
-     --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
-     --set jfrog.observability.common_jpd=$COMMON_JPD \
-     -f helm/artifactory-ha-values.yaml
+   ```bash
+   helm upgrade --install artifactory-ha jfrog/artifactory-ha \
+     --set artifactory.joinKey=$JOIN_KEY \
+     --set artifactory.openMetrics.enabled=true \
+     --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
+     --set splunk.host=$SPLUNK_HEC_HOST \
+     --set splunk.port=$SPLUNK_HEC_PORT \
+     --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
+     --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
+     --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
+     --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
+     --set jfrog.observability.jpd_url=$JPD_URL \
+     --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
+     --set jfrog.observability.common_jpd=$COMMON_JPD \
+     -f helm/artifactory-ha-values.yaml \
+     -n $INST_NAMESPACE --create-namespace
    ```
 #### Xray ⎈:
 Create a secret for JFrog's admin token - [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) using any of the following methods if it doesn't exist
-```shell
+```bash
 kubectl create secret generic jfrog-admin-token --from-file=token=
 OR
-kubectl create secret generic jfrog-admin-token --from-literal=token=
+kubectl create secret generic jfrog-admin-token --from-literal=token=
 ```
 For Xray installation, download the .env file from [here](https://raw.githubusercontent.com/jfrog/log-analytics-splunk/master/jfrog_helm.env). Fill in the jfrog_helm.env file with correct values.
@@ -400,26 +423,33 @@
 Apply the .env files and then run the helm command below
-````shell
+```bash
 source jfrog_helm.env
-````
-
-Use the same `joinKey` as you used in Artifactory installation to allow Xray node to successfully connect to Artifactory.
-
-```shell
-helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=http://my-artifactory-nginx-url \
-    --set xray.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
-    --set xray.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
-    --set splunk.host=$SPLUNK_HEC_HOST \
-    --set splunk.port=$SPLUNK_HEC_PORT \
-    --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
-    --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
-    --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
-    --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
-    --set jfrog.observability.jpd_url=$JPD_URL \
-    --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
-    --set jfrog.observability.common_jpd=$COMMON_JPD \
-    -f helm/xray-values.yaml
+```
+
+Generate a master key for Xray
+
+```bash
+export XRAY_MASTER_KEY=$(openssl rand -hex 32)
+```
+
+Use the same `joinKey` as you used in the Artifactory installation ($JOIN_KEY) to allow the Xray node to connect to Artifactory successfully.
+
+```bash
+helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=$JPD_URL \
+    --set xray.masterKey=$XRAY_MASTER_KEY \
+    --set xray.joinKey=$JOIN_KEY \
+    --set splunk.host=$SPLUNK_HEC_HOST \
+    --set splunk.port=$SPLUNK_HEC_PORT \
+    --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
+    --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
+    --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
+    --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
+    --set jfrog.observability.jpd_url=$JPD_URL \
+    --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
+    --set jfrog.observability.common_jpd=$COMMON_JPD \
+    -f helm/xray-values.yaml \
+    -n $INST_NAMESPACE --create-namespace
 ```
 ## Dashboards
@@ -455,7 +485,7 @@ Log data from JFrog platform logs is translated to pre-defined Common Informatio
 To run this integration for Splunk users can create a Splunk instance with the correct ports open in Kubernetes by applying the yaml file:
-```
+```bash
 kubectl apply -f k8s/splunk.yaml
 ```
diff --git a/docker-build/Dockerfile b/docker-build/Dockerfile
index ae96aaf..2903e26 100644
--- a/docker-build/Dockerfile
+++ b/docker-build/Dockerfile
@@ -1,5 +1,5 @@
 # Dockerfile for bitnami/fluentd sidecar image with all the necessary plugins for our log analytic providers
-FROM bitnami/fluentd:1.16.3
+FROM bitnami/fluentd:1.17.0
 LABEL maintainer="Partner Engineering "
 ## Build time Arguments, short circuit them to ENV Variables so they are available at run time also
diff --git a/fluentd-installer/Dockerfile.fluentd.sidecar b/fluentd-installer/Dockerfile.fluentd.sidecar
index 270c21d..2f8b771 100644
--- a/fluentd-installer/Dockerfile.fluentd.sidecar
+++ b/fluentd-installer/Dockerfile.fluentd.sidecar
@@ -1,18 +1,13 @@
 # Dockerfile for bitnami/fluentd sidecar image with all the necessary plugins for our log analytic providers
-FROM bitnami/fluentd:1.16.3
+FROM bitnami/fluentd:1.17.0
 LABEL maintainer "Partner Engineering "
 USER root
-##Uninstall elastic plugin which is preinstalled in bitnami fluentd
-##Pin elastic gem version to 7.14
-RUN fluent-gem uninstall elasticsearch -a --ignore-dependencies \
-    && fluent-gem install elasticsearch -v 7.14 --no-document \
-    ## Install custom Fluentd plugins
-    && fluent-gem install fluent-plugin-jfrog-siem --no-document \
+## Install custom Fluentd plugins
+RUN fluent-gem install fluent-plugin-jfrog-siem --no-document \
     && fluent-gem install fluent-plugin-splunk-hec --no-document \
     && fluent-gem install fluent-plugin-datadog --no-document \
-    && fluent-gem install fluent-plugin-elasticsearch --no-document \
     && fluent-gem install fluent-plugin-record-modifier --no-document \
     && fluent-gem install fluent-plugin-jfrog-metrics --no-document \
     && fluent-gem install fluent-plugin-jfrog-send-metrics --no-document \
diff --git a/helm/artifactory-ha-values.yaml b/helm/artifactory-ha-values.yaml
index df60ef6..e69fed8 100644
--- a/helm/artifactory-ha-values.yaml
+++ b/helm/artifactory-ha-values.yaml
@@ -15,7 +15,7 @@ artifactory:
       name: volume
   customSidecarContainers: |
     - name: "artifactory-fluentd-sidecar"
-      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.3"
+      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.5"
       imagePullPolicy: "IfNotPresent"
       volumeMounts:
         - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
@@ -47,7 +47,7 @@ artifactory:
         - name: SPLUNK_INSECURE_SSL
           value: {{ .Values.splunk.insecure_ssl | quote}}
         - name: FLUENTD_CONF
-          value: ../../../../{{ .Values.artifactory.persistence.mountPath }}/etc/fluentd/fluentd.conf
+          value: ../../../..{{ .Values.artifactory.persistence.mountPath }}/etc/fluentd/fluentd.conf
 splunk:
   host: SPLUNK_HEC_HOST
   port: SPLUNK_HEC_PORT
diff --git a/helm/artifactory-values.yaml b/helm/artifactory-values.yaml
index 9997003..8b8e57e 100644
--- a/helm/artifactory-values.yaml
+++ b/helm/artifactory-values.yaml
@@ -15,7 +15,7 @@ artifactory:
       name: artifactory-volume
  customSidecarContainers: |
    - name: "artifactory-fluentd-sidecar"
-      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.3"
+      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.5"
      imagePullPolicy: "IfNotPresent"
      volumeMounts:
        - mountPath: "{{ .Values.artifactory.persistence.mountPath }}"
@@ -47,7 +47,7 @@ artifactory:
        - name: SPLUNK_INSECURE_SSL
          value: {{ .Values.splunk.insecure_ssl | quote}}
        - name: FLUENTD_CONF
-          value: ../../../../{{ .Values.artifactory.persistence.mountPath }}/etc/fluentd/fluentd.conf
+          value: ../../../..{{ .Values.artifactory.persistence.mountPath }}/etc/fluentd/fluentd.conf
 splunk:
   host: SPLUNK_HEC_HOST
   port: SPLUNK_HEC_PORT
diff --git a/helm/xray-values.yaml b/helm/xray-values.yaml
index 20f8bee..b1bfa58 100644
--- a/helm/xray-values.yaml
+++ b/helm/xray-values.yaml
@@ -19,7 +19,7 @@ common:
       name: data-volume
  customSidecarContainers: |
    - name: "xray-platform-fluentd-sidecar"
-      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.3"
+      image: "releases-pts-observability-fluentd.jfrog.io/fluentd:4.5"
      imagePullPolicy: "IfNotPresent"
      volumeMounts:
        - mountPath: "{{ .Values.xray.persistence.mountPath }}"
@@ -28,7 +28,7 @@ common:
        - name: JF_PRODUCT_DATA_INTERNAL
          value: {{ .Values.xray.persistence.mountPath }}
        - name: FLUENTD_CONF
-          value: ../../../../{{ .Values.xray.persistence.mountPath }}/etc/fluentd/fluentd.conf
+          value: ../../../..{{ .Values.xray.persistence.mountPath }}/etc/fluentd/fluentd.conf
        - name: JPD_URL
          value: {{ .Values.jfrog.observability.jpd_url }}
        - name: JPD_ADMIN_USERNAME