diff --git a/README.md b/README.md
index fdaf21c..f076e13 100644
--- a/README.md
+++ b/README.md
@@ -8,9 +8,20 @@ Detection findings and audit events generated by CrowdStrike Falcon platform inf
 This project facilitates the export of the individual detections and audit events from CrowdStrike Falcon to third-party security dashboards (so called backends). The export is useful in cases where security operation team workflows are tied to given third-party solution to get early real-time heads-up about malicious activities or unusual user activities detected by CrowdStrike Falcon platform.
 
+## API Scopes
+
+API clients are granted one or more API scopes. Scopes allow access to specific CrowdStrike APIs and describe the actions that an API client can perform.
+
+FIG requires the following API scopes at a minimum:
+
+- **Event streams**: [Read]
+- **Hosts**: [Read]
+
+> Consult the backend guides for additional API scopes that may be required.
+
 ## Backends w/ Available Deployment Guide(s)
 
-| Backend | Description | Deployment Guide(s) | Developer Guide(s) |
+| Backend | Description | Deployment Guide(s) | General Guide(s) |
 |:--------|:------------|:--------------------|:-------------------|
 | AWS | Pushes events to AWS Security Hub | *Coming Soon* | [AWS backend](fig/backends/aws) |
 | AWS_SQS | Pushes events to AWS SQS | *Coming Soon* | [AWS SQS backend](fig/backends/aws_sqs) |
@@ -21,19 +32,27 @@ This project facilitates the export of the individual detections and audit event
 | Workspace ONE | Pushes events to VMware Workspace ONE Intelligence | *Coming Soon* | [Workspace ONE backend](fig/backends/workspaceone) |
 
 ## Alternative Deployment Options
+
 > :exclamation: Prior to any deployment, ensure you refer to the [configuration options](./config/config.ini) available to the application :exclamation:
+
 ### Installation to Kubernetes using the helm chart
 
 Please refer to the [FIG helm chart documentation](https://github.com/CrowdStrike/falcon-helm/tree/main/helm-charts/falcon-integration-gateway) for detailed instructions on deploying the FIG via helm chart for your respective backend(s).
 
 ### Manual Installation and Removal
+
 #### With Docker/Podman
+
 To install as a container:
+
 1. Pull the image
+
    ```bash
    docker pull quay.io/crowdstrike/falcon-integration-gateway:latest
    ```
-2. Run the application in the background passing in your backend [CONFIG](./config/config.ini) options
+
+1. Run the application in the background, passing in your backend [CONFIG](./config/config.ini) options as environment variables
+
    ```bash
    docker run -d --rm \
    -e FALCON_CLIENT_ID="$FALCON_CLIENT_ID" \
@@ -43,19 +62,24 @@ To install as a container:
    -e CONFIG_OPTION=CONFIG_OPTION_VALUE \
    quay.io/crowdstrike/falcon-integration-gateway:latest
    ```
-3. Confirm deployment
+
+1. Confirm deployment
+
    ```bash
    docker logs
    ```
 
 #### From Git Repository
+
 1. Clone the repository
+
    ```bash
    git clone https://github.com/CrowdStrike/falcon-integration-gateway.git
    ```
-2. Modify the `./config/config.ini` file with your backend options
-3. Run the application
+1. Modify the `./config/config.ini` file with your backend options
+1. Run the application
+
    ```bash
    python3 -m fig
    ```
@@ -63,4 +87,5 @@ To install as a container:
 ## [Developers Guide](./docs/developer_guide.md)
 
 ## Statement of Support
-Falcon Integration Gateway (FIG) is an open source project, not a CrowdStrike product. As such it carries no formal support, expressed or implied.
+
+Falcon Integration Gateway (FIG) is a community-driven, open source project designed to forward threat detection findings and audit events from the CrowdStrike Falcon platform to the backend of your choice. While not a formal CrowdStrike product, FIG is maintained by CrowdStrike and supported in partnership with the open source community.
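A note on the "Confirm deployment" step above, which leaves the container reference implicit: one way to fill it in is shown below. The `--filter` value assumes the image tag pulled in step 1, and the container ID placeholder is whatever `docker ps` reports.

```bash
# Locate the gateway container started from the pulled image, then follow its
# logs to confirm the gateway started correctly.
docker ps --filter "ancestor=quay.io/crowdstrike/falcon-integration-gateway:latest"
docker logs --follow <container_id_reported_by_docker_ps>
```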
diff --git a/config/config.ini b/config/config.ini
index 828b03e..f1b9bea 100644
--- a/config/config.ini
+++ b/config/config.ini
@@ -18,7 +18,7 @@
 # Alternatively, use EVENTS_OLDER_THAN_DAYS_THRESHOLD env variable.
 #older_than_days_threshold = 14
 
-#
+# Exclude events originating from certain cloud environments (AWS, Azure, GCP, or unrecognized)
 # detections_exclude_clouds =
 
 [logging]
@@ -55,8 +55,9 @@
 # Uncomment to provide Azure Primary Key. Alternatively, use PRIMARY_KEY env variable.
 #primary_key =
 
-# Uncomment to enable RTR based auto discovery of Azure Arc Systems
-# arc_autodiscovery = true
+# Uncomment to enable RTR-based auto-discovery of Azure Arc systems. Alternatively,
+# use ARC_AUTODISCOVERY env variable.
+#arc_autodiscovery = true
 
 [aws]
 # AWS section is applicable only when AWS backend is enabled in the [main] section.
diff --git a/docs/aks/falcon-integration-gateway.yaml b/docs/aks/falcon-integration-gateway.yaml
index 2ed2e22..8000ae1 100644
--- a/docs/aks/falcon-integration-gateway.yaml
+++ b/docs/aks/falcon-integration-gateway.yaml
@@ -42,6 +42,9 @@ data:
     # Uncomment to filter out events based on number of days past the event (default 21)
     #older_than_days_threshold = 14
 
+    # Exclude events originating from certain cloud environments (AWS, Azure, GCP, or unrecognized)
+    # detections_exclude_clouds =
+
     [logging]
     # Uncomment to request logging level (ERROR, WARN, INFO, DEBUG)
     #level = DEBUG
@@ -61,12 +64,16 @@ data:
     [azure]
     # Azure section is applicable only when AZURE backend is enabled in the [main] section.
-
+
     # Uncomment to provide Azure Workspace ID. Alternatively, use WORKSPACE_ID env variable.
     #workspace_id =
 
     # Uncomment to provide Azure Primary Key. Alternatively, use PRIMARY_KEY env variable.
     #primary_key =
 
+    # Uncomment to enable RTR-based auto-discovery of Azure Arc systems. Alternatively,
+    # use ARC_AUTODISCOVERY env variable.
+    #arc_autodiscovery = true
+
 ---
 
 apiVersion: apps/v1
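An aside that is not part of the diff: the gateway reads its configuration at startup, so options added to the ConfigMap above (for example `arc_autodiscovery = true`) typically only take effect once the pod restarts. A minimal sketch of that step follows; the Deployment and namespace names are assumptions for illustration, not values confirmed by this manifest.

```bash
# Re-apply the edited manifest, then restart the deployment so the pod re-reads
# the ConfigMap. Resource and namespace names below are assumed, not verified.
kubectl apply -f docs/aks/falcon-integration-gateway.yaml
kubectl -n falcon-integration-gateway rollout restart deployment/falcon-integration-gateway
kubectl -n falcon-integration-gateway rollout status deployment/falcon-integration-gateway
```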
diff --git a/docs/developer_guide.md b/docs/developer_guide.md
index fa2811b..df3eb1a 100644
--- a/docs/developer_guide.md
+++ b/docs/developer_guide.md
@@ -3,16 +3,21 @@
 To understand the architecture, readers are advised to review [./fig/__main__.py](../fig/__main__.py). Central to the application is the event queue with one writer to the queue (Falcon module) and multiple readers from the queue (the worker threads). Falcon module maintains streaming session to CrowdStrike Falcon cloud and translates events from the streaming session to the internal queue. Once an event is put on queue it awaits its pickup by one of the worker threads. Worker thread iterates through enabled [backends](../fig/backends) and asks relevant backends to process the event.
 
 ## Getting Started
+
 The easiest method to work on the FIG is to make use of a container environment, as this will ensure all dependencies are packaged within. Most backends have a corresponding developer guide with examples specific to the backend itself.
 
 ### Workflow
+
 1. Fork and clone the repository
-2. As changes are made to the code base, test them with the following approach:
+1. As changes are made to the code base, test them with the following approach:
    - Build the image
+
      ```bash
      docker build . -t falcon-integration-gateway
      ```
-   - Run the application passing in your [CONFIG](../config/config.ini) options
+
+   - Run the application interactively, passing in your [CONFIG](../config/config.ini) options as environment variables
+
      ```bash
      docker run -it --rm \
        -e FALCON_CLIENT_ID="$FALCON_CLIENT_ID" \
diff --git a/fig/backends/azure/README.md b/fig/backends/azure/README.md
index 5d97af8..3439539 100644
--- a/fig/backends/azure/README.md
+++ b/fig/backends/azure/README.md
@@ -5,7 +5,8 @@ Integration with Microsoft Azure Log Analytics.
 ### Example Configuration file
 
 [config/config.ini](https://github.com/CrowdStrike/falcon-integration-gateway/blob/main/config/config.ini) configures Falcon Integration Gateway. Below is a minimal configuration example for Azure:
-```
+
+```ini
 [main]
 # Cloud backends that are enabled. The gateway will push events to the cloud providers specified below
 backends=AZURE
@@ -18,10 +19,18 @@ backends=AZURE
 # Uncomment to provide Azure Primary Key. Alternatively, use PRIMARY_KEY env variable.
 #primary_key =
 
-# Uncomment to enable RTR based auto discovery of Azure Arc Systems
-# arc_autodiscovery = true
+# Uncomment to enable RTR-based auto-discovery of Azure Arc systems. Alternatively,
+# use ARC_AUTODISCOVERY env variable.
+#arc_autodiscovery = true
 ```
 
+### API Scopes
+
+Configure the following additional API scopes in your CrowdStrike Falcon console:
+
+- **Real Time Response**: [Read, Write]
+  > *Required if using the Azure Arc Autodiscovery feature.*
+
 ### Azure Arc Autodiscovery
 
 Azure Arc is service within Microsoft Azure that allows users to connect and manage systems outside Azure using single pane of glass (Azure user interface).
@@ -29,18 +38,20 @@ Azure Arc is service within Microsoft Azure that allows users to connect and man
 Falcon Integration Gateway is able to identify Azure Arc system properties (resourceName, resourceGroup, subscriptionId, tenantId, and vmId) using RTR and send these details over to Azure Log Analytics.
 
 To enable this feature:
-  - set `arc_autodiscovery=true` inside `[azure]` section in your config.ini
-  - grant extra Falcon permission to API keys in CrowdStrike Falcon
-    - Real Time Response: [Read, Write]
+
+- set `arc_autodiscovery=true` inside `[azure]` section in your config.ini
 
 ### Developer Guide
-  - Build the image
-  ```
+
+- Build the image
+
+  ```bash
   docker build . -t falcon-integration-gateway
   ```
-  - Run the application
-  ```
+
+- Run the application
+
+  ```bash
   docker run -it --rm \
     -e FALCON_CLIENT_ID="$FALCON_CLIENT_ID" \
     -e FALCON_CLIENT_SECRET="$FALCON_CLIENT_SECRET" \
@@ -51,4 +62,5 @@ To enable this feature:
 ```
 
 ### Developer Resources
-  - [Log Analytics Tutorial](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-tutorial)
+
+- [Log Analytics Tutorial](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-tutorial)
diff --git a/fig/config/__init__.py b/fig/config/__init__.py
index 42f2ba9..46f9100 100644
--- a/fig/config/__init__.py
+++ b/fig/config/__init__.py
@@ -20,6 +20,7 @@ class FigConfig(configparser.SafeConfigParser):
         ['falcon', 'application_id', 'FALCON_APPLICATION_ID'],
         ['azure', 'workspace_id', 'WORKSPACE_ID'],
         ['azure', 'primary_key', 'PRIMARY_KEY'],
+        ['azure', 'arc_autodiscovery', 'ARC_AUTODISCOVERY'],
         ['aws', 'region', 'AWS_REGION'],
         ['aws_sqs', 'region', 'AWS_REGION'],
         ['aws_sqs', 'sqs_queue_name', 'AWS_SQS'],
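A closing note for reviewers on the `fig/config/__init__.py` change: the new `ENV_DEFAULTS` entry is what makes the `ARC_AUTODISCOVERY` environment variable referenced in the config comments work, mapping it onto `arc_autodiscovery` in the `[azure]` section. The sketch below shows the practical effect using the container image from the README; it assumes the AZURE backend is already enabled in the image's config.ini and that the API client carries the Real Time Response [Read, Write] scope called out in the Azure backend guide.

```bash
# ARC_AUTODISCOVERY maps onto [azure] arc_autodiscovery via the new ENV_DEFAULTS
# entry, so the option can be enabled without editing config.ini.
docker run -d --rm \
  -e FALCON_CLIENT_ID="$FALCON_CLIENT_ID" \
  -e FALCON_CLIENT_SECRET="$FALCON_CLIENT_SECRET" \
  -e WORKSPACE_ID="$WORKSPACE_ID" \
  -e PRIMARY_KEY="$PRIMARY_KEY" \
  -e ARC_AUTODISCOVERY=true \
  quay.io/crowdstrike/falcon-integration-gateway:latest
```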