Use Case:
Docker -> Intermediate Heavy Forwarder -> Indexer -> Search Head
                                     |                   ^
                                     +-> Hadoop -> Hunk -+
Steps:
1. Pipe Docker logs to the intermediate heavy forwarder via the built-in Docker-Splunk logging driver
2a. On the heavy forwarder, filter for long-term data (that I **don't need** indexed) and send it to Hadoop
3a. Data is archived in the Hadoop cluster
4a. Access this non-critical data via Hunk through the search head
2b. On the heavy forwarder, filter for near-term data (that I **need** indexed) and send it to a Splunk indexer
3b. Data is indexed and stored in Splunk
4b. Access this critical data through the search head, querying the index directly for fast searching
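For context on step 1, this is roughly the invocation I mean by the built-in logging driver (the token, URL, and sourcetype values below are placeholders, not my actual config):

```shell
# Run a container whose stdout/stderr go straight to the heavy forwarder's
# HTTP Event Collector endpoint via Docker's native splunk log driver.
docker run \
  --log-driver=splunk \
  --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
  --log-opt splunk-url=https://my-heavy-forwarder.example.com:8088 \
  --log-opt splunk-sourcetype=docker:app \
  my-app-image
```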
Disclaimer for respondents:
I **have read** the Splunk docs, and I know you can split and pipe "raw" data to a third-party system from a heavy forwarder. Is there really no way to filter the data on the heavy forwarder and send it to a third party **without** indexing it?
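To make the question concrete, here is the kind of routing I am picturing on the heavy forwarder, sketched with props/transforms/outputs stanzas (the sourcetype name, regexes, and hostnames are placeholders; I am not sure this combination actually avoids indexing):

```ini
# props.conf -- apply routing transforms to the Docker sourcetype
[docker:app]
TRANSFORMS-routing = route_archive, route_index

# transforms.conf -- pick a tcpout group based on event content
[route_archive]
REGEX = (?i)debug|trace        # long-term, non-critical events
DEST_KEY = _TCP_ROUTING
FORMAT = hadoop_out

[route_index]
REGEX = (?i)error|warn         # near-term, critical events
DEST_KEY = _TCP_ROUTING
FORMAT = splunk_indexers

# outputs.conf -- one cooked group to the indexers, one raw group out
[tcpout:splunk_indexers]
server = idx1.example.com:9997

[tcpout:hadoop_out]
server = hadoop-ingest.example.com:1514
sendCookedData = false         # send raw, not cooked, to the 3rd party
```

My understanding is that the heavy forwarder parses but does not index in this setup; I would like confirmation that the `hadoop_out` branch never touches an index.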