I have read in various places about "cooking" logs before sending them to a Splunk Enterprise instance, and I'm curious whether a heavy forwarder is the right solution for my team.
To give some background, my company has a department that manages the main Splunk environment. They have set up a deployment server that other departments can subscribe to in order to send their data to the environment; however, they limit what users can do and do not allow sensitive log information to be sent to them. They also don't readily support HTTP Event Collection (HEC).
We are considering a heavy forwarder to transform the data **AND** handle extractions **AND** handle HTTP Event Collection before the data is indexed by the Splunk environment; however, we have a few questions about this. I have read that heavy forwarders perform "pre-index extractions," meaning they write changes, whereas Splunk normally performs "post-index extractions" (reads). From my understanding, Splunk applies search-time extractions without modifying the logs themselves, but does a heavy forwarder actually modify the logs?
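For concreteness, the kind of index-time "cooking" I mean would look something like this in `props.conf` on the heavy forwarder (the sourcetype name and the SSN-style pattern are just hypothetical examples):

```ini
# props.conf on the heavy forwarder
# SEDCMD rewrites _raw during the parsing phase, before the event is
# forwarded/indexed -- unlike search-time extractions, this change is permanent.
[my_app_logs]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```

My understanding is that rules like this run in the heavy forwarder's parsing pipeline, so the indexers downstream only ever see the masked version of the event.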
How much overhead are my team and I realistically looking at if we want to configure a heavy forwarder to handle transformations and extractions? On a related note, does the heavy forwarder let us use the "Regex Tool"?
For trial purposes, can I install a heavy forwarder on the same Windows machine that my current demo Splunk Enterprise instance is on?
Thank you everyone!