Channel: Questions in topic: "heavy-forwarder"
Viewing all 727 articles

Can you help me get the File/Directory Information Input working?

I'm trying to get the File/Directory Information Input app working, but I'm struggling. The place I'm working at has this installed on a couple of heavy forwarders (HFs), but neither seems to be generating any data. Looking in the internal logs, I can see the error below on both HFs (app version 1.1.2, Splunk 6.6):

ERROR ModularInputs - Introspecting scheme=file_meta_data: script running failed (exited with code 1).
ERROR ModularInputs - Unable to initialize modular input "file_meta_data" defined inside the app "file_meta_data": Introspecting scheme=file_meta_data: script running failed (exited with code 1).

I have also attempted to install and run it on a new HF that I set up, but I'm getting a different error there (latest app version 1.3, Splunk 7.0.5):

uiHelper processValueEdit operator failed for endpoint_path=data/inputs/file_meta_data/FileMonitorTest elementName=spl-ctrl_sourcetypeSelect: list index out of range
uiHelper submitValueEdit operator failed for endpoint_base=data/inputs/file_meta_data entity_name=FileMonitorTest elementName=sourcetype: invalid syntax (, line 1)

I'm not great with Python, so I'm struggling to figure out how to resolve these issues. Any assistance would be great. If additional information is required, I'm happy to provide it.

On the search head, why am I not able to see which heavy forwarder the logs are coming from?

I have 3 heavy forwarders. Firewall logs are sent to all of the heavy forwarders and then forwarded to the indexer. But when I search from the search head, I am not able to tell which heavy forwarder the logs were forwarded from.
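One approach (my suggestion, not from the thread) is to stamp each heavy forwarder's identity onto events as an indexed field, using the `_meta` setting in inputs.conf. The field name `hf_name` and value `hf01` below are hypothetical placeholders, and this only applies to inputs the HF itself receives (e.g. the syslog feeds); events already tagged by a universal forwarder keep their original metadata:

```
# inputs.conf on each heavy forwarder (use a different value per HF)
[default]
_meta = hf_name::hf01
```

Events ingested by that HF then carry the field, searchable as `hf_name::hf01` (or as a normal field if the search head has a fields.conf entry marking `hf_name` with INDEXED = true).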

From a Heavy Forwarder to an Indexer, how can I get Splunk to separate Windows and Linux logs into two different indexes?

So my issue is that I am not sure how to get Splunk to separate data on the indexer. I am listening on the forwarder on port 514 (for Linux syslog) and 6161 (for Windows event logs), and I use _TCP_ROUTING to send each to a tcpout target group associated with the indexer ports 9997 and 9998, which allows me to set an index for each splunktcp:// port. Am I doing this all wrong, and how can I get Splunk to separate the Windows and Linux logs into two different indexes?

Forwarder inputs.conf:

[scripts://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

[tcp://514]
_TCP_ROUTING = Linux

[tcp://6161]
_TCP_ROUTING = Windows

Forwarder outputs.conf:

[tcpout]
defaultGroup = Windows, Linux

[tcpout:Windows]
server = (server ip):9997

[tcpout:Linux]
server = (server ip):9998

Indexer inputs.conf:

[default]
host = somehost1

[tcp://9997]
index = windowseventlogs
connection_host = dns

[tcp://9998]
index = linuxauditlogs
connection_host = dns
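For what it's worth, a simpler sketch (my suggestion, not from the thread): since a heavy forwarder parses data, the index can be assigned directly on its inputs, with a single output group and a single splunktcp receiving port; cooked data keeps the index assigned upstream. Ports and index names mirror the question, `primary` is a placeholder group name:

```
# Heavy forwarder inputs.conf
[tcp://514]
index = linuxauditlogs
sourcetype = syslog

[tcp://6161]
index = windowseventlogs

# Heavy forwarder outputs.conf
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = (server ip):9997

# Indexer inputs.conf: splunktcp:// for Splunk-to-Splunk traffic
[splunktcp://9997]
connection_host = dns
```

Note the receiving stanza type: [tcp://] treats incoming data as a raw TCP stream, while [splunktcp://] is for traffic from other Splunk instances.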

How can one determine at the system level whether a Splunk install is a Heavy Forwarder or an Indexer?

Hi team, I'm looking for a way to identify whether a Splunk server is a heavy forwarder or an indexer in an automated way. Is there a way to find out, by looking at filesystems, processes, or running commands on the system, what the role of the server is in Splunk? I'm looking forward to your feedback; it's highly appreciated. Thanks.
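There is no single flag marking the role, but the configuration files give a strong hint: a forwarder has a `[tcpout]` stanza in some outputs.conf, while an indexer typically has a `[splunktcp...]` receiving stanza in an inputs.conf. A minimal sketch of that heuristic (my own, not an official Splunk check):

```shell
#!/bin/sh
# Heuristic role detection from config files under $SPLUNK_HOME/etc.
# Pass the Splunk install directory as the first argument.
splunk_role() {
    etc="$1/etc"
    if grep -rqs '^\[tcpout' "$etc"; then
        echo "forwarder"      # heavy forwarder if it also parses / runs Splunk Web
    elif grep -rqs '^\[splunktcp' "$etc"; then
        echo "indexer"        # listens for Splunk-to-Splunk traffic
    else
        echo "unknown"
    fi
}
```

Treat the output as a hint rather than truth: a combined indexer that also forwards would match the first branch, so on ambiguous hosts check for indexes.conf stanzas with homePath (indexer) as well.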

How do I line break winevent log events after a Universal Forwarder (UF) sends them to a Heavy Forwarder (HF)?

I have UFs (managed by a DS) on Windows endpoints sending winevents to a HF. The HF receives the events, sends everything to the indexers cooked, and simultaneously sends uncooked data to a 3rd-party application. I have been asked to create some line breaks in the (uncooked) events via the HF before sending to the 3rd-party app. Please advise how I might accomplish this. I am thinking about adding a LINE_BREAKER attribute but am not sure where I could place it. The 3rd-party application needs event breaks, and I am thinking that this cannot be done when sending uncooked data to the 3rd-party application, because uncooked would bypass any and all props and transforms. I don't think this is possible, but I'm looking for confirmation one way or the other. Thank you
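My reading of the docs, offered as a sketch rather than a confirmed answer: a third-party feed is configured as an output group with `sendCookedData = false`, and since props/transforms act on the cooked event pipeline, a LINE_BREAKER would not reshape the raw stream sent to that group. Host and port below are placeholders:

```
# outputs.conf on the heavy forwarder
[tcpout:thirdparty_raw]
server = thirdparty.example.com:5140
sendCookedData = false
```

If the third party needs broken events, the usual alternatives are to have the receiver do its own line breaking, or to use a syslog-type output group, which sends one syslog message per event.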

Is there any way to process data with a heavy forwarder and then send the formatted data uncooked?

I have raw data that I need to parse and break per event timestamp, but I need to send it uncooked (with the event breaks) to a NiFi listener node. I don't believe this is possible. Please confirm. Thank you

What architecture will work in this Splunk Distributed Environment?

Hi Team, I have infrastructure located globally across multiple sites, around 10 to 15, generating approximately 1 TB of log volume a day. I need Splunk expertise suggestions on what architecture will suit this use case. I have given a few options below; it would be great if someone could give me input on them. WAN link speed is 20-30 Mbps from each site.

Option 1:
1. Set up heavy forwarders at each location with load balancing.
2. Set up an indexer cluster and a search head cluster at the main data center.
The heavy forwarders at each location collect data from their local site devices and send it to the indexer cluster peer nodes at the main data center; the search head cluster performs all searches and data visualizations by pulling data from the main data center indexer cluster.

Option 2:
1. Set up heavy forwarders at each location with load balancing.
2. Set up an indexer cluster at each location.
3. Set up a search head cluster at the main data center.
The heavy forwarders at each location send data to their own site's indexer cluster peer nodes, and the search head cluster at the main data center is configured to search across all locations' indexer clusters and perform search operations and data visualization.

Option 3:
1. Set up heavy forwarders at each location with load balancing.
2. Set up an indexer cluster at each location.
3. Set up a single search head at each location.
4. Set up a search head cluster at the main data center.
The heavy forwarders at each location send data to their own site's indexer cluster peer nodes; the local search heads are configured to search events from their individual sites, and the main data center search head cluster provides centralized dashboards across all search head data.
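As a back-of-the-envelope check on the WAN links (my own arithmetic, assuming the 1 TB/day is spread evenly across 15 sites and ignoring compression and protocol overhead):

```shell
#!/bin/sh
# 1 TB/day as a sustained bit rate, then split across 15 sites
TB_BITS=8000000000000          # 1 TB = 8e12 bits (decimal TB)
SECONDS_PER_DAY=86400
SITES=15

total_mbps=$(( TB_BITS / SECONDS_PER_DAY / 1000000 ))
per_site_mbps=$(( TB_BITS / SECONDS_PER_DAY / SITES / 1000000 ))

echo "aggregate: ${total_mbps} Mbps, per site: ${per_site_mbps} Mbps"
```

That works out to roughly 92 Mbps aggregate and about 6 Mbps per site, so an even spread fits within a 20-30 Mbps link for Option 1, but bursty sources or an uneven site split could still saturate individual links.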

Without having access to the universal forwarder, can I check whether it is sending data to the heavy forwarder?

Hi All, I am relatively new to Splunk. In my environment we are using a deployment server to manage the deployment apps on universal forwarders. During the installation of the universal forwarders, we specify the deployment server in deploymentclient.conf, but we have not configured anything about forwarding the data to the heavy forwarder (HF). On the web interface of our heavy forwarders, under Forwarding and receiving, I cannot see any configuration set up. How can I check whether the universal forwarders are sending data to the HF? Are the indexers or the data being managed by the deployment server? I don't have access to the universal forwarders, as they are managed by a different team, so I have to check the configuration on the HF, indexers, or deployment server. Regards, Rohit
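One place to look without touching the UFs (a sketch; run it wherever the forwarders' _internal logs are searchable, and note that field names in metrics.log can vary by version): every inbound forwarder connection shows up in the receiver's tcpin_connections metrics:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname sourceIp
```

Forwarders also send their own _internal data along with everything else, so `index=_internal host=<uf_hostname>` returning recent events is itself evidence that a given UF is forwarding into this environment.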

How to configure IP range in inputs.conf on heavy forwarder

I have logs coming to a heavy forwarder being stored under directories based on IPs (i.e. "/var/log/remote/192.168.1.6"). How do I use inputs.conf to capture a range of IPs while setting the index and sourcetype? This doesn't work:

[monitor:///var/log/remote/192.168.1.*/*.log]
host_segment = 4
sourcetype = bar
index = foo
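A sketch of one alternative (the whitelist regex below is mine and assumes the path layout shown): monitor the parent directory and filter files with `whitelist`, which inputs.conf matches against the full path:

```
[monitor:///var/log/remote]
whitelist = 192\.168\.1\.\d+/[^/]+\.log$
host_segment = 4
index = foo
sourcetype = bar
```

Narrowing the regex (e.g. `192\.168\.1\.([1-9]|1[0-9])/`) would then restrict the input to a specific IP range rather than the whole subnet.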

Why am I getting the following HTTP Event Collector (HEC) error: {"text":"Invalid token","code":4}?

I created hundreds of HEC tokens and put them in an app, which has been pushed down to several heavy forwarders. Most of them are working fine, but strangely, several of them are not working and give the following invalid-token error. They are all in inputs.conf just like the working ones in the app. The configurations all appear okay to me.

curl -k https://splunk-hec.abc.net:8088/services/collector -H 'Authorization: Splunk adf401c2-43ef-4689-a56c-ba47f907eca8' -d '{"sourcetype": "https:hectest", "event":"HEC Token Test", "index":"index_hectest"}'
{"text":"Invalid token","code":4}
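For comparison, a minimal token stanza sketch (values mirror the curl example above; the GUID in `token` must match the Authorization header exactly, the stanza must not be disabled, and the target index must exist on the receiving instance):

```
# inputs.conf in the HEC app on the heavy forwarder
[http://hectest]
token = adf401c2-43ef-4689-a56c-ba47f907eca8
index = index_hectest
disabled = 0
```

With hundreds of tokens behind a load-balanced hostname, it is also worth verifying that the failing request actually landed on an HF that received the app push and reloaded its HEC inputs; a token missing on just one HF produces intermittent "Invalid token" responses.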

What are the basic troubleshooting steps when a UF/HF is not forwarding data to Splunk?

Most of the time we have seen that the Splunk universal forwarder or heavy forwarder fails to forward data to the indexer. In this scenario, what troubleshooting steps can we use to start the investigation?
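A few starting points I typically use (a sketch, not an exhaustive runbook; substitute the forwarder's host name):

```
# 1. Is the forwarder's own internal data arriving at all?
index=_internal host=<forwarder_name> | stats count BY sourcetype

# 2. Output/connection errors logged by the forwarder
index=_internal host=<forwarder_name> sourcetype=splunkd (ERROR OR WARN) TcpOutputProc

# 3. Blocked queues on the forwarder
index=_internal host=<forwarder_name> source=*metrics.log* group=queue blocked=true
```

If nothing reaches the indexers at all, checking on the forwarder itself is the fallback: $SPLUNK_HOME/var/log/splunk/splunkd.log for connection errors, and `splunk btool outputs list --debug` to confirm which outputs.conf settings actually won.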

Monitoring saturation of event-processing queues in Heavy Forwarders

I have a distributed environment with multiple indexers, search heads, and a pair of heavy forwarders. Over the last few days, one of my HFs has started to alert on an issue: the Monitoring Console's Health Check is warning "Saturation of event-processing queues". Besides that, the HF's performance has decreased a lot, delaying event delivery and failing script executions, and splunkd is consuming 100% of its CPU core full time. Checking the docs (*Identify and triage indexing performance problems*), they suggest determining the queue fill pattern through *Monitoring Console > Indexing > Indexing Performance: Instance*, but it seems that applies only to indexers, not to HFs. Please, how could I discover what is causing this issue? How could I monitor it, I mean see when it starts and how long it lasts, in order to cross-reference with other systems' behavior? Is such info available in the Monitoring Console? Thanks in advance and regards, Tiago
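The queue fill data those docs describe lives in metrics.log on every instance, heavy forwarders included, so it can be charted directly even though the Monitoring Console panel targets indexers. A sketch (substitute your HF's host name):

```
index=_internal host=<hf_name> source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m perc90(pct_full) BY name
```

The first queue in the pipeline that pins at 100% (parsingqueue, aggqueue, typingqueue, indexqueue) usually points at the stage causing the saturation, and the timechart shows when the condition started and how long each episode lasts.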

What is causing the following warning from the Monitoring Console's Health Check: "Saturation of event-processing queues"?

I have a distributed environment with multiple indexers, search heads, and a pair of heavy forwarders. But over the last few days, one of my heavy forwarders started to alert on an issue: the Monitoring Console's Health Check is warning "Saturation of event-processing queues". Besides that, the heavy forwarder's performance has decreased a lot, delaying event delivery and failing script executions. splunkd is consuming 100% of its CPU core full time. Checking the docs (*Identify and triage indexing performance problems*), they suggest determining the queue fill pattern through *Monitoring Console > Indexing > Indexing Performance: Instance*. But it seems that applies only to the indexers, not to the heavy forwarder. Please, how could I discover what is causing this issue? How could I monitor it? How can I see when it starts and how long it takes, in order to cross-reference with other systems' behavior? Is such info available in the Monitoring Console? Thanks in advance and regards, Tiago

How to index and use a huge volume of unstructured data - Splunk HWF and SH cluster?

Hi All, We are working in a clustered environment where Splunk is fetching logs from various servers. On the source servers we have set up a Splunk heavy forwarder, which forwards the data to the load-balanced HFs and then to the indexers. Now the issue we face is that our logs are in a nested JSON/unstructured format and of huge volume. This is making the searches too slow, and they crash. We have tried index-time extractions, but that is also slow due to the volume. Could you please suggest a workaround for this? TIA
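One pattern worth testing (a sketch; the sourcetype name is a placeholder and results depend heavily on the data): skip index-time extractions entirely, make sure events are broken cleanly at index time, and let search-time JSON extraction do the rest:

```
# props.conf for the nested-JSON sourcetype
[my_nested_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 0
KV_MODE = json
```

For deep keys, `| spath path=payload.response.status` (path is illustrative) extracts only what a given search needs instead of auto-extracting every nested field from every event.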

Heavy Forwarder stopped sending data

Hello, let's say we have a heavy forwarder forwarding logs to group A (which consists of two indexers) and group B (one HF). Group B does not load balance; group A does. My question is: what will the heavy forwarder do with the data if group A loses connectivity? Does the HF keep sending data to group B? Thanks in advance.

How to avoid data loss on HF on restart

I have the ServiceNow add-on and DB Connect on a heavy forwarder, so I can't use multiple HF instances without causing data duplication and licensing issues. Both apps, ServiceNow and DB Connect, are in real-time sync, and I also need to change props and transforms frequently. In this case, how do I avoid data loss when restarting? Will just using indexer acknowledgement resolve it?
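Indexer acknowledgement is enabled per output group in outputs.conf (a sketch with placeholder hosts). It protects events already read into the output queue across connection drops and restarts by holding them until the indexer acknowledges receipt, but it does not cover data a modular input has not yet collected, so the checkpointing behavior of the ServiceNow and DB Connect inputs still matters for restart safety:

```
# outputs.conf on the heavy forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```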

Impact of installing syslog-ng alongside a universal forwarder

Hello Splunkers, I have a requirement wherein I need to forward data to a third-party system apart from sending logs to Splunk. What is the impact of having syslog-ng alongside a universal forwarder that sends almost the same amount of data (mostly 75% the same data) to a third-party system? Will this cause a performance issue like the parsing queue getting filled, or network bandwidth consumption? Which is the best way to integrate Splunk with a third-party system?
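One alternative to running syslog-ng at all (a sketch; host, port, and group name are placeholders): a heavy forwarder, though not a universal forwarder, can itself send events to a third-party syslog receiver via a syslog output group in outputs.conf:

```
# outputs.conf on a heavy forwarder
[syslog:thirdparty]
server = collector.example.com:514
type = udp
```

This keeps one collection path instead of two parallel agents reading the same sources, at the cost of requiring a heavy forwarder where the data is parsed.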

IIS Heavy Forwarder Translation

We are working through a staged migration where two Splunk instances will be running in parallel for a while before we switch over. Because naming conventions are fun, we are going to adopt an entirely new convention for index naming in the new system. To handle this, we have a 7.2 heavy forwarder set up to do index translation based on the sending host and then send to our 7.2 development environment. We do not want to make any changes to the endpoints beyond a new outputs.conf file. Right now the UFs are set up to send twice, once to legacy and once to the new development heavy forwarder. I was cruising along just fine with Linux machines and test Windows machines until I hit the sourcetype [iis] a few unproductive days ago. For some reason, a simple "take anything from this host and send it into this new index" statement like the one below is not working for that one sourcetype, and events continue to be sent to the legacy index on the new indexer. All of the other sourcetypes processed from that host, including anything destined for _ indexes, are being sent to the new index (which is fine with me during the transition period).

#props.conf
[host::LIB-IISTEST1]
TRANSFORMS-index-lib-iistest1 = host_index_routing_lib-iistest1

#transforms.conf
[host_index_routing_lib-iistest1]
DEST_KEY = _MetaData:Index
REGEX = .*
FORMAT = servers-windows_library

I have gone as far as trying to hijack all of the iis sourcetypes and send them to a new sourcetype named iis_translated, and that is not working either. I suspect that it is related to iis being a known sourcetype and something with the parsed data. Any suggestions?

How do I make my heavy forwarder my deployment server?

I have a Splunk Cloud instance and a heavy forwarder that sends all my data into my cloud instance. I will now be installing a universal forwarder to get Windows Active Directory data in, and I will point the universal forwarder to my heavy forwarder. Now, my question is: how do I make my heavy forwarder, which is already configured, into a deployment server as well? I would also like to know how I can tell whether my heavy forwarder is already set up as a deployment server. (I didn't set up the HF; someone else did.) Thanks
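A quick heuristic for the second question (my own sketch, not an official check): a deployment server is driven by a serverclass.conf, so the presence of that file under $SPLUNK_HOME/etc is a strong sign the instance is already one:

```shell
#!/bin/sh
# Pass the Splunk install directory (SPLUNK_HOME) as the first argument.
is_deployment_server() {
    if find "$1/etc" -name serverclass.conf 2>/dev/null | grep -q .; then
        echo "yes"   # serverclass.conf found: deployment server is configured
    else
        echo "no"
    fi
}
```

On an instance that really is serving clients, `splunk list deploy-clients` also returns the forwarders that have phoned home, which confirms it from the other direction.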

How do I make my heavy forwarder, which is already configured, into a deployment server?

I have a Splunk Cloud instance and a heavy forwarder that sends all my data into my cloud instance. I will now be installing a universal forwarder to get Windows Active Directory data in, and I will point the universal forwarder to my heavy forwarder. Now, my question is: how do I make my heavy forwarder, which is already configured, into a deployment server as well? I would also like to know how I can tell whether my heavy forwarder is already set up as a deployment server. (I didn't set up the HF; someone else did.) Thanks