Channel: Questions in topic: "heavy-forwarder"

Why does my heavy forwarder stop sending events to indexers with error "TcpOutputProc - Possible duplication of events..."?

Hi, I'm fed up with this issue: one of my heavy forwarders stops sending events to the indexers. After a restart it sends logs again, but some time later the same issue recurs. While it's stuck, I get these kinds of errors in splunkd.log:

```
04-13-2017 15:29:01.295 -0500 WARN TcpOutputProc - Possible duplication of events with channel=source::cloudfoundry_sys|host::10.32.120.185|syslog|remoteport::51489, streamId=14010656165184201000, offset=2718 subOffset=1 on host=10.30.71.151:9997
04-13-2017 15:29:01.295 -0500 WARN TcpOutputProc - Possible duplication of events with channel=source::cloudfoundry_sys|host::10.32.120.162|syslog|remoteport::51487, streamId=6389163824992962214, offset=3263 subOffset=1 on host=10.30.71.151:9997
04-13-2017 15:29:01.295 -0500 WARN TcpOutputProc - Possible duplication of events with channel=source::cloudfoundry_sys|host::10.32.122.125|syslog|remoteport::51487, streamId=6389163824992962214, offset=3265 subOffset=3 on host=10.30.71.151:9997
04-13-2017 15:29:01.295 -0500 WARN TcpOutputProc - Possible duplication of events with channel=source::cloudfoundry_sys|host::10.32.121.54|syslog|remoteport::51490, streamId=1903603338508299248, offset=2895 subOffset=1 on host=10.30.71.151:9997
04-13-2017 15:29:01.295 -0500 WARN TcpOutputProc - Possible duplication of events with channel=source::tcp:15000|host::10.32.123.151|log4j|remoteport::49826, streamId=922401461405077376, offset=5793 subOffset=1 on host=10.30.71.151:9997
```

Please post any answers you have.
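This warning is typically logged when the forwarder has `useACK` enabled, does not receive an acknowledgment in time (often because the indexer's queues are blocked), and re-sends the stream on a new connection. A minimal outputs.conf sketch of the settings commonly tuned for this, assuming the indexer shown in the warnings; verify indexer queue health before changing anything:

```
# outputs.conf on the heavy forwarder -- a hedged sketch, not a confirmed fix
[tcpout:primary_indexers]
server = 10.30.71.151:9997
useACK = true
# How long to wait on an unresponsive connection before reconnecting and
# re-sending (the re-send is what triggers the duplication warning)
readTimeout = 300
writeTimeout = 300
```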

Heavy forwarder not doing load balancing properly

I am forwarding data from one log file from 1 heavy forwarder to 2 indexers, but the heavy forwarder is sending data only to Indexer2. **I confirmed this by running a query on my search head and checking the value of the field "splunk_server": it showed just one indexer, i.e. Indexer2.**

**outputs.conf**

```
[indexAndForward]
index = false

[tcpout]
defaultGroup = grp
forwardedindex.filter.disable = true

[tcpout:grp]
disabled = 0
# server = 00.000.0.00:9997,00.000.0.00:9997
server = Indexer1:9997,Indexer2.synaptics.com:9997
useACK = true
forceTimebasedAutoLB = true
```

**inputs.conf**

```
[monitor:///var/log/Folder1/Folder2]
host_segment = 5
index = SomeIndex
sourcetype = SomeSourcetype
disabled = 0
```

**props.conf**

```
[SomeSourcetype]
DATETIME_CONFIG =
MAX_TIMESTAMP_LOOKAHEAD = 32
NO_BINARY_CHECK = true
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = false
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS = syslog-host
category = Operating System
description = Somedescription
disabled = false
maxDist = 3
pulldown_type = true
```

Output of `./splunk list forward-server`:

```
Active forwards:
    Indexer1:9997
    Indexer2.synaptics.com:9997
Configured but inactive forwards:
    None
```

I am able to ping both indexers, and packets are being sent; I verified this with the Linux command `tcpdump dst indexer1`. **Please help.**
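By default a forwarder switches indexers only between data streams, so a single continuously-growing file can stay pinned to one indexer. `forceTimebasedAutoLB` (already set above) is the usual remedy; the related knob is `autoLBFrequency`, which controls how often the forwarder switches targets. A hedged sketch of the relevant stanza; 10 seconds is an illustrative value, not a recommendation:

```
# outputs.conf -- hedged sketch; autoLBFrequency defaults to 30 seconds
[tcpout:grp]
server = Indexer1:9997,Indexer2.synaptics.com:9997
useACK = true
forceTimebasedAutoLB = true
autoLBFrequency = 10   # switch target indexer more often for a single hot file
```

Also worth checking: `Indexer1` is a short hostname while `Indexer2.synaptics.com` is fully qualified; if `Indexer1` resolves inconsistently from the heavy forwarder, the forwarder may quietly favor the reachable peer.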

How to use a heavy forwarder to selectively forward WinEventLog:Security?

I'm trying to use a heavy forwarder to forward just the WinEventLog:Security logs. Can someone please tell me how to do it?
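One common pattern is to route only the desired sourcetype with `_TCP_ROUTING` and define no default output group, so nothing else gets forwarded. A hedged sketch; the group and server names are illustrative:

```
# props.conf on the heavy forwarder -- hedged sketch
[WinEventLog:Security]
TRANSFORMS-route_security = route_security_only

# transforms.conf
[route_security_only]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = security_group

# outputs.conf -- note there is no defaultGroup, so only events that a
# transform routes to security_group are forwarded
[tcpout:security_group]
server = indexer.example.com:9997
```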

Splunk DB Connect 2.4: How to resolve "AuthenticationError: Request failed: Session is not logged in" error on Heavy Forwarder?

Hello guys, I have a problem with Splunk DB Connect. DB Connect 2.4 is installed on a heavy forwarder, and I'm using a search head cluster:

```
File "$SPLUNK_HOME/db_connect/bin/dbx2/splunk_client/../../splunk_sdk-1.5.0-py2.7.egg/splunklib/binding.py", line 300, in wrapper
    "Request failed: Session is not logged in.", he)
AuthenticationError: Request failed: Session is not logged in.
```

Any help?

Should I use a heavy forwarder or an indexer cluster for my particular scenario?

Hi, I'm hoping for some advice, as I'm trying to understand the best way to configure Splunk components in the scenario below.

I have two datacentres (DC) that operate as active/passive. Datacentre A (DCA) will be the active DC running all services, and within it I will have a few hundred Windows machines with universal forwarders installed. My current plan is to create an indexer cluster consisting of two indexers, not to share load but to allow increased processing. There will then be a single standalone search head and a single cluster master instance, giving me a total of 4 separate machines in DCA. I understand this is the first way to start scaling out, so in the future it would be easy to add more indexers or move to a search head cluster if required. Given the volume I am expecting to process, I think I would be following a Splunk 'Small Enterprise' deployment.

The first bit I am unclear on is forwarding from this cluster. If I wanted the indexer cluster in DCA to forward data on to a third-party SOC, for example, is that possible? I think where I'm getting confused is having read that an indexer that forwards is actually a 'heavy forwarder', not an indexer. Can an indexer cluster forward too?

If this is possible, it answers my second question. I want to mirror the DCA setup in a branch office that might have a poor link. If the link went down, could the Splunk indexer cluster be configured to continue processing data locally and forward it on to DCA once the link was back online? Originally I was thinking I would just use a heavy forwarder in the branch office, but that was because it seemed to me that indexer clusters could not forward data. I'm just not sure whether I need a heavy forwarder or an indexer cluster for this setup. I assume you can't cluster heavy forwarders, so there would be processing constraints there?

Many thanks! M
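An indexer can index and forward at the same time, clustered peers included. A hedged sketch of an outputs.conf that could be pushed to the peers via the master's configuration bundle; the SOC hostname and port are illustrative:

```
# outputs.conf on each cluster peer -- hedged sketch: keep indexing locally
# while also streaming an uncooked copy to a third-party SOC receiver
[indexAndForward]
index = true

[tcpout]
defaultGroup = soc

[tcpout:soc]
server = soc.example.com:514
sendCookedData = false   # third parties cannot read Splunk's S2S format
```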

Is there a way to change collection interval for HTTP Event Collector?

I am using HTTP Event Collector to collect Symantec ATP logs; my current ingest rate varies based on log size, typically around 2,000-5,000 logs every minute. My log source is generating between 1.5M and 3M events per day, while the collector is averaging about 480k-960k events per day. This is putting me into a logging deficit where I am unable to keep up with log generation. I am looking to change the interval to every 5 seconds, or to vastly increase the collection rate. I am for the most part on default settings; the event collector is running on a heavy forwarder and forwarding to an indexer cluster. We have tried pointing at a single indexer, but performance did not change.
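The polling interval itself is set on the client side (the ATP appliance decides how often it POSTs to HEC); on the Splunk side you can only widen the pipe. Two settings commonly tuned are sketched below; values are illustrative, and indexer-side queue health is worth checking first, since a slow downstream will throttle HEC regardless:

```
# inputs.conf on the heavy forwarder -- hedged sketch
[http]
disabled = 0
dedicatedIoThreads = 4   # more I/O threads for the HEC listener

# server.conf on the heavy forwarder -- hedged sketch; each extra pipeline
# costs additional CPU and memory
[general]
parallelIngestionPipelines = 2
```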

How to get data from host to heavy forwarder to indexer?

I have an indexer, search head, heavy forwarder, and license master server configured. I also have a test server (host) with the Splunk agent installed. I am new to all of this and standing it up in our test environment, and wanted to get started by getting logs from the test server, to the heavy forwarder, to the indexer, so they can be parsed and searched from the search head. I'm not quite sure how to make all this happen, even after reading docs and watching some videos, so I'd be happy to try any of your suggestions. Thanks in advance.
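A hedged sketch of the minimal chain; all hostnames are illustrative. Each hop needs a listener (inputs.conf) on the receiving side and a tcpout group (outputs.conf) on the sending side:

```
# On the test server (universal forwarder): outputs.conf
[tcpout]
defaultGroup = to_hf
[tcpout:to_hf]
server = hf.example.com:9997

# On the heavy forwarder: inputs.conf (listen for forwarder traffic)
[splunktcp://9997]
disabled = 0

# On the heavy forwarder: outputs.conf
[tcpout]
defaultGroup = to_idx
[tcpout:to_idx]
server = indexer.example.com:9997

# On the indexer, enable receiving once from the CLI:
#   $SPLUNK_HOME/bin/splunk enable listen 9997
```

Also add a `[monitor://...]` stanza to inputs.conf on the test server for whichever log file you want to collect, then restart each instance after its config change.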

How to prepend hostname to raw events

This is my current Splunk setup:

```
[User Device] --TCP Syslog--> [HeavyForwarder] --TCP Stream--> [Indexer] --TCP Stream--> [Netcat]
```

Syslog data is being forwarded to a heavy forwarder via TCP syslog, and the HF then forwards the data via TCP stream to an indexer. I'm having the indexer forward to a third-party server listening with netcat. The problem is that on netcat I can see the syslog message, but I need (hostname + syslog message). Can someone help with this?
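One option worth testing is Splunk's syslog output type instead of a plain tcpout to the netcat listener: it wraps each event in a syslog header rather than sending bare raw bytes, which may give you the hostname+message shape you need. A hedged sketch; the server name and port are illustrative, and you should verify the exact header format against what the third party expects:

```
# outputs.conf on the indexer -- hedged sketch
[syslog]
defaultGroup = thirdparty

[syslog:thirdparty]
server = thirdparty.example.com:9998
type = tcp
```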

What are the best searches to monitor data flow activity from the Universal Forwarder to the Heavy Forwarder to the indexer?

Hi, I would like to monitor the Splunk data flow activity. What are the best Splunk searches to monitor data moving from the UF (universal forwarder) to the HF (heavy forwarder), and from the HF to the indexer?
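The `_internal` metrics data covers both hops. A couple of hedged starting points (field names come from metrics.log; adjust the spans and groupings to taste):

```
# What each receiver (HF or indexer) is getting from forwarders
index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) AS total_kb BY host, hostname, sourceIp

# What each forwarder (UF or HF) is sending out, over time
index=_internal source=*metrics.log* group=tcpout_connections
| timechart span=5m sum(kb) BY host
```

Here `host` is the instance writing the metrics, while `hostname`/`sourceIp` identify the forwarder on the other end of the inbound connection.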

What is the best way to filter events at Heavy Forwarder level?

Hi. I am trying to send logs from a bunch of universal forwarders (UF) to a heavy forwarder, which will then forward them to a SOC (managed service; we have a syslog receiver onsite). Currently all the logs are being indexed into Splunk, but I am planning to edit the outputs stanza on the UFs by adding another group with the heavy forwarder's IP address, so that it creates a data clone; I can then filter the data down at the HF before sending it to the SOC. I am trying to figure out the best method of filtering this data. These UFs are monitoring lots of application data in addition to the local event logs and other security logs. I am only interested in the local event logs (both Windows and Unix) and the security logs, and want to get rid of all other logs (nullQueue). What would be the best way to achieve this? Should I filter by source (i.e. whitelisting a number of sources), so that only the whitelisted sources are forwarded by the HF to the SOC and everything else from the data clone goes to nullQueue? I would highly appreciate it if someone could show me a config example. Thanks in advance!
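A hedged sketch of the "drop everything, then whitelist" pattern on the HF. Transforms run in the order listed, so the keep rule overrides the drop rule for matching events; the whitelist regex is purely illustrative:

```
# props.conf on the heavy forwarder -- hedged sketch
[default]
TRANSFORMS-soc_filter = drop_everything, keep_whitelisted

# transforms.conf
[drop_everything]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_whitelisted]
SOURCE_KEY = MetaData:Source
REGEX = source::(WinEventLog.*|/var/log/secure.*|/var/log/audit/.*)
DEST_KEY = queue
FORMAT = indexQueue
```

Note this only works on a parsing-capable instance like the HF, not on the UFs themselves.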

Use a Heavy Forwarder to Receive Unencrypted Traffic and Send Encrypted

Hi, I have set up a heavy forwarder, with the Palo Alto TA installed, to accept unencrypted TCP traffic from a Palo Alto device on our local network. I would like to send the data on to our indexer in AWS encrypted with SSL. The indexer in AWS is already configured and working for receiving SSL-encrypted events. Is there any configuration that needs to be done on the heavy forwarder to allow this? Running tcpdump, I can see the unencrypted data coming in from the Palo Alto device, and I can see encrypted data going to our indexer; but all I can find on the indexer is hostname-related events in the _internal index, and no evidence of the pan:log sourcetype. Thanks
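A hedged sketch of the SSL client settings on the forwarder side, using the outputs.conf setting names from that era of Splunk; all paths and the password are illustrative and must match the certificates your AWS indexer actually trusts:

```
# outputs.conf on the heavy forwarder -- hedged sketch
[tcpout:aws_ssl]
server = indexer.example.com:9997
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslCertPath = $SPLUNK_HOME/etc/auth/forwarder.pem
sslPassword = changeme
sslVerifyServerCert = true
```

Since _internal events arrive but pan:log does not, the SSL channel itself appears to work; the more likely suspect is the local TCP input, e.g. the `[tcp://...]` stanza's sourcetype and index assignment on the HF.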

Cloudflare REST API Logs

Hello, I'm trying to pull Cloudflare logs using the REST API. It was recommended that I use the following app installed on a heavy forwarder: https://github.com/justinbatcf/splunk-logshare (a fork of the REST API Modular Input app). However, whenever Splunk starts, I get the error:

```
Unable to initialize modular input "cloudflare" defined inside the app "splunk-logshare-master": Introspecting scheme=cloudflare: script running failed (exited with code 1).
```

Does anyone have any idea where I would start troubleshooting this, or perhaps an alternate means of pulling these logs? Thanks, JG
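"Exited with code 1" during scheme introspection usually means the Python script crashed before it could print its scheme, so running it by hand with Splunk's bundled Python tends to surface the real traceback. A hedged sketch; the script filename is an assumption, so check the app's `bin/` directory for the actual entry point:

```
# Run the modular input script directly to see the underlying Python error;
# "cloudflare.py" is a guess at the script name -- substitute the real one
$SPLUNK_HOME/bin/splunk cmd python \
    $SPLUNK_HOME/etc/apps/splunk-logshare-master/bin/cloudflare.py --scheme
```

Also try `index=_internal source=*splunkd.log* ExecProcessor cloudflare` for stderr output captured at startup.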

Heavy Forwarder in DMZ

I have a segmented area of my network from which I want to pull logs from a couple of systems. Rather than configure firewall rules for each system's universal forwarder to reach my indexers on the internal network, I have opted to implement a heavy forwarder for all systems to talk through. This way I only have to punch one hole through the firewall, and I'm not directly exposing my indexers to multiple systems within the DMZ, which is publicly accessible.

On my heavy forwarder I have configured inputs.conf to accept splunktcp on 9997 and syslog on UDP 514 (for my network devices in the DMZ). outputs.conf is configured to send everything to my indexers, and web.conf is set to turn the web interface off. From my search head I am able to see the _internal logs from my heavy forwarder, so I know it's at least talking to the indexers.

Now, for my universal forwarders, I have set the following files, with the hope that deployment-client traffic would get routed through to the internal deployment server, and that all log data would also get passed along. So far, I cannot find anything from these hosts in any index.

```
############################################
$SPLUNKHOME\etc\system\local\deploymentclient.conf
############################################
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = :8089

############################################
$SPLUNKHOME\etc\system\local\outputs.conf
############################################
[tcpout]
server = :9997
############################################
```

I would assume that these two files would at least allow data to be sent to the indexers; however, nothing is showing up. As for my deployment-client traffic, would I need to open 8089 in my inputs.conf? How would I route the traffic from there?
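Two things are worth separating here. First, data forwarding: the DMZ UFs should point their `[tcpout]` at the heavy forwarder, not at the indexers, and the HF must be listening on splunktcp. A hedged sketch with an illustrative hostname:

```
# On each DMZ universal forwarder: outputs.conf -- hedged sketch
[tcpout]
defaultGroup = dmz_hf

[tcpout:dmz_hf]
server = hf.dmz.example.com:9997

# On the heavy forwarder: inputs.conf
[splunktcp://9997]
disabled = 0
```

Second, deployment-server traffic: phone-home is REST over 8089 and does not travel through the HF's data pipeline, so a splunktcp input cannot carry it. The usual options are a firewall rule letting the DMZ hosts reach the internal deployment server on 8089 directly, or running a separate deployment server inside the DMZ.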

Is it possible to forward data to third-party systems in other formats than syslog and raw?

Is it possible to forward Splunk events in JSON format (containing all fields) to some external TCP endpoint using a heavy forwarder? I found that it is possible to send cooked data, but I couldn't find specs for this format. Is it possible to consume this kind of data in external TCP endpoints, or is it a Splunk-internal format which shouldn't be used outside of Splunk?
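Cooked data is Splunk's proprietary S2S (Splunk-to-Splunk) wire format; it isn't publicly specified and isn't intended for third-party consumers. For external endpoints the forwarder can send plain raw bytes instead; as far as I know there is no built-in "all fields as JSON" TCP output here, so any JSON enrichment would have to happen before or after Splunk. A hedged sketch of the raw option, with an illustrative endpoint:

```
# outputs.conf on the heavy forwarder -- hedged sketch
[tcpout:external_endpoint]
server = endpoint.example.com:9000
sendCookedData = false   # send raw event text rather than S2S frames
```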

VIP and heavy forwarder setup for HEC

Can I use the same HEC token on all HFs which sit behind a VIP, and set up clients to send data to the VIP IP? The purpose is to keep the HFs' configuration identical and share the load. Is this a good idea?
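Using identical HEC configuration on every HF behind a load balancer is a common pattern; tokens are just shared secrets defined in inputs.conf, so the same token value can exist on every node. A hedged sketch, with an illustrative token GUID and stanza name:

```
# inputs.conf, deployed identically to every HF behind the VIP -- hedged sketch
[http]
disabled = 0

[http://app_events]
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = app:events
```

One caveat: if senders use HEC indexer acknowledgment (the channel/ack feature), the VIP needs session stickiness, because ack state lives on the individual HF that received the request.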

Index replication for a custom index

Hello, I have created a new index, DAP, on the cluster master and shared the configuration of this new indexes.conf with all peers: I put the file in the cluster master folder `../_cluster/local/` and then distributed the bundle to all peers. I have CM1, IDX1, IDX2, IDX3 and IDX4. From the heavy forwarder I am forwarding data to IDX2, index name DAP, and in search I am able to find the data. My questions are:

1) Why is the DAP index not replicating? In the search head I always get my data from the DAP index on IDX2, never from the other IDXs.

2) Can I forward heavy forwarder data directly to the cluster master's DAP index? Will that work?

My conf on the HF is:

```
# inputs.conf
[monitor:///tmp/Gov.csv]
disabled = false
index = dap
_TCP_ROUTING = DAP

# outputs.conf
[tcpout:DAP]
server = IDX2:9997
useACK = true

[tcpout-server://IDX2:9997]
```
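Two separate issues are likely at play. Data lands only on IDX2 because outputs.conf lists only IDX2; listing all four peers lets auto load balancing spread events. And a custom index is not replicated unless its indexes.conf stanza sets `repFactor = auto`. The cluster master itself does not index data, so forwarding to it will not work. A hedged sketch (paths are illustrative):

```
# indexes.conf in the cluster bundle -- hedged sketch
[dap]
homePath   = $SPLUNK_DB/dap/db
coldPath   = $SPLUNK_DB/dap/colddb
thawedPath = $SPLUNK_DB/dap/thaweddb
repFactor  = auto        # required for the index to replicate across peers

# outputs.conf on the heavy forwarder -- hedged sketch: list all peers,
# never the cluster master
[tcpout:DAP]
server = IDX1:9997,IDX2:9997,IDX3:9997,IDX4:9997
useACK = true
```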

Why is the custom date time path on indexers not working?

I have configured a custom datetime_custom.xml. It works on the heavy forwarder (HF) with props.conf on the HF, but when I deployed it to the indexers, the indexers are not reading the settings. `DATETIME_CONFIG=/etc/apps/testing/local/datetime.xml` worked fine on the HF; `DATETIME_CONFIG=/etc/slave-apps/testing/local/datetime.xml` is not working on the indexers. Do I need to change the path on the indexers?
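The path looks plausible (DATETIME_CONFIG is resolved relative to $SPLUNK_HOME, and cluster-pushed apps do live under etc/slave-apps), so the more likely explanation is pipeline order: timestamp extraction happens once, at the first heavy instance that parses the event. If this data reaches the indexers via the HF, it arrives already cooked and the indexer-side props never run; the setting only matters on indexers for data they parse themselves (e.g. direct UF or network inputs). A hedged sketch of the indexer-side stanza for that case, with an illustrative sourcetype name:

```
# props.conf in the cluster-pushed app on the indexers -- hedged sketch;
# only applies to data the indexers parse themselves, not HF-cooked data
[your_sourcetype]
DATETIME_CONFIG = /etc/slave-apps/testing/local/datetime.xml
```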

Is it possible to read and monitor Windows server files from a Linux Heavy Forwarder?

I have an environment where it's going to be a hassle to add a new Windows server. However, we have a file on a Windows server that we would like to monitor and log. Is it possible to do that from a Linux heavy forwarder, using samba/cifs so we can map the drive? Or, as this answer implies ( https://answers.splunk.com/answers/27269/using-fschange-to-monitor-files-on-linux-server-from-windows-splunk-server.html ), will that cause more problems than it's worth? Thanks.
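Monitoring a CIFS mount works like monitoring any local path, with the usual caveats about mount reliability (a dropped mount can make the monitor re-read or miss data). A hedged sketch; the share name, mount point, index, and sourcetype are all illustrative:

```
# On the Linux HF, mount the Windows share (e.g. in /etc/fstab or manually):
#   mount -t cifs //winserver/logs /mnt/winlogs -o ro,credentials=/etc/cifs.cred

# inputs.conf on the heavy forwarder -- hedged sketch
[monitor:///mnt/winlogs/app.log]
index = main
sourcetype = win:app
disabled = 0
```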

Why is my props.conf to extract timestamp from events not working for HTTP Event Collector?

I am getting data in via Splunk HTTP Event Collector (HEC) and forwarding it to the indexers. On the heavy forwarder where HEC is installed, I have a props.conf to extract the timestamp from events, but it is not working. If I ingest a test event for the same sourcetype, extraction works; if I ingest the same data through HEC, the time is not extracted. Is there a known issue here?
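This is expected with the default HEC endpoint: events POSTed to `/services/collector` (the event endpoint) bypass normal timestamp extraction and take their timestamp from the `time` field of the JSON envelope (or the receipt time if absent), so props TIME_* settings never run. They do run for the `/services/collector/raw` endpoint. Two hedged options, with an illustrative host and token placeholder:

```
# Option 1: have the sender supply the timestamp explicitly (epoch seconds)
curl -k https://hf.example.com:8088/services/collector \
  -H "Authorization: Splunk <token>" \
  -d '{"time": 1492102141, "sourcetype": "my:st", "event": "my event text"}'

# Option 2: post to the raw endpoint so props.conf timestamp extraction applies
curl -k "https://hf.example.com:8088/services/collector/raw?sourcetype=my:st" \
  -H "Authorization: Splunk <token>" \
  -d 'raw event text with its own timestamp'
```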

How to prevent congestion between Heavy Forwarders and Indexers?

We observed yesterday that the indexing queue on our indexers was around 90+% full. This resulted in failed connections between the heavy forwarders (HF) and the indexers. Once the indexing queue receded, data from the HFs started flowing to the indexers again and was written to disk. I have a few questions regarding this:

1. Our environment hosts Splunk IT Service Intelligence and Splunk Enterprise Security, which are both premium apps. Could the searches targeting the indexers also be a cause of the blocked queues?
2. What is the maximum number of TCP connections an indexer can accept?
3. Any input on how to avoid such cases in the future?
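For ongoing visibility, the queue metrics in `_internal` make it easy to spot blocked queues and their fill levels before forwarders start failing. A couple of hedged starting-point searches:

```
# Which queues blocked, where, and how often
index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY host, name

# Index-queue fill percentage over time, per indexer
index=_internal source=*metrics.log* group=queue name=indexqueue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY host
```

If the index queue is the first to fill (rather than the parsing or typing queues), the bottleneck is usually disk I/O or search load on the indexers rather than the forwarding tier.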

