Channel: Questions in topic: "heavy-forwarder"

Forwarding events to a specific index on a chained installation with Universal Forwarder and Heavy Forwarder

**Greetings!** Is it possible to define index=myindex only on the Heavy Forwarder, so that events from a Universal Forwarder land in that index without having the index=myindex definition on the source system? Like: **source file** -> read by **Universal Forwarder** -> sent to **Heavy Forwarder** on tcp/9999 -> forwarded to **indexers, specifically index=myindex**. I do not want to use deployment structures either, just a plain Universal Forwarder reading local logs and pushing to a defined port on the heavy forwarder. My goal is to make the heavy forwarder the decision point for incoming data. I know I can receive data easily on that Heavy Forwarder, but I do not know how to forward that specific input (say port 9999/tcp) to the indexers and to a SPECIFIC index. I do not need any special transforms or routing by content, just a receive-forward pair to keep data moving, with no (well, very little) configuration on the source side. It looks like it is easy to receive data on the Heavy Forwarder, and since the indexers are defined, the data will go to the main index - but I want that data to go to index=myindex residing on a separate server. Anyone done this?
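A minimal sketch of one documented approach, assuming the UF sends cooked data (splunktcp) to port 9999 and that a host pattern like `uf-host*` matches the source systems - both are placeholders here. Because the heavy forwarder parses the data, it can rewrite the index key with a props/transforms pair:

```
# inputs.conf on the heavy forwarder
[splunktcp://9999]
disabled = 0

# props.conf on the heavy forwarder (host pattern is hypothetical)
[host::uf-host*]
TRANSFORMS-set_index = route_to_myindex

# transforms.conf on the heavy forwarder
[route_to_myindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = myindex
```

The index named in FORMAT must already exist on the downstream indexers, or the events will be dropped.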

Why is our third-party Logstash only receiving half of the logs forwarded from Splunk?

Hi Team, we are currently forwarding Windows logs to a third-party SIEM and Logstash, but there is a problem: the third party is receiving only about 50% of the logs, although we are forwarding all of them. Firewall rules are in place to forward and receive the logs. The data flow is: Splunk Universal Forwarder -> Splunk HWF -> third party via syslog over UDP. We are using the config below.

**outputs.conf**
```
[tcpout:syslog]
server = destination host:port
```

**props.conf**
```
[windows]
TRANSFORMS-forward = windows
```

**transforms.conf**
```
[windows]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = syslog
```

Am I missing something? What difference would it make if I added sendCookedData=false? Are there any limitations on how much data we can forward via UDP? We are trying to send a few GB of logs per second. There are no errors in splunkd.log or metrics.log. Please advise.
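One thing worth checking: the config above uses `_TCP_ROUTING` to a `[tcpout:syslog]` group, which sends Splunk-cooked data over TCP (unless sendCookedData=false), not syslog over UDP. The documented way to emit plain syslog is a `[syslog:...]` output group selected via `_SYSLOG_ROUTING` - a hedged sketch, with host and port as placeholders and the group name invented:

```
# outputs.conf on the heavy forwarder
[syslog:logstash_out]
server = logstash.example.com:514
type = udp

# transforms.conf -- same stanza as above, routed via syslog instead of tcpout
[windows]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = logstash_out
```

Note also that UDP offers no delivery guarantees, so some loss at multi-GB rates is expected regardless; syslog over TCP is the usual mitigation.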

Why is TCP data not being indexed?

Hi, I have a feed of events coming into my Splunk Heavy Forwarder, but they aren't being indexed, and I'm baffled. Here's my inputs.conf:

```
[tcp://:1918]
index = istr_security
sourcetype = bcoat_proxysg
disabled = false

[tcp://:1919]
index = istr_security
sourcetype = bcoat_proxysg_plug
disabled = false

[tcp://:1920]
connection_host = dns
source = tcp:1920
index = istr_security
sourcetype = bcoat_proxysg_socks
disabled = false
```

1918 works; it's been in place for a long time. We are now sending 1920, but it's not showing up. I checked for future-timestamped events and looked in the logs for any errors, but can't find any. I do see these messages, but they seem to be telling me that Splunk is now listening on the port. I did a packet capture, and data is arriving.

```
10-26-2016 13:51:47.027 -0400 INFO TcpInputConfig - IPv4 port 1920 is reserved for raw input
10-26-2016 13:51:47.027 -0400 INFO TcpInputConfig - IPv4 port 1920 will negotiate new-s2s protocol
10-26-2016 13:51:47.027 -0400 INFO TcpInputProc - Creating raw Acceptor for IPv4 port 1920 with Non-SSL
```

How to configure Splunk App for Jenkins for a Heavy Forwarder to Splunk Cloud?

We have a number of separate environments, each of which has Jenkins servers and a Splunk Heavy Forwarder that is sending events to Splunk Cloud. The docs don't mention HTTP token creation, and there isn't one created by the app install in Splunk Cloud. I think that for each environment I need to create a new HTTP Event Collector token on the heavy forwarder and configure the Jenkins plug-in to send to that HF. Given that the Jenkins indexes are all created in Splunk Cloud, is that all I need to do? Do I need the Splunk App for Jenkins on the HF too?
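For reference, HEC tokens on a heavy forwarder live in inputs.conf - a minimal sketch, where the token GUID, index name, and stanza name are all placeholders to replace with your own:

```
# inputs.conf on each heavy forwarder
[http]
disabled = 0
port = 8088

[http://jenkins]
token = 00000000-0000-0000-0000-000000000000
index = jenkins
disabled = 0
```

The Jenkins plug-in would then point at that HF's HEC endpoint (port 8088 by default) with the matching token, and the HF forwards onward to Splunk Cloud via its normal outputs.conf.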

Splunk DB Connect: Why are we receiving errors when configuring resource pooling?

We are trying to get resource pooling working on Splunk DB Connect 2.3.1 but are getting errors. We have followed the setup documentation. We have 2 servers running DBX 2.3.1 (one is to be a master node and the other a resource pool node). They are HWFs but are also to be used for the DB input function. We have set inputs.conf ($SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/inputs.conf) on both servers to:

```
[rpcstart://default]
bindIP = *
```

We have configured these 2 servers for distributed search. On the master node, we created a resourcepool.conf file with the following stanza:

```
[hwf_resource_pool]
nodes = https://master_node_server:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/rpcserver/default; https://resource_pool_node_server:8089/servicesNS/nobody/splunk_app_db_connect/db_connect/rpcserver/default
```

We restarted both instances. When I set my DB input to use this resource pool, I get the error below (the DB input is fine when using 'Local' as the resource pool setting):

```
2016-10-28T09:19:49+1300 [INFO] [mi_base.py], line 188: action=caught_exception_in_modular_input_with_retries modular_input=mi_input://Test_ODS retrying="4 of 6" error='thread._local' object has no attribute 'session_key'
Traceback (most recent call last):
  File "/appl/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/mi_base.py", line 181, in run
    checkpoint_value=checkpoint_value)
  File "/appl/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 272, in wrapper
    set_mdc(MDC_LOGGER, logger_class(self.label, self.bypass))
  File "/appl/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 194, in __init__
    super(DBHealthLogger, self).__init__(label, bypass)
  File "/appl/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 120, in __init__
    self.service = SplunkServiceFactory.create(get_mdc(MDC_SESSION_KEY), server_url=get_mdc(MDC_SERVER_URI))
  File "/appl/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/health_logger.py", line 58, in get_mdc
    return getattr(mdc, key)
AttributeError: 'thread._local' object has no attribute 'session_key'
```

Any help greatly appreciated...

How to set up a heavy forwarder in a non-production indexer clustering environment for monitoring Java Virtual Machines with JMX?

I was wondering what everyone does when they need a Heavy Forwarder in a clustered lower (non-production) environment. I currently have a user who wants to utilize the SPLUNK4JMX application. I did some reading and saw that the best way to do this in a clustered environment is to utilize a heavy forwarder; I suspect the reason is that, as we need to make changes to the JMX endpoints, it is easier to modify the HF than the cluster master. My question is: what would be the best way to set this up in our lower environment? I don't have an extra server, so would I install the HF on a server that houses one of my other components? Would it make sense to just install it on the cluster master, since I don't have a free server to use? Thanks!

How to create an event filter to send the original event to the indexers and a modified event to a syslog server?

We are trying to filter and modify events while keeping both the original and the modified event: the original event should go to the indexers, and the modified event needs to go to the syslog server. When we used UF -> HF -> Indexer & Syslog, we were unable to retain the original event, so we introduced another HF for further filtering and event modification. However, the second HF is not processing events. Is this the correct approach (see the sketch after the flow below)? Please help.

UF -> HF -> filter for security events and send to 2 destinations:
1. HF -> filter and modify data -> send to syslog
2. Indexer
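A single heavy forwarder can keep the original and emit a modified copy by cloning each event to a second sourcetype and routing only the clone to syslog, avoiding the second HF entirely. A hedged sketch using transforms.conf's CLONE_SOURCETYPE; all sourcetype, group, and masking-pattern names are placeholders:

```
# props.conf on the heavy forwarder
[security_events]
TRANSFORMS-clone = clone_for_syslog

[security_events_modified]
# the clone re-enters the parsing pipeline under this sourcetype,
# so it can be modified here without touching the original
SEDCMD-mask = s/password=\S+/password=****/g
TRANSFORMS-route = route_clone_to_syslog

# transforms.conf
[clone_for_syslog]
REGEX = .
CLONE_SOURCETYPE = security_events_modified

[route_clone_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_out

# outputs.conf
[syslog:syslog_out]
server = syslog.example.com:514
```

The original keeps flowing to the default tcpout group (the indexers), while the modified clone is routed only to the syslog group.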

Why does AWS CloudWatch stop receiving events with the error "400 Bad Request {u'message': u'Rate exceeded', u'__type': u'ThrottlingException'}"?

We have configured a large number of CloudWatch log groups as separate inputs on our heavy forwarder. We have noticed that when pulling logs from AWS, we are getting throttling exceptions for a few of the log groups, as shown below.

```
2016-10-25 10:33:18,164 ERROR pid=24573 tid=Thread-12 file=aws_cloudwatch_logs_data_loader.py:describe_cloudwatch_log_streams:74 | Failure in describing cloudwatch logs streams due to throttling exception for log_group=/okapi2/nprod/var/log/custom/vmstat, sleep=3.65761418489, reason=Traceback (most recent call last):
  File "/opt/app/splunk/etc/apps/Splunk_TA_aws/bin/cloudwatch_logs_mod/aws_cloudwatch_logs_data_loader.py", line 64, in describe_cloudwatch_log_streams
    group_name, next_token=buf["nextToken"])
  File "/opt/app/splunk/etc/apps/Splunk_TA_aws/bin/boto/logs/layer1.py", line 308, in describe_log_streams
    body=json.dumps(params))
  File "/opt/app/splunk/etc/apps/Splunk_TA_aws/bin/boto/logs/layer1.py", line 576, in make_request
    body=json_body)
JSONResponseError: JSONResponseError: 400 Bad Request {u'message': u'Rate exceeded', u'__type': u'ThrottlingException'}
```

How to view individual hops of data before it reaches indexer?

We have got "heavy forwarders" and our client has got a Splunk Heavy forwarders at their side before they send to us. So the path of flow is Individual host (A) with UF => Heavy Forwarders (B) => Heavy Forwarders (C) => Indexers (D) The hostname is coming as (A) in our indexers which is fair. Is there any chance to get information of (B) and (C) (i.e. their hostname, properties etc.)? , i.e. "hops" data went through. Cheers

Why am I only able to view 3 items in the list for Forwarded Inputs, Event Log Collections on the heavy forwarder?

It appears that the list is limited to showing 3 items, even though there are more in my list. This is in the heavy forwarder web GUI, and it has always been like this (from what I can remember). I usually choose to add Windows event logs, select my server class, and then specify Application, Security, and System to be collected by the light forwarder. Ordering by class name does not bump anything (e.g. A-Z, Z-A list order), nor does increasing items per page from 25 to 50 or 100. Anyone else have this problem?

How to route and filter data on the Heavy Forwarder to separate indexer groups?

We need to route and filter data on the heavy forwarder. We are having trouble configuring the routing of security logs to a Splunk instance dedicated to security logs as well as to the main Enterprise instance. We want to direct certain logfiles to our main indexers and/or the separate security instance: security data should go to the security instance, and Windows application/system logs should go to both sets of indexers. We created an app on the heavy forwarder; however, it does not seem to be working as expected. Here is our props.conf:

```
[WinEventLog:Application]
TRANSFORMS-routing_Windows_ = Windows_GIS_data_app

[WinEventLog:Security]
TRANSFORMS-routing_Windows_ = Windows_GIS_data_sec

[WinEventLog:System]
TRANSFORMS-routing_Windows_ = Windows_GIS_data_sys
```

**Main index**

```
[Perfmon:CPU Load]
TRANSFORMS-routing_Windows_ = Windows_splunk_main_data

[Perfmon:Available Memory]
TRANSFORMS-routing_Windows_ = Windows_splunk_main_data

[Perfmon:Free Disk Space]
TRANSFORMS-routing_Windows_ = Windows_splunk_main_data
```

**Perfmon index**

```
[Perfmon:PhysicalDisk]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data

[Perfmon:CPU]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data

[Perfmon:Memory]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data

[Perfmon:MemoryStats]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data

[Perfmon:CPUTime]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data

[Perfmon:FreeDiskSpace]
TRANSFORMS-routing_Windows_ = Windows_splunk_perfmon_data
```

Here is our transforms.conf:

```
[Windows_GIS_data_app]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = ALL_INDEXERS

[Windows_GIS_data_sec]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = GIS_INDEXERS

[Windows_GIS_data_sys]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = ALL_INDEXERS

[Windows_splunk_main_data]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = FARMERS_MAIN_INDEXERS

[Windows_splunk_perfmon_data]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = FARMERS_INDEXERS
```

Here is our outputs.conf:

```
[indexAndForward]
index = true
selectiveIndexing = true

[GIS_INDEXERS]
indexAndForward = true

[tcpout:GIS_INDEXERS]
server = 10.148.186.83:9997, 10.148.186.84:9997

[ALL_INDEXERS]
indexAndForward = true

[tcpout:ALL_INDEXERS]
server = 10.142.114.13:18017, 10.148.186.83:9997, 10.148.186.84:9997

[FARMERS_INDEXERS]
indexAndForward = true

[tcpout:FARMERS_INDEXERS]
server = 10.142.114.13:18015

[FARMERS_MAIN_INDEXERS]
indexAndForward = false

[tcpout:FARMERS_MAIN_INDEXERS]
server = 10.142.114.13:18013
```

Can anyone help to resolve the issue?
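One likely culprit is the outputs.conf layout: bare `[GIS_INDEXERS]`-style stanzas are not recognized group definitions, and per the outputs.conf spec the routing targets `_TCP_ROUTING` can reference are `[tcpout:<group>]` stanzas, while `indexAndForward` is a setting under `[tcpout]` itself. A hedged sketch of the documented layout, reusing the group names and addresses from the question:

```
# outputs.conf -- sketch only; adjust defaultGroup to where unrouted data should go
[tcpout]
defaultGroup = ALL_INDEXERS
indexAndForward = false

[tcpout:GIS_INDEXERS]
server = 10.148.186.83:9997, 10.148.186.84:9997

[tcpout:ALL_INDEXERS]
server = 10.142.114.13:18017, 10.148.186.83:9997, 10.148.186.84:9997

[tcpout:FARMERS_INDEXERS]
server = 10.142.114.13:18015

[tcpout:FARMERS_MAIN_INDEXERS]
server = 10.142.114.13:18013
```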

Can I update max_fd parameter in limits.conf on a list of heavy forwarders via any deployment app?

Hi All, 1) Can I update the max_fd parameter in limits.conf on a list of heavy forwarders via a deployment app?

```
[inputproc]
max_fd = 1024
```

Will it override the value in $SPLUNK_HOME/etc/system/local/limits.conf, or should I update it in $SPLUNK_HOME/etc/system/local/limits.conf directly? 2) Also, can someone explain whether there is any connection between max_fd in limits.conf and the ulimit open-files setting? Should I modify that as well if I change max_fd in Splunk? What's the connection? I'd appreciate help on this.
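Deploying it via an app works - a minimal sketch, with the app name hypothetical - but with a precedence caveat: settings in $SPLUNK_HOME/etc/system/local outrank any app's local directory, so a value already set there will win over the deployed copy and should be removed first. On the ulimit question: max_fd only limits how many files Splunk's tailing processor keeps open, while the OS `ulimit -n` caps file descriptors for the whole splunkd process, so keep max_fd comfortably below the ulimit and raise the ulimit if needed.

```
# On the deployment server:
# $SPLUNK_HOME/etc/deployment-apps/hf_limits_override/local/limits.conf
[inputproc]
max_fd = 1024
```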

How to integrate McAfee ePO in a distributed environment with Splunk DB Connect and the Splunk Add-on for McAfee?

Hi, I'm planning to install the Splunk Add-on for McAfee plus Splunk DB Connect on several heavy forwarders (4) using the Deployment Server. The thing is, I don't know what will happen if all the TAs start collecting at the same time. Will it end up with duplicate (or more) entries for the same event? Not cool... Can I really use this TA in a distributed environment, or must I choose a specific forwarder and do a "manual" failover in case of failure (e.g. enable/disable the DB Connect ePO config)? (Same question for the OPSEC LEA add-on.)

What do Splunk Ninjas think are the top three daily Splunk tasks in a large distributed environment?

Hello all, I am trying to build a workflow for our new Splunk deployment and want to know the top three regular daily tasks you do in Splunk Enterprise. This includes anything regarding ES administration as well as maintenance tasks. If anyone has suggestions, I would certainly appreciate your feedback. This new environment has 5 indexers in a cluster, three search heads in a cluster, and several heavy forwarders with a ton of data sent via forwarders. Comments, anyone?

How should I implement a Splunk architecture on a 2 virtual machine, development environment?

Hi, we have to implement a Splunk architecture for a development/test environment. We have 2 virtual machines, and we should replicate this set: 1 deployment server, 1 heavy forwarder, a cluster of 3 search heads, and 1 indexer. What do you suggest we do? Thank you very much.

How to undo a command that changed the name of my sourcetype?

Hello, for some reason, when setting up some heavy forwarders to accept syslog data on UDP 514, a colleague of mine ran the following command: `splunk add udp 514 -sourcetype udp:514`. This added the following stanza to %SPLUNK_HOME%/etc/apps/search/local/inputs.conf:

```
[udp://514]
connection_host = ip
sourcetype = udp:514
```

This is forcing the sourcetype name "udp:514" onto all the data that comes in on that port. My question is: if I just remove the "sourcetype = udp:514" line, will all future data be assigned the correct automatic sourcetypes? Thanks, JG
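Removing the line hands the data to automatic sourcetyping, but for network inputs it is common to pin an explicit sourcetype instead, so that downstream props apply predictably. A hedged alternative sketch, assuming the traffic on that port really is syslog:

```
# etc/apps/search/local/inputs.conf
[udp://514]
connection_host = ip
sourcetype = syslog
```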

What's the best way to blacklist a Windows event code?

I have over 300 Universal Forwarders, and I'm getting a large number of EventCode=5156 events. Is there a way to blacklist this event on a heavy forwarder? If not, what would be the best approach for blacklisting this event code?
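On a heavy forwarder (which parses the data), the documented pattern is to route matching events to the nullQueue with a props/transforms pair - a minimal sketch, assuming the events arrive under the WinEventLog:Security sourcetype:

```
# props.conf on the heavy forwarder
[WinEventLog:Security]
TRANSFORMS-drop5156 = drop_eventcode_5156

# transforms.conf on the heavy forwarder
[drop_eventcode_5156]
REGEX = EventCode=5156
DEST_KEY = queue
FORMAT = nullQueue
```

Alternatively, on Splunk 6.x+ Universal Forwarders the WinEventLog stanza in inputs.conf supports a blacklist setting, which drops the events at the source and saves network bandwidth as well.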

How to resolve a log file that is falling behind?

Hi, we recently enabled syslog for DNS devices, including query events. I checked this morning, and the events are about 4 hours behind. I'm looking for advice on how to fine-tune this. This particular logfile is huge - 254130888464 bytes as of Nov 25 08:29 (system-ftcnsrtp1.log) - and growing rapidly. We have lots of files on this server, but none remotely close to the size of this one. When I run the "inputstatus" command, that feed is in batch mode. I don't see any messages about thruput warnings from this heavy forwarder.

```
/apps/logs/2016/11/25/system-ftcnsrtp1.log
file position = 133013964565
file size     = 9895002842
parent        = /apps/logs/2*/*/*/system-ftc*.log
percent       = 1344.25
type          = reading (batch)
```

Thoughts?
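Even without explicit thruput warnings, the forwarder-side output cap is worth ruling out, since a single file this size can saturate it. A hedged sketch - raising or removing the cap in limits.conf on the heavy forwarder; the right value depends on what the downstream indexers can absorb:

```
# limits.conf on the heavy forwarder
[thruput]
# 0 removes the per-forwarder output cap entirely; choose a finite
# KB/s value if the indexers can't handle an uncapped burst
maxKBps = 0
```

If throughput turns out not to be the bottleneck, another documented lever is parallelIngestionPipelines in server.conf, so one busy file doesn't monopolize the ingestion pipeline.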

How much more power would masking sensitive data take, especially with SED scripts on the Heavy Forwarder?

Hi All, I am currently working with a large client who would like to use Splunk to mask sensitive data but is worried about the computational and time overheads. Is there any data on how much extra processing the masking would take, especially with SED scripts on the Heavy Forwarder? Thanks, Tim
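For reference, masking on a heavy forwarder is usually a SEDCMD in props.conf, which runs once per event in the parsing pipeline - so the overhead scales with event volume and regex complexity. A minimal sketch with a hypothetical sourcetype and a simple SSN-style pattern:

```
# props.conf on the heavy forwarder
[my:sensitive:sourcetype]
# replace anything shaped like an SSN before the event is forwarded
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g
```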

Splunk Stream: Why is there inconsistent data produced between the deployment server and heavy forwarder when running streamfwd?

I am getting inconsistent results when running streamfwd on CentOS 7.x. On the deployment server some data is captured, i.e. Stream Estimate shows statistics; the heavy forwarders, which are set up essentially the same way, do not produce any data.

Setup:
- CentOS 7.x systems: `cat /etc/redhat-release` => CentOS Linux release 7.2.1511 (Core)
- Splunk Enterprise 6.5 on the deployment server and 2 heavy forwarders
- Splunk is running as user splunk:splunk, not root

Step 1: Install Splunk Stream on the deployment server, go to the app directory, run ./set_permissions
Step 2: Deploy the app, go to the forwarders, run ./set_permissions

Now the deployment server and the forwarders should be set up the same way, but on the forwarder I get the following message:

`SnifferReactor failed to open pcap adapter for device . Error message:`

When the forwarder is run as root (which is not an option long term), it works the same as on the deployment server. I first thought the permissions might not be set correctly, because the running process

`splunk 4212 0.5 1.7 631520 68836 ? Ssl 17:42 0:00 /opt/splunk/etc/apps/Splunk_TA_stream/linux_x86_64/bin/streamfwd`

actually resolves through a symlink to the rhel5 binary on the deployment server:

```
lrwxrwxrwx. 1 splunk splunk  15 Nov 25 17:29 streamfwd -> streamfwd-rhel5
-rwxr-xr-x. 1 splunk splunk 47M Nov  5 07:28 streamfwd-rhel5
-rws--x--x. 1 root   splunk 48M Nov  5 07:28 streamfwd-rhel6
```

On the forwarder it calls a regular binary instead, which is identical to rhel5:

```
-rwxr-xr-x. 1 splunk splunk 47M Nov 25 19:00 streamfwd
-rwxr-xr-x. 1 splunk splunk 47M Nov 25 19:00 streamfwd-rhel5
-rws--x--x. 1 root   splunk 48M Nov 25 19:00 streamfwd-rhel6
```

This might be because the deployment app contains the symlink (`streamfwd -> streamfwd-rhel5`) and deployment ships the referenced binary instead of the link. But that still doesn't explain:
- why the permissions are "fixed" (setuid) only for rhel6 when rhel5 is actually the one called, and
- why it works on the deployment server but not on the heavy forwarder.