Channel: Questions in topic: "heavy-forwarder"

How to connect heavy forwarder running Splunk DB Connect to Splunk Cloud?

Hi, I have a heavy forwarder running Splunk DB Connect, and DB Connect itself is configured and working properly. What I need to do is get the data from the Splunk DB Connect searches into Splunk Cloud. I've looked at several different documentation pages and answers, but for the life of me I can't figure out where this went sideways.

On the Splunk Cloud instance, if I run the search index=_internal 10.30.28.220, I do see some data getting from the heavy forwarder (10.30.28.220) to Splunk Cloud. All four of the events below come from source = /opt/splunk/var/log/splunk/remote_searches.log (sourcetype = splunkd_remote_searches), with host set to the Cloud indexer that logged them (idx5, idx1, idx3 and idx6 respectively):

02-10-2017 19:26:31.143 +0000 INFO StreamedSearch - Streamed search connection terminated: search_id=remote_sh1.icontrol.splunkcloud.com_1486754790.435, server=sh1.icontrol.splunkcloud.com, active_searches=3, elapsedTime=0.481, search='litsearch ( index=_internal 10.30.28.220 ) | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=2147483647.000000 lt=0.000000 remove=true max_count=1000 max_prefetch=100', savedsearch_name=""

02-10-2017 19:26:30.674 +0000 INFO StreamedSearch - Streamed search search starting: search_id=remote_sh1.icontrol.splunkcloud.com_1486754790.435, server=sh1.icontrol.splunkcloud.com, active_searches=4, search='litsearch ( index=_internal 10.30.28.220 ) | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=2147483647.000000 lt=0.000000 remove=true max_count=1000 max_prefetch=100', remote_ttl=600, apiStartTime='ZERO_TIME', apiEndTime='ZERO_TIME', savedsearch_name=""

02-10-2017 19:26:30.672 +0000 INFO StreamedSearch - Streamed search search starting: search_id=remote_sh1.icontrol.splunkcloud.com_1486754790.435, server=sh1.icontrol.splunkcloud.com, active_searches=4, search='litsearch ( index=_internal 10.30.28.220 ) | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=2147483647.000000 lt=0.000000 remove=true max_count=1000 max_prefetch=100', remote_ttl=600, apiStartTime='ZERO_TIME', apiEndTime='ZERO_TIME', savedsearch_name=""

02-10-2017 19:26:30.671 +0000 INFO StreamedSearch - Streamed search search starting: search_id=remote_sh1.icontrol.splunkcloud.com_1486754790.435, server=sh1.icontrol.splunkcloud.com, active_searches=4, search='litsearch ( index=_internal 10.30.28.220 ) | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=2147483647.000000 lt=0.000000 remove=true max_count=1000 max_prefetch=100', remote_ttl=600, apiStartTime='ZERO_TIME', apiEndTime='ZERO_TIME', savedsearch_name=""

But if I run the search index="dcdbtest", which is the index I need the data in, there are zero results. What do I need to look at to get this connection working? THANK YOU!!!!
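For reference, a minimal hedged sketch of the two places this usually breaks. On Splunk Cloud the forwarder output settings are normally supplied by the Cloud forwarder credentials app rather than written by hand, so the group and host names below are placeholders; only the index name dcdbtest comes from the post, everything else is illustrative.

# outputs.conf on the heavy forwarder: usually delivered by the Splunk Cloud
# forwarder credentials app; shown only to indicate what to verify
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.yourstack.splunkcloud.com:9997

# On the DB Connect input itself, confirm the target index is dcdbtest and
# that an index named dcdbtest actually exists in the Splunk Cloud stack;
# events sent to a non-existent index will not show up under index="dcdbtest".

A quick sanity check from the Cloud search head, assuming the indexers' internal logs are searchable, is whether anything is being written to that index at all:

index=_internal source=*metrics.log* group=per_index_thruput series=dcdbtest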

Splunk App and Add-on for ServiceNow: Why does the accounts setup page display error "Unexpected error "" from python handler: "HTTP 500 Internal Server Error -- In handler 'snow_account'"?

We have Splunk Enterprise on-prem with a search head cluster behind an F5 load balancer. I followed the steps to install the Splunk Add-on for ServiceNow and the Splunk App for ServiceNow on the search heads using the deployer, and also installed the add-on on the heavy forwarder: http://docs.splunk.com/Documentation/ServiceNow/4.0.3/Install/Installon-prem After the installation I ran the remote target command to connect the search head and forwarder, and that was successful as well. But when following the steps to do the setup ( http://docs.splunk.com/Documentation/ServiceNow/4.0.3/Install/Setuptheapp ), I get an error on the setup page right away, without it even displaying any fields to enter the information on the search head:

Unexpected error occurs. In handler 'splunk_app_servicenow_accounts': Unexpected error "" from python handler: "HTTP 500 Internal Server Error -- In handler 'snow_account': External handler failed with code '1' and output: ''. See splunkd.log for stderr output.". See splunkd.log for more details.

Stack trace from splunkd.log:

02-17-2017 00:25:55.516 -0800 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 129, in init
    hand.execute(info)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 590, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/snow_accounts_handler.py", line 51, in handleList
    account = account_manager.list()[0]
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/snow_account_manager.py", line 64, in list
    return [self.get_by_name("snow_account")]
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/snow_account_manager.py", line 54, in get_by_name
    accounts = snow_account_collection.list()
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/client.py", line 1459, in list
    return list(self.iter(count=count, **kwargs))
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/client.py", line 1418, in iter
    response = self.get(count=pagesize or count, offset=offset, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/client.py", line 1648, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/client.py", line 746, in get
    **query)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/binding.py", line 287, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/binding.py", line 69, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/binding.py", line 665, in get
    response = self.http.get(path, self._auth_headers, **query)
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/binding.py", line 1160, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/splunk_app_servicenow/bin/splunklib/binding.py", line 1221, in request
    raise HTTPError(response)
HTTPError: HTTP 500 Internal Server Error --
In handler 'snow_account': External handler failed with code '1' and output: ''. See splunkd.log for stderr output.

02-17-2017 00:25:55.516 -0800 ERROR AdminManagerExternal - Unexpected error "" from python handler: "HTTP 500 Internal Server Error --
In handler 'snow_account': External handler failed with code '1' and output: ''. See splunkd.log for stderr output.". See splunkd.log for more details.

I haven't been able to figure out the issue here. Any help would be highly appreciated. Thanks (screenshot: /storage/temp/185175-snow-error.png)
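Since the 500 says the external 'snow_account' handler exited before producing output, the underlying Python error usually lands in splunkd.log on the same search head. A hedged way to pull it out; the search terms are illustrative rather than a documented troubleshooting step for this app:

index=_internal source=*splunkd.log* log_level=ERROR (snow_account OR AdminManagerExternal)
| table _time host component _raw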

Splunk Add-on for EMC VNX: Why is Splunk not recognizing new inputs.conf for this add-on?

Per the directions for the Splunk Add-on for EMC VNX, we copied inputs.conf into local and configured it with the following:

[vnx_data_loader://xxxxxxxx]
network_addr = xxxxxx
network_addr2 = xxxxxxx
username = splunk
password = xxxxx
platform = VNX Block
scope = 0
site = xxxx
index = xxxx
loglevel = INFO
disabled = false
interval = 60

We restarted splunkd, but we are getting this error:

2017-02-17 14:27:51,852 INFO 75860 - No data collection for VNX is found in the inputs.conf. Do nothing and Quit the TA

Any ideas? The Splunk Add-on for EMC VNX is running on a heavy forwarder, version 6.5.2. Thanks!

How to edit my configurations to perform an index-time field extraction?

Hello all, my current environment is as follows: Syslog/UF (Universal Forwarder) -> HF (Heavy Forwarder) -> Indexers. I am trying to perform an index-time field extraction so that people can use the extracted fields across all search heads in our environment. This is what I have now after a lot of trying:

transforms.conf
[ABC]
REGEX = ^.*host\s(?1[^ ]+)\sat.+by\s(?2.+)
FORMAT = $0:$1:$2:$3:$4:$5:$6

props.conf
[sourcetype::XYZ]
TRANSFORMS-ABC = a_B_C

I tried pushing this to the indexers to populate the extraction, but it is not working. The regex does work as a search-time extraction when I use it from the search head with a | rex command. Please help.
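For comparison, a minimal hedged sketch of what a working index-time extraction normally needs. The stanza, class and field names (my_fields, field1, field2) are placeholders and the regex is only illustrative, not matched to the poster's data. Two things differ from the config above: a sourcetype stanza in props.conf is just the sourcetype name (no sourcetype:: prefix), and the TRANSFORMS- value must exactly match the transforms stanza name. Because the UF sends unparsed data, the props/transforms belong on the first full instance that parses it, which in this pipeline is the HF rather than the indexers.

props.conf (on the heavy forwarder):
[XYZ]
TRANSFORMS-abc = my_fields

transforms.conf (on the heavy forwarder):
[my_fields]
REGEX = host\s+(?<field1>[^ ]+)\s+at.+by\s+(?<field2>.+)
FORMAT = field1::$1 field2::$2
WRITE_META = true

fields.conf (on the search heads, so the fields are treated as indexed):
[field1]
INDEXED = true

[field2]
INDEXED = true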

How to forward logs of a specific source to a third-party, non-Splunk system using a certificate?

Hello guys, we are working with a heavy forwarder that is receiving logs from a lot of sources and, of course, sending them on to a Splunk indexer. Now I'm trying to add the ability to forward (firewall) logs of a specific sourcetype via syslog to another, non-Splunk instance using a certificate. I tried the steps in the documentation but I wasn't able to get it working properly. Can you give me some help with this, please? PS: the documentation I was using: http://docs.splunk.com/Documentation/Splunk/6.5.2/Forwarding/Forwarddatatothird-partysystemsd Thank you in advance.
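A hedged sketch of one way this is commonly wired up: as far as I can tell the built-in [syslog] output has no TLS/certificate settings, so sending with a certificate usually means routing the firewall sourcetype to a separate tcpout group that sends raw events over TLS. All stanza names, hosts, paths and the port are placeholders; the ssl* attributes are the outputs.conf names from the 6.5 era; treat this as a starting point rather than a verified configuration.

props.conf (on the heavy forwarder):
[your_firewall_sourcetype]
TRANSFORMS-routefw = route_fw_to_thirdparty

transforms.conf:
[route_fw_to_thirdparty]
REGEX = .
DEST_KEY = _TCP_ROUTING
# list both groups ("thirdparty,primary_indexers") if these events should
# also keep flowing to the Splunk indexers
FORMAT = thirdparty

outputs.conf:
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = your_indexer:9997

[tcpout:thirdparty]
server = thirdparty_host:6514
sendCookedData = false
sslCertPath = $SPLUNK_HOME/etc/auth/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = your_cert_password
sslVerifyServerCert = true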

Splunk Add-on for Amazon Web Services: How to add account credentials and configurations via the command line interface?

We are building our heavy forwarders into an AMI and are trying to add the correct account configuration by a script or configuration file, without having to go into Splunk Web to do it. Has anyone attempted this, or does anyone know how to do it?
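One possible direction, offered as a guess rather than a documented interface: the add-on's setup pages are backed by splunkd REST handlers (the real handler names are listed in the add-on's restmap.conf and README), so in principle an account can be created from a script with a plain REST call along these lines.

# Hypothetical sketch: the endpoint path and the key_id/secret_key parameter
# names are assumptions; check restmap.conf in Splunk_TA_aws for the actual
# handler and field names before relying on this.
curl -k -u admin:changeme \
    https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/<account_endpoint_from_restmap> \
    -d name=my_aws_account \
    -d key_id=<access_key_id> \
    -d secret_key=<secret_access_key>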

Heavy Forwarder data routing between multiple indexers

Hi! I know there are several questions on this topic, but I didn't find a solution that works for me. I'm trying to create a simple lab Splunk system with one HF and two indexers (ix1, ix2). The HF has two inputs, udp://1514 and udp://1515. I tried to forward udp://1514 to ix1 and udp://1515 to ix2, with no luck. Somehow both indexers receive both logs :(

inputs.conf
[udp://1514]
connection_host = ip
sourcetype = syslog
[udp://1515]
connection_host = ip
sourcetype = syslog

props.conf
[source::udp://1514]
TRANSFORMS-ix1 = send_to_ix1
[source::udp://1515]
TRANSFORMS-ix2 = send_to_ix2

transform.conf
[send_to_ix1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer_1
[send_to_ix2]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexer_2

output.conf
[tcpout:indexer_1]
server = 192.168.10.220:9997
[tcpout:indexer_2]
server = 192.168.10.221:9997

What am I doing wrong? Please help me. The final goal is to filter the logs received by the indexers and send everything to a 3rd-party log collector. Thank you for your time, Steven
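A hedged alternative, keeping the same hosts as the post but with everything else illustrative: _TCP_ROUTING can also be set per input directly in inputs.conf, which avoids the props/transforms routing entirely for whole inputs. Also note that Splunk only reads files named transforms.conf and outputs.conf; if the files on disk really are named transform.conf and output.conf as written above, they are ignored.

inputs.conf on the HF:
[udp://1514]
connection_host = ip
sourcetype = syslog
_TCP_ROUTING = indexer_1

[udp://1515]
connection_host = ip
sourcetype = syslog
_TCP_ROUTING = indexer_2

outputs.conf:
[tcpout]
# fallback for any data without an explicit route
defaultGroup = indexer_1

[tcpout:indexer_1]
server = 192.168.10.220:9997

[tcpout:indexer_2]
server = 192.168.10.221:9997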

How to edit props.conf and transforms.conf on a heavy forwarder to keep specific events and discard the rest

I have a heavy forwarder with the following configuration.

props.conf
[firewall]
TRANSFORMS-set = setnull,setparsing

transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[setparsing]
REGEX = 192\.168\.1\.1
DEST_KEY = queue
FORMAT = indexQueue

What I want to do is forward only the events that match the regex to our indexers for indexing and discard the rest. No matter what I put in the REGEX of setparsing, though, nothing comes through, even when I look at the logs and see that there are definitely matches. If I change props.conf to TRANSFORMS-set = setparsing, I get all events from the logs, so that leads me to believe that my DEST_KEY and FORMAT are configured correctly. Why isn't this filtering events and forwarding them to my indexers?

NOT a question: A heavy forwarder can be listening on port 9997 and still look like that port is down or blocked.

First, some quick background about this tip.

- Our Ops guys reported no recent events for their searches.
- Universal Forwarders, Heavy Forwarders and Indexers were all up.
- Those Ops guys were right! No recent events anywhere, not even _internal!
- We cracked our knuckles and told them not to panic.

All these machines run Windows, so from a UF node we used PowerShell to test the port on the HF:

$(new-object net.sockets.tcpclient).connect("10.xx.xx.xx",9997)

If that command is successful it will immediately return a good old C: prompt, but will throw an error after a few seconds if it is unsuccessful. In our case it was unsuccessful. Grrr. `netstat -an` showed that 9997 was listening on the HF. Grrr. The firewall guys said everything was cruising through unfettered. Grrr.

After growling for a bit and questioning the sanity of the firewall guys, I looked at the indexer. Yup, it was running. Looked again and found this: 9997 appeared to be listening on the indexer...

PS C:\Windows\system32> netstat -an | findstr "9997"
TCP 0.0.0.0:9997 0.0.0.0:0 LISTENING
TCP 10.54.54.70:9997 10.54.52.85:60353 ESTABLISHED
TCP 10.54.54.70:9997 10.54.54.32:52020 ESTABLISHED
TCP 10.54.54.70:9997 10.54.54.32:52315 CLOSE_WAIT
TCP 10.54.54.70:9997 10.54.54.33:51987 ESTABLISHED
TCP 10.54.54.70:9997 10.54.54.33:52202 CLOSE_WAIT
TCP 10.54.54.70:9997 10.54.54.33:52203 CLOSE_WAIT
TCP 10.54.54.70:9997 10.54.54.34:63000 ESTABLISHED

But wait a minute... it isn't...

PS C:\Windows\system32> netstat -an | findstr "LISTEN"
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING
TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING
TCP 0.0.0.0:8191 0.0.0.0:0 LISTENING
TCP 0.0.0.0:9887 0.0.0.0:0 LISTENING
TCP 0.0.0.0:10000 0.0.0.0:0 LISTENING
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49152 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49153 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49154 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49155 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49183 0.0.0.0:0 LISTENING
TCP 0.0.0.0:49198 0.0.0.0:0 LISTENING

Well. So, the heavy forwarder accepted my incoming PowerShell connection and routed that connection right over to the indexer, where it failed. I bounced the indexer and, like magic, it was fixed. I like to share the strange, silly and stupid things I notice, so maybe this will help someone somewhere keep from staring at their screen in confusion for 30 minutes like I did today.

Unable to search Cisco eStreamer logs. How to resolve the "Insufficient permissions to read file" error?

We are running a distributed search environment with two heavy forwarders. I'm unable to search the eStreamer logs, and I noticed this in splunkd.log:

"Insufficient permissions to read file='/opt/syslogs/generic/mxwmexc02r/784861.log' (hint: No such file or directory , UID: 0, GID: 0)."

I have also encountered one more warning message:

"TailReader - File descriptor cache is full (100), trimming..."

I'm not sure why this is happening on only one forwarder and not on the other.
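One hedged observation, not a confirmed fix for this environment: the second warning matches the default file-descriptor cache size for file monitoring, which, as far as I know, is controlled by max_fd under [inputproc] in limits.conf, and the first message often just means a file was rotated or deleted between discovery and open. A sketch of raising the cache on the busier forwarder, with the value chosen arbitrarily:

# limits.conf on the affected heavy forwarder (256 is an arbitrary example)
[inputproc]
max_fd = 256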

How to edit my configurations to forward syslog to a third party using a Heavy Forwarder?

Hello guys, today I was able to send some syslog to another, non-Splunk instance. However, when I tried to send only one sourcetype, I failed hard. These are my outputs.conf, props.conf and transforms.conf, and I really have no idea why it isn't working. Maybe it's something really simple, but I can't figure out what it is.

outputs.conf
[syslog]
defaultGroup = syslogGroup

[syslog:syslogGroup]
server = dest ip:5146

props.conf
[sourcetype::WinEventLog:Security]
TRANSFORMS-mcafee = send_to_syslog

transforms.conf
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup

Any kind of help would be appreciated.
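One thing worth checking, offered as an observation rather than a confirmed fix for this exact setup: in props.conf a sourcetype stanza is written as just the sourcetype name; the sourcetype:: prefix is not a valid stanza form (only source:: and host:: take a prefix). The props stanza above would then look like this:

props.conf
[WinEventLog:Security]
TRANSFORMS-mcafee = send_to_syslog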

Splunk DB Connect: If using Output connection to insert in database, how is the Heavy Forwarder supposed to search my events in the index layer?

Dears, I would like to install Splunk DB Connect v3, but I have questions regarding the recommended setup of it on a heavy forwarder. If I am using an output connection to insert into a database, how is the heavy forwarder supposed to be able to search my events in the index layer? Thanks in advance.
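For context, a hedged sketch rather than an answer from this thread: DB Connect outputs are driven by searches, so a heavy forwarder running them has to be able to search the indexed data, which in practice means configuring it like a small search head with the indexers as search peers. Adding a peer from the CLI looks roughly like this, with the host and credentials as placeholders:

# run on the heavy forwarder; host and credentials are placeholders
splunk add search-server https://idx01.example.com:8089 \
    -auth admin:changeme \
    -remoteUsername admin -remotePassword changeme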

Heavy Forwarder as Indexer and License Usage

Hi colleagues, I've been trying to find an answer to my questions here, but it seems there is nothing that helps me. We've got two Splunk instances: the first one is a **Heavy Forwarder** and the second one is an **Indexer** and **Search Head**. To reduce the workload on the search head, I tried to turn on indexing (`indexAndForward`) on the HF, and found that Splunk started consuming license twice as fast as before. Just to understand clearly: does Splunk index (and meter) the data a second time on the indexer even if it has already been indexed on the HF? If yes, why, and what would you propose? Thank you.
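By way of explanation rather than an answer from the thread: license usage is metered wherever data is written into an index, so with indexAndForward enabled the same events are counted once on the HF and once again on the downstream indexer, which matches the doubling described above. A hedged sketch of the forward-only setup, with placeholder group and host names:

# outputs.conf on the heavy forwarder: forward only, keep no local copy
[tcpout]
defaultGroup = my_indexers
indexAndForward = false

[tcpout:my_indexers]
server = indexer_host:9997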

Splunk Add-on for CyberArk: Should I use a Heavy Forwarder or a syslog server with a Universal Forwarder with this add-on?

I'm trying to decide whether I should use a heavy forwarder or a syslog server with universal forwarder to receive data from CyberArk. Can anybody tell me which approach you're using, and how well that's working out for you?

Best way to eliminate mirrored stream

We have a special environment where traffic goes through a switch TAP, which mirrors the same traffic along two different paths. We're planning to use the Stream forwarder to capture packets on both sides, but that will produce duplicated events. I know we could probably use **dedup** at search time to eliminate the duplicates, but I'd like to learn whether there is a better solution that saves index volume from the start. I'm brainstorming whether it's feasible to eliminate mirrored packets natively through Splunk's fishbucket mechanism itself; for example, using a heavy forwarder to collect the duplicated packets first and then forward a single copy to the backend indexers. Open to any advice, thanks! :)

How to get correct host information from a Universal Forwarder through an intermediary heavy forwarder to Splunk Cloud

We have a setup where a syslog-ng server forwards all events using a UF to a HF and then on to the cloud. The issue we are having is that the host information is getting replaced with the UF's name instead of the actual host that sent the syslog. I don't have anything in outputs.conf or inputs.conf on the UF setting the host. If I send directly to Splunk Cloud, the correct host name is kept; it is only when I send through the HF that the name gets stripped and the host is changed to the syslog server's name. I have tried to dynamically assign the host name in inputs.conf on the UF with a regex based on the file path, but cannot get it to work. An example of the file path is /var/log/splunk/network/hostname_log, and I need just the hostname portion to become the host. My thought is that there must be a setting somewhere, either on the UF or the HF, that is doing this. Any ideas, or is there another way of doing this?
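A hedged sketch of the path-based host assignment, built around the example path in the post: host_regex uses its first capturing group as the host and is applied where the file is read, i.e. on the UF. The monitor path and the assumption that every file name ends in _log are taken from the example; if the host still gets overridden downstream, the next place to look is a props/transforms host override on the HF.

# inputs.conf on the UF
[monitor:///var/log/splunk/network/*_log]
host_regex = ([^/]+)_log$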

Splunk Add-on for Okta: How to deploy the add-on to our heavy forwarder and configure Okta inputs?

Hi Ninjas, version details: Splunk Enterprise 6.4.3, Splunk Add-on for Okta 1.3.0. We are trying to deploy the add-on to our heavy forwarder (Linux) and configure inputs for Okta, but we get:

File="home/Splunk_TA_okta/bin/splunktablib/httplib2/__init__.py", in connect
raise SSLHandshakeError(e)
SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed

I didn't see anything in the documentation about configuring SSL certs specifically. I need some help troubleshooting this issue, plus an ideal hypothesis of how the add-on works for onboarding data from Okta. Thanks.
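Not from the thread, but CERTIFICATE_VERIFY_FAILED from httplib2 usually means the certificate chain presented to the forwarder is not trusted by the CA bundle the add-on uses; a proxy or SSL-inspection device re-signing the traffic is a common cause. A quick way to see which chain the heavy forwarder actually receives, with the Okta org name as a placeholder:

# run on the heavy forwarder; replace yourorg with the actual Okta org
openssl s_client -connect yourorg.okta.com:443 -showcerts </dev/null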

EMC Isilon App and Add-on for Splunk Enterprise: How to configure the app and add-on in distributed environment?

I am trying to do a distributed deployment (multiple search heads and indexers) of the EMC Isilon App and Add-on for Splunk Enterprise, and the instructions call for setting it up via Splunk Web. The documentation says to set up the Isilon credentials via Splunk Web, but I'd rather edit the settings directly on the deployment server and push things as needed to the correct nodes. Can you please provide details on which files the Isilon credentials are stored in, so I can configure them by editing files directly? Also, can you provide a sample syslog.conf file instead of the oddly formatted section in the instructions? Finally, I'd like to point the Isilon at the forwarder using a nonstandard syslog port. My heavy forwarder is already configured with three other ports for syslog data, to classify different sources into specific indexes or sourcetypes, so I'd like it to pick up the Isilon logs on a dedicated high port and parse multiple syslog message sources on one system with different input stanzas. Can you provide details on configuring the Isilon syslog output to go to a port other than 514? Thanks!
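On the nonstandard-port half of the question, a hedged sketch of a dedicated syslog input on the heavy forwarder; the port, index and sourcetype below are placeholders, not values taken from the Isilon app's documentation, and the Isilon-side change (pointing its syslog output at that port) still has to be made on the cluster itself.

# inputs.conf on the heavy forwarder
[udp://10514]
connection_host = ip
sourcetype = isilon_syslog
index = isilon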

Can a Heavy Forwarder send cooked but unparsed data?

Is it possible to have a heavy forwarder send unparsed (but not raw) cooked data? I have a server that needs to forward data, and a universal forwarder sending compressed, unparsed data would be fine. However, I would like to use that same server to do some data collection as well, and that data collection requires a full Splunk install and a 3rd-party app (eStreamer, to be specific). As I understand it, a full Splunk install acting as a heavy forwarder will by default send parsed data, which is a much heavier network load that I would like to avoid. The only option in outputs.conf related to this is sendCookedData = true | false. If I set it to false, the forwarder sends raw (uncooked) data; if I set it to true, it appears the heavy forwarder sends all data as cooked, **parsed** data. I'm looking for an option to send cooked, **unparsed** data. Thanks for any help!

Splunk App for Salesforce: How to install in a distributed environment?

I'd like to install the Splunk App for Salesforce in my test environment. I have a search head cluster, an indexer cluster and heavy forwarders to deploy onto (perhaps). Does anyone know what goes where? I tried deploying to my indexer cluster first, since there are indexes defined in the included indexes.conf, but I get a bunch of these messages during the deploy, so I'm doing something wrong and I don't know what it is. Can anyone throw me a rope?

; Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 3: limit (value: 1000).
; Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 5: start_date (value: ).
; Invalid key in stanza [sfdc_event_log://EventLog] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 9: compression (value: 1).
; Invalid key in stanza [sfdc_object://LoginHistory] in /opt/splunk/etc/master-apps/splunk-app-sfdc/default/inputs.conf, line 14: query (value: SELECT ApiType, ApiVersion, Application, Browser, ClientVersion, Id, LoginTime, LoginType, LoginUrl, Platform, SourceIp, Status, UserId FROM LoginHistory).

...plus 23 more messages like these.