Channel: Questions in topic: "heavy-forwarder"

Error posting to snow_proxy among other snow_... areas

Hi, I am trying to get the ServiceNow add-on working on a distributed Splunk infrastructure, namely on a heavy forwarder (HF). I have tried configuring it from the GUI and from conf files; although the two differ in how they modify/create apps in the local folder, neither has worked.

- Splunk Enterprise 6.5.1
- Splunk Add-on for ServiceNow 2.9.0
- Linux OS

What I have tried:

* Using proxy settings (tried both IP and FQDN; as per https://answers.splunk.com/answers/312287/splunk-app-add-on-for-servicenow-why-am-i-getting.html I also tried adding "http://" at the beginning). My proxy seems to be SOCKS 4/5, as using http as the proxy type produces `HTTPError: (407, 'Proxy Authentication Required')` in the logs.
* Tested the SNOW user/password combo on the system itself with no issues.
* Changed the web.conf setting to `splunkdConnectionTimeout = 30000`, so it shouldn't be timing out.
* When configuring the app via conf files, I tried placing the username/password into service_now.conf along with the other settings, but this didn't work.
* When configuring the app via the GUI, it placed "encrypted" as the username and password field values for both the SNOW and proxy settings, within passwords.conf (which isn't on the docs page about configuring via conf files).

The conf files are set out as follows:

**service_now.conf**

```
[snow_default]
collection_interval = 120
priority = 10
record_count = 1000
loglevel = INFO
since_when =
display_value = false

[snow_account]
url = https://blah.service-now.com
release = Automatic
username =
password =

[snow_proxy]
proxy_enabled = 1
proxy_url = ip/fqdn
proxy_port = 8080
proxy_username =
proxy_password =
# If use proxy to do DNS resolution, set proxy_rdns to 1
proxy_rdns = 0
# Valid proxy_type are http, http_no_tunnel, socks4, socks5
proxy_type = socks5
```

**passwords.conf**

```
[credential:https\://.service-now.com:dummy:]
password =

[credential::dummy:]
password =
```

And yes, "dummy" is actually what the web GUI puts in place of the actual username, though I have tried editing this manually in passwords.conf with no effect.
**splunk_ta_snow_main.log:**

```
2017-03-08 14:29:17,488 ERROR pid=6473 tid=MainThread file=snow.py:run:82 | Failed to setup config for Snow TA: Failed to verify ServiceNow username and password, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 175, in verify_user_pass
    resp, content = http.request(url)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1593, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1335, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1257, in _conn_request
    conn.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1018, in connect
    sock.connect((self.host, self.port))
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 410, in connect
    self.__negotiatesocks5(destpair[0], destpair[1])
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 215, in __negotiatesocks5
    chosenauth = self.__recvall(2)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 138, in __recvall
    data = self.recv(count)
timeout: timed out

2017-03-08 14:29:17,488 ERROR pid=6473 tid=MainThread file=snow.py:run:83 | Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow.py", line 80, in run
    snow_conf = snow_config.SnowConfig()
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 39, in __init__
    default_configs = self._get_default_configs()
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 107, in _get_default_configs
    self.conf_manager, conf_copy, self.appname)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 127, in fix_snow_release
    fixed_release = SnowConfig.get_snow_release(defaults)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 190, in get_snow_release
    SnowConfig.verify_user_pass(defaults)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 180, in verify_user_pass
    raise Exception(msg)
Exception: Failed to verify ServiceNow username and password, reason=Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/snow_config.py", line 175, in verify_user_pass
    resp, content = http.request(url)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1593, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1335, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1257, in _conn_request
    conn.connect()
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/__init__.py", line 1018, in connect
    sock.connect((self.host, self.port))
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 410, in connect
    self.__negotiatesocks5(destpair[0], destpair[1])
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 215, in __negotiatesocks5
    chosenauth = self.__recvall(2)
  File "/opt/splunk/etc/apps/Splunk_TA_snow/bin/httplib2/socks.py", line 138, in __recvall
    data = self.recv(count)
timeout: timed out
```

Thanks in advance.
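For what it's worth, the 407 seen with an http proxy type means the proxy answered and asked for credentials, while socks5 simply timed out during negotiation, which hints the proxy may really be HTTP after all. A hedged sketch of that variant (all values below are placeholders, not taken from the post):

```
[snow_proxy]
proxy_enabled = 1
# assumption: the proxy is HTTP, based on the 407 it returned
proxy_type = http
proxy_url = proxy.example.com
proxy_port = 8080
# hypothetical credentials; the 407 suggests the proxy requires them
proxy_username = proxyuser
proxy_password = proxypass
proxy_rdns = 0
```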

Qualys Technology Add-on (TA) for Splunk: Why am I receiving "Error during request to location, [None] Not Found" on Heavy Forwarder?

We have the Qualys Technology Add-on (TA) for Splunk installed on a heavy forwarder; it stopped working shortly after this error came up. This is the log in full:

```
TA-QualysCloudPlatform: 2017-03-02T07:54:19Z PID=9576 [MainThread] ERROR: TA-QualysCloudPlatform - An error occurred
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/qualys_log_populator.py", line 414, in _run
    wfc.coordinate()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/WASFindingsFetchCoordinator.py", line 97, in coordinate
    self.getWebAppIds()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/WASFindingsFetchCoordinator.py", line 56, in getWebAppIds
    fetcher.run()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/webapp.py", line 55, in run
    super(webAppIdFetcher, self).run()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/basepopulator.py", line 78, in run
    return self.__fetch_and_parse()
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/basepopulator.py", line 105, in __fetch_and_parse
    response = self.__fetch(params)
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/splunkpopulator/basepopulator.py", line 97, in __fetch
    response = self.api_client.get(self.api_end_point, api_params, api.Client.XMLFileBufferedResponse(filename))
  File "/opt/splunk/etc/apps/TA-QualysCloudPlatform/bin/qualysModule/lib/api/Client.py", line 259, in get
    raise APIRequestError("Error during request to %s, [%s] %s" % (end_point, ue.errno, ue.reason))
APIRequestError: Error during request to /qps/rest/3.0/search/was/webapp, [None] Not Found
```

After performing a server refresh, Qualys began ingesting again, but the location the error mentions is nowhere on the forwarder. Also, if it is looking for [None], it probably won't find it. What caused Qualys to do this?

Splunk DB Connect: How to properly upgrade from 2.1.2 to 3.0.1?

We are currently at v2.1.2 of Splunk DB Connect, running on our heavy forwarder in a distributed environment. I want to upgrade eventually to version 3.0.1, but the upgrade path says to upgrade to 2.4.0 first and then migrate to 3.0.1 (I'm assuming because of the config file changes and the migration scripts). I don't see any information about going from 2.1.2 to 2.4.0, though. In Manage Apps, next to 2.1.2, there is a hyperlink to update to 3.0.1, but I know I don't want to do that because I have to stop at 2.4.0 on the way. Do I just install the app from file to upgrade to 2.4.0? I've tarred up the Splunk DB Connect directory in case something goes wrong. Thanks!
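If it helps, "install app from file" (or the CLI equivalent) is the usual way to step through an intermediate version. A minimal sketch of backup-then-upgrade, with paths and the package filename assumed rather than known:

```
# back up the current DB Connect app directory first
tar -czf /tmp/splunk_app_db_connect-2.1.2-backup.tgz -C $SPLUNK_HOME/etc/apps splunk_app_db_connect

# upgrade in place to 2.4.0 from the downloaded package, then restart
$SPLUNK_HOME/bin/splunk install app /tmp/splunk-db-connect_240.tgz -update 1
$SPLUNK_HOME/bin/splunk restart
```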

How to filter XmlWinEventLog in Heavy Forwarder with regex?

Hi, I have XML-rendered logs from Sysmon and I need to keep only the interesting fields from these logs, for example:

Image | UtcTime | ProcessGuid | CommandLine | User | ParentProcessGuid | ParentImage | ParentCommandLine | Hashes

But my conf doesn't work. What did I do wrong, and how do I fix it?

**Here is the sample XML** (as posted; the tags did not survive, only the values):

```
154100x80000000000000001098206Microsoft-Windows-Sysmon/OperationalHOSTNAME - 2017-03-13 12:16:18.203{EF92ED9B-8D92-58C6-0000-0010B2A27B04}2832C:\Windows\System32\cmd.exe"C:\Windows\system32\cmd.exe" /c type "C:\ProgramData\****.txt"c:\program files\*****\NT AUTHORITY\SYSTEM{****************************}0x3e70SystemSHA1=0F3C4FF28F354AEDE2,MD5=5746BD7E255DD61,SHA256=DB06C3534964E3FC79D0CA336F4A0FE724B75AAFF386,IMPHASH=D00585440EB0A{**************************}1564C:\Program Files\****.exe"C:\Program Files\******" 1452 + **************************************************************InformationProcess Create (rule: ProcessCreate)Info
```

**And this is my conf:**

inputs.conf

```
[WinEventLog://ForwardedEvents]
disabled = false
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true
suppress_text = 1
index = sysmon
sourcetype = XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
whitelist1 = 1,5,6
```

props.conf

```
[source::WinEventLog://ForwardedEvents]
TRANSFORMS-setnull = sysmon-setnull
TRANSFORMS-keep = sysmon-keep
```

transforms.conf

```
[sysmon-setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[sysmon-keep]
REGEX = (?i)Name=".*(Image|UtcTime|ProcessGuid|CommandLine|User|ParentProcessGuid|ParentImage|ParentCommandLine|Hashes)"
DEST_KEY = queue
FORMAT = indexQueue
```
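For comparison, a hedged sketch of how such a drop-then-keep routing pair is commonly written. Two caveats: index-time transforms can only route whole events to nullQueue or indexQueue, they cannot strip individual fields out of an event; and keying props.conf on the sourcetype assigned in inputs.conf sidesteps any doubt about what the source string looks like. The quoting around Name=... in the regex is an assumption to verify against your raw XML events:

```
# props.conf -- keyed on the sourcetype set in inputs.conf above
[XmlWinEventLog:Microsoft-Windows-Sysmon/Operational]
TRANSFORMS-routing = sysmon-setnull, sysmon-keep

# transforms.conf -- send everything to nullQueue, then rescue matches
[sysmon-setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[sysmon-keep]
REGEX = (?i)Name=['"](Image|UtcTime|ProcessGuid|CommandLine|User|ParentProcessGuid|ParentImage|ParentCommandLine|Hashes)['"]
DEST_KEY = queue
FORMAT = indexQueue
```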

Why is data segregation by index not displaying events?

I'm trying to segregate data coming from a specific heavy forwarder into a specific index (my_index). As per Answers and the manual:

1. I defined the "my_index" index on the two indexers that receive the data. No index is defined on the search head.
2. In inputs.conf on the heavy forwarder, I inserted:

```
[input]
index = my_index
```

3. I configured a specific role and its users to search this index.

Looking at the console, my_index is empty: zero events, zero current size. Any search like `index=my_index` gives zero results, although events are reaching the indexer (I see a tcpdump trace of the messages arriving on the indexer when events occur). Any idea? Something different in Splunk 6.5.2? Thanks in advance.
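Two hedged checks that can narrow down where the events stop. The first confirms whether anything at all was indexed into my_index; the second looks at the indexers' own throughput metrics for that index:

```
| tstats count where index=my_index by sourcetype
```

```
index=_internal source=*metrics.log* group=per_index_thruput series=my_index
| stats sum(kb) AS indexed_kb by host
```

If both come back empty, it's worth checking whether the events landed in a different index, e.g. `index=* host=<forwarder_host>` over the same window (the host value is a placeholder).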

Why are Windows Event Logs not forwarded after installing a new Windows server?

Hi there, I have the following issue in our environment and I'm not sure where the problem comes from.

We have several Windows servers monitored with a heavy forwarder. The event logs are grabbed remotely via WMI. So far everything works as expected. Now we have done a fresh installation of one Windows server. The server has the same name and IP address; only the OS has changed, from Windows Server 2008 R2 to Windows Server 2012 R2. If I do a wbemtest with the user on the Splunk heavy forwarder, the Splunk service is running and I can see the events from the freshly installed server, so there are no permission or firewall issues between the forwarder and the Windows server. But I can't see any events from this server on the indexer.

Does someone have an idea what is going wrong, or how I can figure out the problem?

For your information: I removed the configuration of the Windows server on the forwarder, restarted the forwarder, added the Windows server again, and restarted the forwarder. Nothing happened. I removed the index for the Windows server from the indexer, restarted the indexer, and added the index again. Nothing happened. Could it be that the Splunk forwarder stores information about already-grabbed events in another file? For any ideas I'll be very thankful. [Edit: it's a heavy forwarder, not a universal one.]
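A hedged way to see whether the forwarder is reading anything from that server at all is its per-host throughput metrics in _internal (the series value is a placeholder for the rebuilt server's host name):

```
index=_internal source=*metrics.log* group=per_host_thruput series=<windows_server_name>
| timechart span=5m sum(kb) AS kb
```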

How to max out Windows forwarder file descriptors limits?

I'm trying to max out the file descriptor limits on a Windows forwarder. When using `max_fd` in limits.conf, I get the following warning:

```
WARN TailingProcessor - Constraining max_fd from requested '20000', will use '6667', max_fd constrained to two-thirds of OS fds per process limit (NOFILES), '10000'. Increase ulimit (ulimit -n) to raise the ceiling on max_fd.
```

Is there any way to raise the limit above 6667 on Windows? ulimit is, of course, a Linux feature.

What is the recommended hardware requirement for Heavy Forwarder that is indexing?

What is the recommended hardware spec for a HF that is now indexing locally? Essentially, I know it's an indexer that is just forwarding, so do we treat it as such in terms of hardware requirements? 12 CPU cores? 12 GB RAM?

Tripwire Enterprise App for Splunk Enterprise: Why am I not able to see data in the app?

Hello all, I have a test environment on a RHEL 7 server that is running the Tripwire Enterprise App for Splunk Enterprise and a Splunk trial on the same machine. I loaded the Tripwire Enterprise App on Splunk thinking that I don't need a heavy forwarder because it's a local ingest. I'm seeing the Tripwire log data, but although the Tripwire Enterprise App loads, no data shows up in it and there are no errors. I'm a relatively new Splunker, so what am I missing? Thanks for any help.
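Apps typically search a specific index/sourcetype through their macros and eventtypes, so a hedged first check is to see where the Tripwire data actually landed and compare that with what the app expects:

```
| tstats count where index=* by index, sourcetype
| search sourcetype=*tripwire*
```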

How to resolve the "Fail to update configuration for add-on" error on a heavy forwarder?

Hi, I installed an add-on on a heavy forwarder. When I try to set up the add-on with the API details and credentials, it shows the error "Fail to update configuration for add-on xxx", yet the same add-on works on our other heavy forwarder (HF-2). Both run under the splunk user. I tried copying the installation files from one HF to the other and got the error below. If it's a permissions issue, how do I resolve it?

"Fail to load configuration for add-on xxx"
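A "fail to update/load configuration" error after copying files between hosts is often plain file ownership. A hedged fix, assuming Splunk runs as the splunk user and using a placeholder for the real add-on directory:

```
# re-own the copied app so splunkd can read and write its config
chown -R splunk:splunk /opt/splunk/etc/apps/<addon_name>
/opt/splunk/bin/splunk restart
```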

How to find out the time difference between each indexer receiving events from heavy forwarder instances?

Hi all, we are currently facing an issue where one of the indexers is consuming more space than the other indexer instances in our environment, even though they all share common indexes.conf, outputs.conf, and inputs.conf files. When I checked the forum, I was recommended to add the stanza settings below to /local/outputs.conf to overcome this issue:

```
forceTimebasedAutoLB = true
autoLBFrequency = 15
```

But before implementing the changes in prod, I wanted to validate any difference in time or lag in events reaching these indexer instances. I executed the search below over a one-hour time frame and am getting **events, but I'm not sure what I need to check in them**:

```
index=_internal host=ourHWF splunk_server=splunk0*
```

Can anyone **provide the exact search** to find the **time difference between the indexer instances and the heavy forwarder**? Thanks in advance.
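One common way to measure per-indexer ingestion lag is comparing _indextime with _time. A hedged sketch built on the search above:

```
index=_internal host=ourHWF splunk_server=splunk0* earliest=-1h
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag, count by splunk_server
```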

Why do I receive error "TypeError: object() takes no parameters" while installing packaging tool kit?

Hi, since Splunk does not support the Splunk add-on for SCOM, we are using the Splunk packaging toolkit to break up the add-on and deploy its various components. We have a heavy forwarder which is on premise. We tried installing the Splunk packaging toolkit on multiple servers (Linux, Windows) with no success. Below is the error (the traceback is truncated as posted):

```
Processing /tmp/splunk-packaging-toolkit-0.9.0.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-wgnvd_73-build/setup.py", line 17, in <module>
        from slim import __version__
      File "/tmp/pip-wgnvd_73-build/slim/__init__.py", line 7, in <module>
        from .describe import describe
      File "/tmp/pip-wgnvd_73-build/slim/describe.py", line 11, in <module>
        from slim.app import *
      File "/tmp/pip-wgnvd_73-build/slim/app/__init__.py", line 8, in <module>
        from ._configuration import (
      File "/tmp/pip-wgnvd_73-build/slim/app/_configuration.py", line 47, in
```
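The traceback dies while importing slim itself, before your add-on is even touched. slim 0.9.0 was a Python 2.7-era tool, so one hedged workaround (an assumption, since the failing line is cut off above) is to install it into a clean Python 2.7 virtualenv rather than the system Python:

```
# isolate the toolkit in a Python 2.7 environment
virtualenv -p python2.7 slim-env
source slim-env/bin/activate
pip install /tmp/splunk-packaging-toolkit-0.9.0.tar.gz
```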

How to send JSON data (sent via HTTP POST) to a heavy forwarder?

Currently I have a security appliance sending JSON data via HTTP POST to an all-in-one, standalone Splunk test instance. Now I want to send the JSON data to an intermediate heavy forwarder in production (which feeds the indexers).

The test instance receives the JSON data via HTTP POST. A Splunk user account was created to pass the RESTful API data, with a RESTfulAPI role and edit_tcp capability. The security appliance is configured with the username and password created previously, and is sending data to:

```
https://<host>:<port>/services/receivers/simple?host=<host>&source=wmps&sourcetype=fe_json
```

The standalone test instance has a receiver enabled directly on the indexer (I believe) and receives the data without a problem. At this point I need to reconfigure the security appliance to send data to the heavy forwarder, and I am not sure how to set up a receiver on the heavy forwarder so that it behaves the same as the test instance. After the connection is established, I would like to cut the data from the security appliance down to only the desired fields by changing the .conf files. Any advice or reference is appreciated. Thank you.
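The /services/receivers/simple endpoint is part of splunkd's management interface, so the same pattern should work against the heavy forwarder's management port with a user that holds edit_tcp there. Alternatively, the HTTP Event Collector is the purpose-built receiver for pushed JSON. A hedged inputs.conf sketch for the HF (token, port, and index are placeholders):

```
# inputs.conf on the heavy forwarder
[http]
disabled = 0
port = 8088

[http://wmps]
token = 00000000-0000-0000-0000-000000000000
sourcetype = fe_json
index = main
```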

Splunk Add-on for Amazon Web Services: How to resolve error "UnicodeDecodeError: 'utf8' codec can't decode byte 0xd1 in position 0: invalid continuation byte"?

Hi, I am trying to onboard S3 data from AWS. I am using the Splunk Add-on for AWS on a heavy forwarder and have added the input for S3. This is the error I am getting:

```
04-03-2017 19:07:11.209 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws_logs.py" UnicodeDecodeError: 'utf8' codec can't decode byte 0xd1 in position 0: invalid continuation byte
```

Please let me know if I am missing something. Below is the full snippet from the logs; every line carries the same `ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws_logs.py"` prefix, and the underlying traceback is:

```
Process Process-1:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/opt/splunk/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/collector.py", line 135, in _collector_work_procedure
    collector.perform(name, params)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/collector.py", line 234, in perform
    self._delegate.perform(self, portal, checkpoint, name, params)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws_logs.py", line 345, in perform
    self._handler.perform(app, portal, checkpoint, name, params)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3logs/handler.py", line 64, in perform
    if pipeline.run(portal, checkpoint):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/multitasking.py", line 151, in run
    self._delegate.done(portal, checkpoint, job, result)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3logs/adapter.py", line 167, in done
    self._index_records(portal, job, result)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/s3logs/adapter.py", line 212, in _index_records
    portal.write(stream)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/event.py", line 11, in write
    text = stream.render()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunksdc/event.py", line 54, in render
    return ET.tostring(self._root)
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1126, in tostring
    ElementTree(element).write(file, encoding, method=method)
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 820, in write
    serialize(write, self._root, encoding, qnames, namespaces)
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 939, in _serialize_xml
    _serialize_xml(write, e, encoding, qnames, None)
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 939, in _serialize_xml
    _serialize_xml(write, e, encoding, qnames, None)
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 937, in _serialize_xml
    write(_escape_cdata(text, encoding))
  File "/opt/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1073, in _escape_cdata
    return text.encode(encoding, "xmlcharrefreplace")
UnicodeDecodeError: 'utf8' codec can't decode byte 0xd1 in position 0: invalid continuation byte
```

Props/Transforms problems - Meraki

Hello everyone! I'm trying to use props/transforms to set a sourcetype and change the hostname of my devices. Currently they are coming in as sourcetype=syslog. My event looks like this:

```
Apr 3 22:37:36 10.77.265.178 1 1491277141.711671730 NAME_LOC_FW1 events Site-to-site VPN: notification INVALID-ID-INFORMATION received in informational exchange.
```

I want to extract "NAME_LOC_FW1", change the sourcetype to meraki, and change the host to "NAME_LOC_FW1". I have the following props.conf:

```
[syslog]
TRANFORMS-changesourcetypes = NAME_LOC_FW1

[syslog]
TRANSFORMS-changehost = NAME_LOC_FW1_HOST
```

And the following transforms.conf:

```
[NAME_LOC_FW1]
Regex = (NAME_LOC_FW1)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::meraki

[NAME_LOC_FW1_HOST]
DEST_KEY = MetaData:Host
REGEX = (?)(NAME_LOC_FW1)
FORMAT = host::$1
```

This isn't working... Can anyone tell me what I'm doing wrong? Also, this is implemented on a heavy forwarder. Thanks a lot! JG
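For reference, a hedged corrected pair. The visible issues are the TRANFORMS spelling, the two [syslog] stanzas (better written as one), the lower-case `Regex` key, and the empty `(?)` group:

```
# props.conf
[syslog]
TRANSFORMS-changesourcetype = NAME_LOC_FW1
TRANSFORMS-changehost = NAME_LOC_FW1_HOST

# transforms.conf
[NAME_LOC_FW1]
REGEX = NAME_LOC_FW1
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::meraki

[NAME_LOC_FW1_HOST]
REGEX = (NAME_LOC_FW1)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```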

How to configure TCP port on NetApp filer for forwarding the syslog messages to Splunk heavy forwarder server?

Hi. I have NetApp filers running Data ONTAP 8.2.x and 8.3.x, and I have set up forwarding of their logs to a Splunk heavy forwarder. I would like to know how to use only a TCP port for forwarding the logs to the heavy forwarder. By default, Data ONTAP uses a UDP port for forwarding syslog messages, and I can see source=UDP 16514 for whichever filers I configured. In Data ONTAP 9.0 we can specify the protocol in the command, but I'm not sure how to specify the protocol on 8.2 and 8.3 systems:

```
cluster log-forwarding create -destination 192.168.0.1 -port 514 -facility user -protocol tcp-unencrypted
```

I appreciate your response.
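Whatever the filer-side syntax turns out to be, the heavy forwarder also needs a TCP listener on the chosen port. A hedged inputs.conf sketch (port, sourcetype, and index are assumptions):

```
# inputs.conf on the heavy forwarder
[tcp://514]
sourcetype = ontap:syslog
index = netapp
```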

Diagnosing Issues with Python and Splunk Add-on for EMC VNX data_loader scripts "hanging"

We are trying to perform storage monitoring, and both the EMC VNX and EMC XtremIO seem to be running Python scripts, as part of the Splunk Add-on for EMC VNX, that break after a period of time. I think it's due to sockets staying open or the .py scripts not ending cleanly, but I am not proficient enough in Python to diagnose it. This post is specific to the Splunk Add-on for EMC VNX.

We have two heavy forwarders that run the Splunk Add-on for EMC VNX against several different arrays. It consistently seems to be the Python scripts staying running; doing a splunk stop and splunk start fixes the issue for anywhere from half a day to several days. Here is the error from the VNX log:

```
[splunk@log1 splunk]$ tail -f data_loader.log
  File "/opt/splunk/etc/apps/Splunk_TA_emc-vnx/bin/timed_popen.py", line 55, in timed_popen
    return _do_timed_popen(args, timeout)
  File "/opt/splunk/etc/apps/Splunk_TA_emc-vnx/bin/timed_popen.py", line 41, in _do_timed_popen
    sub = Popen(args, stdout=PIPE, stderr=PIPE)
  File "/opt/splunk/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)
  File "/opt/splunk/lib/python2.7/subprocess.py", line 1335, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
```

This repeats over and over on both heavy forwarders until Splunk is stopped and started. Running a splunk restart from cron every morning at 5am did not work as a workaround. The healthy log looks like this (you can also see it ending from a splunk stop):

```
2017-04-05 11:39:19,173 INFO 140455369918272 - Data loader is going to exit...
2017-04-05 11:39:19,173 INFO 140454121158400 - Worker thread Thread-16 going to exit
2017-04-05 11:39:19,174 INFO 140454146336512 - Worker thread Thread-13 going to exit
2017-04-05 11:39:19,174 INFO 140455200052992 - Worker thread Thread-1 going to exit
2017-04-05 11:39:19,174 INFO 140454137943808 - Worker thread Thread-14 going to exit
2017-04-05 11:39:19,174 INFO 140454154729216 - Worker thread Thread-12 going to exit
2017-04-05 11:39:19,175 INFO 140455191660288 - Worker thread Thread-2 going to exit
2017-04-05 11:39:19,175 INFO 140454129551104 - Worker thread Thread-15 going to exit
2017-04-05 11:39:19,175 INFO 140454691600128 - Worker thread Thread-5 going to exit
2017-04-05 11:39:19,175 INFO 140454683207424 - Worker thread Thread-6 going to exit
2017-04-05 11:39:19,175 INFO 140455174874880 - Worker thread Thread-4 going to exit
2017-04-05 11:39:19,175 INFO 140455183267584 - Worker thread Thread-3 going to exit
2017-04-05 11:39:19,176 INFO 140454658029312 - Worker thread Thread-9 going to exit
2017-04-05 11:39:19,176 INFO 140454666422016 - Worker thread Thread-8 going to exit
2017-04-05 11:39:19,176 INFO 140454649636608 - Worker thread Thread-10 going to exit
2017-04-05 11:39:19,176 INFO 140454674814720 - Worker thread Thread-7 going to exit
2017-04-05 11:39:19,176 INFO 140454641243904 - Worker thread Thread-11 going to exit
2017-04-05 11:39:19,178 INFO 140455369918272 - ProcessPool is going to exit...
2017-04-05 11:39:19,210 INFO 140454112765696 - Event writer thread is going to exit...
2017-04-05 11:39:19,229 INFO 140454104372992 - TimerQueue thread is going to exit...
2017-04-05 11:39:43,188 INFO 140321121437504 - thread_pool_size = 16
2017-04-05 11:39:43,190 INFO 140321121437504 - process_pool_size = 2
2017-04-05 11:39:43,807 INFO 140321121437504 - Get 0 ready jobs, next duration is 5.506924, and there are 12 jobs scheduling
2017-04-05 11:39:49,318 INFO 140321121437504 - Get 1 ready jobs, next duration is 3.996371, and there are 12 jobs scheduling
2017-04-05 11:39:49,321 INFO 140320742307584 - thread work_queue_size=0
2017-04-05 11:39:53,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 8.999508, and there are 12 jobs scheduling
2017-04-05 11:39:53,315 INFO 140320733914880 - thread work_queue_size=0
2017-04-05 11:40:02,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999262, and there are 12 jobs scheduling
2017-04-05 11:40:02,315 INFO 140320725522176 - thread work_queue_size=0
2017-04-05 11:40:03,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 11.999513, and there are 12 jobs scheduling
2017-04-05 11:40:03,315 INFO 140320717129472 - thread work_queue_size=0
2017-04-05 11:40:15,315 INFO 140321121437504 - Get 2 ready jobs, next duration is 7.999429, and there are 12 jobs scheduling
2017-04-05 11:40:15,315 INFO 140320708736768 - thread work_queue_size=1
2017-04-05 11:40:15,315 INFO 140320700344064 - thread work_queue_size=0
2017-04-05 11:40:23,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999395, and there are 12 jobs scheduling
2017-04-05 11:40:23,315 INFO 140320691951360 - thread work_queue_size=0
2017-04-05 11:40:24,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 3.999494, and there are 12 jobs scheduling
2017-04-05 11:40:24,315 INFO 140320205436672 - thread work_queue_size=0
2017-04-05 11:40:28,315 INFO 140321121437504 - Get 1 ready jobs, next duration is 7.999428, and there are 12 jobs scheduling
2017-04-05 11:40:28,315 INFO 140320197043968 - thread work_queue_size=0
2017-04-05 11:40:36,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 0.999498, and there are 12 jobs scheduling
2017-04-05 11:40:36,315 INFO 140320188651264 - thread work_queue_size=0
2017-04-05 11:40:37,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 2.999524, and there are 12 jobs scheduling
2017-04-05 11:40:37,315 INFO 140320180258560 - thread work_queue_size=0
2017-04-05 11:40:40,314 INFO 140321121437504 - Get 1 ready jobs, next duration is 95.000096, and there are 12 jobs scheduling
2017-04-05 11:40:40,315 INFO 140320171865856 - thread work_queue_size=0
```
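An `OSError: [Errno 2] No such file or directory` raised out of Popen's `_execute_child` means the binary the script tried to launch was not found, rather than a socket problem. A hedged check that the Navisphere CLI the add-on shells out to exists and is on the splunk user's PATH (binary name and path are assumptions):

```
# confirm the CLI the add-on invokes is present for the splunk user
sudo -u splunk which naviseccli
ls -l /opt/Navisphere/bin/naviseccli
```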

Transform on DBConnect Input Removing Field

Hi, I have a SQL input being consumed via DB Connect 2.4 which has several fields, including 'Message' and 'Originating System'. They are currently being sent to our indexers under the sourcetype 'auditlog', and the fields are extracted correctly (as they are defined fields in the SQL table). I am now trying to have every event with an 'Originating System' attribute of 'SendToABC' be given a new sourcetype. I have been able to do this, but my issue is that the 'Originating System' field is no longer automatically extracted for the new sourcetype. I could of course configure the new sourcetype to extract the field, but this is more a case of wanting to know WHY this is happening. I am using:

props.conf

```
[AuditLog]
TRANSFORMS-sourcetype_change = sendtoabc_sourcetype
```

transforms.conf

```
[sendtoabc_sourcetype]
REGEX = (?SendToABC)
FORMAT = sourcetype::sendToABC
DEST_KEY = MetaData:Sourcetype
LOOKAHEAD = 20000
```

Any help would be appreciated. Thank you.
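As to the WHY: search-time field extractions are bound to the sourcetype, so once events are rewritten to sendToABC they no longer inherit anything defined for auditlog. A hedged sketch of carrying automatic key/value extraction over to the new sourcetype, assuming the original fields came from auto KV rather than an explicit REPORT:

```
# props.conf on the search head
[sendToABC]
KV_MODE = auto
```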

How to adjust the timezone with Splunk DB Connect and Splunk Cloud?

Hello all, I'm having a bit of an issue getting the time parsed correctly in my Splunk DB Connect data. The setup is that we have DBX running on a heavy forwarder, collecting data and forwarding it to Splunk Cloud to be indexed and all that jazz. The problem is that the time string (with a UTC offset) is not being adjusted for the offset, so the events are coming from the future! Here are the raw event _time and the Splunk time of an example event:

```
_time: 2017-04-06T20:29:58.000-04:00
time:  2017-04-06 20:29:58.0
```

In the DBX inputs we have the stanza for the connection setting TZ = UTC, but I'm not super sure that matters, since it's on the forwarder and not the search head (not search time) or indexer (not index time). I was also under the impression that UTC offsets were automatically adjusted in Splunk.
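Timestamp parsing is an index-time step, and for data arriving via a heavy forwarder it happens on that forwarder, so a props.conf entry there can explicitly consume the offset. A hedged sketch, with the sourcetype name hypothetical and %z assumed to accept the -04:00 form:

```
# props.conf on the heavy forwarder
[my_dbx_sourcetype]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```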

Sending data to Splunk Cloud using multiple outputs.conf for mobile systems.

I am interested in the community's thoughts on forwarding data to Splunk Cloud from mobile systems. Currently I am working to consolidate all my universal forwarders so they forward their data through a heavy forwarder, which then sends it to Splunk Cloud. In turn, I can tighten the firewall rules by not allowing clients direct access to the Internet, which is an easy security win. :)

However, the UF runs on some laptops. When a user leaves the network, the UF can no longer forward data to Splunk Cloud because it does not have that configuration; it only knows to look for the internal heavy forwarder. Ideally, the user connects to the VPN and the UF can send data to the internal heavy forwarder. But if they are not connected to the VPN, those events are delayed until they reconnect to the corporate network or VPN.

Can two different outputs.conf files be read, with the internal heavy forwarder configuration read first based on folder precedence and then the cloud configuration? If the heavy forwarder cannot be found, will the UF then try the cloud configuration?
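As far as I know, outputs.conf files are merged, and listing two groups in defaultGroup clones every event to both targets rather than failing over between them; failover only happens between the servers listed inside a single group. A hedged sketch of the cloning layout (names and endpoints hypothetical):

```
# outputs.conf -- events are cloned to BOTH groups; this is not failover
[tcpout]
defaultGroup = internal_hf, splunkcloud

[tcpout:internal_hf]
server = hf.internal.example.com:9997

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
```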

