There are lots of unknowns about your requirement, such as data format or size, but here are some suggestions that may help. They are based on the assumption that the file is small and you can read the entire file contents into Splunk on a regular basis.

Firstly, the simplest option for a small file is to set up a script to read the contents on a regular basis - for example, every hour. Once you have more than one copy of the file in Splunk, you can run a query to compare the two versions. Once the data is in Splunk, you can use standard search commands to compare lines, but this is only really a good use case for Splunk if the data structure is simple and the changes are easy to identify. This will tell you what is different, but not exactly when it changed (only that it changed between the two collections). Another script could simply run a dir / ls command to capture the file timestamp and collect that data if required.

If you want to detect the exact time the file content changed and trigger something at that specific time, initCrcLength and crcSalt may help Splunk re-read the contents on change, but this can be hit and miss depending on the type of change.

One other option may be to use INDEXED_EXTRACTIONS. If the file is in a structured format (XML / JSON / CSV etc.), you can monitor the whole file and set the INDEXED_EXTRACTIONS and "CHECK_METHOD = modtime" options in props.conf on the collecting system. Every time the file's modification time changes, Splunk will re-read the whole file. You then potentially have two copies in Splunk, indexed with the timestamp of the file change (if the contents have their own timestamps, you may need to disable timestamp extraction at indexing). In addition, you would want to separate each line of the file into a separate event in Splunk.
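The first suggestion (indexing a fresh copy on a schedule and diffing the two) can be sketched with the `set diff` search command. The index, source path, and one-hour windows below are assumptions for illustration, not your actual config:

```
| set diff
    [search index=main source="/var/app/config.txt" earliest=-2h latest=-1h | fields _raw]
    [search index=main source="/var/app/config.txt" earliest=-1h | fields _raw]
```

This returns the lines that appear in one copy but not the other, which is usually enough when the file structure is simple.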
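For the initCrcLength / crcSalt suggestion, both settings go on the monitor stanza in inputs.conf. The path below is an assumption; `<SOURCE>` is the literal special value Splunk recognises, not a placeholder to fill in:

```
# inputs.conf on the collecting system
[monitor:///var/app/config.txt]
initCrcLength = 1024
crcSalt = <SOURCE>
```

A longer initCrcLength makes the checksum cover more of the file, so edits near the start are more likely to be noticed; changes beyond the checksummed region can still be missed, which is why this approach is hit and miss.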
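For the structured-file option, the settings belong in props.conf on the collecting system. The source path and sourcetype name below are assumptions for illustration:

```
# props.conf on the collecting system
[source::/var/app/data.csv]
sourcetype = my_csv_file
CHECK_METHOD = modtime

[my_csv_file]
INDEXED_EXTRACTIONS = csv
```

With CHECK_METHOD = modtime, Splunk re-reads the whole file whenever its modification time changes, rather than only reading appended data.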
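The dir / ls idea could be implemented as a scripted input that emits one event per run with the file's timestamp. A minimal sketch, assuming a Linux system with GNU `stat`; the temp file here is just a stand-in for the real monitored path so the sketch runs on its own:

```shell
#!/bin/sh
# Scripted-input sketch: print the monitored file's modification time as a
# key=value event so Splunk indexes one event per polling interval.
FILE="$(mktemp)"                     # stand-in for the real monitored file
MTIME="$(stat -c '%y' "$FILE")"      # GNU stat; on BSD/macOS use: stat -f '%Sm'
echo "file=\"$FILE\" mtime=\"$MTIME\""
rm -f "$FILE"
```

Scheduling this every few minutes narrows the "when did it change" window without re-indexing the file contents.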