SPL
Useful search commands
Count by the field you are searching for (e.g. a tag like "Message") to get a count of like events per value:
| stats count by ip
| stats count by userName, ip
rex applies a regular expression to extract fields; with mode=sed it can also replace strings (e.g. replace a space with an underscore). The example below extracts the value after "Account Name:" into a field named Account_Name:
| rex field=_raw "Account Name:\s+(?<Account_Name>[^\s]+)" | stats count by Account_Name
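For the replacement case, rex also has a sed mode. A minimal sketch, assuming a hypothetical field user_name whose values contain spaces:

```
| rex field=user_name mode=sed "s/ /_/g"
| stats count by user_name
```

Note that mode=sed rewrites the field in place rather than extracting a new one.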
Pulls data from fields and organizes it into a table view
| table UserId, SourceFileName, UserAgent, CreationTime
Look up an uploaded CSV table by file name
| inputlookup <filename.csv>
Find items with a wildcard (LIKE) match
| inputlookup AllUsersMfaDetails.csv | where like(lower(DeviceName), "%ip%")
Compare IP data in file with Splunk logs
lookup c2cisp.csv ip - calls the file and uses the ip column
matches the ip column to the event field d_ip
outputs the matched value in a field named c2cisp
| lookup c2cisp.csv ip as d_ip OUTPUT ip as c2cisp | search c2cisp=*
Find IPs that are not in the CSV and are not 123.123.123.123
| lookup IP.csv ip AS ipAdd OUTPUT ip AS match_ip
| where isnull(match_ip) | where ipAdd != "123.123.123.123"
| stats count by userDisplayName, ipAdd | sort - count
lookup Resources
For reference: the docs have a page for each command: lookup, inputlookup, and outputlookup.
In short:
lookup
adds data to each existing event in your result set, based on a field in the event matching a value in the lookup table.
inputlookup
takes the table of the lookup and creates new events in your result set (either created completely or added to a prior result set).
outputlookup
takes the current event set and writes it to a CSV or KVStore.
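A minimal outputlookup sketch (the index, field, and file names here are made up for illustration):

```
index=firewall action=blocked
| stats count by src_ip
| outputlookup blocked_ips.csv
```

The resulting blocked_ips.csv can then be read back with inputlookup, or joined onto events with lookup.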
As an aside, when getting started with SPL commands, the Quick Reference Guide is the holy grail IMO for learning all about Splunk key concepts and common commands, along with different examples. Make sure you've got this one in your back pocket, as well as the Search Reference docs.
Yes, you can look up two tables in the same command. You can even join the two tables together. It really depends on what you're trying to do with the lookup (whether you're using multiple inputlookup calls or multiple lookup calls).
The former requires the use of append or join:
| inputlookup lookup1
| append [| inputlookup lookup2]
| join ip [| inputlookup lookup3]
The latter is just sequential:
index=<index> sourcetype=<sourcetype> | lookup lookup1 ip | lookup lookup2 host
OR
| inputlookup lookup3 | lookup lookup1 ip | lookup lookup2 host
Using Sort
your search....| sort -count
your search....| sort -_time
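sort returns at most 10,000 results by default; a leading count changes that limit, and 0 removes it:

```
your search....| sort 0 -count
your search....| sort 5 -_time
```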
Using where
where ClientIP IN ("86.48.9.97", "92.119.17.191")
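where also accepts eval functions, which is handy for subnet checks. A sketch using cidrmatch (the subnet and field name are examples):

```
| where cidrmatch("10.0.0.0/8", ClientIP)
| where NOT cidrmatch("10.0.0.0/8", ClientIP)
```

The first keeps only internal traffic; the second drops it.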
Using maps with Location of IP
Cluster map
| iplocation ipAdd
| geostats latfield=lat longfield=lon count by userName

Choropleth Map
| iplocation ip
| stats count by Country
| rename Country AS country, count AS numb
| sort -numb
| geom geo_countries featureIdField=country

To use multiple lookup files in one search (this works for the Cluster map and Choropleth map above), you can just stack the lookups:
source="activity" load=Directory Op=Logged
| lookup Microsoft.csv subnet AS CIP OUTPUT subnet AS matched_subnet1
| lookup IP.csv IP AS CIP OUTPUT IP AS matched_subnet2
| where isnull(matched_subnet1) AND isnull(matched_subnet2)
| iplocation CIP
| geostats latfield=lat longfield=lon count by UserId
IP manipulation removing port
index=your_index sourcetype=your_sourcetype
| rex field=source_IP "(?<ip>\d{1,3}(\.\d{1,3}){3})"
| stats count by ip
rex command: extracts the IP address portion from the source_IP field. The regex (?<ip>\d{1,3}(\.\d{1,3}){3}) captures the IPv4 address and drops anything after it, such as a :port suffix.
stats count by ip: groups the results by the extracted ip field and counts the occurrences of each unique IP address.
Count the location of an IP
sourcetype="activity"
| spath Operation
| search Operation=FileAccessed ClientIP!=123.123.123.123
| lookup MiUm.csv subnet AS ClientIP OUTPUT subnet AS matched_subnet1
| lookup GEP.csv GEIP AS ClientIP OUTPUT GEIP AS matched_subnet2
| where isnull(matched_subnet1) AND isnull(matched_subnet2)
| iplocation ClientIP
| stats count by Country
| sort count
Extract values from raw entry
With a default data set
category: status
details: {
new: [
{
name: status
value: offline
}
]
old: [
{
name: status
value: online
}
]
}
device: {}
Splunk grabs this data a little differently than plain dot notation. For what's inside the details object, SPL uses spath to extract the information from the raw event ({} refers to array elements):
| spath path=details.new{}.value output=new_status
| spath path=details.old{}.value output=old_status
| table new_status old_status
What the table should look like:
| new_status| old_status |
| --------- | ---------- |
| offline | online |
| online | offline |
| offline | online |
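If details.new and details.old hold more than one entry per event, spath returns multivalue fields; mvzip can pair the old and new values up (the " -> " delimiter is just a choice):

```
| spath path=details.new{}.value output=new_status
| spath path=details.old{}.value output=old_status
| eval change=mvzip(old_status, new_status, " -> ")
| table change
```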
Searching for Malicious User Agents
Needed: suspicious_user_agents.csv with an http_user_agent column.
Extra (optional): ip.csv lists internal IP addresses; matches are excluded to narrow the results down.
index=* "UserAgent"
| rex field=_raw "Name:\s*UserAgent\s*\n\s*Value:\s*(?<UserAgent>.+)"
| rex field=_raw "ClientIP:\s*(?<ClientIP>\S+)"
| lookup suspicious_user_agents.csv http_user_agent AS UserAgent OUTPUT http_user_agent AS Match
| eval Match=if(isnotnull(Match), "Suspicious", "Normal")
| lookup ip.csv IP AS ClientIP OUTPUT IP AS InternalIP
| where isnull(InternalIP)
| stats count by UserAgent, Match
| sort - count
File downloaded limit
This will alert when a user downloads more than 200 files
* Operation=FileDownloaded
| stats count as download_count by UserId
| where download_count > 200
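To apply the threshold per time window instead of over the whole search range, bucket _time first; a sketch with a 1-hour span:

```
* Operation=FileDownloaded
| bin _time span=1h
| stats count as download_count by UserId, _time
| where download_count > 200
```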