THREAD
Before I go deep on detection & alerting, let's level-set on log & event collection. I'm not a fan of the old-school mode of detection where we deploy a sensor like Snort or OSSEC and only forward alerts based on signature criteria. Instead, forward all events into a central pipeline for storage, search, and alerting.
Doing this with a common schema gives you computationally cheap correlation across source types. You also want to consider additional event decoration at ingest. What if you tagged events with your internal network data? Would knowing that an IP was located in a specific datacenter VLAN be useful to investigators?

Ideally, and this is what we've done, you use the same technology for alerting and search. This lets you use the same syntax for search and for writing detection rules. In addition to making detection rules easier to write, it also lets you "tune" rules against days or weeks of stored events in a matter of minutes, instead of having to test for days against live data. Fast prototyping of detection logic is a huge advantage.
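A minimal sketch of the two ideas above: decorate events with internal network context at ingest, and write each detection rule as a plain predicate over the common schema, so the exact same rule runs against the live stream (alerting) and against stored history (tuning). The network map, field names, and rule here are hypothetical examples, not a specific product's schema.

```python
from ipaddress import ip_address, ip_network

# Hypothetical internal network map: subnet -> site/VLAN tags.
# In practice this would come from your IPAM or CMDB.
NETWORK_MAP = {
    ip_network("10.1.0.0/16"): {"site": "dc-east", "vlan": "prod-web"},
    ip_network("10.2.0.0/16"): {"site": "dc-west", "vlan": "prod-db"},
}

def decorate(event: dict) -> dict:
    """Ingest-time decoration: tag the event with internal network context."""
    src = event.get("src_ip")
    if src:
        addr = ip_address(src)
        for net, tags in NETWORK_MAP.items():
            if addr in net:
                event.update(tags)
                break
    return event

# A "rule" is just a predicate over the common schema, so the same
# logic fires on live events and replays over weeks of history.
def rule_db_vlan_outbound(event: dict) -> bool:
    return event.get("vlan") == "prod-db" and event.get("direction") == "outbound"

# Tuning: replay the rule over stored events in one pass.
history = [
    decorate({"src_ip": "10.2.4.7", "direction": "outbound"}),
    decorate({"src_ip": "10.1.9.9", "direction": "outbound"}),
]
hits = [e for e in history if rule_db_vlan_outbound(e)]
```

Because the rule is ordinary code over normalized fields, tightening its logic and re-running it over historical events is a minutes-long loop rather than a multi-day live test.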
In this model, alerts are actually an event or cluster of events being tagged and promoted for investigation. So data quality of the events matters, but your focus on data quality for the alert is different. You want to provide context to the investigator through the name field, the promoted/aggregated field values, and other metadata like reference links. The goal should be for the analyst who receives the alert to understand what they're looking for in as few clicks as possible. From there, the IR team can build playbooks that lay out recommended scoping & containment steps based on the type of alert. All of this is done to make building detections and investigating alerts as fast and repeatable as possible.
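To make the promotion idea concrete, here's a small sketch of an alert as a promoted event cluster. The `Alert` shape, field names, and playbook link are illustrative assumptions, but they show the point: the alert carries a descriptive name, aggregated field values, and reference links so the analyst starts with context instead of raw events.

```python
from dataclasses import dataclass

# Hypothetical alert record: an alert is just a cluster of events,
# tagged and promoted with the context an analyst needs up front.
@dataclass
class Alert:
    name: str             # tells the analyst what they're looking at
    promoted_fields: dict  # aggregated values pulled from the events
    references: list       # playbook / write-up links for next steps
    events: list           # the underlying raw events, one click away

def promote(name: str, events: list, refs: list) -> Alert:
    """Promote a cluster of events into an alert, surfacing key fields."""
    promoted = {
        "src_ips": sorted({e["src_ip"] for e in events if e.get("src_ip")}),
        "event_count": len(events),
    }
    return Alert(name=name, promoted_fields=promoted, references=refs, events=events)

alert = promote(
    "Outbound traffic from prod-db VLAN",
    [{"src_ip": "10.2.4.7"}, {"src_ip": "10.2.4.7"}],
    ["https://wiki.example.internal/playbooks/db-egress"],  # hypothetical playbook link
)
```

The reference links are where the playbook attaches: the alert type points straight at the recommended scoping and containment steps.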
You can follow @pmelson.