Understanding and Selecting a SIEM/LM: Correlation and Alerting

Continuing our discussion of core SIEM and Log Management technology, we now move into event correlation. This capability was the holy grail that drove most investment in early SIEM products, and probably the security technology creating the most consistent disappointment amongst its users. But ultimately the ability to make sense of the wide variety of data streams, and use them to figure out what is under attack or compromised, is essential to any security practice. This means that despite the disappointments, there will continue to be plenty of interest in correlation moving forward.

Correlation

Defining correlation is akin to kicking a hornet’s nest. It immediately triggers agitated debates because there are several published definitions and every expert has their favorite. As usual, we need to revisit the definitions and level-set, not to create controversy (though that tends to happen), but to make things easy to understand. As we search for a pragmatic definition, we need to simplify concepts to make subjects understandable to a wider audience at the expense of precision. We understand our community is not a bunch of shrinking violets, so we welcome your comments and suggestions to make our research more relevant.

Let’s get back to the end-user problem driving SIEM and log management. Ultimately the goal of this technology is to interpret security-related data to improve security, increase efficiency, and/or document security controls. If a single file contained all the information required for security analysis, we would not bother with collecting and associating events from multiple sources. The truth is that each log or event contains a piece of information, which forms part of the puzzle but lacks the context necessary to analyze the big picture. To make meaningful decisions about what is going on with our applications and within our network, we need to combine events from different sources. Which events we want, and which pieces of data from those events we need, vary based on the problem we are trying to solve.

So what is correlation? Correlation is the act of linking multiple events together to detect strange behavior. It is the association of different but related events to provide broader context than a single event can provide. Keep in mind that we are using a broad definition of ‘event’ because as the breadth of analysis increases, data may expand beyond traditional events. Seems pretty simple, eh?

Let’s look at an example of how correlation can help achieve one of our key use cases: increasing the efficiency of the security team. In this case an analyst gets events from multiple locations and device types (and/or applications), and is expected to figure out whether there is an issue. The attacker might first scan the perimeter and then target an externally facing web server with a series of known exploits. Upon successfully compromising the web server, the attacker sets up a new user account and starts scanning internally to find more exploitable hosts.

The data is available to catch this attack, but not in a single place. The firewalls see the initial scans. The IDS/IPS sees the series of exploits. And the user directory sees the new account on the compromised server. The objective of correlation is to see all these events come through and recognize that the server has been compromised and needs immediate attention. Easy in concept, very hard in practice.
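To make the concept concrete, here is a deliberately simplified Python sketch of that linkage. The event format and field names are hypothetical stand-ins, not any vendor's rule language; a real SIEM works against its own normalized schema and rule engine.

    # Hypothetical normalized events -- field names are illustrative only.
    events = [
        {"source": "firewall",  "type": "scan",            "host": "web01"},
        {"source": "ids",       "type": "exploit",         "host": "web01"},
        {"source": "directory", "type": "account_created", "host": "web01"},
    ]

    def host_compromised(events, host):
        """Flag a host when all three stages of the attack reference it."""
        stages = {("firewall", "scan"), ("ids", "exploit"),
                  ("directory", "account_created")}
        seen = {(e["source"], e["type"]) for e in events if e["host"] == host}
        return stages <= seen  # every required stage was observed

    print(host_compromised(events, "web01"))  # True -- needs immediate attention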

Historically, the ability to do near-real-time analysis and event correlation was one of the ways SIEM differed from log management, although the lines continue to blur. Most of the steps we have discussed so far (collecting data, then aggregating and normalizing it) help isolate the attributes that link events together to make correlation possible. Once data is in manageable form we apply rules to detect attacks and misuse. These rules consist of granular criteria (e.g., specific router, user account, time, etc.) and determine whether a series of events reaches a threshold requiring corrective action.

But the devil is in the details. First, the technology implements correlation as a linear series of comparisons. Each comparison may be a simple case of “if X = Y, then do something”, but we may need to string several of these comparisons together. Second, correlation is built on rules for known attack patterns. This means we need some idea of what we are looking for before we can create the correlation rules. We have to understand attack patterns or the elements of a compliance requirement in order to determine which device and event types should be linked. Third, we have to factor in time, because events do not happen simultaneously, so there is a window within which events are likely to be related. Finally, the effectiveness of correlation also depends on the quality of data collection, normalization, and tagging or indexing of information to feed the correlation rules.
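As an illustration of the timing constraint, here is a deliberately simplified sketch that strings stage comparisons together and only links events falling within a configurable window. As before, the event fields ("host", "type", "time") are hypothetical stand-ins for whatever the normalization step produces.

    from datetime import timedelta

    def matches_sequence(events, pattern, window=timedelta(minutes=30)):
        """Return True if the event types in `pattern` occur in order on the
        same host, each within `window` of the previous matched event."""
        last, idx = None, 0
        for event in sorted(events, key=lambda e: e["time"]):
            if event["type"] != pattern[idx]:
                continue
            if last and (event["host"] != last["host"]
                         or event["time"] - last["time"] > window):
                continue  # right stage, but wrong host or outside the window
            last, idx = event, idx + 1
            if idx == len(pattern):
                return True  # every stage matched inside the window
        return False

    # e.g. matches_sequence(events, ["scan", "exploit", "account_created"])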

Development of rules takes time and understanding, as well as ongoing maintenance and tuning. Sure, your vendor will provide out-of-the-box policies to help get you started, but expect to invest significant time in tweaking existing rules for your environment and writing new policies for security and compliance, to keep pace with a very dynamic threat environment. Further complicating matters: more rules, and more potentially linked events to consider, increase the computational load exponentially. There is a careful balancing act between the number of policies implemented, the accuracy of the results, and the throughput of the system. These goals work against each other: generic rules detect more threats, at the cost of more false positives, while a rule tailored precisely to a specific threat will rarely find anything new.

This is the difficulty in getting correlation working effectively in most environments. As described in the Network Security Fundamentals series, it’s important to define clear goals for any correlation effort and stay focused on them. Trying to boil the ocean always yields disappointing results.

Alerting

Once events are correlated, analysis performed, and weirdness discovered, what do we do? We want to quickly and automatically announce what was discovered, getting information to the right places so action can be taken. This is where alerting comes in.

During policy analysis, when we detect that something strange has occurred, the policy triggers a predefined response. Alerts are the actions we take when policies are violated. Where the alert gets sent, how it’s sent, what information is passed, and the criticality of the event are all definable within the system, and embodied in the rules that form our policies. During policy development we define the response for each suspect event. Tuning policies for compliance and operations management is a significant effort, but the investment is required to get SIEM/LM up and running and reap any benefit.

Alert messages are distributed in different ways. Real-time alerts, for rule violations which require immediate attention, can be sent via email, pager, or text message to IT staff. Some alerts are best addressed by programmatic response, and are sent via Simple Network Management Protocol (SNMP) packets, XML messages, or application API calls with sufficient information for the responding application to take instant corrective action. Non-critical events may be logged as informational within the SIEM or log management platform, or sent to workflow/trouble-ticketing systems for future analysis. In most cases alerts rely on additional tools and technologies for broadcast and remediation, but the SIEM platform is configured to provide just the right subset of data for each communication medium.

SIEM/LM platforms tightly associate alerts with the rules, even embedding the alert definitions within the policy management system. This way, as rules are created, their criticality and the appropriate response are defined at the same time. The goal is not to futilely attempt to replace an analyst, but to make him/her more effective and efficient, which is the name of the game.
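A minimal sketch of what this coupling might look like. The rule schema and channel stubs below are hypothetical, purely to illustrate the idea; every product has its own policy syntax and integrations.

    # Hypothetical rule schema: each rule carries its criticality and
    # response channels, defined when the rule itself is created.
    RULES = [
        {"name": "perimeter-scan-exploit-new-account",
         "severity": "critical",
         "actions": ["email", "ticket"]},   # immediate attention plus workflow
        {"name": "repeated-failed-logins",
         "severity": "informational",
         "actions": ["log"]},               # recorded for future analysis
    ]

    # Stand-in senders -- a real deployment would call a mail gateway,
    # ticketing API, SNMP trap, and so on.
    CHANNELS = {
        "email":  lambda subj, body: print(f"EMAIL:  {subj} | {body}"),
        "ticket": lambda subj, body: print(f"TICKET: {subj} | {body}"),
        "log":    lambda subj, body: print(f"LOG:    {subj} | {body}"),
    }

    def dispatch(rule, summary):
        """Route an alert to every channel named in its rule definition."""
        for action in rule["actions"]:
            CHANNELS[action](f"[{rule['severity']}] {rule['name']}", summary)

    dispatch(RULES[0], "web01 compromised; new account created")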

Selection

With SIEM, correlation and alerting are the first areas of the technology you will spend a great deal of time customizing for your organization. Collection, aggregation, and normalization are relatively static built-in features, with the main variances being the number of data types, protocols, and automation supported, leaving little room for tuning and filtering. Correlation and alerting are different, and require much more tuning and configuration to fit business requirements. We will go into much more detail on what to look for during your selection process later in this series, but plan on dedicating a large portion of your proof-of-concept review (and initial installation) to building and tuning your correlation rule set.


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, Part 2.
  4. Business Justification.
  5. Data Collection.
  6. Aggregation, Normalization, and Enrichment.
